\section*{Introduction} The optimal stopping time problem has been widely studied in the case of a {\em reward} given by a right continuous left limited (RCLL) positive adapted process $(\phi_t)$ defined on $[0, T]$ (see for example Shiryaev (1978), El Karoui (1981), Karatzas and Shreve (1998) or Peskir and Shiryaev (2006)). If $T>0$ is the fixed time horizon and if $T_0$ denotes the set of stopping times $\theta$ smaller than $T$, the problem consists in computing the maximal reward given by $$v(0)= \sup \{\,E[ \phi_\tau],\; \tau \in T_0\,\}\,,$$ in finding conditions for the existence of an optimal stopping time, and in giving a method to compute such optimal stopping times. Classically, the {\em value function} at time $S\in T_0$ is defined by $v(S)={\rm{ess}\;}\displaystyle{\sup}\{\, E[\phi_\tau\, |\,{\cal F}_S], \tau\in T_0\,{\rm and}\, \tau \geq S\,{\rm a.s.}\,\}$. The value function is thus given by a family of random variables $\{\,v(S), S\in T_0\,\}$. By using the right continuity of the reward $(\phi_t)$, it can be shown that there exists an RCLL adapted process $(v_t)$ which {\em aggregates} the family of random variables $\{\,v(S), S\in T_0\,\}$, that is, such that $v_S=v(S)$ a.s. for each $S\in T_0$. This process is the {\em Snell envelope} of $(\phi_t)$, that is, the smallest supermartingale process that dominates $\phi$. Moreover, when the reward $(\phi_t)$ is continuous, the stopping time defined trajectorially by \begin{equation*}\label{optun} \overline \theta (S)=\inf\{\,t\geq S, \;v_t=\phi_t\,\}\, \end{equation*} is optimal. Recall that El Karoui (1981) introduced the more general notion of a reward given by a {\em family} $\{\,\phi(\theta), \theta\in T_0\,\}$ {\em of positive random variables} which satisfies some compatibility properties. In the recent paper of Kobylanski et al. (2009), this notion appears to be the appropriate one to study the $d$-multiple optimal stopping time problem. Moreover, in this work, Kobylanski et al. 
(2009) show that under quite weak assumptions (right and left continuity in expectation along stopping times of the reward), the minimal optimal stopping time for the value function at time $S$ \begin{equation}\label{casp} v(S)= {\rm{ess}\;}\displaystyle{\sup}\{\,E[\phi(\theta)\, |\,{\cal F}_S], \;\theta \in T_0\,{\rm and}\, \theta \geq S\,{\rm a.s.} \,\}\,, \end{equation} is given by \begin{equation}\label{te}\theta_{*}(S) :=\essinf \{\,\theta \in T_0,\,\, \theta \geq S\,{\rm a.s.} \,{\rm and} \, v(\theta) =\phi(\theta) \,\,\mbox {\rm a.s.} \,\}. \end{equation} Let us emphasize that the minimal optimal stopping time $\theta_*(S)$ is no longer defined as a hitting time of processes but as an essential infimum of random variables. This result also makes it possible to deal with the optimal stopping problem only in terms of admissible families of random variables, with the advantage that aggregation results are no longer required. Indeed, the existence of optimal stopping times, as well as the characterization of the minimal one, can be obtained by using only the value function family and the reward family, without the aggregated processes. We stress that, in the multiple stopping case, this avoids the long and heavy proofs caused by some difficult aggregation problems, and makes it possible to solve the problem under weaker assumptions than before, in the unified framework of families of random variables. In the present work, we consider the case of a single optimal stopping time problem with a discontinuous reward. More precisely, the reward is given by a family of random variables which satisfies some compatibility conditions and which is supposed to be upper-semicontinuous along stopping times in expectation. Note that these assumptions on the smoothness of the reward are optimal in order to ensure the existence of an optimal stopping time. 
Indeed, in the deterministic case, upper-semicontinuity is the minimal assumption on a function $\phi: [0,T] \rightarrow \mathbb{R}$; $t \mapsto \phi(t)$ which ensures that the supremum of $\phi$ is attained on any closed subset of $[0,T]$. Under these assumptions, we show the existence of an optimal stopping time for the value function (\ref{casp}), which is given by the essential infimum $\theta_{*}(S)$ defined by (\ref{te}). We stress that the mathematical tools used in this proof are not sophisticated ones, such as those of the general theory of processes, but only well chosen supermartingale systems and an appropriate construction of penalized stopping times. We also show that $\theta_{*}(S)$ is the minimal optimal stopping time. Moreover, the stopping time given by \begin{equation*} \check{ \theta}(S) ={\rm{ess}\;}\displaystyle{\sup}\{\;\theta \in T_0,\,\, \theta \geq S\,\,{\rm a.s.}\,\,{\rm and}\,\, E[v(\theta)]= E[v(S)] \,\}, \end{equation*} is proven to be the maximal optimal stopping time. Note that an important tool in this work is the family of random variables defined by $v^+(S)={\rm{ess}\;}\displaystyle{\sup}\{\, E[\phi(\theta)\, |\,{\cal F}_S],\; \theta \in T_0\,{\rm and}\,\theta > S\,{\rm a.s.}\,\}$ for each stopping time $S$. Some properties of, and links between, $v$, $v^+$ and $\phi$ are studied in this paper. These new results allow us to solve the case of a reward process $(\phi_t)$ which can be much less regular than in previous works. For instance, this covers the case of a reward process given by $\phi_t = f(X_t)$, where $f$ is upper-semicontinuous and $(X_t)$ is a RCLL process supposed to be left continuous along stopping times. This opens the way to a large range of applications, for instance in finance. \vspace{0.2cm} The paper is organised as follows. In section 1, we give some first properties of $v$ and $v^+$. In particular, we have $v(S)= \phi(S)\vee v^+(S)$ a.s. 
for each $S$ $\in$ $T_0$ and the family $\{\, v^+(S), S \in T_0\,\}$ is right continuous along stopping times. In section 2, we show the existence of an optimal stopping time under some minimal assumptions. We begin by constructing $\varepsilon$-optimal stopping times which are appropriate to this case. Then, these $\varepsilon$-optimal stopping times are shown to tend to $\theta_{*}(S)$ as $\varepsilon$ tends to $0$. Moreover, $\theta_{*}(S)$ is proven to be an optimal stopping time for $v(S)$, and even the minimal one. Lastly, the stopping time $\check{ \theta}(S)$ is proven to be the maximal optimal stopping time. In section 3, we first give some strict supermartingale conditions on $v$ which ensure the equality between $v$ and $\phi$ (locally). Secondly, we give some conditions on $v$ and $v^+$ which ensure the equality between $v$ and $v^+$ (for some stopping times which are specified). Lastly, we give a few applications of the classical Doob-Meyer decomposition, in particular when the reward is right continuous in expectation, which allows the use of aggregation results. In section 4, we give some examples where the reward is given by an upper semicontinuous function of a RCLL adapted process $(X_t)$ which is left continuous in expectation. We stress that, except in the last part, all the properties established in this work do not require any result of the general theory of processes. \vspace{0.5cm} Let ${\mathbb F}=(\Omega, {\cal F}, ({\cal F}_t)_{\; { 0\leq t\leq T}},P)$ be a filtered probability space, where the filtration $({\cal F}_t)_{\; { 0\leq t\leq T}}$ satisfies the usual conditions of right continuity and augmentation by the null sets of ${\cal F}= {\cal F}_T$. We suppose that ${\cal F}_0$ contains only sets of probability $0$ or $1$. The time horizon is a fixed constant $T\in ]0,\infty[$. We denote by $T_{0}$ the collection of stopping times of ${\mathbb{F} }$ with values in $[0 , T]$. More generally, for any stopping time $S$, we denote by $T_{S}$ (resp. 
$T_{S^+}$) the class of stopping times $\theta\in T_0$ with $\theta\geq S$ a.s.\, (resp. $\theta>S $ a.s. on $\{S<T\}$ and $\theta=T$ a.s. on $\{S=T\}$). We also define $T_{[S, S^{'}]}$ as the set of $\theta\in T_0$ with $S \leq \theta \leq S^{'}$ a.s., and $T_{]S, S^{'}]}$ as the set of $\theta\in T_0$ with $S < \theta \leq S^{'}$ a.s. Similarly, the set $T_{]S, S^{'}]}$ on $A$ will denote the set of $\theta\in T_0$ with $S < \theta \leq S^{'}$ a.s.\, on $A$. We use the following notation: for $t\in\mathbb{R}$ and for real valued random variables $X$ and $X_n$, $n\in$ $\mathbb{N}$, ``$X_n\uparrow X$'' stands for ``the sequence $(X_n)$ is nondecreasing and converges to $X$ a.s.''.
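For the reader's intuition, the objects above have a simple discrete-time analogue: on a finite tree, the Snell envelope is computed by backward induction, $v_T=\phi_T$ and $v_t=\max(\phi_t,\,E[v_{t+1}\,|\,{\cal F}_t])$, and the minimal optimal stopping time is the first time at which $v_t=\phi_t$. The following sketch (a hypothetical binomial-tree reward chosen purely for illustration, not taken from this paper; Python with NumPy assumed) implements this recursion.

```python
import numpy as np

def snell_envelope(phi, p=0.5):
    """Backward induction for the Snell envelope on a recombining
    binomial tree: phi[t] holds the reward at the nodes of level t.
    Returns v with v[T] = phi[T] and v[t] = max(phi[t], E[v_{t+1} | node])."""
    T = len(phi) - 1
    v = [None] * (T + 1)
    v[T] = phi[T].copy()
    for t in range(T - 1, -1, -1):
        # Node j at level t moves up to node j+1 or down to node j.
        cont = p * v[t + 1][1:] + (1 - p) * v[t + 1][:-1]
        v[t] = np.maximum(phi[t], cont)
    return v

# Hypothetical example: reward max(K - S_t, 0) on a binomial tree.
T, u, d, S0, K = 3, 1.2, 0.8, 100.0, 100.0
S = [S0 * u**np.arange(t + 1) * d**(t - np.arange(t + 1)) for t in range(T + 1)]
phi = [np.maximum(K - s, 0.0) for s in S]
v = snell_envelope(phi)
print(v[0][0])  # the maximal expected reward v(0) of this toy problem
```

By construction, each $v[t]$ dominates $\phi[t]$, reflecting the supermartingale property of the Snell envelope; stopping at the first $t$ with $v_t=\phi_t$ along a path gives the discrete analogue of $\theta_*(0)$.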
\section{Introduction of the problem and the main results} This paper is a continuation of the analysis initiated in \cite{LePa1, LMP-prs1, LMP-prs2, a4, LePa2}, regarding the scaling of the residual energy and the derivation of the dimensionally-reduced models in the description of shape-formation in prestrained thin films. The study of materials which assume non-trivial rest configurations in the absence of exterior forces or boundary conditions arises in various contexts, e.g.: morphogenesis by growth, swelling or shrinkage, torn plastic sheets, engineered polymer gels, and many others. Below, we briefly recall the mathematical setting of the problem, called ``incompatible elasticity'', and we further present the main results of this paper, consisting of: (i) the derivation of the variational model for the linearized Kirchhoff-like energy subject to the Monge-Amp\`ere constraint, (ii) the derivation of the matching property for the continuation of infinitesimal isometries to exact isometries of metrics with positive Gauss curvature, and (iii) a study of uniqueness/multiplicity of the minimizers to the derived model, in the rotationally symmetric case. \subsection{The set-up and the non-Euclidean elasticity model} Let $\Omega$ be an open bounded subset of $\mathbb{R}^2$. Consider a family of $3$d plates: $$\Omega^h = \Omega\times (-h/2, h/2), \qquad 0<h\ll 1,$$ viewed as the reference configurations of thin elastic tissues. A typical point in $\Omega^h$ is denoted by $x=(x',x_3)$ where $x'\in\Omega$ and $|x_3|<h/2$. 
Each $\Omega^h$ is assumed to undergo an activation process, whose instantaneous growth is described by a smooth, invertible tensor: $$A^h=[A_{ij}^h] :\overline{\Omega^h}\rightarrow\mathbb{R}^{3\times 3} \quad \mbox{ with: } \det A^h(x)>0.$$ The multiplicative decomposition model \cite{Rod, LePa1, kupferman, klein} in the description of shape formation due to the prestrain relies on the assumption that for a deformation $u^h :\Omega^h \rightarrow \mathbb{R}^3$, its elastic energy $I^h_W(u^h)$ is written in terms of the elastic tensor $F= \nabla u^h (A^h)^{-1}$ accounting for the reorganization of the body $\Omega^h$ in response to $A^h$. That is, we write: $$ \nabla u^h = F A^h,$$ and define: \begin{equation}\label{IhW} I^h_W(u^h) = \frac{1}{h}\int_{\Omega^h} W(F) ~\mbox{d}x = \frac{1}{h}\int_{\Omega^h} W(\nabla u^h(A^h)^{-1}) ~\mbox{d}x \qquad \forall u^h\in W^{1,2}(\Omega^h,\mathbb{R}^3). \end{equation} The elastic energy density $W:\mathbb{R}^{3\times 3}\rightarrow \mathbb{R}_{+}$ is assumed to satisfy the standard \cite{ciarbookvol3, FJMhier} conditions of normalization, frame indifference (with respect to the special orthogonal group $SO(3)$ of proper rotations in $\mathbb{R}^3$), and second order nondegeneracy: \begin{equation}\label{frame} \begin{split} \forall F\in \mathbb{R}^{3\times 3} \quad \forall R\in SO(3) \qquad & W(R) = 0, \quad W(RF) = W(F)\\ & W(F)\geq c~ \mathrm{dist}^2(F, SO(3)), \end{split} \end{equation} for a constant $c>0$. We also assume that there exists a monotone nonnegative function $\omega:[0,+\infty] \to [0, +\infty]$ which converges to zero at $0$, and a quadratic form ${\mathcal Q}_3$ on $\R^{3\times 3}$, with: \begin{equation}\label{Q3} \forall F\in \R^{3\times 3} \qquad |W(\mbox{Id} + F) - \mathcal{Q}_3(F)| \le \omega(|F|)|F|^2. \end{equation} This condition is satisfied in particular if $W$ is $\mathcal{C}^2$ regular in a neighborhood of $SO(3)$, in which case $\mathcal{Q}_3 = \frac 12 D^2 W(\mathrm{Id})$. 
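As a numerical illustration of the conditions in (\ref{frame}) (not part of the paper), one may take the prototypical density $W(F)=\mathrm{dist}^2(F, SO(3))$: for $\det F>0$, by the polar decomposition, this squared distance equals $\sum_i(\sigma_i-1)^2$ in terms of the singular values $\sigma_i$ of $F$. The following sketch (Python with NumPy assumed) checks the normalization $W(R)=0$ and the frame indifference $W(RF)=W(F)$ on random data.

```python
import numpy as np

def dist2_SO3(F):
    """dist^2(F, SO(3)) for det F > 0: the nearest rotation is the
    orthogonal polar factor, so the squared distance is the sum of
    (sigma_i - 1)^2 over the singular values sigma_i of F."""
    sigma = np.linalg.svd(F, compute_uv=False)
    return float(np.sum((sigma - 1.0) ** 2))

rng = np.random.default_rng(0)
F = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # small perturbation: det F > 0
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q              # a proper rotation in SO(3)

print(dist2_SO3(R))                      # ~ 0 : normalization W(R) = 0
print(dist2_SO3(R @ F) - dist2_SO3(F))   # ~ 0 : frame indifference W(RF) = W(F)
```

Frame indifference here is simply the invariance of singular values under left multiplication by a rotation.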
Also, note that (\ref{frame}) implies that ${\mathcal Q}_3$ is nonnegative, is positive definite on symmetric matrices and ${\mathcal Q}_3 (F)= {\mathcal Q}_3({\mathrm{sym}} ~ F)$ for all $F\in \R^{3\times 3}$ (see Lemma \ref{Q3prop} for a proof of these standard observations). The model (\ref{IhW}) has been extensively studied in \cite{LePa1, LMP-prs1, LMP-prs2, lm1, lm2, M3, k1, k2, k3}. Recall (this is quite easy to check) that $I^h_W(u^h)=0$ is equivalent, via (\ref{frame}) and the polar decomposition theorem, to: \begin{equation}\label{zero} (\nabla u^h)^{T}\nabla u^h = (A^h)^{T}(A^h) \quad \mbox{ and } \quad \det \nabla u^h > 0 \quad \mbox{in }\Omega^h. \end{equation} The above can be interpreted in the following way: $I^h_W(u^h) = 0$ if and only if $u^h$ is an isometric immersion of the Riemannian metric $G^h = (A^h)^{T}(A^h)$. Therefore, the quantity: \begin{equation}\label{eh} e_h = \inf\Big\{I^h_W(u^h); ~ u^h\in W^{1,2}(\Omega^h, \mathbb{R}^3)\Big\} \end{equation} measures the residual energy at free equilibria of the configuration $\Omega^h$ that has been prestrained by $G^h$. This is consistent with \cite[Theorem 2.2]{LePa1}, which observes that $e_h>0$ whenever $G^h$ has no smooth isometric immersion in $\mathbb{R}^3$, i.e. when there is no $u^h$ with (\ref{zero}) or, equivalently, when the Riemann curvature tensor of the metric $G^h$ does not vanish identically on $\Omega^h$. \subsection{Growth tensors $A^h$ considered in this paper} Given now a sequence of growth tensors $A^h$, the main objective is to analyze the scaling of the residual energy in (\ref{eh}) in terms of the thickness $h$, and the asymptotic behavior of the minimizers of the energies $I^h_W$ as $h\to 0$. Note that when $A^h\equiv\mbox{Id}_3$, the model (\ref{IhW}) reduces to the classical nonlinear elasticity, augmented by the applied force term $\int_{\Omega^h} f^h u^h$. 
In this context, questions of dimension reduction have been studied in the seminal papers \cite{FJMgeo, FJMhier} and led to the rigorous derivation of the hierarchy of elastic $2d$ models, differentiated by the scaling of $f^h$. In this paper, we will be concerned with growth tensors $A^h$ which bifurcate from the Euclidean case $A=\mbox{Id}_3$, and are of the form: \begin{equation}\label{ahform-new} A^h(x', x_3)=\mathrm{Id}_3 + h^\gamma S_g(x') + h^{\gamma/2}x_3 B_g(x'). \end{equation} The ``stretching'' and ``bending'' tensors $S_g, B_g:\overline\Omega\rightarrow \mathbb{R}^{3\times 3}$ are two given smooth matrix fields, while the scaling exponent $\gamma$ belongs to the range: $$0<\gamma<2.$$ The critical cases $\gamma=0,2$ have been analyzed previously, and led to the fully nonlinear bending model in \cite{LePa1, a4} for $\gamma=0$, and the von K\'arm\'an-like morphogenesis model \cite{LMP-prs1, LMP-prs2} for $\gamma=2$. \smallskip Observe now that $A^h$ in (\ref{ahform-new}) yields: \begin{equation*} G^h(x', x_3)= (A^h)^T (A^h) = \mathrm{Id}_3 + 2h^\gamma \mbox{ sym} S_g(x') + 2h^{\gamma/2}x_3 \mbox{sym} B_g(x') +\mbox{ higher order terms}. 
\end{equation*} Interpreting the term $p_h=\mathrm{Id}_2 + 2h^\gamma (\mbox{sym} S_g)_{2\times 2} $ as the first fundamental form of the mid-plate $\Omega$, and $h^{\gamma/2}(\mbox{sym} B_g)_{2\times 2}$ as its second fundamental form, the compatibility of these forms through the Gauss-Codazzi equations at the leading order terms in the expansion in $h$ is expressed by the following conditions: \begin{equation}\label{compa} \mathrm{curl }\big((\mathrm{sym }~ B_g)_{2\times 2}\big) \equiv 0 \quad \mbox{ and } \quad \displaystyle\mathrm{curl}^T\mathrm{curl}~ ( S_g)_{2\times 2} + \mathrm{det}\big((\mathrm{sym }~ B_g)_{2\times 2}\big) \equiv 0 \mbox{ in } \Omega. \end{equation} Hence, if (\ref{compa}) is violated, then any isometric immersion $u_h:\Omega\to\mathbb{R}^3$ of $p_h$ will have the second fundamental form: $h^{\gamma/2} \Pi\neq h^{\gamma/2}\mbox{sym} B_g$. Expanding the energy of the deformation: \begin{equation}\label{expan} u^h(x', x_3) = u_h(x') + x_3 N^h(x'), \qquad N^h(x') =\frac{\partial_1 u_h\times \partial_2 u_h}{|\partial_1 u_h\times \partial_2 u_h|} \end{equation} (which is the Kirchhoff-Love extension of $u_h$ in the direction of the normal vector $N^h$ to the surface $u_h(\Omega)$), and gathering the remaining terms after the cancellation of $p_h$, we obtain: $$I_W^h(u^h)\approx \frac{1}{h}\int_{\Omega^h} |(\nabla u^h)^T(\nabla u^h) - G^h|^2~\mbox{d}x \approx \frac{1}{h}\int_{\Omega^h} |2h^{\gamma/2}x_3 \big((\mbox{sym} B_g(x'))_{2\times 2} - \Pi (x')\big)|^2 ~\mbox{d}x \approx C h^{\gamma+2}.$$ As we shall see, the scaling $h^{\gamma+2}$ above is sharp, and the residual 2d energy is indeed given in terms of the square of the difference in the scaled second fundamental forms: $|(\mbox{sym} B_g)_{2\times 2} - \Pi|^2$. We state our main results in the next subsections. 
\subsection{The variational limit with Monge-Amp\`ere constraint: case of $1<\gamma<2$} The main result of this paper is the identification of the asymptotic behavior of the minimizers of $I^h_W$ as $h\to 0$, through deriving the $\Gamma$-limit of the rescaled energies $ h^{-(\gamma+2)} I_W^h$. This limit, given in the theorem below, consists of minimizing the bending content, relative to the ideal bending $(\mbox{sym} B_g(x'))_{2\times 2} $, under a nonlinear constraint of the form $\det\nabla^2 v = f$. Our result, which concerns arbitrary functions $f$, is a generalization to the non-Euclidean setting of \cite[Theorem~2] {FJMhier}, where the degenerate Monge-Amp\`ere type constraint ($f\equiv 0$) was rigorously derived in the context of standard nonlinear elasticity. \begin{theorem}\label{compactness} Let $A^h$ be given as in (\ref{ahform-new}), with an arbitrary exponent $\gamma$ in the range: $$0<\gamma<2.$$ Assume that a sequence of deformations $u^h\in W^{1,2}(\Omega^h,\mathbb{R}^3)$ satisfies: \begin{equation} \label{boundinh} I^h_W(u^h) \leq Ch^{\gamma+2}, \end{equation} where $W$ fulfills (\ref{frame}) and (\ref{Q3}). Then there exist rotations $\bar R^h\in SO(3)$ and translations $c^h\in\mathbb{R}^3$ such that for the normalized deformations: $$y^h\in W^{1,2}(\Omega^1,\mathbb{R}^3), \qquad y^h(x',x_3) = (\bar R^h)^T u^h(x',hx_3) - c^h,$$ the following holds (up to a subsequence that we do not relabel): \begin{itemize} \item[(i)] $y^h(x',x_3)$ converge in $W^{1,2}(\Omega^1,\mathbb{R}^3)$ to $x'$. \item[(ii)] The scaled displacements: $\displaystyle{V^h(x')=\frac{1}{h^{\gamma/2}}\fint_{-1/2}^{1/2}\big(y^h(x',t) - x'\big)~\mathrm{d}t}$ converge to a vector field $V$ of the form $V = (0,0,v)^T$. This convergence is strong in $W^{1,2}(\Omega,\mathbb{R}^3)$. 
The only non-zero out-of-plane scalar component $v$ of $V$ satisfies: $v\in W^{2,2}(\Omega,\mathbb{R})$ and: \begin{equation}\label{MP-constraint} \det{\nabla ^2 v} = - \mathrm{curl}^T\mathrm{curl}~ ( S_g)_{2\times 2} \quad \mbox{ in } \Omega. \end{equation} In other words: $v\in\mathcal{A}_f$, where: $$ \mathcal{A}_f = \left\{ v\in W^{2,2}(\Omega); ~ \det\nabla^2 v = f\right\} \quad \mbox{ and } \quad f = - \mathrm{curl}^T\mathrm{curl}~ ( S_g)_{2\times 2}. $$ \item[(iii)] Moreover: \begin{equation}\label{MA-bd} \liminf_{h\to 0} \frac{1}{h^{\gamma+2}} I_W^h(u^h) \geq \mathcal{I}_f(v), \end{equation} where $\mathcal{I}_f:W^{2,2}(\Omega)\to\bar{\mathbb{R}}_+$ is given by: \begin{equation}\label{linpresKirchhoff} \begin{split} \mathcal{I}_f(v)= \left\{\begin{array}{ll} {\displaystyle \frac{1}{12} \int_\Omega \mathcal{Q}_2\Big(\nabla^2 v + (\mathrm{sym}~ B_g)_{2\times 2}\Big)}, & \mbox{ if } v\in\mathcal{A}_f,\\ +\infty & \mbox{ if } v\not\in\mathcal{A}_f \end{array}\right. \end{split} \end{equation} and the quadratic nondegenerate form $\mathcal{Q}_2$, acting on matrices $F\in\mathbb{R}^{2\times 2}$, is: \begin{equation}\label{defQ} \mathcal{Q}_2(F) =\min\Big\{\mathcal{Q}_3(\tilde F); ~\tilde F\in\mathbb{R}^{3\times 3}, \tilde F_{2\times 2}= F\Big\}. \end{equation} \end{itemize} \end{theorem} The result above can be interpreted as follows. The smallness of the energy scaling in (\ref{boundinh}) relative to the scaling in (\ref{ahform-new}) induces the deformations $u_h(x') = u^h(x', 0)$ of the mid-plate $\Omega$ to be perturbations of a rigid motion: \begin{equation}\label{rs1} u_h(x') = x' + h^{\gamma/2} v(x')e_3 + \mbox{higher order terms}. \end{equation} Moreover, the Gaussian curvatures $\kappa$ of the metric $p_h = \mbox{Id}_2 + 2h^\gamma (\mbox{sym} S_g)_{2\times 2}$ and of the surface $u_h(\Omega)$ coincide at their highest order in the expansion in terms of $h$. 
This is precisely the meaning of the constraint (\ref{MP-constraint}), in view of the formulas: \begin{equation}\label{curv} \begin{split} & \kappa\big(\mbox{Id}_{2} + 2\epsilon^2 (\mbox{sym } S_g)_{2\times 2}\big) = - \epsilon^2\mathrm{curl}^T\mathrm{curl}~ ( S_g)_{2\times 2} +\mathcal{O}(\epsilon^4) \\ & \kappa\big(\nabla (\mbox{id}_2 +\epsilon ve_3)^T \nabla (\mbox{id}_2 +\epsilon ve_3)\big) = \epsilon^2\det\nabla^2v +\mathcal{O}(\epsilon^4). \end{split} \end{equation} All other curvatures, besides $\kappa$, contribute to the limiting energy $\mathcal{I}_f$. Indeed, $\mathcal{I}_f$ measures the $L^2$ difference between the full second fundamental forms: the form $h^{\gamma/2} (\mbox{sym } B_g)_{2\times 2}$ deduced from $A^h$, and that of the surface $u_h(\Omega)$ given by: $$(\nabla u_h)^T\nabla N^h = -h^{\gamma/2} \nabla^2 v + \mbox{ higher order terms}.$$ \medskip We now turn to the optimality of the energy bound in (\ref{MA-bd}) and of the scaling (\ref{boundinh}). \begin{theorem}\label{limsup} Assume (\ref{ahform-new}), (\ref{frame}) and (\ref{Q3}). Moreover, assume that $\Omega$ is simply connected and: \begin{equation*} 1<\gamma<2. \end{equation*} Then, for every $v\in \mathcal{A}_f$, there exists a sequence of deformations $u^h\in W^{1,2}(\Omega^{h},\mathbb{R}^3)$ such that the following holds: \begin{itemize} \item[(i)] The sequence $y^h(x',x_3) = u^h(x',hx_3)$ converges in $W^{1,2}(\Omega^1,\mathbb{R}^3)$ to $x'$. \item[(ii)] $\displaystyle V^h(x') = h^{-\gamma/2}\fint_{-h/2}^{h/2}(u^h(x',t) - x')~\mathrm{d}t$ converge in $W^{1,2}(\Omega,\mathbb{R}^3)$ to $(0,0,v)^T$. \item[(iii)] One has: $\displaystyle \lim_{h\to 0} \frac{1}{h^{\gamma+2}} I_W^h(u^h) = \mathcal{I}_f(v)$, where $\mathcal{I}_f$ is as in (\ref{linpresKirchhoff}). \end{itemize} \end{theorem} \begin{theorem}\label{minsconverge} Assume (\ref{ahform-new}), (\ref{frame}), (\ref{Q3}). Let $\Omega$ be simply connected and let $1<\gamma<2$. 
Then: \begin{itemize} \item[(i)] $\mathcal {A}_f \neq \emptyset$ if and only if there exists a uniform constant $C \geq 0 $ such that: $$e_h = \inf I^h_W \le C h^{\gamma+2}. $$ \item[] Under this condition, for any minimizing sequence $u^h\in W^{1,2}(\Omega^h,\mathbb{R}^3)$ for $I^h_W$, i.e. when: \begin{equation}\label{approxmin} \lim_{h\to 0} \frac 1{h^{\gamma+2}}\left( I^h_W (u^h) - \inf I^h_W \right ) = 0, \end{equation} the convergences (i), (ii) of Theorem \ref{compactness} hold up to a subsequence, and the limit $v$ is a minimizer of the functional $\mathcal I_f$ defined as in (\ref{linpresKirchhoff}). Moreover, for any (global) minimizer $v$ of ${\mathcal I}_f$, there exists a minimizing sequence $u^h$, satisfying (\ref{approxmin}) together with (i), (ii) and (iii) of Theorem \ref{limsup}. \item[(ii)] If (\ref{compa}) is violated, i.e. when: \begin{equation}\label{lincondition} \mathrm{curl }\big((\mathrm{sym }~ B_g)_{2\times 2}\big) \not\equiv 0, \quad \mbox{ or } \quad \displaystyle\mathrm{curl}^T\mathrm{curl}~ ( S_g)_{2\times 2} + \mathrm{det}\big((\mathrm{sym }~ B_g)_{2\times 2}\big) \not\equiv 0, \end{equation} then: $$\exists c>0 \qquad \inf I^h_W \geq c h^{\gamma+2}. $$ \end{itemize} \end{theorem} The conditions in \eqref{lincondition} guarantee that the highest order terms in the expansion of the Riemann curvature tensor components $R_{1213}$, $R_{2321}$ and $R_{1212}$ of $G^h=(A^h)^TA^h$ do not vanish. Also, the non-vanishing of either of these terms implies that $\inf \mathcal {I}_f >0$ (see Lemma \ref{GCM}), which combined with Theorem \ref{compactness} yields the lower bound on $\inf I^h_W$. The mechanical significance of these components of the curvature tensor is not known to the authors, but it seems that certain components have a more important role in determining the energy scaling; compare with \cite[Theorems 4.1, 4.3, 4.5]{a4}. 
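The second expansion in (\ref{curv}) can be verified symbolically: for the graph of $\epsilon v$, the Gaussian curvature equals $\epsilon^2\det\nabla^2 v/(1+\epsilon^2|\nabla v|^2)^2$, whose leading term is $\epsilon^2\det\nabla^2 v$. The following sketch (Python with SymPy assumed; the particular polynomial $v$ is an arbitrary test function, not from the paper) checks this for the deformation $\mathrm{id}_2+\epsilon v e_3$.

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon', real=True)
v = x**3 + x*y**2 - 2*y**3            # arbitrary test displacement (hypothetical)
r = sp.Matrix([x, y, eps * v])        # the surface id_2 + eps * v * e_3

ru, rv = r.diff(x), r.diff(y)
nn = ru.cross(rv)                     # unnormalized normal; |nn|^2 = det(g)
E2 = nn.dot(nn)
L = r.diff(x, 2).dot(nn)              # second fundamental form, scaled by |nn|
M = r.diff(x, y).dot(nn)
N = r.diff(y, 2).dot(nn)
kappa = sp.cancel((L * N - M**2) / E2**2)   # Gaussian curvature det(II)/det(g)

lead = sp.simplify(sp.series(kappa, eps, 0, 4).removeO()
                   - eps**2 * sp.hessian(v, (x, y)).det())
print(lead)   # 0 : kappa = eps^2 * det(grad^2 v) + O(eps^4)
```

The same computation with a symbolic, unspecified $v$ is possible but slower; the polynomial choice keeps the series expansion cheap.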
The scaling analysis in Theorem \ref{minsconverge} is new, and in particular it does not follow from our prior results in \cite{LMP-prs1}, valid for a family of growth tensors $A^h$ different from (\ref{ahform-new}). In a sense, the scaling exponents $\gamma$, $\gamma/2$ and $\gamma+2$ pertain to the critical case in \cite[Theorem 1.1] {LMP-prs1}, and thus the results in Theorem \ref{minsconverge} and Theorem \ref{compactness} are also optimal from this point of view. \subsection{The matching property: a full range case of $0<\gamma<2$} It is clear from Theorem \ref{compactness} that the recovery sequence $u^h$ in Theorem \ref{limsup} will have the form (\ref{expan}), with $u_h$ as in (\ref{rs1}). We can write this expansion with more precision, including a higher order correction $w_h:\Omega\to\mathbb{R}^3$: \begin{equation}\label{rs2} u_h(x') = x' + h^{\gamma/2} v(x')e_3 + h^\gamma w_h + \mbox{higher order terms}. \end{equation} In order to match the ideal metric $p_h=\mbox{Id}_2+2h^\gamma (\mbox{sym }S_g)_{2\times 2}$ with the metric induced by $u_h$: \begin{equation}\label{metrica} (\nabla u_h)^T(\nabla u_h) = \mbox{Id}_2 + 2h^\gamma \Big(\frac{1}{2}\nabla v\otimes\nabla v + {\mathrm{sym}}\nabla w_h\Big) +\mathcal{O}(h^{3\gamma/2}), \end{equation} one hence needs that: \begin{equation}\label{haha} -{\mathrm{sym}}\nabla w_h = \frac{1}{2}\nabla v\otimes\nabla v - (\mbox{sym }S_g)_{2\times 2}. \end{equation} On a simply connected domain $\Omega$, equation (\ref{haha}) is solvable in terms of $w_h$ if and only if the tensor in its right hand side belongs to the kernel of the operator $\mbox{curl}^T\mbox{curl}$, which becomes: $$0 = \mbox{curl}^T\mbox{curl} \Big(\frac{1}{2}\nabla v\otimes\nabla v - (\mbox{sym }S_g)_{2\times 2} \Big) = -\det\nabla^2v - \mbox{curl}^T\mbox{curl} (S_g)_{2\times 2}, $$ and is readily satisfied in view of (\ref{MP-constraint}). 
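The algebraic identity $\mathrm{curl}^T\mathrm{curl}\big(\frac{1}{2}\nabla v\otimes\nabla v\big) = -\det\nabla^2 v$ used above can be checked symbolically from the formula $\mathrm{curl}^T\mathrm{curl}\, F = \partial_{11}^2 F_{22} - \partial_{12}^2(F_{12}+F_{21}) + \partial_{22}^2 F_{11}$ recalled in the Notation subsection. A sketch, assuming Python with SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)

def curlTcurl(F):
    """curl^T curl of a 2x2 matrix field, as defined in the paper:
    d11 F22 - d12 (F12 + F21) + d22 F11."""
    return (sp.diff(F[1, 1], x, 2) - sp.diff(F[0, 1] + F[1, 0], x, y)
            + sp.diff(F[0, 0], y, 2))

v = sp.Function('v')(x, y)
grad = sp.Matrix([sp.diff(v, x), sp.diff(v, y)])
F = sp.Rational(1, 2) * grad * grad.T          # (1/2) grad v (x) grad v
lhs = sp.simplify(curlTcurl(F))
rhs = -sp.hessian(v, (x, y)).det()
print(sp.simplify(lhs - rhs))                  # 0 : the kernel condition in (haha)
```

Note that the computation is valid for an arbitrary smooth scalar field $v$, since `v` is left as an undefined symbolic function.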
It follows from careful calculations in the proof of Theorem \ref{limsup} that the constraint (\ref{MP-constraint}) allows precisely for the existence of a correction $w_h$ in (\ref{rs2}) so that the discrepancy of the metrics in $p_h$ and (\ref{metrica}) does not exceed the residual energy bound (\ref{boundinh}), when $\gamma$ is in the range $1<\gamma<2$. In order to cover a larger range of $\gamma$, one hence needs to ``improve'' the recovery sequence (\ref{rs2}) towards matching the metrics in (\ref{metrica}) and the metrics $G^h(\cdot, x_3=0) = p_h + \mbox{ higher order terms}$, with a better accuracy. This is the content of our next result (see \cite[Theorem 7]{FJMhier} for a parallel result valid in the degenerate case $S_g\equiv 0$). \begin{theorem}\label{matching} Assume that $\Omega$ is simply connected and that $-\mathrm{curl}^T\mathrm{curl} (S_g)_{2\times 2}\geq c>0$ in $\Omega$. For $0<\beta<1$ let $v\in\mathcal{C}^{2,\beta}(\bar\Omega,\mathbb{R})$ satisfy: \begin{equation*} \det\nabla^2 v = -\mathrm{curl}^T\mathrm{curl} (S_g)_{2\times 2} \quad \mbox{ in } \Omega. \end{equation*} Let $s_\epsilon:\Omega\to\mathbb{R}^{2\times 2}_{sym}$ be a given sequence of smooth symmetric tensor fields, such that: $\sup \|s_\epsilon\|_{\mathcal{C}^{1,\beta}} < +\infty$. Then there exists a sequence $w_\epsilon\in\mathcal{C}^{2,\beta}(\bar\Omega,\mathbb{R}^3)$, such that: \begin{equation}\label{metric} \forall \epsilon>0 \quad \nabla(\mathrm{id}_2 + \epsilon ve_3 + \epsilon^2w_\epsilon)^T \nabla(\mathrm{id}_2 + \epsilon ve_3 + \epsilon^2w_\epsilon) = \mathrm{Id}_2 + 2\epsilon^2 (\mathrm{sym}~S_g)_{2\times 2} + \epsilon^3s_\epsilon, \end{equation} and: $\sup \|w_\epsilon\|_{\mathcal{C}^{2,\beta}}<+\infty$. \end{theorem} The applicability of Theorem \ref{matching} is limited by the strong assumption of H\"older regularity of $v\in\mathcal{C}^{2,\beta}(\bar\Omega,\mathbb{R})$. Clearly, it is too restrictive for constructing a recovery sequence when $v\in W^{2,2}(\Omega)$. 
However, when the $\mathcal{C}^{2,\beta}(\bar\Omega,\mathbb{R})$ solutions $v$ of (\ref{MP-constraint}) are dense in the set of all $W^{2,2}(\Omega, \mathbb{R})$ solutions of the same equation, with respect to the $W^{2,2}$ topology, then one can use a diagonal argument. As shown in \cite{LMP-arma}, the mentioned density property holds for star-shaped domains with a constant positive linearized curvature constraint, and consequently we obtain: \begin{theorem}\label{limsup2} Assume (\ref{ahform-new}), (\ref{frame}) and (\ref{Q3}). Moreover, assume that $\Omega$ is star-shaped with respect to a ball, and that $f = -\mathrm{curl}^T\mathrm{curl}(S_g)_{2\times 2} \equiv c_0>0$ in $\Omega$. Let: \begin{equation*} 0<\gamma<2. \end{equation*} Then, for every $v\in \mathcal{A}_f$, there exists a sequence of deformations $u^h\in W^{1,2}(\Omega^{h},\mathbb{R}^3)$ such that (i), (ii) and (iii) of Theorem \ref{limsup} hold. Moreover, all the assertions of Theorem \ref{minsconverge} hold as well. \end{theorem} \smallskip \subsection{On the multiplicity of solutions to the limit model} Our final set of results concerns the question of uniqueness of the minimizers to the model (\ref{linpresKirchhoff}). We first observe that both uniqueness and existence of a one-parameter family of global minimizers are possible (see Example \ref{ex1} and Example \ref{ex2}). Naturally, for the radial function $f=f(r)\ge 0$, uniqueness is tied to the radial symmetry of minimizers. One approach is to study the relaxed problem, and replace the constraint set $\mathcal{A}_f$ by $\mathcal{A}_{f}^* = \{v\in W^{2,2}(\Omega); ~ \det\nabla^2v\geq f\}$. In particular, as a corollary to Theorem \ref{decrease} and Corollary \ref{condi} we obtain the following result: \begin{theorem} Assume that $f \in L^2(B(0,1))$ is radially symmetric i.e.: $f=f(r)$ and $\int_0^1 r f ^2(r)~\mathrm{d}r<\infty$. Assume further that $f\geq c >0$, and that $ f $ is a.e. 
nonincreasing, i.e.: \begin{equation} \mbox{for a.e. } r\in [0,1] \mbox{ and a.e. } x\in [0,r]: \qquad f (r)\leq f (x). \end{equation} Then the functional $~\mathcal{I}(v) = \int_{B(0,1)} |\nabla^2v|^2$, restricted to the constraint set $\mathcal{A}_f$, has a unique (up to an affine map) minimizer, which is radially symmetric and given by $v_ f $ in (\ref{minimizer}). \end{theorem} It is unclear to the authors whether the above theorem holds for every positive $f$. However, we can establish that the radial solution to the constraint equation is always a critical point. More precisely, we have the following result: \begin{theorem}\label{criptmA} Assume that $f\in\mathcal{C}^\infty(\bar B(0,1))$ is radially symmetric, i.e. $f=f(r)$, and that $f\geq c>0$. Then the radially symmetric $v=v(r)\in \mathcal{A}_f$ must be a critical point of the functional $~\mathcal{I}(v) = \int_{B(0,1)} |\nabla^2v|^2$, restricted to the constraint set $\mathcal{A}_f$. \end{theorem} \smallskip \subsection{Notation} Throughout the paper, we use the following notational convention. For a matrix $F$, its $n\times m$ principal minor is denoted by $F_{n\times m}$. When $m=n$, the symmetric part of a square matrix $F$ is: $\mbox{sym } F = 1/2(F + F^T)$. The superscript $^T$ refers to the transpose of a matrix or an operator. The operator $\mbox{curl}^T\mbox{curl}$ acts on $2\times 2$ square matrix fields $F$ by taking first $\mbox{curl}$ of each row (returning $2$ scalars) and then taking $\mbox{curl}$ of the resulting $2$d vector, so that: $\mbox{curl}^T\mbox{curl} F = \partial_{11}^2 F_{22} - \partial_{12}^2(F_{12}+F_{21}) + \partial_{22}^2 F_{11}.$ In particular, we see that: $\mbox{curl}^T\mbox{curl } F = \mbox{curl}^T\mbox{curl} (\mbox{sym }F)$. Further, for any $F \in{\mathbb R}^{2\times2}$, by $F^*\in{\mathbb R}^{3\times 3}$ we denote the matrix for which $ (F^*)_{2\times2} = F$ and $(F^*)_{i3}= (F^*)_{3i} =0$, $i=1, 2, 3$. 
By $\nabla_{tan}$ we denote taking derivatives $\partial_1$ and $\partial_2$ in the in-plane directions $e_1=(1,0,0)^T$ and $e_2=(0,1,0)^T$. The derivative $\partial_3$ is taken in the out-of-plane direction $e_3=(0,0,1)^T$. Finally, we will use the Landau symbols $\mathcal{O}(h^\alpha)$ and $o(h^\alpha)$ to denote quantities which are of the order of, or vanish faster than $h^\alpha$, as $h\to 0$. By $C$ we denote any universal constant, depending on $\Omega$ and $W$, but independent of other involved quantities, so that $C=\mathcal{O}(1)$. \medskip \subsection{Discussion and relation to previous works} We now comment on the ``critical exponents'' of $\gamma$, i.e. the boundary values of the ranges in which our analysis is valid. To draw a parallel with the previous results, in particular the seminal paper \cite{FJMhier} and the conjecture in \cite{LePa2} for the hierarchy of models for nonlinear elastic shells, we note the following heuristics. Given an exponent $\gamma>0$, we expect (in view of Theorem \ref{compactness} and its proof) the residual energy to scale as $h^{\gamma+2}$, under suitable non-vanishing curvature conditions on the prestrain metric. Following \cite{LePa2}, where the critical exponents for the energy were shown to be: $\{\beta_n= 2+ \frac 2n\}_{n\in \mathbb N}$, we let $\gamma_n= \beta_n -2 = \frac 2n$, with $\gamma_0 = \infty$ and $\gamma_\infty = 0$. We say that $(V_1, \ldots, V_n) : \Omega \to (\R^3)^n$ is an $n$th order isometry of the prestrained plate when the metrics induced by the one-parameter family of infinitesimal bendings $u_h= {\rm id}_2 + \sum_{k=1}^n h^{k\gamma/2} V_k$ differ from the prescribed metrics $G^h$ by terms of order at most $\mathcal O(h^{(n+1)\gamma/2})$. If $n=1$, any normal out-of-plane displacement $V_1 = (0,0,v)^T$ is a 1st order isometry, while for $n=\infty$, the resulting bending $u_h$ is formally an exact isometry.
\smallskip In this framework, several regimes can be distinguished: \begin{itemize} \item[(i)] When $ \gamma_n <\gamma < \gamma_{n-1}$, we expect the limiting energy to be a linearized bending model with the $n$th-order isometry constraint. \item[(ii)] At the critical values $\gamma = \gamma_n$, the isometry constraint of the limit model should be of order $n-1$, but in addition to the bending energy term, the limiting energy will also contain the $n$th order stretching term. \item[(iii)] Whenever the structure of the pre-strain tensor $S_g$ allows for it, any $n$th order isometry can be matched to a higher order isometry of some order $m>n$. In that case, the theories in the range $ \gamma_m <\gamma < \gamma_n$ are expected to collapse to the same theory (with the $n$th order isometry constraint). \end{itemize} The results in this paper can now be interpreted as follows. In Theorems \ref{limsup} and \ref{minsconverge} we derived the correct model, with the second order isometry constraint \eqref{MP-constraint}, corresponding to the values of $\gamma$ between $\gamma_2=1$ and $\gamma_1=2$. The constraint \eqref{MP-constraint} is naturally derived for the full range $0<\gamma<\gamma_1$ (Theorem \ref{compactness}), but this information is not enough for characterizing the limiting model when $\gamma \le \gamma_2=1$. Theorem \ref{matching} and the corresponding density result provide the tools to derive all the expected higher order constraints, in the full range $0=\gamma_\infty < \gamma \le \gamma_2=1$, from the second order constraint \eqref{MP-constraint}. This leads to Theorem \ref{limsup2}. For other instances where such matching properties have been proved and exploited to a similar purpose, see \cite{lemopa1, FJMhier, holepa, LMP-arma}. The continuation of infinitesimal bendings has also been used in \cite{ho2, ho3} to derive the Euler-Lagrange equations of elastic shell models.
In the absence of better techniques to show a direct $n$th order to exact isometry continuation (when the assumptions of Theorem \ref{matching} do not hold), one could hope to improve the results of Theorem \ref{minsconverge}, say to the range $ 2/3= \gamma_3<\gamma \le \gamma_2 =1$, provided that a matching of $2$nd order isometries to $3$rd order isometries is at hand. Solving this problem involves analyzing a linear system of PDEs, rather than the full nonlinear isometry equation as in Theorem \ref{matching}. In general, this strategy, which was adapted in \cite{holepa} for developable surfaces (see also \cite{ho2}), leads to matching of $n$th order isometries to $(n+1)$th order isometries, and hence it could potentially imply that Theorem \ref{minsconverge} is indeed true for the full range $0<\gamma<2$. This is, however, still a technically difficult problem and beyond our current understanding. The two extreme critical cases are: $\gamma_1=2$, which leads to the prestrained von K\'arm\'an model, whose rigorous derivation was given in \cite{LMP-prs1}, and $\gamma_\infty =0$, which corresponds to the prestrained Kirchhoff model, that has been considered in \cite{a4, LePa1}. The Monge-Amp\`ere constrained model studied in this paper lies in between the Kirchhoff and von K\'arm\'an models and can be compared to either of them. It can also be seen as a natural generalization, to the prestrained case, of a similar model derived in \cite{FJMhier}, which involves the degenerate constraint $\det\nabla^2v=0$. Finally, the regime $\gamma> \gamma_1$ leads to a simple linear bending model. \bigskip \noindent {\bf Acknowledgments.} This project is based on the work supported by the National Science Foundation. M.L. is partially supported by the NSF Career grant DMS-0846996. M.R.P. is partially supported by the NSF grant DMS-1210258.
\section{Compactness and lower bound: A proof of Theorem \ref{compactness}} {\bf 1.} Recall that in \cite{LMP-prs1} we dealt with the general growth tensor family $A_h$. The following quantities, which we compute for the present case (\ref{ahform-new}), play a role in the scaling analysis below: \begin{equation}\label{strange} \begin{split} & \|\nabla_{tan} (A^h_{~|x_3=0})\|_{L^\infty(\Omega)} + \|\partial_3 A^h\|_{L^\infty(\Omega^h)}\leq Ch^{\gamma/2} \\ & \|A^h\|_{L^\infty(\Omega^h)} + \|(A^h)^{-1}\|_{L^\infty(\Omega^h)} \leq C. \end{split} \end{equation} We now quote the following approximation result, which can be directly obtained from the geometric rigidity estimate \cite{FJMgeo}, in view of the bounds (\ref{strange}): \begin{theorem} \label{thapprox} \cite[Theorem 1.6]{LMP-prs1} Let $u^h\in W^{1,2}(\Omega^h,\mathbb{R}^3)$ satisfy $\ds \lim_{h\to 0} \frac{1}{h^2} I_W^h(u^h) = 0$ (which is in particular implied by (\ref{boundinh})). Then there exist matrix fields $R^h\in W^{1,2}(\Omega,\mathbb{R}^{3\times 3})$, such that $R^h(x')\in SO(3)$ for a.e. $x'\in\Omega$, and: \begin{equation}\label{m1} \frac{1}{h}\int_{\Omega^h}|\nabla u^h(x) - R^h(x')A^h(x)|^2~\mathrm{d}x \leq C h^{2+\gamma}, \qquad \int_\Omega |\nabla R^h|^2 \leq C h^{\gamma}. \end{equation} \end{theorem} \medskip Towards the proof of compactness in Theorem \ref{compactness}, we now outline the argument in \cite{LMP-prs1} which yields (i) and (ii). We only emphasize the points that lead to the new constraint (\ref{MP-constraint}). Assume (\ref{boundinh}) and let $R^h\in W^{1,2}(\Omega, SO(3))$ be the matrix fields as in Theorem \ref{thapprox}.
Define the averaged rotations: $\tilde R^h = \mathbb{P}_{SO(3)}\fint_\Omega R^h,$ which satisfy: \begin{equation}\label{5.60} \int_\Omega |R^h - \tilde R^h|^2 \leq C\Big(\int_\Omega |R^h - \fint R^h|^2 + \mbox{dist}^2(\fint R^h, SO(3))\Big)\leq Ch^{\gamma}, \end{equation} and also let: \begin{equation}\label{m00} \hat R^h = \mathbb{P}_{SO(3)}\fint_{\Omega^h} (\tilde R^h)^T\nabla u^h, \end{equation} which is well defined in view of (\ref{m1}) and (\ref{5.60}). Consequently: \begin{equation}\label{5.66} |\hat R^h -\mbox{Id}|^2 \leq C|\fint_{\Omega^h} (\tilde R^h)^T\nabla u^h - \mbox{Id}|^2\leq Ch^{\gamma}. \end{equation} Defining: $\bar R^h = \tilde R^h \hat R^h$, or equivalently: $\ds\bar R^h = \mathbb{P}_{SO(3)}\fint_{\Omega^h}\nabla u^h$, it follows by (\ref{5.60}), (\ref{5.66}) and (\ref{m1}): \begin{equation}\label{m2} \int_\Omega |R^h - \bar R^h|^2 \leq Ch^{\gamma} \quad\mbox{ and } \quad\lim_{h\to 0} (\bar R^h)^TR^h =\mbox{Id} \quad \mbox{ in } W^{1,2}(\Omega, \mathbb{R}^{3\times 3}). \end{equation} Consider the translation vectors $c^h\in\mathbb{R}^3$, such that: \begin{equation}\label{m22} \int_\Omega V^h = 0 \quad \mbox{ and } \quad \mbox{skew}\int_\Omega\nabla V^h = 0. \end{equation} To prove Theorem \ref{compactness} (i), we now use (\ref{5.66}) in: \begin{equation}\label{5.111} \begin{split} \|(\nabla y^h - &\mbox{Id})_{3\times 2}\|^2_{L^2(\Omega^1)} \leq \frac{1}{h}\int_{\Omega^h} |(\bar R^h)^T\nabla u^h - \mbox{Id}|^2 \\ &\leq C \left(\frac{1}{h}\int_{\Omega^h} |(\tilde R^h)^T\nabla u^h - \mbox{Id}|^2 ~\mbox{d}x + |\hat R^h - \mbox{Id}|^2 \right) \leq Ch^\gamma, \end{split} \end{equation} and notice that by (\ref{m1}) one has: $\ds \|\partial_3 y^h\|^2_{L^2(\Omega^1)} \leq C h \int_{\Omega^h}|\nabla u^h|^2 \leq Ch^2$. This yields convergence of $y^h$ by means of the Poincar\'e inequality and (\ref{m22}). We also remark that (\ref{5.111}) implies the weak convergence of $V^h$ (up to a subsequence) in $W^{1,2}(\Omega,\mathbb{R}^3)$. 
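For the reader's orientation (an illustration of ours, not used in the proof), the projection $\mathbb{P}_{SO(3)}$ appearing in the averaged rotations above can be computed via the polar decomposition: for a matrix $F$ with $\det F>0$, the nearest rotation is the orthogonal factor of $F$, obtainable from the SVD. A minimal numerical sketch, with an arbitrary matrix near $SO(3)$:

```python
import numpy as np

def project_SO3(F):
    # Polar decomposition via SVD: F = U diag(s) Vt; the nearest rotation is
    # U Vt, with a sign correction on the last singular direction to enforce det = +1.
    U, _, Vt = np.linalg.svd(F)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])
    return U @ D @ Vt

# A matrix near SO(3): identity plus a small (mostly skew) perturbation.
F = np.eye(3) + 0.05 * np.array([[0.0, 1.0, 0.0],
                                 [-1.0, 0.0, 2.0],
                                 [0.0, -2.0, 0.0]]) + 0.01 * np.eye(3)
R = project_SO3(F)

assert np.allclose(R.T @ R, np.eye(3))    # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)  # R is a proper rotation
```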
\medskip {\bf 2.} Consider the matrix fields $D^h\in W^{1,2}(\Omega,\mathbb{R}^{3\times 3})$: \begin{equation}\label{Dh2} \begin{split} D^h(x') & = \frac{1}{h^{\gamma/2}}\fint_{-h/2}^{h/2} (\bar R^h)^TR^h(x') A^h(x',t) - \mbox{Id} ~\mbox{d}t \\ & = h^{\gamma/2} (\bar R^h)^TR^h(x') S_g(x') + \frac{1}{h^{\gamma/2}}\left((\bar R^h)^TR^h(x') - \mbox{Id}\right). \end{split} \end{equation} By (\ref{m2}) and (\ref{m1}), it clearly follows that: $\|D^h\|_{W^{1,2}(\Omega)}\leq C$. Hence, up to a subsequence: \begin{equation}\label{m3} \begin{split} \lim_{h\to 0} & D^h = D \quad \mbox{ and } \quad \lim_{h\to 0} \frac{1}{h^{\gamma/2}} \left((\bar R^h)^TR^h- \mbox{Id}\right) = D \\ & \mbox{ weakly in } W^{1,2}(\Omega, \mathbb{R}^{3\times 3}) \mbox{ and (strongly) in } L^q(\Omega, \mathbb{R}^{3\times 3}) \quad \forall q\geq 1. \end{split} \end{equation} Using (\ref{m2}), (\ref{m1}) and the identity $(R-\mbox{Id})^T(R-\mbox{Id}) = -2\mbox{sym}(R-\mbox{Id})$, valid for all $R\in SO(3)$, we obtain: $\ds \|\mbox{sym}((\bar R^h)^TR^h - \mbox{Id})\|_{L^2(\Omega)} \leq Ch^{\gamma}$. Consequently, the limiting matrix field $D$ has skew-symmetric values. Further, by (\ref{m2}) and (\ref{m3}): \begin{equation}\label{m5} \begin{split} \lim_{h\to 0} \frac{1}{h^{\gamma/2}}\mbox{sym} D^h & = \lim_{h\to 0} \Bigg( \mbox{sym} \left((\bar R^h)^TR^h S_g\right) - \frac{1}{2}\frac{1}{h^\gamma} \left((\bar R^h)^TR^h(x')- \mbox{Id}\right)^T \left((\bar R^h)^TR^h(x')- \mbox{Id}\right)\Bigg) \\ & = \mbox{sym } S_g + \frac{1}{2}D^2 \quad \mbox{ in } L^q(\Omega, \mathbb{R}^{3\times 3}) \quad \forall q\geq 1. 
\end{split} \end{equation} Regarding convergence of $V^h$, we have: \begin{equation}\label{hestimate} \begin{split} \|\nabla V^h - D^h_{3\times 2}\|_{L^2(\Omega)}^2 & \leq \frac{C}{h^{\gamma}}\int_\Omega\left| \fint_{-h/2}^{h/2}R^h(x')A^h_{3\times 2}(x',t) - \nabla_{tan}u^h(x',t)~\mbox{d}t \right|^2~\mbox{d}x'\\ & \leq \frac{C}{h^{\gamma+1}}\int_{\Omega^h} |\nabla u^h(x) - R^h(x') A^h(x)|^2~\mbox{d}x \leq Ch^2, \end{split} \end{equation} and hence by (\ref{m3}), $\nabla V^h$ converges in $L^2(\Omega,\mathbb{R}^{3\times 2})$ to $D_{3\times 2}$. Consequently, by (\ref{m22}): \begin{equation}\label{m6} \lim_{h\to 0} V^h = V \mbox{ in } W^{1,2}(\Omega, \mathbb{R}^{3}), \quad V\in W^{2,2}(\Omega, \mathbb{R}^{3})\quad \mbox{ and } ~~\nabla V= D_{3\times 2}. \end{equation} By Korn's inequality, $V_{tan}$ must be constant, hence $0$ in view of (\ref{m22}). This ends the proof of the first claim in Theorem \ref{compactness} (ii). \medskip {\bf 3.} We now show (\ref{MP-constraint}). By (\ref{5.111}) we have: $$ \ds \|{\mathrm{sym}} \nabla V^h\|^2_{L^2(\Omega)} \le \frac 1h \int_{\Omega^h} |(\bar R^h)^T \nabla u^h - \mbox{Id}|^2 \le Ch^\gamma. $$ We conclude, using (\ref{hestimate}) and (\ref{m5}), that: $$ \ds \lim_{h\to 0} \frac {1}{h^{\gamma/2}} {\mathrm{sym}}\, \nabla V_{tan}^h = \lim_{h\to 0} \Big(\frac {1}{h^{\gamma/2}} {\mathrm{sym}} (D^h)_{tan} + \mathcal{O}(h^{1-\gamma/2})\Big) = (\mbox{sym } S_g + \frac{1}{2}D^2)_{tan}, $$ weakly in $L^2(\Omega)$. As a consequence, Korn's inequality implies the existence of a displacement field $w\in W^{1,2}(\Omega,\R^2)$ for which $$ \ds {\mathrm{sym}} \nabla w= (\mbox{sym } S_g + \frac{1}{2}D^2)_{tan} = \mbox{sym } ( S_g)_{2\times 2} - \frac 12 \nabla v \otimes \nabla v, $$ where we calculated $D^2$ through \eqref{m6}, knowing that ${\mathrm{sym}} \, D =0$ and $V= (0,0, v)^T$. Applying the operator ${\mathrm{curl}} ^T {\mathrm{curl}}$ to both sides of the above formula yields the required result.
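For the reader's convenience, the computation of $D^2$ invoked above can be made explicit (a supplementary display, not in the original text): since $\mathrm{sym}\,D=0$, $\nabla V = D_{3\times 2}$ and $V=(0,0,v)^T$, we have:

```latex
D = \begin{pmatrix} 0 & 0 & -\partial_1 v \\ 0 & 0 & -\partial_2 v \\
\partial_1 v & \partial_2 v & 0 \end{pmatrix},
\qquad
D^2 = -\begin{pmatrix} \nabla v\otimes \nabla v & 0 \\ 0 & |\nabla v|^2 \end{pmatrix},
```

so that indeed $(\frac12 D^2)_{tan} = -\frac 12 \nabla v\otimes\nabla v$.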
\medskip {\bf 4.} To prove the lower bound in (iii), define the rescaled strains $P^h\in L^2(\Omega^1, \mathbb{R}^{3\times 3})$ by: $$P^h(x', x_3) = \frac{1}{h^{\gamma/2+1}}\Big((R^h(x'))^T \nabla u^h(x', hx_3)A^h(x', hx_3)^{-1} - \mbox{Id}\Big).$$ Clearly, by (\ref{m1}) $\|P^h\|_{L^2(\Omega^1)}\leq C$ and hence, up to a subsequence: \begin{equation}\label{ma0} \lim_{h\to 0} P^h = P \qquad \mbox{weakly in } L^2(\Omega^1,\mathbb{R}^{3\times 3}). \end{equation} Precisely the same arguments as in \cite{lemopa1} yield: \begin{equation}\label{ma3} P(x)_{3\times 2} = P_0(x')_{3\times 2} + x_3 P_1(x')_{3\times 2}, \end{equation} for some $P_0\in L^2(\Omega, \mathbb{R}^{3\times 3})$ where: \begin{equation}\label{madefP1} P_1(x') = \nabla(D(x')e_3) - B_g(x'). \end{equation} \medskip Before concluding the proof of the lower bound in Theorem \ref{compactness} (iii), we need to gather a few simple consequences of (\ref{frame}) and (\ref{Q3}). \begin{lemma} \label{Q3prop} Assume that $W$ satisfies (\ref{frame}) and (\ref{Q3}). Then the quadratic form $Q_3$ is nonnegative, positive definite on symmetric matrices, and satisfies $Q_3(F) = Q_3(\mathrm{sym}\, F)$ for all $F\in\mathbb{R}^{3\times 3}$. \end{lemma} \begin{proof} Let $F\in \mathbb{R}^{3\times 3}$ and $A\in so(3)$. Since $e^{tA}\in SO(3)$, by the frame invariance of $W$ we get: \begin{equation*} \begin{split} \forall t\in\mathbb{R}\qquad W(\mbox{Id}_3 + tF) & = W\big(e^{tA}(\mbox{Id}_3 + tF)\big) = W\Big((\mbox{Id}_3 + tA + \mathcal{O}(t^2))(\mbox{Id}_3 + tF)\Big) \\ & = W\big(\mbox{Id}_3 + t(F+A) + \mathcal{O}(t^2)\big). \end{split} \end{equation*} Applying (\ref{Q3}) to both sides of the above equality, it follows that: \begin{equation*} \begin{split} t^2 |Q_3(F) - Q_3\big( (F+A) + \mathcal{O}(t)\big)| & = |Q_3(tF) - Q_3( t(F+A) + \mathcal{O}(t^2))| \\ & \leq \omega\big(t|F|\big) t^2 |F|^2 + \omega\big(t|F+A|+\mathcal{O}(t^2)\big) t^2 ||F+A| + \mathcal{O}(t)|^2.
\end{split} \end{equation*} Dividing both sides by $t^2$ and passing to the limit $t\to 0$ implies that $Q_3(F+A) = Q_3(F)$, where we also used the fact that $\omega$ converges to zero at $0$. Consequently: $$\forall F\in\mathbb{R}^{3\times 3}\qquad Q_3(F) = Q_3(\mathrm{sym} F).$$ It remains now to prove that $Q_3$ is strictly positive definite on symmetric matrices. Let $F\in\mathbb{R}^{3\times 3}_{sym}$. Then, for every $t$ small enough, $\mbox{dist}(\mbox{Id}_3 + tF, SO(3)) = |(\mbox{Id}_3 + tF) -\mbox{Id}_3| = |tF|$. It now follows that: \begin{equation*} \begin{split} Q_3(F) & = \frac{1}{t^2} Q_3(tF) \geq \frac{1}{t^2} \Big( W(\mbox{Id}_3 + tF) - \omega\big(t|F|\big) t^2 |F|^2\Big) \\ & \geq \frac{1}{t^2} \Big( c ~\mbox{dist}^2(\mbox{Id}_3 + tF, SO(3)) - \omega\big(t|F|\big) t^2 |F|^2 \Big) \geq \frac{c}{2} |F|^2, \end{split} \end{equation*} where again we used (\ref{Q3}) and (\ref{frame}). \end{proof} \medskip We are now ready to conclude the proof of Theorem \ref{compactness}. Recalling (\ref{Q3}), we obtain: \begin{equation*} \begin{split} \frac{1}{h^{\gamma+2}} W \Big(\nabla u^h(x) A^h(x)^{-1} \Big)& = \frac{1}{h^{\gamma+2}} W\Big(R^h(x')^T \nabla u^h(x) A^h(x)^{-1} \Big) \\ & = \frac{1}{h^{\gamma+2}} W(\mbox{Id} + h^{\gamma/2+1} P^h(x)) = \mathcal{Q}_3(P^h(x)) + \omega(h^{\gamma/2+1} |P^h|) \mathcal{O}(|P^h(x)|^2). \end{split} \end{equation*} Consider now the sets $\mathcal{U}_h = \{x\in \Omega^1; ~~h|P^h(x', x_3)|\leq 1\}$. Clearly $\chi_{\mathcal{U}_h}$ converges to $1$ in $L^1(\Omega^1)$ as $h\to 0$, since $hP^h$ converges to $0$ pointwise a.e. by (\ref{m1}).
Remembering that $\ds \lim_{t \to 0} \omega(t) =0$, we get: \begin{equation}\label{ma6} \begin{split} \liminf_{h\to 0} \frac{1}{h^{\gamma+2}}& I^h_W(u^h) \geq \liminf_{h\to 0} \frac{1}{h^{\gamma+2}} \int_{\Omega^1}\chi_{\mathcal{U}_h}W\Big(\nabla u^h(x',hx_3) A^h(x',hx_3)^{-1}\Big)~\mbox{d}x\\ & = \liminf_{h\to 0} \left(\int_{\Omega^1}\mathcal{Q}_3(\chi_{\mathcal{U}_h}P^h) + o(1) \int_{\Omega^1}|P^h|^2\right)\\ & \geq \int_{\Omega^1}\mathcal{Q}_3\Big(\mbox{sym }P(x)\Big)~\mbox{d}x, \end{split} \end{equation} where the last inequality follows by (\ref{m1}), guaranteeing convergence to $0$ of the term $o(1)\int |P^h|^2$, and by the fact that $\chi_{\mathcal{U}_h}P^h$ converges weakly to $P$ in $L^2(\Omega^1,\mathbb{R}^{3\times 3})$ (see (\ref{ma0})), in view of the properties of $\mathcal{Q}_3$ in Lemma \ref{Q3prop}. Further, by \eqref{defQ} and (\ref{madefP1}): \begin{equation}\label{ma7} \begin{split} \int_{\Omega^1}\mathcal{Q}_3(\mbox{sym }P) & \geq \int_{\Omega^1}\mathcal{Q}_2(\mbox{sym }P_{2\times 2}(x))~\mbox{d}x\\ & = \int_{\Omega^1}\mathcal{Q}_2\Big(\mbox{sym }P_0(x')_{2\times 2} + x_3 \mbox{sym }P_1(x')_{2\times 2}\Big)~\mbox{d}x \\ & = \int_{\Omega^1}\mathcal{Q}_2(\mbox{sym }P_0(x')_{2\times 2}) + \int_{\Omega^1}x_3^2\mathcal{Q}_2(\mbox{sym }P_1(x')_{2\times 2})\\ & \ge \frac{1}{12}\int_{\Omega}\mathcal{Q}_2\Big(\mbox{sym }(\nabla De_3)_{2\times 2} - (\mbox{sym } B_g)_{2\times 2}\Big). \end{split} \end{equation} Now, in view of Theorem \ref{compactness} (ii) and (\ref{m6}), one easily sees that: $$ \Big(\nabla De_3\Big)_{2\times 2} = -\nabla^2 v,$$ since $D$ is skew-symmetric with $D_{3j}=\partial_j v$, hence $D_{j3}=-\partial_j v$ for $j=1,2$. This yields the claim in Theorem \ref{compactness} (iii), by (\ref{ma6}) and (\ref{ma7}).
\hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip \section{Recovery sequence: Proofs of Theorem \ref{limsup} and Theorem \ref{minsconverge}} Recalling (\ref{defQ}), let $c(F)\in {\mathbb R}^3$ be the unique vector so that: $${\mathcal Q}_2 (F) = {\mathcal Q}_3 \Big( F^* + \mbox{sym}(c \otimes e_3) \Big).$$ The mapping $c:{\mathbb R}^{2\times 2}_{sym} \rightarrow {\mathbb R}^3$ is well-defined and linear, by the properties of $Q_3$ in Lemma \ref{Q3prop}. Also, for all $F\in {\mathbb R}^{3\times3}$, by $l(F)$ we denote the unique vector in ${\mathbb R}^3$, linearly depending on $F$, for which: \begin{equation}\label{vecl} \mbox{sym}\big(F - (F_{2\times 2})^*\big) = \mbox{sym}\big(l(F) \otimes e_3\big). \end{equation} \medskip {\bf 1.} Let the given out-of-plane displacement $v\in\mathcal{A}_f$ be as in Theorem \ref{limsup}. The constraint \eqref{MP-constraint} can be rewritten as: $$ \ds -\frac 12 {\mathrm{curl}}^T {\mathrm{curl}} (\nabla v \otimes \nabla v) = - {\mathrm{curl}} ^T {\mathrm{curl}} ( S_g)_{2\times 2} = - {\mathrm{curl}} ^T {\mathrm{curl}} ({\mathrm{sym}}\, S_g)_{2\times 2} . $$ Recall that a matrix field $B\in L^2 (\Omega , \R_{sym}^{2\times 2})$ is in the kernel of the linear operator ${\mathrm{curl}}^T {\mathrm{curl}}$ if and only if $B= {\mathrm{sym}} \nabla w$ for some $w\in W^{1,2}(\Omega, \R^2)$. Hence, we conclude that: $$ \ds {\mathrm{sym}} \nabla w = - \frac 12 \nabla v \otimes \nabla v + {\mathrm{sym}} ( S_g)_{2\times 2}. $$ By the Sobolev embedding theorem in the two-dimensional domain $\Omega$, $v\in W^{2,2}(\Omega)$ implies that: $\nabla v \in L^q(\Omega, \R^2)$ for all $q<\infty$. Consequently: $${\mathrm{sym}} \nabla w \in W^{1,p}(\Omega,\mathbb{R}^{2\times 2}) \qquad \forall 1\le p<2.$$ Fix $1<p<2$ such that: $\gamma>2/p$ and such that $W^{1,p}(\Omega)$ embeds in $L^{8}(\Omega)$. This is possible since $\gamma>1$, so that $2/\gamma<2$ and $p$ can be chosen as close to $2$ as we wish.
Using Korn's inequality and through a possible modification of $w$ by an affine mapping, we can assume that: $$w\in W^{2,p} \cap W^{1,8}(\Omega,\mathbb{R}^2).$$ Call $\lambda= 1/p$ and observe that: \begin{equation}\label{lambda} \frac{2-\gamma} {2(p-1)} < \lambda < \frac{\gamma}{2}. \end{equation} Following \cite[Proposition 2]{FJMhier}, by partition of unity and a truncation argument, as a special case of the Lusin-type result for Sobolev functions, there exist sequences $v^h \in W^{2,\infty}(\Omega)$ and $w^h \in W^{2,\infty}(\Omega, \R^2)$ such that: \begin{equation}\label{vhapprox} \begin{split} & \lim_{h\to 0} \|v^h - v\|_{W^{2,2}(\Omega)} + \|w^h - w\|_{W^{2,p}(\Omega, \R^2)}= 0, \\ & \|v^h\|_{W^{2,\infty}(\Omega)} + \|w^h\|_{W^{2,\infty}(\Omega, \R^2)} \leq C h^{-\lambda} ,\\ & \lim_{h\to 0} h^{-2\lambda} \left|\left\{x\in \Omega; ~~ v^h(x) \neq v(x)\right\} \right|+ h^{-p\lambda} \left |\left\{x\in \Omega; ~~ w^h(x) \neq w(x)\right\} \right| =0. \end{split} \end{equation} Hence, $\Omega$ is partitioned into a disjoint union $\Omega = \mathcal{U}_h \cup O_h$, where: \begin{equation}\label{uhoh} \begin{split} &\mathcal{U}_h = \left\{x\in \Omega; ~~ v^h(x) = v(x)\right\} \cap \left\{x\in \Omega; ~~ w^h(x) = w(x)\right\},\\ & |O_h| =o(h^{p\lambda}) + o(h^{2\lambda}) = o(h^{p\lambda}). \end{split} \end{equation} We observe that the second order stretching $s(v^h, w^h)$ satisfies: $$ s(v^h, w^h) = {\mathrm{sym}} \nabla w^h + \frac 12 \nabla v^h \otimes \nabla v^h - {\mathrm{sym}} ( S_g)_{2\times 2}=0 \quad \mbox{ in } \mathcal{U}_h. $$ Now, an argument similar to that in \cite[Lemma 6.1]{lemopa1} yields: \begin{equation}\label{mainerror} \ds \|s(v^h, w^h)\|_{L^\infty(\Omega)} = o(h^{\lambda (p/2-1)}) \quad \mbox{and} \quad \|s(v^h, w^h)\|^2_{L^2(\Omega)} = o(h^{2\lambda(p-1)}).
\end{equation} \medskip {\bf 2.} Define the recovery sequence: \begin{equation}\label{recoveryseq} \begin{split} \forall (x', x_3)\in\Omega^h\qquad u^h (x', x_3) = & \left[\begin{array}{c}x'\\0 \end{array}\right] + \left[\begin{array}{c}h^\gamma w^h(x')\\ h^{\gamma/2}v^h(x')\end{array}\right] + x_3 \left[\begin{array}{c}-h^{\gamma/2}\nabla v^h(x')\\1\end{array}\right] \\ & + h^\gamma x_3 d^{0,h}(x') + \frac{1}{2}h^{\gamma/2} x_3^2 d^{1,h}(x), \end{split} \end{equation} where the Lipschitz continuous field $d^{0,h}\in W^{1,\infty}(\Omega,\mathbb{R}^3)$ is given by: $$ d^{0,h} = l( S_g) - \frac 12 |\nabla v^h|^2 e_3 + c \Big(\mbox{sym} \nabla w^h + \frac 12 \nabla v^h \otimes \nabla v^h - (\mbox{sym } S_g)_{2\times2} \Big), $$ while the smooth fields $d^{1,h}$ obey: \begin{equation}\label{nd01h} \lim_{h\to 0} \sqrt{h} \|d^{1,h}\|_{W^{1,\infty}(\Omega)} = 0, \end{equation} \begin{equation}\label{d01} \lim_{h\to 0} d^{1,h} = l( B_g) + c\Big(-\nabla^2 v - (\mbox{sym } B_g)_{2\times 2}\Big) \quad \mbox{ in } L^2(\Omega). \end{equation} The convergence statements in (i), (ii) of Theorem \ref{limsup} are now verified by a straightforward calculation. In order to establish (iii), we will estimate the energy of the sequence $u^h$ in (\ref{recoveryseq}). Calculating the deformation gradient, we first obtain: $$\nabla u^h = \mbox{Id} + h^{\gamma} (\nabla w^h)^* + h^{\gamma/2} D^h-h^{\gamma/2}x_3 (\nabla^2 v^h)^* + h^\gamma \left[\begin{array}{cc} x_3 \nabla d^{0,h} & d^{0,h} \end{array} \right] + h^{\gamma/2} \left[\begin{array}{cc} \frac{1}{2} x_3^2 \nabla d^{1,h} & x_3 d^{1,h} \end{array} \right], $$ where the skew-symmetric matrix field $D^h$ is given as: $$ D^h = \left [\begin{array}{cc} 0 & - (\nabla v^h)^T\\ \nabla v^h & 0 \end{array}\right]. $$ Recall that: $(A^h)^{-1} = \mbox{Id}- h^{\gamma} S_g - h^{\gamma/2} x_3 B_g + \mathcal{O}(h^{2\gamma})$.
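The last expansion is the Neumann series: assuming, as implied by (\ref{ahform-new}) and the quantities in (\ref{Dh2}), that $A^h = \mathrm{Id}_3 + h^{\gamma}S_g + h^{\gamma/2}x_3 B_g$ (our reading of the growth tensor, included as a supplementary check), write $E^h = h^{\gamma}S_g + h^{\gamma/2}x_3 B_g$, so that:

```latex
(A^h)^{-1} = (\mathrm{Id}_3 + E^h)^{-1}
           = \mathrm{Id}_3 - E^h + (E^h)^2 - \ldots
           = \mathrm{Id}_3 - h^{\gamma} S_g - h^{\gamma/2} x_3 B_g + \mathcal{O}(h^{2\gamma}),
```

where, in view of $|x_3|\le h/2$, the quadratic term contributes $(E^h)^2 = \mathcal{O}(h^{2\gamma}) + \mathcal{O}(h^{3\gamma/2+1}) + \mathcal{O}(h^{\gamma+2}) = \mathcal{O}(h^{2\gamma})$ for $0<\gamma<2$.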
We hence obtain: \begin{equation*} (\nabla u^h) (A^h)^{-1} = \mbox{Id} + F^h \end{equation*} where, using $\lambda<\gamma/2< 1$: \begin{equation}\label{fh} \begin{split} F^h & = h^{\gamma} ((\nabla w^h)^*- S_g ) + h^{\gamma/2} D^h-h^{\gamma/2}x_3 ((\nabla^2 v^h)^* + B_g ) + h^\gamma \left[\begin{array}{cc} x_3 \nabla d^{0,h} & d^{0,h} \end{array} \right] \\ & \qquad + h^{\gamma/2} \left[\begin{array}{cc} \frac{1}{2} x_3^2 \nabla d^{1,h} & x_3 d^{1,h} \end{array} \right] - h^{\gamma} S_g - h^{\gamma/2} x_3 B_g \\ & \qquad + \mathcal{O}(h^{2\gamma})( |\nabla w^h| + |d^{0,h}|)+ \mathcal{O} ({h^{3\gamma/2}}) |D^h| + \mathcal{O}(h^{1+ \gamma}) \\ & = o(1). \end{split} \end{equation} Hence: \begin{equation}\label{bigfoot} (A^h)^{-1,T} (\nabla u^h )^T (\nabla u^h) (A^h)^{-1}= \mbox{Id}_3 + 2{\mathrm{sym}} \, F^h + (F^h)^T F^h = \mbox{Id} + K^h + q^h, \end{equation} where: \begin{equation*} K^h = 2h^\gamma {\mathrm{sym}} \Big((\nabla w^h)^* - \frac 12 (D^h)^2 - S_g+ d^{0,h} \otimes e_3\Big) + 2h^{\gamma/2}x_3~ {\mathrm{sym}} \Big(-(\nabla^2 v^h)^* - B_g + d^{1,h}\otimes e_3\Big), \end{equation*} and: \begin{equation*} \begin{split} q^h & = \mathcal{O}(h^{2\gamma}) \big( |\nabla w^h| + |\nabla w^h|^2 |d^{0,h}|\big)+ \mathcal{O} ({h^{3\gamma/2}}) |D^h| \big(1+ |\nabla w^h|+|D^h| +|d^{0,h}|\big) \\ & \qquad + \mathcal{O}(h^{1+ \gamma-\lambda}) \big(1+ |\nabla w^h|^2 + |D^h|^2 + |d^{0,h}|^2\big) + \mathcal {O}(h^{(\gamma+3) /2}) \\ & =o(1). \end{split} \end{equation*} Note that $(D^h)^2 =- (\nabla v^h \otimes \nabla v^h)^* - |\nabla v^h|^2 (e_3 \otimes e_3)$. Therefore: \begin{equation*} \begin{split} \mbox{sym} & \left((\nabla w^h)^* - \frac 12 (D^h)^2 - S_g+ d^{0,h} \otimes e_3\right)\\ & = \left(\mbox{sym}\nabla w^h + \frac 12 \nabla v^h \otimes\nabla v^h - (\mbox{sym } S_g)_{2\times2}\right)^* + \mbox{sym}\left(\big(d^{0,h} - l({ S_g}) + \frac 12 |\nabla v^h|^2 e_3\big) \otimes e_3\right) \\ & = s(v^h, w^h)^* + {\mathrm{sym}} \Big(c(s(v^h, w^h)) \otimes e_3\Big). 
\end{split} \end{equation*} Call: \begin{equation*} \begin{split} b(v^h) & = \mbox{sym}\left(-(\nabla^2 v^h)^* - B_g + d^{1,h}\otimes e_3 \right) \\ & = \left(-\nabla^2 v^h - (\mbox{sym } B_g)_{2\times 2}\right)^* + \mbox{sym}\left((d^{1,h} - l({ B_g}))\otimes e_3 \right). \end{split} \end{equation*} We therefore obtain: $$ \ds K^h = 2h^{\gamma/2} x_3 b(v^h) + \mathcal{O}(h^{\gamma}) |s(v^h, w^h)| = o(1). $$ Note also that: \begin{equation}\label{limitbending} \lim_{h\to 0} b(v^h) = \left(-\nabla^2 v - (\mbox{sym } B_g)_{2\times 2}\right)^* + {\mathrm{sym}}\, \Big ( c \left(-\nabla^2 v - (\mbox{sym } B_g)_{2\times 2} \right ) \otimes e_3 \Big ) \quad \mbox{ in } L^2(\Omega). \end{equation} \medskip {\bf 3.} We now observe the following convergence rates: \begin{lemma}\label{errors} We have: \begin{itemize} \item[(i)] $ \ds h^{-1} \|q^h\|^2_{L^2(\mathcal{U}_h\times (-\frac{h}{2}, \frac{h}{2}))}= o(h^{\gamma+2})$, \item[(ii)] $\ds h^{-1}\| |q^h||K^h|\|_{L^1(\mathcal{U}_h \times (-\frac{h}{2}, \frac{h}{2}))} = o(h^{\gamma+2})$. \end{itemize} \end{lemma} \begin{proof} Recall that $v^h$ and $w^h$ are uniformly bounded in $W^{1,8}(\Omega)$. To prove (i) observe that: \begin{equation*} \ds \frac{1}{h} \|q^h\|^2_{L^2(\mathcal{U}_h\times (-\frac{h}{2}, \frac{h}{2}))} \leq \|C^h\|_{L^1(\Omega)} \mathcal{O} (h^{4\gamma} + h^{ 3\gamma} + h^{2(1+\gamma-\lambda)} + h^{\gamma+3}) = o(h^{\gamma+2}), \end{equation*} where we collected all the terms involving $|D^h|, |\nabla w^h|$ and $|d^{0,h}| \le C(1+|\nabla w^h|+|D^h|^2)$ in the quantity $C^h$, which can be shown to be uniformly bounded in $L^1(\Omega)$. 
To see (ii), we estimate: \begin{equation*} \begin{split} \frac{1}{h} \| |q^h||K^h|\|_{L^1(\mathcal{U}_h\times (-\frac{h}{2}, \frac{h}{2}))} & \le h^{-1/2} \|q^h\|_{L^2} \Big (h^{(\gamma+2)/2} \|b(v^h)\|_{L^2(\Omega)} + h^\gamma \|s(v^h, w^h)\|_{L^2(\Omega)}\Big) \\ & = o(h^{(\gamma+2)/2}) \Big [ h^{(\gamma+2)/2} + o(h^{\gamma+ \lambda (p-1)}) \Big] \\ & = o(h^{\gamma+2}) + o(h^{3\gamma/2 + \lambda p - \lambda +1}) = o(h^{\gamma+2}), \end{split} \end{equation*} where we used (i), \eqref{mainerror} and \eqref{limitbending}. \end{proof} Now we observe that, since $F^h= o(1)$ in (\ref{fh}), the matrix field $\mbox{Id}_3 + F^h$ is uniformly close to $SO(3)$ for appropriately small $h$, and hence it has a positive determinant. By (\ref{bigfoot}) and in view of the polar decomposition theorem, there exists an $SO(3)$ valued field $R^h:\Omega^h\to\mathbb{R}^{3\times 3}$ such that: $$ \mbox{Id}_3 + F^h =R^h \sqrt{\mbox{Id} + K^h + q^h} \quad \mbox{ in } \Omega^h. $$ We hence obtain, by Taylor expanding the square root operator around $\mbox{Id}_3$, and using \eqref{frame} : \begin{equation*} W \Big(\nabla u^h (A^h)^{-1} \Big ) = W \Big( R^h( \sqrt{\mbox{Id}_3 + K^h + q^h}) \Big ) = W \Big ( \mbox{Id}_3 + \frac12 (K^h + q^h) + \mathcal{O}(|K^h + q^h|^2) \Big ). \end{equation*} Recalling (\ref{Q3}), we hence obtain: \begin{equation*}\label{bigexpansion} \begin{split} W \Big(\nabla u^h (A^h)^{-1} \Big ) & \leq \mathcal{Q}_3 \left(\frac{1}{2} (K^h + q^h) + \mathcal{O}(|K^h + q^h|^2)\right) \\ & \qquad + \omega\Big (|K^h+ q^h| + \mathcal{O}(|K^h + q^h|^2) \Big ) \Big ||K^h+ q^h|+ \mathcal{O}(|K^h + q^h|^2)\Big |^2 \\ & \leq \mathcal{Q}_3 \left(\frac{1}{2} K^h\right) + \mathcal{O}\left(|K^h||q^h| + |q^h|^2\right) + o(1) |K^h|^2, \end{split} \end{equation*} where we used the fact that $|K^h| + |q^h| = o(1)$ and $\omega(t) \to 0$ as $t\to 0$. 
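The Taylor expansion of the matrix square root used above, $\sqrt{\mathrm{Id}_3+K} = \mathrm{Id}_3 + \frac12 K + \mathcal{O}(|K|^2)$, can also be checked numerically; a minimal sketch (our illustration, with an arbitrary small symmetric perturbation $K$):

```python
import numpy as np
from scipy.linalg import sqrtm

# Fixed symmetric perturbation; the expansion error should be O(|K|^2).
K0 = np.array([[0.2, 0.1, 0.0],
               [0.1, -0.1, 0.3],
               [0.0, 0.3, 0.2]])

def expansion_error(K):
    # || sqrt(Id + K) - (Id + K/2) ||, measuring the remainder of the expansion.
    return np.linalg.norm(sqrtm(np.eye(3) + K) - (np.eye(3) + K / 2))

e1 = expansion_error(0.1 * K0)
e2 = expansion_error(0.01 * K0)

# Quadratic scaling: shrinking K by a factor 10 shrinks the error by ~100.
assert e2 < e1 / 50
```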
We now estimate the energy $I^h_W$ using the above inequality and Lemma \ref{errors}: \begin{equation*} \begin{split} I^h_W(u^h) & = \frac 1h \int_{\Omega^h} W(\nabla u^h (A^h)^{-1}) = \frac{1}{h} \int_{\Omega^h} \mathcal{Q}_3 \left(\frac{1}{2} K^h\right) + \mathcal{O}\left(|K^h||q^h| + |q^h|^2\right)+ o(1) |K^h|^2 ~\mbox{d}x \\ & \leq \frac{1}{h} \int_{\Omega^h} \mathcal{Q}_3\Big(h^{\gamma/2} x_3 b(v^h) + \mathcal{O}(h^{\gamma}) |s(v^h, w^h)|\Big)~ \mbox{d}x + o(h^{\gamma+2}). \end{split} \end{equation*} Integrating in the $x_3$ direction and applying the estimate \eqref{mainerror} finally yields: \begin{equation*} \begin{split} I^h_W(u^h) & \le \frac{1}{12} \int_{\Omega} h^{\gamma+2} \mathcal{Q}_3 \Big(b(v^h) \Big)~\mbox{d}x + \mathcal{O}(h^{2\gamma}) \|s(v^h, w^h)\|^2_{L^2(\Omega)} + o(h^{\gamma+2}) \\ & = \frac{1}{12} h^{\gamma+2}\int_{\Omega} \mathcal{Q}_3 \Big(b(v^h)\Big)~\mbox{d}x + o(h^{2\lambda(p-1)+ 2\gamma}) + o(h^{\gamma+2}) \\ & = \frac{1}{12} h^{\gamma+2}\int_{\Omega} \mathcal{Q}_3 \Big(b(v^h) \Big)~\mbox{d}x + o(h^{\gamma+2}), \end{split} \end{equation*} since by the choice of $\lambda$ in \eqref{lambda}, we have $2\lambda(p-1)+ 2\gamma > \gamma+2$. In view of \eqref{limitbending} it follows that: \begin{equation}\label{finalestimate} \limsup_{h\to 0} \frac{1}{h^{\gamma+2}} I^h_W(u^h) \le \mathcal{I}_f (v), \end{equation} which, combined with Theorem \ref{compactness}, proves the desired limit (iii) in Theorem \ref{limsup}.
\hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip \bigskip Theorem \ref{minsconverge} follows now from the next result: \begin{lemma}\label{GCM} When $\Omega$ is simply connected, the following are equivalent: \begin{itemize} \item[(i)] There exists $v\in W^{2,2}(\Omega)$ such that $\det (\nabla^2 v) = -{\mathrm{curl}} ^T {\mathrm{curl}} ( S_g)_{2\times 2}$ and $\mathcal{I}_f(v) = 0$, \item[(ii)] $\mathrm{curl }\big((\mathrm{sym }~ B_g)_{2\times 2}\big) = 0$ and $\displaystyle\mathrm{curl}^T\mathrm{curl}~ ( S_g)_{2\times 2}= - \mathrm{det}\big((\mathrm{sym }~ B_g)_{2\times 2}\big).$ \end{itemize} The two equations in (ii) are the linearized Gauss-Codazzi-Mainardi equations corresponding to the metric $\mathrm{Id} + 2 h^{\gamma} (\mathrm{sym}~ S_g)_{2\times 2}$ and the shape operator $h^{\gamma/2}(\mathrm{sym }~ B_g)_{2\times 2}$ on the mid-plate $\Omega$. \end{lemma} \begin{proof} The proof is straightforward and equivalent to that of \cite[Lemma 6.1]{LMP-prs1}. \end{proof} \begin{remark} Another construction of the recovery sequence, following the general approach of \cite{FJMgeo}, will appear in \cite{POthesis}. We briefly present this argument for the simplified case when $ B_g = 0$. Define $u^h$ as in (\ref{recoveryseq}), where instead of (\ref{nd01h}) and (\ref{d01}) we require the following of the Lipschitz warping coefficients $d^{0,h}$ and $d^{1,h}$: \begin{equation}\label{new1} \begin{split} & d^{0,h} = l( S_g) - \frac{1}{2}|\nabla v^h|^2 e_3,\\ & \lim_{h\to 0}\|d^{1,h} - c(-\nabla^2 v)\|_{L^2(\Omega)} = 0, \qquad \lim_{h\to 0} h^{\gamma/2} \|d^{1,h}\|_{W^{1,\infty}(\Omega)} = 0. \end{split} \end{equation} The truncation sequences $v^h\in W^{2,\infty}(\Omega)$ and $w^h\in W^{1,\infty}(\Omega,\mathbb{R}^2)$ should satisfy the conditions below. Define the truncation scale and the truncation exponent: $$ \lambda= 1+\frac{\gamma}{2}, \qquad q = \frac{2+\gamma}{\gamma-1} > 4,$$ so that $w\in W^{1,q}(\Omega,\mathbb{R}^2)$. 
Then, given an appropriately small constant $ \epsilon_0>0$, the result in \cite[Proposition 2]{FJMhier} allows us to obtain: \begin{equation}\label{new} \begin{split} & \lim_{h\to 0} \|v^h - v\|_{W^{2,2}(\Omega)} + \|w^h - w\|_{W^{1,q}(\Omega, \R^2)}= 0, \\ & \|v^h\|_{W^{2,\infty}(\Omega)}\leq \epsilon_0 h^{-\lambda}, \qquad \|w^h\|_{W^{1,\infty}(\Omega, \R^2)} \leq \epsilon_0 h^{-2\lambda/q},\\ & \lim_{h\to 0} h^{-2\lambda} \left|\left\{x\in \Omega; ~~ v^h(x) \neq v(x)\right\} \right|+ h^{-2\lambda} \left |\left\{x\in \Omega; ~~ w^h(x) \neq w(x)\right\} \right| =0, \end{split} \end{equation} where the implicit constants above depend only on $\Omega$ and $\gamma$, but are independent of $h$ and $ \epsilon_0$. The main new observation follows now from the Brezis-Wainger inequality \cite[Theorem 2.9.4]{ziemer}, applied to the sequence $\nabla v^h\in W^{1,4}$, uniformly bounded in ${W^{1,2}}$, which yields: \begin{equation}\label{BW} \|\nabla v^h\|_{L^\infty} \leq C\Big(1+\log^{1/2}\big(1+ \|\nabla v^h\|_{W^{1,4}}\big)\Big)\leq C\Big(1+\log^{1/2}\big(1+ S_0h^{-\lambda}\big)\Big) \leq C\log(1/h) \end{equation} for all $h$ sufficiently small. In particular: $\|\nabla v^h\|_{L^\infty} \leq C h^{-\gamma/4}$ and as a result, we obtain the following bounds: $$\|D^h\|_{L^\infty}\leq C h^{-\gamma/4}, \quad \|d^{0,h}\|_{L^\infty} \leq C(1+h^{-\gamma/2}),\quad \|\nabla d^{0,h}\|_{L^\infty} \leq C(1+ h^{-\lambda-\gamma/4}),$$ which together with (\ref{new1}), (\ref{new}) give: $$\|\nabla u^h - \mbox{Id}_3\|_{L^\infty}\leq C \epsilon_0.$$ Consequently: $$\mbox{dist}(\nabla u^h (A^h)^{-1}, SO(3)) \leq \|\nabla u^h (A^h)^{-1} - \mbox{Id}_3\|_{L^\infty} \leq \|\nabla u^h - \mbox{Id}_3\|_{L^\infty} + \|\nabla u^h \big((A^h)^{-1} - \mbox{Id}_3\big)\|_{L^\infty}\leq C \epsilon_0,$$ for all $h$ sufficiently small. Let the sets $\mathcal{U}_h$, $O_h$ be as in (\ref{uhoh}).
Then, in view of the boundedness of $W$ close to $SO(3)$ and (\ref{new}) we have: $$\frac{1}{h^{2+\gamma}}\frac{1}{h}\int_{O_h\times (-\frac{h}{2}, \frac{h}{2})} W(\nabla u^h (A^h)^{-1})~\mbox{d}x\leq \frac{C}{h^{2+\gamma}} |O_h| = C h^{-2\lambda} |O_h| \to 0 \quad \mbox{ as } h\to 0,$$ while on the ``good set'' $\mathcal{U}_h$, the estimates follow using the fact that $v^h =v$ and $w^h=w$, as in \cite{FJMgeo}. \end{remark} \section{The matching property and an efficient recovery sequence: A proof of Theorem \ref{matching} and Theorem \ref{limsup2}} {\bf 1.} We decompose the unknown vector field $w_\epsilon$ into its tangential and normal components: $$w_\epsilon = w_{\epsilon,tan} + w_\epsilon^3e_3,$$ where $w_{\epsilon,tan}\in\mathcal{C}^{2,\beta}(\bar\Omega,\mathbb{R}^2)$. Denoting: $z_\epsilon = \epsilon w_\epsilon^3\in\mathcal{C}^{2,\beta}(\bar\Omega,\mathbb{R})$, the equation (\ref{metric}) is equivalent to: \begin{equation}\label{metric3} \nabla(\mbox{id}_2 + \epsilon^2w_{\epsilon,tan})^T \nabla(\mbox{id}_2 + \epsilon^2w_{\epsilon,tan}) = \mbox{Id}_2 + 2\epsilon^2(\mbox{sym }S_g)_{2\times 2} - \epsilon^2 (\nabla v + \nabla z_\epsilon)\otimes (\nabla v + \nabla z_\epsilon) + \epsilon^3 s_\epsilon. \end{equation} We shall first find the formula for the Gaussian curvature of the 2d metric in the right hand side of (\ref{metric3}), where we denote $v_1=v+z_\epsilon$, and: \begin{equation}\label{8.4} g_\epsilon(z_\epsilon) = \mbox{Id}_2 + 2\epsilon^2(\mbox{sym }S_g)_{2\times 2} - \epsilon^2 \nabla v_1 \otimes \nabla v_1 + \epsilon^3 s_\epsilon. \end{equation} Call $P_\epsilon = [P_{ij}]_{i,j=1,2} = \mbox{Id}_2 + 2\epsilon^2(\mbox{sym } S_g)_{2\times 2} + \epsilon^3 s_\epsilon$.
The Christoffel symbols, the inverse and the determinant of $P_\epsilon$ satisfy: \begin{equation}\label{chri} \begin{split} & \Gamma_{ij}^k = \frac{1}{2}P^{kl}\left(\partial_jP_{il} + \partial_i P_{jl} - \partial_l P_{ij}\right) = \mathcal{O}(\epsilon^2)\\ & (P_\epsilon)^{-1} = [P^{ij}] = \frac{1}{\mathrm{det} P_\epsilon} \mathrm{cof}[P_{ij}] = \mbox{Id}_2 +\mathcal{O}(\epsilon^2)\\ & \det P_\epsilon = 1+\mathcal{O}(\epsilon^2). \end{split} \end{equation} By \cite[Lemma 2.1.2]{hanhong}, we have: \begin{equation}\label{cur} \kappa\big(P_\epsilon - \epsilon^2\nabla v_1\otimes\nabla v_1\big) = \frac{\kappa(P_\epsilon)}{\big(1-\epsilon^2(P^{ij}\partial_iv_1 \partial_jv_1)\big)^2} - \frac{\epsilon^2 \mathrm{det}(\nabla^2 v_1 - [\Gamma_{ij}^k\partial_kv_1]_{ij})}{\big(1-\epsilon^2(P^{ij}\partial_iv_1 \partial_jv_1)\big)^4\mathrm{det}P_\epsilon}. \end{equation} In fact, the formula above is obtained, by a direct calculation, for $v_1$ smooth. When $v_1\in \mathcal{C}^{2,\beta}$, one approximates $v_1$ by a sequence of smooth functions $v_1^n$, and notes that each $\kappa_n = \kappa\big(P_\epsilon - \epsilon^2(\nabla v_1^n\otimes \nabla v_1^n)\big)$ is given by (\ref{cur}), while the sequence $\kappa_n$ converges in $\mathcal{C}^{0,\beta}$ to the right hand side in (\ref{cur}). Since $\kappa_n$ converges in the sense of distributions to $\kappa(P_\epsilon -\epsilon^2(\nabla v_1\otimes \nabla v_1))$, as follows from the definition of the Gauss curvature $\kappa = {R_{1212}}/{\mbox{det} g_\epsilon}$, (\ref{cur}) holds for $v_1\in \mathcal{C}^{2,\beta}$ as well. \medskip {\bf 2.} We now see that $ \kappa(g_\epsilon(z_\epsilon)) = 0$ if and only if $\Phi(\epsilon, z_\epsilon) = 0$, where: \begin{equation*} \begin{split} \Phi(\epsilon,z) = & \big(1-\epsilon^2P^{ij}\partial_i(v + z) \partial_j(v+z)\big)^2 \big(\det P_\epsilon \big) \frac{1}{\epsilon^2} \kappa(P_\epsilon) - \mbox{det}\big(\nabla^2 v +\nabla^2 z - [\Gamma_{ij}^k\partial_k (v+z)]_{ij}\big).
\end{split} \end{equation*} Consider $\Phi:(-\epsilon_0,\epsilon_0) \times \mathcal{C}^{2,\beta}_0(\bar\Omega,\mathbb{R}) \rightarrow \mathcal{C}^{0,\beta}(\bar\Omega,\mathbb{R}) $ and look for $z_\epsilon \in \mathcal{C}^{2,\beta}_0(\bar\Omega,\mathbb{R})$ satisfying $\Phi(\epsilon, z_\epsilon)=0$. By using (\ref{curv}) to approximate $\kappa(P_\epsilon)$ and recalling (\ref{chri}), we get: \begin{equation*} \begin{split} \Phi(\epsilon,z) = & - \big(1+\mathcal{O}(\epsilon^2)|\nabla v + \nabla z|^2\big)^2 (1+\mathcal{O}(\epsilon^2)) \big(\mbox{curl}^T\mbox{curl}(S_g)_{2\times 2} + \mathcal{O}(\epsilon^2)\big)\\ &\qquad\qquad \qquad\qquad \qquad\qquad - \mbox{det}\big(\nabla^2 v +\nabla^2 z +\mathcal{O}(\epsilon^2)|\nabla v + \nabla z|\big). \end{split} \end{equation*} It easily follows that: $\Phi(0,0) = - \mbox{curl}^T\mbox{curl}(S_g)_{2\times 2} -\det\nabla^2 v =0$, and that the partial derivative $\mathcal{L} = \partial \Phi/\partial z (0,0) : \mathcal{C}_0^{2,\beta}(\bar\Omega,\mathbb{R}) \rightarrow \mathcal{C}^{0,\beta}(\bar\Omega,\mathbb{R})$ is a continuous linear operator of the form: \begin{equation*} \begin{split} \forall z\in\mathcal{C}_0^{2,\beta} \qquad \mathcal{L}(z) &= \lim_{\epsilon\to 0} \frac{1}{\epsilon} \Phi(0,\epsilon z) = - \lim \frac{1}{\epsilon} \big(\det(\nabla^2v + \epsilon\nabla^2z) - \det\nabla^2v\big) = -\mbox{cof} \nabla^2v:\nabla^2 z. \end{split} \end{equation*} Clearly, $\mathcal{L}$ above is invertible with a continuous inverse, because of the uniform ellipticity of $\nabla^2v$, implied by $\det\nabla^2v $ being strictly positive. Hence, by the implicit function theorem, there exists a solution operator: $\mathcal{Z}:(-\epsilon_0, \epsilon_0)\rightarrow \mathcal{C}_0^{2,\beta}(\bar\Omega,\mathbb{R})$ such that $z_\epsilon = \mathcal{Z}(\epsilon)$ satisfies $\Phi(\epsilon, z_\epsilon)=0$.
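The key linearization above, $\frac{\mathrm{d}}{\mathrm{d}t}\det(A+tB)\big|_{t=0} = \mathrm{cof}\, A : B$, can be sanity-checked numerically. The following sketch (our illustration, not part of the original argument; all function names are ours) compares a finite difference quotient of $t\mapsto\det(A+tB)$ with $\mathrm{cof}\, A : B$ for random symmetric $2\times 2$ matrices, mirroring the computation of $\mathcal{L}(z) = -\mathrm{cof}\,\nabla^2 v:\nabla^2 z$.

```python
import numpy as np

def cof2(A):
    # Cofactor matrix of a 2x2 matrix A: cof(A) = [[a22, -a21], [-a12, a11]],
    # so that cof(A) : B is the first variation of the determinant at A.
    return np.array([[A[1, 1], -A[1, 0]], [-A[0, 1], A[0, 0]]])

rng = np.random.default_rng(0)
for _ in range(100):
    # Random symmetric 2x2 matrices, playing the roles of the Hessians of v and z.
    A = rng.standard_normal((2, 2)); A = A + A.T
    B = rng.standard_normal((2, 2)); B = B + B.T
    # Exact 2x2 expansion: det(A + B) = det A + cof A : B + det B.
    expansion = np.linalg.det(A) + np.sum(cof2(A) * B) + np.linalg.det(B)
    assert abs(np.linalg.det(A + B) - expansion) < 1e-8
    # Finite difference quotient of t -> det(A + tB) at t = 0 versus cof A : B.
    t = 1e-6
    fd = (np.linalg.det(A + t * B) - np.linalg.det(A)) / t
    assert abs(fd - np.sum(cof2(A) * B)) < 1e-3
```

Since the $2\times 2$ determinant is quadratic, the expansion in the first assertion is exact, with no higher-order remainder.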
Moreover: $$\mathcal{Z}'(0) = \mathcal{L}^{-1}\circ \left(\frac{\partial \Phi}{\partial \epsilon} (0,0)\right) = 0, \quad \mbox{ because } \frac{\partial \Phi}{\partial \epsilon} (0,0) = 0. $$ Consequently, we also obtain: $ \|w_\epsilon^3\|_{\mathcal{C}^{2,\beta}} = \frac{1}{\epsilon} \|z_\epsilon\|_{\mathcal{C}^{2,\beta}} \to 0$, as $\epsilon\to 0$. \medskip {\bf 3.} By \cite{mardare} it now follows that for each small $\epsilon$ there is exactly one (up to rotations) orientation preserving isometric immersion $\phi_\epsilon\in\mathcal{C}^2(\bar\Omega, \mathbb{R}^2)$ of $g_\epsilon(z_\epsilon)$: \begin{equation}\label{isomh} \nabla\phi_\epsilon^T\nabla\phi_\epsilon = g_\epsilon(z_\epsilon) \quad \mbox{and} \quad \det\nabla\phi_\epsilon>0. \end{equation} We now sketch the argument that in fact: $\phi_\epsilon = \mbox{id} +\epsilon^2w_{\epsilon,tan}$ with some $w_{\epsilon,tan}$ uniformly bounded in $\mathcal{C}^{2,\beta}(\bar\Omega,\mathbb{R}^2)$. The proof proceeds as in \cite[Theorem 4.1]{LMP-arma}, where the reader may find many more details. Firstly, (\ref{isomh}) is equivalent to: $\nabla^2\phi_\epsilon - [\tilde \Gamma_{ij}^k \partial_k\phi_\epsilon]_{ij} = 0,$ where $\tilde\Gamma_{ij}^k$ are the Christoffel symbols of the metric $g_\epsilon(z_\epsilon)$ in (\ref{8.4}). By (\ref{isomh}) and the boundedness of $\tilde\Gamma_{ij}^k$, it follows that: $\|\phi_\epsilon\|_{\mathcal{C}^{2,\beta}(\bar\Omega,\mathbb{R}^2)} \leq C.$ Further, $\|\tilde \Gamma_{ij}^k\|_{\mathcal{C}^{0,\beta}} = \mathcal{O}(\epsilon^2)$ and so: \begin{equation}\label{cc} \exists A_\epsilon\in\mathbb{R}^{2\times 2} \qquad \|\nabla\phi_\epsilon - A_\epsilon\|_{\mathcal{C}^{1,\beta}}\leq C\epsilon^2. 
\end{equation} In fact, $\mbox{dist}(A_\epsilon, SO(2))\leq C\epsilon^2$, so without loss of generality: $\|\nabla\phi_\epsilon - \mbox{Id}_2\|_{\mathcal{C}^{1,\beta}}\leq C\epsilon^2$ and therefore: $ \|\phi_\epsilon - \mbox{id}\|_{\mathcal{C}^{2,\beta}} \leq C\epsilon^2.$ Consequently, $\phi_\epsilon = \mbox{id}_2 + \epsilon^2w_{\epsilon,tan}$ with $\|w_{\epsilon, tan}\|_{\mathcal{C}^{2,\beta}} \leq C$. This ends the proof of Theorem \ref{matching}. \hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip \bigskip {\bf 4.} We now sketch the proof of Theorem \ref{limsup2}. The complete calculations are similar to \cite[Theorem 3.5]{LMP-arma} and can be found in \cite{POthesis}. We recall first a result on density of regular solutions to the elliptic 2d Monge-Amp\`ere equation: \begin{proposition}\label{thm-density}\cite[Theorem 3.2]{LMP-arma} Assume that $\Omega$ is star-shaped with respect to an interior ball $B\subset \Omega$. For a constant $c_0>0$, recall the definition: $$ {\mathcal A}_{c_0}= \big\{ u\in W^{2,2}(\Omega); ~~ \det\nabla^2 u=c_0 \mbox{ a.e. in } \Omega \big\}. $$ Then $\mathcal A_{c_0} \cap C^\infty( \bar \Omega)$ is dense in ${\mathcal A}_{c_0}$ with respect to the $W^{2,2}$ norm. \end{proposition} In view of the above, it is enough to prove Theorem \ref{limsup2} for $v\in\mathcal{C}^{2,\beta}(\bar\Omega)$ satisfying $\det\nabla^2v = c_0$. In the general case of $v\in W^{2,2}(\Omega)$ satisfying the same constraint, the result follows by a diagonal argument. By Theorem \ref{matching} used with $\epsilon=h^{\gamma/2}$ and $s_\epsilon=\epsilon (S_g^2)_{2\times 2}$, there exists an equibounded sequence $w_h\in\mathcal{C}^{2,\beta}(\bar\Omega,\mathbb{R}^3)$ such that the deformations $u_h(x') = x'+ h^{\gamma/2} v(x') e_3 + h^{\gamma} w_h(x')$ realize the metric: \begin{equation}\label{isom} \forall 0< h \ll 1 \qquad (\nabla u_h)^T\nabla u_h = \mbox{Id}_2+2h^\gamma (\mbox{sym } S_g)_{2\times 2} + h^{2\gamma} (S_g^2)_{2\times 2}.
\end{equation} Define now the recovery sequence $u^h\in \mathcal{C}^{1,\beta}(\Omega^h,\mathbb{R}^3)$ by the formula: \begin{equation}\label{expan3} u^h(x', x_3) = u_h(x') + x_3 b^h(x') + \frac{x_3^2}{2}h^{\gamma/2} \big(d^h(x') - l(B_g(x'))\big), \end{equation} where $l(B_g)$ is defined as in (\ref{vecl}), the ``Cosserat'' vector fields $ b^h:\Omega\to\mathbb{R}^3$ are given by: $$\left[\begin{array}{ccc} \partial_1 u_h & \partial_2 u_h & b^h\end{array}\right]^T \left[\begin{array}{ccc} \partial_1 u_h & \partial_2 u_h & b^h\end{array}\right] = G^h(\cdot, 0) \quad \mbox{ in } \Omega,$$ and $d^h\in\mathcal{C}^{1,\beta}(\bar\Omega,\mathbb{R}^3)$ are the ``warping'' vector fields, approximating the effective warping $d\in\mathcal{C}^{0,\beta}(\bar\Omega,\mathbb{R}^3)$: \begin{equation}\label{warp} \begin{split} h^{\gamma/2}\|d^h\|_{\mathcal{C}^{1,\beta}} &\leq C \quad \mbox{ and } \quad \lim_{h\to 0} \|d^h - d\|_{L^\infty} = 0,\\ \mathcal{Q}_2\big(\nabla^2 v + \mbox{sym} (B_g)_{2\times 2} \big) & = \mathcal{Q}_3\big((\nabla^2 v + \mbox{sym} (B_g)_{2\times 2})^* + \mbox{sym}(d\otimes e_3)\big). \end{split} \end{equation} Note that (\ref{expan3}) is consistent with (\ref{expan}) at the highest order terms in the expansion in $h$. \hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip \section{On the uniqueness of minimizers to the Monge-Amp\`ere constrained energy} In this section, we discuss the multiplicity of minimizers to the limiting problem (\ref{linpresKirchhoff}). Given a bounded, simply connected $\Omega\subset\mathbb{R}^2$ and a function $ f \in L^1(\Omega)$, we consider the functional: \begin{equation}\label{prob} \mathcal{I}(v) = \int_\Omega |\nabla^2 v|^2~\mbox{d}x' \quad \mbox{subject to the constraint: } \mathcal{A}_ f =\{v\in W^{2,2}(\Omega); ~ \det \nabla^2v= f \}. 
\end{equation} Here, we assumed that $\mathcal{Q}_2(F_{2\times 2}) = |\mbox{sym} (F_{2\times 2})|^2$ for every $F_{2\times 2}\in\mathbb{R}^{2\times 2}$, which is consistent with (\ref{defQ}) and (\ref{Q3}), when $W(F) = \frac{1}{2}\mbox{dist}^2(F, SO(3))$ for $F$ close to $SO(3)$. Indeed, expanding $\mbox{dist}^2(\mbox{Id}+\epsilon A, SO(3)) = |\sqrt{(\mbox{Id}+\epsilon A)^T (\mbox{Id}+\epsilon A)} - \mbox{Id}|^2 = \epsilon^2 |\mbox{sym } A|^2 + \mathcal{O}(\epsilon^3)$, we see that $\mathcal{Q}_3(A) = |\mbox{sym }A|^2$, which implies the form of $\mathcal{Q}_2$. This scenario corresponds to the isotropic elastic energy density with the Lam\'e coefficients $\lambda = 0$, $\mu = \frac{1}{2}$ (see \cite{FJMhier} for more details). \smallskip We now observe that the minimization problem for (\ref{prob}) may have multiple or unique solutions, depending on the choice of a smooth constraint function $f$. \begin{example}\label{ex1} (i) Let $\Omega=B(0,1)\subset\mathbb{R}^2$. Then for $f\equiv -1$ the problem (\ref{prob}) has a non-trivial one-parameter family of absolute minimizers: $\ds v_\theta (x_1,x_2) = (\cos\theta) \frac{x_1^2-x_2^2}{2} + (\sin\theta) (x_1x_2)$. Indeed, for $v\in \mathcal{A}_{f\equiv -1}$ the quantity $|\nabla^2 v|^2 = (\mbox{tr } \nabla^2v)^2 - 2 \det\nabla^2v = (\mbox{tr } \nabla^2v)^2 +2$ is minimized when $\mbox{tr} \nabla^2v=\Delta v=0$, that is readily satisfied with: $\nabla^2v_\theta = \left[\begin{array}{cc} \cos\theta & \sin\theta \\ \sin\theta & -\cos\theta\end{array}\right]$. (ii) On the other hand, for $f\equiv 1$, (\ref{prob}) has a unique minimizer: $\ds v(x_1,x_2) = \frac{x_1^2+x_2^2}{2}$. This is because for $v\in \mathcal{A}_{f\equiv 1}$ we have: $|\nabla^2 v|^2 = (\mbox{tr } \nabla^2v)^2 - 2 = (\lambda_1 + \lambda_2)^2 -2$, where $\lambda_1, \lambda_2$ are the eigenvalues of $\nabla^2v$. This quantity achieves its minimum, under the constraint $\lambda_1\lambda_2 = 1$, precisely when $\lambda_1 = \lambda_2 =1$. 
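As a purely illustrative aside (ours, with hypothetical function names, not part of the original argument), the algebra of part (i) is easy to verify numerically: the Hessian of $v_\theta$ is constant, with determinant $-1$, zero trace, and $|\nabla^2 v_\theta|^2 = (\mbox{tr }\nabla^2 v_\theta)^2 + 2 = 2$ for every $\theta$.

```python
import math

def hessian_v_theta(theta):
    # Hessian of v_theta(x) = cos(theta)*(x1^2 - x2^2)/2 + sin(theta)*x1*x2;
    # it is constant in x, so a single 2x2 matrix suffices.
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [s, -c]]

for k in range(16):
    theta = k * math.pi / 8
    H = hessian_v_theta(theta)
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    trace = H[0][0] + H[1][1]
    frob2 = sum(H[i][j] ** 2 for i in range(2) for j in range(2))
    assert abs(det + 1.0) < 1e-12    # det = -1: v_theta lies in A_{f = -1}
    assert abs(trace) < 1e-12        # trace = Laplacian of v_theta = 0 (harmonic)
    assert abs(frob2 - 2.0) < 1e-12  # |Hessian|^2 = trace^2 - 2*det = 2, the minimum
```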
\hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip \end{example} \begin{example}\label{ex2} A similar argument as in Example \ref{ex1} (i), allows for a construction of a one-parameter family of absolute minimizers $v_\theta$ to (\ref{prob}) when a smooth function $f:\bar\Omega\to\mathbb{R}$ satisfies: \begin{equation}\label{fff} f\leq c_0<0 \quad \mbox{ and } \quad \Delta (\log |f|)=0 \quad \mbox{in } \Omega. \end{equation} Indeed, define $\lambda=\sqrt{|f|}$. Clearly, the function $\lambda$ is positive, smooth and satisfies $\Delta(\log\lambda) = 0$ in $\Omega$. Hence there exists $\phi\in\mathcal{C}^\infty(\bar\Omega)$ such that the function $(\log\lambda + i\phi)$ is holomorphic in $\Omega\subset\mathbb{C}$. Trivially, for every $\theta\in\mathbb{R}$, the function $(\log\lambda + i(\phi+\theta))$ is holomorphic, as is its exponential: $$\exp(\log\lambda + i(\phi+\theta)) = \lambda\cos(\phi +\theta) + i\lambda\sin (\phi +\theta). $$ Writing the associated Cauchy-Riemann equations we note that they are precisely the vanishing of the $\mathrm{curl}$ of the symmetric matrix field in the left hand side of: \begin{equation}\label{gradi} \left[\begin{array}{cc} \lambda\cos(\phi+\theta) & -\lambda\sin(\phi+\theta) \\ -\lambda\sin(\phi+\theta) & -\lambda\cos(\phi + \theta)\end{array}\right] = \nabla^2v_\theta. \end{equation} Consequently, since $\Omega$ is simply connected, for each $\theta$ there exists a smooth $v_\theta:\bar\Omega\to\mathbb{R}$ as in (\ref{gradi}). We see that: \begin{equation}\label{gradi2} \Delta v_\theta = 0 \quad \mbox{ and } \quad \det\nabla^2 v_\theta = -\lambda^2 =-|f|=f, \end{equation} which proves the claim. For completeness, we now prove that (\ref{fff}) is in fact equivalent to the existence of some $v$ satisfying (\ref{gradi2}). Denote $\lambda=\sqrt{|f|}$ and let $r_1, r_2:\Omega\to\mathbb{R}^2$ be the (unit-length) eigenvector fields of $\nabla^2v$ corresponding to the eigenvalues $\lambda$ and $-\lambda$.
Since $\langle r_1, r_2\rangle = 0$, we may write: $[r_1, r_2] = R_\phi = \left[\begin{array}{cc} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi\end{array}\right]\in SO(2)$, for some smooth function $\phi:\Omega\to \mathbb{R}$. The fact that a continuous (single-valued) choice of the angle $\phi$ is possible follows from the simple-connectedness of $\Omega$. We obtain: \begin{equation*} \nabla^2v = R_\phi ~\mbox{diag}\{\lambda, -\lambda\} ~R_\phi^T = \left[\begin{array}{cc} \lambda\cos(2\phi) & \lambda\sin(2\phi) \\ \lambda\sin(2\phi) & -\lambda\cos(2 \phi)\end{array}\right] = \left[\begin{array}{cc} \lambda\cos(-2\phi) & -\lambda\sin(-2\phi) \\ -\lambda\sin(-2\phi) & -\lambda\cos(-2 \phi)\end{array}\right]. \end{equation*} Since the $\mathrm{curl}$ of the matrix field in the right hand side above vanishes in $\Omega$, we reason as in (\ref{gradi}) and see that the (nonzero) function $\lambda \exp(-2i\phi)$ satisfies the Cauchy-Riemann equations, and hence it is holomorphic in $\Omega\subset\mathbb{C}$. Further, its logarithm: $(\log\lambda -2 i\phi)$ is well defined and holomorphic as well. Consequently: $\Delta (\log\lambda)=0$, which concludes the proof of (\ref{fff}). \hspace*{\fill}\mbox{\ \rule{.1in}{.1in}}\medskip \end{example} \medskip In what follows, we want to derive conditions for uniqueness of minimizers to (\ref{prob}). In this context, it is useful to consider the relaxed constraint: \begin{equation*} \mathcal{A}_ f ^*=\{v\in W^{2,2}(\Omega); ~ \det \nabla^2v\geq f \}. \end{equation*} We will denote by $\mathcal{I}_ f $ and $\mathcal{I}_ f ^*$ the restrictions of $\mathcal{I}$ to $\mathcal{A}_ f $ and $\mathcal{A}_ f ^*$, respectively. Clearly: $$\inf \mathcal{I}_ f ^*\leq \inf \mathcal{I}_ f .$$ The following straightforward lemma has been observed in \cite{ho} as well: \begin{lemma}\label{exist} Assume that $\mathcal{A}_ f \neq \emptyset$ ($\mathcal{A}_ f ^*\neq \emptyset$). Then $\mathcal{I}_ f $ ($\mathcal{I}_ f ^*$) admits a minimizer.
Moreover, necessarily $ f \in L^1\log L^1(\Omega)$, namely: $$\int_{\Omega'} | f \log(2+ f )| < \infty,$$ for every subset $\Omega'$ compactly contained in $\Omega$. \end{lemma} \begin{proof} Take a minimizing sequence $v_n\in\mathcal{A}_ f $; it satisfies: $\|\nabla^2v_n\|_{L^2(\Omega)}\leq C$. Modifying $v_n$ by subtracting $\fint v_n$ and $(\fint \nabla v_n)\cdot x$, in view of the Poincar\'e inequality it follows that: $\|v_n\|_{W^{2,2}(\Omega)}\leq C$. Therefore $v_n\rightharpoonup v$ weakly in $W^{2,2}(\Omega)$ (up to a subsequence), which implies $\mathcal{I}(v)\leq\liminf \mathcal{I}(v_n)$. We hence see that $v$ is a minimizer of $\mathcal{I}_ f $ ($\mathcal{I}_ f ^*$) provided that $v$ satisfies the appropriate constraint. Since $\nabla v_n \rightharpoonup \nabla v$ weakly in $W^{1,2}(\Omega)$, the same convergence is also valid strongly in any $L^p(\Omega)$ for $p\in[1, \infty)$, and so $\nabla v_n \otimes \nabla v_n \rightarrow \nabla v\otimes \nabla v$ strongly in $L^2(\Omega)$. Applying $\mbox{curl}^T\mbox{curl}$, this yields the following convergence, in the sense of distributions: $$\det\nabla^2 v_n = -\frac{1}{2}\mbox{curl}^T\mbox{curl} (\nabla v_n \otimes \nabla v_n) \rightarrow -\frac{1}{2}\mbox{curl}^T\mbox{curl} (\nabla v \otimes \nabla v) = \det\nabla^2 v.$$ Consequently, if $v_n\in\mathcal{A}_ f $ then $v\in\mathcal{A}_ f $ as well (likewise, if $v_n\in\mathcal{A}_f^*$ then $v\in\mathcal{A}_ f ^*$). The final assertion follows from the celebrated result in \cite{muller}: If $v\in W^{1,n}(\Omega,\mathbb{R}^n)$ on $\Omega\subset\mathbb{R}^n$ satisfies $\det\nabla v\geq 0$ then $\det\nabla v\in L^1\log L^1(\Omega)$. \end{proof} \begin{lemma}\label{unique} Assume that $ f \geq c>0$ in $\Omega$. Let $v_1, v_2\in\mathcal{A}_ f ^*$ be two minimizers of $\mathcal{I}_ f ^*$. Then $\nabla^2 v_1 = \nabla^2 v_2$, i.e. $v_1-v_2$ is an affine function.
In particular, the function: $$\psi[ f ] = \det\nabla^2 (\mathrm{argmin}~\mathcal{I}_{ f }^*) = \det\nabla^2 v_1$$ is well defined and satisfies: $\psi[ f ]\geq f $ and $\psi[ f ]\in L^1\log L^1(\Omega)$. \end{lemma} \begin{proof} By \cite[Theorem 6.1]{LMP-arma}, without loss of generality (possibly replacing $v_i$ by $-v_i$) we may assume that $\nabla^2 v_1$ and $\nabla^2 v_2$ are strictly positive definite a.e. in the domain. For $\lambda\in [0,1]$, consider $v_\lambda=\lambda v_1 + (1-\lambda) v_2$. We claim that $v_\lambda\in\mathcal{A}_ f ^*$. This follows by the Brunn-Minkowski inequality: $$(\det\nabla^2 v_\lambda)^{1/2} \geq \lambda (\det\nabla^2 v_1)^{1/2} + (1-\lambda) (\det\nabla^2 v_2)^{1/2} \geq \lambda \sqrt{ f } + (1-\lambda) \sqrt{f} = \sqrt{f}.$$ Also: $\mathcal{I}(v_\lambda)\leq \lambda \mathcal{I}(v_1) + (1-\lambda)\mathcal{I}(v_2) = \min \mathcal{I}_ f ^*$, and so this inequality is in fact an equality. Since the squared $L^2$ norm is strictly convex, we conclude that $\nabla^2v_1 = \nabla^2 v_2$. \end{proof} \begin{remark} Consider the related functional $I_\Delta(v) = \int_\Omega |\Delta v|^2$, constrained to $\mathcal{A}_ f $ or $\mathcal{A}_ f ^*$, which we respectively denote by $I_{\Delta, f }$ and $I_{\Delta, f }^*$. Since $|\nabla^2 v|^2 = |\Delta v|^2 - 2\det\nabla^2 v$, any minimizing sequence $v_n$ of $I_{\Delta, f }$ or $I_{\Delta, f }^*$ satisfies $\|\nabla^2v_n\|_{L^2(\Omega)}\leq C.$ Arguing as in the proof of Lemma \ref{exist} we obtain existence of minimizers to both problems. On the other hand, there is no uniqueness as in Lemma \ref{unique}, in the sense that two minimizers of $I_{\Delta, f }^*$ may differ by a non-affine harmonic function. We now observe that if $\min \mathcal{I}_{ f } = \min \mathcal{I}_{ f }^*$, then $\min I_{\Delta, f } = \min I_{\Delta, f }^*$. Indeed, let $v_0\in \mathcal{A}_ f $ be the common minimizer of $\mathcal{I}_{ f }$ and $\mathcal{I}_ f ^*$.
Then: $$\forall v\in\mathcal{A}_ f ^*\quad I_{\Delta}(v) = \mathcal{I}(v) + 2\int_\Omega \det \nabla^2 v \geq \mathcal{I}(v_0) + 2\int_\Omega f = I_\Delta (v_0),$$ hence $v_0$ is also the common minimizer of $I_{\Delta, f }$ and $I_{\Delta, f }^*$. \end{remark} \section{On the uniqueness of minimizers: the radially symmetric case} In this section we assume that $\Omega = B(0,1)\subset\mathbb{R}^2$ and that: $$f = f (r)\geq c > 0$$ is a radial function such that $ f \in L^1(\Omega)$, i.e.: $\int_0^1 r f (r)~\mbox{d}r <\infty.$ \begin{lemma}\label{radial} If a radial function $v=v(r)\in W^{2,2}(\Omega)$ satisfies $\det\nabla^2 v = f $, then: \begin{equation*} |v'(r)|^2 = \int_0^r 2s f (s)~\mathrm{d}s. \end{equation*} In particular, there exists at most one (up to a constant) radial function $v=v_ f $ as above. \end{lemma} \begin{proof} Let $v=v(r)$ be as in the statement of the Lemma. Recall that writing $\partial_r v = v'$, the gradient of $v$ in polar coordinates has the form: $\nabla v (r,\theta)= (v'(r) \cos\theta, v'(r) \sin\theta)^T$. We now check directly that: $$\det\nabla^2 v = \frac{1}{r}v' v'' = \frac{1}{2r}\left(|v'|^2\right)'.$$ Hence, there must be: \begin{equation}\label{calc} |v'(r)|^2 = \int_0^r 2s f (s)~\mbox{d}s + C, \end{equation} for some $C \geq 0$. Since $v\in W^{2,2}(\Omega)$, we get: $\Delta v = v'' + \frac{1}{r} v'\in L^2(\Omega)$, or equivalently: $$\int_{\Omega} |v''|^2 + \frac{1}{r^2} |v'|^2 + \frac{2}{r} v' v'' <\infty.$$ Note that the last term above equals $2 f \in L^1(\Omega)$, and thus $\frac{1}{r^2}|v'|^2\in L^1(\Omega)$. By (\ref{calc}) we conclude: $$\int_0^1 \frac{2\pi C}{r} <2\pi \int_0^1\frac{1}{r}|v'(r)|^2~\mbox{d}r = \int_\Omega \frac{1}{r^2}|v'|^2 <\infty,$$ and so there must be $C=0$.
\end{proof} \begin{corollary}\label{condi} A necessary and sufficient condition for the existence of a radial function $v=v(r)\in W^{2,2}(\Omega)$ solving $\det\nabla^2 v = f $ is: \begin{equation}\label{condi2} \int_0^1 r|\log r| f (r)~\mathrm{d}r < \infty ~~\mbox{ and } ~~ \int_0^1\frac{r^3 f (r)^2}{\int_0^r s f (s)\mbox{d}s}~\mathrm{d}r < \infty. \end{equation} The solution $v_ f $ is then given by (uniquely, up to a constant): \begin{equation}\label{minimizer} v_ f (r) = \int_0^r \left(\int_0^s2t f (t)~\mathrm{d}t\right)^{1/2}~\mathrm{d}s. \end{equation} In particular, (\ref{condi2}) is satisfied when $ f \in L^2(\Omega)$, and consequently $\mathcal{A}_f\neq \emptyset$. \end{corollary} \begin{proof} By Lemma \ref{radial} it follows that the solution $v$ is given by $v_ f $ in (\ref{minimizer}). Clearly $\nabla v_ f \in \mathcal{C}^1(\bar\Omega)$, so it remains to check when $\nabla^2 v_ f \in L^2(\Omega)$. We compute: \begin{equation}\label{hes} \begin{split} \int_\Omega |\nabla^2 v_ f |^2 & = \int_\Omega |v_f''|^2 + \frac{1}{r^2} |v_f'|^2 = 2\pi \int_0^1 r|v_f''|^2 + \frac{|v_f'|^2}{r}~\mbox{d}r \\ & = 2\pi \int_0^1\frac{r^3 f (r)^2}{\int_0^r 2s f (s)\mbox{d}s}~\mbox{d}r + 2\pi \int_0^1 2r|\log r| f (r)~\mbox{d}r, \end{split} \end{equation} proving the first claim. When $ f \in L^2(\Omega)$, then $\int_0^1 r f ^2(r)~\mbox{d}r<\infty$, and so: \begin{equation*} \begin{split} &\int_0^1 r|\log r| f (r)~\mbox{d}r \leq \big(\int_0^1 r|\log r|^2 ~\mbox{d}r\big)^{1/2} \big(\int_0^1 r f ^2 ~\mbox{d}r\big)^{1/2} <\infty \\ &\int_0^1\frac{r^3 f (r)^2}{\int_0^r s f (s)\mbox{d}s}~\mbox{d}r \leq \int_0^1\frac{r^3 f (r)^2}{\int_0^r cs~\mbox{d}s} ~\mbox{d}r = \frac{2}{c}\int_0^1 r f ^2 ~\mbox{d}r< \infty, \end{split} \end{equation*} which concludes the proof. \end{proof} \begin{lemma}\label{radial2} (i) Assume that $\mathcal{A}_ f ^*\neq\emptyset$. Then the unique (up to an affine map) minimizer of $\mathcal{I}_ f ^*$ is radially symmetric, given by $v_{\psi[ f ]}$ where $\psi[ f ]$ satisfies (\ref{condi2}).
(ii) Assume that $\mathcal{I}_ f $ has a unique (up to an affine map) minimizer. Then, it is radially symmetric and hence given by $v_ f $ in (\ref{minimizer}). Also, $ f $ satisfies conditions (\ref{condi2}). \end{lemma} \begin{proof} We will prove (ii). The proof of (i) relies on Lemma \ref{exist} and Lemma \ref{unique} and the same argument as below. Let $v\in W^{2,2}(\Omega)$ be a minimizer of $\mathcal{I}_ f $, which we modify (if needed) so that: $v(0) = 0$ and $\fint\nabla v=0$. For any $\theta\in [0, 2\pi)$ let $R_\theta=\left[\begin{array}{cc} \cos\theta & -\sin\theta\\ \sin\theta & \cos \theta\end{array}\right]$ be the planar rotation by angle $\theta$. Note that $\nabla^2(v\circ R_\theta) = R_\theta^T \big( (\nabla^2 v)\circ R_\theta\big) R_\theta$, so $\det\nabla^2 (v\circ R_\theta) = (\det \nabla^2v)\circ R_\theta$. In view of the radial symmetry of $ f $, it follows that $v\circ R_\theta\in \mathcal{A}_ f $ and $\mathcal{I}(v\circ R_\theta) = \mathcal{I}(v)$. Therefore, by uniqueness, $v=v\circ R_\theta$ is radially symmetric and so the result follows from Corollary \ref{condi}. \end{proof} \begin{theorem}\label{decrease} Assume that $\mathcal{A}_ f ^*\neq\emptyset $, and that $ f $ is a.e. nonincreasing, i.e.: \begin{equation}\label{dec} \mbox{for a.e. } r\in [0,1] ~\mbox{ and a.e. } x\in [0,r]\qquad f (r)\leq f (x). \end{equation} Then both problems $\mathcal{I}_ f $ and $\mathcal{I}_ f ^*$ have a unique (up to an affine map) minimizer. The minimizer is common to both problems, necessarily radially symmetric and given by $v_ f $ in (\ref{minimizer}). \end{theorem} \begin{proof} By Lemma \ref{radial2}, the radial function $v_{\psi[ f ]}$ is the unique minimizer of $\mathcal{I}_ f ^*$. Consider $v_ f $ given by (\ref{minimizer}). We will prove that $\mathcal{I}(v_ f )\leq \mathcal{I}(v_\psi)$. This will imply that $v_ f \in W^{2,2}(\Omega)$ and hence, by uniqueness of minimizers, there must be: $v_ f = v_\psi$, as claimed in the Theorem.
Recall that $\psi\geq f $ and note that $\int_0^r2s f (s)~\mbox{d}s\geq r^2 f (r)$ in view of (\ref{dec}). As in (\ref{hes}), we compute: \begin{equation*} \begin{split} \int_\Omega |\nabla^2 v_\psi|^2 & - \int_\Omega |\nabla^2 v_ f |^2 = 2\pi \int_0^1\frac{r^3\psi(r)^2}{\int_0^r 2s\psi(s)\mbox{d}s} - \frac{r^3 f (r)^2}{\int_0^r 2s f (s)\mbox{d}s}~\mbox{d}r + 2\pi \int_0^1 \frac{\int_0^r 2s(\psi - f )\mbox{d}s}{r}~\mbox{d}r \\ & \geq - 2\pi \int_0^1\frac{r^3 f ^2 \int_0^r 2s(\psi - f )\mbox{d}s}{(\int_0^r 2s\psi(s)\mbox{d}s)(\int_0^r 2s f (s)\mbox{d}s)} \mbox{d}r + 2\pi \int_0^1 \frac{\int_0^r 2s(\psi - f )\mbox{d}s}{r}~\mbox{d}r \\ & \geq - 2\pi \int_0^1\frac{r^3 f ^2 \int_0^r 2s(\psi - f )\mbox{d}s}{(\int_0^r 2s f (s)\mbox{d}s)^2} \mbox{d}r + 2\pi \int_0^1 \frac{\int_0^r 2s(\psi - f )\mbox{d}s}{r}~\mbox{d}r \\ & \geq - 2\pi \int_0^1\frac{r^3 f ^2 \int_0^r 2s(\psi - f )\mbox{d}s}{(r^2 f (r))^2} \mbox{d}r + 2\pi \int_0^1 \frac{\int_0^r 2s(\psi - f )\mbox{d}s}{r}~\mbox{d}r = 0. \end{split} \end{equation*} The proof is now achieved in view of Corollary \ref{condi} and Lemma \ref{radial2}. \end{proof} \begin{remark} Note that $v_ f $ in general, is not a minimizer of the relaxed problem $\mathcal{I}_ f ^*$. Consider $ f _\epsilon(r) = \epsilon\chi_{(0, 1/2]} + \chi_{(1/2,1]}$. Then $v_{ f _\epsilon}\in W^{2,2}(\Omega)$ and, by (\ref{hes}): \begin{equation*} \begin{split} \int_{\Omega} |\nabla^2 v_{ f _\epsilon}|^2 & \geq 2\pi\int_0^1 r|v''(r)|^2~\mbox{d}r \geq 2\pi \int_{1/2}^1 \frac{r^3}{\frac{\epsilon}{4} + (r^2 - \frac{1}{4})}~\mbox{d}r \geq C \int_{1/2}^1 \frac{1}{r^2 - (1-\epsilon)/4}~\mbox{d}r \\ & \geq C\left(\log(1-\frac{\sqrt{1-\epsilon}}{2}) - \log(\frac{1-\sqrt{1-\epsilon}}{2}) - \log(1+\frac{\sqrt{1-\epsilon}}{2}) + \log(\frac{1+\sqrt{1-\epsilon}}{2})\right) \\ & \qquad \to \infty \qquad \mbox{ as } \epsilon\to 0. 
\end{split} \end{equation*} On the other hand $ f _\epsilon\leq \psi\equiv 1$ and we see that $\int_{\Omega} |\nabla^2 v_{\psi}|^2 = 2\pi$, where $v_\psi = \frac{1}{2}r^2$. Therefore $\mathcal{I}(v_\psi)< \mathcal{I}(v_{f_\epsilon})$ for all small $\epsilon$. A standard approximation argument leads to similar counter-examples with smooth $f$. \end{remark} \section{Critical points of the Monge-Amp\`ere constrained energy in the radial case: a proof of Theorem \ref{criptmA}} The Euler-Lagrange equations for the problem (\ref{prob}) are complicated, due to the structure of the tangent space to the constraint set $\mathcal{A}_f$, which is in general unknown. Consider instead the functional: \begin{equation*} \Lambda(v,\lambda) = \int_\Omega |\nabla^2 v|^2 + \int_\Omega \lambda (\det \nabla^2 v -f) , \qquad v\in W^{2,2}, \quad \lambda\in L^\infty. \end{equation*} The following result is to be compared with \cite{ho}, where a converse statement is proved in a limited setting: \begin{lemma} If $(v,\lambda)$ is a critical point for $\Lambda$ then $v$ is a critical point for (\ref{prob}). \end{lemma} \begin{proof} Let $w$ be a tangent vector to $\mathcal{A}_f$ at a given $v\in\mathcal{A}_f$, so that there exists a continuous curve $\phi: [0,1] \to \mathcal{A}_f$ with $\phi(0)=v$ such that $\phi'(0)=w$. Note that $\phi(\epsilon) = v+\epsilon w + o(\epsilon)\in \mathcal{A}_f$. Expanding $\det$ in the usual manner we obtain: $$f=\det\nabla^2\phi(\epsilon) = \det(\nabla^2v + \epsilon\nabla^2 w + o(\epsilon)) = \det\nabla^2v + \epsilon \mbox{cof}\nabla^2w : \nabla^2v + o(\epsilon)$$ which implies that: \begin{equation}\label{tan} \mathrm{cof}\nabla^2w : \nabla^2 v =0 \qquad \mbox{ a.e. in } \Omega. \end{equation} Now, let $(v,\lambda)$ be a critical point of $\Lambda$. Taking a variation $\mu$ in $\lambda$ we get: $\int \mu (\det\nabla^2 v-f) =0$, thus $v\in \mathcal{A}_f$.
Taking now a variation $w$ in $v$, we obtain: \begin{equation}\label{var} 2\int\nabla^2v:\nabla^2w + \int\lambda~ \mbox{cof}\nabla^2v:\nabla^2w = 0 \qquad\forall w\in W^{2,2}. \end{equation} In particular, for every $w$ satisfying (\ref{tan}), the above reduces to $\int\nabla^2v:\nabla^2w = 0$, which is the variation of the pure bending functional $\mathcal{I}$. Hence $v$ must indeed be a critical point of (\ref{prob}). \end{proof} \begin{lemma} The Euler-Lagrange equations of $\Lambda$ and the natural boundary conditions are: \begin{equation}\label{EL} \begin{split} &2\Delta^2 v +\mathrm{cof}\nabla^2v : \nabla^2 \lambda = 0 \qquad \mbox{in }~\Omega,\\ &\mathrm{det}\nabla^2v = f \qquad \mbox{in }~\Omega, \end{split} \end{equation} \begin{equation}\label{bdary} \begin{split} &\partial_\tau \left[\Big(2\nabla^2v + \lambda \mathrm{cof}\nabla^2v\Big): (\tau\otimes \vec n)\right] + \Big(2\nabla\Delta v + (\mathrm{cof}\nabla^2v )\nabla\lambda\Big)\vec n = 0 \qquad \mbox{on }~ \partial\Omega, \\ & \Big(2\nabla^2v + \lambda \mathrm{cof}\nabla^2v\Big): (\vec n\otimes \vec n) = 0 \qquad \mbox{on }~ \partial\Omega.
\end{split} \end{equation} \end{lemma} \begin{proof} Assuming enough regularity on $v,\lambda$, integration by parts gives: $$2\int_\Omega\nabla^2v:\nabla^2 w = 2\int_\Omega w\Delta^2 v + 2\int_{\partial\Omega} \Big[(\nabla^2v \nabla w) \vec n - w (\nabla\Delta v )\vec n\Big],$$ $$\int_\Omega \lambda~ \mbox{cof} \nabla^2 v:\nabla^2 w = \int_\Omega w ~\mbox{cof}\nabla^2v : \nabla^2 \lambda + \int_{\partial\Omega} \Big[ \lambda ((\mbox{cof}\nabla^2 v)\nabla w)\vec n - w ((\mbox{cof}\nabla^2 v)\nabla \lambda)\vec n \Big]$$ In view of (\ref{var}) the above calculations yield (\ref{EL}) and: $$\int_{\partial\Omega} \Big[\Big((2\nabla^2v + \lambda \mbox{cof}\nabla^2v)\nabla w\Big) \vec n - w\Big(2\nabla\Delta v + (\mbox{cof}\nabla^2v )\nabla\lambda\Big)\vec n\Big] = 0 \qquad \forall w\in W^{2,2}.$$ Writing now $\nabla w = (\partial_\tau w)\tau + (\partial_{\vec n} w)\vec n$, where $\tau$ is the unit vector tangent to $\partial\Omega$ we get: \begin{equation*} \begin{split} \int_{\partial\Omega} \Big[ (\partial_\tau w)\Big(2\nabla^2v + \lambda \mbox{cof}\nabla^2v\Big)&: (\tau\otimes \vec n) - w\Big(2\nabla\Delta v + (\mbox{cof}\nabla^2v )\nabla\lambda\Big)\vec n\Big]\\ &+ \int_{\partial\Omega}(\partial_{\vec n} w)\Big(2\nabla^2v + \lambda \mbox{cof}\nabla^2v\Big): (\vec n\otimes \vec n) = 0 \qquad \forall w\in W^{2,2}. \end{split} \end{equation*} Integrating by parts on the boundary in the first integral above, we deduce (\ref{bdary}). \end{proof} \medskip The proof of Theorem \ref{criptmA} follows now directly from the result below. \begin{proposition}\label{lemlem} Assume that $f\in\mathcal{C}^\infty(\bar B(0,1))$ is radially symmetric i.e. $f=f(r)$, and that $f\geq c>0$. Let $v=v(r)\in \mathcal{A}_f$ be a radial solution to the constraint: $\det\nabla^2v=f$ in $B(0,1)$. Then there is a radial function $\lambda=\lambda(r)\in \mathcal{C}^\infty(\bar B(0,1))$ such that $(v,\lambda)$ is a critical point for $\Lambda$. 
\end{proposition} \begin{proof} Since $f$ is smooth and positive, by \cite[Theorem 6.3]{LMP-arma} any $W^{2,2}$ solution of the Monge-Amp\`ere equation $\det \nabla^2 v = f$ in $B(0,1)$ satisfies $v\in\mathcal{C}^\infty(B(0,1))$. On the other hand, by radial symmetry, $v=v_f$ given in (\ref{minimizer}), so we conclude that, in fact, $v\in\mathcal{C}^\infty(\bar B(0,1))$. In particular, $v\in \mathcal{C}^\infty([0,1])$ and $ v'(0) = (\Delta v)'(0) = 0$. Let $R_\theta$ denote the planar rotation by angle $\theta$. In polar coordinates, we have: $$ \ds \nabla v(r,\theta) = v'(r) R_\theta e_1 = v'(r)\vec n, \qquad \nabla^2 v(r,\theta) = R_\theta\left [ \begin{array}{cc} v'' & 0 \\ 0 & \frac{v'}{r} \end{array} \right ] R^T_\theta,$$ and also note that: ${\rm cof} (R_\theta A R_\theta^T) = R_\theta ({\rm cof} A) R_\theta^T$. We now rewrite (\ref{EL}) and (\ref{bdary}) using the ansatz $\lambda=\lambda(r)$ and assuming sufficient regularity. First, (\ref{EL}) becomes: $\ds \frac{1}{r} (v'' \lambda' + v'\lambda'') = -2\big((\Delta v)'' + \frac{(\Delta v)'}{r}\big)$, where we used that $\ds \Delta v = v''+\frac{v'}{r}$. Equivalently, $\ds (\lambda' v')' = -2 \big(r(\Delta v)'\big)'$; integrating and noting that both sides vanish at $r=0$, we obtain: \begin{equation}\label{solution} \lambda'(r) = -2\frac{r}{v'(r)}(\Delta v)' \quad \mbox{in } (0,1). \end{equation} Note that this is consistent with $\lambda'(0) = 0$, because: \begin{equation}\label{aiuto} \lim_{r\to 0} \frac{v'(r)}{r} = \big(\lim_{r\to 0}\frac{(v'(r))^2}{r^2}\big)^{1/2} = \big(\lim_{r\to 0}\frac{2\int_0^r sf(s)~\mbox{d}s}{r^2}\big)^{1/2} =\big(\lim_{r\to 0}\frac{2r f(r)}{2r}\big)^{1/2}= \sqrt{f(0)} \neq 0. \end{equation} We now examine the boundary equations (\ref{bdary}).
We have: $$ \ds \Big(2\nabla^2v + \lambda \mathrm{cof}\nabla^2v\Big): (\tau\otimes \vec n) = R_\theta A(r) R_\theta^T : (\tau \otimes\vec n) = A(r) : (R_\theta^T \tau\otimes R_\theta^T \vec n) = A(r) : (e_2 \otimes e_1) $$ for a matrix field $A$ depending only on $r$, and hence: $$ \partial_\tau \left[\Big(2\nabla^2v + \lambda \mathrm{cof}\nabla^2v\Big): (\tau\otimes \vec n)\right] =0. $$ Also, in view of (\ref{solution}): $$ \Big(2\nabla\Delta v + (\mathrm{cof}\nabla^2v )\nabla\lambda\Big)\vec n = 2 (\Delta v)' + \lambda' \big\langle \left [ \begin{array}{cc} \frac{v'}{r} & 0 \\ 0 & {v''} \end{array} \right ] R^T_\theta \vec n , R^T_\theta \vec n \big\rangle = 2 (\Delta v)'+ \frac{v'}{r} \lambda' = 0, $$ so that the first equation in (\ref{bdary}) is automatically satisfied. Similarly: $$ \Big(2\nabla^2v + \lambda \mathrm{cof}\nabla^2v\Big): (\vec n\otimes \vec n) = \Big ( 2 \left [ \begin{array}{cc} v'' & 0 \\ 0 & \frac{v'}{r} \end{array} \right ] + \lambda \left [ \begin{array}{cc} \frac{v'}{r} & 0 \\ 0 & {v''} \end{array} \right ] \Big ) : (e_1 \otimes e_1) = 2 v'' + \lambda v', $$ so that the second equation in (\ref{bdary}) is satisfied if and only if: \begin{equation}\label{inicond} 2 v''(1) + \lambda (1) v'(1) = 0. \end{equation} Let $\lambda\in\mathcal{C}^1([0,1])$ be the solution of (\ref{solution}) subject to the condition (\ref{inicond}). As a side note, we remark that $\lambda$ possesses the following limits: $\lim_{r\to 0}\lambda''(r) = \lim_{r\to 0} \frac{\lambda'(r)}{r}= -2 \lim_{r\to 0} \frac{(\Delta v)'}{v'} = -\frac{2(\Delta v)''(0)}{v''(0)}, $ where the last equality follows by L'H\^opital's rule together with $v''(0)=\sqrt{f(0)}\neq 0$, in view of (\ref{aiuto}); it thus follows that $\lambda=\lambda(r)\in W^{2,\infty}(B(0,1))$. In fact, $\lambda$ is a distributional solution of (\ref{EL}), so in view of elliptic regularity: $\lambda\in\mathcal{C}^\infty(B(0,1))$. Since (\ref{EL}) and (\ref{bdary}) hold, the proof of Proposition \ref{lemlem} is complete. \end{proof}
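As a concrete illustration of the construction above (our example, not part of the source argument), the model density $f\equiv 1$ can be worked out explicitly:

```latex
For $f\equiv 1$, the radial solution of the constraint is $v(r)=\frac{r^2}{2}$,
since $\det\nabla^2 v = v''\,\frac{v'}{r} = 1\cdot\frac{r}{r}=1$. Then
$\Delta v\equiv 2$, so $(\Delta v)'\equiv 0$ and (\ref{solution}) gives
$\lambda'\equiv 0$, while (\ref{inicond}) reads
\[ 2v''(1)+\lambda(1)\,v'(1) = 2+\lambda = 0, \]
so the Lagrange multiplier is the constant function $\lambda\equiv -2$.
```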
\section{Introduction} During the last decade, deep learning and other representation learning approaches have achieved remarkable success, largely obviating the need for manual feature engineering and achieving new state-of-the-art scores across a broad range of data types, tasks, and domains. However, they have largely done so via complex architectures that have required massive labeled training data sets. Unfortunately, manually collecting, curating, and labeling these training sets is often prohibitively time-consuming and labor-intensive. The data-hungry nature of these models has thus led to increased demand for innovative ways of collecting cheap yet substantial labeled training data sets, and in particular, labeling them. To tackle the label scarcity bottleneck, a variety of classical approaches have seen a resurgence of interest. For instance, active learning (AL)~\cite{settles2009active,ren2021survey} aims to select the most informative samples to train the model with a limited labeling budget. Semi-supervised learning~(SSL)~\cite{tarvainen2017mean,xie2020unsupervised} leverages a set of unlabeled data to improve the model's performance. Transfer learning approaches~\cite{pan2009survey,wilson2020survey} pre-train a model or a set of representations on a source domain to enhance the performance on a different target domain. However, these approaches still require a set of clean labeled data to achieve satisfactory performance, and thus do not fully address the label scarcity bottleneck. To truly reduce the burden of training data annotation, practitioners have resorted to cheaper sources of labels. One classic approach is \emph{distant supervision}, where external knowledge bases are leveraged to obtain noisy labels~\cite{hoffmann2011knowledge}. There are also other options, including crowdsourced labels~\cite{yuen2011survey}, heuristic rules~\cite{Awasthi2020Learning}, and feature annotation~\cite{mann2010generalized}.
A natural question is: \emph{could we combine these approaches, and an even broader range of potential weak supervision inputs, in a principled and abstracted way?} The recently-proposed programmatic weak supervision (PWS) frameworks provided an affirmative answer to this question~\cite{Ratner16,ratner2017snorkel}. Specifically, in PWS, users encode \emph{weak supervision sources}, \eg, heuristics, knowledge bases, and pre-trained models, in the form of \emph{labeling functions (LFs)}, which are user-defined programs that each provide labels for some subset of the data, collectively generating a large set of training labels. The labeling functions are usually noisy with varying error rates and may generate conflicting labels on certain data points. To address these issues, researchers have developed \emph{label models}~\cite{Ratner16,Ratner19,fu2020fast,Varma2019multi} which aggregate the noisy votes of labeling functions to produce training labels. The training labels are then used to train an \emph{end model} for downstream tasks. These two-stage methods mainly focus on the efficiency and effectiveness of the label model, while maintaining the maximal flexibility of the end model. In addition to the two-stage methods, researchers have also explored the possibility of coupling the label model and the end model in an end-to-end manner~\cite{ren2020denoising,lan2020connet}. We refer to these one-stage methods as \emph{joint models}. An overview of the weak supervision pipeline can be found in Fig.~\ref{fig:overview}. In addition, these LFs often have clear dependencies among them~\cite{Ratner16}, and it is therefore crucial to specify and take into consideration the appropriate dependency structure~\cite{MisspecificationInDP}.
However, manually specifying the dependency structure would place an extra burden on practitioners; to reduce human effort, researchers have attempted to learn the dependency structure automatically~\cite{Bach2017LearningTS,Varma2017InferringGM,Varma2019LearningDS}. Very recently, researchers have also explored the possibility of generating these LFs automatically~\cite{varma2018snuba} or interactively~\cite{boecking2021interactive}. In this paper, we present the first survey on PWS to introduce its recent advances, with a special focus on its formulations, methodology, applications, and future research directions. We organize this survey as follows: after a brief introduction of PWS in Sec.~\ref{sec:background}, we review approaches for each component within a standard PWS workflow, namely, the label model (Sec.~\ref{sec:label_model}), the end model (Sec.~\ref{sec:end_model}), and the joint model (Sec.~\ref{sec:joint_model}). Then, we briefly address complementary approaches for the limited-label scenario and how they interact with PWS. Finally, we discuss the challenges and future directions (Sec.~\ref{sec:future}). We hope that this survey can provide a comprehensive review for interested researchers, and inspire more research in this and related areas. \section{Preliminary} \label{sec:background} \begin{table*}[t] \centering \caption{Comparisons among existing methods for each component of the PWS pipeline.
*: NPLM and PLRM are able to utilize new types of LFs as described in Sec~\ref{sec:label_model}.} \scalebox{0.58}{ \setlength{\tabcolsep}{2em} \begin{tabular}{ l l l c c c c c} \toprule \multirow{2}{*}{\textbf{Module}} & \multirow{2}{*}{\textbf{Target Task}} & \multirow{2}{*}{\textbf{Method}} & \multicolumn{4}{c}{\textbf{Input}} \\ \cmidrule(lr){4-7} & & & \textbf{X} & \textbf{P(Y)} & \textbf{Additional Information} & \textbf{LF dependency}\\ \toprule \multirow{12}{*}{Label Model} & \multirow{6}{*}{Classification} & Data Programming~\cite{Ratner16} & & & & \checkmark\\ & & MeTaL~\cite{Ratner19} & & \checkmark & & \checkmark\\ & & FlyingSquid~\cite{fu2020fast} & & \checkmark & & \checkmark\\ & & CAGE~\cite{chatterjee2020robust} & & & User-provided Quality of LFs & \\ & & NPLM$^{*}$ ~\cite{yu2021learning} & & & \\ & & PLRM$^{*}$ ~\cite{zhang2021creating} & & & \\\cmidrule(lr){2-7} & \multirow{4}{*}{Sequence Tagging} & Dugong~\cite{Varma2019multi} & & \checkmark & & \checkmark\\ & & HMM~\cite{lison2020named} & & \checkmark \\ & & Linked HMM~\cite{safranchik2020weakly} & & & Linking Functions & \\ & & CHMM~\cite{li2021bertifying} & \checkmark \\ \cmidrule(lr){2-7} & Classification, Ranking, Regression & \multirow{2}{*}{UWS~\cite{shin2021universalizing}} & & & & \multirow{2}{*}{\checkmark}\\ & Learning in Hyperbolic Manifolds\\ \midrule \multirow{1}{*}{End Model} & \multirow{1}{*}{Classification} & COSINE~\cite{yu-etal-2021-fine} & \checkmark \\ \midrule \multirow{9}{*}{Joint Model} & \multirow{7}{*}{Classification} & {Denoise~\cite{ren2020denoising}} & \checkmark \\ & & WeaSEL~\cite{cachay2021endtoend} & \checkmark & \checkmark \\ & & ALL~\cite{arachie2019adversarial} & \checkmark & & Error Rate of LFs & \\ & & AMCL~\cite{mazzetto:icml21} & \checkmark & & Set of Labeled data& \\ & & ImplyLoss~\cite{Awasthi2020Learning} & \checkmark & & Exemplar Data of LFs & \\ & & ASTRA~\cite{karamanolakis2021self} & \checkmark & & Set of Labeled data& \\ & & 
SPEAR~\cite{maheshwari2021semi} & \checkmark & & Set of Labeled data &\\\cmidrule(lr){2-7} & \multirow{2}{*}{Sequence Tagging} & ConNet~\cite{lan2020connet} & \checkmark \\ & & DWS~\cite{parker2021named} & \checkmark \\ \bottomrule \end{tabular} } \label{tab:methods} \end{table*} Now, we formally define the setting of PWS. We are given a dataset $D$ with $n$ data points and the $i$-th data point is denoted by $X_i \in \mathcal{X}$. For each $X_i$, there is an unobserved true label denoted by $Y_i \in \mathcal{Y}$. Let $m$ be the number of sources $\{S_j\}_{j\in[m]}$, each assigning a label $\lambda_j \in \mathcal{Y}$ to some $X_i$ to vote on its respective $Y_i$, or abstaining ($\lambda_j = -1$). In addition, some methods can handle the dependencies among sources by taking as input the dependency graph $G_{dep}$ of the sources. For concreteness, we follow the general convention of PWS~\cite{Ratner16} and refer to these sources as \emph{labeling functions (LFs)} throughout the paper. The goal is to apply $m$ LFs to the unlabeled dataset $\bm{X}=[X_1, X_2, \ldots, X_n]$ to create an $n \times m$ label matrix $L$, and to then use $L$ and $\bm{X}$ to produce an end machine learning model $f_w : \mathcal{X} \rightarrow \mathcal{Y}$. \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{figures/overview.pdf} \caption{An overview of the PWS pipeline~\protect\cite{zhang2021wrench}.} \label{fig:overview} \end{figure} In general, PWS methods can be classified into two categories as shown in Fig.~\ref{fig:overview}: \paragraph{Two-stage Method.} A two-stage method works as follows. In the first stage, a \textit{label model} is used to combine the label matrix $L$ into either probabilistic soft training labels or one-hot hard training labels, which are in turn used to train the desired \emph{end model} in the second stage. We review the label models and end models in the literature separately.
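To make this two-stage flow concrete, it can be sketched in a few lines of plain Python (an illustration of ours, not the API of any particular PWS system; the toy LFs and the majority-vote aggregation merely stand in for a real label model):

```python
from collections import Counter

ABSTAIN = -1  # an LF outputs -1 to abstain on a data point

def apply_lfs(data, lfs):
    """Run the m LFs over the n data points to build the n x m label matrix L."""
    return [[lf(x) for lf in lfs] for x in data]

def majority_vote(L):
    """Simplest possible label model: row-wise majority vote over non-abstains."""
    y_hat = []
    for row in L:
        votes = Counter(v for v in row if v != ABSTAIN)
        y_hat.append(votes.most_common(1)[0][0] if votes else ABSTAIN)
    return y_hat

# Toy binary sentiment task: 1 = positive, 0 = negative.
lfs = [lambda x: 1 if "good" in x else ABSTAIN,
       lambda x: 0 if "bad" in x else ABSTAIN,
       lambda x: 0 if "boring" in x else ABSTAIN]
data = ["good and fun", "bad and boring", "nothing to say"]
L = apply_lfs(data, lfs)    # [[1, -1, -1], [-1, 0, 0], [-1, -1, -1]]
y_train = majority_vote(L)  # [1, 0, -1]; the last point remains uncovered
# The second stage would then train an end model f_w on (data, y_train).
```

A real pipeline would replace `majority_vote` with one of the probabilistic label models reviewed below.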
\paragraph{One-stage Method.} The one-stage methods attempt to train a label model and end model simultaneously. Specifically, they usually design a neural network for label aggregation while utilizing another neural network for final prediction. These approaches offer a more straightforward way of handling weak labels. We refer to the model designed for a one-stage method as a \emph{joint model}. \section{Labeling Functions} At the core of PWS are the labeling functions (LFs) that provide potentially noisy weak labels that fuel the entire learning pipeline. In this section, we provide an overview of the popular types of LFs, how they are generally developed, and the potential dependency structure among the LFs. \subsection{Labeling Function Types} In PWS, users encode different weak supervision sources into LFs, each of which noisily annotates a subset of data points. While an LF can be as general as any function $\lambda: \mathcal{X} \to \mathcal{Y} \cup \{-1\}$ that takes as input a data point and either outputs a corresponding label or abstains, we introduce the most common types of LFs used in practice. \subsubsection{User-written Heuristics} In practical applications, users generally have domain knowledge about the target learning task of interest. One common type of LF expresses this domain knowledge as heuristic labeling rules that associate labels with data points. For example, in text applications, users write keyword- or regex-based LFs that assign corresponding labels to the data points that contain the keyword or match the specified regular expression~\cite{ratner2017snorkel,meng2018weakly,Awasthi2020Learning}. In image applications, users write LFs that provide labels to the image inputs containing specific objects, or possessing some user-specified visual/spatial properties~\cite{Varma2017InferringGM,chen2019scene,fu2020fast}.
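For illustration, keyword- and regex-based LFs of this kind might look as follows (a hypothetical toy spam task of ours; the label names and patterns are made up, not drawn from any cited system):

```python
import re

ABSTAIN, HAM, SPAM = -1, 0, 1

def lf_keyword_spam(text):
    # Keyword heuristic: a promotional phrase suggests spam.
    return SPAM if "free money" in text.lower() else ABSTAIN

def lf_regex_spam(text):
    # Regex heuristic: three or more exclamation marks suggest spam.
    return SPAM if re.search(r"!{3,}", text) else ABSTAIN

def lf_greeting_ham(text):
    # Keyword heuristic: a personal greeting suggests a legitimate message.
    return HAM if text.lower().startswith(("hi", "hello")) else ABSTAIN

assert lf_keyword_spam("Claim your FREE MONEY today") == SPAM
assert lf_regex_spam("WINNER!!!") == SPAM
assert lf_greeting_ham("Hi Anna, see you at noon") == HAM
```

Each function abstains outside its own small region of competence, which is why their votes must later be aggregated and denoised.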
\subsubsection{Existing Knowledge} \paragraph{Knowledge Bases.} Oftentimes, external knowledge bases can be used to provide weak supervision over the learning task of interest, commonly known as the distant supervision approach~\cite{hoffmann2011knowledge,liang2020bond}. For example, in a relation extraction task to identify mentions of spouse relationships in news articles, \cite{ratner2017snorkel} writes LFs that match the text inputs against the knowledge base DBPedia\footnote{\url{https://www.dbpedia.org/}} to search for known spouse relationships. \paragraph{Pre-trained Models.} Existing pre-trained models from a related task can be used as LFs to provide weak labels. For example, in a product classification task at Google, \cite{bach2019snorkel} leverages an existing semantic topic model to identify content irrelevant to the product categories of interest. In \cite{zhang2021creating}, a pre-trained image classification model whose output label space differs from that of the target classification task is used as an LF to provide indirect weak supervision for the learning task of interest. \paragraph{Third-party Tools.} To collect weak labels cheaply, there are several existing third-party tools available that can serve as LFs. For example, for review sentiment analysis, users can simply use TextBlob\footnote{\url{https://textblob.readthedocs.io/en/dev/}} to assign a label to each review. Take named entity recognition (NER) as another example: there are several tagging tools, such as spaCy\footnote{\url{https://spacy.io/}} and NLTK\footnote{\url{https://www.nltk.org/}}, and \cite{lison2020named} adopts them as LFs for the weakly-supervised NER task. Note that the above tools are not perfect, as the weak labels generated from their outputs contain considerable noise.
\subsubsection{Crowd-sourced Labels} Crowd-sourcing is the classic and well-studied approach of obtaining less accurate label annotations from non-expert contributors with lower annotation cost~\cite{DawidSkene,yuen2011survey}. In the PWS setting, each crowd-sourcing contributor can be represented as an LF that noisily annotates the data points~\cite{ratner2017snorkel,lan2020connet}. For example, in a weather sentiment classification task, each crowd-source contributor---who grades the sentiment of tweets relating to the weather into five different categories---is considered an LF. \subsection{Labeling Function Generation} In the PWS learning paradigm, the first and foremost step is to create a set of LFs that are used to generate the weak labels for learning the subsequent models. In practice, the LFs are typically developed by subject matter experts (SMEs) who have adequate knowledge about the task of interest. When developing LFs, in addition to leveraging existing domain knowledge, SMEs usually refer to a small subset of data points sampled from the unlabeled set, called the \textit{development set}, to extract further task/dataset-specific labeling heuristics that complement the pre-existing domain knowledge~\cite{ratner2017snorkel}. This process of LF development can sometimes be challenging and time-consuming even for domain experts. For example, it often requires SMEs to explore a considerable amount of development data to generate ideas for LFs~\cite{varma2018snuba,darwin,boecking2021interactive}. As a result, researchers have recently aimed to reduce the amount of effort spent in designing weak supervision sources, pursuing three main directions, namely, \textit{automatic generation}, \textit{interactive generation}, and \textit{guided generation} of LFs. \paragraph{Automatic Generation.} One direction to alleviate the burden of designing LFs in the PWS paradigm is to automate the process of LF development.
\cite{varma2018snuba} proposes a system, Snuba, that generates LFs automatically by learning weak classification models on a small labeled dataset. TALLOR~\cite{TALLOR} takes as input an initial set of seed LFs that are generally simpler, and automatically learns more accurate compound LFs from multiple simple labeling rules. Similarly, GLaRA~\cite{glara} learns to augment a set of seed LFs automatically by exploiting the semantic relationships between candidate and seed LFs through a graph-based model. Notably, while we refer to this line of methods as ``automatic generation'' approaches, they do require a minimum amount of initial supervision, either in the form of a small labeled set or seed LFs. \paragraph{Interactive Generation.} In contrast to fully automating the generation of LFs \textit{after} a seed supervision set is given, interactive generation approaches cast LF development as an interactive process where users are iteratively queried for feedback used in discovering useful LFs from a large set of candidates~\cite{darwin,boecking2021interactive}. Specifically, in Darwin~\cite{darwin} and IWS~\cite{boecking2021interactive}, a set of candidate LFs is first generated based on $n$-grams or context-free grammar information. Then, in each iteration, the user is queried to annotate whether a presented LF, proposed by the system, is useful or not (i.e., better than random accuracy). Based on the feedback provided in each iteration, the systems learn to adapt and identify a set of high-precision LFs from the candidate set, which is used as the final set of LFs in the PWS learning pipeline. Compared to the standard active learning approaches which rely on instance-level annotations, the interactive generation approaches are shown to achieve better performance with lower annotation costs.
\paragraph{Guided Generation.} Based on the current workflow of LF development where SMEs write LFs by looking at a small development set of data, guided-generation approaches aim to assist users in developing LFs by intelligently curating the development set in order to efficiently \textit{guide} SMEs in exploring the data and developing informative LFs that could lead to strong resultant models~\cite{cohen-wang2019interactive}. The idea resembles traditional active learning~\cite{settles2009active} in the sense that the goal is to strategically select data points from the unlabeled set and solicit informative supervision from the users, except that the supervision is provided at the functional level (i.e., LFs) instead of the individual-label level. \section{Label Model} \label{sec:label_model} The multiple LFs we have for a given dataset often overlap and conflict with each other. In PWS, a \emph{label model} is used to integrate the LFs' output predictions into probabilistic labels, aiming to accurately recover the unobserved ground truth labels. To date, various label models have been proposed, most of them based on probabilistic graphical models. It is worthwhile to note that LFs developed in practice often exhibit statistical dependencies among each other~\cite{Ratner16,MisspecificationInDP}. Incorporating the dependency information into the label model has been shown to be critical to the model's ability to correctly estimate the latent ground truths~\cite{Ratner16,Bach2017LearningTS,Varma2017InferringGM,MisspecificationInDP}. However, not all label models take into account the LF dependency structure when aggregating the LFs' votes; some approaches simply assume conditional independence between the LFs. In this section, we first discuss general approaches used to incorporate LF dependency in label models.
Then, we introduce existing label models in more detail, categorized by their target learning tasks, with a discussion of how the LF dependency is handled in some of the approaches. \subsection{LF Dependency Structure} Earlier work on PWS relies on users to manually specify the dependency structure among the LFs~\cite{Ratner16}. For example, users could specify two LFs to be \textit{similar}; or one LF to be \textit{fixing} or \textit{reinforcing} another; or two LFs to be \textit{exclusive}. Nevertheless, as manually specifying such dependency structure is generally hard for users, researchers have recently turned to \textit{learning} or \textit{inferring} the dependency structure automatically without user supervision. To automatically \textit{learn} the dependency structure, \cite{Bach2017LearningTS} proposes to maximize the $\ell_1$-regularized marginal pseudo-likelihood of a factor graph with high-order dependencies and select the dependencies that have non-zero weights; \cite{Varma2019LearningDS} exploits the sparsity of the label model and leverages a robust PCA technique to capture the underlying dependency structure. On the other hand, instead of \textit{learning} the structure from the observed labels, \cite{Varma2017InferringGM} proposes an alternative approach that \textit{infers} the relations between different LFs by statically analyzing the source code of the LFs. With the dependency structure in hand, whether manually specified or automatically learned/inferred, a prevailing approach is to embed the dependency relationships into the label model, which is typically a graphical model, through factor functions~\cite{Ratner16,shin2021universalizing} or the graph structure~\cite{Ratner19,fu2020fast,Varma2019multi}. In the following subsections, we introduce the label models for different learning tasks in more detail, and provide an overview of these methods in Table~\ref{tab:methods}.
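The cited structure-learning methods are too involved to sketch here, but the raw signal they exploit, namely correlation between LF outputs, can be illustrated with a crude empirical agreement statistic (a simplification of ours, not the actual algorithm of any cited work):

```python
# Illustrative only: empirical pairwise agreement between two LFs as a crude
# signal of potential dependency. Real methods use regularized
# pseudo-likelihood, robust PCA, or static code analysis instead.
def pairwise_agreement(L, j, k, abstain=-1):
    """Fraction of data points where LFs j and k both vote and agree."""
    covoted = [(row[j], row[k]) for row in L
               if row[j] != abstain and row[k] != abstain]
    if not covoted:
        return 0.0
    return sum(a == b for a, b in covoted) / len(covoted)

# LFs 0 and 1 co-vote on three points and agree on two of them.
L = [[1, 1], [1, 1], [0, 1], [-1, 1]]
assert abs(pairwise_agreement(L, 0, 1) - 2 / 3) < 1e-9
```

An agreement rate far above what two independent LFs of the estimated accuracies would produce hints at a dependency worth modeling.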
\subsection{Label Model for Classification} For classification problems, majority voting (MV) is the most straightforward approach for aggregating different LFs, as it simply uses the consensus from the multiple LFs to obtain more reliable labels without introducing any trainable parameters. Crowdsourcing models~\cite{DawidSkene,dalvi2013aggregating,raykar2010learning,khetan2018learning} usually leverage the expectation-maximization (EM) algorithm to estimate the accuracy of each worker as well as infer the latent ground truth labels, and they can also be applied here when we regard each LF as a worker. Apart from these approaches, we review several label models tailored for PWS problems. These label models are all based on probabilistic graphical models and aim to maximize the probability of observing the outputs of LFs. Specifically, they share the following optimization problem: \begin{equation} \label{eq:ci} \max_{\theta}~ P(L; \theta), \qquad P(L; \theta) = \sum_{Y}P(L, Y; \theta) ~. \end{equation} The key differences among existing label models are the way they parameterize the joint distribution $P(L, Y; \theta)$ and how the parameters are estimated. In particular, Data programming (DP)~\protect\cite{Ratner16} models the distribution $P(L, Y; \theta)$ as a factor graph. It is able to describe the distribution in terms of pre-defined factor functions, which reflect the dependency of any subset of random variables and are also used to encode the dependency structure of LFs. The log-likelihood is optimized by SGD, where the gradient is estimated by Gibbs sampling. MeTaL~\protect\cite{Ratner19}, instead, models the distribution via a Markov Network and recovers the parameters via a matrix completion-style approach. Later on, FlyingSquid~\cite{fu2020fast} is proposed to accelerate the learning process for binary classification problems.
It models the distribution as a binary Ising model, where each LF is represented by two random variables, and a Triplet Method is used to recover the parameters, so no learning is needed. Notably, the latter two methods encode the dependency structure of LFs into the structure of the graphical model and require the label prior as input. Additionally, researchers have attempted to extend the scope of usable LFs. CAGE~\cite{chatterjee2020robust} extends the existing label models to support continuous LFs. In addition, it leverages user-provided quality scores for the LFs to increase training stability and make the model less sensitive to initialization. Moreover, NPLM~\cite{yu2021learning} enables users to utilize partial LFs that output a subset of the possible class labels, and PLRM~\cite{zhang2021creating} allows the use of indirect LFs that predict only unseen but related classes; both works are built on probabilistic graphical models similar to~\cite{Ratner16} and greatly expand the scope of usable LFs in PWS. \subsection{Label Model for Sequence Tagging} Sequence tagging problems are more complex since there are dependencies among consecutive tokens. To model such properties, hidden Markov models (HMMs)~\cite{baum1966statistical} have been proposed, which represent the true labels as latent variables and infer them from the independently observed noisy labels through the expectation-maximization algorithm~\cite{welch2003hidden}. \cite{lison2020named} directly applies an HMM to the named entity recognition task, and \cite{safranchik2020weakly} proposes Linked-HMM to incorporate unique linking rules as an adjunct supervision source in addition to general weak labels on tokens. Moreover, the conditional hidden Markov model (CHMM)~\cite{li2021bertifying} replaces the constant transition and emission matrices with token-wise counterparts predicted from BERT embeddings to model the evolution of the true labels in a context-dependent manner.
Another characteristic of sequence tagging problems is that the supervision can be provided at different resolutions (e.g., frame-, window-, and scene-level for videos). To integrate them together, Dugong~\cite{Varma2019multi} has been proposed to assign probabilistic labels to data with graphical models. Dugong also accelerates the inference speed with SGD-based optimization techniques. Finally, as shown in~\cite{zhang2021wrench}, label models for the classification task can also be applied to sequence tagging problems with certain adaptations. \subsection{Label Model for General Learning Tasks} Very recently, UWS~\cite{shin2021universalizing} goes beyond the traditional tasks and generalizes PWS frameworks to handle more kinds of tasks, including ranking, regression, and learning in hyperbolic manifolds, with an efficient method-of-moments approach in the embedding space. \section{End Model} \label{sec:end_model} After obtaining the probabilistic labels, the end model, a discriminative model, is trained on them for the downstream task. Since the probabilistic training labels derived from the label model may still contain noise, \cite{ratner2017snorkel} suggests using a noise-aware loss as the training objective for the end model. However, one drawback of such end models is that they are usually trained only on the data covered by weak supervision, while there may exist a non-negligible portion of data not covered by any LF. Motivated by this, COSINE~\cite{yu-etal-2021-fine} designs a better end model by leveraging the data left uncovered by the LFs. Specifically, it utilizes these uncovered data in a self-training manner and generates pseudo-labels for each unlabeled data point. Apart from the above methods, other approaches designed for learning with noisy labels~\cite{song2020learning} can also be utilized as end models.
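The noise-aware loss of \cite{ratner2017snorkel} amounts to taking the expected loss under the probabilistic labels; a minimal sketch in plain Python (ours, using cross-entropy; a real implementation would operate on logits inside a deep learning framework):

```python
import math

def noise_aware_loss(probs, soft_labels):
    """Expected cross-entropy of model predictions `probs` under the
    probabilistic training labels `soft_labels` produced by a label model.
    Both arguments are lists of per-example class distributions."""
    total = 0.0
    for p, y in zip(probs, soft_labels):
        # Expectation over the label distribution y of -log p(class).
        total -= sum(y_k * math.log(p_k) for y_k, p_k in zip(y, p))
    return total / len(probs)

# A prediction matching the soft label costs less than an uncertain one.
soft = [[0.9, 0.1]]
assert noise_aware_loss([[0.9, 0.1]], soft) < noise_aware_loss([[0.5, 0.5]], soft)
```

The more confident the label model is about an example, the closer this objective is to an ordinary hard-label cross-entropy on it.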
\section{Joint Model} \label{sec:joint_model} The traditional PWS pipeline trains the label model and the end model separately; in contrast, a \emph{joint model} aims to train the label model and the end model in an end-to-end manner, allowing them to enhance each other mutually. In addition, a joint model usually adopts a neural network as the label model instead of the aforementioned statistical label models; this design choice not only facilitates the co-training of the label model and the end model, but also reflects the motivation of considering data features during the training of the label model, leading to an instance-dependent label model, \ie, $P(L, Y|X; \theta)$. As opposed to statistical label models (Sec.~\ref{sec:label_model}) that \textit{explicitly} incorporate LF dependencies through the graph structure of the underlying graphical models, neural network based joint models have been observed to \textit{implicitly} capture the dependencies among the LFs in the learning process~\cite{cachay2021endtoend}. However, existing joint models generally cannot incorporate a pre-given dependency structure. \subsection{Joint Model for Classification} Denoise~\cite{ren2020denoising} and WeaSEL~\cite{cachay2021endtoend} first reparameterize the prior probabilistic posteriors with a neural network, then assign scores to each PWS source for aggregation. After that, the posterior network and the end model are trained simultaneously to maximize the agreement between them. \cite{arachie2019adversarial,mazzetto:icml21} both formulate weakly supervised classification as a constrained min-max optimization problem, and ALL~\cite{arachie2019adversarial} learns a prediction model that has the highest expected accuracy with respect to an adversarial labeling of an unlabeled dataset, where this labeling must satisfy error constraints on the weak supervision sources.
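The score-based aggregation step used by such posterior networks can be illustrated with a small numpy function: per-example, per-LF scores are softmax-normalized and used to weight the LF votes into a class posterior. This is a hypothetical sketch, not the actual Denoise or WeaSEL architecture; in those systems the scores are produced by a trained neural network conditioned on the input features, whereas here they are plain inputs:

```python
import numpy as np

def weighted_vote_posterior(votes, scores, n_class):
    """Aggregate LF votes (n x m, class index or -1 for abstain) into
    per-example posteriors, weighting each LF by a softmax over
    per-example scores (n x m)."""
    n, m = votes.shape
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)                   # softmax over LFs
    post = np.zeros((n, n_class))
    for j in range(m):
        active = votes[:, j] >= 0                          # skip abstentions
        post[np.arange(n)[active], votes[active, j]] += w[active, j]
    z = post.sum(axis=1, keepdims=True)
    # rows where every LF abstained fall back to a uniform posterior
    return np.where(z > 0, post / np.maximum(z, 1e-12), 1.0 / n_class)

votes = np.array([[0, 1, -1], [-1, -1, -1]])   # second example: all LFs abstain
scores = np.zeros((2, 3))                      # uniform scores for the demo
post = weighted_vote_posterior(votes, scores, n_class=2)
```

During joint training, gradients flowing through such a differentiable aggregation are what allow the scoring network and the end model to co-adapt.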
Differently, AMCL~\cite{mazzetto:aistats21} constructs the constraints based on the expected loss on a small set of clean data. To denoise LFs more effectively, several methods propose to use a small amount of labeled data in training. ImplyLoss~\cite{Awasthi2020Learning} jointly trains a rule denoising network, based on exemplars for each label, and a classification model with a soft implication loss. SPEAR~\cite{maheshwari2021semi} extends ImplyLoss by designing additional loss functions on both labeled and unlabeled data and encouraging consistency between the two models. In addition, ASTRA~\cite{karamanolakis2021self} adopts self-training for PWS with a teacher-student framework. The student model is initialized with a small amount of labeled data and generates pseudo-labels for instances not covered by LFs, while the teacher model combines the LFs with the output of the student model for the final prediction. \subsection{Joint Model for Sequence Tagging} For sequence tagging problems, the Consensus Network (ConNet)~\cite{lan2020connet} trains a BiLSTM-CRF~\cite{ma2016end} with a separate CRF layer for each labeling source. Then, it aggregates the CRF transitions with attention scores conditioned on the quality of the LFs and outputs a unified label sequence. DWS~\cite{parker2021named} uses a CRF layer to capture statistical dependencies among tokens, weak labels, and latent true labels. Moreover, it adopts a hard EM algorithm for model training: in the E-step, it finds the most probable labels for the given sequence; in the M-step, it maximizes the probability of those labels. \section{Datasets and Applications} \label{sec:dataset} In this section, we briefly review existing datasets and applications of PWS.
Very recently, WRENCH~\cite{zhang2021wrench}, a comprehensive benchmark for PWS, was released along with 14 classification datasets and 8 sequence tagging datasets covering text, video, and tabular data; WALNUT~\cite{zheng2021walnut} is another benchmark, for semi-PWS, with a focus on Natural Language Understanding tasks. \paragraph{Natural Language Processing.} In natural language processing, text classification is a popular application area for PWS~\cite{ren2020denoising,yu-etal-2021-fine,shu2020learning}. In addition, some works apply PWS to other fields of study, such as relation extraction~\cite{zhou2020nero,liu2017heterogeneous,Mallory2020ExtractingCR}, named entity recognition~\cite{safranchik2020weakly,lison2020named,li2021bertifying,lan2020connet,fries2017swellshark}, and dialogue systems~\cite{DBLP:conf/aaai/MallinarSUGGHLZ19}. \paragraph{Computer Vision.} Researchers have also explored applications of PWS in the vision domain, \eg, image classification~\cite{das2020goggles}, image segmentation~\cite{hooper2020cut}, video analysis~\cite{fu2020fast,Varma2019multi}, scene graph prediction~\cite{chen2019scene}, and autonomous driving~\cite{Weng2019UtilizingWS}. \paragraph{Biomedical.} There are various applications of PWS in the biomedical domain, such as genome study~\cite{Kuleshov2019AMD}, bioinformatics~\cite{fries2017swellshark,Mallory2020ExtractingCR}, clinical data~\cite{Fries2019WeaklySC,Fries2021OntologydrivenWS,Wang2019ACT}, and medical images~\cite{DBLP:conf/miccai/SaabDGRSRR19,Saab2020WeakSA,DBLP:journals/patterns/DunnmonRSKMSGLL20}. \paragraph{Others.} Apart from the aforementioned areas, PWS is also used in software engineering~\cite{rao2021search}, mobile sensing~\cite{furst2020transport,khattar2019multi}, E-commerce platforms~\cite{mathewdefraudnet,zhang2021fraud}, and multi-agent systems~\cite{DBLP:conf/iclr/ZhanZYSL19}.
\section{Systems} Several open-source systems have been built around the PWS pipeline: \begin{itemize} \item Snorkel~\cite{ratner2017snorkel}: the pioneering end-to-end PWS system, built on the data programming paradigm. \item Knodle~\cite{sedova2021knodle}: a framework modularizing weak supervision pipelines. \item SPEAR~\cite{abhishek2021spear}: an open-source toolkit for semi-supervised weak supervision. \item skweak~\cite{skweak}: a toolkit that facilitates the use of weak supervision for NLP tasks such as text classification and sequence labelling. \item TagRuler~\cite{tagruler}: an interactive system that helps users create LFs by demonstration. \end{itemize} \section{Complementary Approaches} In this section, we briefly describe how PWS can be connected to or combined with complementary machine learning approaches that also aim to deal with the label scarcity issue. \paragraph{Active Learning.} Active learning (AL) attempts to handle the label scarcity issue by interactively annotating the most informative samples to achieve good performance. As a complementary approach, PWS can be utilized to improve AL. For example, \cite{Mallinar2020IterativeDP} expands the initial labeled set in AL by querying labels for the instances that are most relevant to existing labeled ones according to the LFs, and \cite{Nashaat2018HybridizationOA} applies PWS to generate initial noisy training labels to improve the efficiency of the subsequent active learning process. On the other hand, AL can in turn help PWS: \cite{biegel2021active} asks experts to provide labels for instances on which the label model is most likely to be mistaken, and Asterisk~\cite{nashaat2020asterisk} employs AL to enhance the label model, with a selection policy based on the estimated accuracy of the LFs and the output of the label model. \paragraph{Transfer Learning.} Transfer learning (TL), which adapts a trained model to new tasks and consequently tends to require less labeled data than training from scratch, has recently attracted increasing attention, especially given the great success of fine-tuning large pretrained models with few labels. We note that TL and PWS are orthogonal to each other and can be combined to achieve the best performance, since TL can reduce but not eliminate the demand for labeled data, which can be supplied by PWS.
Indeed, current state-of-the-art PWS methods usually rely on fine-tuning pretrained models with labels produced by the label model~\cite{zhang2021wrench}. \paragraph{Semi-Supervised Learning.} Semi-supervised learning (SSL) aims to train a model with a small amount of labeled data together with a large amount of unlabeled data. The idea of leveraging unlabeled data to improve training has also been applied to PWS, as \cite{karamanolakis2021self,yu-etal-2021-fine} use self-training to bootstrap over unlabeled data. Moreover, \cite{xu2021dp} improves SSL by leveraging the idea of PWS; specifically, the labeled data are used to generate LFs that in turn annotate the unlabeled data, and the model is finally trained on the whole dataset with provided or synthesized labels. To sum up, SSL and PWS are also complementary, and future work includes developing more advanced methods to combine clean labels and weak labels to further boost performance. \section{Challenges and Future Directions} \label{sec:future} \paragraph{Extend to More Complex Tasks.} The majority of PWS methods only support classification or sequence tagging tasks, while there is a variety of tasks that require high-level reasoning over concepts, such as question answering~\cite{rajpurkar2016squad}, navigation~\cite{gupta2017cognitive}, and scene graph generation~\cite{ye2021linguistic}, and curating labeled data for these tasks requires even more human effort. Moreover, in these tasks the input data may come from multiple modalities, including text, images, and tables, while current PWS methods only consider LFs over one specific modality. Hence, it is crucial yet challenging to develop multi-modal PWS methods to improve data efficiency on these tasks.
\paragraph{Extend the Scope of Usable LFs.} Although researchers have made attempts to extend the scope of usable LFs~\cite{zhang2021creating,yu2021learning}, there are other sources that could potentially serve as LFs, \eg, physical rules, especially for more complex tasks. The ultimate goal of PWS is to leverage as many existing sources as possible to minimize human effort in the curation of training data. \paragraph{Ethical and Trustworthy AI.} One of the most pressing concerns in the AI community right now is ensuring that AI techniques and models are applied ethically. Within this area of focus, one of the most important and challenging topics is ensuring that the training data which informs models is ethically labeled and managed, transparent, auditable, and bias-free. PWS approaches offer a step-change opportunity in this regard, since they produce training labels generated by code, which can be inspected, audited, governed, and edited to reduce bias. However, by the same token, PWS methods can also introduce more direct bias into training datasets if used and modeled improperly~\cite{geva2019modeling,lucy2021gender}. Overall, further systematic study in this area is highly critical and offers a great opportunity for improving the state of data in AI from an ethics and governance perspective. \section{Conclusion} Manual annotations are of great importance for training machine learning models, but they are usually expensive and time-consuming. Programmatic weak supervision (PWS) offers a promising direction for achieving large-scale annotation with minimal human effort. In this article, we review the PWS area by introducing existing approaches for each component of the PWS workflow. We also describe how PWS can interact with methods from related fields for better performance on downstream applications. Then, we list existing datasets and recent applications of PWS in the literature.
Finally, we discuss current challenges and future directions in the PWS area, hoping to inspire future research advances in PWS. \section{Multi-level Nested Linearized Averaging Stochastic Gradient \\ Method}\label{sec:modifiednasa} In this section, we present a linearized variant of \cref{alg:originalnasa} which can achieve the best known rate of convergence for problem \eqnok{eq:mainprob} for any $T\geq 1$, under Assumptions~\ref{fi_lips} and~\ref{assumption:original_nasa_assumption}. Indeed, when $T>2$, we have accumulated errors in estimating the inner function values. Hence, in~\cref{alg:originalnasa} we use mini-batch sampling in \eqnok{def_wi} to reduce the noise associated with the stochastic function values. However, this increases the sample complexity of the algorithm. To resolve this issue, instead of using the point estimates of the $f_i$'s, we use their stochastic linear approximations in \eqnok{def_wi_new}. This modification reduces the bias error in estimating the inner function values, which, together with a refined convergence analysis, enables us to obtain a sample complexity of ${\cal O}_T(1/\epsilon^4)$ with \cref{alg:modifiednasa}, for any $T\geq 1$, without using any mini-batches. \textcolor{black}{Furthermore,~\cref{alg:modifiednasa} works with any constant choice of the step-size parameter $\beta$ (independent of the problem parameters), making it easy to implement.} As mentioned previously,~\cref{alg:modifiednasa} is motivated by the algorithm in \cite{rusz20} proposed for solving nonsmooth multi-level stochastic composition problems. However, \cite{rusz20} assumes that all functions $f_i$ explicitly depend on the decision variable $x$, which makes the composition function a variant of the general case in \eqnok{eq:mainprob}. It is also worth mentioning that other linearization techniques have been used in~\cite{duchi2018stochastic, davis2019stochastic} for estimating the stochastic inner function values in weakly convex two-level composition problems.
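The effect of the linear correction term in \eqnok{def_wi_new} can be seen in a tiny noise-free experiment: when tracking $f(x_k)$ along a drifting sequence $x_k$, the plain averaging update \eqnok{def_wi} lags behind by roughly $\Delta f/\tau$, while the linearized update only accumulates the second-order Taylor remainder. A one-dimensional Python sketch (illustrative only; the test function, step size $\tau$, and drift are arbitrary choices):

```python
import numpy as np

def track(f, df, xs, tau, linearized):
    """Track f(x_k) along a drifting sequence x_k. The plain update is
    w <- (1 - tau) w + tau f(x_k); the linearized variant adds the
    correction df(x_k) * (x_{k+1} - x_k), mirroring eq. (def_wi_new)
    with exact (noise-free) oracles. Returns the final tracking error."""
    w = f(xs[0])
    for k in range(len(xs) - 1):
        w = (1 - tau) * w + tau * f(xs[k])
        if linearized:
            w += df(xs[k]) * (xs[k + 1] - xs[k])
    return abs(f(xs[-1]) - w)

f, df = (lambda x: x * x), (lambda x: 2 * x)   # arbitrary smooth test function
xs = np.linspace(0.0, 2.0, 201)                # slowly drifting inner variable
err_plain = track(f, df, xs, tau=0.1, linearized=False)
err_lin = track(f, df, xs, tau=0.1, linearized=True)   # orders of magnitude smaller
```

With stochastic oracles the same correction reduces the bias of the tracker, which is the mechanism behind the improved sample complexity.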
\begin{algorithm} \caption{Multi-level Nested Linearized Averaging Stochastic Gradient Method} \begin{algorithmic} \STATE Set $b_k=1$ in Algorithm~\ref{alg:originalnasa} and replace \eqnok{def_wi} with \beq\label{def_wi_new} w_i^{k+1} = (1 - \tau_k)w_i^k + \tau_k G_i^{k+1} + (J_i^{k+1})^\top(w_{i+1}^{k+1} - w_{i+1}^k), \qquad 1\leq i \leq T. \eeq \end{algorithmic} \label{alg:modifiednasa} \end{algorithm} To establish the rate of convergence of \cref{alg:modifiednasa}, we first need the next result, which provides the recursion on the errors in estimating the inner function values. \begin{lemma}\label{new_nasa_lemma_fn_w_error} Let $\{x^k\}_{k \ge 0}$ and $\{w^k_i\}_{k \ge 0}$ be generated by~\cref{alg:modifiednasa}. Define, for $1\leq i\leq T$, \begin{align} e_i^{k+1}:= f_i(w_{i+1}^k) - G_i^{k+1},&~\hat{e_i}^{k+1}:= \nabla f_i(w_{i+1}^k) - J_i^{k+1},\label{new_nasa_errors}\\ \hat A_{k,i} := f_i(w_{i+1}^{k+1}) - f_i(w_{i+1}^k&) - \nabla f_i(w_{i+1}^k)^\top (w_{i+1}^{k+1}-w_{i+1}^k) \label{new_nasa_A_ki}.
\end{align} \begin{itemize}[leftmargin=0.2in] \item [a)] Under~\cref{fi_lips}, we have, for~$1 \leq i \leq T$, \begin{align} & \|f_i(w_{i+1}^{k+1}) - w_i^{k+1}\|^2 \leq (1-\tau_k)\|f_i(w_{i+1}^k) - w_i^k\|^2 + \tau_k^2\|e_i^{k+1}\|^2 + \dot{r}_i^{k+1} \notag\\ & + \left[8 L_{ f_i}^2+ L_{\nabla f_i} \|f_i(w_{i+1}^k) - w_i^k\|+ \| \hat{e_i}^{k+1}\|^2\right] \|w_{i+1}^{k+1}-w_{i+1}^k\|^2, \label{new_nasa_fn_w_error_i}\\ \dot{r}^{k+1}_i &:=2\tau_k\langle e_i^{k+1}, \hat A_{k,i} + (1-\tau_k)(f_i(w_{i+1}^k) - w_i^k) + (\hat{e_i}^{k+1})^\top(w_{i+1}^{k+1} - w_{i+1}^k)\rangle \nonumber \\ &+2\langle (\hat{e_i}^{k+1})^\top(w_{i+1}^{k+1} - w_{i+1}^k), \hat A_{k,i}+(1-\tau_k)(f_i(w_{i+1}^k)-w_i^k)\rangle.\label{new_nasa_r_i} \end{align} \item [b)] Furthermore, we have for $1\leq i \leq T$, $\| w_i^{k+1} - w_i^k\|^2 \leq$ \begin{align*} & \tau_k^2\left[2\| f_i(w_{i+1}^k) - w_i^k\|^2 + \|e_i^{k+1}\|^2 + \frac{2}{\tau_k^2}\|J_i^{k+1}\|^2 \| w_{i+1}^{k+1} - w_{i+1}^k \|^2\right]+2 \ddot{r}^{k+1}_i, \end{align*} where $ \ddot{r}^{k+1}_i := \tau_k \langle -e_i^{k+1}, \tau_k(f_i(w_{i+1}^k) - w_i^k) + (J_i^{k+1})^\top(w_{i+1}^{k+1}-w_{i+1}^k)\rangle.$ \end{itemize} \end{lemma} \begin{proof} We first prove part a). 
For $1\leq i \leq T$ (recalling that $w_{T+1}^k \equiv x^k$), by definition of $\hat A_{k,i}, \hat{e_i}^{k+1},G_i^{k+1},w_i^{k+1}$, and $\dot{r}_i^{k+1}$, we have \begin{align} & \|f_i(w_{i+1}^{k+1}) - w_i^{k+1}\|^2 \notag\\ = & \|\hat A_{k,i} + f_i(w_{i+1}^k) + \nabla f_i(w_{i+1}^k)^\top (w_{i+1}^{k+1}-w_{i+1}^k) \notag \\ &\qquad- (1-\tau_k)w_i^k - \tau_kG_i^{k+1} - (J_i^{k+1})^\top(w_{i+1}^{k+1}-w_{i+1}^k)\|^2 \notag\\ = & \|\hat A_{k,i} + (\widehat{e_i}^{k+1})^\top(w_{i+1}^{k+1}-w_{i+1}^k) + (1-\tau_k)(f_i(w_{i+1}^k) - w_i^k) + \tau_ke_i^{k+1}\|^2 \label{fi_wi_modified_nasa}\\ =& \|(\widehat{e_i}^{k+1})^\top(w_{i+1}^{k+1}-w_{i+1}^k)\|^2 + \|\hat A_{k,i} + (1-\tau_k)(f_i(w_{i+1}^k)-w_i^k)\|^2 + \tau_k^2\|e_i^{k+1}\|^2 + \dot{r}_i^{k+1} \notag\\ \leq & \|\hat A_{k,i} + (1-\tau_k)(f_i(w_{i+1}^k) - w_i^k)\|^2 +\tau_k^2\|e_i^{k+1}\|^2 + \dot{r}_i^{k+1} + \|\widehat{e_i}^{k+1}\|^2\|w_{i+1}^{k+1}-w_{i+1}^k\|^2\notag\\ \leq & (1-\tau_k)\|f_i(w_{i+1}^k) - w_i^k\|^2+\|\hat A_{k,i}\|^2+ 2(1-\tau_k)\langle \hat A_{k,i}, f_i(w_{i+1}^k) - w_i^k\rangle +\tau_k^2\|e_i^{k+1}\|^2 \notag\\ &\qquad + \dot{r}_i^{k+1}+ \|\widehat{e_i}^{k+1}\|^2\|w_{i+1}^{k+1}-w_{i+1}^k\|^2.\label{fi_wi_modified_nasa2} \end{align} Now, noting that under~\cref{fi_lips}, we have \beq\label{Ak_bnd} \|\hat A_{k,i}\| \le \frac{1}{2}\min\left\{4 L_{f_i}\|w_{i+1}^{k+1}-w_{i+1}^k\|, L_{\nabla f_i}\|w_{i+1}^{k+1}-w_{i+1}^k\|^2 \right\}, \eeq and using the Cauchy–Schwarz inequality in \eqnok{fi_wi_modified_nasa2}, we obtain \cref{new_nasa_fn_w_error_i}.
To show part b), noting the definitions in \cref{def_wi_new} and \cref{new_nasa_errors} and using the Cauchy–Schwarz and Young inequalities, we have, for $1\leq i \leq T$, \begin{align*} &\| w_i^{k+1} - w_i^k\|^2 \\ =& \| \tau_k(G_i^{k+1}-w_i^k) + (J_i^{k+1})^\top(w_{i+1}^{k+1} - w_{i+1}^k)\|^2 \\ = &\tau_k^2\|G_i^{k+1} - w_i^k\|^2 + \| (J_i^{k+1})^\top(w_{i+1}^{k+1} - w_{i+1}^k)\|^2 + 2\tau_k\langle G_i^{k+1} - w_i^k, (J_i^{k+1})^\top(w_{i+1}^{k+1} - w_{i+1}^k)\rangle \\ \leq & \tau_k^2 \| G_i^{k+1} - w_i^k\|^2 + 2\|J_i^{k+1}\|^2 \|w_{i+1}^{k+1} - w_{i+1}^{k}\|^2 + \tau_k^2 \|f_i(w_{i+1}^k) - w_i^k\|^2 \\ &+ 2\tau_k\langle -e_i^{k+1}, (J_i^{k+1})^\top(w_{i+1}^{k+1} - w_{i+1}^k)\rangle \\ =& 2\tau_k^2 \| f_i(w_{i+1}^k) - w_i^k\|^2 + \tau_k^2 \|e_i^{k+1}\|^2 + 2\|J_i^{k+1}\|^2 \|w_{i+1}^{k+1} - w_{i+1}^k\|^2 \\ &+ 2\tau_k \langle -e_i^{k+1}, \tau_k(f_i(w_{i+1}^k) - w_i^k) + (J_i^{k+1})^\top(w_{i+1}^{k+1}-w_{i+1}^k)\rangle. \end{align*} \end{proof} \noindent In the next result, we show how the moments of $\|w_i^{k+1} - w_i^k\|$ decrease in the corresponding order of $\tau_k$. This is a crucial step in bounding the errors in estimating the inner function values.
\begin{lemma}\label{moment_bounds} Under~\cref{fi_lips} and \cref{assumption:original_nasa_assumption}, for $1\leq i \leq T$, and with the choice of $\tau_0=1$ (for simplicity), we have \begin{align} &\mathbb{E}[ \|f_i(w_{i+1}^{k+1}) - w_i^{k+1}\|^2 | \mathscr{F}_k] \leq \sigma_{G_i}^2+(4L_{f_i}^2+ \hat \sigma_{J_i}^2)c_{i+1} ,\label{fi-wi_bound}\\ &\mathbb{E}[\|w_i^{k+1} - w_i^k\|^2 | \mathscr{F}_k] \leq c_i~\tau^2_k, \label{new_nasa_conditional_expectation_w_T_second_power} \end{align} where, for $1 \le i \le T$, \begin{align} c_i := 3 \sigma_{G_i}^2+2(4L_{f_i}^2+\hat \sigma_{J_i}^2+\sigma_{J_i}^2) c_{i+1},~~~\text{with}~~~ c_{T+1} := \left(\prod_{i=1}^T\sigma_{J_i}^2\right) \beta^{-2}.\label{def_ci} \end{align} \end{lemma} \vgap \begin{proof} Recall the definitions of $\hat A_{k,i}, e_i^{k+1}, \hat{e}_i^{k+1}$ and, for $1\leq i\leq T$, define \begin{align} D_{k,i} := \hat A_{k,i} + \tau_ke_i^{k+1} + (\hat{e}_i^{k+1})^\top(w_{i+1}^{k+1}-w_{i+1}^k).\label{new_nasa_D_ki} \end{align} Then, by \eqnok{fi_wi_modified_nasa}, for $1\leq i\leq T$, we have \begin{align} f_i(w_{i+1}^{k+1}) - w_i^{k+1} & = (1-\tau_k)(f_i(w_{i+1}^k)-w_i^k) + D_{k,i},\label{new_nasa_fi_w_i_D_ki} \end{align} which, together with the convexity of $\|\cdot\|^2$, implies that \begin{align} \|f_i(w_{i+1}^{k+1})-w_i^{k+1}\|^2 &\leq (1-\tau_k)\|f_i(w_{i+1}^{k})-w_i^k\|^2 + \frac{1}{\tau_k}\|D_{k,i}\|^2.
\label{fi-wi_Dki} \end{align} Moreover, we have \begin{align} \|D_{k,i}\|^2 &= \|\hat A_{k,i}\|^2 + \tau_k^2 \|e_i^{k+1}\|^2 + \|(\hat{e}_i^{k+1})^\top(w_{i+1}^{k+1}-w_{i+1}^k)\|^2 +2 r'_{k,i}, \label{Dki_squared}\\ r'_{k,i} &= \langle \hat A_{k,i},\tau_k e_i^{k+1}+(\hat{e}_i^{k+1})^\top(w_{i+1}^{k+1}-w_{i+1}^k) \rangle + \tau_k \langle e_i^{k+1}, (\hat{e}_i^{k+1})^\top(w_{i+1}^{k+1}-w_{i+1}^k)\rangle,\nonumber \end{align} which, together with the fact that $\mathbb{E}[r'_{k,i}| \mathscr{F}_k]=0$ under~\cref{assumption:original_nasa_assumption}, implies that \begin{align} \mathbb{E}[\|D_{k,i}\|^2| \mathscr{F}_k] &= \mathbb{E}[\|\hat A_{k,i}\|^2| \mathscr{F}_k] + \tau_k^2 \mathbb{E}[\|e_i^{k+1}\|^2| \mathscr{F}_k] + \mathbb{E}[\|(\hat{e}_i^{k+1})^\top(w_{i+1}^{k+1}-w_{i+1}^k)\|^2| \mathscr{F}_k] \nonumber\\ &\le \tau_k^2 \mathbb{E}[\|e_i^{k+1}\|^2| \mathscr{F}_k] + \left(4L_{f_i}^2+\mathbb{E}[\|\hat{e}_i^{k+1}\|^2|\mathscr{F}_k]\right) \mathbb{E}[\|w_{i+1}^{k+1}-w_{i+1}^k\|^2| \mathscr{F}_k],\label{exp_Dki_squared} \end{align} where the inequality follows from \eqnok{Ak_bnd} and the conditional independence in~\cref{assumption:original_nasa_assumption}. Hence, noting \eqnok{dk_bnd} and $w_{T+1}^{k} =x^k$, we have \begin{align*} \mathbb{E}[\|D_{k,T}\|^2 | \mathscr{F}_k] & \leq \tau_k^2\left[\sigma_{G_T}^2+\left(4L_{f_T}^2+\hat \sigma_{J_T}^2\right)\left(\prod_{i=1}^T\sigma_{J_i}^2\right) \beta^{-2}\right]. \end{align*} Using \eqnok{fi-wi_Dki} with $i=T$, the above inequality, and~\cref{Gamma_lemma} with the choice of $\tau_0=1$, we have \beq\label{fT_squared_bnd} \mathbb{E}[\|f_T(x^{k})-w_T^{k}\|^2 | \mathscr{F}_k] \leq \sigma_{G_T}^2+\left(4L_{f_T}^2+\hat \sigma_{J_T}^2\right)\left(\prod_{i=1}^T\sigma_{J_i}^2\right) \beta^{-2}.
\eeq Moreover, by \cref{new_nasa_lemma_fn_w_error}.b) and under \cref{assumption:original_nasa_assumption}, we have $\mathbb{E}[\|w_i^{k+1}-w_i^k\|^2 | \mathscr{F}_k] \le $ \beq\label{wi_ineq} \tau_k^2\mathbb{E}\left[2\| f_i(w_{i+1}^k) - w_i^k\|^2 + \|e_i^{k+1}\|^2 + \frac{2}{\tau_k^2}\|J_i^{k+1}\|^2 \| w_{i+1}^{k+1} - w_{i+1}^k \|^2 \Big| \mathscr{F}_k\right], \eeq implying that \beq \mathbb{E}[\|w_T^{k+1}-w_T^k\|^2 | \mathscr{F}_k] \le \tau_k^2\left[3 \sigma_{G_T}^2+2(4L_{f_T}^2+\hat \sigma_{J_T}^2+\sigma_{J_T}^2)\left(\prod_{i=1}^T\sigma_{J_i}^2\right) \beta^{-2}\right]. \eeq This completes the proof of~\cref{fi-wi_bound} and \cref{new_nasa_conditional_expectation_w_T_second_power} when $i=T$. We now use backward induction to complete the proof. By the above result, the base case of $i=T$ holds. Assume that $\mathbb{E}[\|w_{i+1}^{k+1}-w_{i+1}^k\|^2|\mathscr{F}_k] \leq c_{i+1}\tau_k^2$ for some $1\leq i < T$. Hence, by \cref{Dki_squared} and under~\cref{assumption:original_nasa_assumption}, we have \begin{align*} \mathbb{E}[\|D_{k,i}\|^2|\mathscr{F}_k] \leq \tau_k^2[\sigma_{G_i}^2+(4L_{f_i}^2+ \hat \sigma_{J_i}^2)c_{i+1}], \end{align*} which, together with \cref{Gamma_lemma}, implies that \begin{align*} \mathbb{E}[\|f_i(w_{i+1}^{k})-w_i^{k}\|^2 | \mathscr{F}_k] \leq \sigma_{G_i}^2+(4L_{f_i}^2+ \hat \sigma_{J_i}^2)c_{i+1}. \end{align*} Thus, by \cref{wi_ineq}, we obtain \begin{align*} \mathbb{E}[\|w_i^{k+1}-w_i^k\|^2|\mathscr{F}_k] & \leq \tau_k^2[3\sigma_{G_i}^2+2(4L_{f_i}^2+ \hat \sigma_{J_i}^2+\sigma^2_{J_i})c_{i+1}], \end{align*} which, together with the definition of $c_i$ in \eqnok{def_ci}, completes the proof. \end{proof} \vgap \noindent The next result is the counterpart of \cref{original_nasa_merit_function_lemma} for \cref{alg:modifiednasa}. \begin{lemma} \label{new_nasa_merit_function_lemma} Recall the definition of the merit function in~\cref{merit_function}. Define $w^k:= (w_1^k,\dots,w_T^k)$ for $k \geq 0$.
Let $\{x^k,z^k,u^k,w_1^k,\dots,w_T^k\}_{k \geq 0}$ be the sequence generated by~\cref{alg:modifiednasa}. Suppose that \begin{align} \gamma_1 \ge \lambda >0, \quad \beta >\lambda, \quad (\beta-\lambda ) (\gamma_j - \lambda) \ge 4 T C_j^2 , \qquad j \in \{2,\ldots,T\}, \label{new_nasa_merit_function_upperbound_assumption} \end{align} where the $C_j$'s are defined in Lemma~\ref{FZ_difference}. Then, under~\cref{fi_lips} and~\cref{assumption:original_nasa_assumption}, we have \begin{align} \label{modified_nasa_main_rec} &\lambda \sum_{k=0}^{N-1}\tau_k\left[\|d^k\|^2 + \sum_{i=1}^{T-1}\|f_i(w_{i+1}^k) - w_i^k\|^2 + \|f_T(x^k) - w_T^k\|^2 \right] \notag\\ &\leq W_\gamma(x^0,z^0,w^0) + \sum_{k=0}^{N-1} \hat{R}^{k+1}, \end{align} where, for any $k \geq 0$, \begin{align} \hat{R}^{k+1} & := \left(\sum_{i=1}^T\gamma_i\hat{r}_i^{k+1}\right) + \frac{\tau_k^2}{2}\left(L_{\nabla F} + L_{\nabla \eta}\right)\|d^k\|^2 + \tau_k\langle d^k,\Delta_k\rangle + \frac{L_{\nabla \eta}}{2}\|z^{k+1} - z^k\|^2, \label{def_Rhatk}\\ \hat{r}_i^{k+1} & = \left[8 L_{ f_i}^2+ L_{\nabla f_i} \|f_i(w_{i+1}^k) - w_i^k\|+ \| \hat{e_i}^{k+1}\|^2\right] \|w_{i+1}^{k+1}-w_{i+1}^k\|^2 \notag\\ &+\tau_k^2\|e_i^{k+1}\|^2 + \dot{r}_i^{k+1},\notag \end{align} and $\Delta_k$ and $\dot{r}_i^{k+1}$ are, respectively, defined in \eqnok{def_deltak} and \eqnok{new_nasa_r_i}. Furthermore, notice that~\cref{new_nasa_merit_function_upperbound_assumption} is satisfied when we pick {\color{black} \begin{align}\label{new_nasa_merit_function_gamma_choice} \gamma_1 = \lambda = \sqrt{T}, \qquad \beta = 2\sqrt{T}, \qquad \gamma_j = \sqrt{T}(1 + 4 C_j^2), \qquad 2\le j \le T. \end{align} } \end{lemma} \begin{proof} Noting \cref{new_nasa_lemma_fn_w_error} and the definition of $\hat{r}_i^{k+1}$, we have, $\forall i \in \{1,2,\ldots, T\},$ \begin{align*} \|f_i(w_{i+1}^{k+1}) - w_i^{k+1}\|^2 - \|f_i(w_{i+1}^k) - w_i^k\|^2 & \leq -\tau_k\|f_i(w_{i+1}^k) - w_i^k\|^2 + \hat{r}_i^{k+1}.
\end{align*} Combining the above inequalities with \eqnok{F_lips} and \eqnok{eta_lips}, noting the definition of the merit function in \eqnok{merit_function}, and in view of \cref{FZ_difference}, we obtain \begin{align*} & W_\gamma(x^{k+1},z^{k+1},w^{k+1}) - W_\gamma(x^k,z^k,w^k) \\ \leq & -\beta \tau_k\|d^k\|^2 + \sum_{j=2}^{T-1}\tau_kC_j\|d^k\| \|f_j(w_{j+1}^k)-w_j^k\| + \tau_kC_T\|d^k\|\|f_T(x^k)-w_T^k\| \\ -& \sum_{j=1}^{T-1}\gamma_j\tau_k\|f_j(w_{j+1}^k) - w_j^k\|^2 - \gamma_T\tau_k\|f_T(x^k) - w_T^k\|^2 + \hat{R}^{k+1}. \end{align*} {\color{black}Now, if condition \cref{new_nasa_merit_function_upperbound_assumption} holds, then for any $i \in \{1,\ldots,T\}$, we have \begin{align*} &-\frac{\beta}{T}\|d^k\|^2 + C_i\|d^k\| \|f_i(w_{i+1}^k)-w_i^k\|- \gamma_i\|f_i(w_{i+1}^k) - w_i^k\|^2 \\ \le & -\lambda \big[\frac{1}{T}\|d^k\|^2 + \|f_i(w_{i+1}^k) - w_i^k\|^2\big]. \end{align*} Combining the above inequalities, we obtain \begin{align*} & W_\gamma(x^{k+1},z^{k+1},w^{k+1}) - W_\gamma(x^k,z^k,w^k) \\ \leq& -\lambda \tau_k \Big[\|d^k\|^2 + \sum_{j=1}^{T-1}\|f_j(w_{j+1}^k) - w_j^k\|^2 + \|f_T(x^k)-w_T^k\|^2\Big] + \hat{R}^{k+1}. \end{align*} } Thus, by summing up the above inequalities and re-arranging the terms, we obtain \eqnok{modified_nasa_main_rec}. Finally, it is easy to see that \cref{new_nasa_merit_function_upperbound_assumption} holds by picking the parameters as in~\cref{new_nasa_merit_function_gamma_choice}. \end{proof} \vgap In the next result, we show that the error terms on the right-hand side of \eqnok{modified_nasa_main_rec} are bounded in the order of $\sum_{k=1}^N \tau_k^2$ in expectation. \begin{proposition} \label{new_nasa_big_proposition} Let $\hat{R}^k$ be defined in \eqnok{def_Rhatk}.
Then, under \cref{assumption:original_nasa_assumption}, we have \[ \mathbb{E}[\hat{R}^{k+1} | \mathscr{F}_k] \leq \hat{\sigma}^2 \tau_k^2, \qquad \forall k \geq 1, \] where \begin{align} \hat{\sigma}^2 & := \sum_{i=1}^{T}\gamma_i\left(\left[8L_{f_i}^2+L_{\nabla f_i}\sqrt{\sigma_{G_i}^2+(4L_{f_i}^2+ \hat \sigma_{J_i}^2)c_{i+1}}+ \hat \sigma_{J_i}^2 \right]{c}_{i+1} + \sigma_{G_i}^2\right) \notag\\ &+ \frac{1}{2\beta^2}\left(\prod_{i=1}^T\sigma_{J_i}^2\right)[(1+4\beta^2) L_{\nabla \eta}+L_{\nabla F}]. \label{new_nasa_big_proposition_sigma_squared} \end{align} \end{proposition} \begin{proof} Under \cref{assumption:original_nasa_assumption}, we have, for $1 \leq i \leq T$, \[ \mathbb{E}[\Delta_k | \mathscr{F}_k] = 0, \quad \mathbb{E}[\dot{r}_i^{k+1}|\mathscr{F}_k] = 0, \quad \mathbb{E}[ \| {e_i}^{k+1}\|^2|\mathscr{F}_k] \le \sigma_{G_i}^2, \quad \mathbb{E}[ \| \hat{e_i}^{k+1}\|^2|\mathscr{F}_k] \le \hat \sigma_{J_i}^2. \] Moreover, by \cref{moment_bounds} and Hölder's inequality, we have $\mathbb{E}[\|w_i^{k+1} - w_i^k\|^2 | \mathscr{F}_k] \leq c_i~\tau^2_k$ and \begin{align*} &\mathbb{E}[ \|f_i(w_{i+1}^{k+1}) - w_i^{k+1}\| \|w_{i+1}^{k+1} - w_{i+1}^k\|^2| \mathscr{F}_k] \\ \le& \left(\mathbb{E}[ \|f_i(w_{i+1}^{k+1}) - w_i^{k+1}\|^2 | \mathscr{F}_k]\right)^{\frac12} \mathbb{E}[\|w_{i+1}^{k+1} - w_{i+1}^k\|^2 | \mathscr{F}_k]\\ \leq& c_{i+1} \sqrt{\sigma_{G_i}^2+(4L_{f_i}^2+ \hat \sigma_{J_i}^2)c_{i+1}}~\tau^2_k. \end{align*} The result then follows by noting the definition of $\hat \sigma^2$ in \cref{new_nasa_big_proposition_sigma_squared}. \end{proof} \vgap We are now ready to state the convergence rates via the following theorem. \begin{theorem}\label{optimal_theorem} Suppose that $\{ x^k, z^k \}_{k\geq 0}$ are generated by~\cref{alg:modifiednasa}, and that~\cref{fi_lips} and~\cref{assumption:original_nasa_assumption} hold. Also assume that the parameters satisfy~\cref{new_nasa_merit_function_gamma_choice} and the step sizes $\{\tau_k\}$ satisfy \eqnok{gamma_condition}.
\begin{itemize} \item [(a)] The results in parts a) and b) of \cref{nasa_main_theorem} still hold with $\sigma^2$ replaced by $\hat \sigma^2$. \item [(b)] If $\tau_k$ is set as in \eqnok{def_tau}, the results of part c) of \cref{nasa_main_theorem} also hold with $\hat \sigma^2$ replacing $\sigma^2$. \end{itemize} \end{theorem} \begin{proof} The proof follows from the same arguments as in the proof of \cref{nasa_main_theorem}, noticing \eqnok{modified_nasa_main_rec} and \cref{new_nasa_big_proposition}; hence, we skip the details. \end{proof} \begin{remark}\label{remark:rmk2} Note that \cref{alg:modifiednasa} does not use a mini-batch of samples in any iteration. Thus, \eqnok{nasa_main_theorem3} (in which $\sigma^2$ is replaced by $\hat \sigma^2$) implies that the total sample complexity of \cref{alg:modifiednasa} for finding an $\epsilon$-stationary point of \cref{eq:mainprob} is bounded by ${\cal O}(c^T T^4/\epsilon^4)={\cal O}_T(1/\epsilon^4)$, which improves in order of magnitude on the complexity bound of \cref{alg:originalnasa}. Furthermore, this bound matches the complexity bound obtained in \cite{GhaRuswan20} for the two-level composite problem, which is in turn of the same order as the bound for single-level smooth stochastic optimization. Finally, it is worth noting that this complexity bound for \cref{alg:modifiednasa} is obtained without assuming boundedness of the feasible set or any dependence of the parameter $\beta$ on the Lipschitz constants. Indeed, $\beta$ can be set to any positive number in the order of ${\cal O}(\sqrt{T})$ due to \eqnok{new_nasa_merit_function_gamma_choice}, and $\tau_k$ depends only on the total number of iterations $N$ due to \eqnok{def_tau}. This makes \cref{alg:modifiednasa} parameter-free and easy to implement. \end{remark} \section{Multi-level Nested Averaging Stochastic Gradient Method}\label{sec:originalnasa} In this section, we present our first algorithm for solving problem \eqnok{eq:mainprob}.
As mentioned in~\cref{sec:intro}, previously proposed stochastic gradient-type methods suffer in terms of convergence rate when applied to this problem~\cite{yang2019multi-level}. The main reason is the increased bias in estimating the stochastic gradient of $F$ when $T \ge 2$. Our proposed algorithm has a multi-level structure -- in addition to estimating the gradient of $F$, we also estimate the values of the inner functions $f_i$ by a mini-batch moving average technique, extending the approach in~\cite{GhaRuswan20} to any $T>1$. This enables us to provide an algorithm with improved convergence rates to stationary points compared to the prior work~\cite{yang2019multi-level}. Our approach is formally presented in~\cref{alg:originalnasa}. \begin{algorithm}[ht] \caption{Multi-level Nested Averaging Stochastic Gradient Method} \begin{algorithmic} \STATE \textbf{Input:} \textcolor{black}{A positive integer sequence $\{b_k\}_{k \ge 0}$, a step-size sequence $\{\tau_k\}_{k \ge 0} \subset (0,1]$, step-size parameter $\beta$, initial points $x^0 \in X$, $z^0 \in \bbr^{d_T}$, and $w_i^0 \in \bbr^{d_{i-1}}$ for $1 \le i \le T$, and a probability mass function $P_R(\cdot)$ supported over $\{1,2,\ldots,N\}$, where $N$ is the number of iterations}. \STATE 0. Generate a random integer number $R$ according to $P_R(\cdot)$. \FOR{$k = 0,1, 2, \dots, R$} \STATE 1. Compute \begin{equation} \label{def_uk} u^k = \argmin_{y \in X}\ \left\{\langle z^k, y-x^k \rangle + \frac{\beta}{2} \|y-x^k\|^2\right\}, \end{equation} stochastic gradients $J_i^{k+1}$ and function values $G_{i,j}^{k+1}$ at $w_{i+1}^k$ for $i \in \{1,\dots,T\}$ and $j \in \{1,\dots,b_k\}$, denoting $w_{T+1}^k \equiv x^k$. \STATE 2.
Set \begin{align} x^{k+1} &= (1 - \tau_k)x^k + \tau_k u^k, \label{def_xk}\\ z^{k+1} &= (1 - \tau_k)z^k + \tau_k \prod_{i=1}^TJ_{T+1-i}^{k+1}, \label{def_zk}\\ w_i^{k+1} &= (1 - \tau_k)w_i^k + \tau_k \bar G_i^{k+1}, \qquad 1 \le i \le T,\label{def_wi} \end{align} where \beq\label{def_barG} \bar G_i^{k+1} = \frac{1}{b_k}\sum_{j=1}^{b_k} G_{i,j}^{k+1}. \eeq \ENDFOR \STATE \textbf{Output:} $(x^R, z^R, w_1^R, \ldots, w_T^R)$. \end{algorithmic} \label{alg:originalnasa} \end{algorithm} We now add a few remarks about \cref{alg:originalnasa}. First, note that at each iteration of this algorithm, we update the triple $(x^k, \{w_i^k\}_{i=1}^T, z^k)$, consisting of running averages of, respectively, the solutions to subproblem \eqnok{def_uk}, the sampled inner function values $f_i$, and the sampled stochastic gradients of $F$ at these points. It should be mentioned that we do not need to estimate the values of the outer function $f_1$. However, we include $w^k_1$ for the sake of completeness. Second, when $T=2$ and $b_k=1$, this algorithm reduces to the NASA algorithm presented in \cite{GhaRuswan20}. Indeed, \cref{alg:originalnasa} is a direct generalization of the NASA method to the multi-level case $T \ge 3$. However, to prove convergence of \cref{alg:originalnasa} when $T >2$, we need to take a mini-batch of samples in each iteration to reduce the noise associated with the estimation of the inner function values. We now provide our convergence analysis for~\cref{alg:originalnasa}. To do so, we define the filtration $$\mathscr{F}_k:=\sigma(\{x^0,\ldots, x^k, z^0,\ldots, z^k, w_1^0,\ldots,w_1^k, \dots, w_T^0, \dots, w_T^k, u^0,\ldots, u^k\}).$$ Next, we state our main assumptions on the individual functions and the stochastic first-order oracle we use. \begin{assumption}\label{fi_lips} All functions $f_1,\dots,f_T$ and their derivatives are Lipschitz continuous with Lipschitz constants $L_{f_i}$ and $L_{\nabla f_i}$, respectively.
\end{assumption} \begin{assumption}\label{assumption:original_nasa_assumption} Denote $w_{T+1}^k \equiv x^k$. For each $k$, $w_{i+1}^k$ being the input, the stochastic oracle outputs $G_{i}^{k+1} \in \mathbb{R}^{d_i}$ and $J_{i}^{k+1} \in \mathbb{R}^{d_i \times d_{i-1}}$ such that \begin{enumerate} \item For $i \in \{1,\ldots,T\}$, we have $\mathbb{E}[J_{i}^{k+1}| \mathscr{F}_k] = \nabla f_i(w_{i+1}^k)$ and $\mathbb{E}[G_{i}^{k+1}| \mathscr{F}_k] = f_i(w_{i+1}^k)$. \item For $i \in \{1,\ldots,T\}$, we have $\mathbb{E}[\| G_i^{k+1} - f_i(w_{i+1}^k) \|^2 | \mathscr{F}_k] \leq \sigma_{G_i}^2$, $\mathbb{E}[\| J_i^{k+1} - \nabla f_i(w_{i+1}^k) \|^2 | \mathscr{F}_k] \leq \hat \sigma_{J_i}^2$, and $\mathbb{E}[\|J_i^{k+1}\|^2 | \mathscr{F}_k] \leq \sigma_{J_i}^2$. Here $\|\cdot\|$ denotes the Euclidean norm for vectors and the Frobenius norm for matrices. \item Given $\mathscr{F}_k$, the outputs of the stochastic oracle at each level $i$, $G_{i}^{k+1}$ and $J_{i}^{k+1}$, are independent. \item Given $\mathscr{F}_k$, the outputs of the stochastic oracle are independent between levels, i.e., $\{G_{i}^{k+1}\}_{i=1,\ldots,T}$ are independent and so are $\{J_{i}^{k+1}\}_{i=1,\ldots,T}$. \end{enumerate} \end{assumption} \cref{fi_lips} is a standard smoothness assumption in the nonlinear optimization literature. Similarly, Parts 1 and 2 of~\cref{assumption:original_nasa_assumption} are standard unbiasedness and bounded-variance assumptions on the stochastic oracle outputs, common in the literature. At this point, we re-emphasize that the assumptions made in~\cite{zhang2019multi} are stronger than our assumptions above, as they require mean-square smoothness of the individual random functions $G_i$ and their gradients. Parts 3 and 4 are also essential to establish the convergence results in the multi-level case; similar assumptions have been made, for example, in~\cite{yang2019multi-level}.
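To fix ideas, the following illustrative sketch (not part of the formal development; the toy instance and all names are our own) instantiates the updates \eqnok{def_uk}--\eqnok{def_wi} for the unconstrained case $X = \bbr^{d}$, where \eqnok{def_uk} has the closed-form solution $u^k = x^k - z^k/\beta$. It uses a noiseless two-level oracle with $f_2(x) = Ax$ and $f_1(y) = \tfrac12\|y\|^2$, so that $F(x) = \tfrac12\|Ax\|^2$ and $\nabla F(x) = A^\top A x$.

```python
import numpy as np

# Illustrative sketch of the updates in Algorithm 1 for the unconstrained case
# X = R^d, where subproblem (def_uk) has the closed form u^k = x^k - z^k/beta.
# Toy two-level instance (T = 2) with a *noiseless* stand-in oracle:
#   f2(x) = A x,  f1(y) = 0.5*||y||^2,  so  F(x) = 0.5*||A x||^2 and
#   grad F(x) = A^T A x.  All names below are hypothetical, not from the paper.
def nasa_sketch(A, x0, beta=10.0, tau=0.1, num_iters=500):
    x = x0.astype(float).copy()
    z = np.zeros_like(x)         # moving average of sampled gradients (def_zk)
    w2 = np.zeros(A.shape[0])    # moving average estimating f2(x)     (def_wi)
    for _ in range(num_iters):
        u = x - z / beta         # closed-form solution of (def_uk)
        G2 = A @ x               # oracle sample of f2(x), batch size b_k = 1
        grad_sample = A.T @ w2   # grad f2(x) * grad f1(w2), with grad f1(w2) = w2
        x = (1.0 - tau) * x + tau * u             # (def_xk)
        z = (1.0 - tau) * z + tau * grad_sample   # (def_zk)
        w2 = (1.0 - tau) * w2 + tau * G2          # (def_wi)
    return x, z, w2
```

On this toy instance, $w_2^k$ tracks $f_2(x^k)$ and $z^k$ tracks $\nabla F(x^k)$ while $x^k$ approaches the stationary point $x=0$, mirroring the behavior quantified by the convergence results below.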
In the next couple of technical results, we provide some properties of composite functions that are required for our subsequent results. \begin{lemma} \label{F_Lipschitz_Constant} Define $F_i(x) = f_i \circ f_{i+1} \circ \cdots \circ f_T(x)$. Under~\cref{fi_lips}, the gradient of $F_i$ is Lipschitz continuous with constant \[ L_{\nabla F_i} = \sum_{j=i}^T \left[L_{\nabla f_j} \prod_{l=i}^{j-1} L_{f_l} \prod_{l=j+1}^T L_{f_l}^2\right]. \] \begin{proof} We show the result by backward induction. Under~\cref{fi_lips}, the gradient of $F_T=f_T$ is Lipschitz continuous and so is that of $F_{T-1}$ since for any $x, y \in X$, we have \begin{align*} \| \nabla F_{T-1}(x) - \nabla F_{T-1}(y)\| &= \|\nabla f_T(x)\nabla f_{T-1}(f_T(x)) - \nabla f_T(y)\nabla f_{T-1}(f_T(y))\| \\ & \leq \| \nabla f_T(x)\| \| \nabla f_{T-1}(f_T(x)) - \nabla f_{T-1}(f_T(y))\| \\ &+ \| \nabla f_{T-1}(f_T(y))\| \| \nabla f_T(x) - \nabla f_T(y)\| \\ & \leq (L_{f_T}^2L_{\nabla f_{T-1}} + L_{f_{T-1}}L_{\nabla f_T})\|x -y\|. \end{align*} Now, suppose that the gradient of $F_{i+1}$ is Lipschitz continuous for some $i \le T-1$. Then, similar to the above relation, $\nabla F_i$ is Lipschitz continuous with constant \begin{align*} L_{\nabla F_i} =& L_{F_{i+1}}^2 L_{\nabla f_i} + L_{f_i}L_{\nabla F_{i+1}}\\ = &L_{\nabla f_i} \prod_{j=i+1}^TL_{f_j}^2 + L_{f_i}\sum_{j=i+1}^T \left[L_{\nabla f_j} \prod_{l=i+1}^{j-1} L_{f_l} \prod_{l=j+1}^T L_{f_l}^2\right] \\ =&\sum_{j=i}^T \left[L_{\nabla f_j} \prod_{l=i}^{j-1} L_{f_l} \prod_{l=j+1}^T L_{f_l}^2\right], \end{align*} where $L_{F_{i+1}} = \prod_{j=i+1}^T L_{f_j}$ denotes the Lipschitz constant of $F_{i+1}$. \end{proof} \end{lemma} We remark that the above result has also been proved in Lemma 5.2 of~\cite{zhang2019multi}, with a slightly different proof. \begin{lemma} \label{new_nasa_grad_F_z_hat_error} Define $F_i(x) = f_i \circ f_{i+1} \circ \cdots \circ f_T(x)$ and the gradient term $\nabla \bar f_i(x,\bar w_i) = \nabla f_T(x) \nabla f_{T-1}(w_T)\cdots\nabla f_i(w_{i+1})$ with $\bar w_i = (w_{i+1},\ldots, w_T)$ for any $x \in X$ and $w_j \in \bbr^{d_j}$, $j=i+1,\ldots, T$.
Then under~\cref{fi_lips}, we have \begin{align*} \|\nabla F_i(x) - \nabla \bar f_i(x,\bar w_i)\| \leq \sum_{j=i}^{T-1}\frac{L_{\nabla f_j}}{L_{f_j}}L_{f_i} \cdots L_{f_T}\|F_{j+1}(x) - w_{j+1} \|. \end{align*} \end{lemma} \begin{proof} We show the result by backward induction. The case $i=T$ is trivial. When $i=T-1$, under~\cref{fi_lips}, we have \begin{align*} \| \nabla F_{T-1}(x) - \nabla f_T(x)\nabla f_{T-1}(w_T)\| & = \| \nabla f_T(x) [\nabla f_{T-1}(f_T(x))- \nabla f_{T-1}(w_T)]\| \\ &\leq L_{\nabla f_{T-1}}L_{f_T}\|f_T(x) - w_T\|. \end{align*} Now assume that for some $i \le T-2$, \begin{align*} \|\nabla F_{i+1}(x) - \nabla \bar f_{i+1}(x,\bar w_{i+1} ) \| \leq \sum_{j=i+1}^{T-1}\frac{L_{\nabla f_j}}{L_{f_j}}L_{f_{i+1}} \cdots L_{f_T}\| F_{j+1}(x) - w_{j+1} \|. \end{align*} We then have \begin{align*} &\|\nabla F_i(x) - \nabla \bar f_i(x,\bar w_i ) \| = \|\nabla F_{i+1}(x)\nabla f_i(F_{i+1}(x)) - \nabla \bar f_i(x, \bar w_i )\|\\ &\le \|\nabla f_i(F_{i+1}(x))\| \|\nabla F_{i+1}(x) - \nabla \bar f_{i+1}(x, \bar w_{i+1} )\|\\ &+\|\nabla \bar f_{i+1}(x, \bar w_{i+1})\|\|\nabla f_i(F_{i+1}(x))-\nabla f_i(w_{i+1})\| \\ &\le L_{f_i} \|\nabla F_{i+1}(x) - \nabla \bar f_{i+1}(x, \bar w_{i+1})\|+L_{\nabla f_i} L_{f_{i+1}} \cdots L_{f_T} \|F_{i+1}(x)-w_{i+1}\|\\ &\le L_{f_i} \sum_{j=i+1}^{T-1}\frac{L_{\nabla f_j}}{L_{f_j}}L_{f_{i+1}} \cdots L_{f_T}\| F_{j+1}(x) - w_{j+1} \|\\ &+L_{\nabla f_i} L_{f_{i+1}} \cdots L_{f_T} \|F_{i+1}(x)-w_{i+1}\|=\sum_{j=i}^{T-1}\frac{L_{\nabla f_j}}{L_{f_j}}L_{f_i} \cdots L_{f_T}\| F_{j+1}(x) - w_{j+1} \|. \end{align*} \end{proof} \begin{lemma} \label{new_nasa_fjf_T_wj_error} Under~\cref{fi_lips}, for any $ j \in \{1,\ldots, T-1\}$, we have \begin{align*} \|f_j \circ \cdots \circ f_T(x) - w_j\| & \leq \|f_j(w_{j+1}) - w_j\| + \sum_{\ell = j+1}^T\left(\prod_{i = j}^{\ell - 1}L_{f_i}\right)\|f_{\ell}(w_{\ell+1}) - w_{\ell}\|. \end{align*} \end{lemma} \begin{proof} We show the result by backward induction.
For $j = T-1$, we have \begin{align*} &\|f_{T-1} \circ f_T(w_{T+1}) - w_{T-1} \| \\ &\leq \| f_{T-1} \circ f_T(w_{T+1}) - f_{T-1}(w_T) \| + \|f_{T-1}(w_T) - w_{T-1}\| \\ & \leq L_{f_{T-1}}\|f_T(w_{T+1}) - w_T\| + \| f_{T-1}(w_T) - w_{T-1}\|. \end{align*} Now suppose the result holds for $j+1$ for some $j \in \{1,\ldots, T-2\}$. Then, we have \begin{align*} & \| f_j \circ f_{j+1} \circ \cdots \circ f_T(w_{T+1}) - w_j\| \\ \leq & \|f_{j} \circ \cdots \circ f_T(w_{T+1}) - f_j(w_{j+1}) + f_j(w_{j+1}) - w_j\| \\ \leq & L_{f_j}\|f_{j+1} \circ \cdots \circ f_T(w_{T+1}) - w_{j+1}\| + \| f_j(w_{j+1}) - w_j\| \\ \leq & L_{f_j}\left[\| f_{j+1}(w_{j+2}) - w_{j+1}\| + \sum_{\ell = j+2}^T\left(\prod_{i=j+1}^{\ell -1}L_{f_i}\right)\|f_{\ell}(w_{\ell+1}) - w_{\ell}\|\right] \\ & + \|f_j(w_{j+1}) - w_j\| \\ = & \|f_j(w_{j+1}) - w_j\| + \sum_{\ell = j+1}^T\left(\prod_{i=j}^{\ell-1}L_{f_i}\right)\|f_{\ell}(w_{\ell+1}) - w_{\ell}\|, \end{align*} where the third inequality follows from the induction hypothesis. \end{proof} \begin{lemma} \label{FZ_difference} Define \begin{align*} R_1 & = L_{\nabla f_1}L_{f_2} \cdots L_{f_T}, \qquad R_j = L_{f_1} \cdots L_{f_{j-1}}L_{\nabla f_j}L_{f_{j+1}} \cdots L_{f_T} \quad 2 \le j \leq T-1, \\ C_j &= \sum_{i=1}^{j-1} R_i \left(\prod_{l=i+1}^{j-1}L_{f_l}\right) \quad 2 \le j \leq T, \end{align*} so that, in particular, $C_2 = R_1$. Assume that~\cref{fi_lips} holds.
Then for $T \geq 3$, \beq \left\| \nabla F(x) - \nabla f_T(x) \prod_{i=2}^T \nabla f_{T+1-i}(w_{T+2-i}) \right\| \leq \sum_{j=2}^{T-1} C_j\|f_j(w_{j+1}) - w_j\| + C_T\|f_T(x) - w_T\|. \eeq \end{lemma} \begin{proof} By~\cref{new_nasa_grad_F_z_hat_error} and \cref{new_nasa_fjf_T_wj_error}, we have \begin{align*} & \left\| \nabla F(x) - \nabla f_T(x) \prod_{i=2}^T \nabla f_{T+1-i}(w_{T+2-i}) \right\| \leq \sum_{j=1}^{T-1}R_j\|f_{j+1} \circ \cdots \circ f_T(x) - w_{j+1}\| \\ & = \sum_{j=1}^{T-2}R_j\|f_{j+1} \circ \cdots \circ f_T(x) - w_{j+1}\| + R_{T-1}\|f_T(x) - w_T\| \\ & \le \sum_{j=1}^{T-2}R_j\|f_{j+1}(w_{j+2}) - w_{j+1}\| + \sum_{j=1}^{T-2}R_j\sum_{\ell = j+2}^T\left(\prod_{i=j+1}^{\ell-1}L_{f_i}\right)\|f_{\ell}(w_{\ell +1}) -w_{\ell}\| \\ &+ R_{T-1}\|f_T(x) - w_T\|. \end{align*} {\color{black} Aggregating the constants multiplying each $\|f_j(w_{j+1})-w_j\|$, we get the result.} \end{proof} \vgap \textcolor{black}{The following result shows the Lipschitz continuity of the gradient of the objective function} of the subproblem \eqnok{def_uk}. See \cite{GhaRuswan20} for a simple proof. \begin{lemma}\label{eta_lips_lemma} Let $\eta(x,z)$ be defined as \[ \eta(x,z) = \min_{y \in X}\ \left\{\langle z, y-x \rangle + \frac{\beta}{2} \|y-x\|^2\right\}. \] Then the gradient of $\eta$ w.r.t. $(x,z)$ is Lipschitz continuous with constant \[L_{\nabla \eta} = 2\sqrt{(1+\beta)^2+(1+\tfrac{1}{2\beta})^2}.\] \end{lemma} \vgap \noindent In the next result, we provide a recursion inequality for the error in estimating $f_i(w_{i+1})$ by $w_i$. \begin{lemma} \label{nasa_lemma_fn_w_error} Let $\{x^k\}_{k \ge 0}$ and $\{w^k_i\}_{k \ge 0}$, $1 \le i \le T$, be generated by~\cref{alg:originalnasa}. Denote \beq \label{nasa_A_ki} d^k = u^k - x^k, \qquad w_{T+1}^k \equiv x^k \quad \forall k \ge 0, \qquad A_{k,i} = f_i(w_{i+1}^{k+1}) - f_i(w_{i+1}^k) \quad 1 \le i \le T.
\eeq \begin{itemize}[leftmargin=0.1in] \item [a)] For any $i \in \{1,\ldots, T\}$, \begin{align} & \| f_i(w_{i+1}^{k+1}) - w_i^{k+1}\|^2 \leq (1-\tau_k)\| f_i(w_{i+1}^k) - w_i^k\|^2 + \frac{1}{\tau_k}\|A_{k,i}\|^2 + \tau_k^2\|e_i^{k+1}\|^2 + r_i^{k+1},\label{nasa_fn_w}\\ &\|w_i^{k+1} - w_i^k\|^2 \leq \tau_k^2\left[\| f_i(w_{i+1}^k) - w_i^k\|^2 + \|e_i^{k+1}\|^2 - 2\langle e_i^{k+1},f_i(w_{i+1}^k) - w_i^k\rangle \right],\label{nasa_wi_increment_squared} \end{align} where \beq\label{nasa_ri} r_i^{k+1} = 2\tau_k \langle e_i^{k+1}, A_{k,i} + (1-\tau_k)(f_i(w_{i+1}^k) - w_i^k) \rangle, \qquad e_i^{k+1}= f_i(w_{i+1}^k)-\bar G_i^{k+1}. \eeq \item [b)] If, in addition, the $f_i$'s are Lipschitz continuous, we have \begin{align} \| f_T(x^{k+1}) - w_T^{k+1}\|^2 &\leq (1-\tau_k)\| f_T(x^k) - w_T^k\|^2 + L_{f_T}^2 \tau_k \|d^k\|^2 + \tau_k^2\|e_T^{k+1}\|^2 + r_T^{k+1}, \\ \| f_i(w_{i+1}^{k+1}) - w_i^{k+1}\|^2 & \leq (1-\tau_k)\|f_i(w_{i+1}^k) - w_i^k\|^2 + \tau_k^2\|e_i^{k+1}\|^2 + \bar{r}_i^{k+1} \notag \\ & + L_{f_i}^2\tau_k \left[\|f_{i+1}(w_{i+2}^k) - w_{i+1}^k\|^2 + \|e_{i+1}^{k+1}\|^2\right] \qquad 1 \le i \le T-1, \label{nasa_upperbound_fiwi_squared} \end{align} where \begin{align} \bar{r}_{i}^{k+1} & = -2\tau_kL_{f_i}^2\langle e_{i+1}^{k+1},f_{i+1}(w_{i+2}^k)-w_{i+1}^k\rangle + r_i^{k+1}. \label{original_nasa_ri_bar} \end{align} \end{itemize} \end{lemma} \begin{proof} Noting \cref{def_wi} and \cref{nasa_ri}, we have \begin{align*} \|f_i(w_{i+1}^{k+1}) - w_i^{k+1}\|^2 & = \|A_{k,i} + f_i(w_{i+1}^k) - (1-\tau_k)w_i^k - \tau_k(f_i(w_{i+1}^k) - e_i^{k+1})\|^2 \\ & = \|A_{k,i} + (1-\tau_k)(f_i(w_{i+1}^k) - w_i^k) + \tau_ke_i^{k+1}\|^2 \\ & = \|A_{k,i} +(1-\tau_k)(f_i(w_{i+1}^k)-w_i^k)\|^2 + \tau_k^2\|e_i^{k+1}\|^2 + r_i^{k+1}.
\end{align*} Then, in view of \cref{nasa_ri}, \cref{nasa_fn_w} follows by noting that \begin{align*} & \|A_{k,i} +(1-\tau_k)(f_i(w_{i+1}^k)-w_i^k)\|^2 \\ = & \|A_{k,i}\|^2 + (1-\tau_k)^2\|f_i(w_{i+1}^k)-w_i^k\|^2 + 2(1-\tau_k)\langle A_{k,i}, f_i(w_{i+1}^k) - w_i^k\rangle \nonumber\\ \leq & \|A_{k,i}\|^2 + (1-\tau_k)^2\|f_i(w_{i+1}^k) - w_i^k\|^2 + \left(\frac{1}{\tau_k}-1\right)\|A_{k,i}\|^2 \nonumber\\ & + (1-\tau_k)\tau_k\|f_i(w_{i+1}^k)-w_i^k\|^2 \nonumber\\ = & (1-\tau_k)\|f_i(w_{i+1}^k)-w_i^k\|^2 + \frac{1}{\tau_k}\|A_{k,i}\|^2,\label{Ak_fk} \end{align*} due to the Cauchy-Schwarz and Young inequalities. Also, \cref{nasa_wi_increment_squared} directly follows from \cref{def_wi} since \begin{align*} \|w_i^{k+1} - w_i^k\|^2 & = \|\tau_k(\bar G_i^{k+1} - w_i^k)\|^2 = \tau_k^2\|f_i(w_{i+1}^k) - w_i^k - e_i^{k+1}\|^2 \\ & = \tau_k^2\left[\| f_i(w_{i+1}^k) - w_i^k\|^2 + \|e_i^{k+1}\|^2 - 2\langle e_i^{k+1},f_i(w_{i+1}^k) - w_i^k\rangle \right]. \end{align*} To show part b), note that by \cref{def_xk}, \cref{nasa_A_ki}, and the Lipschitz continuity of $f_i$, we have \[ \| A_{k,T} \| \leq L_{f_T}\| w_{T+1}^{k+1} - w_{T+1}^k\|=L_{f_T} \tau_k \|d^k\|, \qquad \| A_{k,i} \| \leq L_{f_i}\| w_{i+1}^{k+1} - w_{i+1}^k\|, \] for $1 \le i \le T-1$. The result then follows by noting \cref{nasa_fn_w} and \cref{nasa_wi_increment_squared}. \end{proof} We remark that the mini-batch sampling in \eqnok{def_barG} is only used to reduce the upper bound on the expectation of $\tau_k \|e_{i+1}^{k+1}\|^2$ in the right-hand side of \eqnok{nasa_upperbound_fiwi_squared}. Moreover, we do not need this inequality for $i=1$ when establishing the convergence rate of \cref{alg:originalnasa}. Thus, when $T \le 2$, this algorithm converges without using a mini-batch of samples in each iteration, as shown in \cite{GhaRuswan20}.
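As a quick sanity check on the constants used throughout the analysis, the closed-form Lipschitz constant of \cref{F_Lipschitz_Constant} can be compared against the backward recursion $L_{\nabla F_i} = L_{F_{i+1}}^2 L_{\nabla f_i} + L_{f_i} L_{\nabla F_{i+1}}$ from its proof; the short sketch below (with purely illustrative inputs) does this numerically.

```python
from math import prod

# Numerical check that the closed-form Lipschitz constant of Lemma
# (F_Lipschitz_Constant) agrees with the backward recursion from its proof.
# Lf[i] and Lg[i] play the roles of L_{f_{i+1}} and L_{grad f_{i+1}}
# (0-indexed); the inputs are illustrative placeholders.

def lip_closed_form(Lf, Lg):
    # L_{grad F_1} = sum_j Lg[j] * prod_{l<j} Lf[l] * prod_{l>j} Lf[l]^2
    T = len(Lf)
    return sum(Lg[j]
               * prod(Lf[l] for l in range(j))
               * prod(Lf[l] ** 2 for l in range(j + 1, T))
               for j in range(T))

def lip_recursion(Lf, Lg):
    # L_{grad F_T} = Lg[T]; then, going backward,
    # L_{grad F_i} = (prod_{l>i} Lf[l])^2 * Lg[i] + Lf[i] * L_{grad F_{i+1}}
    T = len(Lf)
    L = Lg[-1]
    for i in range(T - 2, -1, -1):
        L = prod(Lf[l] for l in range(i + 1, T)) ** 2 * Lg[i] + Lf[i] * L
    return L
```

Both routines agree to floating-point precision, reflecting the algebraic identity established in the proof of \cref{F_Lipschitz_Constant}.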
\textcolor{black}{Recalling the definition of $F^*$ from Section~\ref{sec:intro} and denoting $w:= (w_1,\dots,w_T)$, we define, for some positive constants $\gamma =(\gamma_1,\ldots, \gamma_T)$, the merit function \beq\label{merit_function} W_\gamma(x,z,w) = F(x) - F^* - \eta(x,z) + \sum_{i=1}^{T-1}\gamma_i \| f_i(w_{i+1}) - w_i\|^2 + \gamma_T \| f_T(x) - w_T\|^2, \eeq which will be used in our next result for establishing the convergence analysis of \cref{alg:originalnasa}. It is worth noting that $W_\gamma(x,z,w) \ge 0$ due to the facts that $F(x) \ge F^*$, $\eta(x,z) \le 0$ (by Lemma~\ref{eta_lips_lemma}), and $\gamma >0$. The precise values of the constants $\gamma_1,\ldots, \gamma_T$ will be set later in our analysis. We should also mention that the above summation can start from $i=2$, in which case the convergence analysis is slightly simpler. However, we use~\eqnok{merit_function} in our analysis since, as a byproduct, it gives us an online certificate for the stochastic values of the objective function.} The above function extends the one used in \cite{GhaRuswan20} for the case of $T=2$ to the general multi-level setting. A variant of this function (including only the first two terms in \eqnok{merit_function}) appeared in the literature as early as \cite{ruszczynski1987linearization} and was used later in \cite{rusz20-1} for nonsmooth single-level stochastic optimization. \begin{lemma} \label{original_nasa_merit_function_lemma} Suppose that the sequences $\{x^k,z^k,u^k,w_1^k,\dots,w_T^k\}_{k \geq 0}$ are generated by~\cref{alg:originalnasa} and that~\cref{fi_lips} holds.
\begin{itemize}[leftmargin=0.1in] \item [a)] If \begin{align}\label{parameter_cond} &\gamma_1 \ge \lambda >0, \qquad \gamma_j-\gamma_{j-1}L^2_{f_{j-1}}-\lambda >0, \notag\\ &4(\beta-\lambda-\gamma_T)(\gamma_j-\gamma_{j-1}L^2_{f_{j-1}}-\lambda) \ge TC_j^2 \quad j=2,\ldots, T, \end{align} where the $C_j$'s are defined in~\cref{FZ_difference}, we have \begin{align}\label{main_recursion} \lambda \sum_{k=0}^{N-1}\tau_k\left[\|d^k\|^2 + \sum_{i=1}^{T-1}\|f_i(w_{i+1}^k) - w_i^k\|^2 + \|f_T(x^k) - w_T^k\|^2 \right] &\leq W_\gamma(x^0,z^0,w^0)\notag\\ &+\sum_{k=0}^{N-1}R^{k+1}, \end{align} where \begin{align} R^{k+1} &:= \tau_k^2 \sum_{i=1}^{T}\gamma_i \| e_i^{k+1}\|^2+\tau_k \sum_{i=1}^{T-1}\gamma_i L_{f_i}^2 \| e_{i+1}^{k+1}\|^2 + \sum_{i=1}^{T-1}\gamma_i \bar{r_i}^{k+1}+\gamma_T r_T^{k+1} \notag\\ &~~~~~+ \tau_k \langle d^k, \Delta^k\rangle+\frac{(L_{\nabla F}+L_{\nabla \eta})\tau_k^2}{2}\|d^k\|^2 + \frac{L_{\nabla \eta}}{2}\|z^{k+1} - z^k\|^2, \label{def_R_error}\\ \Delta^k&:= \nabla f_T(x^k) \prod_{i=2}^T \nabla f_{T+1-i}(w^k_{T+2-i})-\prod_{i=1}^T J_{T-i+1}^{k+1},\label{def_deltak} \end{align} and $r^{k+1}_i, \bar r^{k+1}_i$ are defined in \cref{nasa_ri} and \cref{original_nasa_ri_bar}, respectively. \item[b)] If the parameters are chosen as \begin{align}\label{parameter_def} \gamma_j &:= 2^{j-1}(L_{f_1}\cdots L_{f_{j-1}})^2 \sqrt{T} \quad 2 \leq j \leq T, \qquad \beta \geq \lambda + \gamma_T+\frac{T \max_{2 \leq i \leq T}C_i^2}{4\lambda},\notag\\ \gamma_1 &= \lambda = \frac{1}{2}\min_{2 \leq i \leq T}(\gamma_i - \gamma_{i-1}L_{f_{i-1}}^2)= \frac{\min_{2 \leq i \leq T} \gamma_i}{4}, \end{align} then the conditions in \cref{parameter_cond} are satisfied.
\end{itemize} \end{lemma} \begin{proof} First, note that by~\cref{F_Lipschitz_Constant}, we have \begin{align}\label{F_lips} F(x^{k+1}) &\le F(x^k) +\langle \nabla F(x^k), x^{k+1}-x^k \rangle +\frac{L_{\nabla F}}{2}\|x^{k+1}-x^k\|^2 \notag \\ &= F(x^k) +\tau_k \langle \nabla F(x^k), d^k \rangle + \frac{L_{\nabla F}\tau_k^2}{2}\|d^k\|^2. \end{align} Second, note that by the optimality condition of \cref{def_uk}, we have \beq \label{opt_QP} \langle z^k+\beta(u^k-x^k), x^k-u^k \rangle \ge 0,~\text{which implies}~\langle z^k, d^k \rangle +\beta \|d^k\|^2 \le 0. \eeq Then, noting \cref{def_xk} and \cref{def_zk}, and in view of~\cref{eta_lips_lemma}, we obtain \begin{align}\label{eta_lips} &\eta(x^k,z^k)- \eta(x^{k+1},z^{k+1}) \notag\\ &\le \langle z^k+ \beta(u^k-x^k), x^{k+1}-x^k \rangle- \langle u^k-x^k, z^{k+1}-z^k\rangle \notag\\ &+\frac{L_{\nabla \eta}}{2} \left[\|x^{k+1}-x^k\|^2 +\|z^{k+1}-z^k\|^2 \right]\notag \\ &= \tau_k \langle 2z^k+ \beta d^k, d^k \rangle- \tau_k \langle d^k, \prod_{i=1}^TJ_{T-i+1}^{k+1}\rangle +\frac{L_{\nabla \eta}}{2} \left[\|x^{k+1}-x^k\|^2 +\|z^{k+1}-z^k\|^2 \right] \notag \\ &\leq -\beta\tau_k \|d^k\|^2 - \tau_k\langle d^k, \prod_{i=1}^TJ_{T-i+1}^{k+1}\rangle + \frac{L_{\nabla \eta}}{2}\left[\tau_k^2\|d^k\|^2 + \|z^{k+1} - z^k\|^2 \right].
\end{align} Third, noting part b) of~\cref{nasa_lemma_fn_w_error}, we have \begin{align} &\sum_{i=1}^{T-1}\gamma_i \left[\| f_i(w^{k+1}_{i+1}) - w^{k+1}_i\|^2 - \| f_i(w^k_{i+1}) - w^k_i\|^2 \right] \notag\\ &\qquad+ \gamma_T \left[\| f_T(x^{k+1}) - w^{k+1}_T\|^2-\| f_T(x^k) - w^k_T\|^2\right]\notag\\ \le& \sum_{i=1}^{T-1}\gamma_i\bigg\{-\tau_k\big[\|f_i(w_{i+1}^k) - w_i^k\|^2- L_{f_i}^2\|f_{i+1}(w_{i+2}^k) - w_{i+1}^k\|^2 \notag \\ &\qquad - L_{f_i}^2 \| e_{i+1}^{k+1}\|^2 \big]+ \tau_k^2 \|e_i^{k+1}\|^2 + \bar{r_i}^{k+1}\bigg\} \notag\\ &\qquad + \gamma_T \left\{-\tau_k\left[\|f_T(x^k) - w_T^k\|^2 - L_{f_T}^2\|d^k\|^2 \right]+\tau_k^2 \|e_T^{k+1}\|^2 + r_T^{k+1}\right\}\notag\\ =&- \tau_k \bigg\{\gamma_1 \|f_1(w_2^k) - w_1^k\|^2 + \sum_{j=2}^{T-1}[\gamma_j-\gamma_{j-1}L_{f_{j-1}}^2] \|f_j(w_{j+1}^k) - w_j^k\|^2 \notag \\ &\qquad + [\gamma_T-\gamma_{T-1}L_{f_{T-1}}^2]\|f_T(x^k) - w_T^k\|^2\bigg\}+ \sum_{i=1}^{T-1}\gamma_i \bar{r_i}^{k+1}+\gamma_T r_T^{k+1}\notag\\ &\qquad+\tau_k \left[\sum_{i=1}^{T-1}\gamma_i L_{f_i}^2 \| e_{i+1}^{k+1}\|^2+\gamma_T\|d^k\|^2\right] +\tau_k^2 \sum_{i=1}^{T}\gamma_i \| e_i^{k+1}\|^2.\label{fun_diff} \end{align} Combining the above relation with \cref{eta_lips} and \cref{F_lips}, noting the definition of the merit function in \cref{merit_function}, and in view of~\cref{FZ_difference}, we obtain \begin{align*} & W_\gamma(x^{k+1},z^{k+1},w^{k+1}) - W_\gamma(x^k,z^k,w^k) \\ &\le -\tau_k(\beta-\gamma_T)\|d^k\|^2+\tau_k \|d^k\|\left[\sum_{j=2}^{T-1} C_j\|f_j(w^k_{j+1}) - w^k_j\| + C_T\|f_T(x^k) - w_T^k\|\right]\\ &\qquad- \tau_k \bigg\{\gamma_1 \|f_1(w_2^k) - w_1^k\|^2 + \sum_{j=2}^{T-1}[\gamma_j-\gamma_{j-1}L_{f_{j-1}}^2] \|f_j(w_{j+1}^k) - w_j^k\|^2 \notag \\ &\qquad+ [\gamma_T-\gamma_{T-1}L_{f_{T-1}}^2]\|f_T(x^k) - w_T^k\|^2\bigg\}+R^{k+1},\notag \end{align*} where $R^{k+1}$ is defined in \cref{def_R_error}.
{\color{black}Now, if \cref{parameter_cond} holds, we have \begin{align*} & -\left(\frac{\beta-\gamma_T}{T}\right)\|d^k\|^2- (\gamma_j-\gamma_{j-1}L_{f_{j-1}}^2) \|f_j(w_{j+1}^k) - w_j^k\|^2 + C_j \|d^k\| \|f_j(w^k_{j+1}) - w^k_j\| \\ &\le -\lambda \Big[\frac{1}{T}\|d^k\|^2 +\|f_j(w_{j+1}^k) - w_j^k\|^2 \Big] \qquad \forall j \in \{2,\ldots,T\}, \end{align*} which, together with the above inequality, implies that} \begin{align} &W_\gamma(x^{k+1},z^{k+1},w^{k+1}) - W_\gamma(x^k,z^k,w^k) \notag\\ &\le -\lambda \tau_k\left[\|d^k\|^2 + \sum_{i=1}^{T-1}\|f_i(w_{i+1}^k) - w_i^k\|^2 + \|f_T(x^k) - w_T^k\|^2 \right]+R^{k+1}.\notag \end{align} Summing up the above inequalities and re-arranging the terms, we obtain \cref{main_recursion}. It can be easily verified that condition \cref{parameter_cond} is satisfied by the choice of parameters in \cref{parameter_def}. \end{proof} The next technical result helps us to simplify our convergence analysis. \begin{lemma} \label{Gamma_lemma} Consider a sequence $\{\tau_k\}_{k \geq 0} \subset (0,1]$, and define \begin{align}\label{def_Gamma} \Gamma_k= \Gamma_1 \prod_{i=1}^{k-1}(1-\tau_i) \qquad k \ge 2, \qquad \Gamma_1 = \begin{cases}1 & \text{ if } \tau_0 = 1, \\ 1-\tau_0 & \text{otherwise.}\\\end{cases} \end{align} \begin{itemize} \item [a)] For any $k \ge 1$, define \[ \alpha_{i,k} = \frac{\tau_i}{\Gamma_{i+1}}\Gamma_k \quad 0 \le i \le k-1. \] Then \[ \sum_{i=0}^{k-1}\alpha_{i,k}=\begin{cases}1 & \text{ if } \tau_0 = 1, \\ 1-\Gamma_k & \text{otherwise.}\\\end{cases} \] \item [b)] Suppose that $q_{k+1} \leq (1-\tau_k)q_k + p_k$ for all $k \ge 0$, for some sequences $\{q_k,p_k\}_{k \geq 0}$.
Then, we have \[ q_k \leq \Gamma_k \left[a q_0 + \sum_{i=0}^{k-1}\frac{p_i}{\Gamma_{i+1}}\right], \qquad a = \begin{cases}0 & \text{ if } \tau_0 = 1, \\ 1 & \text{otherwise.}\\\end{cases} \] \end{itemize} \end{lemma} \begin{proof} To show part a), note that \begin{align*} \sum_{i=0}^{k-1} \alpha_{i,k}&=\Gamma_k \sum_{i=0}^{k-1}\frac{\tau_i}{\Gamma_{i+1}}= \frac{\tau_0 \Gamma_k}{\Gamma_1} + \sum_{i=1}^{k-1}\frac{\tau_i\Gamma_k}{\Gamma_{i+1}} = \frac{\tau_0 \Gamma_k}{\Gamma_1} + \Gamma_k\sum_{i=1}^{k-1}\left(\frac{1}{\Gamma_{i+1}}-\frac{1}{\Gamma_i}\right) \\ &= 1 - \frac{\Gamma_k}{\Gamma_1}(1-\tau_0). \end{align*} To show part b), divide both sides of the inequality by $\Gamma_{k+1}$ and note from \cref{def_Gamma} that \[ \frac{q_1}{\Gamma_1} \leq \frac{(1-\tau_0)q_0+p_0}{\Gamma_1},\qquad \frac{q_{k+1}}{\Gamma_{k+1}} \leq \frac{q_k}{\Gamma_k} + \frac{p_k}{\Gamma_{k+1}} \quad k \ge 1. \] Summing up the above inequalities, we obtain the result. \end{proof} \vgap The next result shows the boundedness, in expectation, of the error terms on the right-hand side of \eqnok{main_recursion}. This is an essential step in establishing the convergence analysis of the algorithm. \begin{proposition} \label{original_nasa_big_proposition} Suppose that~\cref{assumption:original_nasa_assumption} holds, $\tau_0 = 1$ (for simplicity), and $\beta > 0$. Then, for any $k \geq 1$, we have \begin{align} \beta^2 \mathbb{E}[\|d^k\|^2| \mathscr{F}_{k}] &\leq \mathbb{E}[\|z^k\|^2|\mathscr{F}_{k}] \leq \prod_{i=1}^T\sigma_{J_i}^2, \label{dk_bnd}\\ \mathbb{E}[\|z^{k+1}-z^k\|^2 | \mathscr{F}_k] & \leq 4\tau_k^2\prod_{i=1}^T\sigma_{J_i}^2.
\label{zk_error} \end{align} If, in addition, the batch size $b_k$ in~\cref{alg:originalnasa} is set to \beq\label{def_bk} b_k = \left\lceil \frac{\max_{1 \le i \le T} L^2_{f_i}}{\tau_k} \right\rceil \qquad k \ge 0, \eeq we have \beq\label{Rk_bnd} \mathbb{E}[R^{k+1} | \mathscr{F}_k] \leq \tau_k^2 \left[\frac{1}{2}\left(\prod_{i=1}^T\sigma_{J_i}^2\right)\left(\frac{L_{\nabla F}+(1+4\beta^2)L_{\nabla \eta}}{\beta^2} \right) + \sum_{i=1}^T\gamma_i \sigma_{G_i}^2\right]:= \tau_k^2 \sigma^2, \eeq where $R^{k+1}$ is defined in \cref{def_R_error}. \end{proposition} \begin{proof} The first inequality in \cref{dk_bnd} directly follows from \cref{opt_QP} and the Cauchy-Schwarz inequality. Noting \cref{def_zk}, the fact that $\tau_0 = 1$, and in view of~\cref{Gamma_lemma}, we obtain \begin{align*} z^k & = \sum_{i=0}^{k-1}\alpha_{i,k}\left(\prod_{\ell = 1}^T J_{T+1-\ell}^{i+1}\right). \end{align*} By convexity of $\| \cdot \|^2$, submultiplicativity of the Frobenius norm, and conditional independence, we conclude that \begin{align*} \mathbb{E}[\|z^k\|^2 | \mathscr{F}_k] &\leq \sum_{i=0}^{k-1}\alpha_{i,k}\mathbb{E}\left[\left\| \prod_{\ell=1}^TJ_{\ell}^{i+1} \right\|^2 \ \Bigg| \ \mathscr{F}_k\right] \leq \sum_{i=0}^{k-1}\alpha_{i,k}\prod_{\ell = 1}^T \mathbb{E}[\| J_{\ell}^{i+1}\|^2 | \mathscr{F}_i] \\ &\leq \sum_{i=0}^{k-1}\alpha_{i,k}\left(\prod_{\ell=1}^T\sigma_{J_{\ell}}^2 \right) = \prod_{\ell =1}^T\sigma_{J_{\ell}}^2. \end{align*} Noting \cref{dk_bnd}, we have \begin{align*} \mathbb{E}[\| z^{k+1}-z^k\|^2 | \mathscr{F}_k] & \leq \tau_k^2 \mathbb{E}\left[\left\| z^k - \prod_{\ell=1}^TJ_{\ell}^{k+1} \right\|^2 \ \Bigg| \ \mathscr{F}_k\right] \\ &\leq 2\tau_k^2\left\{\mathbb{E}[\|z^k\|^2 | \mathscr{F}_k] + \mathbb{E}\left[\left\| \prod_{\ell=1}^T J_{\ell}^{k+1} \right\|^2 \ \Bigg| \mathscr{F}_k\right]\right\} \\ & \leq 2\tau_k^2\left(\prod_{\ell=1}^T\sigma_{J_{\ell}}^2 + \prod_{\ell = 1}^T\sigma_{J_{\ell}}^2\right) = 4\tau_k^2\left(\prod_{\ell=1}^T\sigma_{J_{\ell}}^2\right).
\end{align*} Now, observe that by \cref{nasa_ri}, \cref{original_nasa_ri_bar}, the choice of $b_k$ in \cref{def_bk}, and \cref{assumption:original_nasa_assumption}, we have \begin{align*} \mathbb{E}[\Delta^k | \mathscr{F}_k] &=0, \qquad \mathbb{E}[e_i^{k+1} | \mathscr{F}_k] =0, \quad \text{implying} \quad \mathbb{E}[r_i^{k+1} | \mathscr{F}_k]=\mathbb{E}[\bar r_i^{k+1} | \mathscr{F}_k]=0,\\ \mathbb{E}[\| e_i^{k+1} \|^2 | \mathscr{F}_k] &=\mathbb{E}\left[\Big\| \frac{1}{b_k}\sum_{j=1}^{b_k}G_{i,j}^{k+1} - f_i(w_{i+1}^k) \Big\|^2 \ \Big| \ \mathscr{F}_k\right] \leq \frac{\sigma_{G_i}^2}{b_k} \le \min\left\{1, \frac{\tau_k }{\max_{1 \le \ell \le T} L^2_{f_\ell}}\right\} \sigma_{G_i}^2. \end{align*} Noting \cref{def_R_error}, \cref{dk_bnd}, \cref{zk_error}, and the above observations, we obtain \cref{Rk_bnd}. \end{proof} Observe that \cref{original_nasa_merit_function_lemma} shows that the weighted sum of $\|d^k\|^2$ and of the squared errors in estimating the inner function values is bounded by the sum of the error terms $R^{k+1}$, which, as shown in \cref{original_nasa_big_proposition}, is of the order of $\sum_{k=0}^{N-1} \tau_k^2$ in expectation. This is the main step in establishing the convergence of \cref{alg:originalnasa}. Indeed, $\bar{x} \in X$ is a stationary point of \cref{eq:mainprob} if $\bar u=\bar{x}$ and $\bar{z}=\nabla F(\bar{x})$, where \beq\label{def_sub2} \bar u = \argmin_{y \in X}\ \left\{\langle \bar{z}, y-\bar{x} \rangle + \frac{1}{2} \|y-\bar{x}\|^2\right\}. \eeq Thus, for a given pair $(\bar{x},\bar{z})$, we can define our termination criterion as follows. \begin{definition}\label{def_Vxz} A pair $(\bar{x},\bar{z})$ generated by~\cref{alg:originalnasa} is called an $\epsilon$-stationary pair if $\mathbb{E}[\sqrt{V(\bar{x},\bar{z})}] \le \epsilon$, where \begin{align} V(\bar x,\bar z) = \|\bar u-\bar x\|^2 + \|\bar z - \nabla F(\bar x)\|^2, \label{Lyaponuv_function} \end{align} and $\bar u$ is the solution to \eqnok{def_sub2}.
\end{definition} \textcolor{black}{We emphasize that in Definition~\ref{def_Vxz}, we consider a unified termination criterion for both the unconstrained and constrained cases. When $X=\bbr^{d_T}$, $V(\bar x,\bar z)$ provides an upper bound on $\|\nabla F(\bar x)\|^2$. This can be seen from the fact that $\bar u-\bar x = -\bar z$ in~\eqref{def_sub2} for unconstrained problems, and hence, from~\eqref{Lyaponuv_function}, we have \[ V(\bar x,\bar z) = \|\bar z\|^2+\|\bar z -\nabla F(\bar x)\|^2 \ge \frac 12 \|\nabla F(\bar x)\|^2. \] We also refer the reader to \cite{GhaRuswan20} for the relation between $V(\bar{x},\bar{z})$ and other common gradient-based termination criteria used in the literature, such as the gradient mapping (\cite{Nest04,Nest07-1,GhaLanZhang16}) and the proximal mapping (\cite{DruLew18}). Furthermore, as shown in \cite{GhaRuswan20}, we have \begin{align} V(x^k,z^k) \le \max(1,\beta^2)\|u^k-x^k\|^2 + \|z^k - \nabla F(x^k)\|^2, \label{Lyaponuv_function2} \end{align} where $(x^k,u^k, z^k)$ are the solutions generated at iteration $k-1$ of \cref{alg:originalnasa}. Noting this fact, we provide the convergence rate of this algorithm by appropriately choosing $\beta$ and $\tau_k$ in the next results.} \begin{theorem}\label{thm:main_theorem_original_nasa} Suppose that $\{x^k,z^k\}_{k \geq 0}$ are generated by~\cref{alg:originalnasa} and that \cref{fi_lips} and \cref{assumption:original_nasa_assumption} hold. Also assume that the parameters satisfy \cref{parameter_def} and the step sizes $\{\tau_k\}$ are chosen such that \begin{align} \sum_{i=k+1}^N \tau_i\Gamma_i \leq c \Gamma_{k+1} \quad \forall k \geq 0 \text{ and } \forall N \geq 1, \text{ where } c \text{ is a positive constant}.
\label{gamma_condition} \end{align} \textbf{(a)} For every $N \geq 1$, we have \beq\label{nasa_main_theorem} \sum_{k=1}^N\tau_k \mathbb{E}[\| \nabla F(x^k) - z^k\|^2] \leq {\cal B}_1(\sigma^2, N), \eeq where \beq\label{def_B} {\cal B}_1(\sigma^2,N)=\frac{4cL^2(T-1)}{\lambda}\left[W_\gamma(x^0,z^0,w^0) + \sigma^2 \sum_{k=0}^{N-1} \tau_k^2\right] + c \prod_{\ell=1}^T\sigma_{J_{\ell}}^2 \sum_{k=0}^{N-1} \tau_k^2, \eeq $\sigma^2$ is defined in \cref{Rk_bnd}, and \beq\label{def_L} L^2 = \max \left\{L_{\nabla F}^2, \max_{2 \leq j \leq T}C_j^2\right\}. \eeq \textbf{(b)} As a consequence, we have \begin{align} \mathbb{E}[V(x^R,z^R)] &\le \frac{1}{\sum_{k=1}^{N} \tau_k}\left\{{\cal B}_1(\sigma^2,N)+\frac{\max(1,\beta^2)}{\lambda}\left[W_\gamma(x^0,z^0,w^0)+ \sigma^2 \sum_{k=0}^{N} \tau_k^2\right]\right\}, \label{nasa_main_theorem2} \end{align} where the expectation is taken with respect to all random sequences generated by the method and an independent random integer $R \in \{1,\dots, N\}$ whose probability distribution is given by \begin{align*} \mathbb{P}[R = k] = \frac{\tau_k}{\sum_{j=1}^{N}\tau_j}. \end{align*} \textbf{(c)} If, in addition, the stepsizes are set to \beq\label{def_tau} \tau_0 = 1, \quad \tau_k = \frac{1}{\sqrt{N}} \quad \forall k = 1, \dots, N, \eeq we have \begin{align} \mathbb{E}[\| \nabla F(x^R) - z^R\|^2 ] &\le \frac{1}{\sqrt{N}}\left[\frac{4L^2(T-1)\left[W_\gamma(x^0,z^0,w^0) + 2\sigma^2 \right]}{\lambda} + 2\prod_{\ell=1}^T\sigma_{J_{\ell}}^2 \right]\notag\\ &:= \frac{{\cal B}_2(\sigma^2)}{\sqrt{N}},\label{nasa_Fk-zk}\\ \mathbb{E}[V(x^R,z^R)] &\le \frac{1}{\sqrt{N}}\left[{\cal B}_2(\sigma^2)+\frac{\max(1,\beta^2)}{\lambda}\left[W_\gamma(x^0,z^0,w^0) + 2\sigma^2 \right] \right],\label{nasa_main_theorem3}\\ \mathbb{E}[\| f_i(w_{i+1}^R) - w_i^R \|^2] & \leq \frac{1}{\lambda\sqrt{N}}\left[W_\gamma(x^0,z^0,w^0) + 2\sigma^2\right] \qquad \qquad i = 1, \dots, T.
\label{nasa_main_theorem4} \end{align} \end{theorem} \vgap \begin{proof} We first show part (a). Noting \cref{def_zk}, we have \[ \nabla F(x^{k+1}) - z^{k+1} = (1-\tau_k)(\nabla F(x^k) - z^k) + \tau_k (\delta^k+\bar \delta^k +\Delta^k), \] where $\Delta^k$ is defined in \cref{def_deltak} and \[ \delta^k = \nabla F(x^k) - \nabla f_T(x^k) \prod_{i=2}^T \nabla f_{T+1-i}(w^k_{T+2-i}), \qquad \bar \delta^k = \frac{\nabla F(x^{k+1}) - \nabla F(x^k)}{\tau_k}. \] Denoting $\bar \Delta_k = \langle \Delta^k, (1-\tau_k)(\nabla F(x^k) - z^k) + \tau_k (\delta^k+\bar \delta^k) \rangle$, we have \begin{align*} &\|\nabla F(x^{k+1}) - z^{k+1}\|^2 \\ =& \|(1-\tau_k)(\nabla F(x^k) - z^k) + \tau_k (\delta^k+\bar \delta^k)\|^2 +\tau_k^2 \|\Delta^k\|^2+2\tau_k \bar \Delta_k \\ \le & (1-\tau_k) \|\nabla F(x^k) - z^k\|^2 + 2\tau_k \left[\|\delta^k\|^2+L_{\nabla F}^2\|d^k\|^2+\bar \Delta_k\right]+\tau_k^2 \|\Delta^k\|^2, \end{align*} where the inequality follows from the convexity of $\|\cdot\|^2$ and the Lipschitz continuity of the gradient of $F$. Thus, in view of~\cref{Gamma_lemma}, we obtain \[ \| \nabla F(x^k) - z^k\|^2 \leq 2\Gamma_k \sum_{i=0}^{k-1}\frac{\tau_i}{\Gamma_{i+1}}\left(\|\delta^i\|^2+L_{\nabla F}^2\|d^i\|^2+\bar \Delta_i+\frac{\tau_i}{2}\|\Delta^i\|^2\right), \] which implies that \begin{align} \sum_{k=1}^N \tau_k \|\nabla F(x^k) - z^k\|^2 \leq& 2\sum_{k=1}^N\tau_k\Gamma_k \sum_{i=0}^{k-1}\frac{\tau_i}{\Gamma_{i+1}}\left(\|\delta^i\|^2+L_{\nabla F}^2\|d^i\|^2+\bar \Delta_i+\frac{\tau_i}{2}\|\Delta^i\|^2\right) \notag\\ =& 2\sum_{k=0}^{N-1} \frac{\tau_k}{\Gamma_{k+1}}\left(\sum_{i=k+1}^N \tau_i\Gamma_i\right)\left(\|\delta^k\|^2+L_{\nabla F}^2\|d^k\|^2+\bar \Delta_k+\frac{\tau_k}{2}\|\Delta^k\|^2\right)\notag\\ \le & 2c \sum_{k=0}^{N-1}\tau_k\left(\|\delta^k\|^2+L_{\nabla F}^2\|d^k\|^2+\bar \Delta_k+\frac{\tau_k}{2}\|\Delta^k\|^2\right), \label{Fz-zk_error} \end{align} where the last inequality follows from~\cref{gamma_condition}.
Now, observe that under~\cref{assumption:original_nasa_assumption}, we have \[ \mathbb{E}[\bar \Delta_k | \mathscr{F}_k] = 0, \qquad \mathbb{E}[\|\Delta^k\|^2 | \mathscr{F}_k] \le \mathbb{E}\left[\left\| \prod_{\ell=1}^T J_{\ell}^{k+1} \right\|^2 \ \Bigg| \mathscr{F}_k\right] \le \prod_{\ell=1}^T\sigma_{J_{\ell}}^2. \] Moreover, by~\cref{FZ_difference} and the fact that $(\sum_{i=1}^n a_i)^2 \leq n\sum_{i=1}^n a_i^2$ for nonnegative $a_i$'s, we have \begin{align*} \|\delta^k\|^2 &=\left\| \nabla F(x^k) - \nabla f_T(x^k) \prod_{i=2}^T \nabla f_{T+1-i}(w^k_{T+2-i}) \right\|^2\\ & \leq 2(T-1) \sum_{j=2}^{T-1} C_j^2\|f_j(w^k_{j+1}) - w^k_j\|^2 + 2C_T^2\|f_T(x^k) - w^k_T\|^2. \end{align*} Combining the above observations with \cref{Fz-zk_error} and in view of \cref{def_L}, we obtain \begin{align} &\sum_{k=1}^N \tau_k \mathbb{E}[\|\nabla F(x^k) - z^k\|^2 | \mathscr{F}_k ] \le c \prod_{\ell=1}^T\sigma_{J_{\ell}}^2 \sum_{k=0}^{N-1} \tau_k^2 \notag \\ &+ 4cL^2(T-1) \sum_{k=0}^{N-1}\tau_k\left( \sum_{j=2}^{T-1} \|f_j(w^k_{j+1}) - w^k_j\|^2 + \|f_T(x^k) - w^k_T\|^2+\|d^k\|^2\right).\notag \end{align} Then, \cref{nasa_main_theorem} follows from the above inequality, \cref{main_recursion}, and \cref{Rk_bnd}. \vgap \noindent Part (b) then follows from part (a), \cref{Lyaponuv_function2}, \cref{main_recursion}, and noting that \[ \mathbb{E}[V(x^R,z^R)] = \frac{\sum_{k=1}^{N}\tau_k V(x^k,z^k)}{\sum_{j=1}^{N}\tau_j}. \] Part (c) also follows by noting that the choice of $\tau_k$ in \cref{def_tau} implies that \begin{align*} \sum_{k=1}^{N} \tau_k \geq \sqrt{N}, \quad \sum_{k=0}^{N} \tau_k^2 = 2, \quad \Gamma_k = \left(1-\frac{1}{\sqrt{N}}\right)^{k-1}, \\ \sum_{i=k+1}^{N}\tau_i\Gamma_i = \left(1- \frac{1}{\sqrt{N}}\right)^k\frac{1}{\sqrt{N}}\sum_{i=0}^{N-k-1}\left(1-\frac{1}{\sqrt{N}}\right)^i \leq \left(1-\frac{1}{\sqrt{N}}\right)^k, \end{align*} ensuring condition \cref{gamma_condition} with $c=1$.
\end{proof} \begin{remark}\label{remark:rmk1} The result in \eqnok{nasa_main_theorem3} implies that to find an $\epsilon$-stationary point of \eqnok{eq:mainprob} (see~\cref{def_Vxz}), \cref{alg:originalnasa} requires ${\cal O}(\rho^T T^4/\epsilon^4)$ iterations, where $\rho$ is a constant depending on the problem parameters (i.e., Lipschitz constants and noise variances). Thus, the total number of samples used is bounded by \[ \sum_{k=1}^T b_k = {\cal O}\left(\frac{\rho^T T^6} {\epsilon^6}\right) = {\cal O}_T\left(\frac{1} {\epsilon^6}\right) \] due to \eqnok{def_bk} and \eqnok{def_tau}. This bound is much better than the $\mathcal{O}_T\left(1/\epsilon^{(7+T)/2}\right)$ obtained in \cite{yang2019multi-level} when $T >4$.\footnote{Following the presentation in~\cite{yang2019multi-level}, we only present the $\epsilon$-related $T$ dependence for their result.} In particular, it exhibits the level-independent behavior discussed in~\cref{sec:intro}. Note that we obtain constants of order $\rho^T$, for example, when the $\sigma^2_{J_i}$ in~\cref{Rk_bnd} are all equal. We emphasize that~\cite{yang2019multi-level} and~\cite{zhang2019multi} also have such constant factors, depending exponentially on $T$, in their proofs and final results. \end{remark} \begin{remark}\label{remark:rmk1-1} The bound in \eqnok{nasa_main_theorem4} also implies that the errors in estimating the inner function values decrease at the same rate at which we converge to a stationary point of the problem. This is essential for obtaining a rate of convergence similar to that of single-level problems. Moreover, \eqnok{nasa_Fk-zk} shows that the stochastic estimate $z^k$ also converges at the same rate to the gradient of the objective function at the stationary point to which $x^k$ converges.
\end{remark} Although our results for~\cref{alg:originalnasa} show improved convergence rates compared to~\cite{yang2019multi-level}, the complexity is still worse than the $\mathcal{O}_T\left(1/\epsilon^4\right)$ obtained in \cite{GhaRuswan20} for the case of $T=2$. Furthermore, the batch sizes $b_k$ are of order $\rho^T$ for some constant $\rho$, which makes the method impractical for large $T$. In the next section, we show that both of these issues can be fixed by a properly modified variant of \cref{alg:originalnasa}.
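The stepsize identities invoked in part (c) of the theorem are easy to verify numerically. The following is an illustrative sketch (not part of the original analysis) checking, for a sample $N$, that the choice in \eqnok{def_tau} gives $\sum_{k=1}^{N}\tau_k = \sqrt{N}$ and $\sum_{k=0}^{N}\tau_k^2 = 2$, and that $\sum_{i=k+1}^{N}\tau_i\Gamma_i \le \Gamma_{k+1}$ for all $k$, i.e., that condition \eqnok{gamma_condition} holds with $c=1$:

```python
import math

def check_stepsizes(N):
    """Verify the identities behind the choice tau_0 = 1, tau_k = 1/sqrt(N)."""
    tau = [1.0] + [1.0 / math.sqrt(N)] * N          # tau_0, tau_1, ..., tau_N
    # Gamma_1 = 1 and Gamma_k = (1 - 1/sqrt(N))^{k-1}; Gamma[k-1] stores Gamma_k
    Gamma = [(1.0 - 1.0 / math.sqrt(N)) ** (k - 1) for k in range(1, N + 1)]

    sum_tau = sum(tau[1:])                # = sqrt(N) exactly
    sum_tau2 = sum(t * t for t in tau)    # = 1 + N * (1/N) = 2

    # condition: sum_{i=k+1}^N tau_i * Gamma_i <= c * Gamma_{k+1} with c = 1
    ok = all(
        sum(tau[i] * Gamma[i - 1] for i in range(k + 1, N + 1))
        <= Gamma[k] + 1e-12               # Gamma[k] is Gamma_{k+1} (0-based)
        for k in range(N)
    )
    return sum_tau, sum_tau2, ok
```

The geometric-sum argument in the proof shows the condition holds with strict inequality, which the check reproduces for any $N \geq 1$.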
\section{Introduction} The study of quantum spin ice (QSI) on the three dimensional (3D) pyrochlore lattice has attracted considerable attention \cite{Huang,juan,gin, zhi, you,sun1, sun4,sun4a, udaa, uda, sun5,sun6,sun7}. Huang-Chen-Hermele \cite{Huang} have proposed an alternative Hamiltonian for QSI on the 3D pyrochlore lattice, applicable to a certain class of $d$- and $f$-electron systems with dipolar-octupolar Kramers doublets. Using dimensional reduction \cite{sun6, udaa, uda}, Carrasquilla {\it et~al.}~\cite{juan} have recently mapped this model to the 2D kagome lattice with a [111] crystallographic field. They have identified the interaction that promotes a putative quantum spin liquid (QSL) state and uncovered the low-temperature quantum phase diagram using non-perturbative, unbiased QMC simulations on the kagome lattice \cite{juan}. In this system, the competition between the classical Ising frustration and a $Z_2$-invariant ferromagnetic quantum fluctuation leads to a putative QSL state. Thus, there is a possibility to search for 2D QSL states within a class of pyrochlore quantum spin ice materials. The distinctive feature of the QSI Hamiltonian is the presence of $Z_2$ symmetry. We have recently studied the 2D quantum kagome ice Hamiltonian of Carrasquilla {\it et~al.}~\cite{juan} on the triangular lattice \cite{sow3}, using spin wave theory. An explicit numerical simulation has not been reported to date. However, spin wave theory still captures the interesting properties of the system because quantum fluctuations are suppressed in this model. In principle, one can also construct a ring exchange interaction that exhibits only a $Z_2$ symmetry, in analogy with the U(1)-invariant XY model (hard-core bosons) with ring-exchange interactions \cite{an,an1, long, AW, Ar, N,P,Q, G,F, mic, bber}. Ring exchange quantum spin Hamiltonians are believed to be very important in Wigner crystals near the melting density \cite{bb,dj,O}.
They also promote interesting quantum properties with rich quantum phase diagrams. In this communication, we consider the competing interactions between a classically frustrated Ising interaction and a $Z_2$-invariant ring exchange interaction. The Hamiltonian can be written as \begin{align} H &= J_z\sum_{\langle ij\rangle}S_{i}^zS_{j}^z+K\sum_{\left\langle ijkl \right\rangle} \left(S_{i}^{+}S_{j}^{+}S_{k}^{+}S_{l}^{+} + S_{i}^{-}S_{j}^{-}S_{k}^{-}S_{l}^{-}\right), \label{model} \end{align} where $S_{l}^{\pm}= S_{l}^{x} \pm i S_{l}^{y}$ are the raising and lowering spin operators, respectively. A special feature of this Hamiltonian is that it exhibits only a $Z_2$ symmetry in the $x$-$y$ plane, {\it i.e.,} $\pi$-rotation about the $z$-axis in spin space, $S_\mu^\pm\to-S_\mu^\pm$, $S_\mu^z\to S_\mu^z$; $\mu=i,j,k,l$. The Hamiltonian (Eq.~\eqref{model}) can be studied in any lattice geometry. However, for bipartite lattices, Eq.~\eqref{model} is related to a U(1)-invariant model by a $\pi$-rotation about the $x$-axis on one of the two sublattices, {\it i.e.}, $S_{i,j}^\pm\to S_{i,j}^\mp$; $S_{i,j}^z\to -S_{i,j}^z$. We restrict our analyses to the triangular lattice. Hence, the summation over the ring exchange term runs over the three possible four-spin plaquette orientations on a triangular lattice; see Fig.~\eqref{fig3.1}. We will investigate the distinctive features of the pure $Z_2$-invariant XY ring-exchange Hamiltonian ($J_z=0$) and its effects when competing with a classical Ising frustration ($J_z<K$, with $K<0$). The study of this Hamiltonian is partially motivated by the quantum phases uncovered in the 2D QSI Hamiltonian \cite{juan} and the recent study of the 2D QSI Hamiltonian on the triangular lattice \cite{sow3}. \section{Pure-K model} In order to get an insight into the effects of the $Z_2$ symmetry of Eq.~\eqref{model}, we consider the pure-$K$ model in Eq.~\eqref{model}, which corresponds to $J_z=0$.
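The discrete symmetry stated above can be verified by brute force on a single four-site plaquette. The following sketch (purely illustrative; the operator names and the value of $K$ are ours) builds the spin-1/2 ring-exchange term and checks that it commutes with the $\pi$-rotation $P=\prod_j\sigma_j^z$ but not with the total $S^z$, confirming that the ring exchange term has a $Z_2$ but no U(1) symmetry:

```python
import numpy as np
from functools import reduce

# single-site spin-1/2 operators
sp = np.array([[0., 1.], [0., 0.]])   # S^+
sz = np.diag([0.5, -0.5])             # S^z
I2 = np.eye(2)

def site_op(op, site, n=4):
    """Embed a one-site operator at position `site` of an n-site plaquette."""
    ops = [I2] * n
    ops[site] = op
    return reduce(np.kron, ops)

Sp = [site_op(sp, j) for j in range(4)]
Sz_tot = sum(site_op(sz, j) for j in range(4))

K = -1.0  # illustrative value
ring = Sp[0] @ Sp[1] @ Sp[2] @ Sp[3]
H = K * (ring + ring.conj().T)        # K (S1+ S2+ S3+ S4+ + h.c.)

# Z2 generator: the pi-rotation about z sends S^+- -> -S^+- on every site;
# on the plaquette it is represented by P = prod_j sigma_j^z, with P = P^{-1}
P = reduce(np.kron, [np.diag([1., -1.])] * 4)

z2_invariant = np.allclose(P @ H @ P, H)           # flips four signs: (-1)^4 = +1
u1_invariant = np.allclose(H @ Sz_tot, Sz_tot @ H)  # ring term raises S^z_tot by 4
```

Each factor of $S^\pm$ picks up a minus sign under $P$, so the four-spin term is even, while $[H, S^z_{\rm tot}]\neq 0$ because the ring term changes the total magnetization.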
In this limit, the resulting Hamiltonian has a related U(1)-invariant counterpart \cite{an,N}. The important feature of the U(1)-invariant pure-$K$ model is that the energy spectrum has a gapless quadratic excitation near $\bold{k}=0$ \cite{N, an,an1}. However, the present model is devoid of continuous symmetries, and the behaviour of its excitation spectrum has not been reported in the literature. It is therefore interesting to investigate how the excitations behave in the long wavelength limit. The Hamiltonian, Eq.~\eqref{model}, in this limit ($J_z=0$) can be written explicitly as \begin{eqnarray}\label{eqn3.2} \begin{split} H_{J_z=0} &=2K\sum_{\left\langle ijkl \right\rangle} \left(S_{i}^{x}S_{j}^{x}S_{k}^{x}S_{l}^{x} + S_{i}^{y}S_{j}^{y}S_{k}^{y}S_{l}^{y} \label{Kterm}\right. \\ & \left. - S_{i}^{x}S_{j}^{y}S_{k}^{y}S_{l}^{x} -S_{i}^{y}S_{j}^{x}S_{k}^{x}S_{l}^{y}-S_{i}^{y}S_{j}^{y}S_{k}^{x}S_{l}^{x} \right. \\ & \left. -S_{i}^{x}S_{j}^{x}S_{k}^{y}S_{l}^{y} -S_{i}^{x}S_{j}^{y}S_{k}^{x}S_{l}^{y}-S_{i}^{y}S_{j}^{x}S_{k}^{y}S_{l}^{x}\right). \end{split} \end{eqnarray} Classically, the ground state of Eq.~\eqref{Kterm} is highly degenerate as a result of the $Z_2$ symmetry of the Hamiltonian. The ground states correspond to all possible spin configurations along the basis vectors $\bold{e}_\alpha$ and $\bold{e}_\beta$ (see Fig.~\eqref{fig3.1}), independently of the sign of $K$. There are several ways to investigate how quantum fluctuations select a particular classical ground state in this system. In the U(1)-invariant model, this can be done by integrating out the phase fluctuations about the classical ground state in the path integral for the partition function \cite{N}. This method, however, is effectively the same as performing spin wave theory about any classical ground state of Eq.~\eqref{Kterm}. The collective excitation spectrum is the same in both methods.
Since we have only $x$-$y$ coupling in Eq.~\eqref{Kterm}, one can show that the excitation spectrum about any classical ground state is the same, provided one rotates the axes properly. As the present model has no conserved quantity, it is expedient to use a direct spin wave theory via the Holstein-Primakoff transformation \cite{J}. We choose the easy-axis ferromagnetic state and implement the linearized Holstein-Primakoff transformation, \cite{A,J} \begin{figure} \centering \begin{tikzpicture} \draw[ultra thick,blue] (9,0)--(10,0); \draw[dashed,blue] (10.8,3.95) circle (.35cm); \draw(10.8,4)node[]{$K$}; \draw[dashed,blue] (7.5,3.5) circle (.35cm); \draw(7.5,3.5)node[]{$K$}; \draw[dashed,blue] (9.3,0.4) circle (.35cm); \draw(9.4,.33)node[]{$K$}; \draw[solid,blue] (9,0)--(9.5,0.866); \draw[ultra thick,blue] (9,0)--(8.5,0.866); \draw[solid,blue] (10,0)--(11,0); \draw[solid,blue] (10,0)--(10.5,0.866); \draw[ultra thick,blue] (10,0)--(9.5,0.866); \draw[solid,blue] (11,0)--(12,0); \draw[solid,blue] (11,0)--(11.5,0.866); \draw[solid,blue] (11,0)--(10.5,0.866); \draw[solid,blue] (12,0)--(13,0); \draw[solid,blue] (12,0)--(12.5,0.866); \draw[solid,blue] (12,0)--(11.5,0.866); \draw[solid,blue] (13,0)--(14,0); \draw[solid,blue] (13,0)--(13.5,0.866); \draw[solid,blue] (13,0)--(12.5,0.866); \draw[solid,blue] (14,0)--(13.5,0.866); \draw[ultra thick,blue] (8.5,0.866)--(9.5, 0.866); \draw[solid,blue] (8.5,0.866)--(9,1.732); \draw[solid,blue] (8.5,0.866)--(8,1.732); \draw[solid,blue] (9.5,0.866)--(10.5, 0.866); \draw[solid,blue] (9.5,0.866)--(10,1.732); \draw(11,0.7)node[]{$\bold{e}_\alpha$}; \draw(10.1,1.2)node[]{$\bold{e}_\beta$}; \draw[solid,blue] (9.5,0.866)--(9,1.732); \draw[->,>=stealth,thick, black] (10.5,0.866)--(11.5, 0.866); \draw[solid,blue] (10.5,0.866)--(11,1.732); \draw[->,>=stealth,thick, black] (10.5,0.866)--(10,1.732); \draw[solid,blue] (11.5,0.866)--(12.5, 0.866); \draw[solid,blue] (11.5,0.866)--(12,1.732); \draw[solid,blue] (11.5,0.866)--(11,1.732);
\draw[solid,blue] (12.5,0.866)--(13.5, 0.866); \draw[solid,blue] (12.5,0.866)--(13,1.732); \draw[solid,blue] (12.5,0.866)--(12,1.732); \draw[solid,blue] (13.5,0.866)--(13,1.732); \draw[solid,blue] (8,1.732)--(9,1.732); \draw[solid,blue] (8,1.732)--(8.5,2.598); \draw[solid,blue] (8,1.732)--(7.5,2.598); \draw[solid,blue] (9,1.732)--(10,1.732); \draw[solid,blue] (9,1.732)--(9.5,2.598); \draw[solid,blue] (9,1.732)--(8.5,2.598); \draw[solid,blue] (10,1.732)--(11,1.732); \draw[solid,blue] (10,1.732)--(10.5,2.598); \draw[solid,blue] (10,1.732)--(9.5,2.598); \draw[solid,blue] (11,1.732)--(12,1.732); \draw[solid,blue] (11,1.732)--(11.5,2.598); \draw[solid,blue] (11,1.732)--(10.5,2.598); \draw[solid,blue] (12,1.732)--(13,1.732); \draw[solid,blue] (12,1.732)--(12.5,2.598); \draw[solid,blue] (12,1.732)--(11.5,2.598); \draw[solid,blue] (13,1.732)--(12.5,2.598); \draw[solid,blue] (7.5,2.598)--(8.5,2.598); \draw[ultra thick,blue ] (7.5,2.598)--(8,3.464); \draw[ultra thick,blue] (7.5,2.598)--(7,3.464); \draw[solid,blue] (8.5,2.598)--(9.5,2.598); \draw[solid,blue] (8.5,2.598)--(9,3.464); \draw[solid,blue] (8.5,2.598)--(8,3.464); \draw[solid,blue] (9.5,2.598)--(10.5,2.598); \draw[solid,blue] (9.5,2.598)--(10,3.464); \draw[solid,blue] (9.5,2.598)--(9,3.464); \draw[solid,blue] (10.5,2.598)--(11.5,2.598); \draw[solid,blue] (10.5,2.598)--(11,3.464); \draw[solid,blue] (10.5,2.598)--(10,3.464); \draw[solid,blue] (11.5,2.598)--(12.5,2.598); \draw[solid,blue] (11.5,2.598)--(12,3.464); \draw[solid,blue] (11.5,2.598)--(11,3.464); \draw[solid,blue] (12.5,2.598)--(12,3.464); \draw[solid,blue] (7,3.464)--(8,3.464); \draw[ultra thick,blue] (7,3.464)--(7.5,4.330); \draw[solid,blue] (7,3.464)--(6.5,4.330); \draw[solid,blue] (8,3.464)--(9,3.464); \draw[solid,blue] (8,3.464)--(8.5,4.330); \draw[ultra thick,blue] (8,3.464)--(7.5,4.330); \draw[solid,blue] (9,3.464)--(10,3.464); \draw[solid,blue] (9,3.464)--(9.5,4.330); \draw[solid,blue] (9,3.464)--(8.5,4.330); \draw[ultra thick,blue] 
(10,3.464)--(11,3.464); \draw[ultra thick,blue] (10,3.464)--(10.5,4.330); \draw[solid,blue] (10,3.464)--(9.5,4.330); \draw[solid,blue] (11,3.464)--(12,3.464); \draw[ultra thick,blue] (11,3.464)--(11.5,4.330); \draw[solid,blue] (11,3.464)--(10.5,4.330); \draw[solid,blue] (12,3.464)--(11.5,4.330); \draw[solid,blue] (6.5,4.330)--(7.5,4.330); \draw[solid,blue] (7.5,4.330)--(8.5,4.330); \draw[solid,blue] (8.5,4.330)--(9.5,4.330); \draw[solid,blue] (9.5,4.330)--(10.5,4.330); \draw[ultra thick,blue] (10.5,4.330)--(11.5,4.330); \draw[fill= blue] (9,0) circle (1mm); \draw[fill= blue] (10,0) circle (1mm); \draw[fill= blue] (11,0) circle (1mm); \draw[fill= blue] (12,0) circle (1mm); \draw[fill= blue] (13,0) circle (1mm); \draw[fill= blue] (14,0) circle (1mm); \draw[fill= blue] (8,1.732) circle (1mm); \draw[fill= blue] (9,1.732) circle (1mm); \draw[fill= blue] (10,1.732) circle (1mm); \draw[fill= blue] (11,1.732) circle (1mm); \draw[fill= blue] (12,1.732) circle (1mm); \draw[fill= blue] (13,1.732) circle (1mm); \draw[fill= blue] (7.5,2.598) circle (1mm); \draw[fill= blue] (8.5,2.598) circle (1mm); \draw[fill= blue] (9.5,2.598) circle (1mm); \draw[fill= blue] (10.5,2.598) circle (1mm); \draw[fill= blue] (11.5,2.598) circle (1mm); \draw[fill= blue] (12.5,2.598) circle (1mm); \draw[fill= blue] (7,3.464) circle (1mm); \draw[fill= blue] (8,3.464) circle (1mm); \draw[fill= blue] (9,3.464) circle (1mm); \draw[fill= blue] (10,3.464) circle (1mm); \draw[fill= blue] (11,3.464) circle (1mm); \draw[fill= blue] (12,3.464) circle (1mm); \draw[fill= blue] (6.5,4.330) circle (1mm); \draw[fill= blue] (7.5,4.330) circle (1mm); \draw[fill= blue] (8.5,4.330) circle (1mm); \draw[fill= blue] (9.5,4.330) circle (1mm); \draw[fill= blue] (10.5,4.330) circle (1mm); \draw[fill= blue] (11.5,4.330) circle (1mm); \draw[fill= blue] (8.5,0.866) circle (1mm); \draw[fill= blue] (9.5,0.866) circle (1mm); \draw[fill= blue] (10.5,0.866) circle (1mm); \draw[fill= blue] (11.5,0.866) circle (1mm); \draw[fill= blue] 
(12.5,0.866) circle (1mm); \draw[fill= blue] (13.5,0.866) circle (1mm); \end{tikzpicture} \caption{Color online. Triangular lattice with the three plaquette orientations (thick lines). The ring exchange interaction acts on the four sites within each plaquette. $\bold{e}_\alpha$ and $\bold{e}_\beta$ are the primitive vectors of the triangular lattice.} \label{fig3.1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=3in]{u1.png} \caption{Color online. The spin wave excitation spectrum of the U(1)-invariant pure-$K$ model. The coupling is set to unity.} \label{u1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=3in]{Brill.png} \caption{Color online. The energy dispersion of the $Z_2$-invariant pure-$K$ model. The coupling is set to unity. Fig.~\eqref{zone1} shows the points where the energy vanishes.} \label{zone} \end{figure} \begin{align} &S_{j}^{x}= S-b_{j}^\dagger b_{j},\quad S_{j}^{y}= i\sqrt{\frac{S}{2}}\left( b_{j}^\dagger -b_{j}\right). \label{HPT} \end{align} Next, we restrict the spins to $S=1/2$, substitute Eq.~\eqref{HPT} into Eq.~\eqref{Kterm}, and Fourier transform over the three plaquette orientations. The resulting bosonic Hamiltonian is too lengthy to write here. It can be diagonalized by the Bogoliubov transformation, \begin{align} b_{\bold{k}}=u_{\bold{k}}\gamma_{\bold{k}}-v_{\bold{k}}\gamma_{-\bold{k}}^\dagger, \end{align} where $u_{\bold{k}}^2-v_{\bold{k}}^2=1$. One finds that the resulting Hamiltonian is diagonalized by \begin{align} &u_{\bold{k}}^2=\frac{1}{2}\left( \frac{A_{\bold{k}}}{E_{\bold{k}}}+1\right); \quad v_{\bold{k}}^2=\frac{1}{2}\left( \frac{A_{\bold{k}}}{E_{\bold{k}}}-1\right),\end{align} with $E_\bold{k}=\sqrt{A_\bold{k}^2-B_\bold{k}^2}$. The diagonalized Hamiltonian reads \begin{eqnarray} H_{J_z=0}=\sum_{\bold{k}}E_{\bold{k}}\left( \gamma_{\bold{k}}^\dagger \gamma_{\bold{k}}+\gamma_{-\bold{k}}^\dagger \gamma_{-\bold{k}}\right).
\end{eqnarray} The excitation of the quasiparticles is given by \begin{eqnarray} \mathcal{E}(\bold{k})=2E_\bold{k}=2\sqrt{A_\bold{k}^2-B_\bold{k}^2}, \end{eqnarray} \begin{align}\label{eqn3.20} &{A}_{\bold{k}}= \frac{3K}{2}-B_{\bold{k}},\quad {B}_{\bold{k}}=-\frac{K}{2}\lambda _{\bold{k}}- \frac{K}{8}\left(\lambda _{\bold{k}}+\bar{\lambda}_{\bold{k}} \right), \end{align} and \begin{align}\label{eqn3.17} &\lambda_{\bold{k}} = \cos k_\alpha+ \cos k_\beta +\cos(k_\alpha +k_\beta);\\& \bar{\lambda}_{\bold{k}}= \cos(k_\alpha -k_\beta)+\cos(2k_\alpha +k_\beta)+\cos(k_\alpha +2k_\beta). \end{align} where $k_\alpha=\bold{k}\cdot\bold{e}_\alpha$ and $k_\beta=\bold{k}\cdot\bold{e}_\beta$. In Fig.~\eqref{u1} we have shown the spin wave spectrum of the U(1)-invariant model obtained in Ref.~\cite{N} using a different approach. In this case, there is one quadratic gapless mode at $\bold{k}=0$. In contrast, Fig.~\eqref{zone} shows the spin wave spectrum for the present model. We observe instabilities of the spin wave at the midpoints of the adjacent sides of the Brillouin zone, that is at the points $A$, $B$, and $C$ in Fig.~\eqref{zone1}. At the center of the Brillouin zone the spectrum exhibits a gapped maximum dispersion, which in the long wavelength limit behaves as \begin{align} \mathcal{E}(\bold{k}) = a-b|\bold{k}|^2, \label{dis} \end{align} \begin{figure}[ht] \centering \includegraphics[width=2.5in]{Brill1.png} \caption{Color online. The Brillouin zone of the triangular lattice. The points $A$, $B$, and $C$ are the midpoints of the adjacent sides of the Brillouin zone, where the energy vanishes.} \label{zone1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=3in]{Z2_DOS.pdf} \caption{Color online. The density of states, $\rho(\omega)$, of the $Z_2$-invariant pure-$K$ model. } \label{dos} \end{figure} where $a=6K$ and $b=3K/4$. 
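As a quick numerical check (illustrative script, not part of the derivation), evaluating $\mathcal{E}(\bold{k})$ from Eqs.~\eqref{eqn3.20} and \eqref{eqn3.17} with the coupling set to unity reproduces the gapped maximum $\mathcal{E}(0)=6K$ at the zone center and the vanishing of the energy at the midpoints of the Brillouin-zone edges, e.g.\ $(k_\alpha,k_\beta)=(\pi,0)$, $(0,\pi)$, and $(\pi,\pi)$:

```python
import numpy as np

K = 1.0  # coupling set to unity, as in the figures

def dispersion(ka, kb):
    """E(k) = 2*sqrt(A_k^2 - B_k^2) for the Z2-invariant pure-K model."""
    lam = np.cos(ka) + np.cos(kb) + np.cos(ka + kb)
    lamb = np.cos(ka - kb) + np.cos(2 * ka + kb) + np.cos(ka + 2 * kb)
    B = -(K / 2) * lam - (K / 8) * (lam + lamb)
    A = 3 * K / 2 - B
    return 2.0 * np.sqrt(np.maximum(A**2 - B**2, 0.0))  # clamp rounding noise

# crude density of states: histogram of E(k) over a Brillouin-zone grid
ks = np.linspace(-np.pi, np.pi, 201)
KA, KB = np.meshgrid(ks, ks)
dos, edges = np.histogram(dispersion(KA, KB).ravel(), bins=60, density=True)
```

The same histogram reproduces the qualitative features of the density of states discussed below: a dominant saddle-point peak and a step at the maximum energy.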
The maximum dispersion near $\bold{k}=0$ is one of the distinctive features of this model, a consequence of the pure $Z_2$ symmetry of the Hamiltonian. Figure \eqref{dos} shows the density of states for the pure-$K$ model, given by \begin{align} \rho(\omega)=\frac{1}{V}\sum_\bold{k} \delta(\omega-\mathcal{E}(\bold{k})). \end{align} The distinguishing feature of the density of states is that the largest peak corresponds to the saddle point of the excitation energy. The maximum excitation energy, however, lies at much higher energy and leads to a step-like van Hove singularity. The density of states also shows a discontinuity corresponding to the vanishing of the excitation spectrum at the midpoints of the adjacent sides of the Brillouin zone in Fig.~\eqref{zone}. Thus, the $Z_2$-invariant pure-$K$ model describes an unusual liquid. \begin{figure}[ht] \centering \begin{tikzpicture}[scale=2] \draw[ultra thick,blue] (-1,0)--(-2,0) ; \draw[ultra thick,blue] (-2.5,0.8)--(-1.5,0.8); \draw[ultra thick,blue] (-2.5,0.8)--(-2,0); \draw[ultra thick,blue] (-1.5,0.8)--(-1,0); \draw [->,>=stealth,ultra thick, blue, blue] (-1,0)--(-0.8,-0.15); \draw [->,>=stealth,ultra thick, blue] (-2,0) --(-1.85,.25); \draw [->,>=stealth,ultra thick, blue] (-1.5,0.8) --(-1.65,.57); \draw [->,>=stealth,ultra thick, blue] (-2.5,0.8) --(-2.7,.95); \draw[fill= blue] (-1,0) circle (0.5mm); \draw[fill= blue] (-2,0) circle (0.5mm); \draw[fill= blue] (-1.5,0.8) circle (0.5mm); \draw[fill= blue] (-2.5,0.8) circle (0.5mm); \draw(-1.5,-0.7) node[]{$(a)$}; \end{tikzpicture} \quad \quad \begin{tikzpicture}[scale=2] \draw[ultra thick,blue] (1,0)--(2,0) ; \draw[ultra thick,blue] (2.5,0.8)--(1.5,0.8); \draw[ultra thick,blue] (2.5,0.8)--(2,0); \draw[ultra thick,blue] (1.5,0.8)--(1,0); \draw [->,>=stealth,ultra thick, blue] (1,0)--(0.8,-0.15); \draw [->,>=stealth,ultra thick, blue] (2,0) --(1.85,.25); \draw [->,>=stealth,ultra thick, blue] (1.5,0.8) --(1.65,.57); \draw [->,>=stealth,ultra thick, blue] (2.5,0.8)
--(2.7,.95); \draw[fill= blue] (1,0) circle (0.5mm); \draw[fill= blue] (2,0) circle (0.5mm); \draw[fill= blue] (1.5,0.8) circle (0.5mm); \draw[fill= blue] (2.5,0.8) circle (0.5mm); \draw(1.5,-0.7) node[]{$(b)$}; \end{tikzpicture} \quad \quad \begin{tikzpicture}[scale=1.25] \draw[ultra thick,blue] (1,0)--(0,1) ; \draw[ultra thick,blue] (0,1)--(-1,0); \draw[ultra thick,blue] (-1,0)--(0,-1); \draw[ultra thick,blue] (0,-1)--(1,0); \draw [->,>=stealth,ultra thick, blue] (1.2,0) --(.5,0); \draw [->,>=stealth,ultra thick, blue] (-1.2,0) --(-0.5,0); \draw [->,>=stealth,ultra thick, blue] (0,.8) --(0,1.5); \draw [->,>=stealth,ultra thick, blue] (0,-.8) --(0,-1.5); \draw[fill= blue] (1,0) circle (0.9mm); \draw[fill= blue] (-1,0) circle (0.9mm); \draw[fill= blue] (0,1) circle (0.9mm); \draw[fill= blue] (0,-1) circle (0.9mm); \draw(0,-1.7) node[]{$(c)$}; \end{tikzpicture} \caption{Color online. The spin configurations on the plaquettes of the triangular lattice, which obey the ``ice rules'' with two spins pointing inward and two spins pointing outward at each vertex. } \label{ice} \end{figure} \section{Effects of frustration} We now consider the full model in Eq.~\eqref{model}. In the regime $J_z\ll K$, the physics of this Hamiltonian is, in fact, the same as in the previous section. For dominant Ising coupling, $J_z >K$, the sign of $K$ becomes crucial, and the system is frustrated since it is impossible to align the spins antiferromagnetically on the vertices of the triangular lattice. This leads to many degenerate classical ground states. The degenerate classical ground states of the pure-$J_z$ term are known to be lifted through an order-by-disorder mechanism \cite{sow3} by quantum fluctuations emanating from the pure XY easy-axis ferromagnetic coupling $H_0 = -J\sum_{\langle ij\rangle}\left( S_{i}^+S_{j}^+ + S_{i}^-S_{j}^-\right)$. In this case, quantum fluctuations select a particular state known as a {\it ferrosolid} state \cite{sow3}.
This state differs from the conventional {\it supersolid} state \cite{G}, as it breaks translational and $Z_2$ symmetries. We can imagine covering the triangular lattice with plaquettes; then one of the degenerate classical ground states of Eq.~\eqref{model} obeys the ice rules depicted in Fig.~\eqref{ice}, in which the Ising term represents the degenerate classical ice and the ring exchange term provides the quantum fluctuations. However, the lifting of the classical degeneracy by the ring exchange term is a highly nontrivial mathematical problem. In fact, it is infeasible to analyze this problem analytically. For $J_z>K$ and $K<0$, there is a possibility of a QSL state with gapped excitations on frustrated non-bipartite lattices. Although we cannot confirm this claim analytically, numerical techniques such as QMC should be tractable for $K<0$. It would be interesting to investigate numerically whether Eq.~\eqref{model} promotes a two-dimensional QSL state within a class of triangular lattice QSL materials \cite{qsl,qsl1,qsl2} and other QSL materials on the kagome \cite{balent} and pyrochlore lattices \cite{Huang, you,gin, zhi, sun1,sun4,sun4a, sun5,sun6,sun7,juan}. A related U(1)-invariant easy-axis model on the kagome lattice has been conjectured to possess a QSL phase \cite{bal}. \section{Conclusion} In this communication, we presented the distinctive features of a $Z_2$-invariant XY ring exchange interaction on the triangular lattice. We showed that the complete breaking of the continuous U(1) symmetry down to a discrete $Z_2$ symmetry has profound effects on the nature of the quantum properties that emerge from this system. For the pure ring-exchange model with $Z_2$ invariance, we showed that the distinguishing features are the gapped $\bold{k}=0$ mode and the soft modes at the midpoints of each side of the Brillouin zone. As a result, the $Z_2$-invariant ring exchange model possesses some special features which are different from those of its U(1)-invariant counterpart.
We also provided a glimpse into the nature of the possible quantum phase that could emerge when the ring exchange competes with a classically frustrated Ising interaction. An explicit numerical simulation is required to uncover the nature of the proposed Hamiltonian and the possibility of any two-dimensional QSL states within a class of triangular lattice QSL materials such as $\kappa$-(BEDT-TTF)$_2$Cu$_2$(CN)$_3$ and EtMe$_3$Sb[Pd(dmit)$_2$]$_2$ \cite{qsl,qsl1,qsl2, balent}. \section*{Acknowledgments} The author would like to thank the African Institute for Mathematical Sciences (AIMS), where this work was conducted. Research at Perimeter Institute is supported by the Government of Canada through Industry Canada and by the Province of Ontario through the Ministry of Research and Innovation. \section*{References}
\section{Introduction} PbMg$_{1/3}$Nb$_{2/3}$O$_3$ crystallizing in the cubic perovskite structure is one of the most studied dielectrics, being a typical representative of relaxor ferroelectrics. It exhibits a high ($\sim$10$^{4}$) and strongly diffused peak in the temperature dependent permittivity, whose position remarkably shifts from 245\ensuremath{\,\mbox{K}}\, (at 100\,Hz) to 320\ensuremath{\,\mbox{K}}\, (at 1\,GHz).\cite{viehland,bovtun06} Its crystal structure remains cubic down to liquid-He temperatures. The peculiar dielectric properties are caused by a wide dielectric relaxation which broadens and slows down on cooling.\cite{viehland,bovtun06} The relaxation originates from the dynamics of polar clusters, which develop below the Burns temperature T$_{d} \cong$ 620\,K.\cite{burns83} The broad distribution of relaxation frequencies has its origin in the random fields and random forces arising from the chemical disorder on the perovskite B sites occupied by Mg$^{2+}$ and Nb$^{5+}$. The local ferroelectric instability in the polar clusters was indicated by an unstable polar optic phonon, which softens towards T$_{d}$ and hardens above T$_{d}$.\cite{bovtun06,wakimoto02b} Although the dielectric properties of perovskite PbMg$_{1/3}$Nb$_{2/3}$O$_3$ have been intensively studied during the last fifty years, not everything is completely understood, particularly the complex and broad dielectric dispersion. The related compound Pb$_{1.83}$Mg$_{0.29}$Nb$_{1.71}$O$_{6.39}$ (PMN) with a pyrochlore structure, which frequently grows as a second phase in perovskite PMN ceramics and thin films and significantly deteriorates their dielectric properties, has been much less studied. The only report known to the authors is that by Shrout and Swartz.\cite{shrout83} They investigated the dielectric response up to 400\,kHz down to liquid-He temperatures and observed a diffuse maximum in the complex permittivity below 40\ensuremath{\,\mbox{K}}.
The crystal structure of the pyrochlore PMN single crystal ($Fd3m$ space group) was determined in detail by Wakiya et al.\cite{wakiya93} High-frequency dielectric properties, including microwave, THz and infrared frequencies, have not been investigated, although they could be useful for understanding the reported diffuse and frequency dependent maximum of the complex permittivity. The present report aims to fill this gap in the literature. We will show that our pyrochlore PMN ceramics does not undergo a diffuse phase transition (reported in Ref.~\cite{shrout83}), but instead exhibits quantum paraelectric behavior, for which the temperature dependence of the permittivity is caused only by anomalous polar phonons. In this way, it represents the first quantum paraelectric with a pyrochlore crystal structure. \section{Experimental} Pb$_{1.83}$Mg$_{0.29}$Nb$_{1.71}$O$_{6.39}$ pyrochlore ceramic samples were produced by solid state reaction of mixed oxide powders, described in detail in Refs.~\cite{shrout83,mergen97}. PbO (99.5\%), Nb$_{2}$O$_{5}$ (99.5\%) and MgO (97\%) powders were mixed and sintered at 880$\,{}^\circ \rm C$ for 8 hours. The pyrochlore cubic structure was verified by X-ray diffraction. The dielectric response was investigated between 400\,Hz and 1\,MHz from 10\ensuremath{\,\mbox{K}}\, to 730\ensuremath{\,\mbox{K}}\, using an HP 4192A impedance analyzer. The TE$_{0n1 }$ composite dielectric resonator method\cite{krupka06} and an Agilent E8364B network analyzer were used for microwave measurements at 8.8\,GHz in the 100 - 350\ensuremath{\,\mbox{K}}\, temperature interval. The cooling rate was 2\ensuremath{\,\mbox{K}}/min. Measurements at terahertz (THz) frequencies from 7\,cm$^{-1}$ to 33\,cm$^{-1}$ (0.2 - 1.0\,THz) were performed in the transmission mode using a time-domain THz spectrometer based on an amplified Ti:sapphire femtosecond laser system. Two ZnTe crystal plates were used to generate (by optic rectification) and to detect (by electro-optic sampling) the THz pulses.
Both the transmitted field amplitude and the phase shift were simultaneously measured; this allows us to determine directly the complex dielectric response $\varepsilon^{\ast}(\omega)$. An Optistat CF cryostat with thin mylar windows (Oxford Inst.) was used for measurements down to 10\ensuremath{\,\mbox{K}}. Infrared (IR) reflectivity spectra were obtained using a Bruker IFS 113v Fourier transform IR spectrometer in the frequency range of 20 - 3000 cm$^{-1}$ (0.6 - 90 THz) at room temperature; at lower temperatures only the reduced spectral range up to 650 cm$^{-1}$ was studied (the transparency region of the polyethylene windows in the cryostat). Pyroelectric deuterated triglycine sulfate detectors were used for the room-temperature measurements, while a more sensitive liquid-He-cooled (1.5 K) Si bolometer was used for the low-temperature measurements. Polished disk-shaped samples with a diameter of 8 mm and a thickness of $\sim$2 mm were investigated. \section{Results and discussion} The temperature dependence of the real and imaginary parts of the complex permittivity $\varepsilon^{\ast}= \varepsilon^\prime - {\rm i}\varepsilon^{\prime\prime}$ at various frequencies is plotted in Fig.~\ref{Fig1}. One can see typical incipient ferroelectric behavior, i.e., an increase in $\varepsilon$' on cooling and its noticeable saturation at low temperatures. It is important to stress that, within the accuracy of the measurements, no frequency dispersion of $\varepsilon^\prime$ was observed between 400\,Hz and 8.8\,GHz at temperatures below 600\ensuremath{\,\mbox{K}}. The small low-frequency dispersion above 600\ensuremath{\,\mbox{K}}\, is caused by the non-negligible conductivity of our sample. The pronounced $\varepsilon$'(T) dependence is therefore caused by the softening of an excitation above 10\,GHz. To reveal it we measured the THz dielectric spectra (see Fig.~\ref{Fig2}) and IR reflectivity spectra (Fig.~\ref{Fig3}) below room temperature (RT).
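The direct determination of $\varepsilon^{\ast}(\omega)$ from the measured THz amplitude and phase mentioned above can be sketched with the standard first-pass thick-sample analysis (single transit, Fabry-Perot echoes neglected); the function name and sample parameters below are ours, chosen only for illustration:

```python
import numpy as np

c = 299792458.0  # speed of light, m/s

def extract_permittivity(omega, amplitude, phase, d):
    """
    Invert the single-pass transmission of a slab of thickness d,
        t(w) = E_sample / E_ref = [4 n_c / (n_c + 1)^2] exp(i (n_c - 1) w d / c),
    with complex index n_c = n + i*kappa, for eps = n_c**2.
    `phase` must be the unwrapped phase of t(w); `amplitude` its modulus.
    """
    n = 1.0 + c * phase / (omega * d)                     # phase mostly from (n - 1)
    kappa = (c / (omega * d)) * np.log(
        4.0 * n / ((n + 1.0) ** 2 * amplitude)            # remove Fresnel losses
    )
    return (n + 1j * kappa) ** 2
```

A round trip (generate $t(\omega)$ from a known index, then invert) recovers the input permittivity to within the small error introduced by the phase of the Fresnel prefactor.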
One can actually see very pronounced changes in the THz complex permittivity due to the polar phonon softening in the investigated range (see Fig.~\ref{Fig2}). \begin{figure} \begin{center} \includegraphics[width=80mm]{Fig1.eps} \end{center} \caption{(color online) Temperature dependence of the real part $\varepsilon$' and imaginary part $\varepsilon$'' of the complex permittivity in pyrochlore PMN ceramics at different frequencies. $\varepsilon (0)$ denotes the sum of the phonon and electron contributions to the static permittivity, as obtained from the fit of the IR reflectivity and THz data. $\varepsilon$'' data are plotted only below 300\ensuremath{\,\mbox{K}}, because the higher-temperature values are influenced by the conductivity. Note the right scale for $\varepsilon$''.} \label{Fig1} \end{figure} \begin{figure} \begin{center} \includegraphics[width=80mm]{Fig2.eps} \end{center} \caption{(color online) THz dielectric spectra of the pyrochlore PMN at various temperatures. The soft mode frequency shifts down into the THz range on cooling; therefore the sample becomes less transparent at low temperatures (the noise increases) and the accessible spectral range narrows on cooling. Two frequency scales (THz and \ensuremath{\,\mbox{cm}^{-1}}) are given.} \label{Fig2} \end{figure} \begin{figure} \begin{center} \includegraphics[width=80mm]{Fig3.eps} \end{center} \caption{(color online) IR reflectivity spectra of the pyrochlore PMN ceramics at various temperatures below RT.
Low-temperature spectra were obtained only below 650\ensuremath{\,\mbox{cm}^{-1}}\, (the transparency range of the polyethylene windows in the cryostat).} \label{Fig3} \end{figure} In order to obtain all phonon parameters as a function of temperature, the IR and THz spectra were fitted simultaneously using the generalized-oscillator model with the factorized form of the complex permittivity:\cite{gervais83} \begin{equation}\label{eps} \varepsilon^{*}(\omega)=\varepsilon_{\infty}\prod_{j}\frac{\omega^{2}_{LOj}-\omega^{2}+i\omega\gamma_{LOj}}{\omega^{2}_{TOj}-\omega^{2}+i\omega\gamma_{TOj}} \end{equation} where $\omega_{TOj}$ and $\omega_{LOj}$ denote the transverse and longitudinal frequencies of the j-th polar phonon, respectively, and $\gamma_{TOj}$ and $\gamma_{LOj}$ denote their corresponding damping constants. $\varepsilon^{*}(\omega)$ is related to the normal reflectivity R($\omega$) by \begin{equation}\label{refl} R(\omega)=\left|\frac{\sqrt{\varepsilon^{*}(\omega)}-1}{\sqrt{\varepsilon^{*}(\omega)}+1}\right|^2 .\end{equation} The high-frequency permittivity $\varepsilon_{\infty}$ resulting from electron absorption processes was obtained from the room-temperature frequency-independent reflectivity tails above the phonon frequencies and was assumed to be temperature independent. The real and imaginary parts of $\varepsilon^*(\omega)$ obtained from the fits to the IR and THz spectra are shown in Fig.~\ref{Fig4}. Parameters of the fits performed at 300 and 20\ensuremath{\,\mbox{K}}\, are summarized in Table I. One can see a higher number of observed modes at 20\ensuremath{\,\mbox{K}}\, than at RT. This is due to the reduced phonon damping at low temperatures, which allows a higher number of modes to be resolved; these modes probably overlap at RT.
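As a quick numerical illustration of Eqs.~(\ref{eps}) and (\ref{refl}), the factorized model is easy to evaluate. The sketch below is illustrative only: it uses just the two lowest 300\,K modes from Table I and $\varepsilon_{\infty}=5.87$, not the full 17-mode fit.

```python
import numpy as np

# Factorized oscillator model (Eq. eps) and normal reflectivity (Eq. refl).
# Illustrative sketch: only the two lowest 300 K modes from Table I are
# used (frequencies and dampings in cm^-1), with eps_inf = 5.87.
eps_inf = 5.87
modes = [  # (w_TO, gamma_TO, w_LO, gamma_LO)
    (33.2, 22.6, 37.7, 30.5),
    (56.9, 13.5, 63.2, 16.9),
]

def epsilon(w):
    """Complex permittivity eps*(w) of the factorized model."""
    eps = complex(eps_inf)
    for w_to, g_to, w_lo, g_lo in modes:
        eps *= (w_lo**2 - w**2 + 1j * w * g_lo) / (w_to**2 - w**2 + 1j * w * g_to)
    return eps

def reflectivity(w):
    """Normal-incidence reflectivity R(w)."""
    n = np.sqrt(epsilon(w))  # complex refractive index
    return abs((n - 1.0) / (n + 1.0)) ** 2
```

Since $\omega_{LOj}>\omega_{TOj}$ for both modes, the static value $\varepsilon(0)=\varepsilon_{\infty}\prod_j \omega_{LOj}^2/\omega_{TOj}^2$ exceeds $\varepsilon_{\infty}$, and $R(\omega)$ stays between 0 and 1 for any damping.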
One can see in the $\varepsilon$''($\omega$) spectra of Fig.~\ref{Fig4} (recall that the frequencies of the $\varepsilon$'' maxima roughly correspond to the phonon frequencies) that the most remarkable frequency shift with temperature is exhibited by the lowest-frequency mode below 30\ensuremath{\,\mbox{cm}^{-1}}, but the mode near 130\ensuremath{\,\mbox{cm}^{-1}}\, also partially softens on cooling. The temperature dependences of the soft mode frequency $\omega_{SM}$ (left scale) and its dielectric strength $\Delta\varepsilon_{SM}$ are shown in Fig.~\ref{Fig5}. It is mainly this mode that causes the increase in $\varepsilon^\prime$ on cooling (see Fig.~\ref{Fig1}). In the case of uncoupled phonons the oscillator strength $f_{j}=\Delta\varepsilon_{j}\,\omega_{TOj}^2$ of each phonon is roughly temperature independent, so that each softening of a phonon frequency $\omega_{TOj}$ is connected with an increase of its dielectric strength $\Delta\varepsilon_{j}$ given by:\cite{gervais83} \begin{equation} \Delta\varepsilon_{j} = \varepsilon_{\infty}\omega^{-2}_{TOj}\frac{\prod_{k}\left(\omega^{2}_{LOk}-\omega^{2}_{TOj}\right)} {\prod_{k\neq j}\left(\omega^{2}_{TOk}-\omega^{2}_{TOj}\right)}. \label{eq:sila} \end{equation} In our case the soft-mode oscillator strength $f_{SM}$ is temperature dependent; it roughly doubles on cooling, from 3.3$\times$10$^{4}$ to 6.5$\times$10$^{4}$ cm$^{-2}$. Assuming that $\sum f_{j}=const$, this indicates that the soft mode is coupled with some higher-frequency mode. However, it is difficult to reveal with which mode the soft mode is coupled, because the oscillator strengths of most high-frequency modes are much higher than $f_{SM}$, and relatively small changes (below the limit of our fitting accuracy) in these modes could explain the increase of $f_{SM}$ on cooling. \begin{figure} \begin{center} \includegraphics[width=84mm]{Fig4.eps} \end{center} \caption{(color online) Complex dielectric spectra from the fit to IR reflectivity and THz dielectric spectra at various temperatures.
To show the mode softening more clearly, the inset displays the $\varepsilon$''($\omega$) spectra in the spectral range below 200\ensuremath{\,\mbox{cm}^{-1}}.} \label{Fig4} \end{figure} \begin{figure} \begin{center} \includegraphics[width=80mm]{Fig5.eps} \end{center} \caption{(color online) Temperature dependence of the low-frequency soft mode $\omega_{SM}$ (left scale) and its dielectric strength (right scale). The dashed black line and solid red line show the results of the Cochran and Minaki fits of the soft-mode frequency, respectively (see the text). } \label{Fig5} \end{figure} The static permittivity obtained from the fit of the IR reflectivity is defined as \begin{equation}\label{statics} \varepsilon(0)=\sum_{j}\Delta\varepsilon_{j}+\varepsilon_{\infty} \end{equation} and is plotted in Fig.~\ref{Fig1} as red solid dots. The values of $\varepsilon$(0) are slightly lower (by about 20) than the experimental values obtained at and below the microwave range. However, we believe that the disagreement is due to experimental inaccuracy rather than to real dielectric dispersion above the GHz range. We therefore conclude that the temperature dependence of the permittivity below 10\,GHz in Fig.~\ref{Fig1} is essentially due to anomalous polar phonons. \begin{table*}[!htbp] \caption{Parameters of the polar phonon modes in the pyrochlore PMN obtained from the fit of IR and THz spectra at 20 and 300 K. Frequencies $\omega_{TOj}$, $\omega_{LOj}$ and dampings $\gamma_{TOj}$, $\gamma_{LOj}$ of the modes are in \ensuremath{\mbox{cm}^{-1}}, $\Delta \varepsilon_{j}$ is dimensionless, $\varepsilon_{\infty}$=5.87.
} \begin{tabular}{|r | c c c c c||c c c c c |}\hline &\multicolumn{5}{c||}{20 K}&\multicolumn{5}{c|}{300 K}\\ \hline No&$\hspace{0.2cm} \omega_{TOj} \hspace{0.2cm}$&\hspace{0.2cm} $\gamma_{TOj}$ \hspace{0.2cm}&\hspace{0.2cm} $\omega_{LOj}$ \hspace{0.2cm}&\hspace{0.2cm} $\gamma_{LOj}$ \hspace{0.2cm}&\hspace{0.2cm} $\Delta \varepsilon_{j}$ \hspace{0.2cm}& $\hspace{0.2cm} \omega_{TOj} \hspace{0.2cm}$&\hspace{0.2cm} $\gamma_{TOj}$ \hspace{0.2cm}&\hspace{0.2cm} $\omega_{LOj}$ \hspace{0.2cm}&\hspace{0.2cm} $\gamma_{LOj}$ \hspace{0.2cm}&\hspace{0.2cm} $\Delta \varepsilon_{j}$ \hspace{0.2cm} \\ \hline 1&24.1&19.4&32.5&31.5&110.8&33.2 &22.6 &37.7 &30.5 &33.7\\ 2&55.8 &16.1 &63.9 &17.7 &36.6 &56.9 &13.5 &63.2 &16.9 &23.9\\ 3&71.7 &13.4 &80.1 &14.5 &12.0 &71.9 &12.3 &79.7 &18.1 &10.8\\ 4&125.9 &36.1 &175.8 &28.4 &46.1&131.9 &42.9 &182.3 &40.6 &40.8\\ 5&179.9 &28.0 &212.0 &68.7 &1.8 &186.7 &36.2 &215.3 &56.3 &1.6\\ 6&216.7 &49.2 &257.3 &23.2 &0.8 &217.6 &41.6 &260.1 &40.2 &0.4\\ 7&271.2 &26.0 &277.3 &22.0 &0.26&&&&&\\ 8&293.5 &23.7 &296.0 &22.0 &0.2&&&&&\\ 9&329.7 &29.5 &339.8 &30.7 &0.3&&&&&\\ 10&348.9 &23.7 &388.1 &49.2 &2.0&338.8 &41.1 &383.5 &48.9 &4.3\\ 11&389.1 &36.6 &429.5 &20.6 &0.06&386.5 &40.8 &427.7 &32.4 &0.2\\ 12&449.6 &28.1 &464.1 &33.6 &0.3&448.4 &31.7 &456.3 &35.0 &0.2\\ 13&490.9 &19.9 &547.6 &59.5 &0.9&486.3 &41.9 &530.7 &116.5 &1.0\\ 14&547.8 &43.4 &551.2 &55.6 &0.001&553.1 &120 &586.1 &120.4 &0.4\\ 15&562.6 &44.8 &575.1 &70.7 &0.9&&&&&\\ 16&598.9 &70.5 &703.6 &46.8 &0.4&602.6 &104.9 &703.6 &47.6 &0.3\\ 17&862.7 &80.1 &863.0 &69.1 &0.002&862.7 &80.1 &863 &69.1 &0.002\\ \hline \hline \end{tabular} \label{IRmodes} \end{table*} The temperature dependence of the lowest-mode frequency below 35\ensuremath{\,\mbox{cm}^{-1}}\, was fitted with the Cochran law \begin{equation}\label{Cochran} \omega_{SM}(T)=\sqrt{A(T-T_{cr})} \end{equation} where A is a constant and T$_{cr}$ is the critical softening temperature.
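Since Eq.~(\ref{Cochran}) implies $\omega_{SM}^2 = A\,(T-T_{cr})$, the Cochran fit reduces to a straight line in the $(T, \omega_{SM}^2)$ plane. A minimal sketch, using synthetic soft-mode points generated from the reported fit parameters rather than the measured data:

```python
import numpy as np

# Squaring the Cochran law makes it linear in T: w_SM^2 = A*(T - T_cr).
# The "data" below are synthetic points generated from the reported fit
# values A = 1.86 cm^-2 K^-1, T_cr = -285 K (not the measured points).
A_true, T_cr_true = 1.86, -285.0
T = np.array([100.0, 150.0, 200.0, 250.0, 300.0])
w_sm = np.sqrt(A_true * (T - T_cr_true))

# Linear least squares on (T, w_SM^2): slope = A, intercept = -A*T_cr.
A_fit, intercept = np.polyfit(T, w_sm**2, 1)
T_cr_fit = -intercept / A_fit
```

With noiseless synthetic points the fit recovers the input parameters exactly; on real data the same linearization gives the least-squares estimate of $A$ and $T_{cr}$.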
From the fit we obtained A = (1.86$\pm$0.01) cm$^{-2}$ K$^{-1}$ and T$_{cr}$ = (-285$\pm$11)\ensuremath{\,\mbox{K}}. Pyrochlore PMN would thus reach the ferroelectric instability only at negative temperatures. The theoretical critical temperature can also be obtained from a fit of the temperature dependence of the permittivity $\varepsilon$(T), which (for classical paraelectrics) should follow the Curie-Weiss law \begin{equation}\label{CW} \varepsilon^\prime=\varepsilon_{cw \infty}+\frac{C}{T-T_{cw}}. \end{equation} The result of the Curie-Weiss fit is shown in Fig.~\ref{Fig6} as a dashed line, with the following fit parameters: $\varepsilon_{cw \infty}=$43$\pm$1, Curie-Weiss constant $C$=(47800$\pm500$)\ensuremath{\,\mbox{K}}\, and critical temperature $T_{cw}=$(-202$\pm$8)\ensuremath{\,\mbox{K}}. The Cochran critical temperature T$_{cr}=$-285\ensuremath{\,\mbox{K}}\, is somewhat lower than the Curie-Weiss critical temperature T$_{cw}$, but considering that the extrapolated critical temperatures lie far below the investigated temperature range, the agreement between the two values is reasonable. \begin{figure} \begin{center} \includegraphics[width=80mm]{Fig6.eps} \end{center} \caption{(color online) The temperature dependence of the experimental permittivity at 500\,kHz and results of the Curie-Weiss fit (dashed green line) and the fit with the Barrett formula (red solid line). Note the logarithmic temperature scale.} \label{Fig6} \end{figure} The Curie-Weiss fit in Fig.~\ref{Fig6} deviates from the experimental data below $\sim$ 50\ensuremath{\,\mbox{K}}, because $\varepsilon$'(T) levels off at low temperatures.
Similar behavior was observed in incipient ferroelectrics such as SrTiO$_{3}$, KTaO$_{3}$ or CaTiO$_{3}$, where the polar soft mode is also responsible for the observed $\varepsilon$'(T).\cite{samara01,kvyatkovskii01} The soft mode does not soften completely; it levels off at low temperatures (typically below 30\ensuremath{\,\mbox{K}}), mainly due to zero-temperature vibrations of the light oxygen ions, which prevent the formation of long-range ferroelectric order and a permittivity divergence at low temperatures. Due to this phenomenon the incipient ferroelectrics are also called quantum paraelectrics.\cite{muller79} Note that pyrochlore PMN is the first quantum paraelectric with the pyrochlore crystal structure. The leveling-off of the low-temperature permittivity in incipient ferroelectrics was explained by Barrett\cite{barrett52} already in the early 1950s. He derived the formula \begin{equation}\label{Barrett} \varepsilon'(T)=\frac{M}{\frac{T_{1}}{2}\coth(\frac{T_{1}}{2T})-T_{0}}+\varepsilon_{B\infty}, \end{equation} where M is a constant, $T_{1}$ is the temperature below which quantum fluctuations start to play a role ($\frac{1}{2}k_{B}T_{1}$ is the zero-point vibration energy\cite{kleemann07}) and $\varepsilon_{B\infty}$ marks the temperature-independent part of the permittivity (this term was neglected in Ref. \cite{barrett52}, because it was very small in comparison to the huge low-temperature $\varepsilon$' in SrTiO$_{3}$ and KTaO$_{3}$). Fitting $\varepsilon$'(T) with the Barrett formula in Eq.~(\ref{Barrett}) yields very good agreement with the experimental data (see Fig.~\ref{Fig6}). We found $M$=(4.25$\pm$0.01)$\times$10$^{4}$\ensuremath{\,\mbox{K}}, $T_{1}$=(96$\pm$8)\ensuremath{\,\mbox{K}}, $T_{0}$=(-167$\pm$9)\ensuremath{\,\mbox{K}}\, and $\varepsilon_{B\infty}=$ 47$\pm$0.5. For $T\gg T_{1}$, $\frac{T_{1}}{2}\coth(\frac{T_{1}}{2T})$ asymptotically approaches $T$ and Eq.~(\ref{Barrett}) becomes a Curie-Weiss law.
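The Barrett formula, Eq.~(\ref{Barrett}), is straightforward to evaluate numerically; the sketch below uses the fit parameters quoted above and checks the two limits discussed in the text (low-temperature saturation and the high-temperature Curie-Weiss regime).

```python
import numpy as np

# Barrett formula with the fit parameters from the text:
# M = 4.25e4 K, T1 = 96 K, T0 = -167 K, eps_Binf = 47.
M, T1, T0, eps_binf = 4.25e4, 96.0, -167.0, 47.0

def eps_barrett(T):
    # coth(x) = 1/tanh(x)
    return M / (0.5 * T1 / np.tanh(0.5 * T1 / T) - T0) + eps_binf

# T -> 0: coth -> 1, so the permittivity saturates at a finite value
# instead of diverging (quantum paraelectric behavior).
eps_saturated = M / (0.5 * T1 - T0) + eps_binf

def eps_curie_weiss(T):
    # High-temperature limit: (T1/2)*coth(T1/2T) -> T.
    return M / (T - T0) + eps_binf
```

The saturation value $M/(T_1/2 - T_0)+\varepsilon_{B\infty}$ is finite precisely because $T_0<T_1/2$ here, which is the quantitative statement of the leveling-off seen in Fig.~\ref{Fig6}.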
Therefore also $\varepsilon_{B\infty}=$ 47 is close to $\varepsilon_{CW \infty}$ = 43. It is worth noting that the zero-point vibration frequency $\frac{1}{2}k_{B}T_{1}=$1\,THz=33\ensuremath{\,\mbox{cm}^{-1}}\, corresponds very well to the soft-mode frequency (see Fig.~\ref{Fig5}). Note also that the $T_{1}$ parameter as well as the soft-mode frequency in the pyrochlore PMN qualitatively agree with the analogous parameters obtained for SrTiO$_{3}$ and SrTi$^{18}$O$_{3}$, although the values of the permittivity in these materials are two orders of magnitude higher,\cite{barrett52,kleemann07,filipic06} which is the consequence of the different $M$ parameters. In the only published dielectric data below 400\,kHz, Shrout and Swartz\cite{shrout83} observed a similar, essentially dispersionless increase in $\varepsilon$' on cooling down to 50\ensuremath{\,\mbox{K}}, as we did in Fig.~\ref{Fig1}. We believe that the small dispersion below 50\ensuremath{\,\mbox{K}}\, and the small decrease in $\varepsilon$' below $\sim$ 30\ensuremath{\,\mbox{K}}\, observed by Shrout and Swartz\cite{shrout83} could be due to some defects (e.g. vacancies) in the crystal lattice of pyrochlore Pb$_{1.83}$Mg$_{0.29}$Nb$_{1.71}$O$_{6.39}$, which is substantially non-stoichiometric. It is clear that the soft-mode frequency cannot follow the Cochran law (Eq.~\ref{Cochran}) in the presence of low-temperature quantum fluctuations. The correct low-temperature dependence of the soft-mode frequency, derived from the Barrett formula for the permittivity, can be found e.g. in the paper of Minaki et al.:\cite{minaki03} \begin{equation}\label{Minaki} \omega_{SM}(T)=\sqrt{A\Big[\frac{T_{1}}{2}\coth\Big(\frac{T_{1}}{2T}\Big)-T_{0}\Big]}, \end{equation} where A is a constant and $T_{1}$ and $T_{0}$ have the same meaning as in Eq.~(\ref{Barrett}).
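A numerical sketch of Eq.~(\ref{Minaki}), evaluated with the fit parameters reported in the following paragraph (A = 2.03 cm$^{-2}$K$^{-1}$, $T_1$ = 96 K, $T_0$ = -240 K), illustrates the low-temperature leveling-off of the soft mode in contrast to the classical Cochran form:

```python
import numpy as np

# Minaki soft-mode formula vs. the classical Cochran law, both evaluated
# with the same parameters: A = 2.03 cm^-2 K^-1, T1 = 96 K, T0 = -240 K.
A, T1, T0 = 2.03, 96.0, -240.0

def w_minaki(T):
    return np.sqrt(A * (0.5 * T1 / np.tanh(0.5 * T1 / T) - T0))

def w_cochran(T):
    return np.sqrt(A * (T - T0))  # classical limit of the same parameters

# At high T the two forms coincide; at low T the Minaki frequency
# saturates at sqrt(A*(T1/2 - T0)) instead of following Cochran down.
w_saturated = np.sqrt(A * (0.5 * T1 - T0))
```

With these parameters the saturation value is about 24 cm$^{-1}$, consistent with the 20 K TO1 frequency of 24.1 cm$^{-1}$ in Table I.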
Note that Eq.~(\ref{Minaki}) follows from Eq.~(\ref{Barrett}) and the Lyddane-Sachs-Teller relation under the assumption that the temperature dependence of the static permittivity is caused only by the softening of the soft TO mode. The result of the soft-mode fit with Eq.~(\ref{Minaki}) is shown by the solid line in Fig.~\ref{Fig5}, where one can clearly see the leveling-off of the soft-mode frequency at low temperatures (unlike the Cochran fit). The fit parameters are the following: A = (2.03$\pm$0.05) cm$^{-2}$K$^{-1}$, T$_{1}$ = (96$\pm$9)\ensuremath{\,\mbox{K}}\, and T$_{0}$ = (-240$\pm$11)\ensuremath{\,\mbox{K}}. The fit parameters could be significantly improved if we had more points in Fig.~\ref{Fig5}, especially below 50\ensuremath{\,\mbox{K}}. Nevertheless, one can see reasonable agreement of both the Cochran and Barrett fits above 50\ensuremath{\,\mbox{K}}, as well as of the T$_{0}$ and T$_{1}$ parameters obtained from the Barrett fit of the permittivity (Eq.~\ref{Barrett}) and the Minaki model of the soft-mode frequency (Eq.~\ref{Minaki}). Let us compare the IR reflectivity spectra of pyrochlore PMN (Fig.~\ref{Fig3}) with those of perovskite PMN. The latter was first published by Burns and Dacol\cite{burns83} together with the temperature dependence of the optical index of refraction \textit{n}(T), which shows a deviation from the linear dependence below $\sim$ 600\ensuremath{\,\mbox{K}}. They explained the unusual \textit{n}(T) dependence by the formation of polar nanoregions. Surprisingly, the published IR spectrum of the perovskite sample\cite{burns83} corresponds to our pyrochlore spectrum in Fig.~\ref{Fig3}. In later papers of other authors (Refs. \cite{karamyan77,reaney94,prosandeev04,hlinka06}) the mutually similar (but different from Burns and Dacol's spectra\cite{burns83}) IR reflectivity spectra of perovskite PMN consist of three distinct reflection bands typical for all cubic perovskite oxides. We stress that the infrared spectra in Refs.
\cite{karamyan77,reaney94,prosandeev04,hlinka06} were obtained independently on ceramics, single crystals, and thin films. It appears that the IR spectrum by Burns and Dacol\cite{burns83} belongs to pyrochlore PMN. The rest of their data (\textit{n}(T) and \textit{P}(T)) were obviously obtained on perovskite PMN, because the peculiarities near the Burns temperature in the perovskite PMN were later confirmed in many experiments. It is of interest to compare the number of observed polar modes in the reflectivity spectra with the prediction of factor-group analysis: pyrochlore PMN crystallizes in the $Fd3m$ space group with 8 formula units per conventional unit cell,\cite{wakiya93} i.e. 2 formula units per primitive unit cell. This means that, in total, 66 lattice vibrational branches are expected. Pb ions are in 16d positions while Mg and Nb ions are in 16c positions,\cite{wakiya93} both sites having $D_{3d}$ symmetry, while the O anions are in positions 48f and 8b of $C_{2v}^{d}$ and $T_{d}$ symmetry, respectively. The mode symmetries and their activities in IR and Raman spectra can be obtained using standard tables\cite{rousseau81} with the following result for the $\Gamma$-point of the Brillouin zone (factor-group analysis): \begin{eqnarray} \Gamma_{Fd3m} = 3A_{2u}(-)+ 3E_{u}(-)+8F_{1u}(x)+4F_{2u} \nonumber \\ +4F_{2g}(xy,yz,xz)+A_{1g}(x^{2},y^{2},z^{2})\nonumber\\ +E_{g}(x^{2}+y^{2}-2z^{2},\sqrt{3}x^{2}-\sqrt{3}y^{2})+2F_{1g}. \label{eq:pyro1} \end{eqnarray} This means that, after subtraction of the 1$F_{1u}$ acoustic mode, 7$F_{1u}$ modes should be IR active; 4$F_{2g}$, 1$A_{1g}$ and 1$E_{g}$ modes should be Raman active; and the rest of the modes are silent. Table I shows that our fit of the IR spectra required 17 modes, many more than expected. The analysis in Eq.~\ref{eq:pyro1} assumes one effective ion in the 16c positions instead of statistically distributed Mg and Nb ions.
Since these ions strongly differ in mass, one can expect a splitting of the modes in which they take part. If we take into account different Mg and Nb vibrations, the factor-group analysis yields: \begin{eqnarray} \Gamma_{Fd3m} = 4A_{2u}(-)+ 4E_{u}(-)+10F_{1u}(x)+5F_{2u} \nonumber \\ +4F_{2g}(xy,yz,xz)+A_{1g}(x^{2},y^{2},z^{2})\nonumber\\ +E_{g}(x^{2}+y^{2}-2z^{2},\sqrt{3}x^{2}-\sqrt{3}y^{2})+2F_{1g}. \label{eq:pyro2} \end{eqnarray} In this case 9 polar $F_{1u}$ modes are expected, which is still fewer than the 17 modes observed at low temperature (see Table I). If we assume that the Pb cations and some of the oxygen anions are dynamically disordered among several equivalent positions, with the average structure remaining cubic as in isostructural Bi$_{1.5}$ZnNb$_{1.5}$O$_{7}$,\cite{levin02} then 14 modes could be IR active,\cite{kamba02} close to our experimental result. The problem of excess IR-active modes is also known from the perovskite PMN, where only 4$F_{1u}$ polar modes are allowed in the $Fm\bar{3}m$ structure, although 7 modes were observed.\cite{prosandeev04} The excess modes in the perovskite PMN were explained by polar clusters, which locally break the cubic symmetry into a rhombohedral one.\cite{hlinka06} In the case of the pyrochlore structure a similar local symmetry breaking, if present, should probably be non-polar, because there is no indication of the existence of polar clusters. There is also no dielectric dispersion below the polar phonon range, in contrast to the PMN perovskite, where a huge dielectric dispersion appears just due to polar cluster dynamics.\cite{bovtun06,viehland} It therefore appears that new structural studies of pyrochlore PMN are needed to detect either a dynamical disorder of some atoms in the lattice or a non-cubic crystal symmetry. \section{Conclusion} Our dielectric studies of pyrochlore PMN indicate quantum paraelectric behavior, i.e.
the permittivity increases on cooling, following the Barrett formula in the whole investigated temperature range. The permittivity shows no dispersion up to the microwave range, and its temperature dependence can be explained by the softening of polar optic modes. The zero-point vibrational energy $\frac{1}{2}k_{B}T_{1}=$1\,THz obtained from the Barrett formula corresponds very well to the soft-mode frequency observed near 30\ensuremath{\,\mbox{cm}^{-1}}. It is worth noting that pyrochlore PMN is the first quantum paraelectric with the pyrochlore crystal structure. Our IR spectrum of pyrochlore PMN corresponds to the IR spectrum published by Burns and Dacol\cite{burns83} in their paper on perovskite PMN. By comparing it with the later results,\cite{karamyan77,reaney94,prosandeev04,hlinka06} it becomes clear that Burns and Dacol published (by mistake) the IR spectrum of the pyrochlore PMN. Since the IR experiment revealed more modes than expected from the factor-group analysis, we suggest that the structure should have, at least locally, a lower symmetry than the cubic one. \begin{acknowledgments} The work has been supported by the Grant Agency of the Czech Republic (Project No. 202/06/0403) and AVOZ10100520. \end{acknowledgments}
\section{Introduction} Unsupervised generation of data is a dynamic area of machine learning and a very active research frontier in areas ranging from language processing and music generation to materials and drug discovery. In any of these fields, it is often advantageous to guide the generative model towards some desirable characteristics, while ensuring that the samples resemble the initial distribution. In music generation, for example, it might be expected that pleasant melodic patterns prevail over more dissonant ones \cite{SequenceTutor}. In natural language processing, a given sentiment might be emphasized, for example when producing movie reviews \cite{radford2017learning}. Finally, in materials discovery, the aim is often to optimize some properties for a particular application, for example in organic solar cells \cite{Hachmann2011}, OLEDs \cite{Gomez-Bombarelli2016a} or new drugs. The generation of discrete data using Recurrent Neural Networks (RNNs), in particular Long Short-Term Memory (LSTM) cells \cite{LSTM} trained by maximum likelihood estimation, has been shown to work well in practice. However, this approach often suffers from the so-called \textit{exposure bias}, and the generated data might lack some of the multi-scale structures or salient features of the data. Meanwhile, Generative Adversarial Networks (GANs) \cite{Ian14} take an approach in which a generative model competes against a discriminative model, the first trying to generate likely data while the second tries to distinguish generated from real data. GANs have shown remarkable results at generating data that imitates a data distribution; however, they can suffer from several issues, among them mode collapse \cite{Arjovsky2017}, where the generator learns to produce samples with low variety. Although GANs were not initially applicable to discrete data due to non-differentiability, approaches such as SeqGAN \cite{SeqGAN}, MaliGAN \cite{Che2017} and BGAN \cite{Hjelm2017} have arisen to deal with this issue.
Furthermore, methods from Reinforcement Learning (RL) have shown great success at solving problems where continuous feedback from an environment is needed \cite{Hjelm2017}. In this paper, we introduce a novel approach to optimize the properties of a distribution of sequences and increase the diversity of the samples while maintaining the likeness of the data distribution. In our approach, the generator is trained to maximize a weighted average of two types of rewards: the \textit{objective}, domain-specific metrics, and the \textit{discriminator}, which is trained along with the generator in an adversarial fashion. While the objective component of the reward function ensures that the model selects for traits that maximize the specified heuristic, the discriminator incentivizes the samples to stay within the boundaries of the initial data distribution. Diversity is additionally promoted by reducing the rewards of non-unique and less diverse sequences. In order to implement the above idea, we build on SeqGAN, a recent work that successfully combines GANs and RL to apply the GAN framework to sequential data \cite{SeqGAN}, and extend it with domain-specific rewards. To increase the stability of the adversarial training, we test Wasserstein GANs \cite{Arjovsky2017a} in this framework. We test our model in the context of molecule and music generation, optimizing several domain-specific metrics. Our results show that ORGAN is able to tune the quality and structure of samples. We compare our results with maximum likelihood estimation (MLE), SeqGAN and a naive RL approach. \section{Related work} Previous work has relied on specific modifications of the objective function to reach the desired properties. For example, \cite{SequenceTutor} introduce penalties for unrealistic sequences, in the absence of which RL can easily get stuck around local maxima that can be very far from the global maximum reward.
Related applications by \cite{ranzato2015sequence} and \cite{li2016deep} apply reinforcement learning to sequence generation in an NLP setting. In the last two years, many methodologies have been proposed for \textit{de novo} molecular generation. \cite{ertl2017silico} and \cite{segler2017generating} trained recurrent neural networks to generate drug-like molecules. \cite{HIPSVAE} employed a variational autoencoder to build a latent, continuous space where property optimization can be performed through surrogate optimization. Finally, \cite{kadurin2017drugan} presented a GAN model for drug generation. Additionally, the approach presented in this paper has recently been applied to molecular design \cite{Sanchez-Lengeling2017}. In the field of music generation, \cite{lee2017seqgan} built a SeqGAN model employing an efficient representation of multi-channel MIDI to generate polyphonic music. \cite{chen2017learning} presented Fusion GAN, a dual-learning GAN model that can fuse two data distributions. \cite{jaques2017tuning} employ deep Q-learning with a cross-entropy reward to optimize the quality of melodies generated from an RNN. In adversarial training, \cite{Pfau2016} recontextualizes GANs in the actor-critic setting. This connection is also explored with the Wasserstein-1 distance in WGANs \cite{Arjovsky2017a}. Minibatch discrimination and feature mapping were used to promote diversity in GANs \cite{Ian16}. Another approach to avoid mode collapse was shown with Unrolled GANs \cite{Metz2016}. Issues and convergence of GANs have been studied in \cite{Mescheder2017}. \section{Background} In this section, we elaborate on the GAN and RL setting based on SeqGAN \cite{SeqGAN}. $G_\theta$ is a generator parametrized by $\theta$, trained to produce high-quality sequences $Y_{1:T} = (y_1, ..., y_T)$ of length $T$, and $D_\phi$ is a discriminator model parametrized by $\phi$, trained to classify real and generated sequences.
$G_\theta$ is trained to deceive $D_\phi$, and $D_\phi$ to classify correctly. Both models are trained in alternation, following a minimax game: \begin{equation} \min_{\theta} \max_{\phi} \mathbb{E}_{Y \sim p_{\text{data}}(Y)} \left[\log D_\phi(Y) \right] + \mathbb{E}_{Y\sim p_{G_\theta}(Y)} \left[\log (1 - D_\phi(Y)) \right] \end{equation} For discrete data, the sampling process is not differentiable. However, $G_\theta$ can be trained as an agent in a reinforcement learning context using the REINFORCE algorithm \cite{REINFORCE}. Let $R(Y_{1:T})$ be the reward function defined for full-length sequences. Given an incomplete sequence $Y_{1:t}$, also referred to as state $s_t$, $G_\theta$ must produce an action $a$, which is the next token $y_{t+1}$. The agent's stochastic policy is given by $G_\theta(y_t | Y_{1:t-1})$ and we wish to maximize its expected long-term reward \begin{equation} J(\theta) = E[R(Y_{1:T}) | s_0, \theta] = \sum_{y_1 \in Y} G_\theta(y_1 | s_0) \cdot Q(s_0, y_1) \end{equation} where $s_0$ is a fixed initial state. $Q(s, a)$ is the action-value function that represents the expected reward at state $s$ of taking action $a$ and following our current policy $G_\theta$ to complete the rest of the sequence. For any full sequence $Y_{1:T}$, we have $Q(s = Y_{1:T-1}, a = y_T) = R(Y_{1:T})$, but we also wish to calculate $Q$ for partial sequences at intermediate timesteps, considering the expected future reward when the sequence is completed. In order to do so, we perform an $N$-time Monte Carlo search with the canonical rollout policy $G_\theta$, represented as \begin{equation} \text{MC}^{G_\theta}(Y_{1:t};N) = \{Y^1_{1:T}, ..., Y^N_{1:T}\} \end{equation} where $Y^n_{1:t} = Y_{1:t}$ and $Y^n_{t+1:T}$ is stochastically sampled via the policy $G_\theta$.
Now $Q(s,a)$ becomes \begin{equation} Q(Y_{1:t-1}, y_t) = \begin{cases} \frac{1}{N} \underset{n=1..N}{\sum} R(Y^n_{1:T}), \text{with} \\ Y^n_{1:T} \in \text{MC}^{G_\theta}(Y_{1:t}; N), & \text{if $t < T$}.\\ R(Y_{1:T}), & \text{if $t = T$}. \end{cases} \end{equation} An unbiased estimate of the gradient of $J(\theta)$ can be derived as \begin{multline} \nabla_\theta J(\theta) \simeq \frac{1}{T} \sum_{t = 1,...,T} \mathbb{E}_{y_t \sim G_{\theta}(y_t | Y_{1:t-1})} [ \\ \nabla_\theta \log G_{\theta}(y_t | Y_{1:t-1}) \cdot Q(Y_{1:t-1}, y_t) ] \end{multline} Finally, in SeqGAN the reward function is provided by $D_\phi$. \section{ORGAN} \begin{figure}[h!] \includegraphics[width=\columnwidth]{ORGAN_schema.png} \caption[Schema for ORGAN]{Schema for ORGAN. \textit{Left}: $D$ is trained as a classifier receiving as input a mix of real data and data generated by $G$. \textit{Right}: $G$ is trained by RL where the reward is a combination of $D$ and the objectives, and is passed back to the policy function via Monte Carlo sampling. We penalize non-unique sequences. \label{fig:Schema_ORGAN}} \end{figure} Figure \ref{fig:Schema_ORGAN} illustrates the main idea of ORGAN. To take into account domain-specific desired objectives $O_i$, we extend the reward function for a particular sequence $Y_{1:T}$ to a linear combination of $D_\phi$ and $O_i$, parametrized by $\lambda$: \begin{equation} R(Y_{1:T}) = \lambda \cdot D_\phi(Y_{1:T}) + (1 - \lambda) \cdot O_i(Y_{1:T}) \end{equation} If $\lambda = 0$ the model ignores $D$ and becomes a ``naive'' RL algorithm, whereas if $\lambda = 1$ it is simply a SeqGAN model. It should be noted that, if desired, the objective function can vary with the current iteration of adversarial training, leading to alternating rewards between several objectives and the discriminator. An additional mechanism to prevent mode collapse is to penalize non-unique sequences by dividing the reward of a repeated sequence by the number of its copies.
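A minimal sketch of this combined reward together with the repetition penalty just described; `discriminator` and `objective` are toy stand-ins for the trained $D_\phi$ and a domain metric $O_i$, not the actual models:

```python
from collections import Counter

def organ_rewards(sequences, discriminator, objective, lam=0.5):
    """R(Y) = lam*D(Y) + (1-lam)*O(Y), divided by the sequence's
    multiplicity in the batch to penalize non-unique samples."""
    counts = Counter(sequences)
    return [
        (lam * discriminator(y) + (1.0 - lam) * objective(y)) / counts[y]
        for y in sequences
    ]

# Toy stand-ins: D likes long strings, O rewards the fraction of 'C's.
disc = lambda y: min(len(y) / 10.0, 1.0)
obj = lambda y: y.count("C") / len(y)
rewards = organ_rewards(["CCO", "CCO", "CCCCC"], disc, obj, lam=0.5)
```

Each copy of the repeated sequence "CCO" receives half the reward a unique occurrence would get, so the expected return of sampling it again shrinks with every duplicate in the batch.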
The more a sequence is repeated, the more its reward diminishes. Alternatively, domain-specific similarity metrics could be used for the penalization. To improve the stability of learning and avoid GAN convergence problems such as a ``perfect discriminator'', we also implemented the Wasserstein-1 distance $W$, also known as the earth mover's distance, for $D_\phi$ \cite{Arjovsky2017a}. Although the computation of this distance is intractable due to an infimum, it can be transformed via the Kantorovich-Rubinstein duality: $$ W(p_{data},p_{G}) =\frac{1}{K} \sup_{\|D \|_{L} \le K} \mathbb{E}_{Y \sim p_{\text{data}}}[D(Y)] - \mathbb{E}_{Y \sim p_{G}}[D(Y)] $$ where the supremum is taken over all K-Lipschitz functions $D$. Under $W$, $D$ is no longer meant to classify data samples; instead, it is trained to learn $\phi$ such that $D_\phi$ is K-Lipschitz continuous and can be used to compute the Wasserstein distance, which is, intuitively, the cost of moving the generated distribution onto the data distribution. In this context, $D$ can be considered a critic in an actor-critic setting. \subsection{Implementation Details} $G_\theta$ is an RNN with LSTM cells, while $D_\phi$ is a Convolutional Neural Network (CNN) designed specifically for text classification tasks \cite{CNN}. To avoid over-fitting with the CNN, we optimized its architecture on a classification task between different datasets for each experiment. In the molecule generation task, we utilized a set of drug-like and non-drug-like molecules from the ZINC database \cite{ZINC}. In the music task, we discriminated between a set of folk and videogame tunes scraped from the internet. We utilize a dropout layer at $75\%$ and also $L2$ regularization on the network weights. All gradient descent steps are done using the Adam algorithm \cite{Adam}. Molecular metrics are implemented using the RDKit chemoinformatics package \cite{Landrum}. Music metrics employ the MIDI frequencies.
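The Wasserstein critic objective described above can be illustrated with a deliberately tiny example: a linear critic on fixed-length feature vectors, with the Lipschitz constraint enforced by weight clipping as in the original WGAN paper. This illustrates only the training signal, not the CNN critic actually used in ORGAN:

```python
import numpy as np

# Maximize E_data[D(Y)] - E_G[D(Y)] over (approximately) Lipschitz
# critics D.  Here D(y) = w.y is linear and the constraint is enforced
# by clipping the weights, as in the original WGAN formulation.
rng = np.random.default_rng(0)
real = rng.normal(loc=1.0, size=(256, 8))   # stand-in for data features
fake = rng.normal(loc=0.0, size=(256, 8))   # stand-in for generated ones

w = np.zeros(8)
for _ in range(200):
    grad = real.mean(axis=0) - fake.mean(axis=0)  # gradient of the objective
    w = np.clip(w + 0.05 * grad, -0.01, 0.01)     # weight clipping

# The trained critic's value gap estimates a (scaled) Wasserstein distance.
w1_estimate = real.mean(axis=0) @ w - fake.mean(axis=0) @ w
```

The critic's output gap grows until the clipping constraint binds, which is why the value gap, rather than a classification accuracy, serves as the training signal for $G_\theta$.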
The code for ORGAN, including metrics for each experiment, can be found at \url{http://github.com/gablg1/ORGAN}\footnote{Repo soon to be updated (May'18)}. \section{Experimental results} In this section, we test the performance of ORGAN in two scenarios: the generation of molecules encoded as text sequences and of musical melodies. Our objective is to show that ORGAN can generate samples that fulfill some desired objectives while promoting diversity. For ease of interpretation, the range of each objective has been mapped to the $[0,1]$ range, where $0$ corresponds to an undesirable property and $1$ to a very desirable property. To measure diversity we use domain-specific measures. In both fields there are multiple ways of quantifying the notion of diversity, so we opted for the more widely used metrics. We compare ORGAN and its Wasserstein variant ($W$) with three other methods of training RNNs: SeqGAN, Naive RL, and Maximum Likelihood Estimation (MLE). Unless specified, $\lambda$ is assumed to be 0.5. All training methods involve a pre-training step of 250 epochs of MLE for $G_\theta$, and 10 epochs for $D_\phi$. The MLE baseline simply stops right after pre-training, while the other methods proceed to further train the model using the different approaches, for up to 100 epochs. For each dataset, we first build a dictionary mapping the vocabulary - the set of all characters present in the dataset - to integers. The dataset is then preprocessed by transforming each sequence into a fixed-size integer sequence of length $N$, where $N$ is the maximum length of a string present in the dataset (in the case of molecules, with around $10\%$ more characters added to increase flexibility and allow generation of larger samples). Every string with a length smaller than $N$ is padded with ``\_" characters.
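The preprocessing just described can be sketched as follows. The pad token ``\_" and the $\approx 10\%$ headroom follow the text; the function names are our own, for illustration.

```python
def build_vocab(dataset):
    # Map every character in the dataset (plus the pad token "_")
    # to a unique integer.
    chars = sorted(set("".join(dataset)) | {"_"})
    return {c: i for i, c in enumerate(chars)}

def encode(dataset, vocab, headroom=0.1):
    # Fixed-size integer sequences: pad every string with "_" up to
    # the maximum training length, plus headroom for longer samples.
    n = int(max(len(s) for s in dataset) * (1 + headroom))
    return [[vocab[c] for c in s.ljust(n, "_")] for s in dataset]
```

The resulting list of equal-length integer sequences is what the generator and discriminator consume.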
Thus the input to our model becomes a list of fixed-size integer sequences. \subsection{Experiment: Molecules} Here we test the effectiveness of ORGAN for generating molecules with desirable properties in a pharmaceutical context of drug discovery. Molecules can be encoded as text sequences by using the SMILES representation \cite{SMILES} of a molecule. This representation encodes the topological information of a molecule based on common chemical bonding rules. For example, the 6-carbon ringed molecule benzene can be encoded as 'C1=CC=CC=C1'. Each C represents a carbon atom, the '=' symbolizes a double bond, and '1' marks the opening and closing of a cycle/ring; hydrogen atoms can be deduced via simple rules. The SMILES representation has predefined grammar rules, and as such, it is possible to have invalid expressions that cannot be decoded back to a valid molecule. Therefore, a desirable property of a generative algorithm is a high percentage of valid expressions. Invalid expressions get penalized. Additionally, we also penalize the generation of duplicate molecules. Recent generative models (\cite{Gomez-Bombarelli2016a},\cite{Kusner2017a}) have reported valid expression rates ranging from $4\%$ up to $80\%$. It should be noted that there are common but uninteresting ways to generate valid expressions by alternating "C" and "O" characters, such as 'CCCCCCCC' and 'COCCCCOC'; the combinatorial possibilities of such permutations are already huge. For training, we utilized a random subset of 5k molecules from the set of 134 thousand stable small molecules \cite{134k}. This is a subset of all molecules with up to nine heavy atoms (CONF) out of the GDB-17 universe of 166 billion organic molecules \cite{134k}. The maximum sequence length is 51 and the alphabet size is 43.
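Full validity checking requires a chemistry toolkit (we use RDKit for the molecular metrics), but a toy necessary condition already illustrates how easily generated SMILES strings become invalid. The heuristic below, which only checks parenthesis balance and paired ring-closure digits, is our own illustrative stand-in and not a real SMILES parser.

```python
def looks_balanced(smiles):
    """Toy necessary (not sufficient) condition for SMILES validity:
    parentheses must match and every single-digit ring-closure label
    must appear an even number of times. A real check would parse the
    string with a chemistry toolkit such as RDKit."""
    depth = 0
    ring_digits = {}
    for ch in smiles:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:
                return False  # closing parenthesis with no opener
        elif ch.isdigit():
            ring_digits[ch] = ring_digits.get(ch, 0) + 1
    return depth == 0 and all(v % 2 == 0 for v in ring_digits.values())
```

For benzene, 'C1=CC=CC=C1', both '1' labels pair up and the check passes; truncating the string leaves an open ring and the check fails.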
When choosing objectives we picked qualities that are normally desired for small-molecule drug discovery: \begin{description} \item[Solubility:] a property that measures how likely a molecule is to mix with water, also known as the water-octanol partition coefficient (LogP). Computed via RDKit's Crippen function \cite{Landrum}. \item[Synthesizability:] estimates how hard (0) or how easy (1) it is to synthesize a given molecule \cite{Ertl2009}. \item[Druglikeness:] how likely a molecule is to be a viable candidate for a drug, an estimate that captures the abstract notion of aesthetics in medicinal chemistry \cite{Bickerton2012a}. This property is correlated with the previous two metrics. \end{description} To estimate the diversity of our generated samples, we utilize the notion of molecular similarity to construct a measure of how similar or dissimilar a molecule is with respect to a dataset. This measure is based on molecular fingerprints and their Jaccard distance \cite{Sanchez-Lengeling2017}. More concretely, Diversity measures the average dissimilarity of a molecule with respect to a set, in this case a random subset of molecules from the training set. A value of 1 indicates the molecule is likely to be considered a diverse member of this set, while 0 indicates it has many repeated sub-structures with respect to the set.
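Treating a fingerprint as the set of substructure bits it sets, the Diversity measure reduces to an average Jaccard distance. A minimal sketch follows; in practice the fingerprints would come from a chemoinformatics package such as RDKit, and the function names here are our own.

```python
def jaccard_distance(fp_a, fp_b):
    """Jaccard (Tanimoto) distance between two fingerprint bit sets."""
    if not fp_a and not fp_b:
        return 0.0
    return 1.0 - len(fp_a & fp_b) / len(fp_a | fp_b)

def diversity(fp, reference_fps):
    """Mean Jaccard distance of one molecule's fingerprint to a
    random subset of training-set fingerprints: 1 = diverse,
    0 = many shared sub-structures."""
    return sum(jaccard_distance(fp, r) for r in reference_fps) / len(reference_fps)
```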
\begin{table*}[!htbp] \centering \begin{tabular}{l l c c r r r r r r} \toprule Objective & Algorithm & Validity (\%) & Diversity & \multicolumn{2}{c}{Druglikeliness} & \multicolumn{2}{c}{Synthesizability} & \multicolumn{2}{c}{Solubility} \\ \midrule & MLE & 75.9 & 0.64 & 0.48 &(0\%) & 0.23 & (0\%) & 0.30 &(0\%) \\ & SeqGAN & 80.3 & 0.61 & 0.49 &(2\%) & 0.25 & (6\%) & 0.31 &(3\%) \\ \midrule Druglikeliness & ORGAN & 88.2 & 0.55 & \cellcolor{gray!25} 0.52 &\cellcolor{gray!25}(8\%) & 0.32 &(38\%) & 0.35 &(18\%) \\ & OR(W)GAN & 85.0 & \textbf{0.95} & \cellcolor{gray!25}\textbf{0.60} &\cellcolor{gray!25}\textbf{(25\%)} & 0.54 &(130\%) & 0.47 &(57\%) \\ & Naive RL & 97.1 & 0.8 & \cellcolor{gray!25} 0.57 &\cellcolor{gray!25} (19\%) & 0.53 &(126\%) & 0.50 &(67\%) \\\midrule Synthesizability & ORGAN & \textbf{96.5} & \textbf{0.92} & 0.51 &(6\%) & \cellcolor{gray!25} \textbf{0.83} & \cellcolor{gray!25}\textbf{(255\%)} & 0.45 &(52\%) \\ & OR(W)GAN & \textbf{97.6} & \textbf{ 1.00} & 0.20 &(-59\%) & \cellcolor{gray!25} 0.75 & \cellcolor{gray!25} (223\%) & 0.84 &(184\%) \\ & Naive RL & \textbf{97.7} & \textbf{0.96} & 0.52 &(8\%) & \cellcolor{gray!25} \textbf{ 0.83} & \cellcolor{gray!25} \textbf{(256\%) }& 0.46 &(54\%) \\\midrule Solubility & ORGAN & \textbf{94.7} & 0.76 & 0.50 &(4\%) & 0.63 &(171\%) & \cellcolor{gray!25} 0.55 & \cellcolor{gray!25} (85\%) \\ & OR(W)GAN & 94.1 & \textbf{ 0.90} & 0.42 &(-12\%) & 0.66 &(185\%) & \cellcolor{gray!25} 0.54 & \cellcolor{gray!25}(81\%) \\ & Naive RL & 92.7 & 0.75 & 0.49 &(3\%) & 0.70 & (200\%) & \cellcolor{gray!25} \textbf{0.78} & \cellcolor{gray!25} \textbf{(162 \%)} \\ \midrule All/Alternated & ORGAN & 96.1 & 92.3 & \cellcolor{gray!25} 0.52 & \cellcolor{gray!25} (9\%) & \cellcolor{gray!25} 0.71 & \cellcolor{gray!25} (206\%) & \cellcolor{gray!25} 0.53 & \cellcolor{gray!25} (79\%) \\ \bottomrule \end{tabular} \caption{Evaluation of metrics, on several generative algorithms and optimized for different objectives for molecules. 
Reported values are mean values over valid generated molecules. The percentage of improvement over the \textit{MLE} baseline is reported in parentheses. Values shown in bold indicate significant improvement. Shaded cells indicate directly optimized objectives.} \label{table mol} \end{table*} Table \ref{table mol} shows quantitative results comparing ORGAN to other methods in three different optimization scenarios. MLE and SeqGAN are able to capture the distribution of properties of the training set with minimal alteration in their metrics. The metric-optimized methods, in contrast, exceeded the non-optimized methods in all metrics, effectively showing that they are able to bias the generation process. The Wasserstein variant of ORGAN also seemed to give better diversity properties. In our experiments, we also noted that naive RL has different failure scenarios. For instance, this approach excelled particularly in the task of Solubility; this particular task positively rewards very simple sequences such as the single-atom molecule ``N" or monotonous patterns like ``CCCCCCC" or ``CCOCOCCCC". For the other approaches, the GAN/WGAN setting seems to enforce more diversity and thus punishes these types of patterns, providing highly soluble molecules with more complex features. \subsection*{Capacity ceiling} We noticed capacity ceilings in our generation tasks, in two forms. The GAN models tended to generate sequences that had the same average sequence length as the training set (15.42). With RL we did not observe this constraint: the average length went quite low with synthesizability (9.4) or high with druglikeliness (21.3). This might be advantageous or detrimental depending on the setting. Optimizing a property that relates to sequence length, such as molecular size, might change this. The other ceiling is illustrated in figure \ref{fig:mol_dist}, where the upper limits in Druglikeliness for the data and the best performing approach match.
While OR(W)GAN tends to generate more druglike molecules, they do not reach the highest value of 1. This might be property- and dataset-dependent. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{figs/Druglikeliness.png} \caption{Violinplots of Druglikeliness for molecules from the baseline \textit{Dataset} (n=5000) and optimized OR(W)GAN (n=5440).} \label{fig:mol_dist} \end{figure} \subsection*{Multi-objective training programs} We also experimented with alternating objectives during training. By training each objective in rotation for one epoch at a time until epoch 99 (33 epochs per objective), we arrive at figure \ref{fig:multi}. Surprisingly, by alternating the objectives, as seen in the last row of table \ref{table mol}, the gains in each metric are quite high and almost comparable with the best models for each individually trained objective. However, the slightly fluctuating behavior of the graphs suggests that there might be limits to the gains that can be achieved. Further work is warranted in this direction. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{figs/Multi.png} \caption{Plots of each objective across the training epochs. Each objective was trained for one epoch and then switched for another.} \label{fig:multi} \end{figure} \subsection{Experiment: Musical melodies} To further demonstrate the applicability of ORGAN, we extend our study to music sequences. We employ the notation introduced by \cite{jaques2017tuning}, where each token corresponds to a sixteenth of a bar of music. The first two tokens are reserved: 0 means silence and 1 means no event; the other 36 tokens encode three octaves of music, from C3 (MIDI pitch 48) to B5. We use a 1k random sample from the Essen Associative Code (EsAC) folk dataset as processed by \cite{chen2017learning}, where every melody has a duration of 36 tokens (2.25 music bars).
We generate songs optimizing two different metrics: \begin{description} \item[Tonality.] This measures how many perfect fifths are in the generated music. A perfect fifth is defined as a musical interval whose frequencies have a ratio of approximately 3:2. These provide what is generally considered pleasant note sequences due to their high consonance. \item[Ratio of Steps.] A step is an interval between two consecutive notes of a scale. An interval from C to D, for example, is a step. A skip, on the other hand, is a longer interval; an interval from C to G, for example, is a skip. By maximizing the ratio of steps in our music, we are adhering to conjunct melodic motion. Our rationale here is that by increasing the number of steps in our songs we make our melodic leaps rarer and more memorable \cite{Bonds2013}. \end{description} Moreover, we calculate diversity as the average pairwise edit distance of the generated data \cite{Habrard2008}. We do not attempt to maximize this metric explicitly, but we keep track of it to shed light on the trade-off between metric optimization and sample diversity in the ORGAN framework. Table \ref{table music} shows quantitative results comparing ORGAN to other baseline methods when optimizing for the two metrics. ORGAN outperforms SeqGAN and MLE in all three reported metrics. Naive RL achieves a higher score than ORGAN for the Ratio of Steps metric, but it under-performs in terms of diversity, as Naive RL tends to generate very simple rather than diverse songs. In this sense, similarly to the molecule case, although the Naive RL Ratio of Steps score is higher than ORGAN's, the actual generated songs can be deemed much less interesting.
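Using the melody encoding above (token 2 corresponding to C3, MIDI pitch 48), the two metrics can be sketched as follows. The exact definitions in our code may differ slightly, so treat this as an illustrative reading of the descriptions.

```python
def midi_pitches(tokens):
    # Tokens 0 (silence) and 1 (no event) carry no pitch;
    # token 2 -> C3 (MIDI 48), ..., token 37 -> B5 (MIDI 83).
    return [t + 46 for t in tokens if t >= 2]

def _intervals(tokens):
    p = midi_pitches(tokens)
    return [abs(b - a) for a, b in zip(p, p[1:])]

def tonality(tokens):
    """Fraction of consecutive intervals that are perfect fifths
    (7 semitones, a frequency ratio of about 3:2)."""
    ivs = _intervals(tokens)
    return sum(iv == 7 for iv in ivs) / len(ivs) if ivs else 0.0

def ratio_of_steps(tokens):
    """Fraction of consecutive intervals that are steps (1-2 semitones)."""
    ivs = _intervals(tokens)
    return sum(1 <= iv <= 2 for iv in ivs) / len(ivs) if ivs else 0.0
```

These two rewards pull in opposite directions, since an interval of 7 semitones can never count as a step.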
\begin{table*}[!htbp] \centering \begin{tabular}{ c c c c c c} \toprule Objective & Algorithm & Diversity & Tonality & Ratio of Steps \\ \midrule & MLE & 0.221 & 0.007 & 0.010 \\ & SeqGAN & 0.187 & 0.005 & 0.010 \\ \midrule \textit{Tonality} & Naive RL & 0.100 & \cellcolor{gray!25} 0.478 & 2.9E-05\\ & ORGAN & \textbf{0.268} &\cellcolor{gray!25} \textbf{0.372} & 1.78E-04 \\ & OR(W)GAN & \textbf{0.268} & \cellcolor{gray!25}\textbf{0.177} & 2.4E-04 \\ \midrule \textit{Ratio of Steps} & Naive RL & 0.321 & 0.001 &\cellcolor{gray!25} 0.829 \\ & ORGAN & \textbf{0.433} & 0.001 & \cellcolor{gray!25} \textbf{0.632} \\ & OR(W)GAN & \textbf{0.134} & 5.95E-05 & \cellcolor{gray!25} \textbf{0.622} \\ \bottomrule \end{tabular} \caption{Evaluation of metrics on several generative algorithms optimized for different objectives for melodies. Each measure is averaged over a set of 1000 generated songs. Values shown in bold indicate significant improvement over the \textit{MLE} baseline. Shaded cells indicate directly optimized objectives.} \label{table music} \end{table*} We note that the Ratio of Steps and Tonality have an inverse relationship. This is because two consecutive notes - what qualifies as a step - do not have the frequency ratio of a perfect fifth, which is what increases tonality. In addition, although the usage of the Wasserstein metric seems to decrease the metric values, this can be explained by slower training. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{figs/lambda_organ_music.png} \includegraphics[width=\linewidth]{figs/lambda_worgan_music.png} \caption{Plots of Diversity and Tonality rewards (the latter re-scaled to the [0, 1] interval) after 80 epochs of training on the music generation task. The upper plot employs the classical GAN loss, while the lower displays a WGAN.
The values have been averaged over 1000 samples.} \label{fig:lambda_music} \end{figure} \subsection*{Effect of $\lambda$} By tweaking $\lambda$, the ORGAN approach allows one to explore the trade-off between maximizing the desired objective and staying close to the data distribution. Figure \ref{fig:lambda_music} shows the distribution of tonality and diversity sampled from ORGAN and OR(W)GAN for several $\lambda$ values. This showcases that there exists an optimal value of $\lambda$ which simultaneously maximizes the reward and diversity. This value is dependent on the model, dataset and metric; therefore, a parameter search would be advantageous to maximize objectives. \section{Conclusions and future work} In this work, we have presented ORGAN, a novel framework to optimize an arbitrary objective in a sequence generation task. We have built on recent advances in GANs, particularly SeqGAN, and extended them with reinforcement learning to control properties of generated samples. We have shown that ORGAN can improve certain desired metrics, achieving better results than recurrent neural networks trained via either MLE or SeqGAN. Even more importantly, data generation can be made subject to a domain-specific reward function while still using the adversarial setting to guarantee the production of non-repetitious samples. Moreover, ORGAN possesses a natural advantage as a black box compared to similar objective-optimization models, since it is not necessary to introduce multiple domain-specific penalties to the reward function: many times a simple objective "hint" will suffice. As evidenced by the experiments, the RL component is one of the major drivers of the property optimization and promotion of diversity. Values tended to be higher when RL was present in the architecture of the models. Future work should investigate how the choice of heuristic can affect the performance of the model.
There are also other formulations of GANs for discrete sequences \cite{Che2017},\cite{Hjelm2017} that could be extended with an RL component in order to fine-tune the generation process. One area of improvement, as seen from figure \ref{fig:mol_dist}, is to push the boundaries of the datasets in certain properties. In some domains, such as drug and materials discovery, outliers might be more valuable. Finally, forthcoming research should extend ORGANs to work with non-sequential data, such as images or audio. This requires framing the GAN setup as a reinforcement learning problem in order to add an arbitrary (not necessarily differentiable) objective function. We believe this extension to be quite promising since real-valued GANs are currently better understood than sequence-data GANs. \bibliographystyle{named}
\section{Introduction}\label{intro} In the past decade, $\approx$~130 extrasolar planets have been discovered as companions to late-type dwarfs, mostly from Precise Radial Velocities (PRV; e.g.~Walker et al.~1995), from which the Keplerian orbital parameters and minimum planetary masses can be derived. From transiting planets, of which five are known, true planetary masses, radii, densities and albedo-dependent estimates of surface temperature are available. To date, a handful of experiments have led to further information about these planets, including Lyman~$\alpha$ (Vidal-Madjar et al.~2003) and sodium detections (Charbonneau et al.~2002) in the atmosphere of the transiting planet around HD~209458. A current review of follow-up techniques to probe planetary characteristics has been presented by Charbonneau (2003). The presence of a giant planet likely influences its parent star beyond the dynamical perturbations measured by the PRV method. Robinson \& Bopp (1987) caught the signatures of ``superflares" with energies of $\sim$~10$^{35}$ ergs (Schaefer et al.~2000) on nine solar analogs which otherwise have no unusual properties such as very rapid rotation or very high chromospheric activity. Rubenstein \& Schaefer (2000) suggested that these anomalous superflares were stimulated by an unseen close-in extrasolar giant planet (CEGP, also known as a `hot jupiter'). They explored the possibility of magnetic reconnection events occurring between the star's and the planet's entangled magnetic fields. Cuntz et al. (2000) suggested a more consistent, observable interaction between a parent star and its hot jupiter in the form of enhanced stellar activity of the star's outer atmosphere. This interaction can take the form of tidal and/or magnetic heating of the plasma. Both the star and its planet experience strong tidal forces as well as repeated magnetic reconnection with a large planetary magnetosphere.
If such planet-induced heating of the star is confined to a narrow range in stellar longitude, the heated regions likely track the planet as it orbits. This implies that the period of any observed activity would be correlated with the planet's orbital period $P_{orb}$ such that tidally induced activity has a period of $\sim P_{orb}/2$ and magnetic activity, a period of $P_{orb}$. In the simplest configuration of magnetic interaction, the expected enhancement would occur near the sub-planetary point, which defines orbital phase $\phi = 0$ at the time when the planet is in front of the star relative to the line-of-sight. A partial analogy can be made to the Jupiter-Io system (Zarka et al.~2001) where the volcanically active moon of Jupiter, orbiting at a distance of 5.9 $R_{J}$, constantly couples with Jupiter's magnetosphere, leaving two footprints at high positive and negative latitudes on the planet's surface. Plasma flows along the magnetic field lines, making up the Io Flux Tube, as these footprints follow Io in its orbit. Even though the analogy is limited, a similar phenomenon may occur between CEGPs and their stars such that coupling between the magnetic fields of the planet and the star may cause footprints or ``hotspots" on the star's surface which follow the planet and have a period close to the planet's orbital period. Similarly, auroral emission may be stimulated in the planet's atmosphere (Zarka et al.~2001), but no searches for planetary radio emissions have yet been successful (e.g. Bastian et al.~2000, Farrell et al.~2003, Ryabov et al.~2003). More recently, Santos et al.~(2003) observed cyclic photometric variations with a period very similar to that of the radial velocity (RV) curve for the K dwarf HD~192263.
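The diagnostic period signatures above can be summarized in a small helper; the function is ours, for illustration, and simply encodes the $P_{orb}/2$ versus $P_{orb}$ distinction.

```python
def expected_activity_period(p_orb_days, mechanism):
    """Tidal bulges raise two heated regions per orbit, so tidally
    induced activity recurs with period P_orb/2; magnetic reconnection
    tracks the planet itself, giving period P_orb."""
    if mechanism == "tidal":
        return p_orb_days / 2.0
    if mechanism == "magnetic":
        return p_orb_days
    raise ValueError("unknown mechanism: " + mechanism)
```

For a hot jupiter with $P_{orb} \approx 3.09$~d (e.g.~HD~179949), magnetically induced activity would thus recur every $\approx 3.09$~d, while tidally induced activity would recur every $\approx 1.55$~d.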
The stability of the 24.4-day periodic radial velocity through almost 4 years of data (Santos et al.~2000, Henry et al.~2002) rules out the interpretation that stellar activity alone is the cause of the RV curve and supports the existence of a planetary-sized companion around the star. However, they questioned what might cause a quasi-stable photometric period that coincides with the planetary orbit. They offered planet-induced magnetic activity offset by 90$^{\circ}$ from the sub-planetary point as an explanation. Even though the interpretations of Rubenstein \& Schaefer (2000) and Santos et al. (2003) are uncertain, there exists ample observational evidence of such tidal and magnetic interactions in the exaggerated case of the RS Canum Venaticorum (RS CVn) stars, which are tightly-orbiting binary systems consisting of two chromospherically active late-type stars. For example, Catalano et al.~(1996) found as many starspots and plages within 45$^{\circ}$ of the sub-binary point\footnote{The `sub-binary' point refers to the longitude on the star that faces the binary system's center-of-mass.} as on the rest of the stellar surface for several RS CVn systems. $\lambda$~And, a relatively long-period system ($P_{orb}$ = 20.1 d; Walker 1944), shows modulation of the Mg~II UV chromospheric emission lines with a period of 10 days, half the orbital period. Glebocki et al.~(1986) interpreted this as a tidal heating of the primary by its companion, possibly a brown dwarf (Donati et al.~1995). Likewise, in our own Ca II H \& K observations of ER~Vul, an RS CVn system with two G~V dwarfs and $P_{orb}$ = 0.69 d, we see clear enhancements near the sub-binary longitudes (Shkolnik et al.~2004). With these scenarios in mind, we searched for periodic chromospheric heating by monitoring the Ca~II H \& K emission in stars with giant planets within a few stellar radii.
We chose to study the tightest observable systems since the tidal and magnetic interactions depend on the distance from the planet to the star as $1/r^{3}$ and $1/r^{2}$, respectively. Of the known extrasolar planets, about 20\% have semi-major axes of less than 0.1 AU and masses comparable to Jupiter's (Schneider 2004). It is expected that these planets have magnetic fields also similar to Jupiter's (4.3 G).\footnote{However, if tidally locked, the planets' rotation rates may be much lower than Jupiter's possibly causing their magnetic fields to be substantially smaller (e.g.~Sanch\'ez-Lavega 2004).} It is also reasonable to assume that any magnetic interaction would be greatest in the outermost layers of the star, namely the chromosphere, transition region and the corona due to their proximity to the planet, low density, and nonradiative heat sources. The broad, deep photospheric absorption of the Ca~II H \& K lines allows the chromospheric emission to be seen at higher contrast. Because of this and the accessibility from the ground, the H \& K reversals are an optimal choice with which to monitor chromospheric heating of these sun-like stars. Our program stars have orbital periods between 2.5 and 4.6 days, eccentricities $\approx$~0 and semi-major axes $< 0.06$ AU. These systems offer the best chance of observing upper atmospheric heating. The first five systems we observed were $\tau$~Boo, HD~179949, HD~209458, 51~Peg and $\upsilon$~And from the Canada-France-Hawaii Telescope (CFHT). The first results from 2001 and 2002 observations, including the first evidence of planet-induced magnetic heating of HD~179949, were published in Shkolnik et al.~(2003). We later extended the experiment at the Very Large Telescope (VLT) to include five southern targets, HD~46375, HD~73256, HD~75289, HD~76700, and HD~83443. The system parameters for the program stars are listed in Table 1 along with our two standards, $\tau$~Ceti and the Sun.
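To make the distance dependence concrete, a small helper (ours, for illustration) compares interaction strengths at two orbital separations using the $1/r^{3}$ tidal and $1/r^{2}$ magnetic scalings quoted above.

```python
def interaction_scaling(a_au, a_ref_au=0.05):
    """Relative strength of star-planet interactions at semi-major
    axis a_au, normalized to a reference separation: tidal forces
    scale as 1/r^3, magnetic interaction as 1/r^2."""
    r = a_au / a_ref_au
    return {"tidal": r ** -3, "magnetic": r ** -2}
```

Halving the separation boosts the tidal interaction by a factor of 8 but the magnetic one only by a factor of 4, which is why the sample is restricted to semi-major axes below 0.06 AU.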
In this paper, we compile our 2003 CFHT data with those of previous years and include the recent VLT observations. A broader understanding of stellar activity, its cycles, and planet-induced chromospheric heating emerges. We also observed $\kappa^{1}$~Ceti, a young (650 $-$ 750 Myr old), chromospherically active solar analog. It was one of the nine stars reported to have anomalous superflare activity by Robinson \& Bopp (1987), possibly caused by magnetic interactions with a close-in giant planet (Rubenstein \& Schaefer 2000). Of their sample, only $\kappa^{1}$~Ceti has been looked at by the PRV planet searches of Walker et al.~(1995), Cumming et al.~(1999) and Halbwachs et al.~(2003), none of which has detected a planet. Interestingly though, Walker et al.~(1995) observed a rapid RV change of 80 m~s$^{-1}$ in 1988 which could have been caused by a planet in a highly elliptical orbit. However, it coincided with an equally sharp increase in the Ca II chromospheric activity indicator at 8662 \AA\/ implying the RV jump was likely due to changes intrinsic to the star. We may have observed this same phenomenon in our Ca II data. The details of our CFHT and VLT observations are outlined in Section~\ref{spectra}. We briefly discuss the precise differential radial velocities which yielded updated ephemerides for the planetary systems such that orbital phases could be determined to better than 0.02. In Section~\ref{caII} we discuss our analysis and results of the Ca II H \& K measurements including long-term, short-term and rotational modulation. A theoretical discussion of the physical requirements for magnetic interactions between stars and their hot jupiters is presented in Section~\ref{theory} with future experiments suggested in Section~\ref{summary}.
\begin{deluxetable}{ccccccccccl} \tabletypesize{\footnotesize} \tablecaption{Stellar and Orbital Parameters \label{starpars}} \tablewidth{0pt} \tablehead{ \colhead{~ Star} & \colhead{Spectral} & \colhead{$v$sin$i$} & \colhead{$P_{rot}$} & \colhead{$P_{orb}$\tablenotemark{a,\rm b}} & \colhead{$M_{p}$sin$i$\tablenotemark{b}} & \colhead{Semi-major\tablenotemark{b}} & \colhead{$\langle$K$\rangle$\tablenotemark{c}} & \colhead{$\langle$K$\arcmin\rangle$\tablenotemark{d}} & \colhead{$\langle$MAD K$\rangle$\tablenotemark{e}} & \\ \colhead{} & \colhead{Type} & \colhead{(km s$^{-1}$)} & \colhead{(days)} & \colhead{(days)} & \colhead{($M_{J}$)} & \colhead{axis (AU)} & \colhead{(\AA)} & \colhead{(\AA)} & \colhead{(\AA)} & } \startdata $\tau$~Boo & F7~IV & 14.8$\pm$0.3 & 3.2\tablenotemark{f} &3.313 &3.87 & 0.046 & 0.323 & 0.177 & 0.0020\\ HD~179949 & F8~V & 6.3$\pm$0.9 & $<$9\tablenotemark{g} &3.092 &0.98 & 0.045 & 0.399 & 0.202& 0.0051\\ HD~209458 & G0~V & 4.2$\pm$0.5 & 16\tablenotemark{h} & 3.525 &0.69\tablenotemark{i} & 0.045 & 0.192 & 0.077& 0.0009\\ 51~Peg & G2~IV & 2.4$\pm$0.3 & 21.9\tablenotemark{f} &4.231 &0.47 &0.05 & 0.178 & 0.071 & 0.0008\\ $\upsilon$~And & F7~V & 9.0$\pm$0.4 & 14\tablenotemark{f} & 4.618 &0.71\tablenotemark{j} & 0.059 & 0.254 & 0.091& 0.0016\\ HD~46375 & K1~IV & $<$2 &43\tablenotemark{k} &3.024 &0.25 &0.041 & 0.339 & 0.233 & 0.0011\tablenotemark{l}\\ HD~73256 & G8/K0~V & 3.22$\pm$0.32& 14\tablenotemark{m} & 2.549 &1.85 & 0.037 & 0.995 & 0.899 & 0.0051\tablenotemark{n}\\ HD~75289 & G0~V & 4.37 &15.95\tablenotemark{m} & 3.510 &0.42 & 0.046 & 0.177 & 0.062 & 0.0003\\ HD~76700 & G6V & $<$2 & --- & 3.971 & 0.20 & 0.049 & 0.212 & 0.095 & 0.0004\\ HD~83443 & K0V & 1.4 & 35\tablenotemark{k} & 2.986 &0.38 & 0.039 & 0.314 & 0.211 & 0.0010\\ $\kappa^{1}$~Cet & G5~V & 4.64$\pm$0.11 & 9.3\tablenotemark{o} & --- & --- & --- & 0.906 & 0.815 & 0.0059\tablenotemark{n}\\ $\tau$~Cet & G8~V & 1 & 33 & --- & --- & --- & 0.201 & 0.102 & 0.0003\\ Sun & G2~V & 1.73$\pm$0.3 
& 27 & --- & --- & --- & 0.298 & 0.137 & 0.0005\\ \enddata \tablenotetext{a}{The first five periods are from this work. (See Section \ref{deltaRVs}.) See Note b for the last five.} \tablenotetext{b}{Published orbital solutions: $\tau$ Boo \& $\upsilon$ And $-$ Butler et al.~1997; HD 179949 $-$ Tinney et al.~2001; 51 Peg $-$ Marcy et al.~1996; HD 209458 $-$ Charbonneau et al.~1999; HD 46375 $-$ Marcy et al.~2000; HD 73256 $-$ Udry et al.~2003; HD 75289 $-$ Udry et al.~2000; HD 76700 $-$ Tinney et al.~2002; HD 83443 $-$ Mayor et al.~2004.} \tablenotetext{c}{Total integrated intensity bounded by the K1 features of the mean normalized Ca II K core. These values are relative to the normalization points near 3930 and 3937 \AA\/ which are at approximately 1/3 of the continuum at 3950 \AA.} \tablenotetext{d}{We subtracted the photospheric emission from $\langle$K$\rangle$ in order to measure the mean integrated chromospheric emission $\langle$K$\arcmin\rangle$ using data from Wright et al.~(2004). (See text for more details.)} \tablenotetext{e}{Average integrated `intensity' of the mean absolute deviation (MAD) of the K residuals, per observing run} \tablenotetext{f}{Henry et al.~2000} \tablenotetext{g}{$P_{rot}$ is calculated from the $v$sin$i$ and a stellar radius of 1.24 R$_{\odot}$ (Tinney et al. 2001).} \tablenotetext{h}{Mazeh et al.~2000} \tablenotetext{i}{Transiting system; $i = 86.1^{\circ}\pm 0.1^{\circ}$ (Mazeh et al. 2000)} \tablenotetext{j}{Closest of three known planets in the system} \tablenotetext{k}{Derived from empirical fits (Noyes et al.~1984) of the Ca II $R'_{HK}$ index (Wright et al.~2004)} \tablenotetext{l}{$\langle$MADK$\rangle$ was corrected for a $\approx$~30\% variable contribution by telluric Ca II emission since it was the first star observed after sunset.} \tablenotetext{m}{Udry et al.~2003} \tablenotetext{n}{Value corrected to remove modulation due to rotation.
For HD 73256, the non-corrected value is 0.0155 and for $\kappa^1$ Ceti, 0.0182.} \tablenotetext{o}{Rucinski et al.~2004} \end{deluxetable} \section{The Spectra}\label{spectra} \subsection{CFHT Observations}\label{cfht_obs} The observations were made with the 3.6-m Canada-France-Hawaii Telescope (CFHT) on 3.5, 4, 2, and 5 nights in 2001 August, 2002 July, 2002 August, and 2003 September, respectively. We used the Gecko \'{e}chellette spectrograph fiber fed by CAFE (CAssegrain Fiber Environment) from the Cassegrain to Coud\'{e} focus (Baudrand \& Vitry 2000). Spectra were centered at 3947 \AA\/, which was isolated by a UV grism (300 lines mm$^{-1}$) with $\simeq$~60~\AA\/ intercepted by the CCD. The dispersion was 0.0136 \AA\/ pixel$^{-1}$ and the 2.64-pixel FWHM of the thorium-argon (Th/Ar) lines corresponded to a spectral resolution of R = 110,000. The detector was a back-illuminated EEV CCD (13.5 $\mu$m$^{2}$ pixels, 200 $\times$ 4500 pixels) with spectral dispersion along the rows of the device. To remove the baseline from each observation, the appropriate mean darks were subtracted from all the exposures. Flat-fields were then normalized to a mean value of unity along each row. All the stellar exposures observed on a given night were combined into a mean spectrum to define a single aperture for the extraction of all stellar and comparison exposures, including subtraction of residual background between spectral orders (prior to flat-fielding). This aperture was ultimately used to extract one-dimensional spectra of the individual stellar and comparison exposures and of the mean, normalized flat-field. This one-dimensional flat was used to obtain the most consistent flat-fielded spectra possible.
All the data were processed with standard IRAF (Image Reduction and Analysis Facility) routines.\footnote[1]{IRAF is distributed by the National Optical Astronomy Observatories, which is operated by the Association of Universities for Research in Astronomy, Inc.~(AURA) under cooperative agreement with the National Science Foundation.} Wavelength calibration was done using the Th/Ar arcs taken before and after each spectrum. We required frequent arcs in order to track the CCD drift or any creaking in the system throughout the night. Heliocentric and differential radial velocity corrections were applied to each stellar spectrum using IRAF's {\it rvcorrect} routine. A specimen, flat-fielded spectrum of $\upsilon$~And is shown in Figure~\ref{upsAnd_spec}. The Ca~II H \& K reversals are weak yet visible at 3968 and 3933 \AA\/. The final spectra were of high S/N reaching $\approx$ 500 per pixel (or 4300 \AA$^{-1}$) in the continuum and 150 (or 1290 \AA$^{-1}$) in the H \& K cores. Spectra with comparable S/N were taken of two stars known not to have close-in giant planets, $\tau$ Ceti and the Sun (sky spectra were taken at dusk). Table 2 of Walker et al. (2003a) lists the five CFHT program stars plus $\tau$~Ceti, including their U magnitudes, exposure times, and typical S/N. \subsubsection{Precise Differential Radial Velocities}\label{deltaRVs} Differential radial velocities ($\Delta$RVs) were estimated with the {\it fxcor} routine in IRAF (version PC-IRAF V2.12) which performs a Fourier cross-correlation on dispersion-corrected spectra. We used the first spectrum in the series for each star as the template, hence, all $\Delta$RVs are relative to the first spectrum on the first night of the run. Both the template and the input spectrum were normalized with a low order polynomial. 
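The cross-correlation step can be illustrated with a minimal sketch (pure Python, synthetic data; this is only a sketch of the idea, not the actual {\it fxcor} implementation, which applies Fourier-domain filtering and sub-pixel centroiding). The dispersion and central wavelength are those of the CFHT setup quoted above; the Gaussian line and shift are invented for illustration:

```python
import math

C_KMS = 2.9979e5          # speed of light, km/s
DISP = 0.0136             # CFHT Gecko dispersion, Angstrom per pixel
LAM0 = 3947.0             # central wavelength, Angstrom

def make_line(n, center, depth=0.5, width=3.0):
    """Synthetic continuum-normalized spectrum with one Gaussian absorption line."""
    return [1.0 - depth * math.exp(-0.5 * ((i - center) / width) ** 2)
            for i in range(n)]

def best_lag(template, spectrum, max_lag=20):
    """Integer lag maximizing the cross-correlation of mean-subtracted spectra."""
    tm = sum(template) / len(template)
    sm = sum(spectrum) / len(spectrum)
    t = [x - tm for x in template]
    s = [x - sm for x in spectrum]
    def corr(lag):
        return sum(t[i] * s[i + lag]
                   for i in range(max(0, -lag), min(len(t), len(s) - lag)))
    return max(range(-max_lag, max_lag + 1), key=corr)

template = make_line(400, center=200.0)
shifted = make_line(400, center=203.0)    # 3-pixel shift
lag = best_lag(template, shifted)
dv_kms = lag * DISP / LAM0 * C_KMS        # pixel shift -> velocity
print(lag, round(dv_kms, 2))              # 3-pixel lag, ~3.1 km/s
```

Note that an integer-pixel lag already corresponds to $\sim$1 km~s$^{-1}$ at this dispersion; the quoted m~s$^{-1}$ precision requires the sub-pixel interpolation of the correlation peak that the real routine performs.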
The correlation used only the $\sim$~20 \AA\/ region between the H \& K lines, that part of the spectrum bounded by (and including) the two strong Al~I lines (3942 $-$ 3963 \AA) as shown in Figure~\ref{upsAnd_spec}. The $\Delta$RVs measured during the 2002 July run for the five `51 Peg' stars can be found in Walker et al.~(2003a). The $\Delta$RVs from 2003 September are plotted in Figure~\ref{rv2003}. After the orbit is removed, the average $\sigma_{RV}$ is 17 m~s$^{-1}$. The two stars observed at the highest S/N have $\sigma_{RV}$ of 7 and 9 m~s$^{-1}$. This precision may be the best achievable for a single spectrum with this spectrograph. The excellent $\Delta$RVs yield current orbital ephemerides and hence accurate phases ($\pm~0.02$) for each observation. Using the ephemerides of the planets' discovery orbits, we tabulated the 2003 September times of sub-planetary position ($\phi$ = 0) with revised orbital periods. These are listed in Table~\ref{ephemerides}. \begin{deluxetable}{ccccccl} \tabletypesize{\footnotesize} \tablecaption{2003 September Ephemerides \label{ephemerides} } \tablewidth{0pt} \tablehead{ \colhead{~ Star} & \colhead{$\sigma_{\Delta RV}$} & \colhead{HJD at $\phi=0$} & \colhead{$\delta$(HJD)\tablenotemark{a}} & \colhead{Revised $P_{orb}$} & \colhead{$\delta(P_{orb})$\tablenotemark{a}} & \\ \colhead{} & \colhead{m~s$^{-1}$} & \colhead{days} & \colhead{days} & \colhead{days} & \colhead{days} & } \startdata $\tau$~Boo & 33 & 2452892.864 & 0.066 & 3.31250 & 0.00026 \\ HD~179949 & 19 & 2452894.114 & 0.062 & 3.09246 & 0.00031\\ HD~209458 & 17 & 2452893.653 & 0.070 & 3.52490 & 0.00020\\ 51~Peg & 15 & 2452895.868 & 0.085 & 4.23092 & 0.00014\\ $\upsilon$~And & 9 & 2452892.615 & 0.092 & 4.61750 & 0.00052\\ \enddata \tablenotetext{a}{Uncertainties in the respective measurements} \end{deluxetable} \begin{deluxetable}{cccccrccl} \tabletypesize{\footnotesize} \tablecaption{ The VLT Program Stars \label{vltstars}} \tablewidth{0pt} \tablehead{ \colhead{~ Star} 
& \colhead{U} & \colhead{B} & \colhead{Exp. time} & \colhead{n\tablenotemark{a}} & \colhead{S/N\tablenotemark{b}} & \colhead{S/N\tablenotemark{b}} & \\ \colhead{} & \colhead{} & \colhead{} & \colhead{s} & \colhead{} & \colhead{cont} & \colhead{core} & } \startdata HD~46375 & 9.33 & 8.7 & 300 & 8 & 510 & 150\\ HD~73256 & -- & 8.86 & 300 & 9 $\times$ 2 & 520 & 150\\ HD~75289 & 7.04 & 6.94 & 120 & 10 & 790 & 230\\ HD~76700 & 9.17 & 8.76 & 300 & 8 & 540 & 160\\ HD~83443 & 9.54 & 9.03 & 300 & 8 & 420 & 120\\ \enddata \tablenotetext{a}{Number of spectra taken per night. We observed HD~73256 twice per night for a total of 18 exposures.} \tablenotetext{b}{Nightly average per 0.015 \AA\/ per pixel in the continuum near 3950~\AA\/ and in the Ca II K core.} \end{deluxetable} \subsection{VLT Observations}\label{vlt_obs} We obtained high-resolution spectra through Visitor mode using the VLT's Ultraviolet and Visual Echelle Spectrograph (UVES) mounted on the 8.2-m Kueyen (UT2) over four photometric half-nights (2004 April 4 $-$ 7). The standard blue arm setting was used, centered on 4370 \AA, giving a wavelength range of 3750 to 4990 \AA. We used the CD2 cross-disperser grating (660 g~mm$^{-1}$) with a CCD of 2048 $\times$ 3000 pixels of 15~$\mu$m$^{2}$. Image Slicer \#2 with a slit width of 0.44$\arcsec$ resulted in a resolution of R $\approx$ 75,000. The data were reduced on-site by the UVES data reduction pipeline that uses the ESO-MIDAS software package within the UVES context. The data processing consisted of standard procedures: bias subtraction, interorder background correction, cosmic ray hit removal, flat-fielding, and wavelength calibration. The data were wavelength calibrated with Th/Ar arcs attached at the beginning and end of each set of 8 to 10 stellar exposures. The exposure times, number of exposures and the S/N for each star are listed in Table~\ref{vltstars}.
\section{A Comparison of Ca II Emission}\label{caII} \subsection{Extracting the H, K and Al~I Lines}\label{HKlines} The very strong Ca II H and K photospheric lines suppress the stellar continuum, making it difficult to normalize each 60-\AA\/ spectrum consistently. For this reason, a careful analysis was devised to isolate any modulated Ca II emission. Figure~\ref{upsAnd_spec} shows a flat-fielded spectrum of $\upsilon$~And with the normalization levels marked with dashed lines. The normalization wavelengths were constant for all spectra of a given star. The 7-\AA\/ spectral ranges, centered on the H and K lines, were chosen to isolate the H and K reversals while minimizing any apparent continuum differences induced by varying illumination of the CCD. This window is, however, wide enough that a few photospheric absorption features appear which could be tested for variability as well. The mean Ca II K cores for the program stars are shown in Figures~\ref{cfht_Kcores} and \ref{vlt_Kcores}. Also, 7-\AA\/ and 2-\AA\/ cuts, centered on the strong photospheric Al~I line at 3944 \AA\/ were used as internal standards since the line has comparable depth and S/N to the Ca II lines. To normalize each sub-spectrum, the end points were set to 1 and fitted with a straight line. The spectra were grouped by date and a nightly mean was computed for each of the three lines. The RMS of the Al~I residuals for each CFHT target star is less than 0.0005 of the normalized mean. This is representative of the Al~I residuals for all stars in all four runs. The Al~I line of the VLT data varies with an average RMS of 0.0006. These values demonstrate both the level of stellar photospheric stability as well as the reliability of the data reduction and analysis. For all stars observed, the H \& K emission was non-varying at the 0.001 level on a given night; the intra-night scatter is likely (statistical) noise.
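The sub-spectrum normalization and residual statistic described above can be sketched as follows (pure Python; the line profile, noise level, and number of exposures are synthetic stand-ins, not our data):

```python
import math
import random

def normalize_cut(flux):
    """Set the end points of a spectral cut to 1 by dividing out a
    straight line through the two end points, as for the 7-A and 2-A cuts."""
    n = len(flux)
    f0, f1 = flux[0], flux[-1]
    baseline = [f0 + (f1 - f0) * i / (n - 1) for i in range(n)]
    return [f / b for f, b in zip(flux, baseline)]

def rms_residuals(spectra):
    """RMS of the residuals about the mean of several normalized cuts."""
    n = len(spectra[0])
    mean = [sum(s[i] for s in spectra) / len(spectra) for i in range(n)]
    sq = [(s[i] - mean[i]) ** 2 for s in spectra for i in range(n)]
    return math.sqrt(sum(sq) / len(sq))

# Three simulated exposures of one absorption line with small noise:
random.seed(1)
def cut():
    return [(1.0 - 0.6 * math.exp(-0.5 * ((i - 50) / 8.0) ** 2))
            * (1.0 + random.gauss(0.0, 2e-4)) for i in range(101)]

norm = [normalize_cut(cut()) for _ in range(3)]
print(rms_residuals(norm))   # of order a few 1e-4, cf. the 0.0005 Al I level
```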
\subsection{Long-term Variability}\label{long} With four CFHT observing runs spanning a baseline of over two years, we can compare the long-term variations in the chromospheric levels of the stars. We measure emission strength by integrating across the normalized K cores bounded by the K1 features (Montes et al.~1994). Each star's average integrated K emission $\langle$K$\rangle$ for each observing run is plotted in Figure~\ref{yearly} along with its fractional variation relative to the overall average emission. This is a good start to tracking the intrinsic stellar activity cycles of these stars. In the case of the Sun, we see the decrease from 2001 August to 2003 September as it declines from solar maximum. However, we also observed the naked-eye sunspot grouping of 2002 August that appeared in our data as a $\approx$~2\% increase in the Ca II emission relative to the other years. Since the variability from run-to-run may also be an indication of active regions on the disk of the star, we require more frequent monitoring over several more years to firmly say anything about the activity cycle of any individual program star. \subsection{Short-term Variability}\label{short} When monitoring Ca II emission, intrinsic stellar activity modulated by stellar rotation will appear along with any possible activity stimulated by the planet. The orbital periods of the planets are well known and uniquely established by the PRV and transit discovery methods. The rotation periods of the stars are much harder to determine in part due to stellar differential rotation which yields non-unique periods. For our work, it is key to distinguish between the rotational and orbital modulation of chromospheric emission. To isolate the chromospheric activity within the reversals, we took nightly residuals from the average stellar spectrum of all data. 
Each residual spectrum had a broad, low-order curvature removed, which was an order of magnitude less than the variations in the H and K lines discussed below. The residuals of the normalized spectra (smoothed by 21 pixels) were used to compute the Mean Absolute Deviation (MAD = $N^{-1}\Sigma|data_{i}-mean|$ for $N$ spectra). As an example, the nightly residuals used to generate the MAD plot for $\upsilon$~And are displayed in Figure~\ref{upsAnd_Kcombo} (top). The MAD plot with the corresponding K-core superimposed is in Figure~\ref{upsAnd_Kcombo} (bottom). The identical analysis described above was performed on the Ca~II H line of all target stars. The complete Ca II H results can be found in Shkolnik (2004). As the two resonance lines share the same upper level and connect to the ground state, the same activity is seen in both. As expected, the Ca II H emission and activity levels are $\approx$~2/3 that of Ca II K (Sanford \& Wilson 1939). The star with the shortest rotation period in our sample is $\tau$~Boo. It has the largest $v$sin$i$ (= 14.8 km~s$^{-1}$; Gray 1982) and is believed to be in synchronous rotation with its tightly-orbiting planet ($P_{rot}$ = 3.2 $\pm$ 0.5 d, Henry et al.~2000; $P_{orb}$ = 3.31250 $\pm$ 0.00026 d, this work). If $\tau$~Boo is tidally locked to its planet, the planet-star interaction may be minimal (Saar et al.~2003, 2004) because there is near-zero relative velocity between the planet and the stellar magnetosphere. (See Section~\ref{theory} for theoretical discussion.) The integrated residuals from the mean normalized K core are plotted against orbital phase in Figure~\ref{intK1} (left). Looking at the individual observing runs in the plot for $\tau$~Boo, the two nights of observation in 2001 August did not show much variation when the star was somewhat less active. The 2002 July data showed an increase in activity near the sub-planetary point relative to the other observations in their respective runs.
However, in 2002 August and 2003 September observations show a relative enhancement at $\phi\approx$ 0.4 $-$ 0.5 modulated with a period near $P_{rot}$ and $P_{orb}$. This chaotic activity does not allow us to draw any conclusions about planet-induced heating. Similar to $\tau$~Boo, the K emission of HD~209458 showed night-to-night modulation, but with a smaller amplitude during most runs and without any phase coherence, as shown in Figure~\ref{intK1} (right). In 2001 August, we caught the system immediately after transit at which time we observed a slight enhancement in the Ca~II emission relative to all other observations. In the 2002 July run, an increase in emission occurred at $\phi\sim$ 0.25 with no apparent rise toward $\phi$ = 0. Due to the relatively low S/N of these data and the large intra-night deviations, we cannot form any conclusions. As seen in Figures~\ref{intK1} and \ref{intK2}, four of the five CFHT stars show significant chromospheric variation throughout a single observing run. The standards, $\tau$~Ceti and the Sun, show no such modulation. One consistent result for the `active' stars ($\tau$~Boo, HD~179949, HD~209458 and $\upsilon$~And) is their night-to-night modulation of H \& K emission. Also, unlike the case for HD~73256 (see Section~\ref{modbyrot}), the night-to-night variations of these stars do not increase or decrease monotonically throughout an observing run, implying the variability cannot be explained exclusively with starspots rotating into or out of view. Another mechanism is necessary. The night-to-night variations may indicate planet-induced activity or sporadic flaring from hotspots. If coupled to the planet, the localized activity would be travelling on the stellar surface faster than the star is rotating as it tracks the planet in its orbit. Other than for $\tau$~Boo, the timescale of activity is short compared to the stellar rotation period. 
Unfortunately, due to the large uncertainties in the rotation periods of the other stars, phasing with rotation is uninformative at this stage. However, we do know that $\upsilon$~And and HD~209458 both have $P_{rot}$ $>$~3$P_{orb}$ and rotate only $\lesssim 20^{\circ}$ per day. Wolf \& Harmanec (private communication) recently made UBV photometric observations of HD 179949 at our request from the SAAO Sutherland Observatory. When they combined their V observations with those from Hipparcos (converted to V) they detected a rotation period of 7.07 d but with an amplitude of only 0.008 mag. Given that the RMS of the V-observations was 0.006 mag, this periodicity is at the limit of detection. Indirect indications of the rotation rate of HD 179949 imply $P_{rot} \approx 9$ days and are presented in Shkolnik et al.~(2003). (See also Saar et al.~2004.) These include a high X-ray luminosity for the star, a very long tidal synchronization timescale and a moderate S$_{HK}$ index. While more photometry is needed to determine a rotation period conclusively, it is highly unlikely that HD 179949 is tidally locked to its planet at 3.092 d. \subsubsection{Planet-induced Activity on Two Stars}\label{induced} In Shkolnik et al.~(2003) we presented the first evidence of planet-induced heating on HD~179949. The effect lasted for over a year and peaked only once per orbit, suggesting a magnetic interaction. We fitted a truncated, best-fit sine curve with $P = P_{orb}$ = 3.092 d corresponding to the change in projected area of a bright spot on the stellar surface before being occulted by the stellar limb. Figure~\ref{intK2} (left) updates the integrated K residuals to include the 2003 September data. The spot model is a remarkable fit for the 2001 and 2002 data peaking at $\phi$ = 0.83 with an amplitude of 0.027. Clearly, the average K emission is higher during the latest run (as shown in Figure~\ref{yearly}) with a much smaller level of variability. 
It is interesting to note that the 2003 data still peak between $\phi$ = 0.80 $-$ 0.95, consistent with the previous results. The second convincing case of magnetic interaction is between $\upsilon$~And and its innermost giant planet.\footnote{$\upsilon$~And has three known Jupiter-mass planets at 0.059, 0.829, and 2.53 AU (Butler et al.~1999).} In Figure~\ref{intK2} (right), the 2002 July, 2002 August and 2003 September runs show good agreement in phase-dependent activity with an enhancement at $\phi$ = 0.53. The best-fit sine curve has an amplitude of 0.0032. The 2001 August fluxes are lower than the mean of all four observing runs by almost 3\% and still display a significant ($>$~2$\sigma$) modulation like the quiescent epoch of HD~179949. Again, even the low-amplitude modulation has a rise and fall with a period consistent with $P_{orb}$ and peaks near $\phi~=0.5$. For these two cases, the peak of the emission does not directly coincide with the sub-planetary point, $\phi$ = 0. For HD~179949, it leads the planet by 60$^{\circ}$ in phase and for $\upsilon$~And, the Ca II emission is 169$^{\circ}$ out of phase with the sub-planetary point. Santos et al.~(2003) also observed a 90$^{\circ}$ lag from the sub-binary point in the periodic activity indicated by the photometric variations for HD 192263. The phase lead or lag may help identify the nature of the interaction. For example, the phase offset of a starspot or group of starspots can be a characteristic effect of tidal friction, magnetic drag or reconnection with off-center stellar magnetic field lines, including a Parker spiral-type scenario (Weber \& Davis 1967, Saar et al.~2004). In any case, the phasing, amplitude and period of the activity have persisted for over a year between observations. For HD~179949, this equals 108 orbits or at least 37 stellar rotations and for $\upsilon$~And, the time spans 88 orbits or approximately 29 rotations. 
The observations are consistent with a magnetic heating scenario as the chromospheric enhancement occurred only once per orbit. We estimated the excess absolute flux released in the enhanced chromospheric emission of HD~179949 by calibrating the flux with that of the Sun. The flux was the same order of magnitude as a typical solar flare, $\sim$~10$^{27}$ erg~s$^{-1}$ or 1.5$\times$10$^{5}$ ergs~cm$^{-2}$~s$^{-1}$. This implies that flare-like activity triggered by the interaction of a star with its hot jupiter may be an important energy source in the stellar outer atmosphere. This also offers a mechanism for short-term chromospheric activity on the stars with close-in Jupiter-mass planets. \subsubsection{The Non-Varying Program Stars}\label{non_varying} Of the 10 program stars we monitored for H \& K variability, five of them showed no changes down to the 0.001 level: 51~Peg, HD~46375, HD~75289, HD~76700, HD~83443. We offer two reasons to explain the relative quiescence of these stars. It was well known from the many years of the Mt.~Wilson S$_{HK}$ survey that there is a strong correlation of Ca II emission with rotation rate (or, inversely, with rotation period) (Noyes et al.~1984, Pasquini et al.~2000). This is a likely contributor, though it is not obvious in our sample set as shown in Figure~\ref{MADK} (left) where the inverse of the rotation period is plotted against the mean chromospheric emission of the Ca II K line $\langle$K$\arcmin\rangle$. The photospheric contribution to the emission was removed from $\langle$K$\rangle$ using $\langle$K$\arcmin\rangle$ = $\langle$K$\rangle$(1 - ${R_{phot} \over R_{HK}}$) where R$_{phot}$ = R$_{HK}$ $-$ R$\arcmin_{HK}$ and is an empirical function of ($B - V$) and S$_{HK}$ taken from Hartmann et al.~(1984).
The chromospheric contributions R$\arcmin_{HK}$ are from Wright et al.~(2004).\footnote{For those few stars that were not in Wright et al.'s paper, we removed the photospheric contribution as tabulated from stars in Wright et al.'s sample of the same spectral type and log($g$).} In Table~\ref{starpars} we list $\langle$K$\rangle$, $\langle$K$\arcmin\rangle$, and $\langle$MADK$\rangle$ for all the stars. From Figure~\ref{MADK} (right), we deduce that the higher a star's chromospheric emission, the more night-to-night activity it displays. Radick et al.~(1998) show the same effect for a much larger sample. This is akin to shot noise since the flaring or stochastic noise associated with the activity will increase with the activity level. Secondly, a recent calculation of the magnetic fields in giant extrasolar planets (S\'anchez-Lavega 2004) looked at the internal structure and the convective motions of these planets in order to calculate the dynamo-generated surface magnetism. Given the same angular frequency (which is a reasonable approximation for the short-period planets in question), the magnetic dipole moment, and hence the magnetospheric strength, increases with planetary mass. This is observed in our own solar system for the magnetized planets where the magnetic moment grows proportionally with the mass of the planet (Arge et al.~1995). Since only lower limits exist for most of the hot jupiters, we can only plot $M_{p}$sin$i$ against $\langle$MADK$\rangle$ in Figure~\ref{msini_MADK} where we still see an intriguing correlation. The dashed circles for HD 179949 and $\upsilon$ And are their $\langle$MADK$\rangle$ values (0.0021, 0.0011, respectively) with the orbital modulation removed. Of our sample, $\tau$ Boo has the most massive planet and yet falls well below the correlation.
As we discuss further in Section~\ref{theory}, if the star and planet are tidally locked, as is thought to be the case for $\tau$ Boo, then there is little or no free energy left from the orbit and we would expect weak, if any, magnetic coupling. \subsection{Modulation by Rotation}\label{modbyrot} For most of our target stars, rotation periods are not well enough known to accurately phase the Ca II data with $P_{rot}$. There is even ambiguity in $P_{rot}$ of the often-observed $\kappa^{1}$~Ceti. In late 2003, $\kappa^{1}$~Ceti was monitored continuously for 30.5 days by MOST.\footnote{The MOST microsatellite (Microvariability \& Oscillations of STars), a Canadian photometric telescope recently launched to observe p-mode oscillations on sun-like stars (Walker et al.~2003b)} This best-ever lightcurve obtained for $\kappa^{1}$~Ceti showed a pattern composed of two transiting spots of differing periods, 8.9 and 9.3 days providing a direct and unique measurement of differential rotation (Rucinski et al.~2004). The real ambiguity in $P_{rot}$ from the space observations of $\kappa^{1}$ Ceti demonstrates the difficulty in measuring $P_{rot}$ for most stars. Nonetheless, two stars do exhibit clear rotational modulation: $\kappa^{1}$~Ceti, a star with no confirmed planet (Halbwachs et al.~2003) and HD~73256, a star with a 1.85-M$_{J}$ planet orbiting at 0.037~AU (Udry et al.~2003). The Ca II emission from $\kappa^{1}$~Ceti has been monitored by Baliunas et al.~(1995) through the narrow-band filter of the Mt.~Wilson survey from 1967 to 1991. These data show long-term stability of a period of 9.4 $\pm$ 0.1 d, close to the photometric rotation period of 9.214 d published by Messina \& Guinan (2002). The rotation period for HD~73256 is photometrically determined to be 14 d (Udry et al.~2003), consistent with the 13.9 days derived from the R$\arcmin_{HK}$ activity index (Donahue 1993). 
We observed periodic Ca II H \& K variability in the chromosphere of $\kappa^{1}$~Ceti during our 2002 and 2003 CFHT runs from which we determined a rotation period of 9.332 $\pm$ 0.035 d. The results were first published in Rucinski et al.~(2004) where we compared the activity seen in Ca II with MOST's lightcurve showing that the chromospheric activity coincided with the low-latitude spot such that the maxima of the two curves agree. HD~73256 was observed twice per night for four nights at the VLT. The mean K cores of these two stars are shown at the top of Figure~\ref{vlt_Kcores} where their similarly strong emission is evident. The high level of chromospheric emission points to a young age for both of these stars: 650 $-$ 750 Myr for $\kappa^{1}$~Ceti (G\"udel et al.~1997; Dorren \& Guinan 1994) and 830 Myr for HD~73256 (Donahue 1993). The $\langle$MADK$\rangle$ given for these two stars in Table~\ref{starpars} are corrected for the rotational modulation resulting in activity levels comparable to HD~179949. The integrated residual K fluxes for $\kappa^{1}$~Ceti and HD~73256 are plotted against relative rotational phase in Figure~\ref{intK_Prot}. For completeness, we plot the residual K fluxes of HD~73256 as a function of orbital phase in Figure~\ref{hd73256_intK} where the flare is apparent at $\phi_{orb}$ = 0.03. There is no clear signature of the planet in the activity. In both cases, $\kappa^{1}$~Ceti ($M_{V}$ = 4.92) and HD~73256 ($M_{V}$ = 5.27) show sporadic flaring beyond the clear rotational modulation. The periodic best-fit curve for $\kappa^{1}$~Ceti necessitates a non-zero eccentricity while a sine curve is sufficient for HD 73256. The largest excursion from the rotation curve of $\kappa^{1}$~Ceti has an energy of 2.8$\times$10$^{4}$ erg~cm$^{-2}$~s$^{-1}$ (again, measured by comparing with solar absolute flux). 
We estimate the absolute flux emitted from HD~73256's flare at $\phi_{rot}$ = 0.15 to be 4.9$\times$10$^{4}$ erg~cm$^{-2}$~s$^{-1}$ (or $>2.9\times10^{8}$ erg~cm$^{-2}$ if the flare lasted for at least the hour for which we observed it.) The modulation of the K emission due to rotation is $\approx$~6\% indicating that the emission is dominated by a large hotspot on the stellar surface. As we have seen on HD~179949 and $\upsilon$~And, planet-induced variations are at the level of 1 $-$ 2\% suggesting that the reigning hotspot could have diluted any heating caused by the hot jupiter. The $\Delta$RVs for $\kappa^1$~Ceti are plotted in Figure~\ref{rv_k1ceti} against the 9.332-d phase determined from the K-line residuals where the open symbols correspond to data from 2002 and the solid squares to 2003. From the 2002 data we derive $\sigma_{RV} = $21.8 m~s$^{-1}$ and 23.6 m s$^{-1}$ when combined with 2003. These values are very similar to $\sigma_{RV} = $24.4 m s$^{-1}$, found over 11 years by Cumming et al.~(1999). In 2003 the $\Delta$RVs appear significantly different between the two nights, something which seems to be reinforced by the consistency within the pairs of $\Delta$RVs. While a planetary perturbation cannot be ruled out by the 2002 data and other PRV studies, the difference in 2003 might be associated with the velocity field of the star itself. The increase of velocity with increasing K-line strength at $\phi_{rot} \approx$ 0.7 is consistent with the extreme event in 1988 seen by Walker et al.~(1995). However it should be emphasized, the possibility remains of a close giant planet around $\kappa^1$~Ceti. For instance, a planet inducing a reflex radial velocity variation $< 50$~m~s$^{-1}$ and tidally synchronized with the star, would have $M_{p} \sin i \simeq 0.74$~M$_J$ and $a$ = 0.084 AU. 
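The orbit of such a tidally synchronized companion follows directly from Kepler's third law; a minimal sketch in cgs units, assuming a stellar mass of $\sim$0.95 M$_{\odot}$ for the G5 dwarf $\kappa^1$~Ceti (our assumption, not a value from the text):

```python
import math

G = 6.674e-8          # gravitational constant, cgs
M_SUN = 1.989e33      # solar mass, g
AU = 1.496e13         # astronomical unit, cm
DAY = 86400.0         # seconds per day

def kepler_a(period_days, mstar_msun):
    """Semi-major axis (AU) from Kepler's third law, planet mass neglected."""
    P = period_days * DAY
    a3 = G * mstar_msun * M_SUN * P ** 2 / (4.0 * math.pi ** 2)
    return a3 ** (1.0 / 3.0) / AU

# Orbit synchronized to kappa^1 Ceti's 9.332-d rotation period:
a = kepler_a(9.332, 0.95)
print(round(a, 3))    # ~0.085 AU, consistent with the quoted a = 0.084 AU
```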
\section{A Physical Scenario}\label{theory} The enhancements of chromospheric activity on HD 179949 and $\upsilon$ And appear only once per orbit implying a magnetic, rather than tidal, interaction between the star and its hot jupiter. The two stars are F-type stars with higher X-ray luminosity than the solar value. The ROSAT catalogue of bright main-sequence stars lists HD~179949 as having at least double the X-ray luminosity (a measurement independent of $i$) of most other single F8 $-$ 9 dwarfs (H\"unsch et al.~1998). Yohkoh solar X-ray observations (e.g. Yokoyama \& Shibata 2001) have shown that the energy release at the site of magnetic reconnection during a solar flare generates a burst of X-ray emitting gas. This hot plasma is funneled along the magnetic field lines down to the surface producing `footprints' through anomalous heating of the Sun's chromosphere and transition region. It is this same phenomenon that is likely occurring between hot jupiters and their host stars through the reconnection of their magnetic fields. Observationally, companion-induced activity is unambiguously observed on RS CVn stars, as discussed in Section~\ref{intro}. The hot jupiter of $\upsilon$ And is located farther out from the host star than that of HD 179949, implying that the magnetic interaction between the star and its hot jupiter is diminished. This would result in less Ca~II enhancement as is shown in our data. $\tau$~Boo is also an F-type star with intense X-ray emission, but its magnetic influence is likely reduced by the small relative motion in the azimuthal direction between the planet and the stellar magnetosphere in an almost final equilibrium state in which the star and the planet are tidally locked to each other. 
While more data are required to truly verify whether the variability of Ca~II emission is correlated to the apparent position of the hot jupiter in these systems, we explore and review a few theoretical aspects of planet-induced heating scenarios in this section. The location of the hot jupiter relative to the Alfv\'en radius (the distance from the star at which the radial velocity of the wind $V_{r,{\rm wind}}$ equals the local Alfv\'en velocity $V_A$) plays a significant role in transporting energy toward the star against the stellar winds. Since the Alfv\'en radius of the Sun is about 10 to 20 R$_{\odot}$ at solar minimum and 30 R$_{\odot}$ at solar maximum (e.g.~Lotova et al.~1985), the small distance $\lesssim 0.1$ AU of hot jupiters from their host stars suggests that unlike our Jupiter, surrounded by a bow shock, some of these hot jupiters are located inside the Alfv\'en radius depending on the magnetic strength of their host stars (Zarka et al.~2001, Ip et al.~2004). Therefore the direct magnetic interactions between a hot jupiter and its star without a bow shock might resemble the Io-Jupiter interactions (Zarka et al.~2001) or the RS CVn binaries (Rubenstein \& Schaefer 2000). Most of the theoretical models applied to these two cases have focused on the geometry of intertwined magnetic fields as well as the energy transport through Alfv\'en waves and/or induced currents. Alfv\'en waves cannot propagate along the stellar field lines toward the star in the region outside the Alfv\'en radius where the group velocity of Alfv\'en waves is always in the positive radial direction (e.g. Weber \& Davis 1967).\footnote{Since the sound speed is much less than the Alfv\'en speed in the extended coronal region, the fast Alfv\'en radius is almost the same as the Alfv\'en radius. Therefore we do not distinguish these two radii in this paper.
However unlike the Alfv\'en modes channeling along field lines, the fast Alfv\'en waves can propagate in all directions inside the Alfv\'en radius and therefore their impact on the parent star is attenuated. Here we assume that the Alfv\'en mode is the dominant, or at least comparable, mode relative to the other compressible modes when they are excited.} Other means of inward energy transport require their energy flux to be at least larger than the energy flux carried by the stellar winds. Ip et al.~(2004) estimate the input magnetic power due to the relative motion between the synchronized hot jupiter and the stellar magnetosphere to be $\sim 10^{27}$ ergs~s$^{-1}$, the same order of magnitude as a typical solar flare. A similar amount of power might be obtained based on the induced current model (Zarka et al. 2001) if the radius of the ionosphere of an unmagnetized hot jupiter in the induced current model can be approximated by the radius of the magnetopause of a magnetized hot jupiter. The observed excess energy flux from an unresolved disk of the star is equal to this input magnetic power averaged over the disk of the star.
That is, the energy flux is roughly equal to \begin{eqnarray} &&(B_m^2/8\pi) (V_{orb}-V_{\phi,{\rm wind}}) (r_m/R_*)^2 \nonumber \\ &=& \left( {B_*^{2(1-1/q)} B_p^{2/q} \over 8\pi} \right) a \left( {2\pi \over P_{orb}}-{2\pi \over P_{rot}} \right) \left( {R_* \over a} \right)^{2p(1-1/q)} \left( {R_p \over R_*} \right)^2, \label{eq:energyflux} \end{eqnarray} where $r_m$ is the radius of the planet's magnetopause, $B_{m}$ is the magnetic field at the magnetopause, $R_*$ is the radius of the star, $R_p$ is the radius of the planet, $a$ is the distance between the star and the planet, $V_{orb}$ and $P_{orb}$ are the orbital velocity and period respectively, $P_{rot}$ is the rotation period of the star, $V_{\phi,{\rm wind}}$ is the azimuthal component of the stellar wind velocity, and $B_*$ and $B_p$ are the mean magnetic field on the surface of the star and the planet. In deriving the above equation, we have assumed that the stellar magnetic field decays as $r^{-p}$, the planetary field decays as $r^{-q}$, $r_m$ was determined by equating the stellar and planetary fields at the magnetopause (i.e. $B_*(R_*/a)^p = B_p(R_p/r_m)^q$), and the stellar magnetosphere inside the Alfv\'en radius is nearly co-rotating with the stellar rotation (i.e. $V_{\phi,{\rm wind}}\approx a\Omega_*$). The radial component of the stellar wind $V_{r,{\rm wind}}$ is left out from the above estimate as long as the condition $V_{r,{\rm wind}} \lesssim V_{orb}-V_{\phi,{\rm wind}}$ is valid. The energy flux in eq(\ref{eq:energyflux}) should be regarded as the maximal input energy from the planet's orbital energy because only some fraction of this amount of energy is transferred to the Ca II emissions. 
For $p=2$ (Vrsnak et al.~2002 for the case of our Sun, and Weber \& Davis 1967 for the case of open fields), $q=2$, $a$ = 0.045~AU, $R_*=1.3$~R$_{\odot}$, $R_p=1.1$~R$_J$, $P_{rot}=9$ d, $B_{*}$ = 200~G, and $B_{p}$ = 10~G, the energy flux given by eq(\ref{eq:energyflux}) is roughly equal to 10$^{5}$ erg~cm$^{-2}$~s$^{-1}$, a value comparable to the differential intensity from our data of Ca II K emission from HD 179949. The same amount of energy flux can also be achieved for the case of a dipole field for the hot jupiter ($p=2$, $q=3$) where $a=0.045$~AU, $B_{*}$ = 250~G, and $B_p$ = 10~G. If both the star and the planet have dipole fields ($p=q=3$), very strong fields $B_*=1000$ G and $B_p=30$ G are required to generate the same energy flux. Therefore the tight energy budget constrained by the synchronous Ca II emission from HD 179949 strongly suggests that the mean global fields of this F-type star are not likely to be in a dipole configuration at the location of the planet but instead have a radial, open structure (i.e. $p=2$), just as the solar fields do as a result of the outflowing winds. The argument that hot jupiters might have weaker fields than our Jupiter due to slower spin rates and weaker convection (S\'anchez-Lavega 2004) should be treated with caution: in addition to the uncertainty in the interior structure of the metallic hydrogen region of a hot jupiter, the response of slow convection to various rotation rates in the dynamo process is not well understood. If $p=2$ and $B_p \lesssim$ 1~G (S\'anchez-Lavega 2004) for HD 179949 b, then eq(\ref{eq:energyflux}) indicates that $B_{*}$ = 300~G is required to generate the energy flux 10$^{5}$ erg~cm$^{-2}$~s$^{-1}$, and in this case the stellar field dominates over the planetary field even on the surface of the hot jupiter (i.e. $q=\infty$ in eq(\ref{eq:energyflux})). 
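As a sanity check, the $p=q=2$ case of eq(\ref{eq:energyflux}) can be evaluated numerically with the parameters quoted above. Note that the orbital period $P_{orb}\approx3.09$ d adopted below for HD 179949 b is an assumed input, not a value quoted in this section.

```python
import math

# Evaluate eq. (energyflux) for p = q = 2, where it reduces to
# (B_* B_p / 8 pi) * a * (2 pi/P_orb - 2 pi/P_rot) * (R_*/a)^2 * (R_p/R_*)^2.
# P_orb ~ 3.09 d for HD 179949 b is an assumed input, not given in the text above.
AU, R_SUN, R_JUP, DAY = 1.496e13, 6.957e10, 7.149e9, 86400.0  # cgs units

B_star, B_p = 200.0, 10.0                      # G
a = 0.045 * AU                                 # cm
R_star, R_p = 1.3 * R_SUN, 1.1 * R_JUP         # cm
P_orb, P_rot = 3.09 * DAY, 9.0 * DAY           # s

dv = a * (2 * math.pi / P_orb - 2 * math.pi / P_rot)  # relative azimuthal speed, cm/s
flux = (B_star * B_p / (8 * math.pi)) * dv * (R_star / a) ** 2 * (R_p / R_star) ** 2
print(f"energy flux ~ {flux:.1e} erg cm^-2 s^-1")
```

The result, about $10^5$ erg~cm$^{-2}$~s$^{-1}$, reproduces the order of magnitude quoted above.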
The observational constraints on the strength of the stellar magnetic fields, such as the field versus Ca~II relation (Schrijver et al.~1989, 1992), along with the radio cyclotron emissions from the hot jupiter's magnetosphere\footnote{The characteristic frequency of the cyclotron radiation is ${eB_p \over 2\pi m_e c}$, where $c$ is the speed of light, $e$ and $m_e$ are the electron charge and mass respectively. The radiation can reach $\approx 30-60$ MHz if $B_p \approx 10-20$ G.} should help to narrow down the field strength of hot jupiters in the magnetic interaction scenario, thereby improving our knowledge of the interior structure and the dynamo processes of gaseous planets. Now we turn our attention to the variation of the Ca II level, the phase coverage of the additional emission, and the phase lead at the different epochs of our observations. The data for HD~179949 seem to suggest that the additional Ca~II emission is smaller and the range of phase spanned by the emission is larger when the average (or minimal) K emission is larger. Presumably this is related to the intrinsic stellar activity and therefore to the stellar field geometry. The observed phase lead may be caused by spiral stellar fields loaded with stellar winds. When a star is in its quiet phase and therefore the average Ca~II emission is very low, no enhancement occurs, probably because $V_{\rm wind} \gtrsim V_A$ for a hot jupiter located far out from its parent star and therefore lying outside the Alfv\'en radius, as may be the case for the 2001 August observations of $\upsilon$ And. As the star shifts to its more active phase, the average Ca~II emission is at a moderate level and $V_{\rm wind}$ is not far smaller than $V_{A}$ at the hot jupiter. 
At this time, the configuration of the field lines on the surface of the star may be characterized by open fields from the coronal holes covering a large area of the star, as well as closed fields distributed only near the magnetic equator. Consequently the foot-points of the open stellar field lines pointing to the planet are located at lower latitudes, as indicated by the 2001 and 2002 data for HD 179949 fitted by a truncated sine curve (Shkolnik et al. 2003; also shown in Figure~\ref{intK2}). While a great number of closed fields form at low magnetic latitudes during this time, the coronal holes shrink to the magnetic polar regions. The bright spot at a high latitude implied by the observation of HD 179949 in September 2003 might arise because the hot jupiter perturbs the open field lines emanating from the shrinking coronal holes near the polar regions, leading to a longer phase duration of the additional Ca II flux and perhaps reducing the additional emission due to the smaller projection along the line of sight at higher latitudes. Note that at times close to the stellar-maximum activity, the stellar field lines might be occasionally stretched out like solar streamers emanating from the low latitudes of the star where closed loops of stellar magnetic fields aggregate, possibly giving rise to planet-induced heating at low latitudes of the stellar surface as well. The picture that we have sketched thus far assumes that most of the energy flux released from the vicinity of the hot jupiter is transported along the field lines by Alfv\'en waves and deposited at the foot-points of the magnetic lines. Since the field lines inside the Alfv\'en radius are dominated by the poloidal component, detailed calculations for stellar wind models are needed to study how the accumulation of small pitch angles along the field line can lead to the moderate to large phase angles, $60^{\circ}$ for HD~179949 and $180^{\circ}$ for $\upsilon$~And. 
The phase difference is also determined by the Alfv\'en-speed travel time along the stellar field line. If $V_{A}\gg V_{\rm wind}$, the Alfv\'en disturbance roughly takes $a/V_A \approx$ a few hours to propagate from the hot jupiter to the star with $V_A \approx 10^7-10^8$ cm~s$^{-1}$ at 0.04 AU. This means that the azimuthal angle that the planet has already traveled over the Alfv\'en-speed travel time is not small. Since the hot jupiter of $\upsilon$~And is located farther out ($a=0.06$ AU) from its star than HD~179949 b ($a=0.045$ AU), the large phase angle of $180^{\circ}$ for $\upsilon$~And might actually represent a phase lag caused by the small inward group velocity of the Alfv\'en waves $V_A-V_{r,{\rm wind}}$, which takes a considerable amount of time to carry the waves along the field lines right after they are generated from the planet. Alternatively, a large phase difference between the location of the heating spot in the chromosphere and the position of the hot jupiter might be caused by entangled or rotationally spiraled magnetic fields connecting directly between the hot jupiter and the star (Rubenstein \& Schaefer 2000, Saar et al.~2004, Ip et al. 2004). Besides considering the energy transported along the field lines and the field geometry, understanding how the energy is dissipated is important to construct a complete picture of the Ca~II emission in the planet-induced scenario. In our own solar system, a bow shock of the solar wind generally hinders such inward transport because Alfv\'en waves cannot propagate toward the star from the region outside the Alfv\'en radius. The inward Poynting energy of Alfv\'en waves $r^2 b^2 B/\sqrt{\rho}$ is roughly conserved if the wind velocity $V_{r,{\rm wind}}$ is much smaller than the Alfv\'en speed $V_A$. Here $r$ is the distance away from the star, $b$ is the magnetic disturbance of the wave, and $\rho$ is the mass density of the stellar wind. 
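The travel-time estimate above is simple arithmetic; a minimal check for the two bracketing Alfv\'en speeds quoted in the text:

```python
# Alfven-wave travel time a / V_A from the hot jupiter to the star,
# for a = 0.04 AU and the V_A range 1e7-1e8 cm/s quoted in the text.
AU = 1.496e13            # cm
a = 0.04 * AU            # cm
for V_A in (1e7, 1e8):   # cm/s
    hours = a / V_A / 3600.0
    print(f"V_A = {V_A:.0e} cm/s -> travel time ~ {hours:.1f} h")
```

For the fast end of the range this gives a couple of hours, consistent with the "a few hours" quoted above; the slow end approaches a day.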
At first glance, the transition region characterized by a steep decrease of the Alfv\'en speed with depth seems to provide a possible radial stratification for the inward-propagating Alfv\'en waves to pile up the magnetic energy density, leading to the nonlinear dissipation of the growing waves and therefore to heating at the top of the chromosphere. However, the extremely narrow transition region corresponding to sharp gradients in density and Alfv\'en speed should act as a wave barrier to reflect the Alfv\'en waves. In this case, the wavelength is comparable to the size of the magnetopause of the hot jupiter (Wright 1987), unless the high-frequency modes are largely excited at the reconnection site. The energy carried by the planet-induced Alfv\'en waves along the open fields might be finally transmitted to coronal loops and therefore might be dissipated via resonance absorption, a damping mechanism that is, however, in contradiction with the observations of the solar corona and the coronae of cool stars (Schrijver \& Aschwanden 2002; Demoulin et al.~2003). The heating due to particles accelerated by Alfv\'en waves (Crary 1997) is not important either because, unlike in interplanetary space, the dense corona eliminates the kinetic effect of plasmas inside 0.04 AU.\footnote{Besides the particle acceleration by Alfv\'en waves, a jet of accelerated particles with high speeds from the reconnection site at the magnetopause of a magnetized planet might be able to heat the stellar corona, but it is difficult for this process to produce a large phase difference between the heating spot and the location of the planet unless the coronal fields can be entangled on a larger scale, as in the models for the RS CVn stars.} Despite these difficulties, the energy deposit on the surface of the star from the planet-induced Alfv\'en disturbances may be achieved by interacting non-linearly with stellar winds. 
The stellar wind consists of charged particles, stellar fields, and probably Alfv\'en waves. Non-linear interactions between planet-induced incoming and stellar intrinsic outgoing waves are one route of heating the surface of the star. The scenario of magnetic interaction implies the orbital decay of hot jupiters since the ultimate source of energy comes from the orbital energy. Theoretically the orbital decay of hot jupiters on a timescale of a few billion years can result from the tidal dissipation in the host star driven by the hot jupiter so long as the tidal dissipation in solar-type stars is efficient (Rasio et al.~1996, Witte \& Savonije 2002, P\"atzold \& Rauer 2002, Jiang et al.~2003). In the magnetic interaction scenario for Ca II emission, the timescale of orbital decay is roughly equal to the ratio of the orbital energy of the hot jupiter ($\sim$10$^{44}$ ergs) to $10^{27}$ erg/s. This gives a timescale as short as several billion years, imposing a non-negligible constraint on modeling the orbital evolution of hot jupiters. In the case of $\tau$ Boo, the F-type star might be almost tidally locked to its hot jupiter. Therefore there is less free energy available from the planet's orbit. According to eq(\ref{eq:energyflux}), the azimuthal relative motion between the stellar and the planetary magnetospheres is smaller than that for the HD 179949 system by a factor of 0.2 if 3.2 d is indeed the rotation period of $\tau$ Boo. The mean chromospheric emission $\langle$K$\arcmin\rangle$ from $\tau$ Boo is weaker than that from HD 179949 roughly by a factor of 0.9. Assuming that the mean stellar field of $\tau$ Boo is smaller than that of HD 179949 by 0.9$^2$ ($\langle$K$\arcmin\rangle$ $\propto$ $B_{*}^{0.5}$), one can estimate that the input energy to $\tau$ Boo is less than that to HD 179949 roughly by a factor of 0.15. If the corresponding Ca~II emissions were dimmed by this same factor, the effect should have been detected. 
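Two of the estimates above can be checked with one-line arithmetic. The scaling used for the $\tau$ Boo comparison (input power proportional to the field ratio times the velocity ratio for $q=2$) is our reading of eq(\ref{eq:energyflux}), not an explicit formula in the text.

```python
# (1) Orbital-decay timescale: orbital energy (~1e44 erg) over input power (~1e27 erg/s).
E_orb, P_in = 1e44, 1e27                 # erg, erg/s
t_decay_yr = E_orb / P_in / 3.156e7      # 3.156e7 seconds per year
print(f"decay timescale ~ {t_decay_yr:.1e} yr")   # a few billion years

# (2) tau Boo vs HD 179949: velocity ratio ~0.2, field ratio ~0.9^2 = 0.81,
#     so for q = 2 the input-power ratio is ~0.16, matching the ~0.15 quoted above.
ratio = 0.2 * 0.9 ** 2
print(f"input-power ratio ~ {ratio:.2f}")
```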
However unlike the other two F-type stars HD 179949 and $\upsilon$~And, the variability of Ca~II emissions from $\tau$ Boo did not show any consistent phase relation with the planet's orbit. Note that the radial movement of spiral stellar fields with the stellar winds might be important in providing energy in this case because $V_{r,{\rm wind}} \approx V_{orb}$ at 0.04 AU in the solar system. However in the case of small pitch angles of the field lines inside the Alfv\'en radius, the Alfv\'en modes driven solely by the radial impact of the stellar winds might not be as efficient as those excited when the stellar fields sweep across the hot jupiter, and for $\tau$ Boo the latter channel is weak due to the slow relative azimuthal motion. The transiting system HD 209458 did not show synchronized Ca II enhancement, perhaps because G stars have $V_{r,{\rm wind}} > V_A$ at the distance of the hot jupiter due to weaker stellar fields. The non-synchronized Ca II enhancement of $\tau$ Boo and HD~209458 (as well as the flaring on HD 73256 and $\kappa^{1}$~Ceti, discussed in Section~\ref{modbyrot}) might still be attributed to direct interactions with their planets but through more chaotic means due to the variability of the stellar winds at the locations of these hot jupiters relative to the nearby Alfv\'en radius. Monitoring these stars continuously through several orbital and rotational periods should pin down the cause $-$ intrinsic due to fast rotation or induced by a hot jupiter $-$ of the strong night-to-night variabilities detected in these systems. \section{Summary and Future}\label{summary} Of the sample of stars observed from the CFHT, those with planets (with the exception of 51~Peg) show significant night-to-night variations in their Ca~II H and K reversals. $\tau$~Ceti and the Sun, which have no close planets, remained very steady throughout each of the four observing runs. 
HD~179949 and $\upsilon$~And exhibited repeated orbital phase-dependent activity with enhanced emission leading the sub-planetary point by 0.17 and 0.47 in orbital phase, respectively. Both systems are consistent with a magnetic heating scenario and may be the first glimpse of the magnetospheres of extrasolar planets. The phase lead or lag of the peak emission relative to the sub-planetary longitude can provide information on the field geometries and the nature of the effect, such as tidal friction, magnetic drag or reconnection with off-center magnetic fields, including a Parker-spiral type scenario. $\tau$ Boo and HD 209458 also exhibited night-to-night variations that could not exclusively be due to stellar rotation. If $\tau$ Boo is indeed tidally locked to its hot jupiter, little orbital energy is available, and the slow relative azimuthal motion prevents the stellar fields sweeping across the planet from generating Alfv\'en modes efficiently. We measured the excess absolute flux released in the enhanced chromospheric emission of HD~179949 to be of the same order of magnitude as a typical solar flare, $\sim$~10$^{27}$ erg~s$^{-1}$ or 1.5$\times$10$^{5}$ ergs~s$^{-1}$~cm$^{-2}$. This implies that flare-like activity triggered by the interaction of a star with its hot jupiter may be an important energy source in the stellar outer atmosphere. This offers a mechanism for short-term chromospheric activity. The H \& K emission of $\kappa^{1}$~Ceti, an active star with no confirmed planet, was clearly modulated by the star's 9.3-d rotation. Similarly, HD 73256 displayed rotational modulation with its 14-d period. In these two cases, the chromospheric emission increases by $\approx$~6\% (relative to the normalization level at 1/3 of the continuum). Any planet-induced heating at the level of 1 $-$ 2\% could have been diluted by the dominating hotspot on the stellar surface. 
Neither the $\Delta$RVs nor the Ca II periodicity excludes the possibility of a sub-stellar companion in a tight orbit around $\kappa^{1}$~Ceti. Apart from the cyclical component for four of the stars, short-term chromospheric activity appears weakly dependent on the mean K-line reversal intensities for the sample of 13 stars. Also, a suggestive correlation exists with $M_{p}\sin i$ and thus with the hot jupiter's magnetic field strength. Because of their small separation ($\leq$ 0.1 AU), many of the hot jupiters lie within the Alfv\'en radius of their host stars, which allows a direct magnetic interaction with the stellar surface. Additional Ca II observations are crucial to confirm the stability of the magnetic interaction as well as to establish better phase coverage. Observations on timescales of a few years will begin to characterize the long-term activity of our program stars and allow us to see correlations between intrinsic Ca II emission and night-to-night activity more clearly. This work opens up the possibility of characterizing planet-star interactions with implications for extrasolar planet magnetic fields and the energy contribution to stellar atmospheres. A next step in understanding planet-star interactions is to map the activity as a function of stellar atmospheric height. Above the chromosphere lies the thin transition region (TR), where the temperature increases steeply as density and pressure drop, and the corona, which can extend out to several stellar radii. Since the magnetic field drops off as $r^{-p}$ (where $2 \leq p \leq 3$), these layers facilitate a stronger interaction with the planet. Their FUV and X-ray emissions will be extremely important diagnostics. One indication that the heating proceeds from the outside in would be if the increase in emission occurs slightly earlier in phase than in Ca II. Moreover, the relative strengths of the different emission lines will tell us where most of the energy is dissipated. 
The summed energy will reveal any discrepancies with the theorized energy budget. Orbital phase-dependent variability at these heights will further constrain the nature, form and strength of the interaction as well as specify non-thermal radiative processes in these hot layers of gas. \acknowledgements We are grateful to Marek Wolf and Petr Harmanec for their photometric observations of HD 179949 made at the South African Astronomical Observatory (SAAO). We thank Geoff Bryden, Peng-Fei Chen, Gary Glatzmaier, Gordon I. Ogilvie, and Ethan T. Vishniac for useful communications regarding Section~\ref{theory}. Research funding from the Canadian Natural Sciences and Engineering Research Council (G.A.H.W. \& E.S.) and the National Research Council of Canada (D.A.B.) is gratefully acknowledged. We are indebted to the CFHT staff for their care in setting up the CAFE fiber feed and the Gecko spectrograph, as well as to the staff at ESO's VLT for their telescope and instrument support and the real-time data-reduction pipeline. Also, we appreciate the helpful comments and suggestions from the referee, Steve Saar. \clearpage
\section{Introduction} Let ${\mathbb D}=\{\lambda\in{\mathbb C}: |\lambda|<1\}$ denote the unit disk. For $\lambda',\lambda''\in{\mathbb D}$ we define the M\"obius function $m$ on ${\mathbb D}$ as $$ m(\lambda', \lambda'')=\left|\frac{\lambda'-\lambda''}{1-\bar \lambda'\lambda''}\right|, $$ and the Poincar\'e function $p$ as $p(\lambda',\lambda'')=\frac12\log\frac{1+m(\lambda',\lambda'')}{1-m(\lambda',\lambda'')}$. Let $X$ be a complex manifold. For $x,y\in X$ put \begin{align*} c_X(x,y)=&\sup\{p(f(x),f(y)): f\in{\mathcal O}(X;{\mathbb D})\},\\ c_X^\ast(x,y)=&\sup\{m(f(x),f(y)): f\in{\mathcal O}(X;{\mathbb D})\}, \end{align*} where ${\mathcal O}(X;{\mathbb D})$ denotes the set of all holomorphic mappings $X\to{\mathbb D}$. The function $c_X$ is called the Carath\'eodory pseudodistance for $X$ (see e.g. \cite{JP1}). In the case when $c_X$ is indeed a distance we say that $X$ is $c$-hyperbolic. A $c$-hyperbolic manifold $X$ is called $c$-complete if any $c_X$-Cauchy sequence $\{x_\nu\}_{\nu\ge1}\subset X$ converges to a point $x_0\in X$ (w.r.t. the ``usual'' topology). We say that a $c$-hyperbolic manifold $X$ is $c$-finitely compact if any ball $B_c(x_0,R)=\{x\in X: c_X(x,x_0)<R\}$ is relatively compact in $X$ (w.r.t. the ``usual'' topology). It is known (see \cite{Sib1} and \cite{Sel1}) that on a domain in the complex plane these two notions are equivalent. The aim of this paper is to give a local version of the results of N. Sibony \cite{Sib1} and M. A. Selby \cite{Sel1}; we also simplify the proofs. \begin{theorem}\label{thm:1} Let $\Omega\subset{\mathbb C}$ be a domain and let $\zeta\in\partial\Omega$ be a boundary point. 
Then the following conditions are equivalent: \begin{enumerate} \item\label{thm:1:1} the point $\zeta$ is a weak peak point for $\Omega$, i.e., there exists an $f\in {\mathcal O}(\Omega)\cap C(\Omega\cup\{\zeta\})$ such that $|f|<1$ on $\Omega$ and $f(\zeta)=1$; \item\label{thm:1:4} there exists no finite Borel measure $\mu$ on $\Omega$ such that $$ |f(\zeta)|\le\int|f|d\mu\quad\text{ for any } f\in {\mathcal O}(\Omega)\cap C(\Omega\cup\{\zeta\}); $$ \item\label{thm:1:5} there exists no Borel probability measure $\mu$ on $\Omega$ such that $$ f(\zeta)=\int f d\mu\quad\text{ for any } f\in {\mathcal O}(\Omega)\cap C(\Omega\cup\{\zeta\}); $$ \item\label{thm:1:3} for any fixed $a\in(0,1)$ we have $$ \sum_{n=1}^\infty \frac{\gamma(A_n(\zeta,a)\setminus\Omega)}{a^n}=+\infty, $$ where $A_n(\zeta,a)=\{z\in{\mathbb C}: a^{n+1}\le |z-\zeta|\le a^n\}$ and $\gamma$ is the analytic capacity (see \cite{Gam} and the definition below); \item\label{cond:1} for any sequence $\{z_\nu\}_{\nu\ge1}\subset\Omega$ such that $z_\nu\to\zeta$ we have $c_{\Omega}(z_0,z_\nu)\to\infty$ for any fixed $z_0\in\Omega$; \item\label{cond:3} for any sequence $\{z_\nu\}_{\nu\ge1}\subset\Omega$ such that $z_\nu\to\zeta$ the sequence $\{z_\nu\}$ is not a $c_{\Omega}$-Cauchy sequence; \item\label{cond:4} for any sequence $\{z_\nu\}_{\nu\ge1}\subset\Omega$ with $z_\nu\to\zeta$ there exists an $f\in{\mathcal O}(\Omega)$ such that $|f|<1$ on $\Omega$ and $f(z_\nu)\to1$ when $\nu\to\infty$. \end{enumerate} \end{theorem} Part of Theorem~\ref{thm:1} for a domain $\Omega$ in ${\mathbb C}^N$ is claimed in \cite{Edi}. However, the presented proof was based on a version of the Hahn-Banach theorem given in \cite{Pol-Gog}. At the end of the paper we give a simple example, which shows that this version of the Hahn-Banach theorem does not hold. So, at the moment we do not know whether Theorem~\ref{thm:1} holds true in higher dimensions. \section{Conditions for existence of a peak function} Let $F$ be any subset of ${\mathbb C}$. 
We denote by $R(F)$ the set of all continuous functions on $F$ that can be approximated uniformly by functions holomorphic on a neighborhood of $F$. We say that $x_0\in F$ is a peak point for $R(F)$ if there exists a function $f\in R(F)$ such that $|f|<1$ on $F\setminus\{x_0\}$ and $f(x_0)=1$. The next result (see e.g. Theorem 10.8 in Chapter VIII of \cite{Gam}) gives a description of $R(\Omega\cup\{\zeta\})$ for a domain $\Omega\subset{\mathbb C}$ and a boundary point $\zeta\in\partial\Omega$. \begin{theorem} Let $\Omega\subset{\mathbb C}$ be a domain and let $\zeta\in\partial\Omega$. For any $f\in H^\infty(\Omega)$ there exists a sequence $\{f_n\}_{n\ge1}\subset H^\infty(\Omega)$ with $\|f_n\|_{\Omega}\le 17\|f\|_{\Omega}$ such that $f_n\to f$ locally uniformly on $\Omega$ and each $f_n$ extends holomorphically to a neighborhood of $\zeta$. Moreover, if $f$ extends continuously to $\zeta$ then $f_n$ tends to $f$ uniformly on $\Omega$. In particular, $R(\Omega\cup\{\zeta\})={\mathcal O}(\Omega)\cap C(\Omega\cup\{\zeta\})$. \end{theorem} Let us give a variation of Bishop's 1/4--3/4 theorem (cf. Theorem 11.1 in Chapter II of \cite{Gam}), a standard technique in the construction of peak functions. \begin{theorem}\label{theorem:10a} Let $X$ be a topological space and let $x_0\in X$. Assume that there exists a sequence $\{f_\nu\}_{\nu}\subset C(X)$ and numbers $0<r<1\le R$ such that \begin{itemize} \item $|f_\nu|\le R$ on $X$; \item $f_\nu(x_0)=1$; \item for any $x\in X\setminus\{x_0\}$ there exists a $k=k(x)$ such that $|f_\nu(x)|<1$ for any $\nu\ge k$; \item for any $k\in{\mathbb N}$ and any $\epsilon>0$ there exists an $m_0$ such that for any $m\ge m_0$ we have $|f_m|\le r$ on the set $\{x\in X:|f_k(x)|\ge 1+\epsilon\}$. 
\end{itemize} Then for any $s\in (\frac{R-1}{R-r},1)$ there exists a subsequence $\{n_k\}_{k=1}^\infty$ such that for the function $h(x)=(1-s)\sum_{k=0}^\infty s^k f_{n_k}(x)$, with the convention $f_{n_0}\equiv1$, we have \begin{enumerate} \item $|h|<1$ on $X\setminus\{x_0\}$; \item $h(x_0)=1$; \item $F_N$ tends uniformly to $h$ on $X$, where $F_N=(1-s)\sum_{k=0}^N s^k f_{n_k}$. \end{enumerate} \end{theorem} \begin{proof} Fix $s\in (\frac{R-1}{R-r},1)$. Then $R-1+s(r-R)<0$. Choose a sequence $\epsilon_\nu\searrow0$ such that $$ \epsilon_{\nu-1}(1-s^\nu)+s^\nu\big(R-1+s(r-R)\big)<0,\quad \nu\ge1. $$ Now we are going to construct inductively a sequence of functions $\{h_\nu\}\subset C(X)$. Put $h_0=1$ and $h_1=f_1$. Assume that $h_0,h_1,\dots,h_\nu$ are constructed. Put $$ W_\nu=\{x\in X:\max_{1\le j\le \nu}|h_j(x)|\ge 1+\epsilon_\nu\}. $$ By the assumptions there exists an $m_\nu$ such that for $h_{\nu+1}=f_{m_\nu}$ we have: \begin{itemize} \item $|h_{\nu+1}|\le R$ on $X$; \item $h_{\nu+1}(x_0)=1$; \item $|h_{\nu+1}|\le r$ on $W_\nu$. \end{itemize} We may assume that $m_\nu\nearrow\infty$. Put $h=(1-s)\sum_{j=0}^\infty s^j h_j$. Note that $h(x_0)=1$. Let us show that $|h|\le1$ on $X$. Assume that $x\not\in\cup_\nu W_\nu$. Then $|h_\nu(x)|\le 1$ for any $\nu\in{\mathbb N}$ (let $\nu\to\infty$ in the definition of $W_\nu$), and hence $|h(x)|\le 1$; if moreover $x\ne x_0$, then $|h_j(x)|<1$ for all large $j$, so $|h(x)|<1$. Note that $W_1\subset W_2\subset\dots$. Now assume that $x\in W_\nu\setminus W_{\nu-1}$ (we put $W_0=\varnothing$). Then \begin{itemize} \item $|h_j(x)|\le 1+\epsilon_{\nu-1}$ for $0\le j\le \nu-1$; \item $|h_\nu(x)|\le R$; \item $|h_j(x)|\le r$ for $j>\nu$. \end{itemize} So, we have $$ |h(x)|\le(1-s)\Big\{ (1+\epsilon_{\nu-1})\sum_{j=0}^{\nu-1} s^j+ Rs^\nu+r\sum_{j=\nu+1}^\infty s^j\Big\}= 1+\epsilon_{\nu-1}(1-s^\nu)+s^\nu\big(R-1+s(r-R)\big)<1. $$ \end{proof} Recall the definition of the analytic capacity. Let $\widehat{\mathbb C}={\mathbb C}\cup\{\infty\}$ denote the Riemann sphere. 
The analytic capacity of a compact set $K$ is defined by $$ \gamma(K)=\sup\{|f'(\infty)|: f\in{\mathcal O}(\Omega), \|f\|\le1, f(\infty)=0\}, $$ where $\Omega$ is the unbounded component of $\widehat{\mathbb C}\setminus K$ and $f'(\infty)=\lim_{z\to\infty} zf(z)$. For any set $F\subset{\mathbb C}$ we put $$ \gamma(F)=\sup\{\gamma(K): K\subset F\text{ compact}\}. $$ We have the following elementary result (cf. Corollary 1.8 in Chapter VIII of \cite{Gam}). \begin{prop}\label{prop:6} If $K\subset{\mathbb C}$ is a compact set then $$ \gamma(K)=\inf\{\gamma(U): K\subset U\text{ open}\}. $$ \end{prop} One of the main results of this section is the following Curtis type result (cf.~Theorem 4.1 in Chapter VIII of \cite{Gam}). \begin{theorem} Let $F\subset{\mathbb C}$ be any subset and let $\zeta\in F$. Assume that $$ \limsup_{r\to0+}\frac{\gamma({\mathbb D}(\zeta;r)\setminus F)}{r}>0. $$ Then $\zeta$ is a peak point for $R(F)$. \end{theorem} \begin{proof} By the assumption there exist a sequence $r_j\to0+$, compact sets $K_j\subset{\mathbb D}(\zeta;r_j)\setminus F$, and holomorphic functions $f_j:\Omega_j\to{\mathbb D}$ such that $f_j(\infty)=0$ and $$ \lim_{j\to\infty} \frac{f'_j(\infty)}{r_j}=\alpha>0, $$ where $\Omega_j=\widehat{\mathbb C}\setminus K_j$. Note that every compact subset of $\widehat{\mathbb C}\setminus\{\zeta\}$ is contained in $\Omega_j$ for $j$ large enough. Consider the sequence of functions $$ g_j(z)=\frac{(z-\zeta)f_j(z)}{f'_j(\infty)},\quad z\in\Omega_j. $$ Fix for a while $j$. From the maximum principle, for any $z\in\Omega_j$ with $|z-\zeta|\ge r_j$ we have $|g_j(z)|\le \frac{r_j}{f_j'(\infty)}$. Hence, $|g_j(z)|\le \frac2\alpha$ for any $z\in\Omega_j$ and big enough $j$. Moreover, $g_j(\infty)=1$. Passing to a subsequence, we can assume that the sequence $g_j$ converges locally uniformly to a bounded holomorphic function $g$ on $\widehat{\mathbb C}\setminus\{\zeta\}$. The singularity of $g$ at $\zeta$ is removable, so $g$ is constant; since $g(\infty)=1$, we get $g\equiv1$. Put $h_j=1-g_j$. Then $h_j(\zeta)=1$, $|h_j|\le 1+\frac2\alpha$, and for any $z\in\widehat{\mathbb C}\setminus\{\zeta\}$ we have $h_j(z)\to0$ when $j\to\infty$. Now we use Theorem~\ref{theorem:10a} to construct a peak function. 
\end{proof} Using the relation between the analytic capacity and the Lebesgue measure on the complex plane (see \cite{Gam}) we get the following. \begin{corollary}\label{cor:6a} Let $F\subset{\mathbb C}$ be a Borel set and let $\zeta\in F$. Assume that $$ \limsup_{r\to0+} \frac{{\mathcal L}({\mathbb D}(\zeta;r)\setminus F)}{r^2}>0. $$ Then $\zeta$ is a peak point for $R(F)$. \end{corollary} In other words, if $\zeta$ is not a peak point for $R(F)$ then $F$ is of full measure at $\zeta$, i.e., $\lim_{r\to0+} \frac{{\mathcal L}({\mathbb D}(\zeta;r)\setminus F)}{r^2}=0$. As a corollary of Melnikov's result (see Theorem 4.5 in Chapter VIII of \cite{Gam}) and Bishop's characterization of a peak point for a compact set (see e.g. Theorem 2.1 in \cite{Zal}) we have \begin{theorem} Let $\Omega\subset{\mathbb C}$ be any domain, let $\zeta\in\partial\Omega$, and let $a\in(0,1)$ be fixed. Assume that $$ \sum_{n=1}^\infty\frac{\gamma(A_n(\zeta,a)\setminus\Omega)}{a^n}<+\infty. $$ Then there exists a compact set $K\subset\Omega\cup\{\zeta\}$ such that $\zeta$ is not a peak point for $R(K)$, and, therefore, there exists a Borel probability measure $\mu$ with support in $K$ such that $$ f(\zeta)=\int_{\Omega} f d\mu\quad\text{ for any }f\in R(\Omega\cup\{\zeta\}). $$ \end{theorem} Let us finish this section with a generalization of a part of Melnikov's result. \begin{theorem}\label{thm:8i} Let $F\subset{\mathbb C}$ be any subset and let $\zeta\in F$. Let $a\in(0,1)$. If $$ \sum_{n=1}^\infty\frac{\gamma(A_n(\zeta,a)\setminus F)}{a^n}=+\infty $$ then $\zeta$ is a peak point for $R(F)$. \end{theorem} \begin{proof} The proof is essentially the same as in the case of a compact set $F$ (see the sufficiency part in Melnikov's criterion, Theorem VIII.4.5 in \cite{Gam}). 
Indeed, from the definition it follows that for any $n\in{\mathbb N}$ there exist a compact set $K_n\subset A_n(\zeta,a)\setminus F$ and a holomorphic function $f_n:\widehat{\mathbb C}\setminus K_n\to{\mathbb D}$ such that $f_n(\infty)=0$ and $f_n'(\infty)\ge\frac12\gamma(A_n(\zeta,a)\setminus F)$. Having the family $\{f_n\}_{n\in{\mathbb N}}$, we just repeat the arguments given on page 206 in \cite{Gam}. \end{proof} \section{Proof of Theorem~\ref{thm:1}} We denote by ${\mathcal M}$ the set of all finite positive Borel measures in ${\mathbb C}$. For $\mu\in{\mathcal M}$ we define its Newton potential as $M(z)=M_{\mu}(z)=\int\frac{1}{|w-z|}d\mu(w)$. For any $\zeta,\eta\in{\mathbb C}$ and any $r>0$ we set $$ F_{r}(\eta)=\frac{1}{\pi r^2}\int_{{\mathbb D}(\zeta;r)}\left|\frac{z-\zeta}{z-\eta}\right|d{\mathcal L}(z), $$ where $d{\mathcal L}$ is the Lebesgue measure in ${\mathbb C}$. We have the estimate $$ F_r(\eta)\le\frac{1}{\pi r}\int_{{\mathbb D}(\zeta,r)}\frac{1}{|z-\eta|}d{\mathcal L}(z)\le 2. $$ Hence, $\int F_r(\eta)d\mu(\eta)\le 2\mu({\mathbb C})$. So, \begin{equation}\label{eq:123} \frac{1}{\pi r^2}\int_{{\mathbb D}(\zeta,r)}|z-\zeta|\cdot M(z)d{\mathcal L}(z)\le2\mu({\mathbb C}), \end{equation} and, therefore, $M<\infty$ a.e. on ${\mathbb C}$. The following result, which essentially is a corollary of Fubini's theorem, describes the behaviour of the left side of \eqref{eq:123} when $r\to0$ (see e.g. \cite{St1}, Lemma 26.16). \begin{prop}\label{prop:11} Let $\mu\in{\mathcal M}$. For any $\zeta\in{\mathbb C}$ we have $$ \lim_{r\to0}\frac{1}{\pi r^2}\int_{{\mathbb D}(\zeta,r)}|w-\zeta|\cdot M(w)d{\mathcal L}(w)=\mu(\{\zeta\}). $$ In particular, if $\mu(\{\zeta\})=0$ then for any $\epsilon>0$ the set $$ \Pi(\epsilon)=\{w\in{\mathbb C}: |w-\zeta|\cdot M(w)>\epsilon\} $$ is of zero density at $\zeta$, i.e., $$ \lim_{r\to0}\frac{{\mathcal L}(\Pi(\epsilon)\cap{\mathbb D}(\zeta,r))}{\pi r^2}=0. 
$$ \end{prop} The proof of Theorem~\ref{thm:1} is based on the following simple observation. \begin{prop}\label{thm:23a} Let $F\subset{\mathbb C}$ be any set and let $\zeta\in F$. Assume that $\mu$ is a finite Borel measure in ${\mathbb C}$ with $\mu(\{\zeta\})=0$ such that \begin{equation}\label{eq:9} |f(\zeta)|\le\int_{F}|f|d\mu \end{equation} for any $f\in R(F)$. Then for any $f\in R(F)$ and any $\eta\in F$ we have $$ |f(\eta)-f(\zeta)|\le 2\|f\|_{\infty} M(\eta)|\eta-\zeta|. $$ \end{prop} \begin{proof} Fix a function $f\in R(F)$ and a point $\eta\in F\setminus\{\zeta\}$. It suffices to note that $\tilde f(z)=\frac{f(z)-f(\eta)}{z-\eta}\in R(F)$ and to apply the inequality \eqref{eq:9} to $\tilde f$. \end{proof} As an immediate corollary we get the following. \begin{corollary}\label{thm:23} Let $\Omega\subset{\mathbb C}$ be a domain and let $\zeta\in\partial\Omega$. Assume that $\mu$ is a finite Borel measure in ${\mathbb C}$ such that $$ |f(\zeta)|\le\int_{\Omega}|f|d\mu $$ for any $f\in H^\infty(\Omega)$ which extends holomorphically to a neighborhood of $\zeta$. Then for any such $f$ and any $\eta\in\Omega$ we have $$ |f(\eta)-f(\zeta)|\le 2\|f\|_{\infty} M(\eta)|\eta-\zeta|. $$ In particular, for any $\eta_1,\eta_2\in\Omega$ we have \begin{equation}\label{eq:124} c^\ast_{\Omega}(\eta_1,\eta_2)\le 34\Big(|\zeta-\eta_1| M(\eta_1)+|\zeta-\eta_2| M(\eta_2)\Big). \end{equation} Moreover, there exists a $c_{\Omega}$-Cauchy sequence $\{\eta_n\}_{n\ge1}\subset\Omega$ such that $\eta_n\to\zeta$. \end{corollary} \begin{proof} The first part follows from Proposition~\ref{thm:23a}. For the second part, if $$ \limsup_{r\to0}\frac{{\mathcal L}({\mathbb D}(\zeta;r)\setminus\Omega)}{\pi r^2}>0 $$ then from Corollary~\ref{cor:6a} there exists a function $f\in R(\Omega\cup\{\zeta\})$ such that $|f|<1$ on $\Omega$ and $f(\zeta)=1$. In particular, the measure $\mu$ as in the assumptions does not exist. So, $$ \lim_{r\to0}\frac{{\mathcal L}({\mathbb D}(\zeta;r)\cap\Omega)}{\pi r^2}=1. 
$$ Then by Proposition~\ref{prop:11} there exists a sequence $\{\eta_n\}_{n\ge1}\subset \Omega$ with $\eta_n\to\zeta$ such that $|\zeta-\eta_n| M(\eta_n)\le\frac1{2^n}$. Now \eqref{eq:124} shows that the sequence $\{\eta_n\}_{n\ge1}$ is c-Cauchy, which gives the result. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:1}] Note that the implications $(1)\implies(7)\implies(5)\implies(6)$ and $(2)\implies(3)$ are straightforward. The implication $(6)\implies(2)$ is proved above as Corollary~\ref{thm:23}. And the implication $(4)\implies(1)$ follows from Theorem~\ref{thm:8i}. \end{proof} \section{A counterexample} Let $V$ be a vector space. Following \cite{Pol-Gog} we call a mapping $Q:V\to[-\infty,+\infty)$ \emph{superlinear} if: \begin{enumerate} \item $Q(cv)=cQ(v)$ for any $c\ge0$ and any $v\in V$; \item $Q(v_1+v_2)\ge Q(v_1)+Q(v_2)$. \end{enumerate} In \cite{Pol-Gog} the following version of the Hahn-Banach theorem is stated: {\it Let $V$ be a vector space and let $M\subset V$ be a vector subspace. Assume that $Q:V\to[-\infty,+\infty)$ is a superlinear mapping. Then any linear operator $\ell:M\to{\mathbb R}$ such that $\ell\ge Q$ on $M$ extends to a linear operator $L:V\to{\mathbb R}$ such that $L\ge Q$ on $V$.} The following elementary counterexample shows that this statement does not hold. \begin{example} Let $V={\mathbb R}^2$ and let $M={\mathbb R}\times\{0\}$. We put $$ Q(x,y)= \begin{cases} -\infty\quad&\text{ if }y\le0\\ 0&\text{ if }y>0 \end{cases} $$ and $\ell(x,y)=x$. We have $\ell\ge Q$ on $M$. Moreover, $Q$ is superlinear. However, there is no linear $L:V\to{\mathbb R}$ such that $L=\ell$ on $M$ and $L\ge Q$ on $V$. Indeed, any such $L$ has the form $L(x,y)=x+by$ for some $b\in{\mathbb R}$, and then $L(-b-1,1)=-1<0=Q(-b-1,1)$. \end{example} \bibliographystyle{amsplain}
\section{Introduction} The energy spectra and relative abundances of cosmic rays (CR) are key observables for a theoretical understanding of the acceleration and propagation mechanisms of charged particles in our Galaxy~\cite{Drury, Ohira2011,Malkov,Blasi2012,Tomassetti,Vladimirov,Ptuskin,Thoudam,Bernard,Serpico,Ohira2016, Evoli2018,Evoli2019,ICRC2021_Caprioli,ICRC2021_Lipari}. Direct measurements by space-borne instruments have recently achieved unprecedented precision, thanks to long-term observations and their capability to identify individual chemical elements. Direct measurements from high-altitude balloons and indirect measurements from ground-based arrays convey important complementary information, albeit with different systematic uncertainties. The extension of CR spectral data to higher energies has revealed unexpected deviations from a single power law, as in the case of the recently observed double-broken spectral shape of the proton spectrum in the multi-TeV domain, reported by the DAMPE~\cite{DAMPE_proton} and Calorimetric Electron Telescope (CALET)~\cite{ICRC2021_KKPSM} experiments. A progressive spectral hardening (as a function of energy) has been established for light elements and heavier nuclei~\cite{AMS-CO, AMS-Li-Be-B, AMS-Ne-Mg-Si, CREAM2HARD, CALET-CO} with an onset at a few hundred GeV$/n$. Also, a spectral softening has been observed in the TeV domain for proton and helium, as reported by the DAMPE~\cite{DAMPE_proton,DAMPE-He}, CALET~\cite{ICRC2021_KKPSM} and NUCLEON~\cite{NUCLEON-nuclei} experiments. The spectral study of heavy elements was recently extended to higher energies with the publication of the iron spectrum by the AMS-02~\cite{AMS-Fe} and CALET~\cite{CALET-IRON2021} experiments.
\\ \indent In this paper, we pursue the study of elements on the heavy side of the periodic table, where nickel -- with a much larger abundance than all other trans-iron elements -- provides a favorable opportunity for a low-background measurement of its spectrum. \\ \indent Space-borne direct measurements of CR nickel nuclei include the spectrum measured from $0.6$ to 35~GeV/${n}$ by~the French-Danish C2 instrument HEAO3-C2~\cite{HEAO3-Ni} onboard the NASA HEAO3 satellite (launched in 1979) and the recent measurement by the NUCLEON experiment (launched in 2014)~\cite{NUCLEON-Ni} in the energy range~$\sim$51--511~GeV$/n$. Measurements in the lower energy range 50--550~MeV/${n}$ were carried out, during the 2009--2010 solar minimum period, by the Cosmic Ray Isotope Spectrometer (CRIS)~\cite{Lave13} onboard the Advanced Composition Explorer at the L1 Lagrange point. Data up to $\sim$500~MeV/${n}$~\cite{Cummings2016} have been collected by the $\it{Voyager\, 1}$ spacecraft since it began observing the local interstellar energy spectra of Galactic cosmic-ray nuclei in August 2012.\\ \indent Earlier measurements with balloon experiments have limited statistics and energy reach. They include (i) the High Energy Nuclei (HEN) telescope~\cite{Juliusson74} at nickel energies up to about 10.5~GeV/${n}$ (3 flights in 1971 and 1972 for a total of 7.6 m$^{2}$ sr hrs); (ii) the scintillation-Cherenkov telescope (hereafter cited as Balloon 1975)~\cite{Minagawa81} from 1 to 10~GeV$ /n $ (2 flights in 1975 with a total exposure of 20 m$^{2}$ sr hrs); (iii) the multi-element Cherenkov telescope~\cite{Lezniak78} from 0.3 to 50~GeV$/n$ (3 flights in 1974 and one in 1976); (iv) the Cosmic Ray Isotope Instrument System (CRISIS)~\cite{Young81} from 600 to 900~MeV/${n}$ ($\sim$~57 hrs afloat in 1975); (v) the Large Isotopic Composition Experiment (ALICE)~\cite{Esposito92} at energies near 1~GeV/${n}$ (flown for 14.7 hrs in 1987).
\\ \indent In this paper we present a measurement of the differential energy spectrum of CR nickel in the energy range from 8.8 to 240~GeV$ /n $, carried out with unprecedented precision by CALET onboard the International Space Station (ISS). Though optimized for the measurement of the all-electron spectrum~\cite{CALET-ELE2017,CALET-ELE2018}, CALET has an excellent charge identification capability to tag individual CR elements~\cite{CALET-PROTON,CALET, CALET2, CALET3} from proton to nickel nuclei (and above). It can explore particle energies up to the PeV scale thanks to its large dynamic range, adequate calorimetric depth and accurate tracking. CALET has published accurate spectral measurements of electrons \cite{CALET-ELE2018}, protons~\cite{CALET-PROTON}, carbon~\cite{CALET-CO}, oxygen~\cite{CALET-CO}, and iron~\cite{CALET-IRON2021}. Preliminary updates of the proton, helium, boron and boron-to-carbon ratio analyses were presented at the ICRC-2021 conference~\cite{ICRC2021_HL}. \looseness=1 \section{CALET Instrument} Charge identification is carried out by the CHarge Detector (CHD), a two-layered hodoscope of plastic scintillator paddles. It can resolve individual elements from atomic number $ Z $~=~1 to $ Z $~=~40 with excellent charge resolution, spanning from 0.15 charge units for C to 0.35 charge units for Fe~\cite{GSI}. The particle's energy is measured with the Total AbSorption Calorimeter (TASC), a lead-tungstate homogeneous calorimeter [27 radiation lengths (r.l.), 1.2 proton interaction lengths] preceded by a thin (3 r.l.) pre-shower IMaging Calorimeter (IMC). The latter is equipped with 16 layers of thin scintillating fibers (1 $ \mathrm{mm}^2$ square cross-section) read out individually and interleaved with tungsten absorbers. The IMC provides tracking capabilities as well as an independent charge measurement, via multiple samples of specific energy loss ($ dE/dx $) in each fiber, up to the onset of saturation, which occurs for ions above silicon.
Therefore, charge identification for nickel and neighboring elements relies on the CHD only. More details on the instrument and on the trigger system can be found in the Supplemental Material (SM) of Ref.~\cite{CALET-ELE2017}. CALET was launched on August 19, 2015 and installed on the Japanese Experiment Module Exposed Facility of the ISS. The on-orbit commissioning phase was successfully completed in the first days of October 2015. Calibration and testing of the instrument took place at the CERN-SPS during five campaigns between 2010 and 2015 with beams of electrons, protons and relativistic ions~\cite{akaike2015, bigo, niita}. \section{Data analysis} The flight data (FD) used in the present analysis were collected over a period of 2038 days of CALET operation. The total observation live time for the high-energy (HE) shower trigger~\cite{CALET2017} is $T\sim 4.1 \times10^4$ hours, corresponding to 86.0\% of the total observation time. Individual on-orbit calibration of all channels is performed with a dedicated trigger mode~\cite{CALET2017,niita} allowing the selection of penetrating protons and He particles. First, raw data are corrected for gain differences among the channels, light-output non-uniformity and any residual dependence on time and temperature. After calibration, a single ``best track'' is reconstructed for each event with an associated estimate of its charge and energy. The particle's direction and entrance point are reconstructed from the coordinates of the scintillating fibers in the IMC. The tracking algorithm, based on a combinatorial Kalman filter, identifies the incident track in the presence of background hits generated by backscattered radiation from the TASC~\cite{paolo2017}. The angular resolution and the spatial resolution for the impact point on the CHD are $\sim{0.08}^\circ$ and $\sim$180~$\mu$m, respectively.
Physics processes and interactions in the apparatus are simulated via Monte Carlo (MC) techniques, based on the EPICS package~\cite{EPICS, EPICSurl}, which implements the hadronic interaction model DPMJET-III~\cite{dpmjet3prl}. The instrument configuration and detector response are detailed in the simulation code, which provides digitized signals from all channels. An independent analysis based on GEANT4~\cite{GEANT4} is also performed to assess the systematic uncertainties. In this analysis, only the $ ^{58}\mathrm{Ni} $ isotope was considered since its mass difference with respect to other isotopes (mainly $ ^{60}\mathrm{Ni} $) is less than 3\%. \indent \emph{\bf{Charge measurement}} \indent The particle's charge $ Z $ is reconstructed from the signals of the CHD paddles traversed by the incident particle, properly corrected for its path length. Each CHD layer provides an independent $ dE/dx $ measurement, which must be corrected for the quenching effect in the scintillator's light yield. The latter is parameterized by fitting selected FD samples of each nuclear species to a ``halo'' model~\cite{GSI} as a function of $ Z^2 $. The resulting curves are then used to reconstruct a charge value in either layer ($Z_{\rm CHDX}$, $Z_{\rm CHDY}$) on an event-by-event basis~\cite{CALET-CO}. The increasing amount of backscatter from the TASC at higher energy generates additional energy deposits in the CHD that add to the primary particle's ionization signal and may induce an incorrect charge identification. This effect causes a systematic displacement of the CHDX/CHDY charge peaks to higher values (up to 0.8 charge units) with respect to the nominal charge position. Therefore it is necessary to restore the nickel peak position to its nominal value, $ Z $~=~28, by an energy-dependent charge correction applied separately to the FD and the MC data. A similar correction is applied to iron and nearby elements.
The CHD charge resolution $ \sigma_Z $, obtained by averaging the $Z_{\rm CHDX}$ and $Z_{\rm CHDY}$ signals, is 0.39 charge units; it is shown in Fig.~S1 of the SM~\cite{PRL-SM}. Background contamination from neighboring elements misidentified as nickel is shown in Fig.~S2 of the SM~\cite{PRL-SM}. Between 100~GeV and 1~TeV it is mainly due to iron and, to a lesser extent, to cobalt. Above 1~TeV the iron contribution is dominant. Contamination from heavier nuclei is negligible. \begin{figure} [!htb] \centering \includegraphics[width=1.0\hsize]{Fig1.eps} \caption{\scriptsize Cross plot of $Z_{\rm CHDY}$ vs.~$Z_{\rm CHDX}$ reconstructed charges in the elemental range between Mn ($ Z $~=~25) and Zn ($ Z $~=~30) before removing charge-changing nuclear interactions. Nickel candidates are selected inside an ellipse with semi-axes 1.4 $ \sigma_x $ and 1.4 $ \sigma_y $, rotated clockwise by $ 45^{\circ} $. The widest and narrowest elliptical selections (which depend on the energy) are indicated by the yellow and the orange ellipses, respectively.} \label{fig:CrossPlotCharge} \end{figure}\noindent \indent \emph{\bf{Energy measurement}} \indent For each event, the shower energy $E_{\rm TASC}$ is calculated as the sum of the energy deposits of all TASC logs, after merging the calibrated gain ranges of each channel~\cite{CALET2017}. The energy response derived from the MC simulations was tuned using the results of a beam test carried out at the CERN-SPS in 2015~\cite{akaike2015} with beams of accelerated ion fragments of 13, 19 and 150~GeV$ /c/n $ momentum per nucleon (as described in the SM of Ref.~\cite{CALET-CO}). Correction factors are 6.7\% for $E_{\rm TASC}<45$~GeV and 3.5\% for $E_{\rm TASC}>350$~GeV, respectively. A linear interpolation is used to determine the correction factor for intermediate energies.
\indent \emph{\bf{Event selection}} \indent The onboard HE shower trigger, based on the coincidence of the summed dynode signals of the last four IMC layers and the top TASC layer (TASCX1), is fully efficient for elements heavier than oxygen. Therefore, an offline trigger confirmation, as required for the analysis of lower-charge elements~\cite{CALET-PROTON,CALET-CO}, is not necessary for nickel, because the HE trigger threshold is far below the signal amplitude expected from a nickel ion at minimum ionization (MI) and the trigger efficiency is close to 100\%. However, in order to select interacting particles, a deposit larger than 2 standard deviations of the MI peak is required in at least one of the first four layers of the TASC. Events with one well-fitted track crossing the whole detector from the top of the CHD to the TASC bottom layer (and clear of the edges of TASCX1 by at least 2~cm) are selected. The fiducial geometrical factor for this category of events is $S\Omega$~$\sim$~$510 \, \mathrm{cm}^2$sr, corresponding to about 50\% of the CALET total acceptance. Particles undergoing a charge-changing nuclear interaction in the upper part of the instrument are removed by requiring that the difference between the charges from the two layers of the CHD is less than $1.5 $ charge units. The cross plot of the $ Z_{\mathrm{CHDY}} $ vs.~$ Z_{\mathrm{CHDX}}$ charge, in Fig.~\ref{fig:CrossPlotCharge}, shows the nickel event selection: candidates are contained within an ellipse centered at $ Z $ = 28, with semi-axes 1.4~$ \sigma_x $ and 1.4~$ \sigma_y $ (both energy dependent) along $ Z_{\mathrm{CHDX}} $ and $ Z_{\mathrm{CHDY}}$, respectively, and rotated clockwise by 45$ ^{\circ} $. Event selections are identical for the MC and the FD.
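Purely as an illustration of the geometry of this cut (not CALET analysis code), the rotated-ellipse selection can be sketched as follows; the function name and the width values used below are hypothetical placeholders, since the actual $\sigma_x$ and $\sigma_y$ are energy dependent.

```python
import math

def in_charge_ellipse(z_chdx, z_chdy, sigma_x, sigma_y, z_center=28.0, n_sigma=1.4):
    """Return True if (Z_CHDX, Z_CHDY) lies inside an ellipse centered at
    Z = 28, rotated clockwise by 45 degrees, with semi-axes
    n_sigma * sigma_x and n_sigma * sigma_y."""
    dx = z_chdx - z_center
    dy = z_chdy - z_center
    # Project the displacement onto the ellipse's principal axes
    # (the coordinate axes rotated clockwise by 45 degrees).
    u = (dx - dy) / math.sqrt(2.0)
    v = (dx + dy) / math.sqrt(2.0)
    return (u / (n_sigma * sigma_x)) ** 2 + (v / (n_sigma * sigma_y)) ** 2 <= 1.0
```

With placeholder widths $\sigma_x = \sigma_y = 0.3$, a candidate sitting at the nominal nickel charge is accepted, while an iron-like event near $Z = 26$ is rejected.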
\indent \emph{\bf{Energy unfolding}} \indent As detailed in Ref.~\cite{CALET-IRON2021} for iron, the TASC crystals are subject to a light-quenching phenomenon which is not reproduced by the MC simulations. Therefore a quenching correction is extracted from the FD and applied \textit{a posteriori} to the MC energy deposits generated by non-interacting primary particles in the TASC logs. Distributions of $E_{\rm TASC}$ for selected Ni candidates are shown in Fig.~S2 of the SM~\cite{PRL-SM}, with a sample of $5.2 \times 10^3$ events. In order to take into account the limited calorimetric energy resolution for hadrons (of the order of $\sim$30\%), an energy unfolding algorithm is applied to correct for bin-to-bin migration effects. In this analysis, we used the Bayesian approach~\cite{Ago} implemented within the RooUnfold package~\cite{ROOUNFOLD} of the ROOT analysis framework~\cite{ROOT}. Each element of the response matrix represents the probability that a primary nucleus in a given energy interval of the CR spectrum produces an energy deposit falling into a given bin of $E_{\rm TASC}$. The response matrix is derived from the MC simulation after applying the same selection procedure as for the flight data; it is shown in Fig.~S6 of the SM~\cite{PRL-SM}.
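The actual unfolding is performed with RooUnfold; as a sketch of the underlying idea only (a simplified d'Agostini-style iteration, not the implementation used in the analysis), the Bayesian update can be written as:

```python
import numpy as np

def bayes_unfold(response, observed, prior, n_iter=4):
    """Simplified d'Agostini-style iterative Bayesian unfolding.
    response[j, i] = P(measured bin j | true energy bin i),
    observed[j]   = measured counts,
    prior[i]      = starting guess for the true spectrum."""
    unfolded = np.asarray(prior, dtype=float)
    eff = response.sum(axis=0)  # probability that true bin i is observed at all
    for _ in range(n_iter):
        folded = response @ unfolded  # expected measured counts
        # Bayes' theorem: distribute each observed count over the true bins.
        weights = response * unfolded / np.where(folded > 0.0, folded, 1.0)[:, None]
        unfolded = (weights.T @ observed) / np.where(eff > 0.0, eff, 1.0)
    return unfolded
```

For a diagonal (perfect-resolution) response matrix the procedure simply returns the observed counts; with realistic off-diagonal migrations it redistributes events among the true energy bins.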
\indent \emph{\bf{Differential energy spectrum}} \indent The energy spectrum is obtained from the unfolded energy distribution as follows: \begin{equation} \Phi(E) = \frac{N(E)}{\Delta E\; \varepsilon(E) \; S\Omega \; T } \label{eq_flux} \end{equation} \begin{equation} N(E) = U \left[N_{obs}(E_{\rm TASC}) - N_{bg}(E_{\rm TASC}) \right] \end{equation} where $ S\Omega $ and $T$ are the geometrical factor and the live time, respectively, $\Delta E$ denotes the energy bin width, $E$ is the geometric mean of the lower and upper bounds of the bin~\cite{Maurino}, $N(E)$ the bin content of the unfolded distribution, $\varepsilon (E)$ the total selection efficiency (Fig.~S3 of the SM~\cite{PRL-SM}), $U[\,\cdot\,]$ the unfolding operator, $N_{obs}(E_{\rm TASC})$ the bin content of the observed energy distribution (including background), and $N_{bg}(E_{\rm TASC})$ the bin content of background events in the observed energy distribution. In the energy range between $ 10^2 $ and $ 10^3 $~GeV of $ E_{\mathrm{TASC}}$ the background fraction is $N_{bg}/N_{obs} \sim1\%$. Starting from $ 10^3 $~GeV it increases up to 10\% at $ 10^4 $~GeV. \section{Systematic Uncertainties} The most important sources of systematic uncertainty in the nickel analysis are due to the MC model and to the event selection at high energy. The systematic error related to charge identification was studied by varying the semi-axes of the elliptical selection by up to $ \pm $15\%, corresponding to a variation of the charge selection efficiency of $ \pm 17\%$. The result was an (energy-bin-dependent) flux variation lower than 4\% below 100~GeV$ /n $, increasing to~$ \sim$8\% at 200~GeV$ /n $. A comparison between different MC algorithms is in order, as it is not possible to validate the MC simulations with beam test data at high energy.
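The flux formula in Eq.~(\ref{eq_flux}) translates directly into a per-bin computation; the following sketch (with illustrative argument names, not the analysis code) evaluates the flux at the geometric mean of the bin edges:

```python
import math

def differential_flux(n_unfolded, e_low, e_high, efficiency, geom_factor, live_time):
    """Differential flux in one energy bin:
    Phi = N / (dE * eps * S_Omega * T), quoted at the geometric mean
    of the lower and upper bin edges."""
    e_mean = math.sqrt(e_low * e_high)  # representative energy of the bin
    d_e = e_high - e_low                # bin width Delta E
    phi = n_unfolded / (d_e * efficiency * geom_factor * live_time)
    return e_mean, phi
```

The background-subtracted, unfolded bin content plays the role of `n_unfolded`, while `geom_factor` and `live_time` correspond to $S\Omega$ and $T$ above.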
A comparative study of key distributions was carried out with EPICS and GEANT4, showing that the respective total selection efficiencies for Ni agree within $ \sim$3\% over the whole energy range (Fig.~S3 of the SM~\cite{PRL-SM}). The difference between the two energy response matrices is within $\pm5\%$. The resulting fluxes show a difference of around $ \sim $5\% below 40~GeV$ /n $ and less than $ \sim$10\% in the 100--200~GeV$ /n $ region. The uncertainty on the energy scale correction is $\pm2$\% and depends on the accuracy of the beam test calibration. It causes a rigid shift of the flux ($ \pm 4\% $) above 30~GeV$ /n $, without affecting the spectral shape. As the beam test model was not identical to the instrument in flight~\cite{CALET-PROTON}, the difference in the spectrum ($ \pm 5\% $ up to 140~GeV$ /n $) obtained with either configuration was modeled and included in the systematic error. The uncertainties due to the unfolding procedure were evaluated with different response matrices computed by varying the spectral index (between $-2.9$ and $-2.2$) of the MC generation spectrum. As the trigger threshold is much lower than the signal of a non-interacting nickel nucleus, the HE trigger efficiency is close to 100\% in the whole energy range, with a negligible contribution to the systematic error. The fraction of interactions (Fig.~S5 of the SM~\cite{PRL-SM}) in the CHD, and above it, was checked by comparing the MC and the FD, as explained in the SM. The contribution due to a shower event cut, rejecting non-interacting particles (4\% around 10 GeV and $ 2 $\% above), was evaluated and included in the systematic uncertainties. Possible inaccuracies in track reconstruction could affect the determination of the geometrical acceptance.
The contamination due to off-acceptance events that are erroneously reconstructed inside the fiducial acceptance was estimated by MC to be $\sim$1\% at 10~GeV$/n$, decreasing to less than $0.1\%$ above 60~GeV$/n$. The systematic uncertainty on the tracking efficiency is negligible~\cite{CALET-CO}. A different tracking procedure, described in Ref.~\cite{akaike2019}, was also used to study possible systematic uncertainties in the tracking efficiency. The result is consistent with the Kalman filter algorithm. The systematic error related to background contamination was assessed by varying the contamination level by as much as $ \pm 50\%$. The result was a flux variation of around 1\% below 100~GeV$ /n $, increasing to 3\% at 200~GeV$ /n $. The systematic effect related to the isotopic composition of nickel reduces the normalization by $ 2.2\% $. Additional energy-independent systematic uncertainties affecting the flux normalization include live time (3.4\%), long-term stability ($<2.7\%$) and geometrical factor ($ \sim$1.6\%), as detailed in the SM of Ref.~\cite{CALET-ELE2017}. The energy dependence of all systematic errors for the nickel analysis is shown in Fig.~S8 of the SM~\cite{PRL-SM}. The total systematic error is computed as the sum in quadrature of all the sources of systematics in each energy bin. \section{Results} The nickel differential spectrum in kinetic energy per nucleon measured by CALET in the energy range from 8.8 to 240~GeV$/n$ is shown in Fig.~\ref{fig:flux}, where the current uncertainties, including statistical and systematic errors, are bounded within a green band. The CALET spectrum is compared with the results from Balloon 1975~\cite{Minagawa81}, CRISIS~\cite{Young81}, HEAO3-C2~\cite{HEAO3-Ni} and NUCLEON~\cite{NUCLEON-Ni}. The nickel flux measurements with CALET are tabulated in Table~I of the SM~\cite{PRL-SM}, where statistical and systematic errors are also shown.
CALET and HEAO3-C2 nickel spectra have a similar flux normalization over their common energy interval. CALET and NUCLEON differ in spectral shape, although the two measurements show a similar flux normalization at low energy. \begin{figure} [!htb] \centering \hspace*{-5mm} \includegraphics[width=1.05\hsize]{Fig2.eps} \caption{\scriptsize CALET nickel flux (multiplied by $E^{2.6}$) as a function of kinetic energy per nucleon. Error bars of the CALET data (red) represent the statistical uncertainty only, the yellow band indicates the quadrature sum of systematic errors, while the green band indicates the quadrature sum of statistical and systematic errors. Also plotted are the measurements from Balloon 1975~\cite{Minagawa81}, CRISIS~\cite{Young81}, HEAO3-C2~\cite{HEAO3-Ni} and NUCLEON~\cite{NUCLEON-Ni}. This figure is reproduced and enlarged in Fig.~S9 of the SM~\cite{PRL-SM}.} \label{fig:flux} \end{figure} Figure~\ref{fig:Fefit} shows a fit to the CALET nickel flux with a single power law (SPL) function \begin{equation} \Phi(E) = C\, \left(\frac{E}{\text{1 GeV}/n} \right)^{\gamma} \label{eq:SPL} \end{equation} where $ \gamma $ is the spectral index and $ C $ is the normalization factor. \begin{figure} \centering \hspace*{-5mm} \includegraphics[width=1.05\hsize]{Fig3.eps} \caption{\scriptsize Fit of the CALET nickel energy spectrum to an SPL function (blue line) in the energy range [20, 240]~GeV$ /n $. The flux is multiplied by $E^{2.6} $, where $ E $ is the kinetic energy per nucleon. The error bars represent purely statistical errors. } \label{fig:Fefit} \end{figure}\noindent The fit is performed from 20~to 240~GeV$/n$ and gives $\gamma = -2.51 \pm 0.04 (\mathrm{stat}) \pm 0.06 (\mathrm{sys})$ with $\chi^2/$d.o.f.~=~0.3/3. Below 20 GeV$ /n$ the observed Ni flux softening is similar to the one found for iron and lighter primaries.
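For illustration, an SPL of the form of Eq.~(\ref{eq:SPL}) can be fitted to tabulated flux points by ordinary least squares in log-log space; this is a simplification of the weighted chi-square fit used in the analysis, and the function below is a hypothetical sketch rather than the analysis code.

```python
import math

def fit_single_power_law(energies, fluxes):
    """Least-squares fit of Phi(E) = C * E**gamma in log-log space.
    Returns (C, gamma).  Measurement errors are ignored here, unlike
    the weighted chi-square fit used in the actual analysis."""
    xs = [math.log(e) for e in energies]
    ys = [math.log(f) for f in fluxes]
    n = float(len(xs))
    mx = sum(xs) / n
    my = sum(ys) / n
    # Slope of the log-log regression line is the spectral index gamma.
    gamma = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    c = math.exp(my - gamma * mx)
    return c, gamma
```

Applied to points drawn from an exact power law, the routine recovers the input normalization and index.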
To better understand the nickel spectral behavior, we also report the nickel-to-iron ratio as a function of kinetic energy per nucleon (see Fig.~\ref{fig:NiFeRatio}). Our measurement extends the results of previous experiments (i.e. HEAO3-C2) up to 240~GeV$/n$. The fit, performed from 8.8~to 240~GeV$/n$, gives a constant value of $0.061 \pm 0.001$(stat) with $ \chi^2/$d.o.f. = 2.3/6. \begin{figure}[!htb] \centering \includegraphics[width=\hsize]{Fig4.eps} \caption{\scriptsize Nickel-to-iron flux ratio measured with CALET (red points). The error bars represent statistical errors only. Data are fitted with a constant function giving Ni/Fe = 0.061 $ \pm $ 0.001. Also plotted is the result from HEAO3-C2~\cite{HEAO3-Ni}. \label{fig:NiFeRatio}} \end{figure} The experimental limitations of the present measurement (i.e. low statistics as well as large systematic errors in the highest energy bins) do not yet allow one to test the hypothesis of a spectral shape different from a single power law in the region above 20 GeV$ /n $. As a matter of fact, current expectations (e.g.,~\cite{Thoudam,Tomassetti}) for a detectable spectral hardening of nickel are still under debate. \section{Conclusion} In this paper, based on 67 months of observations with CALET on the ISS, we report for the first time a measurement of the energy spectrum of nickel over an extended energy range up to 240~GeV$ /n $ and with a significantly better precision than most of the existing measurements. The nickel spectrum behavior below 20 GeV/n is similar to the one observed for iron and lighter primaries. Above 20~GeV$ /n $, our present observations are consistent with the hypothesis of an SPL spectrum up to 240~GeV$ /n $. Beyond this limit, the uncertainties given by our present statistics and large systematics do not allow us to draw a significant conclusion on a possible deviation from a single power law. An SPL fit in this region yields a spectral index $ \gamma = -2.51 \pm 0.07 $.
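A constant fit to a binned ratio, as quoted above for Ni/Fe, amounts to an error-weighted mean with an associated chi-square per degree of freedom; a minimal sketch (illustrative only, not the analysis code):

```python
def fit_constant(values, errors):
    """Error-weighted mean of independent measurements (e.g. the Ni/Fe
    ratio in each energy bin) and the chi-square per degree of freedom
    of the constant-value hypothesis."""
    weights = [1.0 / s ** 2 for s in errors]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    chi2 = sum(((v - mean) / s) ** 2 for v, s in zip(values, errors))
    return mean, chi2 / (len(values) - 1)
```

A chi-square per degree of freedom close to unity indicates that the measured ratio is compatible with an energy-independent constant.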
The flat behavior of the nickel-to-iron ratio suggests that the spectral shapes of Fe and Ni are the same within the experimental accuracy. This points to a similar acceleration and propagation behavior, as expected from the small difference~in atomic number and weight between Fe and Ni nuclei. An extended data set, as expected beyond the 67-month period of continuous observations accomplished so far, will not only mitigate the statistical limitations of the present measurement, but will also improve our understanding of the instrument response, in view of a further reduction of the systematic uncertainties. \section{Acknowledgments} \begin{acknowledgments} We gratefully acknowledge JAXA's contributions to the development of CALET and to the operations aboard the JEM-EF on the International Space Station. We also wish to express our sincere gratitude to ASI (Agenzia Spaziale Italiana) and NASA for their support of the CALET project. This work was supported in part by JSPS KAKENHI Grant No.~26220708, No.~19H05608, No.~17H02901, No.~21K03592, and No.~20K22352 and by the MEXT-Supported Program for the Strategic Research Foundation at Private Universities (2011-2015) (No.~S1101021) at Waseda University. The CALET effort in Italy is supported by ASI under agreement No.~2013-018-R.0 and its amendments. The CALET effort in the United States is supported by NASA through Grants No.~80NSSC20K0397, No.~80NSSC20K0399, and No.~NNH18ZDA001N-APRA18-0004. \end{acknowledgments} \nocite{*} \providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}%
\section{Introduction} Students of mathematics are introduced at undergraduate level to a number of distinct subjects called \textit{algebra}. In pre-calculus algebra\footnote{P\'{O}C: when I first came to the US, I often answered the question \textit{What do you teach?} with the response \textit{Algebra}. On one memorable occasion, the response (from a non-mathematician) was \textit{Oh, are you not smart enough to teach calculus?}} one solves polynomial equations. In linear algebra, one learns to multiply matrices and studies systems of linear equations. In abstract algebra, a course on group theory often introduces symmetry and homomorphisms, and courses on rings, fields and (possibly) Galois theory finally return to the solution of polynomial equations. In this paper we will attempt to pull together ideas from all of these algebraic subjects: we will introduce an (associative) \textit{algebra of matrices} as an algebraic structure related to both linear and abstract algebra. We will briefly review definitions for homomorphisms of algebras as motivation for the definition of simple and semi-simple algebras. Our main topic in this paper is an exploration of the dimensions of semi-simple algebras. After introducing and defining the necessary algebraic objects, our proof will turn out to be rather elementary: most of our time is spent on careful manipulation of inequalities and solving quadratic equations. We have attempted to write this paper in a way which is accessible to anyone with some knowledge of linear algebra (where we assume knowledge of abstract vector spaces, and matrix operations in arbitrary dimension $n$) and abstract algebra (where we assume exposure to the definition of a ring and to the concept of a homomorphism). Further knowledge of groups, rings and fields would be helpful but is not essential. Everything described in Section \ref{Algebra} is well-known; we make no claim of originality in our coverage of this material.
For a more detailed discussion of the structure theory of rings and algebras, we refer the reader to Chapter 5 of \textit{Groups and Representations} by Alperin and Bell \cite{AlperinBell}; or to Chapter 13 of \textit{Algebra: A graduate course} by Isaacs \cite{IsaacsAlgebra}. \section{Algebraic preliminaries}\label{Algebra} Throughout this paper, we will use the symbol $k$ for an arbitrary (commutative) field. The reader less comfortable with field theory will lose nothing by taking $k = \mathbb{R}$ or $k= \mathbb{C}$ throughout. In the study of linear algebra, the linear transformations of a vector space are of central importance. After fixing a basis for an $n$-dimensional vector space over $k$, we can identify linear transformations with \textit{matrices}. We denote by $\mathrm{M}_{n}(k)$ the set of all $n \times n$ matrices with entries in $k$. There are three natural operations on matrices over a field: \begin{enumerate} \item Scalar multiplication: for any scalar $c \in k$ and any matrix $M \in \mathrm{M}_{n}(k)$, the matrix $cM$ is obtained by multiplying each element of $M$ by $c$. \item Matrix addition: for matrices $M_{1}, M_{2} \in \mathrm{M}_{n}(k)$, addition is defined entry-wise. \item Matrix multiplication has a slightly more complicated definition motivated by \textit{composition} of linear maps, which we do not record formally here. \end{enumerate} It is a routine exercise in a linear algebra course to verify that $\textrm{M}_{n}(k)$ forms a \textit{vector space} under the operations of scalar multiplication and matrix addition. In a course on ring theory, one proves that $\textrm{M}_{n}(k)$ under matrix addition and matrix multiplication forms a \textit{ring}. This is essentially the definition of a $k$-algebra: it is both a ring and a vector space. There are also compatibility conditions: for example, the zero element of the ring structure and that of the vector space structure must coincide. The formal definition follows.
\begin{definition} An (associative\footnote{There are also non-associative algebras, in which only the second and third conditions hold. Often, some alternative property is imposed: Lie algebras, in which the Jacobi identity is enforced, have a structure theory analogous to what we discuss in this section.}) \textit{algebra} over the field $k$ is a vector space $\mathcal{A}$ over $k$ together with a binary operation (multiplication) which satisfies the following conditions: \begin{enumerate} \item Associativity: $A(BC) = (AB)C$ for all $A, B, C \in \mathcal{A}$. \item Distributivity: $A(B+C) = AB + AC$ and $(A+B)C = AC + BC$ for all $A, B, C \in \mathcal{A}$. \item Scalar compatibility: $(cA)(dB) = cdAB$ for all $c, d \in k$ and all $A, B \in \mathcal{A}$. \end{enumerate} \end{definition} Any subset of $\mathrm{M}_{n}(k)$ closed under all three operations is a \textit{subalgebra}. The diagonal matrices and the upper triangular matrices in $\mathrm{M}_{n}(k)$ are both subalgebras of $\mathrm{M}_{n}(k)$, for example. We can also construct subalgebras as \textit{direct sums}: suppose that $\mathcal{A}$ is a subalgebra of $\mathrm{M}_{t}(k)$ and $\mathcal{B}$ is a subalgebra of $\mathrm{M}_{n-t}(k)$. Then \[ \mathcal{A} \oplus \mathcal{B} = \left\{ \left(\begin{array}{ll} A & \textbf{0} \\ \textbf{0} & B \end{array}\right) \mid A \in \mathcal{A}, B \in \mathcal{B} \right\} \] is a subalgebra of $\mathrm{M}_{n}(k)$. To see this, observe that block-matrices can be added and multiplied just like ordinary matrices (though one must take some care since multiplication of elements is no longer commutative). In this section, we focus exclusively on subalgebras of $\mathrm{M}_{n}(k)$. This is justified by an analogue of Cayley's Theorem which states that every $n$-dimensional associative algebra over the field $k$ is isomorphic to a subalgebra of $\mathrm{M}_{n}(k)$, see for example \cite[Theorem 12.2]{Isaacs} (though note that Isaacs proves a more general result for rings). 
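Closure of the direct sum $\mathcal{A} \oplus \mathcal{B}$ under multiplication, asserted above, amounts to the following block computation:
\[
\left(\begin{array}{ll} A_{1} & \textbf{0} \\ \textbf{0} & B_{1} \end{array}\right)
\left(\begin{array}{ll} A_{2} & \textbf{0} \\ \textbf{0} & B_{2} \end{array}\right)
=
\left(\begin{array}{ll} A_{1}A_{2} & \textbf{0} \\ \textbf{0} & B_{1}B_{2} \end{array}\right).
\]
Since $A_{1}A_{2} \in \mathcal{A}$ and $B_{1}B_{2} \in \mathcal{B}$, the product again lies in $\mathcal{A} \oplus \mathcal{B}$; closure under addition and scalar multiplication is immediate.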
The study of algebras over a field is a classical topic in abstract algebra. As in group theory, the organising principle is \textit{homomorphism}. \begin{definition} Let $\mathcal{A}$ and $\mathcal{B}$ be algebras over a field $k$. A function $\phi: \mathcal{A} \rightarrow \mathcal{B}$ is a \textit{homomorphism} (of $k$-algebras) if it preserves all the operations of the algebra. More precisely, \begin{enumerate} \item $\phi(cA) = c \phi(A)$ for all $c \in k$ and $A \in \mathcal{A}$ \item $\phi(A + B) = \phi(A) + \phi(B)$ for all $A, B \in \mathcal{A}$ \item $\phi(AB) = \phi(A)\phi(B)$ for all $A, B \in \mathcal{A}$ \end{enumerate} \end{definition} The \textit{kernel} of a homomorphism $\phi: \mathcal{A} \rightarrow \mathcal{B}$ is \[ \textrm{ker}(\phi) = \left\{ A \in \mathcal{A} \mid \phi(A) = \textbf{0}_{\mathcal{B}}\right\}\,. \] It can be verified that $\textrm{ker}(\phi)$ is closed under all three operations of $\mathcal{A}$ and also satisfies the stronger condition that $AN, NA \in \textrm{ker}(\phi)$ for any $A \in\mathcal{A}$ and $N \in \textrm{ker}(\phi)$. So $\textrm{ker}(\phi)$ is both a \textit{subspace} of $\mathcal{A}$ (because $\phi$ is a linear transformation) and a two-sided \textit{ideal} of $\mathcal{A}$ (because $\phi$ is a homomorphism of rings). The next example shows how to construct homomorphisms of algebras from fixed subspaces of the underlying vector space. \begin{example} Suppose that $V$ is an $n$-dimensional vector space with basis $\{e_{1}, e_{2}, \ldots, e_{n}\}$ and that $\mathcal{A}$ is a subalgebra of $\mathrm{M}_{n}(k)$ in which every element fixes setwise the $t$-dimensional subspace $U$ spanned by $\{e_{n-t+1}, e_{n-t+2}, \ldots, e_{n}\}$. With respect to this basis, every element of $\mathcal{A}$ can be written in block-upper-triangular form: \[ A = \left(\begin{array}{ll} A_{V/U} & A_{X} \\ \textbf{0} & A_{U} \end{array}\right) \] The function $\phi: A \mapsto A_{U}$ is a homomorphism of algebras. 
The image of $\phi$ is $\{ A_{U} \mid A \in \mathcal{A}\}$ which is a subalgebra of $\mathrm{M}_{t}(k)$, while the kernel consists of all matrices for which $A_{U} = 0$. Unless $A_{X}$ is identically zero for all $A \in \mathcal{A}$, the algebra $\mathcal{A}$ is \textit{not} a direct sum. \end{example} The reader unfamiliar with homomorphisms of algebras is advised to construct fixed subspaces of the diagonal and upper-triangular matrices, and hence to describe homomorphisms from these algebras to smaller matrix algebras. In fact, there is a First Isomorphism Theorem for $k$-algebras (analogous to the result in group theory). We record it below. \begin{theorem}\label{FIT} Suppose that $\mathcal{I}$ is a two-sided ideal in the $k$-algebra $\mathcal{A}$. For any $A \in \mathcal{A}$ define the coset of $\mathcal{I}$ containing $A$ by \[ A + \mathcal{I} = \{ A + N \mid N \in \mathcal{I} \}\,.\] Scalar multiplication, vector addition and vector multiplication on $\mathcal{A}/\mathcal{I} = \{ A + \mathcal{I} \mid A \in \mathcal{A}\}$ are defined in the natural way\footnote{That is: $c(A + \mathcal{I}) = cA + \mathcal{I}$ for all $c \in k$ and $A \in \mathcal{A}$. Addition and multiplication are given by $(A + \mathcal{I}) + (B + \mathcal{I}) = (A+B) + \mathcal{I}$ and $(A+\mathcal{I})(B + \mathcal{I}) = AB+\mathcal{I}$ respectively.}. With these operations, $\mathcal{A}/\mathcal{I}$ is a $k$-algebra and the function $\phi_{\mathcal{I}}: A \rightarrow A + \mathcal{I}$ is a homomorphism with kernel $\mathcal{I}$. \end{theorem} Using Theorem \ref{FIT}, we can construct examples of homomorphisms not coming from fixed subspaces. One example comes from the set of \textit{strictly upper triangular} (s.u.t.) matrices, which are zero on and below the diagonal. Observe that the s.u.t. matrices form an algebra, and that this algebra is \textit{nilpotent}: there exists an integer $r$ such that all products of length $r$ in the algebra evaluate to $\textbf{0}$. 
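The nilpotency of the s.u.t.\ matrices can be observed directly: for $n \times n$ matrices one may take $r = n$, since each multiplication pushes the non-zero entries one diagonal further from the main diagonal. The following check is our own illustration, not part of the text.

```python
# Illustrative sanity check: any product of n strictly upper triangular
# n x n matrices is the zero matrix, so the s.u.t. matrices form a
# nilpotent algebra with r = n.
import random

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def random_sut(n, rng):
    """A random strictly upper triangular matrix: zero on and below the diagonal."""
    return [[rng.randint(-5, 5) if j > i else 0 for j in range(n)]
            for i in range(n)]

rng = random.Random(0)
n = 4
for _ in range(20):
    Ms = [random_sut(n, rng) for _ in range(n)]
    P = Ms[0]
    for M in Ms[1:]:
        P = mat_mul(P, M)
    # A product of n s.u.t. matrices always vanishes.
    assert all(entry == 0 for row in P for entry in row)
```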
It can be proved that the s.u.t. matrices form a two-sided ideal inside the upper triangular matrices. The quotient of $n\times n$ upper triangular matrices by s.u.t. matrices is isomorphic to the algebra of $n \times n$ diagonal matrices. Of special interest in an algebraic theory are \textit{simple} objects: in this case $k$-algebras containing no non-trivial two-sided ideals. In contrast to group theory (where the classification of finite simple groups is one of the great achievements of mathematics) the classification of simple finite-dimensional $k$-algebras is relatively straightforward. We will not give a proof, but we will record the main result. Recall that a field $k$ is algebraically closed if every non-constant polynomial with coefficients in $k$ has a root in $k$. \begin{theorem}[Wedderburn, Theorem 13.17 \cite{AlperinBell}]\label{Wed} If the field $k$ is algebraically closed, then every simple finite-dimensional $k$-algebra is isomorphic to $\mathrm{M}_{n}(k)$ for some $n \in \mathbb{N}$. \end{theorem} Requiring $k$ to be algebraically closed means that we can always construct eigenvalues and eigenvectors of matrices. Without this assumption, there are other classes of simple algebras over $k$, which are fully described in the reference. Those additional algebras will not concern us in this paper (for any field $k$, the set of dimensions of semi-simple subalgebras of $\mathrm{M}_{n}(k)$ is the same as in the algebraically closed case). A famous theorem of Jacobson gives a decomposition of an arbitrary algebra into nilpotent and simple components, greatly generalising our example of upper triangular and s.u.t. matrices above. \begin{theorem}[Jacobson, Theorem 13.23 \cite{AlperinBell}]\label{Jac} Let $\mathcal{A}$ be a finite-dimensional $k$-algebra. Then there exists a largest nilpotent ideal which contains all other nilpotent ideals, called the \textit{Jacobson radical}. 
The quotient of $\mathcal{A}$ by the Jacobson radical is isomorphic to a direct sum of simple algebras. \end{theorem} Jacobson's theorem motivates the following definition. \begin{definition} An algebra is \textit{semi-simple} if its Jacobson radical is $(\textbf{0})$. \end{definition} Over an algebraically closed field, a semi-simple algebra is isomorphic to a direct sum of matrix algebras. A theorem of Malcev\footnote{This result is also called the Wedderburn Principal Theorem and the Wedderburn-Malcev theorem in the literature. The analogous result for Lie algebras is the Levi decomposition.} gives precise information about how a semi-simple algebra sits inside $\mathrm{M}_{n}(k)$. We describe the general result: suppose that $k$ is algebraically closed and $\mathcal{A}$ is a subalgebra of $\mathrm{M}_{n}(k)$. Let $\mathcal{J}$ be the Jacobson radical of $\mathcal{A}$. Then there exists a subalgebra $\mathcal{S}$ of $\mathcal{A}$ which is isomorphic to the semi-simple quotient $\mathcal{A}/\mathcal{J}$ and which intersects $\mathcal{J}$ trivially. Any other subalgebra which is a complement to $\mathcal{J}$ (as a vector space) is necessarily conjugate to $\mathcal{S}$. Up to conjugation in $\mathrm{M}_{n}(k)$, the algebra $\mathcal{S}$ is block-diagonal and $\mathcal{J}$ is s.u.t., so that $\mathcal{S} \cap \mathcal{J} = \{\textbf{0}\}$. We record a special case of the general theory, for which we will have use in the next section. \begin{theorem}[Malcev, \cite{Malcev}]\label{Malcev} Let $k$ be an algebraically closed field, and let $\mathcal{A}$ be a semi-simple subalgebra of $\mathrm{M}_{n}(k)$. If $\mathcal{A}$ is isomorphic to the direct sum $\bigoplus_{i=1}^{t} \textrm{M}_{n_{i}}(k)$ then $\mathcal{A}$ is conjugate in $\mathrm{M}_{n}(k)$ to an algebra of block diagonal matrices where the $i^{\textrm{th}}$ block has size $n_{i} \times n_{i}$. 
\end{theorem} If $\sum_{i=1}^{t} n_{i} < n$ then Malcev's theorem does not, in general, tell us what happens in the remainder of the matrix. The following algebras are semi-simple and isomorphic but not conjugate, for example: \[\left\{ \left( \begin{array}{ll} A & \textbf{0} \\ \textbf{0} & A \end{array}\right) \mid A \in \mathrm{M}_{2}(k) \right\}, \,\,\, \left\{ \left( \begin{array}{ll} A & \textbf{0} \\ \textbf{0} & \textbf{0} \end{array}\right) \mid A \in \mathrm{M}_{2}(k) \right\}\,. \] In the next section we will explore the possible dimensions for a semi-simple subalgebra of $\textrm{M}_{n}(k)$. \section{Dimensions of semi-simple subalgebras of $\textrm{M}_{n}(k)$} A \textit{connected semi-simple subalgebra} (CSA) of $\mathrm{M}_{n}(k)$ is a subalgebra which is semi-simple, such that the sum of the orders of the simple components is precisely $n$. (That is, $\sum_{i} n_{i} = n$ in the notation of Theorem \ref{Malcev}.) We will write $\mathcal{C}(n)$ for the set of dimensions of CSAs of $\mathrm{M}_{n}(k)$. It follows directly from Theorem \ref{Wed} and the definition of a CSA that an integer $\ell \in \{0, 1, \ldots, n^{2}\}$ is the dimension of a CSA in $\mathrm{M}_{n}(k)$ if and only if there exists a partition $d_{1} + d_{2} + \ldots + d_{t}$ of $n$ such that $\ell = d_{1}^{2} + d_{2}^{2} + \ldots + d_{t}^{2}$. So we have the following explicit description: \[ \mathcal{C}(n) = \{ \ell = d_{1}^{2} + d_{2}^{2} + \ldots + d_{t}^{2} \mid d_{1} + d_{2} + \ldots + d_{t} = n \} \,.\] Since $t^{2} \equiv t \pmod{2}$ for any integer $t$, it follows that \[ \ell = d_{1}^{2} + d_{2}^{2} + \ldots + d_{t}^{2} \equiv d_{1} + d_{2} + \ldots + d_{t} = n \pmod{2}\,.\] We record this result as a Lemma. \begin{lemma} \label{parity} If $\ell$ is the dimension of a CSA of $\mathrm{M}_{n}(k)$ then $\ell \equiv n \pmod{2}$. \end{lemma} Lemma \ref{parity} already shows that $\limsup_{n\rightarrow \infty} n^{-2}|\mathcal{C}(n)| \leq 1/2$. 
Unfortunately, the number of partitions of $n$ grows like $e^{c\sqrt{n}}$ (see Chapter 15 of van Lint and Wilson's \textit{A course in combinatorics} for an introduction to the theory of partitions \cite{vanLintWilson}). So while elegant, this description of $\mathcal{C}(n)$ cannot be used in computations for even moderate values of $n$. Instead, we describe $\mathcal{C}(n)$ recursively. \begin{lemma} \label{recursive} Set $\mathcal{C}(0) = \{0\}$. For each $n \geq 1$, \[ \mathcal{C}(n) = \bigcup_{j=1}^{n} \left\{ j^{2} + \ell \mid \ell \in \mathcal{C}(n-j)\right\}\,.\] \end{lemma} \begin{proof} By Theorem \ref{Malcev}, we may assume that all CSAs are block-diagonal, and completely described by the corresponding partition of $n$. Each CSA in $\mathrm{M}_{n}(k)$ has a full matrix algebra $\mathrm{M}_{j}(k)$ in its upper left corner, for some positive integer $j$. In the lower right $(n-j) \times (n-j)$ block, there must be a CSA of $\mathrm{M}_{n-j}(k)$, which has dimension in the set $\mathcal{C}(n-j)$. This establishes the recursion. (Note that isomorphic subalgebras are counted multiple times, and that there may be multiple non-isomorphic algebras with the same dimension.) \end{proof} This recursion is moderately efficient and allowed the authors to compute the sets $\mathcal{C}(n)$ for values of $n$ in the thousands without difficulty. By Lemma \ref{parity}, the dimension of a CSA is of the form $n + 2m$ for some integer $m$. \begin{definition} An even integer $2m$ is \textit{realisable} in dimension $n$ if there exists a CSA of $\mathrm{M}_{n}(k)$ of dimension $n + 2m$. The \textit{width} of $2m$ is the minimal dimension in which it is realisable. \end{definition} If $2m$ is realisable in dimension $n$, then it is realisable in all larger dimensions. We adapt an argument of Savitt and Stanley \cite{SavittStanley} to give an upper bound on the width of $2m$. 
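The recursion of Lemma \ref{recursive} translates directly into a short memoised program. The sketch below is our own illustration (function names are not from the paper); it also re-checks Lemma \ref{parity} on small cases.

```python
# A sketch of the recursion in Lemma [recursive]: C(n) is built from
# C(n-1), ..., C(0) by peeling off the top-left block M_j(k).
from functools import lru_cache

@lru_cache(maxsize=None)
def csa_dimensions(n):
    """The set C(n) of dimensions of connected semi-simple subalgebras of M_n(k)."""
    if n == 0:
        return frozenset({0})
    dims = set()
    for j in range(1, n + 1):
        dims.update(j * j + ell for ell in csa_dimensions(n - j))
    return frozenset(dims)

# Every dimension has the same parity as n (Lemma [parity]), and the
# extremes are n (partition 1 + 1 + ... + 1) and n^2 (partition n):
for n in range(1, 30):
    dims = csa_dimensions(n)
    assert all(ell % 2 == n % 2 for ell in dims)
    assert min(dims) == n and max(dims) == n * n

# A small case, computed from the partitions 1+1+1, 2+1 and 3 of n = 3:
assert csa_dimensions(3) == {3, 5, 9}
```

Memoisation keeps the running time polynomial in $n$, in contrast to enumerating all $e^{c\sqrt{n}}$ partitions.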
First observe that if $\sum_{i = 1}^d t_i^2 = n + 2m$ and $\sum_{i = 1}^d t_i = n$ then $\sum_{i = 1}^d t_i\left(t_i - 1\right) = 2m$. With this in mind we define a decomposition of an even integer $2m$. \begin{definition} For an even integer $2m$, define the \textit{greedy decomposition} as \[ 2m = t_{0}(t_{0}-1) + t_{1}(t_{1}-1) + \ldots + t_{d}(t_{d}-1) \] where all of the $t_{j}$ are positive integers and each $t_{j}$ is chosen to be maximal subject to the condition $t_{j}(t_{j}-1) \leq 2m - \sum_{i=0}^{j-1} t_{i}(t_{i}-1)$. The \textit{greedy width} of $2m$ is $\mathcal{G}(2m) = \sum_{j=0}^{d} t_{j}$. \end{definition} For example, the greedy decomposition of $40$ is \[ 40 = 6(5) + 3(2) + 2(1) + 2(1)\] and so $\mathcal{G}(40) = 13$. This implies that $\textrm{M}_{n}(k)$ contains a CSA of dimension $n+40$ for all $n \geq 13$. The greedy construction for an algebra of dimension $n+40$ is the direct sum \[ \mathrm{M}_{6}(k) \oplus \mathrm{M}_{3}(k) \oplus \mathrm{M}_{2}(k) \oplus \mathrm{M}_{2}(k) \oplus \mathrm{M}_{1}(k)^{\oplus (n-13)}\,, \] where the last term indicates that one takes the direct sum of $n-13$ copies of the one-dimensional matrix algebra. A little thought reveals that it is possible to do better: $\mathrm{M}_{10}(k)$ contains the sub-algebra $\mathrm{M}_{5}(k)\oplus \mathrm{M}_{5}(k)$ which has dimension $50 = 10 + 40$. Hence the actual width of $40$ is at most $10$. Nevertheless, the greedy width is a useful upper bound on the minimal width of CSAs. Its behaviour is sufficiently regular that we can prove some theorems about it. \begin{proposition} \label{greedy} For every integer $m$, we have $\mathcal{G}(2m) \leq \mathrm{max} \left\{ \frac{3}{2}\sqrt{2m},\, 38\right\}$. \end{proposition} \begin{proof} The claim may be verified computationally up to $m = 3042$. The last time that the inequality $\mathcal{G}(2m) \leq \frac{3}{2}\sqrt{2m}$ fails is when $2m = 640$. 
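The greedy decomposition is straightforward to implement. The sketch below is our own code (function names are ours, not the paper's) and reproduces the worked example $\mathcal{G}(40) = 13$.

```python
# A sketch of the greedy decomposition: repeatedly take the largest t with
# t(t-1) <= remainder, until the remainder is exhausted.
from math import isqrt

def greedy_decomposition(two_m):
    """Return [t_0, t_1, ...] with 2m = t_0(t_0-1) + t_1(t_1-1) + ..."""
    assert two_m % 2 == 0 and two_m >= 0
    parts = []
    rem = two_m
    while rem > 0:
        # Largest t with t(t-1) <= rem; the root of t^2 - t - rem = 0 is
        # (1 + sqrt(1 + 4*rem)) / 2, so start there and adjust downwards.
        t = (1 + isqrt(1 + 4 * rem)) // 2
        while t * (t - 1) > rem:
            t -= 1
        parts.append(t)
        rem -= t * (t - 1)
    return parts

def greedy_width(two_m):
    return sum(greedy_decomposition(two_m))

assert greedy_decomposition(40) == [6, 3, 2, 2]   # 30 + 6 + 2 + 2
assert greedy_width(40) == 13
```

The remainder stays even throughout (each step subtracts the even quantity $t(t-1)$), so the loop always terminates with remainder zero.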
In fact, the greedy decomposition of $640$ requires dimension $38$, with decomposition $25 + 6 + 3 + 2 + 2$, while $\frac{3}{2}\sqrt{640} \approx 37.95$. Assume that the claim holds for all even integers up to $2m-2$, where $m \geq 3042$. We will show by induction that the claim holds for $2m$. The first term in the greedy expansion of $2m$ is the unique integer $t$ such that \[ t(t-1) \leq 2m < (t+1)t\,. \] It follows that $2m - t(t-1) < (t+1)t - t(t-1) \leq 2t$, and hence $2m - t(t-1) \leq 2t - 2$ since both sides are even. We apply the induction hypothesis to $2m - t(t-1)$: \begin{eqnarray*} \mathcal{G}(2m) & \leq & t + \mathcal{G}\left(2m - t(t-1)\right) \\ & \leq & t + \textrm{max}\left(\frac{3}{2} \sqrt{2m - t(t-1)},\, 38\right) \\ & \leq & t + \textrm{max}\left(\frac{3}{2} \sqrt{2t-2},\, 38\right) \,. \end{eqnarray*} Since $t(t-1)\leq 2m$ by construction, we have $(t-1)^{2} \leq 2m$, equivalently $t-1 \leq \sqrt{2m}$. We deal with each possibility separately: suppose first that $\mathcal{G}(2m) \leq t + 38$. Then: \[ \mathcal{G}(2m) \leq t + 38 \leq \sqrt{2m} + 39 \leq \frac{3}{2} \sqrt{2m}\,,\] where the last inequality holds provided $m \geq 3042$. In the second case, again using $t -1 \leq \sqrt{2m}$, \[ \mathcal{G}(2m) \leq t + \frac{3}{2}\sqrt{2t-2} \leq \sqrt{2m} + 1 + \frac{3}{2}\sqrt[4]{8m} \,.\] We rearrange the inequality $1 + \sqrt{2m} + \frac{3}{2}\sqrt[4]{8m} \leq \frac{3}{2}\sqrt{2m}$ to get $2 + 3\sqrt[4]{8m} \leq \sqrt{2m}$. Multiplying both sides by $2$, we get \[ 4 + 6 \sqrt[4]{8m} \leq \sqrt{8m} \,.\] We make the substitution $y = \sqrt[4]{8m}$ and solve the resulting quadratic inequality $y^{2} - 6y - 4 \geq 0$ to find that it holds when $y \geq 3 + \sqrt{13}$. Solving in terms of $m$, we obtain $m\geq 119 + 33\sqrt{13}$, which implies that the result holds whenever $2m \geq 476$. Since we already assumed that $m \geq 3042$, the result is established by induction. 
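The computational base case of Proposition \ref{greedy} can be re-checked by machine. The following sketch is our code, not the authors'; it recomputes the greedy width of $2m = 640$ and confirms that this value sits just above $\frac{3}{2}\sqrt{640}$.

```python
# Numerical check of the boundary case in the proof of Proposition [greedy]:
# the greedy width of 2m = 640 is 38, while (3/2)*sqrt(640) ≈ 37.95.
from math import isqrt, sqrt

def greedy_width(two_m):
    """Sum of the terms of the greedy decomposition of the even integer 2m."""
    total, rem = 0, two_m
    while rem > 0:
        t = (1 + isqrt(1 + 4 * rem)) // 2     # largest t with t(t-1) <= rem
        while t * (t - 1) > rem:
            t -= 1
        total += t
        rem -= t * (t - 1)
    return total

assert greedy_width(640) == 38        # decomposition 25 + 6 + 3 + 2 + 2
assert 38 > 1.5 * sqrt(640)           # so the bound (3/2)sqrt(2m) fails here
```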
\end{proof} We apply Proposition \ref{greedy} to prove that for sufficiently large $n$, the dimensions of CSAs in the matrix algebra $\mathrm{M}_{n}(k)$ are structured and predictable, except for a (relatively) small region close to $n^{2}$. \begin{theorem}\label{main} For any $n \geq 225$, the algebra $\mathrm{M}_{n}(k)$ contains a connected semi-simple subalgebra of dimension $n + 2m$ for every even integer satisfying $0 \leq 2m \leq n^{2} - \frac{9}{2}n\sqrt{n}$. \end{theorem} \begin{proof} Proposition \ref{greedy} shows that $\mathrm{M}_{n}(k)$ contains a CSA of dimension $n + 2m$ whenever we have $n \geq \max\{\frac{3}{2}\sqrt{2m}, 38\}$. For any $n \geq 38$, we invert this bound to find that $\mathrm{M}_{n}(k)$ contains CSAs of dimension $n + 2m$ for all $2m \leq \frac{4}{9} n^{2}$. For each $1 \leq j \leq n-38$ we define the interval \[ S_{j} = \left[ n + j(j-1), n + j(j-1) + \frac{4}{9} (n-j)^{2}\right]\,,\] every integer of the correct parity in which is the dimension of a CSA of $\textrm{M}_{n}(k)$ having a $j\times j$ block in the upper left corner and a CSA of $\mathrm{M}_{n-j}(k)$ in the lower right. Let us examine the behaviour of the function $f(j) = n + j(j-1) + \frac{4}{9} (n-j)^{2}$, where we continue to hold $n$ fixed. Simplifying, we can write $f(j) = \frac{13}{9} j^{2} - \left( \frac{8}{9}n + 1\right) j + \frac{4}{9} n^{2} + n$. Taking the derivative with respect to $j$, we find that $f(j)$ reaches its minimum when $j = \frac{4}{13}n + \frac{9}{26}$. So for values of $j$ in the interval $\left[ \frac{4}{13} n + \frac{9}{26}, n-38\right]$ the maximum entry of $S_{j}$ increases with $j$. Now we establish a sufficient condition on $j$ for the intervals $S_{j}$ and $S_{j+1}$ to overlap. This occurs provided that the inequality $n + j(j-1) + \frac{4}{9} (n-j)^{2} \geq n + j(j+1)$ holds. To find the largest value of $j$ for which this occurs, we rearrange this condition as a quadratic in $j$: \[ 4j^{2} - (8n+18)j + 4n^{2} \geq 0\,. 
\] Solving with the quadratic formula and simplifying, the roots of this polynomial are $n + \frac{9}{4} \pm \frac{3}{2} \sqrt{2n + \frac{9}{4}}$. The leading coefficient of the quadratic is positive, so the sets $S_{j}$ and $S_{j+1}$ intersect when $j$ is an integer in the interval $\left[0, n + \frac{9}{4} - \frac{3}{2} \sqrt{2n + \frac{9}{4}}\right]$. It will be convenient to use a slightly smaller upper bound for the interval of the form $j_{\textrm{max}} = n - \frac{3}{2}\sqrt{2n}$ (a computation shows that this bound is valid for all $n \geq 1$). We simplify the lower bound on elements of $S_{j_{\textrm{max}}}$ to get \[ n + (n-\tfrac{3}{2}\sqrt{2n})(n-\tfrac{3}{2}\sqrt{2n}-1) = n^{2} - 3\sqrt{2}\, n\sqrt{n} + \frac{9}{2} n + \frac{3}{2}\sqrt{2n}\,. \] (Note that taking the upper bound here would only improve our estimate by a linear term.) Since $3\sqrt{2} \leq \frac{9}{2}$, it is convenient to replace this bound with the simpler, weaker form $n + n^{2} - \frac{9}{2} n\sqrt{n}$; expressing the upper bound on the dimension in the form $n + 2m$ yields our result whenever $j_{\textrm{max}} \leq n - 38$. To complete the proof, we need to enforce the conditions $j_{\textrm{max}} \leq n - 38$ and \[ n + j_{\textrm{max}}(j_{\textrm{max}}+1) + \frac{4}{9} (n-j_{\textrm{max}})^{2} \geq n + n^{2} - \frac{9}{2} n\sqrt{n}\] simultaneously. Substituting $n - 38$ for $j_{\textrm{max}}$, approximating $\frac{4}{9}(38)^{2}$ by $642$ and simplifying, we obtain the inequality \[ 75 n - 2048 \leq \frac{9}{2}n\sqrt{n}\,. \] Making the substitution $y^{2} = n$, we obtain a cubic inequality $9y^{3} - 150y^{2} + 4096 \geq 0$. Solving numerically, this polynomial inequality holds for all $y \geq 15$, and hence the result holds for $n \geq 225$. \end{proof} \section{An open problem} One could carry out a similar analysis for not-necessarily-connected subalgebras of $\textrm{M}_{n}(k)$. Taking unions of the dimensions of CSAs established in Theorem \ref{main} of $\mathrm{M}_{2}(k), \ldots, \textrm{M}_{n}(k)$ gives the following result. 
\begin{corollary} \label{gencase} For every $n \geq 49$, every integer in $[0, n^{2}-\frac{9}{2}n\sqrt{n} - 2n]$ is the dimension of a semi-simple subalgebra of $\mathrm{M}_{n}(k)$. As $n \rightarrow \infty$ the proportion of integers in $[0, n^{2}]$ which are the dimension of a semi-simple subalgebra of $\textrm{M}_{n}(k)$ tends to $1$. \end{corollary} \begin{proof} All integers in the set $\{0, 1, \ldots, n-1\}$ may be realised by algebras of diagonal matrices. By Theorem \ref{main}, every integer with the same parity as $n$ in the interval $[n, n^{2}-\frac{9}{2}n\sqrt{n}]$ is the dimension of a semi-simple subalgebra of $\mathrm{M}_{n}(k)$. Similarly, every integer of the opposite parity to $n$ in the interval $[n-1, (n-1)^{2} - \frac{9}{2}(n-1)\sqrt{n-1}]$ is the dimension of a semi-simple subalgebra. Since $n^{2}-\frac{9}{2}n\sqrt{n}-2n \leq (n-1)^{2} - \frac{9}{2}(n-1)\sqrt{n-1}$, the result follows. \end{proof} We write $\textrm{gap}(n)$ for the first integer which is \textit{not} the dimension of a semi-simple subalgebra of $\mathrm{M}_{n}(k)$. We conjecture that there exist constants $0 \leq \alpha \leq \beta \leq \frac{9}{2}$ such that \[ \liminf_{n\rightarrow \infty} \frac{n^{2} - \textrm{gap}(n)}{n^{3/2}} = \alpha \] and \[ \limsup_{n\rightarrow \infty} \frac{n^{2} - \textrm{gap}(n)}{n^{3/2}} = \beta \,.\] Computations up to $n = 600$ suggest that the function $n^{2} - \frac{13}{4}n\sqrt{n} - \textrm{gap}(n)$ is (mostly) positive for large $n$, and that $n^{2} - \frac{7}{2}n\sqrt{n} - \textrm{gap}(n)$ is (mostly) negative. We propose the problem of finding $\alpha$ and $\beta$ explicitly. \bibliographystyle{abbrv}
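Combining the recursion for $\mathcal{C}(n)$ with the definition of $\textrm{gap}(n)$ gives a direct, if naive, way to tabulate $\textrm{gap}(n)$ for small $n$. The sketch below is our own illustration (function names are ours); the asserted values are hand-checked tiny cases only, not the authors' computations up to $n = 600$.

```python
# Illustrative computation of gap(n): the first integer that is NOT the
# dimension of a (not-necessarily-connected) semi-simple subalgebra of
# M_n(k).  Such dimensions are exactly the elements of C(0) ∪ ... ∪ C(n).
from functools import lru_cache

@lru_cache(maxsize=None)
def csa_dimensions(n):
    """C(n): dimensions of connected semi-simple subalgebras of M_n(k)."""
    if n == 0:
        return frozenset({0})
    return frozenset(j * j + ell
                     for j in range(1, n + 1)
                     for ell in csa_dimensions(n - j))

def gap(n):
    realizable = set()
    for m in range(n + 1):
        realizable |= csa_dimensions(m)
    # n^2 + 1 is never realizable, so the search always terminates.
    return next(d for d in range(n * n + 2) if d not in realizable)

assert gap(2) == 3    # realizable dimensions in M_2(k): {0, 1, 2, 4}
assert gap(3) == 6    # realizable dimensions in M_3(k): {0, 1, 2, 3, 4, 5, 9}
```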
\section*{Introduction} The purpose of this note is to show that a small variant of the methods used by Voisin in \cite{Voisin1} and \cite{Voisin2} leads to a surprisingly quick proof of the gonality conjecture of \cite{GL3}, asserting that one can read off the gonality of an algebraic curve $C$ from its syzygies in the embedding defined by any one line bundle of sufficiently large degree. More generally, we establish a necessary and sufficient condition for the asymptotic vanishing of the weight one syzygies of the module associated to an arbitrary line bundle on $C$. Let $C$ be a smooth complex projective curve of genus $g \ge 2$, and let $L$ be a very ample line bundle of degree $d$ on $C$ defining an embedding \[ C \ \subseteq \ \mathbf{P} H^0(C,L) \ = \ \mathbf{P}^r. \] Starting with the work of Green in \cite{Kosz1}, \cite{Kosz2} there has been a great deal of interest in understanding connections between the geometry of $C$ and $L$ and their syzygies. More precisely, write $S = \textnormal{Sym}\, \textnormal{H}^0(C,L)$ for the homogeneous coordinate ring of $\mathbf{P}^r$, and denote by \[ R= R(L)= \oplus_m H^0(C, mL)\] the graded $S$-module associated to $L$. Consider next the minimal graded free resolution $E_{\bullet} = E_{\bullet}(L)$ of R over $S$: \[ \xymatrix{ 0 \ar[r] & E_{r-1} \ar[r]& \ldots \ar[r] & E_2 \ar[r] & E_1 \ar[r] & E_0 \ar[r] & R \ar[r] & 0 , }\] where $ E_p =\oplus S(-a_{p,j})$. Note that if $L$ is normally generated then $E_0 = S$, in which case $E_{\bullet}$ gives rise to a minimal resolution of the homogeneous ideal $I = I_{C/\mathbf{P}^r}$ of $C$ in $\mathbf{P}^r$. As customary, we denote by $K_{p,q}(C;L)$ the vector space of minimal generators of $E_p$ in degree $p+q$, so that \[ E_p \ = \ \bigoplus_q\, K_{p,q}(C;L) \otimes_{\mathbf{C}} S(-p-q). \] We will be concerned here with investigating the grading of $E_{\bullet}(L)$ -- ie determining which of the $K_{p,q}$ are non-vanishing -- when $L$ has very large degree. 
It is elementary that if $H^1(C,L) = 0$ then $K_{p,q}(C;L) = 0$ for $q \ge 3$. Moreover, work of Green \cite{Kosz1} and others shows that if $d = \deg(L) \gg 0$, so that in particular $ r = d-g$, then: \begin{align*} K_{p,0} (C;L) \, \ne \, 0 \ &\Longleftrightarrow \ p =0; \\ K_{p,2}(C;L) \, \ne \, 0 \ &\Longleftrightarrow \ r-g \, \le p \, \le r - 1. \end{align*} It follows from this that \[ K_{p,1}(C; L) \, \ne \, 0 \ \text{ for } 1 \, \le \, p \, \le r-1-g, \] but these results leave open the question of when $K_{p,1}(C;L) \ne 0$ for $p \in [r-g, r-1]$. Our first main result is that this is determined by the gonality $\textnormal{gon}(C)$ of $C$, ie the least degree of a branched covering $C \rightarrow \mathbf{P}^1$. \begin{theoremalpha} \label{Gonality.Thm} If $\deg(L) \gg 0$, then \[ K_{p,1}(C; L) \, \ne \, 0 \ \Longleftrightarrow \ 1 \, \le \, p \, \le \, r - \textnormal{gon}(C).\] \end{theoremalpha} Thus one can read off the gonality of a curve from the resolution of the ideal of $C$ in the embedding defined by any one line bundle of sufficiently large degree. The cases $p =r-1, p = r-2$ were established by Green \cite{Kosz1}, and the general statement was conjectured in \cite{GL3}, where it was observed that if $1 \le p \le r - \textnormal{gon}(C)$, then $K_{p,1}(C;L) \ne 0$.\footnote{In fact, suppose that $p: C \rightarrow \mathbf{P}^1$ is a branched covering of degree $k$. Then when $\deg(L) \gg 0$ the linear spaces spanned by the fibres of $p$ sweep out a $k$-dimensional scroll $S \subset \mathbf{P}^r$ containing $C$. But the resolution of $I_{S/\mathbf{P}^r}$ has a linear strand of length $r-k$, which forces $K_{p,1}(C;L) \ne 0$ for $1 \le p \le r-k$. 
Thus the essential content of the Theorem is that if $K_{r-k,1}(C;L) \ne 0$ and $\deg L \gg 0$, then $C$ carries a pencil of degree $\le k$.} Using Voisin's results \cite{Voisin1}, \cite{Voisin2} on syzygies of general canonical curves, Aprodu and Voisin \cite{Aprodu}, \cite{AproduVoisin} proved the statement of the Theorem for a general curve of each gonality. We show (Remark \ref{Effective.Gonality}) that the conclusion of the Theorem holds for instance once $\deg(L) \ge g^3 $, but we suspect that it should be enough to assume a lower bound on $d$ that is linear in $g$. Theorem \ref{Gonality.Thm} follows from a more general result concerning the weight one asymptotic syzygies associated to an arbitrary divisor $B$. Specifically, fix a line bundle $B$ on $C$, and with $L$ as above consider the $S = \textnormal{Sym}\, H^0(L)$ module \[ R \ = \, R(B;L) \ = \ \bigoplus_m H^0(C, B + mL). \] One can again form the graded minimal free resolution $E_{\bullet}(B;L)$ of $R(B;L)$ over $S$, giving rise to Koszul cohomology groups $K_{p,q}(C,B;L)$. As in the case $B = \mathcal{O}_C$ discussed in the previous paragraphs, the $K_{p,0}$ and the $K_{p,2}$ are completely controlled when $\deg L \gg 0$, and so the issue is to understand the weight one groups $K_{p,1}(C, B;L)$ when $L$ has large degree. Recall that $B$ is said to be $p$-\textit{very ample} if every effective divisor $\xi$ of degree $(p+1)$ on $C$ imposes independent conditions on the sections of $B$, i.e. if the natural map \[ H^0(C,B) \longrightarrow H^0(C, B \otimes \mathcal{O}_{\xi}) \] is surjective for every $\xi \in C_{p+1} =_{\text{def}} \textnormal{Sym}^{p+1}C$. Our second main result is: \begin{theoremalpha} \label{Kp1(B).Theorem} Fix $B$ and $p \ge 0$. Then \[ K_{p,1}(C, B; L ) \, = \,0 \ \text{ for all $L$ with $\deg L \gg 0$} \] if and only if $B$ is $p$-very ample. 
\end{theoremalpha} \noindent Serre duality implies that the vector spaces \[ K_{p,q}(C, B;L) \ \ \text{ and } \ \ K_{r-1-p, 2-q}(C, K_C - B; L) \] are naturally dual, $K_C$ being the canonical divisor of $C$, and one then finds that Theorem A is equivalent to the case $B = K_C$ of Theorem B. While this is arguably the most interesting instance of the result, it will become clear that decoupling $B$ and $L$ is helpful in guiding the argument. When $B$ fails to be $p$-very ample, it is natural to introduce the invariant \[ \gamma_p(B) \ = \ \dim \big\{ \xi \in C_{p+1}\, \big | \, H^0(B) \longrightarrow H^0(B \otimes \mathcal{O}_\xi )\text{ not surjective } \big\}. \] \begin{theoremalpha} \label{Hilb.Poly.Thm} Let $L_d = dA + E$, where $A$ is an ample line bundle on $C$ and $E$ is arbitrary. Fix $B$ and $p$, and assume that $B$ is not $p$-very ample. Then there is a polynomial $P(d)$ of degree $\gamma_p(B)$ in $d$ such that \[ \dim K_{p,1}(C, B; L_d) \, = \, P(d) \ \ \text{ for } d \gg 0. \] \end{theoremalpha} \noindent In some cases, we are also able to compute the leading coefficient of $P(d)$. We note that Yang \cite{Yang} has recently proven (by somewhat related arguments) that the dimensions of the vector spaces $K_{p,0}$ and $K_{p,1}$ grow polynomially on an arbitrary variety. Theorems B and C follow in a surprisingly simple manner from a small variant of the Hilbert scheme computations pioneered by Voisin in her proof \cite{Voisin1}, \cite{Voisin2} of Green's conjecture for general canonical curves. It is well known that $K_{p,1}(C,B;L)$ can be computed as the cohomology of the Koszul-type complex \[ \Lambda^{p+1}H^0(L) \otimes H^0(B) \longrightarrow \Lambda^p H^0(L) \otimes H^0(B+L) \longrightarrow \Lambda^{p-1}H^0(L) \otimes H^0(B+2L), \] and the basic strategy is to realize this complex geometrically. 
In brief, a line bundle $B$ on $C$ determines a vector bundle $E_B = E_{p+1, B}$ of rank $p+1$ on the symmetric product $C_{p+1}$ whose fibre at a point $\xi \in C_{p+1}$ is the vector space $H^0(C, B \otimes \mathcal{O}_\xi)$. The natural map $H^0(B) \longrightarrow H^0(B \otimes \mathcal{O}_\xi)$ globalizes to a homomorphism of vector bundles \[ \textnormal{ev}_B = \textnormal{ev}_{p+1, B}: H^0(C,B) \otimes_{\mathbf{C}} \mathcal{O}_{C_{p+1}} \longrightarrow \ E_B, \tag{*} \] and evidently $\textnormal{ev}_B$ is surjective as a map of vector bundles if and only if $B$ is $p$-very ample. On the other hand, if $N_L = \det E_L$, then it is well-known that $H^0(N_L) = \Lambda^{p+1} H^0(C, L)$, and twisting (*) by $N_L$ gives rise to a vector bundle map \[ H^0(C, B) \otimes N_L \longrightarrow E_B \otimes N_L. \tag{**}\] Computations of Voisin identify $H^0(C_{p+1}, E_B \otimes N_L)$ with the space $Z_{p,1}(C, B; L)$ of Koszul cycles, and hence $K_{p,1}(C,B;L) = 0$ if and only if the homomorphism \[ H^0(C, B) \otimes H^0(C_{p+1},N_L) \longrightarrow H^0(C_{p+1}, E_B \otimes N_L)\] determined by (**) is surjective. But assuming that $B$ is $p$-very ample, so that (**) is surjective as a map of bundles, this follows for $\deg L \gg 0$ simply by applying Serre-Fujita vanishing to the kernel of (**). We note that the main difference from Voisin's set-up -- apart from separating $B$ and $L$, which clarifies the issue -- is that we push down to the symmetric product rather than working on the universal family over it. Some related computations had earlier appeared in the paper \cite{Laz1}, where it was shown that one could see the syzygies of canonical curves in cohomology related to the cotangent bundle $E_{\Omega_C}$ of the symmetric product, but it has to be admitted that nothing came of these. We are grateful to Marian Aprodu, Gabi Farkas, B. 
Purnaprajna, Frank Schreyer, David Stapleton, Bernd Sturmfels, Brooke Ullery and Claire Voisin for valuable discussions and encouragement. \section{Proofs} This section is devoted to the proofs of Theorems A, B and C from the Introduction. We keep the notation introduced there.\footnote{In addition, we continue to allow ourselves to be a little sloppy in confounding additive and multiplicative notation for divisors and line bundles.} Thus $C$ is a smooth projective curve of genus $g$, and $L$ is a very ample line bundle of degree $d$ on $C$ defining an embedding \[ C \ \subseteq \ \mathbf{P} H^0(L) \, = \, \mathbf{P}^r. \] We fix an arbitrary line bundle $B$ on $C$, and we are interested in the Koszul cohomology groups \[ K_{p,q}(B;L)\ =\ K_{p,q}(C, B;L)\] arising as the cohomology of the Koszul-type complex: \Small \vskip -16pt \[ \Lambda^{p+1}H^0(L) \otimes H^0(B+(q-1)L) \longrightarrow \Lambda^p H^0(L) \otimes H^0(B+qL) \longrightarrow \Lambda^{p-1}H^0(L) \otimes H^0(B+(q+1)L). \] \normalsize We recall that results of Green and others imply that if $d = \deg(L) \gg 0$, then $K_{p,q}(B;L) = 0$ for all $q \ge 3$, and: \begin{align*} K_{p,0}(B;L) \, \ne \, 0 \ &\Longleftrightarrow \ p \in [0, h^0(B)-1] \\ K_{p,2}(B;L) \, \ne \, 0 \ &\Longleftrightarrow \ p \in [r - h^1(B), r - 1] \end{align*} (cf \cite[Proposition 5.1, Corollary 5.2]{ASAV}).\footnote{In particular, if $H^0(B) = 0$ then $K_{p,0}(B;L) = 0$ for all $p$, and if $H^1(B) = 0$, then $K_{p,2}(B;L) = 0$ for all $p$ provided that $\deg L \gg 0$.} So the issue is to understand which of the groups $K_{p,1}(B;L)$ vanish when $\deg L \gg 0$. Write $C_k $ for the $k^{\text{th}}$ symmetric product of $C$, viewed as parameterizing all effective divisors on $C$ of degree $k$. 
We consider the commutative diagram: \begin{equation} \begin{gathered} \xymatrix{& C & \\ C \times C_p \ \ar@{^{(}->}[rr] ^{j_{p+1}}\ar[dr]_{ \sigma_{p+1}} \ar[ur]^{pr_1}& & \ C \times C_{p+1} \ar[dl]^{pr_2} \ar[ul]_{pr_1}\\ & C_{p+1}& } \end{gathered} \end{equation} where $\sigma_{p+1}$ and $ j_{p+1}$ are the maps defined by \[ \sigma_{p+1}( x , \xi) \, = \, x + \xi \ \ , \ \ j_{p+1}(x, \xi) \, = \, (x , x + \xi). \] Note that $\sigma_{p+1}$ realizes $C \times C_p$ as the universal family of degree $p+1$ divisors over $C_{p+1}$. The proofs revolve around two well-studied tautological sheaves on $C_{p+1}$. First given a line bundle $B$ on $C$, define \[ E_B \ = \ E_{p+1, B} \ =_{\text{def}} \ \sigma_{p+1, *} \, {pr}_1^* (B).\] Thus $E_B$ is a vector bundle of rank $p+1$ on $C_{p+1}$ whose fibre at $\xi \in C_{p+1}$ is identified with the vector space $H^0(C, B \otimes \mathcal{O}_\xi)$. It follows from the construction that $H^0(C_{p+1}, E_B) = H^0(C, B)$, which gives rise to a homomorphism: \begin{equation} \label{ev.B.equation} \textnormal{ev}_B \ = \ \textnormal{ev}_{p+1,B} : H^0(C, B) \otimes_{\mathbf{C}} \mathcal{O}_{C_{p+1}}\longrightarrow E_B \end{equation} of vector bundles on $C_{p+1}$. Evidently $\textnormal{ev}_B$ is surjective if and only if $B$ is $p$-very ample. Next, given a line bundle $L$ on $C$, put \[ N_L \ = \ N_{p+1,L} \ = \ \det E_L. \] Note that $\Lambda^{p+1} \textnormal{ev}_L$ determines a map \[ \Lambda^{p+1} H^0(C,L) \longrightarrow H^0(C_{p+1}, N_L), \] and it was established eg in \cite{EGL} and \cite{Voisin1} that this is an isomorphism. Twisting $\textnormal{ev}_B$ by $N_L$, one arrives at the vector bundle map \begin{equation} \label{Basic.VB.Map} H^0(C,B) \otimes_{\mathbf{C}} N_L \longrightarrow E_B \otimes N_L \end{equation} that lies at the heart of the proof. Our main results follow immediately from two lemmas whose proofs appear at the end of this section. 
The first, which is effectively due to Voisin, states that $K_{p,1}(B;L) = 0$ if and only if \eqref{Basic.VB.Map} is surjective on global sections. The second asserts that as $L$ gets very positive on $C$, the corresponding line bundles $N_L$ become sufficiently positive on $C_{p+1}$ to satisfy a Serre-type vanishing theorem. \begin{lemma}[Voisin] \label{Voisin.Lemma} The global sections of $E_B \otimes N_L$ are identified with the space \[ Z_{p,1}(B;L) \ = \ \ker \Big ( \Lambda^p H^0(L) \otimes H^0(B+L) \longrightarrow \Lambda^{p-1}H^0(L)\otimes H^0(B+ 2L)\Big )\] of Koszul cycles, and the homomorphism \[ H^0(C,B) \otimes H^0(C_{p+1}, N_L) \, = \, H^0(C,B) \otimes \Lambda^{p+1} H^0(C,L) \longrightarrow H^0(C_{p+1}, E_B \otimes N_L) \] arising from \eqref{Basic.VB.Map} is identified with the Koszul differential. In particular, \[ K_{p,1}(C, B; L) \ = \ 0 \] if and only if the bundle map \eqref{Basic.VB.Map} determines a surjection on global sections. \end{lemma} \begin{lemma} \label{Serre.Vanishing.Lemma} Let $\mathcal{F}$ be any coherent sheaf on $C_{p+1}$. There exists an integer $d_0 = d_0(\mathcal{F})$ having the property that if $d = \deg(L) \ge d_0(\mathcal{F})$, then \[ \HH{i}{C_{p+1}}{\mathcal{F} \otimes N_L} \ = \ 0 \ \ \text{ for }\ i > 0. \] \end{lemma} Granting the lemmas for now, we prove the main results. \begin{proof} [Proof of Theorem \ref{Kp1(B).Theorem}] Assume that $B$ is $p$-very ample, so that $\textnormal{ev}_B$ in \eqref{ev.B.equation} is surjective. Denote by $M_B = M_{p+1,B}$ its kernel: \begin{equation} \label{MBundle.Eqn} 0 \longrightarrow M_B \longrightarrow H^0(C,B) \otimes \mathcal{O}_{C_{p+1}} \longrightarrow E_B \longrightarrow 0. \end{equation} To show that $K_{p,1}(B;L) = 0$ when $\deg L \gg 0$, it suffices by Lemma \ref{Voisin.Lemma} to prove that \begin{equation} \label{Vanishing.H1} \HH{1}{C_{p+1}}{M_B \otimes N_L} \ = \ 0 \end{equation} for very positive $L$. But this follows from Lemma \ref{Serre.Vanishing.Lemma}. 
Conversely, if $\textnormal{ev}_B$ is not surjective, then it is elementary -- and we will see momentarily in the proof of Theorem \ref{Hilb.Poly.Thm} -- that $K_{p,1}(B;L) \ne 0$ for every sufficiently positive $L$. \end{proof} \begin{remark} Proposition \ref{Eff.Bound.Proposition} below gives an effective lower bound on $\deg(L)$ that is sufficient to guarantee the vanishing \eqref{Vanishing.H1}. \qed \end{remark} \begin{proof}[Proof of Theorem \ref{Hilb.Poly.Thm}] Denote by $M_B$ and $\mathcal{F}_B$ respectively the kernel and cokernel of $\textnormal{ev}_B$: \begin{equation} \label{ThmCEqn} 0 \longrightarrow M_B \longrightarrow H^0(B) \otimes \mathcal{O}_{C_{p+1}} \longrightarrow E_B \longrightarrow \mathcal{F}_B \longrightarrow 0. \end{equation} Taking $L_d = dA + E$ as in the statement of the Theorem, put $N_d = N_{L_d}$. We will see in the proof of Lemma \ref{Serre.Vanishing.Lemma} below that \[ N_d \ = \ N_E + dS_A, \] where $S_A$ is an ample divisor on $C_{p+1}$. On the other hand, it follows from the two lemmas that for $d \gg 0$ \[ K_{p,1}(C, B; L_d) \ = \ \HH{0}{C_{p+1}}{\mathcal{F}_B \otimes N_d}. \] Therefore $\dim K_{p,1}(B;L_d)$ is given for $d \gg 0$ by the Hilbert polynomial of $\mathcal{F}_B \otimes N_E$ with respect to $S_A$. But $\gamma_p(B) = \dim \textnormal{Supp}\, \mathcal{F}_B$, and the result follows. \end{proof} \begin{remark} This argument shows that $ K_{p,0}(C,B;L_d)= \HH{0}{C_{p+1}}{M_B \otimes N_d}$ provided that $d$ is large. Hence (assuming that $p \le r(B)$) the dimension of this Koszul group always grows as a polynomial of degree $(r(B) - p)$ in $d$ when $d \gg 0$.\footnote{The arguments of \cite{Yang} show that analogously on a variety of dimension $n$, $\dim K_{p,0}$ grows as a polynomial of degree $n(r(B) - p)$.} In other words, it is the growth of the $K_{p,1}$ groups that exhibits interesting dependence on geometry. 
\qed \end{remark} We next recall the well-known argument that the case $B = K_C$ of Theorem \ref{Kp1(B).Theorem} implies the Gonality Conjecture. \begin{proof} [Proof of Theorem \ref{Gonality.Thm}] Fix $p \le g$. We need to show that if $\deg(L) \gg 0$, and if \[ K_{r(L) - p,1}(C; L) \ \ne \ 0 \tag{*}, \] then $C$ carries a pencil of degree $\le p$. By duality, (*) implies that \[ K_{p-1, 1}(C, K_C; L) \ \ne \ 0, \] and hence by Theorem \ref{Kp1(B).Theorem} there exists an effective divisor $\xi \in C_p$ of degree $p$ that fails to impose independent conditions on $|K_C|$. But then $\xi$ moves in a non-trivial linear series thanks to Riemann-Roch. \end{proof} We conclude this section by proving the two lemmas stated above. \begin{proof} [Proof of Lemma \ref{Voisin.Lemma}] It follows from the projection formula and the constructions that \begin{align*} \HH{0}{ C_{p+1}}{ E_B \otimes N_L} \ &= \ \HH{0}{C \times C_p}{pr_1^*B \otimes \sigma_{p+1}^*N_L}\\ &= \ \HH{0}{C \times C_p}{(j_{p+1})^*(pr_1^*B \otimes pr_2^*N_L)}. \end{align*} Moreover the map induced by \eqref{Basic.VB.Map} on global sections is identified with the restriction \[ \HH{0}{C \times C_{p+1}}{B \boxtimes N_L } \longrightarrow \HH{0}{C \times C_p} {{(B \boxtimes N_L )}|(C\times C_p)}.\] But this is exactly Voisin's Hilbert-schematic interpretation of Koszul cohomology, and from this point one can argue just as in \cite[Lemma 5.4]{AproduNagel}. In brief, one observes that on $C \times C_{p}$ one has an isomorphism \[ j_{p+1}^* \big( N_{p+1,L} \big) \ = \ \big( L \boxtimes N_{p,L} \big)(-D), \] where $D\subseteq C \times C_p$ is the image of $j_p: C \times C_{p-1} \hookrightarrow C \times C_p$. Therefore \[ \HH{0}{C\times C_p}{(j_{p+1})^*(B \boxtimes N_{p+1,L}) }\] is identified with \[ \ker \Big( \, \HH{0}{C \times C_p}{\mathcal{O}_C(B+ L) \boxtimes N_{p,L}} \longrightarrow \HH{0}{C \times C_{p-1}}{\mathcal{O}_C(B+2L) \boxtimes N_{p-1,L}}\, \Big), \] and the assertion follows. 
\end{proof} \begin{proof} [Proof of Lemma \ref{Serre.Vanishing.Lemma}] Given a divisor $A$ on $C$, the divisor $T_A =_{\text{def}} \sum pr_i^*(A)$ on the Cartesian product $C^{ p+1}$ descends to a divisor $S_A = S_{p+1,A}$ on $C_{p+1}$. For example, if $A = x_1 + \ldots + x_d$, then \[ S_A \ = \ C_{p, x_1} + \ldots + C_{p, x_d} \ \in \ \text{Div}(C_{p+1}) , \] where $C_{p,x}$ denotes the image of the map $C_p \hookrightarrow C_{p+1}$ given by $\xi \mapsto \xi +x$. One has $S_{A_1 + A_2} = S_{A_1} + S_{A_2}$, and $S_A$ is ample on $C_{p+1}$ if and only if $A$ is ample on $C$. Observe next that if $L$ is a line bundle on $C$, then $ N_{L+A}= N_L + S_A $ on $C_{p+1}$. This is well-known, but it can be checked directly from the definitions by observing that if $x \in C$ is a point then there is an exact sequence \[ 0 \longrightarrow E_L \longrightarrow E_{L(x)} \longrightarrow \mathcal{O}_{C_{p,x}} \longrightarrow 0 \] of sheaves on $C_{p+1}$. Now fix an ample divisor $A$ of degree $a$ on $C$ and a coherent sheaf $\mathcal{F}$ on $C_{p+1}$. By Fujita-Serre vanishing, there exists an integer $m_0 = m_0(\mathcal{F})$ such that if $P$ is any nef divisor on $C_{p+1}$, then \[ \HH{i}{C_{p+1}}{\mathcal{F}(m S_A + P )} \ = \ 0 \ \text{ for } \, i > 0 \tag{*} \] whenever $m \ge m_0$. Put \[ d_0 \ = \ d_0(\mathcal{F}) \ = \ (2g + p) + m_0a, \] and suppose that $\deg(L) \ge d_0$. Then $L = L_0 + m_0A$ where $L_0$ is $p$-very ample, and in particular $N_{L_0}$ is globally generated. Therefore \[ N_L \ = \ m_0 S_A + \textnormal{ ( nef ) }, \] and so (*) gives the required vanishing. \end{proof} \section{Complements} This section is devoted to some additional results, and a conjecture about what one might hope for in higher dimensions. We start by establishing an effective version of Theorem \ref{Kp1(B).Theorem}. Since the statement is presumably far from optimal we only sketch the proof. \begin{proposition} \label{Eff.Bound.Proposition} Assume that $B$ is $p$-very ample. 
Then $K_{p,1}(C,B;L) =0$ for every line bundle $L$ with \begin{equation} \label{Effective.Bound} \deg(L) \ > \ (p^2 + p + 2)(g-1) \, + \, (p+1)\deg(B). \end{equation} \end{proposition} \begin{proof} [Sketch of Proof] Keeping notation as in the proof of Theorem \ref{Kp1(B).Theorem}, one needs to prove that $\HH{1}{C_{p+1}}{M_B \otimes N_L} =0$ when $\deg(L)$ satisfies the stated bound. If $h^0(C, B) > 2(p+1)$, we replace $H^0(C, B)$ in \eqref{MBundle.Eqn} by a general subspace of dimension $2p+2$ to define a vector bundle $M_B^\prime$ of rank $p+1$ sitting in an exact sequence \[ 0 \longrightarrow M_B^\prime \longrightarrow M_B \longrightarrow \oplus\, {\mathcal{O}}_{C_{p+1}}\longrightarrow 0, \] and one is reduced to proving that $\HH{1}{C_{p+1}}{M_B^\prime \otimes N_L} =0$. Note that $M_B^\prime \otimes N_B$ is globally generated and that $\det M_B^\prime = -N_B$. We assert that if $L$ satisfies \eqref{Effective.Bound}, then \[ N_L - (p+1)N_B - K_{C_{p+1}} \ \text{ is ample} . \tag{*}\] Granting this, we see that if \eqref{Effective.Bound} holds, then \[ M_B^\prime\otimes N_L \ = \ \big( M_B^\prime \otimes N_B \big) \otimes \det(M_B^\prime \otimes N_B) \otimes K_{C_{p+1}} \otimes A \] where $A$ is ample, so the Griffiths vanishing theorem \cite[7.3.2]{PAG} applies. For (*), it is equivalent to check the statement after pulling back by the quotient $\pi : C^{p+1} \rightarrow C_{p+1}$. One has $\pi^* N_L= T_L - \Delta$, where $T_L = \sum pr_i^* L$ is the symmetrization of $L$ and $\Delta \in \text{Div}( C^{p+1})$ is the union of the pairwise diagonals. Since $K_{C_{p+1}} = N_{K_C}$, the claim (*) reduces with some computation to the fact that if $D$ is a divisor on $C$, then $T_D + \Delta$ is nef on $C^{p+1}$ if and only if $\deg D \ge p(g-1)$. 
\end{proof} \begin{remark} \label{Effective.Gonality} The Proposition guarantees that we can detect whether $K_C$ is $p$-very ample (or equivalently, whether $\textnormal{gon}(C) \ge p+2$) by the vanishing of $K_{p,1}(C, K_C;L)$ for any $L$ with \[ \deg(L) \ > \ (p^2 + 3p +3)(g-1). \] But in any event $\textnormal{gon}(C) \le \tfrac{g+3}{2}$, and it follows (with some computation) that the gonality of $C$ is determined by the weight one syzygies of $C$ with respect to any line bundle of degree $\ge g^3$. However, we expect that such cubic bounds are far from optimal: one hopes that it is enough that the degree of $L$ grows linearly in $g$. \qed \end{remark} As suggested by Schreyer, we observe next that in some cases one can use the proof of Theorem \ref{Hilb.Poly.Thm} to get more information about the polynomial $P(d)$ appearing there. We focus on the most interesting case $B =K_C$, and content ourselves with illustrating the method in a simple instance. Specifically, suppose that $C$ carries finitely many pencils \[ \alpha_1, \ldots, \alpha_s \ \in \ W^1_{p+1}(C) \] of degree $p+1$, while no other divisors of degree $p+1$ on $C$ move in non-trivial linear series. We assume also that each $\alpha_i$ is (scheme-theoretically) an isolated point in $W^1_{p+1}(C)$ in the sense that the multiplication maps \[ H^0(\alpha_i) \otimes H^0(K_C - \alpha_i) \longrightarrow H^0(K_C) \tag{*} \] are surjective for each $i$.\footnote{Recall that the Gieseker-Petri theorem asserts that the hypothesis holds automatically for a general curve of genus $g = 2p$, in which case $s$ is given by a certain Catalan number.} \begin{proposition} \label{Enumerative.Prop} Under the hypotheses just stated, take $L_d = d \cdot x$ for some point $x \in C$. Then for $d \gg 0$, \[ \dim K_{p,1}(C, K_C; L_d) \ = \ s \cdot d + \textnormal{ ( constant ) }. 
\] \end{proposition} \noindent We note that O'Dorney and Yang \cite{OY} have made some interesting computations of the dimensions of $ K_{p,0}(C, K_C; L_d)$ on a general curve, including determining the leading coefficient of the resulting polynomial. \begin{proof}[Sketch of Proof of Proposition \ref{Enumerative.Prop}] Note that each $\alpha_i$ determines a copy of $\mathbf{P}^1 = \linser{\alpha_i}$ sitting in the symmetric product $C_{p+1}$, and these are precisely the positive-dimensional fibres of the Abel-Jacobi map \[ u = u_{p+1} : C_{p+1} \longrightarrow \textnormal{Jac}^{p+1}(C).\] Now when $B = K_C$, the evaluation \eqref{ev.B.equation} is identified with the coderivative $du$ of $u$, and by a well-known computation \cite[Chapt. IV.4]{ACGH}, the condition (*) implies that \[ \textnormal{coker} \, du \ = \ \oplus_{i=1}^s \,\Omega^1_{\linser{\alpha_i}}. \] In particular, the sheaf $\mathcal{F}_{K_C}$ appearing in \eqref{ThmCEqn} has rank one along each $\mathbf{P}^1 = \linser{\alpha_i}$. On the other hand, if $L_d = d \cdot x$ then the divisor $N_d$ has degree $d + \textnormal{ (constant) }$ along $\linser{\alpha_i}$, so each of these copies of $\mathbf{P}^1$ contributes a term of the same shape to the Hilbert polynomial of $\mathcal{F}_{K_C}$. \end{proof} Finally, we make some remarks about what one might expect in higher dimensions. Let $X$ be a smooth projective variety of dimension $n$, and let $L_d = dA + E$ where $A$ is an ample divisor and $E$ an arbitrary divisor on $X$. Given a line bundle $B$ on $X$, one would like to give geometric conditions on $B$ in order that \begin{equation} \label{Kp1.Van.Higher.Dim} K_{p,1}(X, B; L_d) \ = \ 0 \ \ \text{for all } d \gg 0: \end{equation} as explained above and in \cite[Problem 7.2]{ASAV} this is the most interesting group from an asymptotic viewpoint. 
It is conceivable that it suffices to assume that $B$ is $p$-very ample in the sense that $H^0(B)$ imposes independent conditions on every subscheme $\xi \subseteq X$ of length $p+1$, but this seems out of reach. On the other hand, recall that $B$ is said to be $p$-\textit{jet very ample} if for every effective zero-cycle \[ z \ = \ a_1 x_1 + \ldots + a_s x_s \] of degree $p+1$ on $X$, the natural mapping \[ \HH{0}{X}{B} \longrightarrow \HH{0}{X}{B \otimes \mathcal{O}_X/ \mathfrak{m}_1^{a_1} \cdot \ldots \cdot \mathfrak{m}_s^{a_s} } \] is surjective, where $\mathfrak{m}_i \subseteq \mathcal{O}_X$ is the ideal sheaf of $x_i$. When $\dim X = 1$ this is the same as $p$-very ample, but in higher dimensions the condition on jets is stronger. \begin{conjecture} If $B$ is $p$-jet very ample, then \eqref{Kp1.Van.Higher.Dim} holds. \end{conjecture} \noindent It is very possible that the ideas of \cite{Yang} will be helpful for this.
\section{Preliminaries} \label{sec:FPL} In this section, we give a brief introduction to the Fokker-Planck-Landau equation, the Fokker-Planck collision operator and some related properties. \subsection{Fokker-Planck-Landau equation} As in many other kinetic theories in physics, the state of a specific species $\alpha$ is described by the distribution function $f_{\alpha}(t, \bx, \bv)$, a seven-dimensional function of the time $t$, the space position $\bx \in \Omega \subset \bbR^3$ and the microscopic velocity $\bv \in \bbR^3$. The distribution function $f_{\alpha}$ determines the density $\rho_{\alpha}$, the macroscopic velocity $\bu_{\alpha}$ and the temperature $T_{\alpha}$ of species $\alpha$ through \begin{equation} \label{eq:rlt} \rho_{\alpha}=\int_{\mathbb{R}^3}f_{\alpha}(t,\bx,\bv) \dd \bv, \qquad \rho_{\alpha}\bu_{\alpha}=\int_{\mathbb{R}^3}\bv f_{\alpha}(t,\bx,\bv) \dd \bv, \qquad \frac{3}{2} \rho_{\alpha}T_{\alpha} = \frac{1}{2}\int_{\mathbb{R}^3}|\bv-\bu_{\alpha}|^2f_{\alpha}(t,\bx, \bv)\dd \bv. \end{equation} The Fokker-Planck-Landau (FPL) equation describes the time evolution of the distribution functions for charged particles in a nonequilibrium plasma. The FPL equation with respect to the species $\alpha$ has the form \begin{equation} \label{eq:FPL} \pd{f_{\alpha}}{t} + \bv \cdot \nabla_{\bx} f_{\alpha} + \bF \cdot \nabla_{\bv} f_{\alpha} = \mQ[f_{\alpha}], \end{equation} where the force field $\bF=\bF(t, \bx)$ is produced either externally or self-consistently. Here, we consider only the case where $\bF$ is generated by the self-consistent electric field $ \bE(t, \bx)$, which is coupled to the distribution function through the Poisson equation \cite{xiong2020}: \begin{equation} \label{eq:poisson} \bE(t, \bx) = -\nabla_{\bx} \psi(t, \bx), \qquad -\Delta_{\bx}\psi = \sum_{\eta} q_{\eta} \int_{\bbR^3} f_{\eta}(\bv) \dd \bv, \end{equation} where $q_{\eta}$ is the electric charge of species $\eta$. 
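As a quick sanity check (ours, not part of the scheme developed below), the moment relations \eqref{eq:rlt} can be verified numerically in a one-dimensional analogue, where $\rho = \int f \dd v$, $\rho u = \int v f \dd v$ and $\rho T = \int (v-u)^2 f \dd v$; all parameter values are illustrative:

```python
import numpy as np

# 1D analogue of the moment relations: density, bulk velocity and
# temperature recovered from a sampled Maxwellian by simple quadrature.
# rho_ex, u_ex, T_ex are illustrative values.
rho_ex, u_ex, T_ex = 1.3, 0.4, 0.9

v = np.linspace(-10.0, 10.0, 4001)        # velocity grid, wide enough for decay
dv = v[1] - v[0]
f = rho_ex / np.sqrt(2*np.pi*T_ex) * np.exp(-(v - u_ex)**2 / (2*T_ex))

rho = np.sum(f) * dv                       # rho   = int f dv
u   = np.sum(v*f) * dv / rho               # rho u = int v f dv
T   = np.sum((v - u)**2 * f) * dv / rho    # rho T = int (v-u)^2 f dv (1D)

assert np.allclose([rho, u, T], [rho_ex, u_ex, T_ex], atol=1e-8)
```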
The collision terms \begin{equation} \label{eq:col} \mQ[f_{\alpha}] = \sum_{\eta} \nu_{\eta} \mQ_{\eta}[f_{\alpha}, f_{\eta}] \end{equation} describe the collisions between particles of species $\alpha$ and $\eta$, and are discussed in detail in the next section. The non-negative parameter $\nu_{\eta}$ is the collision frequency. For simplicity, we restrict the study to a plasma consisting only of electrons and ions, which are referred to as the species $\alpha$ and $\beta$ respectively in the following. \subsection{Collision operator} \label{sec:FPL operator} The collision operator $\mQ_{\eta}$, $\eta = \alpha, \beta$ in \eqref{eq:col} is called the Fokker-Planck-Landau (FPL) collision operator, which is obtained from the Boltzmann collision operator in the limit of grazing collisions \cite{Desvillettes1992}. It has the following form: \begin{equation} \label{eq:collision_operator} \mQ_{\eta}[f_{\alpha}, f_{\eta}] = \nabla_{\bv} \cdot \left[\int_{\bbR^3} {\bf A}(\bv - \bv')\Big(\nabla_{\bv} f_{\alpha}(\bv)f_{\eta}(\bv') - \nabla_{\bv'} f_{\eta}(\bv') f_{\alpha}(\bv)\Big) \dd \bv' \right], \qquad \eta = \alpha, \beta, \end{equation} where the collision kernel $\bf A(\cdot)$, a $3\times 3$ symmetric positive semi-definite matrix \begin{equation} \label{eq:A} {\bf A}(\bv)=\Psi(|\bv|)\Pi(\bv), \end{equation} reflects the interaction between particles. Here $\Pi(\bv)$ is the projection onto the space orthogonal to $\bv$, given by $\Pi_{ij}(\bv)=\delta_{ij}-\frac{v_iv_j}{|\bv|^2}$. For the inverse-power-law (IPL) model, $\Psi(|\bv|)$ is the non-negative radial function \begin{equation} \Psi(|\bv|)=\Lambda|\bv|^{\gamma+2}, \label{eq:ipl} \end{equation} where $\Lambda$ is a positive constant and $\gamma$ is the exponent of the inverse power law. As for the Boltzmann equation, one obtains the hard potential model when $\gamma > 0$ and the soft potential model when $\gamma < 0$. 
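As an aside (our illustration), the structural properties of the kernel \eqref{eq:A} are easy to verify numerically: ${\bf A}(\bv)$ is symmetric, annihilates $\bv$ itself, and, since $\Psi \geq 0$ and $\Pi(\bv)$ is an orthogonal projection, it is positive semi-definite. A minimal sketch with illustrative parameters ($\Lambda = 1$, $\gamma = -3$):

```python
import numpy as np

def landau_kernel(v, Lam=1.0, gamma=-3):
    """Collision kernel A(v) = Psi(|v|) Pi(v) with Psi = Lam * |v|^(gamma+2).
    Lam and gamma are illustrative parameters (gamma = -3 is the Coulomb case)."""
    v = np.asarray(v, dtype=float)
    s = np.linalg.norm(v)
    Pi = np.eye(3) - np.outer(v, v) / s**2    # projection onto v-orthogonal plane
    return Lam * s**(gamma + 2) * Pi

A = landau_kernel([0.3, -1.2, 0.5])
assert np.allclose(A, A.T)                         # symmetric
assert np.allclose(A @ np.array([0.3, -1.2, 0.5]), 0.0)  # A(v) v = 0
assert np.all(np.linalg.eigvalsh(A) >= -1e-12)     # positive semi-definite
```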
There are two special cases: the model of Maxwell molecules when $\gamma = 0$, and the model with Coulomb interactions when $\gamma = -3$ \cite{Filbet2002numerical}. For the different species of electrons and ions, the collision operator \eqref{eq:collision_operator} reduces to different forms, which are discussed in detail below. \subsubsection{Electron-electron collision} \label{sec:ee operator} The quadratic operator $\mQ_{\alpha}[f_{\alpha}, f_\alpha]$ describes the electron-electron collisions; its form is obtained by taking $\eta = \alpha$ in the FPL collision operator \eqref{eq:collision_operator}. With a slight abuse of notation, the subscript $\alpha$ referring to electrons is omitted from now on. Thus, the distribution function $f_{\alpha}$ is shortened to $f$ and the collision operator $\mQ_{\alpha}[f_{\alpha}, f_\alpha]$ is shortened to $\mQ[f, f]$ as \begin{equation} \label{eq:collision_ee} \mQ[f, f] = \nabla_{\bv} \cdot \left[\int_{\bbR^3} {\bf A}(\bv - \bv')\Big(\nabla_{\bv} f(\bv)f(\bv') - \nabla_{\bv'} f(\bv') f(\bv)\Big) \dd \bv' \right], \end{equation} with the collision kernel ${\bf A}(\cdot)$ defined in \eqref{eq:A}. The steady-state solution of the FPL equation is the equilibrium distribution, which takes the following Maxwellian form: \begin{equation} \label{eq:max} \mM(\bv) = \frac{\rho}{(2 \pi T)^{3/2}} \exp\left(-\frac{|\bv - \bu|^2}{2T}\right), \end{equation} where $\rho$, $\bu$ and $T$ are the density, macroscopic velocity and temperature of the electrons, respectively, as defined in \eqref{eq:rlt}. Moreover, this operator conserves mass, momentum and energy: \begin{equation} \label{eq:conserve} \int_{\bbR^3} \mQ[f, f] \left( \begin{array}{c} 1 \\ \bv \\ |\bv|^2 \end{array} \right) \dd\bv = 0. 
\end{equation} Due to the complicated form of the FPL collision operator, several simplified operators have been introduced to approximate the original quadratic operator $\mQ[f, f]$, for example the linearized collision operator \begin{equation} \label{eq:linear_col} \mL[f] = \mQ[f, \mM]+ \mQ[\mM, f], \end{equation} and the diffusive Fokker-Planck (FP) operator \cite{JinYan2011} \begin{equation} \label{eq:FP} \mP_{\rm FP}[f] = \nabla_{\bv} \cdot \left[\mM \nabla_{\bv} \left(\frac{f}{\mM}\right)\right]. \end{equation} \subsubsection{Electron-ion collision} The collisions between electrons and ions are described by the operator $\mQ_{\beta}[f, f_{\beta}]$, which is obtained by taking $\eta = \beta$ in \eqref{eq:collision_operator}. Since the electrons have a much smaller mass and a much higher velocity than the ions, the ions may be collectively treated as a stationary positively-charged background. Furthermore, the temperature of the ions $T_{\beta}$ is negligible compared to that of the electrons $T$. Thus, the distribution function of the ions can be given simply by a Dirac measure in the velocity space \cite{ZhangGamba2017} as \begin{equation} \label{eq:density_beta} f_{\beta}(t, \bx, \bv) = \rho_{\beta}(t, \bx)\delta_{0}(\bv - \bu_{\beta}(t, \bx)), \end{equation} where $\rho_{\beta}$ and $\bu_{\beta}$ are the density and macroscopic velocity of the ions. Consequently, the collision operator $\mQ_{\beta}[f, f_\beta]$ reduces to \begin{equation} \label{eq:col_beta} \mQ_{\beta}[f] \triangleq \mQ_{\beta}[f, f_\beta] = \rho_{\beta}\nabla_{\bv} \cdot \left[{\bf A}(\bv - \bu_{\beta})\nabla_{\bv}f\right]. \end{equation} One can easily check that this reduced operator still conserves mass and energy: \begin{equation} \label{eq:mass_energy_beta} \int_{\bbR^3} \mQ_{\beta}[f] \dd \bv = 0, \qquad \int_{\bbR^3} |\bv - \bu_{\beta}|^2\mQ_{\beta}[f] \dd \bv = 0. \end{equation} We refer to \cite{Filbet, dimarco2015} for more details on this reduced collision operator. 
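As an illustrative aside, in one velocity dimension the diffusive FP operator \eqref{eq:FP} can be discretized with finite differences, and one can check the easily derived fact (using $H_n'(w) = n H_{n-1}(w)$) that the Hermite basis functions centered at $(u, T)$ are eigenfunctions of $\mP_{\rm FP}$; for the first basis function the eigenvalue is $-1/T$. All numerical values below are illustrative:

```python
import numpy as np

u, T = 0.2, 1.5                        # illustrative expansion center
v = np.linspace(u - 8*np.sqrt(T), u + 8*np.sqrt(T), 20001)
w = (v - u) / np.sqrt(T)
M = np.exp(-w**2/2) / np.sqrt(2*np.pi*T)   # 1D Maxwellian with rho = 1

def P_FP(f):
    """1D diffusive Fokker-Planck operator d/dv [ M d/dv (f/M) ]."""
    return np.gradient(M * np.gradient(f/M, v), v)

# First Hermite basis function centered at (u, T); claim: P_FP[H1] = -(1/T) H1
H1 = (v - u)/T * M
lhs, rhs = P_FP(H1), -(1.0/T) * H1
inner = slice(100, -100)                   # drop boundary-stencil errors
assert np.allclose(lhs[inner], rhs[inner], atol=1e-6)
```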
\section{Introduction} The Fokker-Planck-Landau (FPL) equation is used to describe the evolution of collisional plasma systems at the kinetic level \cite{Goudon1997, Degond1994}. It is a six-dimensional integro-differential equation, which models binary collisions between charged particles with long-range Coulomb interactions. The FPL equation is the limit of the Boltzmann equation when all binary collisions are grazing \cite{Degond1992}. It was originally derived by Landau \cite{Landau1936} and was later derived independently in the Fokker-Planck form \cite{PhysRev.107.1}. The high dimensionality of the FPL equation is a bottleneck for its numerical simulation. Although several simplified models of the original FPL equation have been developed, it remains a challenge to solve it both fast and accurately. One of the major difficulties in solving the FPL equation numerically is the complexity of the Fokker-Planck collision operator, which is a nonlinear integro-differential operator in the microscopic velocity space. Most statistical methods, such as the DSMC method \cite{bird}, are of limited use for the FPL equation \cite{NANBU1998639}, since the FPL collision operator models the infinite-range potential interactions within the plasma. Several deterministic methods have been used to solve the FPL equation or its simplifications. The entropic scheme, which guarantees a nondecreasing entropy, is well studied in \cite{Implicit2005, BuetConservative1998, BEREZIN1987163, Degond1994}. To handle the stiffness of the collision operator, an asymptotic-preserving (AP) strategy is studied in \cite{dimarco2015}, while a conservative spectral method is adopted in \cite{ZhangGamba2017}. A positivity-preserving scheme for the linearized FPL equation was proposed in \cite{partical1970}; it was later modified to preserve energy \cite{Relaxation1985} and then extended to the two-dimensional FPL equation with cylindrical geometry \cite{Fokker2014}. 
Several other numerical methods, such as the multipole expansions \cite{Lemou1998} and multigrid techniques \cite{BUET1997}, have also been proposed. In \cite{TAITANO2015357}, a fully implicit method is proposed for the multidimensional Rosenbluth-Fokker-Planck equation. The finite element methods in \cite{ZAKI1988184, ZAKI1988200} and the semi-Lagrangian schemes in \cite{CROUSEILLES20101927, sonnendrucker1999semi, Qiu2011positivity, Xiong2014High} have also been used to solve the Vlasov equations. Moreover, the FPL equation with stochasticity is studied in \cite{Jin2010}. The spectral method has been widely used to solve the Vlasov equation \cite{GuoTang2000, Holloway1996Spectral}. In \cite{PARESCHI20002, Filbet2002numerical}, a spectral method based on the Fourier expansion is implemented for the nonhomogeneous FPL equation. Moreover, the Hermite spectral method is utilized in \cite{Bourdiec2006Numerical, Joseph2015, Gibelli2006Spectral} to discretize the microscopic velocity space, and has been applied to the FPL and Boltzmann equations \cite{Hu2020Numerical}. One advantage of the Hermite spectral method is that its first few moments have explicit physical meanings. For example, the density, the macroscopic velocity and the temperature can be easily derived from the first three expansion coefficients \cite{PARESCHI20002}, which indicates that they may be captured more precisely by a careful design of the numerical algorithm. However, the high dimensionality and complexity of the quadratic FPL collision model still pose great challenges when adopting the Hermite spectral method to simulate the FPL equation numerically. In this paper, a numerical method based on the Hermite spectral method is proposed for the nonhomogeneous FPL equation. The distribution functions in the FPL equation are approximated by a series of basis functions derived from Hermite polynomials. 
The Strang splitting method is then adopted for the FPL equation, which is split into collision, convection and acceleration steps. Unlike the general Hermite spectral method \cite{PARESCHI20002}, we choose different expansion centers in the Hermite expansion for the different numerical steps. For the collision step, the standard expansion center \cite{FPL2018} is chosen so as to utilize the precalculated expansion coefficients of the quadratic collision term. To further reduce the computational cost of the quadratic FPL collision operator, a reduced collision model is constructed by combining the quadratic collision operator and the diffusive FP operator, whose effectiveness is demonstrated by the numerical examples below. For the acceleration step, the expansion center is chosen as the local macroscopic velocity and temperature, under which the effect of the external force field reduces to an ODE for the macroscopic velocity. A projection algorithm introduced in \cite{Hu2020Numerical} is utilized to handle the projections between distribution functions with different expansion centers. In the numerical simulation, the convection, collision and acceleration steps are solved successively. Both the linear and nonlinear Landau damping problems are tested, and the decay rate of the electrostatic energy and the effect of the collision frequency are studied. Moreover, the two-stream instability and the bump-on-tail instability are also simulated to validate this new method. The rest of this paper is organized as follows: Section \ref{sec:FPL} introduces the FPL equation, the FPL collision operator and several related properties. The detailed spectral method used to approximate the distribution function is introduced in Section \ref{sec:method}. The series expansion for the FPL collision operator and the reduced collision model are explained in Section \ref{sec:coll}. The numerical algorithm is proposed in Section \ref{sec:model}. 
Several numerical examples are exhibited in Section \ref{sec:num}. The conclusions and future work are stated in Section \ref{sec:conclusion}, with some supplementary statements given in Appendix \ref{app}. \section{Series expansion of the FPL equation} \label{sec:method} In this section, we introduce in detail the series expansion used to approximate the distribution function, including the basis functions, which are constructed from Hermite polynomials, and several related properties. In the numerical scheme, different expansion centers are utilized in the Hermite expansion. A fast algorithm for the projections between distribution functions with different expansion centers is also stated in this section. \subsection{Distribution function} The Hermite expansion has proved successful in numerical methods for the Boltzmann equation \cite{Hu2020Numerical}, and the exact expansion coefficients for the quadratic FPL collision operator have been computed in \cite{FPL2018}. Thus, the Hermite expansion is also adopted here to approximate the FPL equation. To be precise, the distribution function $f$ is discretized as \begin{equation} \label{eq:expansion} f(t, \bx, \bv) = \sum_{\boldsymbol{i} \in \bbN^3} f_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}(t, \bx) \mH_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}(\bv), \end{equation} where the basis functions $\mH_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}(\bv)$ are defined as \begin{equation} \label{eq:basis} \mH_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}(\bv) = \tilde{T}^{-\frac{|\boldsymbol{i}|}{2}} H_{\boldsymbol{i}}\left(\frac{\bv - \tilde{\bu}}{\sqrt{\tilde{T}}} \right) \frac{1}{(2 \pi \tilde{T})^{3/2}} \exp\left(-\frac{|\bv - \tilde{\bu}|^2}{2 \tilde{T}}\right) \end{equation} and $\boldsymbol{i}$ refers to the multi-index $(i_1,i_2,i_3)$. 
We also adopt the following notations for simplicity: \begin{equation*} |\boldsymbol{i}|=i_1+i_2+i_3,\qquad \boldsymbol{i}!=i_1!i_2!i_3!, \qquad \pd{^{\boldsymbol{i}}}{\bv^{\boldsymbol{i}}}=\pd{^{i_1+i_2+i_3}}{v_1^{i_1}v_2^{i_2}v_3^{i_3}}. \end{equation*} In \eqref{eq:basis}, $H_{\boldsymbol{i}}(\bv)$ denotes the Hermite polynomial \begin{equation} \label{eq:Hermite} H_{\boldsymbol{i}}(\bv) = (-1)^{|\boldsymbol{i}|} \exp\left(\frac{|\bv|^2}{2}\right) \pd{^{\boldsymbol{i}}}{\bv^{\boldsymbol{i}}} \left[\exp\left(-\frac{|\bv|^2}{2}\right) \right], \end{equation} and the two parameters $\tilde{\bu} \in \bbR^3$ and $\tilde{T} \in \bbR_{+}$, namely the expansion center, have the same physical dimensions as $\bv$ and $T$. The coefficients $f_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}$ can be explicitly expressed by \begin{equation} \label{eq:coef} f_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}(t, \bx)=\dfrac{\tilde{T}^{\frac{|\boldsymbol{i}|}{2}}}{\boldsymbol{i}!}\int_{\mathbb{R}^3} f(t,\bx,\bv)H_{\boldsymbol{i}}\left(\dfrac{\bv-\tilde{\bu}} {\sqrt{\tilde{T}}}\right)\dd\bv, \end{equation} which follows from the orthogonality of the Hermite polynomials \begin{equation} \label{eq:ortho} \int_{\bbR^3}H_{\boldsymbol{i}}(\bv)H_{\boldsymbol{j}}(\bv) \frac{1}{(2\pi)^{3/2}}\exp\left(-\frac{|\bv|^2}{2}\right) \dd\bv=\begin{cases} \boldsymbol{i}!,&\text{if } \boldsymbol{i}=\boldsymbol{j},\\ 0,&\text {otherwise.} \end{cases} \end{equation} Several lower-order moments have close relations to macroscopic variables. 
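The polynomials \eqref{eq:Hermite} are the probabilists' Hermite polynomials, which in one dimension satisfy the three-term recurrence $H_{n+1}(x) = x H_n(x) - n H_{n-1}(x)$. As a quick check (our sketch), the one-dimensional orthogonality relation $(2\pi)^{-1/2} \int_{\bbR} H_m(x) H_n(x) e^{-x^2/2} \dd x = n!\, \delta_{mn}$ can be verified by Gauss-Hermite quadrature:

```python
import math
import numpy as np

def He(n, x):
    """Probabilists' Hermite polynomial via He_{n+1} = x He_n - n He_{n-1}."""
    h0, h1 = np.ones_like(x), x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x*h1 - k*h0
    return h1

# Gauss-Hermite nodes/weights for weight exp(-t^2); substitute x = sqrt(2) t
t, wts = np.polynomial.hermite.hermgauss(60)
x = np.sqrt(2.0) * t

for m in range(5):
    for n in range(5):
        # (2 pi)^(-1/2) * int He_m He_n exp(-x^2/2) dx = n! * delta_mn
        val = math.sqrt(2.0)*np.sum(wts * He(m, x)*He(n, x)) / math.sqrt(2*math.pi)
        assert abs(val - (math.factorial(n) if m == n else 0.0)) < 1e-8
```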
For example, relations \eqref{eq:rlt} with respect to the density $\rho$, the macroscopic velocity $\bu$ and the temperature $T$ of the electrons can be rewritten in terms of the expansion coefficients as \begin{equation} \label{eq:relation} \begin{gathered} \rho = f_{\boldsymbol 0}^{[\tilde{\bu}, \tilde{T}]}, \qquad \rho \bu = \rho \tilde{\bu} + \left(f_{\be_1}^{[\tilde{\bu}, \tilde{T}]}, f_{\be_2}^{[\tilde{\bu}, \tilde{T}]}, f_{\be_3}^{[\tilde{\bu}, \tilde{T}]}\right)^T, \\ \frac{1}{2}\rho |\bu|^2 + \frac{3}{2}\rho T = \rho \bu \cdot \tilde{\bu} - \frac{1}{2} \rho |\tilde{\bu}|^2 + \frac{3}{2}\rho\tilde{T} +\sum_{d=1}^3 f_{2\be_d}^{[\tilde{\bu}, \tilde{T}]}. \end{gathered} \end{equation} Some other related macroscopic quantities, such as the shear stress and the heat flux, can also be expressed in terms of the expansion coefficients $f_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}$; we refer to \cite{FPL2018} for further details. Moreover, it is worth mentioning that if the expansion parameters are chosen as the local macroscopic velocity and temperature of the particles, i.e. $\tilde{\bu} = \bu$ and $\tilde{T} = T$, then \eqref{eq:relation} yields \begin{equation} \label{eq:relation_12} f_{\be_d}^{[\bu, T]} = 0, \quad d = 1, 2, 3, \qquad \sum\limits_{d = 1}^3f_{2\be_d}^{[\bu, T]} = 0. \end{equation} \subsection{Projections between different expansion centers} One should be aware that different expansion centers lead to different series expansions in \eqref{eq:expansion} and can be selected to meet different needs. Based on a prior understanding of the problem, the expansion centers $\tilde{\bu}$ and $\tilde{T}$ are chosen to accelerate the convergence of the series expansion \eqref{eq:expansion}. The selection of expansion centers is discussed in many studies. 
The normalized Hermite basis, i.e. the expansion center $\tilde{\bu} = 0$ and $\tilde{T} = 1$, is adopted in \cite{UTH, Joseph2015, xiong2020}, while the local macroscopic variables $\tilde{\bu} = \bu(t, \bx)$ and $\tilde{T} = T(t, \bx)$ defined in \eqref{eq:rlt} are chosen as the expansion center in \cite{Wang} and for some related problems \cite{Hu2020Numerical}. Here, instead of fixing the expansion center throughout, as we mainly focus on the reduction in the complexity of the series expansions and the feasibility of the numerical method, we select an appropriate one for each numerical step, which is further explained in Sections \ref{sec:coll} and \ref{sec:model}. To achieve efficient projections between distribution functions with different expansion centers, we adopt the algorithm proposed in \cite{Hu2020Numerical}, which is described by the following theorem: \begin{theorem} \label{thm:project} Suppose a distribution function $f(\bv)$ in the velocity space satisfies \begin{equation} \int_{\bbR^3} (1+|\bv|^M) |f(\bv)| \dd \bv < \infty \end{equation} for some $M\in \bbZ_+$. Define the expansion coefficients of $f(\bv)$ as \begin{equation} \label{eq:psi} \begin{aligned} & f_{\boldsymbol{i}}^{[\tilde{\bu}^{(1)},\tilde{T}^{(1)}]} = \frac{(\tilde{T}^{(1)})^{\frac{|\boldsymbol{i}|}{2}}}{\boldsymbol{i}!} \int_{\bbR^3} H_{\boldsymbol{i}}\left( \frac{\bv - \tilde{\bu}^{(1)}}{\sqrt{\tilde{T}^{(1)}}}\right) f(\bv) \dd \bv, \\ & f_{\boldsymbol{i}}^{[\tilde{\bu}^{(2)},\tilde{T}^{(2)}]} = \frac{(\tilde{T}^{(2)})^{\frac{|\boldsymbol{i}|}{2}}}{\boldsymbol{i}!} \int_{\bbR^3} H_{\boldsymbol{i}}\left( \frac{\bv - \tilde{\bu}^{(2)}}{\sqrt{\tilde{T}^{(2)}}}\right) f(\bv) \dd \bv, \end{aligned} \end{equation} with $\tilde{\bu}^{(s)}=\left(\tilde{u}^{(s)}_1, \tilde{u}^{(s)}_2,\tilde{u}^{(s)}_3\right) \in \bbR^3$, $s = 1, 2$, and $\tilde{T}^{(1)}, \tilde{T}^{(2)} > 0$. 
Then, for any $\boldsymbol{i} \in \bbN^3$ satisfying $|\boldsymbol{i}|\leqslant M$, we have \begin{equation} f_{\boldsymbol{i}}^{[\tilde{\bu}^{(2)},\tilde{T}^{(2)}]} = \sum_{l = 0}^{|\boldsymbol{i}|} \phi_{\boldsymbol{i}}^{(l)}. \end{equation} Here $\phi_{\boldsymbol{i}}^{(l)}$ is defined recursively by \begin{equation} \label{eq:tilde_psi} \phi_{\boldsymbol{i}}^{(l)} = \begin{cases} f_{\boldsymbol{i}}^{[\tilde{\bu}^{(1)},\tilde{T}^{(1)}]}, &l = 0, \\ \dfrac{1}{l} \sum\limits_{d=1}^3 \left(\left(\tilde{u}^{(1)}_{d} - \tilde{u}^{(2)}_{d}\right) \phi_{\boldsymbol{i} - \be_d}^{(l-1)} + \frac{1}{2}\left(\tilde{T}^{(1)} - \tilde{T}^{(2)}\right) \phi_{\boldsymbol{i} - 2\be_d}^{(l-1)} \right), & 1 \leqslant l \leqslant |\boldsymbol{i}|, \end{cases} \end{equation} where terms with negative indices are treated as zero. \end{theorem} We refer to Theorem 3.1 in \cite{Hu2020Numerical} for the proof and more details of this projection algorithm. \section{Series expansion of the collision operators and the reduced collision model} \label{sec:coll} So far, we have obtained the series approximation to the distribution function. In this section, the FPL collision operator \eqref{eq:collision_operator} is expanded using the same basis functions. We derive the series expansions and provide algorithms to compute the expansion coefficients of the quadratic collision operator $\mQ[f, f]$ \eqref{eq:collision_ee}, its simplified approximation $\mP_{\rm FP}$, and the collision operator between different species $\mQ_{\beta}[f]$ \eqref{eq:col_beta}. Moreover, since the computational cost of the full quadratic collision operator \eqref{eq:collision_operator} is unaffordable, a reduced collision model is built based on the expansion coefficients to lower this cost. 
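The recursive projection of Theorem \ref{thm:project} is cheap to implement with coefficients stored in a dictionary over multi-indices. In the sketch below (ours, not from \cite{Hu2020Numerical}), the increments are taken as the old center minus the new one; this is the convention consistent with the moment relations \eqref{eq:relation}, e.g. projecting a Maxwellian stored as the single coefficient $f_{\boldsymbol{0}} = \rho$ must yield $f_{\be_d} = \rho\,(\tilde{u}^{(1)}_d - \tilde{u}^{(2)}_d)$:

```python
from itertools import product

def project(f1, u1, T1, u2, T2, M):
    """Project Hermite coefficients f1 (dict over multi-indices (i1,i2,i3)
    with |i| <= M) from expansion center (u1, T1) to (u2, T2) by the
    level-by-level recursion; terms with negative indices are zero."""
    du = [u1[d] - u2[d] for d in range(3)]    # old center minus new center
    dT = T1 - T2
    e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    idx = [i for i in product(range(M + 1), repeat=3) if sum(i) <= M]
    phi = {i: f1.get(i, 0.0) for i in idx}    # level l = 0
    f2 = {i: phi[i] for i in idx}
    for l in range(1, M + 1):
        new = {}
        for i in idx:
            s = 0.0
            for d in range(3):
                im1 = tuple(i[k] - e[d][k] for k in range(3))
                im2 = tuple(i[k] - 2 * e[d][k] for k in range(3))
                s += du[d] * phi.get(im1, 0.0) + 0.5 * dT * phi.get(im2, 0.0)
            new[i] = s / l
        phi = new
        for i in idx:
            if l <= sum(i):                   # sum runs over 0 <= l <= |i|
                f2[i] += phi[i]
    return f2
```

The cost is $\mO(M^4)$ in the total degree, which matches the projection cost quoted later for the numerical scheme.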
\subsection{Quadratic collision operator $\mQ[f, f]$} \label{sec:quadoperator} As stated in Section \ref{sec:FPL operator}, the major difficulty in handling the original FPL collision operator, or even in solving the FPL equation, is the complexity of the quadratic collision operator $\mQ[f, f]$. In our method, both precalculation and model reduction are employed to address this difficulty. The quadratic collision operator $\mQ[f, f]$ is expanded in the same basis as the distribution function: \begin{equation} \label{eq:expansion_q} \mQ[f, f](t, \bx, \bv) = \sum_{\boldsymbol{i} \in \bbN^3} Q_{\boldsymbol{i}}^{[\tilde{\bu},\tilde{T}]}(t, \bx) \mH_{\boldsymbol{i}}^{[\tilde{\bu},\tilde{T}]}(\bv). \end{equation} By the orthogonality \eqref{eq:ortho}, the coefficients are calculated as \begin{equation} \label{eq:qcoef} Q_{\boldsymbol{i}}^{[\tilde{\bu},\tilde{T}]}(t, \bx)=\frac{\tilde{T}^{\frac{|\boldsymbol{i}|}{2}}}{\boldsymbol{i}!} \int_{\mathbb{R}^3}H_{\boldsymbol{i}}\left(\frac{\s{v}-\tilde{\bu}}{\sqrt{\tilde{T}}}\right) \mQ[f, f](t, \bx, \bv) \dd\bv. \end{equation} In our previous work \cite{FPL2018}, an algorithm was proposed to evaluate these coefficients in the standard case $\rho=1$, $\tilde{\bu}=\bz$ and $\tilde{T}=1$, where the normalized Hermite basis is used to approximate the distribution function. To apply those results, the same expansion center is adopted here. The approximations to the distribution function \eqref{eq:expansion} and the collision operator \eqref{eq:expansion_q} then reduce to \begin{align} \label{eq:expansion1} f(t,\bx,\bv)&=\sum_{\boldsymbol{i}\in\bbN^3}f_{\boldsymbol{i}}^{[\bz,1]}(t,\bx)\mH_{\boldsymbol{i}}^{[\bz,1]}(\bv),\\ \label{eq:expansion_q1} \mQ[f,f](t,\bx,\bv)&=\sum_{\boldsymbol{i}\in\bbN^3}Q_{\boldsymbol{i}}^{[\bz,1]}(t,\bx)\mH_{\boldsymbol{i}}^{[\bz,1]}(\bv). \end{align} The superscript $[\bz, 1]$ is omitted hereafter whenever the expansion center $\tilde{\bu} = \bz$ and $\tilde{T} =1$ is used. 
Thus, in this case, the coefficients \eqref{eq:qcoef} reduce to \begin{equation} \label{eq:qcoef1} Q_{\boldsymbol{i}}(t,\bx)=\frac{1}{\boldsymbol{i}!}\int_{\mathbb{R}^3}H_{\boldsymbol{i}}\left(\bv\right) \mQ[f, f](t, \bx, \bv) \dd\bv. \end{equation} Substituting \eqref{eq:expansion1} into \eqref{eq:qcoef1}, we can derive that \begin{equation} \label{eq:final_coe} \begin{aligned} Q_{\boldsymbol{i}}(t, \bx)& =\frac{1}{\boldsymbol{i}!} \int_{\mathbb{R}^3}H_{\boldsymbol{i}}\left(\bv\right) \nabla_{\s{v}}\cdot\left[\int_{\mathbb{R}^3} {\bf A}(\s{v}-\s{v}')\big(f(\s{v}') \nabla_{\s{v}}f(\s{v})-f(\s{v})\nabla_{\s{v}'}f(\s{v}')\big)\dd\s{v}'\right]\dd \bv \\ & =\sum_{\boldsymbol{j} \in \bbN^3}\sum_{\boldsymbol{k} \in \bbN^3}A_{\boldsymbol{i}}^{\boldsymbol{j},\boldsymbol{k}}f_{\boldsymbol{j}} f_{\boldsymbol{k}}, \end{aligned} \end{equation} where the coefficients $A_{\boldsymbol{i}}^{\boldsymbol{j},\boldsymbol{k}}$ have the following expression \cite[Eq.(3.4)]{FPL2018}: \begin{equation} \label{eq:Acomp} A_{\boldsymbol{i}}^{\boldsymbol{j},\boldsymbol{k}} =\dfrac{1}{\boldsymbol{i}!}\int_{\mathbb{R}^3}H_{\boldsymbol{i}}(\bv)\nabla_{\bv}\cdot\left[\int_{\mathbb{R}^3}{\bf A}(\bv-\bv') \big(\mathcal{H}_{\boldsymbol{j}}(\bv')\nabla_{\bv}\mathcal{H}_{\boldsymbol{k}}(\bv) -\mathcal{H}_{\boldsymbol{j}}(\bv)\nabla_{\bv'} \mathcal{H}_{\boldsymbol{k}}(\bv')\big)\dd\bv'\right]\dd\bv. \end{equation} Although the complicated form of the coefficients $A_{\boldsymbol{i}}^{\boldsymbol{j},\boldsymbol{k}}$ results in a formidably high computational cost, these coefficients are intrinsic to the collision model: they are constants once a specific collision model is chosen, which in our case means that the index $\gamma$ in the IPL model \eqref{eq:ipl} is fixed. Hence, they can be precalculated completely offline once and then used in all cases. 
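With the table $A_{\boldsymbol{i}}^{\boldsymbol{j},\boldsymbol{k}}$ precalculated offline, evaluating \eqref{eq:final_coe} at run time is a plain tensor contraction. A sketch (ours) with the table stored sparsely as a dictionary; the entries used in the test are toy placeholders, not actual values of $A_{\boldsymbol{i}}^{\boldsymbol{j},\boldsymbol{k}}$:

```python
def quadratic_coeffs(A_table, f):
    """Evaluate Q_i = sum_{j,k} A_i^{j,k} f_j f_k from a precomputed
    sparse table A_table: dict mapping (i, j, k) -> A_i^{j,k}, where
    i, j, k are multi-indices (i1, i2, i3).  The cost is proportional
    to the number of stored entries, i.e. O(M^9) for a dense table over
    |i|, |j|, |k| <= M."""
    Q = {}
    for (i, j, k), a in A_table.items():
        fj = f.get(j, 0.0)
        fk = f.get(k, 0.0)
        if fj and fk:
            Q[i] = Q.get(i, 0.0) + a * fj * fk
    return Q
```

In the reduced model introduced below, only entries with total degree up to the quadratic length $M_0$ need to be tabulated, which is what makes the storage manageable.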
In \cite{FPL2018}, an algorithm was proposed to calculate accurate values of the coefficients $A_{\boldsymbol{i}}^{\boldsymbol{j},\boldsymbol{k}}$ (denoted by $A_\alpha^{\lambda,\kappa}$ in \cite[Eq.(3.4)]{FPL2018}) for all $\gamma>-5$ by introducing Burnett polynomials. Due to the lengthy expressions involved, we do not present the details here; readers may refer to Theorem 1, Lemma 2, Proposition 3 and Theorem 4 in \cite{FPL2018} for the details of this algorithm. Moreover, utilizing the recurrence relations of the Hermite polynomials \begin{equation} \label{eq:Hermite_recur} \begin{gathered} \dfrac{\p}{\p v_d}H_{\boldsymbol{i}}(\bv) = i_d H_{\boldsymbol{i}-\be_d}(\bv), \quad H_{\boldsymbol{i}+\be_d}(\bv) = v_d H_{\boldsymbol{i}}(\bv) - i_d H_{\boldsymbol{i}-\be_d}(\bv), \\ \dfrac{\p}{\p v_d}\left[H_{\boldsymbol{i}}(\bv) \exp\left(-\frac{|\bv|^2}{2}\right)\right] = -H_{\boldsymbol{i}+\be_d}(\bv) \exp\left(-\frac{|\bv|^2}{2}\right), \qquad d = 1, 2, 3, \end{gathered} \end{equation} the diffusive FP operator $\mP_{\rm FP}[f]$ \eqref{eq:FP}, a simplified approximation to the quadratic collision operator $\mQ[f, f]$, can also be expanded as \begin{equation} \label{eq:expan_FP} \begin{aligned} \mP_{\rm FP}[f] = \sum_{\boldsymbol{i} \in \bbN^3} {\rm FP}_{\boldsymbol{i}} \mH_{\boldsymbol{i}}(\bv), \qquad {\rm FP}_{\boldsymbol{i}} = \sum_{d=1}^3 \left[\left(1 - \frac{1}{T}\right) f_{\boldsymbol{i} - 2\be_d} + \frac{u_{d}}{T} f_{\boldsymbol{i} - \be_d}\right] - \frac{|\boldsymbol{i}|}{T} f_{\boldsymbol{i}}, \end{aligned} \end{equation} where $\bu=(u_{1}, u_{2}, u_{3})^T$ and $T$ are the macroscopic velocity and temperature of the electrons \eqref{eq:rlt}, respectively. \subsection{Collision operator $\mQ_{\beta}[f]$} \label{sec:reduce_q} The collision operator $\mQ_{\beta}[f]$ \eqref{eq:col_beta} between different species can also be expanded with similar methods. Without loss of generality, we set $\rho_{\beta} = 1$. 
Therefore, the collision operator $\mQ_{\beta}[f]$ is expanded as \begin{equation} \label{eq:expan_FPL_beta_ini} \mQ_{\beta}[f](t, \bx, \bv) = \sum_{\boldsymbol{i} \in \bbN^3} \mQ_{\beta, \boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}(t, \bx) \mH_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}(\bv), \end{equation} with the coefficients being \begin{equation} \label{eq:expan_FPL_beta_coe_ini} \begin{aligned} \mQ_{\beta, \boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}(t, \bx) & = \frac{\tilde{T}^{\frac{|\boldsymbol{i}|}{2}}}{\boldsymbol{i}!} \int_{\bbR^3} \mQ_{\beta}[f](t, \bx, \bv) H_{\boldsymbol{i}}\left( \frac{\bv - \tilde{\bu}}{\sqrt{\tilde{T}}}\right) \dd \bv \\ & = \frac{\tilde{T}^{\frac{|\boldsymbol{i}|}{2}}}{\boldsymbol{i}!} \int_{\bbR^3} \nabla_{\bv} \cdot \left[{\bf A}(\bv - \bu_{\beta})\nabla_{\bv}f\right] H_{\boldsymbol{i}}\left( \frac{\bv - \tilde{\bu}}{\sqrt{\tilde{T}}} \right) \dd \bv. \end{aligned} \end{equation} Since the macroscopic velocity $\bu_\beta$ of the ions appears in the expression of the collision operator $\mQ_{\beta}[f]$, the expansion center here is chosen as $\tilde{\bu}=\bu_{\beta}$ and $\tilde{T}=1$ to simplify its series expansion. Thus, \eqref{eq:expan_FPL_beta_ini} and \eqref{eq:expan_FPL_beta_coe_ini} reduce to \begin{equation} \label{eq:expan_FPL_beta} \begin{aligned} & \mQ_{\beta}[f](t, \bx, \bv) = \sum_{\boldsymbol{i} \in \bbN^3} \mQ_{\beta, \boldsymbol{i}}^{[\bu_{\beta}, 1]}(t, \bx) \mH_{\boldsymbol{i}}^{[\bu_{\beta}, 1]}(\bv), \\ & \mQ_{\beta, \boldsymbol{i}}^{[\bu_{\beta}, 1]}(t, \bx) = \frac{1}{\boldsymbol{i}!} \int_{\bbR^3} \nabla_{\bv} \cdot \left[{\bf A}(\bv - \bu_{\beta})\nabla_{\bv}f\right] H_{\boldsymbol{i}}\left(\bv - \bu_{\beta}\right) \dd \bv. 
\end{aligned} \end{equation} By substituting \eqref{eq:expansion} and \eqref{eq:col_beta} into \eqref{eq:expan_FPL_beta} and changing variables, the coefficients can be calculated explicitly as \begin{equation} \label{eq:detail_expan_FPL_beta_coe} \mQ_{\beta, \boldsymbol{i}}^{[\bu_{\beta}, 1]}= \frac{\Lambda}{\boldsymbol{i}!} \sum_{\boldsymbol{j} \in \bbN^3} f_{\boldsymbol{j}}^{[\bu_{\beta}, 1]} \sum_{m,n=1}^{3}i_m \left[\delta_{mn} \sum_{s=1}^3 G_{ss}(\gamma, \boldsymbol{i}- \be_m,\boldsymbol{j} + \be_n) - G_{mn}(\gamma, \boldsymbol{i} - \be_m, \boldsymbol{j}+ \be_n) \right], \end{equation} where $f_{\boldsymbol{i}}^{[\bu_{\beta}, 1]}$ denotes the expansion coefficients of the distribution function $f(t, \bx, \bv)$ with the expansion center $\tilde{\bu} = \bu_{\beta}$ and $\tilde{T} = 1$, i.e. \begin{equation} \label{eq:dis_f_alpha_beta} f(t,\bx,\bv)=\sum_{\boldsymbol{i}\in\bbN^3}f_{\boldsymbol{i}}^{[\bu_{\beta},1]}(t,\bx)\mH_{\boldsymbol{i}}^{[\bu_{\beta},1]}(\bv). \end{equation} Here, $G_{mn}(\gamma, \boldsymbol{i}, \boldsymbol{j})$ is defined in \cite[Eq.(3.14)]{FPL2018} and is also precalculated. The detailed calculation of \eqref{eq:detail_expan_FPL_beta_coe} is given in Appendix \ref{app:linear_coe}, and we refer readers to Proposition 3 and Theorem 4 in \cite{FPL2018} for the calculation of $G_{mn}(\gamma, \boldsymbol{i}, \boldsymbol{j})$. \subsection{The reduced collision model} \label{sec:reduced_col} For the series expansion of the quadratic collision operator \eqref{eq:expansion_q1}, although the coefficients $A_{\boldsymbol{i}}^{\boldsymbol{j},\boldsymbol{k}}$ can be precalculated and stored for later use, both the storage cost and the computational cost of a single evaluation of the collision term are too expensive for spatially inhomogeneous problems. To be precise, the cost is $\mathcal{O}(M^9)$, with $M$ being the expansion order introduced in Section \ref{sec:model}. 
To cope with these issues, we build a reduced quadratic operator $\mQ^{\rm new}[f]$ as an approximation to $\mQ[f, f]$ in \eqref{eq:collision_ee}, which consists of two parts: \begin{equation} \label{eq:col_new} \mQ^{\rm new}[f] = \nu\mQ^{\rm new}[f, f] + \nu_{\beta}\mQ^{\rm new}_{ \beta}[f]. \end{equation} The expansion center of the reduced model is set as $\tilde{\bu} = \bz$ and $\tilde{T} = 1$, following the choice in Section \ref{sec:quadoperator}, and the superscript $[\bz, 1]$ is again omitted below for simplicity. The collision model $\mQ^{\rm new}[f, f]$ is expanded similarly to \eqref{eq:expansion_q1} as \begin{equation} \label{eq:new_collision} \mQ^{\rm new}[f, f](t, \bx, \bv) = \sum_{\boldsymbol{i} \in \bbN^3} Q_{\boldsymbol{i}}^{{\rm new}}(t, \bx) \mH_{\boldsymbol{i}}(\bv). \end{equation} To build the reduced collision model, we assume that the lower-order terms in the expansion are much more important than the higher-order ones, especially for capturing macroscopic variables such as the density $\rho$, macroscopic velocity $\bu$ and temperature $T$. Thus, the expansion coefficients from the more precise model \eqref{eq:expansion_q1} are adopted for the lower-order terms, and those from the diffusive FP operator \eqref{eq:expan_FP} are utilized to supply the higher-order terms. Precisely, the expansion coefficients of $\mQ^{\rm new}[f, f]$ \eqref{eq:new_collision} are determined as \begin{equation} \label{eq:new_collision_coe} Q_{\boldsymbol{i}}^{{\rm new}}(t, \bx) = \begin{cases} Q_{\boldsymbol{i}}(t, \bx), & |\boldsymbol{i}| \leqslant M_0,\\ \mu_0 {\rm FP}_{\boldsymbol{i}}(t, \bx) ,& |\boldsymbol{i}| > M_0, \end{cases} \end{equation} where $Q_{\boldsymbol{i}}(t, \bx)$ are calculated by \eqref{eq:final_coe} using the precalculated coefficients, and ${\rm FP}_{\boldsymbol{i}}(t, \bx)$ are the expansion coefficients of the diffusive FP operator \eqref{eq:expan_FP}. 
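The hybridization \eqref{eq:new_collision_coe} is a simple switch on the total degree $|\boldsymbol{i}|$: exact quadratic coefficients up to the quadratic length $M_0$, damped diffusive-FP coefficients \eqref{eq:expan_FP} beyond. A sketch (ours), with the damping factor defaulting to $\mu_0 = {\rm DIM} - 1 = 2$ for a three-dimensional velocity space:

```python
from itertools import product

def fp_coeffs(f, u, T, M):
    """Expansion coefficients of the diffusive FP operator:
    FP_i = sum_d [(1 - 1/T) f_{i-2e_d} + (u_d / T) f_{i-e_d}] - (|i|/T) f_i,
    with f a dict of Hermite coefficients over multi-indices (i1, i2, i3)."""
    e = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    out = {}
    for i in product(range(M + 1), repeat=3):
        if sum(i) > M:
            continue
        val = -sum(i) / T * f.get(i, 0.0)
        for d in range(3):
            im1 = tuple(i[k] - e[d][k] for k in range(3))
            im2 = tuple(i[k] - 2 * e[d][k] for k in range(3))
            val += (1.0 - 1.0 / T) * f.get(im2, 0.0) + u[d] / T * f.get(im1, 0.0)
        out[i] = val
    return out

def reduced_coeffs(Q_quad, f, u, T, M, M0, mu0=2.0):
    """Hybrid coefficients of Q^new[f, f]: the exact quadratic
    coefficients Q_i for |i| <= M0, damped FP coefficients mu0 * FP_i
    for the higher-order tail |i| > M0."""
    FP = fp_coeffs(f, u, T, M)
    out = {}
    for i in product(range(M + 1), repeat=3):
        if sum(i) > M:
            continue
        out[i] = Q_quad.get(i, 0.0) if sum(i) <= M0 else mu0 * FP[i]
    return out
```

Note that the FP part couples each coefficient only to a few lower-order neighbors, so the tail costs $\mO(M^3)$, negligible next to the quadratic part.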
The expansion order of the quadratic collision term, $M_0$, which we also refer to as the quadratic length, and the damping rate of the higher-order coefficients, $\mu_0$, are the parameters of this model. In the numerical experiments, the damping rate $\mu_0$ is chosen as $\mu_0 = {\rm DIM} -1$ according to the isotropic model derived for the Fokker-Planck equation in \cite{Villani1998on}, where ${\rm DIM}$ is the number of dimensions of the microscopic velocity space. The computational cost of the collision operator between different species $\mQ_{\beta}[f]$ \eqref{eq:expan_FPL_beta} is much lower than that of the quadratic collision operator \eqref{eq:expansion_q}, and hence it is not reduced further in the reduced collision model. For the convenience of computation, the same expansion center $\tilde{\bu} = \bz$ and $\tilde{T} = 1$ is chosen for $\mQ_{\beta}^{\rm new}[f]$, and Theorem \ref{thm:project} is utilized to build the new collision operator from $\mQ_{\beta}[f]$ as \begin{equation} \label{eq:col_beta_new} \mQ^{\rm new}_{\beta}[f] = \sum_{\boldsymbol{i} \in \bbN^3} \mQ_{\beta, \boldsymbol{i}}^{{\rm new}}(t, \bx) \mH_{\boldsymbol{i}}(\bv), \end{equation} where $\mQ_{\beta, \boldsymbol{i}}^{{\rm new}}(t, \bx)$ is projected from $\mQ_{\beta, \boldsymbol{i}}^{[\bu_{\beta}, 1]}(t, \bx)$ \eqref{eq:expan_FPL_beta} by Theorem \ref{thm:project}. 
Consequently, combining \eqref{eq:col_new}, \eqref{eq:new_collision_coe} and \eqref{eq:col_beta_new}, the reduced collision operator $\mQ^{\rm new}[f]$ is expanded as \begin{equation} \label{eq:col_new_coe} \mQ^{\rm new}[f] = \sum_{\boldsymbol{i} \in \bbN^3} \bar{Q}_{\boldsymbol{i}}^{{\rm new}}(t, \bx) \mH_{\boldsymbol{i}}(\bv) \end{equation} with \begin{equation} \label{eq:col_new_coe_1} \begin{aligned} \bar{Q}_{\boldsymbol{i}}^{{\rm new}}(t, \bx) & = \nu Q_{\boldsymbol{i}}^{{\rm new}}(t, \bx) + \nu_{\beta} \mQ_{\beta, \boldsymbol{i}}^{{\rm new}}(t, \bx) \\ & = \begin{cases} \nu Q_{\boldsymbol{i}}(t, \bx) + \nu_{\beta}\mQ_{\beta, \boldsymbol{i}}^{{\rm new}}(t, \bx) ,& |\boldsymbol{i}| \leqslant M_0,\\[4mm] \nu\mu_0 {\rm FP}_{\boldsymbol{i}}(t, \bx) + \nu_{\beta}\mQ_{\beta, \boldsymbol{i}}^{{\rm new}}(t, \bx), & |\boldsymbol{i}| > M_0. \end{cases} \end{aligned} \end{equation} In the numerical scheme, which is further discussed in Section \ref{sec:col step}, the computational cost of obtaining the expansion coefficients of the quadratic collision term is $\mO(M_0^9)$, while those of the linear part and the projection are $\mO(M^3)$ and $\mO(M^4)$ \cite{Hu2020Numerical}, respectively. Therefore, the total computational cost of obtaining the collision term is $\mO(M_0^9 + M^4)$. Since $M_0$ is always much smaller than $M$ in the numerical computation, the reduced collision model can tremendously reduce the computational cost compared with the original cost of $\mO(M^9)$; for instance, with $M = 20$ and $M_0 = 5$, $M_0^9 + M^4 \approx 2.1 \times 10^6$, whereas $M^9 \approx 5.1 \times 10^{11}$. \begin{remark} As mentioned above, a larger $M_0$ produces a more accurate model, but there is no fixed principle regarding how to choose $M_0$, which may be determined on a case-by-case basis, constrained by the storage cost. The numerical results show that even a small $M_0$ can successfully capture several expected physical phenomena, which is further demonstrated in Section \ref{sec:num}. 
\end{remark} \section{Appendix} \label{app} In this appendix, we present a detailed calculation of the expansion coefficients for the collision operator \eqref{eq:col_beta} in Section \ref{app:linear_coe} and the derivation of the governing equations \eqref{eq:force} for the acceleration step in Section \ref{app:acc}. \subsection{Calculation of the expansion coefficients \eqref{eq:detail_expan_FPL_beta_coe}} \label{app:linear_coe} Substituting the explicit form of the collision operator $\mQ_{\beta}[f]$ in \eqref{eq:col_beta} into \eqref{eq:expan_FPL_beta} and integrating by parts, the expansion coefficients $\mQ_{\beta, \boldsymbol{i}}^{[\bu_{\beta}, 1]}$ are calculated as \begin{equation} \label{eq:col_beta_q} \begin{aligned} \mQ_{\beta, \boldsymbol{i}}^{[\bu_{\beta}, 1]}(t, \bx) &=- \frac{1}{\boldsymbol{i}!} \int_{\bbR^3} \left[ {\bf A}(\bv-\bu_\beta)\nabla_{\bv} f \right]\cdot \nabla_{\bv} H_{\boldsymbol{i}}\left(\bv - \bu_{\beta}\right) \dd \bv. \end{aligned} \end{equation} Expanding the distribution function $f$ as \begin{equation} \label{eq:expan_f} f(t, \bx, \bv) \approx\sum_{ \boldsymbol{j}\in \bbN^3} f_{\boldsymbol{j}}^{[\bu_{\beta}, 1]}(t, \bx) \mH_{\boldsymbol{j}}^{[\bu_{\beta}, 1]}(\bv), \end{equation} we can calculate \eqref{eq:col_beta_q} as \begin{equation} \label{eq:7.2} \begin{aligned} \mQ_{\beta, \boldsymbol{i}}^{[\bu_{\beta}, 1]}(t, \bx) &=- \frac{1}{\boldsymbol{i}!}\sum_{\boldsymbol{j}\in \bbN^3} f_{\boldsymbol{j}}^{[\bu_\beta, 1]}(t, \bx) C_{\boldsymbol{i}}^{\boldsymbol{j}}, \end{aligned} \end{equation} with \begin{equation} \label{eq:Coe_C} \begin{aligned} C_{\boldsymbol{i}}^{\boldsymbol{j}} &= \int_{\bbR^3} \left[ {\bf A}(\bv-\bu_\beta)\nabla_{\bv} \mH_{\boldsymbol{j}}^{[\bu_\beta, 1]}(\bv) \right] \cdot \nabla_{\bv} H_{\boldsymbol{i}}\left(\bv - \bu_{\beta}\right) \dd \bv. 
\end{aligned} \end{equation} By changing variables and utilizing the differentiation properties of the basis functions \eqref{eq:Hermite_recur}, we can derive \begin{equation} \label{eq:Coe_C_detail} \begin{aligned} C_{\boldsymbol{i}}^{\boldsymbol{j}} =- \sum_{m,n=1}^3\int_{\bbR^3} \left[{\bf A}(\bv)\right]_{mn} \mH_{\boldsymbol{j}+\be_n}^{[\bz, 1]}(\bv) i_m H_{\boldsymbol{i}-\be_m}\left(\bv \right) \dd \bv. \end{aligned} \end{equation} Recalling the definition of ${\bf A}(\bv-\bu_\beta)$ in \eqref{eq:A}, we expand \eqref{eq:Coe_C_detail} as \begin{equation} \label{eq:aligned} C_{\boldsymbol{i}}^{\boldsymbol{j}} = \Lambda \sum_{m,n=1}^3 i_m \Big[\int_{\bbR^3} |\bv|^{\gamma} v_m v_n \mH_{\boldsymbol{j}+\be_n}^{[\bz, 1]}(\bv) H_{\boldsymbol{i}-\be_m}\left(\bv\right) \dd \bv - \delta_{mn} \sum_{s=1}^3 \int_{\bbR^3} |\bv|^{\gamma}v_s^2\mH_{\boldsymbol{j}+\be_n}^{[\bz,1]}(\bv)H_{\boldsymbol{i}-\be_m}\left(\bv\right)\dd \bv \Big]. \end{equation} Finally, with the definition in \cite[Eq.(3.14)]{FPL2018}, i.e. \begin{equation} G_{mn}\big(\gamma,\boldsymbol{i}, \boldsymbol{j}\big) =\int_{\bbR^3} |\bv|^{\gamma} v_mv_n H_{\boldsymbol{i}}\left(\bv \right) H_{\boldsymbol{j}}(\bv)\dfrac{1}{(2\pi)^{3/2}}\exp{\left(-\dfrac{|\bv|^2}{2}\right)}\dd \bv, \end{equation} we can derive the final expression \eqref{eq:detail_expan_FPL_beta_coe} of the expansion coefficients. \subsection{Derivation of the governing equations in the acceleration step} \label{app:acc} In this section, we present the derivation of the governing equations \eqref{eq:force} in the acceleration step with the expansion center $\tilde{\bu} = \bu$ and $\tilde{T} = T$. A similar derivation can be found in \cite{Wang}, to which we refer readers for more details. In this case, the distribution function $f$ is expanded as \begin{equation} \label{eq:expansion_center} f(t, \bx, \bv) = \sum_{\boldsymbol{i} \in \bbN^3} f_{\boldsymbol{i}}^{[\bu, T]}(t, \bx) \mH_{\boldsymbol{i}}^{[\bu, T]}(\bv). 
\end{equation} Substituting \eqref{eq:expansion_center} into the FPL equation \eqref{eq:FPL}, we can derive the moment equations, after some rearrangement, as \begin{equation} \label{eq:mnt_eq} \begin{split} & \frac{\partial f_{\boldsymbol{i}}}{\partial t} + \sum_{d=1}^3 \left( \frac{\partial u_{d}}{\partial t} + \sum_{j=1}^3 u_j \frac{\partial u_{d}}{\partial x_j} - F_d \right) f_{\boldsymbol{i}-\be_d} \\ & + \sum_{j,d=1}^3 \left[ \frac{\partial u_d}{\partial x_j} \left( T f_{\boldsymbol{i}-\be_d-\be_j} + (i_j + 1) f_{\boldsymbol{i}-\be_d+\be_j} \right) + \frac{1}{2} \frac{\partial T}{\partial x_j} \left( T f_{\boldsymbol{i}-2\be_d-\be_j} + (i_j + 1) f_{\boldsymbol{i}-2\be_d+\be_j} \right) \right] \\ &+ \frac{1}{2} \left( \frac{\partial T}{\partial t} + \sum_{j=1}^3 u_j \frac{\partial T}{\partial x_j} \right) \sum_{d=1}^3 f_{\boldsymbol{i}-2\be_d}+ \sum_{j=1}^3 \left( T \frac{\partial f_{\boldsymbol{i} -\be_j}}{\partial x_j} + u_j \frac{\partial f_{\boldsymbol{i}}}{\partial x_j} + (i_j + 1) \frac{\partial f_{\boldsymbol{i}+\be_j}}{\partial x_j} \right) = Q_{\boldsymbol{i}}, \end{split} \end{equation} where the superscripts $[\bu, T]$ are omitted and $Q_{\boldsymbol{i}}$ denotes the expansion coefficients of the collision term. Following the method in \cite{Li}, we derive the mass-conservation equation in the case of $\boldsymbol{i} = \bz$ as \begin{equation} \label{mass_con} \frac{\partial f_{\bz}}{\partial t} + \sum_{j=1}^3 \left( u_j \frac{\partial f_{\bz}}{\partial x_j} + f_{\bz} \frac{\partial u_j}{\partial x_j} \right) = 0. \end{equation} If we set $\boldsymbol{i} = \be_d$, with $ d = 1,2,3$, \eqref{eq:mnt_eq} reduces to \begin{equation} \label{eq:alpha = e_d} f_{\bz} \left( \frac{\partial u_d}{\partial t} + \sum_{j=1}^3 u_j \frac{\partial u_d}{\partial x_j} - F_d \right) + f_{\bz} \frac{\partial T}{\partial x_d} + T \frac{\partial f_{\bz}}{\partial x_d} + \sum_{j=1}^3 (\delta_{jd} + 1) \frac{\partial f_{\be_d + \be_j}}{\partial x_j} = 0. 
\end{equation} With the splitting method stated in Section \ref{sec:model}, \eqref{eq:alpha = e_d} is split into the convection step \begin{equation} \label{eq:convection_step} f_{\bz} \left( \frac{\partial u_d}{\partial t} + \sum_{j=1}^3 u_j \frac{\partial u_d}{\partial x_j} \right) + f_{\bz} \frac{\partial T}{\partial x_d} + T \frac{\partial f_{\bz}}{\partial x_d} + \sum_{j=1}^3 (\delta_{jd} + 1) \frac{\partial f_{\be_d + \be_j}}{\partial x_j} = 0, \end{equation} and the force step \begin{equation} \label{eq:force_step} \pd{u_d}{t} - F_d = 0, \qquad d = 1, 2, 3. \end{equation} Then, we obtain the governing equations \eqref{eq:force} for the acceleration step. \section{Conclusion} \label{sec:conclusion} In this paper, we developed a numerical algorithm for the FPL equation based on the Hermite spectral method. Both collisions within the same species and between different species were considered to simulate the time evolution of the plasma. A reduced collision model was built by combining the quadratic FPL collision operator and the diffusive FP collision operator, and a fast algorithm was adopted to project between distribution functions with different expansion centers. Several numerical experiments showed that our algorithm captures the time evolution of the particles accurately and efficiently compared to the fully quadratic collision model. The effectiveness of the new reduced collision operator makes this algorithm promising for more complicated problems. However, this method is not yet capable of dealing with problems in which the state of the plasma diverges greatly from equilibrium, which we will work on in the future. Research on multidimensional problems with the magnetic field is also ongoing. \section*{Acknowledgements} Ruo Li is supported by the National Natural Science Foundation of China (Grant No. 11971041) and Science Challenge Project (No. TZ2016002). 
Yinuo Ren is partially supported by the elite undergraduate training program of the School of Mathematical Sciences at Peking University. Yanli Wang is supported by the Science Challenge Project (No. TZ2016002) and the National Natural Science Foundation of China (Grant No. U1930402 and 12031013). \section{Numerical experiments} \label{sec:num} In this section, several numerical examples are presented to test the new algorithm. In all the tests, the CFL number is set to $0.45$. The Landau damping problems are studied first to show the capability of the new algorithm to simulate the FPL equation quantitatively. The two-stream instability and the bump-on-tail instability are also tested to show that the numerical method can capture the evolution in the microscopic velocity space with the reduced collision model. \subsection{Linear Landau damping problem} \label{sec:lld} The Landau damping problem is one of the most popular problems in plasma physics. It is caused by the strong interaction between the electromagnetic wave and particles with velocities comparable to the phase velocity, which tend to synchronize with the wave \cite{Chen1984}. Particles with velocities slightly lower than the phase velocity are accelerated and thus gain energy from the wave, while those with slightly higher velocities are decelerated and thus lose energy to the wave, which results in an exponential decrease in the electrostatic energy of the wave. The linear Landau damping problem has been studied in \cite{Filbet2002numerical}, where several specific settings of the problem are proposed and simulated; the numerical results therein can be used for comparison here. 
The setting of the linear Landau damping problem is adopted from \cite{ZhangGamba2017} with $\rho_{\beta} = 1$, $\bu_{\beta} = 0$ and the initial data being \begin{equation} \label{eq:ini_ex1} f(x,\bv) = \frac{1}{(2\pi)^{3/2}} \exp\left( -\dfrac{|\bv|^2}{2}\right) [1 + A \cos(k x)], \qquad (x, \bv) \in [0, 2\pi/k] \times \bbR^3, \end{equation} where $A$ is the amplitude of the perturbation. The periodic boundary condition is implemented in this example. In the Landau damping problem, our interest lies in the evolution of the square root of the electrostatic energy, defined as \begin{equation} \label{eq:energy} \mathcal{E}(t) = \left(\sum_{j} \Delta x E_{1,j}(t)^2\right)^{1/2}. \end{equation} According to Landau's theory, $\mE(t)$ should decrease exponentially at a fixed rate $\omega_i$, which can be regarded as the imaginary part of the frequency $\omega$. The theoretical damping rate is often estimated as \cite{ZhangGamba2017, sonnendrucker2013numerical} \begin{equation} \label{eq:damping_rate} \gamma = \gamma_{L} + \gamma_{C}, \end{equation} where the damping rate of the collisionless plasma $\gamma_L$ is \begin{equation} \label{eq:damping_rate_collisionless} \gamma_L = \begin{cases} -\sqrt{\dfrac{\pi}{8}} \dfrac{1}{k^3} \exp\left(-\dfrac{1}{2k^2} - \dfrac{3}{2}\right), & k \textrm { is large}, \\[5mm] -\sqrt{\dfrac{\pi}{8}} \left(\dfrac{1}{k^3} - 6k\right) \exp\left(-\dfrac{1}{2k^2} - \dfrac{3}{2} - 3k^2 - 12 k^4\right), & k \textrm{ is small}, \end{cases} \end{equation} and $\gamma_C$ is the collisional ``correction'' to the collisionless damping rate, \begin{equation} \label{eq:damping_rate_collision} \gamma_{C} = -\frac{1}{3} \nu \sqrt{2/\pi}, \end{equation} which depends only on the collision frequency $\nu$ and reflects the effect of the collisions. In this test, the amplitude of the perturbation $A$ is set as $10^{-5}$. In addition, the expansion order is set as $M = 20$, and the number of spatial grid cells as $N = 800$. 
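For reference, the theoretical rates \eqref{eq:damping_rate_collisionless} and \eqref{eq:damping_rate_collision} can be evaluated directly; in the sketch below (ours), the choice between the two branches of $\gamma_L$ is left to the caller, since the text does not specify a threshold for $k$:

```python
import math

def gamma_L(k, small_k=False):
    """Collisionless Landau damping rate, per the two asymptotic formulas;
    set small_k=True to use the branch with the extra correction terms."""
    base = -math.sqrt(math.pi / 8.0)
    if small_k:
        return base * (1.0 / k**3 - 6.0 * k) * math.exp(
            -1.0 / (2.0 * k**2) - 1.5 - 3.0 * k**2 - 12.0 * k**4)
    return base / k**3 * math.exp(-1.0 / (2.0 * k**2) - 1.5)

def gamma_C(nu):
    """Collisional correction to the damping rate: -(nu/3) * sqrt(2/pi)."""
    return -nu / 3.0 * math.sqrt(2.0 / math.pi)
```

For example, $\gamma_C \approx -2.66\times 10^{-3}$ for $\nu = 0.01$, a small but visible steepening of the decay of $\ln(\mE(t))$ relative to the collisionless rate.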
Figures \ref{fig:ex1_k03_M0} and \ref{fig:ex1_k05_M0} show the time evolution of the electrostatic energy $\mE(t)$ with the wave number $k$ set as $0.3$ and $0.5$, respectively. For both wave numbers, the Coulomb case $\gamma = -3$ is studied, and the collision frequency is set as $\nu = \nu_{\beta} = 0$ and $0.01$ to demonstrate the effect of collision. The quadratic length $M_0$ is chosen as $5$ and $10$, respectively. The results show that this method successfully simulates the linear Landau damping problem, and the numerical damping rate of the electrostatic energy is almost identical to the theoretical result in \eqref{eq:damping_rate_collisionless} for both wave numbers. When the collision is added, the electrostatic energy shows a faster decay due to the effect of the collision. This is reflected in the larger damping rates compared to the collisionless case, and the increase in the damping rates matches the theoretical result in \eqref{eq:damping_rate_collision}. This demonstrates the accuracy of both our Hermite spectral method and our reduced collision model. Most importantly, the numerical solution with the quadratic length $M_0 =5$ is almost the same as that with $M_0=10$. This indicates that, for the linear Landau damping problem, even a small quadratic length $M_0=5$ allows our collision model to capture the linear Landau damping phenomenon satisfactorily. For this reason, the quadratic length is set as $M_0 = 5$ in the remaining linear Landau damping experiments. \begin{figure}[!htb] \centering \subfloat[$k = 0.3, \nu = \nu_{\beta} = 0$] {\includegraphics[width=0.49\textwidth, height=0.35\textwidth, clip]{nu0_k03_gamma3_M0_c.eps}}\hfill \subfloat[$k = 0.3, \nu = \nu_{\beta}= 0.01$] {\includegraphics[width=0.49\textwidth, height=0.35\textwidth,clip]{nu001_k03_gamma3_M0_c.eps}} \caption{Time evolution of $\ln(\mE(t))$ with $N=800$ and $M=20$ for different $\nu$ in the linear Landau damping problem. The wave number $k = 0.3$. 
For the collisional case, the red dashed line corresponds to $M_0 =5$ while the blue line corresponds to $M_0 =10$. } \label{fig:ex1_k03_M0} \end{figure} \begin{figure}[!htb] \centering \subfloat[$k = 0.5, \nu = \nu_{\beta}= 0$] {\includegraphics[width=0.49\textwidth, height=0.35\textwidth, clip]{nu0_k05_gamma3_M0_c.eps}}\hfill \subfloat[$k = 0.5, \nu = \nu_{\beta}= 0.01$] {\includegraphics[width=0.49\textwidth, height=0.35\textwidth,clip]{nu001_k05_gamma3_M0_c.eps}} \caption{Time evolution of $\ln(\mE(t))$ with $N=800$ and $M=20$ for different $\nu$ in the linear Landau damping problem. The wave number $k = 0.5$. For the collisional case, the red dashed line corresponds to $M_0 =5$ while the blue line corresponds to $M_0 =10$. } \label{fig:ex1_k05_M0} \end{figure} Then, we test the effect of different IPL models on our numerical method. The time evolution of the electrostatic energy $\mE(t)$ is examined for different potential indices $\gamma$, i.e. the aforementioned index in the IPL model \eqref{eq:ipl}. Specifically, the model of Maxwell molecules $\gamma = 0$ and the model with Coulomb interactions $\gamma = -3$ are tested and compared in Figure \ref{fig:ex1_nu001_gamma}. Here, we also set the collision frequency $\nu$ as $0.01$ and the wave number $k$ as $0.3$ and $0.5$, respectively. The numerical results illustrate that our method is capable of simulating the linear Landau damping for different $\gamma$, and we can conclude that a collision model with a softer potential imposes a smaller damping rate. \begin{figure}[!htb] \centering \subfloat[$k = 0.3, \nu = \nu_{\beta}= 0.01$] {\includegraphics[width=0.49\textwidth, height=0.35\textwidth, clip]{nu001_k03_M05_gamma_c.eps}}\hfill \subfloat[$k = 0.5, \nu = \nu_{\beta}= 0.01$] {\includegraphics[width=0.49\textwidth, height=0.35\textwidth,clip]{nu001_k05_M05_gamma_c.eps}} \caption{Time evolution of $\ln(\mE(t))$ with $N=800$ and $M_0=5$ for different $\gamma$ in the linear Landau damping problem. 
The blue line corresponds to $\gamma = 0$ while the red dashed line corresponds to $\gamma = -3$.} \label{fig:ex1_nu001_gamma} \end{figure} \subsection{Nonlinear Landau damping} As shown in the last section, when the wave amplitude $A$ is sufficiently small, the linear regime is valid, which yields exponentially decreasing electrostatic energy. However, the Landau damping problem with a larger amplitude, which diverges from the linear theory and is hence known as nonlinear Landau damping, behaves quite differently. Typically, one finds that the amplitude decays, grows and oscillates before settling down to a relatively steady state \cite{Chen1984}. In this section, we study the nonlinear Landau damping problem numerically. Nonlinear Landau damping is primarily attributed to the ``trapping'' phenomenon, where a particle is caught in the potential well of a wave, shuttles back and forth, and ends up gaining and losing energy to the wave \cite{Chen1984}. In this numerical experiment, the form of the initial data is the same as that in the last section, with $A$ increased to $0.2$, and the electrostatic energy is again studied. The nonlinear Landau damping problem with this particular initial data was also studied in \cite{Filbet, ZhangGamba2017}, to which we refer readers for a comparison of the numerical results. The case of Maxwell molecules $\gamma=0$ is studied, and the spatial grid size, expansion order and quadratic length are set as $N=800$, $M = 20$ and $M_0 = 5$, respectively. Moreover, to avoid recurrence \cite{Filter2017}, the expansion order is chosen as $M = 200$ for the collisionless case.
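The damping rates quoted in these experiments are slopes of $\ln(\mE(t))$. One common way to extract such a numerical rate from an energy history is a linear least-squares fit; the sketch below illustrates this post-processing step on synthetic data (the fitting procedure and the sample rate $-0.0126$ are illustrative assumptions, not taken from this paper's implementation).

```python
import numpy as np

def damping_rate(t, energy):
    """Estimate the exponential damping rate of the electrostatic
    energy by a linear least-squares fit of ln(E(t)) versus t.
    Returns the slope d ln(E)/dt (negative for damping)."""
    slope, _ = np.polyfit(t, np.log(energy), 1)
    return slope

# Synthetic example: E(t) = E0 * exp(2*gamma_L*t) with gamma_L = -0.0126
t = np.linspace(0.0, 50.0, 501)
energy = 1e-4 * np.exp(2 * (-0.0126) * t)
rate = damping_rate(t, energy)  # recovers 2*gamma_L up to round-off
```

In practice, the fit would be restricted to the initial time window where the decay is cleanly exponential.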
\begin{figure}[!htb] \centering \subfloat[$k = 0.3, \nu = \nu_{\beta}= 0$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth, clip]{ex2_nu00_k03_gamma0_c.eps}}\hfill \subfloat[$k = 0.3, \nu = \nu_{\beta}= 0.01$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth, clip]{ex2_nu001_k03_gamma0_c.eps}}\hfill \subfloat[$k = 0.3, \nu= \nu_{\beta} = 0.05$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth,clip]{ex2_nu005_k03_gamma0_c.eps}} \hfill \subfloat[$k = 0.3, \nu = \nu_{\beta}= 0.1$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth,clip]{ex2_nu01_k03_gamma0_c.eps}} \caption{Time evolution of $\ln(\mE(t))$ with $N=800$ and $M_0=5$ for different collisional frequencies $\nu = \nu_{\beta}= 0, 0.01, 0.05$ and $0.1$ in the nonlinear Landau damping problem. The wave number $k = 0.3$.} \label{fig:ex2_frequency_03} \end{figure} \begin{figure}[!htb] \centering \subfloat[$k = 0.5, \nu= \nu_{\beta} = 0$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth, clip]{ex2_nu00_k05_gamma0_c.eps}}\hfill \subfloat[$k = 0.5, \nu = \nu_{\beta}= 0.01$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth, clip]{ex2_nu001_k05_gamma0_c.eps}}\hfill \subfloat[$k = 0.5, \nu = \nu_{\beta}= 0.05$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth,clip]{ex2_nu005_k05_gamma0_c.eps}} \hfill \subfloat[$k = 0.5, \nu = \nu_{\beta}= 0.1$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth,clip]{ex2_nu01_k05_gamma0_c.eps}} \caption{Time evolution of $\ln(\mE(t))$ with $N=800$ and $M_0=5$ for different collisional frequencies $\nu = \nu_{\beta} = 0, 0.01, 0.05$ and $0.1$ in the nonlinear Landau damping problem. The wave number $k = 0.5$.} \label{fig:ex2_frequency_05} \end{figure} Figures \ref{fig:ex2_frequency_03} and \ref{fig:ex2_frequency_05} show the time evolution of the electrostatic energy for $k = 0.3$ and $k = 0.5$ with collisional frequencies $\nu = \nu_{\beta}=0, 0.01, 0.05$ and $0.1$.
We can conclude that for the nonlinear collisionless problem, instead of exponential damping as in the linear case, the electrostatic energy decreases exponentially at the beginning and then grows exponentially at a smaller rate, consistent with the results reported in \cite{Cheng2014, ZhangGamba2017}. For the collisional case, we find that the electrostatic energy exhibits exponential-like damping for both wave numbers $k = 0.3$ and $0.5$, and the damping rate increases with the collisional frequency. These results are reasonable because stronger collisions imply more frequent energy exchange between particles, which weakens the ``trapping'' phenomenon and accelerates the damping. This numerical result also agrees with those in \cite{ZhangGamba2017, Filbet}. \begin{figure}[!htb] \centering \subfloat[$k = 0.3, \nu = \nu_{\beta}= 0.05$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth, clip]{ex2_nu005_k03_gamma_c.eps}}\hfill \subfloat[$k = 0.3, \nu = \nu_{\beta}= 0.1$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth,clip]{ex2_nu01_k03_gamma_c.eps}} \\ \subfloat[$k = 0.5, \nu = \nu_{\beta}= 0.05$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth, clip]{ex2_nu005_k05_gamma_c.eps}}\hfill \subfloat[$k = 0.5, \nu = \nu_{\beta}= 0.1$] {\includegraphics[width=0.45\textwidth, height=0.35\textwidth,clip]{ex2_nu01_k05_gamma_c.eps}} \caption{Time evolution of $\ln(\mE(t))$ with $N=800$ and $M_0=5$ for different potential indices $\gamma$ in the nonlinear Landau damping problem, where the blue line represents $\gamma = 0$ and red line represents $\gamma = -3$. The first row corresponds to the wave number $k = 0.3$ and the bottom row corresponds to the wave number $k =0.5$.} \label{fig:ex2_gamma} \end{figure} The cases of different potential indices in the IPL model are also studied, where the model of Maxwell molecules $\gamma = 0$ and the model with Coulomb interactions $\gamma = -3$ are tested.
Figure \ref{fig:ex2_gamma} shows the time evolution of the electrostatic energy for wave numbers $k = 0.3$ and $0.5$ under different collisional frequencies $\nu = \nu_{\beta}= 0.05$ and $0.1$, from which we find that the damping rate for the Maxwell case $\gamma = 0$ is much larger than that for the Coulomb case $\gamma = -3$. This result is consistent with the corresponding conclusion in the linear case. \subsection{Two-stream instability} \begin{figure}[!htb] \centering \subfloat[Initial MDF $g(0, x, v_1)$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth, clip]{ex3_0_k0.5_x400_M40_m5_A0.01_g-3_t0.eps}}\hfill \subfloat[Contours of $g(0, x, v_1)$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex3_0_contour.eps}} \hfill \subfloat[Initial MDF $g(0, \frac{\pi}{4}, v_1)$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex3_0_0.eps}} \caption{Initial marginal distribution functions of the two-stream instability problem. In (b) and (c), the blue solid lines correspond to the exact solution, and the red dashed lines correspond to the numerical approximation. Figure (a) shows only the numerical approximation. Figure (c) shows the numerical approximation and the exact solution at the position $x = \frac{\pi}{4}$. } \label{fig:ex3_ini} \end{figure} Two-stream instability is a common instability in plasma physics and is of primary concern for studying nonlinear plasma effects. It occurs when the plasma consists of two electron streams with different velocities. The mechanism of two-stream instability is similar to that of Landau damping, where particles at different velocities transfer energy to each other \cite{Bittencourt}.
\begin{figure}[!htb] \centering \subfloat[$t = 20, \nu = 0$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth, clip]{ex3_0_k0.5_x400_M40_m5_A0.01_g-3_t20.eps}}\hfill \subfloat[$t = 20, \nu = 0.001$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex3_0.001_k0.5_x400_M40_m5_A0.01_g-3_t20.eps}} \hfill \subfloat[$t = 20, \nu = 0.01$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex3_0.01_k0.5_x400_M40_m5_A0.01_g-3_t20.eps}} \\ \subfloat[$t = 30, \nu = 0$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth, clip]{ex3_0_k0.5_x400_M40_m5_A0.01_g-3_t30.eps}}\hfill \subfloat[$t = 30, \nu = 0.001$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex3_0.001_k0.5_x400_M40_m5_A0.01_g-3_t30.eps}} \hfill \subfloat[$t = 30, \nu = 0.01$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex3_0.01_k0.5_x400_M40_m5_A0.01_g-3_t30.eps}} \\ \subfloat[$t = 50, \nu = 0$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth, clip]{ex3_0_k0.5_x400_M40_m5_A0.01_g-3_t50.eps}}\hfill \subfloat[$t = 50, \nu = 0.001$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex3_0.001_k0.5_x400_M40_m5_A0.01_g-3_t50.eps}} \hfill \subfloat[$t = 50, \nu = 0.01$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex3_0.01_k0.5_x400_M40_m5_A0.01_g-3_t50.eps}} \caption{Evolution of the marginal distribution function $g(t, x, v_1)$ under different collisional frequencies $\nu$ in the two-stream instability problem. 
The left column corresponds to $\nu =0$, the middle column corresponds to $\nu = 0.001$, and the right column corresponds to $\nu = 0.01$.} \label{fig:ex3_twostream_nu} \end{figure} In this numerical experiment, the initial data is given by a nonisotropic two-stream flow \begin{equation} \label{eq:two_stream} f = \frac{(1+A \cos (k x))}{\sqrt{2 \pi T}}\left[0.5\exp \left(-\frac{\left|\bv-(u_1, 0, 0)^T\right|^{2}} {2 T}\right)+0.5\exp \left(-\frac{\left|\bv+(u_1, 0,0)^T\right|^{2}}{2 T}\right)\right], \end{equation} with $A=0.01$, $T = 0.25$ and $u_1=1$. Here, only the electron-electron collision is considered, and thus the electron-ion collision frequency $\nu_{\beta}$ is set as $0$. Similar initial data and assumptions can be found in \cite{ZhangGamba2017}. The time evolution of the particles with the collisional model of Coulomb interactions $\gamma = -3$ is studied, and the wave number $k$ is chosen as $k = 0.5$. The grid size and expansion order are chosen as $N = 400$ and $M =40$, respectively. Moreover, the quadratic length is set as $M_0 = 5$. Here, the collisional frequency is set as $\nu=0$, $0.001$ and $0.01$ to demonstrate the effect of collision. The marginal distribution function \begin{equation} \label{eq:marginal} g(t, x, v_1) = \int_{\bbR^2} f(t, x, v_1, v_2, v_3) \dd v_2 \dd v_3 \end{equation} is also plotted to show the electron ``trapping'' phenomenon. Clearly, our chosen parameters can approximate the initial distribution function satisfactorily (see Figure \ref{fig:ex3_ini}). To suppress the recurrence and the nonphysical oscillations, the filter developed in \cite{hou2007computing, Filter2017} is applied here. Figure \ref{fig:ex3_twostream_nu} shows the time evolution of the marginal distribution function \eqref{eq:marginal} in the $x-v_1$ plane. We find that for the collisionless case, the linear two-stream instability grows exponentially at first, and then the nonlinearity becomes dominant and ``trapping'' emerges.
At the same time, the original distribution begins to twist and curve until an electron hole-like structure finally forms, which is consistent with the results in \cite{heath2012discontinuous}. For the collisional case, a smaller electron hole-like structure forms as the collisional frequency $\nu$ increases, and no visible hole-like structure occurs in the case of collisional frequency $\nu = 0.01$. This again substantiates the effect of collisions in reducing the ``trapping'' phenomenon. The time evolution of the total energy is also studied to test the conservation property of this numerical scheme. The total energy $\mE_t(t)$ is defined as \begin{equation} \label{eq:total_energy} \mE_t(t) = \frac{1}{2} \Delta x \sum_j \int_{\bbR^3} f(t, x_j, \bv) |\bv|^2 \dd \bv + \frac{1}{2}\mE(t)^2. \end{equation} The evolution of the total energy $\mE_t(t)$ for different collisional frequencies is plotted in Figure \ref{fig:ex3_energy}, from which we can see that although the numerical scheme cannot exactly preserve the total energy, the variation of the total energy is minute, especially in the linear instability stage, where the variation is almost negligible. \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth, height=0.35\textwidth, clip]{ex3_energy_c.eps} \caption{Time evolution of the variation in the total energy $ \mE_t(t)$ for different collisional frequencies in the two-stream instability problem. The variation is defined as $(\mE_{t}(t) - \mE_{t}(0)) / \mE_t(0)$. } \label{fig:ex3_energy} \end{figure} \subsection{Bump-on-tail instability} Bump-on-tail instability is another important micro-instability, which arises as a special case of two-stream instability when the two electron streams have different densities~\cite{Cheng2014}. The distribution function is unstable, which leads to growth in the initial perturbation followed by saturation and oscillation of the particles trapped in the potential well of the wave \cite{Magdi1979, NAKAMURA1999122}.
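Both this experiment and the previous one monitor the total energy \eqref{eq:total_energy}. A minimal sketch of the discrete diagnostic and of the relative variation plotted in the energy figures is given below; the `kinetic_density` argument stands for the velocity integral of $f|\bv|^2$ in each cell and is an assumed interface, not the paper's actual implementation.

```python
import numpy as np

def total_energy(dx, kinetic_density, field_energy):
    """Discrete analogue of the total energy defined above:
    E_t = 0.5 * dx * sum_j (integral of f |v|^2 dv) + 0.5 * E(t)^2.
    `kinetic_density[j]` stands for the velocity integral of f|v|^2
    in cell j (assumed precomputed), `field_energy` for E(t)."""
    return 0.5 * dx * np.sum(kinetic_density) + 0.5 * field_energy**2

def energy_variation(series):
    """Relative variation (E_t(t) - E_t(0)) / E_t(0) shown in the plots."""
    series = np.asarray(series, dtype=float)
    return (series - series[0]) / series[0]
```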
In this numerical experiment, we also begin with a nonisotropic distribution function as \begin{equation} \label{eq:bumpontail} f =\frac{(1+A \sin (k x))}{\sqrt{2 \pi T}}\left[n_m\exp \left(-\frac{\left|\bv-(u_1, 0, 0)^T\right|^{2}} {2 T}\right)+n_b\exp \left(-\frac{\left|\bv+(u_1, 0,0)^T\right|^{2}}{2 T}\right)\right], \end{equation} where $A=0.01$, $T=0.25$ and $u_1=1$. Here, $n_m=0.7$ represents the magnitude of the ``mainstream'', and $n_b=0.3$ represents the magnitude of the ``bump'' on the tail of the ``mainstream''. The wave number $k$ is chosen as $k = 0.3$. The grid size is chosen as $N = 400$, and the expansion order is set as $M =40$, which gives a satisfactory approximation to the initial distribution function (see Figure \ref{fig:ex4_ini}). Moreover, the quadratic length is set as $M_0 = 5$. Similar to the previous numerical experiment, we focus on the model with Coulomb interactions, and the time evolution of particles with collision frequencies $\nu=0$, $0.001$ and $0.01$ is studied. \begin{figure}[!htb] \centering \subfloat[Initial MDF $g(0, x, v_1)$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth, clip]{ex4_0_k0.3_x400_M40_m5_A0.01_g-3_t0.eps}}\hfill \subfloat[Contours of $g(0, x, v_1)$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex4_0_contour.eps}} \hfill \subfloat[Initial MDF $g(0, 0, v_1)$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex4_0_0.eps}} \caption{Initial marginal distribution functions of the bump-on-tail instability problem. In (b) and (c), the blue solid lines correspond to the exact solution, and the red dashed lines correspond to the numerical approximation. Figure (a) shows only the numerical approximation. Figure (c) shows the numerical approximation and the exact solution at the position $x = 0$. } \label{fig:ex4_ini} \end{figure} Figure \ref{fig:ex4_bump_nu} shows the time evolution of the marginal distribution function \eqref{eq:marginal} in the $x-v_1$ plane.
We can observe that for the collisionless case, the bump is trapped by the electric field and gradually forms a crawling vortex-like structure. For the collisional case, the trapping of the bump is much weaker, and the distribution of the ``mainstream'' is less affected. In the case of the collisional frequency $\nu = 0.01$, no vortex-like structure is perceptible. The evolution of the total energy defined in \eqref{eq:total_energy} is also studied. Figure \ref{fig:ex4_energy} shows the evolution of the total energy for different collisional frequencies. Although the total energy is not perfectly preserved, the variation in the total energy is small, especially at the beginning of the evolution, and decreases as the collisional frequency increases. \begin{figure}[!htb] \centering \subfloat[$t = 20, \nu = 0.0$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth, clip]{ex4_0_k0.3_x400_M40_m5_A0.01_g-3_t20.eps}}\hfill \subfloat[$t = 20, \nu = 0.001$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex4_0.001_k0.3_x400_M40_m5_A0.01_g-3_t20.eps}} \hfill \subfloat[$t = 20, \nu = 0.01$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex4_0.01_k0.3_x400_M40_m5_A0.01_g-3_t20.eps}} \\ \subfloat[$t = 30, \nu = 0.0$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth, clip]{ex4_0_k0.3_x400_M40_m5_A0.01_g-3_t30.eps}}\hfill \subfloat[$t = 30, \nu = 0.001$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex4_0.001_k0.3_x400_M40_m5_A0.01_g-3_t30.eps}} \hfill \subfloat[$t = 30, \nu = 0.01$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex4_0.01_k0.3_x400_M40_m5_A0.01_g-3_t30.eps}} \\ \subfloat[$t = 40, \nu = 0.0$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth, clip]{ex4_0_k0.3_x400_M40_m5_A0.01_g-3_t40.eps}}\hfill \subfloat[$t = 40, \nu = 0.001$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex4_0.001_k0.3_x400_M40_m5_A0.01_g-3_t40.eps}}
\hfill \subfloat[$t = 40, \nu = 0.01$] {\includegraphics[width=0.32\textwidth, height=0.21\textwidth,clip]{ex4_0.01_k0.3_x400_M40_m5_A0.01_g-3_t40.eps}} \caption{Evolution of the marginal distribution function $g(t, x, v_1)$ under different collisional frequencies $\nu$ in the bump-on-tail instability problem. The left column corresponds to $\nu =0$ , the middle column corresponds to $\nu = 0.001$, and the right column corresponds to $\nu = 0.01$.} \label{fig:ex4_bump_nu} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth, height=0.35\textwidth, clip]{ex4_energy_c.eps} \caption{Time evolution of the variation in the total energy $ \mE_t(t)$ for different collisional frequencies in the bump-on-tail instability problem. The variation is defined as $(\mE_{t}(t) - \mE_{t}(0)) / \mE_t(0)$. } \label{fig:ex4_energy} \end{figure} \section{Numerical algorithm for the FPL equation} \label{sec:model} In the previous section, the expansion of the distribution functions and the collision terms was discussed. In this section, we introduce the specific numerical method for solving the FPL equation, which is an extension of the method in \cite{VPFP2016}. Due to the complex form of the FPL equation, the Strang splitting method \cite{FVM} is adopted here to split the FPL equation into three parts: \begin{itemize} \item the convection step: \begin{equation} \label{eq:convection} \pd{f(t, \bx, \bv)}{t} + \bv \cdot \nabla_{\bx} f(t, \bx, \bv) = 0, \end{equation} \item the collision step: \begin{equation} \label{eq:collision_step} \pd{f(t, \bx, \bv)}{t} = \mQ[f(t, \bx, \bv)], \end{equation} \item the acceleration step: \begin{gather} \label{eq:force} \pd{f(t, \bx, \bv)}{t} + \bE(t, \bx) \cdot \nabla_{\bv} f(t, \bx, \bv) = 0, \\ \label{eq:force1} \bE(t, \bx) = -\nabla_{\bx} \psi(t, \bx), \qquad -\Delta_{\bx}\psi = \sum_{\eta} q_{\eta}\int_{\bbR^3} f_{\eta}(\bv) \dd \bv. 
\end{gather} \end{itemize} To obtain a finite system for computation, we make an approximation to the distribution function as \begin{equation} \label{eq:discretization} f(t, \bx, \bv) \approx\sum_{ \boldsymbol{i} \in I_M} f_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}(t, \bx) \mH_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}, \end{equation} where $I_M$ is the set of indices, with \begin{equation} I_M=\{\boldsymbol{i} = (i_1,i_2,i_3): 0\leqslant |\boldsymbol{i}|\leqslant M, i_1, i_2, i_3\in \bbN\}, \end{equation} and $M\in\mathbb{Z}_+$ is the expansion order. The distribution function $f(t, \bx, \bv)$ is determined by the coefficients $\{f_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}(t, \bx), |\boldsymbol{i}|\leqslant M\}$, which are stored as a vector in the implementation as \begin{equation} \label{eq:fvec} \bbf^{[\tilde{\bu}, \tilde{T}]} = \left(f_{\bz}^{ [\tilde{\bu}, \tilde{T}]}, f_{\be_1}^{[\tilde{\bu}, \tilde{T}]}, f_{\be_2}^{[\tilde{\bu},\tilde{T}]}, f_{\be_3}^{[\tilde{\bu}, \tilde{T}]}, \cdots, f_{\boldsymbol{i}}^{[\tilde{\bu}, \tilde{T}]}, \cdots\right)_{|\boldsymbol{i}|\leqslant M}^T. \end{equation} Thus, the reduced collision term \eqref{eq:col_new_coe} is also approximated as \begin{equation} \label{eq:discretizationq} \mQ^{\rm new}[f] \approx\sum_{ \boldsymbol{i}\in I_M} Q_{\boldsymbol{i}}^{{\rm new}}(t, \bx) \mH_{\boldsymbol{i}}, \end{equation} which is determined by the coefficients $\{Q_{\boldsymbol{i}}^{{\rm new}}(t, \bx), |\boldsymbol{i}|\leqslant M\}$ and stored as \begin{equation} \label{eq:qvec} \bbQ^{\rm new} = \left(Q_{\bz}^{\rm new}, Q_{\be_1}^{\rm new}, Q_{\be_2}^{\rm new}, Q_{\be_3}^{\rm new}, \cdots, Q_{\boldsymbol{i}}^{\rm new}, \cdots\right)_{|\boldsymbol{i}|\leqslant M}^T. \end{equation} Here, the length of these vectors is \begin{equation} \label{eq:N} N = \frac{(M+1)(M+2)(M+3)}{6}.
\end{equation} For the spatial discretization, we restrict our study to a one-dimensional spatial domain, and the standard finite volume discretization is adopted along that direction. Let $\Gamma_h$ be a uniform mesh in $\Omega \subset \bbR$, with an index $s$ as the identifier of each cell and $x_0$ as the left endpoint. The mesh $\Gamma_h$ can be expressed by \begin{equation} \label{eq:Gamma} \Gamma_h = \{\Gamma_s = x_0 + (s h , (s+1)h): s \in \bbN\}, \end{equation} and the volume average values of $\bbf^{[\tilde{\bu}, \tilde{T}]}$ and $\bbQ^{\rm new}$ at the cell $\Gamma_s$ are $\bbf_s^{[\tilde{\bu}, \tilde{T}]}$ and $\bbQ_{s}^{\rm new}$, respectively. In the following sections, a numerical scheme is proposed to update the distribution function or, more precisely, the coefficients $\bbf_s^{[\tilde{\bu}, \tilde{T}]}$ at each step. We should point out that the expansion centers $\tilde{\bu}$ and $\tilde{T}$ in \eqref{eq:fvec} are chosen differently at each step for different purposes. The expansion center is set as the standard expansion center $\tilde{\bu} = \bz$ and $\tilde{T} =1$ at the collision step to utilize the reduced collision model \eqref{eq:new_collision}. The same expansion center is utilized at the convection step to reduce the computational cost of the projection. Furthermore, the local velocity and temperature are adopted as the expansion center at the acceleration step, so that the governing equation \eqref{eq:force} can be reduced to an ODE. The selection of the expansion centers at each step is further explained in the following sections. \subsection{Convection step} We begin the explanation of the numerical scheme from the convection step. To reduce the computational cost, we choose $\tilde{\bu} = \bz$ and $\tilde{T} = 1$ here, the same expansion center as that in the collision step.
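As a standalone sanity check, the index set $I_M$ and the coefficient-vector length \eqref{eq:N} can be enumerated directly (a sketch, independent of the actual solver):

```python
from itertools import product

def index_set(M):
    """All multi-indices i = (i1, i2, i3) with i1 + i2 + i3 <= M."""
    return [i for i in product(range(M + 1), repeat=3) if sum(i) <= M]

def vector_length(M):
    """Closed form N = (M+1)(M+2)(M+3)/6 for the number of coefficients."""
    return (M + 1) * (M + 2) * (M + 3) // 6

# The direct enumeration and the closed form agree, e.g. for M = 20:
assert len(index_set(20)) == vector_length(20)  # 1771 coefficients
```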
Then, the approximation to the distribution function \eqref{eq:discretization} is reduced to \begin{equation} \label{eq:discretization_con} f(t, \bx, \bv) \approx\sum_{ \boldsymbol{i} \in I_M} f_{\boldsymbol{i}}(t, \bx) \mH_{\boldsymbol{i}}. \end{equation} In this case, one projection is saved, which reduces the computational cost by $O(M^4)$, since the same expansion center is used at the convection and collision steps. By substituting \eqref{eq:discretization_con} into \eqref{eq:convection} and matching the corresponding coefficients, we derive the equations for the coefficients $f_{\boldsymbol{i}}$ as \begin{equation} \label{eq:f_alpha} \pd{}{t} f_{\boldsymbol{i}} + \pd{}{x} \left( (i_1+1)f_{\boldsymbol{i} + \be_1} + f_{\boldsymbol{i} - \be_1} \right) = 0, \qquad |\boldsymbol{i}| \leqslant M, \end{equation} where terms with negative indices are regarded as zero. With the coefficient vector introduced in \eqref{eq:fvec}, \eqref{eq:f_alpha} can be rewritten as \begin{equation} \label{eq:f_eq} \pd{\bbf}{t} + \mA \pd{\bbf}{x} = 0, \end{equation} where $\mA$ is an $N\times N$ matrix, the entries of which are determined by \eqref{eq:f_alpha}. Supposing that $\bbf_s^n$ is the numerical solution to $\bbf$ at time $t^n$ and cell $s$, the convection equation \eqref{eq:f_eq} is solved by the forward Euler scheme as \begin{equation} \label{eq:scheme} \bbf_s^{n+1, \ast} =\bbf_s^n - \frac{\Delta t}{\Delta x}[F_{s+1/2}^n - F_{s-1/2}^n], \end{equation} where $\bbf_s^{n+1, \ast}$ denotes the numerical solution after the convection step at time $t^{n+1}$ and $F_{s+1/2}^n$ is the numerical flux through the boundary of the cells $\Gamma_s$ and $\Gamma_{s+1}$.
In our method, the HLL flux \cite{VPFP2016} is utilized, which has the following form: \begin{equation} \label{eq:HLL} F_{s+1/2}^n = \left\{ \begin{array}{ll} \mA \bbf_{s}^{n} & \lambda^L \geqslant 0, \\[2mm] \dfrac{ \lambda^R \mA \bbf_{s}^{n} - \lambda^L \mA \bbf_{s+1}^{n} + \lambda^R\lambda^L\left(\bbf_{s+1}^{n} - \bbf_{s}^{n}\right) }{\lambda^R - \lambda^L}, & \lambda^L < 0 < \lambda^R, \\[2mm] \mA \bbf_{s+1}^{n}, & \lambda^R \leqslant 0, \end{array} \right. \end{equation} where $\lambda^L $ and $\lambda^R$ are the smallest and largest characteristic velocities, with $\lambda^L = - C_{M+1}$ and $\lambda^R = C_{M+1} $. Here, $C_{M+1}$ is the maximum root of the Hermite polynomial of degree $M+1$. To obtain a high-order numerical scheme, the linear reconstruction \cite{Hu2020Numerical} is adopted for the distribution function. In addition, the time step is decided by the CFL condition \begin{equation} \label{eq:CFL} \frac{ \Delta t C_{M+1} }{\Delta x} < {\rm CFL}. \end{equation} \subsection{Collision step} \label{sec:col step} The reduced collision model \eqref{eq:col_new_coe} is utilized here for the collision step. Substituting \eqref{eq:discretization_con} and \eqref{eq:discretizationq} into \eqref{eq:collision_step}, we obtain the governing equations of $f$ as \begin{equation} \label{eq:collision_eq} \pd{\bbf}{t} = \bbQ^{\rm new}. \end{equation} Here, \eqref{eq:collision_eq} is solved again by the forward Euler scheme as \begin{equation} \label{eq:Euler_col} \bbf_s^{n+1, \ast\ast} = \bbf_s^{n+1, \ast } + \Delta t \bbQ_s^{{\rm new}, n+1, \ast}, \end{equation} where $ \bbQ_s^{{\rm new}, n+1,\ast}$ is the numerical solution of the reduced collision operator $\mQ^{\rm new}$ after the convection step at time $t^{n+1}$ and cell $s$, with $\bbf_s^{n+1, \ast}$ and $\bbf_s^{n+1, \ast\ast}$ being the numerical solution after the convection step and after the collision step at time $t^{n+1}$, respectively. 
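The collision update \eqref{eq:Euler_col} amounts to a single explicit step per cell; the sketch below assumes a routine `reduced_collision` (hypothetical name) that evaluates the coefficients of $\mQ^{\rm new}$ from the coefficient vector:

```python
def collision_step(f, dt, reduced_collision):
    """Forward Euler collision update f^{n+1,**} = f^{n+1,*} + dt * Q^new(f),
    as in the scheme above. `reduced_collision` is a hypothetical hook
    returning the reduced collision operator evaluated on the
    coefficient vector (or scalar, for illustration)."""
    return f + dt * reduced_collision(f)
```

A higher-order Runge-Kutta update would simply call `reduced_collision` at the intermediate stages instead.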
High-order Runge-Kutta numerical schemes can also be adopted to update the collision term in \eqref{eq:Euler_col}. \subsection{Acceleration step} At the acceleration step, the expansion center is chosen as the local macroscopic velocity and temperature or, more precisely, $\tilde{\bu} = \bu(t, x)$ and $\tilde{T} = T(t, x)$, both defined in \eqref{eq:rlt}. Consequently, the governing equation \eqref{eq:force} is reduced to an ODE system \cite{Wang}, which greatly reduces the computational cost. For the one-dimensional spatial problem, where the macroscopic velocity $\bu(t, x)$ is reduced to $\bu = (u_1, 0, 0)$, the numerical system for the acceleration step is simply reduced to solving an ODE for the macroscopic velocity $u_1$ as \begin{equation} \label{eq:force_1} \begin{gathered} \pd{u_1}{t} - E_1 = 0, \qquad E_1(t, x) = -\pd{\psi(t,x)}{x}, \qquad -\partial_{xx}\psi = \sum_{\eta} q_{\eta} \int_{\bbR^3} f_{\eta}(\bv) \dd \bv. \end{gathered} \end{equation} A detailed derivation of \eqref{eq:force_1} can be found in Appendix \ref{app:acc}. Since the expansion center at the acceleration step is different from that at the collision step, Theorem \ref{thm:project} is utilized to carry out the projections. We organize the acceleration step into the following procedure: \begin{enumerate} \item Find $\left(\bbf_s^{[\bu, T]}\right)^{n+1, \ast\ast}$ from $\bbf_s^{n+1, \ast\ast}$ based on Theorem \ref{thm:project}, where $\bbf_s^{n+1, \ast\ast}$ are the numerical solutions after the collision step at $t = t^{n+1}$. \item Solve \eqref{eq:force1} to obtain $(E_1)_{s}^{n+1, \ast\ast}$ with the finite difference scheme \cite{VPFP2016}. \item Solve \eqref{eq:force_1} by the forward Euler scheme \begin{equation} \label{eq:forwardEuler} (u_1)_{s}^{n+1} = (u_1)_{s}^{n+1,\ast\ast} + \Delta t (E_1)_{ s}^{n+1, \ast\ast}, \end{equation} where $(u_1)_{s}^{n+1,\ast\ast}$ is the macroscopic velocity at cell $s$ after the collision step at time $t=t^{n+1}$.
\item Obtain $\left(\bbf_{s}^{[\bu, T]}\right)^{n+1}$ by updating the expansion center to $(u_1)_{s}^{n+1}$. \item Find $\bbf_{s}^{n+1}$ from $\left(\bbf_{s}^{[\bu, T]}\right)^{n+1}$ based on Theorem \ref{thm:project}. \end{enumerate}
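At a high level, the pieces of this section can be sketched as follows; the ordering of the half and full sub-steps in `strang_step` is an illustrative choice, and the sub-step routines are hypothetical hooks rather than the paper's implementation:

```python
def acceleration_update(u1, E1, dt):
    """Forward Euler step for the reduced ODE du1/dt = E1 at the
    acceleration stage: u1^{n+1} = u1^{n+1,**} + dt * E1^{n+1,**}."""
    return u1 + dt * E1

def strang_step(f, dt, convect, collide, accelerate):
    """One splitting step combining the three sub-steps (convection,
    collision, acceleration). The half-convection / full-collision /
    full-acceleration sequencing shown here is only one possible
    arrangement; the paper's exact ordering may differ."""
    f = convect(f, dt / 2)      # half convection step
    f = collide(f, dt)          # full collision step
    f = accelerate(f, dt)       # full acceleration step
    return convect(f, dt / 2)   # second half convection step
```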
\section{Introduction} Human learning is organized into a curriculum of interdependent learning situations of various complexities. For sure, Homer learned to formulate words before he could compose the Iliad. This idea was first transferred to machine learning in \mycite{selfridge}, where authors designed a \textit{learning scheme} to train a cart pole controller: first training on long and light poles, then gradually moving towards shorter and heavier poles. A related concept was also developed by \mycite{Schmid}, who proposed to improve world model learning by organizing exploration through \textit{artificial curiosity}. In the following years, curriculum learning was applied to organize the presentation of training examples or the growth in model capacity in various supervised learning settings~\cite{elman,krueger,bengiocl}. In parallel, the developmental robotics community proposed \textit{learning progress} as a way to self-organize open-ended \textit{developmental trajectories} of learning agents~\cite{oudeyer2007intrinsic}. Inspired by these earlier works, the Deep Reinforcement Learning (DRL) community developed a family of mechanisms called \textit{Automatic Curriculum Learning}, which we propose to define as follows: \textit{\textbf{Automatic Curriculum Learning (ACL)} for DRL is a family of mechanisms that automatically adapt the distribution of training data by learning to adjust the selection of learning situations to the capabilities of DRL agents.} \paragraph{Related fields.}ACL shares many connections with other fields. For example, ACL can be used in the context of \textit{Transfer Learning} where agents are trained on one distribution of tasks and tested on another~\cite{taylortransfer}. \textit{Continual Learning} trains agents to be robust to unforeseen changes in the environment while ACL assumes agents to stay in control of learning scenarios~\cite{continual-learning-review}. 
\textit{Policy Distillation} techniques \cite{pol-dil-review} form a complementary toolbox to target multi-task RL settings, where knowledge can be transferred from one policy to another (e.g. from task-expert policies to a generalist policy). \paragraph{Scope.}This short survey proposes a typology of ACL mechanisms when combined with DRL algorithms and, as such, does not review population-based algorithms implementing ACL (e.g.~\mycite{imgep}, \mycite{poet}). As per our adopted definition, ACL refers to mechanisms \textit{explicitly} optimizing the automatic organization of training data. Hence, they should not be confounded with \textit{emergent curricula}, by-products of distinct mechanisms. For instance, the on-policy training of a DRL algorithm is not considered ACL, because the shift in the distribution of training data \textit{emerges} as a by-product of policy learning. Given this is a short survey, we do not present the details of every particular mechanism. As the current ACL literature lacks theoretical foundations to ground proposed approaches in a formal framework, this survey focuses on empirical results. \section{Automatic Curriculum Learning for DRL} \label{sec:2} This section formalizes the definition of ACL for Deep RL and proposes a classification. \paragraph{Deep Reinforcement Learning} \hspace{-0.2cm}is a family of algorithms which leverage deep neural networks for function approximation to tackle reinforcement learning problems. DRL agents learn to perform sequences of actions $a$ given states $s$ in an environment so as to maximize some notion of cumulative reward $r$~\cite{sutton2018reinforcement}. 
Such problems are usually called \textit{tasks} and formalized as Markov Decision Processes (MDPs) of the form $T=\langle \mathcal{S},\mathcal{A},\mathcal{P}, \mathcal{R}, \rho_0 \rangle$ where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}:\mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow [0,1]$ is a transition function characterizing the probability of switching from the current state $s$ to the next state $s'$ given action $a$, $\mathcal{R}:\mathcal{S} \times \mathcal{A} \rightarrow \mathbb{R}$ is a reward function and $\rho_0$ is a distribution of initial states. To challenge the generalization capacities of agents \cite{coinrun}, the community introduced multi-task DRL problems where agents are trained on tasks sampled from a task space: $T \sim \mathcal{T}$. In multi-goal DRL, policies and reward functions are conditioned on goals, which augments the task-MDP with a goal space $\mathcal{G}$~\cite{uvfa}. \paragraph{Automatic Curriculum Learning} \hspace{-0.2cm} mechanisms propose to learn a task selection function $\mathcal{D}:\mathcal{H}\to\mathcal{T}$ where $\mathcal{H}$ can contain any information about past interactions. This is done with the objective of maximizing a metric $P$ computed over a distribution of target tasks $\mathcal{T}_{target}$ after $N$ training steps: \begin{equation} \label{eq:1} Obj: \max_{\mathcal{D}} \int_{T\sim \mathcal{T}_{target}} \! P_T^N\, \mathrm{d}T, \end{equation} where $P_T^N$ quantifies the agent's behavior on task $T$ after $N$ training steps (e.g. cumulative reward, exploration score). In that sense, ACL can be seen as a particular case of meta-learning, where $\mathcal{D}$ is learned along training to improve further learning. \paragraph{ACL Typology.} We propose a classification of ACL mechanisms based on three dimensions: \begin{enumerate}[leftmargin=0.45cm, nolistsep] \item \textit{Why use ACL?} We review the different objectives that ACL has been used for (Section~\ref{sec:main_objective}).
\item \textit{What does ACL control?} ACL can target different aspects of the learning problem (e.g. environments, goals, reward functions; Section~\ref{sec:lever}). \item \textit{What does ACL optimize?} ACL mechanisms usually target surrogate objectives (e.g. learning progress, diversity) to alleviate the difficulty of optimizing the main objective $Obj$ directly (Section~\ref{sec:surrogate_objective}). \end{enumerate} \section{Why use ACL?} \label{sec:main_objective} ACL mechanisms can be used for different purposes that can be seen as particular instantiations of the general objective defined in Eq~\ref{eq:1}. \paragraph{Improving performance on a restricted task set.} Classical RL problems are about solving a given task, or a restricted task set (e.g. tasks that vary only by their initial state). In these simple settings, ACL has been used to improve sample efficiency or asymptotic performance~\cite{per,apex,SAUNA}. \paragraph{Solving hard tasks.} Sometimes the target tasks cannot be solved directly (e.g. they are too hard or have sparse rewards). In that case, ACL can be used to pose auxiliary tasks to the agent, gradually guiding its learning trajectory from simple to difficult tasks until the target tasks are solved. In recent works, ACL was used to schedule DRL agents from simple mazes to hard ones \cite{tscl}, or from close-to-success initial states to challenging ones in robotic control scenarios \cite{reverse-cur,BaRC} and video games \cite{montezuma-single-demo}. Another line of work proposes to use ACL to organize the exploration of the state space so as to solve sparse reward problems~\cite{countbased,icm,disagreement,pathakdisagreement,rnd}. In these works, the performance reward is augmented with an intrinsic reward guiding the agent towards uncertain areas of the state space. \paragraph{Training generalist agents.} Generalist agents must be able to solve tasks they have not encountered during training (e.g. continuous task spaces or distinct training and testing sets).
ACL can shape learning trajectories to improve generalization, e.g. by avoiding infeasible task subspaces \cite{portelas2019}. ACL can also help agents to generalize from simulation settings to the real world (Sim2Real)~\cite{OpenAI2019SolvingRC,ADRmila} or to maximize performance and robustness in multi-agent settings via Self-Play~\cite{alpha-go-zero,rarl,openaiSumos,Baker2019HidenSeek,vinyals2019grandmaster}. \paragraph{Training multi-goal agents.}In multi-goal RL, agents are trained and tested on tasks that vary by their goals. Because agents can control the goals they target, they learn a behavioral repertoire through one or several goal-conditioned policies. The adoption of ACL in this setting can improve performance on a testing set of pre-defined goals. Recent works demonstrated the benefits of using ACL in scenarios such as multi-goal robotic arm manipulation \cite{her,eb-per,fournier-accuracy-acl,CGM,zhao2019curiosity,curriculu-her,curious} or multi-goal navigation \cite{asymetricSP,goalgan,settersolver,cideron2019self}. \paragraph{Organizing open-ended exploration.}In some multi-goal settings, the space of achievable goals is not known in advance. Agents must discover achievable goals as they explore and learn how to represent and reach them. For this problem, ACL can be used to organize the discovery and acquisition of repertoires of robust and diverse behaviors, e.g. from visual observations~\cite{diayn,skewfit,metarl-carml} or from natural language interactions with social peers~\cite{le2,imagine}. \section{What does ACL control?} \label{sec:lever} While \textit{on-policy} DRL algorithms directly use training data generated by the current behavioral policy, \textit{off-policy} algorithms can use trajectories collected from other sources. This effectively decouples \textit{data collection} from \textit{data exploitation}. Hence, we organize this section into two categories: one reviewing ACL for data collection, the other ACL for data exploitation.
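Before detailing the levers ACL can act on, the shared structure of these mechanisms can be made concrete: a task selection function $\mathcal{D}$ (Section~\ref{sec:2}) maps interaction history to the next task, interleaved with ordinary policy updates. A minimal sketch, with illustrative names and a uniform placeholder in place of any concrete selection rule:

```python
import random

class TaskSelector:
    """Hypothetical task-selection function D: history -> task (Eq. 1).

    Concrete ACL methods differ in how past interactions shape the next
    task distribution; the uniform choice below is only a placeholder.
    """

    def __init__(self, task_space):
        self.task_space = list(task_space)
        self.history = []  # any information about past interactions

    def select(self):
        return random.choice(self.task_space)

    def update(self, task, outcome):
        self.history.append((task, outcome))

def training_loop(selector, rollout, learner_update, n_steps):
    """Generic ACL loop: pick a task, collect data, update agent and selector."""
    for _ in range(n_steps):
        task = selector.select()
        trajectory, score = rollout(task)   # data collection on the task
        learner_update(trajectory)          # data exploitation (DRL update)
        selector.update(task, score)        # adapt the curriculum
```

The mechanisms surveyed below differ only in how \texttt{select} and \texttt{update} are implemented, and in which element of the task MDP the selected "task" modifies.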
\begin{figure}[h!] \centering \includegraphics[width=\columnwidth]{figures/ACL.pdf} \caption{ACL for data collection. ACL can control each element of task MDPs to shape the learning trajectories of agents. Given metrics of the agent's behavior such as performance or visited states, ACL methods generate new tasks adapted to the agent's abilities.} \label{fancy-fig} \end{figure} \subsection{ACL for Data Collection} \label{sec:lever_data_collection} During data collection, ACL organizes the sequential presentation of tasks as a function of the agent's capabilities. To do so, it generates tasks by acting on elements of task MDPs (e.g. $\mathcal{R}, \mathcal{P}, \rho_0$, see Fig.~\ref{fancy-fig}). The curriculum can be designed on a discrete set of tasks or on a continuous task space. In single-task problems, ACL can define a set of auxiliary tasks to be used as stepping stones towards the resolution of the main task. The following paragraphs organize the literature according to the nature of the control exerted by ACL: \paragraph{Initial state $(\rho_0)$.}The distribution of initial states $\rho_0$ can be controlled to modulate the difficulty of a task. Agents start learning from states close to a given target (i.e. easier tasks), then move towards harder tasks by gradually increasing the distance between the initial states and the target. This approach is especially effective for designing auxiliary tasks in complex control scenarios with sparse rewards~\cite{reverse-cur,BaRC,montezuma-single-demo}. \paragraph{Reward functions $(\mathcal{R})$.}ACL can be used for automatic reward shaping: adapting the reward function $\mathcal{R}$ as a function of the learning trajectory of the agent. In curiosity-based approaches especially, an internal reward function guides agents towards areas associated with high uncertainty to foster exploration~\cite{countbased,icm,disagreement,pathakdisagreement,rnd}.
As the agent explores, uncertain areas --and thus the reward function-- change, which automatically devises a learning curriculum guiding the exploration of the state space. In~\mycite{fournier-accuracy-acl}, an ACL mechanism controls the tolerance in a goal reaching task. Starting with a low accuracy requirement, it gradually and automatically shifts towards stronger accuracy requirements as the agent progresses. In \mycite{diayn} and \mycite{metarl-carml}, the authors propose to learn a skill space in unsupervised settings (from the state space and from pixels, respectively), from which reward functions are derived that promote both behavioral diversity and skill separation. \paragraph{Goals $(\mathcal{G})$.}In multi-goal DRL, ACL techniques can be applied to order the selection of goals from discrete sets~\cite{le2}, continuous goal spaces~\cite{asymetricSP,goalgan,skewfit,settersolver} or even sets of different goal spaces~\cite{CGM,curious}. Although goal spaces are usually pre-defined, recent work proposed to apply ACL on a goal space learned from \textit{pixels} using a generative model \cite{skewfit}. \paragraph{Environments $(\mathcal{S}, \mathcal{P})$.}ACL has been successfully applied to organize the selection of environments from a discrete set, e.g. to choose among Minecraft mazes~\cite{tscl} or Sonic the Hedgehog levels~\cite{tscllike}. A more general --and arguably more powerful-- approach is to leverage parametric \textit{Procedural Content Generation} (PCG) techniques~\cite{risiPCG} to generate rich task spaces. In that case, ACL makes it possible to detect relevant niches of progress~\cite{OpenAI2019SolvingRC,portelas2019,ADRmila}. \paragraph{Opponents~$(\mathcal{S}, \mathcal{P})$.}Self-play algorithms train agents against present or past versions of themselves~\cite{alpha-go-zero,openaiSumos,vinyals2019grandmaster,Baker2019HidenSeek}.
The set of opponents directly maps to a set of tasks, as different opponents result in different transition functions $\mathcal{P}$ and possibly different state spaces $\mathcal{S}$. Self-play can thus be seen as a form of ACL, where the sequence of opponents (i.e. tasks) is organized to maximize performance and robustness. In single-agent settings, an adversary policy can be trained to perturb the main agent~\cite{rarl}. \subsection{ACL for Data Exploitation} \label{sec:lever_data_exploitation} ACL can also be used in the data exploitation stage, by acting on training data previously collected and stored in a \textit{replay memory}. It enables the agent to ``mentally experience the effects of its actions without actually executing them'', a technique known as \textit{experience replay}~\cite{lin1992self}. At the data exploitation level, ACL can exert two types of control on the distribution of training data: \textit{transition selection} and \textit{transition modification}. \paragraph{Transition selection $(\mathcal{S}\times\mathcal{A})$.}Inspired by the \textit{prioritized sweeping} technique, which organizes the order of updates in planning methods~\cite{moore1993prioritized}, \mycite{per} introduced \textit{prioritized experience replay} (PER) for model-free off-policy RL to bias the selection of transitions for policy updates, as some transitions might be \textit{more informative} than others. Different ACL methods propose different metrics to evaluate the importance of each transition~\cite{per,eb-per,curious,zhao2019curiosity,le2,imagine}. Transition selection ACL techniques can also be used with on-policy algorithms to filter online learning batches \cite{SAUNA}. \paragraph{Transition modification $(\mathcal{G})$.}In multi-goal settings, \textit{Hindsight Experience Replay} (HER) proposes to reinterpret trajectories collected with a given target goal with respect to a different goal \cite{her}.
In practice, HER modifies transitions by substituting target goals $g$ with one of the outcomes $g'$ achieved later in the trajectory, as well as the corresponding reward $r'=R_{g'}(s,a)$. By explicitly biasing goal substitution to increase the probability of sampling rewarded transitions, HER shifts the training data distribution from simpler goals (achieved now) towards more complex goals as the agent makes progress. Substitute goal selection can be guided by other ACL mechanisms (e.g. favoring diversity~\cite{curriculu-her,cideron2019self}). \section{What Does ACL Optimize?} \label{sec:surrogate_objective} \begin{table*}[htb!] \small \centering \begin{tabular}{llll} \toprule \thead{Algorithm} & \thead{Why use ACL?} & \thead{What does ACL control?} & \thead{What does ACL optimize?} \\ \midrule ACL for Data Collection (\S~\ref{sec:lever_data_collection}): & & & \\ \midrule ADR (OpenAI) ~\cite{OpenAI2019SolvingRC} & Generalization & Environments $(\mathcal{S},\mathcal{P})$ (PCG) & Intermediate difficulty \\ ADR (Mila) ~\cite{ADRmila} & Generalization & Environments $(\mathcal{P})$ (PCG) & Intermediate diff. 
\& Diversity \\ ALP-GMM ~\cite{portelas2019} & Generalization & Environments $(\mathcal{S})$ (PCG) & LP \\ RARL ~\cite{rarl} & Generalization & Opponents $(\mathcal{P})$ & ARM \\ AlphaGO Zero ~\cite{alpha-go-zero} & Generalization & Opponents $(\mathcal{P})$ & ARM \\ Hide\&Seek ~\cite{Baker2019HidenSeek} & Generalization & Opponents $(\mathcal{P})$ & ARM \\ AlphaStar ~\cite{vinyals2019grandmaster} & Generalization & Opponents $(\mathcal{P})$ & ARM \& Diversity \\ Competitive SP ~\cite{openaiSumos} & Generalization & Opponents $(\mathcal{P})$ & ARM \& Diversity \\ RgC ~\cite{tscllike} & Generalization & Environments $(\mathcal{S})$ (DS) & LP \\ RC ~\cite{reverse-cur} & Hard Task & Initial states $(\rho_0)$ & Intermediate difficulty \\ $1$-demo RC~\cite{montezuma-single-demo} & Hard Task & Initial states $(\rho_0)$ & Intermediate difficulty \\ Count-based ~\cite{countbased} & Hard Task & Reward functions $(\mathcal{R})$ & Diversity \\ RND ~\cite{rnd} & Hard Task & Reward functions $(\mathcal{R})$ & Surprise (model error) \\ ICM ~\cite{icm} & Hard Task & Reward functions $(\mathcal{R})$ & Surprise (model error) \\ Disagreement ~\cite{pathakdisagreement} & Hard Task & Reward functions $(\mathcal{R})$ & Surprise (model disagreement) \\ MAX ~\cite{disagreement} & Hard Task & Reward functions $(\mathcal{R})$ & Surprise (model disagreement) \\ BaRC ~\cite{BaRC} & Hard Task & Initial states $(\rho_0)$ & Intermediate difficulty \\ TSCL ~\cite{tscl} & Hard Task & Environments $(\mathcal{S})$ (DS) & LP \\ Acc-based CL ~\cite{fournier-accuracy-acl} & Multi-Goal & Reward function $(\mathcal{R})$ & LP \\ Asym. 
SP ~\cite{asymetricSP} & Multi-Goal & Goals $(\mathcal{G})$, initial states $(\rho_0)$ & Intermediate difficulty \\ GoalGAN ~\cite{goalgan} & Multi-Goal & Goals $(\mathcal{G})$ & Intermediate difficulty \\ Setter-Solver ~\cite{settersolver} & Multi-Goal & Goals $(\mathcal{G})$ & Intermediate difficulty \\ CGM ~\cite{CGM} & Multi-Goal & Goals $(\mathcal{G})$ & Intermediate difficulty \\ CURIOUS ~\cite{curious} & Multi-Goal & Goals $(\mathcal{G})$ & LP \\ Skew-fit ~\cite{skewfit} & Open-Ended Explo. & Goals $(\mathcal{G})$ (from pixels) & Diversity \\ DIAYN \cite{diayn} & Open-Ended Explo. & Reward functions $(\mathcal{R})$ & Diversity \\ CARML ~\cite{metarl-carml} & Open-Ended Explo. & Reward functions $(\mathcal{R})$ & Diversity \\ LE2 ~\cite{le2} & Open-Ended Explo. & Goals $(\mathcal{G})$ & Reward \& Diversity \\ \midrule ACL for Data Exploitation (\S~\ref{sec:lever_data_exploitation}): & & & \\ \midrule Prioritized ER ~\cite{per} & Performance boost & Transition selection $(\mathcal{S}\times\mathcal{A})$ & Surprise (TD-error) \\ SAUNA ~\cite{SAUNA} & Performance boost & Transition selection $(\mathcal{S}\times\mathcal{A})$ & Surprise (V-error) \\ CURIOUS ~\cite{curious} & Multi-goal & Trans. select. \& mod. $(\mathcal{S}\times\mathcal{A}, \mathcal{G})$ & LP \& Energy \\ HER ~\cite{her} & Multi-goal & Transition modification $(\mathcal{G})$ & Reward \\ HER-curriculum ~\cite{curriculu-her} & Multi-goal & Transition modification $(\mathcal{G})$ & Diversity \\ Language HER ~\cite{cideron2019self} & Multi-goal & Transition modification $(\mathcal{G})$ & Reward \\ Curiosity Prio. ~\cite{zhao2019curiosity} & Multi-goal & Transition selection $(\mathcal{S}\times\mathcal{A})$ & Diversity \\ En. Based ER ~\cite{eb-per} & Multi-goal & Transition selection $(\mathcal{S}\times\mathcal{A})$ & Energy \\ LE2 ~\cite{le2} & Open-Ended Explo. & Trans. select. \& mod. $(\mathcal{S}\times\mathcal{A}, \mathcal{G})$ & Reward \\ IMAGINE ~\cite{imagine} & Open-Ended Explo. & Trans. 
select. \& mod. $(\mathcal{S}\times\mathcal{A}, \mathcal{G})$ & Reward \\ \bottomrule \end{tabular} \caption{Classification of the surveyed papers. The classification is organized along the three dimensions defined in the above text. In \textit{Why use ACL}, we only report the main objective of each work. When ACL controls the selection of environments, we specify whether it is selecting them from a discrete set (\textit{DS}) or through parametric Procedural Content Generation (\textit{PCG}). We abbreviate \textit{adversarial reward maximization} by \textit{ARM} and \textit{learning progress} by \textit{LP}.} \label{bigtable} \end{table*} Objectives such as the average performance on a set of testing tasks after $N$ training steps can be difficult to optimize directly. To alleviate this difficulty, ACL methods use a variety of surrogate objectives. \paragraph{Reward.}As DRL algorithms learn from reward signals, rewarded transitions are usually considered more informative than others, especially in sparse reward problems. In such problems, ACL methods that act on transition selection may artificially increase the ratio of high versus low rewards in the batches of transitions used for policy updates~\cite{narasimhan2015language,unreal,imagine}. In multi-goal RL settings where some goals might be much harder than others, this strategy can be used to balance the proportion of positive rewards for each of the goals~\cite{curious,le2}. Transition modification methods favor rewards as well, substituting goals to increase the probability of observing rewarded transitions~\cite{her,cideron2019self,le2,imagine}. In data collection however, adapting training distributions towards more rewarded experience leads the agent to focus on tasks that are already solved. Because collecting data from already solved tasks hinders learning, data collection ACL methods rather focus on other surrogate objectives.
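The goal-substitution idea can be made concrete with a minimal HER-style relabeling sketch; the tuple layout and function names below are illustrative, not the original implementation:

```python
import random

def relabel_with_hindsight(trajectory, reward_fn, strategy="final"):
    """HER-style relabeling sketch (illustrative names and data layout).

    trajectory: list of (state, action, achieved_goal, target_goal) tuples.
    reward_fn(goal, state, action): recomputes the reward under a new goal.
    Substituting goals that were actually achieved later in the episode
    raises the fraction of rewarded transitions seen by the learner.
    """
    relabeled = []
    for i, (s, a, achieved, _target) in enumerate(trajectory):
        if strategy == "final":   # substitute the episode's final outcome
            g_sub = trajectory[-1][2]
        else:                     # "future": a random outcome achieved later
            g_sub = trajectory[random.randrange(i, len(trajectory))][2]
        relabeled.append((s, a, g_sub, reward_fn(g_sub, s, a)))
    return relabeled
```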
\paragraph{Intermediate difficulty.} A more natural surrogate objective for data collection is \textit{intermediate difficulty}. Intuitively, agents should target tasks that are neither too easy (already solved) nor too difficult (unsolvable) to maximize their learning progress. Intermediate difficulty has been used to adapt the distribution of initial states from which to perform a hard task \cite{reverse-cur,montezuma-single-demo,BaRC}. This objective is also implemented in {GoalGAN}, where a curriculum generator based on a Generative Adversarial Network is trained to propose goals for which the agent reaches intermediate performance~\cite{goalgan}. \mycite{settersolver} further introduced a \textit{judge network} trained to predict the feasibility of a given goal for the current learner. Instead of labelling tasks with an intermediate level of difficulty as in GoalGAN, this Setter-Solver model generates goals associated with a random feasibility uniformly sampled from $[0,1]$. The type of goals varies as the agent progresses, but the agent is always asked to perform goals sampled from a distribution balanced in terms of feasibility. In \mycite{asymetricSP}, tasks are generated by an RL policy trained to propose either goals or initial states so that the resulting navigation task is of intermediate difficulty w.r.t. the current agent. Intermediate difficulty ACL has also been driving successes in Sim2Real applications, where it sequences \textit{domain randomizations} to train policies that are robust enough to generalize from simulators to real-world robots~\cite{ADRmila,OpenAI2019SolvingRC}. \mycite{OpenAI2019SolvingRC} train a robotic hand control policy to solve a Rubik's cube by automatically adjusting the task distribution so that the agent achieves decent performance while still being challenged.
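A minimal sketch of this surrogate (thresholds and names are illustrative): sample only among goals whose empirical success rate is neither close to $0$ (currently unsolvable) nor close to $1$ (already mastered):

```python
import random

def sample_intermediate_goal(success_rates, p_min=0.1, p_max=0.9):
    """GoalGAN-style sketch: sample uniformly among goals of intermediate
    difficulty, i.e. with empirical success rate in [p_min, p_max].

    success_rates: dict mapping each goal to its measured success rate.
    Thresholds are illustrative; concrete methods tune or learn them.
    """
    candidates = [g for g, p in success_rates.items() if p_min <= p <= p_max]
    if not candidates:  # fall back to uniform sampling over all goals
        candidates = list(success_rates)
    return random.choice(candidates)
```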
\paragraph{Learning progress.}The $Obj$ objective of ACL methods can be seen as the maximization of a \textit{global learning progress}: the difference between the final score $\int_{T\sim \mathcal{T}} \! P_T^N\, \mathrm{d}T$ and the initial score $\int_{T\sim \mathcal{T}} \! P_T^0\, \mathrm{d}T$. To approximate this complex objective, measures of competence learning progress (LP) localized in space and time were proposed in earlier developmental robotics works~\cite{baranes2013active,imgep}. Like \textit{intermediate difficulty}, maximizing LP drives learners to practice tasks that are neither too easy nor too difficult, but LP does not require a threshold to define what is ``intermediate'' and is robust to tasks with intermediate scores on which the agent cannot improve. LP maximization is usually framed as a multi-armed bandit (MAB) problem where tasks are arms and their LP measures are the associated values. Maximizing LP values was shown to be optimal under the assumption of concave learning profiles~\cite{lopes2012strategic}. Both \mycite{tscl} and \mycite{tscllike} measure LP as the estimated derivative of the performance for each task in a discrete set (Minecraft mazes and Sonic the Hedgehog levels respectively) and apply a MAB algorithm to automatically build a curriculum for their learning agents. At a higher level, CURIOUS uses \textit{absolute} LP to select goal \textit{spaces} to sample from in a simulated robotic arm setup~\cite{curious} (absolute LP makes it possible to redirect learning towards tasks that were forgotten or that changed). There, absolute LP is also used to bias the sampling of transitions used for policy updates towards high-LP goals. ALP-GMM uses absolute LP to organize the presentation of procedurally-generated Bipedal-Walker environments sampled from a continuous task space through a stochastic parameterization~\cite{portelas2019}. Its authors leverage a Gaussian Mixture Model to recover a MAB setup over the continuous task space.
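A discrete-task version of this LP-bandit scheme can be sketched as follows (constants and interface are illustrative):

```python
import random

class LPBandit:
    """Sketch of LP-based task selection as a multi-armed bandit.

    For each task in a discrete set, we keep a smoothed performance
    estimate and a smoothed estimate of its recent change (the learning
    progress, LP); tasks are sampled proportionally to absolute LP, with
    a residual epsilon of uniform exploration (constants illustrative).
    """

    def __init__(self, tasks, smoothing=0.1, epsilon=0.2):
        self.tasks = list(tasks)
        self.perf = {t: 0.0 for t in self.tasks}  # smoothed performance
        self.lp = {t: 0.0 for t in self.tasks}    # smoothed progress
        self.smoothing, self.epsilon = smoothing, epsilon

    def update(self, task, score):
        old = self.perf[task]
        self.perf[task] = (1 - self.smoothing) * old + self.smoothing * score
        delta = self.perf[task] - old
        self.lp[task] = (1 - self.smoothing) * self.lp[task] + self.smoothing * delta

    def select(self):
        if random.random() < self.epsilon:  # residual uniform exploration
            return random.choice(self.tasks)
        weights = [abs(self.lp[t]) + 1e-8 for t in self.tasks]
        return random.choices(self.tasks, weights=weights)[0]
```

Using absolute LP makes the bandit revisit tasks whose performance drops (forgetting), not only those that improve.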
LP can also be used to guide the choice of accuracy requirements in a reaching task~\cite{fournier-accuracy-acl}, or to train a \textit{replay policy} via RL to sample transitions for policy updates~\cite{exp-replay-opti}. \paragraph{Diversity.}Some ACL methods choose to maximize measures of diversity (also called novelty or low density). In multi-goal settings for example, ACL might favor goals from low-density areas either as targets~\cite{skewfit} or as substitute goals for data exploitation~\cite{curriculu-her}. Similarly, \mycite{zhao2019curiosity} bias sampling towards trajectories falling into low-density areas of the trajectory space. In single-task RL, \textit{count-based} approaches introduce internal reward functions as decreasing functions of the state visitation count, guiding agents towards rarely visited areas of the state space~\cite{countbased}. Through a variational expectation-maximization framework, \mycite{metarl-carml} propose to alternately update a latent skill representation from experimental data (as in \mycite{diayn}) and meta-learn a policy to adapt quickly to tasks constructed by deriving a reward function from sampled skills. Other algorithms do not optimize directly for diversity but use heuristics to maintain it. For instance, \mycite{portelas2019} maintain exploration through residual uniform task sampling and \mycite{openaiSumos} sample opponents from past versions of different policies to maintain diversity. \paragraph{Surprise.} Some ACL methods train transition models and compute intrinsic rewards based on their prediction errors~\cite{icm,rnd} or based on the disagreement (variance) between several models of an ensemble~\cite{disagreement,pathakdisagreement}. The general idea is that models tend to give poor predictions (or to disagree) for rarely visited states, thus inducing a bias towards less visited states.
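A minimal sketch of the disagreement variant (the model interface is hypothetical): the intrinsic reward is the variance of the predictions of an ensemble of forward models:

```python
def disagreement_bonus(ensemble, state, action):
    """Disagreement-style intrinsic reward sketch (hypothetical API).

    `ensemble` is a list of forward models, each mapping (state, action)
    to a predicted next-state feature vector. The bonus is the mean
    per-dimension variance of their predictions: high in rarely visited
    regions, low once the models agree.
    """
    preds = [model(state, action) for model in ensemble]
    n, dim = len(preds), len(preds[0])
    mean = [sum(p[i] for p in preds) / n for i in range(dim)]
    var = [sum((p[i] - mean[i]) ** 2 for p in preds) / n for i in range(dim)]
    return sum(var) / dim
```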
However, a model might show high prediction errors on stochastic parts of the environment (the noisy-TV problem~\cite{icm}), a phenomenon that does not appear with model disagreement, as all models of the ensemble eventually learn to predict the (same) mean prediction \cite{pathakdisagreement}. Other works bias the sampling of transitions for policy updates depending on their temporal-difference error (TD-error), i.e. the difference between the transition's value and its next-step bootstrap estimation~\cite{per,apex}. Similarly, \mycite{SAUNA} adapt transition selection based on the discrepancy between the observed return and the prediction of the value function of a PPO learner (V-error). Whether the error computation involves value models or transition models, ACL mechanisms favor states related to maximal~\textit{surprise}, i.e. a maximal difference between what the model expects and what is observed. \paragraph{Energy.} In the data exploitation phase of multi-goal settings, \mycite{eb-per} prioritize transitions from \textit{high-energy} trajectories (e.g. kinetic energy) while \mycite{curious} prioritize transitions where the object relevant to the goal moved (e.g. cube movement in a cube pushing task). \paragraph{Adversarial reward maximization (ARM).} Self-play is a form of ACL which optimizes agents' performance when opposed to current or past versions of themselves, an objective that we call \textit{adversarial reward maximization} (ARM) \cite{self-play-framework}. While agents in \mycite{alpha-go-zero} and \mycite{Baker2019HidenSeek} always oppose copies of themselves, \mycite{openaiSumos} train several policies in parallel and fill a pool of opponents made of current and past versions of all policies. This maintains a diversity of opponents, which helps to fight catastrophic forgetting and to improve robustness.
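Such an opponent pool can be sketched as follows (the mixing probability and names are illustrative, not taken from any of the cited systems):

```python
import random

class OpponentPool:
    """Sketch of a self-play opponent pool.

    Snapshots of current and past policies are stored; sampling mixes the
    latest version with uniformly drawn past versions, which preserves a
    diversity of opponents and mitigates catastrophic forgetting.
    """

    def __init__(self, p_latest=0.5):
        self.snapshots = []
        self.p_latest = p_latest  # probability of facing the current policy

    def add(self, policy_snapshot):
        self.snapshots.append(policy_snapshot)

    def sample(self):
        if random.random() < self.p_latest or len(self.snapshots) == 1:
            return self.snapshots[-1]             # current version
        return random.choice(self.snapshots[:-1])  # a past version
```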
In the multi-agent game StarCraft~II, \mycite{vinyals2019grandmaster} train three main policies in parallel (one for each of the available player types). They maintain a \textit{league} of opponents composed of current and past versions of both the three main policies and additional adversary policies. Opponents are not selected at random but are chosen to be challenging (as measured by winning rates). \section{Discussion} \paragraph{The bigger picture.}In this survey, we unify the wide range of ACL mechanisms used in symbiosis with DRL under a common framework. ACL mechanisms are used with a particular goal in mind (e.g. organizing exploration, solving hard tasks, etc., \S~\ref{sec:main_objective}). Each mechanism controls a particular element of task MDPs (e.g. $\mathcal{S}, \mathcal{R}, \rho_0$, \S~\ref{sec:lever}) and maximizes a surrogate objective to achieve its goal (e.g. diversity, learning progress, \S~\ref{sec:surrogate_objective}). Table~\ref{bigtable} organizes the main works surveyed here along these three dimensions. Both the previous sections and Table~\ref{bigtable} present what has been implemented in the past, and thus, by contrast, highlight potential new avenues for ACL. \noindent\textit{Expanding the set of ACL targets.} Inspired by the maturational mechanisms at play in human infants, \mycite{elman} proposed to gradually expand the working memory of a recurrent model in a word-to-word natural language processing task. The idea of changing the properties of the agent (here its memory) was also studied in developmental robotics~\cite{mature}, policy distillation methods~\cite{mixmatch,pol-dil-review} and evolutionary approaches \cite{ha-design} but is absent from the ACL-DRL literature. ACL mechanisms could indeed be used to control the agent's body ($\mathcal{S}, \mathcal{P}$), its action space (how it acts in the world, $\mathcal{A}$), its observation space (how it perceives the world, $\mathcal{S}$), its learning capacities (e.g.
capacities of the memory, or the controller) or the way it perceives time (controlling discount factors~\cite{cl-discount}). \noindent\textit{Combining approaches.} Many combinations of previously defined ACL mechanisms remain to be investigated. Could we use LP to optimize the selection of opponents in self-play approaches? To drive goal selection in learned goal spaces (e.g. \mycite{Finot2019}, population-based)? Could we train an adversarial domain generator to robustify policies trained for Sim2Real applications? \paragraph{On the need for systematic ACL studies.}Given the positive impact that ACL mechanisms can have in complex learning scenarios, one can only deplore the lack of comparative studies and standard benchmark environments. Besides, although empirical results advocate for their use, a theoretical understanding of ACL mechanisms is still missing. Although there have been attempts to frame CL in supervised settings \cite{bengiocl,hacohen19a-scoring-pacing}, more work is needed to see whether such considerations hold in DRL scenarios. \paragraph{ACL as a step towards open-ended learning agents.}\hspace{-0.2cm}Alan Turing famously wrote \textit{``Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's?''} \cite{turing1950computing}. The idea of starting with a simple machine and enabling it to learn autonomously is the cornerstone of developmental robotics but is rarely considered in DRL \cite{imagine,diayn,metarl-carml}. Because they actively organize learning trajectories as a function of the agent's properties, ACL mechanisms could prove extremely useful in this quest. We could imagine a learning architecture leveraging ACL mechanisms to control many aspects of the learning odyssey, guiding agents from their simple original state towards fully capable agents able to reach a multiplicity of goals.
As we saw in this survey, these ACL mechanisms could control the development of the agent's body and capabilities (motor actions, sensory apparatus), organize the exploratory behavior towards tasks where agents learn the most (maximization of information gain, competence progress) or guide the acquisition of behavioral repertoires. \scalefont{0.95} \bibliographystyle{named}
\section{Introduction and highlights} \label{sub:intro} The study of black holes in M(atrix) theory holds a treasure trove of insight into quantum gravity and the nature of spacetime. As a non-perturbative formulation of M-theory, Matrix theory~\cite{Banks:1996vh,Bigatti:1997jy} can in principle access and potentially resolve many of the puzzles we associate with black holes. Early attempts at staging Matrix black holes have consisted of promising sketches~\cite{Horowitz:1997fr}-\cite{Berkowitz:2016znt} and numerical simulations~\cite{Hanada:2010rg}-\cite{Berkowitz:2016muc}. We have learned that understanding black holes is related to studying strongly coupled Yang-Mills at finite temperature~\cite{Itzhaki:1998dd}-\cite{Martinec:1998ja}, and that there might be intricate non-local dynamics near the event horizon~\cite{Giddings:2011ks,Giddings:2012bm}. More recently, we have learned that Matrix theory is characteristically chaotic~\cite{Berkowitz:2016znt,Maldacena:2015waa,Gur-Ari:2015rcq}, and interactions can scramble initial value data at the fastest possible rate that is allowed by the postulates of quantum mechanics~\cite{Sekino:2008he}-\cite{Gharibyan:2018jrp} -- as also expected from black hole physics. In this work we ask if one can write a mean field coarse-grained description of the strongly coupled microscopic dynamics of Matrix theory in a manner that captures the essential features of black holes and informs us about the geometry near the event horizon. To illustrate through an analogy, if M(atrix) theory is to black hole quantum mechanics as BCS theory is to superconductivity, we are looking for the analogue of a Landau-Ginzburg description of the quantum physics of black holes -- with the underpinning element of stochastic chaotic evolution. 
We know that Matrix theory is chaotic, and we know that one can often use the language of random variables, or in this case Random Matrix theory (RMT)~\cite{Gharibyan:2018jrp}-\cite{Cotler:2016fpe}\cite{Berkowitz:2016znt}, to capture chaotic dynamics. We also know that RMT is closely related to the strong damping regime of Fokker-Planck stochastic evolution~\cite{fokker,lemons,mahnke,Dyson1} whereby ergodic motion is effectively described with macroscopic variables. The suggestion is then to formulate a description of Matrix black holes where the entries of the matrices are described as particles moving in a mean field potential -- one that is obtained by coarse-graining over microscopic degrees of freedom that are engaged in ergodic motion. \begin{figure} \begin{center} \includegraphics[width=2.5in]{fig0.png} \end{center} \caption{A cartoon of the effective model of the light-cone Schwarzschild black hole. The cells represent Planck size marginally bound D0 branes, about $d$ per cell in $d$ space dimensions. The cells are glued together with a condensate of off-diagonal matrix modes that act as scaffolding and do not carry information or entropy.}\label{fig:bh} \end{figure} In this work, we show that such an effective description of black holes is indeed possible using Matrix theory. In the process of developing this effective model, we settle on a microscopic picture of Matrix black holes that is both intuitive and complex. Entries on the diagonal of the matrices incorporate the thermodynamics and encode information. These can be thought of as particles that mostly hang around near the surface of the would-be horizon. They are subject to a mean field potential whose shape we determine. An additional `goo' of off-diagonal matrix entries glues these particles into clusters, effectively acting like bound states. These clusters contain around $d$ particles each, for a black hole in $d$ space dimensions.
Figure~\ref{fig:bh} depicts a cartoon of the model. In the figure, the clusters are depicted as cells. The configuration is far from static, and in fact we expect that the cells continuously exchange particles and rearrange themselves. The rest of the matrix degrees of freedom, which constitute the overwhelming majority of the total, condense in a quantum ground state. It is possible that they should be thought of as a membrane stretched at the horizon, without any associated thermodynamics or entropy. Thermal energy is distributed in the dynamics of the cells as they slide near the horizon and interact with each other. We develop this model in detail, matching with all expectations from the dual M-theory supergravity description of a Schwarzschild black hole in the light-cone frame. In particular, Hawking evaporation~\cite{Hawking:1974sw}-\cite{Majhi:2011yi} is reproduced and information loss is demonstrated to arise from the process of coarse-graining over otherwise unitary dynamics. It becomes clear that dynamics near the horizon has a non-local component when explored at short enough timescales, while being local at the longer timescales associated with Hawking radiation\footnote{To clarify, this non-locality arises at the Planck scale. At energy scales below the Planck scale, we see no evidence for non-locality. This is the same non-local phenomenon typically associated with D0 brane scattering.}. Most interestingly, we demonstrate that non-unitary evolution and information loss arise at the timescales for which the Matrix dynamics is strongly coupled and spacetime geometry is expected to be emergent in the dual supergravity language. This suggests that Hawking information loss is inherently tied to the premise that geometry near the horizon of a large black hole is smooth and well-defined. The microscopic degrees of freedom underlying black hole dynamics are Planck sized bits that are interacting chaotically over Planckian timescales. 
Any description of the physics over timescales larger than the Planck time involves coarse graining over stochastic dynamics in a manner that leads to an effective quantum picture that is non-unitary. The notion of spacetime geometry arises at around those Planckian timescales, implying the breakdown of the geometrical picture of black hole evaporation as we approach the horizon. Put differently, the Hawking computation is robust when applied in smooth spacetime backgrounds over large enough timescales, yet the evaporation should still be regarded as unitary because the notion of geometry and spacetime is lost at the event horizon {\em at short timescales}. The outline of the text is as follows. In the first section, we present a brief overview of Matrix theory, followed by a review of Fokker-Planck dynamics and the light-cone Schwarzschild black hole in supergravity. We then systematically develop the effective model for the Matrix black hole, matching and checking against expectations on the dual low energy M-theory side. In the second section, we focus on the time evolution of information within the Matrix black hole. We track information encoded in the polarization states of the low energy M-theory supergravity multiplet, and we write an effective qubit time evolution operator that is based on the stochastic model developed earlier. We show how the evolution becomes non-unitary at longer timescales because of the coarse-graining over chaotic dynamics, and correlate this with the emergence of spacetime geometry in the dual M-theory language. For short timescales, we write a unitary time evolution operator that describes the weakly coupled qubit dynamics near the event horizon. Finally, in the discussion section, we reflect on the implications and future directions. 
\section{The effective model}\label{sec:first} \subsection{M(atrix) theory overview} The M(atrix) theory action is the dimensional reduction of $10$ dimensional Super Yang-Mills (SYM) to $0+1$ dimensions and is given by \begin{equation}\label{eq:sym} S = \int dt\ \mbox{Tr}\left[ \frac{1}{2\,R} \dot{X}_i^2+\frac{R}{2\,\lambda^3} [X_i,X_j]^2 + \frac{1}{2} \Psi \dot{\Psi} + \frac{R}{2\,\lambda^{3/2}} \Psi \Gamma^i [X_i, \Psi] \right]\ . \end{equation} The gauge group is $U(N)$, with the $X_i$s ($i=1,\ldots, 9$) and the $\Psi$ in the adjoint representation of the group. In our conventions, we have \begin{equation} R = g\sub{s} \ell\sub{s}\ \ \ ,\ \ \ \lambda = 2\pi\,\ell\sub{s}^2\ , \end{equation} where $g\sub{s}$ is the string coupling and $\ell\sub{s}$ is the string length\footnote{ Matrix theory is sometimes written in Planck scale conventions, related to the one we use by $X\rightarrow Y/\sqrt{R}$ and $t\rightarrow \tau/R$. Using units such that $2\pi\,\ell\sub{P}^3=1$ where $\ell\sub{P}$ is the eleven dimensional Planck length, the action takes the form \begin{equation} S = \int d\tau\ \mbox{Tr}\left[ \frac{1}{2\,R} \dot{Y}_i^2+\frac{R}{2} [Y_i,Y_j]^2 + \frac{1}{2} \Psi \dot{\Psi} + \frac{R}{2} \Psi \Gamma^i [Y_i, \Psi] \right]\ , \end{equation} where $\dot{Y}=dY/d\tau$. In this alternate convention, the length dimensions of the various quantities become $X\simeq \ell^{3/2}$, $\psi\sim \ell^0$, $t\sim \ell^2$, $R\sim \ell$. Note that if $Y\sim \ell\sub{P}^{3/2}$, then $X=\ell\sub{s}$, given that $\ell\sub{P} = g\sub{s}^{1/3}\ell\sub{s}$.}. The Yang-Mills coupling is \begin{equation} g\sub{YM}^2 = \frac{g\sub{s}}{\ell\sub{s}^3}\ . \end{equation} The length dimensions of the various quantities are: $X\sim \ell^1$, $t\sim \ell^1$, and $\psi \sim \ell^0$. 
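As a quick consistency check of these conventions, one can verify term by term that the action~(\ref{eq:sym}) is dimensionless in units $\hbar=c=1$. The following bookkeeping script is our own illustrative sketch (not part of the original derivation); it counts powers of length using $[X]=[t]=[R]=1$, $[\lambda]=2$, and $[\Psi]=0$.

```python
# Length-dimension bookkeeping for the Matrix theory action terms.
# Conventions: [X] = [t] = [R] = 1 (lengths), [lambda] = 2 (ell_s^2),
# [Psi] = 0; the action S = \int dt Tr[...] must be dimensionless.

dim = {"X": 1, "t": 1, "R": 1, "lam": 2, "Psi": 0}

def d(expr):
    """Total length dimension of a product given as {symbol: power}."""
    return sum(dim[s] * p for s, p in expr.items())

# dt * (1/R) * (dX/dt)^2
kinetic = d({"t": 1, "R": -1}) + 2 * (dim["X"] - dim["t"])
# dt * (R/lam^3) * [X, X]^2
commutator = d({"t": 1, "R": 1, "lam": -3, "X": 4})
# dt * Psi * (dPsi/dt)
fermion_kinetic = d({"t": 1, "Psi": 2}) - dim["t"]
# dt * (R/lam^{3/2}) * Psi Gamma^i [X_i, Psi]
yukawa = d({"t": 1, "R": 1, "X": 1, "Psi": 2}) - 1.5 * dim["lam"]

print(kinetic, commutator, fermion_kinetic, yukawa)
```

Each term of the Lagrangian, multiplied by $dt$, comes out with total length power zero, confirming the quoted dimensions.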
The theory is purported to be a non-perturbative formulation of M-theory in the light-cone frame in the following scaling limit\footnote{This scaling limit corresponds to the decoupling regime for holographic duality~\cite{Maldacena:1997re,Itzhaki:1998dd,Witten:1998qj,Gubser:1998bc} -- as applied to D0 branes. The Matrix theory conjecture is thus in the same class of gravity-SYM correspondences that give rise to the AdS/CFT map.} \begin{equation} g\sub{s},\ell\sub{s} \rightarrow 0\ \ \ \mbox{with}\ \ \ g\sub{YM}^2 = \frac{g\sub{s}}{\ell\sub{s}^3} = \mbox{fixed}\ \ \mbox{and}\ \ \frac{X}{\ell\sub{s}} = \mbox{fixed}\ . \end{equation} This corresponds to focusing on energies that scale as $E\sim g\sub{s}/\ell\sub{s}$. It is sometimes convenient to introduce alternate M-theory variables $\epsilon$, $\tau$, and $\xi$ that remain fixed in the scaling regime of interest \begin{equation}\label{eq:rescaled} E = g\sub{s}^{2/3} \epsilon\ \ \ ,\ \ \ t = g\sub{s}^{-2/3} \tau\ \ ,\ \ X = \ell\sub{s} \xi\ . \end{equation} For example, the corresponding light-cone M-theory energy scale is $\epsilon \sim R/\ell\sub{P}^2 = \mbox{fixed}$. In the map onto light-cone M-theory, $N/R$ is interpreted as total light-cone momentum. Light-cone energy scales inversely with light-cone momentum, hence as $(R/N) \times \mbox{mass}^2$. Depending on the coupling regime, the number of active degrees of freedom of a configuration scales as $N^k$, where $k=2$ in the weakly coupled regime, and $k = 1$ at strong coupling. Compactifying light-cone M-theory to $d$ space dimensions, we can describe it through Matrix theory with $d$ of the $9$ $X_i$ matrices removed from the dynamics, assuming that the compact directions are small enough that associated modes are too heavy to excite. Alternatively, one can use $d+1$ dimensional SYM for a full description of the compactified theory, obtained from the current setup via a T-duality map. 
The relation between light-cone M-theory and Matrix theory is known to hold for $N\rightarrow \infty$, but the correspondence is valid for finite $N$ as well -- between Discrete Light-Cone Quantized (DLCQ) M-theory and finite $N$ matrix theory, where $N$ is mapped onto units of M-theory discrete light-cone momentum~\cite{Seiberg:1997ad}. In this work, we will work at finite but large $N$ in trying to describe an M-theory black hole that is large enough to have small curvature scales at its horizon. \subsection{From chaos to a stochastic evolution} Recently, Matrix theory has been demonstrated to be highly chaotic~\cite{Maldacena:2015waa,Gur-Ari:2015rcq,Berkowitz:2016znt}, with dynamics that can scramble initial value data in a time that scales logarithmically with the entropy~\cite{Barbon:2011pn,Lashkari:2011yi,Brady:2013opa,Pramodh:2014jha,Gharibyan:2018jrp} -- as opposed to the more common power law behavior. This allows one to capture Matrix theory physics, in the appropriate setting, by treating the matrix entries as random variables. Describing a non-extremal black hole is certainly a good candidate setup for exploring chaos in Matrix theory~\cite{Cotler:2016fpe,Magan:2016ojb,Gharibyan:2018jrp}. And techniques from the well-established field of Random Matrix Theory (RMT)~\cite{Dyson1,rmt,intrormt,randommatrix,Eynard:2015aea} can then be used to tackle the problem. RMT is most powerful when one is dealing with a theory with a single matrix; it then allows a robust statistical treatment of the eigenvalues of this matrix. In our setup, we will be interested in studying a configuration of matrices in Matrix theory that represents a $d$ dimensional Schwarzschild black hole in the dual light-cone M-theory. We will assume from the outset that we work with spherically symmetric configurations, where the different $X_i$ matrices are chaotic and {\em uncorrelated} in different space directions. 
Hence, each matrix entry in the $d$ matrices $X_i$, with $i=1,\ldots, d$, is random and not correlated with any other matrix entry. This configuration is to be mapped onto a black hole in the dual M-theory -- with a fixed temperature and associated Hawking evaporation phenomenon. The fermionic matrix entries of $\Psi$ in~(\ref{eq:sym}) will be treated as a component of the thermal soup -- in equilibrium with the bosonic matrix entries. At finite temperature, we will hence mostly focus on the bosonic sector with a mirror image at play in the fermionic sector being implied. However, we do need to incorporate the one-loop quantum contribution of the fermionic degrees of freedom to the mean field potential for the bosonic stochastic variables. Furthermore, later on, we will use the fermionic variables as probes to track information evolution in this thermal soup. We start by noting that RMT is closely related to stochastic physics. In particular, since the work by Dyson~\cite{Dyson1}, it has been demonstrated that RMT dynamics can be properly captured by the strong damping regime of Fokker-Planck evolution. We present here a quick overview of the subject. In RMT, each matrix entry can be thought of as a stochastic particle evolving in a {\em mean field potential}. For a particle with position $\bm{r}$ and velocity $\bm{v}$ in $d$ space dimensions, we can study it through the probability density \begin{equation} p(\bm{r},\bm{v}, t)\ \mbox{d}^d \bm{r} \mbox{d}^d \bm{v}\ , \end{equation} which represents the probability of finding the particle at time $t$ within $\bm{r}$ and $\bm{r}+\mbox{d}\bm{r}$ and $\bm{v}$ and $\bm{v}+\mbox{d}\bm{v}$. In our setup, we will consider matrix configurations that are spherically symmetric in $d$ dimensions. We will then focus on probability profiles where \begin{equation} p(\bm{r},\bm{v}, t) \rightarrow p(r, v, t) \prod_i \delta(v_{\theta_i}) \ . 
\end{equation} Here, the $v_{\theta_i}$ are $d-1$ components of $\bm{v}$ in the angular directions, and $v=v_r$. Correspondingly, the mean field potential is spherically symmetric\footnote{The model we develop involves time averaging over stochastic, chaotic dynamics. The cluster tiling of Figure~\ref{fig:bh} is not rigid and very dynamical over timescales shorter than the Hawking timescale. It is then reasonable to expect that, at timescales larger than the characteristic timescale associated with cluster dynamics, an approximate spherical symmetry sets in. Of course, going beyond this coarse model one needs to consider the possible breaking of the spherical symmetry~\cite{Rinaldi:2017mjl,Brower:2018szu}.} \begin{equation} V(\bm{r})\rightarrow V(r) \end{equation} and the Fokker-Planck equation takes the form \begin{equation} \frac{\partial p(r,v,t)}{\partial t} = \left( -v \frac{\partial}{\partial r} + \frac{1}{m} \frac{\partial V}{\partial r} \frac{\partial}{\partial v} + \gamma\, d + \gamma\, v\, \frac{\partial}{\partial v} + \frac{\gamma}{m} T \frac{1}{v^{d-1}} \frac{\partial}{\partial v} \left(v^{d-1} \frac{\partial}{\partial v}\right) \right) p(r,v,t)\ , \end{equation} where $T$ is the temperature of the environment, $\gamma$ is a damping parameter, and $m$ is the mass of the particle. This then allows us to study the evolution of the matrix entry in a statistical framework. The spherically symmetric Fokker-Planck equation is solved by the equilibrium time-independent profile \begin{equation} p_{eq} = C\, \exp\left[-\frac{1}{T} \left(\frac{1}{2}m\,v^2+V(r)\right)\right]\prod_i \delta(v_{\theta_i})\ . \end{equation} $C$ here is a normalization constant. Note that this non-relativistic treatment is consistent with Matrix theory since light-cone M-theory has Galilean symmetry with dispersion relation $E\sub{LC} = \bm{p}^2/2 p\sub{LC}$, where the light-cone momentum $p\sub{LC} \sim 1/R$ plays the role of Galilean mass. 
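The statement that the Boltzmann profile solves the spherically symmetric Fokker-Planck equation can be verified numerically. The sketch below is our own illustration (the mass, temperature, damping, and quadratic test potential are arbitrary sample values, not fixed by the text): it applies the right-hand-side operator to $p_{eq}$ by finite differences and checks that the result vanishes.

```python
import math

# Numerical check (finite differences) that the Boltzmann profile
# p_eq ~ exp(-(m v^2/2 + V(r))/T) annihilates the right-hand side of the
# spherically symmetric Fokker-Planck operator above. Parameters and the
# test potential V(r) are illustrative choices.
m, T, gam, dd = 1.0, 1.0, 1.3, 3      # mass, temperature, damping, dimensions
V = lambda r: 0.3 * r**2               # any smooth potential works here

def p(r, v):
    return math.exp(-(0.5 * m * v**2 + V(r)) / T)

h = 1e-4
def d_dr(f, r, v): return (f(r + h, v) - f(r - h, v)) / (2 * h)
def d_dv(f, r, v): return (f(r, v + h) - f(r, v - h)) / (2 * h)

def rhs(r, v):
    Vp = (V(r + h) - V(r - h)) / (2 * h)
    drift = -v * d_dr(p, r, v) + (Vp / m) * d_dv(p, r, v)
    damping = gam * dd * p(r, v) + gam * v * d_dv(p, r, v)
    # (gamma T / m) * v^{1-d} d/dv ( v^{d-1} dp/dv )
    g = lambda rr, vv: vv**(dd - 1) * d_dv(p, rr, vv)
    diffusion = (gam * T / m) * (g(r, v + h) - g(r, v - h)) / (2 * h) / v**(dd - 1)
    return drift + damping + diffusion

residual = max(abs(rhs(r, v)) for r, v in [(0.7, 0.5), (1.2, 0.9), (0.4, 1.4)])
print(residual)  # vanishes up to finite-difference error
```

The drift terms cancel pairwise against the damping and diffusion terms, as in the analytic computation.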
As mentioned above, the relation between RMT and stochastic physics arises in the regime of strong damping \begin{equation}\label{eq:strongdamping} \gamma \gtrsim \sqrt{-\frac{V''(0)}{m}}\ . \end{equation} Focusing on this regime, we also write the probability profile as \begin{equation} p\rightarrow \int \mbox{d}^d \bm{v}\, p \end{equation} integrating over all velocities. The resulting evolution equation is known as the Smoluchowski equation \begin{equation}\label{eq:smol} \frac{\partial p(r,t)}{\partial t} = \frac{1}{m\,\gamma}\left( \frac{1}{r^{d-1}} \frac{\partial}{\partial r} r^{d-1} V'(r)+\frac{T}{r^{d-1}} \frac{\partial}{\partial r} r^{d-1}\frac{\partial}{\partial r} \right) p(r,t)\ . \end{equation} The radial probability current that follows from~(\ref{eq:smol}) takes the form \begin{equation} j_r = - \frac{1}{m\,\gamma} \left(T\frac{\partial}{\partial r}+V'(r)\right) p(r,t)\ , \end{equation} which we will use later in understanding evaporation through stochastic diffusion. Our goal is to develop an effective model for strongly coupled chaotic Matrix theory, using the Smoluchowski equation with $r$ representing matrix entries in the bosonic matrix $\sqrt{\sum_i X_i^2}\sim X_i$ of~(\ref{eq:sym}) -- since different directions in space are statistically uncorrelated. We then need to identify the relevant mean field potential $V(r)$, mass $m$, temperature $T$, and damping parameter $\gamma$. It is worthwhile noting that an alternate and equivalent approach is to track the evolution of moments of random matrix entries. 
If $\chi$ represents any matrix entry, then the Smoluchowski equation with a quadratic potential is equivalent to stochastic fluctuations given by \begin{equation} \left<\delta \chi\right> = - \frac{V''(0)\,\chi}{m\,\gamma} \delta t\ \ \ ,\ \ \ \left<\delta \chi^2\right> = \frac{2\,T}{m\,\gamma} \delta t\ , \end{equation} which then imply the differential equations for the moments \begin{equation}\label{eq:moment1} \frac{d}{dt} \left<\chi\right> = -\frac{V''(0)}{m\,\gamma} \left<\chi\right>\ , \end{equation} \begin{equation}\label{eq:moment2} \frac{d}{dt} \left<\chi^2\right> = -\frac{2\,V''(0)}{m\,\gamma} \left<\chi^2\right>+\frac{2\,T}{m\,\gamma}\ . \end{equation} The timescale of stochastic evolution can then be easily read off as \begin{equation}\label{eq:tT} t\sub{T}\sim \frac{m\,\gamma}{V''(0)}\ . \end{equation} It is important to note that this is not the timescale over which one coarse-grains the random motion to arrive at a mean field potential for stochastic variables. This other timescale, which we call the stochastic timescale $t\sub{stoch}$, must be shorter than the thermal timescale, $t\sub{stoch}< t\sub{T}$, and is determined by the process of averaging over microscopic dynamics. We next need to determine the parameters of the model. We will build this effective description of strongly coupled chaotic Matrix theory by using knowledge of the gravity dual, and of the microscopic string theory dynamics that underlies Matrix theory. \subsection{The light-cone Schwarzschild black hole} We start by reviewing the dual gravity picture of the Matrix theory setup of interest -- a light-cone M-theory Schwarzschild black hole~\cite{Banks:1997cm}. The corresponding geometry is obtained by Lorentz boosting a $d$ dimensional Schwarzschild black hole in the light-cone direction with a boost factor given by $r\sub{h}/R$, where $r\sub{h}$ is the radius of the black hole horizon. 
While the horizon geometry is unchanged and the entropy or area in Planck units remains the same, the Hawking temperature is red-shifted \begin{equation} T\sub{h} = \frac{R}{r\sub{h}^2}\ . \end{equation} The Hawking radiation flux from evaporation takes the form \begin{equation} P_{\mbox{\tiny\emph{tot}}} \sim \frac{r\sub{h}^{d-1}}{r\sub{h}^{d+1}} = \frac{1}{r\sub{h}^2} \end{equation} in general $d$ dimensions. The thermal timescale associated with the Hawking temperature is then \begin{equation}\label{eq:hawkingtime} t\sub{h} \sim \frac{1}{T\sub{h}} \sim \frac{r\sub{h}^2}{R}\ . \end{equation} The entropy is related to the black hole mass $M\sub{bh}$ as usual, $S\sim M\sub{bh}\, r\sub{h}$, and the evaporation process can be described by~\cite{Page:1976df,Page:1993df} \begin{equation} -\frac{dM\sub{bh}}{dt} \sim \frac{R}{r\sub{h}^3}\ . \end{equation} Hence, the black hole lifetime is given by \begin{equation}\label{eq:lifetime} t\sub{life} \sim t\sub{h}\, S\ . \end{equation} Besides the timescales $t\sub{h}$ and $t\sub{life}$, the intermediate scrambling timescale \begin{equation}\label{eq:scrambling} t\sub{scr} \sim t\sub{h} \ln S \end{equation} determines the timescale over which the black hole scrambles information. We have written all these relations in forms that can be compared to the Matrix theory stochastic model in the choice of units presented earlier. In our SYM choice of units, the entropy of the black hole is written as \begin{equation}\label{eq:entropy} S \sim \frac{r\sub{h}^{d-1}}{\ell\sub{s}^{d-1}}\ . \end{equation} For a large black hole, we see that we must require \begin{equation}\label{eq:curvature} r\sub{h} \gg \ell\sub{s}\ , \end{equation} leading to small curvature scales at the black hole horizon. The task next is to model an effective Matrix theory stochastic system that reproduces these properties of a light-cone Schwarzschild black hole. 
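These scaling relations can be checked against each other with sample numbers. The following sketch is our own illustration, in string units $\ell\sub{s}=1$ with arbitrary sample values for $R$, $r\sub{h}$, and $d$; it confirms the timescale ordering and that the lifetime obtained from $M\sub{bh}\sim S/r\sub{h}$ together with the evaporation rate agrees with $t\sub{life}\sim t\sub{h}\,S$.

```python
import math

# Sanity check of the light-cone black hole relations above, for sample
# values in string units (ell_s = 1); the numbers are illustrative only.
R, r_h, d = 1.0, 100.0, 9

T_h = R / r_h**2                 # red-shifted Hawking temperature
t_h = 1.0 / T_h                  # thermal timescale, r_h^2 / R
S = r_h**(d - 1)                 # entropy in string units
t_scr = t_h * math.log(S)        # scrambling timescale
t_life = t_h * S                 # lifetime

# Consistency: lifetime from M_bh ~ S / r_h and |dM/dt| ~ R / r_h^3
M_bh = S / r_h
t_life_from_rate = M_bh / (R / r_h**3)

print(t_h < t_scr < t_life, t_life_from_rate / t_life)
```

The two routes to the lifetime agree identically, since $ (S/r\sub{h})\,(r\sub{h}^3/R) = S\,r\sub{h}^2/R = t\sub{h}\,S$.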
\subsection{A conjecture for an effective model} In a perturbative regime, Matrix theory consists of $\sim N^2$ degrees of freedom as all matrix entries participate in the dynamics. In early models of a Schwarzschild black hole in Matrix theory, the authors of~\cite{Horowitz:1997fr,Banks:1997hz,Banks:1997tn} noted however that, to reproduce the correct equation of state of a light-cone black hole, one must have the entropy proportional to $N$ at strong coupling, not $N^2$ \begin{equation} S \sim N\ . \end{equation} This implies that only $N$ of the entries in each matrix $X_i$ are to participate in the thermodynamics of the Matrix black hole; that is, most degrees of freedom must be `frozen', given that $N\gg 1$ follows from~(\ref{eq:entropy}) and~(\ref{eq:curvature}). Inspired by the works of~\cite{Horowitz:1997fr,Banks:1997hz,Banks:1997tn}, we then propose that the thermodynamics of the Matrix black hole is carried by the $N$ diagonal entries of the $X_i$ matrices. Information in the black hole would also be carried by diagonal degrees of freedom only. These entries can sometimes be interpreted as coordinates of the corresponding D0 branes underlying Matrix theory. Entropically, these order $\sim N$ degrees of freedom would like to spread to infinity -- the theory even admits flat directions for this purpose. However, perturbatively there can be an initial cost in energy in doing so from strings stretching between the D0 branes -- {\em i.e.} off-diagonal modes of the matrices. Presumably, taking strong coupling effects into account, the configuration forms a metastable ball of size $r\sub{h}$, the black hole radius, along with decay channels that implement the process of Hawking evaporation. As a diagonal matrix entry random walks its way out, a bit of the black hole evaporates away~\cite{Berkowitz:2016muc}. 
If $N$ diagonal degrees of freedom are to spread in a volume $r\sub{h}^{d}$, the average inter-brane spacing is generically parametrically larger than if they are spread over an area $r\sub{h}^{d-1}$. And since inter-brane spacing is costly in energy, we can start seeing that the proper model of a Matrix black hole would involve the diagonal entries of the matrices spread on the surface of a would-be black hole horizon. Figure~\ref{fig:bh} shows a cartoon of the setup. \begin{figure} \begin{center} \includegraphics[width=6.0in]{fig1.pdf} \end{center} \caption{(a) A shaded sub-block of a matrix that describes a cluster of $d-1$ D0 branes. The $\delta X$s refer to the off-diagonal entries spanning clusters; the off-diagonal entries within a cluster are in the shaded block, denoted by $\delta x$. (b) General structure of non-zero entries in the matrices for different space dimensions $d$. The $d-1$ labels refer to the number of active columns or rows in the first row or column, respectively. The shaded diagonals start within the shaded square in (a). }\label{fig:matrix} \end{figure} Figure~\ref{fig:matrix}(a) shows a cartoon of a matrix $X_i$, focusing on a sub-block associated with a group of `nearest-neighbor' branes\footnote{Note that the permutation symmetry requires that the additional $d^2$ off-diagonal entries in the top right and bottom left of each matrix are active as well. This is a detail of the description which, in the large $N\gg d$ limit, we assume has a subleading effect on the larger picture.}. Using the permutation subgroup of $U(N)$, we can always arrange to sort the matrix entries as depicted. We expect that a certain number of branes, of order $d-1$, whose coordinates appear as $x$ in the figure, would be close enough that the corresponding matrix off-diagonal modes, labeled $\delta x$ in the figure, can be light. This still would not affect the $S\sim N$ requirement as the number of such modes would be independent of $N$. 
Branes much farther away, over a distance scale $r\sub{h}$, would be much heavier. We propose that beyond the $d\times d$ sub-block, all other off-diagonal modes would be too heavy to excite and would freeze or condense in a Bose-Einstein (BE) condensate. Indeed, if we look at the critical condensate temperature $T\sub{c}$, we would expect\footnote{The right hand side is the expression for the number of degrees of freedom in a Bose condensate in $d$ dimensions.} \begin{equation} N\sim N^2 \left(\frac{T\sub{h}}{T_c}\right)^{d/2}\ , \end{equation} which yields a critical temperature much larger than the Hawking temperature \begin{equation} T_c \sim \frac{R}{r\sub{h}^{2/d}} \gg T\sub{h} \end{equation} for $d\geq 2$. It is possible that this BE condensate describes a membrane-like configuration stretching at the black hole horizon~\cite{Horowitz:1997fr,Banks:1997hz,Banks:1997tn,Kabat:1997im,Uehara:2004vp}. In a coarse-grained effective language, we would set these heavy off-diagonal modes, the $\delta X$s in the figure, to zero. Interestingly, fuzzy spheres of various dimensions in Matrix theory have been shown to necessitate the activation of more off-diagonal modes that spread away from the diagonal~\cite{Castelino:1997rv,Ramgoolam:2001zx}. For example, a 2-sphere ($d=3$) is realized through $SU(2)$ representations, which activate $3$ diagonal lines along the matrix diagonals; and a 4-sphere ($d=5$) activates $5$ diagonal lines. Our model then fits well with this pattern. Figure~\ref{fig:matrix}(b) shows the general scheme. The diagonal entries {\em within} the $d\times d$ sub-block of matrices would be spread out from each other at a distance that is around the Planck scale and might naturally involve marginal bound state physics. In M-theory language, this would correspond to supergravity excitations carrying $\sim d$ units of light-cone momentum. 
These marginal bound states are conjectured to exist in Matrix theory and are a necessary ingredient for the dictionary between Matrix theory and M-theory~\cite{Banks:1996vh}. The off-diagonal modes $\delta x$ in these sub-blocks would remain relatively light and participate in making the physics of these clusters non-local, at around the Planck scale. They would correspond to strings joining nearest neighbor branes, and henceforth we refer to the $\delta x$s as `{\em off-diagonal nearest neighbor modes}'\footnote{Our treatment explicitly picks out a `frame' or gauge where the diagonal and off-diagonal matrix entries have very different physical roles. We expect that this setup corresponds to a description of the Matrix black hole from the perspective of the outside observer. $U(N)$ gauge transformations would naturally change the perspective, while mixing the roles of diagonal and off-diagonal entries. More on this in the Discussion section.}. Our stochastic model would then involve writing an effective theory of all the modes that remain active -- diagonals $x$ and nearest neighbor modes $\delta x$ -- while integrating out all other $\delta X$ modes. We need to provide two separate stochastic treatments, one for the $x$ modes on the diagonal, and another for the off-diagonal nearest neighbor modes $\delta x$. The first would describe the coarse-grained thermal state of the black hole; the second would describe finer cluster physics within each matrix sub-block. We will next demonstrate how these two sectors effectively decouple and can reliably be treated through stochastic methods due to a hierarchy in the relevant timescales. 
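A toy count makes the entropy scaling of this cluster ansatz explicit: keeping only the diagonal entries and the nearest neighbor modes inside $d\times d$ sub-blocks, the number of active entries grows linearly in $N$ rather than as $N^2$. The script below is our own illustration (it ignores the order $d^2$ corner entries mentioned in the footnote, which are subleading for $N\gg d$).

```python
# Toy count of active matrix entries under the cluster ansatz: diagonal
# entries plus nearest-neighbor off-diagonal modes inside d x d sub-blocks.
# (The extra corner entries required by permutation symmetry, of order d^2,
# are ignored here as subleading for N >> d.)

def active_entries(N, d):
    """Number of active entries when the N diagonal modes are grouped
    into N/d clusters, each an active d x d sub-block."""
    assert N % d == 0
    return (N // d) * d * d   # = N * d, linear in N

d = 3
for N in (300, 600, 1200):
    frac = active_entries(N, d) / N**2
    print(N, active_entries(N, d), frac)
# The active count grows like N*d, while the remaining ~N^2 modes are
# frozen in the condensate, consistent with S ~ N rather than N^2.
```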
In the Matrix theory scaling regime, energies scale as $g\sub{s}/\ell\sub{s}$ and times therefore as $\ell\sub{s}/g\sub{s}$; this allows us to measure timescales through the effective Yang-Mills coupling $g\sub{eff}(\tau)^2$ defined as \begin{equation} \frac{g\sub{s}}{\ell\sub{s}} t = \frac{g\sub{s}^{1/3}}{\ell\sub{s}} (g\sub{s}^{2/3}\,t) = (g\sub{YM}^2)^{1/3} \tau \equiv (g\sub{eff}(\tau)^2)^{1/3}\ , \end{equation} which remains finite in the scaling regime. Hence, larger effective coupling corresponds to longer times since $0+1$ SYM is super-renormalizable. In this language, the first timescale $t\sub{h}$ from~(\ref{eq:hawkingtime}) arises from the thermodynamics of the diagonal modes, of order $N$ in number; this gives \begin{equation} \frac{g\sub{s}}{\ell\sub{s}} t\sub{h} = (g\sub{eff}(\tau\sub{h})^2)^{1/3} \sim \left(\frac{r\sub{h}}{\ell\sub{s}}\right)^2\gg 1\ . \end{equation} The scrambling timescale $t\sub{scr}$ of~(\ref{eq:scrambling}) is then given by \begin{equation} \frac{g\sub{s}}{\ell\sub{s}} t\sub{scr} =(g\sub{eff}(\tau\sub{scr})^2)^{1/3} \sim \ln N\,\left(\frac{r\sub{h}}{\ell\sub{s}}\right)^2\gg 1\ . \end{equation} The lifetime of the configuration $t\sub{life}$ from~(\ref{eq:lifetime}) should correspond to \begin{equation} \frac{g\sub{s}}{\ell\sub{s}} t\sub{life} = (g\sub{eff}(\tau\sub{life})^2)^{1/3} \sim \left(\frac{r\sub{h}}{\ell\sub{s}}\right)^2 N \gg 1\ . \end{equation} These statements follow from the expected black hole physics on the dual side of the correspondence. Note that all three timescales correspond to regimes where the Matrix theory SYM is strongly coupled. On the SYM side, perturbatively, we know that off-diagonal modes have dynamics given by\footnote{The total energy receives an important contribution from fermionic zero modes which will be taken into account when developing the mean field potential. At this stage, we use the bosonic sector only to simply identify relevant dynamical scales. 
Note also that, at finite temperature, supersymmetry would be broken.} \begin{equation} E\sim \frac{1}{R} \delta\dot{x}^2+\frac{R}{\ell\sub{s}^6} \Delta r^2 \delta x^2\ , \end{equation} where $\Delta r$ is the distance between the corresponding diagonal entries; this gives a frequency of \begin{equation}\label{eq:deltar} \omega_{\delta x} \sim \frac{R}{\ell\sub{s}^2}\frac{\Delta r}{\ell\sub{s}} \ . \end{equation} We can then easily see that if $\Delta r\sim \ell\sub{s}$ for nearest neighbor off-diagonal modes, $\delta x$ modes can be treated as heavy and can hence be integrated out over time scales \begin{equation} \omega_{\delta x} t > 1\Rightarrow t>t\sub{o}\ \ \ \mbox{with}\ \ \ \frac{g\sub{s}}{\ell\sub{s}}t\sub{o} = 1 \Rightarrow (g\sub{eff}(\tau)^2) > 1\ . \end{equation} This is the strong coupling transition point for the SYM, a regime that we typically associate with emergence of geometry on the dual M-theory side. The relevant strong coupling benchmark is given by $g\sub{eff}(\tau)^2\sim 1$, instead of the one using the 't Hooft effective coupling $g\sub{eff}(\tau)^2 N \sim 1$, because the dynamics in question is that of individual partons in the black hole soup, as opposed to the interaction of the black hole as a whole. More on the interplay between these two couplings and the emergence of a valid geometrical description can be found in the Discussion section. Next, looking at off-diagonal modes $\delta X$ that straddle diagonal modes separated by a large distance of order $\Delta r\sim r\sub{h}$, we see from~(\ref{eq:deltar}) that these can be integrated out for timescales \begin{equation} \omega_{\delta X} t\gg 1\Rightarrow t\gg t\sub{stoch}\ \ \ \mbox{with}\ \ \ \frac{g\sub{s}}{\ell\sub{s}} t\sub{stoch} = (g\sub{eff}^2(\tau\sub{stoch}))^{1/3} = \frac{\ell\sub{s}}{r\sub{h}} \Rightarrow (g\sub{eff}(\tau)^2)^{1/3} \gg \frac{\ell\sub{s}}{r\sub{h}}\ . 
\end{equation} This is the shortest of the timescales and determines the regime where a stochastic treatment is valid: it corresponds to timescales where integrating out the $\delta X$'s leads to a stochastic mean field potential for the diagonal modes. Note also that, for $r\sub{h}\gg \ell\sub{s}$, part of this regime overlaps with weak coupling in the Matrix SYM. \begin{figure} \begin{center} \includegraphics[width=6in]{fig2.pdf} \end{center} \caption{The hierarchy of timescales for event horizon dynamics. Timescales $t<t\sub{o}$ are associated with non-local physics within D0 brane clusters, but timescales $t>t\sub{stoch}$ allow a local description for coarser inter-cluster dynamics.}\label{fig:timescales} \end{figure} Figure~\ref{fig:timescales} summarizes the various timescales and clarifies the range of validity for the effective model that we propose. The stochastic formalism with a mean field potential for the diagonal modes requires coarse graining over timescales longer than $t\sub{stoch}$. For $t>t\sub{stoch}$, the $\delta X$'s are frozen in a BE condensate. We can then incorporate the effect of the $\delta X$'s into a mean field potential for the modes on the diagonal. The nearest neighbor off-diagonal modes, the $\delta x$'s, cannot be integrated out at these timescales. We keep them among the degrees of freedom participating in the physics of cluster formation. For timescales $t>t\sub{o}$, the nearest neighbor modes are heavy as well and are associated with high frequency dynamics that can be coarse grained and described through a stochastic treatment. However, the $\delta X$ modes will always have a much higher frequency (for $r\sub{h}\gg\ell\sub{s}$) and hence will still determine the mean field potential for the diagonal modes. Finally, the thermal timescales $t\sub{h}$, $t\sub{scr}$, and $t\sub{life}$ are all much longer and live well within the regime of validity of a stochastic treatment that coarse grains over physics faster than $t\sub{stoch}$. 
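The timescale hierarchy just described can be checked numerically. The sketch below is our own illustration, with arbitrary sample values of $g\sub{s}$ and $r\sub{h}$ in string units; it verifies the map $(g\sub{s}/\ell\sub{s})\,t=(g\sub{eff}(\tau)^2)^{1/3}$ for $t\sub{h}$ and the ordering $t\sub{stoch}<t\sub{o}<t\sub{h}$.

```python
# Check of the timescale map (g_s/ell_s) t = (g_eff^2)^{1/3} and of the
# hierarchy t_stoch < t_o < t_h, for illustrative sample values (ell_s = 1).
g_s, ell_s, r_h = 0.1, 1.0, 100.0
R = g_s * ell_s

t_o = ell_s / g_s                        # (g_s/ell_s) t_o = 1
t_stoch = ell_s**2 / (g_s * r_h)         # (g_s/ell_s) t_stoch = ell_s/r_h
t_h = r_h**2 / R                         # thermal timescale

# (g_s/ell_s) t_h should equal (r_h/ell_s)^2
lhs = (g_s / ell_s) * t_h
print(lhs, (r_h / ell_s)**2, t_stoch < t_o < t_h)
```

For $r\sub{h}\gg\ell\sub{s}$ the three scales separate parametrically, which is what justifies treating the $\delta X$ condensate, the cluster dynamics, and the thermal physics at different levels of coarse graining.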
We then list in one place the set of observations underlying our model: \begin{itemize} \item We have a stochastic effective description for diagonal modes for $t>t\sub{stoch}$, or\\ $(g\sub{eff}(\tau)^2)^{1/3} \gg \frac{\ell\sub{s}}{r\sub{h}}$. We integrate out the off-diagonal modes that straddle widely separated modes on the diagonal. \item Strong coupling corresponds to timescales $t>t\sub{o}$, or $(g\sub{eff}(\tau)^2)^{1/3} \gg 1$. In this regime, all off-diagonal modes are heavy, but the effect of nearest neighbor off-diagonal modes on diagonal modes is sub-leading. We associate emergence of geometry on the dual M-theory side with the onset of strong coupling in Matrix theory~\cite{Bigatti:1997jy,Taylor:2001vb}. At timescales $t\sub{stoch} < t \lesssim t\sub{o}$, we might be able to write a stochastic effective description of D0 brane cluster dynamics. We expect that at around $t\sim t\sub{o}$, the degrees of freedom of Matrix theory organize in clusters of about $d$ nearest neighbor branes moving in the larger thermal soup. \item Hawking evaporation physics sets in at $t\gtrsim t\sub{h}$, or $(g\sub{eff}(\tau)^2)^{1/3} \sim \left(\frac{r\sub{h}}{\ell\sub{s}}\right)^2\gg 1$, well within the regime of validity of the stochastic treatment. \end{itemize} It is useful to write some of these timescales in M-theory Planck units. Using~(\ref{eq:rescaled}), and the fact that light-cone time is boosted by a factor of $\ell\sub{P}/R$, we find \begin{equation} \tau\sub{o} = \frac{\ell\sub{P}}{R} \ell\sub{P} \rightarrow \ell\sub{P}\ , \end{equation} \begin{equation} \tau\sub{stoch} = \frac{\ell\sub{P}}{R} \ell\sub{P} \frac{\ell\sub{P}}{r\sub{h}} \rightarrow \ell\sub{P} \frac{\ell\sub{P}}{r\sub{h}} \ll \ell\sub{P}\ , \end{equation} and \begin{equation} \tau\sub{h} = \frac{\ell\sub{P}}{R} \ell\sub{P} \left(\frac{r\sub{h}}{\ell\sub{P}}\right)^2 \rightarrow \ell\sub{P} \left(\frac{r\sub{h}}{\ell\sub{P}}\right)^2 \gg \ell\sub{P}\ . 
\end{equation} Hence we see that $\tau\sub{o}$ corresponds to the Planck timescale in M-theory language. As we shall see, all this means that the chaotic microscopic dynamics that underlies black hole horizon physics is associated with a characteristic timescale that is given by the Planck scale. A well-defined notion of spacetime geometry necessitates coarse graining over longer timescales. Our next task is to develop the stochastic effective descriptions of diagonal and nearest neighbor off-diagonal modes -- the first describing black hole thermodynamics and evaporation, the second giving us a crude peek into brane cluster/bound state dynamics. \subsection{Modes on the diagonal} In this section, we propose a mean field stochastic potential for diagonal modes, valid over timescales $t>t\sub{stoch}$. Using spherical coordinates, we posit \begin{equation}\label{eq:potential} V(r) = - V_0 \left(\frac{r^2}{r_0^2}-1\right)^2 \theta(r_0-r)\ , \end{equation} writing $r^2=\sum_i x_i^2$, where $x_i$ is any diagonal mode of $X_i$. The potential is parametrized by two scales, $r_0$ and $V_0$, and we need to determine these two parameters by comparing the resulting dynamics to that of a light-cone black hole. Note also that we have incorporated quantum effects that we know would arise from the fermionic sector of Matrix theory: the $\theta(r_0-r)$ cuts off the potential at $r_0$ so as to model the expected flattening of the potential from supersymmetry-based cancellations of zero mode energies\footnote{The potential is not strictly flat but comes with a $1/r^{d-2}$ fall-off at one loop order. For the purposes of the approximate stochastic description, we treat this as flat since no aspect of the model explores the region far away from the black hole.}. 
We start by noting that the only scale near the horizon of the Schwarzschild black hole is given by $r\sub{h}$ \footnote{This might prejudice the discussion in favor of black hole complementarity~\cite{Susskind:1993if}-\cite{Susskind:2012rm} as opposed to a firewall scenario at the horizon~\cite{Almheiri:2012rt}-\cite{Harlow:2013tf}. Nevertheless, we still need to map onto geometry on the dual M-theory side. We have tried to develop a model with an additional scale in the mean field potential set at the Planck scale near the horizon, and it seems that this does not lead to a picture that is consistent with Hawking evaporation. While we cannot rule out the possibility of finding an alternate model that includes the Planck scale -- as we have not explored all possibilities -- we note, however, that the simple model given in the text works very well without the need for a Planck scale at the horizon.}. We therefore set \begin{equation} r_0 = r\sub{h} \end{equation} fixing the size of the stochastic diagonal fluctuations to within the would-be horizon size. The temperature of the soup should naturally be the Hawking temperature in the light-cone frame \begin{equation} T=T\sub{h} \simeq \frac{R}{r\sub{h}^2}\ . \end{equation} The mass of a stochastic particle should be set to the mass of a D0 brane \begin{equation} m = \frac{1}{R}\ . \end{equation} This leaves us with determining the damping parameter $\gamma$ and the potential scale $V_0$. We start by looking at the evaporation flux from the thermal soup. Following~\cite{weiss}, we arrange for a steady state scenario for the probability distribution given by \begin{equation} p = C\, f(u) \exp\left[-\frac{1}{T} \left(\frac{1}{2}m\,v^2+V(r)\right)\right]\ , \end{equation} where $u=r-r_0$ and $C$ is a normalization constant to be determined.
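To make explicit where the equation for $f$ below comes from, note that at strong damping the steady state is governed by the radial probability current $j_r \propto -\left(V'(r)\, p + T\,\partial_r p\right)/(m\,\gamma)$. Inserting the ansatz above gives
\begin{equation}
V'(r)\, p + T\,\partial_r p \propto C\, T\, f'(u)\, e^{-V(r)/T}\ ,
\end{equation}
and demanding a position-independent current near $r_0$, where $V'(r)\simeq V''(r_0)\,u$, yields the differential equation for $f$ quoted in the text.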
We need to find $f(u)$ given the boundary conditions \begin{equation} f(-r_0) = 1\ \ \ \mbox{and}\ \ \ f(0)\simeq 0\ , \end{equation} where the first one follows from matching with the equilibrium configuration at $r=0$, while the second one amounts to absorbing the evaporation flux at $r=r_0$, corresponding to evaporation to infinity. The Fokker-Planck equation at strong damping then leads to \begin{equation} \kappa u\,f'(u)+f''(u) = 0\ , \end{equation} where \begin{equation} \kappa = -\frac{V''(r_0)}{T} = \frac{8\,V_0}{T\,r_0^2} > 0 \end{equation} for the mean field potential at hand. The solution is given by the error function \begin{equation} f(u) = \frac{\mbox{erf}((r-r_0) \sqrt{\kappa/2})}{\mbox{erf}(-r_0 \sqrt{\kappa/2})}\ . \end{equation} Integrating over the velocities, we have \begin{equation} p = C\, f(u) e^{-V(r)/T} \left(\frac{2\,\pi\,T}{m}\right)^{d/2}\ , \end{equation} which then leads to the current \begin{equation} j_r = -C\, \frac{T}{m\,\gamma} \left(\frac{2\pi\, T}{m}\right)^{d/2} \sqrt{\frac{2\,\kappa}{\pi}} e^{-V(r)/T} \frac{e^{-\frac{\kappa}{2} (r-r_0)^2}}{\mbox{erf}(-r_0 \sqrt{\kappa/2})}\ . \end{equation} We will see below that \begin{equation} r_0 \sqrt{\frac{\kappa}{2}} \simeq \sqrt{\frac{V_0}{T}} \simeq 1\ , \end{equation} when we find that $V_0 \sim T\sub{h}$. We then note that \begin{equation} \mbox{erf}(-r_0 \sqrt{\kappa/2}) \simeq -1\ . \end{equation} For $x \gtrsim 1$, $\mbox{erf}(-x)$ is very well approximated by $-1$, with corrections exponentially suppressed as $e^{-x^2}/x$. We determine the normalization factor $C$ using \begin{equation} 1 = \int \mbox{d}^d{\bm{r}}\, p(r,t)\ . \end{equation} For this, we write \begin{equation} f(r-r_0) \simeq 1 - \frac{2}{\sqrt{\pi}} \sqrt{\frac{\kappa}{2}}\, r\,e^{-\kappa\,r_0^2/2} \end{equation} near $r\simeq 0$, and \begin{equation} f(r-r_0) \simeq \frac{2}{\sqrt{\pi}}\sqrt{\frac{\kappa}{2}}\, (r_0-r) \end{equation} near $r\simeq r_0$.
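The quoted solution follows from a single integration: writing the equation as $f''/f'=-\kappa\,u$ gives
\begin{equation}
f'(u) = A\, e^{-\kappa\,u^2/2} \ \ \Rightarrow\ \ f(u) = A \int_0^{u} ds\, e^{-\kappa\,s^2/2}\ ,
\end{equation}
where the lower limit enforces the absorbing condition $f(0)\simeq 0$, and the boundary condition $f(-r_0)=1$ fixes $A$, reproducing the ratio of error functions above.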
We then get \begin{equation} 1\simeq C\, \frac{T^d}{m^{d/2}} e^{V_0/T} \frac{r_0^{d}}{V_0^{d/2}} \end{equation} up to a numerical factor. The probability current near $r_0$ takes the form \begin{equation} j(r_0) \simeq C\, \frac{T}{m\,\gamma}\left(\frac{2\pi\,T}{m}\right)^{d/2} e^{-V(r_0)/T}\ , \end{equation} which then leads to the evaporation flux \begin{equation} F \simeq j(r_0) r_0^{d-1} \simeq \frac{T^{-(d-1)/2}V_0^{(d+1)/2}}{m\,r_0^2\gamma}\ , \end{equation} which we can then match with Hawking evaporation at temperature $T\sub{h}$ \footnote{ If we want to include the kinetic energy of the evaporated bit, we would get \begin{equation} F \simeq \frac{T^{-(d-1)/2}V_0^{(d+1)/2}}{m\,r_0^2\gamma} e^{-(\omega+V_0)/T}\ , \end{equation} with $\omega$ being the kinetic energy, giving the standard black body spectrum. } \begin{equation} F=F\sub{h}\simeq \frac{R}{r_0^2}\ . \end{equation} This gives one of the two conditions we need to determine $\gamma$ and $V_0$. The other condition comes from the well-known one-loop effective potential of a probe D0 brane in the background of $N$ D0 branes. Using M-theory Planck units, we have~\cite{Kabat:1997im} \begin{equation} V\simeq \frac{N\, \ell\sub{P}^{d-1} v^4}{R^3 r^{d-3}}\ , \end{equation} where $v$ is the relative velocity of two partons at a separation $r\sim r\sub{h}$. While this is a perturbative result in the Matrix SYM, it is known to lead to an exact match with the dual M-theory scenario~\cite{Kabat:1997im}, implying that it is valid at strong coupling as well\footnote{There have been suggestions that a non-renormalization theorem perhaps underlies this finding~\cite{Bigatti:1997jy}.}.
Remembering that the black hole entropy is given by \begin{equation} S\simeq \frac{r\sub{h}^{d-1}}{\ell\sub{P}^{d-1}} \sim N \end{equation} in Planck units, and saturating the Heisenberg uncertainty bound for each parton~\cite{Horowitz:1997fr,Banks:1997hz,Banks:1997tn} \begin{equation} v\sim \frac{R}{r\sub{h}}\ , \end{equation} we get the scale of the potential energy at the size of the horizon \begin{equation} E\simeq \frac{R}{r\sub{h}^2}\ . \end{equation} Rescaling to SYM units using~(\ref{eq:rescaled}) gives the same relation ($r\sub{h}\rightarrow r\sub{h}\sqrt{R}$, $E\rightarrow E/R$). We then naturally identify this energy scale with the depth of the mean field potential \begin{equation}\label{eq:Vdiagonal} V(0)\simeq E \Rightarrow V_0 = \frac{R}{r\sub{h}^2} = T\sub{h}\ . \end{equation} Finally, from $F=F\sub{h}$, we get \begin{equation} \gamma \simeq \frac{R}{r\sub{h}^2}\ . \end{equation} The latter relation implies that \begin{equation} m\,\gamma \simeq \frac{1}{r_0^2}\ , \end{equation} which corresponds to a borderline strong damping regime~(\ref{eq:strongdamping}) -- needed for consistency with RMT. We can now look at the quantum and thermal vacuum expectation values of a mode $x$ on the diagonal, given by \begin{equation} \left<x^2\right>\sub{th} \sim \frac{T}{V''(0)}\ \ \ ,\ \ \ \left<x^2\right>\sub{qu} \sim \sqrt{\frac{R}{V''(0)}}\ . \end{equation} For the given potential and parameters, we have \begin{equation} \left<x^2\right>\sub{th} \simeq \left<x^2\right>\sub{qu} \sim r\sub{h}^2\ , \end{equation} leading to a borderline thermal regime, which implies that the diagonal modes are barely excited above the ground state. We also note that odd moments vanish at equilibrium, so that \begin{equation} \left<x\right>=0\ . \end{equation} We have thus succeeded in developing a stochastic model for diagonal mode dynamics that matches with Hawking evaporation.
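For completeness, the algebra behind the last step is simple: with $V_0 = T\sub{h} = T$, the evaporation flux collapses to
\begin{equation}
F \simeq \frac{T^{-(d-1)/2}\,V_0^{(d+1)/2}}{m\,r_0^2\,\gamma} \simeq \frac{T}{m\,r_0^2\,\gamma}\ ,
\end{equation}
so that setting $F = F\sub{h} \simeq R/r_0^2$ gives $\gamma \simeq T/(m\,R) = R/r\sub{h}^2$, as quoted.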
As a consistency check, we note that this stochastic evolution has a characteristic timescale given by~(\ref{eq:tT}) \begin{equation} t\sub{T} \sim \frac{m\,\gamma\,r\sub{h}^2}{V_0} = \frac{r\sub{h}^2}{R} = t\sub{h} \end{equation} as required. \subsection{Off-diagonal nearest neighbor modes} At timescales $t\sim t\sub{o}$, where Matrix theory enters the strongly coupled realm, we can attempt to describe clusters of $d$ nearest neighbor branes through stochastic means. The clusters are marginally held together and we expect this dynamics to be a delicate one, given its natural overlap with the physics of D0 brane marginal bound state formation. Nevertheless, we will use the methods of stochastic dynamics to try to describe the problem, bearing in mind that we aim only to identify scaling relations of what is most likely a very subtle cluster formation process. We model the potential for the nearest neighbor off-diagonal modes $V_{\delta x}$ with a simple quadratic confining form, and the only relevant scale is the curvature $V''(0)$. For nearest neighbor diagonals, we expect an inter-brane separation of $\Delta r\sim \ell\sub{s}$, leading to a perturbative potential for the corresponding off-diagonal modes given by \begin{equation}\label{eq:Vdeltax} V''_{\delta x}(0) \sim \frac{R}{\ell\sub{s}^6} \Delta r^2 \sim \frac{R}{\ell\sub{s}^4} = \frac{g\sub{s}}{\ell\sub{s}^3} = g\sub{YM}^2\ . \end{equation} This is a perturbative result, but we extend it to $t\lesssim t\sub{o}$ as a scaling relation. The thermal and quantum vacuum expectation values are \begin{equation} \left<\delta x^2\right>\sub{th} \sim \frac{T}{V''(0)} = \frac{T}{g\sub{YM}^2}\ \ \ ,\ \ \ \left<\delta x^2\right>\sub{qu} \sim \sqrt{\frac{R}{V''(0)}} = \sqrt{\frac{R}{g\sub{YM}^2}}\ , \end{equation} where in the thermal expression, we want to think of $T$ as a scale for kinetic energy within the bound system.
We would expect ground state physics, implying \begin{equation} \left<\delta x^2\right>\sub{th}\sim \left<\delta x^2\right>\sub{qu} \Rightarrow \frac{T^2}{g\sub{YM}^2} \sim R\ , \end{equation} which identifies \begin{equation} T_{\delta x} \sim \frac{R}{\ell\sub{s}^2} \end{equation} as the expected scale for kinetic energy in the cluster. The mass parameter would still be given by \begin{equation} m_{\delta x}=\frac{1}{R}\ . \end{equation} Finally, we propose that the strong damping bound needed by RMT should be valid, and at worst saturated \begin{equation} (m\,\gamma)^2 \sim m\,V''(0) \Rightarrow m\,\gamma \sim \frac{1}{\ell\sub{s}^2}\ , \end{equation} identifying the damping parameter $\gamma$ for cluster dynamics. As a sanity check, we can verify that the associated characteristic timescale for the stochastic dynamics is \begin{equation} \mbox{Timescale} \sim \frac{m\,\gamma}{V''(0)} \sim \frac{\ell\sub{s}}{g\sub{s}} = t\sub{o}\ , \end{equation} which again matches well with our expectations that the relevant dynamics is at the onset of strong coupling in the SYM theory. Finally, the expected size of the cluster becomes \begin{equation} \mbox{Size}^2 \sim \frac{T}{V''(0)} \sim \ell\sub{s}^2\ , \end{equation} which also syncs well with our expectation that one thermal parton is to occupy one Planck area at the black hole horizon \footnote{ Note that in M-theory Planck units, this translates to $\mbox{Size}\sim \ell\sub{P}$ as expected, given that $X\rightarrow X/\sqrt{R}$. }. \section{Quantum information} In this section, we want to describe how information evolves in the stochastic model we developed above. For this purpose, we need to look more closely at the fermionic degrees of freedom of the $\Psi$ matrix in~(\ref{eq:sym}). It is known that these correspond to the polarizations of the light-cone M-theory supergravity multiplet -- the graviton, the gravitino, and the 3-form gauge field~\cite{Banks:1996vh}. 
That is, in the low energy regime, we can think of an entry on the diagonal in the $X_i$'s as the coordinate of a supergravity particle whose flavor and polarization state is determined by the corresponding entry in the $\Psi$ matrix. We can expect that information in an M-theory black hole can be encoded in the polarization states of a thermal soup of supergravity excitations. We would then want to study the time evolution of the $\Psi$ matrix within the effective model we have developed. Note that the quantum contribution from the fermionic modes in their ground state has already been taken into account in the shape of the mean field potential for the diagonal bosonic modes. In the spirit of RMT, the equilibrium dynamics of the fermionic and bosonic matrix entries are treated as statistically uncorrelated. This justifies working with the bosonic sector by itself, as we have done: it is assumed that a corresponding thermal state is also set up in the fermionic sector as the two sectors are in thermal equilibrium. Our goal now is to track how information encoded in the polarization states evolves when this equilibrium configuration is slightly perturbed. We could, for example, consider one particularly interesting scenario, the emission of a supergravity particle from the stochastic soup, as a matrix entry of $X_i$ ventures off to large distances. We would choose a particular matrix configuration that can describe this situation, and analyze the evolution of the corresponding bit of quantum information in $\Psi$. \subsection{Qubit dynamics and M-theory polarizations} We start by considering a $d=3$ matrix configuration that looks like\footnote{The $\Delta X$s in this expression are set to zero to leading order in the computation as they are fast modes frozen in the vacuum and their effect is already incorporated in the mean-field potential.
The expectation values $\left<\delta X\right>$ in the vacuum scale inversely with the large frequency.} \begin{equation}\label{eq:matrices} X = \left( \begin{array}{ccc} X\sub{bh} & \delta x\sub{bh} & 0 \\ \overline{\delta x}\sub{bh} & x\sub{bh} & \delta x \\ 0 & \overline{\delta x} & x \end{array} \right) \ \ \ ,\ \ \ \Psi = \left( \begin{array}{ccc} \Psi\sub{bh} & \delta \psi\sub{bh} & 0 \\ \overline{\delta \psi}\sub{bh} & \psi\sub{bh} & \delta \psi \\ 0 & \overline{\delta \psi} & \psi \end{array} \right)\ , \end{equation} where $X\sub{bh}$ and $\Psi\sub{bh}$ are $(N-2)\times (N-2)$ sub-blocks representing part of the black hole, and the remaining $x\sub{bh}$/$\psi\sub{bh}$ and $x$/$\psi$ represent $1\times 1$ entries that are bits of the black hole that will participate in an emission process. The particle with coordinate $x$ and polarization state $\psi$ has perhaps ventured outside the black hole via ergodic motion. The $\delta x$ mode is a nearest neighbor off-diagonal, implying that $x\sub{bh}$ and $x$ are part of a cluster. The rest of the matrix entries start off in an equilibrium state at temperature $T\sub{h}$. Note that $\delta x\sub{bh}$ and $\delta \psi\sub{bh}$ are $N-2$ component vectors. The fermionic part of the Matrix theory action is given by~(\ref{eq:sym}) \begin{equation}\label{eq:stochaction} S\sub{ferm}[X,\Psi]=\int dt\, \frac{1}{2} \Psi \dot{\Psi} + \frac{R}{2\,\lambda^{3/2}} \Psi \Gamma^i \left[X_i, \Psi\right]\ . \end{equation} Quantizing the fermionic matrix entries, we have \begin{equation} \left\{\Psi_{ab\, \alpha}, \Psi^\dagger_{ab\, \beta}\right\} = 2\, \delta_{\alpha\beta}\ , \end{equation} where $\alpha$ and $\beta$ are 10 dimensional spinor indices, $\alpha, \beta = 1,\ldots, 16$, remembering that the matrix entries $\Psi_{ab}$ are Majorana-Weyl in $10$ spacetime dimensions.
Applying this quantization to the matrix configuration~(\ref{eq:matrices}), we get for the off-diagonal modes \begin{equation} \left\{\delta \psi_\alpha, \delta\overline{\psi}_\beta\right\} = 2\,\delta_{\alpha\beta}\ , \end{equation} while the diagonal entries lead to a Clifford algebra \begin{equation} \left\{\psi_\alpha, \psi_\beta\right\} = 2\,\delta_{\alpha\beta}\ . \end{equation} The latter means that we can introduce new raising/lowering spinors on the diagonal by \begin{equation}\label{eq:diagqubits} \psi^\pm_\alpha = \frac{1}{2} \left(\psi_\alpha \pm i\, \psi_{\alpha+8}\right) \end{equation} where we now restrict $\alpha = 1, \ldots, 8$. We then have \begin{equation} \left\{\psi^+_\alpha,\psi^-_\beta\right\} = \delta_{\alpha\beta} \end{equation} as needed. In general, the fermionic sector then consists of $8\,N\,(N-1)$ qubits from off-diagonal modes and $8\,N$ qubits from the diagonal modes for a total of $8\,N^2$ qubits corresponding to $2^8=256$ polarization states of the M-theory supergravity multiplet -- one for each of the $N^2$ matrix degrees of freedom. Using~(\ref{eq:matrices}), we can then expand the action~(\ref{eq:stochaction}) treating all matrix entries as stochastic variables. Furthermore, given spherical symmetry, we expect all spatial directions to be statistically equivalent so that we can write $x_i \rightarrow x$ for all $i$. We get the action \begin{eqnarray}\label{eq:qubitaction} &&S\sub{ferm} = \frac{2\,\sqrt{d}}{(2\pi)^{3/2}}\,\frac{R}{\ell\sub{s}^3}\, \left[\left((x-x\sub{bh})\, \overline{\delta\psi}\, \Gamma \delta\psi +\delta x\, \overline{\delta\psi}\, \Gamma (\psi-\psi\sub{bh}) - \delta \psi\, \Gamma (\psi-\psi\sub{bh})\, \overline{\delta x}\right)\right. \nonumber \\ &&+ \left. 
\left(\overline{\delta \Psi}\sub{bh}\, \Gamma (X\sub{bh}-x\sub{bh})\delta \Psi\sub{bh} -\overline{\delta\Psi}\sub{bh}\,\Gamma (\Psi\sub{bh}-\psi\sub{bh})\,\delta X\sub{bh} -\overline{\delta X}\sub{bh}\,(\Psi\sub{bh}-\psi\sub{bh})\, \Gamma \delta\Psi\sub{bh}\right)\right] \end{eqnarray} where we define \begin{equation} \Gamma \equiv \frac{1}{\sqrt{d}} \sum_i \Gamma_i\ . \end{equation} Throughout, we use a symmetric representation for the $\Gamma_i$s. Note that $\Gamma^2=1$ and $\mbox{Tr}\, \Gamma =0$ so that the eigenvalues of $\Gamma$ are $\pm 1$. We will then choose the convenient representation where \begin{equation}\label{eq:Gamma} \Gamma = \left( \begin{array}{cc} 1_{8\times8} & 0_{8\times8} \\ 0_{8\times8} & -1_{8\times8} \end{array} \right)\ . \end{equation} Taking the thermal vacuum expectation value of~(\ref{eq:qubitaction}), we see that the thermal average of the action $\left<S\sub{ferm}\right>$ vanishes at equilibrium given that we know \begin{equation} \left< x \right> = \left< x\sub{bh} \right> = \left< \delta x \right> = \left< X\sub{bh} \right> = \left< \delta X\sub{bh} \right> = 0\ . \end{equation} This is simply the statement that, once equilibrium is achieved, we have two separate systems -- a bosonic and a fermionic one -- that can be treated as two thermal components in equilibrium at the same temperature. The interesting physics arises when we consider a perturbed configuration, for example one corresponding to $x-x\sub{bh}$ being momentarily large -- describing the process of evaporation of a bit of the Matrix black hole. The subsequent relaxation process would be driven by the couplings in~(\ref{eq:qubitaction}) between bosonic modes and qubits. We can analyze this physical setup by looking at the stochastic effective action of the qubits provided we arrange proper boundary conditions where $x$ and $\delta x$ are initially perturbed away from equilibrium. In the next section, we develop this method of tracking qubit information evolution. 
\subsection{Qubit action} We expect that a small perturbation should not affect the whole system appreciably on short enough timescales. This means that if we were to perturb $x$ and $\delta x$ in~(\ref{eq:matrices}) off-equilibrium, $X\sub{bh}$ and $\delta X\sub{bh}$ (as well as $\Psi\sub{bh}$ and $\delta \Psi\sub{bh}$) would remain in equilibrium as long as $N\gg 1$. Using techniques from~\cite{janssen}, given a stochastic variable $\chi$ coupling to other degrees of freedom $F(t)$ via $S=\int dt\, \chi\,F$, we can write an action \begin{equation}\label{eq:Seffstoch} i\,S\sub{q} = \ln \int \mathcal{D}\chi\, e^{-S\sub{stoch}[\chi] + i\, \int dt\,\chi F(t)}\ , \end{equation} where $\chi$ would be $x$ or $\delta x$ from earlier, and where the stochastic action is \begin{equation} S\sub{stoch}[\chi] = \int dt \left[\frac{i\,m\,\gamma}{4\,T} \left(\dot{\chi} + \frac{V'(\chi)}{m\,\gamma}\right)^2 -\frac{i}{2\,m\,\gamma} V''(\chi) \right]\ . \end{equation} Here $T$ is the temperature to which the perturbed $\chi$ relaxes, and the path integration involves boundary conditions corresponding to the quenching process of interest. The potential $V$, the damping parameter $\gamma$, and the mass $m$ are all determined from our previous discussion in Section~\ref{sec:first}. $F(t)$ can be obtained from~(\ref{eq:qubitaction}) and is bilinear in the qubit variables. It can easily be shown that the Smoluchowski equation for $\chi$ given by~(\ref{eq:smol}) follows from $S\sub{stoch}$~\cite{janssen}. To evaluate the path integral, we start with the classical equations of motion \begin{equation} \frac{i\,m\,\gamma}{2}\frac{1}{T} \ddot{\chi} - \frac{i}{2} \frac{m\,\gamma}{T} \Omega^2 \chi = F\ , \end{equation} where \begin{equation}\label{eq:Omega} \Omega \equiv \frac{V''(0)}{m\,\gamma}\ .
\end{equation} If $\chi$ represents a radial coordinate $\sqrt{x_i^2}$ in a spherically symmetric setup as given by~(\ref{eq:potential}), we get instead \begin{equation} \Omega^2 = \frac{V_0}{(m\,\gamma)^2\,r_0^2} \left(16\,\frac{V_0}{r_0^2}+8\,(d+2)\, \frac{T}{r_0^2}\right)\ . \end{equation} Since $V_0 \sim T$ for any of the bosonic perturbations of interest, $\Omega$ then has the same scale irrespective of symmetry. We solve the sourceless classical equation \begin{equation} \ddot{\chi}-\Omega^2 \chi = 0\ , \end{equation} and we easily find \begin{equation}\label{eq:chicl} \chi\sub{cl}(t) = \frac{1}{\sinh \Omega (t_i-t_f)}\left[ \chi_i \sinh{\Omega\,(t-t_f)} - \chi_f \sinh{\Omega (t-t_i)}\right]\ , \end{equation} with $F=0$, where $\chi_i$ is an initial off-equilibrium configuration, and $\chi_f$ is the equilibrium configuration towards which $\chi$ relaxes. The classical contribution to the action is then \begin{equation} S\sub{q}^{cl} = \int_{0}^{t_f} dt\, F(t)\, \chi_{cl}(t)\ , \end{equation} where we take the initial time $t_i=0$. The quantum contribution is given by \begin{equation}\label{eq:quaction} S^{qu}\sub{eff} = -i\, \frac{T}{m\,\gamma} \int_{0}^{t_f} \int_{0}^{t_f} dt\,dt'\,F(t) \frac{\delta(t-t')}{\partial_t^2-\Omega^2} F(t')\ , \end{equation} with the associated Green's function \begin{eqnarray} G(t,t') = \frac{2}{\Omega} \frac{e^{\Omega\,(t_i+t_f)}}{e^{2\,\Omega\,t_i}-e^{2\,\Omega\,t_f}} &\times&\left[\, \sinh \Omega (t-t_f) \sinh \Omega (t_i-t') \theta(t-t') \right.\nonumber \\ &+&\left.
\sinh \Omega (t-t_i) \sinh \Omega (t_f-t') \theta(t'-t)\, \right]\ . \end{eqnarray} In summary, we arrive at an action for the qubit variables -- hidden in the $F(t)$ -- of the form \begin{equation}\label{eq:chiaction} S\sub{q} = \int_{0}^{t_f} dt\, F(t)\, \chi_{cl}(t) -i\, \frac{T}{m\,\gamma} \int_{0}^{t_f} dt\, \int_{0}^{t_f}dt'\, F(t) G(t,t') F(t')\ , \end{equation} describing the evolution of the relevant qubits as the bosonic stochastic variable $\chi$ relaxes -- after a quench described by the boundary conditions $\chi_i$ and $\chi_f$. Note that the second part of~(\ref{eq:chiaction}) is imaginary, which implies that the qubit evolution is in general non-unitary. This piece involves quartic qubit interactions and would be responsible for scrambling information away as the background evolves stochastically. This is not surprising, yet it is an important observation: we are then able to associate information loss in Hawking radiation with the coarse-graining over short timescales that results in an effective model of what is otherwise microscopic unitary evolution of information. That is, we see how averaging over chaotic dynamics in Matrix theory is responsible for information loss in the dual low energy M-theory or supergravity. Below, we will see that when this non-unitary piece of the effective dynamics becomes important, we expect the emergence of geometry on the dual M-theory side. Our goal next is to consider scenarios where $\chi$, or $x$ and $\delta x$, are perturbed away from equilibrium, and then we want to track the evolution of the qubits described by $\psi$ and $\delta \psi$. \subsection{Long timescales} Consider the qubit couplings given by~(\ref{eq:qubitaction}) where $x$ and $\delta x$ are arranged to start off in an off-equilibrium configuration.
Neglecting the back reaction of this perturbation onto the black hole, we can take \begin{equation} \left<X\sub{bh}\right> = \left<x\sub{bh}\right> = \left<\delta X\sub{bh}\right> = 0 \end{equation} so that we have \begin{equation}\label{eq:qubitaction2} S\sub{ferm} \rightarrow \frac{2\,\sqrt{d}}{(2\,\pi)^{3/2}}\,\frac{R}{\ell\sub{s}^3}\, \left[x\, \overline{\delta\psi}\, \Gamma \delta\psi +\delta x\, \overline{\delta\psi}\, \Gamma (\psi-\psi\sub{bh}) - \delta \psi\, \Gamma (\psi-\psi\sub{bh})\, \overline{\delta x}\right] \end{equation} We want to develop the action of the qubits using~(\ref{eq:Seffstoch}), which then gives~(\ref{eq:chiaction}) where $\chi$ represents $x$ or $\delta x$, and $F(t)$ can be read off from~(\ref{eq:qubitaction2}). Before looking at the details, notice that the second term in~(\ref{eq:chiaction}), which is quartic in the qubits, is imaginary and renders the evolution non-unitary. The term is the result of coarse-graining over the stochastic variables $x$ and $\delta x$ and naturally leads to information loss. The scale for this non-unitary piece is \begin{equation} \mbox{non-unitary coupling}\sim \frac{T}{m\,\gamma}\, F^2\,t^2\, G \sim \frac{T}{m\,\gamma}\left(\frac{R}{\ell\sub{s}^3}\right)^2\, \frac{t}{{\partial_t^2-\Omega^2}} \end{equation} given that the propagator $G(t,t')$ scales as $\delta(t-t')/(\partial_t^2-\Omega^2)$ and the fermions are dimensionless. Irrespective of whether $\chi$ represents $x$ or $\delta x$, we have \begin{equation} \frac{T}{m\,\gamma} \simeq R\ . \end{equation} From~(\ref{eq:Vdiagonal}) and~(\ref{eq:Omega}), we have \begin{equation} \Omega_x \sim V_0 \sim \frac{R}{r\sub{h}^2} = \frac{1}{t\sub{h}}\ , \end{equation} when $\chi$ is identified with $x$; while from~(\ref{eq:Vdeltax}) and~(\ref{eq:Omega}) we instead have \begin{equation} \Omega_{\delta x} \simeq \frac{g\sub{s}}{\ell\sub{s}} = \frac{1}{t\sub{o}}\ , \end{equation} when $\chi$ is identified with $\delta x$. 
Hence, for $t\lesssim t\sub{o}$, the non-unitary coupling scales as \begin{equation} \mbox{non-unitary coupling}\sim \left(\frac{R}{\ell\sub{s}^2}\right)^3 t^3 \sim \left(\frac{t}{t\sub{o}}\right)^3 = g\sub{eff}(\tau)^2 \end{equation} whether $\chi$ represents $x$ or $\delta x$. We then see that this coupling, and hence information loss, sets in for timescales of order $t\sim t\sub{o}$, where the effective dimensionless Yang-Mills coupling becomes order unity and Matrix theory starts describing emergent spacetime geometry in the dual formulation. For shorter timescales, $t\ll t\sub{o}$, the evolution is effectively unitary, given by the first semi-classical term in~(\ref{eq:chiaction}). Note, however, that for $t\sub{stoch}< t \ll t\sub{o}$, the dynamics is non-local, given by the Planck scale cluster physics and the light nearest neighbor off-diagonal modes $\delta x$ of the matrices. \subsection{Short timescales} Let us start by writing the full qubit action~(\ref{eq:chiaction}) that follows from using~(\ref{eq:qubitaction2}). When $\chi$ is identified with the diagonal coordinate $x$ of~(\ref{eq:matrices}), we have \begin{equation}\label{eq:effdiagpert} S\sub{q} = \int_{0}^{t_f} dt\, f(t)\, x_{cl}(t) -i\, \frac{T}{m\,\gamma} \int_{0}^{t_f} dt\, \int_{0}^{t_f}dt'\, f(t) \frac{\delta(t-t')}{\partial_t^2-\Omega^2} f(t')\ , \end{equation} and \begin{equation}\label{eq:ft} f(t) = -\frac{R}{(2\,\pi)^{3/2}\ell\sub{s}^3} \sqrt{d} \left(\overline{\delta\psi}\cdot\delta \psi-\overline{\delta\psi}'\cdot\delta\psi'\right) \end{equation} obtained from~(\ref{eq:Gamma}) and~(\ref{eq:qubitaction2}), and where $\delta\psi'_\alpha\equiv\delta\psi_{\alpha+8}$ with $\alpha = 1,\ldots,8$. The `dot' represents a sum over 8 qubits, {\em i.e.} $\overline{\delta\psi}\cdot \delta \psi\equiv \sum_\alpha \overline{\delta\psi}_\alpha\delta \psi_\alpha$. As mentioned above, the second non-unitary piece is negligible for $t\ll t\sub{o}$.
Looking at the first term of~(\ref{eq:effdiagpert}), we can see that it provides mass to the $\delta \psi$ and $\delta \psi'$ qubits, and it scales as \begin{equation} x\sub{cl} \frac{R}{\ell\sub{s}^3} t \sim \frac{x\sub{cl}}{\ell\sub{s}} \frac{t}{t\sub{o}}\ . \end{equation} For early times where $t<t\sub{o}$, this term is important only if $x\sub{cl}$ is large. This, for example, would be the case if the matrix entry labeled by $x$ were to evaporate away, $x\sub{cl} \gtrsim r\sub{h}$. If the initial perturbation for the stochastic variable $x$ is such that $x_i\sim r\sub{h}$, the subsequent stochastic evolution is in a flat potential given the form of~(\ref{eq:potential}). This evolution, described by~(\ref{eq:moment1}) and~(\ref{eq:moment2}) -- or equivalently~(\ref{eq:chicl}) -- results in $x\sub{cl}(t)$ growing to infinity\footnote{To account for the flat direction in~(\ref{eq:potential}), we can for example take $\Omega_x\rightarrow 0$, which gives from~(\ref{eq:chicl}) $x\sub{cl}(t)\rightarrow x_i+(t/t\sub{o}) (x_f-x_i)$, where $x_i\sim r\sub{h}.$}. We then conclude that the effective qubit dynamics that arises from a perturbation on the diagonal -- that corresponds to $x$ evaporating away -- is described by \begin{equation}\label{eq:qubits2} S^{(I)}\sub{eff} = \int_{0}^{t\sub{o}} dt\, f(t) x\sub{cl}(t)\ , \end{equation} with $x\sub{cl} \sim r\sub{h}$ initially and growing larger thereafter. This is the statement that the off-diagonal qubits $\delta \psi$ and $\delta \psi'$ become heavier and heavier and condense as the bit evaporates away.
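The flat-direction limit used in the footnote follows directly from~(\ref{eq:chicl}): expanding the hyperbolic sines for $\Omega_x\rightarrow 0$ gives
\begin{equation}
x\sub{cl}(t) \rightarrow \frac{x_i\,(t-t_f)-x_f\,(t-t_i)}{t_i-t_f} = x_i + \frac{t-t_i}{t_f-t_i}\left(x_f-x_i\right)\ ,
\end{equation}
which, with $t_i=0$ and $t_f=t\sub{o}$, reproduces the linear growth $x\sub{cl}(t)\rightarrow x_i+(t/t\sub{o})(x_f-x_i)$.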
For the off-diagonal coordinate $\delta x$ in~(\ref{eq:qubitaction2}), the resulting action takes the form \begin{equation}\label{eq:offdiagqubits} S\sub{q} = \int_{0}^{t_f} dt\, (\delta x_{cl}(t) F+\overline{\delta x_{cl}}(t) \overline{F}) - i\,R\, \int_{0}^{t_f}dt\int_{0}^{t_f} dt'\, F^*(t)\frac{\delta(t-t')}{\partial_{t'}^2-\Omega^2} F(t')\ , \end{equation} where \begin{equation} F(t) = \frac{R}{(2\,\pi)^{3/2}\ell\sub{s}^3} \sqrt{d} \times \left[ (\psi^+-\psi\sub{bh}^+)\cdot(\delta \psi +i\, \delta\psi')+ (\psi^--\psi\sub{bh}^-)\cdot(\delta \psi -i\, \delta\psi')\right]\ . \label{eq:Ft} \end{equation} In arriving at this expression, we have used a complexified version of the action~(\ref{eq:Seffstoch}) where $\chi$ is complex as is $\delta x$ -- since the integrated modes are most naturally represented by complex variables. We have also used the diagonal qubit operators $\psi^\pm$ and $\psi\sub{bh}^\pm$ defined in~(\ref{eq:diagqubits}). Once again, as described above, the second non-unitary piece is negligible for $t\ll t\sub{o}$. The first term in~(\ref{eq:offdiagqubits}) provides a coupling between qubits $\psi$, $\psi\sub{bh}$, and $\delta\psi$ and it scales as \begin{equation} \delta x\sub{cl} \frac{R}{\ell\sub{s}^3} t \sim \frac{\delta x\sub{cl}}{\ell\sub{s}} \frac{t}{t\sub{o}}\ . \end{equation} As the bit $x$ evaporates away, equations~(\ref{eq:moment1}) and~(\ref{eq:moment2}) -- or equivalently~(\ref{eq:chicl}) -- tell us that the initial value of $\delta x\sub{cl}$ decays exponentially to zero on a timescale given by $t\sub{o}$, as the mode becomes heavy\footnote{The easiest way to see this is to note that, using~(\ref{eq:moment1}), we have $d(\delta x\sub{cl})/dt = -\Omega_{\delta x} \delta x\sub{cl}$.}. At short times $t\ll t\sub{o}$, we write \begin{equation}\label{eq:qubits1} S^{(II)}\sub{eff} = \frac{1}{2}\int_{0}^{t\sub{o}} dt\, (\delta x_{cl}(t)\,F+\overline{\delta x_{cl}}(t)\,\overline{F})\ .
\end{equation} In summary, the qubit action is given by $S^{(I)}\sub{eff}+S^{(II)}\sub{eff}$, or \begin{eqnarray} S\sub{q} &=& \int_{0}^{\tau\sub{o}} d\tau\, \left[\,g\,\xi\sub{cl}(\tau) \left(-\overline{\delta\psi}\cdot\delta \psi+\overline{\delta\psi}'\cdot\delta\psi'\right) \right. \nonumber \\ &+& \left. g\, \delta \xi_{cl}(\tau)\,\left( (\psi^+-\psi\sub{bh}^+)\cdot(\delta \psi +i\, \delta\psi')- (\delta \psi -i\, \delta\psi')\cdot(\psi^--\psi\sub{bh}^-)\right) \right. \nonumber \\ &+& \left. g\,\overline{\delta \xi_{cl}}(\tau)\,\left( (\overline{\delta \psi} - i\, \overline{\delta\psi'})\cdot(\psi^--\psi\sub{bh}^-)- (\psi^+-\psi\sub{bh}^+)\cdot(\overline{\delta \psi} + i\, \overline{\delta\psi'})\right) \right]\ , \label{eq:qubitevolutionaction} \end{eqnarray} where we have switched from time $t$, $x$, and $\delta x$ to the scaled variables $\tau$, $\xi$, and $\delta \xi$ (see equation~(\ref{eq:rescaled})), and the effective coupling $g$ is defined as \begin{equation} g = \frac{\ell\sub{s} g\sub{s}^{-2/3}R}{(2\,\pi)^{3/2}\ell\sub{s}^3}\sqrt{d} = \frac{(g\sub{YM}^2)^{1/3}\sqrt{d}}{(2\,\pi)^{3/2}}\ , \end{equation} which has units of inverse length such that $g\,\tau$ is the effective dimensionless coupling. In total, the system describes $8\times 4$ qubits: $8\times 2$ off-diagonal ones denoted by $\delta\psi_\alpha$ and $\delta\psi'_\alpha$, and $8\times 2$ on the diagonal denoted by $\psi_\alpha$ and ${\psi\sub{bh}}_\alpha$. The stochastic relaxation from a quench is given by the classical profiles $\xi\sub{cl}(\tau)=x\sub{cl}(\tau)/\ell\sub{s}$ and $\delta \xi\sub{cl}(\tau)=\delta x\sub{cl}(\tau)/\ell\sub{s}$ that follow from~(\ref{eq:chicl}). We now elaborate on the implications of the qubit evolution action~(\ref{eq:qubitevolutionaction}), restricting our attention to early times $t\sub{stoch}<t\lesssim t\sub{o}$ -- before the onset of dissipation and emergence of geometry.
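As a quick check on this normalization, note that up to numerical factors $g\sim (g\sub{YM}^2)^{1/3}$, so that, with the standard identification $g\sub{eff}(\tau)^2 = g\sub{YM}^2\,\tau^3$,
\begin{equation}
g\,\tau \sim \left(g\sub{YM}^2\,\tau^3\right)^{1/3} = \left(g\sub{eff}(\tau)^2\right)^{1/3}\ ,
\end{equation}
which is indeed dimensionless and becomes of order unity precisely at $\tau\sim\tau\sub{o}$, the onset of strong coupling discussed earlier.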
For the remaining discussion, we will use the coherent state representation of the qubits, which we first briefly review. For a qubit with states $\left|0\right>$ and $\left|1\right>$, a representation over a coherent state $\left|\eta\right>$ looks like~\cite{Grosche:1998yu} \begin{equation} \left<\eta|0\right> = 1\ \ \ ,\ \ \ \left<\eta|1\right> = \overline{\eta}\ , \end{equation} where $\eta$ is a Grassmann number. A general state $\left|\Phi\right>$ is then a function over the Grassmann variables $\left<\eta|\Phi\right> \equiv \Phi(\overline{\eta})$. A Bell state \begin{equation} \left|B\right> = \frac{1}{\sqrt{2}} \left(\left|0\right>\left|1\right>-\left|1\right>\left|0\right>\right) \end{equation} is then represented as \begin{equation} \left<\eta_1\eta_2|B\right> = \frac{1}{\sqrt{2}} \left(\overline{\eta}_2-\overline{\eta}_1\right)\ . \end{equation} The expectation value of an operator takes the form of a function over Grassmann variables \begin{equation} \mathcal{O}(\overline{\eta},\eta') \equiv \left<\eta | \mathcal{O}| \eta'\right> = \sum_{m,n=0,1} \overline{\eta}^m \left<m|\mathcal{O}|n\right> {\eta'}^{n}\ . \end{equation} The path integral measure is such that \begin{equation}\label{eq:application} \left<\eta|\mathcal{O}|\Phi\right> = \int d\overline{\eta}'d\eta' e^{-\overline{\eta}'\eta'} \mathcal{O}(\overline{\eta},\eta') \Phi(\overline{\eta}')\ . \end{equation} For a Hamiltonian of qubits referenced by the operators $\psi^\pm$, we would write $\psi^+\rightarrow \overline{\eta}$ and $\psi^-\rightarrow \eta$. For a simple bilinear and time-dependent structure with sources, we have \begin{equation}\label{eq:Hform} H = A(t)\, \overline{\eta}\eta - \overline{J}(t) \eta - \overline{\eta} J(t)\ .
\end{equation} The unitary evolution operator as a function over Grassmann variables takes the form \begin{equation} U({\overline{\eta}}'', \eta'; t'',t) = \int_{\eta(t')=\eta'}^{\overline{\eta}(t'')=\overline{\eta}''} \mathcal{D}\overline{\eta}(t)\mathcal{D}\eta(t) \exp\left[ \overline{\eta}''\eta(t'') +i\,\int_{t'}^{t''} dt\,\left(i\,\overline{\eta}(t) \dot{\eta}(t)-H(\overline{\eta}(t),\eta(t),t)\right) \right]\ , \end{equation} which, for a Hamiltonian of the form~(\ref{eq:Hform}), then leads to \begin{eqnarray} U({\overline{\eta}}'', \eta'; t'',t) &=& \exp\left[ {\overline{\eta}}''e^{-i \int_{t'}^{t''}A(t) dt} \eta' \right. \nonumber \\ &+&\left.i\, {\overline{\eta}}'' \int_{t'}^{t''} dt\, J(t) e^{-i \int_{t'}^{t''} ds\,\theta(s-t) A(s)} +i\, \int_{t'}^{t''} dt\, \overline{J}(t)e^{-i\int_{t'}^{t''}ds\,\theta(t-s)A(s)}\eta' \right. \nonumber \\ &-&\left. \int_{t'}^{t''} dt \int_{t'}^{t''} ds\, \overline{J}(t)D(t,s)J(s) \right]\label{eq:master} \end{eqnarray} where the propagator is given by \begin{equation} D(t,s) \equiv \theta(t-s) e^{i \int_{s}^t A(t') dt'}\ . \end{equation} We can then use this approach to write the unitary evolution operator for the qubits given by~(\ref{eq:qubitevolutionaction}). The Grassmann variables will be labeled as $\delta\psi$, $\delta\psi'$, $\psi^-$, $\psi\sub{bh}^-$, and their complex conjugates -- in correspondence with the respective operators. We then seek the evolution operator written as \begin{equation}\label{eq:U} U(\psi^+(\tau), \psi^-(0), \psi\sub{bh}^+(\tau), \psi\sub{bh}^-(0), \overline{\delta\psi}(\tau), {\delta\psi}(0), \overline{\delta\psi'}(\tau), {\delta\psi'}(0) ;\tau, 0) \end{equation} that acts on the qubit wavefunction $\Phi(\psi^+(0), \psi\sub{bh}^+(0), \overline{\delta\psi}(0),\overline{\delta\psi'}(0))$. We have the evolution of an $8\times 4$ qubit system, half on the matrix diagonal and the other half off-diagonal; all $32$ qubits are part of a cluster.
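As a minimal consistency check of the kernel~(\ref{eq:master}), consider the source-free limit $J=0$ with constant $A$: the kernel reduces to $\exp[\overline{\eta}''e^{-iAT}\eta']$, i.e.\ $\left<0|U|0\right>=1$ and $\left<1|U|1\right>=e^{-iAT}$. The sketch below (with arbitrary illustrative values of $A$ and $T$) recovers the phase $e^{-iAT}$ on the $\left|1\right>$ amplitude by naive time-stepping of the Schr\"odinger equation for $H=A\,\overline{\eta}\eta$:

```python
import cmath

# Source-free (J = 0), constant-A limit of the single-qubit kernel:
# on |1> the Hamiltonian acts as the number A, so <1|U(T)|1> = e^{-iAT},
# matching the factor exp(-i int A dt) in the Grassmann kernel.
# A and T are arbitrary illustrative values.
A, T = 0.7, 2.3
steps = 200_000
dt = T / steps

# first-order Euler stepping of i d(psi)/dt = A psi for the |1> amplitude
psi = 1.0 + 0.0j
for _ in range(steps):
    psi *= (1.0 - 1j * A * dt)

assert abs(psi - cmath.exp(-1j * A * T)) < 1e-4
```

With the sources $J$ switched on, the extra terms in~(\ref{eq:master}) shift the qubit amplitudes linearly in $J$, weighted by the retarded propagator $D(t,s)$.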
The time evolution is obviously sensitive to the details of the quench, given by $x\sub{cl}(t)$ and $\delta x\sub{cl}(t)$. The initial wavefunction $\Phi$ is another input to the problem. Cluster formation dynamics might naturally involve the delicate physics of D0 bound state formation -- akin to Cooper pair formation in superconductivity. The dynamics of the marginal bound states in Matrix theory is a complicated strong coupling problem that remains an open issue, and we will not be able to tackle the full problem here. Instead, given the spirit of an effective approximate scaling analysis, we will next engage in a speculative discussion that is inspired by a recent toy model of black hole qubit evaporation due to Osuga and Page~\cite{Osuga:2016htn}. We will argue that the Matrix theory qubit evolution operator has the hallmarks of the toy model presented in~\cite{Osuga:2016htn}, under a series of assumptions. In~\cite{Osuga:2016htn}, a toy model was proposed whereby the black hole Hilbert space is augmented to a tensor product that involves the black hole qubit sector and two other sectors, one for in-falling and another for outgoing radiation modes just inside and just outside the event horizon. Each black hole qubit is paired with two qubits that are in the singlet Bell state. The latter is proposed to represent the vacuum for the radiation pair of modes that ensures smooth spacetime near the horizon. As a black hole qubit evaporates away, \cite{Osuga:2016htn} proposes a unitary evolution operator that essentially exchanges the black hole qubit with a qubit of outgoing radiation, leaving the black hole sector qubit entangled in a Bell state with the qubit of incoming radiation.
The result of this is that one qubit of information leaves the black hole (into the outgoing radiation sector), and a vacuum Bell state of two qubits (black hole and incoming radiation sectors) is left behind that is now to be interpreted as part of a bit of new empty spacetime created just outside a black hole as the latter shrinks in size. The key assumptions in this model are: interactions in the black hole qubit sector are non-local at the Planck scale, and a Bell vacuum state for black hole and incoming radiation qubits is tantamount to shrinking the black hole or equivalently expanding the vacuum space outside of it. The motivation for this toy example is to present a proof of concept model of black hole evaporation consistent with black hole complementarity. In our setup, we have an explicit quantum theory of gravity that dictates the qubit evolution operator. The partons of the matrix black holes are clusters of diagonal and off-diagonal matrix qubits, about $8\times (d-1)^2$ qubits in $d$ space dimensions. For $d=3$, that's $32$ qubits. We propose that each cluster of qubits, a $32$-qubit system, carries only $8$ qubits' worth of information -- corresponding to the $256$ supergravity states that can encode information; the remaining $24$ qubits are scaffolding that are in a highly entangled Bell-like vacuum state that is the result of cluster dynamics. These represent the halo around the event horizon. Naturally, the information is on the diagonal qubits, say in $\psi\sub{bh}$ in the specific setup we have been considering. That means that $\delta\psi$, $\delta \psi'$, and $\psi$ start off in a maximally entangled vacuum Bell state of $24$ qubits representing radiation or `membrane goo' near the horizon.
We then propose that the unitary evolution operator from~(\ref{eq:U}) and~(\ref{eq:qubitevolutionaction}) -- given a perturbation of the stochastic variables $x\sub{cl}(t)$ and $\delta x\sub{cl}(t)$ that describes the evaporation of the $x$ matrix entry -- results in having the qubit of information $\psi\sub{bh}$ transferred to $\psi$, which exits the Matrix black hole. The end result leaves behind a vacuum Bell state of qubits for $\delta\psi$, $\delta \psi'$, and $\psi\sub{bh}$ that is to be interpreted as the production of a bit of new spacetime outside the black hole. As a result, the matrix black hole shrinks in size from $N$ to $N-2$. Looking at the form of~(\ref{eq:qubitevolutionaction}), we see a structure of the right general form to potentially generate such an evolution of qubits. The analogue of the exchange operator from~\cite{Osuga:2016htn} in our language takes the form $\exp\left[i\,\alpha\,t \,(\psi^+-\psi\sub{bh}^+)(\psi^--\psi\sub{bh}^-)\right]$. Our effective Hamiltonian involves, in addition, the mediation of the light $\delta\psi$ modes in combinations of the form $\sim(\psi^+-\psi\sub{bh}^+)\delta \psi^-$ and its complex conjugate. Bell states with $24$ qubits are very difficult to study, and even to determine, in their own right. Added to this complication is the fact that~(\ref{eq:qubitevolutionaction}) is in general non-local due to the light off-diagonal modes. As a result, it is a very challenging task to determine the evolution of the qubits using the action~(\ref{eq:qubitevolutionaction}). To see this, note that the non-local couplings in~(\ref{eq:master}) have scale given by \begin{equation} \int_0^{\tau_0} d\tau'\, g\, \chi\sub{cl}(\tau') \simeq (\chi_i+\chi_f)\times g\, \tau_0 \sim (\chi_i+\chi_f)\ , \end{equation} where we used~(\ref{eq:chicl}). For $\chi\rightarrow \xi\sub{cl}$, this scale is $\gg 1$, given that $r\sub{bh}\gg \ell\sub{s}$; for $\chi\rightarrow \delta\xi\sub{cl}$, it is $\sim 1$, given that the cluster length scale is $\ell\sub{s}$.
In any scenario, the relevant dynamics is highly non-local. Noting some of the general similarities between the model of~\cite{Osuga:2016htn} and ours, we leave the analysis of the significantly more complex dynamics of our system for future work. \section{Discussion and Outlook}\label{sec:conclusion} The analysis in this work is a first attempt to develop a quantum gravity-centric, bottom-up picture of black hole event horizon physics. The results can be summarized in two main conclusions: \begin{enumerate} \item We have determined that near horizon dynamics is non-local in space {\em and} time at the Planck scale. The thermal degrees of freedom of the black hole are `cells' of around $d$ particles, for a black hole in $d$ space dimensions; each cell spans a size of order the Planck scale. One can think of each cell as carrying bits of information, encoded in the polarization states of the fermionic variables of Matrix theory -- or equivalently the polarization states of the supergravity multiplet on the dual side. The dynamics of black hole degrees of freedom is non-local and chaotic for short Planckian timescales, in a regime where the Yang-Mills theory is hovering just below strong coupling. At longer timescales and larger distances, the dynamics is effectively local both in time and space, while being strongly coupled. This is when and where an effective geometrical picture is possible. \item When describing evaporation, one is dealing with a chaotic system near the would-be event horizon with a characteristic timescale given by the Planck scale. To describe the evaporation via a top down approach, {\em i.e.} via Hawking's approach, one needs to average chaotic dynamics over super-Planckian timescales. Where a spacetime description is valid, one is necessarily left with a non-unitary effective picture for the evaporation arising from coarse graining over Planckian chaotic motion.
The suggestion is that the resolution of the black hole information loss paradox {\em cannot} lie in any framework that relies on a well-defined smooth spacetime geometry at the event horizon. This is a plausibility argument: We demonstrated that, through a rather simple stochastic model with a single input scale, one can understand how Hawking evaporation is inherently non-unitary -- naturally due to stochastic, chaotic UV physics. This simplest of settings, however, necessitates the breakdown of smooth geometry at the horizon. This observation, together with other independent evidence towards a breakdown of geometry at the horizon, constitutes strong evidence that one most likely needs to look for resolutions of the information paradox in models involving a new perspective on near horizon geometry. The geometrical description of black hole evaporation is inherently non-unitary as it arises from averaging over Planckian timescales that characterize the chaotic physics of the underlying degrees of freedom. \end{enumerate} A couple of footnotes are in order. First, we identify emergent geometry at the benchmark of strong effective Yang-Mills coupling $g\sub{eff}(\tau)^2$, as opposed to strong effective {\em 't Hooft} coupling $g\sub{eff}(\tau)^2 N$, which is the natural coupling for large $N$. The subtlety here is that the coupling that governs the microscopic event horizon dynamics is one that arises from the interaction of `order one' matrix entries on the diagonal. At most groups of order $d^2$ particles participate in the dynamics, hence the relevant effective coupling is not the $N$-dependent 't Hooft coupling. In describing the gravitational interaction of the {\em whole} black hole with entropy $S\sim N$, the relevant effective coupling is indeed the 't Hooft coupling; but microscopic event horizon dynamics does not involve the participation of all $N$ degrees of freedom.
The second footnote has to do with implicit connections to the issue of black hole complementarity~\cite{Susskind:1993if}-\cite{Susskind:2012rm}. In modeling the mean field potential for the degrees of freedom of the Matrix black hole, we note that there was no need to introduce a separate Planck scale near the horizon: the entire potential can be modeled using a single scale, the radius of the event horizon\footnote{Our model builds from the outset on the premise that Hawking evaporation is a single scale phenomenon, at least to leading order. This does not allow the model to capture new UV physics that might still exist and correct Hawking evaporation. Yet, the point is that such additional scales are not needed to understand why Hawking evaporation is inherently non-unitary.}. This is not surprising since we were modeling the physics in a manner to match against expectations on the dual supergravity side. We also noted that the qubit action we arrived at has some of the features of the qubit evolution toy model proposed in the work of Osuga and Page~\cite{Osuga:2016htn}. The latter consisted of a proof-of-concept system that circumvents the need for a firewall by positing non-local interactions at the horizon and an exchange mechanism of qubits within a direct product of three Hilbert spaces. All these ingredients of this toy model emerge naturally from our Matrix theory discussion. However, our action is more complicated than the one in~\cite{Osuga:2016htn}, and we leave a detailed analysis of the dynamics for future work. Nevertheless, these similarities between the two systems, ours and that of~\cite{Osuga:2016htn}, might be hints that a firewall is {\em not} needed at the event horizon after all, and black hole complementarity prevails. This is consistent with~\cite{Giddings:2011ks,Giddings:2012bm,Almheiri:2012rt} given the non-local nature of the interactions near the event horizon in Matrix theory -- at the level of D0 brane clusters.
There is, however, a significant conceptual challenge to this argument. Black hole complementarity is a statement about the perspective of an in-falling observer. This means that one needs to understand how a change of perspective between the observer at infinity and the one in-falling past the horizon is realized in the language of Matrix theory. Presumably, this involves a Matrix transformation in $U(N)$, since one expects that local spacetime coordinate invariance is embedded in the gauge group of the theory. This in turn requires a more precise map between the emergent geometry and metric on one hand, and the matrix degrees of freedom on the other. Without this critical missing ingredient, we cannot conclusively understand how the firewall paradox is addressed by our effective model. Related to this last point, we also note that our treatment explicitly chooses a frame for describing the black hole, presumably corresponding to the perspective of an outside observer. This creates a clear separation between the roles of diagonal and off-diagonal matrix entries. The residual gauge freedom is the group of permutations of the diagonal entries, a subgroup of $U(N)$. The more interesting transformations would mix diagonal and off-diagonal entries, and we believe these correspond in part to switching the perspective of the observer. Very little is known or understood about this part of the Matrix-supergravity duality, and it seems a full treatment of the quantum black hole would necessitate progress in this direction. This work is a step towards unravelling the microscopic details of black hole horizon physics within a theory of quantum gravity that is fully embedded in string/M-theory. The effective model approach opens up new directions for a range of possible investigations and extensions that can only add to our understanding of black holes and quantum gravity. We hope to report on some of these in future works. \newpage \section{Acknowledgments} This work was supported by NSF grant number PHY-0968726.
\section{Introduction} Let $1\leq d\leq 3$, $\Omega\subset\R^d$ be a bounded smooth domain and $T>0$ be fixed. We consider the following Euler-Poisson system with linear damping and confinement \begin{equation} \begin{aligned} \partial_t \vr + {{\mathrm{div}}_x}\, (\vr \vec{u}) & = 0, &&\text{ in } (0,T) \times \Omega, \\ \partial_t (\vr \vec{u}) + {{\mathrm{div}}_x}\, (\vr \vec{u} \otimes \vec{u}) & = - \gamma\vr \vec{u} - \vr \nabla_x \Phi_\vr - \vr x, &&\text{ in } (0,T) \times \Omega, \label{eq:euler_1} \\ -\Delta \Phi_\vr &= \vr - M_{\vr}, &&\text{ in } (0,T) \times \Omega, \end{aligned} \end{equation} where $\gamma>0$ is the friction coefficient and $M_\vr = \frac{1}{|\Omega|}\int_{\Omega}\rho(t,x)\dx{x}$ denotes the space average of the density. Note that, thanks to mass conservation, we have $M_\vr = \frac{1}{|\Omega|}\int_\Omega\rho_0(x)\dx{x}$, so that $M_\vr$ is constant in time. The system is subject to the impermeability and Neumann boundary conditions \begin{equation}\label{eq:bdryconditions} \vec{u}\cdot n = 0,\qquad \nabla_x\Phi_\rho\cdot n = 0,\qquad\text{ on } (0,T)\times\partial\Omega, \end{equation} where $n$ denotes the outward unit normal to $\partial\Omega$. \noindent To formulate the above equations in the measure-valued sense it is necessary to rewrite the nonlocal term $\vr \nabla_x \Phi_\vr$ into a divergence form. This can be done by observing that any classical solution of~\eqref{eq:euler_1} satisfies the following pointwise identity \begin{equation}\label{eq:new_form} \prt*{\vr-M_\vr}\nabla_x\Phi_\vr = \frac{1}{2}\nabla_x |\nabla_x \Phi_\vr |^2 - {{\mathrm{div}}_x}\,[\nabla_x \Phi_\vr \otimes \nabla_x \Phi_\vr].
\end{equation} \noindent Hence it is justified to consider the following form of the Euler-Poisson system instead of~\eqref{eq:euler_1} \begin{equation} \begin{aligned} \partial_t \vr + {{\mathrm{div}}_x}\, (\vr \vec{u}) & = 0, &\mbox{ in } (0,T) \times \Omega, \\ \partial_t (\vr \vec{u}) + {{\mathrm{div}}_x}\, (\vr \vec{u} \otimes \vec{u}) & = - \gamma \vr \vec{u} - \frac12\nabla_x |\nabla_x \Phi_\rho |^2 + {{\mathrm{div}}_x}\,[\nabla_x \Phi_\rho \otimes \nabla_x \Phi_\rho] - \vr x - M_\vr\nabla_x \Phi_\rho , &\mbox{ in } (0,T) \times \Omega, \label{eq:euler_2} \\ -\Delta \Phi_\rho &= \vr - M_\vr, & \mbox{ in } (0,T) \times \Omega. \end{aligned} \end{equation} Formally multiplying the momentum equation of~\eqref{eq:euler_1} by $\vec{u}$ and using the continuity equation several times, we arrive at the following identity $$ \partial_t\brk*{\frac12\vr|\vec{u}|^2 + \vr\Phi_\vr + \frac12\vr|x|^2} + {{\mathrm{div}}_x}\,\brk*{\prt*{\frac12\vr|\vec{u}|^2 + \vr\Phi_\vr + \frac12\vr|x|^2 }\vec{u}} = -\gamma\vr|\vec{u}|^2 + \vr\partial_t\Phi_\vr, $$ which, upon integration in space and using the Poisson equation, yields the following energy balance satisfied by smooth solutions of~\eqref{eq:euler_1} \begin{equation*} \frac{\dx{}}{\dx{t}}\mathcal{E}(\vr,\vec{u},\Phi_\rho)(t) = -\gamma\int_\Omega\rho|\vec{u}|^2\dx{x}, \end{equation*} where \[ \mathcal{E}(\vr,\vec{u}, \Phi_\rho) = \int_\Omega \frac{1}{2} \vr |\vec{u}|^2 + \frac12|\nabla_x\Phi_\vr|^2 + \frac12\vr|x|^2 \dx{x}, \] denotes the total energy associated with~\eqref{eq:euler_1}. \subsection{Preliminaries and notation} Our main results concern existence and conditional uniqueness of measure-valued solutions to the Euler-Poisson system~\eqref{eq:euler_2}, which will be defined in the following section. First, however, we introduce the necessary notation and recall the formalism of Young measures. \noindent Let $n,m\in\mathbb{N}$ and $X\subset\R^n, Y\subset\R^m$ be measurable sets.
We denote the spaces of signed and non-negative Radon measures in $Y$ by $\mathcal{M}(Y)$ and $\mathcal{M}^+(Y)$, respectively. By ${\mathcal{P}}(Y)$ we denote the space of probability measures. In each case we equip these spaces with the total variation norm \begin{equation*} \norm{\nu}_{\mathrm{TV}} := \int_Y\dx{|\nu|}. \end{equation*} \noindent By $L^\infty_{\mathrm{weak}}(X;\mathcal{M}(Y))$ we denote the space of weakly-$^*$ measurable essentially bounded maps $\vec{\nu}=(\nu_x):X\to\mathcal{M}(Y)$. This means that for each $\phi\in C_0(Y)$ the map \begin{equation*} x\mapsto \int_Y\phi(\lambda)\dx{\nu_x}(\lambda) \equiv \skp*{ \nu_x ; \phi} \end{equation*} is Lebesgue-measurable, where $C_0(Y)$ denotes the space of continuous real-valued functions on $Y$ which vanish at infinity. We note also that $L^\infty_{\mathrm{weak}}(X;\mathcal{M}(Y))$ is isometrically isomorphic to the dual space of the separable space $L^1(X;C_0(Y))$. This is a key point in associating a parameterised measure to a sequence of measurable functions. \medskip When considering the issue of existence of solutions for system~\eqref{eq:euler_2}, we shall follow the usual strategy of first constructing weak solutions to an approximate problem, and then passing to the limit in the approximation parameters. Naturally, we run into the usual problem that the a priori estimates we can derive are not strong enough to allow for passing to the pointwise limit in the nonlinear terms. We must therefore find means to characterise the possible oscillation and concentration effects in our approximate sequences. This is done by embedding the problem into the larger space of bounded Radon measures:\ given a sequence $(z^\ep)_{\ep>0}$ of measurable functions $z^\ep:X\to Y$, we associate to each $z^\ep$ the mapping $\nu^\ep:X\to\mathcal{P}(Y)$ defined by $\nu^\ep(x) = \delta_{\{z^\ep(x)\}}$. Then the sequence $\prt*{\nu^\ep}_{\ep>0}$ belongs to the closed unit ball of $L^\infty_{\mathrm{weak}}(X;\mathcal{M}^+(Y))$.
Therefore, by virtue of the Banach-Alaoglu Theorem, there exists a subsequence (which we shall never relabel) and a parameterised measure $\vec{\nu} = (\nu_x)_{x\in X}$ in $L^\infty_{\mathrm{weak}}(X;\mathcal{M}^+(Y))$ such that $\nu^{\ep}\weakstar\vec{\nu}$. In particular this implies that \begin{equation*} f(z^\ep) \weakstar \skp*{\nu_x ; f} \end{equation*} in $L^\infty(X)$ for every $f\in C_0(Y)$. Moreover, clearly $\nu_x\geq 0$ and $\norm{\nu_x}_{\mathrm{TV}}\leq1$ for a.e.\ $x\in X$. The parameterised measure $\vec{\nu}\in L^\infty_{\mathrm{weak}}(X;\mathcal{M}^+(Y))$ is called the Young measure associated to (or generated by) the (sub)sequence $(z^\ep)$. The above observations, at various levels of generality, are usually termed the Fundamental Theorem, see, \emph{e.g.},~\cite{Ball1989, Pedregal}. The Young measure captures the oscillatory behaviour of a sequence and allows one to characterise some nonlinear weak limits. When working in the above setting and defining a measure-valued solution to a given problem, one usually desires that the Young measure describing the solution be a family of probability measures. It can then be thought of as giving the probability distribution of values of the physical quantities represented by the dependent variables of the problem around a given point in the physical space. To guarantee that this is the case, some additional information on the underlying sequences is needed. More precisely, suppose that the sequence $(z^\ep)$ satisfies the following tightness condition: \begin{equation} \label{eq:tightness} \sup_{\ep>0}\int_X g(|z^\ep(x)|)\dx{x} < \infty, \end{equation} for some non-decreasing continuous function $g:[0,\infty)\to [0,\infty]$ with $\lim_{\alpha\to\infty}g(\alpha) = \infty$. Then almost every $\nu_x$ is a probability measure.
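As a concrete illustration of the Fundamental Theorem (a toy computation, independent of the Euler-Poisson system): the oscillating sequence $z^\ep(x) = \sin(x/\ep)$ on $X=(0,2\pi)$ generates the $x$-independent Young measure given by the arcsine distribution on $[-1,1]$; for instance, for $f(z)=z^2$ one gets $f(z^\ep)\weakstar 1/2$. A short numerical check:

```python
import math

# z_n(x) = sin(n x) on (0, 2*pi): for f(z) = z^2 the weak-* limit of f(z_n)
# is <nu; f> = 1/2, the second moment of the arcsine law on [-1, 1].
n = 500
m = 200_000          # quadrature points (illustrative resolution)

total = 0.0
for k in range(m):
    x = 2.0 * math.pi * (k + 0.5) / m      # midpoint rule on (0, 2*pi)
    total += math.sin(n * x) ** 2
weak_limit = total / m                      # = (1/2pi) * int f(z_n) dx

assert abs(weak_limit - 0.5) < 1e-3
```

Note that the sequence $(z^\ep)$ itself converges weakly to $0$, while $f(z^\ep)$ does not converge to $f(0)$: the Young measure is exactly the device recording this loss of compatibility between weak limits and nonlinearities.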
Furthermore, in this case one can show that if $f\in C(Y)$ is such that $(f(z^\ep))_{\ep>0}$ is weakly precompact in $L^1$, then \begin{equation*} f(z^\ep) \rightharpoonup \skp*{\nu_x ; f}\quad \text{in $L^1(X)$}. \end{equation*} Notice that every sequence $(z^\ep)$ that is uniformly bounded in some Lebesgue space $L^p$, $1\leq p<\infty$, satisfies condition~\eqref{eq:tightness}. Two observations of importance for our approach arise from the above discussion. Firstly, when working with the variables $\rho, \vec{u}, \nabla_x\Phi_\rho$, the estimates that we shall derive in Section~\ref{sec:existence} will depend on $\rho$. Therefore we will not be able to deduce any control over the velocity in the vacuum regions $\{\rho=0\}$. Thus, we shall be working with a parameterised family of positive measures, rather than probability measures. See also Remark~\ref{rem:othervariables}. Secondly, due to the lack of pressure, we cannot guarantee that the approximate quantities (in particular the density and the momentum) are uniformly integrable. Consequently, the nonlinearities that we deal with are neither $C_0$, nor are they weakly precompact in $L^1$. Therefore we have to introduce a way to capture possible concentration effects in the approximate sequences (which is done below), but also justify that the maps $x\mapsto\skp*{\nu_x;f}$ are integrable. This is formulated in Appendix~\ref{sec:appA}, together with an observation concerning projections of the Young measure onto individual variables. Now let $f$ be a continuous function on $Y$, such that \begin{equation*} \sup_{\ep>0}\int_X |f(z^\ep(x))|\dx{x} \leq C. \end{equation*} Then there is a measure $f_\infty\in\mathcal{M}(X)$, such that $f(z^\ep)$ converges (up to a subsequence) to $f_\infty$ weakly-$^*$ in $\mathcal{M}(X)$.
One can then consider the difference \begin{equation} \label{eq:concentrationdefect} m^f = f_\infty - \skp*{ \nu_x ; f } \end{equation} where $\nu_x$ is the Young measure generated by the sequence $(z^\ep)$. We shall call this measure the \emph{concentration measure} associated to the function $f$. It is also sometimes called the concentration defect. Observe that $m^f = 0$ if the family $\prt*{f(z^\ep)}$ is weakly precompact in $L^1(X)$. As a note of caution let us point out that the term "concentration" might be misleading: the measure $m^f$ need not be supported on a set of small measure, see for instance~\cite{BallMurat1989} for an example of a sequence whose concentrations are smeared out uniformly over the whole domain. \noindent It is often needed to compare two concentration measures coming from two nonlinear functions. It turns out that if one of the nonlinearities dominates the other, then the same is true of the associated concentration measures. More precisely, we have the following result. \begin{proposition} \label{prop:concentrationrelations} Let $\vec{\nu}=(\nu_x)$ be the Young measure generated by the sequence $(z^\ep)$. If two continuous functions $f_1$ and $f_2\geq0$ satisfy $|f_1(z)|\leq f_2(z)$ for every $z\in Y$ and if $f_2(z^\ep)$ is uniformly bounded in $L^1(X)$, then we have \begin{equation*} |m^{f_1}|(A) \leq m^{f_2}(A) \end{equation*} for every Borel set $A\subset X$. \end{proposition} \noindent The proof of this measure theoretic fact can be found in~\cite[Lemma~2.1]{FGSW2016} or~\cite[Proposition~3.3]{GKS2020}. \medskip In our case of interest we have $X = (0,T)\times\Omega$ and $Y=[0,\infty)\times\R^d\times\R^d$.
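For a toy example of a nonzero concentration measure (again independent of the system at hand), take $X=(0,1)$, $z^\ep = \ep^{-1}\mathbf{1}_{(0,\ep)}$ and $f(z)=z$: the generated Young measure is $\nu_x = \delta_0$ for a.e.\ $x$, so $\skp*{\nu_x;f} = 0$, while $f(z^\ep)\weakstar\delta_0$ in $\mathcal{M}(X)$, whence $m^f = \delta_0$. The following sketch confirms the pairing against a test function numerically:

```python
import math

# z_eps = (1/eps) * indicator(0, eps) on (0, 1), tested against phi = cos:
# the pairing int_0^1 z_eps * phi dx = sin(eps)/eps -> 1 = phi(0),
# while the Young-measure part <nu_x; id> vanishes identically.
def pairing(eps, m=1_000_000):
    total = 0.0
    for k in range(m):
        x = (k + 0.5) / m                  # midpoint rule on (0, 1)
        if x < eps:
            total += (1.0 / eps) * math.cos(x)
    return total / m

vals = [pairing(eps) for eps in (0.1, 0.01, 0.001)]
assert abs(vals[-1] - 1.0) < 1e-2          # pairing concentrates at phi(0) = 1
```

The mass of $z^\ep$ escapes to a single point while the pointwise limit is zero almost everywhere; the defect $m^f$ is exactly the measure recording where that mass ends up.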
Since we will generally perform the same algebraic operations on both the oscillatory and concentration parts due to given nonlinearities of the Euler-Poisson equations, we will use the shorthand notation \begin{equation*} \overline{f}(t,x) = \skp*{\nu_{t,x}(\lambda); f(\lambda)} + m^f(\dx{t},\dx{x}), \end{equation*} where $\lambda = (s,\vec{v},\vec{F}) \in [0,\infty)\times\R^d\times\R^d$. \noindent Let us make one final remark about concentration measures. As discussed above, we generally have $m^f\in\mathcal{M}(X)$. It is also not difficult to show that if $f$ is non-negative, then so is the corresponding concentration measure. When $X = (0,T)\times\Omega$, it is oftentimes also possible to guarantee a disintegration of $m^f$ with respect to the time and space variables. In Section~\ref{sec:existence} we prove existence of dissipative measure-valued solutions using a sequence of weak solutions to an approximate problem. Due to the energy inequality we will obtain bounds in $L^\infty(0,T;L^1(\Omega))$ for the approximate quantities. Then the corresponding concentration measures will admit a disintegration of the form \begin{equation*} m^f = \dx{m}^f_t(x)\otimes \dx{t}, \end{equation*} where the family $t\mapsto m^f_t$ is bounded and weakly-$^*$ measurable. \subsection{Main results and structure of the paper} We state below our main result concerning the Euler-Poisson system~\eqref{eq:euler_2}. It is a weak-strong type result comparing a measure-valued solution and a regular solution emanating from the same finite-energy initial data. By a "regular" or "strong" solution we shall mean a continuously differentiable triple $(r,\vec{U},\Phi_r)$ with bounded velocity, which satisfies either (and thus both) of the systems~\eqref{eq:euler_1} and~\eqref{eq:euler_2} pointwise.
The measure-valued solutions will be defined only in the following section, however, for now let us just say that they comprise a classical Young measure $\vec{\nu}$ together with a number of concentration-defect measures as described above, which satisfy equations~\eqref{eq:euler_2} in an averaged sense. Furthermore, they are required to be "dissipative" or "admissible" by exhibiting an energy inequality, see Section~\ref{sec:dmvs}. Vitally, the following theorem concerns precisely the class of measure-valued solutions for which we can show global existence. \begin{theorem}\label{thm:mv-stronguniqness} Let $1\leq d\leq 3$ and $\Omega\subset\R^d$ be a bounded smooth domain. Let \[ (r,\vec{U},\Phi_r)\in C^1([0,T)\times\bar{\Omega};(0,\infty))\times C^1([0,T)\times\bar{\Omega};\R^d)\times C^2([0,T)\times\bar{\Omega}) \] be a regular solution of~\eqref{eq:euler_2} with initial data $r(0,x)=r_0(x),\, \vec{U}(0,x)=\vec{U}_0(x)$ of finite energy and let $(\vec{\nu}, m^\rho, m^{\rho\vec{u}}, m^{\rho \vec{u}\otimes \vec{u}}, m^{|\nabla\Phi|^2}, m^{\nabla\Phi\otimes\nabla\Phi})$ be a dissipative measure-valued solution to the system~\eqref{eq:euler_2} with initial state \[ \nu_{0,x} = \delta_{\{r(0,x),\vec{U}(0,x),\nabla\Phi_r(0,x)\}},\;\;\;\; \text{for a.e.}\;\; x\in\Omega. \] Then \[ m^{\nabla\Phi\otimes\nabla\Phi}=0,\;\; m^{|\nabla\Phi|^2}=0, \] and we have the following identifications \begin{align*} \skp*{\nu_{t,x} ; \rho} + m^{\rho} &= r,\\ \skp*{\nu_{t,x} ; \rho\vec{u}} + m^{\rho\vec{u}} &= r\vec{U},\\ \skp*{\nu_{t,x} ; \rho\vec{u}\otimes\vec{u}} + m^{\rho\vec{u}\otimes\vec{u}} &= r\vec{U}\otimes\vec{U},\\ \skp*{\nu_{t,x} ; \rho|\vec{u}|^2} + m^{\rho|\vec{u}|^2} &= r|\vec{U}|^2, \end{align*} which hold for almost every $(t,x)\in(0,T)\times\Omega$.
Furthermore, the Young measure admits the decomposition \begin{equation*} \nu_{t,x} = \bar{\nu}_{t,x} \otimes \delta_{\{\nabla\Phi_r(t,x)\}}, \end{equation*} for some parameterised measure $\bar{\nu}\in L^\infty_{\mathrm{weak}}((0,T)\times\Omega;\mathcal{M}^+([0,\infty)\times\R^d))$; and in turn the restriction $\bar{\nu}\mres ((0,\infty)\times\R^d)$ decomposes into \begin{equation*} \bar{\nu}_{t,x} \mres ((0,\infty)\times\R^d) = \bar{\bar{\nu}}_{t,x} \otimes \delta_{\{\vec{U}(t,x)\}} \end{equation*} for some parameterised measure $\bar{\bar{\nu}}\in L^\infty_{\mathrm{weak}}((0,T)\times\Omega;\mathcal{M}^+(0,\infty))$. Finally, all the concentration measures $m^{\rho}$, $m^{\rho\vec{u}}$, $m^{\rho\vec{u}\otimes\vec{u}}$ and $m^{\rho|\vec{u}|^2}$ are absolutely continuous with respect to the Lebesgue measure. \end{theorem} The above theorem is not a usual weak-strong uniqueness result, where we would like to assert that the Young measure is necessarily a tensor product of Dirac masses concentrated at the strong solution and all the concentration measures vanish. Indeed, this was the case for most of the recent studies on weak-strong uniqueness for measure-valued solutions in hydrodynamics or more general conservation laws. Starting from the works of Brenier et al.~\cite{BrDeLeSz2011} on the incompressible Euler equations and Demoulini et al.~\cite{Demoulini2012} on polyconvex elastodynamics, and their somewhat surprising observation that measure-valued solutions can enjoy the weak-strong uniqueness property (under an admissibility condition), this property has been proved for a variety of other equations, see~\cite{GSW2015, GKS2020, FGSW2016, ChristoforouTzavaras2018}. Notably, the weak-strong uniqueness property for dissipative measure-valued solutions has recently been put to practical use in proving convergence of finite volume numerical schemes for the Euler and Navier-Stokes equations~\cite{Feireisl:2018sy, Feireisl:2020fj}.
The common feature of these results is that one can control each relevant term in the equations by a relative energy between the measure-valued and regular solution in such a way that the vanishing of the relative energy implies that the corresponding measure-valued and regular quantities have to coincide. In the situation considered in this paper we cannot quite follow the same algorithm:\ establishing a relative energy inequality does not immediately imply a weak-strong uniqueness result. This is due to a lack of appropriately strong information on the density and is a fundamental feature of the pressureless system~\eqref{eq:euler_2}. One might like to compare this situation with the isentropic compressible Euler equations with pressure $p(\rho)\sim\rho^\gamma$, see~\cite{GSW2015}. If the potential energy term of the corresponding relative energy functional, \[ E_{rel}^{mv} = \int_\Omega \frac12\overline{\rho|\vec{u}-\vec{U}|^2} + \overline{\frac{1}{\gamma-1}\rho^\gamma-\frac{\gamma}{\gamma-1}r^{\gamma-1}\rho+r^\gamma}\dx{x}, \] vanishes, then the convexity of the pressure immediately implies that the projection of the Young measure onto the first variable equals $\delta_{\{r(t,x)\}}$. This information can be used in the first term of the relative energy to conclude that the measure-valued solution decomposes into $\nu_{t,x} = \delta_{\{r(t,x)\}}\otimes\delta_{\{\sqrt{r(t,x)}\vec{U}(t,x)\}}$ almost everywhere. This reasoning cannot be mimicked in the pressureless case. Let us further mention that the above issue is also related to some classical results on compensated compactness in one-dimensional isentropic gas dynamics, see for instance~\cite{LPT94a, DiPerna1985, LPS1996}.
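To make the convexity mechanism fully explicit, note that the integrand of the potential part of $E_{rel}^{mv}$ is precisely the Bregman divergence of the strictly convex (for $\gamma>1$) pressure potential $P(\rho)=\frac{1}{\gamma-1}\rho^\gamma$, namely
\[
\frac{1}{\gamma-1}\rho^\gamma-\frac{\gamma}{\gamma-1}r^{\gamma-1}\rho+r^\gamma
= P(\rho)-P(r)-P'(r)(\rho-r)\;\geq\;0,
\]
with equality if and only if $\rho=r$. Hence the vanishing of its average against the Young measure forces the first marginal of the Young measure to equal $\delta_{\{r(t,x)\}}$; in the pressureless case no such strictly convex term is available, which is exactly the obstruction described above.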
Given $L^\infty$ initial data $\rho_0, u_0$ and a sequence $\rho_n, u_n$ of entropy weak solutions to the one-dimensional Euler equations, one can show the convergences \begin{equation*} \rho_n \rightarrow \rho,\quad u_n\rightarrow u \end{equation*} to another entropy solution in the almost-everywhere sense, where the second convergence holds only on the set $\{\rho(t,x)>0\}$. In terms of Young measures this means that on this set the Young measure reduces to Dirac masses, \emph{i.e.}, \begin{equation*} \nu_{t,x} = \delta_{\{\rho(t,x)\}}\otimes \delta_{\{u(t,x)\}},\quad\text{if }\;\rho(t,x)>0; \end{equation*} while on the vacuum set $\{\rho(t,x)=0\}$ one has \begin{equation*} \nu_{t,x} = \delta_{\{\rho(t,x)\}} \otimes \bar{\nu}_{t,x}, \end{equation*} for some probability measure $\bar{\nu}$ whose support is contained in a compact set $[-a,a]$ depending on the initial data through the so-called Riemann invariants and invariant sets (in the $L^\infty$ sense). Thus, in this case the invariant sets prevent concentrations and guarantee the tightness condition in the ``$u$-direction'' (which need not hold in general). Notice that the use of Riemann invariants is only available in one spatial dimension. The paper is structured as follows. In the next section we introduce the notion of dissipative measure-valued solutions to the Euler-Poisson system. In Section~3 we prove the existence of these solutions. Section~4 is devoted to the relative energy inequality comparing strong solutions with arbitrary dissipative measure-valued solutions. In Section~5 we prove our partial weak-strong uniqueness result for these solutions. Finally, Section~6 generalizes the analysis to the Euler-Poisson system with additional alignment terms, as in Cucker-Smale-type models.
\section{Dissipative measure-valued solutions to the Euler-Poisson equation}\label{sec:dmvs} In what follows we use dummy variables $(s, \vec{v}, \vec{F})\in[0,\infty)\times\R^d\times\R^d$ when integrating with respect to a parameterised measure $\vec{\nu}\in L^\infty_{\mathrm{weak}}( (0,T) \times \Omega; \mathcal{M}^+([0,\infty) \times \R^d \times \R^d))$. They should be thought of as representing $\rho$, $\vec{u}$ and $\nabla_x\Phi_\rho$, respectively. We say that $(\vec{\nu}, m^\rho, m^{\rho\vec{u}}, m^{\rho \vec{u}\otimes \vec{u}}, m^{|\nabla\Phi|^2}, m^{\nabla\Phi\otimes\nabla\Phi})$ is a dissipative measure-valued solution of the Euler-Poisson system \eqref{eq:euler_2} in $(0,T) \times \Omega$ with initial data $(\vec{\nu}_0, m^\rho_0, m^{\rho\vec{u}}_0, m^{\rho \vec{u}\otimes \vec{u}}_0, m^{|\nabla\Phi|^2}_0, m^{\nabla\Phi\otimes\nabla\Phi}_0)$ if \[ \vec{\nu} = \prt*{\nu_{t,x}}_{(t,x) \in (0,T)\times \Omega} \in L^\infty_{\mathrm{weak}}\prt*{(0,T) \times \Omega; \mathcal{M}^+ \prt*{[0,\infty) \times \R^d \times \R^d}}\;\;\text{is a parameterised measure}, \] \[ m^\rho, m^{|\nabla\Phi|^2} \in L^\infty(0,T;\mathcal{M}^+(\overline\Omega)),\;\; m^{\rho\vec{u}} \in L^\infty(0,T;\mathcal{M}(\overline\Omega)^d),\;\; m^{\rho \vec{u}\otimes \vec{u}}, m^{\nabla\Phi \otimes \nabla\Phi} \in L^\infty(0,T;\mathcal{M}(\overline{\Omega})^{d\times d}), \] and the following hold: \begin{itemize}[label={}] \item {\emph{Continuity equation}}: \begin{equation}\label{eq:mvcontinuity} \begin{split} \int_{\Omega} & \skp*{ \nu_{\tau,x} ; s } \psi (\tau, x ) \dx{x} - \int_{\Omega} \skp*{ \nu_{0,x}; s } \psi(0, x) \dx{x} + \int_\Omega\psi(\tau,x)\dx{m}^{\rho}_\tau(x) - \int_\Omega\psi(0,x)\dx{m}^{\rho}_0(x) \\[0.5em] & = \int_0^\tau\!\!
\int_{\Omega} \brk*{ \skp*{ \nu_{t,x} ; s } \partial_t \psi + \skp*{ \nu_{t,x} ; s \vec{v} } \cdot \nabla_x \psi} \dx{x}\dx{t} + \int_0^\tau\!\!\int_\Omega\partial_t \psi\dx{m}^\rho(x)\dx{t} + \int_0^\tau\!\!\int_\Omega\nabla_x\psi\cdot\dx{m}^{\rho\vec{u}}(x)\dx{t}, \end{split} \end{equation} for a.e.\ $\tau \in (0,T)$ and every $\psi \in C^1 ([0,T] \times \overline{\Omega})$; \item {\emph{Momentum equation}}: \begin{equation}\label{eq:mvmomentum} \begin{split} \int_{\Omega} & \skp*{ \nu_{\tau,x} ; s \vec{v} } \cdot \phi (\tau, x) \dx{x} - \int_{\Omega} \skp*{ \nu_{0,x}; s \vec{v} } \cdot \phi (0, x) \dx{x} + \int_\Omega\phi(\tau,x)\cdot\dx{m}^{\rho\vec{u}}_\tau(x) - \int_\Omega\phi(0,x)\cdot\dx{m}^{\rho\vec{u}}_0(x) \\[0.5em] & = \int_0^\tau\!\! \int_\Omega \skp*{ \nu_{t,x}; s \vec{v} } \cdot \partial_t \phi \dx{x}\dx{t} + \int_0^\tau\!\! \int_\Omega \skp*{ \nu_{t,x}; s \vec{v}\otimes\vec{v} } : \nabla_x \phi \dx{x}\dx{t} - \gamma\int_0^\tau\!\! \int_\Omega\skp*{ \nu_{t,x}; s \vec{v} } \cdot \phi\dx{x}\dx{t} \\[0.5em] &\hspace{0.2cm} - \int_0^\tau\!\! \int_\Omega\skp*{ \nu_{t,x}; s } x\cdot\phi\dx{x}\dx{t} + \int_0^\tau\!\! 
\int_\Omega\frac12\skp*{\nu_{t,x}; |\vec{F}|^2}{{\mathrm{div}}_x}\,\phi \dx{x}\dx{t} - \int_0^\tau\!\!\int_\Omega \skp*{ \nu_{t,x};\vec{F}\otimes\vec{F}}:\nabla_x\phi \dx{x}\dx{t} \\[0.5em] &\hspace{0.2cm} - M_\rho\int_0^\tau\!\!\int_\Omega\skp*{\nu_{t,x};\vec{F}}\cdot\phi \dx{x} \dx{t} +\int_0^\tau\!\!\int_\Omega\partial_t \phi \cdot \dx{m}^{\rho\vec{u}}(x)\dx{t} + \int_0^\tau\!\!\int_\Omega\nabla_x \phi : \dx{m}^{\rho \vec{u}\otimes \vec{u}}(x)\dx{t} \\[0.5em] &\hspace{0.2cm} - \gamma\int_0^\tau\!\!\int_\Omega\phi \cdot \dx{m}^{\rho\vec{u}}(x)\dx{t} -\int_0^\tau\!\!\int_\Omega \phi\cdot x \dx{m}^{\rho}(x)\dx{t} +\int_0^\tau\!\!\int_\Omega{{\mathrm{div}}_x}\,\phi\dx{m}^{\abs*{\nabla\Phi}^2}(x)\dx{t} \\[0.5em] &\hspace{0.2cm} -\int_0^\tau\!\!\int_\Omega\nabla\phi : \dx{m}^{\nabla\Phi\otimes\nabla\Phi}(x)\dx{t}, \end{split} \end{equation} for a.e.\ $\tau \in (0,T)$ and every $\phi \in C^1([0,T] \times \overline{\Omega}; \R^d)$; \item \emph{Poisson equation}: For a.e.\ $\tau \in (0,T)$ and every $\vartheta \in C^1 (\overline{\Omega})$ \begin{equation}\label{eq:mvPoisson} \int_\Omega \skp*{ \nu_{\tau,x}; \vec{F}}\cdot\nabla_x\vartheta\dx{x} = \int_\Omega\skp*{\nu_{\tau,x}; s}\vartheta \dx{x} + \int_\Omega\vartheta(x)\dx{m}^{\rho}_\tau(x) - M_\vr\int_\Omega\vartheta(x)\dx{x}; \end{equation} and furthermore there exists a function $\Psi_\rho\in L^2(0,T; W^{1,2}(\Omega))$ such that \begin{equation}\label{eq:gradientmeasure} \nabla_x\Psi_\rho(t,x) = \int \vec{F} \dx{\nu_{t,x}}(s,\vec{v},\vec{F}), \end{equation} almost everywhere.
\item {\emph{Energy inequality}}: There is a non-negative measure $m^{\rho|\vec{u}|^2}\in L^\infty(0,T;\mathcal{M}^+(\overline{\Omega}))$ such that \begin{equation}\label{eq:energy_inequality} \begin{split} \int_\Omega& \skp*{\nu_{\tau,x}; \frac{1}{2} s | \vec{v}|^2} \dx{x} + \frac12 m_\tau^{\rho|\vec{u}|^2}(\Omega) + \int_\Omega\skp*{ \nu_{\tau,x}; \frac12|\vec{F}|^2} \dx{x} + \frac12m_\tau^{|\nabla\Phi|^2}(\Omega) + \int_\Omega\skp*{ \nu_{\tau,x}; s} \frac12|x|^2 \dx{x} \\[0.5em] &\qquad+\int_\Omega\frac12|x|^2\dx{m}^{\rho}_\tau(x) \\[0.5em] &\leq \int_\Omega\skp*{ \nu_{0,x}; \frac{1}{2} s | \vec{v}|^2 } \dx{x} + \frac12 m_0^{\rho|\vec{u}|^2}(\Omega) + \int_\Omega\skp*{ \nu_{0,x}; \frac12|\vec{F}|^2} \dx{x} + \frac12m_0^{|\nabla\Phi|^2}(\Omega) + \int_\Omega\skp*{ \nu_{0,x}; s} \frac12|x|^2 \dx{x} \\[0.5em] &\qquad+\int_\Omega\frac12|x|^2\dx{m}^{\rho}_0(x) - \gamma\int_0^\tau \brk*{\int_{\Omega} \skp*{ \nu_{t,x}; s |\vec{v}|^2 } \dx{x} + m^{\rho|\vec{u}|^2}(\Omega)}\dx{t}, \end{split} \end{equation} for a.e.\ $\tau \in (0,T)$. \end{itemize} \begin{remark}\hfill \begin{enumerate} \item We shall usually think of a measure-valued solution as being a weak limit of some family of weak solutions (either to an approximate problem or to~\eqref{eq:euler_2} itself). Then all the concentration-defect measures are easily described by~\eqref{eq:concentrationdefect} and can be related to the formalism of generalised Young measures via so-called recession functions, see~\cite{AlBo}. Moreover, Proposition~\ref{prop:concentrationrelations} gives natural relationships between these measures, which are inherited from the corresponding inequalities at the approximate level. In particular all the concentration-defects appearing in the governing equations are dominated by those appearing in the energy inequality. Notice however that there might exist measure-valued solutions which do not arise as limits of weak solutions, see~\cite{Chiodaroli:2017xy, GallenmullerWiedemann2021}.
\item Let us note that the appearance of a non-trivial concentration measure in the density cannot be excluded, because of the low integrability of $\rho$. This is in contrast to the situation when a pressure term is present in the system. Then typically $p(\rho)\sim \rho^\gamma$, $\gamma>1$, implying uniform integrability and excluding the possibility of concentrations in the density. \item Notice that one cannot exclude concentrations in the term $\int_\Omega\frac12|\nabla_x\Phi_\rho|^2\dx{x}$ in the energy, since the Poisson equation $-\Delta\Phi_\rho=\rho\in\mathcal{M}_{+}$ only guarantees $\nabla_x\Phi\in L^p$ with $\frac{d}{d-1}<p\leq2$. \item Let us recall the shorthand notation $$ \overline{f}(t,x) = \skp*{ \nu_{t,x}(\lambda); f(\lambda) } + m^f(\dx{t},\dx{x}). $$ Thus, equation~\eqref{eq:mvcontinuity} can be written concisely as \begin{equation*} \int_{\Omega} \overline{\rho}(\tau,x)\psi(\tau,x)\dx{x} - \int_{\Omega} \overline{\rho}(0,x)\psi(0, x) \dx{x} = \int_0^\tau \int_{\Omega} \brk*{ \overline{\rho}\partial_t \psi + \overline{\rho\vec{u}}\cdot\nabla_x \psi} \dx{x}\dx{t}, \end{equation*} and similarly for the other equations. \item The constant $M_\vr$ in equations~\eqref{eq:mvmomentum} and~\eqref{eq:mvPoisson} should more precisely be written as $M_{\bar{\vr}}$ to account for possible concentrations in the initial data, in which case \[ M_{\bar{\vr}} = \int_\Omega\skp*{\nu_0;s}\dx{x} + m_0^\vr(\Omega). \] However, in the proof of existence, as well as in the subsequent analysis, we will specialise to the case of initial data without concentrations. Let us also observe that, testing the continuity equation~\eqref{eq:mvcontinuity} with the test function $\psi\equiv 1$, we obtain \[ \int_\Omega \skp*{\nu_{\tau,x};s}\dx{x} + m_\tau^{\vr}(\Omega) = \int_\Omega\skp*{\nu_0;s}\dx{x} + m_0^\vr(\Omega), \] for any time $\tau>0$. Therefore $M_{\bar{\vr}}$ is a constant of motion also for the measure-valued solutions.
\item The measure-valued solutions generated by an approximation as in Section~\ref{sec:existence} enjoy property~\eqref{eq:gradientmeasure}, since the Young measure is generated (in its third coordinate) by a sequence of gradients; the same would remain true for other conceivable approximations. This fact will be used to close the relative energy estimate. Note however that condition~\eqref{eq:gradientmeasure} itself does not imply that the projection $\pi_3\nu$ of the Young measure onto the third coordinate is generated by a sequence of gradients. \end{enumerate} \end{remark} \begin{remark}[Relation between two formulations] The above definition of a measure-valued solution is formulated for the system~\eqref{eq:euler_2}. In particular we take advantage of formula~\eqref{eq:new_form} to obtain a conservative form of the nonlocal term. In order to relate this definition to the initial formulation~\eqref{eq:euler_1}, we need to make sense of the term $\rho\nabla_x\Phi_\vr$. This can be done by defining a distribution $\stackrel{\photon}{\rho\nabla_x\Phi_\rho}$ via the relation \begin{equation*} \brk*{\stackrel{\photon}{\rho\nabla_x\Phi_\rho};\phi} = \int_0^\tau\int_\Omega\frac12\overline{|\nabla_x\Phi_\rho|^2}{{\mathrm{div}}_x}\,\phi \dx{x}\dx{t} - \int_0^\tau\int_\Omega\overline{\nabla_x\Phi_\rho\otimes\nabla_x\Phi_\rho}:\nabla_x\phi\dx{x}\dx{t}, \end{equation*} for any $\phi\in C^1(\bar\Omega)$, where $\brk*{\cdot;\cdot}$ denotes the duality pairing between $(C^1)^{*}$ and $C^{1}$. It can be seen that the approximate sequences constructed in the next section give rise to such an object. \end{remark} \section{Existence of dissipative measure-valued solutions} \label{sec:existence} In this section we prove the existence of the dissipative measure-valued solutions defined in the previous section. To this end we construct a two-step approximation.
Firstly, we define an approximate problem to~\eqref{eq:euler_1}, where we introduce a sixth order differential operator into the momentum equation. Secondly, we prove existence of a finite-dimensional approximation, and pass to the limit to obtain existence of weak solutions for the approximate problem. Finally, we pass to the limit with the coefficient of the highest order term to show that a sequence of such solutions generates a dissipative measure-valued solution to~\eqref{eq:euler_1}. We only consider the case when the initial data is regular, \emph{i.e.}, the initial concentration measures are all zero and $\nu_0$ is a Dirac measure concentrated at some given measurable initial states. The main result of this section is the following. \begin{theorem}\label{thm:existenceDMVS} Let $1\leq d\leq 3$. Suppose $\Omega\subset\R^d$ is a bounded smooth domain. If the initial data $(\vr_0, \vec{u}_0)$ is such that $\rho_0, \rho_0\vec{u}_0\in L^1(\Omega)$ and has finite energy, then there exists a dissipative measure-valued solution with initial data \begin{equation*} \nu_{0,x} = \delta_{\{\rho_0(x), \vec{u}_0(x), \nabla_x\Phi_{\rho}(0,x)\}}\;\;\;\text{ for a.e.\ }x\in\Omega. \end{equation*} \end{theorem} \noindent The remainder of this section is dedicated to the proof of the above theorem. \medskip \noindent\emph{Approximate problem.} Let $\ep>0$ and let $(\!(\cdot;\cdot)\!) = (\cdot;\cdot)_{W_0^{3,2}(\Omega)}$ denote the standard scalar product in $W_0^{3,2}(\Omega)$. Let $\rho_{0,\ep}$ and $\vec{u}_{0,\ep}$ denote smooth functions obtained by standard mollification of $\rho_0$ and $\vec{u}_0$ at scale $\ep$. 
We say that a triple $(\rho^\ep, \vec{u}^\ep, \Phi_{\rho^\ep})$ is a weak solution to the approximate Euler-Poisson problem with initial data $\rho_0^\ep = \rho_{0,\ep}+\ep$ and $\vec{u}_0^\ep =\vec{u}_{0,\ep}$ if, for all $\tau\in(0,T]$, \begin{equation}\label{eq:approxcontinuity} \int_0^\tau\int_\Omega \rho^\ep \partial_t\psi + \rho^\ep \vec{u}^\ep\cdot\nabla_x\psi\dx{x}\dx{t} = \int_\Omega\rho^\ep(\tau,x)\psi(\tau,x)\dx{x} - \int_\Omega\rho_0^\ep\psi(0,x)\dx{x}, \end{equation} for all $\psi\in C^1([0,T]\times\overline{\Omega})$, \begin{equation}\label{eq:approxmomentum} \begin{split} \int_0^\tau\int_\Omega\rho^\ep \vec{u}^\ep\cdot\partial_t\phi &+ \brk*{\rho^\ep \vec{u}^\ep\otimes \vec{u}^\ep - \nabla_x\Phi_{\rho^\ep}\otimes\nabla_x\Phi_{\rho^\ep}+\frac12|\nabla_x\Phi_{\rho^\ep}|^2\Id}:\nabla_x\phi\dx{x}\dx{t} -\gamma \int_0^\tau\int_\Omega \rho^\ep\vec{u}^\ep\cdot\phi\dx{x}\dx{t} \\[0.5em] &\qquad-\int_0^\tau\int_\Omega \rho^\ep x\cdot\phi\dx{x}\dx{t} - M_{\vr^\ep} \int_0^\tau\int_\Omega \nabla_x\Phi_{\vr^\ep}\cdot\phi \dx{x}\dx{t} \\[0.5em] &= \ep\int_0^\tau(\!(\vec{u}^\ep;\phi)\!)\dx{t} + \int_\Omega(\rho^\ep\vec{u}^\ep)(\tau,x)\cdot\phi(\tau,x)\dx{x} - \int_\Omega \rho_0^\ep \vec{u}_0^\ep \cdot\phi(0,x)\dx{x}, \end{split} \end{equation} for all $\phi\in C^1([0,T]\times\overline{\Omega};\R^d)$, and \begin{equation}\label{eq:approxpoisson} \int_\Omega\nabla_x\Phi_{\rho^\ep}(\tau,x)\cdot\nabla_x\vartheta(x)\dx{x} = \int_\Omega\rho^\ep(\tau,x)\vartheta(x)\dx{x} -M_{\vr^\ep} \int_\Omega \vartheta(x) \dx{x}, \end{equation} for all $\vartheta\in C^1(\overline{\Omega})$, where $M_{\vr^\ep} = \int_\Omega \rho^\ep(t,x) \dx{x}$ is constant in time for each $\ep>0$. Let us note that, due to the strong $L^1$ convergence of $\rho_0^\ep$ to $\rho_0$, we obtain convergence of $M_{\rho^\ep}$ towards $M_{\rho}$.
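Let us also record the elementary mass balance behind the last statement: taking $\psi\equiv1$ in~\eqref{eq:approxcontinuity} gives
\[
\int_\Omega\rho^\ep(\tau,x)\dx{x} = \int_\Omega\rho_0^\ep(x)\dx{x} = M_{\vr^\ep},\qquad \tau\in(0,T],
\]
which is precisely the sense in which $M_{\vr^\ep}$ is constant in time at the approximate level.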
\noindent Let us remark that in the above formulation of the approximate problem we ignore the precise form of the sixth order elliptic operator as well as the corresponding boundary conditions. Both disappear when we pass to the limit $\ep\to0$, which is the sole purpose of this section. We will now define Galerkin approximate solutions to the approximate Euler-Poisson problem. For convenience we drop the index $\ep$. \noindent Let $\{\omega_i\}$ be an orthonormal basis of $W_0^{3,2}(\Omega)$ solving the eigenvalue problem \[ (\!(\omega_i;\cdot)\!) = \lambda_i(\omega_i;\cdot)_{L^2(\Omega)} \] in $W_0^{3,2}(\Omega)$, \emph{cf.}\;\cite[Section~6.4]{JosefBook}. Then $\{\omega_i\}$ is an orthonormal basis for $L^2(\Omega)$ and, if the boundary of $\Omega$ is smooth enough, all $\omega_i$ can be assumed smooth. Now put $\vec{u}^n(t,x) = \sum\limits_{i=1}^{n}c^n_i(t)\omega_i(x)$. The triple $(\rho^n, \vec{u}^n, \Phi_{\rho^n})$ is called a solution to the Galerkin approximation for the approximate Euler-Poisson problem~\eqref{eq:approxcontinuity}--\eqref{eq:approxpoisson} in $(0,T)\times\Omega$ with initial data $\rho^n_0 = \rho_0$ and $\vec{u}_0^n = \sum\limits_{i=1}^{n}(\vec{u}_0;\omega_i)_{L^2(\Omega)}\omega_i$, if \begin{equation}\label{eq:galerkinmass} \partial_t\rho^n + {{\mathrm{div}}_x}\,(\rho^n \vec{u}^n) = 0 \end{equation} \begin{equation}\label{eq:galerkinmomentum} \begin{split} \int_\Omega&\brk*{\rho^n \partial_t \vec{u}^n + \rho^n\nabla_x \vec{u}^n\vec{u}^n - {{\mathrm{div}}_x}\,\brk*{ \nabla_x\Phi_{\rho^n}\otimes\nabla_x\Phi_{\rho^n}-\frac12|\nabla_x\Phi_{\rho^n}|^2\Id} + \gamma\rho^n \vec{u}^n + \rho^n x}\cdot\omega_i\dx{x} \\[0.5em] &\hspace{2.5cm} + M_{\vr^n}\int_\Omega \nabla_x\Phi_{\vr^n}\cdot\omega_i \dx{x} + \ep(\!(\vec{u}^n;\omega_i)\!)
= 0, \end{split} \end{equation} for $i=1,\dots,n$, and \begin{equation}\label{eq:galerkinpoisson} -\Delta\Phi_{\rho^n} = \rho^n - M_{\vr^n}, \end{equation} where $M_{\vr^n} = \int_\Omega\rho^n(t,x)\dx{x}$. \medskip \noindent\emph{Existence of Galerkin approximations.} Observe that the initial data for the Galerkin problem satisfies in particular $\rho_0^n\in C^1(\overline{\Omega})$, $\rho^n_0>0$ and $\vec{u}^n_0\in W_0^{3,2}(\Omega)$. For data with this regularity, the existence of a solution $(\rho^n, \vec{u}^n)$ with \[ \rho^n\in C^1([0,T)\times\overline{\Omega}),\;\;\; \vec{u}^n\in C^1([0,T); W_0^{3,2}(\Omega)) \] to the Galerkin system is obtained via a combination of the method of characteristics (to find the unique $\rho^n$) and the Schauder fixed point theorem. Since the proof follows in essentially the same steps as in~\cite{JosefBook} (see also~\cite{Gw2005}), we skip the details here. We just note that solving the continuity equation~\eqref{eq:galerkinmass} with the method of characteristics yields the following representation \begin{equation}\label{eq:galerkindensity} \rho^n(t,x) = \rho_0(X^n(0;t,x))\exp{\prt*{-\int_0^t{{\mathrm{div}}_x}\, \vec{u}^n(\tau,X^n(\tau;t,x))\dx{\tau}}}, \end{equation} where $X^n$ denotes the forward flow associated with the velocity $\vec{u}^n$. In particular $\rho^n\geq \rho^*$ for some constant $\rho^*>0$ depending on $\ep^{-1}$. \medskip \noindent\emph{Energy estimates for the Galerkin approximations.} Upon multiplying each equation of~\eqref{eq:galerkinmomentum} with the coefficient $c_i^n(t) = (\vec{u}^n(t,\cdot);\omega_i(\cdot))_{L^2(\Omega)}$ and summing over $i=1,\dots,n$, we get, for each time, \begin{equation*} \begin{aligned} \frac{\dx{}}{\dx{t}}\int_\Omega\frac12\rho^n|\vec{u}^n|^2\dx{x} - \int_\Omega&{{\mathrm{div}}_x}\,\brk*{\nabla_x\Phi_{\rho^n}\otimes\nabla_x\Phi_{\rho^n}-\frac12|\nabla_x\Phi_{\rho^n}|^2\Id} \cdot \vec{u}^n\dx{x} + \ep(\!(\vec{u}^n;\vec{u}^n)\!)
\\[0.5em] &= -\gamma\int_\Omega \rho^n \abs*{\vec{u}^n}^2 \dx{x} - \int_\Omega \rho^n x \cdot \vec{u}^n\dx{x} - M_{\vr^n}\int_\Omega \nabla_x\Phi_{\vr^n}\cdot\vec{u}^n \dx{x}, \end{aligned} \end{equation*} where we used~\eqref{eq:galerkinmass} to write \begin{equation*} \int_\Omega\rho^n \vec{u}^n\cdot\nabla_x \vec{u}^n \vec{u}^n\dx{x} = \int_\Omega\partial_t\rho^n\ \frac12|\vec{u}^n|^2\dx{x}. \end{equation*} Moreover, using~\eqref{eq:galerkinmass},~\eqref{eq:new_form} and the Poisson equation we have \begin{equation*} \begin{aligned} \int_\Omega{{\mathrm{div}}_x}\,\brk*{\nabla_x\Phi_{\rho^n}\otimes\nabla_x\Phi_{\rho^n}-\frac12|\nabla_x\Phi_{\rho^n}|^2\Id}\cdot \vec{u}^n\dx{x} - M_{\vr^n}\int_\Omega \nabla_x\Phi_{\vr^n}\cdot\vec{u}^n \dx{x}&= -\int_\Omega\rho^n\nabla_x\Phi_{\rho^n}\cdot \vec{u}^n\dx{x} \\[0.5em] &= -\frac{\dx{}}{\dx{t}}\int_\Omega\frac12|\nabla_x\Phi_{\rho^n}|^2\dx{x}. \end{aligned} \end{equation*} Finally, we write \begin{equation*} -\int_\Omega \rho^n x \cdot \vec{u}^n \dx{x} = -\int_\Omega \rho^n\vec{u}^n \cdot \nabla_x\prt*{\frac12\abs*{x}^2} \dx{x} = -\frac{\dx{}}{\dx{t}}\int_\Omega\frac12\rho^n|x|^2\dx{x}. \end{equation*} We therefore obtain the following energy estimate \begin{equation}\label{eq:galerkinenergy} \begin{split} \int_\Omega&\left[\frac12\rho^n |\vec{u}^n|^2 + \frac12|\nabla_x\Phi_{\rho^n}|^2 + \frac12\rho^n|x|^2\right]\prt*{\tau,x}\dx{x} + \ep\int_0^\tau(\!(\vec{u}^n;\vec{u}^n)\!)\dx{t} \\[0.5em] &\leq \int_\Omega\frac12\rho_0|\vec{u}_0|^2 + \frac12|\nabla_x\Phi_{\rho^n}|^2(0,x) + \frac12\rho_0|x|^2\dx{x} - \gamma \int_0^\tau\int_\Omega\rho^n|\vec{u}^n|^2\dx{x}\dx{t}.
\end{split} \end{equation} Now using~\eqref{eq:galerkindensity} and~\eqref{eq:galerkinenergy} we can deduce the uniform (in $n$) estimate \begin{equation}\label{eq:galerkindensityestimate} \norm*{\rho^n}_{L^\infty((0,T)\times\Omega)} + \int_0^T\norm*{\partial_t\rho^n}^2_{L^2(\Omega)}\dx{t} + \int_0^T\norm*{\nabla_x\rho^n}^2_{L^2(\Omega)}\dx{t} \leq C\prt*{\ep^{-1}}, \end{equation} while using~\eqref{eq:galerkinmomentum} multiplied by $\partial_t c_i^n(t)$,~\eqref{eq:galerkinenergy} and~\eqref{eq:galerkindensityestimate} we obtain \begin{equation}\label{eq:galerkinvelocityestimate} \int_0^T\norm*{\partial_t \vec{u}^n}^2_{L^2(\Omega)}\dx{t} + \ep\norm*{\vec{u}^n}_{L^\infty\prt*{0,T;W_0^{3,2}(\Omega)}} \leq C\prt*{\ep^{-1}}, \end{equation} see~\cite[Lemma~5.45]{JosefBook} for details. \medskip \noindent\emph{Existence of solutions to the approximate Euler-Poisson problem.} Having~\eqref{eq:galerkinenergy}--\eqref{eq:galerkinvelocityestimate} and the Poisson equation~\eqref{eq:galerkinpoisson} we can, for each $\ep>0$, deduce, up to extracting a subsequence, the convergences \begin{equation*} \begin{aligned} \rho^n &\weakstar \rho \;\;\; &&\text{ in } L^\infty((0,T)\times\Omega),\\ \partial_t\rho^n &\rightharpoonup \partial_t\rho \;\;\; &&\text{ in } L^2((0,T)\times\Omega),\\ \partial_t \vec{u}^n &\rightharpoonup \partial_t \vec{u} \;\;\; &&\text{ in } L^2((0,T)\times\Omega),\\ \vec{u}^n &\rightharpoonup \vec{u} \;\;\; &&\text{ in } L^2((0,T);W_0^{3,2}(\Omega)),\\ \nabla_x\Phi_{\rho^n}&\to\nabla_x\Phi_{\rho}\;\;\; &&\text{ in } L^2((0,T)\times\Omega). \end{aligned} \end{equation*} By the Aubin-Lions Lemma we then have \begin{equation*} \begin{aligned} \rho^n &\to \rho \;\;\; &&\text{ in } L^2((0,T)\times\Omega),\\ \vec{u}^n &\to \vec{u} \;\;\; &&\text{ in } L^2((0,T);W_0^{1,2}(\Omega)). 
\end{aligned} \end{equation*} Combining the above weak and strong convergences, we can pass to the limit in each integral in the formulation of the Galerkin problem, thus showing existence of a solution $(\rho^\ep,\vec{u}^\ep,\Phi_\rho^\ep)$ to the approximate Euler-Poisson problem. \medskip \noindent\emph{Existence of dissipative measure-valued solutions.} Since the approximate solution $(\rho^\ep, \vec{u}^\ep, \Phi_\rho^\ep)$ is the limit of the Galerkin approximations, we have the energy bound \begin{equation}\label{eq:approximateenergy} \begin{split} \int_\Omega&\left[\frac12\rho^\ep |\vec{u}^\ep|^2 + \frac12|\nabla_x\Phi_{\rho^\ep}|^2 + \frac12\rho^\ep|x|^2\right]\prt{\tau,x}\dx{x} + \gamma \int_0^\tau\int_\Omega\rho^\ep|\vec{u}^\ep|^2\dx{x}\dx{t} \\[0.5em] &\quad\leq \int_\Omega\left[\frac12\rho_0|\vec{u}_0|^2 + \frac12|\nabla_x\Phi_{\rho^\ep}|^2(0,x) + \frac12\rho_0|x|^2\right]\dx{x} . \end{split} \end{equation} Moreover, mass conservation implies that $\rho^\ep$ is uniformly bounded in $L^\infty(0,T;L^1(\Omega))$. Then, \begin{equation*} \int_\Omega|\rho^\ep \vec{u}^\ep|\dx{x}\leq\frac12\int_\Omega\rho^\ep\dx{x} + \frac12\int_\Omega\rho^\ep|\vec{u}^\ep|^2\dx{x}\leq C. \end{equation*} Therefore the sequence of approximate momenta $\{\rho^\ep\vec{u}^\ep\}$ is uniformly bounded in $L^\infty(0,T;L^1(\Omega))$, while $\{\nabla_x\Phi_{\rho^\ep}\}$ is bounded in $L^\infty(0,T;L^2(\Omega))$. As discussed in the introduction, by considering the sequence $\delta_{\{\rho^\ep, \vec{u}^\ep, \nabla_x\Phi_{\rho^\ep}\}}$ we obtain in the limit $\ep\to0$ a parameterised measure \begin{equation*} \vec{\nu}=\{\nu_{t,x}\}\in L_{\mathrm{weak}}^\infty\prt*{(0,T)\times\Omega;\mathcal{M}^+\prt*{[0,\infty)\times \R^d\times\R^d}} \end{equation*} which represents weak-$^*$ limits of nonlinear compositions with $C_0$ nonlinearities. 
However, since the ones appearing in the weak formulation of our problem are only continuous and not $C_0$ (and since their compositions with the approximating sequence are not uniformly integrable), we cannot apply the fundamental theorem of Young measures directly. The only terms in which we can straightaway pass to the limit are the ones containing $\nabla_x\Phi_{\rho^\ep}$ in the momentum and Poisson equations. For all the other terms we have to make use of Lemma~\ref{lem:integrability} to describe the oscillatory behaviour and introduce the concentration-defect measures as in~\eqref{eq:concentrationdefect}. Indeed, the functions of interest are $f(s,\vec{v},\vec{F}) = s, s\vec{v}, s\vec{v}\otimes\vec{v}, s|\vec{v}|^2, \vec{F}\otimes\vec{F}, |\vec{F}|^2$, and they all satisfy the conditions of the lemma. In particular, using~\eqref{eq:limitwithconcentration}, we can pass to the limit $\ep\to0$ in each term of~\eqref{eq:approxcontinuity}: \begin{equation*} \begin{aligned} \int_0^\tau\int_\Omega \rho^\ep \partial_t\psi + \rho^\ep \vec{u}^\ep\cdot\nabla_x\psi\dx{x}\dx{t} &\longrightarrow \int_0^\tau\int_\Omega \overline{\rho}\partial_t\psi + \overline{\rho\vec{u}}\cdot\nabla_x\psi\dx{x}\dx{t},\\[0.5em] \int_\Omega\rho^\ep(\tau,x)\psi(\tau,x)\dx{x} &\longrightarrow \int_\Omega \overline{\rho}(\tau)\psi(\tau,x)\dx{x}, \end{aligned} \end{equation*} for any $\psi\in C^1([0,T]\times\overline{\Omega})$, to obtain~\eqref{eq:mvcontinuity}. Similarly we pass to the limit in~\eqref{eq:approxpoisson} to get~\eqref{eq:mvPoisson} (notice that $m^{\nabla\Phi}=0$, since $\nabla_x\Phi_{\rho^\ep}$ is square-integrable uniformly in $\ep$), and we also obtain~\eqref{eq:mvmomentum} as a limit of~\eqref{eq:approxmomentum}. Finally, passing to the limit in~\eqref{eq:approximateenergy} we obtain~\eqref{eq:energy_inequality}.
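A standard one-dimensional example shows why the concentration-defect measures are unavoidable for such nonlinearities: on $\Omega=(0,1)$ the sequence $\rho^\ep=\ep^{-1}\mathbf{1}_{(0,\ep)}$ is bounded in $L^1(\Omega)$ and converges to zero almost everywhere, so it generates the Young measure $\nu_x=\delta_0$; nevertheless
\[
\rho^\ep \weakstar \delta_{\{x=0\}}\;\;\text{in }\mathcal{M}([0,1]),
\qquad\text{so that}\qquad
\overline{\rho}=\skp*{\nu_x;s}+m^\rho
\quad\text{with}\quad
\skp*{\nu_x;s}=0,\;\; m^\rho=\delta_{\{x=0\}}.
\]
The entire mass of the limit is thus carried by the concentration part and is invisible to the Young measure alone.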
Notice that by virtue of Proposition~\ref{prop:concentrationrelations} we have several natural relations between the concentration measures, for instance \begin{equation*} \abs*{m^{\rho\vec{u}}} \leq m^{\rho} + m^{\rho|\vec{u}|^2}, \end{equation*} and \begin{equation*} \abs*{m^{\nabla\Phi\otimes\nabla\Phi}} \leq m^{|\nabla\Phi|^2},\quad \abs*{m^{\rho\vec{u}\otimes\vec{u}}} \leq m^{\rho|\vec{u}|^2}. \end{equation*} In particular $m^{|\nabla\Phi|^2} = 0$ implies $m^{\nabla\Phi\otimes\nabla\Phi} = 0$ (see Lemma~\ref{lem:inegalites2}). Furthermore, due to the bounds in $L^\infty(0,T;L^1(\Omega))$, each concentration measure admits a disintegration with respect to the time and space variables, and thus we have \begin{align*} m^\rho, m^{\rho|\vec{u}|^2}, m^{|\nabla\Phi|^2} &\in L^\infty\prt*{0,T;\mathcal{M}^+(\overline{\Omega})},\\ m^{\rho\vec{u}} &\in L^\infty\prt*{0,T;\mathcal{M}(\overline{\Omega})^d},\\ m^{\rho\vec{u}\otimes\vec{u}}, m^{\nabla\Phi\otimes\nabla\Phi} &\in L^\infty\prt*{0,T;\mathcal{M}(\overline{\Omega})^{d\times d}}. \end{align*} Thus the proof of Theorem~\ref{thm:existenceDMVS} is complete. \bigskip Let us point out one useful identity satisfied by the measure-valued solutions. Suppose $(r,\vec{U}, \Phi_r)$ is a regular solution of~\eqref{eq:euler_2}.
Upon using~\eqref{eq:new_form} for this regular solution and for the Galerkin approximations, we obtain, in the limit as $n\to\infty$, the following identity \begin{equation*} \begin{aligned} &\int_0^\tau\int_\Omega \prt*{r-M_r}\nabla_x\Phi_{\rho^\ep}\cdot\phi \dx{x}\dx{t} + \int_0^\tau\int_\Omega \prt*{\rho^\ep-M_{\vr^\ep}}\nabla_x\Phi_r\cdot\phi \dx{x}\dx{t} \\ &= -\int_0^\tau\int_\Omega \nabla_x\Phi_{\rho^\ep}\cdot\nabla_x\Phi_r{{\mathrm{div}}_x}\,\phi \dx{x}\dx{t} \\ &\quad + \int_0^\tau\int_\Omega \nabla_x\Phi_{\rho^\ep}\otimes\nabla_x\Phi_r:\nabla_x\phi \dx{x}\dx{t} + \int_0^\tau\int_\Omega \nabla_x\Phi_r\otimes\nabla_x\Phi_{\rho^\ep}:\nabla_x\phi \dx{x}\dx{t}, \end{aligned} \end{equation*} for every $\phi\in L^\infty(0,T;C^{1}(\Omega))$. We can pass to the limit in each term of the above identity to obtain \begin{equation} \begin{aligned}\label{eq:mvnewform2} &\int_0^\tau\int_\Omega \prt*{r-M_r}\overline{\nabla_x\Phi_\rho}\cdot\phi \dx{x}\dx{t} + \int_0^\tau\int_\Omega \overline{\prt*{\rho-M_\vr}}\nabla_x\Phi_r\cdot\phi \dx{x}\dx{t} \\ &= -\int_0^\tau\int_\Omega \overline{\nabla_x\Phi_\rho}\cdot\nabla_x\Phi_r{{\mathrm{div}}_x}\,\phi \dx{x}\dx{t} \\ &\quad + \int_0^\tau\int_\Omega \overline{\nabla_x\Phi_\rho\otimes\nabla_x\Phi_r}:\nabla_x\phi \dx{x}\dx{t} + \int_0^\tau\int_\Omega \overline{\nabla_x\Phi_r\otimes\nabla_x\Phi_\rho}:\nabla_x\phi \dx{x}\dx{t}. \end{aligned} \end{equation} \begin{remark}[Other approximation schemes] Admittedly, the above approximation scheme carries the disadvantage of imposing rather nonphysical boundary conditions on the velocity. To avoid this, other approximations are possible, for instance through the Navier-Stokes equations with artificial local pressure. We skip the full exposition here, and refer to \cite{FGSW2016} for some details.
\end{remark} \begin{remark} \label{rem:othervariables} Notice that in the above proof we can only guarantee that the measures $\nu_{t,x}$ belong to the space $\mathcal{M}^+$ of non-negative Radon measures with $\norm{\nu_{t,x}}_{TV}\leq 1$. In the variables $(\rho,\vec{u},\nabla_x\Phi_\rho)$ one cannot guarantee that these are necessarily probability measures, because the tightness condition might fail. Indeed, when $\rho=0$, neither the momentum nor the kinetic energy offers control over the velocity, which can be arbitrary in the vacuum regions. This situation is reminiscent of other Euler-type models analysed in the past: the Savage-Hutter equations~\cite{Gw2005} or the compressible Euler equations~\cite{GSW2015}. There, to circumvent this issue, a different set of variables is considered, namely $\rho$ and $\sqrt{\rho}\vec{u}$, instead of the more traditional $\rho$ and $\vec{u}$. In this formulation one has the uniform bound \begin{equation*} \int_{\Omega} |\sqrt{\rho^\ep}\vec{u}^\ep|^2 \dx{x} = \int_{\Omega} \rho^\ep|\vec{u}^\ep|^2 \dx{x} < \infty, \end{equation*} which implies tightness. Therefore the sequence $(\rho^\ep,\sqrt{\rho^\ep}\vec{u}^\ep,\nabla_x\Phi_{\rho^\ep})$ generates a Young measure \[ \vec{\mu} \in L^\infty_{\mathrm{weak}}\prt*{(0,T)\times\R^d ; {\mathcal{P}}([0,\infty)\times\R^d\times\R^d)}, \] which clearly agrees with $\vec{\nu}$ on the first and third coordinates. \end{remark} \section{Relative energy inequality} \label{sec:relative} We suppose now that $(r,\vec{U},\Phi_r)$, $r>0$, is a strong solution to the Euler-Poisson system~\eqref{eq:euler_1} with regular initial data $(\rho_0, \vec{u}_0)$ of finite energy. Furthermore we consider a dissipative measure-valued solution $(\vec{\nu}, m^\rho, m^{\rho\vec{u}}, m^{\rho \vec{u}\otimes \vec{u}}, m^{|\nabla\Phi|^2}, m^{\nabla\Phi\otimes\nabla\Phi})$ with \[ \nu_{0,x} = \delta_{\{\rho_0(x), \vec{u}_0(x), \nabla_x\Phi_r(0,x)\}} \] for a.e.\ $x\in\Omega$.
In~\cite{CaFeGwSw2015} the following relative energy was used: \[ \mathcal{E} (\vr,\vec{u} \,|\, r, \vec{U}) = \int_{\Omega} \brk*{ \frac{1}{2} \vr \abs*{\vec{u} - \vec{U}}^2 + \frac{1}{2} (r - \vr)(K \ast (r-\vr)) } \dx{x}, \] $K$ being the Poisson kernel, to compare strong and dissipative weak solutions and establish a weak-strong uniqueness result. Note that in their case $\Omega = \mathbb{T}^d$ was the flat torus in two or three dimensions. We shall mimic this approach here. To this end, we notice that, upon an integration by parts, we have \[ \int_\Omega \frac{1}{2} (r - \vr)(K \ast (r-\vr)) \dx{x} = \int_\Omega\frac12 \abs*{\nabla_x\Phi_r-\nabla_x\Phi_\rho}^2 \dx{x}. \] A natural candidate for a measure-valued version of the relative energy is therefore: \begin{equation}\label{eq:mvrelenergy} \mathcal{E}_{rel}^{mv}(\tau)= \int_{\Omega} \left[ \frac{1}{2} \overline{\rho |\vec{u} - \vec{U}|^2} + \frac{1}{2} \overline{|\nabla_x\Phi_\rho - \nabla_x\Phi_r|^2} \right] \dx{x}. \end{equation} \noindent We can then write \begin{equation}\label{eq:mvrelenergy2} \begin{split} \mathcal{E}_{rel}^{mv}(\tau) =& \int_\Omega\frac12 \overline{\rho |\vec{u}|^2} \dx{x} + \int_\Omega\frac12 \overline{|\nabla_x\Phi_\rho|^2} \dx{x} + \int_\Omega\frac12 \overline{\rho}|\vec{U}|^2\dx{x} \\[0.5em] &- \int_\Omega\overline{\rho\vec{u}}\cdot\vec{U}\dx{x} - \int_\Omega \overline{\nabla_x\Phi_\rho}\cdot\nabla_x\Phi_r \dx{x} + \frac12\int_\Omega|\nabla_x\Phi_r|^2\dx{x}. \end{split} \end{equation} We also introduce the measure-valued variant of the energy \[ \mathcal{E}^{mv}(\tau)= \int_{\Omega} \left[ \frac{1}{2} \overline{\rho |\vec{u}|^2} + \frac{1}{2} \overline{|\nabla_x\Phi_\rho|^2} + \frac12 \overline{\rho}|x|^2 \right] \dx{x}, \] so that inequality~\eqref{eq:energy_inequality} becomes \begin{equation}\label{eq:mvenergyinequality} \mathcal{E}^{mv}(\tau) \leq \mathcal{E}^{mv}(0) - \gamma \int_0^\tau \int_{\Omega} \overline{\rho |\vec{u}|^2} \dx{x}\dx{t} . 
\end{equation} \noindent Testing the continuity equation~\eqref{eq:mvcontinuity} in turn with $\frac12|\vec{U}|^2$ and $\Phi_r$ we have \begin{equation*} \begin{split} \int_\Omega\frac12\overline{\rho}|\vec{U}|^2(\tau,x)\dx{x} = \int_\Omega&\frac12\rho_0|\vec{U}_0|^2\dx{x} + \int_0^\tau\int_\Omega\overline{\rho}\vec{U}\cdot\partial_t\vec{U} + \overline{\rho\vec{u}}\cdot\nabla_x\vec{U}\vU\dx{x}\dx{t}, \end{split} \end{equation*} and \begin{equation*} \begin{split} \int_\Omega\overline{\rho}\Phi_r(\tau,x)\dx{x} = \int_\Omega&\rho_0\Phi_r(0,x)\dx{x} + \int_0^\tau\int_\Omega\overline{\rho}\partial_t\Phi_r + \overline{\rho\vec{u}}\cdot\nabla_x\Phi_r\dx{x}\dx{t}, \end{split} \end{equation*} while testing the momentum equation~\eqref{eq:mvmomentum} with $\vec{U}$ gives \begin{equation*} \begin{split} \int_{\Omega} &\overline{\rho \vec{u}}\cdot \vec{U} (\tau, x) \dx{x} = \int_{\Omega} \rho_0|\vec{U}_0|^2 \dx{x} + \int_0^\tau \int_\Omega \left[ \overline{\rho \vec{u}}\cdot \partial_t \vec{U} + \overline{\rho \vec{u}\otimes\vec{u}} : \nabla_x \vec{U} - \gamma \overline{\rho \vec{u}}\cdot \vec{U} - \overline{\rho} x\cdot\vec{U} \right] \dx{x}\dx{t} \\[0.5em] &\hspace{3cm}+\int_0^\tau \int_\Omega \frac12\overline{|\nabla_x\Phi_\rho|^2}\, {{\mathrm{div}}_x}\,\vec{U} -\overline{\nabla_x\Phi_\rho\otimes\nabla_x\Phi_\rho} : \nabla_x\vec{U} - M_\vr \overline{\nabla_x\Phi_\vr}\cdot\vec{U} \dx{x}\dx{t}. \end{split} \end{equation*} Note also that by~\eqref{eq:mvPoisson} tested with $\Phi_r$ we have \begin{equation*} \int_\Omega\overline{\rho}\Phi_r(\tau,x)\dx{x} - M_\vr\int_\Omega\Phi_r(\tau,x)\dx{x} = \int_\Omega\overline{\nabla_x\Phi_\rho}\cdot\nabla_x\Phi_r(\tau,x)\dx{x}. 
\end{equation*} \noindent Using the above identities and~\eqref{eq:mvrelenergy2} we get \begin{equation*} \begin{split} \mathcal{E}_{rel}^{mv}(\tau) &= \mathcal{E}^{mv}(\tau) - \int_\Omega\frac12\rho_0|\vec{U}_0|^2\dx{x} - \int_\Omega\rho_0\Phi_r(0,x)\dx{x} \\[0.5em] &\quad+\int_0^\tau\int_\Omega\overline{\rho}\vec{U}\cdot\partial_t\vec{U} + \overline{\rho\vec{u}}\cdot\nabla_x\vec{U}\vU-\overline{\rho \vec{u}}\cdot \partial_t \vec{U} - \overline{\rho \vec{u}\otimes\vec{u}} : \nabla_x \vec{U}\dx{x}\dx{t} \\[0.5em] &\quad+\gamma\int_0^\tau\int_\Omega \overline{\rho \vec{u}}\cdot \vec{U} \dx{x}\dx{t} + \int_0^\tau\int_\Omega\overline{\rho} x\cdot\vec{U} \dx{x}\dx{t} +\frac12\int_\Omega|\nabla\Phi_r|^2(\tau,x)\dx{x} \\[0.5em] &\quad -\int_0^\tau\int_\Omega\frac12\overline{|\nabla_x\Phi_\rho|^2} {{\mathrm{div}}_x}\,\vec{U} - \overline{\nabla_x\Phi_\rho\otimes\nabla_x\Phi_\rho} : \nabla_x\vec{U} - M_\vr \overline{\nabla_x\Phi_\vr}\cdot\vec{U} \dx{x}\dx{t} \\[0.5em] &\quad-\int_0^\tau\int_\Omega\overline{\rho}\partial_t\Phi_r\dx{x}\dx{t} -\int_0^\tau\int_\Omega\overline{\rho\vec{u}}\nabla_x\Phi_r\dx{x}\dx{t} - \frac12\int_\Omega\overline{\rho}(\tau,x)|x|^2\dx{x} \\[0.5em] &\quad + M_\vr\int_\Omega\Phi_r(\tau,x)\dx{x}. 
\end{split} \end{equation*} For the first line on the right-hand side we have, invoking~\eqref{eq:mvenergyinequality} and~\eqref{eq:mvPoisson} again, an upper bound given by \[ \frac12\int_\Omega\rho_0|x|^2\dx{x} - \frac12\int_\Omega|\nabla_x\Phi_r|^2(0,x)\dx{x} - \gamma\int_0^\tau \int_{\Omega}\overline{\rho |\vec{u}|^2} \dx{x}\dx{t} -M_\vr\int_\Omega\Phi_r(0,x)\dx{x}, \] while for the second line we write, using the momentum equation for the strong solution $(r,\vec{U})$ and the strict positivity of $r$, \begin{equation*} \begin{split} \int_0^\tau\int_\Omega&\overline{\rho}\vec{U}\cdot\partial_t\vec{U} + \overline{\rho\vec{u}}\cdot\nabla_x\vec{U}\vU-\overline{\rho\vec{u}}\cdot\partial_t \vec{U} - \overline{\rho\vec{u}\otimes\vec{u}} : \nabla_x \vec{U}\dx{x}\dx{t} \\[0.5em] &= \int_0^\tau\int_\Omega\overline{\rho(\vec{U}-\vec{u})}\cdot\partial_t\vec{U} + \overline{\rho\vec{u}\otimes(\vec{U}-\vec{u})}:\nabla_x\vec{U}\dx{x}\dx{t} \\[0.5em] &= \int_0^\tau\int_\Omega\overline{\rho(\vec{u}-\vec{U})}\cdot(\gamma\vec{U} + x + \nabla_x\Phi_r) + \overline{\rho(\vec{u}-\vec{U})\otimes(\vec{U}-\vec{u})}:\nabla_x\vec{U}\dx{x}\dx{t}. \end{split} \end{equation*} Furthermore, we write \begin{equation*} \begin{split} \int_0^\tau\int_\Omega\overline{\rho}x\cdot\vec{U}\dx{x}\dx{t} &= \int_0^\tau\int_\Omega x\cdot r\vec{U}\dx{x}\dx{t} + \int_0^\tau\int_\Omega\overline{(\rho-r)} x\cdot\vec{U}\dx{x}\dx{t} \\[0.5em] &= \int_\Omega\frac12|x|^2(r(\tau,x)-r_0)\dx{x}+ \int_0^\tau\int_\Omega\overline{(\rho-r)} x\cdot\vec{U}\dx{x}\dx{t}. 
\end{split} \end{equation*} We thus have \begin{equation*} \begin{split} \mathcal{E}_{rel}^{mv}(\tau) \leq &\left[ \frac12\int_\Omega\overline{(r-\rho)}|x|^2 \dx{x}\right]_{t=0}^{t=\tau} +\int_0^\tau\int_\Omega\overline{\rho(\vec{u}-\vec{U})\otimes(\vec{U}-\vec{u})}:\nabla_x\vec{U}\dx{x}\dx{t} \\[0.5em] &+\gamma\int_0^\tau\int_\Omega \overline{\rho \vec{u}} \cdot \vec{U} - \overline{\rho|\vec{u}|^2} + \overline{\rho(\vec{u}-\vec{U})}\cdot\vec{U}\dx{x}\dx{t} \\[0.5em] &+\int_0^\tau\int_\Omega\overline{(\rho-r)} x\cdot\vec{U} + \overline{\rho(\vec{u}-\vec{U})}\cdot x\dx{x}\dx{t} \\[0.5em] &+\frac12\int_\Omega|\nabla_x\Phi_r|^2(\tau,x)\dx{x} - \frac12\int_\Omega|\nabla_x\Phi_r|^2(0,x)\dx{x} -\int_0^\tau\int_\Omega\overline{\rho}\partial_t\Phi_r\dx{x}\dx{t} +\left[ M_\vr\int_\Omega \Phi_r(t,x)\dx{x}\right]_{t=0}^{t=\tau} \\[0.5em] &-\int_0^\tau\int_\Omega \brk*{\frac12\overline{|\nabla_x\Phi_\rho|^2}{{\mathrm{div}}_x}\,\vec{U} -\overline{\nabla_x\Phi_\rho\otimes\nabla_x\Phi_\rho} : \nabla_x\vec{U} - M_\vr \overline{\nabla_x\Phi_\vr}\cdot\vec{U} + \overline{\rho}\vec{U}\cdot\nabla_x\Phi_r} \dx{x}\dx{t}. \end{split} \end{equation*} \noindent Notice that the function $(t,x)\mapsto\frac12|x|^2$ is an admissible test function for the continuity equation~\eqref{eq:mvcontinuity}, so that \begin{equation*} \left[ \frac12\int_\Omega\overline{(r-\rho)}|x|^2\dx{x}\right]_{t=0}^{t=\tau} = -\int_0^\tau\int_\Omega\overline{(\rho\vec{u}-r\vec{U})}x\dx{x}\dx{t}. 
\end{equation*} Furthermore, we make note of the following identities: \begin{equation*} \begin{aligned} \gamma\int_0^\tau\int_\Omega \overline{\rho \vec{u}} \cdot \vec{U} - \overline{\rho|\vec{u}|^2} + \overline{\rho(\vec{u}-\vec{U})}\cdot\vec{U}\dx{x}\dx{t} &= -\gamma\int_0^\tau\int_\Omega \overline{\rho |\vec{u}-\vec{U}|^2}\dx{x}\dx{t}, \\[0.5em] \int_0^\tau\int_\Omega\overline{(\rho-r)} x\cdot\vec{U} + \overline{\rho(\vec{u}-\vec{U})}\cdot x\dx{x}\dx{t} &= \int_0^\tau\int_\Omega\overline{(\rho\vec{u}-r\vec{U})}x\dx{x}\dx{t}, \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} \frac12\int_\Omega|\nabla_x\Phi_r|^2(\tau,x)\dx{x} &- \frac12\int_\Omega|\nabla_x\Phi_r|^2(0,x)\dx{x} -\int_0^\tau\int_\Omega\overline{\rho}\partial_t\Phi_r\dx{x}\dx{t} +\left[ M_\vr\int_\Omega \Phi_r(t,x)\dx{x}\right]_{t=0}^{t=\tau} \\[0.5em] &= \frac12\int_0^\tau\int_\Omega\partial_t|\nabla_x\Phi_r|^2\dx{x}\dx{t}-\int_0^\tau\int_\Omega\overline{\rho}\partial_t\Phi_r\dx{x}\dx{t}+M_\vr\int_0^\tau\int_\Omega \partial_t\Phi_r\dx{x}\dx{t} \\[0.5em] &=\int_0^\tau\int_\Omega\nabla_x\Phi_r\cdot\nabla_x(\partial_t\Phi_r)\dx{x}\dx{t}-\int_0^\tau\int_\Omega\overline{\rho}\partial_t\Phi_r\dx{x}\dx{t}+M_\vr\int_0^\tau\int_\Omega \partial_t\Phi_r\dx{x}\dx{t} \\[0.5em] &=\int_0^\tau\int_\Omega\overline{(\nabla_x\Phi_r-\nabla_x\Phi_\rho)}\nabla_x(\partial_t\Phi_r)\dx{x}\dx{t}, \end{aligned} \end{equation*} where for the last equality we used~\eqref{eq:mvPoisson}. 
\noindent Finally, we use~\eqref{eq:mvnewform2} to write \begin{align*} &-\int_0^\tau\int_\Omega \brk*{\frac12\overline{|\nabla_x\Phi_\rho|^2}{{\mathrm{div}}_x}\,\vec{U} -\overline{\nabla_x\Phi_\rho\otimes\nabla_x\Phi_\rho} : \nabla_x\vec{U} - M_\vr \overline{\nabla_x\Phi_\vr}\cdot\vec{U} + \overline{\rho}\vec{U}\cdot\nabla_x\Phi_r} \dx{x}\dx{t} \\[0.5em] &=-\frac12\int_0^\tau\int_\Omega\overline{|\nabla_x\Phi_\rho-\nabla_x\Phi_r|^2}{{\mathrm{div}}_x}\,\vec{U} \dx{x}\dx{t} + \int_0^\tau\int_\Omega\overline{(\nabla_x\Phi_\rho-\nabla_x\Phi_r)\otimes(\nabla_x\Phi_\rho-\nabla_x\Phi_r)}:\nabla_x\vec{U} \dx{x}\dx{t} \\[0.5em] &\quad + \int_0^\tau\int_\Omega\overline{(\nabla_x\Phi_\rho-\nabla_x\Phi_r)}\cdot r\vec{U}\dx{x}\dx{t}. \end{align*} \noindent Furthermore, we use the strong form of the Poisson equation to write \begin{align*} \int_0^\tau&\int_\Omega\overline{(\nabla_x\Phi_\rho-\nabla_x\Phi_r)}\cdot r\vec{U}\dx{x}\dx{t} +\int_0^\tau\int_\Omega\overline{(\nabla_x\Phi_r-\nabla_x\Phi_\rho)}\cdot\nabla_x(\partial_t\Phi_r)\dx{x}\dx{t} \\[0.5em] &=\int_0^\tau\int_\Omega\overline{\nabla_x\Phi_\rho}\cdot r\vec{U}\dx{x}\dx{t} -\int_0^\tau\int_\Omega\overline{\nabla_x\Phi_\rho}\cdot\nabla_x(\partial_t\Phi_r)\dx{x}\dx{t} \\[0.5em] &=\int_0^\tau\int_\Omega\skp*{\nu_{t,x};\vec{F}}\cdot r\vec{U}\dx{x}\dx{t} -\int_0^\tau\int_\Omega\skp*{\nu_{t,x};\vec{F}}\cdot\nabla_x(\partial_t\Phi_r)\dx{x}\dx{t} \\[0.5em] &=\int_0^\tau\int_\Omega\nabla_x\Psi_\rho(t,x)\cdot r\vec{U}\dx{x}\dx{t} -\int_0^\tau\int_\Omega\nabla_x\Psi_\rho(t,x)\cdot\nabla_x(\partial_t\Phi_r)\dx{x}\dx{t} \\[0.5em] &=0, \end{align*} using property~\eqref{eq:gradientmeasure} in the third equality above. 
\noindent We therefore arrive at the following measure-valued version of the relative energy inequality \begin{equation}\label{eq:mvrelativeenergy} \begin{split} \mathcal{E}_{rel}^{mv}(\tau)\leq & \int_0^\tau\int_\Omega\overline{\rho(\vec{u}-\vec{U})\otimes(\vec{U}-\vec{u})}:\nabla_x\vec{U}\dx{x}\dx{t} -\gamma\int_0^\tau\int_\Omega \overline{\rho |\vec{u}-\vec{U}|^2}\dx{x}\dx{t} \\[0.5em] &-\frac12\int_0^\tau\int_\Omega\overline{|\nabla_x\Phi_\rho-\nabla_x\Phi_r|^2}{{\mathrm{div}}_x}\,\vec{U} \dx{x}\dx{t} + \int_0^\tau\int_\Omega\overline{(\nabla_x\Phi_\rho-\nabla_x\Phi_r)\otimes(\nabla_x\Phi_\rho-\nabla_x\Phi_r)}:\nabla_x\vec{U} \dx{x}\dx{t}. \end{split} \end{equation} \noindent All the terms on the right-hand side of inequality~\eqref{eq:mvrelativeenergy} can be easily seen to be controlled by \[ c\int_0^\tau\mathcal{E}_{rel}^{mv}(t)\dx{t}, \] where the constants $c$ depend only on the norm $\norm{\nabla_x\vec{U}}_{C([0,T]\times\overline{\Omega})}$. Note that for the terms involving a tensor product we use Lemma~\ref{lem:inegalites2} to compare them to the corresponding norm-squared terms. We thus have the following Gronwall-type inequality \begin{equation}\label{eq:relenergyfinal} \mathcal{E}_{rel}^{mv}(\tau) \leq c \int_0^\tau\mathcal{E}_{rel}^{mv}(t)\dx{t}. \end{equation} \section{Proof of the main theorem} \label{sec:mainproof} Having established the relative energy inequality, we proceed to the proof of our main result, Theorem~\ref{thm:mv-stronguniqness}. This will be done in several steps, each leading to identifications of successive terms in the measure-valued formulation with their counterparts in the strong formulation. Firstly, we observe that inequality~\eqref{eq:relenergyfinal} implies that $\mathcal{E}_{rel}^{mv} = 0$ at almost all times, since the strong and measure-valued solution emanate from the same initial data. 
In particular, since both terms of the relative energy are non-negative, we have \begin{equation} \int_\Omega\overline{|\nabla_x\Phi_\rho-\nabla_x\Phi_r|^2}\dx{x} = 0, \end{equation} from which we readily infer that the projection of the Young measure $\nu$ onto the third coordinate reduces to a Dirac mass at $\nabla_x\Phi_r$, and therefore \begin{equation} \label{eq:potentialidentification} \nu_{t,x}(s,\vec{v},\vec{F}) = \bar{\nu}_{t,x}(s,\vec{v})\otimes\delta_{\{\nabla\Phi_r(t,x)\}},\quad m^{|\nabla\Phi|^2} = 0, \end{equation} where \[ \bar{\nu} = \prt*{\bar{\nu}_{t,x}} \in L^\infty_{weak}\prt*{(0,T)\times\Omega ; \mathcal{M}^{+}([0,\infty)\times\R^d)}. \] Consequently, in the ``kinetic'' part of the relative energy we have \begin{equation} \int_{[0,\infty)\times\R^d} s|\vec{v}-\vec{U}|^2\dx{\bar{\nu}_{t,x}}(s,\vec{v}) = 0,\quad m^{\rho|\vec{u}-\vec{U}|^2} = 0. \end{equation} As explained in the introduction, however, we cannot conclude here that the concentration measures in the density and the momentum vanish. Instead, we need to work with the whole oscillation-concentration pairs and only relate these to the corresponding strong quantities. We begin by considering the kinetic term appearing in the energy inequality: namely, we will show that \[ \overline{\rho|\vec{u}|^2} = \overline{\rho}|\vec{U}|^2 = r|\vec{U}|^2. \] To this end we choose $0<\delta<1$ and apply Lemma~\ref{lem:inegalites} from Appendix~\ref{sec:appB} to infer that on the level of approximating sequences $\rho^\ep, \vec{u}^\ep$, as in Section~\ref{sec:existence}, we have the inequality \begin{equation} \rho^\ep\abs*{|\vec{u}^\ep|^2 - |\vec{U}|^2} \leq C\delta\rho^\ep + C_\delta\rho^\ep|\vec{u}^\ep-\vec{U}|^2. 
\end{equation} Since these inequalities are preserved in the limit $\ep\to0$, we conclude \begin{equation} -C\delta\overline{\rho} \leq \overline{\rho|\vec{u}|^2} - \overline{\rho}|\vec{U}|^2 \leq C\delta\overline{\rho}, \end{equation} where we have used that \begin{equation} \overline{\rho|\vec{u}-\vec{U}|^2} = 0, \end{equation} as concluded from the relative energy inequality. Hence, by the arbitrariness of $\delta>0$, we have \begin{equation} \label{eq:kineticidentification} \overline{\rho|\vec{u}|^2} = \overline{\rho}|\vec{U}|^2. \end{equation} Let us mention in passing that this equality concerns the sums of the Young measure and concentration parts of the corresponding terms, and it is in principle not immediately obvious that the corresponding term-by-term equalities follow (in particular that $m^{\rho|\vec{u}|^2} = |\vec{U}|^2m^\rho$). However, this is indeed true by virtue of Lemma~\ref{lem:projections}.\\ Next, we observe that \begin{equation} \label{eq:densityidentification} \overline{\rho} = r \end{equation} almost everywhere in $(0,T)\times\Omega$. Indeed, this follows from the identification $\skp{\nu_{t,x};\vec{F}} = \nabla_x\Phi_r(t,x)$, mass conservation and the Poisson equation. Therefore, we have \begin{equation} \overline{\rho|\vec{u}|^2} = r|\vec{U}|^2, \end{equation} almost everywhere in $(0,T)\times\Omega$. Similarly, applying Lemma~\ref{lem:inegalites} again, we have \begin{equation} \rho^\ep\abs*{u^\ep_iu^\ep_j - U_iU_j} \leq C\delta\rho^\ep + C_\delta\rho^\ep|\vec{u}^\ep-\vec{U}|^2, \end{equation} and therefore in the limit \begin{equation} \overline{\rho u_iu_j} = rU_iU_j, \end{equation} so that we can conclude the analogous equality for the convective term, \emph{i.e.}, \begin{equation} \label{eq:convectiveidentification} \overline{\rho\vec{u}\otimes\vec{u}} = r\vec{U}\otimes\vec{U}. 
\end{equation} Consequently, the momentum equation becomes the simple ODE \begin{equation} \partial_t\prt*{\overline{\rho \vec{u}}} + \gamma\overline{\rho\vec{u}} = \partial_t\prt*{r\vec{U}} + \gamma r\vec{U}, \end{equation} from which we infer that \begin{equation} \label{eq:momentumidentification} \overline{\rho \vec{u}}(t,x) = (r\vec{U})(t,x), \end{equation} for almost every $(t,x)\in(0,T)\times\Omega$. Let us now focus on the Young measure $\bar{\nu}$ from equality~\eqref{eq:potentialidentification}. Firstly, to deal with potential ``vacuum regions'' where the density vanishes, we decompose this measure into \begin{equation} \bar{\nu} = \bar{\nu} \mres \prt*{\{0\}\times\R^d} + \bar{\nu} \mres \prt*{(0,\infty)\times\R^d} =\colon \sigma^1 + \sigma^2. \end{equation} Then, since \begin{equation} \int_{[0,\infty)\times\R^d} s|\vec{v}-\vec{U}|^2 \dx{\bar{\nu}_{t,x}} = \int_{(0,\infty)\times\R^d} s|\vec{v}-\vec{U}|^2 \dx{\sigma^2_{t,x}} = 0, \end{equation} we have \begin{equation} \sigma^2_{t,x} = \bar{\bar{\nu}}_{t,x} \otimes \delta_{\{\vec{U}(t,x)\}} \end{equation} almost everywhere in $(0,T)\times\Omega$ for some measure $\bar{\bar{\nu}}_{t,x}\in \mathcal{M}^+(0,\infty)$. Note that we still cannot guarantee that this is a probability measure. Consequently, denoting by $\pi_1\sigma^1$ the projection of the measure $\sigma^1$ onto the first coordinate, and using Lemma~\ref{lem:projections}, we may expand equality~\eqref{eq:densityidentification} as \begin{equation} \int_A \brk*{\int_{\{0\}}s\dx{(\pi_1\sigma^1_{t,x})} + \int_{(0,\infty)}s\dx{\bar{\bar{\nu}}_{t,x}}} \dx{x}\dx{t} + m^\rho(A) = \int_A r(t,x) \dx{x}\dx{t}, \end{equation} where $A$ is any Borel subset of $(0,T)\times\R^d$. In particular, since the Young-measure contribution on the left-hand side is non-negative, we obtain $0\leq m^\rho(A)\leq\int_A r(t,x)\dx{x}\dx{t}$; thus $m^\rho$ vanishes on Lebesgue-null sets, that is, the concentration measure $m^\rho$ is absolutely continuous with respect to the Lebesgue measure on $(0,T)\times\R^d$, and its Radon-Nikodym derivative $D^{m^\rho}$ exists. 
Hence, the last equality can be rewritten as \begin{equation} \label{eq:densityidentification2} \int_{(0,\infty)} s \dx{\bar{\bar{\nu}}_{t,x}}(s) + D^{m^\rho}(t,x) = r(t,x), \end{equation} almost everywhere. In a similar manner, equalities~\eqref{eq:kineticidentification},~\eqref{eq:momentumidentification},~\eqref{eq:convectiveidentification} imply that all the measures $m^{\rho|\vec{u}|^2}$, $m^{\rho\vec{u}}$, $m^{\rho\vec{u}\otimes\vec{u}}$ are absolutely continuous with respect to the Lebesgue measure. This concludes the proof of Theorem~\ref{thm:mv-stronguniqness}. \begin{remark} At this point one might make one more observation about the family of probability measures $\mu_{t,x}$ corresponding to the variables $(\rho, \sqrt{\rho}\vec{u}, \nabla_x\Phi_\rho)$ as in Remark~\ref{rem:othervariables}. In this case one can deduce that \begin{equation} \mathrm{supp}\prt*{\pi_2\mu}_{t,x} \subset \{\vec{v}\; :\; \vec{v} = \alpha\vec{U}(t,x), \alpha\in[0,\infty)\} \end{equation} so that the projection of the Young measure onto the second coordinate lives only on the one-dimensional subspace determined by the direction of the velocity $\vec{U}$. \end{remark} \begin{remark} Consider the quantity \begin{equation} \mathcal{KE}(t) \coloneqq \int_\Omega\frac12\overline{\rho|\vec{u}|^2}\dx{x}. \end{equation} By~\eqref{eq:mvenergyinequality} we then have \begin{equation} \mathcal{KE}(\tau) \leq \mathcal{E}^{mv}(\tau) \leq \mathcal{E}^{mv}(0) - 2\gamma\int_0^\tau \mathcal{KE}(t)\dx{t}. \end{equation} Consequently $\mathcal{KE}$ satisfies, at almost all times, the bound \begin{equation} 0\leq \mathcal{KE}(\tau) \leq \mathcal{E}^{mv}(0)e^{-2\gamma\tau}, \end{equation} and therefore decays to zero in time. Thus both the Young and the concentration measures in $\mathcal{KE}$ converge to zero as $t\to\infty$. 
\end{remark} \section{Euler alignment system} The calculations performed in the previous section do not saturate the capacity of the relative energy method, in the sense that we can also consider, instead of system~\eqref{eq:euler_1}, a system with a more general confinement potential, as well as nonlinear damping. More precisely, let us consider the following Euler alignment system \begin{equation} \begin{aligned} \partial_t \vr + {{\mathrm{div}}_x}\, (\vr \vec{u}) & = 0, \\ \partial_t (\vr \vec{u}) + {{\mathrm{div}}_x}\, (\vr \vec{u} \otimes \vec{u}) & = - \vr \nabla_x \Phi_\vr - \vr \nabla_x V - \vr\nabla_x\prt*{W\star\rho} + \rho \int_\Omega\psi(x-y)\rho(t,y)\prt*{\vec{u}(t,y)-\vec{u}(t,x)}\dx{y}, \label{eq:euler_general} \\ -\Delta \Phi_\vr &= \vr-M_\vr, \end{aligned} \end{equation} in $(0,T) \times \Omega$, where $V=V(x)$ is smooth, and $W$ and $\psi$ are smooth and symmetric. The kernel $W$ encodes the repulsive-attractive interaction force between individuals, while $\psi$ weights the local averaging that measures the consensus in their orientations. The equations are supplemented with the same boundary conditions~\eqref{eq:bdryconditions} as before. The spatial domain, $\Omega$, is still a bounded smooth domain in $\R^d$. Nevertheless, by a slight abuse of notation, we shall use the convolution symbol $\star$ to denote the integral \begin{equation} (W\star\rho)(x) = \int_\Omega W(x-y)\rho(y)\dx{y}, \end{equation} so as to simplify the notation. A similar system has recently been considered in~\cite{BrezinaMacha}, where measure-valued solutions of a viscous approximation are shown to converge in the inviscid limit to a strong solution of equation~\eqref{eq:euler_general}. However, the presence of an artificial pressure term is assumed -- this substantially simplifies the analysis, as explained above. 
The energy identity for strong solutions of system~\eqref{eq:euler_general} reads \begin{equation} \begin{aligned} \frac{\dx{}}{\dx{t}} \int_\Omega \frac12\rho|\vec{u}|^2 + \frac12\abs*{\nabla_x\Phi_\vr}^2 + \rho V + \frac12 \rho\prt*{W\star\rho} \dx{x} = -\frac12\int_\Omega\intO\psi(x-y)\rho(t,x)\rho(t,y)\abs*{\vec{u}(t,y)-\vec{u}(t,x)}^2 \dx{x}\dx{y}. \end{aligned} \end{equation} \noindent The definition of a dissipative measure-valued solution has to be adjusted to account for the new terms. The weak formulation of the momentum equation, Equation~\eqref{eq:mvmomentum}, becomes \begin{equation}\label{eq:mvmomentum2} \begin{split} \int_{\Omega} & \overline{\rho\vec{u}}(\tau,x) \cdot \phi (\tau, x) \dx{x} - \int_{\Omega} \overline{\rho\vec{u}}(0,x) \cdot \phi (0, x) \dx{x} \\[0.5em] & = \int_0^\tau\!\! \int_\Omega \overline{\rho\vec{u}}\cdot\partial_t \phi \dx{x}\dx{t} +\int_0^\tau\!\! \int_\Omega \overline{\rho\vec{u}\otimes\vec{u}} : \nabla_x \phi \dx{x}\dx{t} - \int_0^\tau\!\! \int_\Omega\overline{\rho}\, \nabla_x V\cdot\phi\dx{x}\dx{t} \\[0.5em] &\hspace{0.2cm} + \int_0^\tau\!! 
\int_\Omega\frac12\overline{|\nabla_x\Phi_\rho|^2}\,{{\mathrm{div}}_x}\,\phi \dx{x}\dx{t} - \int_0^\tau\!\!\int_\Omega \overline{\nabla_x\Phi_\rho\otimes\nabla_x\Phi_\rho}:\nabla_x\phi \dx{x}\dx{t} - M_\rho\int_0^\tau\!\!\int_\Omega\overline{\nabla_x\Phi_\rho}\cdot\phi \dx{x} \dx{t} \\[0.5em] &\hspace{0.2cm} - \int_0^\tau\!\!\int_\Omega \overline{\rho}(t,x)\int_\Omega\nabla_xW(x-y)\overline{\rho}(t,y)\dx{y}\cdot\phi(t,x)\dx{x}\dx{t} \\[0.5em] &\hspace{0.2cm} + \int_0^\tau\!\!\int_\Omega \overline{\rho}(t,x)\int_\Omega\psi(x-y)\overline{\rho\vec{u}}(t,y)\dx{y}\cdot\phi(t,x)\dx{x}\dx{t} \\[0.5em] &\hspace{0.2cm} - \int_0^\tau\!\!\int_\Omega \overline{\rho\vec{u}}(t,x)\int_\Omega\psi(x-y)\overline{\rho}(t,y)\dx{y}\cdot\phi(t,x)\dx{x}\dx{t} \end{split} \end{equation} for a.a.\ $\tau \in (0,T) $ and every $\phi \in C^1([0,T] \times \overline{\Omega}; \R^d)$; while the energy inequality~\eqref{eq:mvenergyinequality} now reads \begin{equation}\label{eq:mvenergyinequality2} \begin{aligned} \mathcal{E}_*^{mv}(\tau) \leq \mathcal{E}_*^{mv}(0) &- \int_\Omega\intO\psi(x-y)\brk*{\skp*{\nu_{t,x};s|\vec{v}|^2}\skp*{\nu_{t,y};s} - \skp*{\nu_{t,x};s\vec{v}}\skp*{\nu_{t,y};s\vec{v}}} \dx{x}\dx{y} \\[0.5em] &- \int_\Omega\intO \psi(x-y) \dx{m_\tau^{\vr|\vec{u}|^2}}(x)\dx{m_\tau^\vr}(y) + \int_\Omega\intO \psi(x-y) \dx{m_\tau^\vr}(x)\dx{m_\tau^\vr}(y), \end{aligned} \end{equation} where \begin{equation} \begin{aligned} \mathcal{E}_*^{mv}(\tau) &\coloneqq \int_\Omega \skp*{\nu_{\tau,x}; \frac{1}{2} s | \vec{v}|^2} \dx{x} + \frac12 m_\tau^{\rho|\vec{u}|^2}(\Omega) + \int_\Omega\skp*{ \nu_{\tau,x}; \frac12|\vec{F}|^2} \dx{x} + \frac12m_\tau^{|\nabla\Phi|^2}(\Omega) + \int_\Omega\skp*{ \nu_{\tau,x}; s} V(x) \dx{x} \\[0.5em] &\hspace{0.2cm}+\int_\Omega V(x)\dx{m}^{\rho}_\tau(x) + \frac12\int_\Omega \skp*{\nu_{\tau,x};s}\int_\Omega W(x-y)\skp*{\nu_{\tau,y};s}\dx{y}\dx{x} +\frac12\int_\Omega\intO W(x-y) \dx{m_{\tau}^\vr}(y)\dx{m_{\tau}^\vr}(x) \\[0.5em] &= \int_\Omega \frac{1}{2} 
\overline{\rho |\vec{u}|^2} + \frac{1}{2} \overline{|\nabla_x\Phi_\rho|^2} + \overline{\rho} V + \frac12\overline{\rho}\prt*{W\star\overline{\rho}} \dx{x}. \end{aligned} \end{equation} \bigskip \noindent We wish to derive an analogue of Theorem~\ref{thm:mv-stronguniqness} for the system~\eqref{eq:euler_general}. To this end we again suppose that a $C^1$-regular solution $(r,\vec{U},\Phi_r)$ with $r>0$ is available, and consider a measure-valued solution with the same initial data $(r_0,\vec{U}_0)$. Let us remark that the issue of existence of such a measure-valued solution can be settled by a two-step approximation argument similar to that of Section~\ref{sec:existence}. Since the convolution-type terms pose no additional difficulties in constructing the approximating sequences and passing to the limit, we skip the details. Let us now discuss how the new terms influence the calculations towards a Gronwall inequality performed in the previous section. We keep the same relative energy functional as in~\eqref{eq:mvrelenergy}. First, it can be readily seen that the $-\rho\nabla_x V$ term behaves in exactly the same way as the $-\rho\nabla_x\prt*{\frac12|x|^2}$ term did previously. 
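To make this explicit, we sketch the corresponding algebraic identity, which carries over verbatim from the confinement computation when $\frac12|x|^2$ is replaced by the smooth potential $V$:
\[
\int_0^\tau\int_\Omega\overline{(\rho-r)}\,\nabla_x V\cdot\vec{U} + \overline{\rho\prt*{\vec{u}-\vec{U}}}\cdot\nabla_x V \dx{x}\dx{t} = \int_0^\tau\int_\Omega\overline{\prt*{\rho\vec{u}-r\vec{U}}}\cdot\nabla_x V \dx{x}\dx{t},
\]
and the right-hand side is cancelled by the boundary term $\brk*{\int_\Omega\overline{(r-\rho)}\,V\dx{x}}_{t=0}^{t=\tau}$, obtained by testing the continuity equations (measure-valued and strong) with $V$; the smoothness of $V$ is what makes it an admissible test function here.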
When we arrive at inequality~\eqref{eq:mvrelativeenergy}, we now have two additional lines on the right-hand side, namely \begin{equation} \begin{aligned}\label{eq:W-line} -\frac12\brk*{\int_\Omega \overline{\vr}\prt*{W\star\overline{\vr}}\dx{x}}_{t=0}^{t=\tau} +\int_0^\tau\int_\Omega \overline{\vr}\prt*{\nabla_x W\star\overline{\vr}}\cdot\vec{U} \dx{x}\dx{t} + \int_0^\tau\int_\Omega \overline{\vr\prt*{\vec{u}-\vec{U}}}\cdot \nabla_x\prt*{W\star r}\dx{x}\dx{t}, \end{aligned} \end{equation} and \begin{equation} \begin{aligned}\label{eq:Psi-line} &-\int_0^\tau\int_\Omega\intO\psi(x-y)\overline{\vr}(t,x)\overline{\vr\vec{u}}(t,y)\cdot\vec{U}(t,x) \dx{y}\dx{x}\dx{t} +\int_0^\tau\int_\Omega\intO\psi(x-y)\overline{\vr}(t,y)\overline{\vr\vec{u}}(t,x)\cdot\vec{U}(t,x) \dx{y}\dx{x}\dx{t} \\[0.5em] &-\int_0^\tau\int_\Omega \overline{\vr\prt*{\vec{u}-\vec{U}}}\cdot\int_\Omega\psi(x-y)r(t,y)\prt*{\vec{U}(t,y)-\vec{U}(t,x)}\dx{y} \dx{x}\dx{t} \\[0.5em] &-\int_0^\tau\int_\Omega\intO\psi(x-y)\brk*{\overline{\vr|\vec{u}|^2}(t,x)\overline{\vr}(t,y) - \overline{\vr\vec{u}}(t,x)\overline{\vr\vec{u}}(t,y)}\dx{y}\dx{x}\dx{t}. \end{aligned} \end{equation} For~\eqref{eq:W-line} we can, after some straightforward computations, write equivalently \begin{equation} \begin{aligned}\label{eq:W-line2} -\frac12\brk*{\int_\Omega \overline{\vr}\prt*{W\star\overline{\vr}}\dx{x}}_{t=0}^{t=\tau} + \int_0^\tau\int_\Omega\overline{\vr\vec{u}}\cdot\nabla_x\prt*{W\star\overline{\vr}} \dx{x}\dx{t} + \int_0^\tau\int_\Omega\overline{\vr\prt*{\vec{u}-\vec{U}}}\cdot \nabla_x\prt*{W\star \prt*{r-\overline{\vr}}}\dx{x}\dx{t}. 
\end{aligned} \end{equation} Since the attraction-repulsion kernel $W$ is smooth, the first two terms of~\eqref{eq:W-line2} cancel according to the following calculation, see also~\cite[Section~5, Step~4.]{CaFeGwSw2015}, \begin{equation} \begin{aligned} -\frac12\brk*{\int_\Omega \overline{\vr}\prt*{W\star\overline{\vr}}\dx{x}}_{t=0}^{t=\tau} = -\frac12\int_0^\tau\int_\Omega \frac{\partial}{\partial t}\prt*{\overline{\vr}\prt*{W\star\overline{\vr}}}\dx{x} = -\int_0^\tau\int_\Omega\overline{\vr}\prt*{W\star\partial_t\overline{\vr}}\dx{x} = -\int_0^\tau\int_\Omega\overline{\vr\vec{u}}\cdot\nabla_x\prt*{W\star\overline{\vr}}\dx{x}. \end{aligned} \end{equation} We now rewrite~\eqref{eq:Psi-line} as follows \begin{equation} \begin{aligned} \int_0^\tau&\int_\Omega\intO \psi(x-y)\overline{\vr\prt*{\vec{u}-\vec{U}}}(t,x)\cdot\prt*{\vec{U}(t,y)-\vec{U}(t,x)}\overline{\prt*{\vr-r}}(t,y) \dx{y}\dx{x}\dx{t} \\[0.5em] &-\int_0^\tau\int_\Omega\intO \psi(x-y)\brk*{\overline{\vr\abs*{\vec{u}-\vec{U}}^2}(t,x)\overline{\vr}(t,y) - \overline{\vr\prt*{\vec{u}-\vec{U}}}(t,x)\cdot\overline{\vr\prt*{\vec{u}-\vec{U}}}(t,y)}\dx{y}\dx{x}\dx{t}. \end{aligned} \end{equation} The latter of the two terms can be discarded, since it is negative. Indeed, at the level of approximating sequences we can write, due to symmetry of $\psi$, \begin{equation} \begin{aligned} &\int_\Omega\intO\psi(x-y)\brk*{\rho^\ep(t,y)\rho^\ep(t,x)\abs*{\vec{u}^\ep(t,x)-\vec{U}(t,x)}^2 - \rho^\ep(t,x)\prt*{\vec{u}^\ep(t,x)-\vec{U}(t,x)}\cdot\rho^\ep(t,y)\prt*{\vec{u}^\ep(t,y)-\vec{U}(t,y)}}\dx{y}\dx{x}\\[0.5em] &=\frac12\int_\Omega\intO\psi(x-y)\rho^\ep(t,x)\rho^\ep(t,y)\abs*{\prt*{\vec{u}^\ep(t,x)-\vec{U}(t,x)}-\prt*{\vec{u}^\ep(t,y)-\vec{U}(t,y)}}^2\dx{y}\dx{x} \geq 0. 
\end{aligned} \end{equation} \bigskip \noindent It now only remains to bound the remaining two terms \begin{equation} I_1 \equiv \int_0^\tau\int_\Omega\overline{\vr\prt*{\vec{u}-\vec{U}}}\cdot \nabla_x\prt*{W\star \prt*{r-\overline{\vr}}}\dx{x}\dx{t}, \end{equation} and \begin{equation} I_2 \equiv \int_0^\tau\int_\Omega\intO \psi(x-y)\overline{\vr\prt*{\vec{u}-\vec{U}}}(t,x)\cdot\prt*{\vec{U}(t,y)-\vec{U}(t,x)}\overline{\prt*{\vr-r}}(t,y) \dx{y}\dx{x}\dx{t}, \end{equation} in terms of the relative energy. The strategy is to mimic the analogous bounds for the weak-strong case, as presented in~\cite{CaFeGwSw2015}. In fact, on the level of approximating sequences the calculations are exactly the same, since they only rely on functional inequalities and the Poisson equation. We present these arguments below for the readers' convenience. First, notice that $I_1 = \lim_{\ep\to0}I_1^\ep$, where \begin{equation} \begin{split} |I_1^\ep| &= \abs*{\int_0^\tau\int_\Omega\rho^\ep(t,x)\prt*{\vec{u}^\ep(t,x)-\vec{U}(t,x)}\cdot\int_\Omega\nabla_xW(x-y)\prt*{r(t,y)-\rho^\ep(t,y)}\dx{y}\,\dx{x}\dx{t}} \\[0.5em] &\leq c\int_0^\tau\int_\Omega\rho^\ep\abs*{\vec{u}^\ep-\vec{U}}^2\dx{x}\dx{t} + c\int_0^\tau\prt*{\int_\Omega\rho^\ep(t,x)\dx{x}}\norm{\nabla_xW\star\prt*{r(t,\cdot)-\rho^\ep(t,\cdot)}}^2_{L^\infty}\dx{t} \\[0.5em] &\leq c\int_0^\tau\int_\Omega\rho^\ep\abs*{\vec{u}^\ep-\vec{U}}^2\dx{x}\dx{t} + cM_{\rho^\ep}\int_0^\tau\norm{r(t,\cdot)-\rho^\ep(t,\cdot)}^2_{W^{-1,2}}\dx{t} \\[0.5em] &\leq c\int_0^\tau\int_\Omega\rho^\ep\abs*{\vec{u}^\ep-\vec{U}}^2\dx{x}\dx{t} + cM_{\rho^\ep}\int_0^\tau\norm{\nabla_x\Phi_r(t,\cdot)-\nabla\Phi_{\rho^\ep}(t,\cdot)}^2_{L^2}\dx{t}. \end{split} \end{equation} In the limit $\ep\to0$ we obtain the desired estimate \begin{equation} I_1 \leq C\int_0^\tau\mathcal{E}_{rel}^{mv}(t)\dx{t}. \end{equation} The second term, $I_2$, is treated identically once we use boundedness of the strong solution $\vec{U}$. 
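For completeness, we also sketch the analogous estimate for $I_2$; it relies only on the smoothness and symmetry of $\psi$ and the boundedness of $\vec{U}$, with a constant $c$ that we do not track. On the level of the approximating sequences, the Cauchy--Schwarz inequality gives
\begin{equation*}
|I_2^\ep| \leq c\int_0^\tau\int_\Omega\rho^\ep\abs*{\vec{u}^\ep-\vec{U}}^2\dx{x}\dx{t} + c\,M_{\rho^\ep}\int_0^\tau\norm{r(t,\cdot)-\rho^\ep(t,\cdot)}^2_{W^{-1,2}}\dx{t},
\end{equation*}
since, for each fixed $x$, the map $y\mapsto\psi(x-y)\prt*{\vec{U}(t,y)-\vec{U}(t,x)}$ is bounded in $W^{1,2}(\Omega)$ uniformly in $x$ and $t$. As for $I_1$, the $W^{-1,2}$ norm is then controlled by $\norm{\nabla_x\Phi_r(t,\cdot)-\nabla_x\Phi_{\rho^\ep}(t,\cdot)}_{L^2}$ via the Poisson equation, and passing to the limit $\ep\to0$ yields the bound $I_2\leq C\int_0^\tau\mathcal{E}_{rel}^{mv}(t)\dx{t}$.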
\bigskip We can therefore once again infer that whenever the regular and the measure-valued solutions emanate from the same initial data, their relative energy vanishes for a.a.\ times $t>0$. As in the previous section, we deduce that the projection of the Young measure $\nu_{t,x}$ onto the third coordinate is the Dirac measure concentrated at $\nabla_x\Phi_r(t,x)$, and $m^{|\nabla\Phi|^2}=0$. Using this information in the Poisson equation, we obtain that $\overline{\rho}=r$ almost everywhere. Similarly, from the vanishing of the first term in the relative energy, \begin{equation} \int_\Omega\overline{\rho\abs*{\vec{u}-\vec{U}}^2}\dx{x} = 0, \end{equation} we deduce the identifications \begin{equation} \overline{\rho|\vec{u}|^2} = r|\vec{U}|^2,\quad \overline{\rho\vec{u}\otimes\vec{u}} = r\vec{U}\otimes\vec{U}. \end{equation} Substituting these identifications into the (measure-valued) momentum equation of~\eqref{eq:euler_general} and using the strong formulation for $(r,\vec{U})$, we obtain the following ODE \begin{equation} \partial_t\prt*{\overline{\rho\vec{u}} - r\vec{U}} = r\brk*{\psi\star\prt*{\overline{\rho\vec{u}}-r\vec{U}}} - \prt*{\psi\star r}\prt*{\overline{\rho\vec{u}}-r\vec{U}}, \end{equation} which readily implies that $\overline{\rho\vec{u}}=r\vec{U}$ almost everywhere in $(0,T)\times\Omega$. We thus obtain the following result. \begin{theorem} Let $1\leq d\leq 3$ and $\Omega\subset\R^d$ be a bounded smooth domain. 
Let \[ (r,\vec{U},\Phi_r)\in C^1([0,T)\times\bar{\Omega};(0,\infty))\times C^1([0,T)\times\bar{\Omega};\R^d)\times C^2([0,T)\times\bar{\Omega}) \] be a strong solution of~\eqref{eq:euler_general} with initial data $r(0,x)=r_0(x),\, \vec{U}(0,x)=\vec{U}_0(x)$ of finite energy, and let\\ $(\vec{\nu}, m^\rho, m^{\rho\vec{u}}, m^{\rho \vec{u}\otimes \vec{u}}, m^{|\nabla\Phi|^2}, m^{\nabla\Phi\otimes\nabla\Phi})$ be a dissipative measure-valued solution with initial state \[ \nu_{0,x} = \delta_{\{r_0,\vec{U}_0,\nabla\Phi_r(0,x)\}}\;\;\;\; \text{for a.e.}\;\; x\in\Omega. \] Then \[ m^{\nabla\Phi\otimes\nabla\Phi}=0,\;\; m^{|\nabla\Phi|^2}=0, \] and we have the following identifications \begin{align} \skp*{\nu_{t,x} ; \rho} + m^{\rho} &= r,\\ \skp*{\nu_{t,x} ; \rho\vec{u}} + m^{\rho\vec{u}} &= r\vec{U},\\ \skp*{\nu_{t,x} ; \rho\vec{u}\otimes\vec{u}} + m^{\rho\vec{u}\otimes\vec{u}} &= r\vec{U}\otimes\vec{U},\\ \skp*{\nu_{t,x} ; \rho|\vec{u}|^2} + m^{\rho|\vec{u}|^2} &= r|\vec{U}|^2, \end{align} which hold for almost every $(t,x)\in(0,T)\times\R^d$. Furthermore, the Young measure admits the decomposition \begin{equation} \nu_{t,x} = \bar{\nu}_{t,x} \otimes \delta_{\{\nabla\Phi_r(t,x)\}}, \end{equation} for some parameterised measure $\bar{\nu}\in L^\infty_{\mathrm{weak}}((0,T)\times\R^d;\mathcal{M}^+([0,\infty)\times\R^d))$; and in turn the restriction $\bar{\nu}\mres ((0,\infty)\times\R^d)$ decomposes into \begin{equation} \bar{\nu}_{t,x} \mres ((0,\infty)\times\R^d) = \bar{\bar{\nu}}_{t,x} \otimes \delta_{\{\vec{U}(t,x)\}} \end{equation} for some parameterised measure $\bar{\bar{\nu}}\in L^\infty_{\mathrm{weak}}((0,T)\times\R^d;\mathcal{M}^+(0,\infty))$. Finally, all the non-zero concentration measures $m^\rho$, $m^{\rho\vec{u}}$, $m^{\rho\vec{u}\otimes\vec{u}}$, $m^{\rho|\vec{u}|^2}$ are absolutely continuous with respect to the Lebesgue measure. 
\end{theorem} \begin{appendices} \section{Appendix} \label{sec:appendix} \subsection{Young measures} \label{sec:appA} Below we gather some additional facts about the parameterised measure generated by our approximating sequences of solutions which we used to pass to the limit in Section~\ref{sec:existence} and deduce the weak-strong identifications in Section~\ref{sec:mainproof}. The required notation and definitions are presented in the introduction. \begin{lemma} \label{lem:integrability} Suppose $X\subset\R^n$ is bounded. Let $z^\ep:X\to Y$ be a sequence of measurable functions and let $\vec{\nu}=(\nu_x)\in L^\infty_{\mathrm{weak}}(X;\mathcal{M}^+(Y))$ denote the associated Young measure. Let $f\in C(Y)$ be a continuous function and suppose that the sequence $(f(z^\ep))$ is uniformly bounded in $L^1(X)$, \emph{i.e.}, \begin{equation} \sup_{\ep>0}\int_X |f(z^\ep(x))|\dx{x} \leq C. \end{equation} Then the function $f$ is $\vec{\nu}$-measurable, \emph{i.e.}, the map $x\mapsto\skp*{\nu_x ; f}$ is well-defined for a.e.\ $x\in X$. Moreover, the map $x\mapsto\skp*{\nu_x ; f}$ belongs to $L^1(X)$. \end{lemma} \begin{proof} Without loss of generality we can assume that $f\geq 0$. Integration with respect to the Young measure is well-defined for continuous functions which vanish at infinity. So consider the sequence of truncated functions $f^k(y) = \theta^k(|y|) f(y)$, where \begin{equation} \theta^k(\alpha)= \begin{cases} 1 & \text{if $|\alpha| < k$}\\ (k+1)-\alpha & \text{if $k\leq |\alpha| \leq k+1$}\\ 0 & \text{if $|\alpha|>k+1$}. \end{cases} \end{equation} Then $f^k\in C_0(Y)$, $0\leq f^k\leq f$, and the sequence is non-decreasing and converges pointwise to $f$. It follows from the Monotone Convergence Theorem that $\skp*{\nu_x; f}$ is well-defined for a.e.\ $x\in X$. Moreover, for each $\phi\in L^1(X)$ we have \begin{equation} \int_X \phi(x)f^k(z^\ep(x))\dx{x} \longrightarrow \int_X \skp*{\nu_x;f^k}\phi(x)\dx{x}.
\end{equation} In particular, for $\phi\equiv 1$ we have \begin{equation} \int_X f^k(z^\ep(x))\dx{x} \longrightarrow \int_X \skp*{\nu_x;f^k} \dx{x}. \end{equation} But \begin{equation} \int_X f^k(z^\ep(x))\dx{x} \leq \int_X f(z^\ep(x))\dx{x} \leq C, \end{equation} and so the integrals \begin{equation} \int_X \skp*{\nu_x;f^k} \dx{x} \leq C \end{equation} are bounded uniformly in $k$. Therefore, by monotone convergence, we deduce that $\brk*{x\mapsto\skp*{\nu_x;f}} \in L^1(X)$. \end{proof} Consequently, defining the concentration-defect measure as in~\eqref{eq:concentrationdefect}, we can deduce the convergence \begin{equation} \label{eq:limitwithconcentration} \int_X f(z^\ep(x))\phi(x)\dx{x} \longrightarrow \int_X \skp*{\nu_x ; f}\phi(x)\dx{x} + \int_X \phi(x)\dx{m^f}(x) \end{equation} for every bounded continuous test function $\phi\in C({\overline{X}})$. The reader might like to compare the above representation result with the notion of biting convergence and how the Young measure describes the biting limit~\cite{BallMurat1989, Pedregal}. \bigskip The next result concerns a possible problem with canonical projections of the parameterised measure onto one of the components of the dummy vector $(s,\vec{v},\vec{F})$. This is another technical issue stemming from the lack of tightness. Namely, the projection of the Young measure generated by a multi-component sequence onto one of its coordinates might not agree with the Young measure generated by the corresponding component. For example, consider the sequence $z^n(x) = (1,n)$ in $\R^2$. Then the corresponding Young measure $\nu$ is zero almost everywhere, while the Young measure generated by the projection sequence $\pi_1z^n =1$ is equal to $\delta_{\{1\}}$. This effect cannot occur if the Young measure generated by $z^n = (z_1^n, z_2^n)$ is a probability measure. Indeed, suppose this is the case.
Then taking $f(z^n) = f_1(z_1^n)$ for $f_1\in C_0$, we have \begin{equation} \int f_1(z_1^n)\phi = \int f(z^n)\phi \longrightarrow \int \skp*{\nu_x ; f} \phi = \int \skp*{\pi_1\nu_x ; f_1}\phi, \end{equation} and \begin{equation} \int f_1(z_1^n)\phi \longrightarrow \int \skp*{\eta_x;f_1}\phi, \end{equation} where $\eta$ denotes the Young measure generated by the sequence $(z_1^n)$. It follows that $\pi_1\nu_x=\eta_x$ almost everywhere. \\ In the current context the above issue could potentially lead to problems when considering the Young measure $\vec{\nu}$ coming from an approximating sequence $z^\ep=(\rho^\ep, \vec{u}^\ep)$. This is remedied by the observation that on each set $\{\rho^\ep\geq\alpha\}$, $\alpha>0$, the sequence $z^\ep$ satisfies the tightness condition, while on the vacuum zones we are free to modify the measures in question arbitrarily. More precisely, we have \begin{lemma} \label{lem:projections} Let $z^\ep = (\rho^\ep, \vec{u}^\ep)$ be any sequence of approximate solutions such that $\rho^\ep$ and $\rho^\ep\vec{u}^\ep$ are uniformly bounded in $L^1(\R^{d+1})$. Let $\vec{\nu}$ be the Young measure generated by $z^\ep$ and $\vec{\eta}$ be the Young measure generated by $\rho^\ep$. Then \begin{equation} \pi_1\prt*{\nu_x\mres\prt*{(0,\infty)\times\R^d}} = \eta_x\mres(0,\infty) \end{equation} for almost every $x\in\R^{d+1}$. \end{lemma} \begin{proof} Let $\alpha>0$ be fixed. Let $\vec{\mu}_\alpha$ denote the Young measure generated by $z^\ep$ considered on the set $\{\rho^\ep>\alpha\}$. Then $\vec{\mu}_\alpha\in L^\infty_{\mathrm{weak}}(\R^{d+1};{\mathcal{P}}((\alpha,\infty)\times\R^d))$. Clearly $\vec{\mu}_\alpha = \vec{\nu}\mres\prt*{(\alpha,\infty)\times\R^d}$. Similarly define $\vec{\eta}_\alpha = \vec{\eta}\mres(\alpha,\infty)$. Then, as discussed above, we have $\pi_1\vec{\mu}_\alpha = \vec{\eta}_\alpha$.\\ Now choose a Borel set $A\subset(0,\infty)$ and denote $A^\alpha = A\cap(\alpha,\infty)$. Then, by definition, $\vec{\eta}_\alpha(A) = \vec{\eta}(A^\alpha)$.
Since the family $A^\alpha$ is non-decreasing as $\alpha\to0$ and $A = \bigcup_{\alpha> 0}A^\alpha$, we have $\vec{\eta}(A)=\lim_{\alpha\to0}\vec{\eta}(A^\alpha)$. Therefore the sequence $(\vec{\eta}_\alpha(A))_{\alpha>0}$ of extended reals converges as $\alpha\to0$ to $\vec{\eta}(A)$ for every Borel set $A$.\\ Similarly, $\pi_1\vec{\mu}_\alpha(A) = \vec{\mu}_\alpha(A\times\R^d) = \vec{\nu}(A^\alpha\times\R^d)$, and thus the sequence $(\pi_1\vec{\mu}_\alpha(A))_{\alpha>0}$ converges as $\alpha\to0$ to $\vec{\nu}(A\times\R^d) = \pi_1\vec{\nu}(A)$. Consequently, we must have $\vec{\eta}(A) = \pi_1\vec{\nu}(A)$ for every Borel set $A\subset(0,\infty)$. \end{proof} \subsection{Inequalities} \label{sec:appB} We provide here the statement and proof of the simple geometric inequalities which we used in the proof of the main theorem to identify certain weak limits. \begin{lemma} \label{lem:inegalites} Let $\vec{U}\in L^\infty(\R^{n};\R^d)$ be a bounded vector-valued function. Then for any vector $\vec{u}\in\R^d$ and $\delta>0$ small enough we have the following inequalities \begin{enumerate}[align=left] \item $\displaystyle\abs*{u_iu_j - U_iU_j} \leq c\delta + C_\delta\prt*{|u_i-U_i|^2+|u_j-U_j|^2},\quad\text{for any }\; i,j=1,\dots,d$, \item $\displaystyle \abs*{|\vec{u}|^2 - |\vec{U}|^2} \leq c\delta + C_\delta|\vec{u}-\vec{U}|^2$, \end{enumerate} where the positive constant $c$ depends only on $\norm{\vec{U}}_\infty$, and $C_\delta$ depends only on $\delta$ and $\norm{\vec{U}}_\infty$. \end{lemma} \begin{proof} First, we fix a point $y\in\R^{n}$ and consider the fixed vector $\vec{U} = \vec{U}(y)$; furthermore we choose $0<\delta<1$. Let us denote $p(\vec{u}) = u_iu_j-U_iU_j$ and consider the change of variables $v_i = u_i-U_i$ and $v_j=u_j-U_j$. Then we have \begin{equation} p(\vec{v}) = v_iv_j + U_jv_i + U_iv_j.
\end{equation} Now we observe that whenever $\min(|v_i|,|v_j|)>1$ we have \begin{equation} |p(\vec{v})| \leq (v_i^2 + v_j^2) + \norm{\vec{U}}_{\infty}v_i^2 + \norm{\vec{U}}_{\infty}v_j^2 \leq C(v_i^2 + v_j^2); \end{equation} while whenever $\max(|v_i|,|v_j|)>1$, then \begin{equation} |p(\vec{v})| \leq (v_i^2 + v_j^2) + 2\norm{\vec{U}}_{\infty}\max(v_i^2, v_j^2) \leq C(v_i^2 + v_j^2); \end{equation} and whenever $\max(|v_i|,|v_j|)\leq\delta$, then \begin{equation} |p(\vec{v})| \leq \delta^2 + 2\norm{\vec{U}}_{\infty}\delta \leq c\delta. \end{equation} In the remaining cases, we use continuity of the polynomial $p$:\ for the compact sets \begin{equation} X_1 = \{\delta\leq |v_i|, |v_j| \leq 1\},\;\;\; X_2 = \{\delta\leq |v_i| \leq1 , |v_j| \leq \delta\},\;\;\; X_3 = \{ |v_i|\leq \delta, \delta \leq |v_j| \leq 1\} \end{equation} there are finite constants (for $\alpha = 1,2,3$) \begin{equation} K^\alpha_\delta = \sup_{X_\alpha}\;|p(\vec{v})| < \infty, \end{equation} so that \begin{equation} |p(\vec{v})| \leq K^\alpha_\delta \leq \frac{K^\alpha_\delta}{\delta^2}\max(v_i^2, v_j^2) \leq C_\delta(v_i^2 + v_j^2). \end{equation} Altogether we obtain the first of the claimed inequalities. The second one follows from taking $i=j$ and summing over all indices $i=1,\dots,d$. \end{proof} \begin{lemma} \label{lem:inegalites2} Let $\vec{v}\in\R^d$ and consider the matrix $A = \vec{v}\otimes \vec{v}$. Then the $L^1$-norm of $A$ can be bounded by its trace: \begin{equation} |A| = \sum_{i,j=1}^d |a_{ij}| \leq c_d\; \mathrm{tr}(A) = c_d |\vec{v}|^2. \end{equation} \end{lemma} \begin{proof} From the elementary inequality \begin{equation} |v_iv_j| \leq \frac12\prt*{|v_i|^2 + |v_j|^2}, \end{equation} we have the bound \begin{equation} |a_{ij}| \leq \frac12(|a_{ii}| + |a_{jj}|).
\end{equation} \end{proof} This elementary inequality is used, for instance, in the calculation towards the relative energy inequality to bound the oscillation and concentration parts of the quadratic term $\rho(\vec{u}-\vec{U})\otimes(\vec{u}-\vec{U})$ by the ``kinetic'' term $\rho|\vec{u}-\vec{U}|^2$ of the relative energy. It is also used to deduce that $|m^{\vec{v}\otimes \vec{v}}| \leq m^{|\vec{v}|^2}$. \end{appendices} \section*{Acknowledgments} JAC was partially supported by EPSRC grant number EP/P031587/1 and the Advanced Grant Nonlocal-CPD (Nonlocal PDEs for Complex Particle Dynamics: Phase Transitions, Patterns and Synchronization) of the European Research Council Executive Agency (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 883363). TD was partially supported by National Science Centre, Poland, under agreement no UMO-2018/31/N/ST1/02394, the Polish National Agency of Academic Exchange (NAWA), and the Foundation for Polish Science. A\'{S}-G and PG were supported by National Science Centre, Poland, under agreement no UMO-2017/27/B/ST1/01569.
\section{Introduction} The AdS/CFT correspondence, an exact duality between quantum gravity on a $(d{+}1)$-dimensional asymptotically-AdS space and a $d$-dimensional CFT defined on its boundary, has significantly advanced our understanding of quantum gravity, as well as provided a powerful framework for studying strongly-coupled quantum field theories. One aspect of this duality is a remarkable relationship between geometry and entanglement. This notion first appeared in the proposal \cite{Maldacena03} that two entangled CFTs have a bulk dual connecting them through a wormhole, and was later quantified by Ryu and Takayanagi via their proposal that entanglement entropy in the CFT is computed by the area of a certain minimal surface in the bulk geometry \cite{Ryu06, Ryu06b}. This latter proposal, known as the Ryu-Takayanagi (RT) formula, has led to much further work on sharpening the connection between geometry and entanglement \cite{Hubeny07, Headrick07, Raamsdonk09, Raamsdonk10, Hayden13b, Lewkowycz13, Maldacena13, Lashkari14}. In the condensed matter physics community, improved understanding of quantum entanglement has led to significant progress in the numerical simulation of emergent phenomena in strongly-interacting systems. A key ingredient of such algorithms is the use of tensor networks to efficiently represent quantum many-body states~\cite{Vidal03b, Verstraete04b, Verstraete08}. Vidal combined this idea with entanglement renormalization to formulate the Multiscale Entanglement Renormalization Ansatz (MERA)~\cite{Vidal07, Vidal08}, a family of tensor networks that efficiently approximate wave functions with long-range entanglement of the type exhibited by ground states of local scale-invariant Hamiltonians~\cite{Evenbly09, Evenbly09b, Evenbly10}. The key idea is to represent entanglement at different length scales using tensors in a hierarchical array.
In the AdS/CFT correspondence, the emergent radial direction can be regarded as a renormalization scale \cite{Susskind1998}, and spatial slices have a hyperbolic geometry resembling the exponentially growing tensor networks of MERA. This similarity between AdS/CFT and MERA was pointed out by Swingle, who argued that some physics of the AdS/CFT correspondence can be modeled by a MERA-like tensor network where quantum entanglement in the boundary theory is regarded as a building block for the emergent bulk geometry~\cite{Swingle12, Swingle12b}. Recently it has been argued in ~\cite{Almheiri14} that the emergence of bulk locality in AdS/CFT can be usefully characterized in the language of quantum error-correcting codes. Certain paradoxical features of the correspondence arise naturally by interpreting bulk local operators as logical operators on certain subspaces of states in the CFT, whose entanglement structure protects these operators from boundary erasures. Moreover, inspired by \cite{Swingle12, Swingle12b}, it was suggested that there should be tensor network models that concretely implement these ideas. In this paper, we propose such a family of exactly solvable toy models of the bulk/boundary correspondence based on a novel tensor-network construction of quantum error-correcting codes. Other authors have recently used holographic ideas~\cite{Yoshida2013,Latorre2015} and related tensor network constructions~\cite{Ferris14,Bacon14} to build quantum codes with interesting properties or toy models of the bulk/boundary correspondence~\cite{Qi13}, but our approach differs from previous work by combining the following properties, all of which are desirable for a model of AdS/CFT: \begin{itemize} \item \tb{Exactly solvable:} Many of the properties of our models can be shown explicitly. 
In particular, an exact prescription for mapping bulk operators to boundary operators can be obtained, and we can give examples where the Ryu-Takayanagi formula holds exactly for all connected boundary regions. \item \tb{QECC:} Our models are quantum error-correcting codes, where the bulk/boundary legs of the tensor network correspond to input/outputs of an encoding quantum circuit. In this sense they realize explicitly the proposal of \cite{Almheiri14}. \item \tb{Bulk uniformity:} The tensor network is supported on a uniform tiling of a hyperbolic space, known as a hyperbolic tessellation. If the tiling is extended to an infinite system, the tensor network has no inherent directionality and all the locations in the bulk can be treated on an equal footing (see Fig.~\ref{fig:HolographicPentagonCode}). \end{itemize} The rest of this paper is organized as follows: In section~\ref{sec:perfect}, we introduce a class of tensors called perfect tensors, which are associated with pure quantum states of many spins such that the entanglement is maximal across any partition of the spins into two sets of equal size. In section~\ref{sec:model}, we construct holographic states and codes by building networks of perfect tensors. These codes have properties reminiscent of the AdS/CFT correspondence, elucidated in the rest of the paper, where the code's logical/physical degrees of freedom are interpreted as the bulk/boundary degrees of freedom of a CFT with a gravitational dual. In section~\ref{sec:state}, we study the entanglement structure of holographic states, showing that the Ryu-Takayanagi formula is exactly satisfied for any connected boundary region, developing a graphical representation of multipartite entanglement, and confirming the negativity of tripartite information~\cite{Hayden13b}. 
In section~\ref{sec:code}, we investigate the dictionary relating bulk and boundary observables, define a lattice version of the causal wedge, and explain how bulk local operators in the causal wedge can be reconstructed on the boundary; we also define a lattice version of the entanglement wedge, and offer evidence supporting the entanglement wedge hypothesis proposed in ~\cite{Headrick2014, Wall2012, Czech2012}, see also \cite{Jafferis2014}. We briefly discuss how to describe black holes using holographic codes in section~\ref{sec:black}. Section~\ref{sec:conclude} contains our conclusions, and many details appear in the appendices. \section{Isometries and perfect tensors}\label{sec:perfect} In this section we review some tools which will be used in our constructions of holographic states and codes. We begin with a standard definition: \begin{definition} Say $\mathcal{H}_A$ and $\mathcal{H}_B$ are two Hilbert spaces, not necessarily of the same dimensionality. An \textbf{isometry} from $\mathcal{H}_A$ to $\mathcal{H}_B$ is a linear map $T:\mathcal{H}_A\mapsto\mathcal{H}_B$ with the property that it preserves the inner product. \end{definition} If $\mathcal{H}_A$ and $\mathcal{H}_B$ have finite dimensionality, as we will assume throughout this paper, then it immediately follows that such a $T$ can exist only if their dimensionalities $\textrm{dim}(A)$ and $\textrm{dim}(B)$ obey $\textrm{dim}(A)\leq \textrm{dim}(B)$. In the special case where $\textrm{dim}(A) = \textrm{dim}(B)$, $T$ is just a unitary transformation. Clearly the composition of two isometries is also an isometry. If $T:\mathcal{H}_A\mapsto\mathcal{H}_B$ is an isometry, then $T^\dagger T$ is the identity on $\mathcal{H}_A$ and $TT^\dagger$ is a projector mapping $\mathcal{H}_B$ to the range of $T$. 
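These defining properties are easy to verify numerically for a generic isometry. The sketch below (illustrative code, not from the paper) builds $T$ from a QR decomposition and checks that $T^\dagger T = I_A$, that $TT^\dagger$ is the projector onto the range of $T$, and that an operator on the input can be traded for $O' = TOT^\dagger$ on the output:

```python
import numpy as np

rng = np.random.default_rng(1)
dA, dB = 3, 7  # dim(A) <= dim(B)

# A generic isometry T : H_A -> H_B from the QR decomposition of a random
# complex matrix; the columns of T are orthonormal.
M = rng.normal(size=(dB, dA)) + 1j * rng.normal(size=(dB, dA))
T, _ = np.linalg.qr(M)

# T preserves inner products: T^dagger T = I_A.
assert np.allclose(T.conj().T @ T, np.eye(dA))

# T T^dagger is the orthogonal projector onto the range of T.
P = T @ T.conj().T
assert np.allclose(P @ P, P) and np.allclose(P.conj().T, P)

# An operator O on the input can be traded for O' = T O T^dagger on the
# output, since T O = (T O T^dagger) T.
O = rng.normal(size=(dA, dA)) + 1j * rng.normal(size=(dA, dA))
Oprime = T @ O @ T.conj().T
assert np.allclose(T @ O, Oprime @ T)
```

The last check is a numerical instance of the pushing identity $TO = (TOT^\dagger)T$, which holds for every isometry.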
We may represent the map $T$ as a two-index tensor acting as \begin{equation} T:|a\rangle \mapsto \sum_b |b\rangle T_{ba}, \end{equation} where $\{|a\rangle\}$ denotes a complete orthonormal basis for $\mathcal{H}_A$ and $\{|b\rangle\}$ for $\mathcal{H}_B$. Then $T$ is an isometry if and only if \begin{equation}\label{iso} \sum_b T^{\dagger}_{a'b}T_{ba}=\delta_{a'a}. \end{equation} We represent this graphically in figure \ref{isofig}, following the convention that operators are ordered from left to right, so that in the figure $T^\dagger$ is applied after $T$. We will call a tensor obeying \eqref{iso} an \textit{isometric tensor}. \begin{figure}[htb!] \centering \includegraphics[height=1.5cm]{isometry} \caption{Diagrammatic tensor notation, here showing that $T$ is an isometry.}\label{isofig} \end{figure} Isometric tensors have the property that any operator $O$ acting on its ``incoming'' leg can be replaced by an equal-norm operator $O'$ acting on its ``outgoing'' leg, because \begin{equation} TO = TOT^\dagger T = (TOT^\dagger)T \equiv O'T; \end{equation} we illustrate this property in figure \ref{push}. \begin{figure}[htb!] \centering \includegraphics[height=3cm]{push} \caption{Operator pushing through an isometric tensor.}\label{push} \end{figure} This operation is essential for what follows, and we will often describe it as ``pushing an operator through a tensor''. It is also easy to check a useful converse of operator pushing: If the two-index tensor $T$ has the property that any \textit{unitary} transformation $U$ contracted with its incoming index can be replaced by a corresponding \textit{unitary} transformation $U'$ contracted with its outgoing index (\textit{i.e.}, $TU = U'T$), then $T$ obeys \eqref{iso} up to a scalar factor, and therefore must be proportional to an isometric tensor. \begin{figure}[htb!]
\centering \includegraphics[height=1.5cm]{Isometry2} \caption{If $\mathcal{H}_A=\mathcal{H}_{A_2}\otimes \mathcal{H}_{A_1}$, then we can move one of the factors to the output while preserving the isometric structure.}\label{shrink} \end{figure} Another important property of isometric tensors is that if the input Hilbert space factorizes, we may reinterpret an input factor as an output factor while preserving \eqref{iso}, up to an overall rescaling. That is, if $T: \mathcal{H}_{A_2}\otimes \mathcal{H}_{A_1}\mapsto \mathcal{H}_B $ is an isometric map, acting on a basis according to \begin{align} T:|a_2 a_1\rangle \mapsto \sum_b |b\rangle T_{ba_2a_1}, \end{align} then $\tilde T:\mathcal{H}_{A_1}\mapsto\mathcal{H}_B \otimes \mathcal{H}_{A_2}$ acting as \begin{align} \tilde T:|a_1\rangle \mapsto \sum_{ba_2} |ba_2\rangle T_{ba_2a_1} \end{align} obeys $\tilde T^\dagger \tilde T = \textrm{dim}(A_2) I_{A_1}$. We illustrate this property in figure \ref{shrink}. In this paper we will be interested in a special class of isometric tensors, which we will call perfect tensors. To formulate the concept of a perfect tensor, first note that we may divide the $m$ indices of a tensor $T_{a_1 a_2 \ldots a_m}$ into a set $A$ and a complementary set $A^c$. We use $|A|$ to denote the cardinality of the set $A$; hence $|A|+|A^c| = m$. Then $T$ may be regarded as a linear map from the span of the indices in $A$ to the span of the indices in $A^c$. We will usually assume that each index ranges over $v$ values, and we will use $A$ to denote both the set of $|A|$ indices and the corresponding vector space with dimension $v^{|A|}$; thus we say $T$ maps $A$ to $A^c$. \begin{definition} A $2n$-index tensor $T_{a_1a_2\ldots a_{2n}}$ is a \textbf{perfect tensor} if, for any bipartition of its indices into a set $A$ and complementary set $A^c$ with $|A|\leq |A^c|$, $T$ is proportional to an isometric tensor from $A$ to $A^c$. 
\end{definition} It is not obvious that nontrivial perfect tensors exist, but they do! Note that for $T$ to be perfect it suffices for $T$ to be a unitary transformation when $|A|=|A^c| = n$; in that case the property illustrated in figure \ref{shrink} ensures that $T$ is proportional to an isometric tensor for $|A|< n$. In Appendix \ref{App:PerfectTensorExamples} we describe perfect tensors explicitly for the case $n=3$, $v=2$ and for the case $n=2$, $v=3$; other cases with larger $n$ and $v$ are also discussed there. To keep our discussion concrete, we will focus especially on the six-index tensor for qubits ($v=2$)\footnote{This can be obtained from the encoding map of the 5-qubit code.}, but much of what we say applies to arbitrary $2n$-index perfect tensors. Perfect tensors are related to other notable ideas in quantum information theory. In general, a tensor $T$ with $m$ indices, each ranging over $v$ values, describes a pure quantum state $|\psi\rangle$ of $m$ $v$-dimensional spins, where, up to a normalization factor, \begin{equation} |\psi\rangle = \sum_{a_1,a_2, \ldots, a_m} T_{a_1a_2 \ldots a_m} |a_1a_2 \ldots a_m\rangle. \end{equation} A perfect tensor describes a pure state of $2n$ spins with a special property --- any set of $n$ spins is maximally entangled with the complementary set of $n$ spins. Such states have been called \textit{absolutely maximally entangled} (AME) states \cite{Helwig2012,Helwig2013}. Conversely, any AME state defines a perfect tensor. Regarded as a linear map from one spin to $2n-1$ spins, a perfect tensor is the isometric encoding map of a quantum error-correcting code which encodes a single logical spin in a block of $2n-1$ physical spins, where the logical spin is protected against the erasure of any $n-1$ physical spins. Because $n$ is more than half of all the physical spins, this is the best possible protection against erasure errors compatible with the no-cloning principle.
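For illustration, one well-known AME state of four qutrits (the $n=2$, $v=3$ case) is $|\psi\rangle\propto\sum_{i,j\in\mathbb{Z}_3}|i,\,j,\,i{+}j,\,i{+}2j\rangle$ with arithmetic mod 3. This particular tensor is a standard construction, which may differ from the appendix's example by local basis changes; a short numerical check of perfection:

```python
import numpy as np
from itertools import combinations

# T_{ijkl} = 1 iff k = i+j and l = i+2j (mod 3): a standard AME(4,3) state.
v = 3
T = np.zeros((v, v, v, v))
for i in range(v):
    for j in range(v):
        T[i, j, (i + j) % v, (i + 2 * j) % v] = 1.0
T /= np.sqrt(v * v)  # normalize the 9 unit amplitudes

# Perfection: for every bipartition into two pairs of legs, the reduced
# density matrix on the pair is maximally mixed (all eigenvalues 1/9).
for pair in combinations(range(4), 2):
    rest = tuple(k for k in range(4) if k not in pair)
    M = np.transpose(T, pair + rest).reshape(v * v, v * v)
    rho = M @ M.conj().T  # partial trace over the complementary pair
    assert np.allclose(rho, np.eye(v * v) / (v * v))
```

Perfection here reduces to the fact that each of the index maps $(i,j)\mapsto(k,l)$, $(i,k)\mapsto(j,l)$, $(i,l)\mapsto(j,k)$ is a bijection on $\mathbb{Z}_3^2$.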
In coding terminology this code has \textit{distance} $n$ and is denoted $[[m,k,d]]_v=[[2n-1,1,n]]_v$, where $m$ is the number of physical spins in the code block, $k$ is the number of protected logical spins, and $d$ is the code distance. This code is also the basis for a quantum-secret-sharing scheme called an $((n,2n-1))$ threshold scheme \cite{Cleve1999}; code states have the property that a party holding any $n-1$ spins has no information about the logical spin, while a party holding any $n$ spins has complete information about the logical spin (because erasure of the remaining $n-1$ spins is correctable). \section{Construction of holographic quantum states and codes}\label{sec:model} We have seen how tensors can be interpreted as quantum states or quantum codes. In this section we construct tensor networks in which the fundamental building blocks are perfect tensors. Our tensor networks describe states which we call \textit{holographic states}, and codes which we call \textit{holographic codes}. We shall focus on examples based on uniform tilings (tessellations) of two-dimensional hyperbolic space. These tilings have desirable symmetries for constructing a toy model of the AdS/CFT correspondence. In particular they are discretely scale-invariant, and there exist graph isomorphisms that bring any point in the graph to the center while preserving the local structure of the tiling.\footnote{Such transformations can be directly visualized using \emph{Kaleidotile} software~\cite{Kaleidotile}, which is freely available and has been of great aid in developing geometric intuition and producing figures of uniform hyperbolic tilings in this paper.} The machinery we develop may also be straightforwardly applied to non-uniform and higher-dimensional graphs.
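The same four-qutrit tensor used above, read as a map from one leg to three, gives a $[[3,1,2]]_3$ code realizing the $((2,3))$ threshold scheme. The sketch below (again using the illustrative mod-3 tensor, not necessarily the appendix's) checks that the encoder is an isometry and that any single share reveals nothing about the secret:

```python
import numpy as np

# The mod-3 tensor T_{ijkl} = 1 iff k = i+j, l = i+2j, with leg 0 as the
# logical input and legs 1-3 as physical outputs: a [[3,1,2]]_3 encoder.
v = 3
T = np.zeros((v, v, v, v))
for i in range(v):
    for j in range(v):
        T[i, j, (i + j) % v, (i + 2 * j) % v] = 1.0

E = T.reshape(v, v ** 3).T / np.sqrt(v)  # 27 x 3 encoding map, normalized

# Encoded states keep their inner products: E^dagger E = I.
assert np.allclose(E.conj().T @ E, np.eye(v))

# ((2,3)) threshold property: the reduced state of any single physical
# qutrit is maximally mixed, for basis inputs and for a random
# superposition, so one share carries no information about the secret.
rng = np.random.default_rng(4)
c = rng.normal(size=v) + 1j * rng.normal(size=v)
c /= np.linalg.norm(c)
inputs = [np.eye(v)[k] for k in range(v)] + [c]
for secret in inputs:
    psi = (E @ secret).reshape(v, v, v)
    for leg in range(3):
        M = np.moveaxis(psi, leg, 0).reshape(v, v * v)
        assert np.allclose(M @ M.conj().T, np.eye(v) / v)
```

Complete information from any two shares follows from correctability of the erasure of the remaining qutrit, as stated above.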
\begin{figure} \centering \subfloat[Holographic hexagon state]{ \includegraphics[width=0.4\linewidth]{HexagonState2} \label{fig:HolographicHexagonState}} \hspace{1cm} \subfloat[Holographic pentagon code]{ \includegraphics[width=0.4\linewidth]{PentagonCodeCentered} \label{fig:HolographicPentagonCode}} \caption{White dots represent physical legs on the boundary. Red dots represent logical input legs associated to each perfect tensor. }\label{fig:HolographicStateAndCode} \end{figure} Let's first consider a uniform tiling of a two-dimensional hyperbolic space by hexagons, with four hexagons adjacent at each vertex, as depicted in Fig~\ref{fig:HolographicHexagonState}. A perfect tensor with six legs is placed at each hexagon, and legs of perfect tensors are contracted with neighboring tensors at shared edges of the hexagons. We associate physical spins with the uncontracted open tensor legs on the boundary of the hyperbolic tiling; the tensor network corresponds to a pure state of these boundary spins, which we call a \emph{holographic state}. Note that perfect tensors are not necessarily symmetric under all the possible permutations of tensor legs, and thus we specify some particular ordering of tensor legs in the construction. We may similarly attach a state interpretation to more general networks constructed by contracting perfect tensors: \begin{definition} Consider a tensor network composed of perfect tensors which cover some geometric manifold with boundary, where all the interior tensor legs are contracted. A \textbf{holographic state} is a state interpretation of such a tensor network, where physical degrees of freedom are associated with all uncontracted legs at the boundary of the manifold. \end{definition} We now provide an example of a holographic quantum code. As in a holographic state, we consider a uniform tiling of the hyperbolic disc, this time by pentagons, with four pentagons adjacent at each vertex. 
A perfect tensor with six legs is placed at each pentagon, so that each tensor has one additional uncontracted open leg. This additional tensor leg is interpreted as a bulk index or logical input for the tensor network (see Fig.~\ref{fig:HolographicPentagonCode}). The entire system can be viewed as a big tensor with logical legs in the bulk and physical legs on the boundary. We then have the following theorem: \begin{theorem} The pentagon-tiling tensor network is an isometric tensor from the bulk to the boundary. We call it the holographic pentagon code. \end{theorem} We can prove this theorem by noting that if we order the tensors into layers labeled by increasing graph distance from the center, each tensor has at most two legs contracted with the tensors at the previous layer (this property is a consequence of the ``negative curvature'' of the graph). Therefore, even if we regard the pentagon's bulk logical index as an input leg, the total number of input legs is at most three, and we may therefore regard each tensor as an isometry from input legs to output legs. Applying the perfect tensors layer by layer, and recalling that the product of isometries is an isometry, we obtain an isometry mapping all the logical indices in the bulk to the physical indices on the boundary. We can view this isometry as the encoding transformation of a quantum error-correcting code, which we call a \emph{holographic code}. The number of logical $v$-dimensional spins is the number $N_{\rm bulk}$ of pentagons in the tiling, and the number of physical $v$-dimensional spins in the code block is the number $N_{\rm boundary}$ of uncontracted boundary indices in the tensor network. 
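The only facts used in this layer-by-layer argument are that each tensor acts as an isometry from its input legs to its output legs, and that a composition of isometries is again an isometry. A toy numerical analogue (the dimensions below are illustrative, not derived from the pentagon tiling):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_isometry(d_in, d_out, rng):
    """A generic isometry (d_in <= d_out) from a QR decomposition."""
    Q, _ = np.linalg.qr(rng.normal(size=(d_out, d_in)))
    return Q

# Each "layer" maps everything accumulated so far isometrically outward;
# the growing dimensions stand in for the exponentially growing boundary
# of a hyperbolic tiling.
dims = [2, 5, 11, 23]
T = np.eye(dims[0])
for d_in, d_out in zip(dims[:-1], dims[1:]):
    T = random_isometry(d_in, d_out, rng) @ T

# The composite bulk-to-boundary map is still an isometry.
assert np.allclose(T.conj().T @ T, np.eye(dims[0]))
```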
We show in Appendix \ref{App:CountingTensors} that the rate of the code, meaning the ratio of the number of logical spins to the number of physical spins, approaches \begin{align}\label{eq:RatioCount} \frac{N_{\rm bulk}}{N_{\rm boundary}}\to \frac{1}{\sqrt{5}}\approx 0.447 \end{align} in the limit of a large number of layers. This pentagon code was constructed by successively adding layers of tensors starting from the center and stopping after repeating this procedure a certain number of times (two layers in figure \ref{fig:HolographicPentagonCode}). Alternatively, we may fill the bulk using a non-uniform cutoff, so that the graph distance between the ``center'' and the boundary varies from one portion of the boundary to another (as occurs in figure \ref{fig:HolographicHexagonState}). By exploiting this freedom, we may change the value \eqref{eq:RatioCount} of the rate of the code and even increase it slightly. By varying the choice of perfect tensor and the shape of the cutoff, a large family of holographic codes can be constructed: \begin{definition} Consider a tensor network composed of perfect tensors which cover some geometric manifold with boundaries. The tensor network is called a \textbf{holographic code} if it gives rise to an isometric map from uncontracted bulk legs to uncontracted boundary legs. \end{definition} Tensor networks with open legs in the bulk were first proposed by Vidal~\cite{Vidal08}. More recently, Qi~\cite{Qi13} constructed a tensor-tree model with an exact unitary mapping between the bulk and the boundary. The most important difference between their models and ours is that their states are not protected against erasure of physical spins, because the code rate is asymptotically unity. In addition, our models are more symmetric; since perfect tensors can be interpreted as isometries along any direction, our models have no preferred direction in the bulk and all bulk sites are treated equally.
In particular, the pentagon code has the nice feature that the 6-leg perfect tensor we construct in appendix \ref{App:PerfectTensorExamples} is symmetric under cyclic permutations of five of its legs; taking these five to be the contracted legs, the symmetry of the network is just the full symmetry of the graph. \section{Entanglement structure of holographic states}\label{sec:state} In this section we explore to what extent holographic states reproduce key properties of the AdS/CFT correspondence, such as the Ryu-Takayanagi formula for entropy of a boundary region \cite{Ryu06} and the negativity of tripartite information \cite{Hayden13b}. \subsection{Ryu-Takayanagi formula} The Ryu-Takayanagi (RT) formula says that for a CFT whose gravitational dual is well-approximated by Einstein gravity at low energies, in any static state with a geometric bulk description the entropy $S_A$ of a boundary subregion $A$ at fixed time obeys \begin{equation} S_A=\frac{\mathrm{Area}(\gamma_A)}{4G}; \end{equation} here $G$ is Newton's constant and $\gamma_A$ is the minimal-area codimension-two bulk surface whose boundary matches the boundary $\partial A$ of $A$. In our examples the bulk theory is $2+1$ dimensional, so $\gamma_A$ will be a spacelike bulk geodesic whose ``area'' is just defined as its length. In our discrete setting, we will define $\gamma_A$ as a certain \textit{cut} through the tensor network which partitions it into two disjoint sets of perfect tensors. Associated with a cut $c$ is a decomposition of the tensor network as a contraction of two tensors $P$ and $Q$, where the contracted legs lie along the cut; the number of contracted legs is called the \textit{length} of $c$, denoted $|c|$. If $A$ is a set of boundary legs and $A^c$ is the complementary set of boundary legs, then we say that the boundary of the cut $c$ matches the boundary of $A$ if the uncontracted legs of $P$ are the legs of $A$, and the uncontracted legs of $Q$ are the legs of $A^c$.
The \textit{minimal bulk geodesic bounded by $A$}, $\gamma_A$, is then defined as the cut $c$ of shortest length whose boundary matches the boundary of $A$. We use $P$ to denote, not just the tensor associated with one side of the cut, but also the set of bulk lattice sites corresponding to the perfect tensors which are contracted to construct $P$; likewise for $Q$. We note that $P$ or $Q$ might have more than one connected component, and so might $\gamma_A$ when regarded as a path in the dual graph. A standard argument for tensor network representations of quantum states shows that $|\gamma_A|$ provides an \textit{upper bound} on $S_A$. If $P$ and $Q$ are the tensors associated with a cut $c$ whose boundary matches the boundary of $A$, then the holographic state $|\psi\rangle$ may be expressed (up to normalization) as \begin{equation}\label{eq:RQ-schmidt} |\psi\rangle=\sum_{a,b,i}|ab\rangle P_{ai}Q_{bi} \equiv \sum_i |P_i\rangle_A\otimes |Q_i\rangle_{A^c}. \end{equation} Here $a$ and $b$ run over complete bases for $A$ and $A^c$ respectively, and $i$ runs over all possible values of the indices contracted along $c$; the vectors $\{|P_i\rangle\}$ in $\mathcal{H}_A$ and the vectors $\{|Q_i\rangle\}$ in $\mathcal{H}_{A^c}$ are not necessarily orthogonal or normalized. (See figure \ref{RQfig}.) \begin{figure}[htb!] \centering \includegraphics[width=0.8\linewidth]{TwoTensorCut} \caption{A cut through a holographic tensor network by a curve $c$ bounded by $\partial A$. Boundary indices $a$ and $b$ are uncontracted in $A$ and its complement $A^c$ respectively; tensors $P$ and $Q$ are contracted by summing over the index $i$ which is cut by $c$. }\label{RQfig} \end{figure} Tracing out $A^c$ we obtain (up to normalization) the density operator on $A$: \begin{equation}\label{rhoA} \rho_A=\sum_{i,i'} \langle Q_{i'}|Q_i\rangle |P_i\rangle \langle P_{i'}|. \end{equation} Evidently the rank of $\rho_A$ is at most the number of terms in the sum over $i$, namely $v^{|c|}$. 
A density operator of given rank with maximal von Neumann entropy is proportional to the identity on its support, and has entropy equal to the logarithm of its rank. We obtain the best bound by choosing the cut $c=\gamma_A$ with the shortest length: \begin{equation}\label{upperbound} S_A\leq |\gamma_A| \cdot \log v. \end{equation} In most of what follows, we will define entropy by taking logs with base $v$, and so suppress the $\log v$ factor. If the tensors $P$ and $Q$ are actually isometries from $i$ to $a$ and $b$ respectively, then $\{|P_i\rangle\}$ and $\{|Q_i\rangle\}$ are sets of orthonormal vectors; in that case \eqref{upperbound} is saturated and a discrete analogue of the RT formula holds exactly. Under what conditions will $P$ and $Q$ be isometries? We can prove the following theorem: \begin{theorem}\label{RTth} Suppose that we have a holographic state associated to a simply-connected planar tensor network of perfect tensors, whose graph has ``non-positive curvature''.\footnote{The scalar curvature of a graph is somewhat tricky to define in general; the condition we really need here is that the distance functional from one point on the dual network to another does not have interior local maxima. } Then for any connected region $A$ on the boundary, we have $S_A=|\gamma_A|$; in other words, the lattice RT formula holds. \end{theorem} The strategy of the proof is to show that $P$ and $Q$ can in fact be interpreted as \textit{unitary} transformations, from the cut together with some subregion of $A$ or $A^c$ to the rest of $A$ or $A^c$ respectively. We can then use the identity depicted in figure \ref{shrink} to re-interpret these transformations as isometries from the cut to $A$ and from the cut to $A^c$ respectively; the RT formula follows.
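The saturation of the bound when $P$ and $Q$ are isometries can be illustrated numerically. In the sketch below (an illustration with arbitrary dimensions, not tied to any particular tiling), $P$ and $Q$ are random isometries from the cut index to $A$ and $A^c$, and the entropy of $\rho_A$ comes out exactly $|c|\log v$:

```python
import numpy as np

# Illustrative dimensions: spin dimension v, cut length |c|, and the
# dimensions of A and A^c (both at least as large as the cut).
rng = np.random.default_rng(0)
v, cut_len = 2, 3
d_cut = v**cut_len
d_A, d_Ac = 32, 64

def random_isometry(d_out, d_in, rng):
    # QR decomposition of a random matrix yields orthonormal columns,
    # i.e. an isometry from a d_in-dimensional space into d_out dimensions.
    q, _ = np.linalg.qr(rng.normal(size=(d_out, d_in)))
    return q

P = random_isometry(d_A, d_cut, rng)    # P_{a i}
Q = random_isometry(d_Ac, d_cut, rng)   # Q_{b i}

# |psi> = sum_i |P_i>_A (x) |Q_i>_{A^c}, normalized
psi = np.einsum('ai,bi->ab', P, Q)
psi /= np.linalg.norm(psi)

rho_A = psi @ psi.conj().T              # trace out A^c
evals = np.linalg.eigvalsh(rho_A)
evals = evals[evals > 1e-12]
S_A = -np.sum(evals * np.log(evals))
assert np.isclose(S_A, cut_len * np.log(v))   # S_A = |c| log v exactly
```

With non-isometric $P$ and $Q$ the same construction still respects the upper bound, since the rank of $\rho_A$ cannot exceed $v^{|c|}$.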
The key to the argument, explained in appendix \ref{app:PlanarGraphProof}, is using a strengthened version of the max-flow min-cut theorem (which is standard in graph theory~\cite{Papadimitriou1998}) to establish that the tensor network representations of $P$ and $Q$ can be interpreted as unitary quantum circuits. \subsection{Bipartite entanglement of disconnected regions}\label{subsec:multiple-regions} Unfortunately the proof of Theorem \ref{RTth} does not directly generalize to a disconnected region $A$, nor even to connected regions for states, such as our holographic code states, where not all perfect tensor indices are contracted in the bulk. We do not consider this to be a serious problem for our models, since we expect the RT formula to remain at least approximately valid in these cases. Nevertheless, we find it worthwhile to introduce some machinery that allows us to quantify this presumption. The first technique we will introduce is an algorithmic procedure for constructing, given a boundary region $A$, a bulk curve $\gamma^\star_A$ bounded by $\partial A$ such that the corresponding tensor $P$ is guaranteed to be an isometry. For a holographic state the isometry $P$ maps $\gamma^\star_A$ to $A$, and for a holographic code $P$ maps $\gamma^\star_A$ and all incoming bulk indices of $P$ to $A$. Furthermore, $\gamma^\star_A$ is a \textit{local} minimum of the length, in the sense that no single tensor can be added to or removed from $P$ which reduces the length of the cut. The algorithm makes essential use of the properties of perfect tensors and is quite simple. We consider a sequence of cuts $\{c_\alpha\}$ each bounded by $\partial A$, and a corresponding sequence of isometries $\{P_\alpha\}$, such that each cut in the sequence is obtained from the previous one by a local move on the bulk lattice. The sequence begins with the trivial cut, $A$ itself; in each step we identify one perfect tensor which has at least half of its legs contracted with $P_\alpha$ and construct $P_{\alpha +1}$ by adding this perfect tensor to $P_\alpha$.
Thus $P_{\alpha+1}$ is obtained by composing $P_\alpha$ with an isometry defined by a perfect tensor, and therefore $P_{\alpha+1}$ is an isometry if $P_\alpha$ is. The procedure halts when the cut reaches $\gamma^\star_A$ and no further local moves are possible. Though many different sequences of local moves are allowed, $\gamma^\star_A$ is well defined; tensors eligible for inclusion in $P_{\alpha+1}$ remain so as other tensors are included, so the output of the algorithm does not depend on the order of inclusion. Following standard computer science terminology, we call this procedure the \textit{greedy algorithm} and call $\gamma^\star_A$ the \textit{greedy geodesic}. A step of the greedy algorithm is illustrated in figure \ref{greedyfig}. \begin{figure}[htb!] \centering \includegraphics[height=2cm]{greedy.pdf} \caption{A step in the greedy algorithm. The upper node has at least three legs contracted with the region $P$, which we have shaded red, so we include it into $P$.}\label{greedyfig} \end{figure} \begin{figure}[htb!] \centering \includegraphics[width=0.7\linewidth]{RTProofCounterexamples} \caption{Three examples where the greedy algorithm fails to find the matching minimal geodesics from complementary regions. The first example involves disconnected regions in the holographic state. The second example involves a positive curvature obstruction at the center of the tiling which blocks the greedy geodesic from reaching the global minimal surface. The third example involves a connected region for the holographic code. In both the first and the third figure the greedy algorithm finds minimal geodesics from both sides but they do not match. In both cases, it is possible for the entropy to be slightly smaller than the length of the geodesic. 
Whether this occurs depends on the tensors which were not absorbed by either greedy geodesic, constituting what we call the bipartite residual region.} \label{fig:RTcounterexamplesd} \end{figure} When the assumptions of Theorem \ref{RTth} are satisfied, the argument in appendix \ref{app:PlanarGraphProof} ensures that the greedy algorithm will find a true minimal geodesic $\gamma_A$. If there is more than one minimal geodesic, as is sometimes the case, then the greedy algorithm might continue past a minimal geodesic and proceed through minimal geodesics of equal length. In that case, the tensors in between the successive geodesics define a unitary transformation from one cut to the other. If $A$ has more than one connected component, if there is positive curvature, or if there are uncontracted bulk indices as for a holographic code, the greedy algorithm does not necessarily succeed in finding matching minimal geodesics, as we illustrate in figure \ref{fig:RTcounterexamplesd}. In cases where the greedy algorithm fails to find a minimal geodesic, we can still use it to prove an interesting \textit{lower} bound on the entropy $S_A$. Suppose that $\gamma^\star_A$ and $\gamma^\star_{A^c}$ are two greedy geodesics, produced by applying the greedy algorithm to $A$ and its complement $A^c$ respectively, where $P$ and $Q$ are the corresponding tensors. Furthermore, suppose that $\gamma^\star_A\cap\gamma^\star_{A^c}$ is non-empty, in the sense that some links are cut by both geodesics. We can then represent the state as\footnote{For holographic codes with dangling bulk legs, we assume for now that a product state is fed into all bulk legs. If the input bulk state were entangled instead, there would be additional contributions to the boundary entanglement which we are not including.
This same proviso also applies to the discussion in the following subsection.} \begin{equation} |\psi\rangle=\sum_{a,b,i,j,k} |ab\rangle P_{a,ij} Q_{b,ik}S_{jk}\equiv \sum_{i,j,k} S_{jk}|P_{ij}\rangle_A\otimes |Q_{ik}\rangle_{A^c}. \end{equation} Here $i$ denotes the index shared between $\gamma^\star_A$ and $\gamma^\star_{A^c}$, $j$ is the index unique to $\gamma^\star_A$, $k$ is the index unique to $\gamma^\star_{A^c}$, and $S$ denotes the tensor that sits ``in between'' $\gamma^\star_A$ and $\gamma^\star_{A^c}$. We call the set of lattice sites in $S$ the \textit{bipartite residual region} (where the modifier ``bipartite'' distinguishes it from the \textit{multipartite residual region} to be discussed in section \ref{subsec-multi-eng}). Because $P$ and $Q$ are isometries, both $\{|P_{ij}\rangle\}$ and $\{|Q_{ik}\rangle\}$ are sets of orthonormal vectors. Therefore, the marginal density operator for $A$ is \begin{equation} \rho_A=\sum_{i,j,j',k}S_{jk}S_{j'k}^*|P_{ij}\rangle\langle P_{ij'}|. \end{equation} This density operator has support on the subspace of $A$ spanned by $\{|P_{ij}\rangle\}$, which has dimension $v^{|\gamma^\star_A|}$, and this subspace has a decomposition into subsystems $A_1\otimes A_2$ such that the basis element $|P_{ij}\rangle$ may be expressed as $|i\rangle_{A_1}\otimes |j\rangle_{A_2}$, where $\{|i\rangle\}$ and $\{|j\rangle\}$ are orthonormal bases for $A_1$ and $A_2$ respectively. We may then write \begin{equation}\label{rhoA2} \rho_A=\left(\sum_i|i\rangle\langle i|_{A_1}\right)\otimes \left(\sum_{j,j',k}S_{jk}S^*_{j'k}|j\rangle\langle j'|_{A_2}\right), \end{equation} and from the additivity of the entropy, using $\textrm{dim}(A_1) = v^{|A_1|}=v^{|\gamma^\star_A\cap\gamma^\star_{A^c}|}$, we obtain the following theorem.
\begin{theorem}\label{thm-RT-inequality} For a holographic state or code, if $A$ is a (not necessarily connected) boundary region and $A^c$ is its complement, then the entropy of $A$ satisfies \begin{equation}\label{residual-lowerbound} S_A\geq |\gamma^\star_A\cap\gamma^\star_{A^c}|, \end{equation} where $\gamma_A^\star$ is the greedy geodesic obtained by applying the greedy algorithm to $A$ and $\gamma_{A^c}^\star$ is the greedy geodesic obtained by applying the greedy algorithm to $A^c$. \end{theorem} We see from Theorem \ref{thm-RT-inequality} that violations of the Ryu-Takayanagi formula are closely related to the size of the bipartite residual region. In particular, if there is no bipartite residual region then $S_A=|\gamma^\star_A|$; the upper bound \eqref{upperbound} and the lower bound \eqref{residual-lowerbound} together imply that $\gamma^\star_A$ is in fact a minimal geodesic, and RT holds. We will argue in section \ref{subsec-multi-eng} that the bipartite residual region has size $O(1)$ when the regions $A$ and $A^c$ on the boundary have $O(1)$ connected components. In this sense, the corrections to the RT formula are typically small. \subsection{A map of multipartite entanglement}\label{subsec-multi-eng} So far we have emphasized the bipartite entanglement between a boundary region $A$ and its complement $A^c$ in a holographic state or code. But we may also divide the boundary into three or more regions and investigate the structure of the entanglement among these regions. The entanglement structure can be elucidated via an entanglement ``distillation'' procedure which we will now describe. To explain this procedure we begin by revisiting the case of bipartite entanglement. We have seen that if the conditions of Theorem \ref{RTth} are satisfied, then a holographic state can be expressed in the form \eqref{eq:RQ-schmidt}, where a subsystem of $A$ of dimension $v^{|\gamma_A^\star|}$ is maximally entangled with a corresponding subsystem of $A^c$. 
This entanglement shared between two systems is generally diluted, since each party may contain many more than $S_A$ spins. The entanglement would be more useful in a more concentrated form. The procedure for transforming dilute entanglement into concentrated entanglement, called \textit{entanglement distillation}, is particularly simple for a bipartite pure state like $|\psi\rangle$ in \eqref{eq:RQ-schmidt}. We choose $|\gamma_A^\star|$ specified spins in $A$ (the subsystem $A_1$ of $A$) and we choose $|\gamma_A^\star|$ spins in $A^c$ (the subsystem $A_1^c$ of $A^c$). Then we apply a unitary transformation $U_A$ acting on $A$ that transforms the basis states $\{|P_i\rangle_A\}$ to the standard basis states of $A_1$, and a unitary transformation $U_{A^c}$ acting on $A^c$ that transforms the basis states $\{|Q_i\rangle_{A^c}\}$ to the standard basis states of $A_1^c$, thus obtaining the state \begin{equation} |\psi'\rangle = \left(|\Phi\rangle^{\otimes |\gamma_A^\star|} \right)_{A_1A^c_1}\otimes |\tilde \chi\rangle_{A_2}\otimes |\tilde \phi\rangle_{A_2^c}, \end{equation} in which the entanglement of $A$ with $A^c$ now resides entirely in the system $A_1 A_1^c$. Here $A_2$ denotes the complement of $A_1$ in $A$, $A_2^c$ denotes the complement of $A_1^c$ in $A^c$, and \begin{equation} |\Phi\rangle =\frac{1}{\sqrt{v}} \sum_{\alpha=1}^v |\alpha\rangle \otimes |\alpha\rangle \end{equation} is a maximally entangled EPR pair of two spins. There is a method for constructing the unitary transformations $U_A$ and $U_{A^c}$ explicitly, which has a pleasing geometrical interpretation. The method uses the greedy algorithm for constructing $\gamma_A^\star$, but where now each local move, in which the cut through the tensor network advances into the bulk by moving past one additional tensor, is accompanied by a local unitary transformation that decouples spins from the network. 
This local unitary transformation is depicted in figure \ref{fig_distillation}, where entanglement distillation is performed on a pair of contracted six-leg tensors. \begin{figure}[htb!] \centering \includegraphics[width=0.8\linewidth]{fig_distillation} \caption{ The correspondence between local moves and distillation of EPR pairs. (a) Distillation of two EPR pairs. (b) The corresponding local moves. Before the first move, the tensor on the left has four legs crossed by the cut. Because the tensor is perfect, its remaining two legs are maximally entangled with a subsystem of these four. The first local unitary transformation acts on the four spins below the cut, transforming the basis to decouple the second and third spin, while the first and fourth spins remain contracted across the cut; in the corresponding local move, the cut advances upward past the tensor on the left. After the first move, the tensor on the right has five legs crossed by the cut. The second unitary transformation changes the basis of these five spins, decoupling the first four, while the fifth remains contracted across the cut; now the corresponding local move advances the cut upward past the tensor on the right. The product of the two local unitaries has distilled two EPR pairs which cross the cut, while decoupling six spins below the cut. } \label{fig_distillation} \end{figure} Since each local move of the greedy algorithm moves the cut past a tensor which initially has at least three legs crossed by the cut, the legs above the cut are always maximally entangled with the legs below, and the corresponding local unitary transformation exists. For purposes of visualization, we may imagine that the spins which remain contracted across the cut advance further into the bulk in each step, remaining adjacent to the cut, while the spins which decouple are left behind. 
When the greedy algorithm applied to $A$ terminates, then, all the decoupled spins of $A$ are distributed throughout the bulk region in between the greedy geodesic and the boundary, while $|\gamma_A^\star|$ spins of $A$, lined up along the greedy geodesic, are contracted with tensors on the other side of the greedy geodesic. If we also apply the greedy algorithm to $A^c$, then under the conditions of Theorem \ref{RTth}, the algorithm terminates at the same greedy geodesic. Acting together, then, the unitary transformations associated with the two greedy algorithms have decoupled all the boundary spins, except for $|\gamma_A^\star|$ EPR pairs, one for each of the legs crossed by the greedy geodesic, thus executing the entanglement distillation protocol. Run backwards, the sequence of local unitary transformations associated with the greedy algorithm constitutes a \textit{holographic quantum circuit}, which prepares the boundary state. The input to this circuit is $|\gamma_A^\star|$ EPR pairs, plus a suitable number of additional spins in a product state, distributed throughout the bulk. The circuit builds the state step by step, gradually incorporating the bulk spins as the cut advances outward from the greedy geodesic toward the boundary. The input state, envisioned as a set of EPR pairs lined up along $\gamma_A^\star$, provides a \textit{map of entanglement}, a picture characterizing the structure of the entanglement between $A$ and $A^c$. (See figure \ref{fig_tiling2}.) The initial EPR pairs along the greedy geodesic which are deep inside the bulk encode long-range entanglement between $A$ and $A^c$, while the EPR pairs closer to the boundary encode shorter-range entanglement. \begin{figure}[htb!] \centering \includegraphics[width=0.60\linewidth]{fig_tiling2} \caption{A geometric map of bipartite entanglement. White dots represent physical spins distilled by applying local unitary transformations to $A$ and $A^c$. 
} \label{fig_tiling2} \end{figure} We can likewise use the greedy algorithm to create a map of multipartite entanglement, whether or not the conditions of Theorem \ref{RTth} are satisfied. Suppose, for example, that we divide the boundary into four regions $A,B,C,D$, each of which is connected, as in figure \ref{fig_residual_tensor}. We may apply the greedy algorithm separately to each of the four regions, obtaining greedy geodesics $\gamma_A^\star, \gamma_B^\star,\gamma_C^\star, \gamma_D^\star$. The bulk region in between $A$ and its greedy geodesic $\gamma_A^\star$ is called the \textit{causal wedge} of $A$, denoted $\mathcal{C}[A]$. (The significance of the causal wedge in holographic codes will be discussed at length in section \ref{sec:code}.) As figure \ref{fig_residual_tensor} indicates, the union $\mathcal{C}[A]\cup \mathcal{C}[B]\cup \mathcal{C}[C]\cup \mathcal{C}[D]$ of the four causal wedges need not cover the entire bulk lattice --- there may be a \textit{multipartite residual region} in the bulk, which the greedy algorithm fails to reach when applied to the boundary regions one at a time. As we explain below, the size of the multipartite residual region is expected to be $O(1)$, independent of the total system size. \begin{figure}[htb!] \centering \includegraphics[width=0.80\linewidth]{Residuals} \caption{The ``map of entanglement'' and multipartite residual regions in a holographic state. For $|A||C| \gg |B||D|$ it is possible for the residual region to pinch off so much that EPR pairs can be directly distilled between $A$ and $C$. In other words, due to the discretization of the lattice, the causal wedges $\mathcal{C}[A]$ and $\mathcal{C}[C]$ may be adjacent in the bulk. In this case the residual region may be composed of two disconnected components, $R_B$ and $R_D$, which can contribute tripartite correlations. A similar analysis holds for $|A||C| \ll |B||D|$.
For $|A||C| \approx |B||D|$, a single connected residual region $R$ contiguous to the four causal wedges is expected and may contribute four-party correlations. } \label{fig_residual_tensor} \end{figure} Multipartite residual regions in the bulk can indicate multipartite entanglement among the four regions on the boundary. As discussed above for the case of bipartite entanglement, suppose we decouple spins in each of $A,B,C,D$ by performing suitable local unitary transformations associated with each step of the greedy algorithm. Where the greedy geodesics of adjacent regions meet, EPR pairs are distilled, in keeping with our observation in section \ref{subsec:multiple-regions} that the bipartite entanglement of two boundary regions is no less than the length of the greedy geodesic shared by the two regions. The tensors trapped inside a multipartite residual region, however, do not necessarily have a decomposition into EPR pairs. Instead, they describe a state with multipartite entanglement, one which cannot be expressed as a product of states with only bipartite entanglement. Just as for a partition of the boundary into connected regions $A$ and $A^c$, we can reverse the order in which tensors are incorporated by the greedy algorithm to obtain a holographic quantum circuit of isometries which prepares the boundary state. When we partition the boundary into four connected regions, however, the input to the circuit includes more than just EPR pairs distributed along shared greedy geodesics and decoupled spins in the bulk; additional multipartite states associated with each connected component of the bulk multipartite residual region are also part of the input. The circuit factorizes into a product $U_A\otimes U_B\otimes U_C\otimes U_D$, with each of the four unitary transformations acting within its own causal wedge to build the corresponding connected component of the boundary.
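For concreteness, the greedy algorithm used throughout this section can be sketched in a few lines. The encoding below (each tensor listed with its leg endpoints, boundary legs labeled by their region) is our own illustrative convention, not a data structure used in the paper:

```python
def greedy_wedge(legs, region):
    """Greedy algorithm sketch.  legs: dict mapping each bulk tensor to the
    list of its leg endpoints, where an endpoint is either another tensor's
    name or a boundary label such as 'A'.  region: the set of boundary
    labels making up the boundary region.  Returns the set of bulk tensors
    absorbed by local moves, i.e. the causal wedge of the region."""
    wedge = set()
    changed = True
    while changed:
        changed = False
        for tensor, ends in legs.items():
            if tensor in wedge:
                continue
            touching = sum(1 for e in ends if e in wedge or e in region)
            if 2 * touching >= len(ends):  # at least half the legs touch
                wedge.add(tensor)
                changed = True
    return wedge

# A toy 3-tensor network with 4-leg tensors (so a tensor is absorbed once
# two of its legs are contracted with the current region).
net = {'t1': ['A', 'A', 'A', 't2'],
       't2': ['t1', 'A', 'A', 't3'],
       't3': ['t2', 'Ac', 'Ac', 'Ac']}
assert greedy_wedge(net, {'A'}) == {'t1', 't2'}   # causal wedge of A
assert greedy_wedge(net, {'Ac'}) == {'t3'}        # causal wedge of A^c
```

The greedy geodesic $\gamma^\star_A$ is then the set of legs crossing from the returned wedge to its complement; since eligible tensors stay eligible, the result is independent of the order of local moves.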
Again, the greedy geodesics encode a ``map'' of the entanglement among $A,B,C,D$, now including a description of multipartite entanglement among all the regions as well as bipartite entanglement among pairs of regions. Two such maps are shown in figure \ref{fig_residual_tensor}; in these cases a single six-leg tensor is trapped in each connected component of the bulk multipartite residual region, though in general a more complex tensor network could be trapped inside as indicated in figure \ref{fig:RTcounterexamplesd}. We may also argue that if the bulk has constant negative curvature, then for any partition of the boundary into $O(1)$ connected components, the multipartite residual region is always $O(1)$ in size. This statement is true for the Riemannian geometry of the hyperbolic plane, but is merely heuristic because it disregards subtleties arising from the discrete lattice structure of the bulk. For a two-dimensional Riemannian manifold, the Gauss-Bonnet theorem applied to the residual region $R$ states that \begin{align}\label{eq:gauss-bonnet} \int_R K\;dA+\int_{\partial R}k_g\;ds=2\pi\chi(R), \end{align} where $K$ is the Gaussian curvature, $k_g$ is the geodesic curvature, and $\chi(R)$ is the Euler characteristic of the residual region, which is $\chi=1$ when $R$ has the topology of a disk. If $R$ is the interior of an $m$-gon whose sides are geodesics, \eqref{eq:gauss-bonnet} says that the integral of $K$ over $R$ is the deviation of the sum of interior angles of the $m$-gon from the corresponding sum for an $m$-gon in flat space; the latter sum is $(m-2)\pi$ because the $m$-gon can be covered by $m-2$ triangles. For AdS space, the interior angles approach zero as the space becomes large compared to its curvature radius; therefore assuming uniform negative curvature $K=-1/\alpha^2$ (where $\alpha$ is the AdS radius), we conclude that the volume of the residual region is \begin{align} V(R) = \pi(m-2)\alpha^2.
\end{align} In our tensor networks $\alpha$ is of order the length of a link; therefore $V(R)$ is $O(1)$ in lattice units if $m$ is $O(1)$, which establishes our claim. Likewise, the bipartite residual region arising from a partition of the boundary into two regions $A$ and $A^c$, discussed in section \ref{subsec:multiple-regions}, has size $O(1)$ if $A$ and $A^c$ both have $O(1)$ connected components. Indeed, the bipartite residual region is contained in the multipartite residual region found by applying the greedy algorithm separately to each connected component of $A$ and of $A^c$. \subsection{Negative tripartite information} A useful characterization of multipartite entanglement is the tripartite information, defined as \begin{equation} I_3(A,B,C)\equiv S_A+S_B+S_C-S_{AB}-S_{AC}-S_{BC}+S_{ABC}. \end{equation} For a general (mixed) tripartite quantum state, $I_3$ can take any real value. It is zero, though, for any tripartite pure state of $ABC$, since in that case $S_{ABC}=0$ and \textit{e.g.} regions $A$ and $BC$, being complementary, have the same entropy and therefore make cancelling contributions to $I_3$. Nor is there a contribution to $I_3$ from EPR pairs shared between a pair of the three regions (because \textit{e.g.} a pair shared by $AB$ yields positive contributions to $S_A$ and $S_B$ which are cancelled by negative contributions from $-S_{AC}$ and $-S_{BC}$) or from entanglement shared between one of the three regions and a fourth disjoint region. Thus, for a holographic state and for any partition of the boundary into four regions $A,B,C,D,$ nonzero contributions to $I_3(A,B,C)$ can arise only from the distilled multipartite states trapped in residual regions. In the holographic setting, it has been shown that $I_3 \leq 0$ follows from the RT formula \cite{Hayden13b}. For holographic states and codes, the non-positivity of $I_3$ is not ensured in general, because of the potential (small) violations of the RT formula. 
In some special cases, though, RT holds exactly, and the non-positivity of $I_3$ then follows. For example, suppose that we partition the boundary into four connected regions $A,B,C,D$, and that each connected component of the multipartite residual region traps just one perfect tensor. In that case there is no bipartite residual region, so Theorem \ref{thm-RT-inequality} implies that RT is exact and therefore $I_3 \leq 0$. To see that there is no bipartite residual region in this case, consider the bipartite partition of the boundary into the two disconnected regions $AC$ and $BD$, and consider an isolated $2n$-index perfect tensor surrounded by three or all four of the greedy geodesics $\gamma_A^\star, \gamma_B^\star,\gamma_C^\star,\gamma_D^\star$. This tensor must have at least $n$ legs crossing either $\gamma_A^\star \cup \gamma_C^\star$ or $\gamma_B^\star \cup \gamma_D^\star$. Therefore, when we apply the greedy algorithm to the boundary regions $AC$ and $BD$, one cut or the other will advance past this isolated tensor, excluding it from the bipartite residual region. Under suitable conditions we can actually prove a stronger result --- that $I_3$ is \textit{strictly negative}. Let us say that a connected component of the multipartite residual region is \textit{three sided} if surrounded by three of the four greedy geodesics, and \textit{four sided} if surrounded by all four greedy geodesics. Three-sided components make no contribution to $I_3$; if the three surrounding greedy geodesics are those of $X,Y,Z$, the symmetry of $I_3$ implies $I_3(A,B,C)= I_3(X,Y,Z)$, which vanishes for any pure state of $XYZ$. But an isolated $2n$-index perfect tensor which crosses all four greedy geodesics makes a negative contribution to $I_3$: \begin{theorem}\label{lem:negativeTripartiteInformation} Suppose the $2n$ indices of a perfect tensor state are partitioned into four disjoint nonempty sets $A,B,C,D$ such that $0 < |A|,|B|,|C|,|D| < n$. 
Then the tripartite information $I_3$ is strictly negative: $I_3(A,B,C) < 0$. \end{theorem} \begin{proof} First we notice that for a four-part \textit{pure} state $ABCD$, the tripartite information $I_3(A,B,C)$ is actually completely symmetric under permutations of the four subsystems, which we can see by using the property that complementary regions have the same entropy in a pure state: \begin{align} I_3(A,B,C) & = S_A + S_B + S_C - S_{AB} - S_{BC} - S_{AC} + S_{ABC}\\ & = S_A + S_B + S_C + S_D-\frac{1}{2}( S_{AB} + S_{CD} + S_{BC} + S_{AD} + S_{AC} +S_{BD} ). \end{align} We may therefore assume without loss of generality that $|A| \leq |B| \leq |C| \leq |D|$, which implies $|AB| \leq |CD|$ and $|AC| \leq |BD|$. Now we use the defining property of $2n$-index perfect tensors, that a set of $n$ or fewer indices is maximally entangled with its complement, which implies $S_X = \min(|X|, 2n-|X|)$, with entropy expressed in units of $\log v$. Therefore $S_A=|A|, S_B=|B|, S_C=|C|,S_D=|D|$, and furthermore $S_{AB} = |AB|$ and $S_{AC} = |AC|$. Now we distinguish two cases. If $|AD| \leq |BC|$, then $S_{BC} = S_{AD}= |AD|$ and we have \begin{align} I_3(A,B,C) = |A| + |B| + |C| +|D| - |AB| - |AC| - |AD| = -2|A| < 0. \end{align} If on the other hand $|BC| \leq |AD|$, then $S_{AD} = S_{BC} = |BC|$ and we have \begin{align} I_3(A,B,C) = |A| + |B| + |C| +|D| - |AB| - |AC| - |BC| = 2|D| - 2n < 0, \end{align} where to obtain the second equality we use $|AB| + |AC| + |BC| = 2(|A| + |B| + |C|) = 2(2n-|D|)$. This completes the proof.
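The case analysis can also be confirmed by brute force. The sketch below is an independent check, using only the entropy formula $S_X=\min(|X|,2n-|X|)$ from the proof, that evaluates $I_3$ for every four-part split of the legs of a 6-leg perfect tensor:

```python
from itertools import product

# Entropies of a 2n-leg perfect tensor, in units of log v:
# S_X = min(|X|, 2n - |X|).  I_3 depends only on the sizes of A, B, C.
def I3(a, b, c, n):
    S = lambda x: min(x, 2 * n - x)
    return (S(a) + S(b) + S(c)
            - S(a + b) - S(a + c) - S(b + c)
            + S(a + b + c))

n = 3  # a 6-leg perfect tensor, as in the pentagon code
for a, b, c in product(range(1, 2 * n - 2), repeat=3):
    d = 2 * n - a - b - c
    if d < 1:
        continue  # all four parts must be nonempty
    if max(a, b, c, d) < n:
        assert I3(a, b, c, n) < 0   # strictly negative, as the theorem says
    else:
        assert I3(a, b, c, n) <= 0  # still non-positive when a part has size n
```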
\end{proof} For a holographic state with boundary partitioned into sets $A,B,C,D$, the conditions of Theorem \ref{lem:negativeTripartiteInformation} are satisfied by an isolated perfect tensor trapped inside a four-sided component of the multipartite residual region; fewer than $n$ of the tensor's legs cross any greedy geodesic, because otherwise the greedy algorithm would have moved the cut forward past this perfect tensor, which therefore would not be in the multipartite residual region. Furthermore, since entropy is additive for a product state, $I_3$ is also strictly negative for any product of perfect tensor states shared by $A,B,C,D$, provided that at least one factor has support on all four sets. Since only the four-sided components contribute to $I_3$, we conclude that $I_3$ is strictly negative if the multipartite residual region contains at least one four-sided connected component, and if each four-sided connected component contains only one perfect tensor. \section{Quantum error correction in holographic codes}\label{sec:code} In this section we study the error correction properties of our holographic codes in more detail. The idea that a CFT with a gravity dual must have error correcting properties was recently proposed in \cite{Almheiri14}, and we will see that our holographic codes illustrate many aspects of that proposal quite explicitly. \subsection{AdS-Rindler reconstruction as error correction} We begin by briefly recalling the main point emphasized in \cite{Almheiri14}, which is that in AdS/CFT a bulk local observable can be realized by many different operators in the CFT. In fact, if $x$ is any point in the bulk, and $Y$ is any point on the boundary, the AdS/CFT dictionary can be chosen so that it maps the bulk local field $\phi(x)$ to a CFT operator $\mathcal{O}[\phi(x)]$ which has no support in an open set containing $Y$, and therefore commutes with any local field of the CFT supported near $Y$.
Since $Y$ is an arbitrary boundary point, if the CFT operator corresponding to $\phi(x)$ were actually unique, we would conclude that $\mathcal{O}$ commutes with all local fields in the CFT, and therefore is a multiple of the identity because the local field algebra is irreducible. This paradox is evaded once we recognize that the correspondence is not unique. If $Y,Z$ are two distinct boundary points, the CFT operator corresponding to $\phi(x)$ can be chosen to be either $\mathcal{O}$, which commutes with CFT local fields supported near $Y$, or $\mathcal{O}'$, which commutes with CFT local fields supported near $Z$, where $\mathcal{O}$ and $\mathcal{O}'$ are inequivalent CFT operators even though they can be used interchangeably for describing bulk physics. \begin{figure}[htb!] \centering \includegraphics[height=7cm]{rindler.pdf} \caption{Bulk field reconstruction in the causal wedge. On the left is a spacetime diagram, showing the full spacetime extent of the causal wedge $\mathcal{C}[A]$ associated with a boundary subregion $A$ that lies within a boundary time slice $\Sigma$. The point $x$ lies within $\mathcal{C}[A]$ and thus any operator at $x$ can be reconstructed on $A$. On the right is a bulk time slice containing $x$ and $\Sigma$, which has a geometry similar to that of our tensor networks. The point $x$ can simultaneously lie in distinct causal wedges, so $\phi(x)$ has multiple representations in the CFT.}\label{rindlerfig} \end{figure} This novel feature of AdS/CFT, that a bulk local observable can be represented by boundary CFT operators in multiple ways, is illustrated in figure \ref{rindlerfig}. The idea is that any fixed-time CFT subregion $A$ defines a subregion in the bulk, the causal wedge $\mathcal{C}[A]$. For any point $x\in \mathcal{C}[A]$, bulk quantum field theory ensures that any bulk local operator $\phi(x)$ can be represented in the CFT as some nonlocal operator on $A$. 
This representation is called the AdS-Rindler reconstruction of the operator \cite{Hamilton2006, Morrison2014}. Because a given bulk point $x$ can lie within distinct causal wedges associated with different boundary regions, the bulk operator $\phi(x)$ can have distinct representations in the CFT with different spatial support. In \cite{Almheiri14} the non-uniqueness of the CFT operator corresponding to the bulk operator $\phi(x)$ was interpreted as indicating that $\phi(x)$ is a logical operator preserving a code subspace of the Hilbert space of the CFT. This code subspace is protected against ``errors'' in which parts of the boundary are ``erased.'' If the boundary operator corresponding to $\phi(x)$ acts on a subsystem of the CFT which is protected against erasure of the boundary region $A^c$, then this operator can be represented in the CFT as an operator supported on $A$, the complement of the erased region. Thus we may interpret the AdS-Rindler reconstruction of $\phi(x)$ on boundary region $A$ as correcting for the erasure of $A^c$; choosing the erased portion of the boundary in different ways leads to different reconstructions of $\phi(x)$. Moreover, operators near the center of the bulk are ``well protected'' in the sense that a large region needs to be erased to prevent their reconstruction, while the reconstruction of operators near the boundary can be blocked by erasing a smaller part of the boundary \cite{Almheiri14}. We may think of this code subspace as the low-energy sector of the CFT corresponding to a relatively smooth dual classical geometry. All CFT operators are physical, and thus have some bulk interpretation, but the ``logical'' operators are special ones which map low-energy states to other low-energy states. The same logical action can be realized by distinct CFT operators, as these distinct operators act on high-energy CFT states (those outside the code subspace) differently even though they act on low-energy states in the same way.
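The phenomenon of one logical action with several physical realizations is easy to exhibit in a small example. The following numerical sketch assumes the standard three-qutrit code (the code used later for the triangle code; the codeword basis written below is our assumption, matching appendix \ref{sec:qutritcode}) and verifies that two operators with different support act identically on the code subspace:

```python
import numpy as np

# Qutrit shift operator: X|j> = |j+1 mod 3>.
X = np.roll(np.eye(3), 1, axis=0)
Xinv = X.T
Id = np.eye(3)

def kron(*ops):
    out = np.eye(1)
    for op in ops:
        out = np.kron(out, op)
    return out

def ket(a, b, c):
    v = np.zeros(27)
    v[9 * a + 3 * b + c] = 1.0
    return v

# Codewords of the three-qutrit code (assumed basis).
codewords = [
    (ket(0, 0, 0) + ket(1, 1, 1) + ket(2, 2, 2)) / np.sqrt(3),
    (ket(0, 1, 2) + ket(1, 2, 0) + ket(2, 0, 1)) / np.sqrt(3),
    (ket(0, 2, 1) + ket(1, 0, 2) + ket(2, 1, 0)) / np.sqrt(3),
]

# Two logical-X representatives with different support ...
O1 = kron(X, Xinv, Id)   # acts on qutrits 1 and 2
O2 = kron(Id, X, Xinv)   # acts on qutrits 2 and 3

# ... which differ as operators on the full Hilbert space,
assert not np.allclose(O1, O2)
# yet act identically on every state in the code subspace.
for psi in codewords:
    assert np.allclose(O1 @ psi, O2 @ psi)
```

Either representative implements the same logical shift of the encoded qutrit, just as $\mathcal{O}$ and $\mathcal{O}'$ above implement the same bulk operator.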
\subsection{The physical interpretation of holographic codes} The error-correcting properties of the AdS/CFT correspondence were motivated in \cite{Almheiri14} by bulk calculations, together with plausibility arguments regarding the CFT. Our central observation in this paper is that analogous statements are provably true in holographic codes. We emphasize that in holographic codes the uncontracted bulk legs hanging from each tensor should \textit{not} be thought of as tensor factors in addition to the boundary legs. Rather, the entire physical Hilbert space is spanned by states of the boundary legs only. The bulk legs just provide a way of conveniently describing states in a certain code subspace of this boundary Hilbert space, obtained by feeding states of the bulk legs through the isometry defined by the entire tensor network; this code subspace can be regarded as a simplified model of the low-energy states in a CFT. Likewise, operators acting on the dangling bulk indices correspond to nonlocal operators in the boundary theory whose algebra and action on the code subspace resemble what we would expect for the CFT description of how bulk local operators act on low-energy CFT states. When we speak of a ``bulk local operator'' we really mean the nonlocal boundary operator obtained by pushing an operator acting on a dangling bulk index out to the boundary using the isometry defined by the network. \subsection{Bulk reconstruction from tensor pushing} We now explain how holographic codes realize the AdS-Rindler reconstruction of figure \ref{rindlerfig}. The basic idea is that, instead of using the full isometry of the entire network to push a local bulk operator to the boundary, we can successively push it through individual perfect tensors in a manner of our choosing by using the operation of figure \ref{push}. We illustrate the reconstruction for two different bulk points of the pentagon code in figure \ref{pentagonpushfig}.
Here we use the defining property of perfect tensors --- that the tensor provides a unitary transformation which maps any three legs of the tensor to the complementary set of three legs, and therefore also an isometry mapping any set of three or fewer ``incoming'' legs to the complementary set of ``outgoing'' legs. In figure \ref{pentagonpushfig}, each bulk vertex with arrows showing incoming and outgoing directions indicates such an isometry, and the complete set of blue tensors defines a product of such isometries, and hence also an isometry. The blue operator on the boundary is obtained by conjugating the blue bulk operator by the blue isometry, and the same applies to the green bulk and boundary operators. In the construction of the isometry, we regard the dangling bulk index on each tensor as an incoming index, and therefore require that no more than two contracted indices be incoming for each blue (or green) tensor. The same blue isometry, then, can be used to push not just the central blue bulk index to the boundary, but also any of the other incoming bulk indices (which are not shown in the figure) on blue tensors. \begin{figure}[htb!] \centering \includegraphics[height=8cm]{pentagonpush.pdf} \caption{Boundary reconstruction of bulk operators. The blue operator on the central bulk leg is pushed to an operator supported on a fairly large boundary region, while the green bulk operator further from the center is pushed to an operator supported on a smaller boundary region. Bulk legs for the other tensors are not shown.}\label{pentagonpushfig} \end{figure} The boundary operator corresponding to a given bulk local operator manifestly has the non-uniqueness we described in our discussion of the AdS-Rindler reconstruction.
For example, we could move one of the three blue arrows directed outward from the central blue vertex to a different edge, thus reconstructing the central bulk operator on a different boundary region, or we could have sent the green arrows in the opposite direction and reconstructed the green bulk operator on a considerably larger boundary region on the opposite side. No matter which reconstruction we use, the boundary operator is obtained from the isometric embedding of the bulk indices into the code subspace of the boundary Hilbert space, and therefore each reconstructed operator corresponding to a given bulk operator acts on the code subspace in the same way. In the theory of quantum error-correcting codes, we say that an error is an \textit{erasure} (or equivalently a \textit{located error}) if the set of spins damaged by the error is known, so this information can be used in recovering from the error. Holographic codes also provide protection against errors which act at unknown locations on the boundary, but for the purpose of developing the analogy with the AdS/CFT correspondence we will focus on protecting against erasure. A logical system can be protected against erasure of a set of spins in the code block if the full algebra of logical operations has a realization supported on the complementary set of unerased spins. In AdS/CFT we might only require reconstruction of a subalgebra of the full logical algebra; for example, the pentagon code provides better protection for the degrees of freedom deep within the bulk than for those closer to the boundary. The framework in which a quantum code protects only a subalgebra of the code's full logical algebra has been called \textit{operator algebra quantum error correction} \cite{Kribs2005, Kribs2006, Beny2007, Beny2007a}. \subsection{Connected reconstruction and the causal wedge} Given a subregion $A$ of the boundary, which bulk local operators can be reconstructed on $A$? 
This is not an easy question to answer in general, but at least we can give a simple description of a large logical subsystem reconstructable on $A$, namely those logical operators acting on bulk sites which are reachable using the greedy algorithm explained in section \ref{sec:state}. Recall that the greedy algorithm associates with any boundary region $A$ a greedy geodesic $\gamma_A^\star$ whose boundary matches the boundary of $A$, such that $A$ and $\gamma_A^\star$ enclose a tensor $P_A$ which defines an isometry mapping free bulk legs in $P_A$, together with the legs cut by $\gamma^\star_A$, to $A$. Applying this isometry to any operator acting on a bulk leg in $P_A$ (tensored with the identity acting on all the rest of the isometry's input indices), we may push that logical operator through the isometry to obtain its reconstruction on $A$. Let's call the position of a perfect tensor in the network a \textit{bulk point} and say that the greedy algorithm \textit{reaches} a bulk point if it moves the cut past that tensor, hence using it in the construction of $P_A$. This operator reconstruction procedure can be applied to any boundary region $A$. In the special case where $A$ is connected, it provides a precise analog of the AdS-Rindler reconstruction in holographic codes, which we can formalize with a definition and theorem: \begin{definition} Suppose that $A$ is a {\bf connected} boundary region. The \textbf{causal wedge of $A$}, denoted $\mathcal{C}[A]$, is the set of bulk points reached by applying the greedy algorithm to $A$. \end{definition} \noindent We then have: \begin{theorem}\label{theorem:causal-wedge} Suppose $A$ is a connected boundary region. Then any bulk local operator in the causal wedge $\mathcal{C}[A]$ can be reconstructed as a boundary operator supported on $A$.
\end{theorem} \noindent We could have formulated a geometric notion of the causal wedge, defining it as the set of bulk points enclosed between $A$ and the actual minimal geodesic $\gamma_A$, rather than the greedy geodesic. This geometrical definition is closer in spirit to how the term ``causal wedge'' has been used in the context of AdS/CFT. But we prefer this greedy notion of causal wedge instead, so that Theorem \ref{theorem:causal-wedge} is correct as stated. As figure \ref{pentagonpushfig} illustrates, bulk operators near the boundary can be reconstructed on smaller connected regions than bulk operators near the center, just as for the AdS-Rindler reconstruction in AdS/CFT. It is natural to wonder how large the connected region $A$ should be for the operator at the center of the bulk to be reconstructable on $A$. This question is studied for the pentagon code in appendix \ref{App:CountingTensors} by investigating whether the greedy algorithm applied to $A$ reaches the central tensor in the network. We find that a connected region of $N_A$ boundary spins necessarily allows reconstruction of all operators acting on the center provided that $A$ covers a sufficiently large fraction of the boundary, namely \begin{equation}\label{cthres} f_A \equiv \frac{N_A}{N_{\rm boundary}}>\frac{5+\sqrt{5}}{10}\equiv f_c\approx .724. \end{equation} The analogous result for the AdS-Rindler reconstruction is $f_c=1/2$, but the discreteness of our lattice introduces some additional overhead. It turns out, though, that because the tensor network is not invariant under translations of the boundary, whether the connected region $A$ allows reconstruction of the center depends not just on the size of $A$ but also on its location. 
In appendix \ref{App:CountingTensors} we show that, while the condition \eqref{cthres} is needed to guarantee reconstruction of the central operator on an arbitrary connected region, there are some connected regions with $f_A =\frac{N_A}{N_{\rm boundary}}=\frac{3+\sqrt{5}}{10}\approx .524$ that suffice for the reconstruction. \subsection{Disconnected reconstruction and the entanglement wedge} Now let's consider what bulk operators can be reconstructed on boundary regions with more than one connected component. First we extend the definition of the causal wedge to disconnected regions: \begin{definition} Suppose that $A$ is a boundary region, which is a union of connected components $A_1,A_2,\ldots$. The \textbf{causal wedge of $A$}, denoted $\mathcal{C}[A]$, is defined as the union of the causal wedges of the components of $A$, $\mathcal{C}[A]=\bigcup_i \mathcal{C}[A_i]$. \end{definition} \noindent Since we have already established that any bulk operator in $\mathcal{C}[A_i]$ is reconstructable on $A_i$ if $A_i$ is connected, it follows immediately from this definition that even for disconnected regions any bulk operator in $\mathcal{C}[A]$ is reconstructable on $A$. The causal wedge contains bulk operators which can be reconstructed when we apply the greedy algorithm to the connected components of $A$ one at a time. But the greedy algorithm might advance further into the bulk, beyond the causal wedge, when applied to $A$ instead. Specifically, there could be a $2n$-index tensor just beyond the causal wedge of $A$ with $n$ or more legs crossing the union of greedy geodesics $\gamma_{A_i}^\star \cup \gamma_{A_j}^\star$, even though fewer than $n$ legs cross $\gamma_{A_i}^\star$ or $\gamma_{A_j}^\star$ individually. Then applying the greedy algorithm to $A_i \cup A_j$ moves the cut past this tensor.
This step may then render further tensors eligible for inclusion, and in fact we will see that sometimes the greedy algorithm can move far beyond the causal wedge $\mathcal{C}[A]$. \begin{figure}[htb!] \centering \includegraphics[height=7cm]{pentagondc.pdf} \caption{Disconnected reconstruction of a central operator beyond the causal wedge. Each of two separate connected boundary regions is too small for reconstruction of the central operator, yet the reconstruction is possible on the union of the two regions. In this example the greedy algorithm reaches the central tensor when applied to both connected components at once, but not when applied to either component by itself.}\label{pentagondcfig} \end{figure} A first concrete example illustrating reconstruction of a bulk operator outside the causal wedge is shown in figure \ref{pentagondcfig}. In this example, $A$ is the union of two connected components $A_1$ and $A_2$, and the full operator algebra of the central tensor can be pushed to either $A_1^c$ or $A_2^c$. This implies that no nontrivial operator acting on the central tensor can be pushed to either $A_1$ or $A_2$. For every nontrivial operator $\phi$ in the algebra there is another operator $\phi'$ which does not commute with $\phi$. If $\phi'$ can be pushed to $A_1^c$, then surely $\phi$ cannot be pushed to $A_1$, because operators supported on complementary regions must commute. The same argument applies to $A_2$. Yet the greedy algorithm applied to $A$ reaches the central tensor, showing that its full operator algebra can be pushed to the union of $A_1$ and $A_2$. That operators beyond the causal wedge of $A$ can be reconstructed on $A$ has deep potential implications for AdS/CFT. Perturbative gravity techniques like the AdS-Rindler reconstruction can be used to construct bulk operators in the causal wedge but not beyond.
Yet there has been speculation in the literature that reconstruction should be possible in a larger region, the \textit{entanglement wedge} \cite{Headrick2014}; see also \cite{Wall2012, Czech2012, Jafferis2014}. In AdS/CFT, the entanglement wedge $\mathcal{E}[A]$ is defined by first finding the minimal area surface $\gamma_A$ used in the RT formula, and then drawing a codimension-one (\textit{i.e.}, two-dimensional for ${\rm AdS}_3$) spatial slice in the bulk whose only boundaries are $\gamma_A$ and $A$. The bulk domain of dependence of this slice is then defined as the entanglement wedge $\mathcal{E}[A]$. The entanglement wedge contains the causal wedge, but can be much larger in some cases. Figure \ref{transitionfig} illustrates a simple example highlighting the distinction between the causal and entanglement wedges.\footnote{In excited states where the geometry deviates from pure AdS, there are differences between the entanglement wedge and the causal wedge even for connected boundary regions. We will not try to capture this in our toy models, since without a theory of dynamics we cannot capture the full spacetime definitions of these regions. Our discussion is limited to states near the vacuum, in which case $A$ needs to be disconnected for its causal wedge and entanglement wedge to differ.} \begin{figure}[htb!] \centering \includegraphics[height=5cm]{PhaseTransition.pdf} \caption{The intersection of the entanglement wedge $\mathcal{E}[A]$ with a bulk time-slice, in the case where $A$ has two connected components. Minimal geodesics in the bulk are solid lines. When $A$ is smaller than $A^c$, we have the situation on the left and the causal wedge agrees with the entanglement wedge. When $A$ is bigger, however, the minimal geodesics switch and the entanglement wedge becomes larger.
In particular, the point in the center lies in $\mathcal{E}[A]$ but not in $\mathcal{C}[A]$.}\label{transitionfig} \end{figure} We would like to investigate whether bulk operators in the entanglement wedge are reconstructable for holographic codes, but how should the entanglement wedge be defined? A definition of $\mathcal{E}[A]$ close to that used in AdS/CFT is: \begin{definition} Suppose $A$ is a (not necessarily connected) boundary region. The \textbf{geometric entanglement wedge} of $A$ is the set of bulk points in the bulk region bounded by $A$ and $\gamma_A$, where $\gamma_A$ is the minimal bulk geodesic whose boundary matches the boundary of $A$. If there is more than one minimal bulk geodesic, $\gamma_A$ is chosen to make the geometric entanglement wedge as large as possible. \end{definition} The main motivation for the conjecture that operators in the entanglement wedge are reconstructable in AdS/CFT comes from the validity of the RT formula for disconnected regions. (Additional evidence was given in \cite{Almheiri14} based on a typicality argument.) But we have already seen above that the RT formula does not hold exactly in holographic codes, so we should not necessarily expect the entanglement wedge conjecture to hold in detail for the geometric entanglement wedge. Instead, as in defining the causal wedge, we prefer a definition that makes the reconstructability manifest: \begin{definition} Suppose $A$ is a (not necessarily connected) boundary region. The \textbf{greedy entanglement wedge} of $A$, denoted $\mathcal{E}[A]$, is the set of bulk points reached by applying the greedy algorithm to all connected components of $A$ simultaneously. \end{definition} \noindent With this definition, bulk local operators in $\mathcal{E}[A]$ are automatically reconstructable in $A$, using the isometry defined by $P_A$ to push these operators to the boundary.
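For concreteness, the greedy absorption rule underlying both wedge definitions can be sketched abstractly. In this sketch (the graph encoding and function names are ours, and we count only in-plane legs, a simplification) a $2n$-leg tensor is absorbed once at least $n$ of its legs lie on the current cut, and the cut is then advanced past the tensor:

```python
def greedy_wedge(tensors, initial_cut):
    """Greedy algorithm sketch: `tensors` maps a tensor name to its list
    of in-plane legs; `initial_cut` is the set of legs cut by the boundary
    region A. A 2n-leg tensor is absorbed once at least n of its legs lie
    on the current cut; the cut then swaps those legs for the rest."""
    cut = set(initial_cut)
    reached = set()
    progress = True
    while progress:
        progress = False
        for name, legs in tensors.items():
            if name in reached:
                continue
            if 2 * sum(leg in cut for leg in legs) >= len(legs):
                reached.add(name)
                cut.symmetric_difference_update(legs)  # advance the cut
                progress = True
    return reached, cut

# A toy chain of two four-leg tensors: the cut sweeps through both.
tensors = {"t1": ["a", "b", "c", "d"], "t2": ["c", "d", "e", "f"]}
reached, cut = greedy_wedge(tensors, {"a", "b"})
assert reached == {"t1", "t2"} and cut == {"e", "f"}
```

The set `reached` plays the role of the greedy entanglement wedge, and the final `cut` plays the role of the greedy geodesic $\gamma_A^\star$.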
The greedy algorithm also ensures that the interior boundary of $\mathcal{E}[A]$ is the greedy geodesic $\gamma_A^\star$, though not necessarily the minimal geodesic $\gamma_A$. A drawback of this definition is that $\mathcal{E}[A]$ includes only the bulk local operators which can be reconstructed on $A$ using the greedy algorithm; it might miss additional bulk operators which can be reconstructed by other methods. In fact we can find examples of codes such that some bulk local operators lying outside $\mathcal{E}[A]$ \textit{can} be reconstructed on $A$, as discussed in appendix \ref{App:BeyondGreedy}. These codes typically have special properties, such as symmetries, which make the reconstruction possible. If we know nothing more about the perfect tensors used to construct the code, aside from their perfection, we have no general reason to expect that bulk operators far outside the greedy entanglement wedge will be reconstructable. That said, we confess that we lack a complete understanding of when reconstruction is possible, and hope that further progress on this issue can be achieved in future work. \subsection{Erasure threshold} If the entanglement wedge conjecture is true for AdS/CFT, if holographic codes faithfully model the entanglement structure of boundary theories with classical gravitational duals, and if the greedy entanglement wedge is a reasonable stand-in for the entanglement wedge, then we should be able to find holographic codes and boundary regions such that the greedy entanglement wedge reaches far outside the causal wedge. In this section we provide examples which confirm this expectation. One way to formalize this is to choose $A$ to be a randomly chosen set of boundary spins, whose size is a specified fraction of the total boundary. 
The geometry of the hyperbolic plane suggests that, if $A$ is large enough, the causal wedge $\mathcal{C}[A]$ will stick close to the boundary, yet the entanglement wedge $\mathcal{E}[A]$ reaches the center of the bulk with high probability; we illustrate this in figure \ref{fig:EntanglementvsCausal}. We will see that not all holographic codes have this property, but we are able to provide concrete examples that do. \begin{figure}[htb!] \centering \subfloat[Shallow causal wedge]{ \includegraphics[width=0.35\linewidth]{ShallowCausalWedge}\hspace{1cm} \label{fig:ShallowCausalWedge}} \subfloat[Deep entanglement wedge]{ \includegraphics[width=0.35\linewidth]{DeepEntWedge} \label{fig:DeepEntanglementWedge}} \caption{ (a) When a boundary region $A$ is partitioned into many connected components it may have a very shallow causal wedge $\mathcal{C}[A]$ if each connected component is small. (b) In contrast, if $A$ comprises a sufficiently large fraction of the boundary, its entanglement wedge $\mathcal{E}[A]$ will extend deep into the bulk. } \label{fig:EntanglementvsCausal} \end{figure} Another, perhaps better, way to formulate this case is to imagine a probabilistic noise model which acts independently (without any noise correlations) on each of the physical boundary spins, where each spin is either erased with probability $p$ or left untouched with probability $1-p$. If $p$ is small, the set $A$ of unerased boundary spins breaks into many connected islands, where a typical island contains $O(1/p)$ spins and has a causal wedge which reaches into the bulk by only a constant distance. We can show, though, that if the holographic code is properly chosen and the erasure probability $p$ is less than a \textit{threshold value} $p_c$, then $\mathcal{E}[A]$ contains the central bulk spin with a success probability deviating from one by an amount which becomes \textit{doubly exponentially small} as the radius of the bulk increases. Which codes have an erasure threshold? 
One necessary requirement is that the code must have a distance that increases with the system size. For the purpose of reconstructing the central tensor in the bulk, this means that there should not be any logical operator supported on a constant number of boundary spins which acts non-trivially on the central bulk index. That's because erasure of any constant number of spins occurs with a nonzero constant probability, and recovery from the erasure error is not possible if a nontrivial logical operator has support on the erased qubits. The pentagon code fails to fulfill this necessary condition. To illustrate the problem, it is helpful to consider first a simpler code, the ``triangle code'' constructed by contracting four-index perfect tensors, where each leg is a 3-level spin, a \textit{qutrit}. Each triangle in the bulk has a dangling bulk index, and the code is constructed as a tensor network forming a tree, the Bethe lattice; each triangle is contracted with one triangle closer to the center and two triangles further from the center, as shown on the left side of figure \ref{fig:BetheLattice}. (Qi's model~\cite{Qi13} is based on a tensor network with a similar structure.) One way to describe the greedy algorithm is to say that it propagates erasures from the boundary toward the center of the bulk --- the inward directed leg of a triangle is erased if either of its outward directed legs is erased, and the central triangle can be reconstructed only if at least two of that triangle's legs are unerased. \begin{figure}[htb!] \centering \subfloat[Triangle code]{ \includegraphics[height=6cm]{bethe.pdf}\hspace{1cm} \label{fig:BetheLattice}} \subfloat[Pentagon code]{ \includegraphics[height=6cm]{benipoints.pdf} \label{fig:benipoints}} \caption{Dangerous small erasures for the triangle (a) and pentagon codes (b). In the triangle code erasing two boundary spins, boxed in blue, can prevent reconstruction of the central tensor. 
In the pentagon code erasing four spins can prevent the reconstruction.} \label{fig:lowWeightTrouble} \end{figure} It is easy, then, to prevent the greedy algorithm from reaching the center --- only two spins need to be erased. A single erasure on the boundary propagates all the way up to the center of the network, erasing one of the central triangle's legs. A single erasure on a different branch of the tree propagates up to another of the central triangle's legs, blocking the reconstruction of the central tensor on the remaining unerased spins. The greedy algorithm fails for a good reason. As described in appendix \ref{sec:qutritcode}, the logical algebra for the three-qutrit code represented by a single triangle is generated by logical operators of the form $\bar X =X\otimes X^{-1} \otimes I$, where $X$ is a generalized Pauli operator; in fact the code is symmetric under permutation of the three qutrits, so we can choose $X$ and $X^{-1}$ to act on any two of the three qutrits without changing the operator's action on the code space. Now choose a path through the Bethe lattice which begins on one leaf, travels to the center, exits the center on a different branch, and finally reaches another leaf on that branch. Apply the operator $\bar X$ to each of the logical bulk indices visited by this path. Then for each leg along the path the $X$ from the triangle on one side cancels the $X^{-1}$ coming from the triangle on the other side, except for one uncanceled $X$ on one leaf and one $X^{-1}$ on the other. We conclude that the code admits a logical operator acting nontrivially on the central triangle which has support on only two boundary spins. That is why the central bulk spin can be damaged by erasing only two boundary spins. For the pentagon code the situation is only slightly better.
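Returning briefly to the triangle code, the erasure-propagation rule described above is easy to simulate on the tree; the following sketch (the tree depth and leaf indexing are purely illustrative) confirms that two well-placed boundary erasures block reconstruction of the center regardless of the size of the network:

```python
DEPTH = 6  # radius of the Bethe lattice (illustrative; any depth works)

def branch_erased(branch, erased_leaves, path=()):
    # The inward leg of a triangle is erased if either of its two
    # outward legs is erased; at full depth the legs are boundary spins.
    if len(path) == DEPTH:
        return (branch,) + path in erased_leaves
    return (branch_erased(branch, erased_leaves, path + (0,))
            or branch_erased(branch, erased_leaves, path + (1,)))

# Erase a single boundary spin on each of two of the three central branches.
erased = {(0,) + (0,) * DEPTH, (1,) + (0,) * DEPTH}

# Two of the central triangle's three legs are now erased, so the central
# bulk index cannot be reconstructed from the remaining 3 * 2**DEPTH - 2 spins.
erased_central_legs = sum(branch_erased(b, erased) for b in range(3))
assert erased_central_legs == 2
```

Here $3\cdot 2^{\rm DEPTH}=192$ boundary spins are present, yet two erasures suffice, matching the weight-two logical operator constructed above.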
If we pick just four spins at the positions shown on the right side of figure \ref{fig:benipoints}, then the greedy algorithm applied to the complement of these four spins never absorbs any of the tensors adjacent to the dashed line. This failure is just a property of the graph defining the holographic code, but once again we can understand the failure by noting that there is a logical operator acting on the central pentagon supported on these four boundary spins, so erasing these four spins prevents central bulk operators from being reconstructed on their complement. Now we may consider a product of bulk logical operators acting on the pentagons just above and just below the dashed line. We use the logical operator of the five-qubit code $\bar X = - Z\otimes X\otimes Z\otimes I\otimes I$ described in appendix \ref{sec:5qubit}, where $X$ and $Z$ are Pauli operators (which square to one), and the operator's action is unchanged by cyclic permutations of the five qubits. Now $X$'s applied from either side of the cut cancel on the legs crossed by the cut, and $Z$'s applied from either side cancel for the legs just above and below the cut, leaving only four uncanceled $Z$'s acting on the boundary qubits. Of course, uncorrectable damage deep inside the bulk caused by erasing just a few boundary spins is not at all what we expect in AdS/CFT, where according to the entanglement wedge conjecture we should always be able to reconstruct the center of the bulk from a sufficiently large fraction of the boundary, whatever its shape or location. To obtain a better model for AdS/CFT we should modify the holographic code, thinning out the algebra of bulk logical operators, and hence reducing the rate of the code. 
A code that works better can be obtained by a simple modification of the pentagon code --- the modified tensor network is constructed by starting with a pentagon at the center and adding alternating layers of hexagons (with no dangling bulk indices) and pentagons (each with one bulk index) as the network grows radially outward. The associated network is depicted in figure \ref{fig:PentagonHexagon}. This change suffices to remove all the constant-weight logical operators acting nontrivially on the center, and in fact we can prove that this pentagon/hexagon code has an erasure threshold. Numerical studies show that erasure can be corrected by the greedy algorithm with high success probability for $p \le p_{c}^{\rm greedy} \approx 0.26$; the erasure threshold $p_c$ achieved by the optimal recovery method might be higher than $p_{c}^{\rm greedy}$ if the tensors have further special properties aside from just being perfect. \begin{figure}[htb!] \centering \subfloat[Pentagon/Hexagon code]{ \includegraphics[height=6cm]{PentagonHexagonCode}\hspace{1cm} \label{fig:PentagonHexagonCode}} \subfloat[One qubit code]{ \includegraphics[height=6cm]{1qubitHexagon} \label{fig:1qubitHexagon}} \caption{Tensor networks for holographic pentagon/hexagon codes with erasure thresholds, where neighboring polygons share contracted indices. In the network shown on the left, pentagons and hexagons alternate on the lattice; each pentagon carries one dangling bulk index, and hexagons carry no bulk degrees of freedom. The logical qubit residing on the central pentagon is well protected against erasure if the erasure probability on the boundary is below the threshold value $p_c$. In the network on the right, there is just a single bulk qubit located at the center; the rest of the network is similar to the holographic state constructed from hexagons only.
}\label{fig:PentagonHexagon} \end{figure} Since our main interest is in the reconstruction of the center of the bulk, in appendix \ref{App:ErasureThreshold} we study a code for which the only logical index resides at the center, also shown in figure \ref{fig:PentagonHexagon}. This code is almost the same as the holographic state obtained by contracting six-leg perfect tensors (hexagons), except that the tensor network contains one pentagon at the center; we therefore call it the \textit{single-qubit hexagon code}. We prove the existence of an erasure threshold for this code, and also derive an analytic lower bound on the threshold erasure rate $p_c \ge 1/12$. Numerical evidence indicates that the threshold is actually quite close to $p_c = 1/2$. The lower bound on the threshold is derived using a simplified and less powerful version of the greedy algorithm, the hierarchical recovery method, which begins at the boundary and proceeds inward toward the center of the bulk. A tensor at level $j+1$ of this hierarchy is connected to at least four tensors at level $j$, and the level-$(j+1)$ tensor is erased if two or more of its level-$j$ neighbors are erased. The proof proceeds by recursively deriving an upper bound on the erasure probability $p_j$ at level $j$, finding \begin{align} p_j \leq p_c \left(\frac{p}{p_c}\right)^{\lambda^j}, \end{align} where $p_c= 1/12$ and $\lambda = \frac{1+\sqrt{5}}{2}$. Thus the erasure probability for the central tensor drops doubly exponentially with the radius of the bulk if $p< p_c$, which means that the central tensor can be reconstructed on the set of unerased boundary qubits with very high probability. A tricky aspect of the proof is that, because a single level-$j$ tensor couples to two level-$(j+1)$ tensors, there are noise correlations which propagate from level to level. Fortunately, the hyperbolic geometry controls the spread of correlations, making the analysis manageable. 
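As an illustrative numerical check of this bound, the closed form above can be evaluated directly; this is a sketch, with the erasure rate $p$ chosen as an arbitrary sub-threshold example.

```python
import math

# Parameters taken from the text; p below is an assumed example erasure rate.
p_c = 1.0 / 12.0                      # proven threshold lower bound
lam = (1.0 + math.sqrt(5.0)) / 2.0    # golden-ratio growth exponent

def level_bound(p, j):
    """Upper bound on the erasure probability at hierarchy level j."""
    return p_c * (p / p_c) ** (lam ** j)

p = 0.05                              # assumed sub-threshold erasure rate
bounds = [level_bound(p, j) for j in range(11)]
# The bound falls doubly exponentially: each level roughly raises the
# previous suppression factor (p/p_c) to the power lambda ~ 1.618.
```

At level $j=0$ the bound reduces to $p$ itself, and for any $p < p_c$ the sequence collapses doubly exponentially, matching the statement that the central tensor survives erasure with very high probability.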
In fact, correlations beyond nearest neighbors never arise. This is one advantage of using the hierarchical recovery method rather than the greedy algorithm. A similar proof strategy may also be applied to other holographic codes. \subsection{Holographic stabilizer codes}\label{subsec:stabilizer} Stabilizer codes have been extensively studied in quantum coding theory, and are often used in applications to fault-tolerant quantum computing \cite{Gottesman2009}. Here we describe how to construct a family of holographic codes which are also stabilizer codes. We introduce the stabilizer formalism to pave the way for section \ref{subsec:enough}, where we study some geometrical properties of holographic stabilizer codes. Stabilizer codes can be defined for higher-dimensional spins as well, but here we will assume the spins are qubits for simplicity. A \textit{Pauli operator} acting on $n$ qubits is a tensor product of Pauli matrices, that is, one of the $4^n$ operators contained in the set \begin{equation} \{I,X,Y,Z\}^{\otimes n} \end{equation} where $I$ is the $2\times 2$ identity matrix and $X, Y, Z$ are the $2\times 2$ Pauli matrices (often denoted $\sigma_x,\sigma_y,\sigma_z$). We use $[[n,k]]$ to denote a quantum code with $k$ logical qubits embedded in a block of $n$ physical qubits. We say that an $[[n,k]]$ code is a \textit{stabilizer code} (also called an \textit{additive} quantum code), if the code space can be completely characterized as the simultaneous eigenspace of $n-k$ commuting Pauli operators. These commuting Pauli operators are called the code's \textit{stabilizer generators} because they generate an abelian group called the code's \textit{stabilizer group}. The special case of a $k=0$ stabilizer code is called a \textit{stabilizer state}. We say that an $n$-index tensor is a \textit{stabilizer tensor} if the corresponding $n$-qubit state is a stabilizer state. 
For example, the six-index perfect tensor is a perfect stabilizer tensor, and holographic codes defined by tiling a hyperbolic geometry with pentagons are stabilizer codes. More generally, we may formulate the following theorem: \begin{theorem} \label{thm:stabilizer} Consider a holographic code defined by a contracted network of perfect stabilizer tensors, and suppose that the greedy algorithm starting at the boundary reaches the entire network. Then the code is a stabilizer code. \end{theorem} \noindent To understand why Theorem \ref{thm:stabilizer} is true we need to see how to construct the code's stabilizer generators. To be concrete, consider holographic codes constructed from tilings by hexagons and pentagons. The six-index perfect tensor defines a [[6,0]] stabilizer code, whose stabilizers are enumerated in appendix \ref{sec:5qubit}. As we have already noted, it also defines isometries from any set of 1, 2, or 3 indices to the complementary set of indices; these isometries may be regarded as the encoding maps for $[[5,1]]$, $[[4,2]]$, and $[[3,3]]$ stabilizer codes respectively. To be specific, consider the $[[5,1]]$ code, and let $M$ denote its isometric encoding map taking a one-qubit input to the corresponding encoded state in the code block of five qubits. We can characterize $M$ by specifying how it acts on Pauli operators, which (together with the identity) span the space of operators acting on a single qubit. Since the Pauli group is generated by $X$ and $Z$ it suffices to specify \begin{equation} M: X\mapsto \bar X,\quad M:Z \mapsto \bar Z, \end{equation} where $\bar X$ and $\bar Z$ are the code's logical Pauli operators, given explicitly in appendix \ref{App:PerfectTensorExamples}. 
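The commutation bookkeeping behind these statements can be made concrete with the binary symplectic representation of Pauli strings. This is a minimal sketch: the four cyclic generators $XZZXI$, $IXZZX$, $XIXZZ$, $ZXIXZ$ are the standard presentation of the five-qubit code's stabilizer, and $\bar Z = ZZZZZ$ is the usual logical-$Z$ convention; overall signs (such as the $-1$ in $\bar X = -Z\otimes X\otimes Z\otimes I\otimes I$) are dropped since they do not affect commutation.

```python
import numpy as np

# Binary symplectic representation: a Pauli string maps to (x, z) bit
# vectors; two Paulis commute iff their symplectic inner product is 0 mod 2.
def pauli_to_xz(s):
    x = np.array([c in 'XY' for c in s], dtype=int)
    z = np.array([c in 'ZY' for c in s], dtype=int)
    return x, z

def commute(a, b):
    """True iff Pauli strings a and b commute (signs ignored)."""
    xa, za = pauli_to_xz(a)
    xb, zb = pauli_to_xz(b)
    return (xa @ zb + xb @ za) % 2 == 0

# Standard cyclic stabilizer generators of the [[5,1]] five-qubit code
# (an assumed but conventional presentation).
stabilizers = ['XZZXI', 'IXZZX', 'XIXZZ', 'ZXIXZ']

# The weight-3 logical operator quoted in the text (sign dropped), and
# the conventional logical Z-bar.
logical_x = 'ZXZII'
logical_z = 'ZZZZZ'
```

One can verify that the stabilizers mutually commute, that `logical_x` commutes with every stabilizer yet anticommutes with `logical_z`, which is exactly what makes it a nontrivial logical operator.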
Similarly, the action on Pauli operators defines isometric encoders for the $[[4,2]]$ and $[[3,3]]$ stabilizer codes, except that for \textit{e.g.} the $[[4,2]]$ code we specify the action on the four independent Pauli operators $X_1,X_2,Z_1,Z_2$, where the subscript $1,2$ labels the code's two logical qubits. For stabilizer codes the encoding isometry is always a \textit{Clifford isometry}, meaning its action by conjugation maps $k$-qubit Pauli operators to $n$-qubit Pauli operators. We already explained in section \ref{sec:model} that when the condition of Theorem \ref{thm:stabilizer} is satisfied then the encoding isometry for the holographic code can be obtained by composing the isometries associated with each perfect tensor in the network. A given tensor may have 0, 1, 2, or 3 incoming legs, including the dangling bulk leg (if the tensor is a pentagon) and all the incoming contracted legs, which are output legs from previously applied isometries. To prove Theorem \ref{thm:stabilizer} then, it is enough to know that composing the encoding isometries of two stabilizer codes yields the encoding isometry of a stabilizer code. To see how this works, it is helpful to think about the simple special case of a \textit{concatenated quantum code}, for which the tensor network is a tree. Consider in particular a code with just one logical qubit --- the central pentagon has one incoming logical leg and five outgoing legs, while every other tensor is a hexagon with one incoming leg and five outgoing legs. If the $[[5,1]]$ code is concatenated just once, the tensor network has five hexagons and describes a $[[25,1]]$ stabilizer code. To obtain this code's isometric map, we first apply the encoding isometry $M$ of the $[[5,1]]$ code to the logical qubit, and then apply $M$ again to \textit{each one} of the five outgoing qubits. 
If $S$ denotes the stabilizer group of the $[[5,1]]$ code, then the stabilizer of the $[[25,1]]$ code will include $S$ acting on each one of the five subblocks corresponding to the five hexagons in the tensor network. But it also includes elements which act collectively on four of the five hexagons. For example, as described in appendix \ref{sec:5qubit}, one of the stabilizer generators for the $[[5,1]]$ code is the Pauli operator $X\otimes Z\otimes Z\otimes X\otimes I$. The isometries associated with the five hexagons map this operator to $\bar X\otimes \bar Z\otimes \bar Z\otimes \bar X\otimes I$, where now $\bar X,\bar Z$ are the logical Pauli operators acting on the five outgoing qubits emanating from a single hexagon. The same idea applies to more general compositions of code isometries. Suppose that $\mathcal{S}_1, M_1$ are the stabilizer group and encoding isometry for an $[[n_1,k_1]]$ stabilizer code and that $\mathcal{S}_2, M_2$ are the stabilizer group and encoding isometry for an $[[n_2,k_2]]$ stabilizer code. We may apply $M_2$ to $m$ of the $n_1$ output qubits from $M_1$ along with $k_2-m$ additional input qubits (where $m\le n_1$ and $m \le k_2$), thus obtaining an $[[ n_1- m + n_2, k_1 + k_2-m]]$ code. In fact this code is a stabilizer code, whose stabilizer group is generated by $\mathcal{S}_2$ and $M_2(\mathcal{S}_1)$; here we use a streamlined notation, in which it is understood that operators and maps are extended by identity operators where necessary, and we note that the elements of $M_2(\mathcal{S}_1)$ are Pauli operators because $M_2$ is a Clifford isometry. Thus we have proven Theorem \ref{thm:stabilizer}. It is also worthwhile to note that the stabilizer group and encoding isometry for the holographic code can be efficiently computed by composing the isometries arising from the perfect tensors in the network. 
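The parameter bookkeeping for composing code isometries can be sketched in a few lines; this is only an accounting of $[[n,k]]$ labels, not of the isometries themselves, and it reproduces the $[[25,1]]$ concatenation example discussed above.

```python
# Composing stabilizer-code isometries: feeding m outputs of an
# [[n1, k1]] encoder into an [[n2, k2]] encoder (together with k2 - m
# fresh inputs) yields an [[n1 - m + n2, k1 + k2 - m]] code, as stated
# in the text.
def compose(code1, code2, m):
    n1, k1 = code1
    n2, k2 = code2
    assert m <= n1 and m <= k2
    return (n1 - m + n2, k1 + k2 - m)

# One level of concatenation of the [[5,1]] code: encode the logical
# qubit, then re-encode each of the five output qubits (m = 1 each time).
code = (5, 1)
for _ in range(5):
    code = compose(code, (5, 1), m=1)
# code is now (25, 1), the [[25,1]] concatenated code from the text.
```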
\subsection{Are local gauge constraints enough?}\label{subsec:enough} It has recently been argued that in AdS/CFT gauge constraints in the boundary CFT may pick out a small enough subspace of states to explain the error correcting properties of AdS/CFT \cite{Mintun2015}. The idea is that any gauge-invariant state already possesses some non-local entanglement via the imposition of the gauge constraints, and that this might be enough to resolve the various paradoxes of \cite{Almheiri14}.\footnote{The word ``gauge'' is sometimes used in quantum information theory in a way that is non-standard from the point of view of quantum field theorists. In quantum field theory, states that are not gauge-invariant have no physical interpretation, and are not really part of the Hilbert space of the theory; they appear only as a mathematical convenience. This is what the authors of \cite{Mintun2015} meant by gauge constraints, and it is what we mean here.} We can try to test this idea for the holographic stabilizer codes discussed in section \ref{subsec:stabilizer}. Since gauge constraints are spatially local, the argument of Ref. \cite{Mintun2015} suggests that the code's stabilizer group should be \textit{locally generated}, in the sense that it has a complete set of generators, each with support on a constant number of neighboring boundary qubits. In fact, though, holographic stabilizer codes do not have this property in cases where the greedy entanglement wedge reaches outside the causal wedge. This failure of local generation poses no problem for the proposal of \cite{Almheiri14}, however, as those authors argued that energetic constraints should also be included in defining the code subspace. Consider for example the disconnected boundary region $A=A_1\cup A_2$ in the pentagon code, depicted in figure \ref{pentagondcfig}. 
We have already seen that the full logical algebra of the central pentagon can be reconstructed on the disconnected region $A_1\cup A_2$, but that no nontrivial logical operator acting on the central pentagon is supported on either one of the connected components $A_1$, $A_2$. In a stabilizer code, a logical Pauli operator supported on $A_1\cup A_2$ is a tensor product $\mathcal{O}=\mathcal{O}_{A_1}\otimes \mathcal{O}_{A_2}$ of Pauli operators supported on $A_1$ and $A_2$ separately. In order to preserve the code space, this logical Pauli operator must commute with all of the code's stabilizer generators. But if the two components $A_1$ and $A_2$ are distantly separated and the stabilizer generators are geometrically local, then no stabilizer generator has nontrivial support on both $A_1$ and $A_2$. Any stabilizer generator with no support on $A_2$ trivially commutes with $\mathcal{O}_{A_2}$, and if it commutes with $\mathcal{O}$ then it must commute with $\mathcal{O}_{A_1}$ as well. Likewise, a stabilizer generator with no support on $A_1$ must commute with $\mathcal{O}_{A_2}$ if it commutes with $\mathcal{O}$. Therefore $\mathcal{O}_{A_1}$ and $\mathcal{O}_{A_2}$ are logical operators, and at least one is nontrivial if their product is, contradicting the hypothesis that no nontrivial logical operator is supported on either connected component of $A$. The conclusion is that the stabilizer generators cannot be geometrically local. The above argument applies even to higher-dimensional holographic stabilizer codes. In the case where the boundary is one dimensional, we may simply appeal to a known result in quantum coding theory, that a stabilizer code in one dimension with geometrically local generators has constant distance~\cite{Bravyi09, Pastawski15}. Therefore, a one-dimensional code with a local stabilizer cannot have a positive erasure threshold. 
\section{Black holes and holography}\label{sec:black} In holographic codes, bulk operators are reconstructed only on a subspace of the boundary Hilbert space. This may seem troubling, since the holographic correspondence is supposed to assign a bulk interpretation to all possible states on the boundary. A resolution of this confusion was proposed in \cite{Almheiri14} --- a particular bulk operator might not always be reconstructable because it lies deep inside a black hole for most boundary states.\footnote{We are currently agnostic about the reconstruction of bulk operators just inside the horizon, which must be needed in some form to describe the experience of an infalling observer. This is a topic of much recent controversy \cite{Almheiri13, Harlow2014}, but we will not take sides here.} In fact we can see this directly in our models if we incorporate black holes in a manner that we now describe. To illustrate the idea, consider the pentagon code, but with the central tensor removed. The central tensor's one free bulk index has been replaced by five bulk indices, those which had previously been contracted with legs of the missing pentagon; the tensor network now provides an isometry mapping these five indices, together with the bulk legs on the remaining pentagons, to the boundary. Thus the code subspace of the boundary Hilbert space is larger than for the pure pentagon code. We interpret this enlarged code space as describing configurations of the bulk with a black hole in the center, whose microstate is determined by the input to the new bulk legs. The entropy of the black hole is the logarithm of the dimension of the Hilbert space of black hole microstates, or \begin{equation} S_{BH}=\log_2\left(2^5 - 2\right)\approx 4.9, \end{equation} since only four of the bulk spins are new and we shouldn't count states that were part of the original pentagon code subspace. We depict this construction in figure \ref{fig_black_hole}a. 
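The entropy count above is simple arithmetic and can be checked directly: five exposed bulk legs give $2^5$ microstates, from which the two states already present in the original pentagon code subspace are subtracted.

```python
import math

# Bookkeeping for the black-hole entropy quoted in the text: removing the
# central pentagon exposes five bulk legs (2^5 microstates), minus the two
# states that belonged to the original code subspace.
n_microstates = 2 ** 5 - 2
S_BH = math.log2(n_microstates)   # ~ 4.9 bits of black-hole entropy
```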
\begin{figure} \centering \subfloat[Black hole]{ \includegraphics[width=0.35\linewidth]{BlackHole} \label{fig:BlackHole} }\hspace{1cm} \subfloat[Wormhole]{ \includegraphics[width=0.40\linewidth]{Wormhole} \label{fig:Wormhole} } \caption{A black hole in a holographic code, and the corresponding wormhole geometry. } \label{fig_black_hole} \end{figure} We can construct larger black holes by removing more central layers of the network; it is clear that their entropy scales with their horizon area, as predicted by Bekenstein and Hawking \cite{Bekenstein73, Hawking75}. As the black hole grows, the number of bulk legs outside the black hole decreases, so we can reconstruct fewer and fewer bulk local operators. Eventually the black hole eats up the entire network, and our isometry becomes trivial (and unitary). Thus our model really does assign a bulk interpretation to all boundary states, as demanded by AdS/CFT --- most boundary states correspond to large black holes in the bulk. It is amusing to note that we can also describe configurations corresponding to the two-sided wormhole of \cite{Maldacena03}; we just prepare two networks with central black holes of equal size, and maximally entangle the bulk legs at their horizons, as shown in figure \ref{fig_black_hole}b. It would be interesting to make contact with recent speculations about how the length of the wormhole relates to the complexity of the tensor network describing the state \cite{Hartman2013, Susskind2014, Roberts2014}, although for that purpose we would probably need to incorporate dynamics into our model. \section{Open problems and outlook}\label{sec:conclude} A remarkable convergence of quantum information science and quantum gravity has accelerated recently, propelled in particular by a vision of quantum entanglement as the foundation of emergent geometry. We expect this interface area to continue to grow in importance, as practitioners in both communities struggle to develop a common language and toolset. 
This paper was spurred by the connection between AdS/CFT and quantum error correction proposed in \cite{Almheiri14}. We have strived to make this connection more concrete and accessible by formulating toy models which capture the key ideas, and we hope our account will equip a broader community of scientists to contribute to further progress. Indeed, much remains to be done. First of all, the entanglement structure of holographic codes is not yet completely understood. We would like a more precise characterization of the violations of the Ryu-Takayanagi formula which can occur, and of the relationship between bulk residual regions and the multipartite entanglement of the boundary state. How is the greedy entanglement wedge different from the geometric entanglement wedge, and to what extent does the greedy entanglement wedge reach beyond the causal wedge? We have not yet discussed the correlation functions of boundary observables in holographic codes because we do not have much to say. In a stabilizer state $|\psi\rangle$, where $P$ and $Q$ are Pauli operators, the expectation value $\langle \psi| PQ|\psi\rangle$ is either zero (if $PQ$ anticommutes with an element of the stabilizer) or a phase (if $PQ$ commutes with the stabilizer); the same conclusion applies to a stabilizer code unless $PQ$ is a nontrivial logical operator preserving the code subspace. In contrast, two-point correlations in a CFT decay algebraically with distance; how might we recover this behavior in holographic codes? Perhaps algebraic decay is recovered for non-stabilizer holographic states, by defining suitable coarse-grained observables, or by injecting an encoded state such that bulk correlation functions decay exponentially as in Ref. \cite{Qi13}. Or we might replace perfect tensors by tensors which are nearly perfect. 
The behavior of two-point correlators highlights one way our toy models differ from full-blown AdS/CFT, but there are other ways as well; for one, there is no obvious analog of diffeomorphism invariance in a lattice model. What features in our lattice model correspond to the $1/N$ corrections in the continuum theory? In AdS/CFT the AdS radius is large compared to the Planck scale when the bulk theory is weakly coupled, yet in the pentagon model for example the curvature scale is comparable to the lattice cutoff. To approximate flatter bulk geometries we should study more general tessellations, including higher dimensional ones. A particularly serious drawback of our toy models so far is that we have not introduced any bulk or boundary dynamics. Can holographic codes illuminate dynamical processes like the formation and evaporation of a black hole? Finally, we have emphasized that holographic states and codes provide a concrete realization of some aspects of AdS/CFT, but they may also be interesting for other reasons, for example as models of topological matter. Furthermore, holographic codes generalize the concatenated quantum codes that have been extensively used in discussions of fault-tolerant quantum computing \cite{Gottesman2009}, and might likewise be applied for the purpose of protecting quantum computers against noise. For this application it would be valuable to develop the theory of holographic codes in a variety of directions, such as studying tradeoffs between rate and distance, formulating efficient schemes for correcting more general errors than erasure errors, and finding ways to realize a universal set of logical operations acting on the code space. \section*{Acknowledgment} We thank Ning Bao, Oliver Buerschaper, Glen Evenbly, Daniel Gottesman, Aram Harrow, Isaac Kim, Seth Lloyd, Nima Lashkari, Hirosi Ooguri, Grant Salton, Kristan Temme, Guifre Vidal and Xiaoliang Qi for useful comments and discussions. 
We also have enjoyed discussions with Ahmed Almheiri, Xi Dong, and Brian Swingle, and with Matthew Headrick, about their independent and upcoming related work. FP, BY, and JP acknowledge funding provided by the Institute for Quantum Information and Matter, a NSF Physics Frontiers Center with support of the Gordon and Betty Moore Foundation (Grants No. PHY-0803371 and PHY-1125565). BY is supported by the David and Ellen Lee Postdoctoral fellowship. DH is supported by the Princeton Center for Theoretical Science.
\section{Introduction} Phase imaging plays a crucial role in the fields of optical, X-ray, and electron microscopy \cite{optical1,optical2,x-ray,electron}. In microscopic imaging, the phase of biological cells and tissues carries important information about their structure and intrinsic optical properties. Although this information cannot be directly recorded by a digital detector (CCD or CMOS), Zernike phase contrast microscopy \cite{PCM} and differential interference contrast (DIC) microscopy \cite{DIC} provide reliable phase contrast for transparent cells and weakly absorbing objects by converting phase into intensity. However, these techniques can only be used for visual, qualitative imaging rather than providing quantitative maps of phase change, which makes quantitative data interpretation and phase reconstruction difficult. Quantitative phase imaging (QPI) is a powerful tool for wide-ranging biomedical research and the characterization of optical elements: it enables quantitative reconstruction of sample information thanks to its label-free operation and its unique capability to image the phase, or optical path thickness, of cells, tissues, and optical fibers. As the conventional interferometric approach to QPI, off-axis digital holographic microscopy (DHM) \cite{DMH1,DMH2} quantitatively measures the phase delay introduced by the heterogeneous refractive index distribution within the specimen. Such a method requires a coherent illumination source and a relatively complicated, vibration-sensitive optical system, and the speckle noise of the laser further degrades the spatial resolution of the phase image. 
By contrast, non-interferometric QPI approaches based on common-path geometries and white-light illumination \cite{wl1,wl2,wl3} have been developed to alleviate the problem of coherent noise and to enhance stability against mechanical vibrations, so that the spatial resolution and imaging quality of the phase measurement are greatly improved. Nevertheless, these quantitative phase measurements still rely on spatially coherent illumination; the maximum achievable resolution of phase imaging depends only on the numerical aperture (NA) of the objective and is restricted by the coherent diffraction limit. On the other hand, deterministic phase retrieval can also be realized through the transport of intensity equation (TIE) \cite{TIE1,TIE2,TIE3}, using only object-field intensities at multiple axially displaced planes. The TIE linearizes the relationship between the phase and the derivative of intensity along the axis of propagation \cite{TIE1}, so the phase can be uniquely determined by solving the TIE with the intensity image and the longitudinal intensity derivative on the in-focus plane. QPI based on the TIE has been increasingly investigated for micro-optics inspection and dynamic phase imaging of biological processes in recent years because of its unique advantages over interferometric techniques: it achieves quantitative reconstruction without complicated interferometric optical configurations, a reference beam, laser illumination sources, or phase unwrapping \cite{TIE_Appl1,TIE_Appl2,TIE_Appl3}. It has been demonstrated that non-interferometric phase retrieval methods based on the TIE can be well adapted to partially coherent illumination \cite{PC_TIE0,PC_TIE1,PC_TIE2,PC_TIE3}, despite the fact that the original derivation of the TIE assumes the paraxial approximation and coherent illumination. 
Because of the non-linear relationship among the intensity image of the object, the illumination source, and the optical system under a partially coherent field, the imaging process and its mathematical modeling become more complicated than in the coherent situation \cite{Partial_Con1,Partial_Con2}. Nevertheless, TIE phase retrieval can be reformulated informatively using the concept of the weak object transfer function (WOTF), under weak-defocus assumptions and ignoring the bilinear terms originating from the self-interference of scattered light \cite{WOTF1,WOTF2,WOTF3,WOTF4}. The WOTF describes the frequency-domain response of phase and absorption for a given optical imaging system; it is also called the contrast transfer function (CTF) in the field of propagation-based X-ray phase imaging \cite{CTF1,CTF2}. Although the phase reconstructed from the TIE is not well defined over a partially coherent field, this ``phase'' has been shown, using the theory of coherent mode decomposition, to be the weighted average of the phases obtained under the constituent coherent illuminations \cite{Partial_decomp}, and it can be converted to the well-defined optical path length of the sample \cite{PC_TIE2}. The physical meaning of phase for a partially coherent field is related to the transverse Poynting vector \cite{PC_TIE0} or the Wigner distribution moment \cite{Winger} as well. Under coherent illumination, the transfer function is truncated at the objective NA, and the poor response of the TIE at low spatial frequencies amplifies noise and leads to cloud-like artifacts superimposed on the reconstructed phases \cite{TIE3,TIE_Appl1}. In the case of partially coherent light, by contrast, the maximum achievable resolution of phase imaging is extended beyond the coherent case to the sum of the objective NA and the illumination NA, where the ratio of illumination NA to objective NA is called the coherence parameter $s = N{A_{ill}}/N{A_{obj}}$. 
As the parameter $s$ increases (with $N{A_{ill}} \le N{A_{obj}}$ in practice), the phase contrast of the defocused intensity image diminishes dramatically because of the attenuated response of the transfer function. When the illumination NA approaches the objective NA, the spatial cutoff frequency increases to twice the objective NA, as predicted by the WOTF \cite{WOTF1,OTF2}, but the low-contrast intensity images then suffer from a signal-to-noise ratio (SNR) too low to recover the phase from the defocused measurements. The imaginary part of the WOTF at a large defocus distance rises faster near zero frequency than at a small defocus distance, so most multi-plane TIE phase retrieval methods select the low-frequency content from a large defocus as the optimal one \cite{WOTF4,PC_TIE3}. However, the phase transfer function at large defocus distances contains too many zero crossings, owing to the oscillation of the sine function, and these points make it almost impossible to recover the high-frequency information of the phase. In this paper, we present a highly efficient QPI approach which combines an annular aperture with programmable LED illumination by replacing the traditional halogen illumination source with an LED array within a conventional transmission microscope. An annular illumination pattern matched to the objective pupil is displayed on the LED array, and each isolated LED is treated as a coherent source. The WOTF of an axisymmetric oblique source at an arbitrary position on the source pupil plane is derived, and the principle of the discrete annular LED illumination pattern is validated. 
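The zero-crossing problem at large defocus can be made concrete with the paraxial coherent-illumination phase transfer function, which takes the standard form $\sin(\pi\lambda z|\mathbf{u}|^2)$ in the TIE/CTF literature. The sketch below counts its sign changes below the incoherent cutoff $2\,NA/\lambda$; all numerical parameter values are assumed for illustration, not taken from the experiments reported here.

```python
import numpy as np

# Paraxial coherent PTF sin(pi * lambda * z * |u|^2) (standard CTF form);
# wavelength and NA below are assumed illustration values.
wavelength = 0.5e-6          # 500 nm
NA = 0.25
u_max = 2 * NA / wavelength  # incoherent-limit cutoff frequency

def zero_crossings(z, n=4000):
    """Count sign changes of the coherent PTF on (0, u_max] at defocus z."""
    u = np.linspace(0.0, u_max, n + 1)[1:]           # skip u = 0
    ptf = np.sin(np.pi * wavelength * z * u ** 2)
    return int(np.sum(np.sign(ptf[:-1]) != np.sign(ptf[1:])))

small = zero_crossings(1e-6)    # 1 um defocus: no zeros below the cutoff
large = zero_crossings(20e-6)   # 20 um defocus: many zeros below the cutoff
```

At small defocus the PTF stays single-signed (but weak at low frequencies), while at large defocus its many nulls destroy information at the crossed frequencies, matching the trade-off described above.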
Not only can the spatial resolution of the final reconstructed phase be extended to twice the objective NA, but the phase contrast of the defocused intensity image is also strong, because the response of the phase transfer function (PTF) with an annular source remains roughly constant across a wide range of frequencies, which is an ideal form for noise-robust, high-resolution, and well-posed phase reconstruction. Although this TIE-based QPI approach using annular illumination was reported by our group in an earlier paper \cite{AI_TIE}, and LED arrays have also been employed for Fourier ptychography \cite{FP1,FP2} and other QPI modalities \cite{QP_LED1,QP_LED2}, the novelty of this work is to derive the WOTF for an axisymmetric oblique source and to generalize this discrete source to the superposition of arbitrary axisymmetric illumination patterns, such as circular illumination, annular illumination, or any other axisymmetric illumination. Furthermore, the combination of annular illumination with a programmable LED array makes the modulation of illumination more flexible, without the need for an anodized and dyed circular glass plate or customized 3D-printed annuli \cite{AI_TIE}. These advantages make it a competitive and powerful alternative to traditional bright-field illumination approaches for a wide variety of biomedical investigations, micro-optics inspection, and biophotonics. Noise-free and noisy simulation results validate the applicability of the discrete annular source, and quantitative phase measurements of a micro polystyrene bead and a visible blazed transmission grating demonstrate the accuracy of the method. Experimental investigations of unstained human cancer cells using different types of objectives are presented, and these results show the potential for widespread adoption of QPI in the study of cellular morphology and in the biomedical community. 
\section{Principle} \subsection{WOTF for axisymmetric oblique source} In the standard 6$f$ optical configuration, illustrated in Figure 1 of \cite{WOTF1}, an object is illuminated by a K\"ohler illumination source and imaged via an objective lens. The image formation of this telecentric microscopic imaging system can be described by a Fourier transform and a linear filtering operation in the pupil plane \cite{Partial_Con1}. For the incoherent case, the intensity image is given by the convolution equation $I\left( \mathbf{r} \right)={{\left| h\left( \mathbf{r} \right) \right|}^{2}}\otimes {{\left| t\left( \mathbf{r} \right) \right|}^{2}}\text{=}{{\left| h\left( \mathbf{r} \right) \right|}^{2}}\otimes {{I}_{u}}\left( \mathbf{r} \right)$, where $h$ denotes the amplitude point spread function (PSF) of the imaging system, $t$ is the complex amplitude, and ${I_u}$ represents the intensities of coherent partial images arising from all light source points. In the coherent case, by contrast, the intensity obeys $I\left( \mathbf{r} \right) = {\left| {h\left( \mathbf{r} \right) \otimes t\left( \mathbf{r} \right)} \right|^2}$. Thus, the incoherent system is linear in intensity, whereas the coherent system is highly nonlinear in that quantity \cite{Partial_Con1}. More information about how the intensity is obtained under partially coherent illumination can be found in Appendix A. Because this image formation is linear in neither amplitude nor intensity, the mathematical derivation of phase recovery becomes more complicated for a partially coherent system \cite{Partial_Con1,Partial_Con2}. 
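The contrast between the two imaging regimes can be illustrated with a toy one-dimensional simulation; the Gaussian PSF and the two point-like objects below are arbitrary choices for illustration only.

```python
import numpy as np

# 1D toy illustration: incoherent imaging is linear in intensity,
# coherent imaging is not.
x = np.arange(-32, 33)
h = np.exp(-x.astype(float) ** 2 / 18.0)      # assumed Gaussian amplitude PSF

t1 = np.zeros(65); t1[28] = 1.0               # two point objects with
t2 = np.zeros(65); t2[36] = 1.0               # disjoint supports

def coherent(t):
    # |h (x) t|^2 : convolve the amplitude, then take the intensity
    return np.abs(np.convolve(h, t, mode='same')) ** 2

def incoherent(t):
    # |h|^2 (x) |t|^2 : convolve the intensities directly
    return np.convolve(np.abs(h) ** 2, np.abs(t) ** 2, mode='same')

# Incoherent: the image of t1 + t2 equals the sum of the separate images.
# Coherent: an interference cross term spoils that additivity.
inc_err = np.max(np.abs(incoherent(t1 + t2) - incoherent(t1) - incoherent(t2)))
coh_err = np.max(np.abs(coherent(t1 + t2) - coherent(t1) - coherent(t2)))
```

The incoherent residual vanishes (to rounding error), while the coherent residual is of order the interference cross term, which is exactly the nonlinearity the text refers to.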
To simplify the theoretical modeling, one approach is to assume that the observed sample is a weak phase object, so that the first-order Taylor expansion of the complex amplitude can be written as: \begin{equation}\label{1} t\left( {\bf{r}} \right) \equiv a\left( {\bf{r}} \right)\exp \left[ {i\phi \left( {\bf{r}} \right)} \right] \approx a\left( {\bf{r}} \right)\left[ {1 + i\phi \left( {\bf{r}} \right)} \right] \approx {a_0} + \Delta a\left( {\bf{r}} \right) + i{a_0}\phi \left( {\bf{r}} \right) \end{equation} where $a\left( {\bf{r}} \right) = {a_0} + \Delta a\left( {\bf{r}} \right)$ is the amplitude with a mean value of ${a_0}$, and $\phi \left( {\bf{r}} \right)$ is the phase distribution. Applying the Fourier transform to $t$ and multiplying it with its conjugate, the interference terms of the object function (bilinear terms) can be neglected because the scattered light is weak compared with the un-scattered light for a weak phase object. The complex conjugate multiplication can then be approximated as: \begin{equation}\label{2} \begin{aligned} T\left( {{{\bf{u}}_1}} \right){T^*}\left( {{{\bf{u}}_2}} \right) = & a_0^2\delta \left( {{{\bf{u}}_1}} \right)\delta \left( {{{\bf{u}}_2}} \right) + {a_0}\delta \left( {{{\bf{u}}_2}} \right)\left[ {\Delta \widetilde a\left( {{{\bf{u}}_1}} \right) + i{a_0}\widetilde \phi \left( {{{\bf{u}}_{{1}}}} \right)} \right] \\ &+ {a_0}\delta \left( {{{\bf{u}}_1}} \right)\left[ {\Delta {{\widetilde a}^*}\left( {{{\bf{u}}_2}} \right) - i{a_0}{{\widetilde \phi }^*}\left( {{{\bf{u}}_2}} \right)} \right]. \end{aligned} \end{equation} The approximation used in Eq. (\ref{2}) corresponds to the first-order Born approximation, which is commonly used in optical diffraction tomography \cite{ODT0,ODT1}. 
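The accuracy of the weak-object linearization can be checked numerically: its residual should shrink quadratically with the perturbation strength, consistent with a first-order expansion. The random test pattern below is an arbitrary illustration choice.

```python
import numpy as np

# Numerical check that the weak-object approximation
# t = (a0 + da) * exp(i*phi)  ~  a0 + da + i*a0*phi
# has an error of second order in the perturbation strength.
rng = np.random.default_rng(0)
a0 = 1.0
da_shape = rng.standard_normal(64)    # assumed amplitude-variation pattern
phi_shape = rng.standard_normal(64)   # assumed phase pattern

def approx_error(eps):
    da, phi = eps * da_shape, eps * phi_shape
    t_exact = (a0 + da) * np.exp(1j * phi)
    t_weak = a0 + da + 1j * a0 * phi
    return np.max(np.abs(t_exact - t_weak))

# Halving eps should reduce the residual by roughly a factor of four.
e1, e2 = approx_error(1e-2), approx_error(5e-3)
```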
When the two cross-related points coincide in the frequency domain, the intensity image under the partially coherent field for a weak object can be rewritten by substituting Eq. (\ref{2}) into Eq. (\ref{27}) of Appendix A: \begin{equation}\label{3} I\left( {\bf{r}} \right) = a_0^2TCC\left( {0;0} \right) + 2{a_0}{\mathop{\rm Re}\nolimits} \left\{ {\int {TCC\left( {{\bf{u}};0} \right)\left[ {\Delta \widetilde a\left( {\bf{u}} \right) + i{a_0}\widetilde \phi \left( {\bf{u}} \right)} \right]\exp \left( {i2\pi {\bf{ru}}} \right)d{\bf{u}}} } \right\} \end{equation} where $TCC^{\rm{*}}\left( {0;{\bf{u}}} \right)$ is equal to $TCC\left( {{\bf{u}};0} \right)$ due to the conjugate symmetry of the transmission cross coefficient (TCC). The intensity contributions of the various system components (e.g., source and object) are separated and decoupled in Eq. (\ref{3}), and $TCC\left( {{\bf{u}};0} \right)$ can be expressed as the WOTF: \begin{equation}\label{4} WOTF\left( \mathbf{u} \right)\equiv TCC\left( \mathbf{u};0 \right)=\iint{S\left( {{\mathbf{u}}^{'}} \right)}P^*\left( {{\mathbf{u}}^{'}} \right)P\left( {{\mathbf{u}}^{'}}+\mathbf{u} \right)d{{\mathbf{u}}^{'}} \end{equation} where ${\mathbf{u}}^{'}$ represents the integration variable in Fourier coordinates. The WOTF is real and even as long as the distributions of the source $S\left( {\bf{u}} \right)$ and the objective pupil $P\left( {\bf{u}} \right)$ are axisymmetric; the in-focus intensity image then gives no phase contrast, only absorption contrast. Asymmetric illumination methods can produce phase contrast in the in-focus intensity image by breaking the symmetry of $S\left( {\bf{u}} \right)$ or $P\left( {\bf{u}} \right)$; prominent examples are differential phase contrast microscopy \cite{Axisys,QP_LED1} and partitioned or programmable aperture microscopy \cite{Program_micro1,Program_micro2}.
Defocusing the optical system along the $z$ axis is another, more convenient way to produce phase contrast; it introduces an imaginary part into the pupil function: \begin{equation}\label{5} P\left( {\bf{u}} \right) = \left| {P\left( {\bf{u}} \right)} \right|{e^{ikz\sqrt {1 - {\lambda ^2}{{\left| {\bf{u}} \right|}^2}} }}, \left| {\bf{u}} \right|\lambda \le 1 \end{equation} where $z$ is the defocus distance along the optical axis. Substituting this complex pupil function into Eq. (\ref{4}) yields a complex WOTF: \begin{equation}\label{6} WOTF\left( {\bf{u}} \right) = \iint{ S\left( {{{\bf{u}}^{'}}} \right)\left| {{P^*}\left( {{{\bf{u}}^{'}}} \right)} \right|\left| {P\left( {{{\bf{u}}^{'}} + {\bf{u}}} \right)} \right|\exp \left[ {ikz\left( { - \sqrt {1 - {\lambda ^2}{{\left| {{{\bf{u}}^{'}}} \right|}^2}} {\rm{ + }}\sqrt {1 - {\lambda ^2}{{\left| {{\bf{u}}{\rm{ + }}{{\bf{u}}^{'}}} \right|}^2}} } \right)} \right]d{{\bf{u}}^{'}}} \end{equation} The transfer functions of the amplitude and phase components correspond to the real and imaginary parts of the WOTF, respectively: \begin{equation}\label{7} \begin{aligned} & {{H}_{A}}\left( \mathbf{u} \right)=2{{a}_{0}}\operatorname{Re}\left[ WOTF\left( \mathbf{u} \right) \right] \\ & {{H}_{P}}\left( \mathbf{u} \right)=2{{a}_{0}}\operatorname{Im}\left[ WOTF\left( \mathbf{u} \right) \right]. \end{aligned} \end{equation} \begin{figure}[!b] \centering \includegraphics[width=11.5cm]{Fig1.jpg} \caption{2D images of the PTF for different types of axisymmetric sources under weak defocusing conditions, and line profiles of the TIE and PTF for various defocus distances.} \label{} \end{figure} Considering that normally incident coherent illumination is a special case of oblique illumination, the WOTF for an oblique source is derived below within the same framework for both types of illumination.
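As a numerical illustration, the integral in Eq. (\ref{6}) can be evaluated on a discrete frequency grid as a cross-correlation computed with FFTs. The sketch below is our own (function and variable names are not from the paper) and assumes the source and pupil are sampled well inside the grid:

```python
import numpy as np

def wotf_numeric(S, absP, z, lam, fx, fy):
    # Evaluate Eq. (6) on a grid: WOTF(u) = integral of S(u') P*(u') P(u'+u) du'.
    # S and absP are 2D arrays sampled on the frequency grid (fx, fy);
    # z is the defocus distance and lam the wavelength.
    k = 2.0 * np.pi / lam
    FX, FY = np.meshgrid(fx, fy)
    rho2 = (lam * FX) ** 2 + (lam * FY) ** 2
    # complex defocused pupil, Eq. (5)
    P = absP * np.exp(1j * k * z * np.sqrt(np.clip(1.0 - rho2, 0.0, None)))
    # the integral is a cross-correlation of S*P with P, done with FFTs
    # (circular boundary conditions; adequate when the pupil fits inside the grid)
    corr = np.fft.ifft2(np.conj(np.fft.fft2(S * P)) * np.fft.fft2(P))
    du = (fx[1] - fx[0]) * (fy[1] - fy[0])
    return np.fft.fftshift(corr) * du
```

At $z=0$ with a real pupil this reduces to a real correlation, consistent with the in-focus case discussed above.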
Consider a pair of symmetric ideal light spots on the source pupil plane, each at distance ${\bm{\rho}_s}$ (normalized spatial frequency) from the center. The intensity distribution of this source pupil can be expressed as: \begin{equation}\label{8} S\left( \mathbf{u} \right)=\delta \left( \mathbf{u}-{{\bm{\rho }}_{s}} \right)\text{+}\delta \left( \mathbf{u}+{{\bm{\rho }}_{s}} \right) \end{equation} Substituting this source pupil function into Eq. (\ref{6}) results in a complex (but even) WOTF for the oblique case \begin{equation}\label{9} \begin{aligned} WOT{{F}_{obl}}\left( \mathbf{u} \right)\text{=} & \left| P\left( \mathbf{u}-{{\bm{\rho }}_{{s}}} \right) \right|{{e}^{ikz\left( -\sqrt{1-{{\lambda }^{2}}{{\left| {{\bm{\rho }}_{{s}}} \right|}^{2}}}\text{+}\sqrt{1-{{\lambda }^{2}}{{\left| \mathbf{u}-{{\bm{\rho }}_{{s}}} \right|}^{2}}} \right)}} \\ & + \left| P\left( \mathbf{u}+{{\bm{\rho }}_{{s}}} \right) \right|{{e}^{ikz\left( -\sqrt{1-{{\lambda }^{2}}{{\left| {{\bm{\rho }}_{{s}}} \right|}^{2}}}\text{+}\sqrt{1-{{\lambda }^{2}}{{\left| \mathbf{u}+{{\bm{\rho }}_{{s}}} \right|}^{2}}} \right)}} \end{aligned} \end{equation} where $\left| {P\left( {{\bf{u}} - {{\bm{\rho }}_{s}}} \right)} \right|$ and $\left| {P\left( {{\bf{u}} + {{\bm{\rho }}_{s}}} \right)} \right|$ are the pair of aperture functions shifted by the oblique coherent source in Fourier space. The aperture function for a circular objective pupil with normalized spatial radius ${{\bm{\rho }}_p}$ is given by \begin{equation}\label{10} \left| P\left( \mathbf{u} \right) \right|= \left\{ \begin{aligned} & 1,\quad \text{if }\left| \mathbf{u} \right|\le {{\bm{\rho }}_{p}} \\ & 0, \quad \text{if }\left| \mathbf{u} \right|>{{\bm{\rho }}_{p}}. \end{aligned} \right.
\end{equation} In the coherent case (${{\bm{\rho }}_{{s}}}{\rm{ = }}0$), the WOTF simplifies greatly: \begin{equation}\label{11} WOT{{F}_{coh}}\left( \mathbf{u} \right)\text{=}\left| P\left( \mathbf{u} \right) \right|{{e}^{ikz\left( -1\text{+}\sqrt{1-{{\lambda }^{2}}{{\left| \mathbf{u} \right|}^{2}}} \right)}}. \end{equation} The two aperture functions overlap completely in this situation, so the final coherent WOTF takes only half the value. The absorption contrast and phase contrast are given by the real and imaginary parts of $WOT{F_{coh}}$ via Euler's formula, as shown in Eq. (\ref{7}). By further invoking the paraxial approximation and replacing $\sqrt {1 - {\lambda ^2}{{\bf{u}}^2}} $ with $1 - {{{\lambda ^2}{{\bf{u}}^2}} \mathord{\left/{\vphantom {{{\lambda ^2}{{\bf{u}}^2}} 2}} \right. \kern-\nulldelimiterspace} 2}$, the imaginary part of $WOT{F_{coh}}$ can be written as the sine term $\sin \left( {\pi \lambda z{{\left| {\bf{u}} \right|}^2}} \right)$. Under weak defocusing, this transfer function can be further approximated by a parabolic function \begin{equation}\label{12} {{H}_{p}}{{\left( \mathbf{u} \right)}_{TIE}}\text{=} \left| P\left( \mathbf{u} \right) \right| \sin \left( {\pi \lambda z{{\left| {\bf{u}} \right|}^2}} \right) \approx \left| P\left( \mathbf{u} \right) \right| \pi \lambda z{\left| {\bf{u}} \right|^2} \end{equation} This Laplacian operator corresponds to the PTF of the TIE in the Fourier domain, and the two-dimensional (2D) image of the WOTF for a coherent source under weak defocusing is shown in Fig. 1(a1). The line profiles of the TIE and PTF for various defocus distances are illustrated in Fig. 1(a2). The transfer-function profile of the TIE is consistent with the PTF at low frequencies for a weak defocus distance (0.5 $\mu$m), and the coherent transfer function approaches the TIE as the defocus distance decreases.
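This weak-defocus limit is easy to check numerically. The short script below (a sketch with illustrative parameter values of our own choosing) compares the exact coherent PTF with the parabolic TIE filter of Eq. (\ref{12}) on a 1D frequency axis:

```python
import numpy as np

# Compare the coherent PTF magnitude sin[kz(1 - sqrt(1 - lambda^2 u^2))]
# with its weak-defocus TIE approximation pi*lambda*z*|u|^2.
lam = 530e-9                           # wavelength [m], illustrative
z = 0.5e-6                             # weak defocus distance [m], illustrative
k = 2.0 * np.pi / lam
u = np.linspace(0.0, 0.5 / lam, 512)   # spatial frequency [1/m]

ptf_coh = np.sin(k * z * (1.0 - np.sqrt(1.0 - (lam * u) ** 2)))
ptf_tie = np.pi * lam * z * u ** 2     # parabolic TIE filter

# the two curves agree closely at low frequencies for weak defocus
low = u < 0.05 / lam
```

The agreement at low frequencies mirrors the consistency between the TIE and PTF profiles seen in Fig. 1(a2).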
In other words, the TIE is a special case of the coherent transfer function under weak defocusing. On the other hand, when the two coherent points do not coincide at the center of the source plane, as shown in Fig. 1(b1) and (c1), the imaginary part of Eq. (\ref{9}) is limited by the respective pupil functions, and the PTF for an oblique point source can be written as: \begin{equation}\label{13} \begin{aligned} {{H}_{p}}{{\left( \mathbf{u} \right)}_{obl}}\text{=}& \frac{1}{2} \left| P\left( \mathbf{u}-{{\bm{\rho }}_{{s}}} \right) \right|\sin \left[ kz\left( \sqrt{1-{{\lambda }^{2}}{{\left| \mathbf{u}-{{\bm{\rho }}_{{s}}} \right|}^{2}}}-\sqrt{1-{{\lambda }^{2}}{{\left| {{\bm{\rho }}_{{s}}} \right|}^{2}}} \right) \right] \\ & + \frac{1}{2}\left| P\left( \mathbf{u}+{{\bm{\rho }}_{{s}}} \right) \right|\sin \left[ kz\left( \sqrt{1-{{\lambda }^{2}}{{\left| \mathbf{u}+{{\bm{\rho }}_{{s}}} \right|}^{2}}}-\sqrt{1-{{\lambda }^{2}}{{\left| {{\bm{\rho }}_{{s}}} \right|}^{2}}} \right) \right] \end{aligned} \end{equation} Figure 1(b2) and (c2) show the PTF curves for different ${{\bm{\rho }}_s}$ and defocus distances. The cutoff frequency of the transfer function is determined by the shifted aperture functions, and the achievable imaging resolution, equal to ${{\bm{\rho }}_{{p}}}{\rm{ + }}{{\bm{\rho }}_{{s}}}$, increases with ${{\bm{\rho}}_{s}}$ in the oblique direction. Nevertheless, the transfer-function profile has two jump edges due to the overlap and superposition of the two shifted objective pupil functions. These jump edges induce zero-crossings and degrade the frequency response around them, so they should be avoided as much as possible.
When this pair of point sources matches the objective pupil (${{\bm{\rho }}_{{p}}}{\rm{ \approx }}{{\bm{\rho }}_{{s}}}$), the cutoff frequency of the PTF is extended to twice the coherent diffraction limit, and the frequency response of the PTF is roughly constant in a given direction under this axisymmetric oblique illumination. \subsection{Validation of discrete annular LED illumination} \begin{figure}[!b] \centering \includegraphics[width=12.5cm]{Fig2.jpg} \caption{(a-c) 2D images of the PTF and line profiles for three different types of discrete annular illumination patterns at various defocus distances. (d) Traditional circular diaphragm aperture and the corresponding PTF.} \label{} \end{figure} For any axisymmetric shape of partially coherent illumination, the illumination pattern can be discretized into many coherent point sources of finite physical size, including obliquely and normally incident points. The image formation of an optical microscope under a partially coherent field can be understood simply as a convolution with a magnified replica of each discrete coherent source. Moreover, for optical imaging with K\"ohler illumination, this process coincides with the incoherent superposition of the intensities of the coherent partial images arising from all discrete source points \cite{Partial_Con1,Partial_decomp}. As the condenser aperture iris diaphragm opens, the maximum achievable imaging resolution of the intensity image increases and the depth of field (DOF) becomes shallower. However, the phase contrast (as well as the absorption contrast) of the defocused image weakens, and this attenuation of the phase effect in the captured intensity image reduces the SNR of the phase reconstruction as the coherence parameter $s$ continues to grow \cite{WOTF1,AI_TIE}.
Most microscope instruction manuals therefore recommend setting the parameter $s$ between 0.7 and 0.8 to balance image resolution and contrast. To overcome the tradeoff between image contrast and resolution, we present a highly efficient programmable annular illumination that differs from the traditional circular diaphragm aperture for QPI microscopy. The LED array is placed at the front focal plane of the condenser to illuminate the specimen, and each single LED can be controlled separately. A test image of 512 $\times$ 512 pixels with a pixel size of 0.176 $\mu$m $\times$ 0.176 $\mu$m and an objective with 0.75 NA are employed to simulate the discrete LED array and validate the annular LED illumination. When a pair of oblique illumination points lies on the edge of the source pupil, the imaging resolution is twice the objective NA in the oblique direction, as shown in Fig. 1(c). Thus, three different types of discrete annular patterns and one circular pattern are compared in terms of the WOTF under the same system parameters. The annular source can be written as a summation of delta functions \begin{equation}\label{14} S({\bf{u}}) = \sum\limits_{i = 1}^N {\delta ({\bf{u}} - {{\bf{u}}_i})},\quad \left| {{{\bf{u}}_i}} \right| \approx \left| {{{\bm{\rho }}_p}} \right| \end{equation} where $N$ is the number of discrete light points on the source plane. Figure 2 shows the 2D images and line profiles of the imaginary part of the WOTF for various annular illumination patterns and defocus distances. In Fig. 2(a), four LEDs are placed at the top, bottom, left, and right of the source plane, so twice the objective-NA imaging resolution is obtained in the vertical and horizontal directions.
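Combining Eq. (\ref{14}) with Eq. (\ref{9}), the PTF of the discrete annular source is the incoherent sum of the single-LED contributions. A minimal sketch with our own naming, placing $N$ point sources evenly on a ring at the pupil edge:

```python
import numpy as np

def ptf_annular(ux, uy, rho_p, z, lam, N=8):
    # Sum the oblique-source PTF contributions of N LEDs evenly spaced
    # on a ring of normalized radius rho_p (Eq. (14) combined with Eq. (9)).
    k = 2.0 * np.pi / lam

    def root(x, y):
        return np.sqrt(np.clip(1.0 - lam ** 2 * (x ** 2 + y ** 2), 0.0, None))

    def pupil(x, y):
        return (lam * np.hypot(x, y) <= rho_p).astype(float)

    w0 = np.sqrt(1.0 - rho_p ** 2)   # source points sit on the pupil edge
    h = np.zeros_like(ux, dtype=float)
    for i in range(N):
        th = 2.0 * np.pi * i / N
        sx = rho_p * np.cos(th) / lam
        sy = rho_p * np.sin(th) / lam
        h += pupil(ux - sx, uy - sy) * np.sin(k * z * (root(ux - sx, uy - sy) - w0))
    return h / N
```

The summed PTF is zero at DC but remains nonzero out to twice the pupil radius, reflecting the extended cutoff discussed above.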
Eight LEDs cover twice the cutoff frequency of the objective in four different directions, and the PTF image of eight LEDs appears as the superposition of the transfer functions of several pairs of axisymmetric oblique sources. For continuous annular illumination, as shown in Fig. 2(c), the final PTF provides isotropic imaging resolution in all directions. In addition to the three annular shapes above, the PTF of a circular illumination aperture is illustrated in Fig. 2(d); its cutoff frequency also extends to 2 NA of the objective. However, the transfer-function values of the circular aperture are diminished dramatically compared with the three annular shapes. This corresponds to the familiar observation that a larger aperture diaphragm provides higher imaging resolution while the phase contrast of the defocused image becomes too weak to capture. The condenser aperture of circular illumination must be stopped down to produce appreciable phase contrast, but this is not necessary for annular illumination. It is worth noting that the number of LEDs on the edge of the source pupil, $N$, should be as large as possible for isotropic imaging resolution in all directions; we choose eight LEDs as the proposed illumination pattern in view of the finite spacing between adjacent LED elements. From the plots of the PTF for various aperture shapes and defocus distances, all four illumination patterns have twice the frequency bandwidth of the objective NA, but the response of circular illumination is too weak. The phase information can hardly be transferred into intensity via defocusing when the illumination NA is large, and the weak phase contrast of the defocused intensity image leads to poor SNR. The PTF also has more zero-crossings at large defocus distances because of the oscillation of the imaginary part of the WOTF, making it difficult to recover the signal from the noise around these points.
Thereby, the proposed annular LED illumination pattern not only extends the imaging resolution to double the NA in most directions but also provides a robust phase-contrast response in the defocused intensity image. \subsection{QPI via TIE and WOTF inversion} In the paraxial regime, wave propagation is mathematically described by the Fresnel diffraction integral \cite{Partial_Con1}, and the relationship between intensity and phase during propagation is described by the TIE \cite{TIE1}: \begin{equation}\label{15} -k\frac{\partial{I(\bm{r})}}{\partial{z}} = \nabla_\perp\bm\cdot[I(\bm{r})\nabla_\perp\phi(\bm{r})] \end{equation} where $k$ is the wave number ${2\pi }/{\lambda }$, $I(\bm{r})$ is the intensity image on the in-focus plane, $\nabla_\perp$ denotes the gradient operator over the transverse direction $\bm{r}$, $\bm\cdot$ denotes the dot product, and $\phi(\bm{r})$ represents the phase of the object. The left-hand side of the TIE is the spatial derivative of the in-focus intensity along the $z$ axis. The longitudinal intensity derivative $\partial{I}/\partial{z}$ can be estimated through the difference formula ${\left( {{I_1} - {I_2}} \right)}$\slash${2\Delta z}$, where $I_1$ and $I_2$ are the two captured defocused intensity images, and $\Delta z$ is the defocus distance of the axially displaced images. By introducing Teague's auxiliary function $\nabla_\perp\psi(\bm{r}) = I(\bm{r})\nabla_\perp\phi(\bm{r})$, the TIE is converted into the following two Poisson equations: \begin{equation}\label{16} -k\frac{\partial{I(\bm{r})}}{\partial{z}} = {\nabla_\perp}^2\psi \end{equation} and \begin{equation}\label{17} \nabla_\perp\bm\cdot(I^{-1}\nabla_\perp\psi) = {\nabla_\perp}^2\phi \end{equation} The solution for $\psi$ is obtained by solving the first Poisson equation, Eq. (\ref{16}), which yields the phase gradient. The second Poisson equation, Eq.
(\ref{17}) is used for phase integration, and the quantitative phase $\phi(\bm{r})$ is uniquely determined by these two Poisson equations. For the special case of a pure phase object (typically unstained cells and tissues), the in-focus intensity image can be treated as a constant because unstained cells are almost transparent, and the TIE simplifies to a single Poisson equation: \begin{equation}\label{18} - k\frac{{\partial I\left( {\bf{r}} \right)}}{{\partial z}} = I\left( {\bf{r}} \right){\nabla ^2}\phi \left( {\bf{r}} \right) \end{equation} Then, a fast Fourier transform (FFT) solver \cite{TIE_Appl2,TIE_Appl3} is applied to Eq. (\ref{18}), and the forward form of the TIE in the Fourier domain corresponds to a Laplacian filter \begin{equation}\label{19} \frac{{{{ \widetilde{I_1} }}\left( {\bf{u}} \right) - {{ \widetilde{I_2} }}\left( {\bf{u}} \right)}}{4{ \widetilde{I} \left( {\bf{u}} \right)}} = \left( { \pi \lambda z{{\left| {\bf{u}} \right|}^2}} \right)\widetilde{\phi}(\bf{u}) \end{equation} The inverse Laplacian operator $1\slash{\pi \lambda z{{\left| {\bf{u}} \right|}^2}}$ is analogous to an inversion of the weak-defocus CTF or PTF in the coherent limit. \begin{figure}[!b] \centering \includegraphics[width=13.5cm]{Fig3.jpg} \caption{Noise-free reconstruction results of a simulated phase resolution target for different illumination patterns. The optical system parameters and camera pixel size are set to satisfy the Nyquist sampling criterion, and the camera sampling frequency equals twice the imaging resolution of the objective NA. Scale bar, 15 $\mu$m.} \label{} \end{figure} For partially coherent illumination, the traditional form of the TIE is not suitable for phase retrieval since the equation contains no parameters of the imaging system.
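Before turning to the partially coherent case, the coherent-limit FFT solver of Eqs. (\ref{18})-(\ref{19}) can be sketched as follows. This is our own minimal implementation: the Fourier-domain division by the in-focus intensity is replaced, as is common practice, by a division by its mean, and a small constant `eps` of our choosing regularizes the zero of the Laplacian filter at DC:

```python
import numpy as np

def tie_solve_fft(I1, I2, I0, dz, lam, pixel, eps=1e-9):
    # Pure-phase TIE solver, Eq. (19): phi_hat = rhs_hat / (pi*lam*dz*|u|^2).
    # I1, I2 are intensities at +/- dz; I0 is the in-focus image.
    n, m = I0.shape
    fx = np.fft.fftfreq(m, d=pixel)
    fy = np.fft.fftfreq(n, d=pixel)
    FX, FY = np.meshgrid(fx, fy)
    lap = np.pi * lam * dz * (FX ** 2 + FY ** 2)   # forward TIE filter
    rhs_hat = np.fft.fft2((I1 - I2) / (4.0 * I0.mean()))
    phi_hat = rhs_hat * lap / (lap ** 2 + eps)     # regularized inverse
    return np.real(np.fft.ifft2(phi_hat))
```

Because the DC component of the filter is zero, the recovered phase is determined only up to an additive constant, which is the usual TIE ambiguity.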
To take the effects of partial coherence and the imaging system into account, the Laplacian operator $\pi \lambda z{\left| {\bf{u}} \right|^2}$ of the TIE in Fourier space should be replaced by the PTF of an arbitrary axisymmetric source. The ATF ${H_A}\left( {\bf{u}} \right)$ and PTF ${H_{\rm{P}}}\left( {\bf{u}} \right)$ are determined by the real and imaginary parts of the WOTF, respectively, as shown in Eq. (\ref{7}). The ATF is an even (cosine-like) function of the defocus distance, while the PTF is always an odd function of the defocus distance. When the two captured intensity images have equal and opposite defocus, their subtraction therefore gives no amplitude contrast but a pure, doubled phase contrast. The in-focus image ${I(\bm{r})}$ is treated as the background intensity, and the forward form of the WOTF can be expressed as: \begin{equation}\label{20} \frac{{{{ \widetilde{I_1} }}\left( {\bf{u}} \right) - {{ \widetilde{I_2} }}\left( {\bf{u}} \right)}}{4{ \widetilde{I} \left( {\bf{u}} \right)}} = {\mathop{\rm Im}\nolimits} \left[ {WOTF\left( {\bf{u}} \right)} \right] \widetilde{\phi}(\bf{u}) \end{equation} Equation (\ref{20}) makes the relationship between the phase and the PTF linear, so QPI can be realized by inversion of the WOTF in Fourier space \begin{equation}\label{21} \phi \left( {\bf{r}} \right) = {{\mathscr{F}}^{ - 1}} \left\{{ \frac{{{{ \widetilde{I_1} }}\left( {\bf{u}} \right) - {{ \widetilde{I_2} }}\left( {\bf{u}} \right)}}{4{ \widetilde{I} \left( {\bf{u}} \right)}} {\frac{{{\mathop{\rm Im}\nolimits} \left[ {WOTF\left( {\bf{u}} \right)} \right]}}{{{{\left| {{\mathop{\rm Im}\nolimits} \left[ {WOTF\left( {\bf{u}} \right)} \right]} \right|}^2} + \alpha }}} } \right\} \end{equation} where ${{\mathscr{F}}^{ - 1}}$ denotes the inverse Fourier transform, and $\alpha$ is the Tikhonov-regularization parameter, usually used in the Wiener filter to set the maximum amplification,
avoiding division by zero of the WOTF. First, we apply our method to the phase reconstruction of a simulated resolution target. The resolution test image is used as an example phase object defined on a square region; the grid width is 512 pixels with a pixel size of 0.176 $\mu$m. The illumination wavelength is 530 nm, and the objective NA is 0.75. The captured defocused intensity images are noise-free and the defocus distance is 0.5 $\mu$m. The WOTF for the various illumination patterns is derived using Eq. (\ref{9}) and Eq. (\ref{11}), and the inversion of the WOTF is applied to the Fourier transform of the captured intensity stack. Detailed comparisons of the reconstruction results of the resolution target under different illumination patterns are shown in Fig. 3. The objective NA and the camera pixel size are set to satisfy the Nyquist sampling criterion, so that twice the imaging resolution of the objective NA equals the maximum sampling frequency of the camera. The center region of the simulated resolution target is enlarged and marked with a dashed rectangle. As predicted by the WOTF of the corresponding illumination pattern, the recovered spectrum is determined by the cutoff frequency of the WOTF. The phase line profiles of the resolution elements in the smallest group of the simulated target are plotted in the last row of the sub-figures; the edges of the resolution elements under coherent illumination are distorted and blurry, whereas the elements under the other three aperture patterns are distinguishable. To characterize the noise sensitivity of the proposed method, another simulated result is presented as well. The system parameters are the same as in the simulation above, but each defocused intensity image is corrupted by Gaussian noise with a standard deviation of 0.1 to simulate the noise effect.
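The inversion of Eq. (\ref{21}) used in these reconstructions amounts to a one-line Wiener-type deconvolution. The sketch below uses our own function names and assumes the imaginary part of the WOTF has already been sampled on the unshifted FFT frequency grid:

```python
import numpy as np

def phase_from_wotf(I1, I2, I0_mean, wotf_imag, alpha=1e-3):
    # Eq. (21): Tikhonov-regularized inversion of Im[WOTF].
    # alpha bounds the amplification near the zeros of the transfer function.
    diff_hat = np.fft.fft2(I1 - I2) / (4.0 * I0_mean)
    phi_hat = diff_hat * wotf_imag / (wotf_imag ** 2 + alpha)
    return np.real(np.fft.ifft2(phi_hat))
```

Choosing $\alpha$ trades noise suppression against attenuation of frequencies where the PTF is small, which is exactly the behavior discussed for the coherent and circular-aperture cases.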
The shape of the reconstructed Fourier spectrum matches the non-zero region of the PTF, and the final retrieved phase is evaluated by the root-mean-square error (RMSE). From this diagram, the cutoff frequency of coherent illumination is restricted to the coherent diffraction limit, whereas the other three sources extend the cutoff frequency to twice the imaging resolution of the objective NA. Although the coherent case provides the maximum PTF value (approximately unity), the slow rise of the PTF response at low frequencies leads to over-amplification of noise, and cloud-like artifacts are superimposed on the finally reconstructed phase. The WOTF values of the traditional circular aperture are too close to zero, resulting in over-amplification of noise at both low and high frequencies. Therefore, the proposed annular illumination method provides not only twice the resolution of the objective NA but also a robust transfer-function response, yielding an accurate and stable quantitative phase of the test object. \begin{figure}[!htp] \centering \includegraphics[width=13.5cm]{Fig4.jpg} \caption{Phase reconstruction results under Gaussian noise with a standard deviation of 0.1. The transfer-function response of coherent illumination rises slowly at low frequencies, leading to over-amplification of noise and cloud-like artifacts superimposed on the reconstructed phase. The WOTF values of the traditional circular aperture are too close to zero, causing over-amplification of noise at both low and high frequencies. Scale bar, 15 $\mu$m.} \label{} \end{figure} \section{Experimental setup} \begin{figure}[!htp] \centering \includegraphics[width=13.5cm]{Fig5.jpg} \caption{(a) Schematic diagram of the highly efficient quantitative phase microscope. (b-c) The annular pattern displayed on the LED array; the size of the annulus matches the objective pupil in the back focal plane.
(d) Photograph of the whole imaging system. The LED array is placed beneath the sample, and the crucial parts of the setup are marked with yellow boxes. Scale bar represents 300 $\mu$m.} \label{} \end{figure} As depicted in Fig. 5(a), the highly efficient quantitative phase microscope is composed of three major components: a programmable LED array, a microscopic imaging system, and a CMOS camera. The commercial surface-mounted LED array is placed at the front focal plane of the condenser as the illumination source, and the light emitted from the condenser lens for a single LED can be treated as nearly a plane wave. Each LED provides approximately spatially coherent, quasi-monochromatic illumination with a narrow bandwidth (central wavelength $\lambda$ = 530 nm, $\sim$ 20 nm bandwidth). The distance between adjacent LED elements is 1.67 mm, and only a fraction of the whole array is used for programmable illumination. The array is driven dynamically by a custom-built LED controller board based on a field-programmable gate array (FPGA) to provide the various illumination patterns. In our work, the discrete annular LED illumination pattern matched with the objective pupil is displayed on the array, as shown in Fig. 5(b). Figure 5(c) is taken in the objective back focal plane by inserting a Bertrand lens into one of the eyepiece observation tubes or by removing the eyepiece tubes. The microscope is equipped with a scientific CMOS (sCMOS) camera (PCO.edge 5.5, 6.5 $\mu$m pixel pitch) and a universal plan objective (Olympus, UPlan 20 $\times$, NA = 0.4). Another universal plan super-apochromat objective (Olympus, UPlan SAPO 20$\times$, NA $=$ 0.75) and a higher-sampling-rate detector (2.2 $\mu$m pixel pitch) are also utilized for higher-resolution imaging. The photograph of the whole imaging system is illustrated in Fig. 5(d), with the crucial parts of the setup marked with yellow boxes.
\section{Results} \subsection{Quantitative characterization of control samples} \begin{figure}[!t] \centering \includegraphics[width=13cm]{Fig6.jpg} \caption{(a1-b1) Reconstructed phase distributions of a micro polystyrene bead with 8 $\mu$m diameter and a blazed transmission grating with 3.33 $\mu$m period. (a2-b2) Measured quantitative phase line profiles for a single bead and a few grating periods. Theoretical line profiles (assuming 90$^\text{o}$ groove angles) are also plotted for reference. Scale bars denote 10 $\mu$m and 3 $\mu$m, respectively.} \label{} \end{figure} To validate the accuracy of the proposed QPI approach based on annular LED illumination, a micro polystyrene bead (Polysciences, $n$=1.59) with 8 $\mu$m diameter immersed in oil (Cargille, $n$=1.58) is measured using the 0.4 NA objective and the sCMOS camera. The sample is slightly defocused, and three intensity images are recorded at the $\pm$ 1 $\mu$m planes and the in-focus plane. By inversion of the WOTF, the reconstructed quantitative phase image of the bead is obtained, as shown in Fig. 6(a1), which is a sub-region of the whole field of view (FOV). The horizontal line profile through the center of a single bead is illustrated as the solid brown line in Fig. 6(a2), and the blue dashed line represents the theoretical quantitative phase of the micro polystyrene bead. Of particular interest is the excellent agreement between the magnitude and shape of the compared bead profiles. Some slight high-frequency noise remains in the retrieved phase image because the small WOTF values amplify the noise near the cutoff frequency, but these artifacts do not affect the accuracy and feasibility of the proposed method. Furthermore, a visible blazed transmission grating ($Thorlabs\;GT13-03$, grating period $\Lambda$ = 3.33 $\mu$m, blaze angle ${\theta _B}$ = 17.5$^\text{o}$) is measured in a quantitative experiment using the same method and procedures.
The grating is made of Schott B270 glass ($n_{glass}$ = 1.5251) and mounted face up on a glass slide with refractive-index-matching water ($n_{water}$ = 1.33) and a thin $no$. 0 coverslip. Given the large pixel size of the sCMOS camera and the high density of the grating, a higher-NA objective (NA = 0.75) and a higher-sampling-rate detector (2.2 $\mu$m pixel size) are utilized for imaging this grating. The measured phase image is presented in Fig. 6(b1) for a 23.7 $\mu$m $\times$ 15.6 $\mu$m rectangular patch. The theoretical profile, assuming 90$^\text{o}$ groove angles, is plotted for reference as the blue solid line, and a few periods of the associated measured profile are plotted as the brown dot-solid line with no interpolation. These two curves agree well except at the phase jump edges, owing to the rapid oscillations of the grating. Thus, the two groups of quantitative characterizations of control samples further confirm the accuracy of our method. \subsection{Experimental results of biological specimens} \begin{figure}[!htp] \centering \includegraphics[width=13cm]{Fig7.jpg} \caption{(a) Quantitative reconstruction results of LC-06 cells with the 0.4 NA objective and 6.5 $\mu$m pixel pitch camera for coherent and discrete annular illumination. (b-c) Three enlarged sub-regions of the quantitative maps and simplified DIC images. The white arrows show line profiles taken at different positions in the cells. Scale bars equal 50$\mu$m, 10$\mu$m and 15$\mu$m, respectively.} \label{} \end{figure} As demonstrated by the simulation results in subsection 2.3, the developed annular LED illumination provides twice the imaging resolution of the objective NA and a noise-robust WOTF response.
We also test the present reconstruction method in its intended biomedical application experimentally. Unstained lung cancer cells (LC-06) are first imaged with the 0.4 NA objective and the 6.5 $\mu$m pixel pitch camera. Figures 7(a1) and (a2) are the quantitative phase images of LC-06 over a square FOV for the point source and the annular source, respectively. Three representative sub-areas of the whole quantitative map are selected and enlarged for a more detailed view. The phase images of the three enlarged sub-regions are shown with a jet colormap, and the corresponding simplified DIC images are illustrated in Fig. 7(b) and (c). From these quantitative phase and phase-gradient images, it is evident that the phase imaging resolution of the annular illumination source is higher than that of the coherent one, and some tiny grains in the cytoplasm can be observed more clearly and vividly. In addition, the white arrows indicate line profiles taken at two different positions in the cells, and the comparative phase profiles are presented in different colors in Fig. 7. The plotted lines indicate a significant improvement in high-frequency features using the annular aperture compared with coherent illumination. Thus, the highest spatial frequency allowed by QPI based on annular LED illumination is effectively 0.8 NA (0.66 $\mu$m) in the phase reconstruction. Then, our system is used for QPI of label-free HeLa cells by replacing the objective and the camera with the 0.75 NA objective and the 2.2 $\mu$m pixel size camera. The FOV is 285.1 $\times$ 213.8 $\mu$m$^\text{2}$ with a sampling rate of 0.11 $\mu$m in the object plane. Figures 8(a) and (b) show the high-resolution quantitative phase image and the phase gradient in the direction of the image shear (45$^\text{o}$). As can be seen in Fig. 8(c), three sub-regions are selected (solid rectangles) and shown without resolution loss.
For this group of quantitative results, we will not repeat the discussion of the resolution enhancement of annular LED illumination, but instead point out some defects in the quantitative images. The background of this quantitative phase image is not sufficiently ``black'', which is caused by the loss of low frequency features of the Fourier spectrum. The root cause of this problem is the finite spacing between adjacent LED elements, which leads to a mismatch between the objective pupil and the annular LED pattern. Furthermore, the PTF of the system tends to zero near zero frequency, which makes the recovery of low frequency information difficult. \begin{figure}[!t] \centering \includegraphics[width=13cm]{Fig8.jpg} \caption{(a) High resolution QPI of HeLa cell with 0.75 NA objective. (b) Simulated DIC image. (c) Three enlarged sub-regions of the quantitative phase of the HeLa cell. Scale bar equals 20$\mu$m, 3$\mu$m and 5$\mu$m, respectively.} \label{} \end{figure} \section{Discussion and conclusion} In summary, we demonstrate an effective QPI approach based on programmable annular LED illumination that achieves twice the imaging resolution of the objective NA and noise-robust reconstruction of the quantitative phase. The WOTF of an axisymmetric oblique source is derived using the concept of the TCC, and the WOTF of the discrete annular aperture is validated as the incoherent superposition of individual point sources. The inversion of the WOTF is applied to an intensity stack containing three intensity images with equal and opposite defoci, from which the quantitative phase is retrieved. The recovered phases of a simulated resolution target and a noise-corrupted test image show that the proposed illumination pattern extends the imaging resolution to 2 NA of the objective and offers strong noise insensitivity. Furthermore, biological samples of human cancer cells are imaged with two different objectives, and the imaging resolution of the retrieved phase is indeed enhanced compared with coherent illumination.
Besides, this QPI setup is easily fitted into a conventional optical microscope with small modifications, and the programmable source makes the modulation of the annular pattern flexible and compatible without a custom-built annulus matched to the objective pupil. However, some important issues still require further investigation or improvement. Due to the dispersion of the LEDs and the finite spacing between adjacent LED elements, the annular illumination pattern and the pupil of the objective are not exactly internally tangent to each other. The mismatch between the annular aperture and the objective may cause the loss of low frequencies owing to the overlap and offset of the PTF near zero frequency. In other words, the missing low frequencies cause the background of the phase images to be insufficiently ``black''. Another shortcoming of this modified microscopic imaging system is that long-term time-lapse imaging of living cells is difficult to implement on relatively low-end bright-field microscopes, such as the Olympus CX22, in contrast to our earlier work based on the IX83 microscope. To solve these problems, a special sample cuvette is required for imaging living biological cells, and additional devices may be needed to modify our setup, such as an LED array with smaller spacing and higher brightness. Despite these existing drawbacks, the configuration of this system takes full advantage of the compatibility and flexibility of programmable LED illumination and bright-field microscopy, and the annular illumination pattern yields quantitative demonstrations on control samples and promising results on biological specimens. \section*{APPENDIX} \subsection*{A. Derivation of intensity formation under partially coherent illumination using Hopkins' formulae} In the main text, the standard optical microscope system is simplified as an extended light source, a condenser lens, a sample, an objective lens, and a camera on the image plane.
Based on Abbe's theory \cite{Abbe}, the captured image of the object at the image plane can be interpreted as a summation over all source points of the illumination. For each source point, the image formation is described by Fourier transforms and a linear filtering operation, and the electric field $E\left( {x,y} \right)$ on the camera plane can be expressed as \begin{equation} E\left( {x,y;{f_c},{g_c}} \right) = \iint { {t\left( {f,g} \right)h\left( {f + {f_c},g + {g_c}} \right)\exp \left[ { - i2\pi \left( {fx + gy} \right)} \right]dfdg}} \end{equation} where $t$ is the complex transmittance of the object, and $h$ represents the amplitude point spread function (PSF) of the imaging system. The intensity on the image plane is proportional to the squared magnitude of the electric field distribution, integrated over the extended source, and takes the form \begin{equation}\label{23} \begin{aligned} I\left( {x,y} \right) & = \iint{{S\left( {{f_c},{g_c}} \right){{\left| {E\left( {x,y;{f_c},{g_c}} \right)} \right|}^2}d{f_c}d{g_c}}}\\ & = \iint{{S\left( {{f_c},{g_c}} \right){{\left| {{\mathscr{F}} \left[ {t\left( {f,g} \right)h\left( {f + {f_c},g + {g_c}} \right)} \right]} \right|}^2}d{f_c}d{g_c}}} \end{aligned} \end{equation} where $I(x,y)$ is the intensity of the object captured at the image plane, $S( {{f_c},{g_c}})$ is the distribution of the extended light source, and ${\mathscr{F}}$ denotes the Fourier transform. By interchanging the order of integration, we can express Eq.
(\ref{23}) according to Hopkins' formulation \cite{Hopk,MBorn} \begin{equation} \begin{aligned} I\left( {x,y} \right) = \iiiint{} & S\left( {{f_c},{g_c}} \right)P\left( {{f^{'}} + {f_c},{g^{'}} + {g_c}} \right){P^*}\left( {{f^{''}} + {f_c},{g^{''}} + {g_c}} \right)T\left( {{f^{'}},{g^{'}}} \right){T^*}\left( {{f^{''}},{g^{''}}} \right) \\ & \exp \left[ { - i2\pi \left( {{f^{'}} - {f^{''}}} \right)x - i2\pi \left( {{g^{'}} - {g^{''}}} \right)y} \right]d{f^{'}}d{g^{'}}d{f^{''}}d{g^{''}} \end{aligned} \end{equation} where $P$ is the coherent transfer function with the objective pupil function $\left| P \right|$, and $T$ is the object spectrum, i.e., the Fourier transform of the object complex transmittance $t$. Here, we separate the contributions of the specimen and the system, and the transmission cross coefficient (TCC) is introduced as a combination of the source and pupil, expressed as \begin{equation}\label{25} TCC\left( {{f^{'}},{g^{'}};{f^{''}},{g^{''}}} \right) = \iint{{S\left( {{f_c},{g_c}} \right)P\left( {{f^{'}} + {f_c},{g^{'}} + {g_c}} \right){P^*}\left( {{f^{''}} + {f_c},{g^{''}} + {g_c}} \right)d{f_c}d{g_c}}} \end{equation} By replacing the variables $\left( {{f^{'}},{g^{'}}} \right)$ and $\left( {{f^{''}},{g^{''}}} \right)$ with two 2D frequency-domain vectors ${{\bf{u}}_1}$ and ${{\bf{u}}_2}$, Eq. (\ref{25}) can be simplified as \begin{equation} TCC\left( {{{\bf{u}}_1};{{\bf{u}}_2}} \right) = \iint{{S\left( {\bf{u}} \right)P\left( {{\bf{u}} + {{\bf{u}}_1}} \right){P^*}\left( {{\bf{u}} + {{\bf{u}}_2}} \right)d{\bf{u}}}} \end{equation} Then, the final intensity of the object on the image plane can be rewritten in terms of the 2D vector variables \begin{equation}\label{27} I\left( {\bf{r}} \right) = \iint{{TCC\left( {{{\bf{u}}_1};{{\bf{u}}_2}} \right)T\left( {{{\bf{u}}_1}} \right){T^*}\left( {{{\bf{u}}_2}} \right)\exp \left[ {i2\pi {\bf{r}}\left( {{{\bf{u}}_1} - {{\bf{u}}_2}} \right)} \right]d{{\bf{u}}_1}d{{\bf{u}}_2}}} \end{equation}
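As an illustrative aside (our addition, not part of the original derivation), the vectorized TCC above can be evaluated numerically by discretizing the source and pupil on a frequency grid. The circular pupil and thin annular source below are stand-in shapes chosen only for the example:

```python
import numpy as np

def tcc(S, P, u1, u2, freqs):
    """Numerically evaluate TCC(u1; u2) = iint S(u) P(u+u1) P*(u+u2) du
    on a uniform square frequency grid given by the 1D sample array `freqs`."""
    du = freqs[1] - freqs[0]
    fx, fy = np.meshgrid(freqs, freqs)
    integrand = (S(fx, fy)
                 * P(fx + u1[0], fy + u1[1])
                 * np.conj(P(fx + u2[0], fy + u2[1])))
    return integrand.sum() * du * du

# Stand-in shapes (normalized frequency units): a circular pupil of radius 1
# and a thin annular source just inside the pupil edge.
pupil = lambda fx, fy: (fx**2 + fy**2 <= 1.0).astype(float)
source = lambda fx, fy: ((fx**2 + fy**2 <= 1.0) &
                         (fx**2 + fy**2 >= 0.9**2)).astype(float)

freqs = np.linspace(-2.0, 2.0, 401)
# At zero shift the TCC reduces to the source energy passed by the pupil,
# here the area of the annulus, pi*(1 - 0.9^2).
t00 = tcc(source, pupil, (0.0, 0.0), (0.0, 0.0), freqs)
```

At zero shift the integral reduces to the source energy passed by the pupil, which gives a simple sanity check; replacing the annulus by a sum of discrete off-axis points mimics the incoherent LED superposition discussed in the main text.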
\section{Introduction} The Lov\'{a}sz Local Lemma (LLL), first introduced in \cite{lll-orig}, is a cornerstone principle in probability theory. In its simplest symmetric form, it states that if one has a probability space $\Omega$ and a set of $m$ ``bad'' events $\mathcal B$ in that space, each such event has probability $P_{\Omega}(B) \leq p$, and each event depends on at most $d$ events (including itself), then under the criterion \begin{equation} \label{a1Alll-cond} e p d \leq 1 \end{equation} there is a positive probability that no bad event occurs. If equation (\ref{a1Alll-cond}) holds, we say \emph{the symmetric LLL criterion is satisfied}. Although the LLL applies to general probability spaces, and the notion of dependency for a general space can be complicated, most applications in combinatorics use a simpler setting in which the probability space $\Omega$ is determined by a series of discrete variables $X_1, \dots, X_n$, each of which is drawn independently with $P_{\Omega} (X_i = j) = p_{ij}$. Each bad event $B \in \mathcal B$ is a Boolean function of a subset of variables $S_B \subseteq [n]$. Then events $B, B'$ are dependent (denoted $B \sim B'$) if they share a common variable, i.e., $S_B \cap S_{B'} \neq \emptyset$; note that $B \sim B$. We say a set of bad events $I \subseteq \mathcal B$ is \emph{independent} if $B \not \sim B'$ for all distinct pairs $B, B' \in I$. We say a variable assignment $X$ \emph{avoids} $\mathcal B$ if every $B \in \mathcal B$ is false on $X$. There is a more general form of the LLL, known as the \emph{asymmetric LLL}, which can be stated as follows. Suppose that there is a weighting function $x: \mathcal B \rightarrow (0,1)$ with the following property: \begin{equation} \label{Alll-acond} \forall B \in \mathcal B \qquad P_{\Omega}(B) \leq x(B) \prod_{\substack{A \sim B \\ A \neq B}} (1 - x(A)) \end{equation} then there is a positive probability of avoiding all bad events.
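As a small mechanical illustration (ours, not from the original text), condition (\ref{Alll-acond}) can be checked directly once the event probabilities, dependency lists, and a candidate weighting $x$ are given:

```python
import math

def asymmetric_lll_holds(prob, x, neighbors):
    """Check P(B) <= x(B) * prod over A ~ B, A != B of (1 - x(A)) for all B.

    prob, x: dicts mapping each bad event to its probability / weight in (0,1);
    neighbors: dict mapping B to the events A with A ~ B (B itself may be
    listed; it is skipped, matching the A != B condition)."""
    for B in prob:
        bound = x[B]
        for A in neighbors[B]:
            if A != B:
                bound *= 1.0 - x[A]
        if prob[B] > bound:
            return False
    return True

# Toy symmetric instance: 5 events in a cycle, each dependent on itself and
# its two cyclic neighbors on each side (d = 5), with P(B) = p = 0.01.
# Taking x(B) = e*p recovers the symmetric criterion e*p*d <= 1.
p = 0.01
events = range(5)
nbrs = {B: [(B + k) % 5 for k in range(-2, 3)] for B in events}
prob = {B: p for B in events}
x = {B: math.e * p for B in events}
ok = asymmetric_lll_holds(prob, x, nbrs)   # criterion holds here
```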
The symmetric LLL is a special case of this, derived by setting $x(B) = e p$. Both of these criteria are special cases of a yet more powerful criterion, known as the \emph{Shearer criterion}. This criterion requires a number of definitions to state; we discuss this further in Section~\ref{shearer-sec}. The probability of avoiding all bad events, while non-zero, is usually exponentially small; so the LLL does not directly lead to efficient algorithms. Moser \& Tardos \cite{moser-tardos} introduced a remarkable randomized procedure, which we refer to as the \emph{Resampling Algorithm}, which gives polynomial-time algorithms for nearly all LLL applications: \begin{algorithm}[H] \centering \begin{algorithmic}[1] \State Draw all variables $X \sim \Omega$. \While{some bad events are true} \State Choose some true $B \in \mathcal B$ arbitrarily. \State Resample the variables in $S_B$, independently from the distribution $\Omega$. \EndWhile \end{algorithmic} \caption{The sequential Resampling Algorithm} \end{algorithm} This resampling algorithm terminates with probability one under the same condition as the probabilistic LLL, viz. satisfying the Shearer criterion. The expected number of resamplings is typically polynomial in the input parameters. We note that this procedure can be useful even when the total number of bad events is exponentially large. At any stage of this algorithm, the expected number of bad events which are currently true (and thus need to be processed) is still polynomial. If we have a subroutine which lists the currently-true bad events in time $\text{poly}(n)$, then the overall run-time of this algorithm can still be polynomial in $n$. We refer to such a subroutine as a \emph{Bad-Event Checker}. These are typically very problem-specific; see \cite{hss} for more details. \subsection{Parallel algorithms for the LLL} Moser \& Tardos also gave a simple RNC algorithm for the LLL, shown below as Algorithm~\ref{mtparalg}.
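Before turning to the parallel version, the sequential Resampling Algorithm can be sketched concretely in the variable model (a toy illustration of ours; the event representation is an assumption, not from the original):

```python
import random

def resampling_algorithm(n, bad_events, sample, rng):
    """Sequential Resampling Algorithm. `bad_events` is a list of pairs
    (S_B, f_B) with S_B the variable set of B and f_B a predicate on the
    current assignment; sample(i, rng) draws variable i from its distribution."""
    X = [sample(i, rng) for i in range(n)]
    while True:
        true_events = [(S, f) for (S, f) in bad_events if f(X)]
        if not true_events:
            return X
        S, _ = true_events[0]          # choose some true bad event arbitrarily
        for i in S:                    # resample its variables independently
            X[i] = sample(i, rng)

# Toy instance: 20 fair bits; bad event B_i says bits i..i+4 are all 1.
# Here p = 1/32 and d = 9, so e*p*d is about 0.76 <= 1 and the symmetric
# criterion is satisfied.
rng = random.Random(0)
n = 20
sample = lambda i, r: r.randint(0, 1)
events = [(set(range(i, i + 5)),
           lambda X, i=i: all(X[j] == 1 for j in range(i, i + 5)))
          for i in range(n - 4)]
X = resampling_algorithm(n, events, sample, rng)   # avoids every B_i
```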
Unlike their sequential algorithm, this requires a small slack in the LLL criterion. In the symmetric setting, this criterion is $$ e p d (1 + \epsilon) \leq 1 $$ and in the asymmetric setting, it is given by $$ \forall B \in \mathcal B \qquad (1 + \epsilon) P_{\Omega}(B) \leq x(B) \prod_{\substack{A \sim B \\ A \neq B}} (1 - x(A)) $$ for some parameter $\epsilon \in (0,1/2)$. We refer to these stronger criteria as \emph{$\epsilon$-slack.} \vspace{-0.1in} \begin{algorithm}[H] \centering \begin{algorithmic}[1] \State Draw all variables $X \sim \Omega$. \While{some bad events are true} \State Choose a maximal independent set $I$ of bad events which are currently true. \State Resample, in parallel, all the variables $\bigcup_{B \in I} S_B$ from the distribution $\Omega$. \EndWhile \end{algorithmic} \caption{The Parallel Resampling Algorithm} \label{mtparalg} \end{algorithm} \vspace{-0.1in} Moser \& Tardos showed that this algorithm terminates after $O\bigl( \epsilon^{-1} \log (n \sum_{B \in \mathcal B} \frac{x(B)}{1-x(B)}) \bigr)$ rounds with high probability.\footnote{We say that an event occurs \emph{with high probability} (abbreviated whp), if it occurs with probability $\geq 1 - n^{-\Omega(1)}$.} In each round, there are two main computational tasks: one must execute a parallel Bad-Event Checker and one must find a maximal independent set (MIS) among the bad events which are currently true. Both of these tasks can be implemented in parallel models of computation. The most natural complexity parameter in these settings is the number of variables $n$, since the final output of the algorithm (i.e. a satisfying solution) will require at least $n$ bits. We will focus in this paper on the PRAM (Parallel Random Access Machine) model, in which we are allowed $\text{poly}(n)$ processors and $\text{polylog}(n)$ time.
There are a number of variants of the PRAM model, which differ in (among other things) the ability and semantics of multiple processors writing simultaneously to the same memory cell. Two important cases are the CRCW model, in which multiple processors can simultaneously write (the same value) to a cell, and the EREW model, in which each memory cell can only be used by a single processor at a time. Nearly all ``housekeeping'' operations (sorting and searching lists, etc.) can also be implemented in $O(\log n)$ time using standard techniques in either model. A Bad-Event Checker can typically be implemented in time $O(\log n)$. The step of finding an MIS can potentially become a computational bottleneck. In \cite{luby-mis}, Luby introduced randomized algorithms for computing the MIS of a graph $G = (V,E)$ using $\text{poly}(|V|)$ processors; in the CRCW model of computation, this algorithm requires time $O(\log |V|)$ while in other models such as EREW it requires time $O(\log^2 |V|)$. Luby also discussed a deterministic algorithm using $O(\log^2 |V|)$ time (in either model). Applying Luby's MIS algorithm to the Resampling Algorithm yields an overall run-time of $O(\epsilon^{-1} \log^3( n \sum_{B \in \mathcal B} \frac{x(B)}{1-x(B)}))$ (on EREW) or $O(\epsilon^{-1} \log^2( n \sum_{B \in \mathcal B} \frac{x(B)}{1-x(B)}))$ (on CRCW), and the overall processor complexity is $\text{poly} (n, \sum_{B \in \mathcal B} \frac{x(B)}{1-x(B)})$.\footnote{The weighting function $x(B)$ plays a somewhat mysterious role in the LLL, and it can be confusing to have it appear in the complexity bounds for the Resampling Algorithm. In most (although not all) applications, the expression $\sum_{B \in \mathcal B} \frac{x(B)}{1-x(B)}$ can be bounded as $\text{poly}(n)$.} The computation of an MIS is relatively costly. In \cite{pettie}, Chung et al. gave several alternative algorithms for the symmetric LLL which either avoid this step or reduce its cost.
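For reference, Luby's randomized MIS algorithm admits a compact simulation: in each round every live vertex draws a random priority and enters the MIS if it beats all of its live neighbors. The sketch below (ours) executes the rounds one vertex at a time rather than in parallel:

```python
import random

def luby_mis(adj, rng):
    """Luby-style randomized MIS on an undirected graph given as an
    adjacency dict. Each round, every live vertex draws a random priority;
    local minima join the MIS, and they and their neighbors are removed.
    Whp the number of rounds is O(log |V|)."""
    live = set(adj)
    mis = set()
    while live:
        pri = {v: rng.random() for v in live}
        winners = {v for v in live
                   if all(pri[v] < pri[u] for u in adj[v] if u in live)}
        mis |= winners
        removed = set(winners)
        for v in winners:
            removed |= adj[v] & live
        live -= removed
    return mis

# A 5-cycle: every maximal independent set has exactly two vertices.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
mis = luby_mis(adj, random.Random(0))
```

Two adjacent live vertices can never both be local minima, so each round's winners form an independent set, and the vertex with the globally smallest priority always wins, so the live set shrinks every round.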
One algorithm, based on bad events choosing random priorities and resampling a bad event if it has earlier priority than its neighbors, runs in $O(\epsilon^{-1} \log m)$ distributed rounds. This can be converted to a PRAM algorithm using $O(\epsilon^{-1} \log^2 m)$ time (in EREW) and $O(\epsilon^{-1} \log m)$ time (in CRCW). Unfortunately, this algorithm of \cite{pettie} requires a stronger criterion than the LLL: namely, in the symmetric setting, it requires that $e p d^2 \leq (1-\epsilon)$. In many applications of the LLL, particularly those based on Chernoff bounds for the sum of independent random variables, satisfying the stricter criterion $e p d^2 \leq (1-\epsilon)$ leads to qualitatively similar results as the symmetric LLL. In other cases, the criterion of \cite{pettie} loses much critical precision leading to weaker results. In particular, their bound essentially corresponds to the state of the art \cite{moser} before the break-through result of Moser and Moser-Tardos~\cite{moser-tardos}. Another parallel algorithm of Chung et al. requires only the standard symmetric LLL criterion and runs in $O(\epsilon^{-1} \log^2 d \log m)$ rounds, subsequently reduced to $O(\epsilon^{-1} \log d \log m)$ rounds by \cite{mohsen}. When $d$ is polynomial in $m$, these do not improve on the Moser-Tardos algorithm. More recent distributed algorithms for the LLL such as \cite{mohsen2} do not appear to lead to PRAM algorithms. In \cite{moser-tardos}, a deterministic parallel (NC) algorithm for the LLL was given, under the assumption that $d = O(1)$. This was strengthened in \cite{det-lll} to allow arbitrary $d$ under a stronger LLL criterion $e p d^{1+\epsilon} \leq 1$, with a complexity of $O(\epsilon^{-1} \log^3 (mn))$ time and $(mn)^{O(1/\epsilon)}$ processors (in either CRCW or EREW). This can be extended to an asymmetric setting, but there are many more technical conditions on the precise form of $\mathcal B$. 
\subsection{Overview of our results} In Section~\ref{a1Asec2}, we introduce a new theoretical structure to analyze the behavior of the Resampling Algorithm, which we refer to as the \emph{witness DAG}. This provides an explanation or history for some or all of the resamplings that occur. This generalizes the notion of a witness tree, introduced by Moser \& Tardos in \cite{moser-tardos}, which only provides the history of a single resampling. We use this tool to show stronger bounds on the Parallel Resampling Algorithm given by Moser \& Tardos: \begin{theorem} Suppose that the Shearer criterion is satisfied with $\epsilon$-slack. Then whp the Parallel Resampling Algorithm terminates after $O(\epsilon^{-1} \log n)$ rounds. Suppose furthermore that we have a Bad-Event Checker which uses polynomially many processors and $T$ time. Then the total complexity of the Parallel Resampling Algorithm is $\epsilon^{-1} n^{O(1)}$ processors, and $O(\frac{(\log n) (T + \log^2 n)}{\epsilon})$ expected time (in EREW model) or $O(\frac{(\log n) (T + \log n)}{\epsilon})$ time (in CRCW model). \end{theorem} These bounds are independent of the LLL weighting function $x(B)$ and the number of bad events $m$. These simplify similar bounds shown in Kolipaka \& Szegedy \cite{kolipaka}, which show that the Parallel Resampling Algorithm terminates, with constant probability, after $O(\epsilon^{-1} \log (n/\epsilon))$ rounds.\footnote{Note that Kolipaka \& Szegedy use $m$ for the number of variables and $n$ for the number of bad events, while we do the opposite. In this paper, we have translated all of their results into our notation. The reader should be careful to keep this in mind when reading their original paper.} In Sections~\ref{a1Asec4} and \ref{a1Asec5}, we develop a new parallel algorithm for the LLL. The basic idea of this algorithm is to select a random resampling table and then precompute all possible resampling-paths compatible with it.
Surprisingly, this larger collection, which in a sense represents all possible trajectories of the Resampling Algorithm, can still be computed relatively quickly (in approximately $O(\epsilon^{-1} \log^2 n)$ time in the EREW model). Next, we find a \emph{single} MIS of this larger collection, which determines the complete set of resamplings. It is this reduction from $\epsilon^{-1} \log n$ separate MIS computations to just one that is the key to our improved run-time. We will later analyze this parallel algorithm in terms of the Shearer criterion, but this requires many preliminary definitions. We give a simpler statement of our new algorithm for the symmetric LLL criterion: \begin{theorem} Suppose that we have a Bad-Event Checker using $O(\log mn)$ time and $\text{poly}(m,n)$ processors. Suppose that each bad event $B$ has $P_{\Omega}(B) \leq p$ and is dependent with at most $d$ bad events and that $e p d (1 + \epsilon) \leq 1$ for some $\epsilon > 0$. Then, there is an EREW PRAM algorithm to find a configuration avoiding $\mathcal B$ whp using $\tilde O(\epsilon^{-1} \log(m n) \log n)$ time and $\text{poly}(m,n)$ processors.\footnote{The $\tilde O$ notation hides polylogarithmic factors, i.e. $\tilde O(t) = t (\log t)^{O(1)}$.} \end{theorem} In Section~\ref{a1Asec:det}, we show that this algorithm can be derandomized under a slightly more stringent LLL criterion. The full statement of the result is somewhat complex, but a summary is that if $e p d^{1+\epsilon} < 1$ then we obtain a deterministic EREW algorithm using $O(\epsilon^{-1} \log^2 (mn))$ time and $(mn)^{O(1/\epsilon)}$ processors. This is NC for constant $\epsilon$. The following table summarizes previous and new parallel run-time bounds for the LLL. For simplicity, we state the symmetric form of the LLL criterion, although many of these algorithms are compatible with asymmetric LLL criteria as well.
The run-time bounds are simplified for readability, omitting terms which are negligible in typical applications. \begin{center} \begin{tabular}{|c||c|c|c|} \hline Model & LLL criterion & Reference & Run-time \\ \hline \hline \multicolumn{4}{|c|}{Previous results} \\ \hline Randomized CRCW PRAM & $e p d (1 + \epsilon) \leq 1$ & \cite{moser-tardos} & $\epsilon^{-1} \log^2 m$ \\ Randomized EREW PRAM & $e p d (1 + \epsilon) \leq 1$ & \cite{moser-tardos} & $\epsilon^{-1} \log^3 m$ \\ Deterministic EREW PRAM & $e p d^{1+\epsilon} \leq 1$ & \cite{det-lll} & $\epsilon^{-1} \log^3 m$ \\ Randomized CRCW PRAM & $e p d^2 \leq 1 - \epsilon$ & \cite{pettie} & $\epsilon^{-1} \log m$ \\ Randomized EREW PRAM & $e p d^2 \leq 1 - \epsilon$ & \cite{pettie} & $\epsilon^{-1} \log^2 m$ \\ \hline \hline \multicolumn{4}{|c|}{This paper} \\ \hline Randomized CRCW PRAM & $e p d (1 + \epsilon) \leq 1$ & Resampling Algorithm & $\epsilon^{-1} \log^2 n$ \\ Randomized EREW PRAM & $e p d (1 + \epsilon) \leq 1$ & Resampling Algorithm & $\epsilon^{-1} \log^3 n$ \\ Randomized EREW PRAM & $e p d (1 + \epsilon) \leq 1$ & New algorithm & $\epsilon^{-1} \log^2 m$ \\ Deterministic EREW PRAM & $e p d^{1+\epsilon} \leq 1$ & New algorithm & $\epsilon^{-1} \log^2 m$ \\ \hline \end{tabular} \end{center} Although the main focus of this paper is on parallel algorithms, our techniques also lead to a new and stronger concentration result for the run-time of the sequential Resampling Algorithm. The full statement appears in Section~\ref{a1Asec3}; we provide a summary here: \begin{theorem} Suppose that the asymmetric LLL criterion is satisfied with $\epsilon$-slack. Then whp the Resampling Algorithm performs $O\bigl( (\sum_B \frac{x(B)}{1 - x(B)}) + \frac{\log^2 n}{\epsilon}\bigr)$ resamplings. Alternatively, suppose that the symmetric LLL criterion $e p d \leq 1$ is satisfied. Then whp the Resampling Algorithm performs $O(n + d \log^2 n)$ resamplings. 
\end{theorem} Similar concentration bounds have been shown in \cite{kolipaka} and \cite{achlioptas}. The main technical innovation here is that prior concentration bounds have the form $O( \frac{ \sum_B \frac{x(B)}{1 - x(B)} }{\epsilon} )$, whereas the new concentration bounds are largely independent of $\epsilon$ (as long as it is not too small). \subsection{Stronger LLL criteria} \label{shearer-sec} The LLL criterion, in either its symmetric or asymmetric form, depends on only two parameters: the probabilities of the bad events, and their dependency structure. The symmetric LLL criterion $e p d \leq 1$ is a very simple criterion involving these parameters, but it is not the most powerful. In \cite{shearer}, Shearer gave the strongest possible criterion that can be stated in terms of these parameters alone. This criterion is somewhat cumbersome to state and difficult to work with technically, but it is useful theoretically because it subsumes many of the other simpler criteria. We note that the ``lopsided'' form of the LLL can be applied to this setting, in which bad events are atomic configurations of the variables (as in a $k$-SAT instance), and this can be stronger than the ordinary LLL. As shown in \cite{harris2}, there are forms of lopsidependency in the Moser-Tardos setting which can even go beyond the Shearer criterion itself. However, the Parallel Resampling Algorithm does not work in this setting; alternate, slower, parallel algorithms which can take advantage of this lopsidependency phenomenon are given in \cite{harris2}, \cite{harris4}. In this paper we are only concerned with the standard (not lopsided) LLL. To state the Shearer criterion, it will be useful to suppose that the dependency structure of our bad events $\mathcal B$ is fixed, but the probabilities for the bad events have not been specified. 
We define the \emph{independent-set polynomial} $Q(I,p)$ as $$ Q(I, p) = \sum_{\substack{I \subseteq J \subseteq \mathcal B\\\text{$J$ independent}}} (-1)^{|J|-|I|} \prod_{B \in J} p(B) $$ for any $I \subseteq \mathcal B$. Note that $Q(I, p) = 0$ if $I$ is not an independent set. This quantity plays a key role in Shearer's criterion for the LLL \cite{shearer} and the behavior of the Resampling Algorithm. We say that the probabilities $p$ satisfy the Shearer criterion iff $Q(\emptyset, p) > 0$ and $Q(I, p) \geq 0$ for all independent sets $I \subseteq \mathcal B$. \begin{proposition}[\cite{shearer}] \label{a1Ashearer-prop} Suppose that $p$ satisfies the Shearer criterion. Then any probability space with the given dependency structure and probabilities $P_{\Omega} = p$ has a positive probability that none of the bad events $B$ are true. Suppose that $p$ does not satisfy the Shearer criterion. Then there is a probability space $\Omega$ with the given dependency structure and probabilities $P_{\Omega} = p$ for which, with probability one, at least one $B \in \mathcal B$ is true. \end{proposition} \begin{proposition}[\cite{shearer}] \label{a1Ashearer-prop2} Suppose that $p(B) \leq p'(B)$ for all $B \in \mathcal B$. Then, if $p'$ satisfies the Shearer criterion, so does $p$. \end{proposition} One useful parameter for us will be the following: \begin{definition} For any bad event $B$, define the \emph{measure} of $B$ to be $\mu(B) = \frac{Q( \{B \}, P_{\Omega})}{ Q( \emptyset, P_{\Omega} )}$. \end{definition} In \cite{kolipaka}, Kolipaka \& Szegedy showed that if the Shearer criterion is satisfied, then the Resampling Algorithm terminates with probability one; furthermore, the run-time of the Resampling Algorithm can be bounded in terms of the measures $\mu$. \begin{proposition}[\cite{kolipaka}] The expected number of resamplings of any $B \in \mathcal B$ is at most $\mu(B)$.
\end{proposition} This leads us to define the \emph{work parameter} for the LLL by $W = \sum_{B \in \mathcal B} \mu(B)$. Roughly speaking, the expected running time of the Resampling Algorithm is $O(W)$; we will later show (in Section~\ref{a1Asec3}) that such a bound holds whp as well. Although the sequential Resampling Algorithm can often work well when the Shearer criterion is satisfied (almost) exactly, for the Parallel Resampling Algorithm one must often satisfy it with a small slack. \begin{definition} We say that the Shearer criterion is satisfied with $\epsilon$-slack, if the vector of probabilities $(1+\epsilon) P_{\Omega}$ satisfies the Shearer criterion. \end{definition} It is extremely difficult to directly show that the Shearer criterion is satisfied in a particular instance. There are alternative criteria, which are weaker than the full Shearer criterion but much easier to work with computationally. Perhaps the simplest is the asymmetric LLL criterion. The connection between the Shearer criterion and the asymmetric LLL criterion was shown by Kolipaka \& Szegedy in \cite{kolipaka}. \begin{theorem}[\cite{kolipaka}] \label{a1Akthm1} Suppose that a weighting function $x: \mathcal B \rightarrow (0,1)$ satisfies $$ \forall B \in \mathcal B \qquad P_{\Omega}(B) (1+\epsilon) \leq x(B) \prod_{\substack{A \sim B \\ A \neq B}} (1 - x(A)) $$ Then the Shearer criterion is satisfied with $\epsilon$-slack, and $\mu(B) \leq \frac{x(B)}{1-x(B)}$ for all $B \in \mathcal B$. \end{theorem} This was extended to the cluster-expansion LLL criterion of \cite{bissacot} by Harvey \& Vondr\'{a}k in \cite{harvey}: \begin{theorem}[\cite{harvey}] \label{a1Akthm2} For any bad event $B$, let $N(B)$ denote the set of bad events $A$ with $A \sim B$.
Suppose that a weighting function $\tilde \mu: \mathcal B \rightarrow [0, \infty)$ satisfies $$ \forall B \in \mathcal B \qquad \tilde \mu(B) \geq P_{\Omega}(B) (1+\epsilon) \sum_{\substack{I \subseteq N(B)\\ \text{$I$ independent}}} \prod_{A \in I} \tilde \mu(A) $$ Then the Shearer criterion is satisfied with $\epsilon$-slack, and $\mu(B) \leq \tilde \mu(B)$ for all $B \in \mathcal B$. \end{theorem} For the remainder of this paper, we will assume unless stated otherwise that our probability space $\Omega$ satisfies the Shearer criterion with $\epsilon$-slack. We will occasionally derive certain results for the symmetric LLL criterion as a corollary of results on the full Shearer criterion. \section{The witness DAG and related structures} \label{a1Asec2} There are two key analytical tools introduced by Moser \& Tardos to analyze their algorithm: the resampling table and the witness tree. The \emph{resampling table} $R$ is a table of values $R(i,t)$, where $i$ ranges over the variables $1, \dots, n$ and $t$ ranges over the natural numbers $1, 2, \dots$. Each cell $R(i,t)$ is drawn independently from the distribution on the variable $i$, so that $R(i,t) = j$ with probability $p_{ij}$, independently of all other cells. The intent of this table is that, instead of choosing new values for the variables in ``on-line'' fashion, we precompute the future values of all the variables. The first entry in the table, $R(i,1)$, is the initial value for the variable $X_i$; on the $t^{\text{th}}$ resampling, we set $X_i = R(i,t+1)$.\footnote{Although nominally the resampling table provides a countably infinite stream of values for each variable, in practice we will only need to use approximately $\epsilon^{-1} \log n$ distinct values for each variable.} The \emph{witness tree} is a structure which records the history of all variables involved in a given resampling.
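To make the table mechanics concrete, the following toy sketch (ours, not from the original) drives the sequential Resampling Algorithm entirely from a precomputed table; replaying the same table reproduces the identical execution:

```python
import random

def run_with_table(n, bad_events, R):
    """Sequential Resampling Algorithm driven by a resampling table.
    R[i] is the precomputed value stream for variable i: X_i starts at
    R[i][0], and its t-th resampling reads R[i][t]. Returns the final
    assignment and the log of resampled events (their variable sets)."""
    ptr = [0] * n                       # rows of R consumed per variable
    X = [R[i][0] for i in range(n)]
    log = []
    while True:
        true_events = [(S, f) for (S, f) in bad_events if f(X)]
        if not true_events:
            return X, log
        S, _ = true_events[0]
        log.append(S)
        for i in S:
            ptr[i] += 1
            X[i] = R[i][ptr[i]]

# Toy instance: 12 fair bits; bad event B_i says bits i..i+4 are all 1.
rng = random.Random(3)
n = 12
events = [(set(range(i, i + 5)),
           lambda X, i=i: all(X[j] == 1 for j in range(i, i + 5)))
          for i in range(n - 4)]
R = [[rng.randint(0, 1) for _ in range(50)] for _ in range(n)]
X, log = run_with_table(n, events, R)
# The run is a deterministic function of the table: replaying R gives
# the same assignment and the same log.
X2, log2 = run_with_table(n, events, R)
```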
Moser \& Tardos give a very clear and detailed description of the process for forming witness trees; we provide a simplified description here. Suppose that the Resampling Algorithm resamples bad events $B_1, \dots, B_t$ in order (the algorithm has not necessarily terminated by this point). We build a witness-tree $\hat \tau_t$ for the $t^{\text{th}}$ resampling, as follows. We place a node labeled by $B_t$ at the root of the tree. We then go backwards in time for $j = t-1, \dots, 1$. For each $B_j$, if there is a node $v'$ in the tree labeled by $B' \sim B_j$, then we add a new node $v$ labeled by $B_j$ as a child of $v'$; if there are multiple choices of $v'$, we always select the one of greatest depth (breaking ties arbitrarily). If there is no such node $v'$, then we do not add any nodes to the tree for that value of $j$. \subsection{The witness DAG} The witness tree $\hat \tau_t$ only provides an explanation for the single resampling at time $t$; it may discard information about other resamplings. We now consider a related object, the \emph{witness DAG} (abbreviated \emph{WD}), which can record information about multiple resamplings, or all of the resamplings. A WD is a directed acyclic graph, whose nodes are labeled by bad events. For nodes $v, v' \in G$, we write $v \prec v'$ if there is an edge from $v$ to $v'$. We impose two additional requirements, which we refer to as the \emph{comparability conditions}. First, if nodes $v, v'$ are labeled by $B, B'$ and $B \sim B'$, then either $v \prec v'$ or $v' \prec v$; second, if $B \not \sim B'$ then there is no edge between $v, v'$. We let $|G|$ denote the number of vertices in a WD $G$. It is possible that a WD can contain multiple nodes with the same label. However, because of the comparability conditions, all such nodes are linearly ordered by $\prec$. Thus for any WD $G$ and any $B \in \mathcal B$, the nodes of $G$ labeled $B$ can be unambiguously sorted.
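The witness-tree construction described above can be sketched directly in code (our illustration; events are represented by their variable sets, and dependency by a predicate):

```python
def witness_tree(log, t, dependent):
    """Build the witness tree for the t-th resampling (1-indexed) from the
    log of resampled bad events, following Moser-Tardos: scan backwards,
    attaching each event under the deepest node dependent with it.
    Nodes are returned as (label, depth, parent_index); node 0 is the root."""
    nodes = [(log[t - 1], 0, None)]            # root labeled B_t
    for j in range(t - 2, -1, -1):             # scan B_{t-1}, ..., B_1
        B = log[j]
        cands = [(depth, idx) for idx, (lab, depth, _) in enumerate(nodes)
                 if dependent(lab, B)]
        if cands:                              # attach under deepest match
            depth, idx = max(cands)
            nodes.append((B, depth + 1, idx))
    return nodes

# Events identified with their variable sets; B ~ B' iff they share a variable.
dep = lambda a, b: bool(a & b)
A, B, C = frozenset({1, 2}), frozenset({2, 3}), frozenset({4})
tree = witness_tree([A, B, A, C], 3, dep)      # history for the 3rd resampling
```

Events with no dependent node already in the tree are simply skipped and leave no trace in $\hat \tau_t$, exactly as in the construction above.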
Accordingly, we use the notation $(B,k)$ to mean that node $v$ is the $k^{\text{th}}$ node of $G$ labeled by $B$. For any node $v$, we refer to this ordered pair $(B,k)$ as the \emph{extended label} of $v$. Every node in a WD receives a distinct extended label. We emphasize that this is purely a notational convenience, as the extended label of a node can be recovered from the WD together with its un-extended labels. Given a full execution of the Resampling Algorithm, one can form a particularly important WD which we refer to as the \emph{Full Witness DAG} $\hat G$ (abbreviated \emph{FWD}). We construct this as follows. Suppose that we resample bad events $B_1, \dots, B_t$. Then $\hat G$ has vertices $v_1, \dots, v_t$ which are labeled $B_1, \dots, B_t$. We place an edge from $v_i$ to $v_j$ iff $i<j$ and $B_i \sim B_j$. We emphasize that $\hat G$ is a random variable. The FWD (under different terminology) was analyzed by Kolipaka \& Szegedy in \cite{kolipaka}, and we will use their results in numerous places. However, we will also consider partial WDs, which record information about only a subset of the resamplings. As witness trees and single-sink WDs are closely related, we will often use the notation $\tau$ for a single-sink WD. We let $\Gamma$ denote the set of all single-sink WDs, and for any $B \in \mathcal B$ we let $\Gamma(B)$ denote the set of single-sink WDs whose sink node is labeled $B$. \subsection{Compatibility conditions for witness DAGs and resampling tables} The Moser-Tardos proof hinges on a method for converting an execution log into a witness tree, and gives necessary conditions, in terms of consistency with the resampling table, for a witness tree to be produced in this fashion. We will instead use these conditions as a \emph{definition} of compatibility. \begin{definition}[Path of a variable] Let $G$ be a WD. For any $i \in [n]$, let $G[i]$ denote the subgraph of $G$ induced on all vertices $v$ labeled by $B$ with $i \in S_B$.
Because of the comparability conditions, $G[i]$ is linearly ordered by $\prec$; thus we refer to $G[i]$ as the \emph{path} of variable $i$. \end{definition} \begin{definition}[Configuration of $v$] Let $G$ be a WD and $R$ a resampling table. Let $v \in G$ be labeled by $B$. For each $i \in S_B$, let $y_{v,i}$ denote the number of vertices $w \in G[i]$ such that $w \prec v$. We now define the \emph{configuration of $v$} by $$ X_G^v (i) = R(i, 1 + y_{v, i}) $$ \end{definition} \begin{definition}[Compatibility of WD $G$ with resampling table $R$] For a WD $G$ and a resampling table $R$, we say that $G$ is \emph{compatible} with $R$ if, for all nodes $v \in G$ labeled by $B \in \mathcal B$, it is the case that $B$ is true on the configuration $X_G^v$. This is well-defined because $X_G^v$ assigns values to all the variables in $S_B$. We define $\Gamma^R$ to be the set of single-sink WDs compatible with $R$, and similarly for $\Gamma^R(B)$. \end{definition} The following are key results used by Moser \& Tardos to bound the running time of their resampling algorithm: \begin{definition}[Weight of a WD] Let $G$ be any WD, whose nodes are labeled by bad events $B_1, \dots, B_s$. We define the \emph{weight} of $G$ to be $w(G) = \prod_{k=1}^s P_{\Omega}(B_k)$. \end{definition} \begin{proposition} \label{a1wprop} Let $G$ be any WD. For a random resampling table $R$, $G$ is compatible with $R$ with probability $w(G)$. \end{proposition} \begin{proof} For any node $v \in G$ labeled by $B$, note that $X_G^v$ follows the law of $\Omega$, and so the probability that $B$ is true on the configuration $X_G^v$ is $P_{\Omega}(B)$. Next, note that each node $v \in G$ imposes conditions on disjoint sets of entries of $R$, and so these events are independent. \end{proof} \begin{proposition} \label{a1Agcompat} Suppose we run the Resampling Algorithm, taking values for the variables from the resampling table $R$. Then $\hat G$ is compatible with $R$.
\end{proposition} \begin{proof} Suppose there is a node $v \in \hat G$ with an extended label $(B,k)$. Thus, $B$ must be resampled at least $k$ times. Suppose that the $k^{\text{th}}$ resampling occurs at time $t$. Let $Y$ be the configuration at time $t$, just before this resampling. We claim that, for all $i \in S_B$, we have $Y(i) = X_{\hat G}^v(i)$. For, the graph $\hat G$ must contain all the resamplings involving variable $i$. All such nodes are connected to vertex $v$ (as they overlap in variable $i$), and those that occur before time $t$ are precisely those that have an edge to $v$. So $y_{v,i}$ is exactly the number of resamplings before time $t$ that involve variable $i$. Thus, just before the resampling at time $t$, variable $i$ held its $(1 + y_{v,i})^{\text{th}}$ value. So $Y(i) = R(i, 1 + y_{v,i}) = X_{\hat G}^v(i)$, as claimed. In order for $B$ to be resampled at time $t$, it must have been the case that $B$ was true, i.e., that $B$ held on configuration $Y$. However, since $Y$ agrees with $X_{\hat G}^v$ on $S_B$, it must also be the case that $B$ holds on configuration $X_{\hat G}^v$. Since this is true for all $v$, it follows that $\hat G$ is compatible with $R$. \end{proof} \subsection{Prefixes of a WD} A WD records information about many resamplings. If we are only interested in the history of a subset of its nodes, then we can form a \emph{prefix subgraph} which discards irrelevant information. \begin{definition}[Prefix graph] \label{a1prefix-def} For any WD $G$ and vertices $v_1, \dots, v_l \in G$, let $G(v_1, \dots, v_l)$ denote the subgraph of $G$ induced on all vertices which have a path to at least one of $v_1, \dots, v_l$. If $H$ is a subgraph of $G$ with $H = G(v_1, \dots, v_l)$ for some $v_1, \dots, v_l \in G$, then we say that $H$ is a \emph{prefix} of $G$.
\end{definition} Using Definition~\ref{a1prefix-def}, we can give a more compact definition of the configuration of a node: \begin{proposition} For any WD $G$ and $v \in G$, we have $X^v_G(i) = R(i, |G(v)[i]|)$. \end{proposition} \begin{proof} Suppose that $v$ is labeled by $B$. The graph $G(v)[i]$ contains precisely $v$ itself and the other nodes $w \in G[i]$ with $w \prec v$. So $|G(v)[i]| = y_{v,i} + 1$. \end{proof} \begin{proposition} \label{a1xprop1} Suppose $G$ is compatible with $R$ and $H$ is a prefix of $G$. Then $H$ is compatible with $R$. \end{proposition} \begin{proof} Let $H = G(v_1, \dots, v_l)$. Consider $w \in H$ labeled by $B$. We claim that $H(w) = G(w)$. For, consider any $u \in H(w)$. So $u$ has a path to $w$ in $H$; it also must have a path to $w$ in $G$. On the other hand, suppose $u \in G(w)$, so $u$ has a path $p$ to $w$ in $G$. As $w$ has a path to one of $v_1, \dots, v_l$, this implies that every vertex in the path $p$ also has such a path. Thus, the path $p$ is in $H$, and hence $u$ has a path in $H$ to $w$, so $u \in H(w)$. Next, observe that for any $i \in S_B$ we have $$ X_G^{w}(i) = R(i, |G(w)[i]|) = R(i,|H(w)[i]|) = X_{H}^w(i) $$ and by hypothesis $B$ is true on $X_G^{w}$, hence on $X_H^w$. As this holds for all $w \in H$, the WD $H$ is compatible with $R$. \end{proof} \subsection{Counting witness trees and WDs} In this section, we bound the summed weights of certain classes of WDs. In light of Proposition~\ref{a1wprop}, this will upper-bound the expected number of resamplings. \begin{proposition}[\cite{kolipaka}] \label{a1Aprop2} For any $B \in \mathcal B$, we have $$ \sum_{\tau \in \Gamma(B)} w(\tau) \leq \mu(B). $$ \end{proposition} \begin{proof} For any WD $G$ with a single sink node $v$ labeled $B$, we define $I'_j$ for non-negative integers $j$ using the following recursion: $I'_0 = \{v \}$, and $I'_{j+1}$ is the set of vertices in $G$ whose out-neighbors all lie in $I'_0 \cup \dots \cup I'_j$. Let $I_j$ denote the labels of the vertices in $I'_j$; so $I_0 = \{B \}$.
Now observe that by the comparability conditions each set $I_j$ is an independent set, and for each $B' \in I_{j+1}$ there is some $B'' \in I_j$ with $B'' \sim B'$. Also, the mapping from $G$ to $I_0, \dots, I_j$ is injective. We thus may sum over all such $I_1, \dots, I_{\infty}$ to obtain an upper bound on the total weight of such WDs. In Theorem 14 of \cite{kolipaka}, this sum is shown to be $Q( \{B \}, P_{\Omega})/Q(\emptyset, P_{\Omega})$ (although their notation is slightly different). \end{proof} We will now take advantage of the $\epsilon$-slack in our probabilities. \begin{proposition} \label{a1Aweight-bound1} Given any $V \subseteq \mathcal B$, we say that $V$ is a \emph{dependency-clique} if $B \sim B'$ for all $B, B' \in V$. If $V$ is a dependency-clique then for any $\rho \in [0, \epsilon)$ we have $$ \sum_{B \in V} \frac{Q( \{ B \}, (1+\rho) P_{\Omega})}{Q(\emptyset, (1+\rho) P_{\Omega})} \leq \frac{1+\rho}{\epsilon - \rho}. $$ \end{proposition} \begin{proof} Consider the probability vector $p$ defined by $$ p(B) = \begin{cases} (1+\epsilon) P_{\Omega}(B) & \text{if $B \in V$} \\ (1+\rho) P_{\Omega}(B) & \text{if $B \notin V$} \\ \end{cases} $$ Since $V$ is a clique, for any independent set $I$ we have $|I \cap V| \leq 1$.
Thus we may calculate $Q(\emptyset, p)$ as {\allowdisplaybreaks \begin{align*} Q(\emptyset, p) &= \sum_{\substack{I \subseteq \mathcal B \\ \text{$I$ independent}}} (-1)^{|I|} \prod_{A \in I} p(A) = \sum_{\substack{I \subseteq \mathcal B \\ \text{$I$ independent}}} (-1)^{|I|} \prod_{A \in I} (1 + [A \in V] \frac{\epsilon-\rho}{1+\rho}) (1 + \rho) P_{\Omega}(A) \\ &= \sum_{\substack{ I \subseteq \mathcal B \\ \text{$I$ independent}}} (-1)^{|I|} (1 + [I \cap V \neq \emptyset] \frac{\epsilon-\rho}{1+\rho}) \prod_{A \in I} (1 + \rho) P_{\Omega}(A) \\ &= \sum_{\substack{ I \subseteq \mathcal B \\ \text{$I$ independent}}} (-1)^{|I|} \prod_{A \in I} (1 + \rho) P_{\Omega}(A) + \frac{\epsilon-\rho}{1+\rho} \sum_{B \in V} \sum_{ \substack{I \subseteq \mathcal B \\ \text{$I$ independent} \\ I \cap V = \{B \}}} (-1)^{|I|} \prod_{A \in I} (1 + \rho) P_{\Omega}(A) \\ &= Q(\emptyset, (1+\rho) P_{\Omega}) -\frac{(\epsilon - \rho)}{1+\rho} \sum_{B \in V} Q( \{B \}, (1+\rho) P_{\Omega} ) \end{align*} } We use here the Iverson notation, so that $[I \cap V \neq \emptyset]$ is one if $I \cap V \neq \emptyset$ and zero otherwise. Note that $p \leq (1+\epsilon) P_{\Omega}$ and so by Propositions~\ref{a1Ashearer-prop}, \ref{a1Ashearer-prop2} we have $Q( \emptyset, p ) > 0$. By the same argument, $Q( \{B \}, (1+\rho) P_{\Omega} ) \geq 0$ for all $B \in V$.
We thus have: {\allowdisplaybreaks \begin{align*} \sum_{B \in V} \frac{Q( \{ B \}, (1+\rho) P_{\Omega})}{Q(\emptyset, (1+\rho) P_{\Omega})} &= \frac{\sum_{B \in V} Q( \{B \}, (1+\rho) P_{\Omega})}{ Q(\emptyset, p) + \frac{(\epsilon - \rho)}{1+\rho} \sum_{B \in V} Q( \{B \}, (1+\rho) P_{\Omega} ) } \\ & \leq \frac{\sum_{B \in V} Q( \{B \}, (1+\rho) P_{\Omega})}{ \frac{(\epsilon - \rho)}{1+\rho} \sum_{B \in V} Q( \{B \}, (1+\rho) P_{\Omega} ) } = \frac{1+\rho}{\epsilon - \rho} \end{align*} } \end{proof} \begin{definition}[Adjusted weight] For any WD $G$, we define the \emph{adjusted weight} with respect to rate factor $\rho$ by $$ a_{\rho}(G) = w(G) (1+\rho)^{|G|}. $$ Observe that $w(G) = a_0(G)$. \end{definition} \begin{corollary} \label{a1Aweight-bound2} Suppose that $V \subseteq \mathcal B$ is a dependency-clique. Then for any $\rho \in [0, \epsilon)$ we have $$ \sum_{B \in V} \sum_{\tau \in \Gamma(B)} a_{\rho}(\tau) \leq \frac{1+\rho}{\epsilon - \rho}. $$ \end{corollary} \begin{proof} Applying Proposition~\ref{a1Aprop2} with the probability vector $p = (1+\rho) P_{\Omega}$ gives $$ \sum_{B \in V} \sum_{\tau \in \Gamma(B)} a_{\rho}(\tau) \leq \sum_{B \in V} \frac{Q( \{B \}, (1 + \rho) P_{\Omega})}{Q(\emptyset, (1 + \rho) P_{\Omega})} $$ Now apply Proposition~\ref{a1Aweight-bound1}. \end{proof} \begin{corollary} \label{a1Aweight-bound3} We have the bound $W \leq n/\epsilon$, where we recall the definition $W = \sum_{B \in \mathcal B} \mu(B)$. \end{corollary} \begin{proof} We write \begin{align*} W &= \sum_{B \in \mathcal B} \mu(B) = \sum_{B \in \mathcal B} \frac{Q( \{B \}, P_{\Omega})}{Q(\emptyset, P_{\Omega})} \leq \sum_{i \in [n]} \sum_{B: S_B \ni i} \frac{Q( \{B \}, P_{\Omega})}{Q(\emptyset, P_{\Omega})} \end{align*} Now note that for any $i \in [n]$, the set of bad events $B$ with $i \in S_B$ forms a dependency-clique. Thus, applying Proposition~\ref{a1Aweight-bound1} with $\rho = 0$ gives $\sum_{B: S_B \ni i} \frac{Q( \{B \}, P_{\Omega})}{Q(\emptyset, P_{\Omega})} \leq \frac{1}{\epsilon}$; summing over the $n$ variables gives $W \leq n/\epsilon$.
\end{proof} \begin{corollary}[\cite{kolipaka}] \label{sscor} The total weight of all single-sink WDs is at most $n/\epsilon$. \end{corollary} \begin{proof} Follows immediately from Proposition~\ref{a1Aprop2} and Corollary~\ref{a1Aweight-bound3}. \end{proof} \begin{proposition} \label{a1Abound-prop2} For $r \geq 1 + 1/\epsilon$, the expected number of single-sink WDs compatible with $R$ containing $r$ or more nodes is at most $e n r (1+\epsilon)^{-r}$. \end{proposition} \begin{proof} For any $\rho \in [0, \epsilon)$, sum over such WDs to obtain: {\allowdisplaybreaks \begin{align*} \sum_{\substack{ \tau \in \Gamma \\ |\tau| \geq r}} P( \text{$\tau$ compatible with $R$} ) &= \sum_{\substack{ \tau \in \Gamma \\ |\tau| \geq r}} w(\tau) \leq (1+\rho)^{-r} \sum_{\substack{ \tau \in \Gamma \\ |\tau| \geq r}} w(\tau) (1+\rho)^{|\tau|} = (1+\rho)^{-r} \sum_{\substack{ \tau \in \Gamma \\ |\tau| \geq r}} a_{\rho}(\tau) \\ &\leq (1+\rho)^{-r} \sum_{i \in [n]} \sum_{B: S_B \ni i} \sum_{\tau \in \Gamma(B)} a_{\rho}(\tau) \\ &\leq (1+\rho)^{-r} n \frac{1+\rho}{\epsilon - \rho} \qquad \text{by Corollary~\ref{a1Aweight-bound2}} \end{align*} } Now take $\rho = \epsilon - (1 + \epsilon)/r$. By our condition $r \geq 1 + 1/\epsilon$ we have $\rho \in [0,\epsilon)$, and so Corollary~\ref{a1Aweight-bound2} applies. Hence the expected number of such WDs is at most $\frac{n r^r}{(r-1)^{r-1} (1+\epsilon)^r} \leq e n r (1+\epsilon)^{-r}$. \end{proof} \begin{corollary} \label{a1Acor1} Whp, every element of $\Gamma^R$ contains $O(\frac{\log (n/\epsilon)}{\epsilon})$ nodes. Whp, all but $\frac{10 \log n}{\epsilon}$ elements of $\Gamma^R$ contain at most $\frac{10 \log n}{\epsilon}$ nodes. \end{corollary} \begin{proof} This follows immediately from Markov's inequality and Proposition~\ref{a1Abound-prop2}. \end{proof} \begin{corollary} \label{a1Acor2} Whp, all WDs compatible with $R$ have height $O(\frac{\log n}{\epsilon})$.
\end{corollary} \begin{proof} Suppose that there is a WD $G$ of height $T$ compatible with $R$. Then for $i = 1, \dots, T$ there is a single-sink WD of height $i$ compatible with $R$ (take the graph $G(v)$, where $v$ is a node of height $i$). This implies that there are $\Omega(T)$ members of $\Gamma^R$ of height $\Omega(T)$. By Corollary~\ref{a1Acor1}, this implies $T = O( \frac{\log n}{\epsilon})$. \end{proof} Corollary~\ref{a1Acor2} leads to a better bound on the complexity of the Parallel Resampling Algorithm. The following Proposition~\ref{a1Aprop-bound} is remarkable in that the complexity is phrased solely in terms of the number of variables $n$ and the slack $\epsilon$, and is otherwise independent of $\mathcal B$. \begin{proposition} \label{a1Aprop-bound} Suppose that the Shearer criterion is satisfied with $\epsilon$-slack. Then whp the Parallel Resampling Algorithm terminates after $O( \frac{\log n}{\epsilon} )$ rounds. Suppose we have a Bad-Event Checker running in time $T$ with polynomially many processors. Then the Parallel Resampling Algorithm can be executed using $n^{O(1)}/\epsilon$ processors with an expected run-time of $O(\frac{(\log n) (T + \log ^2 n)}{\epsilon})$ (in the EREW model) or $O(\frac{(\log n) (T + \log n)}{\epsilon})$ (in the CRCW model). \end{proposition} \begin{proof} An induction on $i$ shows that if the Parallel Resampling Algorithm runs for $i$ steps, then $\hat G$ has depth $i$ and is compatible with $R$. By Corollary~\ref{a1Acor2}, whp this implies that $i = O( \frac{\log n}{\epsilon} )$. This implies that the total time needed to identify true bad events is $O(i T) \leq O(\frac{T \log n}{\epsilon})$. We next compute the time required for MIS calculations. We only show the calculation for the EREW model, as the CRCW bound is nearly identical. Suppose that at stage $i$ the number of bad events which are currently true is $v_i$. Then the total time spent calculating MIS, over the full algorithm, is $\sum_{i=1}^t O(\log^2 v_i)$.
Since $\log x$ is a concave function of $x$, this sum is at most $O(t \log^2(\sum v_i/t))$. On the other hand, for each bad event which is true at each stage, one can construct a distinct corresponding single-sink WD compatible with $R$. Hence, $\ensuremath{\mathbf{E}}[\sum v_i] \leq \sum_{\tau \in \Gamma} w(\tau) \leq n/\epsilon$. As $t \leq \frac{\log n}{\epsilon}$, we have $\ensuremath{\mathbf{E}}[t \log^2(\sum v_i/t)] \leq \epsilon^{-1} \log^3 n$. This shows the bound on the time complexity of the algorithm. Now suppose we can enumerate all the currently true bad events. The expected number of bad events which are ever true is at most the weight of all single-sink WDs, which is at most $W \leq n/\epsilon$. By Markov's inequality, whp the total number of bad events which are ever true is bounded by $n^{O(1)} / \epsilon$. \end{proof} \section{Mutual consistency of witness DAGs} \label{a1Asec4} In Section~\ref{a1Asec2}, we have seen conditions for WDs to be compatible with a \emph{given} resampling table $R$. In this section, we examine when a set of WDs can be mutually consistent, in the sense that they could all be prefixes of some (unspecified) FWD. \begin{definition}[Consistency of $G, G'$] Let $G, G'$ be WDs. We say that $G$ is \emph{consistent} with $G'$ if, for all variables $i$, either $G[i]$ is an initial segment of $G'[i]$ or $G'[i]$ is an initial segment of $G[i]$, both of these as labeled graphs. (Carefully note the position of the quantifiers: if $n=2$ and $G[1]$ is an initial segment of $G'[1]$ and $G'[2]$ is an initial segment of $G[2]$, then $G, G'$ are consistent.) Let $\mathcal G$ be any set of WDs. We say that $\mathcal G$ is \emph{pairwise consistent} if $G, G'$ are consistent with each other for all $G, G' \in \mathcal G$. \end{definition} \begin{proposition} \label{a1Ap2} Suppose $H_1, H_2$ are prefixes of $G$. Then $H_1$ is consistent with $H_2$.
\end{proposition} \begin{proof} Observe that for any $w_1 \prec w_2$ with $w_2 \in H_j$, we must have $w_1 \in H_j$ as well. It follows that $H_j[i]$ is an initial segment of $G[i]$ for any $i \in [n]$. As both $H_1[i]$ and $H_2[i]$ are initial segments of $G[i]$, one of them must be an initial segment of the other. \end{proof} \begin{definition}[Merge of two consistent WDs] Let $G, G'$ be consistent WDs. Then we define the \emph{merge} $G \vee G'$ as follows. If either $G$ or $G'$ has a node $v$ with an extended label $(B,k)$, then we create a corresponding node $w \in G \vee G'$ labeled by $B$. We refer to the \emph{corresponding label} of $w$ as $(B,k)$. Now, let $v_1, v_2 \in G \vee G'$ have corresponding labels $(B_1,k_1)$ and $(B_2, k_2)$. We create an edge from $v_1$ to $v_2$ if either $G$ or $G'$ has an edge between vertices with extended labels $(B_1, k_1), (B_2, k_2)$ respectively. \end{definition} For every vertex $v \in G$ with extended label $(B,k)$, there is a vertex in $G \vee G'$ with corresponding label $(B,k)$. We will abuse notation slightly and refer to this vertex of $G \vee G'$ also by the name $v$. \begin{proposition} \label{a1pathprop} Let $G, G'$ be consistent WDs and let $H = G \vee G'$. If there is a path $v_1, \dots, v_l$ in $H$ and $v_l \in G$, then also $v_1, \dots, v_l \in G$. \end{proposition} \begin{proof} Suppose that this path has corresponding labels $(B_1, k_1), \dots, (B_l, k_l)$. Suppose $i \leq l$ is minimal such that $v_i, \dots, v_l$ are all in $G$. (This is well-defined as $v_l \in G$.) If $i = 1$ we are done. Otherwise, we have $v_i \in G, v_{i-1} \in G' - G$. Note that $B_{i-1} \sim B_{i}$, so let $j \in S_{B_{i-1}} \cap S_{B_i}$. Note that $v_i \in G[j], v_{i-1} \in G'[j]$. But observe that in $H$ there is an edge from $v_{i-1}$ to $v_i$. As $v_{i-1} \notin G$, this edge must have been present in $G'$. So $G'[j]$ contains the vertices $v_{i-1}, v_i$, in that order, while $G[j]$ contains $v_i$ but not $v_{i-1}$.
Thus, neither $G[j]$ nor $G'[j]$ can be an initial segment of the other. This contradicts the hypothesis that $G$ and $G'$ are consistent. \end{proof} \begin{proposition} \label{a1welldefprop} Let $G, G'$ be consistent WDs and let $H = G \vee G'$. Then $H$ is a WD and both $G$ and $G'$ are prefixes of it. \end{proposition} \begin{proof} Suppose that $H$ contains a cycle $v_1, \dots, v_l, v_1$, and suppose without loss of generality that $v_1 \in G$. Then by Proposition~\ref{a1pathprop} the cycle $v_1, \dots, v_l, v_1$ is present also in $G$, which is a contradiction. Next, we show that the comparability conditions hold for $H$. Suppose that $(B_1, k_1)$ and $(B_2, k_2)$ are the corresponding labels of vertices in $H$, and $B_1 \sim B_2$. So let $i \in S_{B_1} \cap S_{B_2}$. Without loss of generality, suppose that $G[i]$ is an initial segment of $G'[i]$. So it must be that the vertices with extended labels $(B_1, k_1)$ and $(B_2, k_2)$ both appear in $G'[i]$. Because of the comparability conditions for $G'$, there is an edge in $G'$ between these vertices, and hence there is an edge in $H$ as well. On the other hand, if there is an edge in $H$ between vertices $(B_1, k_1)$ and $(B_2, k_2)$, there must be such an edge in $G$ or $G'$ as well; by the comparability conditions this implies $B_1 \sim B_2$. Finally, we claim that $G = H (v_1, \dots, v_l)$ where $v_1, \dots, v_l$ are the vertices of $G$. It is clear that $G \subseteq H (v_1, \dots, v_l)$. Now, suppose $w \in H (v_1, \dots, v_l)$. Then there is a path $w, x_1, x_2, \dots, x_s, v$ where the vertices $x_1, \dots, x_s$ lie in $H$ and $v \in G$. By Proposition~\ref{a1pathprop}, this implies that $w, x_1, \dots, x_s, v \in G$. So $w \in G$ and we are done. \end{proof} \begin{proposition} \label{a1corr-prop} If $v \in G \vee G'$ has corresponding label $(B,k)$, then the extended label of $v$ is also $(B,k)$. \end{proposition} \begin{proof} Because of our rule for forming edges in $G \vee G'$, the only nodes labeled $B$ that can have an edge to $v$ are those with corresponding labels $(B,l)$ for $l < k$.
Thus, there are at most $k-1$ nodes labeled $B$ with an edge to $v$. On the other hand, there must be a node with extended label $(B,k)$ in $G$ or in $G'$; say without loss of generality the former. Then $G$ must also have nodes with extended labels $(B,1), \dots, (B, k-1)$. These correspond to vertices $w_1, \dots, w_{k-1}$ with corresponding labels $(B,1), \dots, (B,k-1)$, all of which have an edge to $v$. So there are at least $k-1$ nodes labeled $B$ with an edge to $v$. Thus, there are exactly $k-1$ nodes of $G \vee G'$ labeled $B$ with an edge to $v$, and hence $v$ has extended label $(B,k)$. \end{proof} \begin{proposition} \label{a1Ap0} The operation $\vee$ is commutative and associative. \end{proposition} \begin{proof} Commutativity is obvious from the symmetric way in which $\vee$ was defined. To show associativity, note that we can give the following symmetric characterization of $H = (G_1 \vee G_2) \vee G_3$: if $G_1, G_2$ or $G_3$ has a node with extended label $(B_1, k_1)$ then so does $H$, and there is an edge in $H$ from $(B_1, k_1)$ to $(B_2, k_2)$ if there is such an edge in $G_1, G_2$ or $G_3$. \end{proof} \begin{proposition} \label{a1Ap1} Suppose $G_1, G_2$ are consistent with each other and with some WD $G_3$. Then $G_1 \vee G_2$ is consistent with $G_3$. \end{proposition} \begin{proof} For any variable $i \in [n]$, note that either $G_1[i]$ is an initial segment of $G_2[i]$ or vice-versa. Also note that $(G_1 \vee G_2)[i]$ is the longer of $G_1[i]$ and $G_2[i]$. Now we claim that for any variable $i$, either $G_3[i]$ is an initial segment of $(G_1 \vee G_2)[i]$ or vice-versa. Suppose without loss of generality that $G_1[i]$ is an initial segment of $G_2[i]$. Then $(G_1 \vee G_2)[i] = G_2[i]$. By the consistency of $G_2$ and $G_3$, either $G_2[i]$ is an initial segment of $G_3[i]$ or vice-versa. So $(G_1 \vee G_2)[i]$ is an initial segment of $G_3[i]$ or vice-versa.
\end{proof} In light of Propositions~\ref{a1Ap0} and \ref{a1Ap1}, we can unambiguously define, for any pairwise consistent set of WDs $\mathcal G = \{G_1, \dots, G_l \}$, the merge $$ \bigvee \mathcal G = G_1 \vee G_2 \vee G_3 \dots \vee G_l $$ We can give another characterization of pairwise consistency, which is more illuminating although less explicit: \begin{proposition} \label{a1alt-char} The WDs $G_1, \dots, G_l$ are pairwise consistent iff there is some WD $H$ such that $G_1, \dots, G_l$ are all prefixes of $H$. \end{proposition} \begin{proof} For the forward direction: let $H = G_1 \vee \dots \vee G_l$. By Proposition~\ref{a1welldefprop}, each $G_i$ is a prefix of $H$. For the backward direction: by Proposition~\ref{a1Ap2}, any $G_{i_1}, G_{i_2}$ are both prefixes of $H$, hence consistent. \end{proof} \begin{proposition} Let $G_1, G_2$ be consistent WDs and $R$ a resampling table. Then $G_1 \vee G_2$ is compatible with $R$ iff both $G_1$ and $G_2$ are compatible with $R$. \end{proposition} \begin{proof} For the forward direction: let $v \in G_1$ be labeled by $B$. By Proposition~\ref{a1pathprop}, we have $G_1(v) = (G_1 \vee G_2)(v)$. Thus for $i \in S_B$ we have $|G_1(v)[i]| = |(G_1 \vee G_2)(v)[i]|$. This implies that $X_{G_1}^v = X_{G_1 \vee G_2}^v$. By hypothesis, $B$ is true on $X_{G_1 \vee G_2}^{v}$ and hence on $X_{G_1}^v$. As this is true for all $v \in G_1$, it follows that $G_1$ is compatible with $R$. Similarly, $G_2$ is compatible with $R$. For the backward direction: let $v \in G_1 \vee G_2$. Suppose without loss of generality that $v \in G_1$. As in the forward direction, we have $X_{G_1}^v = X_{G_1 \vee G_2}^v$; by hypothesis $B$ is true on the former so it is true on the latter. Since this holds for all $v \in G_1 \vee G_2$, it follows that $G_1 \vee G_2$ is compatible with $R$. \end{proof} \section{A new parallel algorithm for the LLL} \label{a1Asec5} In this section, we will develop a parallel algorithm to enumerate the entire set $\Gamma^R$.
This will allow us to enumerate (implicitly) all WDs compatible with $R$. In particular, we are able to simulate all possible values for the FWD $\hat G$, without running the Resampling Algorithm. In a sense, both the Parallel Resampling Algorithm and our new parallel algorithm are building up $\hat G$. However, the Parallel Resampling Algorithm does this layer by layer, in an inherently sequential way: it does not determine layer $i+1$ until it has fixed a value for layer $i$, and resolving each layer requires a separate MIS calculation. Our new algorithm breaks this sequential bottleneck by exploring, in parallel, all possible values for the computed MIS. Although this might seem like an exponential blowup, in fact we are able to reduce this to a polynomial size by taking advantage of two phenomena: first, we can represent the FWD in terms of single-sink WDs; second, a random resampling table drastically prunes the space of compatible WDs. \subsection{Collectible witness DAGs} We will enumerate the set $\Gamma^R$ by building up its members node-by-node. In order to do so, we must keep track of a slightly more general type of WD, namely those derived by removing the sink node from a single-sink WD. Such WDs may have multiple sink nodes, whose labels are pairwise within distance two in the dependency graph. The set of such WDs is larger than $\Gamma^R$, but still polynomially bounded. \begin{definition}[Collectible WD] Suppose we are given a WD $G$, whose sink nodes are labeled $B_1, \dots, B_s$. We say that $G$ is \emph{collectible to $B$} if $B \sim B_1, \dots, B \sim B_s$. We say that $G$ is \emph{collectible} if it is collectible to some $B \in \mathcal B$. Note that if $G \in \Gamma(B)$ then $G$ is collectible to $B$.
We use \emph{CWD} as an abbreviation for collectible witness DAG.\footnote{This definition is close to the concept of partial witness trees introduced in \cite{det-lll}.} \end{definition} \begin{proposition} Define $$ W' = \sum_{B \in \mathcal B} \frac{1}{P_{\Omega}(B)} \sum_{\tau \in \Gamma(B)} w(\tau) $$ The expected total number of CWDs compatible with $R$ is at most $W'$. \end{proposition} \begin{proof} Suppose that $G$ is a WD collectible to $B$. Then define $G'$ by adding to $G$ a new sink node labeled by $B$. As all the sink nodes in $G$ are labeled by $B' \sim B$, this $G'$ is a single-sink WD. Also, $$ P(\text{$G$ compatible with $R$}) = w(G) = \frac{w(G')}{P_{\Omega}(B)} $$ The expected number of CWDs compatible with $R$ is at most the sum of $w(G)$ over all collectible $G$. For fixed $B$, each WD $G' \in \Gamma(B)$ arises from at most one collectible $G$, and so \begin{align*} \sum_{\text{$G$ collectible}} w(G) \leq \sum_{B \in \mathcal B} \sum_{\text{$G$ collectible to $B$}} w(G) = \sum_{B \in \mathcal B} \sum_{\text{$G$ collectible to $B$}} \frac{w(G')}{P_{\Omega}(B)} \leq \sum_{B \in \mathcal B} \sum_{\tau \in \Gamma(B)} \frac{w(\tau)}{P_{\Omega}(B)} = W' \end{align*} \end{proof} \begin{corollary} \label{w1boundcorr} We have $m \leq W' \leq \sum_{B \in \mathcal B} \frac{\mu(B)}{P_{\Omega}(B)}$. \end{corollary} \begin{proof} The upper bound follows from Proposition~\ref{a1Aprop2}. For the lower bound, consider the contribution to the sum $\sum_{B \in \mathcal B} \frac{1}{P_{\Omega}(B)} \sum_{\tau \in \Gamma(B)} w(\tau)$ from the WDs $\tau$ with $|\tau| = 1$. \end{proof} The parameter $W'$, which dictates the run-time of our parallel algorithm, has a somewhat complicated behavior. For most applications of the LLL where the bad events are ``balanced,'' we have $W' \approx m$. For example, consider the symmetric LLL setting: \begin{proposition} \label{a1Aw1corr} If the symmetric LLL criterion $e p d \leq 1$ is satisfied then $W' \leq m e$.
\end{proposition} \begin{proof} The asymmetric LLL criterion is satisfied by setting $x(B) = \frac{e P_{\Omega}(B)}{1 + e P_{\Omega}(B)}$ for all $B \in \mathcal B$. By Theorem~\ref{a1Akthm1}, we have $\mu(B) \leq e P_{\Omega}(B)$ for all $B \in \mathcal B$; so by Corollary~\ref{w1boundcorr}, $W' \leq \sum_{B \in \mathcal B} \frac{\mu(B)}{P_{\Omega}(B)} \leq m e$. \end{proof} \begin{proposition} \label{gen-w1-bound} Let $p: \mathcal B \rightarrow [0,1]$ be a vector satisfying the two conditions: \begin{enumerate} \item $P_{\Omega}(B) \leq p(B)$ for all $B \in \mathcal B$ \item $p$ satisfies the Shearer criterion with $\epsilon$-slack. \end{enumerate} Then we have $$ W' \leq \frac{(n/\epsilon)}{\min_{B \in \mathcal B} p(B)} $$ \end{proposition} \begin{proof} For any WD $G$ whose nodes are labeled $B_1, \dots, B_s$, define $w'(G)$ to be $p(B_1) \cdots p(B_s)$. Now consider some $\tau \in \Gamma(B)$ which, in addition to its sink node, has $s$ nodes labeled $B_1, \dots, B_s$. We have that $$ \frac{w(\tau)}{P_{\Omega}(B)} = \prod_{i=1}^s P_{\Omega}(B_i) \leq \prod_{i=1}^s p(B_i) = \frac{w'(\tau)}{p(B)} $$ Thus, by Corollary~\ref{sscor} applied to the probability vector $p$, \begin{align*} W' \leq \sum_{B \in \mathcal B} \frac{1}{p(B)} \sum_{\tau \in \Gamma(B)} w'(\tau) \leq \frac{\sum_{\tau \in \Gamma} w'(\tau)}{\min_{B \in \mathcal B} p(B)} \leq \frac{(n/\epsilon)}{\min_{B \in \mathcal B} p(B)} \end{align*} \end{proof} We note that Proposition~\ref{gen-w1-bound} seems to say that $W'$ becomes large when the probabilities $p$ are small. This seems strange, as bad events with small probability have a negligible effect. (For instance, if a bad event has probability zero, we can simply ignore it.) A more accurate way to read Proposition~\ref{gen-w1-bound} is that $W'$ becomes large only for problem instances which are \emph{close to the Shearer bound} and have small probabilities. Such instances would have some bad events with simultaneously very low probability and very high dependency; in these cases $W'$ can indeed become exponentially large.
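Before turning to the enumeration procedure, it may help to see the per-variable consistency test and the merge operation $\vee$ in executable form. The following Python sketch represents a WD only by its per-variable paths $G[i]$, stored as lists of extended labels; this representation, and all names in it, are illustrative assumptions rather than part of the algorithm.

```python
def is_prefix(a, b):
    """True if list `a` is an initial segment of list `b`."""
    return len(a) <= len(b) and b[: len(a)] == a

def consistent(G1, G2):
    """G1, G2: dicts mapping variable i -> the path G[i], a list of
    extended labels.  Consistency holds if, per variable, one path is
    an initial segment of the other (the quantifier is per-variable)."""
    variables = set(G1) | set(G2)
    return all(
        is_prefix(G1.get(i, []), G2.get(i, []))
        or is_prefix(G2.get(i, []), G1.get(i, []))
        for i in variables
    )

def merge(G1, G2):
    """Per-variable paths of G1 v G2: the longer of the two paths."""
    assert consistent(G1, G2)
    return {
        i: max(G1.get(i, []), G2.get(i, []), key=len)
        for i in set(G1) | set(G2)
    }

# Variable 1: G1's path extends G2's; variable 2: the reverse.
G1 = {1: [("B", 1), ("C", 1)], 2: [("B", 1)]}
G2 = {1: [("B", 1)], 2: [("B", 1), ("D", 1)]}
H = merge(G1, G2)
```

Here `H` takes the longer path on each variable, mirroring the fact that each path of $G_1 \vee G_2$ is the longer of the corresponding paths of $G_1$ and $G_2$; a full implementation would of course also track the edge structure of the merged DAG.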
\subsection{Algorithmically enumerating witness DAGs} In the Moser-Tardos setting, the witness trees were not actually part of the algorithm but were a theoretical device for analyzing it. Our new algorithm is based on explicit enumeration of the CWDs. We will show that, for an appropriate choice of the parameter $K$, Algorithm~\ref{enum-par-alg1} can be used to enumerate all of $\Gamma^R$. \begin{algorithm}[H] \centering \begin{algorithmic}[1] \State Randomly sample the resampling table $R$. \State For each $B \in \mathcal B$ true in the initial configuration $R(\cdot, 1)$, create a graph with a single vertex labeled $B$. We denote this initial set by $F_1$. \For{$k = 1, \dots, K$} \State For each consistent pair $G_1, G_2 \in F_k$, form $G' = G_1 \vee G_2$. If $G'$ is collectible and $|G'| \leq k+1$, then add it to $F_{k+1}$. \State For each $G \in F_k$ which is collectible to $B$, create a new WD $G'$ by adding to $G$ a new sink node labeled $B$. If $G'$ is compatible with $R$ then add it to $F_{k+1}$. \EndFor \end{algorithmic} \caption{Enumerating witness DAGs} \label{enum-par-alg1} \end{algorithm} \begin{proposition} Let $G \in F_k$ for any integer $k \geq 1$. Then $G$ is compatible with $R$. \end{proposition} \begin{proof} We show this by induction on $k$. When $k = 1$, the set $F_1$ consists of singleton nodes: $G \in F_1$ is a single node $v$ labeled by $B$. Note that $X_G^v(i) = R(i,1)$ for all $i \in S_B$, and so $B$ is true on $X_G^v$. So $G$ is compatible with $R$. Now for the induction step. Suppose first that $G$ was formed by $G = G_1 \vee G_2$, for $G_1, G_2 \in F_{k-1}$. By the induction hypothesis, $G_1, G_2$ are compatible with $R$; since they are also consistent, their merge $G$ is compatible with $R$. Second, suppose $G$ was formed in step (5); then by construction it is compatible with $R$. \end{proof} \begin{proposition} \label{enum-fk-prop} Suppose that $G$ is a CWD compatible with $R$ containing at most $k$ nodes. Then $G \in F_k$. \end{proposition} \begin{proof} We show this by induction on $k$.
When $k = 1$, then $G$ is a singleton node $v$ labeled by $B$, and $X_G^v(i) = R(i,1)$. So $B$ is true on the configuration $R(\cdot, 1)$, and so $G \in F_1$. For the induction step, first note that if $|G| < k$, then by induction hypothesis $G \in F_{k-1}$. So $G$ will be added to $F_k$ in step (4) by taking $G_1 = G_2 = G$. Next, suppose $G$ has a single sink node $v$ labeled by $B$. Consider the WD $G' = G - v$. Note that $|G'| = k - 1$. Also, all the sink nodes in $G'$ must be labeled by some $B' \sim B$ (as otherwise they would remain sink nodes in $G$). So $G'$ is collectible to $B$. By induction hypothesis, $G' \in F_{k-1}$. Iteration $k-1$ transforms the graph $G' \in F_{k-1}$ into $G$ (by adding a new sink node labeled by $B$), and so $G \in F_k$ as desired. Finally, suppose that $G$ has sink nodes $v_1, \dots, v_s$ labeled by $B_1, \dots, B_s$, where $s \geq 2$ and $B \sim B_1, \dots, B_s$ for some $B \in \mathcal B$. Let $G' = G(v_1)$ and let $G'' = G(v_2, \dots, v_s)$. Note that $G'$ is missing the vertex $v_s$ and similarly $G''$ is missing the vertex $v_1$. So $|G'| < k, |G''| < k$, and $G', G''$ are collectible to $B$. Thus, $G', G'' \in F_{k-1}$ and so $G = G' \vee G''$ is added to $F_k$ in step (4). \end{proof} \begin{proposition} \label{a1Aphase1-prop} Suppose that we have a Bad-Event Checker using $O(\log mn)$ time and $\text{poly}(m,n)$ processors. Then there is an EREW PRAM algorithm to enumerate $\Gamma^R$ whp using $\text{poly}(W', \epsilon^{-1}, n)$ processors and $\tilde O(\epsilon^{-1} (\log n) (\log (W' n)))$ time. \end{proposition} \begin{proof} We have shown that $F_k$ contains all the CWDs compatible with $R$ using at most $k$ nodes. Furthermore, by Corollary~\ref{a1Acor1}, whp every member of $\Gamma^R$ has at most $O(\epsilon^{-1} \log (n/\epsilon) )$ nodes. Hence, for $K = \Theta(\epsilon^{-1} \log (n/\epsilon))$, we have that with high probability $F_K \supseteq \Gamma^R$. 
The expected total number of CWDs compatible with $R$ is at most $W'$. Hence, with high probability, the total number of such WDs is at most $W' n^{O(1)}$. Routine parallel algorithms and our Bad-Event Checker can be used to implement the steps of checking compatibility with $R$ and merging WDs, so each iteration of this algorithm can be implemented using $(W' m n / \epsilon)^{O(1)}$ processors and $O(\log (W' m n / \epsilon))$ time. We obtain the stated bounds by using the fact that $W' \geq m$. \end{proof} \subsection{Producing the final configuration} Now that we have generated the complete set $\Gamma^R$, we are ready to finish the algorithm by producing a satisfying assignment. Define a graph $\mathcal G$, whose nodes correspond to $\Gamma^R$, with an edge between WDs $G, G'$ if they are inconsistent. Let $\mathcal I$ be a maximal independent set of $\mathcal G$, and let $G = \bigvee \mathcal I$. Finally define the configuration $X^*$, which we refer to as the \emph{final configuration}, by $$ X^*(i) = R(i, |G[i]|+1) $$ for all $i \in [n]$. \begin{proposition} \label{a1Aconj-proof} The configuration $X^*$ avoids $\mathcal B$. \end{proposition} \begin{proof} Suppose that $B$ is true on $X^*$. Define the WD $H$ by adding to $G$ a new sink node $v$ labeled by $B$. Observe that $G$ is a prefix of $H$. By Proposition~\ref{a1Ap2}, $H$ and $G$ are consistent. We claim that $H$ is compatible with $R$. By Proposition~\ref{a1xprop1}, $G$ is compatible with $R$, so this is clear for all the vertices of $H$ except for its sink node $v$. For this vertex, observe that for each $i \in S_B$ we have $X_H^v(i) = R(i, |H[i]|) = R(i, |G[i]|+1) = X^*(i)$. By Proposition~\ref{a1xprop1}, this implies that $H(v)$ is compatible with $R$ as well. So $H(v) \in \Gamma^R$, and consequently $H(v)$ is a node of $\mathcal G$. Observe that $H(v)$ and all the WDs $G' \in \mathcal I$ are prefixes of $H$. By Proposition~\ref{a1alt-char}, $H(v)$ is consistent with all of them. 
As $\mathcal I$ was chosen to be a maximal independent set, this implies that $H(v) \in \mathcal I$. By Proposition~\ref{a1welldefprop}, this implies that $H(v)$ is a prefix of $G$. This implies that $|G[i]| \geq |H(v)[i]|$ for any variable $i$. But for $i \in S_B$ we have $|H(v)[i]| = |H[i]| = |G[i]|+1$, a contradiction. \end{proof} We thus obtain our faster parallel algorithm for the LLL: \begin{theorem} \label{a1Amainthm} Suppose that the Shearer criterion is satisfied with $\epsilon$-slack and that we have a Bad-Event Checker using $O(\log mn)$ time and $\text{poly}(m,n)$ processors. Then there is an EREW PRAM algorithm to find a configuration avoiding $\mathcal B$ using $\tilde O(\epsilon^{-1} (\log n) \log(W' n))$ time and $(W' n / \epsilon)^{O(1)}$ processors whp. \end{theorem} \begin{proof} Use Proposition~\ref{a1Aphase1-prop} to enumerate $\Gamma^R$ using $\tilde O(\epsilon^{-1} (\log n) \log (W' n))$ time and $(W' n / \epsilon)^{O(1)}$ processors. Whp, $|\Gamma^R| \leq W n^{O(1)}$. Using Luby's MIS algorithm, find a maximal independent set of such WDs in time $O(\log^2 (W n))$ and $(W n)^{O(1)}$ processors. Finally, form the configuration $X^*$ as indicated in Proposition~\ref{a1Aconj-proof} using $O(\log (W n / \epsilon))$ time and $(W n / \epsilon)^{O(1)}$ processors. Using the bound $W \leq n/\epsilon$ gives the stated result. \end{proof} \begin{corollary} Suppose that the symmetric LLL criterion is satisfied with $\epsilon$-slack, i.e., $e p d (1+\epsilon) \leq 1$, and we can determine if any bad event $B$ is true on a given configuration in time $O(\log mn)$. Then there is an EREW PRAM algorithm to find a configuration avoiding $\mathcal B$ using $\tilde O(\epsilon^{-1} \log(m n) \log n)$ time and $(mn)^{O(1)}$ processors whp. \end{corollary} \begin{proof} We have $W \leq \sum_{B \in \mathcal B} e p \leq O(m/d)$. By Proposition~\ref{a1Aw1corr}, we have $W' \leq m e$. Note that $m \leq n d$, so $W \leq O(n)$. Now apply Theorem~\ref{a1Amainthm}. 
\end{proof} \subsection{A heuristic lower bound} In this section, we give some intuition as to why we believe that the run-time of this algorithm, approximately $O(\epsilon^{-1} \log^2 n)$, is essentially optimal for LLL algorithms \emph{which are based on the resampling paradigm}. We are not able to give a formal proof, because we do not have any fixed model of computation in mind. Also, it is not clear whether our new algorithm is based on resampling. Suppose we are given a problem instance on $n$ variables whose distributions are all Bernoulli-$q$ for some parameter $q \in [0,1]$. The space $\mathcal B$ consists of $\sqrt{n}$ bad events, each of which is a threshold function on $\sqrt{n}$ variables, and all these events are completely disjoint from each other. By adjusting the exact threshold used and the parameter $q$, we can ensure that each bad event has probability $p = 1 - \epsilon$. The number of resamplings of each event is a geometric random variable, and it is not hard to see that with high probability there will be some bad event $B$ which requires $\Omega(\epsilon^{-1} \log n)$ resamplings before it is false. Also, note that whenever we perform a resampling of $B$, we must compute whether $B$ is currently true. This requires computing a sum of $\sqrt{n}$ binary variables, which itself requires time $\Omega(\log n)$. Thus, the overall running time of this algorithm must be $\Omega(\epsilon^{-1} \log^2 n)$. The reason we consider this a \emph{heuristic} lower bound is that, technically, the parallel algorithm we have given is not based on resampling. That is, there is no current ``state'' of the variables which is updated as bad events are discovered. Rather, all possible resamplings are precomputed in advance from the table $R$. 
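The $\Omega(\epsilon^{-1} \log n)$ bound on the number of resamplings of the worst event can be checked with a short calculation; the sketch below uses illustrative constants of our own choosing ($m$ disjoint events, each of probability $1-\epsilon$, so each needs an independent Geometric($\epsilon$) number of resamplings).

```python
import math

# Illustrative instance (constants are ours): m disjoint bad events, each of
# probability p = 1 - eps, so each needs a Geometric(eps) number of
# resamplings before it becomes false.
eps, m = 0.1, 10_000
p = 1 - eps

# Smallest t such that, with probability >= 1/2, every event has been
# falsified within t resamplings: this probability is (1 - p^t)^m.
t = 1
while (1 - p ** t) ** m < 0.5:
    t += 1

# The median of the maximum is within a constant factor of (log m)/eps,
# matching the Omega(eps^{-1} log n) heuristic above.
assert math.log(m) / (2 * eps) < t < 2 * math.log(m) / eps
```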
\iffalse \subsection{Processing the witness DAGs} \label{a1Astoring-sec} In our algorithm, we have assumed that if we have $M$ total witness DAGs under consideration, we can perform the basic operations of the parallel algorithm in $\tilde O(\log (M m n))$ time using $(M m n)^{O(1)}$ processors on a PRAM. The main task we must perform is the following: given two witness DAGs $G_1, G_2$, we must first determine if $G_1, G_2$ are consistent and if so form $G = G_1 \vee G_2$. Next, we must check if $G$ is collectible to some $B$. A related task is: given a graph $G$ collectible to some $B$, create a new graph $G'$ which has an added sink node labeled $B$. If we worked with the full graph structure of the witness DAGs then these steps might appear to require a traversal of the graphs. However, we only need to store a limited amount of information about these DAGs. Namely, we must store the list of all sink nodes, and we must store the association between entries of $R$ and the corresponding bad events. That is, for each variable $i \in [n]$ and each $t \leq O(\epsilon^{-1} \log n)$, we must determine a list of all the nodes $v \in G$ and their labels $(B,k)$ such that $G(v)[i] = t$. Using this information, we may easily determine if $G_1, G_2$ are consistent. It is also straightforward to compute this association table for $G_1 \vee G_2$, given the association tables for the individual graphs. (We simply merge the lists; this can be done using standard parallel sorting algorithms). We can likewise determine the sink nodes of $G_1 \vee G_2$. Using our association table, we can determine if any sink node of $G_1$ appears in $G_2$; if so, this node is a sink node of $G_1 \vee G_2$ if it is also a sink node of $G_2$. If the sink node of $G_1$ does not appear in $G_2$, then it becomes a sink node in $G_1 \vee G_2$, and so forth. Next, we enumerate in parallel over all $B \in \mathcal B$. 
Suppose we are given a fixed $B$ and a fixed $G$; we want to determine if $G$ is collectible to $B$. We can check, in parallel, whether the sink nodes of $G$ overlap the variables in $B$; this takes time $O(\log n)$ and $n^{O(1)}$ processors. We then check if every sink node of $G$ overlapped in some variable; this takes another $O(\log n)$ time and $n^{O(1)}$ processors. Finally, we need to determine if we can form a new graph $G'$ by adding a new sink node $v$ labeled $B$ to $G$. In addition to the graph-theoretic structure needed for this, we need to check if $B$ is true on the configuration $X^v$. This will be possible under our assumption that we can check if a bad event is true in time $T$. Other operations used in our algorithm can be handled in similar ways. \fi \section{A deterministic parallel algorithm} \label{a1Asec:det} In this section, we derandomize the algorithm of Section~\ref{a1Asec5}. The resulting deterministic algorithm requires an additional slack compared to the Parallel Resampling Algorithm (which in turn requires additional slack compared to the sequential algorithm). For the symmetric LLL setting, we require that $e p d^{1+\epsilon} \leq 1$ for some (constant) $\epsilon > 0$. A previous parallel LLL algorithm was given for this criterion in \cite{det-lll}. It is possible to extend this to an asymmetric criterion, but this requires many definitions and technical conditions. Our algorithm would also be compatible with such a criterion, in a manner similar to \cite{det-lll}; however, in order to focus on the main case, we do not discuss this here. Similarly, the deterministic algorithms impose strong constraints on the composition of the bad events (whereas the randomized algorithms allow them to be almost arbitrary). 
These constraints are somewhat technical, so we will focus on the simplest scenario: the set $\mathcal B$ contains $m$ bad events, which are each explicitly represented as \emph{atomic} events (that is, conjunctions of terms of the form $X_i = j$). The paradigmatic example of this setting is the $k$-SAT problem. In this simplified setting, the algorithm of \cite{det-lll} requires $O(\epsilon^{-1} \log^3 (mn))$ time and $(mn)^{O(1/\epsilon)}$ processors on the EREW PRAM.\footnote{While randomized MIS algorithms appear to be faster in the CRCW model as compared to EREW, this does not appear to be true for deterministic algorithms. The fastest known NC algorithms for MIS appear to require $O(\log^2 |V|)$ time (in both models). Thus, for the deterministic algorithms, we do not distinguish between EREW and CRCW PRAM models.} The proof strategy behind our derandomization is similar to that of \cite{det-lll}: we first show a range lemma, allowing us to ignore WDs which have a large number of nodes. We use this to show that if the resampling table $R$ is drawn from a probability distribution which satisfies an approximate independence condition, then the algorithm of Section~\ref{a1Asec5} will have similar behavior to the scenario in which $R$ is drawn with full independence. Such probability distributions have polynomial support size, so we can obtain an NC algorithm by searching them in parallel. 
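As a toy illustration of searching a polynomial-size sample space, the following sketch uses the classical XOR construction, which gives an \emph{exactly} pairwise-independent space of support size $2^k$ (a simpler stand-in for the $\delta$-approximately $k$-wise independent spaces used here; the parameters are our own).

```python
from itertools import product

# Toy stand-in for the approximately k-wise independent spaces used above:
# assign to variable i a distinct nonzero vector v_i in GF(2)^k; the bits
# x_i = <seed, v_i> are pairwise independent and uniform as the seed ranges
# over all 2^k points of GF(2)^k.
k = 3
vectors = [v for v in product([0, 1], repeat=k) if any(v)]  # distinct nonzero
space = []
for seed in product([0, 1], repeat=k):
    bits = [sum(s * vi for s, vi in zip(seed, v)) % 2 for v in vectors]
    space.append(bits)

# Support size is 2^k, polynomial, versus 2^(2^k - 1) fully independent points.
assert len(space) == 2 ** k

# Every pair of coordinates is exactly uniform on {0,1}^2 over the support.
n = len(vectors)
for i in range(n):
    for j in range(i + 1, n):
        counts = {}
        for bits in space:
            key = (bits[i], bits[j])
            counts[key] = counts.get(key, 0) + 1
        for key in product([0, 1], repeat=2):
            assert counts[key] == 2 ** k // 4
```

An NC algorithm can then dedicate one processor group to each of the $2^k$ seeds and search them in parallel, in the spirit of the derandomization described above.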
\begin{definition} \label{a1Aapprox-def} We say a probability space $\Omega'$ is $k$-wise, $\delta$-approximately independent, if for all subsets of variables $X_{i_1}, \dots, X_{i_k}$, and all possible valuations $j_1, \dots, j_k$, we have $$ \Bigl| P_{\Omega'} (X_{i_1} = j_1 \wedge \dots \wedge X_{i_k} = j_k) - P_{\Omega} (X_{i_1} = j_1 \wedge \dots \wedge X_{i_k} = j_k) \Bigr| \leq \delta $$ \end{definition} \begin{theorem}[\cite{approx-indep}] \label{a1Aapprox-thm} There are $k$-wise, $\delta$-approximately independent probability spaces which have a support of size $\text{poly}(\log n, 2^k, 1/\delta)$. \end{theorem} Our algorithm will be defined in terms of a key parameter $K$, which we will determine later. \begin{lemma} \label{det-lemma2} Suppose that a resampling table $R$ has the following properties: \begin{enumerate} \item[(A1)] For all $\tau \in \Gamma^R$ we have $|\tau| \leq K$. \item[(A2)] There are at most $S$ CWDs compatible with $R$ of size at most $K$. \end{enumerate} Then, if we run the algorithm of Section~\ref{a1Asec5} up to $K$ steps, it will terminate with a configuration avoiding $\mathcal B$, using $O(K \log (mnS) + \log^2 S)$ time and $\text{poly}(K,S,m,n)$ processors. \end{lemma} \begin{proof} By Proposition~\ref{enum-fk-prop}, Algorithm~\ref{enum-par-alg1} enumerates all of $\Gamma^R$. By Proposition~\ref{a1Aconj-proof}, the final configuration avoids $\mathcal B$. The running time can be bounded noting that the total number of CWDs produced at any step in the overall process is bounded by $S$. \end{proof} \begin{lemma} \label{det-lemma1} If there is a CWD $G$ compatible with $R$ with $|G| \geq K$, then there exists a CWD $G'$ compatible with $R$ of size $K \leq |G'| \leq 2 K$. \end{lemma} \begin{proof} We prove this by induction on $|G|$. Let $G$ be collectible to $B$. If $|G| \leq 2 K$ then this holds immediately so suppose $|G| > 2 K$. Suppose $G$ has $s > 1$ sink nodes $v_1, \dots, v_s$. 
For each $i = 1, \dots, s$ define $$ H_i = G(v_1, v_2, \dots, v_{i-1}, v_{i+1}, \dots, v_s). $$ Clearly $G = H_1 \vee H_2 \vee \dots \vee H_s$. The nodes omitted from $H_i$ (those with a path only to $v_i$) are disjoint across the choices of $i$, so there must exist $i$ such that $|G| > |H_i| \geq (1 - 1/s) |G| \geq |G|/2 \geq K$. Also, $H_i$ is collectible to $B$ and is a prefix of $G$, hence is compatible with $R$. So apply the induction hypothesis to $H_i$. Next, suppose that $G$ has a single sink node $v$. Then $G - v$ is collectible to $B$ and is compatible with $R$. Since $|G| > 2 K$ we have $|G - v| \geq K$ and so we apply the induction hypothesis to $G - v$. \end{proof} \begin{proposition} \label{tree-cnt-prop} There are at most $m e (e d)^t$ CWDs with $t$ nodes. \end{proposition} \begin{proof} Let $Z$ denote the number of CWDs with $t$ nodes. Then $$ Z = \sum_{\substack{\text{CWD $G$} \\ |G| = t}} 1 = (e d)^t \sum_{\substack{\text{CWD $G$} \\ |G| = t}} (e d)^{-|G|} \leq (e d)^t \sum_{\text{CWD $G$}} (e d)^{-|G|} $$ The term $\sum_{\text{CWD $G$}} (e d)^{-|G|}$ can be interpreted as $\sum_{\text{CWD $G$}} w(G)$ under the probability vector $p(B) = \frac{1}{e d}$. This probability vector satisfies the symmetric LLL criterion, and so Proposition~\ref{a1Aw1corr} gives $\sum_{\text{CWD $G$}} (e d)^{-|G|} \leq m e$. \end{proof} \begin{proposition} \label{a1Arand1} For $c, c'$ sufficiently large constants and $d \geq 2$, the following holds: Suppose that $\mathcal B$ consists of atomic events on $s$ variables and $e p d^{1+\epsilon} \leq 1$ for $\epsilon \in (0,1)$. Let $K = \frac{c' \log(m/\epsilon)}{\epsilon \log d}$. Suppose that $R$ is drawn from a probability distribution which is $\frac{2 c' s \log (m/\epsilon)}{\epsilon \log d}$-wise, $(m/\epsilon)^{-c/\epsilon}$-approximately independent. Then with positive probability the following events both occur: \begin{enumerate} \item[(B1)] Every $\tau \in \Gamma^R$ has $|\tau| \leq 2 K$. \item[(B2)] There are at most $O(m)$ CWDs compatible with $R$. 
\end{enumerate} \end{proposition} \begin{proof} By Lemma~\ref{det-lemma1}, a necessary condition for there to exist any single-sink WD $\tau$ compatible with $R$ such that $|\tau| > 2 K$ is for there to exist some CWD compatible with $R$ with $K \leq |G| \leq 2 K$. For any such $G$, let $\mathcal E$ be the event that $G$ is compatible with $R$. The event $\mathcal E$ is a conjunction of events corresponding to the vertices in $G$. Each such vertex event depends on at most $s$ variables, so in total $\mathcal E$ is an atomic event on at most $|G| s \leq 2 K s \leq \frac{2 c' s \log (m/\epsilon)}{\epsilon \log d}$ terms. So, by Definition~\ref{a1Aapprox-def}, $P( \mathcal E) \leq w(G) + (m/\epsilon)^{-c/\epsilon}$. Summing over all such $G$, we have \begin{align*} \sum_{\substack{\text{CWD $G$} \\ K \leq |G| \leq 2 K}} \negthickspace \negthickspace P(\text{$G$ compatible with $R$}) &\leq \negthickspace \sum_{\substack{\text{CWD $G$} \\ K \leq |G| \leq 2 K}} \negthickspace (w(G) + (m/\epsilon)^{-c/\epsilon}) \leq \sum_{\substack{\text{CWD $G$} \\ |G| \geq K}} w(G) + \sum_{\substack{\text{CWD $G$} \\ |G| \leq 2 K}} (m/\epsilon)^{-c/\epsilon} \end{align*} For any integer $t$, Proposition~\ref{tree-cnt-prop} shows there are at most $m e (e d)^t$ total CWDs with $t$ nodes, and hence their total weight is at most $m e (e d p)^t \leq m e \, d^{-t \epsilon}$. So the first summand is at most $m e \sum_{t \geq K} d^{-t \epsilon} \leq O(m d^{-K \epsilon}/\epsilon)$; this is at most $0.1$ for $c'$ sufficiently large. Now let us fix $c'$, and show that we can take $c$ sufficiently large. By Proposition~\ref{tree-cnt-prop}, the total number of CWDs of size at most $2 K$ is at most $\sum_{t = 1}^{2 K} m e (ed)^t \leq 2m (e d)^{2 K}$. Hence the second summand is at most $2m e (m/\epsilon)^{-c / \epsilon} (e d)^{ \frac{c' \log(m/\epsilon)}{\epsilon \log d}}$; this is at most $0.1$ for $c$ sufficiently large. 
Thus, altogether, we see that there is a probability of at most $0.2$ that there exists a CWD (and in particular a single-sink WD) compatible with $R$ with more than $2 K$ nodes. Next suppose that all CWDs compatible with $R$ have size at most $2 K$. Thus, the expected total number of CWDs compatible with $R$ is given by \begin{align*} \sum_{\substack{\text{CWD $G$} \\ |G| \leq 2 K}} P(\text{$G$ compatible with $R$}) \leq \sum_{\substack{\text{CWD $G$} \\ |G| \leq 2 K}} (w(G) + (m/\epsilon)^{-c/\epsilon}) \leq W' + ( \sum_{t = 1}^{2 K} m (e d)^t) (m/\epsilon)^{-c/\epsilon} \end{align*} As we have already seen, the second summand is at most $0.1$ for our choice of $c, c'$. By Proposition~\ref{a1Aw1corr}, we have $W' \leq m e$. Thus, this sum is $O(m)$. Finally apply Markov's inequality. \end{proof} \begin{theorem} \label{main-det-thm} Suppose $e p d^{1+\epsilon} \leq 1$ for $\epsilon \in (0,1)$ and every bad event is an atomic configuration on at most $s$ variables. Then there is a deterministic EREW PRAM algorithm to find a configuration avoiding $\mathcal B$, using $O(\frac{s \log(mn)/\log d + \log^2 (m n)}{\epsilon})$ time and $(m n)^{O(\frac{s + \log d}{\epsilon \log d})}$ processors. \end{theorem} \begin{proof} We begin with a few simple pre-processing steps. If there is any variable $X_i$ which can take on more than $m$ values, then there must be one such value which appears in no bad events; simply set $X_i$ to that value. So we assume every variable can take at most $m$ values, and so there are at most $m^n$ possible assignments to the variables. If $\epsilon < 1/(mn)$, then one can exhaustively test the full set of assignments using $O(m^n) \leq (mn)^{1/\epsilon}$ processors. So we assume $\epsilon > 1/(mn)$. Similarly, if $d = 1$, then every bad event is independent, and we can find a satisfying assignment exhaustively using $O(m^s)$ processors. So we assume $d \geq 2$. 
We first construct a probability space $\Omega$ which is $\frac{2 c' s \log(mn)}{\epsilon \log d}$-wise, $(mn)^{-c/\epsilon}$-approximately independent on a ground set of size $n K$. We use this to form a resampling table $R(i,x)$ for $i \in [n]$ and $x \leq K$. By Theorem~\ref{a1Aapprox-thm}, we can ensure that $\Omega$ has a support size of $(m n)^{O(\frac{s + \log d}{\epsilon \log d})}$. We subdivide our processors so that $\omega \in \Omega$ is explored by an independent group of processors; the total cost of this allocation/subdivision step is $O( \log |\Omega|) \leq O(\frac{(s+\log d) \log (m n)}{\epsilon \log d})$. Henceforth, every element of this probability space will be searched independently, with no further inter-communication (except at the final stage of reporting a satisfying solution). Next, given a resampling table $R$, simulate the algorithm of Section~\ref{a1Asec5} up to $K = \frac{c' \log (m n)}{\epsilon \log d}$ steps. By Proposition~\ref{a1Arand1}, for large enough constants $c, c'$ there is at least one element $\omega \in \Omega$ for which all single-sink WDs have size $O(K)$ and for which the total number of CWDs compatible with $R$ is $O(m)$. By Lemma~\ref{det-lemma2}, the algorithm of Section~\ref{a1Asec5} produces a satisfying assignment using $O(K \log(m n) + \log^2(m n)) = O(\frac{\log^2 (m n)}{\epsilon})$ time and using $\text{poly}(m,n)$ processors. Note that other elements $\omega' \in \Omega$ may not satisfy the bounds (B1), (B2); if we attempt to fully simulate the algorithm of Section~\ref{a1Asec5} for $\omega'$, then the running time might be very large (since one would need to sort and process the collectible WDs). However, we can immediately terminate the algorithm if we ever discover that the number of WDs exceeds the ``good'' value (which it is supposed to have on $\omega$); thus, these ``bad'' values $\omega'$ do not increase the total run-time or processor count. 
\end{proof} \begin{corollary} Suppose that there is a $k$-SAT instance in which each variable appears in at most $L \leq \frac{2^{k/(1+\epsilon)}}{e k}$ clauses. Then a satisfying assignment can be deterministically found on an EREW PRAM using $ O(\frac{k^2 \log^2 n}{\epsilon})$ time and $(2^k n)^{O(1/\epsilon)}$ processors. In particular, this is NC for $k \leq O(\log n)$. \end{corollary} \begin{proof} Each bad event (a clause being false) is an atomic configuration on $s = k$ variables, and there are at most $m \leq n 2^k$ clauses. Here $p = 2^{-k}$ and $d \leq L k \leq 2^{k/(1+\epsilon)}/e$. Note that $e p d^{1+\epsilon} \leq e^{-\epsilon} < 1$, and so the criterion of Theorem~\ref{main-det-thm} is satisfied. Since $\log d = \Theta(s)$, the total processor count required is $(m n)^{O(1/\epsilon)} = (n 2^k)^{O(1/\epsilon)}$ and the total run-time required is $O(\frac{s \log(mn)/\log d + \log^2 (m n)}{\epsilon}) = O( \frac{k^2 \log^2 n}{\epsilon} )$. \end{proof} Theorem~\ref{main-det-thm} is essentially a ``black-box'' simulation of the randomized algorithm using an appropriate probability space. A more recent algorithm of \cite{harris3} gives a ``white-box'' simulation, which searches the probability space more efficiently; this significantly relaxes the constraint on the types of bad events allowed. For problems where the bad events are simple (e.g. $k$-SAT), the algorithms have essentially identical run-times. \section{Concentration for the number of resamplings} \label{a1Asec3} The expected number of resamplings for the Resampling Algorithm is at most $W$. Suppose we wish to ensure that the number of resamplings is bounded with high probability, not merely in expectation. One simple way to achieve this would be to run $\log n$ instances of the Resampling Algorithm in parallel; this is a generic amplification technique which ensures that whp the total number of resamplings performed will be $O(W \log n)$. Can we avoid this extraneous factor of $\log n$? 
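The generic amplification bound mentioned above follows from a one-line calculation, sketched below (the value of $n$ is arbitrary and illustrative).

```python
import math

# Each independent instance of the Resampling Algorithm makes at most W
# resamplings in expectation, so by Markov's inequality a single instance
# exceeds 2W resamplings with probability at most 1/2. Running
# k = ceil(log2(n)) instances in parallel, the chance that none of them
# stops within 2W resamplings is at most 2^(-k) <= 1/n, for O(W log n)
# total resamplings whp.
n = 10 ** 6
k = math.ceil(math.log2(n))
assert 0.5 ** k <= 1.0 / n
```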
In this section, we answer this question in the affirmative by giving a concentration result for the number of resamplings. We show that whp the number of resamplings will not exceed $O(W)$ (assuming that $W$ is sufficiently large). We note that the straightforward approach here would be the following: the probability that there are $T$ resamplings is at most the probability that there is a $T$-node WD compatible with $R$; this can be upper-bounded by summing the weights of all such $T$-node WDs. This straightforward approach can only give the weaker result that the number of resamplings is bounded by $O(W/\epsilon)$. This multiplicative dependence on $\epsilon$ appears in prior concentration bounds. For instance, Kolipaka \& Szegedy \cite{kolipaka} show that the Resampling Algorithm performs $O(n^2/\epsilon + n/\epsilon \log (1/\epsilon))$ resamplings with constant probability, and \cite{achlioptas} shows that in the symmetric LLL setting the Resampling Algorithm performs $O(n/\epsilon)$ resamplings whp. Such results have a large gap when $\epsilon$ is very small. Our main contribution is to remove this factor of $\epsilon^{-1}$. \begin{proposition} \label{a1Asurj} Given any distinct bad events $B_1, \dots, B_s$, the total weight of all WDs with $s$ sink nodes labeled $B_1, \dots, B_s$, is at most $\prod_{i=1}^s \mu(B_i)$. \end{proposition} \begin{proof} We define a function $F$ which maps $s$-tuples $(\tau_1, \dots, \tau_s) \in \Gamma(B_1) \times \dots \times \Gamma(B_s)$ to WDs $G = F(\tau_1, \dots, \tau_s)$ whose sink nodes are labeled $B_1, \dots, B_s$. The function is defined by first forming the disjoint union of the graphs $\tau_1, \dots, \tau_s$. We then add an edge from a node labeled $B$ in $\tau_i$ to a node labeled $B'$ in $\tau_j$ iff $i < j$ and $B \sim B'$. Now, consider any WD $G$ whose sink nodes $v_1, \dots, v_s$ are labeled $B_1, \dots, B_s$. 
For $i = 1, \dots, s$, define $\tau_i$ recursively by $$ \tau_i = G(v_i) - \tau_1 - \dots - \tau_{i-1} $$ Note that each $\tau_i$ contains the sink node $v_i$, so it is non-empty. Also, all the nodes in $\tau_i$ are connected to $v_i$, so $\tau_i$ indeed has a single sink node. Finally, every node of $G$ has a path to one of $v_1, \dots, v_s$, so it must lie in exactly one $\tau_i$. Thus, for each WD $G$ with sink nodes labeled $B_1, \dots, B_s$, there exist $\tau_1, \dots, \tau_s$ in $\Gamma(B_1), \dots, \Gamma(B_s)$ respectively such that $G = F(\tau_1, \dots, \tau_s)$, and furthermore such that the nodes of $G$ are the union of the nodes of $\tau_1, \dots, \tau_s$. In particular, $w(G) = w(\tau_1) \cdots w(\tau_s)$. So we have: \begin{align*} \sum_{\text{$G$ has $s$ sink nodes $B_1, \dots, B_s$}} w(G) &\leq \sum_{\tau_1, \dots, \tau_s} w(\tau_1) \dots w(\tau_s) = \prod_{i=1}^s \sum_{\tau \in \Gamma(B_i)} w(\tau) \\ &\leq \prod_{i=1}^s \mu(B_i) \qquad \text{by Proposition~\ref{a1Aprop2}} \end{align*} \end{proof} \begin{theorem} \label{a1Aconc-thm} Whp, the Resampling Algorithm performs $O(W + \frac{\log^2 n}{\epsilon})$ resamplings. \end{theorem} \begin{proof} First, consider the expected number of WDs which are compatible with $R$ and which contain exactly $s$ sink nodes; here $s$ is a parameter to be specified later. Each of these $s$ sink nodes must receive distinct labels. 
We can estimate this quantity as {\allowdisplaybreaks \begin{align*} \sum_{\text{$G$ has $s$ sink nodes}} P(\text{$G$ compatible with $R$}) &\leq \sum_{\text{$G$ has $s$ sink nodes}} w(G) \\ &\leq \sum_{\text{$B_1, \dots, B_s$ distinct}} \mu(B_1) \dots \mu(B_s) \qquad \text{by Proposition~\ref{a1Asurj}} \\ &\leq \frac{1}{s!} (\sum_{B \in \mathcal B} \mu(B))^s = \frac{W^s}{s!} \end{align*} } Now, suppose that the Resampling Algorithm runs for $t$ time-steps, and consider the event $\mathcal E$ that $t \geq c(W + \frac{\log^2 n}{\epsilon})$ where $c$ is some sufficiently large constant (to be determined). Let $\hat G$ be the FWD of the resulting execution. Each resampling at time $i \in \{1, \dots, t\}$ corresponds to some vertex $v_i$ in $\hat G$. For $i = 1, \dots, t$ define $H_i = \hat G(v_i)$. Next define the set $A$ to be the set of tuples $(i_1, \dots, i_s)$ satisfying the properties that $t \geq i_1 > i_2 > i_3 > \dots > i_{s-1} > i_s \geq 1$ and that $i_j \notin H_{i_1} \cup \dots \cup H_{i_{j-1}}$ for $j = 1, \dots, s$. For each tuple $a = (i_1, \dots, i_s) \in A$, define $\hat G(a)$ as $\hat G(v_{i_1}, \dots, v_{i_s})$. The condition $i_j \notin H_{i_1} \cup \dots \cup H_{i_{j-1}}$ ensures that each $v_{i_j}$ is a sink node in $\hat G(a)$, and thus $\hat G(a)$ contains exactly $s$ sink nodes. By Proposition~\ref{a1xprop1} it is compatible with $R$. Further, we claim that $\hat G(a) \neq \hat G(a')$ for $a \neq a'$. To see this, we note that we can recover $i_1, \dots, i_s$ uniquely from the graph $\hat G(a)$. For, suppose that $B$ is the label of a sink node $u$ of $\hat G(a)$; in this case, if $\hat G(a)[u]$ contains $k$ nodes labeled $B$, then $v_{i_j}$ must be the unique node in $\hat G$ with extended label $(B, k)$. This allows us to recover the unordered set $\{i_1, \dots, i_s \}$; the condition that $i_1 > \dots > i_s$ allows us to recover $a$ uniquely. 
Consequently, the number of $s$-sink-node WDs compatible with $R$ must be at least the cardinality of $A$, namely $$ |A| = \sum_{1 \leq i_1 \leq t} \sum_{\substack{i_2 < i_1\\ i_{2} \notin H_{i_1}}} \sum_{\substack{i_3 < i_2\\ i_3 \notin H_{i_1} \cup H_{i_2}}} \cdots \sum_{\substack{i_s < i_{s-1}\\ i_s \notin H_{i_1} \cup H_{i_2} \cup \dots \cup H_{i_{s-1}}}} 1 $$ Let us define $\mathcal E'$ to be the (rare) event that there are more than $\frac{10 \log n}{\epsilon}$ members of $\Gamma^R$ with more than $\frac{10 \log n}{\epsilon}$ nodes. By Proposition~\ref{a1Acor1}, $P(\mathcal E') \leq n^{-\Omega(1)}$. Let us now condition on the event that $\mathcal E'$ has not occurred. Let $X$ denote the set $\{ i \mid | \hat G(v_i) | \leq h \}$ where $h = \frac{10 \log n}{\epsilon}$. In this case, $|X| \geq t - \frac{10 \log n}{\epsilon}$. Consequently, if the event $\mathcal E$ occurs then for $c$ sufficiently large we have $|X| \geq t/2$. We thus have: \begin{align*} |A| &\geq |A \cap X \times \dots \times X| = \sum_{ i_1 \in X } \sum_{\substack{i_2 < i_1\\ i_{2} \notin H_{i_1} \\ i_2 \in X}} \sum_{\substack{i_3 < i_2\\ i_3 \notin H_{i_1} \cup H_{i_2} \\ i_3 \in X}} \cdots \sum_{\substack{i_s < i_{s-1}\\ i_s \notin H_{i_1} \cup H_{i_2} \cup \dots \cup H_{i_{s-1}} \\ i_s \in X}} 1 \end{align*} By Proposition~\ref{a1Atech-prop1} (which we defer to after this proof), this expression is at least $\binom{|X|-(s-1) h}{s} \geq \frac{(t/2 - s h)^s}{s!}$ since $|H_j| \leq h$ for all $j \in X$. Hence, we have shown that $\mathcal E$ requires either the event $\mathcal E'$ or that the number of WDs with $s$ sink nodes compatible with $R$ is at least $\frac{(t/2-s h)^s}{s!}$. As the expected number of such WDs is at most $W^s/s!$, Markov's inequality gives $P(\mathcal E) \leq W^s/(t/2-s h)^s + P(\mathcal E')$. 
Setting $s = \frac{t}{4 h}$, we bound this as $$ \frac{W^s}{(t/2 - s h)^s} = (4 W/t)^s \leq 2^{-s} = n^{-\Omega(1)} $$ using the facts that $t \geq 8 W$ and $t \geq \Omega( \frac{\log^2 n}{\epsilon} )$. \end{proof} \begin{corollary} The Resampling Algorithm performs $O(n/\epsilon)$ resamplings whp. \end{corollary} \begin{proof} By Corollary~\ref{a1Aweight-bound3} we have $W \leq n/\epsilon$. Thus, by Theorem~\ref{a1Aconc-thm}, with high probability the total number of resamplings made by the Resampling Algorithm is at most $O(\frac{n}{\epsilon} + \frac{\log^2 n}{\epsilon}) = O(\frac{n}{\epsilon})$. \end{proof} For the symmetric LLL, we can even obtain concentration without $\epsilon$-slack. \begin{corollary} If the symmetric LLL criterion $e p d \leq 1$ is satisfied, then whp the number of resamplings is $O(n + d \log^2 n)$. \end{corollary} \begin{proof} Set $x(B) = \frac{1}{d}$ for all $B \in \mathcal B$. Now a simple calculation shows that we satisfy the asymmetric LLL condition for $\epsilon = e (\frac{d-1}{d})^{d-1} - 1 = \Omega(1/d)$. Thus $\mu(B) \leq x(B)/(1-x(B)) = \frac{1}{d-1}$, and so $W \leq m/(d-1)$. We also may observe that $m \leq n d$. So, by Theorem~\ref{a1Aconc-thm}, the total number of resamplings is, with high probability, $O(n + d \log^2 n)$. \end{proof} To finish this proof, we need to show the following simple combinatorial bound: \begin{proposition} \label{a1Atech-prop1} Suppose that $A$ is a finite set and suppose that for each integer $j$ there is a set $I_j \subseteq A$ with $|I_j| \leq h$. Define $$ f(A, s, I) = \sum_{i_1 \in A} \sum_{\substack{i_2 < i_1\\ i_2 \in A - I_{i_1}}} \sum_{\substack{i_3 < i_2\\ i_3 \in A - I_{i_1} - I_{i_2}}} \dots \sum_{\substack{i_s < i_{s-1}\\ i_s \in A - I_{i_1} - I_{i_2} - \dots - I_{i_{s-1}}}} 1 $$ Then we have $$ f(A, s, I) \geq \binom{|A|-(s-1) h}{s} $$ \end{proposition} \begin{proof} Set $t = |A|$. We prove this by induction on $s$. 
When $s = 1$ then $f(A, 1, I) = \sum_{i_1 \in A} 1 = t = \binom{t - (1-1) h}{1}$ as claimed. We move to the induction step $s > 1$. Suppose that $A = \{a_1, \dots, a_t \}$, and suppose that we select the value $i_1 = a_j$. Then the remaining sum over $i_2, \dots, i_s$ is equal to $f(A'_j, s-1, I)$, where $A'_j = \{a_1, \dots, a_{j-1} \} - I_{i_1}$. Summing over all $j = 1, \dots, t$ gives: {\allowdisplaybreaks \begin{align*} f(A, s, I) &= \sum_{j=1}^t f(A'_j, s-1, I) \geq \sum_{j=(s-1) h + s}^t f(A'_j, s-1, I) \\ &\geq \sum_{j=(s-1) (h+1)}^t \binom{ |A'_j|-(s-2) h}{s-1} \qquad \text{by inductive hypothesis} \\ &\geq \sum_{j=(s-1) (h+1)}^t \binom{ (j-1-h)-(s-2) h}{s-1} \qquad \text{as $|A'_j| \geq j - 1 - h$} \\ &= \sum_{j=s-1}^{t-1 - h (s - 1)} \binom{j}{s-1} = \binom{t-(s-1) h}{s} \end{align*} } and the induction is proved. \end{proof} \section{Acknowledgments} Thanks to Aravind Srinivasan and Navin Goyal, for helpful discussions and comments. Thanks to the anonymous reviewers, for many helpful comments and corrections.
\section{Introduction} A large class of ${\cal N}=1$ superconformal field theories arises from D3-branes transverse to Calabi--Yau singularities. The near-horizon geometry is AdS$_5\times X$ where $X$ is a Sasaki--Einstein space, that is, the base of a non-compact CY cone \cite{Klebanov:1998hh,Morrison:1998cs,Benvenuti:2004dy,Franco:2005sm}. Particular attention has been devoted to local orbifold and more general toric singularities, since the resulting quiver theories admit an elegant description in terms of brane tilings and dimers, which encode their low-energy dynamics and their moduli spaces \cite{Hanany:2005ve,Franco:2005rj, Hanany:2011iw, Hanany:2012hi, Hanany:2012vc}. Less is known about the inclusion of orientifold planes and flavour branes, since both typically break superconformal invariance (see \cite{Franco:2007ii} for previous work on unoriented brane tilings and dimers). On the other hand, configurations with orientifold planes and flavour branes provide us with concrete examples of semi-realistic models for particle physics \cite{Angelantonj:1996uy, Aldazabal:2000sa,Cvetic:2001tj} (see \cite{Blumenhagen:2006ci} for a review and further references). In the case where the brane system is located at the fixed point of an orientifold involution, the low-energy dynamics is governed by a local unoriented quiver theory whose quantum consistency relies on local tadpole cancellation and which admits a full-fledged world-sheet description. Here we mainly focus on the case of ${\mathbb C}^3/{\mathbb Z}_n$ singularities with fractional D3-branes, non-compact D7-branes\footnote{Brane tilings with flavour have been recently considered in \cite{Forcella:2008au,Franco:2012mm, Franco:2012wv}.} and $\Omega$-planes of general type. We will rederive the various consistency conditions, most notably the relation between twisted tadpoles and anomalies in the presence of flavour branes \cite{Aldazabal:1999nu, Bianchi:2000de,Uranga:2000xp}.
The gauge group will be a product of unitary, orthogonal and symplectic groups. Matter will appear in fundamental, symmetric or anti-symmetric representations. In particular, we show that the presence of flavour branes allows for a rich pattern of quiver theories, including new instances of ${\cal N}=1$ superconformal theories. We will also discuss D-brane instanton corrections of both kinds, `gauge' and `exotic', related to instantons sitting in an occupied or an empty node of the quiver, respectively \cite{Billo:2002hm,Bianchi:2007wy, Argurio:2007vqa, Bianchi:2009bg, Blumenhagen:2009qh, Bianchi:2009ij,Bianchi:2012ud}. Interestingly, we find superconformal theories in which instanton-induced superpotentials break conformal symmetry in a dynamical fashion. Finally we discuss aspects of the new ${\cal N}=1$ strong-weak coupling duality, proposed in \cite{GarciaEtxebarria:2012qx} as a remnant of ${\cal N}=4$ S-duality. In particular we will identify new candidate dual pairs and propose that the duality relation can be understood in purely geometric terms. The plan of the paper is as follows. In section \ref{sunoriented} we describe the spectrum of the quiver theories and present general formulas for the one-loop anomalies and tadpoles, entirely written in terms of the intersection numbers codifying the singularity (quiver diagram). In section \ref{sconformal} we show that the presence of flavour branes allows for new instances of ${\cal N}=1$ superconformal quiver gauge theories. Besides a number of truly superconformal quiver theories, we find an infinite class of theories where the breaking of conformal symmetry shows up only in the running of the coupling associated to an empty node. In section \ref{sinstanton} we study the effects of D-brane instantons of both kinds: `gauge' and `exotic'. We show in particular that conformal symmetry can be broken in a dynamical fashion via the generation of exotic superpotentials.
Finally, in section \ref{sdual}, we propose an infinite series of new candidates for ${\cal N}=1$ strong-weak pairs of dual quiver gauge theories. We collect in Appendix~A a self-contained discussion of the Klein-bottle, Annulus and Moebius-strip one-loop amplitudes, anomalies and tadpoles of ${\mathbb C}^3/{\mathbb Z}_n$ orientifold theories. \bigskip \noindent{\it Note added} \noindent While this paper was being written, a related interesting paper by S.~Franco and A.~Uranga \cite{Franco:2013ana} appeared that discusses flavour D7-branes in general bipartite field theories, yet without the inclusion of $\Omega$-planes. \section{IIB on ${\mathbb C}^3/{\mathbb Z}_n$ orientifolds } \label{sunoriented} We are interested in unoriented quiver theories living on D3-branes at ${\mathbb C}^3/\Gamma$ singularities, with $\Gamma$ a discrete abelian group. We start by considering the case $\Gamma={\mathbb Z}_n$, the main focus of our analysis. We denote by $X^I$, $I=1,2,3$, the complex coordinates of ${\mathbb C}^3$ and by $\Theta $ the generator of the ${\mathbb Z}_{n}$ orbifold group action \begin{equation} \Theta: \quad X^I \to w^{a_I} X^I, \qquad w=e^{\frac{2\pi {\rm i}}{n}}, \end{equation} with $a_I$ integers satisfying the supersymmetry-preserving Calabi--Yau condition \begin{equation} \sum_{I=1}^3 a_I=0 \quad ({\rm mod} \: n). \end{equation} The orbifold action has a single fixed point at the origin. Before the inclusion of D-branes and the $\Omega$-plane, the local physics around the singularity is described by an effective ${\cal N}=2$ supergravity theory with a certain number of hypermultiplets originating from twisted sectors where all three internal coordinates $X^I$ are twisted (see Appendix A for details). They parametrize the sizes and shapes of the compact exceptional cycles at the singularity \cite{Lust:2006zh,Reffert:2006du}.
Twisted sectors where some of the $X^I$ are untwisted preserve larger supersymmetry and contribute non-localised states that are irrelevant to the local physics. ${\cal N}=1$ theories are obtained by the quotient of the orbifold theory by an orientifold involution involving world-sheet parity $\Omega$ combined with a space-time reflection and some additional ${\mathbb Z}_2$ symmetry (e.g.\ $(-)^{F_L}$). The inclusion of $\Omega$-planes projects hypermultiplets localised at the singularity onto chiral multiplets describing the sizes of the compact exceptional cycles. Fixed points of the reflection define an orientifold plane inverting the orientations of both closed and open strings (to be described next). We denote the orientifold action generically by $\Omega_{\epsilon}$ with $\epsilon=(\epsilon_0,\epsilon_I)$ four signs satisfying $\prod_{I=1}^3 \epsilon_I=-1$. These specify the orientation and the charge of the $\Omega$-plane. In particular \begin{eqnarray} \Omega 3^{\pm} : \qquad && (\pm{\,-}{\,-}{\,-})\nonumber\\ \Omega 7^{\pm}: \qquad && (\mp{\,+}{\,+}{\,-}) \end{eqnarray} represent an $\Omega3$-plane, an $\Omega7$-plane along the (1~2)-plane, and so on. The $\epsilon_0=\pm$ sign specifies the Sp/SO projection, with $+$ conventionally taken for the Sp-projection on D-brane stacks coincident with a given $\Omega$-plane. In a dimer description of the orientifold \cite{Franco:2007ii}, these signs specify the charges of the orientifolds at the four fixed points of the quotiented dimer. \subsection{Quiver gauge theories} Next, we consider the inclusion of D-branes at the ${\mathbb C}^3/{\mathbb Z}_n$ orientifold singularity. Compatibly with the ${\cal N}=1$ supersymmetry preserved by the $\Omega$-planes, we consider the insertion of $N$ `fractional' D3-branes as well as $M$ `flavour' D7-branes passing through the singularity and extending along four non-compact directions inside ${\mathbb C}^3/{\mathbb Z}_n$.
The dynamics of D7-D7 open strings is irrelevant for the local physics. On the other hand, open strings connecting D3 and D7 branes are localized at the singularity and provide fundamental matter. For definiteness we will consider D7-branes wrapped along the complex planes $I=1,2$, {\it i.e.\ } along the non-compact divisor $X^3=0$. One should keep in mind that additional D7-branes wrapped along the non-compact divisor $X^1=0$ or $X^2=0$, or superpositions thereof, can be considered. To find the field content of the unoriented quiver theory at the singularity we proceed in two steps. Starting from the ${\cal N}=4$ theory living on the D3-branes in flat space-time, we first perform the orbifold projection to an oriented quiver theory with flavour and then perform the unoriented projection to an unoriented quiver theory with flavour. In the ${\cal N}=1$ language the ${\cal N}=4$ theory is given by a vector multiplet and three chiral multiplets, all in the adjoint of U$(N)$. In flat space-time D3-D7 open strings contribute $2M$ chiral multiplets ($M$ hypermultiplets) rotated by a U$(M)$ flavour group. We denote by $\bf V$ and $\bf C$ a vector and a chiral multiplet of ${\cal N}=1$ supersymmetry, respectively. One can then write the field content in the ${\cal N}=1$ language as \begin{equation} \label{n4} {\cal H}_{\rm flat} =({\bf V}+3 {\bf C})\, \fund ~ \overline{\fund} +{\bf C}\, ( \bar{\bf M} \times \fund + \mathbf M \times \overline{\fund}). \end{equation} Here and in the following we denote by $\fund$ ($\overline{\fund}$) the (anti)fundamental representation of a gauge group, and by its dimension ${\bf M}$ (${\bf \bar M}$) the (anti)fundamental representation of the flavour group. The orbifold group breaks the gauge and flavour groups down to $\prod_a {\rm U}(N_a)$ and $\prod_a {\rm U}(M_a)$ respectively. Here $N_a$ and $M_a$ denote the number of D3 and D7 branes transforming in the $a$-representation of ${\mathbb Z}_n$ with $a=0,1,\ldots,n-1$.
Explicitly, the action of the orbifold group generator on Chan-Paton indices breaks the fundamental representations of U($N$) and U($M$) according to (see Appendix for details) \begin{eqnarray} \label{zorb0} \Theta : \qquad \fund \to \oplus_a w^a \, \fund_a \,, \qquad~ {\bf M} \to \oplus_a w^a \, {\mathbf M_a }\,, \end{eqnarray} where we denote by $\fund_a$ and ${\bf M_a}$ the fundamental representations of $ {\rm U}(N_a)$ and $ {\rm U}(M_a)$ respectively. In addition, the spacetime action of $\Theta$ on the field components reads \begin{eqnarray} \label{zorb} \Theta: && \qquad {\bf V}\to {\bf V} ,\qquad {\bf C}^I\to w^{a_I } {\bf C}^I , \qquad {\bf C}^{\dot a} \to w^{ s } {\bf C}^{\dot a} , \end{eqnarray} where by ${\bf C}^I$ and ${\bf C}^{\dot a}$ ($\dot a=1,2$) we denote the chiral multiplets coming from D3-D3 and D3-D7 strings respectively. The former transforms in the fundamental of the $ {\rm SU}(3)$ rotation group of ${\mathbb C}^3$ while the latter as a chiral spinor of the rotation group of the ${\mathbb C}^2$ along the D7. A consistent orbifold group action on D3-D7 fields requires $s=\frac{a_1+a_2}{2}\in {\mathbb Z}$. For $n$ odd this is not a restriction since one can always redefine $a_I$ by adding $n$. Combining (\ref{zorb0}) and (\ref{zorb}) and keeping invariant components in (\ref{n4}) one finds the field content of the oriented quiver gauge theory with flavour \begin{equation} \label{orb} {\cal H}_{\rm orbifold}= \sum_{a=0}^{n-1} \left( {\bf V}\, \fund_a \, \overline{\fund}_a + {\bf C} \left[ \sum_{I=1}^3 \, (\fund_a ,\overline{\fund}_{a+ a_I}) + \mathbf M_a \, \overline{\fund}_{a+s}+ \bar {\bf M}_{a+s} \, {\fund}_{a} \right] \right). 
\end{equation} More precisely, states in the vector multiplets will be given by $N\times N$ block diagonal matrices, D3-D3 chiral multiplets $\Phi^I$ by $N\times N$ matrices with non-trivial components for the $N_a\times N_{a+a_I}$ blocks, D3-D7 chiral fields $Q$ by $N\times M$ matrices with $N_a\times M_{a+s}$ non-trivial blocks and D7-D3 fields $\tilde{Q}$ by $M\times N$ matrices with $M_a\times N_{a+s}$ non-trivial blocks. Here and henceforth all subscripts will always be understood mod $n$. The superpotential is cubic and follows directly from that in flat spacetime \begin{equation} W_{\rm pert}= {\rm Tr } \left(g\, \Phi^1[\Phi^2,\Phi^3] +h_1\, \Phi^3 Q\tilde{Q} +h_2\, Q\langle \Phi^3_{77} \rangle \tilde{Q} \right), \label{wpert} \end{equation} after replacing the matrices by their orbifold invariant block form. The last term, involving the vev of some of the non-dynamical D7-D7 fields, can be viewed as a mass term in the low energy effective action. The dimensionless constants $g,h_1,h_2$ measure the strength of the various interactions. In the absence of D7-branes, tadpole/anomaly cancellation requires $N_a=N_b$ for any $a$ and $b$, corresponding to $N$ copies of the `regular' representation of ${\mathbb Z}_n$. The resulting quiver theory is superconformal in the IR, where anomalous U$(1)$'s decouple or become global (baryonic) symmetries. The mesonic branch of the moduli space is ${\rm Symm}_N (\rm CY)$ \footnote{This is almost self-evident for ${\cal N}=1$, since $n$ `fractional' D3-branes combine into a `bulk'/regular brane that can wander in the CY. A proof for ${\cal N}>1$ remains elusive.}. The near-horizon geometry is ${\rm AdS}_5\times \mathrm S^5/{\mathbb Z}_n$. Including D7-branes generically spoils superconformal invariance but makes tadpole/anomaly cancellation easier to achieve, even without $\Omega$-planes.
In particular one can embed the (SUSY) standard model in a flavoured ${\mathbb Z}_3$ quiver \cite{Cicoli:2012vw, Cicoli:2013mpa, Cicoli:2013zha,Dolan:2011qu}. \begin{figure} \centering \raisebox{-0.5\height}{\includegraphics[scale=0.8]{Z3}} \hspace*{-1em} \raisebox{-0.5\height}{\includegraphics[scale=0.8]{Z4}} \hspace*{1em} \raisebox{-0.5\height}{\includegraphics[scale=0.8]{Z5}} \caption[the ${\mathbb C}^3/{\mathbb Z}_3$, ${\mathbb C}^3/{\mathbb Z}_4$ and ${\mathbb C}^3/{\mathbb Z}_5$ orientifold theories] {The ${\mathbb C}^3/{\mathbb Z}_3$, ${\mathbb C}^3/{\mathbb Z}_4$ and ${\mathbb C}^3/{\mathbb Z}_5$ orientifold theories for $a_I=(1,1,-2)$.} \label{figz3} \end{figure} Let us consider the unoriented projection that identifies ingoing open strings ending on a brane with the outgoing open strings starting from the image brane, transforming in the complex conjugate representation \begin{eqnarray} \label{omega3} \Omega_\epsilon: \qquad \fund_{a} \leftrightarrow \overline{\fund}_{n-a} \,, \qquad ~~~~~~~~~~~~ \mathbf M_{a} \leftrightarrow \bar{\bf M}_{n-a}\,. \end{eqnarray} Strings connecting a brane and its image are projected onto symmetric and antisymmetric representations according to the signs $(\epsilon_0,\epsilon_I)$ specifying the orientifold. Keeping invariant components from (\ref{orb}) under (\ref{omega3}) one finds \begin{eqnarray} \label{orient} {\cal H}_{\rm orientifold} &=& {\bf V} \left( \sum_{a=0,{\frac n 2}} \fund^2_{a,\epsilon_0} + \sum_{a=1}^{p}\, \fund_{a} \, \overline{\fund}_a \right) + {\bf C} \sum_{a=0}^{p} \left( \bar{\bf M}_{a+s } \,\fund_a +\mathbf M_a\, \overline{\fund}_{a+s} \right) +\nonumber\\ && +{\bf C} \sum_I \sum_{a=0}^{n-1} \left\{ \begin{array}{cc} \ft12 (\fund_a ,\overline{\fund}_{a+ a_I}) & a\neq -a-a_I \\ \fund_{a,-\epsilon_0\epsilon_I}^2 & a= -a-a_I \\ \end{array} \right. \end{eqnarray} with $p=[{n-1\over 2}]$ and $\fund^2_{a,\pm}$ denoting the symmetric and antisymmetric representations of the gauge group at node $a$.
In (\ref{orient}) the identifications $\fund_{a}= \overline{\fund}_{n-a}$ and $ \mathbf M_{a} =\bar{\bf M}_{n-a}$ are understood. In particular, one can check that bifundamentals in the last line always appear twice, leading to integer multiplicities as expected. Examples of unoriented quiver diagrams with flavour are displayed in Figures \ref{figz3}, \ref{figz6}, \ref{figz6prime} and \ref{Z5prime}. The spectrum for $n=3,4,5,6$ and $\epsilon_0 = -1$ is displayed in Table \ref{tab:orientifolds}.\\ For even order orbifold groups $n=2k$ it is also possible to choose another unoriented projection \begin{eqnarray} \label{omega3hat} \hat{\Omega}_\epsilon: \qquad \overline{\fund}_{a} \leftrightarrow \fund_{\frac{n}{2}-a} \,, \qquad ~~~~~~~~~~~~ \bar{\bf M}_{a} \leftrightarrow \mathbf M_{\frac{n}{2}-a}\,, \end{eqnarray} which corresponds to an orientifold identifying the node $0$ with the node $n/2$. In Table \ref{tab:orientifolds} we focus on the first new example, the ${\mathbb{Z}}_6$ orbifold with this second orientifold projection. The corresponding unoriented quiver diagram is in Figure \ref{figz6} on the right. The cases with $n$ a multiple of four are equivalent to the previous orientifold projection (\ref{omega3}). Note that symplectic groups require an even number of (fractional) branes, and this condition applies both to gauge and flavour groups. Since consistency requires $\Omega$-planes to act with opposite projections on D3 and D7 branes, one must for instance pay attention to the fact that a theory with an SO$(N_{0})$ gauge group must have even $M_{0}$, since the associated flavour group is Sp$(M_{0})$. When $n$ is even, the orbifold group also contains the spatial ${\mathbb Z}_{2}$ involution $\Theta^{n\over 2}$. As a result, $\Omega \Theta^{n\over 2}$ is also an orientifold involution leading to an equivalent orientifold group.
This leads to the following identifications \begin{equation}\begin{aligned}\label{omega identifications} \Omega3^{\pm} &= \Omega7^{\mp}\quad\text{($n$ even)},\cr \hat\Omega3^{\pm} &= \hat\Omega7^{\pm}\quad\text{($\tfrac{n}{2}$ odd)}. \end{aligned}\end{equation} \begin{figure} \centering \raisebox{-0.5\height}{\includegraphics[scale=0.8]{Z6}} \hspace*{2em} \raisebox{-0.5\height}{\includegraphics[scale=0.8]{Z6_horiz}} \caption[the $\mathbb C^3/\mathbb Z_6$ orientifold theories] {The ${\mathbb C}^3/{\mathbb Z}_6$ theory, $a_I=(1,1,-2)$, with two different orientifolds, defined in \eqref{omega3} and \eqref{omega3hat} respectively.} \label{figz6} \end{figure} \begin{table} \centering \begin{tabular}{ccl} \toprule & Gauge Group & Chiral multiplets \& anomalies \\ \midrule[\heavyrulewidth] \multirow{3}{*}{${\mathbb Z}_3$} & \multirow{3}{*}{$\mathrm{SO}(N_0)\times \mathrm{U}(N_1)$} & $3 ( \fund,\overline{\fund}) +\sum_I (\cdot,\fund^2_{\epsilon_I} )+{\bf \bar M_1} (\fund,\cdot) $\\ &&$ +{\bf M_0}\, (\cdot,\overline{\fund})+{\bf\bar{M}_1} (\cdot,\fund)$ \\[.2em] && $ M_0=\sum_{I=1}^3(N_1-N_0+4\epsilon_I)+M_1 $ \\ \midrule \multirow{4}{*}{${\mathbb Z}_4$} & \multirow{4}{*}{$\mathrm{SO}(N_0)\times \mathrm{U}(N_1)\times \mathrm{SO}(N_2)$} & $2( \fund,\overline{\fund},\cdot) + 2(\cdot, \fund,\fund) +(\fund,\cdot,\fund)+(\cdot,\fund^2_{\epsilon_3},\cdot)$ \\ && $ +(\cdot,\overline{\fund}^2_{\epsilon_3},\cdot) +{\bf \bar M_{1} }(\fund,\cdot,\cdot)+{\bf \bar M_{2} }(\cdot,\fund,\cdot)$ \\ && $+{\bf M_{0}}(\cdot,\overline{\fund},\cdot) + {\bf M_{1}} (\cdot,\cdot,{\fund})$ \\[.2em] && $M_0=-2N_0 +2N_2 +M_2$ \\ \midrule \multirow{5}{*}{${\mathbb Z}_5$} & \multirow{5}{*}{$\mathrm{SO}(N_0)\times \mathrm{U}(N_1)\times \mathrm{U}(N_2)$} & $2( \fund,\overline{\fund},\cdot) + 2(\cdot, \fund,\overline{\fund}) +(\fund,\cdot,\fund) +(\cdot,\overline{\fund},\overline{\fund}) +(\cdot,\fund^2_{\epsilon_3},\cdot)$ \\ && $+(\cdot,\cdot,\fund^2_{\epsilon_1}) +(\cdot,\cdot,\fund^2_{\epsilon_2}) 
+{\bf \bar M_{1} }(\fund,\cdot,\cdot)+{\bf \bar M_{2} }(\cdot,\fund,\cdot)$ \\ && $+{\bf M_{0}}(\cdot,\overline{\fund},\cdot) + {\bf M_{1}} (\cdot,\cdot,\overline{\fund}) +{\bf M_{2}}(\cdot,\cdot,\fund)$ \\[.2em] && $ M_0 =-2N_0 +N_1 +N_2+4\epsilon_3 +M_2$ \\ && $M_1 = N_0 -3N_1 +2N_2+4(\epsilon_1+\epsilon_2) +M_2 $ \\ \midrule \multirow{6}{*}{${\mathbb Z}_6$} && $2(\fund,\overline{\fund},\cdot,\cdot) + 2(\cdot,\fund,\overline{\fund},\cdot) +2(\cdot,\cdot,\fund,\fund) +(\fund,\cdot,\fund,\cdot)$ \\ && $+(\cdot,\overline{\fund},\cdot,\fund) +(\cdot,\fund^2_{\epsilon_3},\cdot,\cdot) +(\cdot,\cdot,\overline{\fund}^2_{\epsilon_3},\cdot)$ \\ &$\mathrm{SO}(N_0)\times \mathrm{U}(N_1)$ & $+{\bf M_{0}}(\cdot,\overline{\fund},\cdot,\cdot) +{\bf\bar M_{1}}(\fund,\cdot,\cdot,\cdot)+{\bf\bar M_{2} }(\cdot,\fund,\cdot,\cdot)$ \\ &$\quad{}\times \mathrm{U}(N_2)\times \mathrm{SO}(N_3)$ & $ + {\bf M_{1}} (\cdot,\cdot,\overline{\fund},\cdot) +{\bf M_{2}}(\cdot,\cdot,\cdot,\fund)+{\bf\bar M_3}(\cdot,\cdot,\fund,\cdot)$ \\[.2em] && $M_0=-2N_0 +N_1 +2N_2 -N_3 +M_2 +4\epsilon_3$ \\ && $ M_1 = N_0 -2N_1 -N_2 +2N_3 +M_3 -4\epsilon_3$ \\ \midrule \multirow{6}{*}{${{\mathbb Z}}_6,\,\hat\Omega$} & \multirow{6}{*}{$\mathrm{U}(N_0)\times \mathrm{U}(N_1)\times \mathrm{U}(N_5)$} & $2(\fund,\overline{\fund},\cdot)+2(\overline{\fund},{\cdot},{\fund})+(\overline{\fund},\overline{\fund},\cdot)+({\fund},\cdot,\fund)+ (\cdot,\fund,\overline{\fund}) $ \\ && $+2(\cdot,\fund^2_{\epsilon_i},\cdot)+2(\cdot,\cdot,\overline{\fund}^2_{\epsilon_i})+\bar{\bf M}_0(\cdot,\cdot,{\fund})+{\bf M}_0(\cdot,\overline{\fund},\cdot)$\\ &&$+\bar{\bf M}_1({\fund},\cdot,\cdot) +{\bf M}_1(\cdot,{\fund},\cdot)+\bar{\bf M}_5(\cdot,\cdot,\overline{\fund}) +{\bf M}_5(\overline{\fund},\cdot,\cdot) $ \\[.2em] && $M_1=3N_0-2N_1-N_5 -4(\epsilon_1+\epsilon_2)+M_0$ \\ && $M_5=3N_0-N_1-2N_5 -4(\epsilon_1+\epsilon_2)+M_0 $ \\ \bottomrule \end{tabular} \caption{Matter content for some ${\mathbb C}^3/{\mathbb Z}_n$ orientifold theories, 
with $a_I=(1,1,-2)$ and $\epsilon_0=-1$. The field content for $\Omega$ projections of Sp$(N)$ type (corresponding to $\epsilon_0=1$) follows by flipping all antisymmetric into symmetric representations and vice-versa, i.e. $\epsilon_I \to -\epsilon_I$. The constraints on $M_i$ come from the tadpole cancellation conditions.} \label{tab:orientifolds} \end{table} \subsection{Tadpoles and anomalies} For generic choices of $N_a$ and $M_a$, the unoriented quiver gauge theories obtained in the last section are chiral and therefore potentially anomalous. Sp and SO gauge groups are free of anomalies since, barring spinorial representations that are not realised in perturbative open string contexts, all representations are self-conjugate. For U$(N)$ gauge groups the anomaly is computed by the formula \begin{equation} \mathcal I_{U(N)}=\Delta n_F +\Delta n_A (N-4) +\Delta n_S (N+4), \end{equation} with $\Delta n_F$, $\Delta n_A$ and $\Delta n_S$ the differences between the number of chiral and anti-chiral ${\cal N}=1$ multiplets in the fundamental, symmetric and antisymmetric representations respectively. Higher rank (anti-)symmetric tensors are not realised in perturbative open string contexts. Taking into account the field content of the unoriented quiver gauge theory one finds \begin{equation} {\mathcal I}_a=\mathcal I_{{\rm U}(N_a)} = \sum_{b=0}^{n-1} ( I_{ab} \,N_b+ J_{ab}\, M_b) +4 \epsilon_0 K_{a} \label{tadfin} \end{equation} with \begin{eqnarray} I_{ab} &=& \sum_{I=1}^3 ( \delta_{a,b-a_I} -\delta_{a,b+a_I}) \nonumber\\ J_{ab} &=& \delta_{a,b-s} -\delta_{a,b+s }\nonumber\\ K_a &=& \sum_{I=1}^3 \epsilon_I ( \delta_{2a,a_I} -\delta_{2a,-a_I} ) \label{inters} \end{eqnarray} codifying the ``intersection numbers'' of the exceptional cycles at the singularity. 
More concretely, $I_{ab}$ counts the number of times D3 branes of type ``$a$'' and ``$b$'' intersect, $J_{ab}$ the intersections of D3$_a$ and D7$_b$ branes and $K_a$ the intersections of a D3$_a$ brane and its image D3$_a'$ under the orientifold action. This can be read off directly from the quiver diagram by counting the number of arrows connecting the various nodes, with plus or minus signs depending on the direction of the arrow. We notice that $I_{ab}$ and $J_{ab}$ are anti-symmetric matrices while $K_a=-K_{n-a}$. Explicitly for $a_I=(1,1,-2)$ the non-trivial components are \begin{align} {\mathbb Z}_3\quad & I_{a,a+1}=-I_{a+1,a}=3, \quad J_{a,a+1}=-J_{a+1,a}=1, \quad K_2=-K_1=\sum_I\epsilon_I \,, \\[.5em] {\mathbb Z}_{n\neq 3}\quad & I_{a,a+1}=-I_{a+1,a}=2, \quad I_{a+2,a}=-I_{a,a+2}=1, \quad J_{a,a+1}=-J_{a+1,a}=1, \cr & K_{ {n+ 1\over 2} }\hspace*{1pt}=-K_{n-1\over 2}= (\epsilon_1+\epsilon_2), \quad K_{ {n- 2\over 2} }=-K_{ {n+2\over 2} }=-K_1=K_{n-1}= \epsilon_3. \end{align} For the even $n$ cases with orientifold projection $\hat{\Omega}_{\epsilon}$, defined in (\ref{omega3hat}), the previous expression for $K_a$ is replaced by \begin{equation} \hat{K}_a = \sum_{I=1}^3 \epsilon_I (\delta_{2a,a_I+\frac{n}{2}} -\delta_{2a,\frac{n}{2}-a_I} ), \end{equation} with the same meaning of intersections between a D3$_a$ brane and its image. In the following sections we will mainly focus on the cases with the $\Omega_{\epsilon}$ projection defined in (\ref{omega3}).\\ We remark that equation (\ref{tadfin}) can be thought of as the components of the vector equation \begin{equation} N_b\, \pi_{D3b}+M_b\, \pi_{D7b} +4 \epsilon_0\, \pi_{O}=0 \label{tadvec} \end{equation} with $\pi_{D3b}$, $\pi_{D7b}$, $\pi_{O}$ the cycles wrapped by the ${\rm D3}_b$, ${\rm D7}_b$ and $\Omega$-planes respectively.
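As an aside, the explicit ${\mathbb Z}_3$ data above and the anomaly formula (\ref{tadfin}) are straightforward to check numerically. The following sketch is illustrative only: the sign choice $\epsilon_0=-1$, $\epsilon_I=(-1,-1,1)$ and the ranks are arbitrary, and the variable names are ours. It builds $I_{ab}$, $J_{ab}$, $K_a$ from (\ref{inters}) and verifies that the ${\mathbb Z}_3$ tadpole condition quoted in Table \ref{tab:orientifolds} makes the anomaly at the U$(N_1)$ node vanish.

```python
n, aI, s = 3, (1, 1, -2), 1           # C^3/Z_3 with a_I = (1,1,-2) and s = (a_1+a_2)/2
eps0, eps = -1, (-1, -1, 1)           # one admissible sign choice: prod(eps_I) = -1

def d(x):                             # Kronecker delta mod n
    return 1 if x % n == 0 else 0

I = [[sum(d(b - a - t3) - d(b - a + t3) for t3 in aI) for b in range(n)] for a in range(n)]
J = [[d(b - a - s) - d(b - a + s) for b in range(n)] for a in range(n)]
K = [sum(e * (d(2 * a - t3) - d(2 * a + t3)) for e, t3 in zip(eps, aI)) for a in range(n)]

# values quoted in the text: I_{a,a+1} = 3, J_{a,a+1} = 1, K_2 = -K_1 = sum_I eps_I
assert I[0][1] == 3 and J[0][1] == 1 and K[2] == -K[1] == sum(eps)

# anomaly (tadfin) at the U(N_1) node, with the orientifold identifications
# N_2 = N_1, M_2 = M_1 and M_0 fixed as in the Z_3 row of Table 1
N0, N1, M1 = 5, 3, 13
M0 = sum(N1 - N0 + 4 * e for e in eps) + M1
N, M = [N0, N1, N1], [M0, M1, M1]
anomaly = sum(I[1][b] * N[b] + J[1][b] * M[b] for b in range(n)) + 4 * eps0 * K[1]
assert anomaly == 0
```

Analogous checks can be run node by node for the other ${\mathbb Z}_n$ rows of the table.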
Equation (\ref{tadfin}) follows from (\ref{tadvec}) after multiplying it by $\pi_a$ and identifying \begin{equation} I_{ab}=\pi_{D3a} \circ \pi_{D3b},\qquad J_{ab}=\pi_{D3a} \circ \pi_{D7b},\qquad K_{a}=\pi_{D3a} \circ \pi_{O} \label{prodos}. \end{equation} We would like to stress that the above `intersection numbers' are completely encoded in the various contributions to the one-loop Klein bottle, Annulus and Moebius strip amplitudes. The interested reader can find all the details in the Appendix. As already observed a long time ago \cite{Bianchi:2000de,Uranga:2000xp}, chiral anomalies are associated to tadpoles of twisted RR fields localized at the singularity and thus belonging to sectors with non-vanishing Witten index, {\it i.e.\ } giving rise to an ${\cal N} =1$ (chiral) spectrum. Tadpoles of RR fields belonging to the untwisted sector or to twisted sectors with vanishing Witten index, {\it i.e.\ } giving rise to an ${\cal N} =4,2$ spectrum, do not contribute to chiral anomalies in $D=4$ and can thus be discarded in the low-energy dynamics of the local unoriented quiver gauge theory. Additional constraints arise when one looks for a global embedding of these models. We will not address these important issues here since we are focussing on the local models. For recent work see \cite{Cicoli:2013mpa}. \subsection{ ${\mathbb C}^3/\prod_i {\mathbb Z}_{n_i}$-singularities} Although in explicit examples we have mostly focused on the ${\mathbb Z}_n$ case with $a_I=(1,1,-2)$, formulae in the previous section apply to the general case $a_I\neq (1,1,-2)$ and to the case of type IIB orientifolds on ${\mathbb C}^3/\prod_i {\mathbb Z}_{n_i}$. The singularity is now codified in the choice of the vectors $\vec a_{I} =\{ a^{(i)}_{I } \}$ satisfying \begin{equation} \sum_{I=1}^3 a^{(i)}_{I }=0 \quad ({\rm mod} \: n_i) \end{equation} for each $i$ separately.
The spectrum, anomalies and tadpoles are given by the same formulae as before with intersection numbers $I_{\vec a \vec b}$, $J_{\vec a \vec b}$, $K_{\vec a}$, where we define $\vec a = (a^{(1)},a^{(2)},\ldots)\in{\mathbb Z}_{n_1}\times{\mathbb Z}_{n_2}\times\ldots$ The resulting intersection matrices are the tensor product of those of each single ${\mathbb Z}_{n_i}$ factor. As an example, let us consider ${\mathbb C}^3/{\mathbb Z}_2\times {\mathbb Z}_3$ with the following actions on ${\mathbb C}^3$: \begin{equation} a^{(1)}_I =(1,-1,0) , \qquad a^{(2)}_I =(0,-1,1). \end{equation} The nodes of the quiver are labeled by $\vec a=(a^{(1)},a^{(2)})$ with $a^{(1)}=0,1$ and $a^{(2)}=0,1,2$, so we have six nodes. One can then see that this orbifold action is precisely identical to ${\mathbb C}^3/{\mathbb Z}_6$ with $a_I =(1,3,2)$. At the risk of being pedantic, there is a single fixed point, the origin, in ${\mathbb C}^3/\prod_i {\mathbb Z}_{n_i}$ and closed string (chiral) amplitudes with ${\cal N} =4,2$ supersymmetry do not contribute to tadpoles, since the corresponding (un)twisted fields are not localised at the singularity but de-localised along non-compact cycles. \section{Conformal theories}\label{sec:conformal theories} \label{sconformal} Although generically the presence of flavour D7-branes and $\Omega$-planes tends to spoil superconformal invariance, judicious choices of the numbers and types of D7's may lead to $\mathcal N=1$ superconformal quiver gauge theories, thus opening up a completely new class of gauge theories of this kind that are amenable to a reliable description in terms of open strings. The prototype is the class of $\mathcal N=2$ superconformal gauge theories arising from $N$ D3's in the presence of 4 D7's and an $\Omega 7^-$ plane \cite{Sen:1996vd,Banks:1996nj,Aharony:1998xz}. The resulting gauge group is Sp$(2N)$ and the flavour symmetry is SO$(8)$, acting on the 8 half hypermultiplets in the fundamental representation.
In addition there is a flavour singlet hypermultiplet transforming in the anti-symmetric representation. The one-loop $\beta$-function of the Sp$(2N)$ gauge theory vanishes and, since for a theory with ${\cal N}=2$ supersymmetry no anomalous dimensions are generated for hypermultiplets, one can safely argue that the theory is (super)conformal. Here we consider ${\cal N}=1$ theories obtained as orbifold projections of ${\cal N}=2$ theories, so it is reasonable to believe that, again, no anomalous dimensions are generated for the fundamental fields. Indeed, superpotential interactions are always cubic so chiral fields come with their naive dimension one, as long as vev's of the non-dynamical D7-D7 fields appearing in (\ref{wpert}) vanish. To look for a superconformal theory one can then scan for models with vanishing one-loop $\beta$-function\footnote{We remark that these arguments can be easily adapted even to non-supersymmetric models of the class \cite{Angelantonj:2000kh,Bianchi:2000vb} where each individual sector preserves some supersymmetry and therefore no tadpoles for the dilaton and other NS-NS fields are generated.}. With this proviso, the one-loop $\beta$-function for a general ${\cal N}=1$ gauge theory is \begin{equation} \beta=\ft12(3 \ell({\bf Adj}) - \sum_{C} \ell({\bf R_C}) ) , \end{equation} with the sum running over the chiral multiplets and $\ell({\bf R})$ denoting the index of the representation ${\bf R}$. In our conventions \begin{equation} \ell(\fund\, \overline{\fund} )=2N, \qquad \ell(\fund^2_\epsilon)=\ell(\overline{\fund}^2_\epsilon)=N+2 \epsilon, \qquad \ell(\fund)=\ell(\overline{\fund})=1.
\label{lls} \end{equation} For the quiver gauge theories under consideration one finds \begin{eqnarray} \label{beta dimf delta} {\beta_a} &=& \left\{\begin{array}{ll} 3 N_a+\epsilon_0 K^+_a-\frac{1}{2}\left(I^+_{ab}N_b+J^+_{ab}M_b\right) & \quad{\rm (SU)} \\[.2em] \frac{3}{2}N_a+3\epsilon_0+\frac12\epsilon_0 K^+_a-\frac{1}{4}\left(I^+_{ab}N_b+J^+_{ab}M_b\right)&\quad{\rm (SO/Sp)} \end{array}\right. \end{eqnarray} in terms of \begin{equation} \label{inters symm} I^+_{ab} = \sum_{I=1}^3 ( \delta_{a,b-a_I} +\delta_{a,b+a_I}), \qquad J^+_{ab} = \delta_{a,b-s} +\delta_{a,b+s },\qquad K^+_a = \sum_{I=1}^3 \epsilon_I ( \delta_{2a,a_I} +\delta_{2a,-a_I} ), \end{equation} counting the number of arrows (independently of their orientations) in the quiver diagram connecting D3-D3, D3-D7 and D3-D3$'$ branes respectively. Using (\ref{beta dimf delta}) it is indeed straightforward to impose the vanishing of the one-loop $\beta$-function coefficients, obviously together with the tadpole cancellation conditions. \begin{figure} \centering \raisebox{-0.5\height}{\includegraphics[scale=0.8]{Z6_prime}} \hspace*{2em} \raisebox{-0.5\height}{\includegraphics[scale=0.8]{Z6_prime_horiz}} \caption[the ${\mathbb C}^3/{\mathbb Z}_6'$ theory with two different orientifolds] {The ${\mathbb C}^3/{\mathbb Z}_6'$ theory ($a_I=(1,3,2)$) with the two different orientifolds $\Omega$ and $\hat{\Omega}$.} \label{figz6prime} \end{figure} \begin{table}[h!]
\centering \begin{tabular}{llll} \toprule & $\Omega$ plane & Conformal theories & Flavour branes \\ \midrule[\heavyrulewidth] ${\mathbb Z}_3$ & $\Omega7^-$ & Sp$(N)\times{\rm U}(N+1)$ & $M_0=2,\ M_1=3$ \\[.5em] ${\mathbb Z}_4$ & $\Omega3^+/\Omega7^-$ & Sp$(N)^2\times{\rm U}(N+3-p)$ & $M_0=M_2=4-2p,\ M_1=2p$,~ $p=0,1,2$ \\[.5em] ${\mathbb Z}_5'$ & $\Omega7^-$ & Sp$(N)\times{\rm U}(N+1)^2$ & $M_0=0,\ M_1=1,\ M_2=3$ \\[.5em] ${\mathbb Z}_6'$ & $\Omega3^+/\Omega7^-$ & Sp$(N)^2\times{\rm U}(N+3)^2$ & $M_0=M_3=4,\ M_1=M_2=0$ \\[.5em] ${{\mathbb Z}}_6'$ & $\hat{\Omega}3^-/\hat{\Omega}7^-$ & U$(N)\times{\rm U}(N+1)^2$ & $M_0=4,\ M_1=M_5=0$ \\ \bottomrule \end{tabular} \caption{Examples of superconformal unoriented quiver gauge theories.} \label{tab:full cft} \end{table} We distinguish between two classes of solutions: theories where $\beta_a=0$ for all nodes $a$, and theories which have non-conformal but empty nodes ($\beta_a\neq0$ for $ N_a=0$). For ${\mathbb C}^3/{\mathbb Z}_n$ models with $n=3,\ldots6$ and $\Omega3$ or $\Omega7$ planes, we have found seven new conformal models, whose properties are summarized in Table \ref{tab:full cft}. The ${\mathbb C}^3/{\mathbb Z}_5'$ case corresponds to $a_I = (1,3,1)$, so that the structure of the flavour representations is changed since in this case $s=(a_1+a_2)/2 = 2$. Its quiver diagram is depicted in Figure~\ref{Z5prime}. If we choose one node of the quiver to be empty, $N_a=0$, and relax the associated constraint $\beta_a=0$ for conformal invariance, it turns out to be much easier to find new conformal models. For brevity, we only provide a few examples in Table \ref{tab:cft with empty node} for the ${\mathbb Z}_{3}$ orbifold with $\Omega3$ and $\Omega7$ planes. One can easily find many more models for other orbifolds and/or allowing for more than one non-conformal empty node. 
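The scan for conformal models amounts to solving $\beta_a=0$ subject to tadpole cancellation. As a purely illustrative sketch (our own code, not part of the original analysis; the function name and argument conventions are ours), the SU-node line of the one-loop beta formula can be evaluated numerically from the quiver data, with the orientifold contribution $\epsilon_0 K^+_a$ passed in as precomputed data:

```python
# Sketch (our conventions): SU-node one-loop beta coefficients for a C^3/Z_n
# quiver.  a_I: orbifold weights; s: D3-D7 shift; eps0*K[a]: orientifold
# contribution; N[a], M[a]: fractional D3/D7 multiplicities per node.
def beta_su(n, a_I, s, eps0, K, N, M):
    betas = []
    for a in range(n):
        IN = sum(N[(a - w) % n] + N[(a + w) % n] for w in a_I)  # I^+_{ab} N_b
        JM = M[(a - s) % n] + M[(a + s) % n]                    # J^+_{ab} M_b
        betas.append(3 * N[a] + eps0 * K[a] - 0.5 * (IN + JM))
    return betas

# Regular branes only (constant N_a, no flavour or orientifold data):
# every node has beta_a = 0, i.e. the theory is conformal.
print(beta_su(5, (1, 3, 1), 2, 0, [0] * 5, [4] * 5, [0] * 5))
```

With orientifold and flavour data switched on, scanning over $N_a$, $M_a$ subject to the tadpole conditions reproduces searches of the type whose outcome is summarized in Table \ref{tab:full cft}.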
Looking at Tables \ref{tab:full cft} and \ref{tab:cft with empty node}, we see that all solutions require the presence of (fractional) D7 flavour branes to compensate for the superconformal breaking $\Omega$-plane contribution. It is particularly interesting to note that all models in Table~\ref{tab:full cft} can be seen as $\mathcal N=1$ truncations of the $\mathcal N=2$ Sp$(N)$ superconformal theories discussed in \cite{Sen:1996vd,Banks:1996nj,Aharony:1998xz}. Indeed, not only do all these models have an $\Omega7^-$ plane, but also the total number of D7 branes is always 4, reproducing the (local) setup of the F-theory solution of \cite{Sen:1996vd}.\footnote{One must take into account that D7 branes on top of the orientifold are counted twice.} \begin{table}[h!] \centering \begin{tabular}{llll} \toprule &$\Omega$ plane & Conformal theories & Flavour branes \\ \midrule[\heavyrulewidth] ${\mathbb Z}_3$ & $\Omega3^+$ & Sp($N$) & $M_0=18,\ M_1=3N+6$ \\[.3em] &$\Omega7^-$ & Sp$(N)$ & $M_0=2,\ M_1=3N+6$ \\[.3em] &$\Omega3^-$ & SO$(0)\times{\rm U}(N)$ & $M_0=3(N-1),\ M_1=9\quad (N\ \rm odd)$ \\[.3em] &$\Omega7^-$ & Sp$(0)\times {\rm U}(N)$ & $M_0=3N-1,\ M_1=3$ \\ \bottomrule \end{tabular} \caption{Conformal theories found for the ${\mathbb Z}_{3}$ orbifold with one non-conformal empty node.} \label{tab:cft with empty node} \end{table} It would be interesting to study whether these superconformal unoriented quiver theories admit a holographic dual. One would expect a gravity dual on AdS$_5\times X$ with $X$ a deformation of the Einstein space $S^5/{\mathbb Z}_n$ accounting for the presence of the fractional and flavour branes (see \cite{Karch:2002sh,Ouyang:2003df} for previous works in this direction). In this context, tadpole conditions translate into constraints on the volumes of the various non-trivial cycles (faces of the dimer) of $X$. 
One can take the complementary attitude and exploit the world-sheet description of the brane system to study the `holographic' dual gravity solution of the RG flow triggered by the disk `dilaton' tadpoles along the lines of \cite{Leigh:1998hj, Angelantonj:2000kh, Bianchi:2000vb, Bertolini:2000dk,Billo:2011uc,Fucito:2011kb,Billo:2012st}. \begin{figure}[t] \centering \raisebox{-0.5\height}{\includegraphics[scale=0.8]{Z5_new}} \caption{the ${\mathbb C}^3/{\mathbb Z}_5$ theory with $a_I=(1,3,1)$.} \label{Z5prime} \label{figz5SCFT} \end{figure} \section{Instanton induced superpotentials} \label{sinstanton} We now turn our attention to non-perturbative effects generated by D-brane instantons in unoriented quiver theories with flavour. As by now customary, we start from the oriented case and then consider the effect of the unoriented projection and the inclusion of flavour branes. In flat space-time as well as in AdS (near horizon geometry), D-instantons behave as instantons for the ${\cal N} =4 $ SYM on a stack of D3-branes \cite{Billo:2002hm, Bianchi:1998nk, Witten:1998xy}. In the quiver gauge theories, just like fractional D3-branes correspond to D5 and D7-branes wrapping vanishing cycles at the singularity, instantons can be realized in terms of fractional D(-1)-branes, i.e. Euclidean D1 and D3-branes wrapping the same set of vanishing cycles. The orientifold projection restricts these choices to configurations with zero net D5-brane charge. Unoriented D-brane instantons have been considered for their crucial role in generating phenomenologically interesting couplings in the superpotential \cite{Ibanez:2006da,Blumenhagen:2006xt,Argurio:2007vqa,Bianchi:2007wy}. For a recent review see \cite{Bianchi:2009ij, Blumenhagen:2009qh, Bianchi:2012ud} and references therein. Lately the analysis has been extended to (fluxed) E3-branes in F-theory \cite{Cvetic:2010rq, Grimm:2011dj, Bianchi:2011qh, Bianchi:2012kt}. 
In (unoriented) ${\cal N} =1 $ quiver theories, instanton induced superpotentials $W$ are computed by means of the instanton partition function \begin{equation} S_{W}=\prod_{a=1}^{\left[{n\over 2}+1\right]} \Lambda_a^{k_a \beta_a} \, \int d\mathfrak{M} \,e^{S_{\rm inst}}=\int d^4 x \,d^2\theta\, W(\Phi), \end{equation} with $\mathfrak{M}$ the ADHM moduli space realized in terms of open strings with at least one end on the D-instanton ($ d^4x\, d^2\theta$ is the center of mass super-volume form). $\Lambda_a$, $\beta_a$, $k_a$ are the scales, beta functions and instanton numbers associated to the gauge group at node $a$ and $S_{\rm inst}$ is the instanton moduli space action. There are two distinct classes of instantons: gauge and exotic instantons. Gauge instantons are associated to a single D(-1) brane (and its image) occupying a non-empty node of the quiver ({\it i.e.\ } wrapping the same vanishing cycle as a physical stack of branes) and generate Affleck--Dine--Seiberg like superpotentials. Exotic instantons arise from a single D(-1) brane occupying an empty Sp node and generate polynomial superpotential terms\footnote{The effect of E3 instantons associated to flavour nodes vanishes in the strict non-compact limit but may resurface when the local unoriented quiver is embedded in a consistent global context.}. \subsection{Gauge Instantons} Let us first consider `gauge' instantons. The instanton fermionic moduli space can be split into two classes according to whether the zero mode corresponds to the gaugino (vector multiplet) or to matter fermions (chiral multiplets). We denote the total number of them for $k=1$ by $n_{\lambda_0}$ and $n_{\psi_0}$ respectively. Index theorems yield \begin{equation} n_{\lambda_0}= \ell({\bf Adj}), \qquad n_{\psi_0} = \sum_{C} \ell({\bf R_C}) , \end{equation} with the sum running over the chiral multiplets and $\ell({\bf R})$ given in (\ref{lls}). 
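As a quick consistency check of this counting (a sketch in our own notation; the function names are ours), one can verify numerically that the one-instanton condition $n_{\lambda_0}-n_{\psi_0}=2$, combined with $\beta=\frac12(3 n_{\lambda_0}-n_{\psi_0})$, singles out $\beta=\ell({\bf Adj})+1$ for each classical gauge group:

```python
# Sketch (our own check): with n_lambda = l(Adj) gaugino zero modes and
# n_psi matter zero modes, an ADS-like superpotential needs n_lambda - n_psi = 2;
# then beta = (3*n_lambda - n_psi)/2 equals l(Adj) + 1.
def ads_beta(l_adj):
    n_lambda = l_adj        # index theorem: gaugino modes count the adjoint index
    n_psi = n_lambda - 2    # all modes lifted except the two superspace thetas
    return (3 * n_lambda - n_psi) // 2

def l_adj(group, N):
    # adjoint indices in our conventions: U -> 2N, Sp -> N+2, SO -> N-2
    return {"U": 2 * N, "Sp": N + 2, "SO": N - 2}[group]

for N in range(2, 8):
    assert ads_beta(l_adj("U", N)) == 2 * N + 1   # U(N)
    assert ads_beta(l_adj("Sp", N)) == N + 3      # Sp(N)
    assert ads_beta(l_adj("SO", N)) == N - 1      # SO(N)
```

This reproduces, case by case, the values quoted in the condition on the beta function derived below.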
The beta function of the gauge theory is given by \begin{eqnarray} \beta=\ft12(3 n_{\lambda_0} - n_{\psi_0} ). \label{betann} \end{eqnarray} A single instanton can generate a superpotential \`a la Affleck-Dine-Seiberg if $ n_{\lambda_0}-n_{\psi_0} =2$, like in SQCD with $N_f = N_c -1$. In this case all fermionic zero modes, except for the two $\theta$'s parametrizing the superspace coordinates, can be soaked up by bilinear terms in the fermion zero-modes arising from Yukawa couplings. Plugging this condition into (\ref{betann}), one concludes that a superpotential can be generated if the beta function of the gauge theory satisfies the condition \begin{equation} \beta=\ell({\bf Adj})+1= \left\{ \begin{array}{cc} 2N+1 & {\rm U}(N) \\ N+3& {\rm Sp}(N) \\ N-1 & {\rm SO}(N) \end{array} \right.\label{betainst} \end{equation} The generated superpotential can be written in the form \begin{equation} W_{\rm gauge}= {\Lambda^{\beta} \over \Phi^{\beta-3}} , \end{equation} where $\Phi^{\beta-3}$ is some gauge and flavour invariant composite operator, whose `refined' expression in terms of the chiral matter super-fields takes into account the exact number of zero-modes of each kind, {\it e.g.\ } for ${\mathbb Z}_3$ with no flavour branes and gauge group SU$(4)$, $\beta = 9$ and $\Phi^{\beta -3} = {\rm det}_{3\times 3}\, \epsilon_{u_1..u_4} \phi^{u_1 u_2}_{I} \phi^{u_3 u_4}_{J} $ \cite{Bianchi:2007wy}. It is now easy to scan Table \ref{tab:orientifolds} for unoriented quiver gauge theories with flavour admitting nodes such that the beta function satisfies (\ref{betainst}). In these cases a superpotential term can be induced by gauge instantons. In Table \ref{thz345inst} we collect the quiver gauge theories exhibiting superpotentials of this type.\\ \begin{table}[h!!] 
\centering \begin{tabular}{clll} \toprule & Gauge theories & Flavour branes \\ \midrule[\heavyrulewidth] \multirow{1}{*}{${\mathbb Z}_3$} & ${\rm Sp}(2p)_*$ & $M_0=4(3-p),\ M_1=2p$ & $p=0,\ldots,3$ \\[.2em] & ${\rm Sp}(2p)_*\times {\rm U}(1)$ & $M_0=4(3-p),\ M_1=2p-3$ & $p=2,3$ \\[.2em] & ${\rm Sp}(6)_*\times {\rm U}(2)$ & $M_0=M_1=0$ & \\[.2em] & ${\rm SO}(0)\times {\rm U}(4)_*$ & $M_0=M_1=0$ & \\ \midrule \multirow{1}{*}{${\mathbb Z}_4$} &Sx$(N_0)_*$$\times$U($N_1$)$\times$Sx$(N_2)$ & $M_1=N_0$$-$$N_2$$-$$2N_1$$-$$2(1$$-$$\epsilon_0)$ & $N_0\ge2(1$-$\epsilon_0)$+$2N_1$+$N_2$ \\ & & $M_2=M_0+2N_0-2N_2$ & \\[.3em] &Sx$(N_0)$$\times$U($N_1$)$\times$Sx$(N_2)_*$ & $M_1=N_2$$-$$N_0$$-$$2N_1$$-$$2(1$$-$$\epsilon_0)$ \\ & & $M_0=M_2+2N_2-2N_0$ & $N_2\ge 2(1$-$\epsilon_0)$+$2N_1$+$N_0$ \\ \bottomrule \end{tabular} \caption{Chiral gauge theories at the $\mathbb{Z}_{n}$, $n=3,4$, orientifold singularities admitting instanton contributions. The node where the instanton sits is indicated by a $*$. We use the symbol Sx$\,\equiv\,$SO, Sp for $\epsilon_0=-1,\ +1$ respectively. Recall that for $\epsilon_0=-1$ $M_0$ and $M_2$ must be even.} \label{thz345inst} \end{table} For the $\mathbb{Z}_3$ and $\mathbb{Z}_5$ orbifolds the number of solutions is finite. In particular for the $\mathbb{Z}_3$ case these solutions extend the gauge theories ${\rm Sp}(6)_*\times {\rm U}(2)$ and ${\rm U}(4)_*$ found in \cite{Bianchi:2007wy} without D7 branes. The $*$ indicates the gauge group where the instanton sits. We conclude this section by remarking that instantons may generate different dynamical effects. Indeed for gauge theories with $\beta=\ell({\bf Adj})$ one finds that, like for QCD with $N_f=N_c$, the moduli space can get deformed at the scale $\Lambda$ (see for instance \cite{Argurio:2007qk,Bianchi:2009bg}). On the other hand, there may be other non-perturbative effects, that may be related to instantons after Higgsing, leading to dynamical superpotentials. 
In particular $\beta=\ell({\bf Adj})-1$ is a necessary condition for S-confinement \cite{Csaki:1996sm,Csaki:1996zb,Grinstein:1997zv,Grinstein:1998bu}, like in QCD with $N_f = N_c + 1$. For example, for the ${\mathbb Z}_4$ quiver one can find gauge theories with: \begin{itemize} \item{${\rm Sp}(2p)_*\times {\rm U}(0)\times {\rm Sp}(2p)_* $: Two types of instanton superpotentials are generated at each of the two non-empty gauge theory nodes with scales $\Lambda_0$ and $\Lambda_2$.} \item{${\rm Sp}(2p+2)_*\times {\rm U}(N_1)\times {\rm Sp}(2p) $ with $N_1=0,1$: A superpotential is generated by a gauge instanton at node 0 while the theory S-confines at node 2.} \end{itemize} \subsection{Exotic Instantons} Exotic instantons originate from a single D(-1) occupying an empty Sp node and carrying an O(1) symmetry. For this choice the instanton moduli space contains (besides the two universal fermionic zero modes and the four positions) only fermionic zero modes coming from D(-1)-D3 or D(-1)-D7 strings. Assuming that the D(-1) sits in node 0, the number of fermionic zero modes is summarized in the following table \begin{center} \begin{tabular}{cccc} type & modes & $U(N_b)$ & ${\rm dim} \mathfrak{M}_F$\\ {\rm D}(-1)-{\rm D}(-1)&$ x_\mu,\ \theta_\alpha$&$\bullet$ & 2\\ {\rm D}(-1)-{\rm D}3&$\mu^I$&$\fund_{a_I}$& $\sum_I N_{a_I}$\\ {\rm D}(-1)-{\rm D}7&$\mu'$&$ {\bf M}_{s}\times \bullet $ & $M_{s}$ \end{tabular} \end{center} For a D(-1) instanton at node $\tfrac{n}{2}$, a similar spectrum is found with $a_I \to \tfrac{n}{2}+a_I$ and $s\to s+\tfrac{n}{2}$. A non-perturbative superpotential arises whenever it is possible to saturate the integration over the charged moduli $\mu^I,\ \mu'$ and again the superpotential can be written in the form \begin{equation} W_{\rm exotic}=\Lambda^\beta \, \Phi^{3-\beta}, \end{equation} with $\beta\leq 3$ the putative beta function of the Sp(0) node \begin{equation} \beta=3-\ft12 (\sum_I N_{a_I}+ M_s). 
\end{equation} \subsection*{Examples} \begin{figure} \centering \raisebox{-0.5\height}{\includegraphics[scale=.8]{Z4exotic1}} \hspace*{3em} \raisebox{-0.5\height}{\includegraphics[scale=.8]{Z3exotic}} \caption[the ${\mathbb C}^3/{\mathbb Z}_4$ and ${\mathbb C}^3/{\mathbb Z}_3$ orientifold theories] {the ${\mathbb C}^3/{\mathbb Z}_4$ U($3-p$) (left) and ${\mathbb C}^3/{\mathbb Z}_3$ U($N$) (right) models admitting exotic instanton contributions.} \label{fig:exoticinstanton} \end{figure} In the following, we discuss two examples of instanton induced superpotentials in ${\cal N}=1$ superconformal unoriented quiver gauge theories. As a first example, consider the U($3-p$) conformal theory that one can obtain from the second row of Table \ref{tab:full cft} setting $N=0$, $p=0,1,2$, in the ${\mathbb C}^3/{\mathbb Z}_4$ orbifold. Since nodes 0 and 2 are both empty, there are two exotic (one-)instanton contributions that add up to give the full non-perturbative superpotential. The field content of the theory as well as the charged modes of the D(-1) are displayed in Figure \ref{fig:exoticinstanton} for the instanton contribution coming from node 0. The couplings of matter fields with the instanton modes read: \begin{equation} S_{\rm charged} \sim \mu^i S \mu_i + \mu' {\cal M} \mu', \end{equation} where ${\cal M}$ is the expectation value of a non-dynamical D7-D7 field, transforming in the antisymmetric representation of the SU($2p$) (flavour) group at node 1. Notice that the non-dynamical fields ${\cal M}$ and $\tilde {\cal M}$ do not produce any effect at the perturbative level because the nodes 0 and 2 are empty. Hence, vev's of these fields are perturbatively allowed without breaking conformal invariance. For $p=0$ both $\mathcal M$ and $\mu'$ are absent. The contribution to the effective action takes the schematic form: \begin{equation} S_{\rm n.p.} \sim \int d^4x\ d^2\theta \int d^{6-2p}\mu\ d^{2p}\mu'\ e^{-S_{\rm charged}}. 
\end{equation} There is only one way to saturate all fermion modes, which is to bring down a term $\mathcal M^p\,S^{3-p}$. Taking into account the analogous one-instanton correction arising from node 2, one finds: \begin{equation}\label{exotic superpot U2} W_{\rm exotic} \sim {\cal M}^p S^{3-p}+\tilde {\cal M}^p\tilde S^{3-p}. \end{equation} For $p=0$ formula (\ref{exotic superpot U2}) produces Yukawa couplings preserving conformal invariance. For $p=1,2$, conformal invariance is dynamically broken at the scales set by ${\cal M}$ and $\tilde {\cal M}$. For $p=2$ a Polonyi-like term is generated, inducing supersymmetry breaking too. It is important to note that the absence of a $\Lambda$ mass scale in (\ref{exotic superpot U2}) reflects the vanishing of the putative one-loop beta function coefficients of the two empty nodes: $\beta_0=\beta_2=0$. As a second example, we consider a conformal gauge theory in a ${\mathbb Z}_3$ quiver with an empty non-conformal node. The model is displayed in the last row of Table \ref{tab:cft with empty node}. It admits an exotic instanton contribution arising from the empty `Sp(0)' node. The matter content and D(-1) modes are again depicted in Figure \ref{fig:exoticinstanton} and the couplings with charged modes are as follows (we separate $\mu^I = (\mu^i,\ \tilde\mu)$ with $i=1,2$; $A_i,\ S$ sit in the antisymmetric and symmetric representations respectively): \begin{equation} S_{\rm charged} \sim \mu^i\mu_i S + \tilde\mu \mu^i A_i + \tilde\mu\mu' Q + \mu'\mu' \mathcal M. \end{equation} Similarly to the previous example, the mass scale ${\cal M}$ is the expectation value of the D7-D7 field transforming in an antisymmetric representation of the SU(3) flavour group associated with D7 branes in node 1. When $N$ is even or $N=1$ there is no contribution to the superpotential. 
For odd $N\ge3$ one finds that there are two ways to saturate all fermion zero-modes, leading to \begin{equation}\label{exotic superpot Z3} W_{\rm exotic} \sim \Lambda_0^{\frac32(1-N)}\left(Q^3 A^{N-3}S^{(N+3)/2}+ \mathcal M\, Q\, A^{N-1}S^{(N+1)/2} \right). \end{equation} We notice that, unlike in the previous example, the exotic superpotential that breaks conformal symmetry is generated even when the vev of the D7-D7 field ${\cal M}$ is set to zero. The presence of an overall scale $\Lambda_0$ in (\ref{exotic superpot Z3}), responsible for the breaking of conformal symmetry, reflects the fact that in this example the putative one-loop beta function of the empty node is non-zero: $\beta_0=\frac32(1-N)$. Another interesting possibility is to have both gauge and exotic instanton contributions. Looking at Table \ref{thz345inst}, we can see for instance that in the ${\mathbb Z}_4$, $\epsilon_0=+1$ models it is possible to set $N_2=0$ and obtain theories that exhibit one-instanton superpotential contributions both from a gauge instanton in the ${\rm Sp}(N_0)$ node and an exotic instanton at the Sp(0) node. \subsection{Scales and closed string moduli} As remarked above the scales $\Lambda$'s entering the superpotentials carry an explicit dependence on the closed string moduli $T_a$ describing the complex K\"ahler deformations of the singularity. Their imaginary parts parametrize Fayet--Iliopoulos terms for the gauge theory at the corresponding node of the quiver. (Twisted) complex structure moduli $U_\alpha$, if present, are associated to 3-form fluxes and generate mass deformations of the quiver gauge theory\footnote{We are currently analysing this issue \cite{wipwithImp}.}. The explicit form of the tree-level (disk) gauge kinetic functions $f_a(T_b)$ and thus of the RG invariant scales $\Lambda_a$ depends on the node where the `fractional' brane sits \begin{equation} \Lambda_a=M e^{2\pi f_a(T_b) }, \end{equation} where $M$ is some (holomorphic) mass-scale. 
The fields $\mathrm{Im} T_b$ transform under U$(1)_a\subset $ U$(N_a)$ according to \begin{equation} \delta_a \mathrm{Im} T_b = N_a ( w^{a b} - w^{(n-a)b}) \alpha_a\quad \rm (no \ sum) , \label{axionic} \end{equation} under the gauge transformation \begin{equation} \delta_a A_\mu^b = \delta_a^b\ \partial_\mu \alpha_a\quad \rm (no \ sum). \end{equation} The axionic shifts (\ref{axionic}) compensate for the transformation properties of the chiral fields entering the superpotential: {\it i.e.\ } the shift symmetry of the RR-axion ${\rm Im}T_a$ is gauged by the `anomalous' U$(1)$ vector boson $A_\mu^b$. As a result of the linear dependence of $f_a$ on $T_b$ \begin{equation} f_a = \sum_{b,c=0}^{n-1} I_{a,b}\, w^{b c}\, T_c \,, \end{equation} the gauging of the axionic shifts induces the following transformations of the holomorphic gauge kinetic functions \begin{equation} \delta_a f_b = N_a (I_{a,b}-I_{n-a,b}) \alpha_a\,. \label{df} \end{equation} For the first few $n$ one finds \begin{eqnarray}\label{ex} n=3: && \delta_1 f_1=3 N_1\alpha_1,\nonumber\\[.6em] n=4: && \delta_1 f_1= - N_1 \alpha_1 ,\\[.2em] \nonumber n=5: && \delta_a f_b= \left( \begin{array}{cc} N_1 \alpha_1 & N_1 \alpha_1 \\ -3 N_2 \alpha_2 & 2 N_2 \alpha_2 \\ \end{array} \right) \label{ffs} \end{eqnarray} and so on. \section{S-dual quiver gauge theories } \label{sdual} In a recent paper \cite{GarciaEtxebarria:2012qx}, a new duality relating ${\cal N}=1$ unoriented quiver theories that is based on S-duality of the parent ${\cal N}=4$ unoriented theory has been proposed. Indeed S-duality of type IIB theory can be used to relate the dynamics of different unoriented projections of (quiver) gauge theories living on D3-branes. In flat space-time the U$(N)$ ${\cal N}=4$ SYM governing the low-energy dynamics of a stack of D3-branes is self-dual. The same is true for the SO$(2N)$ ${\cal N}=4$ SYM governing the low-energy dynamics of a stack of D3-branes on top of a `standard' $\Omega 3^-$ plane. 
If one however considers `exotic' $\Omega 3$-planes carrying non-trivial (but quantized \cite{Bianchi:1991eu, Bianchi:1997rf, Witten:1997bs}) 2-form fluxes\footnote{Recall $\Pi_2(S^5/{\mathbb Z}_2) = {\mathbb Z}_2$ \cite{Witten:1998xy}.} the situation changes. $\Omega 3^+$ carrying $(B_2,C_2)=(1/2,0)$ and giving rise to Sp$(2N)$ is conjectured to be S-dual to $\Omega 3^-$ carrying $(B_2,C_2)=(0,1/2)$ and giving rise to SO$(2N+1)$. Finally $\Omega 3^+$ carrying $(B_2,C_2)=(1/2,1/2)$ and giving rise to Sp$(2N)$ is self-dual \cite{Witten:1998xy}. The last two are usually referred to as $\tilde\Omega 3^{\pm}$. In \cite{GarciaEtxebarria:2012qx} the duality between SO and Sp orientifolds has been extended to ${\cal N}=1$ settings including the $\mathbb C^3/\mathbb Z_3$ unoriented quiver as well as non-orbifold toric singularities. The duality proposal has been substantiated by a precise matching not only of the gauge-invariant degrees of freedom and the anomalies of global symmetries but also of dynamical effects taking place on the two sides of the duality. Here we extend the analysis to the whole $\mathbb C^3/\mathbb Z_n$ series and propose an infinite sequence of new SO/Sp dual pairs of unoriented quiver gauge theories without flavour. We support the duality by matching the spectra of gauge invariant operators on the two sides of the duality. In particular, we show that the $\Omega 3^+$-plane can be replaced by an $\Omega 3^-$-plane plus a certain number of fractional D3-branes determined by a simple geometric relation. We restrict ourselves to the case with no D7 branes (nor $\Omega$7-planes). Adding D7-branes would naively spoil the duality since D7-branes transform non-trivially under S-duality\footnote{It would be interesting to explore similar duality relations in presence of S-duality invariant configurations of mutually non-local 7-branes.}. For concreteness we take ${\mathbb C}^3/{\mathbb Z}_n$ with $n$ odd. 
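The pattern of discrete fluxes and S-duals just recalled can be conveniently summarized as data (a sketch in our own notation; the labels are ours, the content is the standard dictionary of \cite{Witten:1998xy}):

```python
# O3-plane variants: NSNS/RR 2-form flux (B2, C2), D3-brane gauge group, S-dual.
# "O3~-" and "O3~+" denote the tilded planes in the text.
O3 = {
    "O3-":  {"flux": (0.0, 0.0), "group": "SO(2N)",   "dual": "O3-"},
    "O3+":  {"flux": (0.5, 0.0), "group": "Sp(2N)",   "dual": "O3~-"},
    "O3~-": {"flux": (0.0, 0.5), "group": "SO(2N+1)", "dual": "O3+"},
    "O3~+": {"flux": (0.5, 0.5), "group": "Sp(2N)",   "dual": "O3~+"},
}

# S-duality acts as an involution on this set and exchanges the two fluxes.
for name, data in O3.items():
    assert O3[data["dual"]]["dual"] == name
    b2, c2 = data["flux"]
    assert O3[data["dual"]]["flux"] == (c2, b2)
```

In particular the only non-trivial pair is $\Omega 3^+ \leftrightarrow \tilde\Omega 3^-$, which exchanges Sp$(2N)$ and SO$(2N+1)$; this is the pair exploited in the quiver duality below.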
We look for SO/Sp orientifold quiver dual pairs in the presence of a single $\Omega 3$ plane, i.e. $\epsilon_I=({-\,}{-\,}{-})$. We denote by ${\bf N}=\{ N_a \}$ the number of fractional branes in the Sp gauge theory and by ${\bf \tilde N} =\{ \tilde N_a \}$ that in the SO gauge theory. Cancellation of anomalies in the two gauge theories requires \begin{eqnarray} I\cdot {\bf N}+4\,K=I\cdot {\bf \tilde N}-4\, K=0. \label{anomnof} \end{eqnarray} The two equations are solved by \begin{eqnarray} {\bf N} &=&c_+ \, {\bf v} -4\, I_\perp^{-1} \cdot K\nonumber\\ {\bf \tilde N} &=& c_- \, {\bf v}+ 4\, I_\perp^{-1} \cdot K, \label{solNn} \end{eqnarray} with ${\bf v}=(1,1,...)$ and $c_\pm$ arbitrary. By $I_\perp^{-1}$ we denote the inverse of $I$ in the space orthogonal to ${\bf v}$. We notice that terms proportional to ${\bf v}$ in (\ref{solNn}) do not contribute to (\ref{anomnof}) since $I\cdot {\bf v}=0$, or in other words anomaly equations are not modified by the addition of regular branes. To fix $c_\pm$ we recall that before the orbifolding $\Omega 3^+= \Omega 3^-+1\, D3$ and so the total number of fractional branes in the SO gauge theory should exceed by one that in the Sp theory, i.e. $ {\bf v} \cdot ({\bf \tilde N}-{\bf N})=1$, which translates into $c_- - c_+=\tfrac{1}{n}$. In addition, one should require that ${\bf N}$ and ${\bf \tilde N}$ are made of integers. The solution is parametrized by an integer $p$ and can be written as \begin{equation} c_{\pm} =p+\tfrac{1}{2} \mp \tfrac{1}{2n}. \end{equation} One can easily check that ${\bf N}$ and ${\bf \tilde N}$ given by (\ref{solNn}) are always integers and positive for $p$ large enough. The resulting gauge theories for $n=3,5,7$ are displayed in Table \ref{tablesospD3}. The case $n=3$ reproduces the series of dual pairs studied in \cite{GarciaEtxebarria:2012qx}. \begin{table}[h!] \centering \begin{tabular}{cll} \toprule & Gauge theories & d.o.f. 
\\ \midrule[\heavyrulewidth] \multirow{2}{*}{${\mathbb Z}_3$} & ${\rm Sp}(2p+4)\times {\rm U}(2p)$ & $\nu_I = 9p + 6p^2$, \\ & ${\rm SO}(2p-1)\times {\rm U}(2p+3)$ & $\nu_0 =10+ 9p + 6p^2$ \\ \midrule \multirow{2}{*}{${\mathbb Z}_5$} & ${\rm Sp}(2p+2)\times {\rm U}(2p+2)\times {\rm U}(2p-2)$ & $\nu_{1,2} = 1+ 5p + 10p^2$, \\ & ${\rm SO}(2p-1)\times {\rm U}(2p-1)\times {\rm U}(2p+3)$ & $\nu_{3} = \nu_{1,2}-6,\ \nu_0 = \nu_{1,2}+10$ \\ \midrule \multirow{2}{*}{${\mathbb Z}_7$} & ${\rm Sp}(2p+8)\times {\rm U}(2p+4)^2\times {\rm U}(2p)$ & $\nu_{1,2} = 48+49p + 14p^2$, \\ & ${\rm SO}(2p-1)\times {\rm U}(2p+3)^2\times {\rm U}(2p+7)$ & $\nu_{3} = \nu_{1,2}-6,\ \nu_0 = \nu_{1,2}+20$ \\ \bottomrule \end{tabular} \caption{Examples of Sp/SO dual models, with $N_a\ge1$.} \label{tablesospD3} \end{table} In the rest of this section we collect some evidence for the duality between SO/Sp quiver gauge theories with fractional brane content (\ref{solNn}) on a general ${\mathbb C}^3/{\mathbb Z}_n$ orientifold singularity. The main check relies on the comparison of the spectra of the two gauge theories. To this aim, we organize the states of the two gauge theories according to their charges with respect to the global U$(1)^3$ symmetries. U$(1)^3$ is the Cartan of the SO$(6)$ R-symmetry of the parent ${\cal N}=4$ theory and it is therefore part of the global symmetry of any orientifold theory. There are three types of chiral multiplets ${\bf C}^I$, each one charged with respect to one U$(1)_I\in$ U$(1)^3$. We denote by $\nu_I$ the number of degrees of freedom of each. Gauge invariant degrees of freedom are built out of traces involving these fields. This leads to $\sum_I \nu_I-{\rm dim} G$ mesonic/baryonic degrees of freedom. In the Sp gauge theory one finds \begin{eqnarray} C^I : && \nu_I= \sum_a \left( N_a N_{a+a_I} +\epsilon N_a \delta_{a+a_I,-a}\right), \nonumber\\ {\rm dim} G: && \nu_0=-\left( \ft12 \sum_a N_a^2 +\ft12 \epsilon \, N_0 \right), \end{eqnarray} with $\epsilon=+$. 
The spectrum of the SO gauge theory on the other hand is given by the same formulas with $\epsilon=-$ and $N_a\to \tilde N_a$. The difference of degrees of freedom between the two gauge theories is \begin{eqnarray} \Delta \nu_I &=& \sum_a (N_a+\tilde N_a) \left( \tilde N_{a+a_I}- N_{a+a_I} - \delta_{a+a_I,-a}\right) =0, \nonumber\\ \Delta \nu_0 &=&- \ft12 \sum_a (\tilde N_a+ N_a) (\tilde N_a- N_a) +\ft12 \, (N_0+\tilde N_0)=0 , \label{deltas} \end{eqnarray} where on the right hand side we used (\ref{solNn}) to write $\tilde N_a+ N_a=(2p+1) {\bf v}$, $({\bf \tilde N}-{\bf N})\cdot {\bf v}=1$. We notice that the matching between the degrees of freedom $\nu_I$ automatically ensures the matching of anomalies involving the U$(1)^3$ symmetries and therefore is a strong support of the claimed duality relation between the SO and Sp gauge theories. In the last column of Table \ref{tablesospD3} we display the number of degrees of freedom for the first few candidates of dual pairs. We remark that the relation between the SO and Sp gauge theories can be translated into a purely geometric identification between the cycles wrapped by the ${{\Omega 3}}^+$ and ${\Omega 3}^-$ planes in the two theories. Indeed, it corresponds to the identification \begin{equation} {\Omega 3}^+={\Omega 3}^- + (\tilde{N}_a-N_a) D3_a , \label{opom} \end{equation} with $(\tilde{N}_a-N_a)$ such that the cycles wrapped by the two ${\Omega 3}$-planes coincide: \begin{equation} 4\, \pi_{\Omega 3}=-4\, \pi_{\Omega 3} + (\tilde{N}_a-{N}_a) \, \pi_{D3a} . \end{equation} Multiplying (\ref{opom}) and using (\ref{prodos}) one finds agreement with (\ref{anomnof}). In addition, one requires that $\sum_a (\tilde{N}_a- N_a)=1$ to match the duality in the parent theory in flat spacetime. 
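The matching (\ref{deltas}) can be checked explicitly node by node. The sketch below (our own illustrative code, with conventions chosen to reproduce the $\nu$'s of Table \ref{tablesospD3}: an overall factor $\frac12$ implements the identification of the nodes $a$ and $n-a$, and the overall sign of $\nu_0$ is dropped) does this for the ${\mathbb Z}_3$ pair:

```python
# Sketch: degrees-of-freedom matching for the Z3 Sp/SO candidate dual pair
# Sp(2p+4) x U(2p)  <->  SO(2p-1) x U(2p+3); nodes 0,1,2 with 2 the image of 1.
def nu(N, eps, n=3, a_I=(1, 1, 1)):
    # nu_I per chiral type; the 1/2 accounts for the identified conjugate nodes
    nu_I = [sum(N[a] * N[(a + w) % n] +
                (eps * N[a] if (2 * a + w) % n == 0 else 0)  # delta_{a+a_I,-a}
                for a in range(n)) // 2
            for w in a_I]
    nu_0 = (sum(Na * Na for Na in N) + eps * N[0]) // 2      # dim G
    return nu_I, nu_0

for p in range(1, 6):
    N_sp = [2 * p + 4, 2 * p, 2 * p]          # Sp side,  eps = +1
    N_so = [2 * p - 1, 2 * p + 3, 2 * p + 3]  # SO side,  eps = -1
    assert sum(N_so) - sum(N_sp) == 1                         # one extra D3
    assert len(set(a + b for a, b in zip(N_sp, N_so))) == 1   # N + Ntilde ~ v
    assert nu(N_sp, +1) == nu(N_so, -1)                       # Delta nu = 0
```

For $p=1$ both sides give $\nu_I=15$ and $\nu_0=25$, in agreement with the polynomials $\nu_I = 9p+6p^2$ and $\nu_0 = 10+9p+6p^2$ of Table \ref{tablesospD3}.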
Although the matching of the dof's, including their non-anomalous flavour charges, seems to be only a necessary condition, we believe this is equivalent to matching all triangle anomalies as carefully done in \cite{GarciaEtxebarria:2012qx} for the ${\mathbb Z}_3$ case. Indeed, matching of $\nu_I$ and $\nu_0$ implies that the number of gauge invariant operators matches on the two sides of the duality. In particular chiral operators that contribute to the superconformal index should match \cite{Romelsberger:2005eg, Romelsberger:2007ec, Dolan:2008qi}. We observe that the matching of gauge invariant degrees of freedom can be traced back to the analogous matching before the orbifold projection. If we denote by $\Phi^I$ and $\tilde \Phi^I$ the 3 chiral multiplets in the adjoint of Sp$(2N)$ and SO$(2N+1)$ (before the orbifold projection) one can see that the number of singlets one can build at each dimension in the two gauge theories matches perfectly. This implies the correspondence \begin{equation} {\rm Tr} (\Phi^{I_1} \Phi^{I_2} \ldots \Phi^{I_k} ) \leftrightarrow {\rm Tr} (\tilde\Phi^{I_1} \tilde\Phi^{I_2} \ldots \tilde\Phi^{I_k} ) \label{trphi}. \end{equation} In the quiver gauge theory, gauge invariant operators are given again by (\ref{trphi}) with $\Phi^I$ and $\tilde \Phi^I$ now given by block matrices satisfying the orbifold invariant conditions. Moreover the basis of gauge invariant operators has $\sum_I \nu_I-\nu_0$ elements for the two dual gauge theories. Further dynamical checks of the duality, including but not limited to a detailed comparison of the superconformal indices, may help identifying the class of unoriented quiver dual pairs. \section{Conclusions} We have discussed unoriented quiver theories with flavour that govern the low-energy dynamics of D3-branes at orbifold singularities in the presence of (exotic) $\Omega$ planes and D7-branes wrapping non-compact cycles. 
The presence of a net number of `fractional' branes, as compared to the case without $\Omega$ planes and D7-branes, makes the theories intrinsically chiral, generically non superconformal and thus phenomenologically more promising than theories with only `regular' D3-branes. In the recent past oriented quivers for D3-branes at toric singularities that admit a dimer description have received a lot of attention. Although orientifolds of dimers have already been analysed in \cite{Franco:2007ii}, here we have tackled the problem from a world-sheet perspective in the restricted context of non-compact ${\mathbb Z}_n$ orientifolds. We have rederived the relation between tadpoles and anomalies, taking into account the flavour branes, identified the locally consistent embeddings of the D7-branes and the various allowed unoriented projections. We have then recognized the conditions for restoring superconformal invariance in the presence of D7 and $\Omega$ planes, focusing on two classes of models with $\beta_a =0$ at all nodes and with $\beta_a =0$ at all but one `empty' node. We have relied on previous analyses of `dilaton' tadpoles and RG flows, in order to argue that no anomalous dimensions are expected for the matter fields that would require consideration of the NSVZ `exact' $\beta$ function rather than our simple-minded one-loop $\beta$ function. We have also classified quiver theories that receive non-perturbative corrections to the superpotential from unoriented D-brane instantons of the `gauge' or `exotic' kinds. In particular we have found a theory where both kinds of corrections are present and conformal theories where the conformal symmetry is broken dynamically via the generation of exotic superpotentials. We have finally turned our attention to the recently proposed ${\cal N} =1 $ duality, which is a remnant of the ${\cal N}=4$ S-duality between Sp$(2N)$ and SO$(2N+1)$ gauge groups. 
We have identified candidate dual pairs and given further evidence for the validity of the duality in the orbifold context. It would be interesting to study the effect of 3-form NSNS and RR fluxes on the gauge theory dynamics. In particular, this can result in moduli stabilisation and topology changes. Indeed one can show that some orbifold singularities with vector-like matter can be connected to more general non-orbifold singularities. Work on this issue is in progress \cite{wipwithImp}. Another issue is related to the global embedding of the unoriented quivers with flavour. The consistent gauging of the D7-brane flavour symmetry requires the absence of chiral anomaly, and thus of global tadpoles, as well as other subtler, K-theoretic, issues that have been recently addressed for instance in \cite{Cicoli:2013mpa} for the case of two oriented quiver theories with flavour on ${\mathbb Z}_3$ singularities exchanged by an orientifold projection. \vspace{1cm} \centerline{\large\bf Acknowledgments} We would like to acknowledge A.~Amariti, C.~Bachas, S.~Cremonesi, E.~Dudas, S.~Franco, F.~Fucito, A.~Hanany, M.~Petropoulos, G.~Pradisi, R-K.~Seong, Y.~Stanev, G.~Travaglini for interesting discussions and above all L.~Martucci for collaboration at an early stage of this project. This work is partially supported by the ERC Advanced Grant n.226455 Superfields and by the Italian MIUR-PRIN contract 2009-KHZKRX. The work of D.~R.~P. is also supported by the Padova University Project CPDA119349. G.~I. thanks the University of Amsterdam for hospitality during the early stages of this project. While this work was being carried out, M.~B. has been visiting Imperial College (IC) in London, Queen Mary University of London (QMUL), Ecole Normale Superieure (ENS) Paris and Ecole Polytechnique (EPoly) Paris. M.~B.
would like to thank A.~Hanany, G.~Travaglini, C.~Bachas, M.~Petropoulos, J.~Iliopoulos and their colleagues for their very kind hospitality, for creating a stimulating environment and to acknowledge partial support through Internal EPSRC Funding (IC), Leverhulme Visiting Professorship (QMUL), Visiting Professorship (ENS, EPoly) and Institut Philippe Meier (ENS). \vspace{0.5cm} \begin{appendix} \section{String partition function} In this appendix we review the computation of the string partition function for a system of unoriented closed and open strings on ${\mathbb C}^3/{\mathbb Z}_{n} $. See \cite{Angelantonj:2002ct} and references therein for a general review on open and unoriented strings. \subsection{Torus amplitude} \label{torusappendix} Closed string states organize into $g$-twisted sectors defined by the boundary conditions \begin{equation} X^I (\sigma+2\pi ,\tau)=w^{g a_I} X^I (\sigma,\tau) , \end{equation} with similar conditions for fermions. The torus partition function can be written as $\int {d^2 \tau\over \tau_2^2}$ times\footnote{For simplicity we normalize to one the infinite volumes of each complex plane.} \begin{equation} \label{torus} {\cal T}=\sum_{g,h=0}^{n-1} {\cal T}_{g,h}={1\over n} \sum_{g,h=0}^{n-1} |\rho[^g_h] (\tau)|^2 \Lambda[^g_h] (\tau,\bar \tau), \end{equation} with \begin{equation} \Lambda[^g_h]= \int d p\, e^{-\pi \tau_2 p^2} \langle p| \Theta^h |p\rangle \end{equation} the contribution of zero-mode momenta (along the planes invariant under $\Theta^g$) and \begin{equation} \label{rhodef} \rho[^g_h](\tau) ={\rm Tr}_{\rm g-twisted} \left[ \left({1+(-)^F\over 2}\right) \Theta^h \, q^{L_0-a}\,\right] \ \ \ \ \ \ \ \ q=e^{2\pi i\tau} \end{equation} the oscillator part of the $h$-projected chiral partition function in the $g$-twisted sector.
Explicitly \begin{eqnarray} \label{rhodef2} \rho[^g_h](\tau) &=& \ft12\sum_{a,b=0}^1 (-)^{ a +b+ ab} \prod_{I=0}^3 {\vartheta\left[^{a+{2g a_I\over n} }_{b+{2h a_I \over n} }\right]\over \vartheta\left[^{1+{2g a_I\over n} }_{1+{2h a_I\over n} }\right] } \, \prod_{I\in {\cal C}_{g,h}}2 \sin(\tfrac{\pi h a_I}{n}) \nonumber\\ &=& -\left( {\vartheta_1\over \eta^3} \right)^{\cal N} \, \prod_{I\in {\cal C}_{g,h}}2\sin(\tfrac{\pi h a_I}{n}). \end{eqnarray} The product in the second line runs over all $I$'s such that $ g a_I \in n {\mathbb Z}$ but $ h a_I \notin n{\mathbb Z}$. We denote this set by ${\cal C}_{g,h}$. The terms $2\sin(\tfrac{\pi h a_I}{n})$ cancel the zero mode part of the corresponding theta functions in the denominator. ${\cal N}$ counts the number of complex planes invariant under both $\Theta^g$ and $\Theta^h$, i.e. those planes $I$ satisfying $ g a_I, h a_I \in n {\mathbb Z} $. We notice that ${\cal N} $ is also the number of supersymmetries preserved by the $\Theta^g,\Theta^h$ twists. In the second line of (\ref{rhodef2}) we used the Jacobi identity to perform the spin structure sum. Notice that only fermionic zero modes contribute to (\ref{rhodef2}), since $\vartheta_1 \approx (1-1) \eta^3$. Here and below indices $a,b$ are understood modulo $n$, e.g. $\delta_{a,0}$ means $a\in n {\mathbb Z}$, and so on.\\ On the other hand the bosonic zero mode contribution is given by \begin{eqnarray} \Lambda[^g_h] &=& {1\over \tau_2^{\cal N} } \prod_{I \in{ \cal C}_{g,h} } \int d^2 p_I \, e^{-\pi \tau_2 p_I^2} \langle p_I| \Theta^h |p_I\rangle= {1\over \tau_2^{\cal N} } \prod_{I \in{ \cal C}_{g,h} } \int d^2 p_I \delta( p_I-w^{h a_I} p_I)\nonumber\\ &=& {1\over \tau_2^{\cal N} } \prod_{I \in{ \cal C}_{g,h} } \, {1\over |1-w^{h a_I} |^{2 } }.
\label{lamb1} \end{eqnarray} Plugging (\ref{rhodef2}) and (\ref{lamb1}) into (\ref{torus}) one finds that bosonic and fermionic zero mode contributions cancel against each other and one is left with the result \begin{equation} {\cal T}_{g,h}={1 \over \tau_2^{\cal N} } \left| {\vartheta_1 \over \eta^{3}} \right|^{2{\cal N}} . \label{tgh} \end{equation} Remarkably, the result (\ref{tgh}) depends only on the number ${\cal N}$ of supersymmetries preserved by the twists and not on the details of $g$, $h$. The torus amplitude can then be written in the simple form \begin{equation} {\cal T}={1\over n} \sum_{{\cal N}=1,2,4} \sum_{[^g_h]\in {\rm Orb}_{\cal N} } {1 \over \tau_2^{{\cal N} }} \label{torus2} \, \left| {\vartheta_1 \over \eta^{3}} \right|^{2{\cal N}}, \end{equation} where we denote by ${\rm Orb}_{\cal N}$ the set of twists $g,h$ leaving invariant ${\cal N}$ out of the four complex coordinates. We are interested in the physics localized around the singularity. States localized around the singularity come from ${\cal N}=1$ sectors where all three coordinates $X^I$ are twisted, i.e. $g a_I\notin n{\mathbb Z}$ for any $I$. States in ${\cal N}=2,4$ sectors are non-normalizable and will be discarded in the following. We notice also that the result (\ref{torus2}) is modular invariant, since ${\vartheta_1 \over \eta^{3}}$ is invariant under $T$ and transforms as \begin{equation} S: \qquad {\vartheta_1 \over \eta^{3}} \to {1\over \tau } {\vartheta_1 \over \eta^{3}} \end{equation} under the S-modular transformation. \subsection*{Helicity traces} In this paper we deal with partition functions of supersymmetric theories that are always zero due to the matching between the number of bosonic and fermionic degrees of freedom in these theories. In particular, the partition function in sectors with ${\cal N}$ supersymmetries vanishes as $(1-1)^{\cal N}$, indicating that multiplets in these theories contain $2^{{\cal N}-1}$ bosonic and $2^{{\cal N}-1}$ fermionic states.
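The sector data entering the discussion above, namely the set ${\cal C}_{g,h}$, its zero-mode sine factors and the number ${\cal N}$ of supersymmetries preserved by a $(g,h)$ twist, follow mechanically from the twist vector $a_I$. A minimal sketch (the ${\mathbb Z}_3$ twist $a_I=(0,1,1,1)$, with $I=0$ labelling the 4d plane, is an illustrative choice, not fixed by this appendix):

```python
from math import sin, pi

# Twist data for C^3/Z_n: a = (a_0, a_1, a_2, a_3), with a_0 = 0 labelling
# the 4d spacetime plane.  Illustrative choice: Z_3 with a_I = (0, 1, 1, 1).
n, a = 3, (0, 1, 1, 1)

def C_set(g, h):
    # C_{g,h}: planes untwisted by Theta^g but twisted by Theta^h
    return [I for I, aI in enumerate(a)
            if (g * aI) % n == 0 and (h * aI) % n != 0]

def n_susy(g, h):
    # number of planes invariant under both twists; this equals the number
    # of supersymmetries N preserved by the (g, h) sector
    return sum(1 for aI in a if (g * aI) % n == 0 and (h * aI) % n == 0)

def sine_product(g, h):
    # zero-mode factor prod_{I in C_{g,h}} 2 sin(pi h a_I / n)
    out = 1.0
    for I in C_set(g, h):
        out *= 2 * sin(pi * h * a[I] / n)
    return out
```

For ${\mathbb Z}_3$ every sector with $(g,h)\neq(0,0)$ has ${\cal N}=1$, consistent with all twisted states being localized at the singularity.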
It is often convenient to resolve this degeneracy by counting states weighted by their helicity on a plane. In this way one can distinguish between vector and chiral multiplets in ${\cal N}=1$ or vector and hypermultiplets in ${\cal N}=2$ theories. To define the helicity trace of string states one can simply replace the chiral partition functions $\rho_{gh}$ by the character-valued function \begin{equation} \rho_{gh}(x)= 2\sin \pi x {\vartheta_1(\tfrac{x}{2})\over \vartheta_1(x)} \prod_{I=1}^3 \left( \left(2\sin \tfrac{ \pi h a_I}{n} \right)^{\delta_{g a_I,0}} {\vartheta\left[^{1+{2g a_I\over n} }_{1+{2h a_I\over n} }\right] (\tfrac{x}{2}) \over \vartheta\left[^{1+{2g a_I\over n} }_{1+{2h a_I\over n}} \right](0) } \, \right ). \end{equation} In particular, for $g=h=0$ one finds \begin{equation} {\cal N}=4:\qquad\rho[^0_0]\sim \sin^4\left( \tfrac{\pi x}{2}\right)+O(q)\sim e^{2\pi {\rm i} x}+e^{-2\pi {\rm i} x}+6-4( e^{\pi {\rm i} x}+e^{-\pi {\rm i} x} )+O(q), \end{equation} reproducing the helicity content of an ${\cal N}=4$ multiplet. Similarly, for $[^g_0] \in {\rm Orb}_{\cal N}$ with ${\cal N}=1,2$ one finds \begin{eqnarray} {\cal N}=2:\qquad \rho[^g_0] &\sim& \sin^2\left( \tfrac{\pi x}{2}\right)+O(q)\sim2-e^{\pi {\rm i} x}-e^{-\pi {\rm i} x} +O(q)\nonumber,\\ {\cal N}=1:\qquad \rho[^g_0] &\sim& \sin\left( \tfrac{\pi x}{2}\right)+O(q) \sim 1-e^{\pi {\rm i} x} +O(q), \end{eqnarray} reproducing the helicity content of the ${\cal N}=2$ hyper and ${\cal N}=1$ chiral multiplets respectively. We will mainly focus on ${\cal N}=1$ sectors proportional to $\vartheta_1(\tfrac{x}{2}) \sim \sin\left( \tfrac{\pi x}{2}\right)$. The coefficient of $\vartheta_1$ should be interpreted as the net number of chiral fields, i.e. as the difference between spinors of left and right moving chirality in the open string spectrum. \subsection{Klein bottle amplitudes} We now consider the inclusion of an $\Omega$-plane at the singularity.
This corresponds to quotienting the type IIB string theory at the singularity by an action $\Omega_\epsilon$ involving a worldsheet parity and a reflection specified by four signs $(\epsilon_0,\epsilon_I)$ satisfying $\prod_{I=1}^3 \epsilon_I=-1$. The Klein bottle amplitude is given by the insertion of $\Omega_\epsilon$ in the torus amplitude. It is important to notice that only $\Omega$-unpaired states can contribute to the Klein bottle amplitude. In particular, $g$-twisted sectors combine left moving states with their complex conjugate and therefore can contribute to the Klein bottle only if they come in real representations, i.e. either for $g=0$ or $g={n \over 2}$ in the case of even $n$. Inserting $\Omega_{\epsilon}$ in the momentum integral (\ref{lamb1}) one finds \begin{eqnarray} \Lambda^{\Omega}[^g_h] &=& {1\over \tau_2^{\cal N} } \prod_{I \in{ \cal C}_{g,h} } \int d^{2} p_I e^{-\pi \tau_2 p_I^2} \langle p_I | \epsilon_I w^{a_I h} p_I \rangle= {1\over \tau_2^{\cal N} } \prod_{I \in {\cal C}_{g,h} }{1 \over |1- \epsilon_I w^{a_I h} |^2 } . \label{lamb2} \end{eqnarray} Combined with contributions coming from the diagonal part $\rho[^g_h](2 i t)$ one finds \begin{eqnarray} {\cal K}_{0,h} &=& - \, \prod_{I=1}^3 {( 1- w^{2a_I h} ) \over (1- \epsilon_I w^{a_I h} )^2 } \int {dt\over t^3} \,{\vartheta_1\over \eta^3}(2 {\rm i}t) , \nonumber\\ {\cal K}_{{n\over 2},h} &=& -\hspace*{-.5em}\prod_{I:\, a_I\, \rm even} \frac{(1- w^{2a_Ih})}{(1-\epsilon_I w^{a_Ih})^2} \int {dt\over t^3} \,{\vartheta_1\over \eta^3}(2 {\rm i}t), \end{eqnarray} for the $\Theta^h$-projected amplitudes in the $g=0$ and $g=\tfrac{n}{2}$ twisted sectors respectively and zero otherwise. \subsection{Annulus and Moebius strip amplitudes} Finally we consider the inclusion of fractional D3 and D7-branes at the singularity. Fractional branes are classified by the representations ${\bf R}_a$ of ${\mathbb Z}_n$ with $a=0,...n-1$.
We denote by $N_a$ ($M_a$) the number of fractional D3 (D7) branes of each type and by $N$ ($M$) the total number. We are interested in the low-energy dynamics of the four-dimensional theory localized at the singularity, described by open string states with at least one end on D3 branes. The dynamics of these states is described by an effective ${\cal N}=1$ supersymmetric quiver gauge theory. We orient D7 branes along the $I=1,2$ planes. \subsubsection*{The action on Chan-Paton indices} The full action of the orbifold and orientifold projections gives the following identifications for the Chan-Paton matrices $\lambda$ associated to D3-D3 and D3-D7 fields: \begin{align} \label{explicit orbifold action} \Theta^h:\ \ & \lambda_{\mathbf V} = \gamma_{\Theta,\mathrm D3} \lambda_{\mathbf V} \gamma_{\Theta,\mathrm D3}^{-1} & \lambda_{\mathbf C^I} &= w^{a_I}\gamma_{\Theta,\mathrm D3} \lambda_{\mathbf C^I} \gamma_{\Theta,\mathrm D3}^{-1} & \lambda_{ \mathbf C_{3,7}^{\dot a}} &= w^{s}\gamma_{\Theta,\mathrm D3} \lambda_{\mathbf C_{3,7}^{\dot a} }\gamma_{\Theta,\mathrm D7}^{-1}\\[.5em] \label{explicit Omega action} \Omega\ :\ \ &\lambda_{ \mathbf V} = - \gamma_{\Omega,\mathrm D3} \lambda_{\mathbf V}^T \gamma_{\Omega,\mathrm D3}^{-1} & \lambda_{\mathbf C^I} &= \epsilon_I \gamma_{\Omega,\mathrm D3} (\lambda_{\mathbf C^I})^T \gamma_{\Omega,\mathrm D3}^{-1} & \lambda_{\mathbf C_{3,7}^{\dot a}} &= \gamma_{\Omega,\mathrm D3} (\lambda_{\mathbf C_{7,3}^{\dot a}})^T \gamma_{\Omega,\mathrm D7}^{-1}.
\end{align} Up to some choices of phases and conventions, one can write the explicit embedding of the projections in the Chan--Paton group: \begin{eqnarray} \gamma_{\Theta,\rm D3} &=& \left( \begin{array}{ccccc} \mathbf{1}_{N_0} &&&&\\ &w\, \mathbf{1}_{N_1}&&&\\ &&w^2\, \mathbf{1}_{N_2}&&\\ &&&&\hspace*{-1.5em}\ddots\\ \end{array} \right)\,, \label{gampro1} \\[.5em] \gamma_{\Omega,\rm D3} &=& \left( \begin{array}{cccccc} \Delta_{N_0} & &&&&\\ &0&\cdots&\cdots&0&c\mathbf1_{N_1}\\ &\vdots&&&c\mathbf1_{N_2}&0\\ &\vdots&&\iddots&&\vdots\\ &0&c^*\mathbf1_{N_{2}}&&&\vdots\\ &c^*\mathbf1_{N_{1}}&0&\cdots&\cdots&0\\ \end{array} \right)\,, \qquad \begin{array}{rcl} \Delta_{N_a}&=& \left\{ \begin{array}{lc} \mathbf{1}_{N_a}&\epsilon_0=-1\\ {\rm i} J_{N_a}&\epsilon_0=+1\\ \end{array}\right.\\ \vphantom{\prod}&& \nonumber\\ c&=&\left\{ \begin{array}{lc} 1&\epsilon_0=-1\\ \,{\rm i}&\epsilon_0=+1\\ \end{array}\right. \end{array} \end{eqnarray} where $J_{N_a}$ is the (real, antisymmetric) quadratic invariant of Sp$(N_a)$. When $n$ is even, the central entry of the antidiagonal block in $\gamma_{\Omega,\rm D3}$ corresponds to the second SO/Sp node of the quiver and therefore is of the form $\Delta_{N_{n/2}}$. In the case of $n$ even and $n/2$ odd, there is another inequivalent projection, corresponding to the identification of the node $0$ with the node $n/2$. 
The first example of this kind is ${\mathbb Z}_6$, where we can write: \begin{equation} \gamma_{\Omega,\rm D3}= \left( \begin{array}{cccccc} 0 &\cdots & 0&c\mathbf1_{N_0} & & \\ \vdots & & c\mathbf1_{N_1} &0& & \\ 0& c^*\mathbf1_{N_1} & & \vdots & & \\ c^*\mathbf1_{N_0} &0 &\cdots &0 & & \\ & & & & 0& c\mathbf1_{N_5} \\ & & & & c^*\mathbf1_{N_5} & 0 \end{array} \right) \label{gampro2} \end{equation} These matrices satisfy the consistency condition \begin{equation}\label{Omega D3 consistency} \gamma^T_{\Omega,\mathrm D3} = -\epsilon_0 \gamma_{\Omega,\mathrm D3}, \end{equation} which can be obtained by applying \eqref{explicit Omega action} twice. This choice of sign combined with \eqref{explicit Omega action} provides the correct gauge group and matter field projections in the D3-D3 sector. Consistency also requires that the D7-D7 sector exhibits the opposite unoriented projection: \begin{equation}\label{Omega D7 consistency} \gamma^T_{\Omega,\mathrm D7} = \epsilon_0 \gamma_{\Omega,\mathrm D7}, \end{equation} which means that the same expressions (\ref{gampro1}) can be used for $\gamma_{\Theta,\rm D7}$, $\gamma_{\Omega,\rm D7}$ after replacing $N_a\to M_a$ and $\epsilon_0 \to -\epsilon_0$. In the following we will use the shorter notation $\gamma_{h}\equiv\gamma_{\Theta,\mathrm{D}p}^h$ and $\gamma_{\Omega h}\equiv\gamma_{\Omega,\mathrm Dp}\, \gamma_{\Theta,\mathrm{D}p}^h$. \subsubsection*{The annulus amplitude} Let us first consider the annulus amplitude. There are three types of open strings depending on the boundary conditions at the two ends of the open string. Contributions from D3-D3 and D7-D7 open strings are proportional to the untwisted amplitude $\rho[^0_h]$. Finally, D3-D7 open strings are twisted along the four-dimensional plane with mixed Neumann--Dirichlet boundary conditions and therefore have neither bosonic nor fermionic zero modes along this plane.
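As a cross-check of the sign condition \eqref{Omega D3 consistency}, one can build $\gamma_{\Omega,\rm D3}$ explicitly and verify $\gamma^T_{\Omega,\mathrm D3} = -\epsilon_0 \gamma_{\Omega,\mathrm D3}$ for both values of $\epsilon_0$. The sketch below does this for ${\mathbb Z}_3$; the ranks $(N_0,N_1)=(2,1)$ and the plain-Python matrix representation are our illustrative choices:

```python
def build_gamma(N0, N1, eps0):
    # gamma_{Omega,D3} for Z_3: a Delta_{N0} block on node 0 and antidiagonal
    # c / c* blocks identifying nodes 1 and 2 (so N1 = N2); N0 must be even
    # when eps0 = +1, since i*J needs paired indices
    c = 1 + 0j if eps0 == -1 else 1j
    dim = N0 + 2 * N1
    g = [[0j] * dim for _ in range(dim)]
    for i in range(N0):
        if eps0 == -1:
            g[i][i] = 1 + 0j                      # Delta = identity (SO-type node)
        elif i % 2 == 0:
            g[i][i + 1], g[i + 1][i] = 1j, -1j    # Delta = i*J (Sp-type node)
    for i in range(N1):
        g[N0 + i][N0 + N1 + i] = c
        g[N0 + N1 + i][N0 + i] = c.conjugate()
    return g

# the matrix is symmetric for eps0 = -1 and antisymmetric for eps0 = +1
for eps0 in (-1, 1):
    g = build_gamma(2, 1, eps0)
    assert all(g[j][i] == -eps0 * g[i][j]
               for i in range(len(g)) for j in range(len(g)))
```

Replacing $\epsilon_0 \to -\epsilon_0$ gives the D7 counterpart \eqref{Omega D7 consistency}.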
Collecting the various contributions one finds \begin{eqnarray} {\cal A}_h &=& - \prod_{I=1}^3 ( 1- w^{a_I h} ) \left [ {\rm tr} _{\rm D3}\gamma_h -\frac{ w^{-{a_3 \over 2}h} {\rm tr} _{\rm D7}\gamma_h}{\prod_{I=1}^2 ( 1-w^{a_I h} )}\right]^2 \int {dt\over t^3} \, {\vartheta_1\over \eta^3} (\tfrac{ {\rm i}t}{2}) \label{directannulus}. \end{eqnarray} The three terms in the expansion of the square originate from D3-D3, D3-D7 and D7-D7 open strings respectively. $w$-dependent terms in the numerator and denominators come from contributions from fermionic and bosonic zero modes respectively. Finally ${{\vartheta}_1\over { \eta}^3} $ comes from bosonic and fermionic excitations transverse to the singularity. The Chan--Paton traces are \begin{equation} {\rm tr} _{\rm D3}\gamma_h = \sum_{a=0}^{n-1} N_a w^{a h} ,\qquad {\rm tr} _{\rm D7}\gamma_h = \sum_{a=0}^{n-1} M_a w^{a h} .\label{chan} \end{equation} \subsubsection*{The Moebius amplitude} The insertion of $\Omega_\epsilon$ in the D3-D3 and D7-D7 annulus leads to the Moebius amplitudes \begin{equation} {\cal M}_h = \prod_{I=1}^3 ( 1+ \epsilon_I w^{a_I h} ) \left[ {\rm tr}_{\rm D3} (\gamma^{-1}_{\Omega h}\gamma^{T}_{\Omega h}) +\frac{ w^{-{a_3 }h} {\rm tr}_{\rm D7} (\gamma^{-1}_{\Omega h}\gamma^{T}_{\Omega h}) }{\prod_{I=1}^2 ( 1-w^{2 a_I h} )} \right] \int {dt\over t^3} \, { {\vartheta}_1\over { \eta}^3} (\tfrac{ {\rm i}t}{2}+\ft12) \label{directmoebius}, \end{equation} with \begin{equation} {\rm tr}_{\rm D3} (\gamma^{-1}_{\Omega h}\gamma^{T}_{\Omega h}) = -\epsilon_0 {\rm tr}_{\rm D3} \gamma_{2h}, \qquad {\rm tr}_{\rm D7} (\gamma^{-1}_{\Omega h}\gamma^{T}_{\Omega h}) = \epsilon_0 {\rm tr}_{\rm D7} \gamma_{2h} \label{omegagamma} \end{equation} for the unoriented projection defined by (\ref{gampro1}). \subsubsection*{The spectrum of open string states} The spectrum of the quiver gauge theory is codified in the Annulus and Moebius amplitudes.
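Since the traces (\ref{chan}) are discrete Fourier transforms of the multiplicities, the set $\{{\rm tr}\,\gamma_h\}$ and the ranks $\{N_a\}$ carry the same information. A quick numerical sanity check (the ranks below are illustrative, not taken from the text):

```python
import cmath

n = 3
N = [4, 1, 1]                       # hypothetical fractional D3 multiplicities
w = cmath.exp(2j * cmath.pi / n)

def tr_gamma(h):
    # tr_D3 gamma_h = sum_a N_a w^{a h}, as in (chan)
    return sum(N[a] * w ** (a * h) for a in range(n))

# h = 0 gives the total brane number ...
assert abs(tr_gamma(0) - sum(N)) < 1e-12
# ... and the inverse DFT recovers each multiplicity N_a
for a in range(n):
    Na = sum(w ** (-a * h) * tr_gamma(h) for h in range(n)) / n
    assert abs(Na - N[a]) < 1e-12
```

The same applies verbatim to the D7 multiplicities $M_a$.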
States in vector multiplets come from open strings connecting D3 branes of the same type and realize the gauge symmetry with orthogonal and symplectic gauge groups for nodes $a=0,\tfrac{n}{2}$ and unitary groups otherwise. Chiral multiplets come from open strings connecting D3 branes of different types or D3-D7 strings. They are summarized in the open string partition function \begin{eqnarray} {1\over 2n} \sum_{h=0}^{n-1}{\cal A}_{h,D3D3+D3D7}& =& - \sum_{a,b=0}^{n-1} \left( \ft12 I_{ab} N_a N_b +J_{ab} N_a M_b \right) \int {dt\over t^3} \, { \vartheta_1\over \eta^3} (\tfrac{{\rm i}t}{2}), \nonumber\\ {1\over 2n} \sum_{h=0}^{n-1}{\cal M}_{h,D3}& =& -\epsilon_0 \sum_{a=0}^{n-1} \ft12 K_{a} N_a \int {dt\over t^3} \, { \vartheta_1\over \eta^3} (\tfrac{{\rm i}t}{2}+\ft12) , \label{spectrumam} \end{eqnarray} with \begin{eqnarray} I_{ab} &=& \sum_{I=1}^3 ( \delta_{a,b-a_I} -\delta_{a,b+a_I}) ,\nonumber\\ J_{ab} &=& \delta_{a,b-s} -\delta_{a,b+s},\nonumber\\ K_a &=& \sum_{I=1}^3 \epsilon_I ( \delta_{2a,a_I} -\delta_{2a,-a_I} ) \end{eqnarray} codifying the intersection numbers of the exceptional cycles at the singularity. In deriving (\ref{spectrumam}) we repeatedly used the identity \begin{equation} \prod_{I=1}^3 ( 1+\epsilon_I w^{a_I h} )=\sum_{I=1}^3 \epsilon_I (w^{a_I h} -w^{-a_I h} ). \label{id} \end{equation} Using the fact that $\vartheta_1\sim 1-1$ counts the degrees of freedom of an ${\cal N}=1$ chiral multiplet (a vector multiplet is non-chiral) we conclude that the spectrum of chiral multiplets for the quiver can be written as \begin{eqnarray} \label{orient2} {\cal H}^{\rm open}_{\rm chiral} &=& \sum_{a,b=0}^{n-1}\, \left( \ft12 I_{ab} \fund_{a} \, \overline{\fund}_b + J_{ab}\, M_b \, \fund_{a} \right) + \epsilon_0 \sum_{a=0}^{n-1}\, \ft12 K_{a} \fund_{a}.
\end{eqnarray} \subsection{Tadpole cancellation} \subsubsection*{Odd $n$} The Klein, Annulus and Moebius amplitudes can be rewritten as cylinder amplitudes representing the exchange of a closed twisted string state between $\Omega$-planes and D-branes. We denote by $\tilde {\cal K}_{0,h}$, $\tilde {\cal A}_h$, $\tilde {\cal M}_h$ the corresponding amplitudes. The length $\ell$ of the cylinder is related to the one-loop modulus $t$ via $\ell=({1\over 2t},{2\over t},{1\over 2t})$ for $({\cal K},{\cal A},{\cal M})$ respectively. The ${\cal K}$ and ${\cal A}$ direct and transverse amplitudes are related by an $S$ modular transformation while the Moebius amplitudes are linked by $P=T S T^2 S$. Using \begin{equation} \begin{aligned} S: & \qquad {\vartheta_1\over \eta^3}\left( -{1\over \tau} \right) = {1\over \tau} {\vartheta_1\over \eta^3}(\tau),\\[.2em] P: & \qquad {\vartheta_1\over \eta^3}\left( \frac{ {\rm i}}{2 \tau_2}+\frac{1}{2} \right) = {1\over {\rm i} \tau_2} {\vartheta_1\over \eta^3} \left( \frac{ { {\rm i}}\, \tau_2}{2 }+\frac{1}{2} \right), \end{aligned} \end{equation} one finds \begin{eqnarray} \tilde{\cal K}_h &=& {\rm i}\, 2^{2} \, \prod_{I=1}^3 {( 1- w^{2a_I h} ) \over (1- \epsilon_I w^{a_I h} )^2 } \int d\ell \,{\vartheta_1\over \eta^3} (i \ell) , \nonumber\\ \tilde{\cal A}_{h} &=& {\rm i}\, 2^{-2} \prod_{I=1}^3 ( 1- w^{a_I h} ) \left [ {\rm tr} _{\rm D3}\gamma_h -\frac{ w^{-{a_3 \over 2}h} {\rm tr} _{\rm D7}\gamma_h}{\prod_{I=1}^2 ( 1-w^{a_I h} )}\right]^2 \int d\ell \, {\vartheta_1\over \eta^3} (i \ell) \label{tadpol} ,\\ \tilde{\cal M}_h &=& {\rm i} \, 2 \, \epsilon_0 \prod_{I=1}^3 ( 1+ \epsilon_I w^{a_I h} ) \left[ {\rm tr}_{\rm D3} \gamma_{2h} -\frac{ w^{-{a_3 }h} {\rm tr}_{\rm D7} \gamma_{2h} }{\prod_{I=1}^2 ( 1-w^{2 a_I h} )} \right] \int d\ell \, { {\vartheta}_1\over {\eta}^3} (i \ell+\ft12) . 
\nonumber \end{eqnarray} Collecting the massless contributions from (\ref{tadpol}) one finds that $\tilde{\cal K}_h+\tilde{\cal A}_{2h}+\tilde{\cal M}_h$ form a complete square proportional to \begin{eqnarray} \left( \prod_{I=1}^3 ( 1- w^{2a_I h} ) {\rm tr}_{\rm D3} \gamma_{2h} \, + (w^{-2sh}-w^{2sh}) {\rm tr}_{\rm D7} \gamma_{2h} + 4 \, \epsilon_0 \prod_{I=1}^3 ( 1+\epsilon_I w^{a_I h} ) \right)^2 . \label{tad0odd} \end{eqnarray} Using (\ref{chan}) and (\ref{id}) one can rewrite the combination inside the brackets in (\ref{tad0odd}) as \begin{equation} \tilde{\cal K}_h+\tilde{\cal A}_{2h}+\tilde{\cal M}_h \sim \left[ \sum_{a=0}^{n-1} w^{2a h} \mathcal I_a \right]^2, \label{tad1} \end{equation} with \begin{equation} {\mathcal I}_a= \sum_{b=0}^{n-1} ( I_{ab}\,N_b+J_{ab}\,M_b) +4 \epsilon_0 K_{a}\,. \label{tadfinal0} \end{equation} We notice that ${\mathcal I}_a$ is precisely the anomaly associated to the gauge group U$(N_a)$ and is zero for $a=0,\tfrac{n}{2}$. This shows that cancellation of local tadpoles ${\mathcal I}_a=0$ and of irreducible anomalies boil down to the same set of conditions \begin{equation} I_{ab}\,N_b+J_{ab}\,M_b +4 \epsilon_0 \, K_{a} =0 \label{tadfinal} \end{equation} for all $a$. \subsubsection*{Even $n$} When $n$ is even, we must distinguish between the tadpoles for fields $T_{2h}$ in the even twisted sectors, which propagate through the Klein, Moebius and Annulus amplitudes, and the odd ones $T_{2h+1}$ which only propagate along the Annulus. Collecting all amplitudes contributing to the same tadpole one finds \begin{equation} \tilde{\cal K}_{0,h}+\tilde{\cal K}_{0,h+n/2}+\tilde{\cal K}_{n/2,h}+\tilde{\cal K}_{n/2,h+n/2}+\tilde{\cal A}_{2h}+\tilde{\cal M}_h+\tilde{\cal M}_{h+n/2} = 0, \qquad \tilde{\cal A}_{2h+1} = 0. 
\end{equation} The two equations can be re-expressed as perfect squares: \begin{align} \label{tad0}% &\left[\,\prod_{I=1}^3 ( 1- w^{2a_I h} ) {\rm tr}_{\rm D3} \gamma_{2h} + (w^{-2sh}-w^{2sh}){\rm tr}_{\rm D7} \gamma_{2h}\right.+ \cr &\qquad\qquad\qquad\qquad\qquad\:+\left.4\,\epsilon_0 \Big(\prod_{I=1}^3 (1+\epsilon_I w^{a_I h}) + \prod_{I=1}^3(1+\epsilon_I w^{a_I (h+n/2)})\Big)\right]^2 \cr &\sim \left[\sum_{a=0}^{n/2-1} w^{2a h} ( {\mathcal I}_a + {\mathcal I}_{a+n/2})\right]^2=0, \\[1.5em] &\left[\,\prod_{I=1}^3 ( 1- w^{a_I (2h+1)} ) {\rm tr}_{\rm D3} \gamma_{2h+1} + (w^{-s(2h+1)}-w^{s(2h+1)}){\rm tr}_{\rm D7} \gamma_{2h+1}\right]^2 \nonumber\\[.2em] &\sim \left[\sum_{a=0}^{n/2-1} w^{a (2h+1)} ( {\mathcal I}_a - {\mathcal I}_{a+n/2})\right]^2 =0, \end{align} where we used the fact that $K_a = K_{a+n/2}$. Again cancellation of local tadpoles ${\mathcal I}_a=0$ matches the cancellation of anomalies in the quiver gauge theory. For even $n$ a second, inequivalent orientifold projection is available, called $\Omega_{\epsilon}'$ in the main text. This corresponds to the identification $\bar N_a = N_{\frac{n}{2}-a}$, $\bar M_a = M_{\frac{n}{2}-a}$. The cases with $n$ a multiple of four are in some sense ``trivial'', since this $\Omega_{\epsilon}'$ coincides with the orientifold projection previously described.
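As a numerical consistency check of the discussion above, the sketch below builds $I_{ab}$, $J_{ab}$ and $K_a$ from their definitions, verifies the identity (\ref{id}), and exhibits a rank assignment obeying the tadpole/anomaly conditions (\ref{tadfinal}). The ${\mathbb Z}_3$ data, namely $a_I=(1,1,1)$ with $s=1$, $\epsilon_I=(1,1,-1)$ and $\epsilon_0=-1$, and the ranks $N_a$, $M_a$ are illustrative choices, not taken from the main text:

```python
import cmath

# Illustrative Z_3 data: a_I = (1,1,1), D3-D7 twist s = 1,
# eps_I = (1,1,-1) with prod eps_I = -1, and eps_0 = -1.
n, aI, s, eps, eps0 = 3, (1, 1, 1), 1, (1, 1, -1), -1
w = cmath.exp(2j * cmath.pi / n)

def d(x, y):
    # Kronecker delta modulo n, as used throughout the appendix
    return 1 if (x - y) % n == 0 else 0

# intersection data from their definitions
I = [[sum(d(p, q - ai) - d(p, q + ai) for ai in aI) for q in range(n)]
     for p in range(n)]
J = [[d(p, q - s) - d(p, q + s) for q in range(n)] for p in range(n)]
K = [sum(e * (d(2 * p, ai) - d(2 * p, -ai)) for ai, e in zip(aI, eps))
     for p in range(n)]

# numerical check of the identity (id); it needs sum_I a_I = 0 mod n
# and prod_I eps_I = -1
for h in range(n):
    lhs = 1
    for ai, e in zip(aI, eps):
        lhs *= 1 + e * w ** (ai * h)
    rhs = sum(e * (w ** (ai * h) - w ** (-ai * h)) for ai, e in zip(aI, eps))
    assert abs(lhs - rhs) < 1e-12

# a rank assignment solving sum_b (I_ab N_b + J_ab M_b) + 4 eps_0 K_a = 0:
# equal D3 ranks kill the I_ab term, flavour branes balance the O-plane charge
N, M = [2, 2, 2], [4, 0, 0]
for p in range(n):
    tad = (sum(I[p][q] * N[q] + J[p][q] * M[q] for q in range(n))
           + 4 * eps0 * K[p])
    assert tad == 0
```

Any other rank assignment can be screened against the same conditions before being accepted as a consistent unoriented quiver.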
Equation (\ref{omegagamma}) is now replaced by \begin{equation}\begin{aligned} {\rm tr}_{\rm D3} (\gamma^{-1}_{\Omega h}\gamma^{T}_{\Omega h})&= -\epsilon_0 (-1)^h\, {\rm tr}_{\rm D3} \gamma_{2h} = -\epsilon_0 w^{\frac{n}{2} h}\, {\rm tr}_{\rm D3} \gamma_{2h}\, ,\\ {\rm tr}_{\rm D7} (\gamma^{-1}_{\Omega h}\gamma^{T}_{\Omega h})&= +\epsilon_0 (-1)^h\, {\rm tr}_{\rm D7} \gamma_{2h} = +\epsilon_0 w^{\frac{n}{2} h}\, {\rm tr}_{\rm D7} \gamma_{2h} \,, \end{aligned}\end{equation} which can be easily checked for example in the $\mathbb{Z}_6$ case with $\gamma_{\Omega}$ explicitly given by (\ref{gampro2}).\\ Such a choice is allowed because the extra phase squares to unity. This is a necessary condition for writing the sum of the transverse amplitudes again as a perfect square, since the contributions to the Klein bottle and Annulus amplitudes remain unchanged. By performing the same steps as above, we recover the same identification of tadpole cancellation conditions with anomaly cancellation, as in (\ref{tad0}), but with a different expression for the orientifold contribution $K_a$: \begin{equation} \hat{K}_a = \sum_{I=1}^3 \epsilon_I (\delta_{2a,a_I+\frac{n}{2}} -\delta_{2a,\frac{n}{2}-a_I} ). \end{equation} The example of this second projection on $\mathbb{C}^3/\mathbb{Z}_6$ was given in the main text. \end{appendix} \providecommand{\href}[2]{#2}\begingroup\raggedright
\section{Introduction and main results} The Kirchberg--Phillips theorem -- proved independently by Kirchberg \cite{Kirchberg-simple} and Phillips \cite{Phillips-classification} in the mid 90s -- was one of the first major breakthroughs in the Elliott classification programme, constituting a complete $K$-theoretic classification of separable, nuclear, simple, purely infinite $C^\ast$-algebras satisfying the universal coefficient theorem (UCT). The simple, stably finite side of the classification programme took 20 years longer but was finally resolved by the work of many hands -- a highly incomplete list of references being \cite{GongLinNiu-classZ-stable}, \cite{GongLinNiu-classZ-stable2}, \cite{ElliottGongLinNiu-classfindec}, \cite{TikuisisWhiteWinter-qdnuc} -- leading to a complete classification of all separable, nuclear, unital, simple, $\mathcal Z$-stable $C^\ast$-algebras satisfying the UCT using $K$-theory and traces.\footnote{Here the main result of \cite{CETWW-nucdim} was used to reduce from finite nuclear dimension to $\mathcal Z$-stability.} The concept of pure infiniteness for simple $C^\ast$-algebras was first introduced by Cuntz in \cite{Cuntz-K-theoryI} as a $C^\ast$-algebraic analogue of Type $\mathrm{III}$ von Neumann algebra factors. In this paper it was proved that the Cuntz algebras $\mathcal O_n$ introduced in \cite{Cuntz-simple} enjoy this property. Purely infinite, simple $C^\ast$-algebras arise naturally from various constructions, such as simple Cuntz--Krieger algebras arising from symbolic dynamics \cite{CuntzKrieger-MarkovI}, and the reduced crossed product $C(\partial G)\rtimes_r G$ of a non-elementary hyperbolic group $G$ acting naturally on its Gromov boundary $\partial G$ \cite{AD-purelyinfdyn}. In these cases, the $C^\ast$-algebras fall into the class covered by the Kirchberg--Phillips classification, and their isomorphism class is therefore determined by their $K$-theory. 
A major leap forward in the study of purely infinite, simple $C^\ast$-algebras came from deep work of Kirchberg announced in \cite{Kirchberg-ICM}, see \cite{KirchbergPhillips-embedding} for published proofs. He proved his iconic $\mathcal O_2$-embedding theorem -- that all separable, exact $C^\ast$-algebras embed into $\mathcal O_2$ -- and showed that a separable, nuclear, simple $C^\ast$-algebra is purely infinite if and only if it is isomorphic to its tensor product with the Cuntz algebra $\mathcal O_\infty$. A $C^\ast$-algebra satisfying this latter property is called $\mathcal O_\infty$-stable. The class of separable, nuclear, simple, purely infinite $C^\ast$-algebras is now commonly known as Kirchberg algebras, while Kirchberg refers to the unital ones as pi-sun algebras (\underline{p}urely \underline{i}nfinite (simple),\footnote{Because when Kirchberg did this groundbreaking work, pure infiniteness was not defined outside the simple setting.} \underline{s}eparable, \underline{u}nital, \underline{n}uclear). These results of Kirchberg were key ingredients in proving the Kirchberg--Phillips theorem. While Elliott's original classification results for stably finite $C^\ast$-algebras such as for AF algebras \cite{Elliott-AFclass} and certain AH algebras \cite{Elliott-classrr0} revolved around classifying not necessarily simple $C^\ast$-algebras, most subsequent classification results dealt with simple $C^\ast$-algebras. On the purely infinite side, around the turn of the millennium, Kirchberg vastly generalised his and Phillips' classification results by classifying all separable, nuclear, $\mathcal O_\infty$-stable $C^\ast$-algebras up to stable isomorphism, using an ideal-related version of Kasparov's $KK$-theory as the classifying invariant.
This resulted in a published manuscript \cite{Kirchberg-non-simple} outlining the strategy of the proof, and a complete proof is to appear as the headline result in a yet unpublished book by Kirchberg \cite{Kirchberg-book}. The goal of this paper is to present a proof of this highly influential work of Kirchberg, thus closing this massive gap in the published literature. The proof presented here is very different from the one outlined by Kirchberg, and, dare I argue, quite a lot simpler. In \cite{KirchbergRordam-purelyinf} and \cite{KirchbergRordam-absorbingOinfty}, Kirchberg and Rørdam generalised the notion of pure infiniteness for simple $C^\ast$-algebras to the not necessarily simple case. This led to three notions, namely weak pure infiniteness, pure infiniteness, and strong pure infiniteness, which are successively stronger conditions. While these are all equivalent for simple $C^\ast$-algebras, it is an open problem whether any or all of these notions are equivalent in general. Kirchberg and Rørdam showed that a separable, nuclear $C^\ast$-algebra is $\mathcal O_\infty$-stable if and only if it is strongly purely infinite,\footnote{Kirchberg and Rørdam proved this for stable $C^\ast$-algebras. One should appeal to \cite[Proposition 4.4(4,5)]{Kirchberg-Abel} or \cite[Corollary 3.2]{TomsWinter-ssa} for the general statement.} and hence these are the $C^\ast$-algebras classified by the above-mentioned result -- which is also the main theorem of this paper. Strong pure infiniteness is rather hard to verify in examples, although significant progress was made in \cite{BrownClarkSierakowski-purelyinfgroupoids}, \cite{KirchbergSierakowski-spicrossed} and \cite{KirchbergSierakowski-filling}. Weak pure infiniteness and pure infiniteness are however easier to check.
For instance, any $C^\ast$-algebra with finite nuclear dimension is weakly purely infinite if and only if it is traceless, in the sense that it has no non-trivial lower semicontinuous $[0,\infty]$-valued 2-quasitraces \cite[Theorem 5.2]{WinterZacharias-nucdim}.\footnote{A much easier proof is implicitly presented in \cite[Proposition 2.3]{RobertTikuisis-nucdimnonsimple}. In fact, if $A$ has nuclear dimension at most $m<\infty$, then its Cuntz semigroup has $(m+1)$-comparison. It follows from tracelessness that the sum $a\oplus \cdots \oplus a$ ($m+1$ summands) of any positive element $a$ is properly infinite. Hence the $C^\ast$-algebra is weakly purely infinite.} There are conditions under which the three notions of pure infiniteness are known to be equivalent, for instance when the primitive ideal space is zero dimensional \cite{PasnicuRordam-purelyinfrr0} or Hausdorff \cite{BlanchardKirchberg-Hausdorff}. These results have been used to verify strong pure infiniteness of many more examples, see for instance \cite{HongSzymanski-purelyinfgraph}, \cite{Kwasniewski-crossedC_0}, \cite{KwasniewskiSzymanski-pureinffell}, \cite{BonickeLi-purelyinfample}. In \cite{Gabe-O2class}, I presented a new proof of a special case of the main theorem of this paper: a complete classification of separable, nuclear, stable/unital $\mathcal O_2$-stable $C^\ast$-algebras. While some of the machinery developed in that paper is crucial for proving the main results of this paper, these tools have already had other applications: they were used to answer exactly when traceless $C^\ast$-algebras are AF embeddable \cite[Corollary C]{Gabe-AFemb}, when Connes and Higson's $E$-theory \cite{ConnesHigson-deformations} can be unsuspended \cite[Corollary E]{Gabe-AFemb}, and led to the exact computation of the nuclear dimension and decomposition rank of separable, nuclear, $\mathcal O_\infty$-stable $C^\ast$-algebras \cite[Theorems A and B]{BGSW-nucdim}.
The techniques presented in this paper should reach far beyond those in \cite{Gabe-O2class} and should be applicable in a much broader context. \subsection*{Classification of $C^\ast$-algebras through classification of maps} The bulk of this paper is devoted to providing existence and uniqueness results for $\ast$-homomorphisms satisfying certain properties. The purpose of this section is to explain why this is a natural thing to aim for, how it ties in with classification, and why the class of separable, nuclear, $\mathcal O_\infty$-stable $C^\ast$-algebras is exactly the class classified by these results. A classical method for classifying $C^\ast$-algebras is through an Elliott intertwining argument, see \cite[Section 2.3]{Rordam-book-classification}. A version of Elliott intertwining states that if $A$ and $B$ are separable $C^\ast$-algebras, and $\phi\colon A \to B$ and $\psi \colon B\to A$ are $\ast$-homo\-morphisms for which $\psi \circ \phi \sim_\au \id_A$ and $\phi \circ \psi \sim_\au \id_B$, where $\sim_\au$ denotes approximate unitary equivalence, then $A\cong B$. In particular, suppose one is given a functorial invariant $\mathrm{Inv}$, and a class $\mathscr C$ of separable $C^\ast$-algebras, such that one has the following: \begin{itemize} \item[$\exists$] \emph{Existence:} For all $A,B\in \mathscr C$ and any homomorphism $\rho \colon \mathrm{Inv}(A) \to \mathrm{Inv}(B)$, there is a $\ast$-homomorphism $\phi \colon A \to B$ such that $\mathrm{Inv}(\phi) =\rho$; \item[$!$] \emph{Uniqueness:} For all $A,B \in \mathscr C$ and any $\ast$-homomorphisms $\phi, \psi \colon A \to B$ such that $\mathrm{Inv}(\phi) = \mathrm{Inv}(\psi)$, one has $\phi\sim_\au \psi$. \end{itemize} Then for every $A,B\in \mathscr C$, $A\cong B$ if and only if $\mathrm{Inv}(A) \cong \mathrm{Inv}(B)$. So the objects in $\mathscr C$ are completely classified by the invariant $\mathrm{Inv}$.
To see that one obtains classification, fix an isomorphism $\rho\colon \mathrm{Inv}(A) \to \mathrm{Inv}(B)$. By the existence part above one may lift $\rho$ and $\rho^{-1}$ to $\phi$ and $\psi$ respectively. As $\mathrm{Inv}(\psi \circ \phi) = \mathrm{Inv}(\id_A)$, uniqueness applied to $\psi\circ \phi$ and $\id_A$ entails that $\psi \circ \phi \sim_\au \id_A$, and similarly $\phi\circ \psi \sim_\au \id_B$. By Elliott intertwining one obtains $A\cong B$. In general, it is too much to hope for that one can find an invariant which classifies \emph{all} $\ast$-homomorphisms in this fashion. For instance, if the class $\mathscr C$ only consists of unital $C^\ast$-algebras it seems quite natural to only consider \emph{unital} $\ast$-homomorphisms. Or perhaps one wants to consider injective $\ast$-homomorphisms. Either way, it makes sense to only consider existence and uniqueness for $\ast$-homomorphisms which satisfy a property $\mathscr P$, where $\mathscr P$ is closed under composition. There is one major caveat though: in order to run the classification argument above, one needs to know that $\id_A$ and $\id_B$ satisfy the property $\mathscr P$. An instance where this plays a notable role is \emph{nuclearity}. A map between $C^\ast$-algebras is said to be nuclear if it has the completely positive approximation property, and a $C^\ast$-algebra $A$ is nuclear exactly when $\id_A$ is nuclear. Nuclearity of $\ast$-homomorphisms is a fundamental part of most known existence and uniqueness theorems, and this explains why classification should only be expected to hold for nuclear $C^\ast$-algebras. A major part of this paper will be to produce existence and uniqueness results for $\ast$-homomorphisms satisfying mainly two properties: they should be nuclear and \emph{strongly $\mathcal O_\infty$-stable}.
Strong $\mathcal O_\infty$-stability of $\ast$-homomorphisms was introduced in \cite{Gabe-O2class} where it was proved that a separable $C^\ast$-algebra $A$ is $\mathcal O_\infty$-stable -- i.e.~$A \cong A\otimes \mathcal O_\infty$ -- if and only if $\id_A$ is strongly $\mathcal O_\infty$-stable. In particular, the separable $C^\ast$-algebras $A$ classified by these existence and uniqueness results are exactly the ones for which $\id_A$ is nuclear and strongly $\mathcal O_\infty$-stable, or equivalently, $A$ should be separable, nuclear, and $\mathcal O_\infty$-stable. \subsection*{The main results -- the simple case} Before presenting the general (not necessarily simple) classification results, the focus will be on the classical case leading to the Kirchberg--Phillips theorem. In this case, the classifying invariant will be $KK_\nuc$ as defined by Skandalis \cite{Skandalis-KKnuc}; a variation on Kasparov's $KK$-theory \cite{Kasparov-KKExt}. While all $\ast$-homomorphisms $\phi \colon A\to B$ give rise to elements $KK(\phi) \in KK(A,B)$, only nuclear $\ast$-homomorphisms $\phi$ give rise to elements $KK_\nuc(\phi) \in KK_\nuc(A,B)$. If either $A$ or $B$ is nuclear, then the natural map $KK_\nuc(A,B) \to KK(A,B)$ is an isomorphism, and moreover, every $\ast$-homomorphism $\phi\colon A\to B$ is nuclear. Let $A$ be a separable $C^\ast$-algebra and $B$ a $\sigma$-unital $C^\ast$-algebra, i.e.~a $C^\ast$-algebra containing a strictly positive element. 
The properties satisfied by the $\ast$-homo\-morphisms $\phi \colon A \to B$ for the existence and uniqueness theorems are the following: \begin{itemize} \item \emph{Nuclear:} $\phi$ is a point-norm limit of maps factoring via completely positive maps through matrix algebras; \item \emph{Strongly $\mathcal O_\infty$-stable:}\footnote{If one replaces the paths with sequences $(s_n^{(i)})_{n\in \mathbb N}$ which satisfy the analogous conditions, then one obtains the definition of $\phi$ being \emph{$\mathcal O_\infty$-stable} from \cite{Gabe-O2class}. It is an open problem, studied in more detail in Section \ref{s:stronglyOinfty}, whether every $\mathcal O_\infty$-stable map is strongly $\mathcal O_\infty$-stable.} there are continuous, bounded paths $(s_t^{(i)})_{t\in[0,\infty)}$ in $B$ for $i=1,2$ such that \begin{equation} \lim_{t\to \infty}\| s_t^{(i)} \phi(a) - \phi(a) s_t^{(i)}\| = 0, \qquad \lim_{t\to \infty} \| ((s_t^{(i)})^\ast s_t^{(j)} - \delta_{i,j}) \phi(a) \| = 0, \end{equation} for every $a\in A$ and $i,j=1,2$, where $\delta_{i,j} = 1$ if $i=j$ and $\delta_{i,j}=0$ if $i\neq j$; \item \emph{Full:} $\phi(a)$ generates all of $B$ as a two-sided, closed ideal, for every non-zero $a\in A$. \end{itemize} These properties of $\ast$-homomorphisms translate into properties of $C^\ast$-algebras, in the sense that if $A$ is a separable $C^\ast$-algebra, then $\id_A$ is nuclear, strongly $\mathcal O_\infty$-stable, or full respectively, exactly when $A$ is nuclear, $\mathcal O_\infty$-stable, or simple respectively. Moreover, if $\phi \colon A \to B$ is a $\ast$-homomorphism and $A$ or $B$ is nuclear or $\mathcal O_\infty$-stable respectively, then $\phi$ is nuclear or strongly $\mathcal O_\infty$-stable respectively. Finally, if $B$ is simple and $\phi$ is injective, then $\phi$ is also full, while if $A$ is simple, then $\phi$ is full when corestricted to the two-sided, closed ideal generated by its image.
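For orientation, note that the asymptotic relations in the second bullet are, in the limit, precisely the relations of isometries with mutually orthogonal ranges -- the defining relations of the generators of $\mathcal O_\infty$. The following display is a standard presentation of $\mathcal O_\infty$, not a definition from this paper:

```latex
% O_infty is the universal C*-algebra generated by a sequence of
% isometries with mutually orthogonal ranges; equivalently, its
% generators satisfy  s_i^* s_j = delta_{ij} 1  for all i, j.
\[
\mathcal O_\infty \;=\; C^\ast\Bigl( s_1, s_2, s_3, \dots \,\Bigm|\, s_i^\ast s_j = \delta_{i,j} 1 \Bigr).
\]
```

In the definition above only two paths appear, and the relations are required to hold only asymptotically and relative to $\phi(A)$, i.e.~after multiplying by elements $\phi(a)$.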
Let \begin{equation} \Gamma_i \colon KK_\nuc(A,B) \to \Hom(K_i(A), K_i(B)), \end{equation} for $i=0,1$ be the canonical homomorphisms induced by the Kasparov product under the canonical identifications $KK_\nuc(\mathbb C, B) \cong K_0(B)$ and $KK_\nuc(C_0(\mathbb R), B) \cong K_1(B)$, and let \begin{equation} \Gamma \colon KK_\nuc(A,B) \to \Hom(K_\ast(A),K_\ast(B)) \end{equation} be the induced homomorphism. If $\phi \colon A\to B$ is a nuclear $\ast$-homomorphism, then $\Gamma_i(KK_\nuc(\phi)) = \phi_i \colon K_i(A) \to K_i(B)$ for $i=0,1$ is the induced map in $K$-theory. So if $A$ and $B$ are unital $C^\ast$-algebras, and if $\phi \colon A\to B$ is a unital, nuclear $\ast$-homomorphism, then \begin{equation} \Gamma_0(KK_\nuc(\phi)) ([1_A]_0) = \phi_0([1_A]_0) = [1_B]_0. \end{equation} The following is the main existence part in the Kirchberg--Phillips theorem. \begin{introtheorem}[Existence -- full case]\label{t:existsimple} Let $A$ be a separable, exact $C^\ast$-algebra, let $B$ be a $\sigma$-unital $C^\ast$-algebra containing a full, properly infinite projection, and let $x\in KK_\nuc(A,B)$. Then there exists a nuclear, strongly $\mathcal O_\infty$-stable, full $\ast$-homo\-morphism $\phi \colon A \to B$ such that $KK_\nuc(\phi) = x$. Moreover, if $A$ and $B$ are unital, then there exists a unital, nuclear, strongly $\mathcal O_\infty$-stable, full $\ast$-homomorphism $\phi \colon A \to B$ for which $KK_\nuc(\phi) = x$ if and only if \begin{equation} \Gamma_0(x)([1_A]_0) = [1_B]_0. \end{equation} \end{introtheorem} The next goal is to present a uniqueness theorem to keep the above existence theorem company. In order to do so, one must first know which equivalence relation of $\ast$-homomorphisms to obtain uniqueness by. The obvious choice would be approximate unitary equivalence as this is what is needed for classification of $C^\ast$-algebras, but this does not imply agreement in $KK_\nuc$. The right notion should have more of a homotopic flavour. 
Let $\phi ,\psi \colon A \to B$ be $\ast$-homomorphisms for which $A$ is separable. Say that $\phi$ and $\psi$ are \emph{asymptotically Murray--von Neumann equivalent}, written $\phi \sim_\asMvN \psi$, if there is a norm-continuous path $(v_t)_{t\in [0,\infty)}$ of contractions in the multiplier algebra $\multialg{B}$, such that \begin{equation} \lim_{t\to\infty} \| v_t^\ast \phi(a) v_t - \psi(a) \| = 0, \qquad \lim_{t\to \infty} \| v_t \psi(a) v_t^\ast - \phi(a)\| = 0 \end{equation} for all $a\in A$. If each $v_t$ can be chosen to be a unitary, then $\phi$ and $\psi$ are said to be \emph{asymptotically unitarily equivalent}, written $\phi \sim_\asu \psi$. In the special case where $A = \mathbb C$, $\phi \sim_\asMvN \psi$ (respectively $\phi \sim_\asu \psi$) if and only if the projections $\phi(1_\mathbb{C})$ and $\psi(1_{\mathbb C})$ are Murray--von Neumann equivalent (respectively unitarily equivalent). Asymptotic Murray--von Neumann equivalence and asymptotic unitary equivalence are (for general $A$) related in much the same way as Murray--von Neumann equivalence and unitary equivalence of projections are. For instance, $\phi \sim_\asMvN \psi$ if and only if $\phi \oplus 0 \sim_\asu \psi \oplus 0$. It turns out that asymptotic Murray--von Neumann equivalence is exactly the equivalence relation of $\ast$-homomorphisms which is captured by $KK_\nuc$. However, this is not strong enough in general to obtain classification of $C^\ast$-algebras, see \cite[Remark 3.15]{Gabe-O2class} for an example. Luckily, asymptotic Murray--von Neumann equivalence and asymptotic unitary equivalence turn out to agree in many interesting cases, for instance whenever $B$ is stable, or whenever $A,B,\phi$ and $\psi$ are all unital. This explains why we obtain classification of stable and of unital $C^\ast$-algebras. The following is the uniqueness theorem for full maps.
\begin{introtheorem}[Uniqueness -- full case]\label{t:uniquesimple} Let $A$ be a separable $C^\ast$-algebra, and let $B$ be a $\sigma$-unital $C^\ast$-algebra. Suppose that $\phi, \psi\colon A \to B$ are nuclear, strongly $\mathcal O_\infty$-stable, full $\ast$-homomorphisms. The following are equivalent: \begin{itemize} \item[$(i)$] $KK_\nuc(\phi) = KK_\nuc(\psi)$; \item[$(ii)$] $\phi$ and $\psi$ are asymptotically Murray--von Neumann equivalent. \end{itemize} Additionally, if either $B$ is stable, or if $A, B, \phi$ and $\psi$ are all unital, then $(i)$ and $(ii)$ are equivalent to \begin{itemize} \item[$(iii)$] $\phi$ and $\psi$ are asymptotically unitarily equivalent (with unitaries in the minimal unitisation). \end{itemize} \end{introtheorem} A nuclear, strongly $\mathcal O_\infty$-stable, full $\ast$-homomorphism $A \to B$ as above exists only when $A$ is exact, and $B$ contains a properly infinite, full projection. So the $C^\ast$-algebras to which Theorem \ref{t:uniquesimple} applies are the same as those considered in Theorem \ref{t:existsimple}. The classical approach to the Kirchberg--Phillips theorem in \cite{Kirchberg-simple} and \cite{Phillips-classification} was also through existence and uniqueness results similar to Theorems \ref{t:existsimple} and \ref{t:uniquesimple}, see also \cite[Theorems 8.2.1 and 8.3.3]{Rordam-book-classification}. In Section \ref{ss:classical} it is explained why these existence and uniqueness results are special cases of Theorems \ref{t:existsimple} and \ref{t:uniquesimple}. In order to apply existence and uniqueness results as above for classification, the identity maps of the $C^\ast$-algebras in question must also satisfy the relevant conditions on the maps. In the above case, this means that $\id_A$ and $\id_B$ should be nuclear, strongly $\mathcal O_\infty$-stable and full. These properties translate into properties of the $C^\ast$-algebras: $A$ and $B$ should be nuclear, $\mathcal O_\infty$-stable, and simple.
Adding separability assumptions to the mix, one exactly arrives at the class of \emph{Kirchberg algebras}: separable, nuclear, $\mathcal O_\infty$-stable, simple $C^\ast$-algebras. While pure infiniteness is usually considered in the definition of Kirchberg algebras in place of $\mathcal O_\infty$-stability, a by-now-classical theorem of Kirchberg \cite{Kirchberg-ICM} (see also \cite{KirchbergPhillips-embedding}) says that a separable, nuclear, simple $C^\ast$-algebra is $\mathcal O_\infty$-stable if and only if it is purely infinite. While this is of course a beautiful and highly applicable characterisation, the $\mathcal O_\infty$-stability (of maps) is really what drives all the proofs in the classification results. The main result of the first part of the paper is the following classification result due to Kirchberg \cite{Kirchberg-simple} and Phillips \cite{Phillips-classification}. Recall that an element $x\in KK(A,B)$ is \emph{invertible}, if there exists $y\in KK(B,A)$ such that $y\circ x = KK(\id_A)$ and $x\circ y = KK(\id_B)$, and that $A$ and $B$ are \emph{$KK$-equivalent} if there exists an invertible element in $KK(A,B)$. \begin{introtheorem}[Classification of Kirchberg algebras]\label{t:KP} Let $A$ and $B$ be Kirchberg algebras (separable, nuclear, purely infinite, simple $C^\ast$-algebras). \begin{itemize} \item[$(a)$] If $A$ and $B$ are stable, then $A\cong B$ if and only if $A$ and $B$ are $KK$-equivalent. Moreover, for any invertible $x\in KK(A,B)$ there exists an isomorphism $\phi \colon A \xrightarrow \cong B$, unique up to asymptotic unitary equivalence (with unitaries in the minimal unitisation), such that $KK(\phi) = x$. \item[$(b)$] If $A$ and $B$ are unital, then $A\cong B$ if and only if there is an invertible $x\in KK(A,B)$ such that $\Gamma_0(x)([1_A]_0) = [1_B]_0$. Moreover, for any such $x$ there is an isomorphism $\phi \colon A \xrightarrow \cong B$, unique up to asymptotic unitary equivalence, such that $KK(\phi) = x$. 
\end{itemize} \end{introtheorem} By a dichotomy result of Zhang \cite{Zhang-dichotomy}, any Kirchberg algebra is either stable or unital and hence the above theorem classifies all Kirchberg algebras. Recall that a separable $C^\ast$-algebra $A$ \emph{satisfies the universal coefficients theorem (UCT)} of Rosenberg and Schochet \cite{RosenbergSchochet-UCT} (in $KK$-theory) if for any $\sigma$-unital $C^\ast$-algebra $B$ there is a (natural) short exact sequence \begin{equation}\label{eq:UCT} 0 \to \Ext(K_{1-\ast}(A), K_\ast(B)) \to KK(A,B) \xrightarrow{\Gamma} \Hom(K_\ast(A), K_\ast(B)) \to 0. \end{equation} Rosenberg and Schochet proved that a separable $C^\ast$-algebra satisfies the UCT if and only if it is $KK$-equivalent to a separable $C^\ast$-algebra of Type I. A remarkable theorem of Tu \cite{Tu-BaumConnes} implies that the groupoid $C^\ast$-algebra of a locally compact, Hausdorff, second countable, amenable groupoid satisfies the UCT. It is an open problem, often referred to as the \emph{UCT problem}, whether every separable, nuclear $C^\ast$-algebra satisfies the UCT. An elementary consequence of the UCT, see \cite[Proposition 7.3]{RosenbergSchochet-UCT}, is that the following holds whenever both $A$ and $B$ satisfy the UCT: if $\alpha_\ast \colon K_\ast(A) \to K_\ast(B)$ is an isomorphism and $x\in KK(A,B)$ satisfies $\Gamma(x) = \alpha_\ast$ (such $x$ always exists by the UCT short exact sequence \eqref{eq:UCT}), then $x$ is invertible in $KK$. Hence $C^\ast$-algebras satisfying the UCT are strongly classified by $K$-theory up to $KK$-equiva\-lence.\footnote{\emph{Strong classification} means that one may lift an isomorphism on the invariant to an isomorphism.} The following is therefore immediately obtained from Theorem \ref{t:KP}. \begin{introtheorem}[Classification of UCT Kirchberg algebras]\label{t:KPUCT} Let $A$ and $B$ be Kirchberg algebras satisfying the UCT. 
\begin{itemize} \item[$(a)$] If $A$ and $B$ are stable, then $A\cong B$ if and only if $K_\ast(A) \cong K_\ast(B)$. Moreover, for any isomorphism $\alpha_\ast \colon K_\ast(A) \xrightarrow \cong K_\ast(B)$ there is an isomorphism $\phi \colon A \xrightarrow \cong B$ such that $\phi_\ast = \alpha_\ast$. \item[$(b)$] If $A$ and $B$ are unital, then $A\cong B$ if and only if there exists an isomorphism $\alpha_\ast \colon K_\ast(A) \xrightarrow \cong K_\ast(B)$ such that $\alpha_0([1_A]_0) = [1_B]_0$. Moreover, for any such $\alpha_\ast$, there is an isomorphism $\phi \colon A \xrightarrow \cong B$ such that $\phi_\ast = \alpha_\ast$. \end{itemize} \end{introtheorem} Note that since $K$-theory is a weaker invariant than $KK$-theory, one no longer has uniqueness of the isomorphisms in Theorem \ref{t:KPUCT} as opposed to Theorem \ref{t:KP}. \subsection*{The main results -- the general case} In order to explain the general classification results, one must consider $C^\ast$-algebras with specified (two-sided, closed) ideal structure, and a version of $KK$-theory which takes this extra information into account. This is completely analogous to considering group equivariant $KK$-theory, when the $C^\ast$-algebras come equipped with a group action. To be more precise, for a $C^\ast$-algebra $A$ let $\mathcal I(A)$ denote the partially ordered set of two-sided, closed ideals in $A$. Similarly, for a topological space $X$ let $\mathcal O(X)$ denote the partially ordered set of open subsets of $X$. Both $\mathcal I(A)$ and $\mathcal O(X)$ are complete lattices. An \emph{action of $X$ on $A$} is an order preserving map $\Phi_A \colon \mathcal O(X) \to \mathcal I(A)$, and an \emph{$X$-$C^\ast$-algebra} is a $C^\ast$-algebra $A$ together with an action $\Phi_A$ of $X$ on $A$. Usually $\Phi_A$ is eliminated from the notation by defining $A(U) := \Phi_A(U)$ for $U\in \mathcal O(X)$. 
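A minimal concrete example may help fix ideas (the space and labels here are illustrative, not notation used elsewhere in the paper). Let $X = \{1,2\}$ be the two-point Sierpiński space, whose open sets are $\emptyset$, $\{1\}$ and $X$. With the natural normalisation $A(\emptyset) = 0$ and $A(X) = A$, an action of $X$ on $A$ then amounts to the choice of a single two-sided, closed ideal:

```latex
% An order preserving map Phi_A: O(X) -> I(A) with A(emptyset)=0 and
% A(X)=A is determined by the ideal I := A({1}).  The X-C*-algebra
% (A, Phi_A) is tight exactly when I(A) = {0, I, A}, i.e. when A has
% exactly one non-trivial two-sided, closed ideal.
\[
\mathcal O(X) = \{\, \emptyset,\ \{1\},\ X \,\}, \qquad
A(\emptyset) = 0, \quad A(\{1\}) = I, \quad A(X) = A.
\]
```

Tight actions of the Sierpiński space thus capture $C^\ast$-algebras with exactly one non-trivial ideal, the setting of Rørdam's six-term exact sequence classification recalled later in this introduction.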
A map $\phi \colon A\to B$ is $X$-equivariant if $\phi(A(U)) \subseteq B(U)$ for all $U\in \mathcal O(X)$. A special -- and very important -- type of action is that of the primitive ideal space $\Prim A$ on $A$, given by the canonical order isomorphism $\I_A \colon \mathcal O(\Prim A) \to \mathcal I(A)$, see \cite[Theorem 4.1.3]{Pedersen-book-automorphism}. More generally, an $X$-$C^\ast$-algebra $A$ is called \emph{tight} if the action $\Phi_A \colon \mathcal O(X) \to \mathcal I(A)$ is an order isomorphism. Note that any isomorphism $\phi \colon A \xrightarrow \cong B$ of $C^\ast$-algebras induces a tight action $\Phi_B$ of $\Prim A$ on $B$, such that $\phi \colon (A,\I_A) \xrightarrow \cong (B,\Phi_B)$ is an isomorphism of $\Prim A$-$C^\ast$-algebras. Hence it suffices to classify all tight $X$-$C^\ast$-algebras for different spaces $X$. In the special case where $X$ is a one-point space, tightness of $A$ means that it is simple, so classifying simple $C^\ast$-algebras is the same as classifying tight $X$-$C^\ast$-algebras in the case where $X$ is a one-point space. Whenever $A$ is a separable $X$-$C^\ast$-algebra and $B$ is a $\sigma$-unital $X$-$C^\ast$-algebra, one may construct groups $KK(X; A, B)$ and $KK_\nuc(X; A, B)$ analogously as one constructs $KK(A,B)$ and $KK_\nuc(A,B)$. In fact, just as when constructing $KK^G(A, B)$ for $G$-$C^\ast$-algebras where $G$ is a group, one constructs $KK(X)$ and $KK_\nuc(X)$ by only considering Kasparov modules which remember the $X$-$C^\ast$-algebra structure. As in the classical case, the Kasparov product induces a bilinear product, and it therefore makes sense to talk about invertible $KK(X)$-elements. This is all done in full detail in Section \ref{s:KK}. The main classification result will be in terms of the existence of invertible $KK(X)$-elements, as in Theorem \ref{t:KP}. 
Every $X$-equivariant $\ast$-homomorphism $\phi \colon A \to B$ induces an element $KK(X;\phi) \in KK(X; A, B)$, and if $A$ is exact and $\phi$ is nuclear, then one obtains $KK_\nuc(X;\phi) \in KK_\nuc(X; A, B)$. There are canonical forgetful maps \begin{equation} KK_\nuc(X; A, B) \to KK(X; A, B) \to KK(A,B), \end{equation} and thus every element $x\in KK_\nuc(X; A,B)$ induces a homomorphism $\Gamma_i(x) \colon K_i(A) \to K_i(B)$ for $i=0,1$ which is compatible with taking compositions of (nuclear) $X$-equivariant $\ast$-homomorphisms in the obvious way. Just as in Theorems \ref{t:existsimple} and \ref{t:uniquesimple}, the focus will be on nuclear, strongly $\mathcal O_\infty$-stable $\ast$-homomorphisms. However, fullness in the ideal-related setting is a bit more delicate. The idea is the following: let $\phi \colon A \to B$ be an $X$-equivariant $\ast$-homomorphism between $X$-$C^\ast$-algebras. Suppose that for every $a\in A$ there is a smallest open subset $U_a$ of $X$ such that $a\in A(U_a)$. Then $\phi(a)$ will have to be contained in $B(U_a)$ by $X$-equivariance of $\phi$. The map $\phi$ is called \emph{$X$-full} if $\phi(a)$ is full in $B(U_a)$ for every $a\in A$. In the case where $X$ is a one-point set, so that the only open subsets are $\emptyset$ and $X$, and the action on $A$ (similarly on $B$) is $A(\emptyset) = 0$ and $A(X) = A$, one has, for every $a\in A$, that $U_a = X$ if $a\neq 0$ and $U_a = \emptyset$ if $a=0$. Hence one arrives at the usual definition of fullness: $\phi \colon A \to B$ is $X$-full exactly when $\phi(a)$ is full in $B = B(X)$ for every non-zero $a\in A$. In the case where $A$ and $B$ are tight $X$-$C^\ast$-algebras, $\phi \colon A \to B$ being $X$-full means that if $U\in \mathcal O(X)$ and if $a\in A(U)$ is a full element, then $\phi(a)$ is full in $B(U)$. In particular, any isomorphism of tight $X$-$C^\ast$-algebras is $X$-full, just as isomorphisms of simple $C^\ast$-algebras are full.
It turns out that the existence of such a minimal open set $U_a$ for every $a\in A$ is equivalent to the action $\Phi_A \colon \mathcal O(X) \to \mathcal I(A)$ preserving all infima. In this case, the $X$-$C^\ast$-algebra $A$ is said to be \emph{lower semicontinuous}. Hence $X$-fullness of maps is only defined when the domain is lower semicontinuous. Additionally, an $X$-$C^\ast$-algebra is said to be \begin{itemize} \item \emph{monotone continuous} if the action preserves infima and increasing suprema; \item \emph{upper semicontinuous} if the action preserves suprema; \item \emph{$X$-compact} if the action preserves compact containment (see Definition \ref{d:lattice}). \end{itemize} Note that a tight $X$-$C^\ast$-algebra satisfies all the above conditions, i.e.~it is lower semicontinuous, monotone continuous, upper semicontinuous, and $X$-compact. \begin{introtheorem}[Existence]\label{t:irexistence} Let $X$ be a topological space, let $A$ be a separable, exact, monotone continuous $X$-$C^\ast$-algebra, and let $B$ be an $\mathcal O_\infty$-stable, upper semicontinuous, $X$-compact $X$-$C^\ast$-algebra. For every element $x\in KK_\nuc(X; A,B)$ there exists a nuclear, $X$-full $\ast$-homomorphism $\phi \colon A \to B$ such that $KK_\nuc(X; \phi) = x$. Moreover, if $A$ and $B$ are unital, then we may pick $\phi$ to also be unital if and only if the following conditions hold: \begin{itemize} \item[(1)] $B(U) = B$ for every $U\in \mathcal O(X)$ satisfying $A(U) = A$, and \item[(2)] $\Gamma_0(x)([1_A]_0) = [1_B]_0$ in $K_0(B)$. \end{itemize} \end{introtheorem} In Theorem \ref{t:existsimple} the target $C^\ast$-algebra $B$ was not assumed to be $\mathcal O_\infty$-stable, only to contain a properly infinite, full projection. By using the $\mathcal O_2$-embedding theorem, one may equivalently ask that there exists a full, nuclear, $\mathcal O_\infty$-stable $\ast$-homomorphism $A\to B$. 
There is also a more general version of Theorem \ref{t:irexistence} with an analogous assumption -- the existence of an $X$-full, nuclear, $\mathcal O_\infty$-stable $\ast$-homomorphism $A\to B$ -- see Proposition \ref{p:existence}. One obtains the existence result above by combining that proposition with an ideal-related version of the $\mathcal O_2$-embedding theorem, Corollary \ref{c:irO2X}. The uniqueness theorem below is almost as general as Theorem \ref{t:uniquesimple}, although the asymptotic unitary equivalence is with multiplier unitaries instead of unitaries in the unitisation. \begin{introtheorem}[Uniqueness]\label{t:iruniqueness} Let $X$ be a topological space, let $A$ be a separable, exact, lower semicontinuous $X$-$C^\ast$-algebra, and let $B$ be a $\sigma$-unital $X$-$C^\ast$-algebra. Suppose that $\phi, \psi \colon A \to B$ are $X$-full, nuclear, strongly $\mathcal O_\infty$-stable $\ast$-homo\-morphisms. The following are equivalent: \begin{itemize} \item[$(i)$] $KK_\nuc(X; \phi) = KK_\nuc(X; \psi)$; \item[$(ii)$] $\phi$ and $\psi$ are asymptotically Murray--von Neumann equivalent. \end{itemize} Additionally, if either $B$ is stable, or if $A,B,\phi$ and $\psi$ are all unital, then $(i)$ and $(ii)$ are equivalent to \begin{itemize} \item[$(iii)$] $\phi$ and $\psi$ are asymptotically unitarily equivalent (with multiplier unitaries in the stable case). \end{itemize} \end{introtheorem} As when classifying Kirchberg algebras, one can apply the above existence and uniqueness theorems for classification provided the identity maps satisfy the conditions on the maps. So $\id_A$ and $\id_B$ should be nuclear, strongly $\mathcal O_\infty$-stable and $X$-full. This translates into properties of the $X$-$C^\ast$-algebras: $A$ and $B$ must be nuclear, $\mathcal O_\infty$-stable, and the action of $X$ should be \emph{tight}, i.e.~the actions $\Phi_A \colon \mathcal O(X) \to \mathcal I(A)$ and $\Phi_B \colon \mathcal O(X) \to \mathcal I(B)$ should be order isomorphisms.
Note that tightness is for $X$-$C^\ast$-algebras what simplicity is for $C^\ast$-algebras without an action of $X$. As with usual $KK$, two separable $X$-$C^\ast$-algebras $A$ and $B$ are \emph{$KK(X)$-equivalent} if $KK(X;A, B)$ contains an invertible element $x$, i.e.~an element for which there exists $y\in KK(X;B,A)$ such that $y\circ x = KK(X;\id_A)$ and $x\circ y = KK(X; \id_B)$. \begin{introtheorem}[Classification]\label{t:nonsimpleclass} Let $X$ be a topological space, and suppose that $A$ and $B$ are separable, nuclear, $\mathcal O_\infty$-stable, tight $X$-$C^\ast$-algebras. \begin{itemize} \item[$(a)$] If $A$ and $B$ are stable, then $A$ and $B$ are isomorphic as $X$-$C^\ast$-algebras if and only if they are $KK(X)$-equivalent. Moreover, for any invertible $x\in KK(X; A, B)$ there exists an $X$-equivariant isomorphism $\phi \colon A \xrightarrow \cong B$, unique up to asymptotic unitary equivalence (with multiplier unitaries), such that $KK(X; \phi) = x$. \item[$(b)$] If $A$ and $B$ are unital, then $A$ and $B$ are isomorphic as $X$-$C^\ast$-algebras if and only if there is an invertible $x\in KK(X; A, B)$ such that $\Gamma_0(x)([1_A]_0) = [1_B]_0$. Moreover, for any such $x$ there is an $X$-equivariant isomorphism $\phi \colon A \xrightarrow \cong B$, unique up to asymptotic unitary equivalence, such that $KK(X; \phi) =x$. \end{itemize} \end{introtheorem} One can avoid the actions of topological spaces in the statement of Theorem \ref{t:nonsimpleclass} by introducing the following notation. Say that two separable $C^\ast$-algebras $A$ and $B$ are \emph{ideal-related $KK$-equivalent} if there exists an order isomorphism $\Phi \colon \mathcal I(A) \xrightarrow \cong \mathcal I(B)$ such that the induced tight $\Prim A$-$C^\ast$-algebras $(A, \I_A)$ and $(B, \Phi \circ \I_A)$ are $KK(\Prim A)$-equivalent. One immediately obtains the following corollary. 
\begin{introcorollary}\label{c:class} Let $A$ and $B$ be separable, nuclear, $\mathcal O_\infty$-stable $C^\ast$-algebras. Then $A$ and $B$ are stably isomorphic if and only if they are ideal-related $KK$-equivalent. \end{introcorollary} Of course the above corollary can be formulated in such a way that the classification is strong, i.e.~so that the ideal-related $KK$-equivalence can be lifted to an isomorphism of the stabilised $C^\ast$-algebras, and there is a unital version when $A$ and $B$ are unital. \subsection*{$K$-theoretic classification} One thing that makes the Kirchberg--Phillips theorem highly applicable is Theorem \ref{t:KPUCT}; that UCT Kirchberg algebras are classified not just by $KK$-theory, but even by $K$-theory. The first similar result in the non-simple, purely infinite case was due to Rørdam \cite{Rordam-classsixterm} who showed that separable, nuclear, stable, purely infinite $C^\ast$-algebras $A$ satisfying the UCT, which contain \emph{exactly one} non-zero, proper (two-sided, closed) ideal $I$, also satisfying the UCT, are classified by the six-term exact sequence \begin{equation} \xymatrix{ K_0(I) \ar[r]^{\iota_0} & K_0(A) \ar[r]^{\pi_0\,\,\,} & K_0(A/I) \ar[d]^{\partial_0} \\ K_1(A/I) \ar[u]^{\partial_1} & K_1(A) \ar[l]_{\quad \pi_1} & K_1(I). \ar[l]_{\,\,\, \iota_1} } \end{equation} So essentially this classification applies when $\mathcal I(A) = \{ 0, I , A\}$ and all ideals satisfy the UCT, using the induced six-term exact sequence in $K$-theory as the classifying invariant. The following very natural question arises. \begin{introquestion} Are separable, nuclear, $\mathcal O_\infty$-stable $C^\ast$-algebras for which every two-sided, closed ideal satisfies the UCT classified (up to stable isomorphism) by a $K$-theoretic invariant? \end{introquestion} By ``a $K$-theoretic invariant'' one would hope for an invariant for which $K$-theory plays a dominating part, in the same spirit as the six-term exact sequence above.
By Corollary \ref{c:class}, it suffices to show that any isomorphism of the $K$-theoretic invariant in question lifts to an ideal-related $KK$-equivalence. One does have the following partial solution to the question, which is immediately obtained by combining Theorem \ref{t:nonsimpleclass} with \cite[Theorem 6.2]{Gabe-cplifting} and \cite[Theorem 4.6]{DadarlatMeyer-E-theory}. This paper does not contain new proofs of the results from \cite{Gabe-cplifting} and \cite{DadarlatMeyer-E-theory}. \begin{introtheorem} Let $X$ be a topological space, suppose that $A$ and $B$ are separable, nuclear, $\mathcal O_\infty$-stable, stable, tight $X$-$C^\ast$-algebras for which all ideals satisfy the UCT, and let $\alpha \in KK(X; A, B)$. If $\alpha$ induces an isomorphism in $K$-theory $\alpha(U) \colon K_\ast (A(U)) \xrightarrow \cong K_\ast(B(U))$ for every $U\in \mathcal O(X)$, then $\alpha$ lifts to an $X$-equivariant isomorphism $A\xrightarrow \cong B$. \end{introtheorem} This is unfortunately not an optimal solution to the above question since one needs to know that such an $\alpha$ exists before one can apply the theorem. However, one would only need to lift isomorphisms on the $K$-theoretic invariants to some $KK(X)$-elements without worrying about whether the lift is invertible or not, since invertibility comes for free by the above theorem. The most general and systematic attack on the above problem is due to Meyer and Nest \cite{MeyerNest-homalg}, \cite{Meyer-homalg2} where they study the category of separable $X$-$C^\ast$-algebras (plus extra assumptions on the actions) with morphism sets $KK(X; A, B)$. This category turns out to be triangulated, and the question above reduces to doing homological algebra in such triangulated categories. 
This allowed for partial solutions to the above question assuming the $C^\ast$-algebras have finitely many ideals, see \cite{MeyerNest-filtrated}, \cite{BentmannKohler-UCTfinite}, \cite{Bentmann-vanishingbdry}, \cite{ArklintRestorffRuiz-classrr0}, \cite{BentmannMeyer-generalclass}, and \cite{Meyer-generalclassII}. While most applications of the Meyer--Nest techniques are for $C^\ast$-algebras with finitely many ideals, the general machinery is applicable much more broadly, and was for instance used for classification of certain continuous fields of Kirchberg algebras in \cite{DadarlatMeyer-E-theory} and \cite{BentmannDadarlat-oneparKirchberg}. A few other approaches have also been used for classification of non-simple, purely infinite $C^\ast$-algebras. Restorff \cite{Restorff-classCK} showed, using symbolic dynamics and Theorem \ref{t:nonsimpleclass} instead of the Meyer--Nest framework, that purely infinite Cuntz--Krieger algebras are classified by an invariant which essentially consists of the $K$-theory of all subquotients $J/I$ of the $C^\ast$-algebra, where $I\subseteq J$ are ideals. This classification is however internal in the sense that it can only be used if one knows that \emph{both} $C^\ast$-algebras in question are purely infinite Cuntz--Krieger algebras. A much more general, but still internal, classification result was recently obtained by Eilers, Restorff, Ruiz, and Sørensen \cite{ERRS-classunitalgraph} where all unital graph $C^\ast$-algebras are classified by a $K$-theoretic invariant. This remarkable result reaches far beyond the purely infinite case, and does not use Theorem \ref{t:nonsimpleclass} to obtain classification. 
A different approach is due to Dadarlat and Pennig \cite{DadarlatPennig-DixmierDouady}, where they prove a Dixmier--Douady type classification for certain continuous fields over compact, metrisable spaces $X$ for which the fibres are $\mathcal D\otimes \mathcal K$ for a fixed strongly self-absorbing $C^\ast$-algebra $\mathcal D$. When $\mathcal D$ is purely infinite and $X$ is finite dimensional, the $C^\ast$-algebras classified are separable, nuclear, and $\mathcal O_\infty$-stable by \cite{Dadarlat-findim} and are thus covered by Theorem \ref{t:nonsimpleclass}, although Dadarlat and Pennig prove the classification via other methods. The classifying invariant in this case is an induced element in a generalised cohomology group $\overline{E}^1_{\mathcal D}(X)$, see \cite{DadarlatPennig-DixmierDouady} and \cite{DadarlatPennig-DixmierDouadyII} for more details. These results might suggest that a complete $K$-theoretic invariant for classification should also depend on cohomological data of the space $X$ when $X$ is not zero-dimensional. \subsection*{Notation} Let $\mathcal K := \mathcal K(\ell^2(\mathbb N))$ denote the compact operators, and let $e_{i,j}$ denote the standard $(i,j)$'th matrix unit in $\mathcal K$ for $i,j\in \mathbb N$. Similarly $e_{i,j}$ denotes the standard matrix unit in the matrix algebra $M_n(\mathbb C)$. A $C^\ast$-algebra $B$ is \emph{stable} if $B\cong B \otimes \mathcal K$. The multiplier algebra of a $C^\ast$-algebra $B$ is denoted $\multialg{B}$, and the induced corona algebra $\multialg{B}/B$ is denoted by $\corona{B}$. There are two types of unitisations which play a role in the paper: the \emph{minimal unitisation} $\widetilde B$, which is $B$ if $B$ was already unital; and the \emph{forced unitisation} $A^\dagger$ which is $A\oplus \mathbb C$ if $A$ was already unital. 
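To illustrate the distinction (only the unital case separates the two constructions): for the unital $C^\ast$-algebra $\mathbb C$ one has
\begin{equation*}
\widetilde{\mathbb C} = \mathbb C, \qquad \mathbb C^\dagger = \mathbb C \oplus \mathbb C,
\end{equation*}
whereas for a non-unital $C^\ast$-algebra such as $\mathcal K$ the two constructions agree, $\widetilde{\mathcal K} = \mathcal K^\dagger = \mathcal K + \mathbb C 1 \subseteq \mathcal B(\ell^2(\mathbb N))$.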
When $\rho \colon A \to B$ is a contractive completely positive map, $\rho^\dagger \colon A^\dagger \to \widetilde B$ (or $\rho^\dagger \colon A^\dagger \to \multialg{B}$) is the induced unital map given by $\rho^{\dagger}(a+ \mu 1_{A^\dagger}) = \rho(a) + \mu 1_{\widetilde{B}}$ for $a\in A$ and $\mu \in \mathbb C$. Then $\rho^\dagger$ is completely positive by \cite[Proposition 2.2.1 and Lemma 2.2.3]{BrownOzawa-book-approx}, and is a $\ast$-homomorphism whenever $\rho$ is a $\ast$-homomorphism. As a rule of thumb, the forced unitisation $A^\dagger$ is used for domains of maps, whereas minimal unitisations $\widetilde B$ and multiplier algebras $\multialg{B}$ are used for codomains. \subsection*{Acknowledgement} This is the product of many years of work, and has greatly benefited from conversations with a lot of people. To this end I thank Joan Bosa, Jorge Castillejos, Marius Dadarlat, Efren Ruiz, Chris Schafhauser, Aidan Sims, Gábor Szabó, Simon Wassermann, Stuart White, and Wilhelm Winter for fruitful discussions. Many more people have indirectly played an important role, and I thank you all. Parts of this project were completed during a research visit at the Mittag–Leffler Institute during the programme Classification of Operator Algebras: Complexity, Rigidity, and Dynamics, and during a visit at the CRM institute during the programme IRP Operator Algebras: Dynamics and Interactions. I am thankful for their hospitality during these visits. While I was a PhD student I completed certain crucial steps for the overall proof, at which time I was supported by the Danish National Research Foundation through the Centre for Symmetry and Deformation (DNRF92). This research was also funded by the Carlsberg Foundation through an internationalisation fellowship. Finally, I would like to thank the referee for many helpful comments and suggestions. 
\section{Equivalence of $\ast$-homomorphisms}\label{s:hom} Approximate and asymptotic unitary equivalence of $\ast$-homomorphisms is often too strong an equivalence relation for obtaining classification (or uniqueness results) of $\ast$-homomorphisms, at least when working with non-unital $\ast$-homo\-morphisms. This motivated the notion of approximate and asymptotic Murray--von Neumann equivalence. In the following $\mathbb R_+ := [0,\infty)$. \begin{definition}[{\cite[Definition 3.4]{Gabe-O2class}}]\label{d:MvN} Let $A$ and $B$ be $C^\ast$-algebras, and let $\phi,\psi \colon A \to B$ be $\ast$-homomorphisms. Say that $\phi$ and $\psi$ are \emph{approximately Murray--von Neumann equivalent}, written $\phi \sim_{\aMvN} \psi$, if for any finite set $\mathcal F \subset A$ and any $\epsilon >0$, there exists a contraction $w\in \multialg{B}$ such that \begin{equation} \| w^\ast \phi(a) w - \psi(a)\| < \epsilon, \qquad \| w \psi(a) w^\ast - \phi(a)\| < \epsilon \end{equation} for all $a\in \mathcal F$. If one may always pick $w$ to be a unitary, then $\phi$ and $\psi$ are said to be \emph{approximately unitarily equivalent}, written $\phi \sim_{\au} \psi$. If $A$ is separable, say that $\phi$ and $\psi$ are \emph{asymptotically Murray--von Neumann equivalent}, written $\phi \sim_{\asMvN} \psi$, if there is a norm-continuous path $(v_t)_{t\in \mathbb R_+}$ of contractions in $\multialg{B}$, such that \begin{equation} \lim_{t\to \infty} \| v_t^\ast \phi(a) v_t - \psi (a) \| = 0 , \qquad \lim_{t\to \infty} \| v_t \psi(a) v_t^\ast - \phi (a) \| = 0 \end{equation} for all $a\in A$. If one may pick each $v_t$ to be a unitary, then $\phi$ and $\psi$ are said to be \emph{asymptotically unitarily equivalent}, written $\phi \sim_{\asu} \psi$. \end{definition} \begin{remark} The definition of approximate and asymptotic Murray--von Neumann equivalence in \cite[Definition 3.4]{Gabe-O2class} was slightly different from the above definition, but the definition above is equivalent. 
In \cite[Definition 3.4]{Gabe-O2class} it was required that each $w$ and $v_t$ was in $B$ instead of in $\multialg{B}$. Clearly that definition implies the condition in Definition \ref{d:MvN}. If $v_t\in \multialg{B}$ as in the above definition, let $(e_t)_{t\in \mathbb R_+}$ be a continuous approximate identity in $A$.\footnote{Such an approximate identity exists in any $\sigma$-unital $C^\ast$-algebra. For instance, one could fix an approximate identity $(e_n)_{n\in \mathbb N}$ and let $e_{n+r} = (1-r) e_n + r e_{n+1}$ for $r\in [0,1]$ and $n\in \mathbb N$.} Then $w_t := \phi(e_t) v_t \psi(e_t) \in B$, and \begin{equation} \lim_{t\to \infty} \| w_t^\ast \phi(a) w_t - \psi (a) \| = 0 , \qquad \lim_{t\to \infty} \| w_t \psi(a) w_t^\ast - \phi (a) \| = 0 \end{equation} for all $a\in A$, so the two definitions are equivalent. The same holds in the approximate case. \end{remark} By \cite[Lemma 3.5]{Gabe-O2class} one does not need to assume that $w$ in Definition \ref{d:MvN} is contractive. However, it is convenient to assume that $w$ and all $v_t$ are contractions so that Remark \ref{r:MvNelement} below is more readily applicable. The following was essentially contained in \cite[Proposition 3.12]{Gabe-O2class}. An addition has been made in a special case for obtaining asymptotic and approximate unitary equivalence with unitaries in the minimal unitisation instead of the multiplier algebra. \begin{proposition}\label{p:MvNvsue} Let $A$ and $B$ be $C^\ast$-algebras with $A$ separable, and let $\phi, \psi \colon A \to B$ be $\ast$-homomorphisms. If either \begin{itemize} \item[$(a)$] $A,B,\phi$ and $\psi$ are all unital, or \item[$(b)$] $B$ is stable, \end{itemize} then $\phi \sim_{\aMvN} \psi$ if and only if $\phi \sim_{\au} \psi$, and $\phi \sim_{\asMvN} \psi$ if and only if $\phi \sim_{\asu} \psi$. 
Additionally, in case $(b)$, if $B$ is $\sigma$-unital and contains a full projection, then the unitaries implementing the approximate and asymptotic unitary equivalences may be taken in the minimal unitisation $\widetilde B$ of $B$. \end{proposition} \begin{proof} The first part is \cite[Proposition 3.12]{Gabe-O2class}. For the additional part, suppose that $B$ is $\sigma$-unital and stable with a full projection $p$. By Brown's stable isomorphism theorem \cite{Brown-stableiso}, $B\cong pBp \otimes \mathcal K(H)$ where $H$ is a separable, infinite dimensional Hilbert space. We assume without loss of generality that $B = pBp \otimes \mathcal K(H)$. Let $(\xi_n)_{n\in \mathbb N}$ be an orthonormal basis for $H$, and let $T_1,T_2\in \mathcal B(H)$ be given by $T_1 \xi_n = \xi_{2n-1}$ and $T_2 \xi_n = \xi_{2n}$ for $n\in \mathbb N$. In the proof of \cite[Proposition 3.12]{Gabe-O2class}, a norm-continuous unitary path $(U_t)_{t\in \mathbb R_+}$ in $\mathcal B(H)$ is constructed as follows: Let $(V_{k, t})_{t\in [k-1,k]}$ for $k\geq 2$ be a norm-continuous unitary path with $V_{k,k-1} = 1_H$, which restricts point-wise to the identity on $\mathrm{span}\{ \xi_k,\xi_{2k-1}\}^\perp$ and such that $V_{k,k}$ flips $\xi_k$ and $\xi_{2k-1}$. Defining $U_t := V_{k,t} V_{k-1,k-1} \cdots V_{2,2}$ for $t\in [k-1,k]$ and $k\geq 2$ gives a continuous unitary path, and it is shown in the proof of \cite[Proposition 3.12]{Gabe-O2class} that the path of endomorphisms $U_t T_1(-) T_1^\ast U_t^\ast$ on $\mathcal K(H)$ converges point-norm to $\id_{\mathcal K(H)}$. The construction shows that if $k\geq 2$ and $t\in [k-1,k]$, then $U_t$ decomposes as $W_t \oplus 1_{H_k^\perp}$ on $H_k \oplus H_k^\perp = H$, where $H_k = \mathrm{span}\{\xi_1, \dots ,\xi_{2k-1}\}$. 
Hence, since $\mathcal B(H_k \oplus 0) \subseteq \mathcal K(H)$, it follows for $t\in [k-1,k]$ that \begin{equation} u_t := p \otimes U_t = p \otimes ((W_t - 1_{H_k})\oplus 0) + p \otimes (1_{H_k}\oplus 1_{H_k^\perp}) \in (pBp \otimes \mathcal K(H))^\sim = \widetilde B. \end{equation} So $(u_t)_{t\in [1,\infty)}$ is a norm-continuous unitary path in $\widetilde B$. Let $s_i := p \otimes T_i \in \multialg{B}$ for $i=1,2$. Then $s_1,s_2$ are isometries for which $s_1s_1^\ast + s_2 s_2^\ast =1$. Note that $u_t s_1(-) s_1^\ast u_t^\ast\colon B \to B$ converges point-norm to $\id_B$. Hence $\phi$ and $\psi$ are asymptotically unitarily equivalent to $\phi_1 := s_1\phi(-)s_1^\ast$ and $\psi_1 := s_1 \psi(-) s_1^\ast$ respectively, with unitaries in $\widetilde B$. The isomorphism $\theta \colon B \to M_2(B)$ given by $\theta(b) = (s_i^\ast b s_j)_{i,j=1,2}$ satisfies $\theta(s_1 bs_1^\ast) = b \oplus 0 \in M_2(B)$ for every $b\in B$. Hence $\theta \circ \phi_1 = \phi \oplus 0$ and $\theta \circ \psi_1 = \psi \oplus 0$. By \cite[Proposition 3.10]{Gabe-O2class}, if $\phi \sim_\mathrm{a(s)MvN} \psi$ then $\phi \oplus 0 \sim_\mathrm{a(s)u} \psi \oplus 0$ with unitaries in the minimal unitisation of $M_2(B)$. Applying $\theta^{-1}$ (extended to the minimal unitisations) it follows that $\phi \sim_\asu \phi_1\sim_\mathrm{a(s)u} \psi_1\sim_\asu \psi$ with unitaries in $\widetilde B$. \end{proof} \begin{remark}\label{r:MvNelement} For any $C^\ast$-algebra $B$, let \begin{equation} B_\infty := \prod_\mathbb{N} B /\bigoplus_\mathbb N B, \quad \textrm{and} \quad B_\as := C_b(\mathbb R_+, B)/C_0(\mathbb R_+, B) \end{equation} be the \emph{sequence algebra} and the \emph{path algebra} of $B$ respectively. Clearly $B$ embeds into $B_\infty$ and $B_\as$ as constant sequences and constant paths respectively. 
The following was observed in \cite[Observation 3.7]{Gabe-O2class} and will be used without reference: If $A$ is a separable $C^\ast$-algebra and $\phi, \psi \colon A \to B$ are $\ast$-homomorphisms then $\phi \sim_\aMvN \psi$ (respectively $\phi \sim_\asMvN \psi$) if and only if there is a contraction $v\in B_\infty$ (respectively $v\in B_\as$) such that \begin{equation} v^\ast \phi(a) v = \psi(a) , \quad \textrm{and} \quad v \psi(a) v^\ast = \phi(a) \end{equation} for all $a\in A$. Such a contraction $v\in B_\infty$ (respectively $v\in B_\as$) will be said to implement the approximate (respectively asymptotic) Murray--von Neumann equivalence. \end{remark} The following lemma illustrates how Remark \ref{r:MvNelement} will be applied (with $D = B_\infty$ or $D = B_\as$). \begin{lemma}[{\cite[Lemma 3.8]{Gabe-O2class}}]\label{l:conjhom} Let $A$ and $D$ be $C^\ast$-algebras and let $\phi, \psi \colon A \to D$ be $\ast$-homomorphisms. Suppose that there is a contraction $v\in D$ such that $v^\ast \phi(-) v = \psi$. Then \begin{itemize} \item[$(a)$] $vv^\ast \in D \cap \phi(A)'$, \item[$(b)$] $v^\ast v \psi(a) = \psi(a)$ for all $a\in A$, \item[$(c)$] $\phi(a) v = v \psi(a)$ for all $a\in A$. \end{itemize} \end{lemma} \begin{remark} For any $\ast$-homomorphism $\phi \colon A \to B$ let $B_\infty \cap \phi(A)'$ denote the commutant of $\phi(A)\subseteq B \subseteq B_\infty$, and let \begin{equation} \mathrm{Ann}_{B_\infty} \phi(A) := \{ x \in B_\infty : x \phi(A) + \phi(A) x \subseteq \{0\} \} \end{equation} be the annihilator of $\phi(A)$ in $B_\infty$. When there is no cause of confusion, $\Ann \phi(A)$ will be written instead of $\mathrm{Ann}_{B_\infty}\phi(A)$. Clearly $\Ann\phi(A)$ is an ideal in $B_\infty \cap \phi(A)'$, and the quotient $(B_\infty \cap \phi(A)')/\Ann \phi(A)$ will play an important role in what follows. Similarly one obtains an ideal $\mathrm{Ann}_{B_\as} \phi(A) = \Ann\phi(A)$ in $B_\as \cap \phi(A)'$. 
\end{remark} Relative commutants of the form $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ were studied extensively by Kirchberg in \cite{Kirchberg-Abel}. The following sums up some elementary properties of such relative commutants. \begin{lemma}\label{l:relcombasic} Let $A$ and $B$ be $C^\ast$-algebras for which $A$ is separable, and let $\phi, \psi \colon A \to B$ be $\ast$-homomorphisms. \begin{itemize} \item[$(a)$] If $C$ is a $C^\ast$-algebra containing $B$ as a hereditary $C^\ast$-subalgebra, then the inclusion $B \hookrightarrow C$ induces an isomorphism \begin{equation}\label{eq:relcomBvsM(B)} \frac{B_\infty \cap \phi(A)'}{\Ann\phi(A)} \xrightarrow{\quad\cong\quad} \frac{C_\infty \cap \phi(A)'}{\Ann\phi(A)}. \end{equation} \item[$(b)$] The embedding $B \to M_2(B)$ into the $(1,1)$-corner induces an isomorphism \begin{equation}\label{eq:relcomcorner} \frac{B_\infty \cap \phi(A)'}{\Ann\phi(A)} \xrightarrow{\quad \cong\quad} (1_{\tilde B}\oplus 0) \frac{M_2(B)_\infty \cap (\phi \oplus \psi)(A)'}{\Annn(\phi \oplus \psi)(A)} (1_{\tilde B}\oplus 0). \end{equation} \item[$(c)$] Suppose that $v\in B_\infty$ implements an approximate Murray--von Neumann equivalence $\phi \sim_\aMvN \psi$, i.e.~$v\in B_\infty$ is a contraction for which $v^\ast \phi(-) v = \psi$ and $v\psi(-) v^\ast = \phi$. Then the c.p.~map $v^\ast(-) v \colon B_\infty \to B_\infty$ induces a (multiplicative) isomorphism \begin{equation} \frac{B_\infty \cap \phi(A)'}{\Ann\phi(A)} \xrightarrow{\quad \cong\quad} \frac{B_\infty \cap \psi(A)'}{\Ann\psi(A)}. \end{equation} The inverse is induced by $v(-)v^\ast$. \end{itemize} Moreover, the same statements hold if one replaces all sequence algebras with path algebras, and ``$\sim_\aMvN$'' with ``$\sim_\asMvN$''. 
\end{lemma} Note that using part $(a)$ above to replace $B$ with its minimal unitisation $\widetilde B$, it follows that the projection $1_{\tilde B}\oplus 0 \in \frac{M_2(B)_\infty \cap (\phi \oplus \psi)(A)'}{\Annn(\phi \oplus \psi)(A)}$ in part $(b)$ above is well-defined. \begin{proof}[Proof of Lemma \ref{l:relcombasic}] We only prove the approximate version as the asymptotic version is virtually identical. $(a)$: It is obvious that one gets a well-defined $\ast$-homo\-morphism in \eqref{eq:relcomBvsM(B)}, and it is injective since \begin{equation} B_\infty \cap \mathrm{Ann}_{C_\infty}\phi(A) = \mathrm{Ann}_{B_\infty}\phi(A). \end{equation} Let $x \in C_\infty \cap \phi(A)'$, and let $(e_n)_{n\in \mathbb N}$ be a countable approximate identity in $A$. Let $f\in B_\infty$ be the element induced by $\phi(e_n)_{n\in \mathbb N}$. Clearly $f \in B_\infty \cap \phi(A)'$, and $f \phi(a) = \phi(a) f = \phi(a)$ for all $a\in A$, so $1_{\widetilde C} - f \in \Ann\phi(A)$. Hence $f x f + \Ann\phi(A) = x + \Ann\phi(A)$. As $B_\infty$ is a hereditary $C^\ast$-subalgebra of $C_\infty$, it follows that $fxf \in B_\infty \cap \phi(A)'$ so the map \eqref{eq:relcomBvsM(B)} is surjective. $(b)$: By part $(a)$ we may assume that $B$ is unital (otherwise replace $B$ by its unitisation $\widetilde B$). Clearly the $\ast$-homo\-morphism \eqref{eq:relcomcorner} is well-defined. If $b\in B_\infty \cap \phi(A)'$ satisfies that $b \oplus 0 \in \Annn(\phi\oplus \psi)(A)$, then $b\in \Ann\phi(A)$, so the map is injective. Finally, if $b = (b_{i,j})_{i,j=1,2} \in M_2(B)_\infty \cap (\phi \oplus \psi)(A)'$, then $(1_{\tilde B}\oplus 0) b (1_{\tilde B}\oplus 0) = b_{1,1} \oplus 0$, and clearly $b_{1,1} \in B_\infty \cap \phi(A)'$, so the map is surjective. $(c)$: We first check that $v^\ast (B_\infty\cap \phi(A)') v \subseteq B_\infty \cap \psi(A)'$, so let $b\in B_\infty \cap \phi(A)'$ and $a\in A$. 
By Lemma \ref{l:conjhom} we have $v\psi(a) = \phi(a) v$ and $v^\ast \phi(a) = \psi(a) v^\ast$. Hence \begin{equation} v^\ast b v \psi(a) = v^\ast b \phi(a) v = v^\ast \phi(a) b v = \psi(a) v^\ast b v. \end{equation} It follows that $v^\ast (-) v \colon B_\infty\cap \phi(A)' \to B_\infty \cap \psi(A)'$ is a well-defined c.p.~map. If $x\in \Ann \phi(A)$, then \begin{equation} v^\ast x v \psi(a) = v^\ast x \phi(a) v = 0 \end{equation} for any $a\in A$, and thus $v^\ast x v\in \Ann \psi(A)$. Hence $v^\ast(-) v$ induces a c.p.~map \begin{equation} \eta \colon \frac{B_\infty \cap \phi(A)'}{\Ann\phi(A)} \to \frac{B_\infty \cap \psi(A)'}{\Ann\psi(A)}. \end{equation} By Lemma \ref{l:conjhom}$(b)$ we have $vv^\ast \phi(a) = \phi(a)$ and $v^\ast v \psi(a) = \psi(a)$ for all $a\in A$. Hence if $b,c \in B_\infty \cap \phi(A)'$ and $a\in A$, then \begin{eqnarray} (v^\ast b v)(v^\ast c v) \psi(a) &=& v^\ast b vv^\ast c \phi(a) v \nonumber\\ &=& v^\ast b v v^\ast\phi(a) c v \nonumber\\ &=& v^\ast b \phi(a) c v \nonumber\\ &=& (v^\ast b c v) \psi(a). \end{eqnarray} In particular, $(v^\ast b v)(v^\ast c v) - (v^\ast bc v) \in \Ann\phi(A)$, so $\eta$ is a $\ast$-homomorphism. The exact same arguments as above show that $v(-)v^\ast$ induces a $\ast$-homo\-morphism \begin{equation} \rho \colon \frac{B_\infty \cap \psi(A)'}{\Ann\psi(A)} \to \frac{B_\infty \cap \phi(A)'}{\Ann\phi(A)}. \end{equation} Again by Lemma \ref{l:conjhom}$(b)$ we have $vv^\ast \phi(a) = \phi(a)$ for all $a\in A$. Thus, if $b\in B_\infty \cap \phi(A)'$ and $a\in A$ then \begin{equation} vv^\ast b vv^\ast \phi(a) = vv^\ast b \phi(a) = vv^\ast \phi(a) b = \phi(a) b = b \phi(a). \end{equation} Thus $vv^\ast b vv^\ast - b \in \Ann\phi(A)$, so the composition $\rho \circ \eta$ is the identity map. Similarly, the composition $\eta \circ \rho$ is the identity map, so the maps $\eta$ and $\rho$, which are induced by $v^\ast(-) v$ and $v(-)v^\ast$ respectively, are isomorphisms and each other's inverses. 
\end{proof} The following is an extension of \cite[Proposition 3.10]{Gabe-O2class}. \begin{proposition}\label{p:MvNeq} Let $A$ and $B$ be $C^\ast$-algebras with $A$ separable, and let $\phi, \psi \colon A \to B$ be $\ast$-homomorphisms. The following are equivalent: \begin{itemize} \item[$(i)$] $\phi$ and $\psi$ are asymptotically Murray--von Neumann equivalent; \item[$(ii)$] $\phi \oplus 0, \psi \oplus 0 \colon A \to M_2(B)$ are asymptotically unitarily equivalent; \item[$(ii')$] $\phi \otimes e_{1,1} , \psi \otimes e_{1,1} \colon A \to B \otimes \mathcal K$ are asymptotically Murray--von Neumann equivalent; \item[$(ii'')$] $\phi \otimes e_{1,1} , \psi \otimes e_{1,1} \colon A \to B \otimes \mathcal K$ are asymptotically unitarily equivalent; \item[$(iii)$] The projections \begin{equation} \left( \begin{array}{cc} 1_{\widetilde B} & 0 \\ 0 & 0 \end{array} \right), \left( \begin{array}{cc} 0 & 0 \\ 0 & 1_{\widetilde B} \end{array} \right) \quad \in \quad \frac{M_2(B)_\as \cap (\phi \oplus \psi)(A)'}{\Annn(\phi \oplus \psi)(A)} \end{equation} are Murray--von Neumann equivalent.\footnote{Here $\widetilde B$ is the minimal unitisation of $B$. These projections are well-defined by considering $C= \widetilde B$ in Lemma \ref{l:relcombasic}(a).} \end{itemize} The similar statement where one replaces ``asymptotically'' with ``approximately'', and ``$M_2(B)_\as$'' with ``$M_2(B)_\infty$'' also holds. \end{proposition} \begin{proof} The proofs in the ``asymptotic'' and the ``approximate'' cases are virtually identical, so we only do the ``asymptotic'' version. By \cite[Proposition 3.10]{Gabe-O2class}, $(i), (ii)$ and $(iii)$ are equivalent. By Proposition \ref{p:MvNvsue}$(b)$, $(ii')$ and $(ii'')$ are equivalent. $(i) \Rightarrow (ii')$: If $v_t$ implements $\phi \sim_{\asMvN} \psi$, then $v_t \otimes e_{1,1}$ implements $\phi \otimes e_{1,1} \sim_{\asMvN} \psi \otimes e_{1,1}$. 
$(ii') \Rightarrow (i)$: Note that $(1_{\multialg{B}} \otimes e_{1,1}) \multialg{B \otimes \mathcal K} (1_{\multialg{B}} \otimes e_{1,1}) = \multialg{B} \otimes e_{1,1} \cong \multialg{B}$. Thus, if $v_t\in \multialg{B \otimes \mathcal K}$ implements $\phi \otimes e_{1,1} \sim_{\asMvN} \psi \otimes e_{1,1}$, and $w_t \in \multialg{B}$ is such that $w_t \otimes e_{1,1} = (1_{\multialg{B}} \otimes e_{1,1}) v_t (1_{\multialg{B}} \otimes e_{1,1})$, then clearly $w_t$ implements $\phi \sim_{\asMvN} \psi$. \end{proof} \section{Approximate domination and nuclearity} While one's aim might be to classify $\ast$-homomorphisms up to approximate or asymptotic Murray--von Neumann (or unitary) equivalence, one might want to aim lower to begin with. This is where approximate domination enters the picture. \begin{definition}\label{d:approxdom} Let $A$ and $B$ be $C^\ast$-algebras, let $\phi \colon A \to B$ be a $\ast$-homo\-morphism, and let $\rho \colon A \to B$ be a c.p.~map. Say that $\phi$ \emph{approximately dominates} $\rho$ if for any finite subset $\mathcal F \subset A$ and $\epsilon >0$ there are $n\in \mathbb N$ and $b_1, \dots, b_n \in B$ such that \begin{equation} \| \rho(a) - \sum_{i=1}^n b_i^\ast \phi(a) b_i \| < \epsilon, \qquad a\in \mathcal F. \end{equation} Say that $\phi$ \emph{approximately $1$-dominates} $\rho$ if the above holds with $n=1$. \end{definition} It is clear that if two $\ast$-homomorphisms are approximately Murray--von Neumann equivalent then they approximately (1-)dominate each other. So when one wants to prove that two $\ast$-homomorphisms are approximately Murray--von Neumann equivalent, a good starting point would be to prove that they approximately dominate each other. At first glance, even this seems like a very non-trivial task. However, this is exactly where nuclearity plays a fundamental role, see Corollary \ref{c:fulldom}. 
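To spell out the first claim in a minimal sketch: suppose $\phi \sim_{\aMvN} \psi$ as in Definition \ref{d:MvN}. Given a finite set $\mathcal F \subset A$ and $\epsilon > 0$, pick a contraction $w\in \multialg{B}$ with $\| w^\ast \phi(a) w - \psi(a)\| < \epsilon/3$ for $a \in \mathcal F$, and a positive contraction $e$ from an approximate identity of $A$ with $\| \psi(eae) - \psi(a) \| < \epsilon/3$ for $a\in \mathcal F$. Then $b := w\psi(e) \in B$ satisfies
\begin{equation*}
\| b^\ast \phi(a) b - \psi(a) \| \leq \| \psi(e)\big(w^\ast \phi(a) w - \psi(a)\big)\psi(e) \| + \| \psi(eae) - \psi(a)\| < \epsilon
\end{equation*}
for $a\in \mathcal F$, so $\phi$ approximately $1$-dominates $\psi$ with a single element of $B$ rather than $\multialg{B}$; the symmetric argument shows that $\psi$ approximately $1$-dominates $\phi$.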
\begin{remark}\label{r:approxdom} Let $\mathscr C$ denote the set of all c.p.~maps which are approximately dominated by the $\ast$-homo\-morphism $\phi \colon A \to B$. It is clear that $\mathscr C$ is closed in the point-norm topology. Moreover, it follows immediately from the definition that if $\rho_1, \rho_2 \in \mathscr C$ then the c.p.~map $\rho_1 + \rho_2 $ is in $\mathscr C$. An easy consequence is that $\mathscr C$ is a point-norm closed, convex cone of completely positive maps. \end{remark} Recall the following fundamental definition. \begin{definition} A map between $C^\ast$-algebras is called \emph{nuclear} if it is a point-norm limit of maps factoring via c.p.~maps through matrix algebras. \end{definition} Note that nuclear maps are automatically completely positive, so whenever a map is referred to as being nuclear it is implicit that it is completely positive. However, it is not necessarily assumed to be contractive, see Remark \ref{r:nuccontractive}. \begin{observation}\label{o:nuccomp} Clearly the composition of a nuclear map and a c.p.~map will again be nuclear. This elementary observation will be applied frequently without reference. \end{observation} \begin{remark} As in Remark \ref{r:approxdom}, the set of nuclear maps between two $C^\ast$-algebras is a point-norm closed, convex cone. This observation is folklore (or an elementary exercise left to the reader). \end{remark} \begin{remark}\label{r:nuccontractive} The above definition of nuclear maps is slightly different from the one often found in the literature (e.g.~\cite[Definition 2.1.1]{BrownOzawa-book-approx}) where all maps in question -- including the ones going in and out of the matrix algebras -- are assumed to be contractive. The following folklore lemma implies that the two definitions in question are equivalent for contractive maps. As I have not been able to find a reference in the literature, I have included a proof for the reader's convenience. 
\end{remark} If $\rho \colon A \to B$ is a c.p.~map and $(e_\lambda)$ is an approximate identity in $A$, then $\| \rho \| = \lim_\lambda \| \rho(e_\lambda)\|$. Hence if $a\in A$, then $\| \rho(a^\ast(-)a) \| = \| \rho(a^\ast a)\|$. This elementary fact will be used somewhat frequently throughout the paper without further mention. \begin{lemma}\label{l:nuccontractive} Any contractive nuclear map is a point-norm limit of maps factoring via \emph{contractive} c.p.~maps through matrix algebras. \end{lemma} \begin{proof} Let $\rho \colon A \to B$ be a contractive nuclear map and let $(e_\lambda)_{\lambda \in \Lambda}$ be an approximate identity in $A$. Then $\rho(e_\lambda(-)e_\lambda)$ defines a net of contractive nuclear maps\footnote{Each map $\rho(e_\lambda(-) e_\lambda)$ is nuclear by Observation \ref{o:nuccomp}.} converging point-norm to $\rho$, so it suffices to show that each $\rho(e_\lambda(-) e_\lambda)$ is a point-norm limit of maps factoring via contractive c.p.~maps through matrix algebras. Given a finite set of contractions $\mathcal F \subset A$ and $\epsilon >0$, pick c.p.~maps $\psi \colon A \to M_n(\mathbb C)$ and $\eta \colon M_n(\mathbb C) \to B$ such that $\eta(\psi(e_\lambda a e_\lambda)) \approx_{\epsilon/2} \rho(e_\lambda a e_\lambda)$ for $a\in \mathcal F \cup \{ 1_{\widetilde{A}}\}$. Let $y$ be the inverse of $\psi(e_\lambda^2)$ in the hereditary $C^\ast$-subalgebra of $M_n(\mathbb C)$ that $\psi(e_\lambda^2)$ generates. Define c.p.~maps \begin{equation} \psi_0 = y^{1/2} \psi(e_\lambda(-) e_\lambda) y^{1/2} \colon A \to M_n(\mathbb C) \end{equation} and \begin{equation} \eta_0 = \tfrac{1}{1+\epsilon/2} \eta(\psi(e_\lambda^2)^{1/2} (-) \psi(e_\lambda^2)^{1/2}) \colon M_n(\mathbb C) \to B. 
\end{equation} Then \begin{equation} \| \psi_0 \| = \| y^{1/2} \psi(e_\lambda^2) y^{1/2} \| = 1, \qquad \| \eta_0\| = \| \tfrac{1}{1+\epsilon/2} \eta(\psi(e_\lambda^2))\| \leq 1, \end{equation} where the latter holds since $\| \eta(\psi(e_\lambda^2)) \| \leq \| \rho(e_\lambda^2)\| + \epsilon/2 \leq 1+\tfrac{\epsilon}{2}$. Moreover, \begin{equation} \eta_0 \circ \psi_0 (a) \approx_{\epsilon/2} (1+\tfrac{\epsilon}{2}) \eta_0 \circ \psi_0(a) = \eta \circ \psi(e_\lambda a e_\lambda) \approx_{\epsilon/2} \rho(e_\lambda a e_\lambda), \quad a\in \mathcal F. \qedhere \end{equation} \end{proof} It was proved independently by Choi--Effros \cite{ChoiEffros-CPAP} and Kirchberg \cite{Kirchberg-CPAP} that a $C^\ast$-algebra $A$ is nuclear (i.e.~the canonical $\ast$-epimorphism $A \otimes_{\max{}} C \to A \otimes C$ is an isomorphism for any $C^\ast$-algebra $C$) if and only if $\id_A$ is nuclear. This is an immediate consequence of the following tensor product characterisation of nuclear maps. \begin{proposition}[Cf.~Choi--Effros \cite{ChoiEffros-CPAP}, Kirchberg \cite{Kirchberg-CPAP}]\label{p:nuctensor} Let $A$ and $B$ be $C^\ast$-algebras and let $\phi \colon A \to B$ be a c.p.~map. Then $\phi$ is nuclear if and only if for any $C^\ast$-algebra $C$ the induced c.p.~map $\phi \otimes \id_C \colon A \otimes_{\max{}} C \to B \otimes_{\max{}} C$ factors through the spatial tensor product $A \otimes C$. \end{proposition} \begin{proof} \cite[Corollary 3.8.8]{BrownOzawa-book-approx} is the result in the case where $A,B$ and $\phi$ are all unital. The general case can be obtained by normalising $\phi$ and unitising everything. \end{proof} \begin{remark}[Nuclear embeddability and exactness]\label{r:nucemb} The symbol $\otimes$ denotes the \emph{spatial} (which is the minimal) tensor product of $C^\ast$-algebras. Recall that a $C^\ast$-algebra $A$ is called \emph{exact} if the functor $A \otimes -$ takes short exact sequences to short exact sequences. 
Say that $A$ is \emph{nuclearly embeddable} if there exists an injective, nuclear $\ast$-homomorphism $A \to B$ into some $C^\ast$-algebra $B$ (which may obviously be chosen to be $B = \mathcal B(\mathcal H)$). By a remarkable result of Kirchberg \cite{Kirchberg-CAR} it follows that a $C^\ast$-algebra is exact if and only if it is nuclearly embeddable. See \cite[Section 3.9]{BrownOzawa-book-approx} for a more elementary proof. Also, if $A$ is a separable, exact $C^\ast$-algebra, one may always find an embedding $A \to \mathcal O_2$ by Kirchberg's $\mathcal O_2$-embedding theorem \cite{Kirchberg-ICM} (see also \cite{KirchbergPhillips-embedding}). Such an embedding is automatically nuclear by nuclearity of $\mathcal O_2$. As the focus of this paper will mainly be on nuclear $\ast$-homomorphisms $A\to B$, it is essentially no loss of generality to assume that $A$ is exact, which will be done in most major results. \end{remark} Recall that an element $b\in B$ is called \emph{full} if it is not contained in any proper two-sided, closed ideal in $B$. \begin{definition} A $\ast$-homomorphism $\phi \colon A \to B$ between $C^\ast$-algebras is said to be \emph{full} if $\phi(a)$ is full in $B$ for every non-zero $a\in A$. \end{definition} \begin{remark}\label{r:fullvssimple} Fullness is for $\ast$-homomorphisms what simplicity is for $C^\ast$-algebras. In fact, if $A$ is a $C^\ast$-algebra then the identity map $\id_A$ is full if and only if $A$ is simple. \end{remark} The goal of this section is to prove the following proposition. \begin{proposition}\label{p:fulldom} Let $A$ and $B$ be $C^\ast$-algebras and suppose that $\phi \colon A \to B$ is a full $\ast$-homomorphism. Then $\phi$ approximately dominates any nuclear map $\rho \colon A \to B$. 
\end{proposition} While a slight weakening of the above proposition (where $\phi$ is assumed to also be nuclear) would suffice for the purpose of proving the Kirchberg--Phillips theorem, and can be deduced almost immediately from \cite[Theorem 3.3]{Gabe-O2class}, I have chosen to take a longer yet more elementary approach to proving this. This approach should look familiar to readers acquainted with the completely positive approximation property for nuclear $C^\ast$-algebras. Before proving the above result, the following immediate corollary will be recorded for later use. \begin{corollary}\label{c:fulldom} Any two full, nuclear $\ast$-homomorphisms $\phi, \psi \colon A \to B$ approximately dominate each other. \end{corollary} In the following, $A^\ast_+$ denotes the convex cone of positive linear functionals on the $C^\ast$-algebra $A$ equipped with the weak$^\ast$-topology. Similarly, let $\CP(A,B)$ denote the convex cone of all c.p.~maps $A\to B$ equipped with the point-norm topology. The following is well-known and can be proved essentially as in \cite[Proposition 1.5.14]{BrownOzawa-book-approx}. The details are left for the reader. \begin{proposition}\label{p:*+CP} Let $A$ be a $C^\ast$-algebra and let $n\in \mathbb N$. Then there is an affine homeomorphism \begin{equation} (A \otimes M_n(\mathbb C))^\ast_+ \to \CP( A , M_n(\mathbb C) ) \end{equation} given by \begin{equation}\label{eq:checkf} f \mapsto \check f := \sum_{i,j=1}^n f(- \otimes e_{i,j}) e_{i,j}. \end{equation} \end{proposition} A \emph{pure positive linear functional} on a $C^\ast$-algebra $A$ is a positive linear functional which is either zero or whose normalisation is a pure state. \begin{lemma}\label{l:nucsimpleapprox} Let $A$ and $B$ be $C^\ast$-algebras and let $\rho \colon A \to B$ be a nuclear map. 
For any finite subset $\mathcal F \subset A$ and any $\epsilon >0$ there are $n,m\in \mathbb N$, pure positive linear functionals $f_1,\dots,f_m \in (A\otimes M_n(\mathbb C))^\ast_+$, and elements $b_{i,j} \in B$ for $i,j=1,\dots, n$, such that \begin{equation} \| \rho(a) - \sum_{l=1}^m \sum_{i,j,k=1}^n f_l(a\otimes e_{i,j}) b_{k,i}^\ast b_{k,j} \| < \epsilon , \qquad a\in \mathcal F. \end{equation} \end{lemma} \begin{proof} We may find $n\in \mathbb N$ and c.p.~maps $\psi \colon A \to M_n(\mathbb C)$ and $\eta \colon M_n(\mathbb C) \to B$ such that \begin{equation}\label{eq:rhoetapsihalf} \| \rho (a) - \eta \circ \psi(a) \| < \epsilon /2 ,\qquad a\in \mathcal F. \end{equation} By (the proof of) \cite[Proposition 1.5.12]{BrownOzawa-book-approx} there are $b_{i,j} \in B$ such that \begin{equation} \eta(e_{i,j}) = \sum_{k=1}^n b_{k,i}^\ast b_{k,j}, \qquad i,j=1,\dots,n.\footnote{One proves that $(\eta(e_{i,j}))_{i,j=1}^n \in M_n(B)$ is positive, and $b_{i,j}$ is the $(i,j)$'th entry of the square root of $(\eta(e_{i,j}))_{i,j=1}^n$.} \end{equation} By Proposition \ref{p:*+CP} there is a positive linear functional $f$ on $A\otimes M_n(\mathbb C)$ such that $\check f = \psi$ (see \eqref{eq:checkf}). In particular, \begin{equation}\label{eq:etapsicheckf} \eta \circ \psi(a) = \eta\circ \check f (a) = \sum_{i,j=1}^n f(a\otimes e_{i,j}) \eta(e_{i,j}) = \sum_{i,j,k=1}^n f(a\otimes e_{i,j}) b_{k,i}^\ast b_{k,j} \end{equation} for all $a\in A$. We may find $m\in \mathbb N$ and pure positive linear functionals $f_1,\dots, f_m$ on $A\otimes M_n(\mathbb C)$ such that $\sum_{l=1}^m f_l$ approximates $f$ well enough on $\{ a \otimes e_{i,j} : a\in \mathcal F, \; i,j=1,\dots, n\}$ that \begin{equation}\label{eq:sumijkfaeij} \| \sum_{i,j,k=1}^n f(a\otimes e_{i,j}) b_{k,i}^\ast b_{k,j} - \sum_{l=1}^m \sum_{i,j,k=1}^n f_l(a\otimes e_{i,j}) b_{k,i}^\ast b_{k,j} \| < \epsilon/2 \end{equation} for $a\in \mathcal F$. 
The result now follows by combining \eqref{eq:rhoetapsihalf}, \eqref{eq:etapsicheckf}, and \eqref{eq:sumijkfaeij}. \end{proof} It is well-known that if $A$ and $B$ are simple $C^\ast$-algebras then the spatial tensor product $A\otimes B$ is also simple. The following is a generalisation for $\ast$-homomorphisms, cf.~Remark \ref{r:fullvssimple}. \begin{lemma}\label{l:fulltensor} Let $A,B,C$ and $D$ be $C^\ast$-algebras and suppose that $\phi \colon A \to B$ and $\psi \colon C \to D$ are full $\ast$-homomorphisms. Then $\phi \otimes \psi \colon A \otimes C \to B \otimes D$ is a full $\ast$-homomorphism. In particular, if $\phi \colon A \to B$ is a full $\ast$-homomorphism and $D$ is a simple $C^\ast$-algebra, then $\phi \otimes \id_D \colon A \otimes D \to B \otimes D$ is a full $\ast$-homomorphism. \end{lemma} \begin{proof} Let $x\in A \otimes C$ be a non-zero element. The (two-sided, closed) ideal in $A\otimes C$ generated by $x$ contains a non-zero elementary tensor $a\otimes c$ by Kirchberg's slice lemma \cite[Lemma 4.19]{Rordam-book-classification}, alternatively see \cite[Lemma 2.12$(ii)$]{BlanchardKirchberg-Hausdorff}. In particular, $(\phi \otimes \psi)(a\otimes c)$ is contained in the ideal generated by $(\phi \otimes \psi)(x)$. However, as $a$ and $c$ are non-zero elements, and as $\phi$ and $\psi$ are full, it follows that $(\phi \otimes \psi)(a\otimes c) = \phi(a) \otimes \psi(c)$ is a full element in $B\otimes D$. Hence $(\phi \otimes \psi)(x)$ is also full in $B\otimes D$, so $\phi\otimes \psi$ is a full $\ast$-homomorphism. The ``in particular'' part follows since $\id_D$ is full provided $D$ is simple. \end{proof} A few elementary lemmas will be recorded for convenience. The main point of the following lemma is to obtain a norm estimate of a sum which does not depend on the number of summands. \begin{lemma}\label{l:sumconjest} Let $B$ be a $C^\ast$-algebra and let $z,c_1,\dots,c_m\in B$. 
Then \begin{equation} \| \sum_{k=1}^m c_k^\ast z c_k \| \leq 4 \| z\| \| \sum_{k=1}^m c_k^\ast c_k\|. \end{equation} \end{lemma} \begin{proof} We may find positive $z_0,\dots,z_3 \in B$ such that $\| z_l \| \leq \| z\|$ for $l=0,\dots,3$ and $z = \sum_{l=0}^3 i^l z_l$. Hence \begin{equation} \| \sum_{k=1}^m c_k^\ast z c_k \| \leq \sum_{l=0}^3 \| \sum_{k=1}^m c_k^\ast z_l c_k \| \leq 4 \| z \| \|\sum_{k=1}^m c_k^\ast c_k \|. \qedhere \end{equation} \end{proof} It is well-known (e.g.~\cite[Corollary II.5.2.13]{Blackadar-book-opalg}) that if $b,b_0 \in B$ are positive elements and $b_0 \in \overline{B b B}$,\footnote{As is customary, $\overline{BbB}$ denotes the \emph{closed linear span} of $\{ xby : x,y\in B\}$, i.e.~it is the two-sided, closed ideal generated by $b$.} then for any $\epsilon >0$ there are $m\in \mathbb N$ and $c_1,\dots, c_m \in B$ such that \begin{equation} \| \sum_{k=1}^m c_k^\ast b c_k - b_0 \| < \epsilon. \end{equation} While one can always arrange that $\| \sum_{k=1}^m c_k^\ast b c_k \| \leq \| b_0\|$ (by perhaps perturbing the $c_k$'s from above slightly), one will in general not have control of the norm of the element $\sum_{k=1}^m c_k^\ast c_k$. The following lemma gives a criterion for when this norm can be controlled. For a positive element $b$ in a $C^\ast$-algebra and $\delta >0$, $(b-\delta)_+$ denotes the element obtained by applying the function $t\mapsto \max{}(t-\delta,0)$ to $b$ via functional calculus. Also, for elements $x,y$ in a $C^\ast$-algebra and $\gamma>0$, write $x\approx_\gamma y$ when $\| x-y\|<\gamma$. 
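To illustrate this notation with an elementary example (not needed in the sequel): if $b = \mathrm{diag}{}(1, \tfrac{1}{4}) \in M_2(\mathbb C)$ and $\delta = \tfrac{1}{2}$, then
\begin{equation*}
(b-\delta)_+ = \mathrm{diag}{}\big(\max{}(1-\tfrac{1}{2},0),\, \max{}(\tfrac{1}{4}-\tfrac{1}{2},0)\big) = \mathrm{diag}{}(\tfrac{1}{2}, 0),
\end{equation*}
so $(b-\delta)_+$ retains exactly the part of the spectrum of $b$ lying above $\delta$. In general one has $\| b - (b-\delta)_+ \| \leq \delta$, so $b \approx_\gamma (b-\delta)_+$ whenever $\gamma > \delta$.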
\begin{lemma}\label{l:sumidealgen} Let $B$ be a $C^\ast$-algebra and suppose that $b,b_0\in B$ are positive, non-zero elements such that $b_0 \in \overline{B (b-\delta)_+ B}$ for every $0< \delta < \| b\|$.\footnote{Note that the interesting information is when $\delta$ is close to $\|b\|$.} Then for any $\epsilon>0$ there exist $m\in \mathbb N$ and $c_1,\dots, c_m\in B$ such that \begin{equation} \| \sum_{k=1}^m c_k^\ast b c_k - b_0\| < \epsilon \end{equation} and $\| \sum_{k=1}^m c_k^\ast c_k \| \leq \|b_0\|/\| b\|$. \end{lemma} \begin{proof} It suffices to prove the case $\| b\| = \| b_0 \| =1$. Also, we may assume that $\epsilon < 1$. Let $\delta = 1-\epsilon/8$, and let $g \colon [0,1] \to [0,1]$ be the continuous function \begin{equation} g(t) = \left\{ \begin{array}{ll} 0 , & t=0 \\ 1, & t \in [\delta , 1] \\ \textrm{affine}, & \textrm{otherwise} \end{array} \right. \end{equation} for $t\in [0,1]$. In particular, $\| g(b) - b \| \leq \epsilon/8$, and $(b-\delta)_+ g(b) = (b-\delta)_+$. The element $b_0$ is contained in the two-sided, closed ideal generated by $(b-\delta)_+$ by assumption, so we may find $c_1',\dots, c_m' \in B$ such that \begin{equation} \| \sum_{k=1}^m c_k'^\ast (b-\delta)_+ c_k' - b_0 \| < \epsilon /2, \end{equation} and $\| \sum_{k=1}^m c_k'^\ast(b-\delta)_+ c_k'\| \leq \| b_0\| = 1$. Let $c_k = (b-\delta)_+^{1/2} c_k'$ for $k=1,\dots, m$. Then \begin{equation}\label{eq:sumckck} \| \sum_{k=1}^m c_k^\ast c_k \| = \| \sum_{k=1}^m c_k'^\ast (b-\delta)_+ c_k'\| \leq 1 \end{equation} and \begin{equation} \sum_{k=1}^m c_k^\ast b c_k \stackrel{\eqref{eq:sumckck},~\textrm{Lem.~}\ref{l:sumconjest}}{\approx_{\epsilon/2}} \; \; \sum_{k=1}^m c_k^\ast g(b) c_k = \sum_{k=1}^m c_k'^\ast (b-\delta)_+ c_k' \approx_{\epsilon/2} b_0 \end{equation} as wanted. 
\end{proof} \begin{proof}[Proof of Proposition \ref{p:fulldom}] By Remark \ref{r:approxdom} and Lemma \ref{l:nucsimpleapprox} it suffices to show that $\phi$ approximately dominates any map $\rho$ for which there is an $n\in \mathbb N$, a pure positive linear functional $f$ on $A\otimes M_n(\mathbb C)$, and $b_1,\dots, b_n\in B$ such that \begin{equation}\label{eq:rhosumfaeij} \rho(a) = \sum_{i,j=1}^n f(a \otimes e_{i,j}) b_{i}^\ast b_{j}, \qquad a\in A. \end{equation} If $f=0$ then $\rho = 0$ and the result obviously follows. Hence we may assume that $f$ is non-zero, and clearly also that $\| f\| =1$ so that $f$ is a pure state. Let $\mathcal F \subset A$ be finite and let $\epsilon >0$. By \cite{AkemannAndersonPedersen-excision} we may excise $f$, so we may find a positive element $x \in A \otimes M_n(\mathbb C)$ of norm 1 such that \begin{equation}\label{eq:excisionaeij} \| x(a \otimes e_{i,j}) x - f(a \otimes e_{i,j}) x^2\| < \frac{\epsilon}{8 n^2 \cdot \max\{ \| b_1\|^2, \dots, \| b_n\|^2\}} \end{equation} for $a\in \mathcal F$ and $i,j = 1,\dots, n$. Let \begin{equation} \phi^{(n)} := \phi \otimes \id_{M_n(\mathbb C)} \colon A \otimes M_n(\mathbb C) \to B \otimes M_n(\mathbb C) \end{equation} which is full by Lemma \ref{l:fulltensor}. By fullness of $\phi^{(n)}$ Lemma \ref{l:sumidealgen} provides $c_1,\dots, c_m \in B\otimes M_n(\mathbb C)$ such that $\| \sum_{k=1}^m c_k^\ast c_k \| \leq 1$, and such that \begin{align}\label{eq:sumfaeijbi} & \bigg\| \sum_{i,j=1}^n f(a\otimes e_{i,j}) (b_i^\ast \otimes e_{1,1}) \left( \sum_{k=1}^m c_k^\ast \phi^{(n)} (x^2) c_k \right) (b_j \otimes e_{1,1}) \\ & \qquad - \sum_{i,j=1}^n f(a \otimes e_{i,j}) (b_i^\ast b_j) \otimes e_{1,1} \bigg\| < \epsilon /2 \nonumber \end{align} for all $a\in \mathcal F$ (for instance by picking $\sum_{k=1}^m c_k^\ast \phi^{(n)} (x^2) c_k$ close to an approximate unit in $B\otimes M_n(\mathbb C)$). 
For each $k=1,\dots, m$, let $d_k\in B$ be the unique element such that \begin{equation} d_k \otimes e_{1,1} = \sum_{j=1}^n \phi^{(n)}((1_{\widetilde A} \otimes e_{1,j} ) x) c_k (b_j \otimes e_{1,1}) \in B \otimes e_{1,1} \subseteq B \otimes M_n(\mathbb C). \end{equation} For $a\in \mathcal F$ we have (with computations in $B\otimes M_n(\mathbb C)$) \begin{eqnarray} && \left(\sum_{k=1}^m d_k^\ast \phi(a) d_k \right) \otimes e_{1,1} \nonumber\\ &=& \sum_{k=1}^m (d_k \otimes e_{1,1})^\ast \phi^{(n)}(a \otimes e_{1,1}) (d_k \otimes e_{1,1}) \nonumber\\ &=& \sum_{i,j=1}^n (b_i^\ast \otimes e_{1,1}) \left( \sum_{k=1}^m c_k^\ast \phi^{(n)} (x (a\otimes e_{i,j}) x) c_k \right) (b_j \otimes e_{1,1})\nonumber\\ &\stackrel{(\ast)}{\approx}_{\epsilon/2}& \sum_{i,j=1}^n (b_i^\ast \otimes e_{1,1}) \left( \sum_{k=1}^m c_k^\ast \phi^{(n)} ( f(a\otimes e_{i,j}) x^2) c_k \right) (b_j \otimes e_{1,1}) \nonumber\\ &\stackrel{\eqref{eq:sumfaeijbi} \; \;}{\approx_{\epsilon/2}}& \sum_{i,j=1}^n f(a\otimes e_{i,j})(b_i^\ast b_j) \otimes e_{1,1} \nonumber\\ &\stackrel{\eqref{eq:rhosumfaeij}}{=}& \rho(a) \otimes e_{1,1} \end{eqnarray} where the estimate labelled $(\ast)$ above follows by applying Lemma \ref{l:sumconjest} with $z = x(a \otimes e_{i,j}) x - f(a \otimes e_{i,j}) x^2$ and using \eqref{eq:excisionaeij}. Hence \begin{equation} \| \sum_{k=1}^m d_k^\ast \phi(a) d_k - \rho(a) \| < \epsilon \end{equation} for $a\in \mathcal F$, which finishes the proof. \end{proof} \section{$\mathcal O_2$-stable and $\mathcal O_\infty$-stable $\ast$-homomorphisms}\label{s:PIhom} As a tool for getting from approximate domination to approximate 1-domination of maps, the following definition is very useful. \begin{definition}[{\cite[Definition 3.16]{Gabe-O2class}}] Let $\mathcal D$ be either $\mathcal O_2$ or $\mathcal O_\infty$. Let $A$ and $B$ be $C^\ast$-algebras with $A$ separable, and $\phi \colon A \to B$ be a $\ast$-homomorphism. 
Say that \begin{itemize} \item $\phi$ is \emph{$\mathcal D$-stable} if $\mathcal D$ embeds unitally in $B_\infty \cap \phi(A)' / \Ann \phi(A)$, \item $\phi$ is \emph{strongly $\mathcal D$-stable} if $\mathcal D$ embeds unitally in $B_\as \cap \phi(A)' / \Ann \phi(A)$. \end{itemize} \end{definition} \begin{remark}\label{r:Oinftypictures} It is well-known that a unital $C^\ast$-algebra $D$ is properly infinite if and only if $\mathcal O_\infty$ embeds unitally into $D$, see \cite[Proposition 4.2.3]{Rordam-book-classification}. Hence $\phi$ being $\mathcal O_\infty$-stable is equivalent to the induced sequential relative commutant being properly infinite. This means that there exist bounded sequences $(s_n^{(i)})_{n\in \mathbb N}$ in $B$ for $i=1,2$, such that \begin{equation} \lim_{n\to \infty} \| [s_n^{(i)}, \phi(a)] \| = 0, \qquad \lim_{n\to \infty} \| ((s_n^{(i)})^\ast s_n^{(j)} - \delta_{i,j}) \phi(a)\| = 0 \end{equation} for all $i,j = 1,2$ and $a\in A$. Here $\delta_{i,j} = 1_{\widetilde{B}}$ if $i=j$ and $\delta_{i,j}=0$ if $i\neq j$. Similarly, $\phi$ is $\mathcal O_2$-stable exactly when one may additionally arrange that \begin{equation} \lim_{n\to \infty} \| (s_n^{(1)} (s_n^{(1)})^\ast + s_n^{(2)}(s_n^{(2)})^\ast - 1_{\widetilde{B}}) \phi(a) \| = 0 \end{equation} for all $a\in A$. Similar characterisations hold for strongly $\mathcal O_2$-stable and strongly $\mathcal O_\infty$-stable $\ast$-homomorphisms, but with bounded, continuous paths $(s_t^{(i)})_{t\in \mathbb R_+}$. In particular, it follows that \begin{equation} \xymatrix{ \textrm{Strong $\mathcal O_2$-stability} \ar@{=>}[rr] \ar@{=>}[d] && \textrm{$\mathcal O_2$-stability} \ar@{=>}[d] \\ \textrm{Strong $\mathcal O_\infty$-stability} \ar@{=>}[rr] && \textrm{$\mathcal O_\infty$-stability}. 
} \end{equation} \end{remark} It was observed in \cite[Remark 3.24]{Gabe-O2class}, using the Kirchberg--Phillips theorem, that $\mathcal O_2$-stable $\ast$-homomorphisms are not necessarily strongly $\mathcal O_2$-stable. Also, the identity map on $\mathcal O_\infty$ is strongly $\mathcal O_\infty$-stable, but neither $\mathcal O_2$-stable nor strongly $\mathcal O_2$-stable. However, I do not know if $\mathcal O_\infty$-stability and strong $\mathcal O_\infty$-stability are equivalent. Thus, I emphasise the following question posed in \cite[Question 3.24]{Gabe-O2class}. \begin{question}\label{q:Oinftyvsstrong} Is every $\mathcal O_\infty$-stable $\ast$-homo\-mor\-phism also strong\-ly $\mathcal O_\infty$-stable? \end{question} In Section \ref{s:stronglyOinfty}, this question will be addressed. In particular, it is shown that if every unital, properly infinite $C^\ast$-algebra is $K_1$-injective -- which is another open problem, see \cite{BlanchardRohdeRordam-K1inj} -- then the question above has an affirmative answer. \begin{observation} Note that Lemma \ref{l:relcombasic}$(c)$ implies that (strong) $\mathcal D$-stabili\-ty is preserved by $\sim_{\mathrm{a(s)MvN}}$. This will be used without further mention. \end{observation} The following says that a $\ast$-homomorphism is strongly $\mathcal D$-stable if it factors through a $\mathcal D$-stable $C^\ast$-algebra, i.e.~a $C^\ast$-algebra $C$ for which $C \otimes \mathcal D \cong C$. A converse was shown for $\mathcal D$-stable maps in \cite[Corollary 4.5]{Gabe-O2class}, using what is Theorem \ref{t:uniquesimple} of this paper, by showing that $\mathcal D$-stable $\ast$-homomorphisms have a certain McDuff type property \`a la \cite{McDuff-centralseq}. A similar proof can be used to give a similar McDuff type characterisation for strongly $\mathcal D$-stable maps by applying \cite[Proposition 1.3.7]{Phillips-classification} instead of \cite[Theorem 4.3]{Gabe-O2class}. 
The following is an immediate consequence of \cite[Propositions 3.17, 3.18 and Lemma 3.19]{Gabe-O2class}. \begin{proposition}\label{p:Oinftyfactor} Let $\mathcal D$ be either $\mathcal O_2$ or $\mathcal O_\infty$. Let $A,B$ and $C$ be $C^\ast$-algebras with $A$ separable, and let $\eta \colon A \to C$ and $\rho \colon C \to B$ be $\ast$-homo\-morphisms. If $C$ is $\mathcal D$-stable, then $\rho \circ \eta$ is strongly $\mathcal D$-stable. In particular, if either $A$ or $B$ is $\mathcal D$-stable then any $\ast$-homomorphism $\phi \colon A \to B$ is strongly $\mathcal D$-stable. \end{proposition} \begin{remark} It should be emphasised that the proofs of \cite[Propositions 3.17 and 3.18]{Gabe-O2class}, and therefore Proposition \ref{p:Oinftyfactor} above, rely on the fact that $\mathcal O_2$ and $\mathcal O_\infty$ are strongly self-absorbing.\footnote{A separable, unital $C^\ast$-algebra $\mathcal D$ is \emph{strongly self-absorbing}, as defined in \cite{TomsWinter-ssa}, if $\mathcal D \not \cong \mathbb C$, and if there exists an isomorphism $\mathcal D \xrightarrow \cong \mathcal D \otimes \mathcal D$ which is approximately unitarily equivalent to the embedding $\id_\mathcal{D} \otimes 1_{\mathcal D} \colon \mathcal D \to \mathcal D \otimes \mathcal D$.} This highly non-elementary fact is crucial for the methods in this paper to work and cannot be deduced from the Kirchberg--Phillips Theorem -- Theorem \ref{t:KP} -- without some sort of circular argument. That $\mathcal O_2$ is strongly self-absorbing follows from \cite[Theorems 5.1.1 and 5.2.1]{Rordam-book-classification}, and that $\mathcal O_\infty$ is strongly self-absorbing follows from \cite[Proposition 7.2.5 and Theorem 7.2.6]{Rordam-book-classification}. 
\end{remark} \begin{remark}\label{r:fullpropinfproj} If $\phi \colon A \to B$ is an $\mathcal O_\infty$-stable $\ast$-homomorphism, then every non-zero positive element in the image $\phi(A)$ is properly infinite in the sense of \cite[Definition 3.2]{KirchbergRordam-purelyinf}, see for instance \cite[Lemma 4.4]{BGSW-nucdim}. In particular, if $\phi$ is also full, then $B$ contains a properly infinite, full projection, even when there are no non-zero projections in $\phi(A)$. This follows from \cite[Lemma 7.2]{BGSW-nucdim}, but was essentially proved by Pasnicu and Rørdam in \cite[Proposition 2.7]{PasnicuRordam-purelyinfrr0} by using tricks of Blackadar and Cuntz from \cite{BlackadarCuntz-stablealgsimple}. \end{remark} The following is a Stinespring type theorem for (strongly) $\mathcal O_\infty$-stable $\ast$-homo\-morphisms which was essentially obtained in \cite{Gabe-O2class}. \begin{theorem}[Cf.~{\cite[Theorem 3.22]{Gabe-O2class}}]\label{t:Stinespring} Let $A$ and $B$ be $C^\ast$-algebras with $A$ separable, let $\phi \colon A \to B$ be a $\ast$-homomorphism, and let $\rho \colon A \to B$ be a c.p.~map which is approximately dominated by $\phi$. \begin{itemize} \item[$(a)$] If $\phi$ is $\mathcal O_\infty$-stable there is an element $v\in B_\infty$ of norm $\| \rho\|^{1/2}$ such that $\rho = v^\ast \phi(-) v$. \item[$(b)$] If $\phi$ is strongly $\mathcal O_\infty$-stable there is an element $v\in B_\as$ of norm $\| \rho\|^{1/2}$ such that $\rho = v^\ast \phi(-) v$. \end{itemize} \end{theorem} \begin{proof} This is exactly what is proved in \cite[Theorem 3.22]{Gabe-O2class}. The first sentence of said proof reduces the statement in \cite[Theorem 3.22]{Gabe-O2class} to the statement above, which is subsequently proved. \end{proof} \begin{lemma}\label{l:MvNsubeq} Let $\phi , \psi \colon A \to B$ be $\ast$-homomorphisms between $C^\ast$-algebras for which $A$ is separable. Suppose that there is a contraction $v\in B_\as$ such that $v^\ast \phi(-) v = \psi$. 
Then the element \begin{equation} w := \left( \begin{array}{cc} 0 & v \\ 0 & 0 \end{array} \right) + \Annn(\phi\oplus \psi)(A) \in \frac{M_2(B)_\as \cap (\phi \oplus \psi)(A)'}{\Annn(\phi\oplus \psi)(A)}, \end{equation} satisfies $ww^\ast \leq 1_{\widetilde B}\oplus 0$ and $w^\ast w = 0\oplus 1_{\widetilde B}$. In particular, the projection \begin{equation} \left( \begin{array}{cc} 1_{\widetilde B} & 0 \\ 0 & 0 \end{array} \right) \in \frac{M_2(B)_\as \cap (\phi \oplus \psi)(A)'}{\Annn(\phi\oplus \psi)(A)} \end{equation} is full. The same result holds when replacing $M_2(B)_\as$ with $M_2(B)_\infty$. \end{lemma} \begin{proof} Clearly the arguments given hold just as well for $M_2(B)_\infty$ as they do for $M_2(B)_\as$. By Lemma \ref{l:conjhom}, $vv^\ast \in B_\as \cap \phi(A)'$, $v^\ast v \psi(a) = \psi(a)$ for $a\in A$, and $\phi(a) v = v \psi(a)$. Hence, it is straightforward to check that $v\otimes e_{1,2}$ induces an element $w\in \frac{M_2(B)_\as \cap (\phi \oplus \psi)(A)'}{\Annn(\phi\oplus \psi)(A)}$, for which $ww^\ast \leq 1_{\widetilde B} \oplus 0$ and $w^\ast w = 0 \oplus 1_{\widetilde B}$. The ``in particular'' part is obvious, since $1_{\widetilde B}\oplus 1_{\widetilde B}= w^\ast (1_{\widetilde B}\oplus 0) w + 1_{\widetilde B}\oplus 0$ is the unit of $\frac{M_2(B)_\as \cap (\phi \oplus \psi)(A)'}{\Annn(\phi\oplus \psi)(A)}$. \end{proof} As a somewhat easy consequence, one obtains the following absorption result. This result is closely connected to Dadarlat's notion of \emph{absorbing} $\ast$-homomorphisms in \cite[Definition 2.2]{Dadarlat-htpygrpsKirchbergalg}. Note that in part $(a)$ one wants $\theta$ to approximately dominate $\phi$, whereas $(b)$ requires that $\phi$ approximately dominates $\theta$. This is because part $(a)$ is used for existence theorems while part $(b)$ is used for uniqueness theorems. 
\begin{proposition}\label{p:piabsorbing} Let $A$ and $B$ be $C^\ast$-algebras with $A$ separable, and let $\phi, \theta \colon A \to B$ be $\ast$-homomorphisms. \begin{itemize} \item[$(a)$] If $\theta$ is (strongly) $\mathcal O_\infty$-stable and $\theta$ approximately dominates $\phi$, then $\phi \oplus \theta \colon A \to M_2(B)$ is (strongly) $\mathcal O_\infty$-stable, \item[$(b)$] If $\phi$ is (strongly) $\mathcal O_\infty$-stable, if $\theta$ is (strongly) $\mathcal O_2$-stable, and if $\phi$ approximately dominates $\theta$, then $\phi\oplus 0 \sim_{\mathrm{a(s)MvN}} \phi \oplus \theta$ as maps $A \to M_2(B)$. \end{itemize} \end{proposition} \begin{proof} Only the ``non-strong'' statements will be proved, since the ``strong'' statements are virtually identical. $(a)$: To simplify notation, let \begin{equation} D := \frac{M_2(B)_\infty \cap (\phi \oplus \theta)(A)'}{\Annn (\phi\oplus \theta)(A)}. \end{equation} As $(0\oplus 1_{\widetilde B})D (0\oplus 1_{\widetilde B}) \cong B_\infty \cap \theta(A)' /\Ann\theta(A)$ by Lemma \ref{l:relcombasic}$(b)$, and since this $C^\ast$-algebra is properly infinite by $\mathcal O_\infty$-stability of $\theta$, it follows that the projection $0 \oplus 1_{\widetilde B} \in D$ is properly infinite. As $\theta$ is $\mathcal O_\infty$-stable and approximately dominates $\phi$, it follows from Theorem \ref{t:Stinespring} and Lemma \ref{l:MvNsubeq} that $0\oplus 1_{\widetilde B}$ is a full projection. Thus $1_{\widetilde B}\oplus 1_{\widetilde B}$ is properly infinite, which implies that $\phi \oplus \theta$ is $\mathcal O_\infty$-stable. $(b)$: Again, to simplify notation, let \begin{equation} E:= \frac{M_4(B)_\infty \cap (\phi \oplus 0 \oplus \phi \oplus \theta)(A)'}{\Annn (\phi \oplus 0 \oplus \phi \oplus \theta)(A)}. \end{equation} By Proposition \ref{p:MvNeq} it suffices to show that $1_{\widetilde B}\oplus 1_{\widetilde B}\oplus 0 \oplus 0 \sim 0\oplus 0\oplus 1_{\widetilde B}\oplus 1_{\widetilde B}$ in $E$. 
First observe that \begin{equation} 1_{\widetilde B}\oplus 1_{\widetilde B} \oplus 0 \oplus 0 = 1_{\widetilde B}\oplus 0 \oplus 0\oplus 0 \end{equation} in $E$, and that $v= 1_{\widetilde B} \otimes e_{1,3}$ is a well-defined partial isometry in $E$ with \begin{equation} vv^\ast = 1_{\widetilde B}\oplus 0 \oplus 0 \oplus 0, \qquad v^\ast v = 0\oplus 0\oplus 1_{\widetilde B} \oplus 0. \end{equation} Hence it suffices to show that $0\oplus 0\oplus 1_{\widetilde B}\oplus 0\sim 0\oplus 0 \oplus 1_{\widetilde B} \oplus 1_{\widetilde B}$. Note that as in part $(a)$, $0\oplus 0 \oplus 1_{\widetilde B}\oplus 0$ is a properly infinite, full projection, and thus so is $0\oplus 0\oplus 1_{\widetilde B}\oplus 1_{\widetilde B}$. Hence by a result of Cuntz \cite{Cuntz-K-theoryI}, see also \cite[Proposition 4.1.4]{Rordam-book-classification}, it suffices to show that \begin{equation} [0\oplus 0\oplus 1_{\widetilde B}\oplus 0]_0 = [0\oplus 0 \oplus 1_{\widetilde B} \oplus 1_{\widetilde B}]_0 \in K_0(E), \end{equation} or equivalently, that $[0\oplus 0 \oplus 0 \oplus 1_{\widetilde B}]_0 = 0 \in K_0(E)$. As $\theta$ is $\mathcal O_2$-stable \begin{equation} (0\oplus 0 \oplus 0 \oplus 1_{\widetilde B})E (0\oplus 0 \oplus 0 \oplus 1_{\widetilde B}) \stackrel{\textrm{Lem.~\ref{l:relcombasic}}(b)}{\cong} \frac{B_\infty \cap \theta(A)'}{\Ann \theta(A)} \end{equation} contains a unital embedding of $\mathcal O_2$ and therefore $[0\oplus 0 \oplus 0 \oplus 1_{\widetilde B}]_0 = 0$. \end{proof} \section{Absorbing representations}\label{s:absrep} In this section some (mostly well-known) results are presented on absorbing representations $A \to \multialg{B}$. \subsection{Cuntz sums and infinite repeats} This first subsection only contains well-known information on Cuntz sums and infinite repeats. \begin{remark}[Cuntz sum] Suppose $D$ is a unital $C^\ast$-algebra containing a pair $s_1,s_2$ of $\mathcal O_2$-isometries, i.e.~isometries such that $s_1s_1^\ast + s_2 s_2^\ast = 1_D$. 
The most important example to keep in mind is $D = \multialg{B}$ where $B$ is a stable $C^\ast$-algebra. For any elements $d,e\in D$ one forms the \emph{Cuntz sum} by \begin{equation} d\oplus_{s_1,s_2} e := s_1 d s_1^\ast + s_2 e s_2^\ast. \end{equation} Note that if $\Phi_{s_1,s_2} \colon M_2(D) \xrightarrow \cong D$ is the isomorphism \begin{equation} \Phi_{s_1,s_2} ((d_{i,j})_{i,j=1,2}) = \sum_{i,j=1}^2 s_i d_{i,j} s_j^\ast \end{equation} then \begin{equation} \Phi_{s_1,s_2}(d \oplus e) = d \oplus_{s_1,s_2} e. \end{equation} Hence Cuntz sums are a variation of diagonal sums. Also, Cuntz sums are unique up to unitary equivalence. In fact, if $t_1,t_2$ are also $\mathcal O_2$-isometries in $D$ then $u := s_1 t_1^\ast + s_2 t_2^\ast$ is a unitary satisfying \begin{equation} u^\ast (d \oplus_{s_1,s_2} e) u = d \oplus_{t_1,t_2} e. \end{equation} If $\phi, \psi \colon A \to D$ are maps then one can form the (point-wise) Cuntz sum $\phi \oplus_{s_1,s_2} \psi \colon A \to D$ which is again unique up to unitary equivalence. \end{remark} \begin{remark}[Infinite repeats]\label{r:infrep} Let $B$ be a stable $C^\ast$-algebra. There are several ``pictures'' of infinite repeats in $\multialg{B}$. These pictures will be recalled below, and it will be explained how they are connected. By stability of $B$ there exists a sequence $(t_k)_{k\in \mathbb N}$ of isometries in $\multialg{B}$ with orthogonal range projections, such that $\sum_{k=1}^\infty t_k t_k^\ast = 1_{\multialg{B}}$, with convergence in the strict topology. Fix such $(t_k)_{k\in \mathbb N}$ (this will be used in all pictures). \emph{First picture}: If $\theta \colon A\to B$ is a map, then an \emph{infinite repeat} of $\theta$ is a map of the form $\theta_\infty := \sum_{k=1}^\infty t_k \theta(-) t_k^\ast$ with $(t_k)_{k\in \mathbb N}$ as above. Any two infinite repeats of $\theta$ are unitarily equivalent. 
In fact, if $(s_k)_{k\in \mathbb N}$ is another such sequence of isometries, then $u = \sum_{k=1}^\infty s_k t_k^\ast$ (strict convergence) is a unitary for which \begin{equation} u (\sum_{k=1}^\infty t_k \theta(-) t_k^\ast ) u^\ast = \sum_{k=1}^\infty s_k \theta(-) s_k^\ast. \end{equation} \emph{Second picture}: Consider $B$ as a right Hilbert $B$-module, so $\multialg{B} = \mathcal B(B)$. Let $B^{\oplus \infty} = B \oplus B \oplus \dots$ be the countable infinite direct sum of $B$ with itself and let $\theta^{\oplus \infty} \colon A \to \mathcal B(B^{\oplus \infty})$ denote the diagonal map \begin{equation} \theta^{\oplus \infty} (a)(b_1,b_2,\dots) = (\theta(a) b_1, \theta(a) b_2, \dots), \qquad a\in A, (b_1,b_2,\dots) \in B^{\oplus \infty}. \end{equation} Then $u \in \mathcal B(B, B^{\oplus\infty})$ given by $u b = (t_1^\ast b , t_2^\ast b , \dots)$ for $b\in B$ is a unitary satisfying \begin{equation} u^\ast \theta^{\oplus\infty}(-) u = \sum_{k=1}^\infty t_k \theta(-) t_k^\ast \colon A \to \mathcal B(B) = \multialg{B}. \end{equation} \emph{Third picture}: There is an isomorphism $\Omega \colon B \otimes \mathcal K \xrightarrow \cong B$ given on elementary tensors by $b \otimes e_{i,j} \mapsto t_i b t_j^\ast$. Clearly this extends to an isomorphism $\multialg{\Omega} \colon \multialg{B\otimes \mathcal K} \xrightarrow \cong \multialg{B}$. The map $\theta \otimes 1_{\multialg{\mathcal K}} \colon A \to \multialg{B \otimes \mathcal K}$ can also be thought of as an infinite repeat since \begin{equation} \multialg{\Omega} \circ (\theta \otimes 1_{\multialg{\mathcal K}}) = \sum_{k=1}^\infty t_k \theta(-) t_k^\ast. \end{equation} \end{remark} \begin{remark}[``$1+\infty= \infty$'']\label{r:infty+1} Let $B$ be a stable $C^\ast$-algebra and $\phi \colon A \to \multialg{B}$ be a $\ast$-homomorphism. 
If $\phi_\infty$ is an infinite repeat of $\phi$, and if $s_1,s_2\in \multialg{B}$ are $\mathcal O_2$-isometries, then $\phi \oplus_{s_1,s_2} \phi_\infty$ is unitarily equivalent to $\phi_\infty$. Informally, this says, unsurprisingly, that ``$1+\infty=\infty$''. \end{remark} \subsection{General absorption} \begin{definition}\label{d:abs} Let $A$ be a separable $C^\ast$-algebra, let $B$ be a $\sigma$-unital, stable $C^\ast$-algebra, and let $\phi, \psi \colon A \to \multialg{B}$ be $\ast$-homomorphisms. Say that $\phi$ \emph{absorbs} $\psi$ if there is a sequence $(u_n)$ of unitaries in $\multialg{B}$ such that \begin{itemize} \item[$(a)$] $u_n^\ast (\phi \oplus_{s_1,s_2} \psi)(a) u_n - \phi(a) \in B$ for all $a\in A$ and all $n\in \mathbb N$, \item[$(b)$] $\| u_n^\ast (\phi \oplus_{s_1,s_2} \psi)(a) u_n - \phi(a)\| \to 0$ for all $a\in A$. \end{itemize} Here $s_1,s_2\in \multialg{B}$ are $\mathcal O_2$-isometries. \end{definition} Note that absorption does not depend on the choice of $\mathcal O_2$-isometries as Cuntz sums are unique up to unitary equivalence. By the following remark one may often reduce questions about absorption to the unital case. \begin{remark}[Unitisations and absorption]\label{r:unitvsnonunitabs} Let $A^\dagger$ denote the forced unitisation of the $C^\ast$-algebra $A$, i.e.~one adds a unit regardless of whether $A$ is unital or not. Let $A$ be a separable $C^\ast$-algebra, let $B$ be a $\sigma$-unital, stable $C^\ast$-algebra and let $\phi, \psi \colon A \to \multialg{B}$ be $\ast$-homomorphisms. Let $\phi^\dagger , \psi^\dagger \colon A^\dagger \to \multialg{B}$ be the unitised $\ast$-homomorphisms of $\phi$ and $\psi$ respectively. 
If $u\in \multialg{B}$ is a unitary and $s_1,s_2\in \multialg{B}$ are $\mathcal O_2$-isometries, then \begin{equation} u^\ast (\phi^\dagger \oplus_{s_1,s_2} \psi^\dagger) (a+\mu 1_{A^\dagger}) u - \phi^\dagger(a+\mu 1_{A^\dagger}) = u^\ast (\phi \oplus_{s_1,s_2} \psi)(a) u - \phi(a) \end{equation} for all $a\in A$ and $\mu \in \mathbb C$. Hence $\phi$ absorbs $\psi$ if and only if $\phi^\dagger$ absorbs $\psi^\dagger$. \end{remark} The following abstract \emph{unital} absorption result for infinite repeats will be needed to prove a non-unital version. \begin{proposition}[Cf.~\cite{Kasparov-Stinespring}, \cite{DadarlatEilers-classification}]\label{p:absunital} Let $A$ be a separable, unital $C^\ast$-algebra, let $B$ be a $\sigma$-unital, stable $C^\ast$-algebra, let $\psi, \theta \colon A \to \multialg{B}$ be unital $\ast$-homo\-morphisms, and let $\theta_\infty$ be an infinite repeat of $\theta$. Suppose that $\theta$ approximately dominates (see Definition \ref{d:approxdom}) the c.p.~map \begin{equation} b^\ast \psi(-) b \colon A \to B \end{equation} for every $b\in B$. Then $\theta_\infty$ absorbs $\psi$. \end{proposition} \begin{proof} By Remark \ref{r:infty+1} it suffices to show that $\theta_\infty$ absorbs an infinite repeat of $\psi$. By \cite[Theorem 2.13]{DadarlatEilers-classification} this is the case whenever the following is satisfied: for any $b\in B$ there is a bounded sequence $(x_n)_{n\in \mathbb N}$ in $B$ such that $x_n^\ast \theta_\infty(-) x_n$ converges point-norm to $b^\ast \psi (-) b$, and such that $\| c x_n\| \to 0$ for any $c\in B$. To check this, let $b\in B$. Fix finite sets $1_A \in \mathcal F_1 \subseteq \mathcal F_2\subseteq \dots$ such that $\bigcup \mathcal F_n$ is dense in $A$. For each $n\in \mathbb N$, we may find $d_1^{(n)},\dots, d_{m_n}^{(n)} \in B$ such that \begin{equation} \| b^\ast \psi(a) b - \sum_{k=1}^{m_n} d_k^{(n)\ast} \theta(a) d_k^{(n)} \| < 1/n , \qquad a\in \mathcal F_n. 
\end{equation} Let $(t_j)_{j\in \mathbb N}$ be a sequence of isometries in $\multialg{B}$ with orthogonal range projections such that $\sum_{j=1}^\infty t_j t_j^\ast = 1_{\multialg{B}}$ strictly. As infinite repeats are unique up to unitary equivalence, we may assume that $\theta_\infty = \sum_{j=1}^\infty t_j \theta(-) t_j^\ast$. Let $x_n := \sum_{k=1}^{m_n} t_{n + k} d_{k}^{(n)}$. The sequence $(x_n)_{n\in \mathbb N}$ is bounded since \begin{eqnarray} \| x_n^\ast x_n \| &=& \| \sum_{k,l=1}^{m_n} d_k^{(n)\ast} t_{n+k}^\ast t_{n+l} d_l^{(n)} \| \nonumber\\ &=& \| \sum_{k=1}^{m_n} d_k^{(n)\ast} d_k^{(n)} \| \nonumber\\ &=& \| \sum_{k=1}^{m_n} d_k^{(n)\ast} \theta(1_A) d_k^{(n)} \| \nonumber\\ &\xrightarrow{n\to \infty}& \| b^\ast \psi(1_A) b\|. \end{eqnarray} Moreover, for any $a\in A$ we have \begin{eqnarray} x_n^\ast \theta_\infty(a) x_n &=& \sum_{k,l=1}^{m_n} \sum_{j=1}^\infty d_k^{(n)\ast} t_{n+k}^\ast t_j \theta(a) t_j^\ast t_{n+l} d_l^{(n)} \nonumber\\ &=& \sum_{k=1}^{m_n} d_k^{(n)\ast} \theta(a) d_k^{(n)} \nonumber\\ &\xrightarrow{n\to \infty}& b^\ast \psi(a) b. \end{eqnarray} Finally, let $c\in B$. Then \begin{eqnarray} \| c x_n\| &=& \| c (1_{\multialg{B}} - \sum_{k=1}^n t_kt_k^\ast) x_n \| \nonumber\\ & \leq & \| c (1_{\multialg{B}} - \sum_{k=1}^n t_kt_k^\ast)\| \| x_n \| \nonumber\\ &\xrightarrow{n\to \infty}& 0. \end{eqnarray} Here we used that $t_j^\ast x_n = 0$ for $j=1,\dots, n$, that the sequence $(x_n)_{n\in \mathbb N}$ is bounded, and that $\sum_{k=1}^\infty t_k t_k^\ast = 1_{\multialg{B}}$ strictly. \end{proof} While the above proposition is proved in the unital case, it is often more desirable to obtain the results in the not necessarily unital case. When reducing the not necessarily unital case to the unital case, the $C^\ast$-algebra $A$ will often be replaced with its forced unitisation $A^\dagger$, see Remark \ref{r:unitvsnonunitabs}. 
The following is a useful trick for relating a c.p.~map $\rho \colon A^\dagger \to B$ to its restriction to $A$. \begin{lemma}\label{l:unitisetrick} Let $\rho \colon A^\dagger \to B$ be a c.p.~map, let $\pi \colon A^\dagger \to \mathbb C$ be the character for which $\pi(A) = \{0\}$, and let $(e_\lambda)_{\lambda \in \Lambda}$ be an approximate identity in $A$. Define the c.p.~maps $\rho_\lambda , \pi_\lambda \colon A^\dagger \to B$ for each $\lambda \in \Lambda$ by \begin{equation} \rho_\lambda = \rho|_A (e_\lambda(-)e_\lambda), \qquad \pi_\lambda = \rho(1_{A^\dagger} - e_\lambda^2) \pi(-). \end{equation} Then $\rho_\lambda + \pi_\lambda \to \rho$ in point-norm. \end{lemma} \begin{proof} For $a\in A$ and $\mu \in \mathbb C$ we have \begin{eqnarray} && \rho_\lambda(a+\mu 1_{A^\dagger}) + \pi_\lambda(a+\mu 1_{A^\dagger}) \nonumber\\ &=& \rho(e_\lambda a e_\lambda) + \mu \rho( e_\lambda^2) + \mu \rho(1_{A^\dagger} - e_\lambda^2) \nonumber\\ &=& \rho(e_\lambda a e_\lambda + \mu 1_{A^\dagger}) \nonumber\\ &\to& \rho( a + \mu 1_{A^\dagger}), \end{eqnarray} which is what we wanted to prove. \end{proof} Although the following lemma will not be applied until much later, it easily illustrates how the above trick works. \begin{lemma}\label{l:unitisednuc} A c.p.~map $\rho \colon A^\dagger \to B$ is nuclear if and only if the restriction $\rho|_A$ is nuclear. \end{lemma} \begin{proof} ``Only if'' is obvious, so we prove ``if''. Suppose $\rho|_A$ is nuclear, and define $\rho_\lambda$ and $\pi_\lambda$ as in Lemma \ref{l:unitisetrick}. As the maps $\rho|_A$ and $\pi$ are nuclear, so are $\rho_\lambda$ and $\pi_\lambda$. As sums of nuclear maps are again nuclear, $\rho_\lambda + \pi_\lambda$ is nuclear for each $\lambda$. As the set of nuclear maps is point-norm closed, $\rho$ is nuclear by Lemma \ref{l:unitisetrick}. \end{proof} As a consequence of Proposition \ref{p:absunital} above one obtains the following non-unital analogue of the result.
In this version it is crucial, yet somewhat subtle, that $\theta$ takes values in $B$ (not in $\multialg{B}$) and that $B$ is stable. For instance, if $A$, $B$ and $\theta$ were unital, but $\psi$ was non-unital, the criteria of Proposition \ref{p:absnonunital} could be true but $\theta_\infty$ could never absorb $\psi$ since $\theta_\infty$ would be unital and thus can only absorb unital $\ast$-homomorphisms. Similar results appear in the literature where both $A$ and $B$ are assumed to be unital, such as \cite[Theorem 2.22]{DadarlatEilers-classification}, but usually such results are not stated in as abstract a form as the following result. \begin{proposition}\label{p:absnonunital} Let $A$ be a separable $C^\ast$-algebra, let $B$ be a $\sigma$-unital, stable $C^\ast$-algebra, let $\psi \colon A \to \multialg{B}$ be a $\ast$-homomorphism, let $\theta \colon A \to B$ be a $\ast$-homomorphism, and let $\theta_\infty \colon A \to \multialg{B}$ be an infinite repeat of $\theta$. Suppose that $\theta$ approximately dominates (see Definition \ref{d:approxdom}) the c.p.~map \begin{equation} b^\ast \psi(-) b \colon A \to B \end{equation} for every $b\in B$. Then $\theta_\infty$ absorbs $\psi$. \end{proposition} \begin{proof} Let $(\theta_\infty)^\dagger, \psi^\dagger \colon A^\dagger \to \multialg{B}$ be the forced unitised $\ast$-homo\-morphisms. As observed in Remark \ref{r:unitvsnonunitabs}, $\theta_\infty$ absorbs $\psi$ if and only if $(\theta_\infty)^\dagger$ absorbs $\psi^\dagger$. Note that if $\theta^\dagger \colon A^\dagger \to \multialg{B}$ is the unitisation then $(\theta^\dagger)_\infty = (\theta_\infty)^\dagger$ where we use the same sequence of isometries to define the two infinite repeats. Hence it suffices to check that $\theta^\dagger$ and $\psi^\dagger$ satisfy the condition of Proposition \ref{p:absunital}. Fix $b\in B$; we wish to check that $\theta^\dagger$ approximately dominates $b^\ast \psi^\dagger(-)b$.
Let $\pi \colon A^\dagger \to \mathbb C$ be the character vanishing on $A$, and let $(e_n)_{n\in \mathbb N}$ be an approximate identity in $A$. Define the c.p.~maps $\psi_n, \pi_n \colon A^\dagger \to B$ by \begin{equation} \psi_n := b^\ast \psi(e_n (-) e_n) b, \qquad \pi_n := b^\ast(1_{\multialg{B}} - \psi(e_n^2))b \pi(-). \end{equation} By Lemma \ref{l:unitisetrick} it follows that \begin{equation} \psi_n + \pi_n \to b^\ast \psi^\dagger(-) b \end{equation} in point-norm. As the set of c.p.~maps which are approximately dominated by a given map is a point-norm closed, convex cone, see Remark \ref{r:approxdom}, it suffices to show that $\theta^\dagger$ approximately dominates $\psi_n$ and $\pi_n$ for each $n\in \mathbb N$, so fix $n\in \mathbb N$. Let $\mathcal F\subset A^\dagger$ be finite and let $\epsilon>0$. As $\theta$ approximately dominates $b^\ast \psi(-) b$, we may find $d_1,\dots,d_m\in B$ such that \begin{eqnarray} && \| b^\ast \psi(e_n x e_n) b - \sum_{k=1}^m d_k^\ast \theta(e_n x e_n) d_k \| \nonumber\\ & =& \| \psi_n(x) - \sum_{k=1}^m d_k^\ast \theta(e_n) \theta^\dagger(x) \theta(e_n) d_k\| \nonumber\\ & <& \epsilon, \end{eqnarray} for $x\in \mathcal F$. Hence $\theta^\dagger$ approximately dominates $\psi_n$. As $B$ is stable we may pick a sequence $(t_j)_{j\in \mathbb N}$ of isometries in $\multialg{B}$ with orthogonal range projections such that $\sum_{j=1}^\infty t_j t_j^\ast =1_{\multialg{B}}$ in the strict topology. As $\theta(A) \subseteq B$ it follows that \begin{equation} t_j^\ast \theta^\dagger(x) t_j \xrightarrow{j\to \infty} 1_{\multialg{B}} \pi(x), \qquad x\in A^\dagger. \end{equation} Letting $c:= (b^\ast(1_{\multialg{B}} - \psi(e_n^2))b)^{1/2}$ it follows that \begin{equation} ct_j^\ast \theta^\dagger(x) t_j c \xrightarrow{j\to \infty} b^\ast(1_{\multialg{B}} - \psi(e_n^2))b \pi(x) = \pi_n(x) \end{equation} for all $x\in A^\dagger$. Hence $\theta^\dagger$ approximately dominates $\pi_n$, which finishes the proof.
\end{proof} \subsection{Nuclear absorption} \begin{definition}\label{d:weaklynuc} Let $A$ and $B$ be $C^\ast$-algebras. A c.p.~map $\rho \colon A \to \multialg{B}$ is called \emph{weakly nuclear} if $b^\ast \rho (-) b \colon A \to B$ is nuclear for every $b\in B$. \end{definition} In the literature, for instance in \cite{Skandalis-KKnuc} and \cite{DadarlatEilers-classification}, one often considers the notion of \emph{strictly nuclear} contractive c.p.~maps $\rho \colon A \to \multialg{B}$, i.e.~$\rho$ is a point-strict limit of maps factoring via contractive c.p.~maps through matrix algebras. By the following folklore result, this is equivalent to being weakly nuclear. \begin{proposition} Let $A$ and $B$ be $C^\ast$-algebras, and let $\rho \colon A \to \multialg{B}$ be a contractive c.p.~map. The following are equivalent: \begin{itemize} \item[$(i)$] $\rho$ is weakly nuclear; \item[$(ii)$] $\rho$ is strictly nuclear, i.e.~$\rho$ is a point-strict limit of c.p.~maps factoring via contractive c.p.~maps through matrix algebras; \item[$(iii)$] $\rho$ is a point-strict limit of nuclear maps. \end{itemize} \end{proposition} \begin{proof} $(i)\Rightarrow (ii)$: If $\rho$ is weakly nuclear and if $(e_\lambda)$ is an approximate identity in $B$, then $(e_\lambda \rho(-) e_\lambda)$ is a net of contractive nuclear maps converging point-strictly to $\rho$. In fact, for any $a\in A$ and $b\in B$ we have \begin{equation} \| (\rho(a) - e_\lambda \rho(a) e_\lambda) b \| \leq \| \rho(a)b - e_\lambda \rho(a) b \| + \| e_\lambda\| \| \rho(a)b - \rho(a) e_\lambda b \| \to 0. \end{equation} Similarly $\| b(\rho(a) - e_\lambda \rho(a) e_\lambda)\| \to 0$. As each $e_\lambda \rho(-) e_\lambda$ is contractive and nuclear, it point-norm approximately factors via contractive c.p.~maps through matrix algebras, see Lemma \ref{l:nuccontractive}, so it easily follows that $\rho$ is the point-strict limit of c.p.~maps factoring via contractive c.p.~maps through matrix algebras. 
$(ii) \Rightarrow (iii)$: This is obvious. $(iii) \Rightarrow (i)$: Suppose $(\rho_\lambda \colon A \to \multialg{B})_\lambda$ is a net of nuclear maps converging point-strictly to $\rho$. For any $a\in A$ and $b\in B$ we have \begin{equation} \| b^\ast \rho(a) b - b^\ast \rho_\lambda(a) b \| \leq \| b\| \| (\rho(a) - \rho_\lambda(a)) b\| \to 0. \end{equation} Hence the net $(b^\ast \rho_\lambda(-) b \colon A \to B)_\lambda$ of nuclear maps converges point-norm to $b^\ast \rho(-) b$, which is therefore nuclear. \end{proof} \begin{definition} Let $A$ be a separable $C^\ast$-algebra and let $B$ be a $\sigma$-unital, stable $C^\ast$-algebra. A $\ast$-homomorphism $\phi \colon A \to \multialg{B}$ is called \emph{nuclearly absorbing} if it absorbs any weakly nuclear $\ast$-homomorphism $A \to \multialg{B}$. \end{definition} \begin{remark}[Unital nuclear absorption] It is obvious that a unital $\ast$-homo\-morphism $\phi \colon A \to \multialg{B}$ can never absorb a non-unital $\ast$-homo\-morphism; in particular, a unital $\ast$-homomorphism cannot be nuclearly absorbing as it will not absorb the zero $\ast$-homomorphism (which is obviously weakly nuclear). One therefore says that a unital $\ast$-homomorphism $\phi \colon A \to \multialg{B}$ is \emph{unitally nuclearly absorbing} if it absorbs any unital, weakly nuclear $\ast$-homo\-morphism $A \to \multialg{B}$. Using Remark \ref{r:unitvsnonunitabs} and Lemma \ref{l:unitisednuc} it is not hard to see that $\phi \colon A \to \multialg{B}$ is nuclearly absorbing if and only if the (forced) unitisation $\phi^\dagger \colon A^\dagger \to \multialg{B}$ is unitally nuclearly absorbing. Although the unital case is very important for studying non-stable $\Ext$-theory, for instance \cite{ElliottKucerovsky-extensions}, it will not play a significant role in this paper.
\end{remark} The following main theorem of this section -- which is a non-unital version of \cite[Theorem 2.22]{DadarlatEilers-classification} -- is an easy corollary of previous results. As was also noted before Proposition \ref{p:absnonunital}, it is crucial both that $B$ is stable and that $\theta(A) \subseteq B$ for the following to hold. \begin{theorem}\label{t:fullnucabs} Let $A$ be a separable $C^\ast$-algebra, let $B$ be a $\sigma$-unital, stable $C^\ast$-algebra, and suppose that $\theta \colon A \to B$ is a full $\ast$-homomorphism. Then any infinite repeat $\theta_\infty \colon A \to \multialg{B}$ of $\theta$ is nuclearly absorbing. If, in addition, $\theta$ is nuclear (in which case $A$ must be exact) then $\theta_\infty$ is weakly nuclear and nuclearly absorbing. \end{theorem} \begin{proof} The first part follows immediately from Propositions \ref{p:fulldom} and \ref{p:absnonunital}. For the ``in addition'' part, we may assume that $\theta_\infty = \sum_{j=1}^\infty t_j \theta(-) t_j^\ast$, where $(t_j)_{j\in \mathbb N}$ is a sequence of isometries in $\multialg{B}$ with orthogonal range projections such that $\sum_{j=1}^\infty t_jt_j^\ast = 1_{\multialg{B}}$ strictly. Given $b\in B$, the c.p.~map $b^\ast \theta_\infty(-)b$ is the point-norm limit of the sequence $(\sum_{j=1}^N b^\ast t_j \theta(-) t_j^\ast b)_{N\in \mathbb N}$. As $\theta$ is nuclear, each map $\sum_{j=1}^N b^\ast t_j \theta(-) t_j^\ast b$ is nuclear and thus $b^\ast \theta_\infty(-) b$ is nuclear. Hence $\theta_\infty$ is weakly nuclear. \end{proof} \section{Asymptotic intertwining} A celebrated way of classifying $C^\ast$-algebras is by using an intertwining argument a la Elliott \cite{Elliott-AFclass}. This argument is also used to \emph{lift} isomorphisms of a given invariant to isomorphisms of the $C^\ast$-algebras (see Remark \ref{r:apint} for a minor mistake in the literature in this context).
In this section it is shown that if $\phi_0 \colon A \to B$ and $\psi_0 \colon B \to A$ are $\ast$-homomorphisms of separable $C^\ast$-algebras such that $\psi_0 \circ \phi_0 \sim_{\asu} \id_A$ and $\phi_0 \circ \psi_0 \sim_{\asu} \id_B$, then there is an isomorphism $\phi \colon A \xrightarrow \cong B$ which is \emph{homotopic} to $\phi_0$. Moreover, this homotopy may be chosen to be very well-behaved (for instance, it will be ideal-preserving). This is used in Theorems \ref{t:KP} and \ref{t:nonsimpleclass} to show that the (ideal-related) $KK$-equivalence lifts to an isomorphism of $C^\ast$-algebras. Given a unitary $u$, let $\Ad u = u^\ast (-) u$ be the induced inner automorphism.\footnote{This definition is only recalled to emphasise that $\Ad u$ denotes $u^\ast(-) u$ and \emph{not} $u(-)u^\ast$.} \begin{remark}[On approximate intertwining]\label{r:approxintertwining} To set the reader up for the asymptotic intertwining, the following details on the more classical approximate intertwining a la Elliott are recalled. Suppose that $A$ and $B$ are separable $C^\ast$-algebras, that \begin{equation} \phi_0 \colon A \to B , \qquad \psi_0 \colon B \to A \end{equation} are $\ast$-homomorphisms, and that $(u_n)_{n\in \mathbb N}$ and $(v_n)_{n\in \mathbb N}$ are sequences of unitaries in $\multialg{A}$ and $\multialg{B}$ respectively, such that \begin{equation} \lim_{n\to \infty} \| u_n^\ast \psi_0(\phi_0(a)) u_n - a \| = 0, \qquad \lim_{n\to \infty} \| v_n^\ast \phi_0(\psi_0(b)) v_n - b \| = 0, \end{equation} for all $a\in A$ and $b\in B$. 
We may find subsequences $(u_{r(n)})$ and $(v_{s(n)})$ (defined recursively), such that if we define \begin{equation}\label{eq:apintunitaries} U_n = u_{r(n)} u_{r(n-1)} \cdots u_{r(1)}, \qquad V_n = v_{s(n)} v_{s(n-1)} \cdots v_{s(1)}, \end{equation} with $U_0 = 1_{\multialg{A}}$ and $V_0 = 1_{\multialg{B}}$, as well as \begin{equation} \phi_n = \Ad V_n\circ \phi_0 \circ \Ad U_n^\ast, \qquad \psi_n = \Ad U_{n} \circ \psi_0 \circ \Ad V_{n-1}^\ast, \end{equation} for $n\in \mathbb N$, we get a diagram \begin{equation}\label{eq:approxint} \xymatrix{ A \ar[rr]^{\id_A} \ar[dr]_{\phi_0} && A \ar[rr]^{\id_A} \ar[dr]_{\phi_1} && A \ar[rr]^{\id_A} \ar[dr]_{\phi_2} && \dots \ar[r] & A \ar[d]_{\phi}^\cong \\ & B \ar[rr]_{\id_B} \ar[ur]^{\psi_1} && B \ar[rr]_{\id_B} \ar[ur]^{\psi_2} && B \ar[r]_{\id_B} & \dots \ar[r] & B } \end{equation} which is an approximate intertwining as in \cite[Definition 2.3.1]{Rordam-book-classification}. Thus $\phi_n$ converges point-norm to an \emph{isomorphism} $\phi$, and $\psi_n$ converges point-norm to $\psi = \phi^{-1}$, i.e. \begin{eqnarray} \lim_{n\to \infty} \|\phi(a) - (\Ad V_n \circ \phi_0 \circ \Ad U_n^\ast)(a)\| &=& 0 , \\ \lim_{n\to \infty} \| \psi(b) - (\Ad U_{n} \circ \psi_0 \circ \Ad V_{n-1}^\ast )(b) \| &=& 0, \end{eqnarray} for all $a\in A$ and $b\in B$. Note that this does \emph{not} (at least a priori) imply that $\phi$ and $\phi_0$ are approximately unitarily equivalent (see Remark \ref{r:apint}). \end{remark} \begin{remark}\label{r:apint} In \cite[Corollary 2.3.3 and 2.3.4]{Rordam-book-classification} on approximate intertwining, it is claimed that $\phi_0$ and $\phi$ in Remark \ref{r:approxintertwining} are approximately unitarily equivalent. This is true if there are unitaries $\widetilde U_n \in \multialg{B}$ such that $\phi_0 \circ \Ad U_n^\ast = \Ad \widetilde U_n^\ast \circ \phi_0$, and is therefore in particular true if all unitaries are in the minimal unitisations, or if $\phi_0$ is non-degenerate.
However, the result may fail in general. The mistake appears on Rørdam's list of mistakes in the book.\footnote{\verb+http://www.math.ku.dk/~rordam/Encyclopaedia.html+} This is related to the subtle annoyance that approximate unitary equivalence (with multiplier unitaries) is not preserved by composition. In fact, if $\phi_1, \phi_2 \colon A \to B$ and $\psi \colon B \to C$ are $\ast$-homomorphisms for which $\phi_1$ and $\phi_2$ are approximately unitarily equivalent, then it does \emph{not} follow that $\psi \circ \phi_1$ and $\psi \circ \phi_2$ are approximately unitarily equivalent. However, approximate Murray--von Neumann equivalence \emph{is} preserved by compositions. \end{remark} In this paper, whenever one obtains uniqueness of $\ast$-homomorphisms with unitaries, the unitaries are (a priori) only in the multiplier algebra and not necessarily in the minimal unitisation as above. Hence the following proposition is included in order to remedy this. \begin{proposition}\label{p:apintMvN} Let $A$ and $B$ be separable $C^\ast$-algebras, and suppose that $\phi_0 \colon A \to B$ and $\psi_0 \colon B \to A$ are $\ast$-homomorphisms such that $\psi_0 \circ \phi_0 \sim_{\au} \id_A$ and $\phi_0 \circ \psi_0 \sim_{\au} \id_B$ (with multiplier unitaries). Then there is an isomorphism $\phi \colon A \xrightarrow \cong B$ such that $\phi_0 \sim_{\aMvN} \phi$ and $\psi_0 \sim_{\aMvN} \phi^{-1}$. \end{proposition} \begin{proof} We adopt the notation from Remark \ref{r:approxintertwining}. Let $(e_n)$ be an approximate identity in $A$, and let $w_n = V_n^\ast \phi_0(U_n e_n)$. We claim that $(w_n)$ implements an approximate Murray--von Neumann equivalence of $\phi_0$ and $\phi$. Similarly, one shows that $\psi_0$ and $\phi^{-1}$ are approximately Murray--von Neumann equivalent. Clearly one has $w_n \phi_0(a) w_n^\ast \to \phi(a)$ for any $a\in A$, so it remains to check $w_n^\ast \phi(a) w_n \to \phi_0(a)$ for any $a\in A$.
Note that \begin{eqnarray} w_n^\ast \phi_n(a) w_n &=& \phi_0(e_n U_n^\ast) V_n V_n^\ast \phi_0(U_n a U_n^\ast) V_n V_n^\ast \phi_0(U_n e_n) \\ &=& \phi_0(e_n a e_n).\label{eq:apintMvN} \end{eqnarray} Given $a\in A$ and $\epsilon>0$, pick $N\in \mathbb N$ such that $\| \phi_n(a) - \phi(a)\| < \epsilon/2$ and $\|e_n a e_n - a\| < \epsilon /2$ for $n\geq N$. Then, as $\| w_n \| \leq 1$ for all $n$, we have \begin{eqnarray} \| w_n^\ast \phi(a) w_n - \phi_0(a) \| &<& \| w_n^\ast \phi_n(a) w_n - \phi_0(a)\| + \epsilon/2 \\ &\stackrel{\eqref{eq:apintMvN}}{=}& \| \phi_0(e_n a e_n -a) \| + \epsilon/2 \\ &<& \epsilon, \end{eqnarray} for $n\geq N$. Hence $\phi \sim_{\aMvN} \phi_0$, and similarly $\phi^{-1} \sim_{\aMvN} \psi_0$. \end{proof} \begin{remark} The following asymptotic intertwining -- Lemma \ref{l:asint} -- looks very technical; here is the main idea: If $(u_t)_{t\in \mathbb R_+}$ and $(v_t)_{t\in \mathbb R_+}$ implement asymptotic unitary equivalences $\psi_0 \circ \phi_0 \sim_\asu \id_A$ and $\phi_0 \circ \psi_0 \sim_\asu \id_B$ respectively, then follow the strategy of Remark \ref{r:approxintertwining} and construct $s, r \colon \mathbb N \to \mathbb N$. The goal will be to construct a (well-behaved) homotopy from $\Ad V_{n} \circ \phi_0 \circ \Ad U_{n}^\ast$ (as defined in Remark \ref{r:approxintertwining}) to \begin{equation} \Ad V_{n+1} \circ \phi_0 \circ \Ad U_{n+1}^\ast = \Ad V_n \circ \Ad v_{s(n+1)} \circ \phi_0 \circ \Ad u_{r(n+1)}^\ast \circ \Ad U_n^\ast. \end{equation} If done in a suitably nice way, this will induce a (well-behaved) homotopy from $\phi_0$ to the isomorphism $\phi = \lim_{n\to \infty} \Ad V_n \circ \phi_0 \circ \Ad U_n^\ast$.
The idea for constructing this homotopy is very easy: a simple observation shows that \begin{equation} \lim_{t\to \infty} \| (\Ad v_{s(n) + t} \circ \phi_0 \circ \Ad u_{r(n) + t} ^\ast )(a) - \phi_0(a) \| = 0, \qquad a\in A, \end{equation} so $\Phi \colon A \to C([0,\infty], B)$ given by \begin{equation} \Phi(-)(t) = \left\{ \begin{array}{ll} \Ad V_n \circ \Ad v_{s(n+1) +t } \circ \phi_0 \circ \Ad u_{r(n+1)+t}^\ast \circ \Ad U_n^\ast, & t\in [0,\infty) \\ \Ad V_n \circ \phi_0 \circ \Ad U_n^\ast , & t= \infty \end{array} \right. \end{equation} is the desired homotopy. \end{remark} \begin{lemma}\label{l:asint} Let $A$ and $B$ be separable $C^\ast$-algebras and suppose that \begin{equation} \phi_0 \colon A \to B, \qquad \psi_0 \colon B \to A \end{equation} are $\ast$-homomorphisms such that $\psi_0 \circ \phi_0 \sim_{\asu} \id_A$ and $\phi_0 \circ \psi_0 \sim_{\asu} \id_B$. Then there exist an isomorphism $\phi \colon A \xrightarrow \cong B$ and (\emph{not necessarily continuous!}) families of unitaries $(U_t)_{t\in \mathbb R_+}$ and $(V_t)_{t\in \mathbb R_+}$ in $\multialg{A}$ and $\multialg{B}$ respectively, with $U_0 = 1_{\multialg{A}}$ and $V_0 = 1_{\multialg{B}}$, such that \begin{equation}\label{eq:asintermaps} \mathbb R_+ \ni t\mapsto (\Ad V_t \circ \phi_0 \circ \Ad U_t^\ast)(a), \qquad [1, \infty) \ni t\mapsto (\Ad U_{t} \circ \psi_0 \circ \Ad V_{t-1}^\ast)(b) \end{equation} are continuous and converge to $\phi(a)$ and $\phi^{-1}(b)$ respectively for all $a\in A$ and $b\in B$. \end{lemma} \begin{proof} Let $(u_t)_{t\in \mathbb R_+}$ and $(v_t)_{t\in \mathbb R_+}$ be continuous unitary paths in $\multialg{A}$ and $\multialg{B}$ respectively such that \begin{equation} \lim_{t\to \infty} \| u_t^\ast \psi_0(\phi_0(a)) u_t - a \| = 0, \qquad \lim_{t\to \infty} \| v_t^\ast \phi_0(\psi_0(b)) v_t - b \| = 0, \end{equation} for all $a\in A$ and $b\in B$. The first part of the proof is very much just a usual intertwining argument.
The details will be filled in since some of the estimates below will be needed.\footnote{I could have easily waved my hands and said something like ``the unitaries in the classical intertwining may be chosen with these additional properties'', but I choose to fill in all the details for completeness.} Let $\mathcal F_1' \subset \mathcal F_2' \subset \dots \subset A$ and $\mathcal G_1' \subset \mathcal G_2' \subset \dots \subset B$ be nested sequences of finite sets such that $\overline{\bigcup \mathcal F_n'} =A$ and $\overline{\bigcup \mathcal G_n'}=B$. We construct $\mathcal F_n$, $\mathcal G_n$, $U_n$, $V_n$, $r(n)$ and $s(n)$ recursively. Let $\mathcal F_1 = \mathcal F_1'$ and pick $1\leq r(1) \in \mathbb R_+$ such that \begin{equation} \| u_t^\ast \psi_0(\phi_0(x)) u_t - x \| \leq 2^{-1} , \qquad t\geq r(1), \; x\in \mathcal F_1. \end{equation} Let $U_1 := u_{r(1)}$. Let $\mathcal G_1 = \phi_0(\mathcal F_1) \cup \mathcal G_1'$, pick $1 \leq s(1)\in \mathbb R_+$ such that \begin{equation} \| v_t^\ast \phi_0(\psi_0(y)) v_t - y \| \leq 2^{-1} , \qquad t\geq s(1), \; y\in \mathcal G_1, \end{equation} and define $V_1 := v_{s(1)}$. Having constructed $\mathcal F_{n-1}$, $\mathcal G_{n-1}$, $U_{n-1}$, $V_{n-1}$, $r(n-1)$ and $s(n-1)$, we construct the next step as follows: Let \begin{equation} \mathcal F_n = u_{r(n-1)} \mathcal F_{n-1} u_{r(n-1)}^\ast \cup \psi_0(\mathcal G_{n-1}) \cup \mathcal F_n' \cup U_{n-1} \mathcal F_{n-1} U_{n-1}^\ast, \end{equation} pick $\max(n, r(n-1)) \leq r(n) \in \mathbb R_+$, such that \begin{equation}\label{eq:apintF} \| u_t^\ast \psi_0(\phi_0(a)) u_t - a \| \leq 2^{-n} , \qquad t\geq r(n), \; a\in \mathcal F_n,\footnote{In the classical intertwining, this norm estimate is only chosen for $t=r(n)$. Here it is needed for \emph{all} $t\geq r(n)$, which is of course possible.} \end{equation} and let $U_n = u_{r(n)} U_{n-1}$.
Similarly, let \begin{equation} \mathcal G_n = v_{s(n-1)} \mathcal G_{n-1} v_{s(n-1)}^\ast \cup \phi_0(\mathcal F_n) \cup \mathcal G_n' \cup V_{n-1} \mathcal G_{n-1} V_{n-1}^\ast, \end{equation} pick $\max(n, s(n-1)) \leq s(n) \in \mathbb R_+$, such that \begin{equation}\label{eq:apintG} \| v_t^\ast \phi_0(\psi_0(b)) v_t - b \| \leq 2^{-n} , \qquad t\geq s(n), \; b\in \mathcal G_n, \end{equation} and let $V_n = v_{s(n)} V_{n-1}$. Note that we picked $r(n),s(n)\geq n$ to ensure that $r(n)$ and $s(n)$ both tend to $\infty$. As in the usual approximate intertwining, let \begin{equation} \phi_n = \Ad V_n\circ \phi_0 \circ \Ad U_n^\ast, \qquad \psi_n = \Ad U_{n} \circ \psi_0 \circ \Ad V_{n-1}^\ast. \end{equation} These induce an approximate intertwining as in \eqref{eq:approxint}, see \cite[Definition 2.3.1]{Rordam-book-classification},\footnote{With the notation in \cite[Definition 2.3.1]{Rordam-book-classification} one has $\alpha_n = \id_A$ and $\beta_n = \id_B$.} so by \cite[Proposition 2.3.2]{Rordam-book-classification} it follows that $\phi_n$ converges point-norm to an isomorphism $\phi \colon A \xrightarrow \cong B$ and $\psi_n$ converges point-norm to $\phi^{-1}$. Fix (necessarily orientation-reversing) homeomorphisms \begin{equation} h_n \colon (n,n+1] \to [r(n+1), \infty), \qquad k_n \colon (n,n+1] \to [s(n+1), \infty), \end{equation} for all $n\in \mathbb N_0$, so in particular $h_n(n+1) = r(n+1)$ and $k_n(n+1) = s(n+1)$. We define $U_t$ and $V_t$ for all $t\in \mathbb R_+$, by letting $U_0=1_{\multialg{A}}$, $V_0 = 1_{\multialg{B}}$, and \begin{equation} U_{t} := u_{h_n(t)} U_n, \qquad V_{t} := v_{k_n(t)} V_n \end{equation} for all $n\in \mathbb N_0$ and $t\in (n,n+1]$. This is well-defined, since \begin{equation} U_{n+1} = u_{r(n+1)} U_n = u_{h_n(n+1)} U_n \end{equation} for all $n\in \mathbb N_0$, and similarly for the $V_n$'s.
Let \begin{equation} \phi_t := \Ad V_t \circ \phi_0 \circ \Ad U_t^\ast \end{equation} for $t\in \mathbb R_+$, and \begin{equation} \psi_t := \Ad U_t \circ \psi_0 \circ \Ad V_{t-1}^\ast \end{equation} for $t\in [1,\infty)$. Given $a\in A$, we check that $t \mapsto \phi_t(a)$ is continuous on $\mathbb R_+$. A similar argument shows that $t\mapsto \psi_t(b)$ is continuous on $[1,\infty)$, so this is omitted. Clearly $t \mapsto U_t$ and $t\mapsto V_t$ are continuous when restricted to intervals $(n,n+1]$. Hence $t \mapsto \phi_t(a)$ is continuous at any point of $\mathbb R_+ \setminus \mathbb N_0$, and is left continuous at all points in $\mathbb N$. Hence, it suffices to show that $t \mapsto \phi_t(a)$ is right continuous at any point $n$ in $\mathbb N_0$. Note that \begin{eqnarray} \psi_0(\phi_0(U_n a U_n^\ast) ) &=& \lim_{t\to \infty} u_t U_n a U_n^\ast u_t^\ast \nonumber\\ &=& \lim_{t\to n^+} u_{h_n(t)} U_n a U_n^\ast u_{h_n(t)}^\ast \nonumber\\ &=& \lim_{t\to n^+} U_t a U_t^\ast \end{eqnarray} and similarly \begin{eqnarray} && V_n^\ast \phi_0 (U_n a U_n^\ast) V_n \nonumber\\ &=& \lim_{t\to \infty} V_n^\ast v_t^\ast \phi_0(\psi_0( \phi_0(U_n a U_n^\ast))) v_t V_n \nonumber\\ &=& \lim_{t\to n^+} V_n^\ast v_{k_n(t)}^\ast \phi_0(\psi_0(\phi_0( U_n a U_n^\ast )))v_{k_n(t)} V_n \nonumber\\ &=& \lim_{t \to n^+} V_t^\ast \phi_0 (\psi_0(\phi_0(U_n a U_n^\ast))) V_t. \end{eqnarray} It easily follows (since $\Ad V_t \circ \phi_0$ is contractive for each $t$), that \begin{eqnarray} \lim_{t\to n^+} \phi_t(a) &=& \lim_{t\to n^+} ( \Ad V_t \circ \phi_0 \circ \Ad U_t^\ast)(a) \nonumber\\ &=& \lim_{t\to n^+} (\Ad V_t \circ \phi_0 \circ \psi_0 \circ \phi_0\circ \Ad U_n^\ast)(a) \nonumber\\ &=& (\Ad V_n \circ \phi_0 \circ \Ad U_n^\ast)(a) \nonumber\\ &=& \phi_n(a). \end{eqnarray} Hence $t \mapsto \phi_t(a)$ is right continuous at $n$, and thus continuous on all of $\mathbb R_+$.
It remains to check that $\phi_t(a) \to \phi(a)$ and $\psi_t(b) \to \psi(b)$ for all $a\in A$ and $b\in B$. Again, we only check the former, as the latter is shown in the exact same way. Fix $a\in A$ and $\epsilon >0$. Pick $N \in \mathbb N$ with $2^{-N} < \epsilon /5$ such that there is an $a'\in \mathcal F_{{N}-1}$ with $\| a - a'\| < \epsilon/5$, and such that $\| \phi(a') - \phi_n(a') \| < \epsilon/5$ for any $n\geq N$. We will check that $\| \phi_t(a) - \phi(a) \| < \epsilon$ for any $t> N$. This will finish the proof. Fix $n$ such that $t\in (n,n+1]$ and note that $n\geq N$. In the following, the notation $c \approx_\delta d$ means $\| c - d\| \leq \delta$. Thus \begin{eqnarray} && \phi_t(a) \nonumber\\ &\approx_{\epsilon/5}& \phi_t(a') \nonumber\\ &=\quad \; & ( \Ad V_n \circ \Ad v_{k_n(t)} \circ \phi_0\circ \Ad u_{h_n(t)}^\ast \circ \Ad U_n^\ast)( a')\nonumber\\ &\approx_{\epsilon/5}& (\Ad V_n \circ \Ad v_{k_n(t)} \circ \phi_0 \circ \psi_0 \circ \phi_0 \circ \Ad U_n^\ast) (a') \label{eq:Vnvkntphi0} \\ &\approx_{\epsilon/5}& ( \Ad V_n \circ \phi_0 \circ \Ad U_n^\ast) (a') \label{eq:Vnphi0Un}\\ &=\quad \;& \phi_n(a') \nonumber\\ & \approx_{\epsilon/5}& \phi(a') \nonumber\\ &\approx_{\epsilon/5}& \phi(a). \end{eqnarray} At \eqref{eq:Vnvkntphi0} we used that $a' \in \mathcal F_{N - 1} \subseteq \mathcal F_{n-1}$, so $x = U_n a' U_n^\ast \in \mathcal F_n$ (by construction of $\mathcal F_n$), and thus \eqref{eq:Vnvkntphi0} follows from \eqref{eq:apintF} and the choice of $N$. Similarly, at \eqref{eq:Vnphi0Un} we used (since $U_n a' U_n^\ast \in \mathcal F_n$) that $y = \phi_0(U_n a' U_n^\ast) \in \mathcal G_n$, so \eqref{eq:Vnphi0Un} follows from \eqref{eq:apintG} and the choice of $N$. This finishes the proof.
\end{proof} \begin{proposition}\label{p:asint} Let $A$ and $B$ be separable $C^\ast$-algebras, and suppose that $\phi_0 \colon A \to B$ and $\psi_0 \colon B \to A$ are $\ast$-homomorphisms such that $\psi_0 \circ \phi_0 \sim_{\asu} \id_A$ and $\phi_0 \circ \psi_0 \sim_{\asu} \id_B$. Then there exist an isomorphism $\phi \colon A \xrightarrow \cong B$, and a homotopy $(\phi_s)_{s\in [0,1]}$ from $\phi_0$ to $\phi$, such that $\phi_s \sim_{\aMvN} \phi_t$ for all $s,t\in [0,1]$. \end{proposition} \begin{proof} For convenience, we replace $[0,1]$ with $[0,\infty]$. Let $\phi$, $(U_t)_{t\in \mathbb R_+}$ and $(V_t)_{t\in \mathbb R_+}$ be given as in Lemma \ref{l:asint}. Let \begin{equation} \phi_t = \left\{ \begin{array}{ll} \Ad V_t \circ \phi_0 \circ \Ad U_t^\ast & \textrm{ for } t\in [0,\infty), \\ \phi & \textrm{ for } t=\infty. \end{array} \right. \end{equation} By Lemma \ref{l:asint}, this gives a well-defined homotopy from $\phi_0$ to $\phi$. To show $\phi_s \sim_{\aMvN} \phi_t$ for all $s,t\in [0,\infty]$, it is enough to show $\phi_0 \sim_{\aMvN} \phi_t$ for $t\in (0,\infty]$. If $t\in (0,\infty)$, $(e_n)$ is an approximate identity in $A$, and $a_n = \phi_0(e_n U_t^\ast) V_t$, then \begin{equation} a_n^\ast \phi_0(-) a_n = \phi_t(e_n(-)e_n) \to \phi_t, \qquad a_n \phi_t(-) a_n^\ast = \phi_0(e_n (-) e_n) \to \phi_0, \end{equation} point-norm, so $\phi_0 \sim_{\aMvN} \phi_t$. If $t=\infty$, an argument identical to that in the proof of Proposition \ref{p:apintMvN} shows that $\phi_0$ and $\phi$ are approximately\footnote{\emph{Not} asymptotically, since the maps $t \mapsto U_t$ and $t\mapsto V_t$ are not necessarily continuous.} Murray--von Neumann equivalent. \end{proof} \begin{question} Is it possible to pick an isomorphism $\phi$ in Proposition \ref{p:asint}, such that $\phi$ and $\phi_0$ are asymptotically Murray--von Neumann equivalent?
\end{question} \begin{remark} Note that in Proposition \ref{p:asint}, one does \emph{not} get that $\psi_0$ is homotopic to $\phi^{-1}$. However, one does get that $\Ad U_1 \circ \psi_0$ \emph{is} homotopic to $\phi^{-1}$. So if one can choose a path of unitaries $(u_t)_{t\in \mathbb R_+}$ in $\multialg{A}$ which implements $\psi_0 \circ \phi_0 \sim_\asu \id_A$ and which satisfies $u_0 = 1_{\multialg{A}}$, then $\psi_0$ and $\phi^{-1}$ are also homotopic. This is for instance always the case if $A$ is stable, since the unitary group of $\multialg{A}$ is then path-connected by \cite{CuntzHigson-Kuipersthm}. \end{remark} \begin{remark} As observed in \cite[Remark 3.14]{Gabe-O2class}, if one lets $\phi \colon \mathcal O_2 \to \mathcal O_2 \otimes \mathcal K$ and $\psi \colon \mathcal O_2 \otimes \mathcal K \to \mathcal O_2$ be embeddings, then $\psi \circ \phi \sim_{\asMvN} \id_{\mathcal O_2}$ and $\phi \circ \psi \sim_{\asMvN} \id_{\mathcal O_2 \otimes \mathcal K}$. Thus, asymptotic (and approximate) Murray--von Neumann equivalence is not strong enough of an equivalence relation to get classification up to isomorphism. However, by \cite[Corollary 3.13]{Gabe-O2class}, one does obtain classification up to \emph{stable} isomorphism. \end{remark} \section{A unitary path and some key lemmas}\label{s:unitary} This section is dedicated to showing that there is a unitary path in the unitisation $(\mathcal O_2 \otimes \mathcal K)^\sim$ with some very desirable properties. Constructing this path is very elementary and it is the new key ingredient in proving the main theorems of this paper. The only properties of $\mathcal O_2$ that are used are that $M_3(\mathcal O_2) \cong \mathcal O_2$, that any two non-trivial projections\footnote{By non-trivial, I mean that they are non-zero and not the unit.} in $\mathcal O_2$ are unitarily equivalent, and that the unitary group of $\mathcal O_2$ is path-connected.
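The second of these facts can be traced back to the $K$-theory of $\mathcal O_2$; the following display is only a sketch of the standard argument, recorded here for convenience (with $s_1, s_2$ denoting the generating Cuntz isometries).

```latex
% Sketch: the Cuntz isometries s_1, s_2 generating O_2 satisfy
\begin{equation*}
  s_1^\ast s_1 = s_2^\ast s_2 = 1_{\mathcal O_2}, \qquad
  s_1 s_1^\ast + s_2 s_2^\ast = 1_{\mathcal O_2},
\end{equation*}
% so 1 ~ s_1 s_1^* and 1 ~ s_2 s_2^* = 1 - s_1 s_1^*, whence
% [1] = 2[1] = 0 in K_0(O_2); in fact K_0(O_2) = 0 by Cuntz's
% computation in \cite{Cuntz-K-theoryI}. As O_2 is simple and
% purely infinite, any two non-zero projections with the same
% K_0-class -- so here any two non-trivial projections -- are
% Murray--von Neumann equivalent, and likewise their complements,
% which yields unitary equivalence.
```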
\begin{lemma}\label{l:unitarypath} There exists a continuous path of unitaries $(u_t)_{t\in \mathbb R_+}$ in $(\mathcal O_2 \otimes \mathcal K)^\sim$ with $u_0 = 1_{(\mathcal O_2 \otimes \mathcal K)^\sim}$, such that the following hold: \begin{itemize} \item[$(a)$] $(u_t^\ast (1_{\mathcal O_2}\otimes e_{1,1}) u_t)_{t\in \mathbb R_+}$ is a continuous (not necessarily increasing) approximate identity of projections for $\mathcal O_2\otimes \mathcal K$; \item[$(b)$] for any $x\in \mathcal O_2\otimes \mathcal K$, $u_t x$ converges in norm to an element in $\mathcal O_2\otimes \mathcal K$ as $t\to \infty$. \end{itemize} \end{lemma} \begin{proof} We construct unitary paths $v_{n,t}$ for $n\in \mathbb N_0$, $t\in [n,n+1]$ with $v_{n,n}=1_{(\mathcal O_2 \otimes \mathcal K)^\sim}$, and let $u_t = u_n v_{n,t}$ for $t\in [n,n+1]$ (defined recursively). In particular, $u_0 = v_{0,0} = 1_{(\mathcal O_2 \otimes \mathcal K)^\sim}$. Recall the following facts about $\mathcal O_2$, which may be obtained using results from \cite{Cuntz-K-theoryI}: whenever $p,q\in \mathcal O_2$ are projections such that $0<p,q<1_{\mathcal O_2}$, then $p$ and $q$ are Murray--von Neumann equivalent, and $1_{\mathcal O_2}-p$ and $1_{\mathcal O_2}-q$ are Murray--von Neumann equivalent. Thus there is a unitary $u\in \mathcal O_2$, such that $u^\ast p u = q$. Moreover, as the unitary group of $\mathcal O_2$ is connected we may pick a continuous path $[0,1] \ni t \mapsto r_t\in \mathcal O_2$ of unitaries with $r_0 = 1_{\mathcal O_2}$ and $r_1 = u$. Fix $n\in \mathbb N_0$. 
As \begin{equation} \left( \sum_{k=n+1}^{n+3} 1_{\mathcal O_2} \otimes e_{k,k} \right) \left( \mathcal O_2 \otimes \mathcal K \right) \left( \sum_{k=n+1}^{n+3} 1_{\mathcal O_2} \otimes e_{k,k} \right) \cong M_3(\mathcal O_2) \cong \mathcal O_2, \end{equation} we may find a norm continuous path $[n,n+1] \ni t \mapsto r_{n,t} \in \mathcal O_2 \otimes \mathcal K$, such that \begin{equation} r_{n,n} = r_{n,t}^\ast r_{n,t} = r_{n,t} r_{n,t}^\ast = \sum_{k=n+1}^{n+3} 1_{\mathcal O_2} \otimes e_{k,k}, \quad \text{ for all }t\in [n,n+1], \end{equation} and such that \begin{equation}\label{eq:O2stuff} r_{n,n+1}^\ast ( 1_{\mathcal O_2} \otimes e_{n+1,n+1}) r_{n,n+1} = 1_{\mathcal O_2} \otimes (e_{n+1,n+1} + e_{n+2,n+2}). \end{equation} Let $v_{n,t} = r_{n,t} + \left( 1_{(\mathcal O_2 \otimes \mathcal K)^\sim} - \sum_{k=n+1}^{n+3} 1_{\mathcal O_2} \otimes e_{k,k} \right)$ for $t\in [n,n+1]$. Clearly $v_{n,t}$ is a unitary for each $n\in \mathbb N_0$, $t\in [n,n+1]$ and $v_{n,n} = 1_{(\mathcal O_2 \otimes \mathcal K)^\sim}$. Thus \begin{equation} u_t := u_n v_{n,t} = v_{0,1} v_{1,2} \cdots v_{n-1,n} v_{n,t} \end{equation} for $t\in [n,n+1]$ defines a continuous path of unitaries $(u_t)_{t\in \mathbb R_+}$ in $(\mathcal O_2 \otimes \mathcal K)^\sim$. An easy induction argument shows that $u_n^\ast (1_{\mathcal O_2}\otimes e_{1,1}) u_n = \sum_{k=1}^{n+1} 1_{\mathcal O_2} \otimes e_{k,k}$ for $n\in \mathbb N_0$. 
This is obvious for $n=0$, and by induction \begin{eqnarray} u_n^\ast (1_{\mathcal O_2}\otimes e_{1,1}) u_n &=& v_{n-1,n}^\ast u_{n-1}^\ast (1_{\mathcal O_2} \otimes e_{1,1}) u_{n-1} v_{n-1,n} \nonumber\\ &=& v_{n-1,n}^\ast \left( \sum_{k=1}^n 1_{\mathcal O_2} \otimes e_{k,k} \right) v_{n-1,n} \nonumber\\ &=& \sum_{k=1}^{n-1} 1_{\mathcal O_2} \otimes e_{k,k} + r_{n-1,n}^\ast (1_{\mathcal O_2} \otimes e_{n,n}) r_{n-1,n} \nonumber\\ &\stackrel{\eqref{eq:O2stuff}}{=}& \sum_{k=1}^{n+1} 1_{\mathcal O_2} \otimes e_{k,k}.\label{eq:une11un} \end{eqnarray} We prove $(a)$ and $(b)$. $(a)$: Clearly each $u_t^\ast (1_{\mathcal O_2}\otimes e_{1,1}) u_t$ is a projection. For $t\in [n,n+1]$ one has \begin{eqnarray} u_t^\ast(1_{\mathcal O_2}\otimes e_{1,1}) u_t &=& v_{n,t}^\ast u_{n}^\ast (1_{\mathcal O_2}\otimes e_{1,1}) u_n v_{n,t} \nonumber\\ &\stackrel{\eqref{eq:une11un}}{=}& v_{n,t}^\ast \left( \sum_{k=1}^{n+1} 1_{\mathcal O_2}\otimes e_{k,k} \right) v_{n,t} \nonumber\\ &= & \sum_{k=1}^n 1_{\mathcal O_2} \otimes e_{k,k} + r_{n,t}^\ast (1_{\mathcal O_2} \otimes e_{n+1,n+1}) r_{n,t} \nonumber\\ & \geq& \sum_{k=1}^n 1_{\mathcal O_2} \otimes e_{k,k}. \end{eqnarray} This implies that $u_t^\ast(1_{\mathcal O_2}\otimes e_{1,1} ) u_t$ is a not necessarily increasing approximate identity. $(b)$: Let $x\in \mathcal O_2\otimes \mathcal K$, and fix $\epsilon >0$. Pick $n\in \mathbb N$ and $x_0\in \mathcal O_2\otimes M_n \subseteq \mathcal O_2 \otimes \mathcal K$ such that $\| x - x_0\| <\epsilon/2$. For any $t\geq n$ we have $v_{\lfloor t \rfloor, t} x_0 = x_0$, and also, $v_{n+j, n+j+1} x_0 = x_0$ for any $j\geq 0$, so \begin{equation} u_t x_0 = v_{0,1} \dots v_{n-1,n} v_{n,n+1} \dots v_{\lfloor t \rfloor, t} x_0 = v_{0,1} \dots v_{n-1,n} x_0 \end{equation} is constant for $t\geq n$. Hence for $s,t \geq n$ we get \begin{equation} \| u_s x - u_t x\| < \| u_s x_0 - u_t x_0\| + \epsilon = \epsilon, \end{equation} and therefore $(u_t x)_{t\in \mathbb R_+}$ converges to some element. 
\end{proof} Instead of approximate identities lying \emph{in} a $C^\ast$-algebra, it is convenient to consider nets of multipliers which \emph{act} as approximate identities. More precisely, let $D$ be a $C^\ast$-algebra. A net $(e_\lambda)_{\lambda \in \Lambda}$ of positive contractions in $\multialg{D}$ is said to \emph{act as an approximate identity} (on $D$) if $\lim_{\lambda} \| e_\lambda d - d\| = 0$ for all $d\in D$. The net is not assumed to be increasing. \begin{corollary}\label{c:unitarypath} Let $B$ be a $C^\ast$-algebra such that there is a unital embedding $\iota \colon \mathcal O_2 \hookrightarrow \multialg{B}$ and let $\overline \iota \colon (\mathcal O_2 \otimes \mathcal K)^\sim \hookrightarrow \multialg{B \otimes \mathcal K}$ be the induced unital embedding. There is a continuous path of unitaries $(v_t)_{t\in \mathbb R_+}$ in $\overline \iota( (\mathcal O_2 \otimes \mathcal K)^\sim)$ with $v_0 = 1_{\multialg{B\otimes \mathcal K}}$, such that the following hold: \begin{itemize} \item[$(a)$] $(v_t^\ast ( 1_{\multialg{B}} \otimes e_{1,1}) v_t)_{t\in \mathbb R_+}$ acts as an approximate identity on $B \otimes \mathcal K$; \item[$(b)$] $v_t b v_t^\ast$ converges in norm to an element in $B\otimes \mathcal K$ as $t\to \infty$, for any $b\in B \otimes \mathcal K$. \end{itemize} \end{corollary} \begin{proof} Let $(u_t)_{t\in \mathbb R_+}$ be a unitary path as in Lemma \ref{l:unitarypath}, and let $v_t := \overline \iota(u_t)$. Clearly $t\mapsto v_t$ is continuous and $v_0 = 1_{\multialg{B\otimes \mathcal K}}$, so only $(a)$ and $(b)$ remain to be checked. Since $\iota(\mathcal O_2) \otimes \mathcal K$ is a non-degenerate $C^\ast$-subalgebra of $\multialg{B} \otimes \mathcal K$, part $(a)$ easily follows. For $(b)$, it suffices to check the condition for $b= b' \otimes yz$ with $b'\in B$ and $y,z\in \mathcal K$.
Then \begin{equation} b = \overline \iota(1_{\mathcal O_2} \otimes y) (b'\otimes 1_{\multialg{\mathcal K}} )\overline{\iota}(1_{\mathcal O_2} \otimes z), \end{equation} so $v_t \overline{\iota}(1_{\mathcal O_2}\otimes y) = \overline{\iota}( u_t (1_{\mathcal O_2}\otimes y))$ converges in norm to an element $y' \in \overline{\iota}(\mathcal O_2 \otimes \mathcal K) \subseteq \multialg{B} \otimes \mathcal K$, and similarly $\overline{\iota}(1_{\mathcal O_2}\otimes z) v_t^\ast$ converges to an element $z' \in \multialg{B} \otimes \mathcal K$. Thus $v_t b v_t^\ast$ converges in norm to $y' (b'\otimes 1_{\multialg{\mathcal K}}) z' \in B\otimes \mathcal K$. \end{proof} The following is the key lemma for lifting a $KK$-element (even in the ideal-related setting) to a $\ast$-homomorphism. Note that if $\phi, \theta \colon A\to B$ are $\ast$-homomorphisms, then $\phi(-) \otimes e_{1,1} + \theta(-) \otimes (1_{\multialg{\mathcal K}} - e_{1,1}) \colon A \to \multialg{B\otimes \mathcal K}$ is the diagonal $\ast$-homomorphism $\phi \oplus \theta \oplus \theta \oplus \cdots$, see Remark \ref{r:infrep}. \begin{lemma}[Key lemma for existence]\label{l:keyexistence} Let $A$ and $B$ be $C^\ast$-algebras, let $\theta \colon A \to B$ be a $\ast$-homomorphism, and suppose that there is a unital embedding $\mathcal O_2 \hookrightarrow \multialg{B} \cap \theta(A)'$. Let $\theta_\infty = \theta \otimes 1_{\multialg{\mathcal K}} \colon A \to \multialg{B \otimes \mathcal K}$. Then there is a norm-continuous path $(v_t)_{t\in \mathbb R_+}$ of unitaries in $\multialg{B\otimes \mathcal K} \cap \theta_\infty(A)'$ with $v_0 = 1_{\multialg{B\otimes \mathcal K}}$, which has the following property: Suppose that $\psi \colon A \to \multialg{B \otimes \mathcal K}$ is a $\ast$-homomorphism such that $\psi(a) - \theta_\infty(a)\in B \otimes \mathcal K$ for all $a\in A$.
Then there exists a $\ast$-homomorphism $\phi \colon A \to B$ such that $v_t \psi(-) v_t^\ast$ converges point-norm to the $\ast$-homomorphism \begin{equation} \phi \otimes e_{1,1} + \theta \otimes (1_{\multialg{\mathcal K}} - e_{1,1}) \colon A \to \multialg{B \otimes \mathcal K}. \end{equation} \end{lemma} \begin{proof} Fix a unital embedding $\iota \colon \mathcal O_2 \hookrightarrow \multialg{B} \cap \theta(A)'$ and let $\overline \iota \colon (\mathcal O_2 \otimes \mathcal K)^\sim \hookrightarrow \multialg{B \otimes \mathcal K}$ be the induced unital $\ast$-homomorphism. Let $(v_t)_{t\in \mathbb R_+}$ be a unitary path in $\overline{\iota}((\mathcal O_2 \otimes \mathcal K)^\sim)$ as in Corollary \ref{c:unitarypath}. As $\theta_\infty = \theta \otimes 1_{\multialg{\mathcal K}}$, it easily follows that \begin{equation} \overline \iota ((\mathcal O_2 \otimes \mathcal K)^\sim) \subseteq \multialg{B \otimes \mathcal K} \cap \theta_\infty(A)'. \end{equation} For any $a\in A$ we get that \begin{equation} v_t \psi(a) v_t^\ast = v_t (\psi(a) - \theta_\infty(a)) v_t^\ast + v_t \theta_\infty(a) v_t^\ast = v_t (\psi(a) - \theta_\infty(a)) v_t^\ast + \theta_\infty(a) \end{equation} converges in norm for $t\to \infty$ by Corollary \ref{c:unitarypath}$(b)$. Thus $v_t \psi(-) v_t^\ast$ converges point-norm to a $\ast$-homomorphism $\psi_0 \colon A \to \multialg{B\otimes \mathcal K}$. In particular \begin{equation}\label{eq:limithom} \psi_0(a) = \lim_{t\to \infty} v_t \psi(a) v_t^\ast = \lim_{t\to \infty} v_t (\psi(a) - \theta_\infty(a)) v_t^\ast + \theta_\infty(a). \end{equation} In the following we let $e_{1,1}^\perp := 1_{\multialg{\mathcal K}} - e_{1,1}$. 
For any $b\in B\otimes \mathcal K$ we have \begin{eqnarray} && \lim_{t\to \infty} \| (1_{\multialg{B}} \otimes e_{1,1}^\perp) v_t b v_t^\ast \| \nonumber \\ &=& \lim_{t\to \infty} \| v_t^\ast (1_{\multialg{B}}\otimes e_{1,1}^\perp) v_t b\| \nonumber\\ &=& \lim_{t\to \infty} \| b - v_t^\ast (1_{\multialg{B}}\otimes e_{1,1})v_t b \| \nonumber\\ &=& 0, \end{eqnarray} where the last equality follows from Corollary \ref{c:unitarypath}$(a)$. Thus \begin{eqnarray} && (1_{\multialg{B}} \otimes e_{1,1}^\perp) \psi_0(a) \nonumber\\ &\stackrel{\eqref{eq:limithom}}{=}& \lim_{t\to \infty} (1_{\multialg{B}}\otimes e_{1,1}^\perp) v_t (\psi(a) - \theta_\infty(a)) v_t^\ast + (1_{\multialg{B}} \otimes e_{1,1}^\perp)\theta_\infty(a) \nonumber\\ &=& \theta(a) \otimes e_{1,1}^\perp.\label{eq:compcorner} \end{eqnarray} By symmetry, we have \begin{equation} (1_{\multialg{B}} \otimes e_{1,1}^\perp) \psi_0(a) = \theta(a) \otimes e_{1,1}^{\perp} = \psi_0(a) (1_{\multialg{B}}\otimes e_{1,1}^\perp). \end{equation} Hence $1_{\multialg{B}}\otimes e_{1,1}$ commutes with $\psi_0(A)$, so we obtain a $\ast$-homomorphism \begin{equation} \phi_0 = (1_{\multialg{B}}\otimes e_{1,1}) \psi_0(-) (1_{\multialg{B}}\otimes e_{1,1}) \colon A \to \multialg{B}\otimes e_{1,1}. \end{equation} Moreover, we have \begin{eqnarray} && (1_{\multialg{B}}\otimes e_{1,1}) \psi_0(a) \nonumber\\ & \stackrel{\eqref{eq:limithom}}{=} & \lim_{t\to \infty} (1_{\multialg{B}}\otimes e_{1,1}) v_t (\psi(a)- \theta_\infty(a)) v_t^\ast + \theta(a) \otimes e_{1,1} \nonumber\\ & \in& B\otimes \mathcal K, \end{eqnarray} so $\phi_0$ factors through $B \otimes e_{1,1}$. Let $\phi\colon A \to B$ be the corestriction of $\phi_0$.
Then \begin{eqnarray} \lim_{t\to \infty} v_t \psi(a) v_t^\ast &=& \psi_0(a) \nonumber \\ &=& (1_{\multialg{B}}\otimes e_{1,1}) \psi_0(a) + (1_{\multialg{B}} \otimes e_{1,1}^\perp) \psi_0(a) \nonumber\\ &\stackrel{\eqref{eq:compcorner}}{=}& \phi(a) \otimes e_{1,1} + \theta(a) \otimes (1_{\multialg{\mathcal K}}- e_{1,1}), \end{eqnarray} for all $a\in A$, which finishes the proof. \end{proof} The following will be the key lemma for proving uniqueness results. It shows how one passes from a stable uniqueness result \`a la Dadarlat--Eilers \cite[Theorems 3.8 and 3.10]{DadarlatEilers-asymptotic} to a stable uniqueness result where one stabilises with a smaller $\ast$-homomorphism. In the presence of strong $\mathcal O_\infty$-stability one even obtains a uniqueness result on the nose. \begin{lemma}[Key lemma for uniqueness]\label{l:keyuniqueness} Let $A$ and $B$ be $C^\ast$-algebras with $A$ separable, and let $\phi, \psi, \theta \colon A \to B$ be $\ast$-homomorphisms. Suppose that there is a unital embedding $\iota \colon \mathcal O_2 \hookrightarrow \multialg{B} \cap \theta(A)'$. If there is a continuous unitary path $(w_t)_{t\in \mathbb R_+}$ in $(B\otimes \mathcal K)^\sim$ such that \begin{equation}\label{eq:DElemma} \lim_{t\to \infty}\| w_t ( \phi(a) \otimes e_{1,1} + \theta(a) \otimes e_{1,1}^\perp) w_t^\ast - \psi(a) \otimes e_{1,1} - \theta(a) \otimes e_{1,1}^\perp \| = 0 \end{equation} for all $a\in A$, then $\phi \oplus \theta \sim_\asu \psi \oplus \theta$ considered as maps $A \to M_2(B)$. Here $e_{1,1}^\perp = 1_{\multialg{\mathcal K}} - e_{1,1} \in \multialg{\mathcal K}$. In addition, if $\phi$ and $\psi$ are both strongly $\mathcal O_\infty$-stable and approximately dominate $\theta$ then $\phi \sim_\asMvN \psi$. \end{lemma} \begin{proof} Let $\mathcal K_1 := e_{1,1}^\perp \mathcal K e_{1,1}^\perp$, which is isomorphic to $\mathcal K$. In particular, $1_{\multialg{\mathcal K_1}} = e_{1,1}^\perp$.
Let $\overline{\iota} \colon (\mathcal O_2 \otimes \mathcal K_1)^\sim \to \multialg{B \otimes \mathcal K_1} \subseteq \multialg{B\otimes \mathcal K}$ be the $\ast$-homomorphism induced by $\iota$. Let $(u_t)_{t\in \mathbb R_+}$ be a continuous unitary path in $(\mathcal O_2 \otimes \mathcal K_1)^\sim$ satisfying $(a)$ and $(b)$ of Lemma \ref{l:unitarypath} (with $e_{2,2}$ in place of $e_{1,1}$), and let $v_t := 1_{\multialg{B}} \otimes e_{1,1} + \overline{\iota}(u_t)$ be the induced unitary path in $\multialg{B\otimes \mathcal K}$. Note that \begin{eqnarray} && (1_{\multialg{B}} \otimes e_{1,1}) (b \otimes e_{1,1} + \theta(a) \otimes 1_{\multialg{\mathcal K_1}}) \nonumber\\ &=& b \otimes e_{1,1} \nonumber\\ &=& (b \otimes e_{1,1} + \theta(a) \otimes 1_{\multialg{\mathcal K_1}}) (1_{\multialg{B}} \otimes e_{1,1}) \end{eqnarray} for all $a\in A$ and $b\in B$. Clearly $\overline{\iota}(u_t)$ commutes with $\theta(A) \otimes 1_{\multialg{\mathcal K_1}}$, and $\overline{\iota}(u_t)$ annihilates $B \otimes e_{1,1}$. As $\overline{\iota}(\mathcal O_2 \otimes \mathcal K_1) = \iota(\mathcal O_2) \otimes \mathcal K_1 \subseteq \multialg{B} \otimes \mathcal K_1$ is a non-degenerate $C^\ast$-algebra, it follows from property $(a)$ of Lemma \ref{l:unitarypath}, that $\overline{\iota}(u_t (1_{\mathcal O_2} \otimes e_{2,2}) u_t^\ast)$ acts as an approximate identity on the corner $B \otimes \mathcal K_1$ of $B \otimes \mathcal K$. To ease notation, let $p_2 := 1_{\multialg{B}} \otimes (e_{1,1} + e_{2,2}) \in \multialg{B\otimes \mathcal K}$. Then \begin{equation} v_t p_2 v_t^\ast = 1_{\multialg{B}} \otimes e_{1,1} + \overline{\iota}(u_t (1_{\mathcal O_2} \otimes e_{2,2}) u_t^\ast) \end{equation} acts as an approximate identity on $B\otimes \mathcal K$. Thus, as $w_t \in (B\otimes \mathcal K)^\sim$ is continuous we may assume (by possibly reparametrising $v_t$) that \begin{equation}\label{eq:DEcommute} \lim_{t\to \infty} \| [v_t p_2 v_t^\ast, w_t]\| = 0. 
\end{equation} Let $v$ and $w$ in $\multialg{B \otimes \mathcal K}_\as$ be the elements induced by $(v_t)_{t\in \mathbb R_+}$ and $(w_t)_{t\in \mathbb R_+}$ respectively. To see that $\phi \oplus \theta \sim_{\asu} \psi \oplus \theta$ it suffices to find a partial isometry $V$ in $\multialg{B\otimes \mathcal K}_\as$ such that $VV^\ast = V^\ast V = p_2$, and \begin{equation} V (\phi(a) \otimes e_{1,1} + \theta(a) \otimes e_{2,2}) V^\ast = \psi(a) \otimes e_{1,1} + \theta(a) \otimes e_{2,2} \end{equation} for all $a\in A$. We will check that \begin{equation} V := p_2 v^\ast w v p_2 \in \multialg{B \otimes \mathcal K}_\as \end{equation} does the trick. As $v$ and $w$ are unitaries, we get \begin{equation} VV^\ast = p_2 v^\ast w v p_2 v^\ast w^\ast v p_2 \stackrel{\eqref{eq:DEcommute}}{=} p_2 v^\ast w w^\ast v p_2 = p_2, \end{equation} and similarly $V^\ast V = p_2$. For any $a\in A$ we have \begin{eqnarray} && V ( \phi (a) \otimes e_{1,1} + \theta(a) \otimes e_{2,2}) V^\ast \nonumber\\ &=& p_2 v^\ast w v p_2 ( \phi (a) \otimes e_{1,1} + \theta(a) \otimes e_{2,2}) p_2 v^\ast w^\ast v p_2 \nonumber\\ &= & p_2 v^\ast w v ( \phi (a) \otimes e_{1,1} + \theta(a) \otimes 1_{\multialg{\mathcal K_1}}) p_2 v^\ast w^\ast v p_2 \nonumber\\ &\stackrel{(\ast)}{=}& p_2 v^\ast w ( \phi (a) \otimes e_{1,1} + \theta(a) \otimes 1_{\multialg{\mathcal K_1}}) v p_2 v^\ast w^\ast v p_2 \nonumber\\ &\stackrel{\eqref{eq:DEcommute}}{=} & p_2 v^\ast w ( \phi (a) \otimes e_{1,1} + \theta(a) \otimes 1_{\multialg{\mathcal K_1}}) w^\ast v p_2 \nonumber\\ &\stackrel{\eqref{eq:DElemma}}{=}& p_2 v^\ast ( \psi (a) \otimes e_{1,1} + \theta(a) \otimes 1_{\multialg{\mathcal K_1}}) v p_2 \nonumber\\ &\stackrel{(\ast)}{=}& p_2 ( \psi (a) \otimes e_{1,1} + \theta(a) \otimes 1_{\multialg{\mathcal K_1}}) p_2 \nonumber\\ &=& \psi(a) \otimes e_{1,1} + \theta(a) \otimes e_{2,2}. 
\end{eqnarray} At the equations labeled with $(\ast)$, it was used that $v$ commutes with $b \otimes e_{1,1} + \theta(a) \otimes 1_{\multialg{\mathcal K_1}}$, as was shown earlier in the proof. Thus $\phi \oplus \theta \sim_{\asu} \psi \oplus \theta$. For the ``in addition'' part, note that the existence of a unital embedding $\mathcal O_2 \to \multialg{B} \cap \theta(A)'$ implies that $\theta$ is strongly $\mathcal O_2$-stable. Therefore $\phi \sim_\asMvN \psi$ by combining Propositions \ref{p:piabsorbing} and \ref{p:MvNeq}. \end{proof} \section{The Kirchberg--Phillips Theorem}\label{s:KP} This section contains the proofs of Theorems \ref{t:existsimple}, \ref{t:uniquesimple}, and \ref{t:KP}. Note that Theorem \ref{t:KPUCT} -- classification of Kirchberg algebras satisfying the UCT -- is an easy corollary of Theorem \ref{t:KP} as explained in the introduction, so I consider the proof of Theorem \ref{t:KPUCT} as complete. \subsection{KK-preliminaries}\label{ss:KKprel} For the construction of Kasparov's groups $KK(A,B)$ and of Skandalis' variation $KK_\nuc(A,B)$, the reader is referred to \cite{Blackadar-book-K-theory} and \cite{Skandalis-KKnuc} respectively. The constructions can also be obtained from Section \ref{s:KK} as a special case. Two pictures of $KK_\nuc$ will be emphasised: the Fredholm picture and the Cuntz pair picture. For this, fix a separable $C^\ast$-algebra $A$ and a $\sigma$-unital $C^\ast$-algebra $B$. All Hilbert modules $E$ are assumed to be right modules and countably generated. For Hilbert modules $E,F$, let $\mathcal B(E,F)$ be the space of adjointable operators $E\to F$, and let $\mathcal K(E,F)$ be the closed linear span of rank one operators $E\to F$.
I will refer to $\mathcal K(E,F)$ as the \emph{``compacts''}.\footnote{I write ``compacts'' instead of compacts to emphasise that ``compact'' operators are not actually compact in the usual sense.} \begin{remark}[The Fredholm picture] If $E$ is a Hilbert $B$-module, then a $\ast$-homo\-morphism $\psi \colon A \to \mathcal B(E)$ is called \emph{weakly nuclear} if $\langle \xi, \psi(-) \xi \rangle_E \colon A \to B$ is a nuclear map for any $\xi \in E$. In the case $E=B$, one has $\mathcal B(E) = \multialg{B}$ and so this definition agrees with Definition \ref{d:weaklynuc}. A triple $(\psi_0, \psi_1, v)$ is called a \emph{weakly nuclear cycle} if $\psi_i \colon A \to \mathcal B(E_i)$ are weakly nuclear $\ast$-homomorphisms with $E_i$ Hilbert $B$-modules, and $v\in \mathcal B(E_0,E_1)$ satisfies that \begin{equation} v\psi_0(a) - \psi_1(a) v, \qquad \psi_0(a) (v^\ast v - 1_{\mathcal B(E_0)}), \qquad \psi_1(a) (vv^\ast - 1_{\mathcal B(E_1)}) , \end{equation} are ``compact'' for all $a\in A$. If these are all zero for every $a\in A$ then $(\psi_0, \psi_1 , v)$ is called \emph{degenerate}. One may add two weakly nuclear cycles by taking their direct sum in the obvious way. If $(\psi_0, \psi_1,v)$ is a weakly nuclear cycle and $u\in \mathcal B(E_0', E_0)$ is a unitary, then $(\Ad u \circ \psi_0, \psi_1, v u)$ is a weakly nuclear cycle, and the two cycles $(\psi_0,\psi_1, v)$ and $(\Ad u \circ \psi_0, \psi_1, vu)$ are said to be \emph{unitarily equivalent}. Similarly, one defines unitary equivalence for a unitary $u \in \mathcal B(E_1, E_1')$. Say that two weakly nuclear cycles $(\psi_0, \psi_1 , v_0)$ and $(\psi_0, \psi_1, v_1)$ are \emph{operator homotopic} if there is a continuous path $(v_s)_{s\in [0,1]}$ from $v_0$ to $v_1$ such that each $(\psi_0, \psi_1, v_s)$ is a weakly nuclear cycle.
Then $KK_\nuc(A,B)$ is (canonically isomorphic to) the set of weakly nuclear cycles modulo the equivalence relation generated by unitary equivalence, addition of degenerate weakly nuclear cycles, and operator homotopy. The isomorphism is given by taking a weakly nuclear cycle $(\psi_0, \psi_1, v)$ to the Kasparov module $\left( E_0 \oplus E_1^\op, \psi_0\oplus \psi_1, \left(\begin{array}{cc} 0 & v^\ast \\ v & 0 \end{array}\right) \right)$.\footnote{Here $E_0$ and $E_1$ have the trivial $\mathbb Z/2\mathbb Z$-grading where every element has degree $0$, whereas $E_1^\op$ is $E_1$ with the opposite grading, i.e.~every element in $E_1^\op$ has degree 1.} One may construct $KK(A,B)$ in the exact same way by removing the words ``weakly nuclear'' everywhere. Hence it obviously follows that there is a canonical homomorphism \begin{equation} KK_\nuc(A,B) \to KK(A,B), \end{equation} and that this is an isomorphism if either $A$ or $B$ is nuclear (as any cycle will automatically be weakly nuclear).\footnote{It is a somewhat common misconception that $KK_\nuc(A, B) \to KK(A,B)$ is injective, or equivalently, that $KK_\nuc(A,B)$ is the subgroup of $KK(A,B)$ generated by weakly nuclear cycles. Though I know of no example where this is not true, there is a priori no reason to believe that this is the case.} \end{remark} \begin{remark}[The Cuntz pair picture] A pair $(\psi_0, \psi_1)$ of $\ast$-homomorphisms $A \to \multialg{B\otimes \mathcal K}$ is called a \emph{Cuntz pair} if $\psi_0(a) - \psi_1(a)\in B \otimes \mathcal K$ for all $a\in A$. A Cuntz pair $(\psi_0, \psi_1)$ is called \emph{weakly nuclear} if $\psi_0$ and $\psi_1$ are both weakly nuclear.\footnote{Note that this is the case exactly when $(\psi_0, \psi_1, 1_{\multialg{B\otimes \mathcal K}})$ is a weakly nuclear cycle.
To make sense of this, use the Hilbert $B$-modules $E_0 = E_1 = \ell^2(\mathbb N) \otimes B$.} Say that two weakly nuclear Cuntz pairs $(\phi_0, \phi_1)$ and $(\psi_0, \psi_1)$ are \emph{homotopic} if there is a family $(\eta_0^{(s)}, \eta_1^{(s)})_{s\in [0,1]}$ of weakly nuclear Cuntz pairs such that \begin{equation} (\eta_0^{(0)}, \eta_1^{(0)}) = (\phi_0, \phi_1), \qquad (\eta_0^{(1)} , \eta_1^{(1)}) = (\psi_0, \psi_1), \end{equation} the map $[0,1] \ni s \mapsto \eta_i^{(s)}(a)$ is strictly continuous for $i=0,1$ and $a\in A$, and $[0,1] \ni s\mapsto \eta_0^{(s)}(a) - \eta_1^{(s)}(a)$ is norm-continuous for any $a\in A$. One can form sums of weakly nuclear Cuntz pairs by \begin{equation} (\phi_0, \phi_1) \oplus_{s_1,s_2} (\psi_0, \psi_1) = (\phi_0 \oplus_{s_1,s_2} \psi_0, \phi_1 \oplus_{s_1,s_2} \psi_1), \end{equation} where $s_1,s_2\in \multialg{B\otimes \mathcal K}$ are $\mathcal O_2$-isometries. As any two Cuntz sums are unitarily equivalent, and as the unitary group of $\multialg{B\otimes \mathcal K}$ is path-connected by \cite{CuntzHigson-Kuipersthm}, sums of weakly nuclear Cuntz pairs are unique up to homotopy. The map $(\psi_0, \psi_1) \mapsto (\psi_0, \psi_1, 1_{\multialg{B\otimes \mathcal K}})$ induces an isomorphism of homotopy classes of weakly nuclear Cuntz pairs and $KK_\nuc(A,B)$ (in the Fredholm picture). A nuclear $\ast$-homomorphism $\phi \colon A \to B \otimes \mathcal K$ induces an element $KK_\nuc(\phi)$ in $KK_\nuc(A,B)$ via the weakly nuclear Cuntz pair $(\phi, 0)$. If $\psi \colon A \to B \otimes \mathcal K$ is another nuclear $\ast$-homo\-morphism then the Cuntz pair $(\phi, \psi)$ induces the element $KK_\nuc(\phi) - KK_\nuc(\psi)$. Finally, suppose $\theta \colon A \to B \otimes \mathcal K$ is a nuclear $\ast$-homomorphism such that there exist $\mathcal O_2$-isometries $s_1,s_2 \in \multialg{B \otimes \mathcal K} \cap \theta(A)'$. 
Then $KK_\nuc(\theta) + KK_\nuc(\theta)$ is represented by the weakly nuclear Cuntz pair \begin{equation} (\theta, 0) \oplus_{s_1,s_2} (\theta, 0 ) = (s_1 \theta(-) s_1^\ast + s_2 \theta(-)s_2^\ast, 0) = ((s_1s_1^\ast + s_2 s_2^\ast)\theta(-), 0) = (\theta, 0). \end{equation} Hence $KK_\nuc(\theta) + KK_\nuc(\theta) = KK_\nuc(\theta)$ which implies that $KK_\nuc(\theta)=0$ since $KK_\nuc(A,B)$ is a group. \end{remark} \begin{remark}[The Kasparov product] For separable $C^\ast$-algebras $A,B$ and $C$ there is a canonical homomorphism \begin{equation} \circ \colon KK(B,C) \otimes KK(A,B) \to KK(A,C) \end{equation} called the \emph{Kasparov product} which satisfies \begin{equation} KK(\psi) \circ KK(\phi) = KK(\psi \circ \phi) \end{equation} whenever $\phi \colon A\to B$ and $\psi \colon B \to C$ are $\ast$-homomorphisms.\footnote{It is somewhat unconventional to denote the Kasparov product by the symbol $\circ$, although it appears like this in \cite[Section 18.1]{Blackadar-book-K-theory}. Often one uses $\times$ to denote the Kasparov product, in which case the product is usually reversed, so that $KK(\phi) \times KK(\psi) = KK(\psi \circ \phi)$. I write the Kasparov product $\circ$ in the same order as one composes maps.} Similarly, there are Kasparov products \begin{eqnarray} \circ \colon KK_\nuc(B,C) \otimes KK(A,B) &\to& KK_\nuc(A,C), \\ \circ \colon KK(B,C) \otimes KK_\nuc(A,B) &\to& KK_\nuc(A,C), \end{eqnarray} which are also well-behaved with respect to composition of morphisms when one is nuclear. Unfortunately, there is (at least to my knowledge) no way of describing the Kasparov product using either the Fredholm picture or the Cuntz pair picture. The reader is referred to \cite{Blackadar-book-K-theory} and \cite{Skandalis-KKnuc} for details. Alternatively, everything will be reproved in a much more general setting in Section \ref{s:KK}. \end{remark} The following elementary lemma will be used. 
\begin{lemma}\label{l:asMvNKKnuc} Let $A$ be a separable $C^\ast$-algebra, let $B$ be a $\sigma$-unital $C^\ast$-algebra, and let $\phi,\psi \colon A\to B$ be nuclear $\ast$-homomorphisms. If $\phi \sim_\asMvN \psi$ then $KK_\nuc(\phi) = KK_\nuc(\psi)$. \end{lemma} \begin{proof} As the corner inclusion $\id_B \oplus 0 \colon B \to M_2(B)$ induces an isomorphism $KK_\nuc(A, B) \cong KK_\nuc(A, M_2(B))$, it is enough to show that $KK_\nuc(\phi\oplus 0) = KK_\nuc(\psi \oplus 0)$. By Proposition \ref{p:MvNeq}, $\phi \oplus 0$ and $\psi \oplus 0$ are asymptotically unitarily equivalent, say by a unitary path $(u_s)_{s\in [0,1)}$ in $\multialg{M_2(B)}$. Letting $\eta_s := \Ad u_s \circ (\phi \oplus 0)$ for $s\in [0,1)$, and $\eta_1 = \psi \oplus 0$, one obtains a point-wise nuclear homotopy from $\Ad u_0 \circ (\phi\oplus 0)$ to $\psi \oplus 0$, and thus $KK_\nuc(\phi \oplus 0) = KK_\nuc(\Ad u_0 \circ (\phi \oplus 0)) = KK_\nuc(\psi \oplus 0)$. \end{proof} \subsection{Proofs of Theorems \ref{t:existsimple}, \ref{t:uniquesimple} and \ref{t:KP}} Before proving the main results, a few lemmas will be required. The first uses Kirchberg's celebrated $\mathcal O_2$-embedding theorem, see \cite{Kirchberg-ICM} or \cite{KirchbergPhillips-embedding}. \begin{lemma}\label{l:fullO2map} Let $A$ be a separable, exact $C^\ast$-algebra, and let $B$ be a $\sigma$-unital $C^\ast$-algebra which contains a full, properly infinite projection. Then $B$ contains a $\sigma$-unital, stable, full, hereditary $C^\ast$-subalgebra, and there exists a full, nuclear $\ast$-homomorphism $\theta \colon A \to B\otimes \mathcal K$ such that $\mathcal O_2$ embeds unitally in $\multialg{B\otimes \mathcal K} \cap \theta(A)'$.
\end{lemma} \begin{proof} As $B$ contains a full, properly infinite projection $p$, there is a full embedding obtained as a composition $\mathcal O_2 \hookrightarrow \mathcal O_\infty \hookrightarrow pBp \subseteq B$, where we used that $pBp$ contains a unital copy of $\mathcal O_\infty$, see \cite[Proposition 4.2.3]{Rordam-book-classification}. Let $\eta$ denote the composition \begin{equation} \mathcal O_2 \otimes \mathcal O_2 \otimes \mathcal K \hookrightarrow \mathcal O_2 \hookrightarrow B \end{equation} for some embedding $\mathcal O_2 \otimes \mathcal O_2 \otimes \mathcal K \hookrightarrow \mathcal O_2$ which exists by the $\mathcal O_2$-embedding theorem \cite{KirchbergPhillips-embedding}. Let $B_0$ denote the hereditary $C^\ast$-subalgebra of $B$ generated by the image of $\eta$. Clearly $B_0$ is a $\sigma$-unital, full, hereditary $C^\ast$-subalgebra, and it is also stable (see e.g.~\cite[Proposition 4.4]{HjelmborgRordam-stability}), completing the first part of the proof. Brown's stable isomorphism theorem \cite{Brown-stableiso} implies that $B_0 \cong B\otimes \mathcal K$. So we may replace $B\otimes \mathcal K$ with $B_0$. Let $\theta$ denote the composition \begin{equation} A \xrightarrow j \mathcal O_2 \xrightarrow{1_{\mathcal O_2} \otimes \id_{\mathcal O_2} \otimes e_{1,1}} \mathcal O_2 \otimes \mathcal O_2 \otimes \mathcal K \xrightarrow \eta B_0, \end{equation} where $j \colon A \hookrightarrow \mathcal O_2$ is an embedding, whose existence again uses the $\mathcal O_2$-embedding theorem. Clearly $\theta$ is full and nuclear. The composition \begin{equation} \mathcal O_2 \xrightarrow{\id_{\mathcal O_2} \otimes 1_{\multialg{\mathcal O_2 \otimes \mathcal K}}} \multialg{\mathcal O_2 \otimes \mathcal O_2 \otimes \mathcal K} \xrightarrow{\multialg{\eta}} \multialg{B_0} \end{equation} gives a unital embedding of $\mathcal O_2$ in $\multialg{B_0} \cap \theta(A)'$. 
\end{proof} \begin{lemma}\label{l:absCuntzpair} Let $A$ be a separable $C^\ast$-algebra, let $B$ be a $\sigma$-unital $C^\ast$-algebra, and let $\Theta \colon A \to \multialg{B\otimes \mathcal K}$ be a weakly nuclear, nuclearly absorbing representation. Then any element $x\in KK_\nuc(A,B)$ is represented by a weakly nuclear Cuntz pair of the form $(\psi, \Theta)$. \end{lemma} \begin{proof} The proof is a chain of standard reductions. In the following, let $\mathcal H_B := \ell^2(\mathbb N)\otimes B$ and identify $\multialg{B\otimes \mathcal K}$ with $\mathcal B(\mathcal H_B)$. Represent $x$ by a weakly nuclear Cuntz pair $(\psi_0,\psi_1)$. Then the weakly nuclear cycle $(\psi_0,\psi_1,1_{\mathcal B(\mathcal H_B)})$ represents $x$. Let \begin{equation} \Psi' = \psi_0 \oplus \psi_1\oplus \psi_0 \oplus \psi_1 \oplus \cdots \colon A \to \mathcal B(E) \end{equation} where $E= \bigoplus_{\mathbb N} \mathcal H_B$. Note that $\Psi'$ is weakly nuclear. Then $(\psi_0\oplus \Psi' , \psi_1 \oplus \Psi', 1_{\mathcal B(\mathcal H_B)} \oplus 1_{\mathcal B(E)})$ also represents $x$, as $(\Psi',\Psi', 1_{\mathcal B(E)})$ is degenerate. Let $w$ be a unitary in $\mathcal B(\mathcal H_B \oplus E) = \mathcal B(\mathcal H_B \oplus \bigoplus_{\mathbb N} \mathcal H_B)$ which permutes the direct summands, so that $\Ad w \circ (\psi_0 \oplus \Psi') = \psi_1 \oplus \Psi'$. As $(\psi_0 \oplus \Psi', \psi_1 \oplus \Psi', 1_{\mathcal B(\mathcal H_B)}\oplus 1_{\mathcal B(E)})$ and $(\psi_1 \oplus \Psi', \psi_1 \oplus \Psi', w)$ are unitarily equivalent, the latter represents $x$. Let $\Psi'' = \psi_1 \oplus \Psi'$, so that $(\Psi'', \Psi'', w)$ represents $x$. Thus $(\Psi'' \oplus \Theta, \Psi'' \oplus \Theta, w \oplus 1_{\mathcal B(\mathcal H_B)})$ also represents $x$.
As $\Theta$ is weakly nuclear and nuclearly absorbing we may find a unitary $u \in \mathcal B(\mathcal H_B, \mathcal H_B\oplus E \oplus \mathcal H_B)$ such that $\Ad u \circ (\Psi'' \oplus \Theta)(a) - \Theta(a)$ is ``compact'' for all $a\in A$. Let $\Psi = \Ad u \circ (\Psi'' \oplus \Theta)$ and $v = u^\ast (w \oplus 1_{\mathcal B(\mathcal H_B)}) u$. Note that $v$ is a unitary. Since $(\Psi, \Psi, v)$ and $(\Psi'' \oplus \Theta , \Psi'' \oplus \Theta , w \oplus 1_{\mathcal B(\mathcal H_B)})$ are unitarily equivalent, the former represents $x$. Clearly $(\Psi \oplus \Theta, \Psi \oplus \Theta, v \oplus 1_{\mathcal B(\mathcal H_B)})$ represents $x$, since $(\Theta, \Theta, 1_{\mathcal B(\mathcal H_B)})$ is degenerate. Let $R_t = \left( \begin{array}{cc} \cos(t\pi/2) & \sin(t\pi /2) \\ - \sin(t\pi /2) & \cos(t\pi /2) \end{array}\right) \in M_2(\mathcal B(\mathcal H_B)) = \mathcal B(\mathcal H_B \oplus \mathcal H_B)$ for $t\in [0,1]$ be the usual path of $2\times 2$ unitary rotation matrices. As $\Psi$ and $\Theta$ agree modulo the ``compacts'', $(\Psi \oplus \Theta)(a)$ is of the form $(\Theta \oplus \Theta)(a)$ modulo the ``compacts''. As $R_t$ has scalar entries, it follows that $R_t$ and $(\Psi \oplus \Theta)(a)$ commute modulo the ``compacts''. It follows that $(\Psi \oplus \Theta, \Psi \oplus \Theta, R_t^\ast (v \oplus 1_{\mathcal B(\mathcal H_B)}) R_t)$ defines an operator homotopy from $(\Psi \oplus \Theta, \Psi \oplus \Theta, v \oplus 1_{\mathcal B(\mathcal H_B)})$ to $(\Psi \oplus \Theta, \Psi \oplus \Theta, 1_{\mathcal B(\mathcal H_B)} \oplus v) = (\Psi, \Psi, 1_{\mathcal B(\mathcal H_B)}) \oplus (\Theta, \Theta, v)$, so the latter represents $x$. As $(\Psi, \Psi, 1_{\mathcal B(\mathcal H_B)})$ is degenerate it follows that $(\Theta, \Theta, v)$ represents $x$. Finally, let $\psi = \Ad v \circ \Theta$, which is a weakly nuclear $\ast$-homomorphism since $v$ is a unitary. 
Then $(\Theta, \Theta, v)$ and $(\psi, \Theta, 1_{\mathcal B(\mathcal H_B)})$ are unitarily equivalent, so $x$ is represented by the weakly nuclear Cuntz pair $(\psi, \Theta)$. \end{proof} With these ingredients in place, we are ready to prove Theorem \ref{t:existsimple} -- the main existence part of the Kirchberg--Phillips classification theorem. \begin{proof}[Proof of Theorem \ref{t:existsimple}] As $B$ contains a full, properly infinite projection, there is a $\sigma$-unital, stable, full, hereditary $C^\ast$-subalgebra $B_0 \subseteq B$ by Lemma \ref{l:fullO2map}. As the inclusion $\iota \colon B_0 \hookrightarrow B$ induces an isomorphism \begin{equation} \iota_\ast \colon KK_\nuc(A,B_0 ) \xrightarrow \cong KK_\nuc(A,B), \end{equation} there is a unique $y\in KK_\nuc(A,B_0)$ such that $\iota_\ast( y) = x$. Hence we may assume that $B$ is stable. By Lemma \ref{l:fullO2map}, there is a full, nuclear $\ast$-homomorphism $\theta \colon A \to B$ such that $\mathcal O_2$ embeds unitally in $\multialg{B} \cap \theta(A)'$. Let $s_1,s_2,\dots \in \multialg{B}$ be isometries with mutually orthogonal range projections such that $\sum_{k=1}^\infty s_k s_k^\ast =1_{\multialg{B}}$. By Theorem \ref{t:fullnucabs}, the infinite repeat $\theta_\infty = \sum_{k=1}^\infty s_k \theta(-) s_k^\ast$ is weakly nuclear and nuclearly absorbing. By Lemma \ref{l:absCuntzpair}, $x\in KK_\nuc(A,B)$ is represented by a weakly nuclear Cuntz pair of the form $(\psi , \theta_\infty)$. 
Let $(u_t)_{t\in [0,1)}$ be a unitary path in $\multialg{B} \cap \theta_\infty(A)'$ as given by Lemma \ref{l:keyexistence},\footnote{As in Remark \ref{r:infrep}, we use that $\Omega \colon B \otimes \mathcal K \to B$ given on elementary tensors by $b\otimes e_{i,j} \mapsto s_i b s_j^\ast$ (once extended to multiplier algebras) maps $\theta \otimes 1_{\multialg{\mathcal K}}$ to $\theta_\infty$, and $\phi(a) \otimes e_{1,1} + \theta(a) \otimes (1_{\multialg{\mathcal K}} - e_{1,1})$ to $s_1 \phi(a) s_1^\ast + \sum_{k=2}^\infty s_k \theta(a) s_k^\ast$.} and let $\phi \colon A \to B$ be the $\ast$-homomorphism such that $u_t \psi(-) u_t^\ast$ converges point-norm to $\phi_0 := s_1\phi(-)s_1^\ast + \sum_{k=2}^\infty s_k \theta(-) s_k^\ast$. As $\phi_0$ is weakly nuclear it follows that $\phi = s_1^\ast \phi_0(-) s_1$ is weakly nuclear and thus nuclear since it takes values in $B$ (as opposed to $\multialg{B}$). As $B$ is stable and $1_{\multialg{B}} - s_1s_1^\ast \geq s_2s_2^\ast$, we may fix an isometry $s_0 \in \multialg{B}$ with $s_0s_0^\ast = 1_{\multialg{B}} - s_1 s_1^\ast$ so that $s_1,s_0$ are $\mathcal O_2$-isometries.\footnote{Here the following well-known fact is used: if $B$ is stable and $p\in \multialg{B}$ is a projection so that $1_{\multialg{B}}\preceq p$, then $1_{\multialg{B}} \sim p$. To see this, using that $1_{\multialg{B}}$ is properly infinite, one gets $p\oplus p \leq 1_{\multialg{B}} \oplus 1_{\multialg{B}} \preceq 1_{\multialg{B}} \preceq p$. From \cite[Proposition 1.1.2]{Rordam-book-classification} it follows that $p$ is properly infinite, and $p$ is full since $1_{\multialg{B}}\preceq p$. 
As $K_0(\multialg{B}) = 0$ by \cite[Proposition 12.2.1]{Blackadar-book-K-theory}, it follows from \cite[Proposition 4.1.4]{Rordam-book-classification} that all properly infinite full projections in $\multialg{B}$ are equivalent, and in particular $p\sim 1_{\multialg{B}}$.} Define $\theta_0 := \sum_{k=2}^\infty s_0^\ast s_k \theta(-) s_k^\ast s_0$ (which is also an infinite repeat of $\theta$). Then $\phi_0$ can be expressed as the Cuntz sum $\phi \oplus_{s_1,s_0} \theta_0$ and similarly $\theta_\infty = \theta \oplus_{s_1,s_0} \theta_0$. We obtain a homotopy of weakly nuclear Cuntz pairs, from $(\psi, \theta_\infty)$ to $(\phi_0 , \theta_\infty) = (\phi, \theta) \oplus_{s_1,s_0} (\theta_0, \theta_0)$, given by \begin{equation} (\Ad u_t \circ \psi, \Ad u_t \circ \theta_\infty) = (\Ad u_t \circ \psi, \theta_\infty) \end{equation} for $t\in (0,1)$. Thus $x$ is represented by $(\phi, \theta) \oplus_{s_1,s_0} (\theta_0, \theta_0)$. As $(\theta_0, \theta_0)$ is degenerate, $x$ is represented by $(\phi, \theta)$. Hence $x = KK_\nuc(\phi) - KK_\nuc(\theta)$. As $\mathcal O_2$ embeds unitally in $\multialg{B} \cap \theta(A)'$ it follows that $KK_\nuc(\theta) = 0$. Therefore $x = KK_\nuc(\phi)$. Note that any Cuntz sum $\phi \oplus_{s_1,s_0}\theta$ is a \emph{full}, nuclear $\ast$-homomorphism. Since $\theta$ is full and $\phi$ is nuclear, $\theta$ approximately dominates $\phi$ by Proposition \ref{p:fulldom}. As $\theta$ factors through $\mathcal O_2$, it is strongly $\mathcal O_\infty$-stable by Proposition \ref{p:Oinftyfactor}, and it follows from Proposition \ref{p:piabsorbing}$(a)$ that $\phi \oplus_{s_1,s_0} \theta$ is strongly $\mathcal O_\infty$-stable. Replacing $\phi$ with $\phi \oplus_{s_1,s_0} \theta$ we thus have a full, nuclear, strongly $\mathcal O_\infty$-stable $\ast$-homomorphism such that $KK_\nuc(\phi) = x$. It remains to prove the ``moreover'' part, so from now on we assume that $A$ and $B$ are unital. 
``Only if'': For any unital $C^\ast$-algebra $C$ the element $[1_C]_0 \in K_0(C) = KK(\mathbb C, C)$ is induced by the unique unital $\ast$-homomorphism $\eta_C \colon \mathbb C \to C$. Hence if $\phi \colon A \to B$ is unital and $KK_\nuc(\phi) = x$ then \begin{equation} \Gamma_0(x)([1_A]_0) = KK_\nuc(\phi) \circ KK(\eta_A) = KK_\nuc(\phi \circ \eta_A) = KK_\nuc(\eta_B) = [1_B]_0. \end{equation} ``If'': By the not necessarily unital version of the theorem we may find a full, nuclear, strongly $\mathcal O_\infty$-stable $\ast$-homo\-morphism $\phi_0 \colon A \to B$ such that $KK_\nuc(\phi_0) = x$. The strong $\mathcal O_\infty$-stability of $\phi_0$ implies that $\phi_0(1_A)$ is a properly infinite projection, see Remark \ref{r:fullpropinfproj}. As $[\phi_0(1_A)]_0 = \Gamma_0(x)([1_A]_0) = [1_B]_0$ and as both $\phi_0(1_A)$ and $1_B$ are properly infinite,\footnote{If $B$ is unital and contains a full, properly infinite projection, then $1_B$ is properly infinite.} full projections, it follows from a result of Cuntz \cite{Cuntz-K-theoryI} that $\phi_0(1_A)$ and $1_B$ are Murray--von Neumann equivalent. So we may pick an isometry $v \in B$ such that $vv^\ast = \phi_0(1_A)$ and define $\phi := v^\ast \phi_0(-) v \colon A \to B$. Then $\phi$ is a full, nuclear, unital, strongly $\mathcal O_\infty$-stable $\ast$-homomorphism, and as we clearly have $v\phi(-)v^\ast = \phi_0$ it follows that $\phi \sim_\asMvN \phi_0$ (implemented by the constant path $v$). Hence by Lemma \ref{l:asMvNKKnuc} \begin{equation} KK_\nuc(\phi) = KK_\nuc(\phi_0) = x.\qedhere \end{equation} \end{proof} In the proof of Theorem \ref{t:uniquesimple} presented below, the following stable uniqueness theorem of Dadarlat and Eilers \cite{DadarlatEilers-asymptotic} will be needed. 
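For the statement below, recall the Cuntz sum notation: given $\mathcal O_2$-isometries $s_1, s_2 \in \multialg{B}$, the Cuntz sum of two maps $\phi, \theta_\infty \colon A \to \multialg{B}$ is given by
\begin{equation}
(\phi \oplus_{s_1,s_2} \theta_\infty)(a) = s_1 \phi(a) s_1^\ast + s_2 \theta_\infty(a) s_2^\ast, \qquad a \in A.
\end{equation}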
\begin{theorem}[{\cite[Theorem 3.10]{DadarlatEilers-asymptotic}}]\label{t:DE} Let $A$ be a separable, exact $C^\ast$-algebra, let $B$ be a $\sigma$-unital, stable $C^\ast$-algebra, let $\phi, \psi \colon A \to \multialg{B}$ be weakly nuclear $\ast$-homomorphisms for which $\phi(a)- \psi(a) \in B$ for all $a\in A$, and suppose that $\theta \colon A \to B$ is a full, nuclear $\ast$-homomorphism. If $[\phi, \psi]$ vanishes in $KK_\nuc(A,B)$, then there is a continuous path $(u_t)_{t\in \mathbb R_+}$ of unitaries in the unitisation $\widetilde B$ such that \begin{equation}\label{eq:DE} \lim_{t\to \infty} \| u_t (\phi \oplus_{s_1,s_2} \theta_\infty)(a) u_t^\ast - (\psi \oplus_{s_1,s_2} \theta_\infty)(a) \| =0 \end{equation} for all $a\in A$. Here $s_1,s_2\in \multialg{B}$ are $\mathcal O_2$-isometries and $\theta_\infty$ is an infinite repeat of $\theta$. \end{theorem} \begin{proof}[Proof of Theorem \ref{t:uniquesimple}] $(ii)\Rightarrow (i)$ follows from Lemma \ref{l:asMvNKKnuc}, and $(ii)\Leftrightarrow (iii)$ is Proposition \ref{p:MvNvsue}, where one uses Remark \ref{r:fullpropinfproj} to see that $B$ contains a full projection. $(i)\Rightarrow (ii)$: Suppose $KK_\nuc(\phi) = KK_\nuc(\psi)$. By Proposition \ref{p:MvNeq}, it suffices to show that $\phi \otimes e_{1,1}, \psi \otimes e_{1,1} \colon A \to B\otimes \mathcal K$ are asymptotically Murray--von Neumann equivalent, so we may assume without loss of generality that $B$ is stable. Lemma \ref{l:fullO2map} produces a full, nuclear $\ast$-homo\-morphism $\theta \colon A \to B$ such that $\mathcal O_2$ embeds unitally in $\multialg{B} \cap \theta(A)'$. Proposition \ref{p:fulldom} implies that $\phi$ and $\psi$ approximately dominate $\theta$. By Theorem \ref{t:fullnucabs} it follows that any infinite repeat of $\theta$ is nuclearly absorbing. Thus by Theorem \ref{t:DE} and Lemma \ref{l:keyuniqueness} -- the key lemma for uniqueness -- it follows that $\phi \sim_\asMvN \psi$. 
\end{proof} As a consequence of Theorems \ref{t:existsimple} and \ref{t:uniquesimple} one obtains the proof of Theorem \ref{t:KP}. \begin{proof}[Proof of Theorem \ref{t:KP}] $(a)$: By Theorem \ref{t:existsimple} we find full, nuclear, strongly $\mathcal O_\infty$-stable $\ast$-homo\-morphisms $\phi_0 \colon A \to B$ and $\psi_0 \colon B \to A$ such that $KK(\phi_0) = x$ and $KK(\psi_0) = x^{-1}$. Note that $KK(\psi_0 \circ \phi_0) = KK(\id_A)$ and $KK(\phi_0 \circ \psi_0) = KK(\id_B)$. Hence by Theorem \ref{t:uniquesimple} it follows that $\psi_0 \circ \phi_0 \sim_\asu \id_A$ and $\phi_0 \circ \psi_0 \sim_\asu \id_B$. By Proposition \ref{p:asint}\footnote{Note that an approximate intertwining argument a la Elliott (see \cite[Corollary 2.3.4]{Rordam-book-classification}) implies that there is an isomorphism $\phi \colon A \xrightarrow \cong B$ satisfying $\phi_\ast = (\phi_0)_\ast = \Gamma(x)$. However, this isn't quite good enough since we want $KK(\phi) = x$.} it follows that we may pick an isomorphism $\phi \colon A \xrightarrow \cong B$ which is homotopic to $\phi_0$, and thus $KK(\phi) = KK(\phi_0) = x$. $(b)$: This is proved exactly as above, but by using the unital versions of Theorems \ref{t:existsimple} and \ref{t:uniquesimple}. \end{proof} \subsection{Kirchberg's and Phillips' existence and uniqueness results}\label{ss:classical} Variations of Theorems \ref{t:existsimple} and \ref{t:uniquesimple} are also the key steps in Phillips' \cite{Phillips-classification} and Kirchberg's \cite{Kirchberg-simple} approaches to classification, see also \cite[Theorems 8.2.1 and 8.3.3]{Rordam-book-classification}. The results presented in this paper are more general than either approach, as will be explained below. In Phillips' approach, the domain $A$ is always separable, nuclear, simple and unital, and the target is $B\otimes \mathcal O_\infty \otimes \mathcal K$ for a separable, unital $C^\ast$-algebra $B$. 
In this case, any $\ast$-homomorphism $\phi \colon A \to B\otimes \mathcal O_\infty \otimes \mathcal K$ is always nuclear and strongly $\mathcal O_\infty$-stable, and $KK(A,B) = KK_\nuc(A,B)$, so Phillips' main results are immediate corollaries of Theorems \ref{t:existsimple} and \ref{t:uniquesimple}. Kirchberg's main results are more technical, as well as more general. His domain $A$ is separable, exact, unital (and usually stabilised, i.e.~one considers $A\otimes \mathcal K$ instead), and his targets are $B\otimes \mathcal K$ where $B$ is unital and properly infinite.\footnote{Kirchberg also assumes that $B$ contains $\mathcal O_2$ unitally, but this is redundant when stabilising due to Brown's stable isomorphism theorem \cite{Brown-stableiso}.} The existence part of Kirchberg's result says that every element in $KK_\nuc(A,B)$ lifts to a nuclear $\ast$-homomorphism $A\otimes \mathcal K \to B\otimes \mathcal K$ and is therefore covered by Theorem \ref{t:existsimple}. For uniqueness, Kirchberg uses unitary homotopy as his equivalence relation on $\ast$-homo\-morphisms. Two $\ast$-homo\-morphisms $\phi,\psi \colon A \to B$ are \emph{unitarily homotopic}, written $\phi \sim_\mathrm{uh} \psi$, if there is a strictly continuous path $(u_t)_{t\in \mathbb R_+}$ of unitaries in $\multialg{B}$ such that $\lim_{t\to \infty} u_t \phi(a) u_t^\ast = \psi(a)$ for all $a\in A$. Clearly $\phi \sim_\asu \psi$ implies $\phi \sim_\mathrm{uh} \psi$, and if $\phi \sim_\mathrm{uh} \psi$ is implemented by $(u_t)_{t\in \mathbb R_+}$, then $\Ad u_0 \circ \phi$ is homotopic to $\psi$ by a path implemented by unitary conjugation. Hence if $\phi,\psi$ are nuclear and $\phi \sim_\mathrm{uh} \psi$, then $KK_\nuc(\phi) = KK_\nuc(\psi)$. In the cases where Theorem \ref{t:uniquesimple} is applicable with stable target, it therefore follows that $\phi \sim_\asu \psi \Leftrightarrow \phi \sim_{\mathrm{uh}} \psi \Leftrightarrow KK_\nuc(\phi) = KK_\nuc(\psi)$. 
Kirchberg shows that nuclear $\ast$-homomorphisms $\phi,\psi \colon A\otimes \mathcal K \to B\otimes \mathcal K$ have the same $KK_\nuc$-class exactly when $\phi\oplus \theta \sim_\mathrm{uh} \psi \oplus \theta$, where $\theta$ is a suitably chosen full (necessarily nuclear) $\ast$-homomorphism factoring through $\mathcal O_2$. By Propositions \ref{p:fulldom} and \ref{p:piabsorbing}$(a)$, it follows that $\phi\oplus \theta$ and $\psi \oplus \theta$ are nuclear, strongly $\mathcal O_\infty$-stable, and full, and they have the same $KK_\nuc$-classes as $\phi$ and $\psi$ respectively since $KK_\nuc(\theta) = 0$. Thus Theorem \ref{t:uniquesimple} implies \begin{equation} KK_\nuc(\phi) = KK_\nuc(\psi) \quad \Leftrightarrow \quad \phi\oplus \theta \sim_\asu \psi \oplus \theta \quad \Leftrightarrow \quad \phi\oplus \theta \sim_\mathrm{uh} \psi \oplus \theta. \end{equation} Kirchberg also shows that if $B$ in addition is simple and purely infinite, and if $\phi,\psi \colon A\otimes \mathcal K \to B\otimes \mathcal K$ are injective, nuclear $\ast$-homomorphisms, then $KK_\nuc(\phi) = KK_\nuc(\psi)$ if and only if $\phi \sim_{\mathrm{uh}} \psi$. Simplicity of $B$ and injectivity of the maps imply that they are full. By Corollary \ref{c:nucintosimplepi} (proved in the following section), it follows that $\phi, \psi$ are strongly $\mathcal O_\infty$-stable, and hence this result is also covered by Theorem \ref{t:uniquesimple}. \subsection{Approximate equivalence}\label{ss:KLnuc} \begin{remark}[KL-theory] In \cite{Dadarlat-KKtop}, Dadarlat equips $KK(A,B)$ (for $A$ and $B$ both separable) with a (not necessarily Hausdorff) topology. 
By \cite[Theorem 3.5]{Dadarlat-KKtop}, this topology can be described as the unique first countable topology on $KK(A,B)$ such that a sequence $(x_n)_{n\in \mathbb N}$ converges to $x\in KK(A,B)$ if and only if there is an element $y \in KK(A, C(\widetilde{\mathbb N}, B))$ with $(\ev_n)_\ast(y) = x_n$ for $n\in \mathbb N$ and $(\ev_\infty)_\ast(y) = x$. Here $\widetilde{\mathbb N} = \mathbb N \cup \{\infty\}$ denotes the one-point compactification of $\mathbb N$. One defines \begin{equation} KL(A,B) := KK(A,B) / \overline{\{0\}}. \end{equation} The group $KL(A,B)$ was first defined by Rørdam in \cite{Rordam-classsimple} whenever $A$ satisfies the UCT. Dadarlat's definition above generalises this to the non-UCT case. Doing the same in $KK_\nuc$ when $A$ is separable and $B$ is $\sigma$-unital, say that a sequence $(x_n)_{n\in \mathbb N}$ converges to $x$ in $KK_\nuc(A,B)$ if there is an element $y \in KK_\nuc(A, C(\widetilde{\mathbb N}, B))$ with $(\ev_n)_\ast(y) = x_n$ for $n\in \mathbb N$ and $(\ev_\infty)_\ast(y) = x$. Define \begin{equation} KL_\nuc(A,B) := KK_\nuc(A,B) / \overline{\{0\}}. \end{equation} \end{remark} \begin{proposition}\label{p:aMvNKL} Let $A$ be a separable $C^\ast$-algebra, let $B$ be a $\sigma$-unital $C^\ast$-algebra and suppose that $\phi, \psi \colon A \to B$ are nuclear $\ast$-homomorphisms. If $\phi \sim_\aMvN \psi$ then $KL_\nuc(\phi) = KL_\nuc(\psi)$. \end{proposition} \begin{proof} As the map $\id_B \otimes e_{1,1} \colon B \to B\otimes \mathcal K$ induces a topological isomorphism $KK_\nuc(A,B) \cong KK_\nuc(A, B \otimes \mathcal K)$, it descends to an isomorphism $KL_\nuc(A,B) \cong KL_\nuc(A,B\otimes \mathcal K)$. Hence we may assume that $B$ is stable. By Proposition \ref{p:MvNvsue} it follows that $\phi \sim_\au \psi$, so pick a sequence $(u_n)_{n\in \mathbb N}$ of unitaries in $\multialg{B}$ such that $u_n^\ast \phi(-) u_n \to \psi$. 
This gives a $\ast$-homomorphism \begin{equation} \Phi \colon A \to C(\widetilde{\mathbb N}, B), \qquad \Phi(a)(n) = \left\{ \begin{array}{ll} u_n^\ast \phi(a) u_n, & n\in \mathbb N \\ \psi(a) , & n = \infty \end{array} \right. \end{equation} for $a\in A$. The map $\Phi$ is nuclear by Lemma \ref{l:XnucC(Y)}\footnote{An alternative direct proof can be sketched as follows: given a finite set $\mathcal F \subseteq A$ and a tolerance $\epsilon >0$, the values $\Phi(a)(n)$ for $a\in \mathcal F$ are constant up to $\epsilon$ for all sufficiently large $n$. Hence $\Phi$ approximately decomposes as a direct sum of nuclear maps $A \to C(\{1,\dots, n\}, B)$ and $\ev_\infty \circ \Phi \colon A \to B \subseteq C(\{ n+1, \dots, \infty\}, B)$. Nuclearity of $\Phi$ easily follows.} and thus induces an element $KK_\nuc(\Phi) \in KK_\nuc(A, C(\widetilde{\mathbb N}, B))$ such that $(\ev_n)_\ast(KK_\nuc(\Phi)) = KK_\nuc(\phi)$ for $n\in \mathbb N$, and $(\ev_\infty)_\ast(KK_\nuc(\Phi)) = KK_\nuc(\psi)$. Hence we get $KL_\nuc(\phi) = KL_\nuc(\psi)$. \end{proof} From Theorems \ref{t:existsimple} and \ref{t:uniquesimple} one gets the following approximate uniqueness theorem. In order to get it for (not necessarily strongly) $\mathcal O_\infty$-stable maps, a McDuff type theorem \cite[Corollary 4.5]{Gabe-O2class} will be applied. \begin{theorem}[Approximate uniqueness -- full case]\label{t:approxuniquesimple} Let $A$ be a separable $C^\ast$-algebra, and let $B$ be a $\sigma$-unital $C^\ast$-algebra. Suppose that $\phi, \psi\colon A \to B$ are nuclear, $\mathcal O_\infty$-stable, full $\ast$-homomorphisms. The following are equivalent: \begin{itemize} \item[$(i)$] $KL_\nuc(\phi) = KL_\nuc(\psi)$; \item[$(ii)$] $\phi$ and $\psi$ are approximately Murray--von Neumann equivalent. 
\end{itemize} If either $B$ is stable, or if $A,B,\phi$ and $\psi$ are all unital, then $(i)$ and $(ii)$ are equivalent to \begin{itemize} \item[$(iii)$] $\phi$ and $\psi$ are approximately unitarily equivalent (with unitaries in the minimal unitisation). \end{itemize} \end{theorem} \begin{proof} $(ii)\Rightarrow (i)$ is Proposition \ref{p:aMvNKL}. For proving $(i) \Rightarrow (ii)$, assume that $KL_\nuc(\phi) = KL_\nuc(\psi)$. By Proposition \ref{p:MvNeq}, it suffices to show that $\phi\otimes e_{1,1}, \psi\otimes e_{1,1} \colon A \to B\otimes \mathcal K$ are approximately unitarily equivalent. Note that $KL_\nuc(\phi \otimes e_{1,1}) = KL_{\nuc}(\psi \otimes e_{1,1})$ and that $\phi\otimes e_{1,1}$ and $\psi\otimes e_{1,1}$ are nuclear, $\mathcal O_\infty$-stable and full. Hence we may instead assume that $B$ is stable. By \cite[Corollary 4.5]{Gabe-O2class}, $\phi$ and $\psi$ are approximately Murray--von Neumann equivalent to $\ast$-homomorphisms factoring through $A \otimes \mathcal O_\infty$, so these maps are strongly $\mathcal O_\infty$-stable by Proposition \ref{p:Oinftyfactor}. By replacing $\phi$ and $\psi$ with such maps, we may assume that $\phi$ and $\psi$ are strongly $\mathcal O_\infty$-stable. Let $\mathcal F\subset A$ be finite and $\epsilon>0$. Pick $y\in KK_\nuc(A, C(\widetilde{\mathbb N},B))$ such that $(\ev_n)_\ast(y) = KK_\nuc(\phi)$ for $n\in \mathbb N$ and $(\ev_\infty)_\ast(y) = KK_\nuc(\psi)$. Note that the existence of a full, nuclear, $\mathcal O_\infty$-stable map implies that $A$ is exact (see Remark \ref{r:nucemb}) and that $B$ contains a full, properly infinite projection (see Remark \ref{r:fullpropinfproj}). Hence $C(\widetilde{\mathbb N}, B)$ also contains a full, properly infinite projection, so by Theorem \ref{t:existsimple}, there is a full, nuclear, strongly $\mathcal O_\infty$-stable $\ast$-homomorphism $\Phi \colon A \to C(\widetilde{\mathbb N},B)$ so that $KK_\nuc(\Phi) = y$. 
Pick $n\in \mathbb N$ such that \begin{equation} \| \ev_n(\Phi(a)) - \ev_{\infty}(\Phi(a))\| < \epsilon, \qquad \textrm{for all }a\in \mathcal F. \end{equation} By Theorem \ref{t:uniquesimple}, it follows that $\phi \sim_{\asu} \ev_n \circ \Phi$ and $\psi \sim_{\asu} \ev_\infty\circ \Phi$. It easily follows that we may find a unitary $u\in \multialg{B}$ such that $\| u^\ast \phi(a) u - \psi(a)\| < \epsilon$ for all $a\in \mathcal F$. This completes the proof of $(i) \Rightarrow (ii)$. The part involving $(iii)$ follows immediately from Proposition \ref{p:MvNvsue}. \end{proof} If $A$ satisfies the UCT, Dadarlat and Loring proved \cite{DadarlatLoring-UMCT} that there is a universal \emph{multi}coefficient theorem (UMCT), given by the short exact sequence \begin{equation} \mathrm{PExt}(K_{1-\ast}(A), K_\ast(B)) \rightarrowtail KK(A,B) \twoheadrightarrow \Hom_\Lambda(\underline{K}(A), \underline{K}(B)). \end{equation} Here $\mathrm{PExt}(K_{1-\ast}(A), K_\ast(B))$ is the subgroup of $\Ext(K_{1-\ast}(A), K_\ast(B))$ consisting of equivalence classes of pure extensions, and $\underline{K}(C)$ is the \emph{total $K$-theory}, which is a module over a certain ring $\Lambda$. I refer the reader to \cite{DadarlatLoring-UMCT} for the relevant details. By \cite[Theorem 4.1]{Dadarlat-KKtop} it follows that if $A$ and $B$ are separable, and $A$ satisfies the UCT, then there is a natural isomorphism of topological groups \begin{equation} KL(A,B) \xrightarrow \cong \Hom_\Lambda(\underline{K}(A), \underline{K}(B)) \end{equation} where the latter group is equipped with the topology of point-wise convergence. As the topologies on $KK(A,B)$ and $KK_\nuc(A,B)$ only depend on $A$ up to $KK$-equivalence (by the above definition), it follows that \begin{equation} KL_\nuc(A,B) \cong KL(A,B) \end{equation} whenever $A$ is $KK$-equivalent to a nuclear $C^\ast$-algebra. 
In particular, if $A$ satisfies the UCT (and is thus $KK$-equivalent to a commutative $C^\ast$-algebra), one obtains natural isomorphisms \begin{equation}\label{eq:KLnucHom} KL_\nuc(A,B) \cong KL(A,B) \cong \Hom_\Lambda(\underline{K}(A), \underline{K}(B)) \end{equation} for $B$ separable by \cite[Theorem 4.1]{Dadarlat-KKtop}. Separability of $B$ plays no important role here, only the existence of an absorbing representation $A \to \multialg{B\otimes \mathcal K}$, which always exists when $A$ is separable and nuclear and $B$ is $\sigma$-unital by \cite[Theorem 6]{Kasparov-Stinespring}. Hence \eqref{eq:KLnucHom} holds whenever $A$ is separable and satisfies the UCT, and $B$ is $\sigma$-unital. From Theorems \ref{t:existsimple} and \ref{t:approxuniquesimple}, together with the result \eqref{eq:KLnucHom}, one immediately gets the following classification result for nuclear, $\mathcal O_\infty$-stable, full $\ast$-homo\-morphisms using the $K$-theoretic invariant $\underline K$. This was first proved by Lin in \cite[Theorem 4.10]{Lin-stableapproxuniqueness} in the special case of Kirchberg algebras satisfying the UCT. \begin{theorem} Let $A$ be a separable, exact $C^\ast$-algebra satisfying the UCT, and let $B$ be a $\sigma$-unital $C^\ast$-algebra containing a properly infinite, full projection. Then the nuclear, $\mathcal O_\infty$-stable, full $\ast$-homomorphisms $A\to B$ are parametrised up to approximate Murray--von Neumann equivalence by morphisms $\underline K(A) \to \underline K(B)$. \end{theorem} In the following, let $\underline K_u(A) = (\underline K(A), [1_A]_0)$ for a unital $C^\ast$-algebra $A$, so that a morphism $\alpha \colon \underline K_u(A) \to \underline K_u(B)$ is a $\Lambda$-homomorphism for which the induced map $\alpha_0 \colon K_0(A) \to K_0(B)$ satisfies $\alpha_0([1_A]_0) = [1_B]_0$. Then, exactly as in the theorem above, one can classify unital maps up to approximate unitary equivalence. 
\begin{theorem} Let $A$ be a separable, exact, unital $C^\ast$-algebra satisfying the UCT, and let $B$ be a unital, properly infinite $C^\ast$-algebra. Then the nuclear, $\mathcal O_\infty$-stable, full, unital $\ast$-homomorphisms $A\to B$ are parametrised up to approximate unitary equivalence by morphisms $\underline K_u(A) \to \underline K_u(B)$. \end{theorem} \section{Strongly $\mathcal O_\infty$-stable $\ast$-homomorphisms}\label{s:stronglyOinfty} The following section is a slight detour from the main classification theorem of the paper, and is strictly speaking not needed for proving this result. Corollary \ref{c:nucintosimplepi} was used in Subsection \ref{ss:classical} to show that Theorems \ref{t:existsimple} and \ref{t:uniquesimple} are in fact more general than all the classification results obtained by Kirchberg in \cite{Kirchberg-ICM}. In this section, sufficient criteria are given for when an $\mathcal O_\infty$-stable map is strongly $\mathcal O_\infty$-stable. The main tool is to show that if a map $\phi$ is $\mathcal O_\infty$-stable, and induces a sequential relative commutant which is $K_1$-injective, then $\phi$ is strongly $\mathcal O_\infty$-stable. In particular, a positive solution to the open problem of whether every properly infinite, unital $C^\ast$-algebra is $K_1$-injective would imply that every $\mathcal O_\infty$-stable map is strongly $\mathcal O_\infty$-stable. This will be applied, using a result of Kirchberg and Rørdam from \cite{KirchbergRordam-absorbingOinfty}, to show that any nuclear $\ast$-homomorphism into a strongly purely infinite $C^\ast$-algebra is strongly $\mathcal O_\infty$-stable. First some notation and preliminary observations will be set up. The thing to keep in mind in the following construction is that $[0,\infty)$ can be obtained by gluing the intervals $[n,n+1]$ together. This will be used to write path algebras and their relative commutants as pull-backs of certain related sequence algebras. 
Fix a $C^\ast$-algebra $B$, and let $IB := C([0,1],B)$. There are canonical $\ast$-homo\-morphisms \begin{equation} \ev_{\mathbb N} \colon B_\as \to B_\infty, \qquad \ev_I \colon B_\as \to (IB)_\infty \end{equation} induced by $f \mapsto (f(n))_{n\in \mathbb N}$ and $f \mapsto (f_n)_{n\in \mathbb N}$ for $f\in C_b(\mathbb R_+, B)$ respectively, where $f_n \in IB$ is given by $f_n(s) = f(s+n)$ for $s\in [0,1]$. Similarly, let \begin{equation} \sigma \colon B_\infty \to B_\infty , \qquad \ev_s \colon (IB)_\infty \to B_\infty \end{equation} be induced by the shift map $(b_n)_{n\in \mathbb N} \mapsto (b_{n+1})_{n\in \mathbb N}$ and the evaluation map $(g_n)_{n\in \mathbb N} \mapsto (g_n(s))_{n\in \mathbb N}$ for $s\in [0,1]$ respectively. Letting $SB:= C((0,1), B)$, it is easy to see that one obtains the following commutative diagram \begin{equation}\label{eq:asseqpullback} \xymatrix{ 0 \ar[r] & (SB)_\infty \ar@{=}[d] \ar[r]^j & B_\as \ar[d]^{\ev_I} \ar[r]^{\ev_{\mathbb N}} & B_\infty \ar[d]^{\id \oplus \sigma} \ar[r] & 0 \\ 0 \ar[r] & (SB)_\infty \ar[r] & (IB)_\infty \ar[r]^{\ev_0 \oplus \ev_1 \quad} & B_\infty \oplus B_\infty \ar[r] & 0 } \end{equation} with exact rows, where $j$ is induced by the map $\hat{j} \colon \prod_{\mathbb N} SB \to C_b(\mathbb R_+, B)$ given by $\hat{j}((f_n)_{n\in \mathbb N})(t) = f_n(t-n)$ whenever $t\in [n,n+1]$ for $(f_n)_{n\in \mathbb N}\in \prod_{\mathbb N}SB$. In particular, the right square above is a pull-back square. For a $\ast$-homomorphism $\phi \colon A \to B$, one considers $\phi(A)$ as a subalgebra of $B_\as$, $B_\infty$ and $(IB)_\infty$ in the usual way. 
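As a sanity check on the construction above, note that $\hat{j}$ is indeed well-defined: at an integer point $t = n+1$ the two defining formulas agree, since each $f_n \in SB$ extends continuously by $0$ to the endpoints of $[0,1]$, so that
\begin{equation}
\hat{j}((f_k)_{k\in \mathbb N})(n+1) = f_n(1) = 0 = f_{n+1}(0),
\end{equation}
and the glued function is bounded since $\sup_{n\in \mathbb N} \|f_n\| < \infty$.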
\begin{lemma} With notation as above, the diagram \eqref{eq:asseqpullback} induces the commutative diagram \begin{equation}\label{eq:relcompullback} \xymatrix{ 0 \ar[r] & \frac{(SB)_\infty \cap \phi(A)'}{(SB)_\infty \cap \Ann \phi(A)} \ar@{=}[d] \ar[r]^{\overline j} & \frac{B_\as \cap \phi(A)'}{\Ann \phi(A)} \ar[d]^{\overline{\ev}_I} \ar[r]^{\overline{\ev}_{\mathbb N}} & \frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)} \ar[d]^{\id \oplus \overline{\sigma}} \ar[r] & 0 \\ 0 \ar[r] & \frac{(SB)_\infty \cap \phi(A)'}{(SB)_\infty \cap \Ann \phi(A)} \ar[r] & \frac{(IB)_\infty \cap \phi(A)'}{\Ann \phi(A)} \ar[r]^{\overline{\ev}_0 \oplus \overline{\ev}_1 \qquad} & \frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)} \oplus \frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)} \ar[r] & 0 } \end{equation} with exact rows. In particular, the right square above is a pull-back square. \end{lemma} \begin{proof} It is obvious that all the maps in \eqref{eq:asseqpullback} preserve commutativity with elements $\phi(a)$ for all $a\in A$ (as these elements are constant sequences/paths), and that they map annihilators of $\phi(A)$ to annihilators of $\phi(A)$. Hence all maps in \eqref{eq:relcompullback} are well-defined. The only thing not obvious about exactness of the rows is that $\overline{\ev}_{\mathbb N}$ and $\overline{\ev}_0 \oplus \overline{\ev}_1$ are surjective. I present a proof that $\overline{\ev}_{\mathbb N}$ is surjective; the map $\overline{\ev}_0 \oplus \overline{\ev}_1$ is surjective by a similar argument. Let $x\in \frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ be any element. Let $(b_n)_{n\in \mathbb N} \in \prod_{\mathbb N} B$ be a lift of $x$, and define $f\in C_b(\mathbb R_+, B)$ by linear interpolation, $f(t) = (n + 1 - t) b_n + (t - n) b_{n+1}$ for $t\in [n,n+1]$. As $\lim_{n\to\infty} \|[ b_n , \phi(a) ] \| = 0$ for every $a\in A$, it follows that $\lim_{t\to\infty}\| [ f(t), \phi(a)]\| = 0$ for every $a\in A$. 
Hence $f$ induces an element $\overline{f}$ in $\frac{B_\as \cap \phi(A)'}{\Ann \phi(A)}$, and $\overline{\ev}_{\mathbb N}(\overline{f}) = x$ by construction. \end{proof} Recall that a unital $C^\ast$-algebra $D$ is \emph{$K_1$-injective} if whenever $u\in D$ is a unitary with trivial $K_1$-class then $u$ is homotopic to $1_D$ in the unitary group of $D$. Using results from \cite{BlanchardRohdeRordam-K1inj}, one obtains the following. \begin{proposition}\label{p:K1injstrongOinfty} Let $A$ and $B$ be $C^\ast$-algebras with $A$ separable, and let $\phi \colon A \to B$ be an $\mathcal O_\infty$-stable $\ast$-homomorphism. If the sequential relative commutant $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ is $K_1$-injective, then $\phi$ is strongly $\mathcal O_\infty$-stable. \end{proposition} \begin{proof} As observed in Remark \ref{r:Oinftypictures}, strong $\mathcal O_\infty$-stability of $\phi$ means that the asymptotic relative commutant $\frac{B_\as\cap \phi(A)'}{\Ann \phi(A)}$ is properly infinite, so we prove this. Since the right square of \eqref{eq:relcompullback} is a pull-back square with the map $\overline{\ev}_0 \oplus \overline{\ev}_1$ surjective, it follows from \cite[Proposition 2.7]{BlanchardRohdeRordam-K1inj}\footnote{To apply \cite[Proposition 2.7]{BlanchardRohdeRordam-K1inj}, one would a priori need the map $\id \oplus \overline{\sigma}$ to also be surjective. However, the proof only needs surjectivity of one of the maps, see \cite[Lemma 6.2]{GabeRuiz-unitalExt}.} that a sufficient condition is that $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ and $\frac{(IB)_\infty \cap \phi(A)'}{\Ann \phi(A)}$ are properly infinite, and $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)} \oplus \frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ is $K_1$-injective. 
The latter condition follows from our hypothesis (as direct sums of $K_1$-injective $C^\ast$-algebras are clearly $K_1$-injective), and $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ is properly infinite by $\mathcal O_\infty$-stability of $\phi$. It is easy to see that $\mathcal O_\infty$-stability of $\phi$ implies that $\frac{(IB)_\infty \cap \phi(A)'}{\Ann \phi(A)}$ is properly infinite. Alternatively, one can use that it is the sequential relative commutant of the composition $A\xrightarrow \phi B \xrightarrow{\mathrm{constant}} IB$, and that this composition is $\mathcal O_\infty$-stable by \cite[Lemma 3.20$(i)$]{Gabe-O2class}. \end{proof} \begin{remark} It is an open problem whether every unital, properly infinite $C^\ast$-algebra is $K_1$-injective. As a $\ast$-homomorphism $\phi \colon A \to B$ is $\mathcal O_\infty$-stable if and only if the unital $C^\ast$-algebra $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ is properly infinite, an affirmative answer to this open problem would imply that every $\mathcal O_\infty$-stable $\ast$-homomorphism is strongly $\mathcal O_\infty$-stable. \end{remark} Following \cite[Remark 5.10]{KirchbergRordam-absorbingOinfty}, a $C^\ast$-algebra $B$ is \emph{strongly purely infinite} if for all positive $b_1,b_2\in B$ and $\epsilon >0$, there are $s_1,s_2\in B$ such that \begin{equation} \max_{i=1,2}\| s_i^\ast b_i s_i - b_i \| \leq \epsilon, \quad \textrm{and} \quad \| s_2^\ast b_2^{1/2} b_1^{1/2} s_1 \| \leq \epsilon. \end{equation} The following was essentially proved by Kirchberg and Rørdam in \cite{KirchbergRordam-absorbingOinfty}, albeit with the domain $A$ assumed to be nuclear as opposed to exact, and it was essentially how they proved that separable, nuclear, strongly purely infinite $C^\ast$-algebras are $\mathcal O_\infty$-stable. The proposition will be improved in Theorem \ref{t:nucintospi} below. 
\begin{proposition}[Kirchberg--Rørdam]\label{p:KRspi} Let $A$ be a separable, exact $C^\ast$-algebra, and let $B$ be a strongly purely infinite $C^\ast$-algebra. Then every nuclear $\ast$-homo\-morphism $\phi \colon A \to B$ is $\mathcal O_\infty$-stable. \end{proposition} \begin{proof} The proof will first be carried out assuming that $B$ is also stable. In particular, $\mathcal O_2$ embeds unitally into $\multialg{B}$. Let $\mathscr C_0 \subseteq \CP(A, B_\infty)$ be the set of c.p.~maps $\rho$ which are approximately dominated by $\phi$, and for which $C^\ast(\rho(A))$ is commutative. Let $\mathscr C\subseteq \CP(A,B_\infty)$ be the set of c.p.~maps of the form \begin{equation}\label{eq:coccmap} A \ni a \mapsto \sum_{i,j=1}^n y_i^\ast \rho(x_i^\ast a x_j) y_j \in B_\infty \end{equation} for $\rho \in \mathscr C_0$, $n\in \mathbb N$, $x_1,\dots, x_n\in \multialg{A}$ and $y_1,\dots, y_n\in \multialg{B_\infty}$, and let $\overline{\mathscr C}$ denote the point-norm closure of $\mathscr C$. By \cite[Proposition 7.14$(i)$ and Lemma 7.19]{KirchbergRordam-absorbingOinfty}, every map in $\overline{\mathscr C}$ is approximately 1-dominated by $\phi$. By \cite[Lemma 7.16]{KirchbergRordam-absorbingOinfty}, $\mathscr C$ is an operator convex cone in the sense of \cite[Definition 4.1]{KirchbergRordam-zero}, and thus $\overline{\mathscr C}$ is a point-norm closed operator convex cone. Note that every map in $\mathscr C_0$ is nuclear, and thus so is every map of the form \eqref{eq:coccmap}. In particular, $\overline{\mathscr C} \subseteq \CP_\nuc(A, B_\infty)$. By \cite[Proposition 7.13]{KirchbergRordam-absorbingOinfty}, there is for every $a\in A_+$ a map $\rho \in \mathscr C_0$ such that $\phi(a) = \rho(a)$. A minor modification of \cite[Proposition 4.2]{KirchbergRordam-zero},\footnote{Where one moves the nuclearity assumption of $A$ to assuming that all the maps are nuclear.} see \cite[Theorem 2.5]{Gabe-cplifting}, implies that $\phi\in \overline{\mathscr C}$. 
Let $s_1,s_2 \in \multialg{B}$ be $\mathcal O_2$-isometries, and $\phi \oplus_{s_1,s_2} \phi := s_1 \phi(-)s_1^\ast + s_2\phi(-)s_2^\ast$. Then $\phi \oplus_{s_1,s_2} \phi \in \overline{\mathscr C}$ by \cite[Lemma 7.16$(i)$]{KirchbergRordam-absorbingOinfty}, and thus this map is approximately $1$-dominated by $\phi$. Hence \cite[Lemma 7.3]{KirchbergRordam-absorbingOinfty} provides $b\in B_\infty$ (which we may assume to be a contraction; otherwise do a standard cutting-down argument with an approximate identity in $A$), such that $b^\ast \phi(-) b = (\phi \oplus_{s_1,s_2} \phi)(-)$. Let $t_i := b s_i \in B_\infty$. Since $t_1^\ast \phi(-) t_1 = t_2^\ast \phi(-) t_2 = \phi(-)$, it follows from Lemma \ref{l:conjhom} that $t_1,t_2 \in B_\infty \cap \phi(A)'$. Continuing to apply Lemma \ref{l:conjhom}, one obtains \begin{equation} t_i^\ast t_j \phi(a) = t_i^\ast \phi(a) t_j = s_i^\ast (s_1 \phi(a) s_1^\ast + s_2 \phi(a) s_2^\ast) s_j = \delta_{i,j} \phi(a) \end{equation} for all $a\in A$, and $i,j=1,2$. Hence $t_1$ and $t_2$ induce isometries in $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ with orthogonal range projections, so $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ is properly infinite, or equivalently, $\phi$ is $\mathcal O_\infty$-stable. This completes the case where $B$ is stable. Now, if $B$ is not stable, it follows from \cite[Proposition 5.11]{KirchbergRordam-absorbingOinfty} that $B\otimes \mathcal K$ is strongly purely infinite, and clearly $\phi \otimes e_{1,1} \colon A \to B\otimes \mathcal K$ is nuclear. Hence $\phi \otimes e_{1,1}$ is $\mathcal O_\infty$-stable by what was proved above, and thus $\phi$ is $\mathcal O_\infty$-stable by Lemma \ref{l:relcombasic}$(a)$. \end{proof} The following lemma provides a way of concluding nuclearity of $\ast$-homo\-morphisms out of tensor products when one of the tensor factors is nuclear.
\begin{lemma}\label{l:nucoutoftensor} Let $A,B$ and $C$ be $C^\ast$-algebras with $C$ nuclear, and let $\psi \colon A \otimes C \to B$ be a $\ast$-homomorphism. If $(e_\lambda)_{\lambda \in \Lambda}$ is an approximate identity in $C$, and if $\psi(- \otimes e_\lambda) \colon A \to B$ is nuclear for each $\lambda \in \Lambda$, then $\psi$ is nuclear. \end{lemma} \begin{proof} Let $\psi_A \colon A \to B^{\ast \ast}$ be a point-ultraweak accumulation point of $\psi(- \otimes e_\lambda)$, and similarly define $\psi_C \colon C \to B^{\ast \ast}$. Then $\psi_A$ and $\psi_C$ are $\ast$-homo\-morphisms with commuting images, and $\psi_A$ is weakly nuclear as it is a limit of nuclear maps. By \cite[Corollary 2.8]{Gabe-qdexact}, $\psi$ is nuclear. \end{proof} Consequently, one obtains the following lemma, which seems interesting in its own right. \begin{lemma}\label{l:sepOinftystable} Let $A$ be a separable, exact $C^\ast$-algebra, let $B$ be a strongly purely infinite $C^\ast$-algebra, and let $\phi \colon A \to B$ be a nuclear $\ast$-homomorphism. For every separable, nuclear $C^\ast$-subalgebra $C\subseteq \frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$, there exists a unital embedding \begin{equation}\label{eq:Oinftyindoublecom} \mathcal O_\infty \hookrightarrow \frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)} \cap C'. \end{equation} \end{lemma} \begin{proof} By replacing $C$ with its minimal unitisation (which is also nuclear), one may assume that $C$ is a unital $C^\ast$-subalgebra of $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$. Let $\psi \colon A \otimes C \to B_\infty$ be the $\ast$-homomorphism given on elementary tensors by $\psi(a\otimes c) = \phi(a) \overline{c}$ for $a\in A$ and $c\in C$, where $\overline{c} \in B_\infty \cap \phi(A)'$ is a lift of $c$. As $\psi(-\otimes 1_C)= \phi$ is nuclear, Lemma \ref{l:nucoutoftensor} implies that $\psi$ is nuclear.
As $B_\infty$ is strongly purely infinite by \cite[Proposition 5.12]{KirchbergRordam-absorbingOinfty}, it follows that $\psi$ is $\mathcal O_\infty$-stable by Proposition \ref{p:KRspi}. Hence there exist sequences $t_i^{(1)}, t_i^{(2)}, \dots \in B_\infty$ for $i=1,2$ such that \begin{equation} \lim_{n\to \infty} \|[ \psi(x), t_i^{(n)}] \| = 0, \qquad \lim_{n\to \infty} \| ((t_i^{(n)})^\ast t_j^{(n)} - \delta_{i,j}) \psi(x) \| = 0 \end{equation} for all $x\in A\otimes C$. By a standard diagonal argument for sequence algebras, one finds $t_1,t_2 \in B_\infty \cap \psi(A\otimes C)'$ such that \begin{equation} t_i^\ast t_j \psi(x) = \delta_{i,j} \psi(x), \qquad i,j = 1,2,\, x\in A\otimes C. \end{equation} Hence $t_1$ and $t_2$ induce isometries in $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)} \cap C'$ with mutually orthogonal range projections. Consequently, this $C^\ast$-algebra contains a unital copy of $\mathcal O_\infty$. \end{proof} With this, one can improve on Proposition \ref{p:KRspi} as follows. \begin{theorem}\label{t:nucintospi} Let $A$ be a separable, exact $C^\ast$-algebra, and let $B$ be a strongly purely infinite $C^\ast$-algebra. Then every nuclear $\ast$-homomorphism $\phi \colon A \to B$ is strongly $\mathcal O_\infty$-stable. \end{theorem} \begin{proof} By Propositions \ref{p:K1injstrongOinfty} and \ref{p:KRspi}, it suffices to prove that $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ is $K_1$-injective. Let $u\in \frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ be a unitary with trivial $K_1$-class. As $C^\ast(u)$ is nuclear, Lemma \ref{l:sepOinftystable} provides a unital copy of $\mathcal O_\infty$ in $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$ which commutes with $u$. By \cite[Lemma 2.4(ii)]{BlanchardRohdeRordam-K1inj}, $u$ is homotopic to the unit in the unitary group of $\frac{B_\infty \cap \phi(A)'}{\Ann \phi(A)}$.
\end{proof} Using that simple, purely infinite $C^\ast$-algebras are strongly purely infinite by \cite[Corollary 6.9]{KirchbergRordam-absorbingOinfty}, one obtains the following. \begin{corollary}\label{c:nucintosimplepi} Let $A$ be a separable, exact $C^\ast$-algebra, and let $B$ be a simple, purely infinite $C^\ast$-algebra. Then every nuclear $\ast$-homomorphism $\phi \colon A \to B$ is strongly $\mathcal O_\infty$-stable. \end{corollary} \section{Ideals and actions of topological spaces} The rest of this paper is about generalising the Kirchberg--Phillips theorem to the non-simple case. This entails keeping track of the ideal structure of the $C^\ast$-algebras as well as how these interact. I should emphasise that all the key methods used in the simple case -- in particular, the methods developed in Sections \ref{s:PIhom} and \ref{s:unitary} -- are also the key ingredients in the non-simple case. The situation in the non-simple case is much more complex than the simple case and needs a generalised version of $KK$-theory to work. For instance, one can construct separable, nuclear, $\mathcal O_\infty$-stable $C^\ast$-algebras $A$ and $B$ which both have exactly one non-trivial ideal $I_A$ and $I_B$ respectively, such that $I_A \sim_{KK} I_B$ and $A \sim_{KK} B$, but for which $A$ and $B$ are not stably isomorphic. 
For instance, the two (non-isomorphic) six-term exact sequences \begin{equation} \xymatrix{ \mathbb Z \ar[r]^{\id} & \mathbb Z \ar[r]^0 & \mathbb Z \ar[d]^\id & \mathbb Z \ar[r]^{0} & \mathbb Z \ar[r]^\id & \mathbb Z \ar[d]^0 \\ \mathbb Z \ar[u]^0 & \mathbb Z \ar[l]_{\id} & \mathbb Z \ar[l]_0 & \mathbb Z \ar[u]^\id & \mathbb Z \ar[l]_0 & \mathbb Z \ar[l]_\id } \end{equation} arise as the $K$-theory of separable, nuclear, stable, $\mathcal O_\infty$-stable $C^\ast$-algebras $A$ and $B$ with unique non-trivial ideals $I_A$ and $I_B$ respectively, so that $I_A,I_B,A/I_A$ and $B/I_B$ satisfy the UCT, see \cite[Proposition 5.4]{Rordam-classsixterm}. By the Kirchberg--Phillips Theorem (Theorem \ref{t:KPUCT}), $I_A \cong I_B$ and $A/I_A \cong B/I_B$. Moreover, $A$ and $B$ satisfy the UCT by the 2-out-of-3 property, and thus $A\sim_{KK} B$ since $A$ and $B$ have isomorphic $K$-theory. However, stably isomorphic $C^\ast$-algebras with a unique non-trivial ideal induce isomorphic six-term exact sequences in $K$-theory, and thus $A$ and $B$ are not stably isomorphic. The generalised version of $KK$-theory needed for classification will be studied in Section \ref{s:KK}. In this section the focus is on how to incorporate the ideal structure of $C^\ast$-algebras in a way that makes non-simple $C^\ast$-algebras amenable to classification. \subsection{Ideal lattices} \emph{All ideals are assumed to be two-sided and closed.} Recall that a \emph{complete lattice} is a partially ordered set in which every subset has a supremum and an infimum. Note that any complete lattice $\mathcal L$ has a largest element $\sup \mathcal L = \inf \emptyset$ and a smallest element $\inf \mathcal L = \sup \emptyset$. \begin{definition}\label{d:lattice} Let $\mathcal L$ be a complete lattice, and $I,J \in \mathcal L$.
Say that $I$ is \emph{compactly contained} in $J$, written $I\Subset J$, if whenever $(I_\lambda)$ is a family in $\mathcal L$ for which $J \leq \sup I_\lambda$, then there are finitely many $I_{\lambda_1}, \dots, I_{\lambda_n}$ such that $I \leq \sup I_{\lambda_k}$. A map $\Phi \colon \mathcal L \to \mathcal L'$ of complete lattices is called a \emph{$\Cu$-morphism} if it preserves suprema and compact containment.\footnote{See \cite[Section 2.2]{Gabe-O2class} for motivation for why this name makes sense.} Whenever $\Phi , \Psi \colon \mathcal L \to \mathcal L'$ are maps of partially ordered sets, write $\Phi \leq \Psi$ whenever $\Phi(I) \leq \Psi(I)$ for all $I \in \mathcal L$. \end{definition} Recall that the ideal lattice $\mathcal I(A)$ of a $C^\ast$-algebra $A$ -- the set of all two-sided, closed ideals in $A$ -- is a complete lattice with suprema and infima of a non-empty subset $\mathcal S \subseteq \mathcal I(A)$ given by \begin{equation} \sup \mathcal S = \overline{\sum_{I\in \mathcal S} I}, \qquad \inf \mathcal S = \bigcap_{I\in \mathcal S} I. \end{equation} By convention, $\sup \emptyset = 0$ and $\inf \emptyset = A$ in $\mathcal I(A)$. As $\mathcal I(A)$ is a complete lattice, there is a notion of compact containment of ideals in $C^\ast$-algebras which played a major role in \cite{Gabe-O2class}. It is important to consider the ideal lattice as an invariant which is also defined for maps as in the following definition. \begin{definition} For any completely positive map $\phi \colon A \to B$, let \begin{equation} \mathcal I(\phi) \colon \mathcal I(A) \to \mathcal I(B), \qquad \mathcal I(\phi)(I) = \overline{B \phi(I) B} \end{equation} for $I\in \mathcal I(A)$. \end{definition} \begin{remark} It was shown in \cite[Lemma 2.12]{Gabe-O2class}, that $\mathcal I(\phi)$ always preserves suprema, and that $\mathcal I(\phi)$ is a $\Cu$-morphism provided $\phi$ is a $\ast$-homomorphism. 
\end{remark} \begin{remark} Suppose that $\phi \colon A \to B$ is a $\ast$-homomorphism. Above one associates a map $\mathcal I(\phi) \colon \mathcal I(A) \to \mathcal I(B)$ in a covariant way. This was done in \cite{Gabe-O2class} so that one could make use of compact containment of ideals. In his work \cite{Kirchberg-non-simple}, Kirchberg instead considers the pre-image map $\phi^{-1} \colon \mathcal I(B) \to \mathcal I(A)$, which is a contravariant approach. One can show that these two notions are each other's duals in the sense that there is a natural one-to-one correspondence between $\Cu$-morphisms $\mathcal I(A) \to \mathcal I(B)$, and maps $\mathcal I(B) \to \mathcal I(A)$ which are monotone continuous, i.e.~maps that preserve infima and increasing suprema. This one-to-one correspondence takes $\mathcal I(\phi)$ to $\phi^{-1}$. This is because $(\phi^{-1}, \mathcal I(\phi))$ is a \emph{Galois connection}, see \cite[Section O-3]{GHKLMS-book}, i.e.~both $\phi^{-1}$ and $\mathcal I(\phi)$ are order preserving, and whenever $I\in \mathcal I(A)$ and $J \in \mathcal I(B)$ one has \begin{equation} \mathcal I(\phi)(I) \subseteq J \quad \textrm{if and only if} \quad I \subseteq \phi^{-1}(J).
\end{equation} As the duality is not needed in this paper, I leave the verification to the interested reader.\footnote{It is straightforward to verify; alternatively it can be deduced by combining several results from \cite[Section O-3]{GHKLMS-book}.} \end{remark} It was observed in \cite[Remark 2.11]{Gabe-O2class} that $\mathcal I$ is \emph{not} functorial on the category of $C^\ast$-algebras with c.p.~maps as morphisms.\footnote{For instance, if $\phi \colon \mathbb C \to M_2(\mathbb C)$ is the embedding to the $(1,1)$-corner, and $\psi \colon M_2(\mathbb C) \to \mathbb C$ is the compression to the $(2,2)$-corner, then $\mathcal I(\psi \circ \phi) \neq \mathcal I(\psi) \circ \mathcal I(\phi)$.} However, this annoyance goes away if we restrict our attention to $\ast$-homomorphisms.\footnote{Or, more generally, if we only consider c.p.~order zero maps.} The following proposition was proved in \cite{Gabe-O2class}, and as it is so fundamental, it will usually be used without explicit mention. \begin{proposition}[Cf.~\cite{Gabe-O2class} Proposition 2.15] $\mathcal I$ is a covariant functor from the category of $C^\ast$-algebras with $\ast$-homomorphisms to the category of complete lattices with $\Cu$-morphisms. \end{proposition} A special case of Definition \ref{d:lattice} says that for c.p.~maps $\phi, \psi \colon A \to B$ we write $\mathcal I(\phi) \leq \mathcal I(\psi)$ if $\mathcal I(\phi)(I) \subseteq \mathcal I(\psi)(I)$ for all $I \in \mathcal I(A)$. \begin{theorem}[{\cite[Theorem 3.3]{Gabe-O2class}}]\label{t:approxdom} Let $A$ and $B$ be $C^\ast$-algebras with $A$ exact, and let $\phi , \rho \colon A \to B$ be nuclear maps with $\phi$ a $\ast$-homomorphism. The following are equivalent: \begin{itemize} \item[$(i)$] $\phi$ approximately dominates $\rho$ (see Definition \ref{d:approxdom}); \item[$(ii)$] $\mathcal I(\rho) \leq \mathcal I(\phi)$; \item[$(iii)$] $\rho(a) \in \overline{B \phi(a) B}$ for any positive $a\in A$.
\end{itemize} \end{theorem} The above theorem will be used in place of Proposition \ref{p:fulldom} for full maps, which stated that any full $\ast$-homomorphism approximately dominates any nuclear map. \subsection{Actions of topological spaces on $C^\ast$-algebras} \begin{definition} Let $X$ be a topological space. An \emph{action} of $X$ on a $C^\ast$-algebra $A$ is an order preserving map \begin{equation} \Phi_A \colon \mathcal O(X) \to \mathcal I(A) \end{equation} where $\mathcal O(X)$ denotes the complete lattice of open subsets of $X$.\footnote{Suprema in $\mathcal O(X)$ are given by taking unions, and infima are given by taking the interior of the intersection, i.e.~if $(U_\lambda)_{\lambda \in \Lambda}$ is a collection in $\mathcal O(X)$ then $\inf_{\lambda \in \Lambda} U_\lambda = (\bigcap_{\lambda \in \Lambda} U_\lambda)^\circ$.} The pair $(A, \Phi_A)$ is called an \emph{$X$-$C^\ast$-algebra}. Often $\Phi_A$ is omitted from the notation by defining $A(U) := \Phi_A(U)$ for all $U\in \mathcal O(X)$. A map $\phi \colon A \to B$ between $X$-$C^\ast$-algebras is called \emph{$X$-equivariant} (or $\Phi_A$-$\Phi_B$-equivariant) if \begin{equation} \phi(A(U)) \subseteq B(U) \qquad \textrm{for all }U\in \mathcal O(X), \end{equation} or equivalently, if $\phi(\Phi_A(U)) \subseteq \Phi_B(U)$ for all $U\in \mathcal O(X)$. \end{definition} Clearly a c.p.~map $\phi \colon A \to B$ is $X$-equivariant if and only if $\mathcal I(\phi) \circ \Phi_A \leq \Phi_B$. The advantage of considering $X$-$C^\ast$-algebras is that one can form the category of all (separable) $X$-$C^\ast$-algebras with $X$-equivariant $\ast$-homo\-morphisms as morphisms. In this sense, $KK(X)$ -- which will be defined in Section \ref{s:KK} -- becomes a functor from this category to some target category which turns out to be triangulated, see \cite[Proposition 3.11]{MeyerNest-bootstrap}.
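To illustrate the definition in the smallest non-trivial case, the following standard sketch (not used elsewhere in the text) may help:

```latex
% Illustrative sketch: actions of the Sierpinski space encode C*-algebras
% with a distinguished ideal. This is a standard observation.
\begin{example}
Let $X = \{1,2\}$ carry the Sierpi\'nski topology, so that
$\mathcal O(X) = \{\emptyset, \{1\}, X\}$, and let $A$ be a
$C^\ast$-algebra with a fixed ideal $I \subseteq A$. Then
\begin{equation*}
  \Phi_A(\emptyset) = 0, \qquad \Phi_A(\{1\}) = I, \qquad \Phi_A(X) = A
\end{equation*}
defines an order preserving map $\mathcal O(X) \to \mathcal I(A)$,
i.e.~an action of $X$ on $A$, and one checks directly that it preserves
all suprema and infima. If $(B, \Phi_B)$ is obtained in the same way from
an ideal $J \subseteq B$, then a $\ast$-homomorphism
$\phi \colon A \to B$ is $X$-equivariant precisely when
$\phi(I) \subseteq J$. Thus $X$-$C^\ast$-algebras over this two-point
space encode pairs consisting of a $C^\ast$-algebra and a distinguished
ideal.
\end{example}
```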
Often it is an advantage to consider only $X$-$C^\ast$-algebras for which the action has certain additional properties, such as actions that preserve (increasing) suprema and/or infima. This motivates the following definition. \begin{definition}\label{d:Xalg} Let $X$ be a topological space, and let $(A, \Phi_A)$ be an $X$-$C^\ast$-algebra. Then $(A, \Phi_A)$ is said to be \begin{itemize} \item \emph{(finitely) lower semicontinuous} if $\Phi_A$ preserves (finite) infima; \item \emph{(finitely) upper semicontinuous} if $\Phi_A$ preserves (finite) suprema; \item \emph{monotone upper semicontinuous} if $\Phi_A$ preserves suprema of non-empty, upwards directed sets; \item \emph{(monotone) continuous} if it is both lower semicontinuous and (monotone) upper semicontinuous; \item \emph{$X$-compact} if $\Phi_A$ preserves compact containment; \item \emph{tight} if $\Phi_A$ is an order isomorphism. \end{itemize} \end{definition} Recall that in any complete lattice $\inf \emptyset$ is the largest element and $\sup \emptyset$ is the smallest element. Hence if $A$ is an $X$-$C^\ast$-algebra which is (finitely) lower semicontinuous then $A(X) = A(\inf \emptyset) = \inf \emptyset = A$. Similarly, if $A$ is (finitely) upper semicontinuous then $A(\emptyset) = 0$ (where $\emptyset \in \mathcal O(X)$ is the smallest element). \begin{remark} All but one of the above definitions of actions have appeared previously in the literature, in particular in \cite{Kirchberg-non-simple}. The only exception is the notion of an \emph{$X$-compact} action, which is a new concept. \end{remark} \begin{example}[Canonical tight action]\label{ex:tightaction} As is customary, let $\Prim A$ denote the \emph{primitive ideal space} of the $C^\ast$-algebra $A$, i.e.~the set of all primitive ideals\footnote{An ideal is \emph{primitive} if it is the kernel of an irreducible representation.} of $A$ equipped with the Jacobson topology.
Recall -- see for instance \cite[Theorem 4.1.3]{Pedersen-book-automorphism} -- that for a $C^\ast$-algebra $A$ there is an order iso\-morphism $\I_A \colon \mathcal O(\Prim A) \xrightarrow \cong \mathcal I(A)$ given by \begin{equation} \I_A(V) = \bigcap_{\mathfrak p \in \Prim A \setminus V} \mathfrak p, \qquad V\in \mathcal O(\Prim A). \end{equation} Hence $(A, \I_A)$ is a tight $\Prim A$-$C^\ast$-algebra. \end{example} \begin{notation} The action $\I_A \colon \mathcal O(\Prim A) \to \mathcal I(A)$ from Example \ref{ex:tightaction} will be referred to as the \emph{canonical tight action} of $A$. \end{notation} \begin{example}[Ordinary $C^\ast$-algebras]\label{ex:onepointXalg} Let $\{\star\}$ be a one-point topological space, and let $A$ be a $\{\star\}$-$C^\ast$-algebra. Then $A(\emptyset) = 0$ and $A(\{\star\}) = A$ if and only if $A$ is continuous. In particular, the category of $C^\ast$-algebras is isomorphic to the category of continuous $\{\star\}$-$C^\ast$-algebras. If $A$ is a continuous $\{\star\}$-$C^\ast$-algebra then \begin{itemize} \item $A$ is tight if and only if the $C^\ast$-algebra $A$ is simple; \item $A$ is $\{\star\}$-compact if and only if the primitive ideal space $\Prim A$ is compact. \end{itemize} \end{example} \begin{example}[$C_0(X)$-algebras]\label{ex:C(X)alg} Let $X$ be a locally compact Hausdorff space. A \emph{$C_0(X)$-algebra} is a $C^\ast$-algebra $A$ together with a $\ast$-homomorphism $\psi_A \colon C_0(X) \to \mathscr Z\multialg{A}$ (the center of the multiplier algebra) such that \begin{equation}\label{eq:psiC(X)A} \overline{\psi_A(C_0(X)) A} = A.\footnote{There is some disagreement in the literature whether or not one only wants to consider the case where $\psi_A$ is injective. 
However, this additional criterion rules out important special cases such as the skyscraper $C_0(X)$-algebras, obtained by letting $\psi_A$ be a composition $C_0(X) \xrightarrow{\ev_x} \mathbb C \to \mathscr Z\multialg{A}$ of a point-evaluation and the canonical unital embedding of $\mathbb C$.} \end{equation} The map $\psi_A$ induces an action of $X$ on $A$, namely $\Phi_A \colon \mathcal O(X) \to \mathcal I(A)$ given by \begin{equation} \Phi_A(U) = \overline{\psi_A(C_0(U)) A}, \qquad U\in \mathcal O(X). \end{equation} The $C_0(X)$-algebra $A$ is called \emph{continuous} if the map \begin{equation} X \ni x \mapsto \| a + \Phi_A(X \setminus \{x\})\|_{A/\Phi_A(X \setminus \{x\})} \end{equation} is continuous for every $a\in A$. It turns out that the assignment $\psi_A \mapsto \Phi_A$ described above is a one-to-one correspondence between $\ast$-homomorphisms $\psi_A \colon C_0(X) \to \mathscr Z\multialg{A}$ satisfying \eqref{eq:psiC(X)A}, and actions of $X$ on $A$ which are upper semicontinuous and finitely lower semicontinuous, see \cite[Sections 2.1 and 2.2]{MeyerNest-bootstrap}. In this way, continuous $C_0(X)$-algebras correspond exactly to continuous $X$-$C^\ast$-algebras. \end{example} \begin{example}[$C^\ast$-algebras over topological spaces]\label{ex:CoverX} Let $X$ be a topological space. A \emph{$C^\ast$-algebra over $X$} is a $C^\ast$-algebra $A$ together with a continuous map $\psi_A \colon \Prim A \to X$. This induces an action of $X$ on $A$, namely $\Phi_A \colon \mathcal O(X) \to \mathcal I(A)$ given by \begin{equation} \Phi_A (U) = \I_A (\psi_A^{-1}(U)), \qquad U \in \mathcal O(X), \end{equation} where $\I_A \colon \mathcal O(\Prim A) \xrightarrow \cong \mathcal I(A)$ is the canonical tight action (Example \ref{ex:tightaction}). It turns out that $(A, \Phi_A)$ is always upper semicontinuous and finitely lower semicontinuous.
If $X$ is sober,\footnote{A topological space $X$ is called \emph{sober} if the map $X \to \mathcal O(X)$ given by $x\mapsto X \setminus \overline{\{x\}}$ is injective and maps onto the set of all prime open subsets of $X$. An open subset $U \subseteq X$ is \emph{prime} if whenever $V,W \in \mathcal O(X)$ are such that $V \cap W \subseteq U$ then $V \subseteq U$ or $W \subseteq U$. For any topological space $X$ there is a sober space $\hat X$ such that $\mathcal O(X) \cong \mathcal O(\hat X)$, so it is essentially no loss of generality to assume that $X$ is sober, see \cite[Section 2.5]{MeyerNest-bootstrap}.} which is essentially no loss of generality, then the above construction $\psi_A \mapsto \Phi_A$ is a one-to-one correspondence between continuous maps $\Prim A \to X$ and actions $\mathcal O(X) \to \mathcal I(A)$ which are upper semicontinuous and finitely lower semicontinuous, see \cite[Lemma 2.25]{MeyerNest-bootstrap}. \end{example} \subsection{$X$-fullness} Recall that a $\ast$-homomorphism $\phi \colon A \to B$ is full if $\phi(a)$ is full in $B$ for every non-zero $a\in A$. Essentially, this means that $\phi$ is as large as possible in an ideal-related sense. The same phenomenon will be studied for $X$-equivariant maps, in the sense that $X$-fullness means that $\phi$ is as large as possible in an ideal-related sense, given that $\phi$ is $X$-equivariant. For a subset $Y$ of a topological space $X$, let $Y^\circ$ denote the interior of $Y$. \begin{lemma}\label{l:dualaction} Let $(A, \Phi_A)$ be a lower semicontinuous $X$-$C^\ast$-algebra. There is a well-defined order preserving map $\Psi_A \colon \mathcal I(A) \to \mathcal O(X)$ given by \begin{equation}\label{eq:dualaction} \Psi_A (I) = \bigg( \bigcap_{\substack{V \in \mathcal O(X) \\ I \subseteq \Phi_A(V)}} V \bigg)^\circ , \qquad I\in \mathcal I(A).
\end{equation} The maps $\Phi_A$ and $\Psi_A$ satisfy \begin{equation}\label{eq:dualGalois} I \subseteq \Phi_A(U) \qquad \textrm{if and only if} \qquad \Psi_A(I) \subseteq U \end{equation} for all $I\in \mathcal I(A)$ and $U \in \mathcal O(X)$.\footnote{This means that $(\Phi_A, \Psi_A)$ is a Galois connection, see \cite[Section O-3]{GHKLMS-book}. In particular, it follows from \cite[Corollary O-3.5]{GHKLMS-book} that there exists a map $\Psi_A$ satisfying \eqref{eq:dualGalois} if and only if $(A, \Phi_A)$ is lower semicontinuous.} In particular, \begin{equation}\label{eq:dualeq} I \subseteq \Phi_A(\Psi_A(I)) \qquad \textrm{and} \qquad \Psi_A (\Phi_A(U)) \subseteq U \end{equation} for all $I\in \mathcal I(A)$ and $U \in \mathcal O(X)$. \end{lemma} \begin{proof} Note that $\inf \emptyset = X$ in the complete lattice $\mathcal O(X)$, and that $\inf \emptyset = A$ in the complete lattice $\mathcal I(A)$. Hence as $\Phi_A$ preserves infima of the empty set, one has $\Phi_A(X) = A$. It easily follows that $\Psi_A$ is well-defined since the index of the intersection in \eqref{eq:dualaction} always contains $X\in \mathcal O(X)$ and is therefore never empty. Clearly $\Psi_A$ is order preserving. If $I \subseteq \Phi_A(U)$ then $\Psi_A(I) \subseteq U$ by the definition of $\Psi_A$. Conversely, suppose $\Psi_A(I) \subseteq U$. As $\Phi_A$ preserves infima,\footnote{This means that $\Phi_A((\bigcap \mathcal U)^\circ) = \bigcap \Phi_A(\mathcal U)$ for a subset $\mathcal U \subseteq \mathcal O(X)$.} it follows that \begin{equation} \Phi_A(\Psi_A(I)) = \bigcap_{\substack{V \in \mathcal O(X) \\ I \subseteq \Phi_A(V)}} \Phi_A(V). \end{equation} As the right hand side above contains $I$ by definition, one gets \begin{equation} I \subseteq \Phi_A(\Psi_A(I)) \subseteq \Phi_A(U) \end{equation} since $\Phi_A$ is order preserving. ``In particular'' follows from \eqref{eq:dualGalois} by considering the cases $I= \Phi_A(U)$ and $\Psi_A(I) = U$. 
\end{proof} \begin{definition} Let $(A,\Phi_A)$ be a lower semicontinuous $X$-$C^\ast$-algebra. The \emph{dual action} of $\Phi_A$ (or the dual action of the $X$-$C^\ast$-algebra $(A, \Phi_A)$) is the map $\Psi_A \colon \mathcal I(A) \to \mathcal O(X)$ defined in \eqref{eq:dualaction}. \end{definition} \begin{lemma}\label{l:Cuaction} Let $(A,\Phi_A)$ be a lower semicontinuous $X$-$C^\ast$-algebra with dual action $\Psi_A$, and let $(B, \Phi_B)$ be an $X$-$C^\ast$-algebra. \begin{itemize} \item[$(a)$] A c.p.~map $\phi \colon A \to B$ is $X$-equivariant if and only if $\mathcal I(\phi) \leq \Phi_B \circ \Psi_A$. \item[$(b)$] $\Psi_A$ preserves suprema. In particular, $\Phi_B \circ \Psi_A \colon \mathcal I(A) \to \mathcal I(B)$ preserves suprema whenever $(B, \Phi_B)$ is upper semicontinuous. \item[$(c)$] If $(A, \Phi_A)$ is monotone continuous then $\Psi_A$ preserves compact containment. In particular, $\Phi_B \circ \Psi_A \colon \mathcal I(A) \to \mathcal I(B)$ is a $\Cu$-morphism whenever $(B, \Phi_B)$ is $X$-compact and upper semicontinuous (in addition to $(A, \Phi_A)$ being monotone continuous). \end{itemize} \end{lemma} \begin{proof} $(a)$: If $\phi$ is $X$-equivariant and $I\in \mathcal I(A)$ then \begin{equation} \phi(I) \stackrel{\eqref{eq:dualeq}}{\subseteq} \phi(\Phi_A(\Psi_A(I))) \subseteq \Phi_B(\Psi_A(I)), \end{equation} so $\mathcal I(\phi) \leq \Phi_B \circ \Psi_A$. Conversely, if $\mathcal I(\phi) \leq \Phi_B \circ \Psi_A$ and $U\in \mathcal O(X)$ then \begin{equation} \phi(\Phi_A(U)) \subseteq \Phi_B (\Psi_A( \Phi_A(U))) \stackrel{\eqref{eq:dualeq}}{\subseteq} \Phi_B(U), \end{equation} so $\phi$ is $X$-equivariant. $(b)$: Let $S \subseteq \mathcal I(A)$ be a (possibly empty) subset, let $I := \sup S$ and $U := \sup \Psi_A(S)$. As $\Psi_A(J) \subseteq \Psi_A(I)$ for all $J \in S$, one gets $U \subseteq \Psi_A(I)$, and it remains to show the reverse inclusion. 
Since $\Psi_A(J) \subseteq U$ for all $J \in S$, one gets $J \subseteq \Phi_A(U)$ for all $J\in S$ by \eqref{eq:dualGalois}. Hence $I \subseteq \Phi_A(U)$, so by \eqref{eq:dualGalois} one gets $\Psi_A(I) \subseteq U$. $(c)$: Suppose $I \Subset J$ in $\mathcal I(A)$ and let $(U_\lambda)_{\lambda \in \Lambda}$ be an increasing net in $\mathcal O(X)$ such that $\Psi_A(J) \subseteq \sup U_\lambda$. By \eqref{eq:dualGalois} one has \begin{equation} J \subseteq \Phi_A(\sup U_\lambda) = \sup \Phi_A(U_\lambda) \end{equation} where the equality follows from monotone upper semicontinuity of $(A, \Phi_A)$. As $I\Subset J$ there is $\lambda\in \Lambda$ such that $I \subseteq \Phi_A(U_\lambda)$. By \eqref{eq:dualGalois} one has $\Psi_A(I) \subseteq U_\lambda$, so $\Psi_A(I) \Subset \Psi_A(J)$. Thus $\Psi_A$ preserves compact containment. \end{proof} If $\phi \colon A \to B$ is an $X$-equivariant c.p.~map, then by Lemma \ref{l:Cuaction}$(a)$ the largest that $\mathcal I(\phi)$ can possibly be, in an ideal-related sense, is $\Phi_B \circ \Psi_A$. This motivates the following definition. \begin{definition} Let $(A, \Phi_A)$ and $(B, \Phi_B)$ be $X$-$C^\ast$-algebras with $(A,\Phi_A)$ lower semicontinuous, and let $\Psi_A$ be the dual action of $\Phi_A$. A c.p.~map $\phi \colon A \to B$ is said to be \emph{$X$-full} if $\mathcal I(\phi) = \Phi_B \circ \Psi_A$. \end{definition} \begin{remark} Suppose $A$ and $B$ are $X$-$C^\ast$-algebras with $A$ lower semicontinuous and dual action $\Psi_A \colon \mathcal I(A) \to \mathcal O(X)$. A c.p.~map $\phi \colon A \to B$ is $X$-full if and only if $\phi(a)$ is full in $B(\Psi_A(\overline{AaA}))$ for any $a\in A$. This was how $X$-fullness was (essentially) defined in \cite{GabeRuiz-absrep}, and how $X$-fullness was described in the introduction. In fact, by definition of $\Psi_A$, $U=\Psi_A(\overline{AaA})$ is the smallest open subset of $X$ such that $a\in A(U)$.
\end{remark} \begin{example} Let $\{\star\}$ be a one-point topological space, and let $A$ and $B$ be continuous $\{ \star\}$-$C^\ast$-algebras, cf.~Example \ref{ex:onepointXalg}. Then a $\ast$-homo\-morphism $\phi \colon A \to B$ is full if and only if it is $\{\star\}$-full. \end{example} \begin{example} Let $(A, \Phi_A)$ be an $X$-$C^\ast$-algebra. Then $(A, \Phi_A)$ is tight if and only if $(A,\Phi_A)$ is lower semicontinuous, $\Phi_A$ is injective and $\id_A$ is $X$-full. \end{example} \begin{example} Let $\phi \colon A \to B$ be a c.p.~map, let $\I_A \colon \mathcal O(\Prim A) \to \mathcal I(A)$ be the canonical tight action, and let $\Phi_B := \mathcal I(\phi) \circ \I_A$. Then $(A, \I_A)$ and $(B, \Phi_B)$ are $\Prim A$-$C^\ast$-algebras, and $\phi$ is $\Prim A$-full. Similarly, if $\phi$ is a $\ast$-homomorphism, one can let $\I_B \colon \mathcal O(\Prim B) \to \mathcal I(B)$ be the canonical tight action, and $\Phi_A := \phi^{-1} \circ \I_B$. Then $(A, \Phi_A)$ and $(B, \I_B)$ are $\Prim B$-$C^\ast$-algebras and $\phi$ is $\Prim B$-full. \end{example} While the following is not needed in this paper, it shows that there often exist $X$-full c.p.~maps. It is an immediate consequence of Lemma \ref{l:Cuaction}$(b)$ and \cite[Proposition 5.5]{Gabe-O2class}. Results of this form were one of the key ingredients in \cite{Gabe-cplifting}. \begin{proposition} Let $A$ and $B$ be separable $X$-$C^\ast$-algebras for which $A$ is lower semicontinuous, and $B$ is nuclear and upper semicontinuous. Then there exists an $X$-full c.p.~map $A\to B$. \end{proposition} The following is an immediate consequence of Theorem \ref{t:approxdom} and Lemma \ref{l:Cuaction} $(a)$. \begin{corollary}\label{c:Xfulldom} Let $A$ and $B$ be $X$-$C^\ast$-algebras with $A$ exact and lower semicontinuous, suppose that $\phi \colon A \to B$ is an $X$-full, nuclear $\ast$-homo\-morphism, and that $\rho\colon A \to B$ is an $X$-equivariant, nuclear c.p.~map. 
Then $\phi$ approximately dominates $\rho$. \end{corollary} \subsection{Tensor products} Let $X$ be a topological space, let $A$ be an $X$-$C^\ast$-algebra, and let $D$ be a $C^\ast$-algebra. The tensor products $A \otimes D$ (spatial tensor product) and $A \otimes_{\max{}} D$ (maximal tensor product) have canonical actions of $X$ given by \begin{equation} (A\otimes D)(U) = A(U) \otimes D, \qquad (A \otimes_{\max{}} D) (U) = A(U) \otimes_{\max{}} D \end{equation} for $U\in \mathcal O(X)$. \begin{remark} Unless otherwise stated, if $A$ is an $X$-$C^\ast$-algebra and $D$ is a $C^\ast$-algebra, then $A\otimes D$ and $A\otimes_{\max{}} D$ are implicitly assumed to be $X$-$C^\ast$-algebras equipped with the action of $X$ defined above. \end{remark} If $B$ is an $X$-$C^\ast$-algebra and $Y$ is a locally compact, Hausdorff space, then $C_0(Y,B)$ is an $X$-$C^\ast$-algebra via the canonical identification $C_0(Y, B) \cong C_0(Y) \otimes B$. \begin{remark}[Homotopies] The above construction allows one to consider \emph{homotopies} of $X$-equivariant $\ast$-homomorphisms: Two $X$-equivariant $\ast$-homomorphisms $\phi_0,\phi_1 \colon A \to B$ are \emph{homotopic} if (by definition) there is an $X$-equivariant $\ast$-homo\-morphism $\Phi \colon A \to C([0,1], B)$ such that $\phi_0 = \ev_0 \circ \Phi$ and $\phi_1 = \ev_1 \circ \Phi$. In this sense, the cone $C_0((0,1], B)$ of an $X$-$C^\ast$-algebra $B$ is always contractible (i.e.~homotopic to zero) as an $X$-$C^\ast$-algebra. However, there might be other actions of $X$ on $C_0((0,1], B)$ such that the corresponding $X$-$C^\ast$-algebra is not contractible. For instance, $C_0((0,1])$ with the canonical tight action of $(0,1]$ is not contractible as $(0,1]$-$C^\ast$-algebras. \end{remark} \subsection{$X$-nuclear maps} Just as in the classical case of absorption from Section \ref{s:absrep}, one needs nuclearity in order to properly study absorption. In this subsection the focus is on this ideal-related version of nuclearity. 
\begin{definition}\label{d:Xnucabs} Let $A$ and $B$ be $X$-$C^\ast$-algebras. A c.p.~map $\eta\colon A \to B$ is called \emph{$X$-nuclear} (or $X$-residually nuclear, or $\Phi_A$-$\Phi_B$-residually nuclear) if $\eta$ is $X$-equivariant and the induced map \begin{equation} [\eta]_U \colon A/A(U) \to B/B(U) \end{equation} is nuclear for each $U\in \mathcal O(X)$. \end{definition} As quotients of nuclear $C^\ast$-algebras are nuclear \cite[Corollary 9.4.4]{BrownOzawa-book-approx}, it follows that if $A$ or $B$ is a nuclear $X$-$C^\ast$-algebra then any $X$-equivariant map $\phi \colon A \to B$ is $X$-nuclear. In the case where $A$ is exact, a map $\phi \colon A \to B$ being $X$-nuclear is equivalent (up to minor assumptions on the actions) to being nuclear and $X$-equivariant, as witnessed by the following lemma. \begin{lemma}\label{l:nucquotient} Let $A$ and $B$ be $X$-$C^\ast$-algebras for which $A$ is exact, and suppose that $B(\emptyset) =0$. Then a c.p.~map $\eta \colon A \to B$ is $X$-nuclear if and only if it is nuclear and $X$-equivariant. \end{lemma} \begin{proof} If $\eta$ is $X$-nuclear it is $X$-equivariant by definition. Moreover, as $\eta(A(\emptyset)) \subseteq B(\emptyset) = 0$, it follows that $\eta$ is the composition $A \to A/A(\emptyset) \xrightarrow{[\eta]_\emptyset} B/B(\emptyset) = B$. As $[\eta]_\emptyset$ is nuclear, so is $\eta$. For the converse, suppose that $\eta$ is nuclear and $X$-equivariant, and let $U\in \mathcal O(X)$. The composition $A \xrightarrow \eta B \twoheadrightarrow B/B(U)$ is nuclear by nuclearity of $\eta$, and $A(U)$ is contained in its kernel since $\eta$ is $X$-equivariant. Hence by \cite[Proposition 3.2]{Dadarlat-qdmorphisms}\footnote{Which uses the deep result that exact $C^\ast$-algebras are locally reflexive, and that local reflexivity passes to quotients, see \cite[Section 9]{BrownOzawa-book-approx}.} it follows that the induced map $A/A(U) \to B/B(U)$ is nuclear. So $\eta$ is $X$-nuclear. 
\end{proof} The following shows that the criterion $B(\emptyset) = 0$ in the above lemma is always satisfied in the cases we are interested in. \begin{remark}\label{r:Bempty} Let $(A,\Phi_A)$ and $(B,\Phi_B)$ be $X$-$C^\ast$-algebras with $(A,\Phi_A)$ lower semicontinuous and suppose that there exists an $X$-full c.p.~map $\phi \colon A \to B$. Then $\Phi_B(\emptyset) = 0$. In fact, let $\Psi_A\colon \mathcal I(A) \to \mathcal O(X)$ be the dual action of $\Phi_A$. As $0 \subseteq \Phi_A(\emptyset)$ it follows from \eqref{eq:dualGalois} that $\Psi_A(0) = \emptyset$. By $X$-fullness one has $\mathcal I(\phi) = \Phi_B \circ \Psi_A$, so \begin{equation} \Phi_B(\emptyset) = \Phi_B(\Psi_A(0)) = \mathcal I(\phi)(0) = 0. \end{equation} \end{remark} The following lemma will be useful. \begin{lemma}\label{l:XnucC(Y)} Let $A$ and $B$ be $X$-$C^\ast$-algebras, and let $Y$ be a locally compact, Hausdorff space. Then a c.p.~map $\rho \colon A \to C_0(Y, B)$ is $X$-equivariant (respectively $X$-nuclear) if and only if $\ev_y \circ \rho \colon A \to B$ is $X$-equivariant (respectively $X$-nuclear) for every $y\in Y$. \end{lemma} \begin{proof} We first show that a map $\rho \colon A \to C_0(Y, B)$ is nuclear if and only if $\ev_y \circ \rho$ is nuclear for every $y\in Y$. If $\rho$ is nuclear then clearly $\ev_y \circ \rho$ is nuclear for every $y\in Y$ (see Observation \ref{o:nuccomp}). To prove the converse, suppose that each $\ev_y \circ \rho$ is nuclear. We will use the tensor product characterisation of nuclear maps, Proposition \ref{p:nuctensor}, to prove that $\rho$ is nuclear, so fix a $C^\ast$-algebra $C$ and an element $d\in A \otimes_{\max{}} C$ which vanishes in $A\otimes C$. We should check that $(\rho \otimes_{\max{}} \id_C)(d) =0$ in $C_0(Y, B) \otimes_{\max{}} C$. By identifying $C_0(Y, B)\otimes_{\max{}} C$ and $C_0(Y, B\otimes_{\max{}} C)$ in the canonical way, it suffices to check that $\ev_y((\rho \otimes_{\max{}} \id_C)(d)) =0$ for every $y\in Y$.
However, we have \begin{equation} \ev_y((\rho \otimes_{\max{}} \id_C)(d)) = ((\ev_y \circ \rho) \otimes_{\max{}} \id_C)(d) =0 \end{equation} for all $y\in Y$ since $\ev_y \circ \rho$ is nuclear (Proposition \ref{p:nuctensor}), and hence $\rho$ is nuclear. Now, the $X$-equivariant case is obvious, and the $X$-nuclear case is a consequence of what was proved above. \end{proof} \begin{remark} Given Lemma \ref{l:nucquotient}, the reader may be wondering why $X$-nuclearity is not simply defined as an $X$-equivariant map which is nuclear, since this is equivalent when the domain is exact, which is all we will eventually care about. The reason is that I do not believe that this is the ``correct'' definition when the domain is non-exact. In fact, the role of nuclearity and $X$-nuclearity is essentially about determining approximate domination of maps, cf.~Proposition \ref{p:fulldom} and Theorem \ref{t:approxdom}. While Theorem \ref{t:approxdom} determines approximate domination of nuclear maps with exact domain, Proposition \ref{p:fulldom} has the advantage that any (not necessarily nuclear) full $\ast$-homomorphism out of a (not necessarily exact) $C^\ast$-algebra approximately dominates any nuclear map. The same holds in the $X$-equivariant case: any $X$-full $\ast$-homomorphism approximately dominates any $X$-nuclear map. However, my current proof of this is quite involved and I have therefore chosen not to include it in the paper as it is not needed. \end{remark} \section{Absorbing representations revisited} In this section the theory of absorbing representations from Section \ref{s:absrep} will be generalised to $X$-$C^\ast$-algebras. It will be convenient to work with Hilbert $C^\ast$-modules instead of just $C^\ast$-algebras and their multiplier algebras, as Hilbert modules play an important role in $KK$-theory.
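For later use, recall two standard examples of Hilbert modules, both treated in \cite{Lance-book-Hilbertmodules}: any $C^\ast$-algebra $B$ is a Hilbert $B$-module with inner product
\begin{equation}
\langle b_1 , b_2 \rangle_B = b_1^\ast b_2, \qquad b_1, b_2 \in B,
\end{equation}
in which case the adjointable operators form the multiplier algebra $\multialg{B}$ and the ``compact'' operators recover $B$ itself. Similarly, $\ell^2(\mathbb N) \otimes B$ is a Hilbert $B$-module with the coordinate-wise structure, in which case the ``compact'' operators form $B \otimes \mathcal K(\ell^2(\mathbb N))$ and the adjointable operators form $\multialg{B \otimes \mathcal K(\ell^2(\mathbb N))}$. Both identifications will be used below.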
Throughout this section all Hilbert modules are assumed to be \emph{right} Hilbert modules, and the inner product is conjugate linear in the first variable and linear in the second variable, in contrast to what is customary for Hilbert spaces. I refer the reader to \cite{Lance-book-Hilbertmodules} for the basics of Hilbert modules. For a Hilbert $B$-module $E$, $\mathcal B_B(E)$ denotes the $C^\ast$-algebra of adjointable operators on $E$, and $\mathcal K_B(E)$ denotes the ``compact'' operators on $E$, i.e.~the closed linear span of all rank one operators. Often $\mathcal B(E)$ and $\mathcal K(E)$ will be written instead. \begin{definition}\label{d:weaklyXnuc} Let $A$ and $B$ be $X$-$C^\ast$-algebras, and let $E$ be a right Hilbert $B$-module. A $\ast$-homomorphism $\phi \colon A \to \mathcal B(E)$ is called \emph{weakly $X$-equivariant} (respectively \emph{weakly $X$-nuclear}) if the c.p.~map \begin{equation}\label{eq:weaknucHilbert} A \ni a \mapsto \langle \xi, \phi(a) \xi \rangle_E \in B \end{equation} is $X$-equivariant (respectively $X$-nuclear) for every $\xi\in E$. \end{definition} \begin{remark} If $E = B$ then one has $\mathcal B(E) = \multialg{B}$. Hence a $\ast$-homomorphism $\phi \colon A \to \multialg{B}$ is weakly $X$-equivariant (respectively weakly $X$-nuclear) if and only if $b^\ast \phi(-) b \colon A \to B$ is $X$-equivariant (respectively $X$-nuclear) for all $b\in B$. \end{remark} \begin{remark} A $\ast$-homomorphism $\phi \colon A \to \mathcal B(E)$ is weakly $X$-equivariant if and only if $\phi(A(U)) E \subseteq E B(U)$ for all $U\in \mathcal O(X)$. Hence the above definition coincides with what Kirchberg calls ``$X$-equivariant'' in \cite[Definition 4.1]{Kirchberg-non-simple}. The ``if'' is trivial, and for ``only if'' suppose $\phi$ is weakly $X$-equivariant. Let $a\in A(U)$ and $\xi \in E$. 
By $X$-equivariance of $\langle\xi, \phi(-)\xi\rangle_E$, \begin{equation} \langle \phi(a) \xi , \phi(a) \xi\rangle_E = \langle \xi, \phi(a^\ast a)\xi\rangle_E \in B(U). \end{equation} Let $(e_\lambda)$ be an approximate identity in $B(U)$. Then \begin{equation} \| \phi(a) \xi - \phi(a)\xi e_{\lambda}\|^2_E = \| (1_{\widetilde B} - e_\lambda) \langle \phi(a) \xi, \phi(a) \xi\rangle_E (1_{\widetilde B} - e_\lambda) \| \to 0 \end{equation} so $\phi(a)\xi = \lim_{\lambda} \phi(a) \xi e_\lambda \in \overline{EB(U)}$. By Cohen's factorisation theorem (see \cite[Theorem 4.6.4]{BrownOzawa-book-approx}), $\overline{EB(U)} = EB(U)$. \end{remark} \begin{remark}\label{r:Skandalis} In \cite{Skandalis-KKnuc}, Skandalis considers $\ast$-homo\-morphisms $\phi \colon A \to \mathcal B(E)$ such that \begin{equation}\label{eq:weaknucHilbertS} A \ni a \mapsto (\langle \xi_i, \phi(a) \xi_j\rangle_E)_{i,j=1}^n \in M_n(B) \end{equation} is nuclear for all $n\in \mathbb N$ and $\xi_1,\dots, \xi_n \in E$. By Lemma \ref{l:nucmatrix} below, this is equivalent to the map \eqref{eq:weaknucHilbert} being nuclear for all $\xi\in E$, i.e.~one only needs to verify the case $n=1$. So Skandalis' definition is a special case of Definition \ref{d:weaklyXnuc} in the case where $X= \{\star\}$ is a one-point space, and $A$ and $B$ are continuous $\{\star\}$-$C^\ast$-algebras, see Example \ref{ex:onepointXalg}. This implies that $KK_\nuc(\{\star\}; A, B) = KK_\nuc(A,B)$ ($KK_\nuc(X)$ is defined in the following section). \end{remark} \begin{lemma}\label{l:nucmatrix} Let $\rho = (\rho_{i,j})_{i,j=1}^n \colon A \to M_n(B)$ be a c.p.~map. Then $\rho$ is nuclear if and only if $\rho_{i,i}\colon A \to B$ is nuclear for each $i=1,\dots, n$. \end{lemma} \begin{proof} ``Only if'' is obvious. To prove ``if'', assume $\rho_{i,i}$ is nuclear for $i=1,\dots, n$.
To show that $\rho$ is nuclear we will use the tensor product characterisation of nuclear maps, Proposition \ref{p:nuctensor}, so let a $C^\ast$-algebra $C$ be given, and $x\in A\otimes_{\max{}} C$ be a positive element vanishing in $A \otimes C$. We should show that $(\rho \otimes_{\max{}} \id_C)(x) =0$ in $M_n(B) \otimes_{\max{}} C = M_n(B \otimes_{\max{}} C)$. As $(\rho \otimes_{\max{}} \id_C)(x)$ is positive, it is zero if all its diagonal entries are zero (when considered as an $n\times n$-matrix). These entries are exactly $(\rho_{i,i} \otimes_{\max{}} \id_C)(x)$ for $i=1,\dots, n$ and these vanish as each $\rho_{i,i}$ is nuclear (use Proposition \ref{p:nuctensor} again). Hence $\rho$ is nuclear. \end{proof} A useful tool for showing that a representation $\phi \colon A \to \mathcal B(E)$ is weakly $X$-nuclear will be Lemma \ref{l:densespan}, which shows that one only has to check that \eqref{eq:weaknucHilbert} is $X$-nuclear for $\xi$ that span a dense subset of $E$. This will play an important role when showing that the Kasparov product is well-defined in the $X$-equivariant and $X$-nuclear setting. To prove this, the following lemma is needed. \begin{lemma}\label{l:nucsum} Let $\phi, \psi \colon A \to B$ be c.p.~maps. If $\phi + \psi$ is nuclear, then $\phi$ and $\psi$ are nuclear. In particular, if $\phi+\psi$ is $X$-equivariant (respectively $X$-nuclear), then $\phi$ and $\psi$ are $X$-equivariant (respectively $X$-nuclear). \end{lemma} \begin{proof} We use the tensor product characterisation of nuclear maps, Proposition \ref{p:nuctensor}, so let $D$ be any $C^\ast$-algebra, and $y \in A \otimes_{\max{}} D$ be a positive element such that $y$ vanishes in $A \otimes D$. Nuclearity of $\phi + \psi$ entails $((\phi + \psi)\otimes_{\max{}} \id_D) (y) =0$, so \begin{equation} 0 \leq (\phi \otimes_{\max{}} \id_D) (y) \leq (\phi \otimes_{\max{}} \id_D) (y) + ( \psi \otimes_{\max{}} \id_D) (y) = ((\phi + \psi)\otimes_{\max{}} \id_D )(y) = 0, \end{equation} so $(\phi \otimes_{\max{}} \id_D)(y)=0$.
Hence $\phi$, and similarly $\psi$, is nuclear. ``In particular'': If $\phi+ \psi$ is $X$-equivariant, let $U\in \mathcal O(X)$, and $a\in A(U)$ be positive. We have $0 \leq \phi(a) \leq \phi(a) + \psi(a) \in B(U)$, and since $B(U)$ is hereditary, $\phi(a) \in B(U)$. Hence $\phi$, and similarly $\psi$, is $X$-equivariant. If $\phi + \psi$ is $X$-nuclear, then $[\phi + \psi]_U = [\phi]_U + [\psi]_U$ is nuclear for each $U\in \mathcal O(X)$. Hence $[\phi]_U$ and $[\psi]_U$ are nuclear by what we proved above, so $\phi$ and $\psi$ are $X$-nuclear. \end{proof} \begin{notation}\label{n:Xcone} Recall that $\CP(A,B)$ denotes the convex cone of completely positive maps from $A$ to $B$ equipped with the point-norm topology. If $A$ and $B$ are $X$-$C^\ast$-algebras we define \begin{equation} \CP(X; A, B) := \{ \phi \in \CP(A,B) : \phi \textrm{ is $X$-equivariant}\} \end{equation} and similarly \begin{equation} \CP_\nuc(X;A, B) := \{ \phi \in \CP(A,B) : \phi \textrm{ is $X$-nuclear}\}. \end{equation} These are both easily seen to be point-norm closed subcones of $\CP(A,B)$. \end{notation} The following observation will help simplify some proofs, as one will only have to prove things once. \begin{observation} A $\ast$-homomorphism $\phi \colon A \to \mathcal B(E)$ is weakly $X$-equivariant (respectively weakly $X$-nuclear) exactly when $\langle \xi , \phi(-) \xi \rangle_E \in \mathscr C$ for all $\xi \in E$, where $\mathscr C = \CP(X; A, B)$ (respectively $\mathscr C = \CP_\nuc(X; A, B)$). \end{observation} \begin{remark}\label{r:Xnuccone} Lemma \ref{l:nucsum} means that $\CP(X;A ,B)$ and $\CP_\nuc(X; A, B)$ are \emph{hereditary} in the sense that if $\phi, \psi \in \CP(A,B)$ satisfy $\phi + \psi \in \mathscr C$, then $\phi, \psi \in \mathscr C$, where $\mathscr C$ denotes either $\CP(X; A, B)$ or $\CP_\nuc(X; A, B)$.\footnote{$\CP(X; A, B)$ and $\CP_\nuc(X;A,B)$ are point-norm closed, operator convex cones in the sense of \cite[Definition 4.1]{KirchbergRordam-zero}.
All results and constructions in this -- as well as the following -- section have analogues where $\mathscr C \subseteq \CP(A,B)$ is such a point-norm closed, operator convex cone. A crucial and non-trivial fact is that such cones are hereditary, which can be proved by the same methods as the proof of \cite[Proposition 4.2]{KirchbergRordam-zero}, but as this will not be needed the details are omitted.} \end{remark} That the cones $\CP(X;A,B)$ and $\CP_\nuc(X;A,B)$ are hereditary is the crucial ingredient which implies the following important lemma. \begin{lemma}\label{l:densespan} Let $\phi \colon A \to \mathcal B(E)$ be a $\ast$-homomorphism, and let $S \subseteq E$ be a subset such that $\overline{\mathrm{span}}\, S = E$. Then $\phi$ is weakly $X$-equivariant (respectively weakly $X$-nuclear) if and only if $\langle \xi, \phi(-) \xi \rangle_E$ is $X$-equivariant (respectively $X$-nuclear) for all $\xi\in S$. \end{lemma} \begin{proof} ``Only if'' is trivial, so we only prove ``if''. Let $\mathscr C$ denote either $\CP(X; A, B)$ or $\CP_\nuc(X; A, B)$, depending on whether we are proving the $X$-equivariant or the $X$-nuclear case. Suppose that $\xi_1, \xi_2\in E$ are such that $\langle \xi_i , \phi(-) \xi_i \rangle_E\in \mathscr C$. Then \begin{equation} 2 \langle \xi_1 , \phi(-) \xi_1\rangle_E + 2\langle \xi_2 , \phi(-) \xi_2 \rangle_E \in \mathscr C \end{equation} since $\mathscr C$ is a convex cone. As \begin{eqnarray} && \langle \xi_1 + \xi_2, \phi(-) (\xi_1 + \xi_2) \rangle_E + \langle \xi_1 - \xi_2 , \phi(-) (\xi_1 - \xi_2) \rangle_E \nonumber\\ &=& 2 \langle \xi_1 , \phi(-) \xi_1\rangle_E + 2\langle \xi_2 , \phi(-) \xi_2 \rangle_E, \end{eqnarray} it follows that $\langle \xi_1 + \xi_2, \phi(-) (\xi_1 + \xi_2) \rangle_E \in \mathscr C$ since $\mathscr C$ is hereditary by Remark \ref{r:Xnuccone}.
In particular, if $\lambda \in \mathbb C$ and $\xi \in S$, then \begin{equation} \langle \lambda \xi, \phi(-) \lambda \xi\rangle_E = |\lambda|^2 \langle \xi, \phi(-) \xi\rangle_E \in \mathscr C, \end{equation} so by iterating the above result, it follows that \begin{equation} \left\langle \sum_{i=1}^n \lambda_i \xi_i , \phi(-) \sum_{j=1}^n \lambda_j \xi_j \right\rangle_E \in \mathscr C, \end{equation} for $n\in \mathbb N$, $\lambda_1, \dots,\lambda_n \in \mathbb C$ and $\xi_1, \dots, \xi_n \in S$. This shows that $\langle \xi, \phi(-) \xi\rangle_E \in \mathscr C$ for all $\xi \in \mathrm{span}\, S$. Let $\xi \in E$ and $(\xi_n)_{n\in \mathbb N}$ be a sequence in $\mathrm{span}\, S$ converging to $\xi$. Then \begin{eqnarray} && \| \langle \xi_n , \phi(a) \xi_n \rangle_E - \langle \xi, \phi(a) \xi \rangle_E \| \nonumber\\ &\leq & \| \langle \xi_n - \xi, \phi(a) \xi_n \rangle_E\| + \|\langle \xi, \phi(a) (\xi_n - \xi) \rangle_E \| \nonumber\\ &\leq& \| \xi_n - \xi\| \| \xi_n\| \| a \| + \| \xi_n - \xi\| \| \xi\| \| a \| \nonumber\\ &\to & 0 \end{eqnarray} for all $a\in A$. So the c.p.~maps $\langle \xi_n , \phi(-) \xi_n \rangle_E \in \mathscr C$ converge point-norm to $\langle \xi, \phi(-) \xi \rangle_E$. As $\mathscr C$ is point-norm closed, it follows that $\langle \xi, \phi(-) \xi \rangle_E \in \mathscr C$. \end{proof} The following is an immediate consequence. \begin{corollary}\label{c:dirsumrep} If $\gamma_n \colon A \to \mathcal B(E_n)$ are $\ast$-homomorphisms which are weakly $X$-equivariant (respectively weakly $X$-nuclear) for $n\in \mathbb N$, then the diagonal sum \begin{equation} \gamma \colon A \to \mathcal B( \bigoplus_{n\in \mathbb N} E_n) , \qquad \gamma(a) ((\xi_n)_{n\in \mathbb N}) = (\gamma_n(a)\xi_n)_{n\in \mathbb N} \end{equation} is weakly $X$-equivariant (respectively weakly $X$-nuclear). \end{corollary} Recall that $\mathcal K := \mathcal K(\ell^2(\mathbb N))$. The following additional corollary will be recorded for later use. 
In the following, $\multialg{B\otimes \mathcal K}$ and $\mathcal B_B(\ell^2(\mathbb N) \otimes B)$ will be identified in the canonical way. \begin{corollary}\label{c:densespan} Let $A$ and $B$ be $X$-$C^\ast$-algebras, and let \begin{equation} \phi \colon A\to \multialg{B\otimes \mathcal K} = \mathcal B_B(\ell^2(\mathbb N) \otimes B) \end{equation} be a $\ast$-homomorphism. Then $\phi$ is weakly $X$-equivariant in the multiplier algebra sense (i.e.~$x^\ast \phi(-) x$ is $X$-equivariant for all $x\in B\otimes \mathcal K$) if and only if $\phi$ is weakly $X$-equivariant in the Hilbert module sense (i.e.~the c.p.~map $\langle \xi, \phi(-) \xi\rangle_{\ell^2(\mathbb N) \otimes B}$ is $X$-equivariant for every $\xi \in \ell^2(\mathbb N) \otimes B$). The same is true if one replaces ``$X$-equivariant'' with ``$X$-nuclear''. \end{corollary} At first sight, the conclusion of the above result may seem obvious. The main difficulty is that $x^\ast \phi(-) x$ takes values in $B\otimes \mathcal K$ while $\langle \xi, \phi(-) \xi\rangle_{\ell^2(\mathbb N) \otimes B}$ takes values in $B$. The result will be important when considering the Cuntz pair picture of $KK$-theory. \begin{proof} We only do the $X$-nuclear case. By applying Lemma \ref{l:densespan} to $B\otimes \mathcal K$ as a Hilbert $(B \otimes \mathcal K)$-module, and the subset $S = \{ b \otimes e_{i,j} : b\in B, i, j \in \mathbb N\}$, it follows that $\phi$ is weakly $X$-nuclear in the multiplier algebra sense if and only if $(b^\ast\otimes e_{j,i}) \phi(-) (b\otimes e_{i,j})$ is $X$-nuclear for all $b\in B$ and $i,j\in \mathbb N$.
As \begin{equation} (b^\ast\otimes e_{j,i}) \phi(-) (b\otimes e_{i,j}) = (1_{\multialg{B}} \otimes e_{j,i}) (b^\ast \otimes e_{i,i}) \phi(-) (b \otimes e_{i,i})(1_{\multialg{B}} \otimes e_{i,j}), \end{equation} it easily follows that this in turn is equivalent to \begin{equation} (b^\ast \otimes e_{i,i}) \phi(-) (b\otimes e_{i,i}) = \langle (\delta_i\otimes b) , \phi(-) (\delta_i\otimes b)\rangle_{\ell^2(\mathbb N)\otimes B} \end{equation} being $X$-nuclear for all $b\in B$ and $i\in \mathbb N$, where $\delta_i \in \ell^2(\mathbb N)$ is the characteristic function on $\{i\} \subseteq \mathbb N$. By Corollary \ref{c:dirsumrep} (after identifying $\ell^2(\mathbb N)\otimes B$ and $\bigoplus_{i\in \mathbb N} B$ canonically), this is equivalent to $\phi$ being weakly $X$-nuclear in the Hilbert module sense. \end{proof} \begin{definition} Let $A$ be a separable $X$-$C^\ast$-algebra, and let $B$ be a $\sigma$-unital, stable $X$-$C^\ast$-algebra. A $\ast$-homomorphism $\phi \colon A \to \multialg{B}$ is called \emph{$X$-nuclearly absorbing} if it absorbs (see Definition \ref{d:abs}) any weakly $X$-nuclear $\ast$-homomorphism $A \to \multialg{B}$. \end{definition} The following theorem will be used in the same way as Theorem \ref{t:fullnucabs} was used in the case of full maps. \begin{theorem}\label{t:infrepXabs} Let $X$ be a topological space, and let $A$ and $B$ be $X$-$C^\ast$-algebras for which $A$ is separable, exact, and lower semicontinuous, and $B$ is $\sigma$-unital and stable. Suppose that $\theta \colon A \to B$ is an $X$-full, nuclear $\ast$-homomorphism. Then any infinite repeat $\theta_\infty \colon A \to \multialg{B}$ of $\theta$ is weakly $X$-nuclear and $X$-nuclearly absorbing. \end{theorem} \begin{proof} By Remark \ref{r:Bempty} one has $B(\emptyset) = 0$. Hence Lemma \ref{l:nucquotient} implies that a c.p.~map $\eta \colon A \to B$ is $X$-nuclear if and only if it is $X$-equivariant and nuclear. 
Thus $\theta_\infty$ is weakly $X$-nuclear by Corollary \ref{c:dirsumrep} and Remark \ref{r:infrep}. It remains to show that $\theta_\infty$ is $X$-nuclearly absorbing, so fix a weakly $X$-nuclear $\ast$-homomorphism $\psi \colon A \to \multialg{B}$. We wish to show that $\theta_\infty$ absorbs $\psi$. By Proposition \ref{p:absnonunital} it suffices to show that $\theta$ approximately dominates $b^\ast \psi(-) b \colon A \to B$ for every $b\in B$. Fix such $b\in B$. As $\psi$ is weakly $X$-nuclear the map $b^\ast \psi(-) b$ is $X$-nuclear and thus nuclear and $X$-equivariant by Lemma \ref{l:nucquotient}. As $\theta$ is $X$-full it follows from Lemma \ref{l:Cuaction}$(a)$ that $\mathcal I(b^\ast \psi(-) b) \leq \mathcal I(\theta)$. Finally, as both $\theta$ and $b^\ast \psi(-) b$ are nuclear, it follows from Theorem \ref{t:approxdom} that $\theta$ approximately dominates $b^\ast \psi(-) b$, thus finishing the proof. \end{proof} \section{Ideal-related $KK$-theory}\label{s:KK} In this section, Kirchberg's ideal-related $KK$-theory for $X$-$C^\ast$-algebras from \cite{Kirchberg-non-simple} will be constructed from the bottom up. This will be done both in an $X$-equivariant version, which generalises constructions such as Kasparov's $\mathscr RKK(X;A,B)$ \cite{Kasparov-eqKKNovikov} for $C(X)$-algebras, and in an $X$-nuclear version, which generalises Skandalis' $KK_\nuc(A,B)$ \cite{Skandalis-KKnuc}. The reader is expected to be familiar with the basics of $KK$-theory, see \cite{Blackadar-book-K-theory} and \cite{JensenThomsen-book-KK-theory}. The pictures of $KK$-theory that will be used for classification -- the Cuntz pair picture and the Fredholm picture -- are considered in detail in Sections \ref{ss:CuntzPairs} and \ref{ss:Fredholm} respectively. \subsection{The basics} All Hilbert $C^\ast$-modules are \emph{right} Hilbert $C^\ast$-modules.
Recall that a \emph{($\mathbb Z/2\mathbb Z$-)graded $C^\ast$-algebra} is a $C^\ast$-algebra $A$ together with an automorphism $\beta_A$ such that $\beta_A \circ \beta_A = \id_A$. A graded $C^\ast$-algebra $A$ is \emph{trivially graded} if $\beta_A = \id_A$. A c.p.~map $\phi \colon A \to B$ between graded $C^\ast$-algebras is \emph{graded} if $\phi \circ \beta_A = \beta_B \circ \phi$. A (two-sided, closed) ideal $I$ in a graded $C^\ast$-algebra $A$ is called \emph{graded} if $\beta_A(I) = I$. For graded $C^\ast$-algebras $A$ and $B$, a \emph{Kasparov $A$-$B$-module} is a triple $(E,\phi, F)$ where $E$ is a countably generated, graded Hilbert $B$-module, \begin{equation} \phi \colon A \to \mathcal B(E) \end{equation} is a graded $\ast$-homomorphism, and $F\in \mathcal B(E)$ is an element of odd degree, such that \begin{equation} [F, \phi(A)] \subseteq \mathcal K(E),\footnote{Recall that one uses \emph{graded} commutators, so that $[a,b] = ab - (-1)^{\partial a \cdot \partial b} ba$ for homogeneous elements $a,b$.} \quad (F^2 - 1_{\mathcal B(E)})\phi(A) \subseteq \mathcal K(E), \quad (F-F^\ast) \phi(A) \subseteq \mathcal K(E). \end{equation} A Kasparov module $(E,\phi, F)$ is \emph{degenerate} if \begin{equation} [F, \phi(A)]=\{0\} , \quad (F^2 - 1_{\mathcal B(E)})\phi(A) = \{0\} , \quad (F-F^\ast) \phi(A) =\{0\}. \end{equation} \begin{definition} Let $X$ be a topological space. A \emph{graded $X$-$C^\ast$-algebra} is a graded $C^\ast$-algebra $A$ with an action of $X$, such that $A(U)$ is a graded ideal for every $U\in \mathcal O(X)$. \end{definition} \emph{For the rest of this section, let $X$ be a topological space, and let $A$, $B$, and $C$ be graded $X$-$C^\ast$-algebras.} \begin{definition} Let $A$ and $B$ be graded $X$-$C^\ast$-algebras. A Kasparov $A$-$B$-module $(E, \phi , F)$ is said to be \emph{$X$-equivariant} (respectively \emph{$X$-nuclear}) if $\phi$ is weakly $X$-equivariant (respectively weakly $X$-nuclear), see Definition \ref{d:weaklyXnuc}. 
\end{definition} \begin{remark}[Direct sums] By Corollary \ref{c:dirsumrep} it follows that (finite) direct sums of $X$-equivariant (respectively $X$-nuclear) Kasparov modules are again $X$-equivariant (respectively $X$-nuclear). Also, it follows that (countable) infinite direct sums of degenerate $X$-equivariant (respectively $X$-nuclear) Kasparov modules are again (necessarily degenerate) $X$-equivariant (respectively $X$-nuclear) Kasparov modules. \end{remark} \begin{notation} Given a graded $X$-$C^\ast$-algebra $B$, let $IB:= C([0,1], B)$ be the induced graded $X$-$C^\ast$-algebra with grading automorphism $\beta_{IB}(f)(t) = \beta_B(f(t))$ for $f\in IB$ and $t\in [0,1]$, and action given by \begin{equation} IB(U) = C([0,1], B(U)) \qquad \textrm{for } U\in \mathcal O(X). \end{equation} For $t\in [0,1]$, let $\ev_t \colon IB \to B$ denote the evaluation map at $t$. \end{notation} For any graded Hilbert $IB$-module $E$ and any $t\in [0,1]$, there is an induced graded Hilbert $B$-module $E_t$ given by completing the pre-Hilbert module \begin{equation} E/\{ \xi \in E : \ev_t(\langle \xi, \xi \rangle) = 0\}, \end{equation} with inner product given by $\langle [\xi] , [\eta] \rangle_{E_t} = \ev_t(\langle \xi , \eta\rangle_E)$. There is an isomorphism $E\hat \otimes_{\ev_t} B \xrightarrow \cong E_t$ given on elementary tensors by $\xi \hat \otimes b \mapsto [\xi] b$ for $\xi \in E$ and $b\in B$. There is a $\ast$-homomorphism $(\ev_t)_\ast \colon \mathcal B(E) \to \mathcal B(E_t)$, which takes an operator $T\in \mathcal B(E)$ and maps it to the operator $[\xi] \mapsto [T\xi]$ for $\xi \in E$. \begin{lemma}\label{l:weakhtpy} If $E$ is a graded Hilbert $IB$-module, then a $\ast$-homomorphism $\phi \colon A \to \mathcal B(E)$ is weakly $X$-equivariant (respectively weakly $X$-nuclear), if and only if $\phi_t := (\ev_t)_\ast \circ \phi \colon A \to \mathcal B(E_t)$ is weakly $X$-equivariant (respectively weakly $X$-nuclear) for all $t\in [0,1]$. 
\end{lemma} \begin{proof} We do the $X$-nuclear case; the $X$-equivariant case is identical. Suppose that $\phi_t$ is weakly $X$-nuclear for each $t\in [0,1]$. For every $\xi \in E$ we get that \begin{equation} a\mapsto \ev_t(\langle \xi , \phi(a) \xi \rangle_E) = \langle [\xi] , \phi_t(a) [\xi] \rangle_{E_t} \end{equation} is $X$-nuclear for each $t\in [0,1]$. By Lemma \ref{l:XnucC(Y)}, $\langle \xi , \phi(-) \xi \rangle_E$ is $X$-nuclear. Conversely, suppose $\phi$ is weakly $X$-nuclear and fix $t\in [0,1]$. As $\{ [\xi] : \xi \in E\} \subseteq E_t$ is dense, it suffices by Lemma \ref{l:densespan} to show that $\langle [\xi] , \phi_t(-) [\xi] \rangle_{E_t} = \ev_t(\langle \xi , \phi(-) \xi\rangle_E)$ is $X$-nuclear for all $\xi \in E$. This again follows from Lemma \ref{l:XnucC(Y)} since $\phi$ is weakly $X$-nuclear. \end{proof} The usual notions of homotopy, operator homotopy, etc.~also carry over to the $X$-equivariant and $X$-nuclear setting, simply by assuming that all Kasparov modules involved are $X$-equivariant or $X$-nuclear. The details are filled in here. \begin{definition} Let $A$ and $B$ be graded $X$-$C^\ast$-algebras, and let $\mathcal E_0 = (E_0,\phi_0, F_0)$ and $\mathcal E_1 = (E_1,\phi_1,F_1)$ be $X$-equivariant (respectively $X$-nuclear) Kasparov $A$-$B$-modules. \begin{itemize} \item Say that $\mathcal E_0$ and $\mathcal E_1$ are \emph{unitarily equivalent} if there is a unitary $u\in \mathcal B(E_0,E_1)$ of degree 0, such that $u^\ast \phi_1(-) u = \phi_0$ and $u^\ast F_1 u = F_0$. Write $\mathcal E_0 \approx_u \mathcal E_1$ when $\mathcal E_0$ and $\mathcal E_1$ are unitarily equivalent. \item A \emph{homotopy} from $\mathcal E_0$ to $\mathcal E_1$ is an $X$-equivariant (respectively $X$-nuclear) Kasparov $A$-$IB$-module $(E_I, \phi_I, F_I)$, such that \begin{equation} \mathcal E_0 \approx_u ((E_I)_0, (\phi_I)_0, (F_I)_0), \qquad \mathcal E_1 \approx_u ((E_I)_1, (\phi_I)_1, (F_I)_1).
\end{equation} Write $\mathcal E_0 \approx_\h \mathcal E_1$ if there is a homotopy from $\mathcal E_0$ to $\mathcal E_1$. \item Say that $\mathcal E_0$ and $\mathcal E_1$ are \emph{homotopic}, if there are $X$-equivariant (respectively $X$-nuclear) Kasparov modules $\mathcal F_1,\dots, \mathcal F_n$, such that \begin{equation} \mathcal E_0 \approx_\h \mathcal F_1 \approx_\h \dots \approx_\h \mathcal F_n \approx_\h \mathcal E_1. \end{equation} Write $\mathcal E_0 \sim_{\h} \mathcal E_1$ when $\mathcal E_0$ and $\mathcal E_1$ are homotopic. \item Say that $\mathcal E_0$ and $\mathcal E_1$ are \emph{operator homotopic} if $E_0= E_1$, $\phi_0 = \phi_1$ and there is a norm-continuous path $[0,1] \ni t \mapsto G_t$ with $G_0=F_0$, $G_1=F_1$ and $(E_0 , \phi_0 , G_t)$ is a Kasparov module for each $t$. \item Write $\mathcal E_0 \approx_\oh \mathcal E_1$ if there are operator homotopic modules $\mathcal F_0$ and $\mathcal F_1$ such that $\mathcal E_i \approx_u \mathcal F_i$ for $i=0,1$. \item Write $\mathcal E_0 \sim_\oh \mathcal E_1$ if there are $X$-equivariant (respectively $X$-nuclear), degenerate Kasparov $A$-$B$-modules $\mathcal D_0$ and $\mathcal D_1$ such that \begin{equation} \mathcal E_0 \oplus \mathcal D_0 \approx_\oh \mathcal E_1 \oplus \mathcal D_1. \end{equation} \end{itemize} \end{definition} \begin{remark} The notations ``$\approx_\h$'', ``$\sim_\h$'', and ``$\sim_\oh$'' do not contain the information that everything is done in the $X$-equivariant (respectively $X$-nuclear) case. This will always be implied. The relations ``$\approx_u$'', and ``$\approx_\oh$'' do not depend on any $X$-equivariant (respectively $X$-nuclear) structure, as this additional structure is automatically preserved. \end{remark} \begin{remark} One can show that $\approx_\h$ is actually transitive and that therefore $\approx_\h$ and $\sim_\h$ are the same equivalence relation. 
While this is claimed without proof in \cite{Kasparov-KKExt} and \cite{Blackadar-book-K-theory}, I do not know of any reference in the classical case other than the very recent \cite[Proposition A.1]{KumjianPaskSims-gradedKtheory}. The same proof carries over to the $X$-equivariant (respectively $X$-nuclear) setting, but in order to keep this as self-contained as possible, the transitive closure $\sim_\h$ will simply be used instead. \end{remark} The following is a consequence of Lemma \ref{l:weakhtpy}. \begin{corollary}\label{c:deghtpy} Degenerate $X$-equivariant (respectively $X$-nuclear) Kasparov modules are homotopic in the $X$-equivariant (respectively $X$-nuclear) sense to the Kasparov module $(0,0,0)$. \end{corollary} \begin{proof} We only do the $X$-nuclear case. Let $(E,\phi,F)$ be a degenerate $X$-nuclear Kasparov module, and let \begin{equation} (C_0(0,1] \hat \otimes E, 1_{\mathcal B(C_0(0,1])} \hat \otimes \phi, 1_{\mathcal B(C_0(0,1])} \hat \otimes F) \end{equation} be the induced (degenerate) $A$-$IB$-module, which we should prove is $X$-nuclear. By Lemma \ref{l:weakhtpy} it suffices to show that $(1_{\mathcal B(C_0(0,1])} \hat \otimes \phi)_t$ is weakly $X$-nuclear for each $t\in [0,1]$. For $t=0$ one has $(1_{\mathcal B(C_0(0,1])} \hat \otimes \phi)_0 = 0$, which is trivially weakly $X$-nuclear, and for $t\in (0,1]$ one has $(1_{\mathcal B(C_0(0,1])} \hat \otimes \phi)_t = \phi$, which is weakly $X$-nuclear by assumption. Hence $1_{\mathcal B(C_0(0,1])}\hat \otimes \phi$ is weakly $X$-nuclear. \end{proof} \begin{corollary}\label{c:ohstronger} $\sim_\oh$ is a stronger equivalence relation than $\sim_\h$ on $X$-equivariant (respectively $X$-nuclear) Kasparov modules. \end{corollary} \begin{proof} By Lemma \ref{l:weakhtpy}, applied to the obvious homotopy, $\approx_\oh$ is stronger than $\sim_\h$. Hence the result follows from Corollary \ref{c:deghtpy}.
\end{proof} Once it is shown that the Kasparov product is well-defined, it will follow as in the classical case that $\sim_\h$ and $\sim_\oh$ are the same equivalence relation on $X$-equivariant (respectively $X$-nuclear) Kasparov modules, whenever $A$ is separable and $B$ is $\sigma$-unital. \begin{definition} Let $A$ and $B$ be graded $X$-$C^\ast$-algebras. \begin{itemize} \item Let $KK(X; A,B)$ (respectively $KK_\nuc(X; A,B)$) denote the semigroup of $\sim_\h$-equivalence classes of $X$-equivariant (respectively $X$-nuclear) Kasparov $A$-$B$-modules. The semigroup structure comes from direct sums. For an $X$-equivariant (respectively $X$-nuclear) Kasparov $A$-$B$-module $(E,\phi, F)$, write $[E, \phi, F]$ for its induced equivalence class. \item If $\phi \colon A \to B$ is a graded, $X$-equivariant $\ast$-homomorphism, let \begin{equation} KK(X; \phi) := [ B, \phi, 0] \in KK(X; A, B). \end{equation} Similarly, if $\phi \colon A \to B$ is a graded, $X$-nuclear $\ast$-homomorphism, let \begin{equation} KK_\nuc(X; \phi) := [B , \phi , 0] \in KK_\nuc(X; A, B). \end{equation} \end{itemize} \end{definition} \begin{lemma} $KK(X; A,B)$ and $KK_\nuc(X; A, B)$ are abelian groups. \end{lemma} \begin{proof} Only $KK(X; A, B)$ will be shown to be an abelian group; the proof for $KK_\nuc(X; A, B)$ is obtained by replacing the word ``$X$-equivariant'' with ``$X$-nuclear''. $KK(X; A, B)$ is clearly an abelian semigroup, as $\mathcal E \oplus \mathcal E' \approx_u \mathcal E' \oplus \mathcal E$, and a monoid with identity element $[0,0,0]$. If $(E,\phi, F)$ is an $X$-equivariant Kasparov module then $(E^\op, \phi \circ \beta_A, -F)$\footnote{Recall that $E^\op$ is equal to $E$ as a Hilbert module but with the opposite grading, i.e.~if $\xi\in E$ is homogeneous of degree $i$ for $i\in \{0,1\}$, then $\xi$ has degree $1-i$ in $E^\op$.} is also an $X$-equivariant Kasparov module since $A(U) = \beta_A(A(U))$ for all $U\in \mathcal O(X)$.
Thus \begin{equation} (E, \phi, F) \oplus (E^\op, \phi \circ \beta_A, -F) \approx_\oh (E\oplus E^\op , \phi \oplus (\phi \circ \beta_A), \left( \begin{array}{cc} 0 & 1_{E} \\ 1_{E} & 0 \end{array} \right) ), \end{equation} via the operator homotopy induced by $[0,\pi/2] \ni t \mapsto \left( \begin{array}{cc} F \cos t & \sin t \\ \sin t & - F\cos t \end{array} \right)$. The latter $X$-equivariant Kasparov module is degenerate, so $[E^\op , \phi \circ \beta_A, -F]$ is the inverse of $[E, \phi, F]$ by Corollary \ref{c:deghtpy}. \end{proof} \begin{example}[The classical cases] Considering a one-point topological space $X= \{\star\}$, and continuous $\{\star\}$-$C^\ast$-algebras $A$ and $B$, one may simply think of $A$ and $B$ as $C^\ast$-algebras without an action, see Example \ref{ex:onepointXalg}. One clearly has $KK(\{\star\}; A, B) = KK(A,B)$. Moreover, by Remark \ref{r:Skandalis} one has $KK_\nuc(\{\star\}; A, B) = KK_\nuc(A,B)$ as defined by Skandalis in \cite{Skandalis-KKnuc}. \end{example} \begin{example}[$C_0(X)$-algebras] If $X$ is a locally compact Hausdorff space, and $A$ and $B$ are continuous $X$-$C^\ast$-algebras, then one may think of $A$ and $B$ as continuous $C_0(X)$-algebras, see Example \ref{ex:C(X)alg}. It can easily be checked that $KK(X; A, B) = \mathscr RKK(X; A, B)$ as defined by Kasparov in \cite{Kasparov-eqKKNovikov}. \end{example} \begin{observation}\label{o:KK(X)hom} There are canonical homomorphisms \begin{equation} KK_\nuc(X; A, B) \to KK(X; A, B) \to KK(A,B) \end{equation} given by forgetting that the modules are $X$-nuclear or $X$-equivariant. In particular, any element $x\in KK_\nuc(X; A, B)$ induces a homomorphism $\Gamma_0(x) \colon K_0(A) \to K_0(B)$ such that if $\phi \colon A \to B$ is an $X$-nuclear $\ast$-homo\-morphism then \begin{equation} \Gamma_0(KK_\nuc(X;\phi)) = \phi_0 \colon K_0(A) \to K_0(B).
\end{equation} \end{observation} \subsection{The Kasparov product} The \emph{Kasparov product} of a Kasparov $A$-$B$-module $(E_1, \phi_1, F_1)$ and a $B$-$C$-module $(E_2, \phi_2, F_2)$ is a Kasparov $A$-$C$-module $(E_1 \hat \otimes_{\phi_2} E_2, \phi_1 \hat \otimes 1_{\mathcal B(E_2)} , F)$ where $F$ satisfies \begin{itemize} \item[(a)] $(\phi_1(a) \hat \otimes 1_{\mathcal B(E_2)})[F_1 \hat \otimes 1_{\mathcal B(E_2)}, F] (\phi_1(a) \hat \otimes 1_{\mathcal B(E_2)})^\ast$ is positive modulo the ``compacts'' $\mathcal K(E_1 \hat \otimes_{\phi_2} E_2)$ for every $a\in A$; and \item[(b)] $[\widetilde T_\xi, F_2\oplus F] \in \mathcal K(E_2 \oplus (E_1 \hat \otimes_{\phi_2} E_2))$ for every $\xi \in E_1$ where $\widetilde T_\xi = \left( \begin{array}{cc} 0 & (\xi \hat \otimes -)^\ast \\ \xi \hat \otimes - & 0 \end{array}\right) \in \mathcal B(E_2 \oplus (E_1 \hat \otimes_{\phi_2} E_2))$. \end{itemize} Such an $F$ exists and is unique up to operator homotopy, provided $A$ is separable, and $B$ and $C$ are $\sigma$-unital, see \cite[Section 18.4]{Blackadar-book-K-theory}. Hence in the $X$-equivariant and $X$-nuclear cases, all one has to do is check that $\phi_1 \hat \otimes 1_{\mathcal B(E_2)}$ is in fact weakly $X$-equivariant or weakly $X$-nuclear in order to get a well-defined Kasparov product. \begin{lemma}\label{l:tensormodule} Let $A$, $B$, and $C$ be graded $X$-$C^\ast$-algebras, let $E_1$ and $E_2$ be graded Hilbert $B$- and $C$-modules respectively, and let $\phi_1 \colon A \to \mathcal B(E_1)$ and $\phi_2 \colon B \to \mathcal B(E_2)$ be graded, weakly $X$-equivariant representations. Then \begin{equation} \phi_1 \hat \otimes 1_{\mathcal B(E_2)} \colon A \to \mathcal B(E_1 \hat \otimes_{\phi_2} E_2) \end{equation} is weakly $X$-equivariant. Moreover, if one of $\phi_1$ and $\phi_2$ is weakly $X$-nuclear, then $\phi_1 \hat \otimes 1_{\mathcal B(E_2)}$ is weakly $X$-nuclear.
\end{lemma} \begin{proof} As $\overline{\mathrm{span}} \{ \xi_1 \otimes \xi_2 : \xi_1\in E_1, \xi_2 \in E_2\} = E_1 \hat \otimes_{\phi_2} E_2$, Lemma \ref{l:densespan} implies that it suffices to show that the map \begin{equation} a \mapsto \langle \xi_1 \otimes \xi_2 , (\phi_1 \hat \otimes 1_{\mathcal B(E_2)})(a) (\xi_1 \otimes \xi_2) \rangle_{E_1 \hat \otimes_{\phi_2} E_2} \end{equation} is $X$-equivariant, or $X$-nuclear if one of $\phi_1$ and $\phi_2$ is weakly $X$-nuclear, for all $\xi_1 \in E_1$ and $\xi_2 \in E_2$. Fix $\xi_i \in E_i$ and let $\psi_i = \langle \xi_i, \phi_i(-) \xi_i\rangle_{E_i}$ for $i=1,2$. Then $\psi_i$ is $X$-equivariant by weak $X$-equivariance of $\phi_i$, and $\psi_i$ is $X$-nuclear if $\phi_i$ is weakly $X$-nuclear. One has \begin{eqnarray} && \langle \xi_1 \otimes \xi_2 , (\phi_1 \hat \otimes 1_{\mathcal B(E_2)})(a) (\xi_1 \otimes \xi_2) \rangle_{E_1 \hat \otimes_{\phi_2} E_2} \nonumber\\ &=& \langle \xi_2, \phi_2(\langle \xi_1, \phi_1(a) \xi_1\rangle_{E_1}) \xi_2 \rangle_{E_2} \nonumber\\ &=& \psi_2 (\psi_1 ( a)). \end{eqnarray} As the composition of two $X$-equivariant maps is clearly $X$-equivariant, and the composition of an $X$-equivariant map and an $X$-nuclear map is $X$-nuclear (see Observation \ref{o:nuccomp}), the result follows. \end{proof} In the proof of the above lemma, it was important that one could check weak $X$-nuclearity only on elementary tensors. This relied on the non-trivial fact (Lemma \ref{l:nucsum}) that the set of $X$-nuclear maps is hereditary; see also Remark \ref{r:Xnuccone}. \begin{proposition}\label{p:Kasparovprod} Let $\mathcal E_1 = (E_1, \phi_1, F_1)$ be an $X$-equivariant Kasparov $A$-$B$-module, and let $\mathcal E_2 = (E_2,\phi_2,F_2)$ be an $X$-equivariant Kasparov $B$-$C$-module.
If $A$ is separable, and $B$ and $C$ are $\sigma$-unital, then there is a Kasparov product $\mathcal E_{12} = (E_1 \hat \otimes_{\phi_2} E_2, \phi_1 \hat \otimes 1_{\mathcal B(E_2)}, F)$ of $\mathcal E_1$ and $\mathcal E_2$, unique up to operator homotopy, and every such Kasparov module is $X$-equivariant. Moreover, if one of $\mathcal E_1$ and $\mathcal E_2$ is $X$-nuclear, then so is $\mathcal E_{12}$. \end{proposition} \begin{proof} This follows immediately from Lemma \ref{l:tensormodule} and \cite[Theorem 2.2.8]{JensenThomsen-book-KK-theory}. \end{proof} \begin{lemma}\label{l:prodoh} Let $A,B$, and $C$ be graded $X$-$C^\ast$-algebras for which $A$ is separable, and $B$ and $C$ are $\sigma$-unital, let $\mathcal E_1$ and $\mathcal E_1'$ be $X$-equivariant Kasparov $A$-$B$-modules, and let $\mathcal E_2$ and $\mathcal E_2'$ be $X$-equivariant Kasparov $B$-$C$-modules. Let $\mathcal E_{12}$ be a Kasparov product of $\mathcal E_1$ and $\mathcal E_2$, and let $\mathcal E_{12}'$ be a Kasparov product of $\mathcal E_1'$ and $\mathcal E_2'$. Suppose that $\mathcal E_1 \sim_\oh \mathcal E_1'$ and $\mathcal E_2 \sim_\oh \mathcal E_2'$ (both in the $X$-equivariant sense). The following hold. \begin{itemize} \item[(1)] $\mathcal E_{12} \sim_\oh \mathcal E_{12}'$ (in the $X$-equivariant sense). \item[(2)] If $i\in \{1,2\}$ is such that $\mathcal E_i$ and $\mathcal E_i'$ are $X$-nuclear and $\mathcal E_i \sim_\oh \mathcal E_i'$ (in the $X$-nuclear sense), then $\mathcal E_{12} \sim_\oh \mathcal E_{12}'$ (in the $X$-nuclear sense). \end{itemize} \end{lemma} \begin{proof} This follows immediately from Proposition \ref{p:Kasparovprod} and \cite[Lemmas 2.2.9--14]{JensenThomsen-book-KK-theory}. \end{proof} Recall that we may form a \emph{graded spatial (a.k.a.~minimal) tensor product} $A \hat \otimes C$ of graded $C^\ast$-algebras, which has grading operator $\beta_A \hat \otimes \beta_C$. 
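For later use, recall the standard sign conventions on $A \hat \otimes C$ (see \cite[14.4]{Blackadar-book-K-theory}): for homogeneous elements $a_1, a_2 \in A$ and $c_1, c_2 \in C$ one has \begin{equation} (a_1 \hat \otimes c_1)(a_2 \hat \otimes c_2) = (-1)^{\partial c_1 \partial a_2} \, a_1 a_2 \hat \otimes c_1 c_2, \qquad (a_1 \hat \otimes c_1)^\ast = (-1)^{\partial a_1 \partial c_1} \, a_1^\ast \hat \otimes c_1^\ast, \end{equation} where $\partial$ denotes the degree of a homogeneous element. These signs only change elementary tensors by a factor of $\pm 1$, so they never affect membership of subsets of the form $A \hat \otimes C(U)$.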
Also, if $E_A$ and $E_C$ are graded Hilbert $A$- and $C$-modules respectively, we may form their \emph{exterior tensor product} $E_A \hat \otimes E_C$ which is a graded Hilbert $A \hat \otimes C$-module. If $A$ is a graded $C^\ast$-algebra (with no given action of $X$) and $C$ is a graded $X$-$C^\ast$-algebra, then $A\hat \otimes C$ is a graded $X$-$C^\ast$-algebra with the action \begin{equation} (A\hat \otimes C)(U) = A \hat \otimes C(U), \qquad U \in \mathcal O(X). \end{equation} When $A,B$, and $C$ are graded $C^\ast$-algebras, and $\mathcal E = (E,\phi, F)$ is a Kasparov $A$-$B$-module, one may form the exterior tensor product \begin{equation} \mathcal E \otimes C := ( E \hat \otimes C , \phi \hat \otimes \id_C \colon A\hat \otimes C \to \mathcal B(E \hat \otimes C), F\hat \otimes 1_{\mathcal B(C)}), \end{equation} which is a Kasparov $A\hat \otimes C$-$B\hat \otimes C$-module. Clearly the exterior tensor product $- \otimes C$ of Kasparov modules preserves direct sums (up to unitary equivalence), operator homotopies, and degeneracy. Note that none of these conditions rely on $X$-equivariance or $X$-nuclearity. \begin{lemma}\label{l:exttensorbasic} Let $A$ and $B$ be graded $C^\ast$-algebras (with no given action of $X$), let $C$ be a graded $X$-$C^\ast$-algebra, and let $\mathcal E = (E, \phi, F)$ be a Kasparov $A$-$B$-module. Then $\mathcal E \otimes C$ is an $X$-equivariant Kasparov $(A\hat \otimes C)$-$(B\hat \otimes C)$-module. In particular, if $\mathcal E_0\sim_{\oh} \mathcal E_1$ (as ordinary Kasparov $A$-$B$-modules), then $\mathcal E_0 \otimes C \sim_{\oh} \mathcal E_1 \otimes C$ in the $X$-equivariant sense.
\end{lemma} \begin{proof} As $\overline{\mathrm{span}} \{ \xi \otimes c : \xi \in E , c \in C \textrm{ are homogeneous}\} = E \hat \otimes C$, Lemma \ref{l:densespan} implies that it suffices to show that \begin{equation} A \hat \otimes C \ni x \mapsto \langle \xi \otimes c , (\phi \hat \otimes \id_C)(x) (\xi \otimes c)\rangle_{E \hat \otimes C} \in B \hat \otimes C \end{equation} is $X$-equivariant for all homogeneous $\xi \in E$ and $c\in C$. Denote this map by $\phi_0$ for a given $\xi$ and $c$. Hence, given $U\in \mathcal O(X)$ we should show that $\phi_0(A \hat \otimes C(U)) \subseteq B \hat \otimes C(U)$. Let $a\in A$ and $d\in C(U)$ be homogeneous. Then \begin{equation} \phi_0(a \otimes d) = \langle \xi \otimes c , (\phi(a) \otimes d) (\xi \otimes c)\rangle_{E \hat \otimes C} = \pm \langle \xi , \phi(a) \xi\rangle_E \otimes c^\ast dc \end{equation} where the sign depends on the degrees of $\xi, a,c$, and $d$ (see \cite[14.4.4]{Blackadar-book-K-theory}, although the exact computation is not needed). No matter the sign we have $\phi_0(a\otimes d) \in B\hat \otimes C(U)$. As $A \hat \otimes C(U)$ is densely spanned by its elementary tensors of homogeneous elements, it follows that $\phi_0(A \hat \otimes C(U)) \subseteq B \hat \otimes C(U)$. Hence $\mathcal E \otimes C$ is $X$-equivariant. The ``in particular'' part follows immediately since $- \otimes C$ preserves direct sums, operator homotopies, and takes degenerate modules to degenerate modules which are $X$-equivariant by what we already proved. \end{proof} The following holds as in the classical case. \begin{proposition}\label{p:ohvsh} Let $A$ and $B$ be graded $X$-$C^\ast$-algebras with $A$ separable and $B$ $\sigma$-unital. Then the equivalence relations $\sim_\h$ and $\sim_\oh$ agree on $X$-equivariant (respectively $X$-nuclear) Kasparov $A$-$B$-modules. \end{proposition} \begin{proof} As $\sim_\oh$ is stronger than $\sim_\h$ by Corollary \ref{c:ohstronger} we only need to prove the converse. 
We only do the $X$-nuclear case. Let $\mathcal E_0$ and $\mathcal E_1$ be $X$-nuclear Kasparov modules such that $\mathcal E_0 \sim_\h \mathcal E_1$. We should prove that $\mathcal E_0 \sim_\oh \mathcal E_1$. As $\sim_\h$ is the transitive closure of $\approx_\h$, and as $\sim_\oh$ is an equivalence relation, we may assume without loss of generality that $\mathcal E_0 \approx_\h \mathcal E_1$. Fix $\mathcal E$ an $X$-nuclear Kasparov module which is a homotopy from $\mathcal E_0$ to $\mathcal E_1$. By \cite[Lemma 18.5.1]{Blackadar-book-K-theory}, $(C[0,1], \ev_0, 0) \sim_\oh (C[0,1], \ev_1, 0)$ in the classical sense as Kasparov $C[0,1]$-$\mathbb C$-modules. As $(C[0,1], \ev_i , 0) \otimes B$ and $(IB, \ev_i, 0)$ are unitarily equivalent for $i=0,1$, Lemma \ref{l:exttensorbasic} implies that $(IB, \ev_0,0) \sim_\oh (IB, \ev_1, 0)$ in the $X$-equivariant sense. As $\mathcal E_i$ is a Kasparov product of $\mathcal E$ and $(IB, \ev_i, 0)$ for $i=0,1$, it follows from Lemma \ref{l:prodoh} that $\mathcal E_0 \sim_{\oh} \mathcal E_1$ in the $X$-nuclear sense. \end{proof} \begin{theorem}[The Kasparov product]\label{t:Kaspprod} Let $X$ be a topological space, and let $A$, $B$, and $C$ be separable, graded $X$-$C^\ast$-algebras. The Kasparov product induces bilinear products \begin{equation} \begin{array}{rrclcl} \circ \colon & KK(X; B, C) &\otimes& KK(X; A, B) & \to & KK(X; A, C) \\ \circ \colon & KK_\nuc(X; B, C) &\otimes& KK(X; A, B) & \to & KK_\nuc(X; A, C) \\ \circ \colon & KK(X; B, C) &\otimes& KK_\nuc (X; A, B) & \to & KK_\nuc(X; A, C) \\ \circ \colon & KK_\nuc(X; B, C) &\otimes& KK_\nuc(X; A, B) & \to & KK_\nuc(X; A, C). \end{array} \end{equation} These bilinear products are all associative in the obvious sense. 
\end{theorem} \begin{proof} This is an immediate consequence of Lemma \ref{l:prodoh}, Proposition \ref{p:ohvsh}, and \cite[Theorem 18.6.1]{Blackadar-book-K-theory}\footnote{The proof of this result is exactly showing associativity of the Kasparov product up to operator homotopy.} for associativity. \end{proof} At first it might seem weird that taking the product of a $KK(X)$-element and a $KK_\nuc(X)$-element produces a $KK_\nuc(X)$-element. This is essentially because the composition of two c.p.~maps, one of which is nuclear, is again a nuclear c.p.~map. \begin{remark} As in the classical case it follows that \begin{equation} KK(X; \psi) \circ KK(X; \phi) = KK(X; \psi \circ \phi) \end{equation} for graded, $X$-equivariant $\ast$-homomorphisms. If one of these $\ast$-homo\-morph\-isms is $X$-nuclear, say $\phi$, then \begin{equation} KK(X; \psi) \circ KK_\nuc(X; \phi) = KK_\nuc(X; \psi \circ \phi), \end{equation} and similarly if $\psi$ is $X$-nuclear. \end{remark} \begin{lemma}\label{l:productid} Let $A$ and $B$ be graded $X$-$C^\ast$-algebras for which $A$ is separable and $B$ is $\sigma$-unital. Let $\mathcal E$ be an $X$-equivariant (respectively $X$-nuclear) Kasparov $A$-$B$-module. Then the Kasparov product of $\mathcal E$ with either the $X$-equivariant $A$-$A$-module $(A,\id_A,0)$ or the $X$-equivariant $B$-$B$-module $(B,\id_B,0)$, is $X$-equivariantly (respectively $X$-nuclearly) homotopic to $\mathcal E$. \end{lemma} \begin{proof} The case with $(B,\id_B,0)$ is trivial since $\mathcal E$ is itself a Kasparov product of $(B,\id_B,0)$ and $\mathcal E$. For the other product, let $\mathcal E = (E, \phi, F)$. 
In (the proof of) \cite[Proposition 18.3.6]{Blackadar-book-K-theory}, a Kasparov $A$-$IB$-module $\overline{\mathcal E} = (\overline{E_0}, \psi, G)$ is constructed such that $\overline{\mathcal E}_0$ is operator homotopic to $\mathcal E$, where $\overline{E_0} = \{ f\in C([0,1], E) : f(1) \in \overline{\phi(A)E}\}$ and $\psi$ acts point-wise as $\phi$. By Lemma \ref{l:weakhtpy} it follows that $\overline{\mathcal E}$ is $X$-equivariant (respectively $X$-nuclear), and thus by Lemma \ref{l:prodoh} we may replace $\mathcal E$ with $\overline{\mathcal E}_1$ and therefore assume without loss of generality that $\overline{\phi(A)E} = E$. But in this case $\mathcal E$ is itself a Kasparov product of $\mathcal E$ and $(A,\id_A,0)$. \end{proof} As a technical device which will be used later, the following result about full, hereditary $C^\ast$-subalgebras is proved. \begin{proposition}\label{p:KKXfullher} Let $A$ be a separable $X$-$C^\ast$-algebra, and let $B_0$ and $B$ be $\sigma$-unital $X$-$C^\ast$-algebras for which $B_0 \subseteq B$ is a full, hereditary $C^\ast$-subalgebra. If $B_0(U)$ is full in $B(U)$ for all $U\in \mathcal O(X)$ then the inclusion $\iota \colon B_0 \hookrightarrow B$ induces isomorphisms \begin{equation} KK(X; A, B_0) \xrightarrow \cong KK(X; A, B), \quad KK_\nuc(X; A, B_0) \xrightarrow \cong KK_\nuc(X; A, B). \end{equation} \end{proposition} \begin{proof} We do the $X$-nuclear case; the $X$-equivariant case is identical. Let $\mathcal E = (\overline{B B_0}, \id_B \colon B \to B = \mathcal K_{B_0}(\overline{B B_0}), 0)$ be the induced Kasparov $B$-$B_0$-module. Taking the Kasparov product with $\mathcal E$ induces a homomorphism \begin{equation} [\mathcal E] \circ - \colon KK_\nuc(X; A ,B) \to KK_\nuc(X; A , B_0).
\end{equation} As the Kasparov product is associative up to $\approx_\oh$, even when only the first $C^\ast$-algebra is separable, provided the relevant Kasparov products exist (see \cite[Lemma 22]{Skandalis-RemarksKK}), it suffices by Lemma \ref{l:productid} to show that $(B_0,\id_{B_0},0)$ and $(B,\id_B,0)$ are Kasparov products of $\mathcal E$ and $(\iota) = (B, \iota \colon B_0 \to B = \mathcal K_B(B), 0)$ up to $X$-equivariant homotopy. Clearly $(B,\id_B,0)$ is a Kasparov product $(\iota) \circ \mathcal E$. The other Kasparov product, $\mathcal E \circ (\iota)$, also exists, and the canonical choice is unitarily equivalent to $(\overline{B B_0} , \iota \colon B_0 \to B = \mathcal K_{B_0}(\overline{B B_0}), 0)$. Let $E= \{ f\in C([0,1], \overline{BB_0}) : f(0) \in B_0\}$ as a Hilbert $IB_0$-module. Then the Kasparov $B_0$-$IB_0$-module $(E, B_0 \to \mathcal K(E), 0)$ defines an $X$-equivariant homotopy from $(\overline{B B_0} , \iota \colon B_0 \to B = \mathcal K_{B_0}(\overline{B B_0}), 0)$ to $(B_0, \id_{B_0}, 0)$. \end{proof} \begin{corollary}\label{c:KKXstable} If $p\in \mathcal K$ is a rank one projection then the embedding $\id_B \otimes p \colon B \hookrightarrow B\otimes \mathcal K$ induces an isomorphism \begin{equation} KK(X; A, B) \xrightarrow \cong KK(X; A, B\otimes \mathcal K), \end{equation} and similarly for $KK_\nuc(X)$. \end{corollary} \begin{corollary}\label{c:KKXasMvN} If $\phi, \psi \colon A \to B$ are $X$-equivariant (respectively $X$-nuc\-lear) $\ast$-homo\-morphisms and $\phi \sim_\asMvN \psi$, then \begin{equation} KK(X; \phi ) = KK(X; \psi) \qquad (\textrm{respectively } KK_\nuc(X; \phi) = KK_\nuc(X; \psi)). \end{equation} \end{corollary} \begin{proof} We only do the $X$-nuclear version. By Proposition \ref{p:MvNeq}, there is a norm-continuous unitary path $(u_t)_{t\in [0,1)}$ in $\multialg{B\otimes \mathcal K}$ such that $\Ad u_t \circ (\phi\otimes e_{1,1}) \xrightarrow{t\to 1} \psi \otimes e_{1,1}$ in the point-norm topology.
As the unitary group of $\multialg{B\otimes \mathcal K}$ is connected by \cite{CuntzHigson-Kuipersthm}, we may assume that $u_0 = 1$. Let $\Phi \colon A \to C([0,1], B \otimes \mathcal K)$ be given by $\Phi(a)(t) = (\Ad u_t)\big( (\phi \otimes e_{1,1})(a) \big)$ for $t\in [0,1)$ and $\Phi(a)(1) = (\psi \otimes e_{1,1})(a)$. By Lemma \ref{l:XnucC(Y)}, $\Phi$ is $X$-nuclear and thus $KK_\nuc(X; \phi\otimes e_{1,1}) = KK_\nuc(X; \psi \otimes e_{1,1})$. By Corollary \ref{c:KKXstable}, it follows that $KK_\nuc(X; \phi) = KK_\nuc(X; \psi)$. \end{proof} \subsection{The Cuntz pair picture}\label{ss:CuntzPairs} As in Subsection \ref{ss:KKprel}, two pictures of $KK$-theory will be considered: the Cuntz pair picture and the Fredholm picture. For these pictures to make sense, the $C^\ast$-algebras $A$ and $B$ will always be assumed to be trivially graded. Since $\multialg{B \otimes \mathcal K} \cong \mathcal B_B(\ell^2(\mathbb N)\otimes B)$ canonically, the two will be identified without further mention. Recall that an \emph{($A$-$B$-)Cuntz pair} $(\psi_0, \psi_1)$ consists of a pair of $\ast$-homo\-morphisms $\psi_0,\psi_1 \colon A \to \multialg{B\otimes \mathcal K}$ for which $\psi_0(a) - \psi_1(a) \in B\otimes \mathcal K$ for all $a\in A$. A \emph{homotopy} of two $A$-$B$-Cuntz pairs $(\phi_0, \phi_1)$ and $(\psi_0, \psi_1)$ is a family $(\eta_0^{(s)}, \eta_1^{(s)})_{s\in [0,1]}$ of Cuntz pairs, such that \begin{equation}\label{eq:CPhtpy} (\eta_0^{(0)}, \eta_1^{(0)}) = (\phi_0, \phi_1), \qquad (\eta_0^{(1)} , \eta_1^{(1)}) = (\psi_0, \psi_1), \end{equation} $s \mapsto \eta_i^{(s)}(a)$ is strictly continuous for $i=0,1$ and $a\in A$, and $s\mapsto \eta_0^{(s)}(a) - \eta_1^{(s)}(a)$ is norm-continuous for any $a\in A$.\footnote{The point is that $\multialg{I B \otimes \mathcal K}$ is canonically isomorphic to the set of strictly continuous, bounded functions from $[0,1]$ to $\multialg{B\otimes \mathcal K}$, and hence a homotopy of $A$-$B$-Cuntz pairs is the same as an $A$-$IB$-Cuntz pair.
Here, as usual, $IB = C([0,1], B)$.} \begin{definition} The Cuntz pair $(\psi_0,\psi_1)$ is called \emph{weakly $X$-equivariant} (respectively weakly $X$-nuclear) if both $\ast$-homomorphisms $\psi_0$ and $\psi_1$ are weakly $X$-equivariant (respectively weakly $X$-nuclear). Weakly $X$-equivariant (respectively weakly $X$-nuclear) Cuntz pairs $(\phi_0, \phi_1)$ and $(\psi_0, \psi_1)$ are said to be \emph{homotopic} if there is a homotopy $(\eta_0^{(s)}, \eta_1^{(s)})_{s\in [0,1]}$ of Cuntz pairs satisfying \eqref{eq:CPhtpy}, and for which $\eta_i^{(s)}$ is weakly $X$-equivariant (resp. weakly $X$-nuclear) for $i=0,1$ and $s\in [0,1]$. \end{definition} One can form sums of weakly $X$-equivariant/nuclear Cuntz pairs by \begin{equation} (\phi_0, \phi_1) \oplus_{s_1,s_2} (\psi_0, \psi_1) = (\phi_0 \oplus_{s_1,s_2} \psi_0, \phi_1 \oplus_{s_1,s_2} \psi_1), \end{equation} where $s_1,s_2\in \multialg{B\otimes \mathcal K}$ are $\mathcal O_2$-isometries. As any two Cuntz sums are unitarily equivalent, and as the unitary group of $\multialg{B\otimes \mathcal K}$ is path-connected, sums of weakly $X$-equivariant (respectively weakly $X$-nuclear) Cuntz pairs are unique up to homotopy. The set of homotopy classes of weakly $X$-equivariant (respectively $X$-nuclear) $A$-$B$-Cuntz pairs is an abelian group with the addition defined by Cuntz sums as above. Denote the equivalence class of $(\phi_0,\phi_1)$ by $[\phi_0, \phi_1]$. Let $\mathcal H_B := \ell^2(\mathbb N) \otimes B$ and identify $\multialg{B\otimes \mathcal K}$ with $\mathcal B(\mathcal H_B)$ in the standard way. Moreover, recall that $\mathcal H_B$ has the grading where every element has degree $0$, and $\mathcal H_B^\op$ is equal to $\mathcal H_B$ as a Hilbert $B$-module, but every element in $\mathcal H_B^\op$ has degree $1$. \begin{proposition}[The Cuntz pair picture]\label{p:KKCP} Let $X$ be a topological space, and let $A$ and $B$ be $X$-$C^\ast$-algebras for which $A$ is separable and $B$ is $\sigma$-unital.
The assignment \begin{equation}\label{eq:CPmodule} (\phi_0,\phi_1) \mapsto \mathcal E_{(\phi_0,\phi_1)} := \left( \mathcal H_B \oplus \mathcal H_B^\op, \phi_0 \oplus \phi_1, \left( \begin{array}{cc} 0 & 1_{\mathcal H_B} \\ 1_{\mathcal H_B} & 0 \end{array} \right) \right) \end{equation} from the collection of $A$-$B$-Cuntz pairs to Kasparov $A$-$B$-modules induces an isomorphism between the group of homotopy classes of weakly $X$-equivariant (respectively weakly $X$-nuclear) $A$-$B$-Cuntz pairs, and the Kasparov group $KK(X; A, B)$ (respectively $KK_\nuc(X; A, B)$). \end{proposition} \begin{proof} As the proof is very close to the corresponding proof in the classical case, it will only be sketched, and only in the $X$-nuclear case. By Corollary \ref{c:densespan}, $\mathcal E_{(\phi_0,\phi_1)}$ is $X$-nuclear whenever $(\phi_0,\phi_1)$ is weakly $X$-nuclear. A homotopy of weakly $X$-nuclear $A$-$B$-Cuntz pairs induces an $A$-$IB$-Cuntz pair which is weakly $X$-nuclear by Lemma \ref{l:XnucC(Y)}. So by applying the map \eqref{eq:CPmodule} one obtains a homotopy of $X$-nuclear Kasparov $A$-$B$-modules, which implies that the map $[\phi_0, \phi_1] \mapsto [\mathcal E_{(\phi_0,\phi_1)}]$ is well-defined. It is routine to show that it takes Cuntz sums to diagonal sums, and clearly $[0,0] \mapsto [0]$, so $[\phi_0,\phi_1] \mapsto [\mathcal E_{(\phi_0,\phi_1)}]$ is a well-defined group homomorphism into $KK_\nuc(X; A, B)$. Arguing exactly as in \cite[Section 17.6]{Blackadar-book-K-theory} it follows that the map $[\phi_0, \phi_1] \mapsto [\mathcal E_{(\phi_0,\phi_1)}]$ is surjective. Similarly, by applying these arguments to an $X$-nuclear homotopy from $\mathcal E_{(\phi_0, \phi_1)}$ to $\mathcal E_{(\psi_0,\psi_1)}$, one may lift such a homotopy of $X$-nuclear Kasparov modules to a homotopy of weakly $X$-nuclear Cuntz pairs.
The endpoints of this homotopy are unitarily equivalent to the weakly $X$-nuclear Cuntz pairs $(\phi_0, \phi_1) \oplus_{s_1,s_2} (0,0)$ and $(\psi_0,\psi_1)\oplus_{s_1,s_2} (0,0)$ respectively. As these are homotopic to $(\phi_0,\phi_1)$ and $(\psi_0,\psi_1)$ respectively, it follows that $[\phi_0,\phi_1] \mapsto [\mathcal E_{(\phi_0,\phi_1)}]$ is injective and thus induces a group isomorphism. \end{proof} \subsection{The Fredholm picture}\label{ss:Fredholm} In this subsection, a Fredholm-type picture of $KK(X)$ and $KK_\nuc(X)$ is obtained, cf.~\cite[Section 2.1]{Higson-charKK}. Let $A$ be a separable $X$-$C^\ast$-algebra and let $B$ be a $\sigma$-unital $X$-$C^\ast$-algebra (both considered as trivially graded). An \emph{($A$-$B$-)cycle} is a triple \begin{equation} (\phi_0 \colon A \to \mathcal B(E_0), \phi_1 \colon A \to \mathcal B(E_1), u), \end{equation} usually just written $(\phi_0, \phi_1, u)$, where $E_0$ and $E_1$ are countably generated Hilbert $B$-modules, $\phi_0$ and $\phi_1$ are $\ast$-homomorphisms, and $u\in \mathcal B(E_0,E_1)$ satisfies \begin{equation}\label{eq:Frintertwiner} u \phi_0(a) - \phi_1(a) u \in \mathcal K(E_0, E_1) \end{equation} and \begin{equation}\label{eq:Fredholmunitary} (u^\ast u - 1_{\mathcal B(E_0)})\phi_0(a) \in \mathcal K(E_0) , \quad (uu^\ast - 1_{\mathcal B(E_1)}) \phi_1(a) \in \mathcal K(E_1) \end{equation} for all $a\in A$. A cycle $(\phi_0,\phi_1,u)$ is called \emph{degenerate} if \begin{equation}\label{eq:Fredholmdeg} u\phi_0(a) - \phi_1(a) u = 0, \quad (u^\ast u-1_{\mathcal B(E_0)}) \phi_0(a) =0, \quad (uu^\ast - 1_{\mathcal B(E_1)}) \phi_1(a) =0, \end{equation} for all $a\in A$.
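For orientation, note two immediate examples. For any representation $\psi \colon A \to \mathcal B(E)$ on a countably generated Hilbert $B$-module $E$, the triple $(\psi, \psi, 1_{\mathcal B(E)})$ is a degenerate cycle, as all three expressions in \eqref{eq:Fredholmdeg} vanish identically. Moreover, if $(\phi_0, \phi_1)$ is an $A$-$B$-Cuntz pair, then $(\phi_0, \phi_1, 1_{\mathcal B(\mathcal H_B)})$ is a cycle: the expressions in \eqref{eq:Fredholmunitary} vanish, and \begin{equation} 1_{\mathcal B(\mathcal H_B)} \phi_0(a) - \phi_1(a) 1_{\mathcal B(\mathcal H_B)} = \phi_0(a) - \phi_1(a) \in B \otimes \mathcal K \cong \mathcal K(\mathcal H_B) \end{equation} for all $a\in A$, verifying \eqref{eq:Frintertwiner}.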
Two cycles \begin{equation}\label{eq:twoCcycles} (\phi_0 \colon A \to \mathcal B(E_0), \phi_1 \colon A \to \mathcal B(E_1), u), \; \; (\phi_0' \colon A \to \mathcal B(E_0'), \phi_1' \colon A \to \mathcal B(E_1'), u'), \end{equation} are \emph{unitarily equivalent} if there are unitaries $v_i \in \mathcal B(E_i, E_i')$ for $i=0,1$, such that \begin{equation} v_0 \phi_0(a) = \phi_0'(a) v_0, \quad v_1 \phi_1(a) = \phi_1'(a) v_1 , \quad v_1 u = u' v_0, \end{equation} for all $a\in A$. Say that two cycles as in \eqref{eq:twoCcycles} are \emph{operator homotopic} if $\phi_i = \phi_i'$ (and thus also $E_i = E_i'$), for $i=0,1$, and if there is a norm-continuous path $[0,1] \ni t \mapsto u_t \in \mathcal B(E_0,E_1)$ such that $u_0 = u$, $u_1 = u'$, and each $(\phi_0,\phi_1, u_t)$ is a cycle. Addition of two cycles as in \eqref{eq:twoCcycles} is defined by direct sums as \begin{equation}\label{eq:dirsumcycles} (\phi_0 \oplus \phi_0' \colon A \to \mathcal B(E_0\oplus E_0'), \phi_1 \oplus \phi_1' \colon A \to \mathcal B(E_1\oplus E_1'), u \oplus u'). \end{equation} \begin{definition} Say that an $A$-$B$-cycle $(\phi_0,\phi_1,v)$ is \emph{weakly $X$-equivariant} (respectively \emph{weakly $X$-nuclear}) if $\phi_0$ and $\phi_1$ are weakly $X$-equivariant (respectively weakly $X$-nuclear). Let $\sim_\oh$ denote the equivalence relation on weakly $X$-equivariant (resp. weakly $X$-nuclear) $A$-$B$-cycles generated by unitary equivalence, operator homotopy, and addition of degenerate, weakly $X$-equivariant (respectively weakly $X$-nuclear) $A$-$B$-cycles. \end{definition} The set of $\sim_\oh$-equivalence classes of weakly $X$-equivariant (respectively $X$-nuclear) $A$-$B$-cycles is an abelian group where addition is as in \eqref{eq:dirsumcycles}. \begin{proposition}[The Fredholm picture]\label{p:KKFredholm} Let $X$ be a topological space, and let $A$ and $B$ be $X$-$C^\ast$-algebras for which $A$ is separable and $B$ is $\sigma$-unital.
The assignment \begin{equation}\label{eq:Fredholmmodule} (\phi_0,\phi_1,u) \mapsto \mathcal E_{(\phi_0,\phi_1,u)} := \left(E_0 \oplus E_1^{\op} , \phi_0 \oplus \phi_1 , \left( \begin{array}{cc} 0 & u^\ast \\ u & 0 \end{array} \right) \right) \end{equation} between the collections of $A$-$B$-cycles and Kasparov $A$-$B$-modules induces an isomorphism between the group of $\sim_\oh$-equivalence classes of weakly $X$-equi\-variant (respectively weakly $X$-nuclear) $A$-$B$-cycles, and the Kasparov group $KK(X; A, B)$ (respectively $KK_\nuc(X; A, B)$). \end{proposition} \begin{proof} As with the proof of the Cuntz pair picture, this is very close to the corresponding proof in the classical case, so it will only be sketched, and only in the $X$-nuclear case. By definition, $\mathcal E_{(\phi_0,\phi_1,u)}$ is $X$-nuclear whenever $(\phi_0,\phi_1, u)$ is weakly $X$-nuclear. As the assignment \eqref{eq:Fredholmmodule} preserves direct sums (up to unitary equivalence), unitary equivalence, operator homotopy, and degeneracy, it follows that it induces a well-defined homomorphism into $KK_\nuc(X; A, B)$. For surjectivity, the Cuntz pair picture (Proposition \ref{p:KKCP}) implies that for every element in $KK_\nuc(X; A, B)$ there is a weakly $X$-nuclear Cuntz pair $(\phi_0,\phi_1)$ such that $\mathcal E_{(\phi_0,\phi_1)}$ represents the element. We consider $\phi_0$ and $\phi_1$ as maps to $\mathcal B(\mathcal H_B)$ where $\mathcal H_B = \ell^2(\mathbb N) \otimes B$. By Corollary \ref{c:densespan}, $(\phi_0,\phi_1,1_{\mathcal B(\mathcal H_B)})$ is a weakly $X$-nuclear cycle and $\mathcal E_{(\phi_0,\phi_1)} = \mathcal E_{(\phi_0,\phi_1,1_{\mathcal B(\mathcal H_B)})}$, so our map is surjective. For injectivity, let $(\phi_0,\phi_1,u) $ and $(\psi_0, \psi_1, v)$ be weakly $X$-nuclear cycles for which $\mathcal E_0 := \mathcal E_{(\phi_0,\phi_1,u)}$ and $\mathcal E_1 := \mathcal E_{(\psi_0,\psi_1,v)}$ have the same class in $KK_\nuc(X; A, B)$.
By Proposition \ref{p:ohvsh}, $\mathcal E_0 \sim_\oh \mathcal E_1$, so there are degenerate, $X$-nuclear Kasparov modules $\mathcal D_0$ and $\mathcal D_1$ such that $\mathcal E_0\oplus \mathcal D_0 \approx_\oh \mathcal E_1 \oplus \mathcal D_1$. Arguing as in \cite[Section 17.5]{Blackadar-book-K-theory},\footnote{The Fredholm picture in \cite[Section 17.5]{Blackadar-book-K-theory} is slightly different from the one given here. There, $u$ in the cycle $(\phi_0,\phi_1, u)$ is required to be a unitary, but $A$ is also required to be unital. The arguments given in \cite[Section 17.5]{Blackadar-book-K-theory} still carry through word for word, but produce an operator $u$ satisfying \eqref{eq:Fredholmunitary} instead of a unitary.} there are degenerate, $X$-nuclear Kasparov modules $\mathcal D_i'$ for $i=0,1$, such that $\mathcal D_i \oplus \mathcal D_i'$ is unitarily equivalent to a module of the form $\mathcal E_{(\eta_0^{(i)} , \eta_1^{(i)} , w^{(i)})}$ for a degenerate, weakly $X$-nuclear cycle $(\eta_0^{(i)} , \eta_1^{(i)} , w^{(i)})$. It is easy to see that the operator homotopy $\mathcal E_0\oplus \mathcal D_0 \oplus \mathcal D_0' \approx_\oh \mathcal E_1 \oplus \mathcal D_1 \oplus \mathcal D_1'$ induces an operator homotopy between \begin{equation} (\phi_0,\phi_1, u) \oplus (\eta_0^{(0)} , \eta_1^{(0)} , w^{(0)}) \quad \textrm{and} \quad (\psi_0,\psi_1, v) \oplus (\eta_0^{(1)} , \eta_1^{(1)} , w^{(1)}). \end{equation} Hence the assignment \eqref{eq:Fredholmmodule} induces an isomorphism as desired. \end{proof} If $(\phi_0,\phi_1)$ is an $A$-$B$-Cuntz pair, then $(\phi_0,\phi_1, 1_{\ell^2(\mathbb N)\otimes B})$ is an $A$-$B$-cycle. By Corollary \ref{c:densespan}, if $(\phi_0, \phi_1)$ is weakly $X$-equivariant or weakly $X$-nuclear, then so is $(\phi_0, \phi_1, 1_{\ell^2(\mathbb N) \otimes B})$.
However, it is a very non-trivial fact (relying on homotopy and operator homotopy agreeing for Kasparov modules) that this assignment induces a well-defined map between homotopy classes of Cuntz pairs and $\sim_\oh$-classes of cycles. Indeed, it follows immediately from Propositions \ref{p:KKCP} and \ref{p:KKFredholm} that it is well-defined, and, moreover, an isomorphism. \begin{corollary}\label{c:CPvsFredholm} Let $X$ be a topological space, and let $A$ and $B$ be $X$-$C^\ast$-algebras for which $A$ is separable and $B$ is $\sigma$-unital. The assignment \begin{equation} (\phi_0, \phi_1) \mapsto (\phi_0,\phi_1, 1_{\ell^2(\mathbb N)\otimes B}) \end{equation} between the collections of $A$-$B$-Cuntz pairs and $A$-$B$-cycles induces an isomorphism between the group of homotopy classes of weakly $X$-equi\-variant (respectively weakly $X$-nuclear) $A$-$B$-Cuntz pairs, and the group of $\sim_\oh$-equivalence classes of weakly $X$-equi\-variant (respectively weakly $X$-nuclear) $A$-$B$-cycles. \end{corollary} \section{A stable uniqueness theorem} The new proof of the Kirchberg--Phillips theorem presented in Section \ref{s:KP} required a stable uniqueness theorem due to Dadarlat and Eilers \cite[Theorem 3.10]{DadarlatEilers-asymptotic}. A similar stable uniqueness theorem for $KK_\nuc(X)$ will be used to prove the non-simple classification theorem. The goal of this section is to prove this stable uniqueness theorem. The proof presented here mimics the original proof, but one runs into trouble because the original proof uses unitisations in several places. Resolving these unitisation issues becomes one of the main complications when compared to the original proof.
Consider the automorphism group of a $C^\ast$-algebra as equipped with the uniform topology.\footnote{In particular, a \emph{norm}-continuous unitary path $(u_t)$ in $\multialg{B}$ gives rise to a continuous path $(\Ad u_t)_{t\in \mathbb R_+}$ in $\Aut(B)$, whereas for a \emph{strictly} continuous unitary path $(v_t)$ in $\multialg{B}$, the path $(\Ad v_t)_{t\in \mathbb R_+}$ is not necessarily continuous! However, $(\Ad v_t)_{t\in \mathbb R_+}$ is continuous if one instead equips $\Aut(B)$ with the point-norm topology, but this will not be relevant here.} \begin{proposition}[{\cite[Proposition 2.15]{DadarlatEilers-asymptotic}}]\label{p:DEaut} Let $D$ be a unital, separable $C^\ast$-algebra, and let $(\alpha_t)_{t\in \mathbb R_+}$ be a continuous path of automorphisms on $D$ with $\alpha_0 = \id_D$. Then there exists a continuous path $(v_t)_{t\in \mathbb R_+}$ of unitaries in $D$ with $v_0 = 1_D$ such that \begin{equation} \lim_{t\to \infty} \| \alpha_t(d) - \Ad v_t (d) \| = 0 , \qquad d\in D. \end{equation} \end{proposition} The following is an extension of the above proposition, and the (unital version of the) proof is essentially contained in the proof of \cite[Proposition 3.6]{DadarlatEilers-asymptotic}. This version is easier to apply when working with $C^\ast$-algebras that are not necessarily separable or unital. \begin{corollary}\label{c:DEaut} Let $E$ be a $C^\ast$-algebra, $A\subseteq E$ be a separable $C^\ast$-sub\-algebra, and $(\alpha_t)_{t\in \mathbb R_+}$ be a continuous path of automorphisms on $E$ with $\alpha_0 = \id_E$. Then there is a continuous path $(v_t)_{t\in \mathbb R_+}$ of unitaries in $\widetilde E$ with $v_0 = 1_{\widetilde E}$ such that \begin{equation} \lim_{t\to \infty} \| \alpha_t(a) - \Ad v_t (a) \| = 0 , \qquad \textrm{ for all }a\in A. \end{equation} \end{corollary} \begin{proof} If $E$ is non-unital then each $\alpha_t$ extends canonically to an automorphism $\widetilde \alpha_t$ on $\widetilde E$.
For any $x\in E$ and $\mu \in \mathbb C$ we have \begin{eqnarray} \| \widetilde \alpha_t(x + \mu1_{\widetilde E}) - \widetilde \alpha_s(x+\mu 1_{\widetilde E}) \| &=& \| \alpha_t(x) + \mu1_{\widetilde E} - \alpha_s(x)- \mu 1_{\widetilde E} \| \nonumber \\ &=& \| \alpha_t(x) - \alpha_s(x)\|. \end{eqnarray} Moreover, if $\| x + \mu 1_{\widetilde E} \| \leq 1$, then $|\mu | = \| x + \mu 1_{\widetilde E} + E \|_{\widetilde E/E} \leq 1$, and thus $\| x\| \leq \| x + \mu 1_{\widetilde E} \| + |\mu| \leq 2$. Thus it follows that $\| \widetilde \alpha_t - \widetilde \alpha_s\|_{\mathcal B(\widetilde E)} \leq 2 \| \alpha_t - \alpha_s\|_{\mathcal B(E)}$,\footnote{Here $\mathcal B(E)$ denotes the Banach algebra of bounded operators on $E$, not to be confused with the $C^\ast$-algebra of adjointable operators when $E$ is a Hilbert $C^\ast$-module. I hope this overlap of notation does not confuse the reader.} so $(\widetilde \alpha_t)_{t\in \mathbb R_+}$ is also continuous. Hence we may assume that $E$ is unital. Let $D$ denote the unital $C^\ast$-subalgebra of $E$ generated by all subalgebras of the form \begin{equation} (\alpha_{t_1}^{j_1} \circ \dots \circ \alpha_{t_n}^{j_n} )(A) \end{equation} for all $n\in \mathbb N$, $j_i \in \mathbb Z$ and $t_i \in \mathbb R_+ \cap \mathbb Q$. Then $D$ is separable and unital, contains $A$, and each $\alpha_t$ restricts to an automorphism on $D$. Thus $(\alpha_{t}|_D)_{t\in \mathbb R_+}$ and $D$ satisfy the hypotheses of Proposition \ref{p:DEaut}, so there exists a continuous path of unitaries $(v_t)_{t\in \mathbb R_+}$ in $D$, which are also unitaries in $E$, such that \begin{equation} \lim_{t\to \infty} \| \alpha_t(d) - \Ad v_t (d) \| = 0 , \qquad \textrm{ for all }d\in D. \end{equation} As $A\subseteq D$, this finishes the proof. \end{proof} Given $C^\ast$-algebras $A$ and $B$ and a $\ast$-homomorphism $\phi \colon A \to \multialg{B}$, let $\dot \phi \colon A \to \corona{B}$ denote the induced $\ast$-homomorphism, and \begin{equation} D_\phi := \{ x\in \multialg{B} : [x, \phi(A)] \subseteq B\}, \qquad E_\phi := \phi(A) + B.
\end{equation} Note that if $u\in D_\phi$ is a unitary then $\Ad u$ induces a (not necessarily inner) automorphism on $E_\phi$. Also, one obtains a short exact sequence \begin{equation}\label{eq:sesgamma} 0 \to B \to D_\phi \to \corona{B} \cap \dot \phi(A)' \to 0. \end{equation} If $\psi\colon A \to \multialg{B}$ is such that $\dot \psi = \dot \phi$, i.e.~$\phi(a) - \psi(a) \in B$ for all $a\in A$, then $D_\phi = D_{\psi}$ and $E_\phi = E_{\psi}$, so in particular $\psi$ induces the same exact sequence as above. \begin{definition} Two $\ast$-homomorphisms $\phi, \psi \colon A \to \multialg{B}$ are \emph{properly asymptotically unitarily equivalent}, written $\phi \approxeq \psi$, if there is a continuous unitary path $(u_t)_{t\in \mathbb R_+}$ in $\widetilde B$ such that \begin{equation} \lim_{t\to \infty} \| u_t^\ast \phi(a) u_t - \psi(a) \| = 0, \qquad a\in A. \end{equation} Write $\phi \approxeq_0 \psi$ if it is possible to choose $u_0 = 1_{\widetilde B}$ above. \end{definition} Obviously $\phi(a) - \psi(a) \in B$ for all $a\in A$ provided $\phi \approxeq \psi$. The following is essentially proved in \cite{DadarlatEilers-asymptotic} and is how one obtains proper asymptotic unitary equivalence. \begin{lemma}\label{l:propertrick} Let $A$ be a separable $C^\ast$-algebra, and let $\phi, \psi \colon A \to \multialg{B}$ be $\ast$-homo\-morphisms such that $(\phi(A) + \mathbb C 1_{\multialg{B}}) \cap B = \{0\}$ and for which $\phi(a) - \psi(a) \in B$ for all $a\in A$. Suppose that $(u_t)_{t\in \mathbb R_+}$ is a norm-continuous unitary path in $D_\phi$ for which \begin{equation} \lim_{t\to \infty} \| u_t^\ast \phi(a) u_t - \psi(a) \| = 0, \qquad a\in A. \end{equation} Then $\Ad u_0 \circ \phi \approxeq_0 \psi$. In particular, if $u_0 \in \widetilde B$ then $\phi \approxeq \psi$. \end{lemma} \begin{proof} Let $\alpha_t = \Ad (u_0^\ast u_t) \in \Aut(E_\phi)$, and note that $\alpha_t \circ \phi$ converges point-norm to $\Ad u_0^\ast \circ \psi$.
Clearly $(\alpha_t)_{t\in \mathbb R_+}$ is a continuous path in $\Aut (E_\phi)$ (with the uniform topology) with $\alpha_0 = \id_{E_\phi}$, so by Corollary \ref{c:DEaut} there is a continuous unitary path $(v_t)_{t\in \mathbb R_+}$ in $\widetilde E_\phi = E_\phi + \mathbb C 1_{\multialg{B}}$ with $v_0=1_{\multialg{B}}$ such that \begin{equation} \lim_{t\to \infty} \| \alpha_t(\phi(a)) - \Ad v_t(\phi(a)) \| = 0, \qquad a\in A. \end{equation} In particular, $\Ad v_t \circ \phi$ converges point-norm to $\Ad u_0^\ast \circ \psi$. Since $(\phi(A) + \mathbb C 1_{\multialg{B}}) \cap B = \{0\}$, there is a canonical identification $\phi(A) + \mathbb C 1_{\multialg{B}} \cong (E_\phi + \mathbb C1_{\multialg{B}}) /B$, so let $(x_t)_{t\in \mathbb R_+}$ be the unitary path in $\phi(A) + \mathbb C 1_{\multialg{B}}$ corresponding to $(v_t + B)_{t\in \mathbb R_+}$ in $(E_\phi + \mathbb C1_{\multialg{B}}) /B$. For $y_t := v_t - x_t \in B$, the path $(y_t)_{t\in \mathbb R_+}$ is clearly continuous and bounded. Note that \begin{eqnarray} \lim_{t\to \infty} \| x_t \phi(a) x_t^\ast - \phi(a) \| &=& \lim_{t\to \infty} \| v_t u_0 \psi (a) u_0^\ast v_t^\ast - \phi(a) + B\|_{(E_\phi + \mathbb C1_{\multialg{B}}) /B} \nonumber \\ &\leq& \lim_{t\to \infty} \| v_t^\ast \phi(a) v_t - \Ad u_0^\ast \circ \psi(a)\| \nonumber \\ &=& 0. \end{eqnarray} Letting $w_t = x_t^\ast v_t = 1_{\multialg{B}} + x_t^\ast y_t$ produces a continuous unitary path $(w_t)_{t\in \mathbb R_+}$ in $\widetilde B$ for which $w_0 = 1_{\widetilde B}$. It follows that \begin{eqnarray} && \| w_t^\ast \phi(a) w_t - \Ad u_0^\ast \circ \psi(a) \| \nonumber\\ &\leq& \| v_t^\ast (x_t \phi(a) x_t^\ast - \phi(a)) v_t\| + \| v_t^\ast \phi(a) v_t - \Ad u_0^\ast \circ \psi(a)\| \nonumber\\ &\to& 0, \end{eqnarray} for all $a\in A$, so $\phi \approxeq_0 \Ad u_0^\ast \circ \psi$, or equivalently, $\Ad u_0 \circ \phi \approxeq_0 \psi$. 
\end{proof} It is finally time to take care of the unitisation issues mentioned at the beginning of the section. For a topological space $X$, let $X^\dagger$ denote the (always non-Hausdorff) topological space $X \sqcup \{ \star\}$ with open sets $\mathcal O(X^\dagger) = \mathcal O(X) \sqcup \{ X^\dagger\}$. If $A$ is an $X$-$C^\ast$-algebra, then the forced unitisation $A^\dagger$ has a canonical action of $X^\dagger$ given by \begin{equation} A^\dagger(U) = \left\{ \begin{array}{ll} A(U), & \textrm{ if }U\in \mathcal O(X) \; \; (\subseteq \mathcal O(X^\dagger)) \\ A^\dagger , & \textrm{ if } U = X^\dagger. \end{array} \right. \end{equation} Similarly, if $B$ is an $X$-$C^\ast$-algebra then there is a canonical action of $X^\dagger$ on $B$ which is the ordinary action for open subsets in $X \subseteq X^\dagger$, and $B(X^\dagger) = B$. \begin{lemma}\label{l:Xnucunitise} Let $A$ and $B$ be $X$-$C^\ast$-algebras and let $\phi \colon A^\dagger \to B$ be a c.p.~map. Then $\phi$ is $X^\dagger$-equivariant (respectively $X^\dagger$-nuclear) if and only if $\phi|_A$ is $X$-equivariant (respectively $X$-nuclear). \end{lemma} \begin{proof} Since $\phi(A^\dagger(X^\dagger)) = \phi(A^\dagger) \subseteq B = B(X^\dagger)$, it easily follows that $\phi$ is $X^\dagger$-equivariant if and only if $\phi|_A$ is $X$-equivariant. If $U\in \mathcal O(X)$, then there are induced maps \begin{equation} [\phi]_U \colon A^\dagger/A^\dagger(U) \to B/B(U), \qquad [\phi|_A]_U \colon A/A(U) \to B/B(U), \end{equation} the latter being the restriction of $[\phi]_U$ to $A/A(U)$ under the identification $A^\dagger/A^\dagger(U) \cong (A/A(U))^\dagger$. By Lemma \ref{l:unitisednuc} it follows that $[\phi]_U$ is nuclear if and only if $[\phi|_A]_U$ is nuclear, so $\phi$ is $X^\dagger$-nuclear if and only if $\phi|_A$ is $X$-nuclear. \end{proof} \begin{proposition}\label{p:KKunitise} Let $A$ be a separable $X$-$C^\ast$-algebra and let $B$ be a $\sigma$-unital $X$-$C^\ast$-algebra.
Then the maps \begin{equation} KK(X; A,B) \to KK(X^\dagger; A^\dagger, B), \quad KK_\nuc(X; A, B) \to KK_\nuc(X^\dagger ; A^\dagger , B) \end{equation} given in the Cuntz pair picture by $[\phi, \psi] \mapsto [\phi^\dagger, \psi^\dagger]$ are well-defined, injective homomorphisms. \end{proposition} \begin{proof} We only do the $X$-nuclear version. If $\phi$ is weakly $X$-nuclear then $\phi^\dagger$ is weakly $X^\dagger$-nuclear by Lemma \ref{l:Xnucunitise}. By unitising an $X$-nuclear homotopy between Cuntz pairs $(\phi_0,\psi_0)$ and $(\phi_1 , \psi_1)$, one thus obtains an $X^\dagger$-nuclear homotopy between $(\phi_0^\dagger, \psi_0^\dagger)$ and $(\phi_1^\dagger, \psi_1^\dagger)$. Hence the map is well-defined, and is clearly a homomorphism. As any $X^\dagger$-nuclear homotopy from $(\phi^\dagger, \psi^\dagger)$ to $(0,0)$ restricts to an $X$-nuclear homotopy from $(\phi, \psi)$ to $(0,0)$, the homomorphism is injective. \end{proof} If $(\phi_0,\phi_1, u)$ is an $A$-$B$-cycle then $(\phi_0^\dagger, \phi_1^\dagger, u)$ is an $A^\dagger$-$B$-cycle if and only if $u$ is a unitary modulo ``compacts''.\footnote{Since one adds the relations $u^\ast u -1_{\mathcal B(E_0)} \in \mathcal K(E_0)$ and $uu^\ast - 1_{\mathcal B(E_1)} \in \mathcal K(E_1)$ to \eqref{eq:Fredholmunitary}.} In particular, if $\gamma \colon A \to \multialg{B\otimes \mathcal K}$ is a $\ast$-homo\-morphism and $w\in D_\gamma$ is a unitary, then $(\gamma^\dagger, \gamma^\dagger, w)$ is an $A^\dagger$-$B$-cycle. By Corollary \ref{c:densespan} and Lemma \ref{l:Xnucunitise}, if $\gamma$ is weakly $X$-nuclear, then $(\gamma^\dagger, \gamma^\dagger , w)$ is a weakly $X^\dagger$-nuclear cycle. \begin{lemma}\label{l:unitiseunitaries} Suppose $\gamma \colon A \to \multialg{B\otimes \mathcal K}$ is a weakly $X$-nuclear $\ast$-homo\-morphism and that $w_1,w_2 \in D_\gamma$ are unitaries.
If $[\gamma, \gamma, w_1] = [\gamma , \gamma , w_2]$ in $KK_\nuc(X; A, B)$ then $[\gamma^\dagger, \gamma^\dagger , w_1] = [\gamma^\dagger, \gamma^\dagger, w_2]$ in $KK_\nuc(X^\dagger; A^\dagger, B)$ (using the Fredholm picture, Proposition \ref{p:KKFredholm}). \end{lemma} \begin{proof} The cycle $(\gamma, \gamma, w_i)$ is unitarily equivalent to $(\gamma, \Ad w_i \circ \gamma, 1_{\multialg{B\otimes \mathcal K}})$. Using Corollary \ref{c:CPvsFredholm} to view everything in the Cuntz pair picture (Proposition \ref{p:KKCP}), $[\gamma, \Ad w_1 \circ \gamma] = [\gamma , \Ad w_2 \circ \gamma]$ in $KK_\nuc(X; A, B)$. By Proposition \ref{p:KKunitise}, $[\gamma^\dagger, (\Ad w_1 \circ \gamma)^\dagger] = [\gamma^\dagger, (\Ad w_2 \circ \gamma)^\dagger]$ in $KK_\nuc(X^\dagger; A^\dagger, B)$. As $(\Ad w_i \circ \gamma)^\dagger = \Ad w_i \circ \gamma^\dagger$, and as the cycles $(\gamma^\dagger, \Ad w_i \circ \gamma^\dagger, 1_{\multialg{B\otimes \mathcal K}})$ and $(\gamma^\dagger, \gamma^\dagger, w_i)$ are unitarily equivalent, it follows from Corollary \ref{c:CPvsFredholm} that $[\gamma^\dagger, \gamma^\dagger, w_1] = [\gamma^\dagger, \gamma^\dagger, w_2]$ in $KK_\nuc(X^\dagger; A^\dagger, B)$. \end{proof} The following lemma, which is an ideal-related version of \cite[Lemma 3.5]{DadarlatEilers-asymptotic}, will be needed. \begin{lemma}\label{l:DEoh} Let $A$ be a separable $X$-$C^\ast$-algebra and let $B$ be a $\sigma$-unital $X$-$C^\ast$-algebra. Let $\gamma \colon A \to \multialg{B\otimes \mathcal K}$ be a weakly $X$-nuclear $\ast$-homomorphism and let $w_1,w_2\in D_\gamma$ be unitaries.
If $[\gamma, \gamma, w_1] = [\gamma, \gamma , w_2]$ in $KK_\nuc(X; A,B)$ then there is a unital, weakly $X^\dagger$-nuclear $\ast$-homomorphism $\Theta \colon A^\dagger \to \multialg{B\otimes \mathcal K}$ such that \begin{equation} (\gamma^\dagger \oplus \Theta, \gamma^\dagger \oplus \Theta , w_1 \oplus 1_{\multialg{B\otimes \mathcal K}}) \quad \textrm{and} \quad (\gamma^\dagger \oplus \Theta, \gamma^\dagger \oplus \Theta , w_2 \oplus 1_{\multialg{B\otimes \mathcal K}}) \end{equation} are operator homotopic. \end{lemma} \begin{proof} By Lemma \ref{l:unitiseunitaries}, $[\gamma^\dagger, \gamma^\dagger, w_1] = [\gamma^\dagger, \gamma^\dagger, w_2]$ in $KK_\nuc(X^\dagger; A^\dagger, B)$ (using the Fredholm picture). Hence there are degenerate, weakly $X^\dagger$-nuclear cycles \begin{equation} (\theta_0^{(i)} \colon A^\dagger \to \mathcal B(E_0^{(i)}), \theta_1^{(i)} \colon A^\dagger \to \mathcal B(E_1^{(i)}), v^{(i)}) \end{equation} for $i=1,2$ such that \begin{equation}\label{eq:Frohlem} (\gamma^\dagger, \gamma^\dagger, w_1) \oplus (\theta_0^{(1)}, \theta_1^{(1)}, v^{(1)}) \quad \textrm{and} \quad (\gamma^\dagger, \gamma^\dagger, w_2) \oplus (\theta_0^{(2)}, \theta_1^{(2)}, v^{(2)}) \end{equation} are unitarily equivalent to operator homotopic cycles. By cutting down with $\theta_j^{(i)}(1_{A^\dagger})$ everywhere, if necessary, we may assume without loss of generality that each $\theta_j^{(i)}$ is a unital map and that each $v^{(i)}$ is a unitary for $i=1,2$ and $j=0,1$.\footnote{This only works because $(\theta_0^{(i)}, \theta_1^{(i)}, v^{(i)})$ is degenerate for $i=1,2$.} Define $E_j := \bigoplus_{\mathbb N} (E_j^{(1)} \oplus E_j^{(2)})$, $\Theta_j := \bigoplus_{\mathbb N} (\theta_j^{(1)} \oplus \theta_j^{(2)}) \colon A^\dagger \to \mathcal B(E_j)$, and $v = \bigoplus_{\mathbb N} (v^{(1)} \oplus v^{(2)}) \in \mathcal B(E_0,E_1)$, the latter being a unitary.
As \begin{equation} (\theta_0^{(i)}, \theta_1^{(i)}, v^{(i)})\oplus (\Theta_0, \Theta_1 , v) \quad \textrm{and} \quad (\Theta_0, \Theta_1 , v) \end{equation} are unitarily equivalent (by unitaries that permute the indices of the direct sums), it follows that \begin{equation} (\gamma^\dagger, \gamma^\dagger, w_1) \oplus (\Theta_0, \Theta_1 , v) \quad \textrm{and} \quad (\gamma^\dagger, \gamma^\dagger, w_2) \oplus (\Theta_0, \Theta_1 , v) \end{equation} are operator homotopic. Since $v$ is a unitary and $(\Theta_0, \Theta_1 , v)$ is degenerate, it follows that $\Theta_0 = \Ad v \circ \Theta_1$, and thus $(\Theta_0, \Theta_1 , v)$ is unitarily equivalent to $(\Theta_0, \Theta_0 , 1_{\mathcal B(E_0)})$. Let $\pi \colon A^\dagger \to \mathcal B(\ell^2(\mathbb N)\otimes B)$ be the unique unital $\ast$-homomorphism such that $\pi|_A = 0$. By Lemma \ref{l:Xnucunitise}, $\pi$ is weakly $X^\dagger$-nuclear. Using that $E_0 \oplus (\ell^2(\mathbb N)\otimes B) \cong \ell^2(\mathbb N)\otimes B$ by Kasparov's stabilisation theorem, let $\Theta \colon A^\dagger \to \mathcal B(\ell^2(\mathbb N)\otimes B) = \multialg{B\otimes \mathcal K}$ be a map unitarily equivalent to $\Theta_0 \oplus \pi$. As $(\Theta, \Theta, 1_{\multialg{B\otimes \mathcal K}})$ is unitarily equivalent to $(\Theta_0, \Theta_1, v)\oplus (\pi, \pi, 1_{\multialg{B\otimes \mathcal K}})$, it follows that \begin{equation} (\gamma^\dagger \oplus \Theta, \gamma^\dagger \oplus \Theta, w_1\oplus 1_{\multialg{B\otimes \mathcal K}}) \quad \textrm{and} \quad (\gamma^\dagger \oplus \Theta, \gamma^\dagger \oplus \Theta, w_2\oplus 1_{\multialg{B\otimes \mathcal K}}) \end{equation} are operator homotopic. \end{proof} With the above ingredients, one can now prove an ideal-related version of the Dadarlat--Eilers stable uniqueness theorem from \cite{DadarlatEilers-asymptotic}. 
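For the reader's convenience, recall the Cuntz sum notation appearing in the statements below (this is the standard convention, recorded here only to fix notation): given $\mathcal O_2$-isometries $s_1, s_2 \in \multialg{B\otimes \mathcal K}$, i.e.~isometries satisfying $s_1 s_1^\ast + s_2 s_2^\ast = 1_{\multialg{B\otimes \mathcal K}}$, the Cuntz sum of two $\ast$-homomorphisms $\phi, \psi \colon A \to \multialg{B\otimes \mathcal K}$ is the $\ast$-homomorphism \begin{equation} (\phi \oplus_{s_1,s_2} \psi)(a) := s_1 \phi(a) s_1^\ast + s_2 \psi(a) s_2^\ast, \qquad a\in A. \end{equation} Up to unitary equivalence this is independent of the choice of $s_1$ and $s_2$: if $t_1,t_2$ is another such pair of $\mathcal O_2$-isometries, then $u := t_1 s_1^\ast + t_2 s_2^\ast$ is a unitary satisfying $u (\phi \oplus_{s_1,s_2} \psi)(a) u^\ast = (\phi \oplus_{t_1,t_2} \psi)(a)$ for all $a\in A$.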
\begin{theorem}\label{t:irDE} Let $X$ be a topological space, let $A$ be a separable $X$-$C^\ast$-algebra, and let $B$ be a $\sigma$-unital $X$-$C^\ast$-algebra. Let $\phi, \psi \colon A \to \multialg{B \otimes \mathcal K}$ be weakly $X$-nuclear $\ast$-homo\-morphisms such that $\phi(a) - \psi(a) \in B \otimes \mathcal K$ for all $a\in A$. If $[\phi,\psi] = 0$ in $KK_\nuc(X; A, B)$ (using the Cuntz pair picture, Proposition \ref{p:KKCP}) then there exist a weakly $X$-nuclear $\ast$-homomorphism $\Theta\colon A \to \multialg{B\otimes \mathcal K}$ and a continuous path $(v_t)_{t\in \mathbb R_+}$ of unitaries in $(B \otimes \mathcal K)^\sim$ such that \begin{equation} \| v_t (\phi \oplus_{s_1,s_2} \Theta)(a) v_t^\ast - (\psi \oplus_{s_1,s_2} \Theta)(a) \| \to 0, \end{equation} for all $a\in A$. Here $s_1,s_2\in \multialg{B \otimes \mathcal K}$ are $\mathcal O_2$-isometries. \end{theorem} \begin{proof} Let $\phi_\infty$ and $\psi_\infty$ be infinite repeats of $\phi$ and $\psi$, see Remark \ref{r:infrep}, and let $\sigma = \phi_\infty \oplus \psi_\infty$ be a Cuntz sum. Define $\phi_0 = \phi \oplus \sigma$ and $\psi_0 = \psi \oplus \sigma$ by Cuntz sums. Then $\psi_0 = \Ad u' \circ \phi_0$ for some unitary $u'$ which permutes the direct summands. Thus \begin{eqnarray} [\phi_0 , \phi_0, u'] &=& [\phi_0 , \psi_0 , 1_{\multialg{B\otimes \mathcal K}}] \nonumber\\ &=& [\phi, \psi, 1_{\multialg{B\otimes \mathcal K}}]\oplus [\sigma, \sigma ,1_{\multialg{B\otimes \mathcal K}}] \nonumber \\ &=& 0 \nonumber\\ &=& [\phi_0, \phi_0, 1_{\multialg{B\otimes \mathcal K}}] \end{eqnarray} in $KK_\nuc(X; A, B)$ using the Fredholm picture, Proposition \ref{p:KKFredholm}, as well as using Corollary \ref{c:CPvsFredholm} to conclude that $[\phi,\psi,1_{\multialg{B\otimes \mathcal K}}]=0$. As $(\phi_0, \phi_0 , u')$ is a cycle, it follows that $[u', \phi_0(A)] \subseteq B\otimes \mathcal K$ by \eqref{eq:Frintertwiner}, so $u'$ is a unitary in $D_{\phi_0}$. 
By Lemma \ref{l:DEoh}, there is a unital, weakly $X^\dagger$-nuclear $\ast$-homomorphism $\Theta_0 \colon A^\dagger \to \multialg{B \otimes \mathcal K}$ such that \begin{equation} (\phi_0^\dagger \oplus \Theta_0, \phi_0^\dagger \oplus \Theta_0, u' \oplus 1_{\multialg{B\otimes \mathcal K}}) \quad \textrm{and} \quad (\phi_0^\dagger \oplus \Theta_0, \phi_0^\dagger \oplus \Theta_0, 1_{\multialg{B\otimes \mathcal K}} \oplus 1_{\multialg{B\otimes \mathcal K}}) \end{equation} are operator homotopic. By replacing $\Theta_0$ with its infinite repeat, we may obtain the above with $\Theta_0$ being an infinite repeat. The direct sum $\oplus$ above is interpreted via a fixed Cuntz sum. Let $\Phi_0 := \phi_0^\dagger \oplus \Theta_0$ and $u= u' \oplus 1_{\multialg{B\otimes \mathcal K}}$. Let $(w_s)_{s\in [0,1]}$ be a norm-continuous path in $\multialg{B\otimes \mathcal K}$ which implements the above operator homotopy from $w_0 =1_{\multialg{B\otimes \mathcal K}}$ to $w_1 = u$. As $(\Phi_0, \Phi_0, w_s)$ is a cycle for every $s\in [0,1]$, and as $\Phi_0$ is unital, it follows from \eqref{eq:Frintertwiner} and \eqref{eq:Fredholmunitary} that the image in $\corona{B\otimes \mathcal K}$, $(\dot w_s)_{s\in [0,1]}$, is a continuous unitary path in $\corona{B\otimes \mathcal K} \cap \dot \Phi_0(A^\dagger)'$ which connects $1_{\corona{B\otimes \mathcal K}}$ and $\dot u$.\footnote{This is the subtle part where the unitality we fought for earlier in the section plays a crucial role. The arguments could have been run without any unitality assumptions, but then the unitary path would be in $\corona{B \otimes \mathcal K} \cap \dot \Phi_0(A)' / \Ann \dot \Phi_0(A)$.
With this relative commutant, the final part of the proof would not suffice to obtain a unitary path in $(B \otimes \mathcal K)^\sim$.} Using the exact sequence \eqref{eq:sesgamma}, we may lift $(\dot w_s)_{s\in [0,1]}$ to a continuous unitary path $(u_s)_{s\in[0,1]}$ in $D_{\Phi_0}$ for which $u_1 = u$ and $u_0 \in (B\otimes \mathcal K)^\sim$. Let $\Theta := (\sigma \oplus \Theta_0)|_A$, so that $\phi \oplus \Theta = \Phi_0|_A$. As $\psi \oplus \sigma = \psi_0 = \Ad u' \circ \phi \oplus \sigma$, it follows that $\Ad u \circ \psi \oplus \Theta = \phi \oplus \Theta$ (with respect to the same Cuntz sum $\oplus = \oplus_{s_1,s_2}$). Hence $(u_s)_{s\in [0,1)}$ is a continuous path of unitaries in $D_{\Phi_0}$ with $u_0 \in (B\otimes \mathcal K)^\sim$ such that \begin{equation} \lim_{s\to 1} \| u_s^\ast (\psi \oplus \Theta)(a) u_s - (\phi \oplus \Theta)(a)\| = 0. \end{equation} As $\phi$ and $\psi$ agree modulo $B \otimes \mathcal K$, and as $\Phi_0$ is the forced unitisation of $\phi \oplus \Theta$, it follows that $D_{\psi \oplus \Theta} = D_{\phi \oplus \Theta} = D_{\Phi_0}$. As $\psi\oplus \Theta$ is unitarily equivalent to the infinite repeat $(\phi \oplus \psi)_\infty \oplus \Theta_0|_A$, it follows that $((\psi \oplus \Theta)(A) + \mathbb C 1_{\multialg{B\otimes \mathcal K}}) \cap (B\otimes \mathcal K) = \{0\}$. Hence Lemma \ref{l:propertrick} implies that $\psi \oplus \Theta \approxeq \phi \oplus \Theta$. \end{proof} Using that infinite repeats of nuclear $X$-full $\ast$-homomorphisms are weakly $X$-nuclear and $X$-nuclearly absorbing by Theorem \ref{t:infrepXabs}, the following is a consequence of the above theorem. \begin{theorem}\label{t:irDE2} Let $X$ be a topological space, let $A$ be a separable, exact, lower semicontinuous $X$-$C^\ast$-algebra, and let $B$ be a $\sigma$-unital, stable $X$-$C^\ast$-algebra. Let $\phi,\psi,\theta \colon A \to B$ be nuclear, $X$-equivariant $\ast$-homomorphisms and suppose that $\theta$ is $X$-full.
If $KK_\nuc(X; \phi) = KK_\nuc(X; \psi)$ then there is a continuous unitary path $(v_t)_{t\in \mathbb R_+}$ in $\widetilde B$ such that \begin{equation} \lim_{t\to \infty} \| v_t ( \phi \oplus_{s_1, s_2} \theta_\infty)(a) v_t^\ast - (\psi \oplus_{s_1,s_2} \theta_\infty) (a) \| = 0 \end{equation} for all $a\in A$. Here $s_1,s_2 \in \multialg{B}$ are $\mathcal O_2$-isometries, and $\theta_\infty$ is an infinite repeat of $\theta$. \end{theorem} \begin{proof} By Theorem \ref{t:irDE} there is a weakly $X$-nuclear $\Theta \colon A \to \multialg{B}$ such that $\phi \oplus \Theta \approxeq \psi \oplus \Theta$. Hence $\phi \oplus \Theta \oplus \theta_\infty \approxeq \psi \oplus \Theta \oplus \theta_\infty$. By Theorem \ref{t:infrepXabs}, $\theta_\infty$ is weakly $X$-nuclear and $X$-nuclearly absorbing, and in particular it absorbs $\Theta$. As $\theta_\infty$ is an infinite repeat, and is thus unitarily equivalent to an infinite repeat of itself $(\theta_\infty)_\infty$, it follows from \cite[Lemmas 2.4 and 3.4]{DadarlatEilers-asymptotic} that $\phi \oplus \theta_\infty \approxeq \psi \oplus \theta_\infty$. \end{proof} \section{An ideal-related $\mathcal O_2$-embedding theorem} The iconic $\mathcal O_2$-embedding theorem of Kirchberg \cite{Kirchberg-ICM} (see also \cite{KirchbergPhillips-embedding}) asserts that any separable, exact $C^\ast$-algebra admits an embedding into $\mathcal O_2$. This played a major role in the proofs of Theorems \ref{t:existsimple}, \ref{t:uniquesimple}, \ref{t:KP}, and \ref{t:KPUCT}. Similarly, an ideal-related version of this theorem will be used to prove Theorems \ref{t:irexistence}, \ref{t:iruniqueness}, and \ref{t:nonsimpleclass}. Such an ideal-related $\mathcal O_2$-embedding theorem was also a main ingredient in Kirchberg's approach to Theorem \ref{t:nonsimpleclass}, see \cite[Hauptsatz 2.15]{Kirchberg-non-simple}, and a similar result has appeared in \cite[Theorem 6.1]{Gabe-O2class}.
The latter of these results assumed that the target $C^\ast$-algebra $B$ was separable, nuclear, and $\mathcal O_\infty$-stable, so that one could combine an application of Michael's selection theorem with a deep result of Kirchberg and Rørdam from \cite{KirchbergRordam-zero} to produce the desired map. While \cite[Theorem 6.1]{Gabe-O2class} would be strong enough to prove the main classification result, Theorem \ref{t:nonsimpleclass}, one would only be able to prove the existence result Theorem \ref{t:irexistence} under the additional assumptions that the target $B$ be separable and nuclear. To remove these assumptions, I present Theorem \ref{t:irO2}, a generalised version of the ideal-related $\mathcal O_2$-embedding theorem from \cite{Gabe-O2class}. The proof does not require the use of Michael's selection theorem or the results of Kirchberg and Rørdam. Instead it uses a recent construction from \cite{BGSW-nucdim}, which despite its technical proof is a fairly elementary application of Voiculescu's quasidiagonality theorem \cite{Voiculescu-qdhtpy}. With that at hand, the proof from \cite{Gabe-O2class} carries over almost verbatim. \begin{theorem}\label{t:irO2} Let $A$ be a separable, exact $C^\ast$-algebra, let $B$ be an $\mathcal O_\infty$-stable $C^\ast$-algebra, and let $\Phi \colon \mathcal I(A) \to \mathcal I(B)$ be a $\Cu$-morphism (see Definition \ref{d:lattice}). Then there exists a nuclear, strongly $\mathcal O_2$-stable $\ast$-homomorphism $\phi \colon A \to B$ such that $\mathcal I(\phi) = \Phi$.
\end{theorem} \begin{proof} Arguing exactly as in the beginning of the proof of \cite[Theorem 6.1]{Gabe-O2class}, we may assume that $B$ is stable and $\mathcal O_2$-stable.\footnote{Essentially, by embedding $B \otimes \mathcal O_2 \otimes \mathcal K \hookrightarrow B\otimes \mathcal O_\infty \cong B$ and making sure that these maps do the right thing on ideals.} By \cite[Lemma 3.5]{BGSW-nucdim}, we find a $\ast$-homomorphism $\rho \colon C_0(0,1] \otimes A \to B_\infty$ such that \begin{equation}\label{eq:IIJBinfty} \mathcal I(\rho)(I \otimes J) = \overline{B_\infty \Phi(J) B_\infty}, \qquad I \in \mathcal I(C_0(0,1]) \setminus \{0\}, \quad J \in \mathcal I(A), \end{equation} and such that the completely positive order zero map $\rho(\id_{(0,1]} \otimes -)$ can be represented as a sequence of completely positive contractive maps (which would be $(\eta_n \circ \psi_n(\id_{(0,1]} \otimes -))_{n\in \mathbb N}$ in that lemma), each of which is nuclear as it factors through a finite dimensional $C^\ast$-algebra. By exactness of $A$, $\rho(\id_{(0,1]} \otimes -)$ is nuclear by \cite[Proposition 3.3]{Dadarlat-qdmorphisms}, and so $\rho$ is nuclear by \cite[Theorem 2.9]{Gabe-qdexact}. By \cite[Lemma 6.7]{Gabe-O2class}, $\rho$ is $\mathcal O_2$-stable, since we assumed $B$ is $\mathcal O_2$-stable. Now the proof follows that of \cite[Theorem 6.1]{Gabe-O2class} almost word by word. Let $\Psi \colon C_0(0,1)\otimes A \to B_\infty$ be the restriction of $\rho$, which is nuclear, $\mathcal O_2$-stable, and for which $\mathcal I(\Psi)(I\otimes J) = \overline{B_\infty \Phi(J) B_\infty}$ for $I\neq 0$. Let $\alpha$ be an automorphism on $C_0(0,1)$ such that $C_0(0,1)\rtimes_\alpha \mathbb Z \cong C(\mathbb T)\otimes \mathcal K$, and let $\beta := \alpha \otimes \id_A$. Then $\Psi \circ \beta$ is also nuclear, $\mathcal O_2$-stable, and $\mathcal I(\Psi \circ \beta) = \mathcal I(\Psi)$ by \eqref{eq:IIJBinfty}. 
By \cite[Theorem 3.23]{Gabe-O2class}, $\Psi \circ \beta$ and $\Psi$ are approximately Murray--von Neumann equivalent. By a standard reindexing argument (cf.~\cite[Lemma 4.1]{Gabe-O2class}) there is a contraction $u\in B_\infty$ such that $u\Psi(-) u^\ast = \Psi \circ \beta$ and $u^\ast \Psi(\beta(-)) u = \Psi$. Hence there exists a $\ast$-homomorphism $\psi_0 \colon (C_0(0,1)\otimes A)\rtimes_\beta \mathbb Z \to B_\infty$ such that $\psi_0(x v^n) = \Psi(x) u^n$ for $x\in C_0(0,1)\otimes A$ and $n\in \mathbb N_0$ and $\psi_0(xv^{-n}) = \Psi(x) (u^\ast)^n$ if $n\in \mathbb N$.\footnote{Dilate $u$ to a unitary $w = \left( \begin{array}{cc} u & (1_{\widetilde B}-uu^\ast)^{1/2} \\ (1_{\widetilde B}-u^\ast u)^{1/2} & - u^\ast \end{array} \right) \in M_2(\widetilde B_\infty)$. This satisfies $w (\Psi \oplus 0) w^\ast = (\Psi \circ \beta)\oplus 0$ (one can use Lemma \ref{l:conjhom} to see this) so there is an induced $\ast$-homomorphism $(C_0(0,1)\otimes A)\rtimes_\beta \mathbb Z \to M_2(\widetilde B_\infty)$. Using Lemma \ref{l:conjhom} it is easy to check that it factors through the $(1,1)$-corner and induces a $\ast$-homomorphism $\psi_0$ as described.} Here $v$ denotes the canonical unitary in the crossed product. By \cite[Lemma 6.10]{Gabe-O2class},\footnote{This is proved in the unital case. One obtains the non-unital case by unitising everything, and using that the unitisation of a nuclear map is nuclear.} nuclearity of $\Psi$ entails nuclearity of $\psi_0$. As $(C_0(0,1) \otimes A)\rtimes_\beta \mathbb Z \cong (C_0(0,1)\rtimes_\alpha \mathbb Z) \otimes A$ canonically, we identify these. Since $C_0(0,1)\rtimes_\alpha \mathbb Z \cong C(\mathbb T)\otimes \mathcal K$ contains a full projection $p$, let $\psi := \psi_0(p\otimes -) \colon A \to B_\infty$.
Using \eqref{eq:IIJBinfty}, it is not hard to see\footnote{The proof of \cite[Theorem 6.1, bottom of page 39]{Gabe-O2class} contains the exact details for this proof, word by word.} that \begin{equation}\label{eq:IpsiJBinfty} \mathcal I(\psi)(J) = \overline{B_\infty \Phi(J) B_\infty}, \qquad J\in \mathcal I(A). \end{equation} Consider the unitisation $\psi^\dagger \colon A^\dagger \to \multialg{B}_\infty$ of $\psi$ which is nuclear by Lemma \ref{l:unitisednuc}. Let $\eta\colon \mathbb N\to \mathbb N$ be a map such that $\lim_{n\to \infty} \eta(n) = \infty$ and denote by $\eta^\ast \colon \multialg{B}_\infty \to \multialg{B}_\infty$ the induced $\ast$-homomorphism. As every ideal in the image of $\mathcal I(\psi)$ is generated by constant elements, \cite[Lemmas 6.6 and 6.7]{Gabe-O2class} imply that the maps $\psi^\dagger$ and $\eta^\ast \circ \psi^\dagger$ are nuclear, $\mathcal O_2$-stable, and agree when applying $\mathcal I$. Hence by \cite[Theorem 3.23]{Gabe-O2class}, $\psi^\dagger$ and $\eta^\ast \circ \psi^\dagger$ are approximately unitarily equivalent. By \cite[Theorem 4.3]{Gabe-O2class}, $\psi$ is approximately unitarily equivalent to a $\ast$-homomorphism $\phi \colon A \to B \subseteq B_\infty$. As $\mathcal I(\psi) = \mathcal I(\phi)$ (when $\phi$ is considered as a map into $B_\infty$), it follows from \eqref{eq:IpsiJBinfty} that $\mathcal I(\phi) = \Phi$ when $\phi$ is considered as a $\ast$-homomorphism into $B$. Moreover, $\phi$ is nuclear by nuclearity of $\psi$, and strongly $\mathcal O_2$-stable since it was assumed that $B$ is $\mathcal O_2$-stable. \end{proof} The following is an immediate consequence of the above theorem and Lemma \ref{l:Cuaction}$(c)$. \begin{corollary}\label{c:irO2X} Let $X$ be a topological space, let $A$ be a separable, exact, monotone continuous $X$-$C^\ast$-algebra, and let $B$ be an $\mathcal O_\infty$-stable, $X$-compact, upper semicontinuous $X$-$C^\ast$-algebra. 
Then there exists a nuclear, strongly $\mathcal O_2$-stable, $X$-full $\ast$-homomorphism $\phi \colon A \to B$. \end{corollary} \section{The main theorems} The end is nigh, and the classification of separable, nuclear, $\mathcal O_\infty$-stable $C^\ast$-algebras -- which are not necessarily simple -- is finally within reach. The proof will follow the same strategy as in Section \ref{s:KP} for the simple case. \subsection{Existence of $\theta$} In Theorems \ref{t:existsimple} and \ref{t:uniquesimple}, the target $C^\ast$-algebra $B$ was always assumed to contain a properly infinite, full projection. Such a projection ensures the existence of a full embedding $\mathcal O_2 \hookrightarrow B$, and combined with Kirchberg's $\mathcal O_2$-embedding theorem one could produce a $\ast$-homomorphism as the composition $A \hookrightarrow \mathcal O_2 \hookrightarrow B$. In Lemma \ref{l:fullO2map}, this idea was slightly modified to construct a full, nuclear $\ast$-homomorphism $\theta \colon A \to B \otimes \mathcal K$ such that $\mathcal O_2$ embeds unitally in $\multialg{B\otimes \mathcal K}\cap \theta(A)'$. The existence of such a map $\theta$ allows one to apply the results from Section \ref{s:unitary}. This was a key technical ingredient in the proofs of Theorems \ref{t:existsimple} and \ref{t:uniquesimple}. In this section some similar results are presented on how to produce $X$-full, nuclear $\ast$-homomorphisms $\theta \colon A \to B\otimes \mathcal K$ such that $\mathcal O_2$ embeds unitally in $\multialg{B\otimes \mathcal K}\cap \theta(A)'$. The main one of these constructions comes from the existence of an $\mathcal O_\infty$-stable $C^\ast$-subalgebra $D \subseteq B$ which induces the action of $X$ on $B$; see Lemma \ref{l:XO2map} for the precise statement. The existence of such a $C^\ast$-subalgebra $D\subseteq B$ is an ideal-related analogue of $B$ containing a properly infinite, full projection. 
First some ideal-related versions of standard results will be proved. Kasparov's stabilisation theorem \cite[Theorem 2]{Kasparov-Stinespring} implies that if $B$ is a $\sigma$-unital, stable $C^\ast$-algebra, and if $B_0 \subseteq B$ is a $\sigma$-unital, hereditary $C^\ast$-subalgebra, then $B_0$ is isomorphic to a corner in $B$. Part $(a)$ below is an ideal-related version of a technical extension of this result. Part $(b)$ is an ideal-related version of Brown's stable isomorphism theorem \cite{Brown-stableiso}. Recall that if $\phi \colon A \to B$ is a $\ast$-homomorphism, then $\mathcal I(\phi) \colon \mathcal I(A) \to \mathcal I(B)$ is the map $I \mapsto \overline{B\phi(I)B}$. Also, a $\ast$-homomorphism $\phi \colon D \to B$ is called \emph{extendible} if the hereditary $C^\ast$-subalgebra $\overline{\phi(D) B \phi(D)}$ is a corner $PBP$ for a (unique) projection $P\in \multialg{B}$. In this case, $\phi$ extends canonically to a $\ast$-homomorphism $\multialg{\phi} \colon \multialg{D} \to \multialg{B}$ satisfying $\multialg{\phi}(1_{\multialg{D}}) = P$. \begin{proposition}\label{p:Brown} Let $B$ be a $\sigma$-unital, stable $C^\ast$-algebra. \begin{itemize} \item[$(a)$] Suppose that $D$ is a $\sigma$-unital $C^\ast$-algebra and that $\eta \colon D \to B$ is a $\ast$-homo\-morphism. Then there exists an extendible $\ast$-homomorphism $\kappa \colon D \to B$ such that $\mathcal I(\kappa) = \mathcal I(\eta)$ and for which $1_{\multialg{B}} - \multialg{\kappa}(1_{\multialg{D}})$ is Murray--von Neumann equivalent to $1_{\multialg{B}}$ in $\multialg{B}$. Also, if $\eta$ is nuclear, then $\kappa$ may be chosen nuclear. \item[$(b)$] Suppose $B_0$ is a $\sigma$-unital, full, hereditary, stable $C^\ast$-subalgebra of $B$, and let $j \colon B_0 \hookrightarrow B$ denote the inclusion. Then there exists an isomorphism $\Omega \colon B_0 \xrightarrow \cong B$ such that $\mathcal I(\Omega) = \mathcal I(j)$. 
\end{itemize} \end{proposition} \begin{proof} The proofs of $(a)$ and $(b)$ only differ at the very end. Let $B_0 := \overline{\eta(D) B \eta(D)}$ (which we only assume is full and stable when considering part $(b)$ at the end of the proof). Consider the $C^\ast$-algebra $C= \left( \begin{array}{cc} B_0 & \overline{B_0 B} \\ \overline{BB_0} & B \end{array}\right)$ which is a subalgebra of $M_2(B)$. As $\overline{B_0 B} \oplus B \cong B$ as Hilbert $B$-modules by Kasparov's stabilisation theorem \cite[Theorem 2]{Kasparov-Stinespring}, it follows that $C \cong \mathcal K_B(\overline{B_0 B} \oplus B) \cong B$ is $\sigma$-unital and stable. Let $\eta_0 \colon D \to B_0$ be the corestriction of $\eta$, which is non-degenerate and nuclear if $\eta$ is nuclear, and let $j\colon B_0 \to B$ be the inclusion, so that $\eta = j \circ \eta_0$. Consider the obvious embedding $j_{1,1} \colon B_0 \to C$ which is extendible, as well as $j_{2,2} \colon B \to C$, $i \colon C \to M_2(B)$, and $\id_B \otimes e_{k,k} \colon B \to M_2(B)$ for $k=1,2$ which are all embeddings of full, hereditary $C^\ast$-subalgebras, and are thus invertible once applying $\mathcal I$ (cf.~\cite[Proposition 4.1.10]{Pedersen-book-automorphism}). Moreover, as $\mathcal I(\id_B \otimes e_{1,1}) = \mathcal I(\id_B \otimes e_{2,2})$, $(\id_B \otimes e_{1,1}) \circ j = i \circ j_{1,1}$ and $i \circ j_{2,2} = \id_{B} \otimes e_{2,2}$, it follows that \begin{eqnarray} \mathcal I(j) &=& \mathcal I(\id_B \otimes e_{1,1})^{-1} \circ \mathcal I(i \circ j_{1,1}) \nonumber\\ &=& \mathcal I(j_{2,2})^{-1} \circ \mathcal I(i)^{-1} \circ \mathcal I(i) \circ \mathcal I(j_{1,1}) \nonumber\\ &=& \mathcal I(j_{2,2})^{-1} \circ \mathcal I(j_{1,1}). \label{eq:j11j22} \end{eqnarray} Note that $j_{2,2} \colon B \hookrightarrow C$ is the embedding of $B$ as a full corner in $C$, and let $p_{2,2} = \multialg{j_{2,2}} (1_{\multialg{B}})$. 
As both $B$ and $C$ are $\sigma$-unital and stable, it follows from \cite[Theorem 4.23]{Brown-semicontmultipliers} that there is an isometry $v$ in $\multialg{C}$ with $vv^\ast = p_{2,2}$. Clearly $\Omega_{2,2} := \Ad v \circ j_{2,2}$ is an isomorphism $B \xrightarrow \cong C$ such that $\mathcal I(\Omega_{2,2}) = \mathcal I(j_{2,2})$, so $\kappa := \Omega_{2,2}^{-1} \circ j_{1,1} \circ \eta_0$ is extendible, nuclear if $\eta$ is nuclear, and satisfies \begin{equation} \mathcal I(\kappa) = \mathcal I(j_{2,2})^{-1} \circ \mathcal I(j_{1,1}) \circ \mathcal I(\eta_0) \stackrel{\eqref{eq:j11j22}}{=} \mathcal I(j \circ \eta_0) = \mathcal I(\eta). \end{equation} Moreover, as $\multialg{\Omega_{2,2}}(1_{\multialg{B}}- \multialg{\kappa}(1_{\multialg{D}})) = p_{2,2}$, which is equivalent to $1_{\multialg{C}}$ in $\multialg{C}$, it follows that $1_{\multialg{B}}-\multialg{\kappa}(1_{\multialg{D}})$ is equivalent to $1_{\multialg{B}}$ in $\multialg{B}$. This completes part $(a)$. For part $(b)$, $B_0$ is assumed to be stable and full in $B$, so arguing as for $\Omega_{2,2}$ above, one constructs an isomorphism $\Omega_{1,1} \colon B_0 \xrightarrow{\cong} C$ such that $\mathcal I(\Omega_{1,1}) = \mathcal I(j_{1,1})$. Hence $\Omega := \Omega_{2,2}^{-1} \circ \Omega_{1,1}$ satisfies the conditions of $(b)$ by \eqref{eq:j11j22}. \end{proof} \begin{lemma}\label{l:genO2emb} Let $D$ be a $\sigma$-unital, $\mathcal O_\infty$-stable $C^\ast$-algebra, let $B$ be a $\sigma$-unital, stable $C^\ast$-algebra, and let $\eta \colon D \to B$ be a $\ast$-homomorphism. Then there exists a $\ast$-homomorphism $\kappa \colon D \to B$ such that $\mathcal I(\kappa) = \mathcal I(\eta)$, and such that $\mathcal O_2$ embeds unitally into $\multialg{B} \cap \kappa(D)'$. Moreover, if $\eta$ is nuclear then $\kappa$ may be chosen nuclear. 
\end{lemma} \begin{proof} As $D$ is $\mathcal O_\infty$-stable, and as $\mathcal O_\infty$ is strongly self-absorbing (see \cite{TomsWinter-ssa}), we may assume that $D = D_1 \otimes \mathcal O_\infty$ and that there exists an isomorphism $\delta \colon D \xrightarrow \cong D_1$ such that $\id_D$ and $\delta \otimes 1_{\mathcal O_\infty}$ are approximately unitarily equivalent.\footnote{In fact, we may assume $D=D_2 \otimes \mathcal O_\infty \otimes \mathcal O_\infty$. As $\mathcal O_\infty$ is strongly self-absorbing, there is an isomorphism $\phi \colon \mathcal O_\infty \xrightarrow \cong \mathcal O_\infty \otimes \mathcal O_\infty$ which is approximately unitarily equivalent to $\id_{\mathcal O_\infty} \otimes 1_{\mathcal O_\infty}$. Letting $D_1 = D_2 \otimes \mathcal O_\infty$, and $\delta = \id_{D_2} \otimes \phi^{-1}$ does the trick.} Hence $\mathcal I(\delta \otimes 1_{\mathcal O_\infty}) = \id_{\mathcal I(D)}$. Fix a non-zero projection $p\in \mathcal O_\infty$ with $[p]_0 =0$ in $K_0(\mathcal O_\infty)$, and let $\mathcal O_\infty^\st := p \mathcal O_\infty p$. As $\mathcal O_\infty$ is simple, it follows that $\delta \otimes p \colon D \to D$ agrees with $\delta \otimes 1_{\mathcal O_\infty}$ when applying $\mathcal I$. Consider the $\ast$-homomorphism $\delta \otimes 1_{\mathcal O_\infty^\st} \colon D \to D_1 \otimes \mathcal O_\infty^\st$, the inclusion $\iota \colon D_1 \otimes \mathcal O_\infty^\st \hookrightarrow D_1 \otimes \mathcal O_\infty = D$, and note that $\delta \otimes p = \iota \circ (\delta \otimes 1_{\mathcal O_\infty^\st})$. Hence \begin{equation}\label{eq:Iiotadelta} \mathcal I(\iota) \circ \mathcal I(\delta \otimes 1_{\mathcal O_\infty^\st}) = \mathcal I(\delta \otimes p) = \mathcal I(\delta \otimes 1_{\mathcal O_\infty}) = \id_{\mathcal I(D)}. 
\end{equation} Apply Proposition \ref{p:Brown}$(a)$ to the $\ast$-homomorphism $\eta \circ \iota \colon D_1 \otimes \mathcal O_\infty^\st \to B$, and obtain an extendible $\ast$-homomorphism $\kappa_0 \colon D_1 \otimes \mathcal O_\infty^\st \to B$ such that $\mathcal I(\kappa_0) = \mathcal I(\eta) \circ \mathcal I(\iota)$ and $1_{\multialg{B}} - \multialg{\kappa_0}(1_{\multialg{D_1\otimes \mathcal O_\infty^\st}}) \sim 1_{\multialg{B}}$ in $\multialg{B}$. If $\eta$ is nuclear then so is $\eta \circ \iota$, so that $\kappa_0$ could be chosen nuclear. Define $\kappa := \kappa_0 \circ (\delta \otimes 1_{\mathcal O_\infty^\st}) \colon D \to B$ which is nuclear if $\kappa_0$ is nuclear. Then \begin{equation} \mathcal I(\kappa) = \mathcal I(\kappa_0) \circ \mathcal I(\delta \otimes 1_{\mathcal O_\infty^\st}) = \mathcal I(\eta) \circ \mathcal I(\iota) \circ \mathcal I(\delta \otimes 1_{\mathcal O_\infty^\st}) \stackrel{\eqref{eq:Iiotadelta}}{=} \mathcal I(\eta). \end{equation} Fix $\mathcal O_2$-isometries $s_1,s_2\in \mathcal O_\infty^\st$. As $B$ is stable and $1_{\multialg{B}} - \multialg{\kappa}(1_{\multialg{D}}) \sim 1_{\multialg{B}}$, we may pick $t_1,t_2 \in \multialg{B}$ such that \begin{equation} t_1^\ast t_1 = t_2^\ast t_2 = t_1t_1^\ast + t_2 t_2^\ast = 1_{\multialg{B}}-\multialg{\kappa}(1_{\multialg{D}}). \end{equation} Then $S_i := \multialg{\kappa_0}(1_{\multialg{D_1}} \otimes s_i) + t_i \in \multialg{B}$ for $i=1,2$ are $\mathcal O_2$-isometries that commute with the image of $\kappa$, so $\mathcal O_2$ embeds unitally in $\multialg{B} \cap \kappa(D)'$. \end{proof} \begin{lemma}\label{l:theta} Let $X$ be a topological space, let $A$ be a separable, exact, lower semicontinuous $X$-$C^\ast$-algebra, and let $B$ be a $\sigma$-unital $X$-$C^\ast$-algebra. Suppose that there exists an $X$-full, nuclear, $\mathcal O_\infty$-stable $\ast$-homomorphism $\phi \colon A \to B$. 
Then there exists an $X$-full, nuclear $\ast$-homomorphism $\theta \colon A \to B\otimes \mathcal K$ such that $\mathcal O_2$ embeds unitally in $\multialg{B\otimes \mathcal K} \cap \theta(A)'$. \end{lemma} \begin{proof} By the McDuff type property for $\mathcal O_\infty$-stable maps \cite[Corollary 4.5]{Gabe-O2class}, there exists a $\ast$-homomorphism $\psi \colon A \otimes \mathcal O_\infty \to B\otimes \mathcal K$ such that $\psi\circ(\id_A \otimes 1_{\mathcal O_\infty})$ and $\phi \otimes e_{1,1}$ are approximately Murray--von Neumann equivalent. As $\phi\otimes e_{1,1}$ is nuclear, so is $\psi\circ(\id_A \otimes 1_{\mathcal O_\infty})$, and thus by Lemma \ref{l:nucoutoftensor}, $\psi$ is nuclear. By Lemma \ref{l:genO2emb}, there is a nuclear $\ast$-homomorphism $\kappa \colon A \otimes \mathcal O_\infty \to B\otimes \mathcal K$ such that $\mathcal I(\kappa ) = \mathcal I(\psi)$ and for which $\mathcal O_2$ embeds unitally in $\multialg{B\otimes \mathcal K} \cap \kappa(A \otimes \mathcal O_\infty)'$. Letting $\theta = \kappa \circ(\id_A \otimes 1_{\mathcal O_\infty})$ one has \begin{equation} \mathcal I(\theta) = \mathcal I(\psi) \circ \mathcal I(\id_A \otimes 1_{\mathcal O_\infty}) = \mathcal I(\phi \otimes e_{1,1}) \end{equation} and therefore $\theta$ is $X$-full since $\phi\otimes e_{1,1}$ is clearly $X$-full. \end{proof} Consequently, using the ideal-related $\mathcal O_2$-embedding theorem from the previous section, one obtains the following $X$-equivariant version of Lemma \ref{l:fullO2map}. The condition of containing an $\mathcal O_\infty$-stable subalgebra as in the lemma should be considered as an ideal-related version of containing a properly infinite, full projection, as was assumed in Theorem \ref{t:existsimple}. 
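The claim in the proof of Lemma \ref{l:genO2emb} that $S_1, S_2$ are $\mathcal O_2$-isometries commuting with the image of $\kappa$ deserves a word of verification. The following routine computation is not part of the original argument; it uses only the notation of that proof, together with the abbreviation $P := \multialg{\kappa}(1_{\multialg{D}})$ introduced here. Since $t_i^\ast t_i = 1_{\multialg{B}} - P$ and $t_i t_i^\ast \leq 1_{\multialg{B}} - P$, each $t_i$ is a partial isometry satisfying $t_i = (1_{\multialg{B}}-P)\, t_i \,(1_{\multialg{B}}-P)$, whereas $\multialg{\kappa_0}(1_{\multialg{D_1}} \otimes s_i) = P \multialg{\kappa_0}(1_{\multialg{D_1}} \otimes s_i) P$. Hence all cross terms vanish, and
\begin{equation}
S_i^\ast S_i = \multialg{\kappa_0}(1_{\multialg{D_1}} \otimes s_i^\ast s_i) + t_i^\ast t_i = P + (1_{\multialg{B}} - P) = 1_{\multialg{B}},
\end{equation}
while similarly $S_1 S_1^\ast + S_2 S_2^\ast = P + (1_{\multialg{B}} - P) = 1_{\multialg{B}}$, so $S_1, S_2$ are $\mathcal O_2$-isometries. Moreover, $\kappa(d) = \kappa_0(\delta(d) \otimes 1_{\mathcal O_\infty^\st})$ for $d\in D$, so $t_i \kappa(d) = \kappa(d) t_i = 0$ and
\begin{equation}
S_i \kappa(d) = \kappa_0(\delta(d) \otimes s_i) = \kappa(d) S_i,
\end{equation}
using that $s_i 1_{\mathcal O_\infty^\st} = 1_{\mathcal O_\infty^\st} s_i = s_i$.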
\begin{lemma}\label{l:XO2map} Let $X$ be a topological space, let $A$ be a separable, exact, monotone continuous $X$-$C^\ast$-algebra, and let $B$ be a $\sigma$-unital $C^\ast$-algebra, containing an $\mathcal O_\infty$-stable $C^\ast$-subalgebra $D \subseteq B$. Suppose $D$ is equipped with an upper semicontinuous, $X$-compact action of $X$, and equip $B$ with the action of $X$ given by $B(U) := \overline{B D(U) B}$ for $U\in \mathcal O(X)$. Then there exists a nuclear, $X$-full $\ast$-homomorphism $\theta \colon A \to B \otimes \mathcal K$ such that $\mathcal O_2$ embeds unitally in $\multialg{B \otimes \mathcal K} \cap \theta(A)'$. \end{lemma} \begin{proof} By Corollary \ref{c:irO2X}, there is a nuclear $X$-full $\ast$-homomorphism $\phi \colon A \to D$. The composition $\phi \colon A \to D \subseteq B$ is a nuclear, $X$-full, $\mathcal O_\infty$-stable $\ast$-homo\-morphism, so $\theta$ as in the statement of the lemma exists by Lemma \ref{l:theta}. \end{proof} \subsection{Proving Theorems \ref{t:irexistence}, \ref{t:iruniqueness} and \ref{t:nonsimpleclass}} The following is an $X$-equivariant analogue of Lemma \ref{l:absCuntzpair}, and the proof is identical. \begin{lemma}\label{l:absCuntzpair2} Let $A$ be a separable $X$-$C^\ast$-algebra, $B$ be a $\sigma$-unital, stable $X$-$C^\ast$-algebra, and suppose that $\Theta \colon A \to \multialg{B}$ is a weakly $X$-nuclear, $X$-nuclearly absorbing $\ast$-homomorphism. Any element $x\in KK_\nuc(X; A ,B)$ is represented by a weakly $X$-nuclear Cuntz pair of the form $(\psi, \Theta)$. \end{lemma} \begin{proof} The proof of Lemma \ref{l:absCuntzpair} carries over verbatim, by replacing the word ``nuclear'' with the word ``$X$-nuclear''. \end{proof} The following is the main existence result. It is stated in a somewhat abstract way, but I emphasise that Lemma \ref{l:XO2map} can be used to produce a nuclear, $X$-full, $\mathcal O_\infty$-stable map $A \to B$ as in the statement of the proposition. 
\begin{proposition}\label{p:existence} Let $X$ be a topological space, let $A$ be a separable, exact, lower semicontinuous $X$-$C^\ast$-algebra, and let $B$ be a $\sigma$-unital $X$-$C^\ast$-algebra containing a $\sigma$-unital, stable, full, hereditary $C^\ast$-subalgebra. Suppose that there exists a nuclear, $X$-full, $\mathcal O_\infty$-stable $\ast$-homomorphism $A \to B$. For any $x\in KK_\nuc(X; A,B)$ there exists a nuclear, strongly $\mathcal O_\infty$-stable, $X$-full $\ast$-homomorphism $\phi \colon A \to B$ such that $KK_\nuc(X; \phi) = x$. Moreover, if $A$ and $B$ are unital then $\phi$ may be picked to also be unital if and only if the following conditions hold: \begin{itemize} \item[(1)] $B(U) = B$ for every $U\in \mathcal O(X)$ satisfying $A(U) = A$; and \item[(2)] $\Gamma_0(x)([1_A]_0) = [1_B]_0$ in $K_0(B)$. \end{itemize} \end{proposition} \begin{proof} Let $B_0\subseteq B$ be a $\sigma$-unital, stable, full, hereditary $C^\ast$-subalgebra of $B$. The inclusion $\iota \colon B_0 \hookrightarrow B$ induces an isomorphism $\mathcal I(\iota) \colon \mathcal I(B_0) \xrightarrow \cong \mathcal I(B)$, and thus induces an action of $X$ on $B_0$ which satisfies that $B_0(U)$ is full in $B(U)$ for all $U\in \mathcal O(X)$. By Proposition \ref{p:KKXfullher}, the inclusion $\iota$ induces an isomorphism $KK_\nuc(X; A , B_0) \cong KK_\nuc(X; A, B)$. Let $x_0\in KK_\nuc(X; A , B_0)$ be the element mapped to $x$ by this isomorphism. The strategy will be to find a nuclear, strongly $\mathcal O_\infty$-stable, $X$-full $\ast$-homomorphism $\phi_0 \colon A \to B_0$ such that $KK_\nuc(X; \phi_0) = x_0$. Then the composition $\phi := \iota \circ \phi_0 \colon A \to B$ is nuclear, strongly $\mathcal O_\infty$-stable (by Lemma \ref{l:relcombasic}$(c)$), $X$-full, and satisfies $KK_\nuc(X; \phi) = x$. 
The ideal-related version of Brown's stable isomorphism theorem, Proposition \ref{p:Brown}$(b)$, implies that $B_0= B_0 \otimes e_{1,1}$ and $B\otimes \mathcal K$ are isomorphic as $X$-$C^\ast$-algebras, so by Lemma \ref{l:theta} there exists a nuclear, $X$-full $\ast$-homomorphism $\theta \colon A \to B_0$ such that $\mathcal O_2$ embeds unitally into $\multialg{B_0}\cap \theta(A)'$. By Theorem \ref{t:infrepXabs}, any infinite repeat $\theta_\infty = \sum_{i=1}^\infty s_i \theta(-) s_i^\ast$ of $\theta$ is weakly $X$-nuclear, and $X$-nuclearly absorbing. Let $s_0\in \multialg{B_0}$ be an isometry satisfying $s_0s_0^\ast = 1_{\multialg{B_0}}-s_1 s_1^\ast$ so that $s_1,s_0$ are $\mathcal O_2$-isometries. Lemma \ref{l:absCuntzpair2} provides the existence of a weakly $X$-nuclear Cuntz pair of the form $(\psi, \theta_\infty)$ which induces $x_0$. By Lemma \ref{l:keyexistence}, there is a continuous unitary path $(v_t)_{t\in \mathbb R_+}$ in $\multialg{B_0}\cap \theta_\infty(A)'$ with $v_0=1_{\multialg{B_0}}$, and a $\ast$-homomorphism $\phi_1 \colon A \to B_0$ such that $v_t \psi(-) v_t^\ast$ converges point-norm to $\phi_0 := s_1 \phi_1(-)s_1^\ast + \sum_{i=2}^\infty s_i \theta(-) s_i^\ast$. Let $\theta_0 := \sum_{i=2}^\infty s_0^\ast s_i \theta(-) s_i^\ast s_0$ so that $\phi_0 = \phi_1 \oplus_{s_1,s_0} \theta_0$ and $\theta_\infty = \theta\oplus_{s_1,s_0} \theta_0$. There is a homotopy of weakly $X$-nuclear Cuntz pairs from $(\psi , \theta_\infty)$ to $(\phi_1, \theta) \oplus_{s_1,s_0} (\theta_0, \theta_0)$ given via $(\Ad v_t\circ \psi, \theta_\infty)$. As $(\theta_0, \theta_0)$ is zero homotopic, it follows that $(\phi_1, \theta)$ is a weakly $X$-nuclear Cuntz pair inducing $x_0$. As $\phi_1 = s_1^\ast \phi_0 (-) s_1$ is weakly $X$-nuclear and takes values in $B_0$, it follows that it is $X$-nuclear. Thus \begin{equation} x_0 = KK_\nuc(X; \phi_1) - KK_\nuc(X; \theta) = KK_\nuc(X; \phi_1). 
\end{equation} Here we used that $\mathcal O_2$ embeds unitally in $\multialg{B_0} \cap \theta(A)'$, which by the same argument as for ordinary $KK$-theory implies that $KK_\nuc(X; \theta) = 0$. Let $\phi_2 = \phi_1 \oplus_{s_1,s_0} \theta$. As both $\phi_1$ and $\theta$ are $X$-nuclear, so is $\phi_2$. As $\theta$ is nuclear and $X$-full, and $\phi_1$ is nuclear and $X$-equivariant, it follows from Corollary \ref{c:Xfulldom} that $\theta$ approximately dominates $\phi_2$. As $\theta$ is strongly $\mathcal O_\infty$-stable, Proposition \ref{p:piabsorbing}$(a)$ implies that $\phi_2$ is strongly $\mathcal O_\infty$-stable. As $\theta$ is nuclear and $X$-full, and as $\phi_1$ is nuclear and $X$-equivariant, it follows that $\phi_2$ is nuclear and $X$-full. Hence \begin{equation} KK_\nuc(X; \phi_2) = KK_\nuc(X; \phi_1) + KK_\nuc(X; \theta) = KK_\nuc(X; \phi_1 ) = x_0. \end{equation} Letting $\phi := \iota \circ \phi_2 \colon A \to B$, we have obtained a nuclear, strongly $\mathcal O_\infty$-stable, $X$-full $\ast$-homomorphism for which $KK_\nuc(X; \phi) = x$ as desired. This finishes the proof in the not necessarily unital case. ``Moreover'': Now suppose that $A$ and $B$ are unital. For ``only if'', suppose $\phi \colon A \to B$ is an $X$-full, unital $\ast$-homomorphism. If $U\in \mathcal O(X)$ is such that $A(U) = A$, then \begin{equation} 1_B = \phi(1_A) \in \phi(A(U)) \subseteq B(U) \end{equation} which implies that $B(U) = B$, so $(1)$ holds. That $(2)$ holds follows from Observation \ref{o:KK(X)hom}. For ``if'', suppose that $(1)$ and $(2)$ hold. Use the not necessarily unital part of the proposition to lift $x$ to a nuclear, strongly $\mathcal O_\infty$-stable, $X$-full $\ast$-homomorphism $\phi_0 \colon A \to B$. Let $\Phi_B$ denote the action of $B$, and let $\Psi_A$ denote the dual action of $A$. As $\phi_0$ is $X$-full, it follows that $\mathcal I(\phi_0) = \Phi_B \circ \Psi_A$. 
By Lemma \ref{l:dualaction}, it follows that $U = \Psi_A(A) \in \mathcal O(X)$ satisfies $A(U) = A$. So $(1)$ implies that $\Phi_B \circ \Psi_A(A) = B$. Hence $\overline{B \phi_0(A) B} = B$, and thus $\phi_0(1_A)$ is a full projection in $B$. As $\phi_0$ is strongly $\mathcal O_\infty$-stable, it follows that $\phi_0(1_A)$ is properly infinite by Remark \ref{r:fullpropinfproj}, and hence $1_B$ is also a full, properly infinite projection. By $(2)$ it follows that $[1_B]_0 = [\phi_0(1_A)]_0$ in $K_0(B)$, so by \cite{Cuntz-K-theoryI} there is an isometry $v\in B$ for which $vv^\ast = \phi_0(1_A)$. Now $\phi := v^\ast \phi_0(-) v \colon A \to B$ is a nuclear, $X$-full $\ast$-homomorphism with the same $KK_\nuc(X)$-class as $\phi_0$. Finally $\phi$ is strongly $\mathcal O_\infty$-stable by Lemma \ref{l:relcombasic}$(c)$. \end{proof} Theorem \ref{t:irexistence} is an immediate corollary. \begin{proof}[Proof of Theorem \ref{t:irexistence}] Combine Proposition \ref{p:existence} with Lemma \ref{l:XO2map} in the special case $D=B$. \end{proof} \begin{proof}[Proof of Theorem \ref{t:iruniqueness}] $(ii) \Rightarrow (i)$ is Corollary \ref{c:KKXasMvN} (using Lemma \ref{l:nucquotient} and Remark \ref{r:Bempty} to see that being $X$-nuclear is the same as being $X$-equivariant and nuclear). The equivalence $(ii) \Leftrightarrow (iii)$ is Proposition \ref{p:MvNvsue}. Only $(i) \Rightarrow (ii)$ remains to be proved, so assume that $KK_\nuc(X; \phi) = KK_\nuc(X; \psi)$. By Proposition \ref{p:MvNeq}, it suffices to prove that the maps $\phi\otimes e_{1,1}, \psi \otimes e_{1,1} \colon A \to B\otimes \mathcal K$ are asymptotically Murray--von Neumann equivalent. Let $\theta\colon A \to B\otimes \mathcal K$ be as in Lemma \ref{l:theta}. Since $\phi \otimes e_{1,1},\psi \otimes e_{1,1}$ and $\theta$ are nuclear and $X$-full, they approximately dominate each other by Corollary \ref{c:Xfulldom}. 
Thus, combining the stable uniqueness theorem, Theorem \ref{t:irDE2}, with the key lemma for uniqueness, Lemma \ref{l:keyuniqueness}, it follows that $\phi \otimes e_{1,1} \sim_\asMvN \psi \otimes e_{1,1}$. \end{proof} The following is a consequence of the uniqueness result above which does not initially use actions of topological spaces. Recall that for any $C^\ast$-algebra $A$, there is a canonical action $\I_A \colon \mathcal O(\Prim A) \to \mathcal I(A)$ which is an order isomorphism. \begin{corollary} Let $A$ be a separable, exact $C^\ast$-algebra, let $B$ be a $\sigma$-unital $C^\ast$-algebra, and let $\phi, \psi \colon A \to B$ be nuclear, strongly $\mathcal O_\infty$-stable $\ast$-homomorphisms. Then $\phi$ and $\psi$ are asymptotically Murray--von Neumann equivalent if and only if $\mathcal I(\phi) = \mathcal I(\psi)$ and, with $X:= \Prim A$ and the induced action $\Phi_B := \mathcal I(\phi) \circ \I_A \colon \mathcal O(X) \to \mathcal I(B)$, one has \begin{equation} KK_\nuc(X; \phi) = KK_\nuc(X; \psi) \textrm{ in } KK_\nuc(X; (A, \I_A), (B, \Phi_B)). \end{equation} \end{corollary} The main classification theorem, Theorem \ref{t:nonsimpleclass}, is an easy consequence of Theorems \ref{t:irexistence} and \ref{t:iruniqueness}, using the asymptotic intertwining (Proposition \ref{p:asint}) to see that one may lift the ideal-related $KK$-equivalence to an isomorphism of the $C^\ast$-algebras. The details are presented below. \begin{proof}[Proof of Theorem \ref{t:nonsimpleclass}] $(a)$: By Theorem \ref{t:irexistence} it is possible to find $X$-full $\ast$-homo\-morphisms $\phi_0 \colon A \to B$ and $\psi_0 \colon B \to A$ such that $KK(X; \phi_0) = x$ and $KK(X; \psi_0) = x^{-1}$. As $\phi_0$ and $\psi_0$ are $X$-full, so is $\psi_0 \circ \phi_0$, and as $A$ is tight, it follows that $\psi_0(\phi_0(A))$ is full in $A$. 
Hence both $\psi_0 \circ \phi_0$ and $\id_A$ are $X$-full, nuclear, strongly $\mathcal O_\infty$-stable $\ast$-homomorphisms with full images for which \begin{equation} KK(X; \psi_0 \circ \phi_0) = x^{-1} \circ x = KK(X; \id_A). \end{equation} By Theorem \ref{t:iruniqueness} $(i) \Rightarrow (iii)$ it follows that $\psi_0 \circ \phi_0 \sim_\asu \id_A$. Similarly, one obtains $\phi_0 \circ \psi_0 \sim_\asu \id_B$. By Proposition \ref{p:asint}, there exists an isomorphism $\phi \colon A \xrightarrow \cong B$ and a homotopy $(\phi_s)_{s\in [0,1]}$ from $\phi_0$ to $\phi$, such that $\phi_s \sim_\aMvN \phi_t$ for all $s,t\in [0,1]$. As approximate Murray--von Neumann equivalence preserves $X$-equivariance of $\ast$-homomorphisms, it follows that each $\phi_t$ is $X$-equivariant. By Lemma \ref{l:XnucC(Y)}, it follows that $\phi_0$ and $\phi$ are homotopic in the $X$-equivariant sense, and thus $KK(X; \phi_0) = KK(X; \phi)$. As $A$ and $B$ are both tight, any $X$-equivariant $\ast$-isomorphism is automatically an isomorphism of $X$-$C^\ast$-algebras, thus completing the proof of part $(a)$. $(b)$: This is proved exactly as above but using the unital versions of Theorems \ref{t:irexistence} and \ref{t:iruniqueness}. \end{proof} \subsection{Approximate equivalence} As when defining $KL_\nuc$ (see Section \ref{ss:KLnuc}) one can do the same for $KK_\nuc(X)$. I will avoid addressing whether the approach described below defines a topology as done in \cite{Dadarlat-KKtop}, and take a shortcut by simply defining $\overline{\{0\}}$. If $A$ is a separable $X$-$C^\ast$-algebra, and $B$ is a $\sigma$-unital $X$-$C^\ast$-algebra, let $\overline{\{0\}} \subseteq KK_\nuc(X; A,B)$ be the set of elements $x\in KK_\nuc(X; A, B)$, for which there exists $y\in KK_\nuc(X; A, C(\widetilde{\mathbb N}, B))$ such that $(\ev_n)_\ast(y) = 0$ for $n\in \mathbb N$, and $(\ev_\infty)_\ast(y) = x$. 
Clearly $\overline{\{0\}}$ is a subgroup of $KK_\nuc(X; A, B)$, so one may define \begin{equation} KL_\nuc(X; A, B) := KK_\nuc(X; A, B)/\overline{\{0\}}. \end{equation} If $\phi \colon A \to B$ is an $X$-nuclear $\ast$-homomorphism, the induced element in $KL_\nuc(X)$ is denoted by $KL_\nuc(X; \phi)$. It is easy to see that $KL_\nuc(X; \phi) = KL_\nuc(X; \psi)$ if and only if there exists $y\in KK_\nuc(X; A, C(\widetilde{\mathbb N}, B))$ such that $(\ev_n)_\ast(y) = KK_\nuc(X; \phi)$ for all $n\in \mathbb N$, and $(\ev_\infty)_\ast(y) = KK_\nuc(X; \psi)$. \begin{proposition} Let $A$ be a separable $X$-$C^\ast$-algebra, let $B$ be a $\sigma$-unital $X$-$C^\ast$-algebra and suppose that $\phi, \psi \colon A \to B$ are $X$-nuclear $\ast$-homo\-morphisms. If $\phi \sim_\aMvN \psi$ then $KL_\nuc(X; \phi) = KL_\nuc(X; \psi)$. \end{proposition} \begin{proof} The proof is essentially identical to that of the non-$X$-equivariant version, Proposition \ref{p:aMvNKL}, using Corollary \ref{c:KKXstable} to get $KK_\nuc(X; A, B) \cong KK_\nuc(X; A, B\otimes \mathcal K)$. The rest of the proof is omitted. \end{proof} The invariant $KL_\nuc(X)$ can be used to characterise approximate equivalence. As in Theorem \ref{t:approxuniquesimple}, this is done for $\mathcal O_\infty$-stable maps instead of strongly $\mathcal O_\infty$-stable maps. \begin{theorem}[Approximate uniqueness]\label{t:approxuniqueness} Let $X$ be a topological space, let $A$ be a separable, exact, lower semicontinuous $X$-$C^\ast$-algebra, and let $B$ be a $\sigma$-unital $X$-$C^\ast$-algebra. Suppose that $\phi, \psi \colon A \to B$ are $X$-full, nuclear, $\mathcal O_\infty$-stable $\ast$-homo\-morphisms. The following are equivalent: \begin{itemize} \item[$(i)$] $KL_\nuc(X; \phi) = KL_\nuc(X; \psi)$; \item[$(ii)$] $\phi$ and $\psi$ are approximately Murray--von Neumann equivalent. 
\end{itemize} Additionally, if either $B$ is stable, or if $A,B,\phi$ and $\psi$ are all unital, then $(i)$ and $(ii)$ are equivalent to \begin{itemize} \item[$(iii)$] $\phi$ and $\psi$ are approximately unitarily equivalent (with multiplier unitaries in the stable case). \end{itemize} \end{theorem} \begin{proof} The proof is identical to that of Theorem \ref{t:approxuniquesimple}, but using Proposition \ref{p:existence} and Theorem \ref{t:iruniqueness} instead of Theorems \ref{t:existsimple} and \ref{t:uniquesimple}. The details are omitted. \end{proof} In the non-$X$-equivariant case, $KL_\nuc$ could be computed using the $K$-theoretic invariant $\underline K$ (assuming the UCT). Unfortunately there is no known $K$-theoretic invariant that computes $KL_\nuc(X)$ under UCT assumptions. This makes Theorem \ref{t:approxuniqueness} harder to apply than the non-$X$-equivariant analogue.
\section{Conclusions} The geometric and electronic properties of hydrogenated graphene are investigated by DFT calculations. The chemical and physical properties are enriched by the configuration and concentration of adatoms. The strong H-C bonds and the drastic changes in C-C bonds dominate the optimized geometric structures, charge distributions, energy bands, and DOS. The C-C and H-C bond lengths and the H-C-C angle are sensitive to the variation of H-concentration. The significant orbital hybridizations, strong $\sigma$ bonds, and weak $\pi$ bondings diversify the electronic properties. There exist middle-gap, gapless, and narrow-gap systems, corresponding, respectively, to zigzag, armchair, and chiral configurations. The main features of the DOS are evidenced by the delta-function-like peaks, the discontinuous shoulders, and the logarithmically divergent peaks. The hydrogen-enriched band structures exhibit the absence or recovery of low-lying $\pi$ bands, weakly dispersive bands dominated by the second-nearest C atoms, one pair of low-energy parabolic bands due to non-passivated C atoms, and (C,H)-related partially flat bands. The band gap of the 100\,\% zigzag system is the largest one, being determined by the $\sigma$ bands, and it is largely reduced at lower concentrations, since the fully terminated $\pi$ bonding is replaced by a partially suppressed one. As for armchair and chiral systems, a distorted Dirac cone structure is revealed at sufficiently low concentration. The strong charge bondings of the 2$p_z$ and 1s orbitals contribute to the partially flat bands at middle energy for all systems, leading to similar peaks in the orbital-projected DOS. The decrease of H-concentration can induce special structures in the DOS, including the recovery of low-lying peaks, newly emerging strong peaks, and a delta-function-like peak centered at $E_{F}$. 
Hydrogenated graphene exhibits feature-rich properties: optimized geometric structures, band gaps and energy dispersions, and special structures in the DOS. These could be examined, respectively, by STM, ARPES, and STS measurements, providing useful information about the hydrogen distribution and concentration. This system is suitable for hydrogen storage applications. The tunable electronic properties are expected to be potentially important in nanoelectronic and nanophotonic devices. \par\noindent {\bf Acknowledgments} This work was supported by the Physics Division, National Center for Theoretical Sciences (South), and the National Science Council of Taiwan (Grant No. NSC 102-2112-M-006-007-MY3). We also thank the National Center for High-performance Computing (NCHC) for computer facilities. \newpage \renewcommand{\baselinestretch}{0.2}
\section{Introduction} \label{intro} General relativity (GR) remains among the most successful physics theories at the moment and still attracts the interest of many physicists today. For example, in 2015, LIGO's detection of gravitational waves from a binary black hole merger that occurred billions of years ago \cite{grav_waves_LIGO} confirmed the consistency of the theory. Despite its enormous success, GR has some limitations. The most notable one is the inconsistency with the fact that the universe undergoes an accelerated expansion phase \cite{riess1998observational,perlmutter1999measurements}. The cosmological constant is the simplest modification that solves this inconsistency, but the theoretical prediction is dramatically off from the observation, i.e. by a factor of $10^{120}$. Another explanation for the late acceleration was given by Freese and Lewis in 2002 \cite{freese2002cardassian}, where they considered the following modified Friedmann equation \begin{equation}\label{cardassian_ansatz_friedmann} H^2 = A \rho + B \rho^n, \end{equation} where $A \equiv \kappa_4^2/3$ and $B$ is some constant. In the case where $n<2/3$ the $\rho^n$ term is called the Cardassian term \cite{freese2002cardassian} and can be shown to provide an accelerated expansion phase in the matter dominated era. The Cardassian Friedmann equation \eqref{cardassian_ansatz_friedmann} can be generalized into $H^2 = g(\rho)$, where $g(\rho)$ is some arbitrary function that is approximately $\rho$ beyond some threshold high energy scale while still producing the accelerated expansion in the matter dominated era. One of the most common models is called the polytropic Cardassian \cite{wang2003future} \begin{equation}\label{cardassian_ansatz_friedmann_poly} H^2 = A \rho \left[1+ \left({\frac{\rho_{\text{car}}}{\rho}}\right)^{m(1-n)}\right]^{1/m}, \end{equation} where $\rho_{\text{car}}$ satisfies $A\rho_\text{car} = B \rho_\text{car}^n$.
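As a quick numerical illustration (the parameter values below are arbitrary, chosen only for this sketch, and $\rho_{\text{car}}$ is fixed by the defining relation above), the polytropic ansatz can be evaluated and compared with the original Cardassian form:

```python
import numpy as np

def H2_cardassian(rho, A, B, n):
    # Original Cardassian ansatz: H^2 = A*rho + B*rho^n
    return A * rho + B * rho**n

def H2_polytropic(rho, A, B, n, m):
    # Polytropic Cardassian: H^2 = A*rho*[1 + (rho_car/rho)^(m(1-n))]^(1/m),
    # with rho_car fixed by A*rho_car = B*rho_car^n
    rho_car = (A / B) ** (1.0 / (n - 1.0))
    return A * rho * (1.0 + (rho_car / rho) ** (m * (1.0 - n))) ** (1.0 / m)

A, B, n = 1.0, 0.5, 0.3          # illustrative values only
rho = np.logspace(-3, 3, 50)

# For m = 1 the polytropic form collapses to the original ansatz,
# since A*rho_car**(1-n) = B:
assert np.allclose(H2_polytropic(rho, A, B, n, m=1.0),
                   H2_cardassian(rho, A, B, n))
```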
It can be seen that when $m=1$, equation \eqref{cardassian_ansatz_friedmann_poly} reduces to \eqref{cardassian_ansatz_friedmann}. Chung et al. noted in Ref. \cite{chung1999cosmological} that the Cardassian term can originate from some specific bulk energy momentum tensor $T_{AB}$ in a braneworld scenario. The braneworld scenario is a way of modifying GR by considering a higher (larger than four) dimensional spacetime. This approach is motivated by candidates for a unification theory such as M-theory, which predicts that our spacetime is an eleven dimensional manifold. In particular, the Horava-Witten solution suggests that the spacetime is $\mathbb{R}^{10} \times S^1/\mathbb{Z}_2$, where six of the extra dimensions are compactified on a very small scale, leading to an effectively five dimensional theory \cite{maartens_2010}, $\mathbb{R}^{4} \times S^1 / \mathbb{Z}_2$. This assumption leads to the braneworld model, where our 3+1 dimensional spacetime is a hypersurface, called the brane, in a five dimensional bulk \cite{randall_1,randall_2}.
In particular, the first and second modified Friedmann equations in the braneworld model with Einstein-Hilbert action read \cite{binetruy_2} \begin{align} \left(\frac{\dot{a}}{a}\right)^2 &= \frac{\kappa_4^2}{3} \rho - \frac{k}{a^2} + \frac{\Lambda_4}{3} + \frac{\kappa_5^4}{36} \rho^2 + \frac{\chi}{a^4}, \label{friedmann_1_braneworld_EH} \\ \frac{\ddot{a}}{a} &= - \frac{\kappa_4^2}{6} (3 p + \rho) + \frac{\Lambda_4}{3} - \rho^2 \frac{\kappa_5^4}{18} - \rho p \frac{\kappa_5^4}{12} - \frac{\chi}{a^4}, \label{friedmann_2_braneworld_EH} \end{align} where $\chi$ is a constant, $\sigma$ is the brane tension, $\Lambda_4 \equiv \tfrac{1}{2}\kappa_5^2(\Lambda_5 + \tfrac{1}{6} \kappa_5^2 \sigma^2)$ is the effective four dimensional cosmological constant resulting from the bulk cosmological constant $\Lambda_5$, and $\kappa_4^2 \equiv \tfrac{\sigma}{6}\kappa_5^4$ is the four dimensional Einstein kappa constant, with $\kappa_5$ the five dimensional one. The last two terms of \eqref{friedmann_1_braneworld_EH} are the correction terms for the Friedmann equation. As can be seen, the braneworld model gives rise to both a high energy quadratic matter term and low energy corrections from the dark radiation term $\chi a^{-4}$ and the effective cosmological constant $\Lambda_4$. It can also be inferred that the brane tension $\sigma$ contributes to the effective cosmological constant $\Lambda_4$ on the brane, so that a cosmological constant in the bulk is no longer mandatory to provide the acceleration phase. It can also be seen that a nonzero brane tension is necessary for the modified Friedmann equation to be consistent with its low energy limit, which is the conventional four dimensional Friedmann equation. Another popular way to modify GR is to consider a scalar-tensor theory where the metric tensor and its derivatives are coupled to a scalar field $\phi$.
In this context a healthy theory is one whose Lagrangian does not contain nondegenerate higher (than second) derivative terms, which would suffer from the Ostrogradsky instability. The most general scalar-tensor theory in four dimensional spacetime that produces second order field equations is the Horndeski theory. The Horndeski theory is built up from four Lagrangians ($\mathcal{L}_i$, $i=2,3,4,5$). By appropriately choosing the parameters in each Lagrangian, well known scalar-tensor Lagrangians can be recovered: quintessence \cite{caldwell1998cosmological}, k-essence \cite{armendariz2000dynamical}, nonminimal derivative coupling (NMDC) \cite{amendola1993cosmology,sushkov_2009,suroso2013,suroso2012,Hikmawan:2016tif}, etc. are all contained in Horndeski theory. The theory was discovered by Horndeski in 1974 \cite{horndeski1974second} but gained wide attention only recently, in light of its equivalence to the covariantization of the Galileon \cite{deffayet2009covariant,deffayet2009generalized,deffayet2011k}. The Galileon is a scalar field invariant under the "Galilean" transformation $\phi \rightarrow \phi + a + b_{\mu} x^\mu$, where $a, b_{\mu}$ are constants. The Galileon is interesting because the DGP braneworld model \cite{deffayet2002accelerated}, which provides a self-accelerating cosmology, can in a certain decoupling limit be described by a scalar-tensor Lagrangian with second order field equations \cite{gabadadze2006coupling} whose scalar field possesses the aforementioned "Galilean" symmetry \cite{nicolis2009galileon}. As we have seen, the braneworld model produces a nonlinear matter term. Naturally, we can ask whether an action different from Einstein-Hilbert in the braneworld model can produce other nonlinear matter terms, or even the sought-after Cardassian term.
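The quoted "Galilean" symmetry can be verified directly: the shift changes $\partial_\mu \phi$ only by the constant $b_\mu$, so all second (and higher) derivatives of the field are unchanged. A minimal symbolic sketch (the symbol names are ours, not from the cited papers):

```python
import sympy as sp

# Coordinates and the "Galilean" shift phi -> phi + a + b_mu x^mu
t, x1, x2, x3 = sp.symbols('t x1 x2 x3')
coords = (t, x1, x2, x3)
a = sp.Symbol('a')
b = sp.symbols('b0:4')

phi = sp.Function('phi')(*coords)
phi_shifted = phi + a + sum(bi * ci for bi, ci in zip(b, coords))

for i, ci in enumerate(coords):
    # First derivatives shift by the constant b_i ...
    assert sp.simplify(sp.diff(phi_shifted, ci) - sp.diff(phi, ci) - b[i]) == 0
    # ... so all second derivatives are invariant:
    for cj in coords:
        assert sp.simplify(sp.diff(phi_shifted, ci, cj) - sp.diff(phi, ci, cj)) == 0
```

Any Lagrangian built (up to total derivatives) from $\partial_\mu\partial_\nu\phi$ therefore inherits the symmetry.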
In particular, in this article, a cosmological model in a five dimensional braneworld whose action originates from a general scalar-tensor theory comprising various Horndeski Lagrangians is investigated. There are several works along this line of research. The Friedmann equations resulting from the braneworld model with the Einstein-Hilbert Lagrangian were obtained by several authors in 1999 (to name a few, see Refs. \cite{sms,binetruy_1,ida2000brane}). Dynamical analysis and the effect of the dark radiation term have been investigated in \cite{Pardede:2018nsg}. Cosmology in a braneworld model with Lorentz invariance violation has been reported in \cite{zen2010modified}. Kaluza-Klein brane cosmology has been considered in the case of one brane \cite{zen_kaluzakleinbrane} and two branes \cite{feranie_4plusn}. Braneworld bimetric cosmology with minimal coupling has been considered in Ref. \cite{bimetric_brane_youm}, while the nonminimally coupled braneworld has been considered in Ref. \cite{nonminimal_scalar_bogdanos}. A nonminimal derivative coupling (NMDC) five dimensional braneworld model has been investigated by several authors, e.g. in the case of a time dependent scalar field \cite{suroso2013,widiyani2015randall} or where the scalar field is a function of the extra-dimension coordinate \cite{minamitsuji_2014}. In the next section, the field equations for cosmology in the general scalar-tensor braneworld theory will be derived. After that, we derive and analyze the consequences of the modified Friedmann equations for the cases where the fifth Horndeski Lagrangian $\mathcal{L}_5$ is either strongly or weakly coupled to the rest of the Lagrangian. For the strongly coupled case we have identified Cardassian terms $\rho^n$ for $n=\pm1/2$. In the weakly coupled case we found a higher order cubic matter term and a dark radiation-matter interaction term. Finally, as in Ref.
\cite{Pardede:2018nsg} for the braneworld model, we will investigate how those correction terms affect the evolution of the universe through the Hubble diagram. Before we continue, a quick remark about observational constraints on this theory is in order. The gravitational wave observation GW170817 \cite{GW170817} has confirmed that gravitational waves travel at the speed of light, with deviation smaller than $10^{-15}$. In the language of the effective field theory of dark energy, this measurement implies precise relations among the operators \cite{creminelli_dark}. In particular, it can be shown from these relations that $\mathcal{L}_5$ is excluded. However, it should be noted that the energy scale of this event lies very close to the typical cutoff of many dark energy models. The validity of this constraint is therefore still a subject of debate, because the UV completion can modify the speed of gravitational waves \cite{rainbow}. Moreover, if we consider the possibility that the speed of gravitational waves can vary in time, it can be dynamically set to unity at present without introducing any fine tuning between the Horndeski operators \cite{copeland2019dark}. Considering this "loophole", it can be argued that even a nontrivial contribution from $\mathcal{L}_5$ can be compatible with the data \cite{copeland2019dark}. In this regard, we will assume that $\mathcal{L}_5$ is not yet convincingly ruled out by gravitational wave experiments. \section{Setup and field equations} The setup is a braneworld model with the following metric \begin{equation}\label{metric} ds^2 = g_{AB} dX^A dX^B = q_{\mu \nu} dx^\mu dx^\nu + dy^2, \end{equation} where $y$ is the extra dimension coordinate (the brane is located at $y=0$) and $g_{AB}$ and $q_{\mu \nu}$ are the five and four dimensional metrics respectively.
The four dimensional metric reads \begin{equation} q_{\mu \nu} dx^\mu dx^\nu = -N^2(t,y) dt^2 + a^2(t,y) \gamma_{ij} dx^i dx^j, \end{equation} where $N$ is some function, $a$ is the scale factor, and $\gamma_{ij}$ is the metric of the three dimensional maximally symmetric space, for which $k=-1,0,1$ refers to hyperbolic, flat, and spherical space respectively. A few words about notation: in this article the braneworld model is $4+1$ dimensional, the full spacetime coordinates are denoted as $X^A = (X^0, X^1, \dots , y \equiv X^4)$, the brane coordinates as $x^\mu = (x^0, x^1, \dots, x^3)$, and the spatial coordinates as $x^i = (x^1, x^2, x^3)$. We denote $\dot{f} = df/dt$ and $f' = df/dy$ for any function $f$. Lastly, we will occasionally refer to $R$ and ${}^{(q)}R$, for example, as the five and four dimensional Ricci scalars, and similarly for any other tensor. The action in this model is \begin{equation} S= \int \frac{d^5 X}{2 \kappa_5^2} \sqrt{-g}(R + \mathcal{L}_H) + S_{b}, \end{equation} where $\kappa_5^2 \equiv 8\pi G_5$ is the five dimensional Einstein kappa constant, with $G_5$ the five dimensional gravitational constant.
$\mathcal{L}_H$ is the five dimensional Horndeski Lagrangian \begin{equation}\label{mathcal_L_H} \mathcal{L}_H \equiv \sum_{i=2}^5 \xi_i \mathcal{L}_i, \end{equation} where the $\xi_i$'s are coupling constants while the $\mathcal{L}_i$'s are defined as follows \begin{equation}\label{Horndeski_Lagrangian} \begin{aligned} \sum_{i=2}^{5} \mathcal{L}_i &= G_2(\phi, X) \\ &~~ + G_3 (\phi, X) \square \phi \\ &~~+ G_4(\phi, X) R + G_{4X} \left[(\square \phi)^2 - \phi_{AB} \phi^{AB}\right] \\ &~~+ G_5(\phi, X) G_{A B} \phi^{AB} - \frac{1}{6} G_{5X}\left[(\square \phi)^3 - 3 (\square \phi ) \phi_{A B} \phi^{AB} + 2 \phi_{AB} \phi^{BC} {\phi_{C}}^A\right] , \end{aligned} \end{equation} with an obvious classification for the $\mathcal{L}_i$, where $\phi_A \equiv \nabla_A \phi$, $X \equiv \nabla_A \phi \nabla^A \phi$, and $G_{iX} = \partial G_i/\partial X$, $G_{i\phi} = \partial G_i/\partial \phi$ for $i=2,3,4,5$. In the definition above, $S_b$ is the matter action on the brane \begin{equation} S_b = \int d^4x \sqrt{-q} \mathcal{L}_b[q,\phi], \end{equation} whose variation with respect to the four dimensional metric comprises the brane tension $\sigma$ and the brane energy momentum tensor $\tau_{\mu \nu}$ \begin{equation}\label{S_mu_nu} S_{\mu \nu} \equiv - \frac{2}{\sqrt{-q}} \frac{\delta }{\delta q^{\mu \nu}} (\sqrt{-q} \mathcal{L}_b) = - \sigma q_{\mu \nu} + \tau_{\mu \nu}. \end{equation} A perfect fluid is chosen as the brane energy momentum tensor. In this section we will use the variational method to obtain the field equations. In order to do so, we first need to transform the Lagrangian \eqref{Horndeski_Lagrangian} into geometric form, in which $R$, $G_{AB}$, and $\phi_{AB}$ are translated into brane variables such as the brane Ricci scalar ${}^{(q)}R$ and the extrinsic curvature $K_{\mu \nu} = {q_{\mu}}^A {q_\nu}^B \nabla_A n_B$, where $n^A$ is a normal vector with respect to the brane.
For the Einstein-Hilbert action, the result is the well known projection identity of Ricci scalar \cite{maeda_einstein_brane} \begin{equation}\label{Ricci_scalar_projection} R = {}^{(q)} R + K^2 - K^{AB}K_{AB} + 2 \nabla_A (n^B \nabla_B n^A - n^A \nabla_B n^B), \end{equation} where in the braneworld case, the normal vector is spacelike $n_A n^A = 1$. For the Horndeski Lagrangian, the translation procedure has been carried out by Gleyzes, et al. \cite{gleyzes2013essential} in the case of four dimensional spacetime, where the "brane" is taken to be the $\phi$ constant hypersurface. In Ref. \cite{gleyzes2013essential}, the scalar field is assumed to be a function of $t$ only, so the hypersurface considered in that paper is a time constant hypersurface. Interestingly enough, this procedure also works in our case where our brane is a $y$ constant hypersurface, if we assume that our scalar field is a $y$ only dependent function, $\phi = \phi(y)$. Following the aforementioned procedure, assuming $\phi = \phi(y)$, the Horndeski Lagrangian \eqref{Horndeski_Lagrangian} can be translated into a geometric form as follows \begin{equation}\label{Horndeski_Lagrangian_geometric} \begin{aligned} \sum_{i=2}^{5} \mathcal{L}_i &= G_2(\phi, X) \\ &~~-2X^{3/2} K F_{3X} - F_{3\phi} X \\ &~~+ G_4{}^{(q)}R - (K^2 - K_{AB} K^{AB})(2G_{4X} X - G_4) + 2G_{4\phi} K X^{1/2} \\ &~~+ \Bigg[F_5 X^{1/2} \left(K^{AB} {}^{(q)}R_{AB} - \frac{K}{2} {}^{(q)}R \right) + \frac{X}{2}(G_{5\phi}-F_{5\phi}) ~{}^{(q)}R \\&~~+ \frac{X}{2} G_{5\phi} (K^{AB} K_{AB} - K^2) \\&~~+ \frac{G_{5X}}{3} X^{3/2} \left(K^3 - 3 K K_{AB} K^{AB} + 2K_{AB} K^{AC} {K_C}^B\right)\Bigg], \end{aligned} \end{equation} where $F_3$ and $F_5$ are defined as follows \begin{equation} \begin{aligned} & G_3 \equiv F_3 + 2X F_{3X},\\ & G_{5X} \equiv F_{5X} + \frac{F_5}{2X}, \end{aligned} \end{equation} and $F_{iX} = \partial F_i/ \partial X$, $F_{i\phi} = \partial F_i/\partial \phi$. 
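The relation $G_3 \equiv F_3 + 2X F_{3X}$ determines $F_3$ from a given $G_3$ only up to a homogeneous piece proportional to $X^{-1/2}$ (and similarly for $F_5$). A small symbolic check, using a toy choice $G_3 = X$ of our own, purely for illustration:

```python
import sympy as sp

X = sp.symbols('X', positive=True)

G3 = X                       # toy choice, for illustration only
F3_particular = X / 3        # candidate particular solution
F3_homogeneous = 1 / sp.sqrt(X)

# F3 = X/3 satisfies G3 = F3 + 2*X*dF3/dX for G3 = X ...
assert sp.simplify(F3_particular + 2*X*sp.diff(F3_particular, X) - G3) == 0
# ... and the homogeneous piece X**(-1/2) is annihilated by F3 + 2*X*dF3/dX:
assert sp.simplify(F3_homogeneous + 2*X*sp.diff(F3_homogeneous, X)) == 0
```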
In the rest of this article, we derive the modified Friedmann equations for the Horndeski Lagrangian \eqref{Horndeski_Lagrangian_geometric}, albeit with some assumptions. Firstly, for a reason that will become obvious when we calculate the scalar field equation, let us assume that $G_{i \phi} = 0$ for $i=2,3,4,5$. Next, as we will show, unlike the braneworld model with the Einstein-Hilbert action \cite{sms}, the Friedmann equation resulting from a model that contains $\mathcal{L}_3$ is able to reduce to the conventional four dimensional Friedmann equation even with zero brane tension. Thus we set $\sigma=0$ in our definition of the brane matter action \eqref{S_mu_nu}. In short, we consider the following action \begin{equation}\label{action_after_assumption} \begin{aligned} S &=\int \frac{d^5 X}{2 \kappa_5^2} \sqrt{-g} \left( R + \xi_2 \mathcal{L}_2 + \xi_3 \mathcal{L}_3 + \xi_4 \mathcal{L}_4 + \xi_5 \mathcal{L}_5\right) + S_{b} \\ &= \int \frac{d^5 X}{2 \kappa_5^2} \sqrt{-g} \Bigg\{R + \xi_2 G_2 - 2\xi_3 \phi'^3 K F_{3X} \\&~~+ \xi_4 \Big(G_4 {}^{(q)}R - (K^2 - K_{\mu \nu} K^{\mu \nu})(2G_{4X} \phi'^2 - G_4)\Big) \\&~~+ \xi_5\Bigg[F_5 \phi' \left(K^{\mu \nu} {}^{(q)}R_{\mu \nu} - \frac{K}{2} {}^{(q)}R \right) \\&~~+ \frac{G_{5X}}{3} \phi'^3 \left(K^3 - 3 K K_{\mu \nu} K^{\mu \nu} + 2K_{\alpha \beta} K^{\alpha \gamma} {K_\gamma}^\beta\right)\Bigg] \Bigg\} + S_b. \end{aligned} \end{equation} \section{Cardassian terms from the strongly coupled $\mathcal{L}_5$ Friedmann equation} In this section we compute the bulk field equations and the junction conditions for the general action \eqref{action_after_assumption}. After that, we apply the strong $\mathcal{L}_5$ coupling condition to find the modified Friedmann equations. Let us first compute the scalar field equation.
From the variation of \eqref{action_after_assumption} with respect to $\phi$ we have \begin{equation}\label{scalar_field_eq} \begin{aligned} \mathcal{C}(t) &= -2\xi_2a^3N G_{2X} \phi' + 2\xi_3 \phi'^2\left(3F_{3X} + 2\phi'^2F_{3XX}\right)(3a'a^2 N + a^3N')\\&-12\xi_4 \Bigg[ G_{4X} \phi' \left(kaN + \frac{\ddot{a} a^2}{N} + \frac{\dot{a}^2 a}{N} - \frac{\dot{N} \dot{a} a^2}{N^2}\right) \\ &~~~~~~-\phi'\left(2G_{4XX}\phi'^2+G_{4X} \right)(a^2 a'N' + aa'^2N)\Bigg] \\&+\xi_5\Bigg[3\left(2F_{5X}\phi'^2 + F_5 \right)\left(\frac{a' \dot{a}^2}{N} - 2 \frac{\ddot{a} a' a}{N}+ ka'N + k a N' + \frac{\dot{a}^2 a N'}{N^2} -2 \frac{a' \dot{a} a \dot{N}}{N^2}\right) \\ &~~~~~+2\phi'^2\left(2G_{5XX}\phi'^2 + 3G_{5X}\right)(a'^3 N + 3a'^2 a N')\Bigg], \end{aligned} \end{equation} where $\mathcal{C}(t)$ is some function. Notice now that our $G_{i\phi} = 0$ assumption has ensured that the scalar field equation is a first order differential equation in $\phi$. Next, we will derive the $yy$ and $\mu y$-field equations. To do that, we need to introduce shift scalar $b$ and shift vector $b^\mu$ into our metric as follows \cite{gao2010modified} \begin{equation}\label{metric_adm} ds^2 = b^2 dy^2 + q_{\mu \nu} (dx^\mu + b^\mu dy)(dx^\nu + b^\nu dy). \end{equation} Note that in this metric \begin{align} X &= \frac{\phi'^2}{b^2}, \\ K_{\mu \nu} &= \frac{1}{2b} \left(\partial_y q_{\mu \nu} - {}^{(q)} \nabla_\mu b_\nu - {}^{(q)} \nabla_\nu b_\mu \right) \label{curv_ex_adm}. \end{align} After we have the field equations on our hand, we can set $b=1$, $b^\mu = 0$ to obtain our original metric \eqref{metric}. By varying the action \eqref{action_after_assumption} with respect to $b^\mu$, we obtain the $\mu y$-field equation. From \eqref{curv_ex_adm}, we can see that $b^\mu$ only appear in terms containing $K_{\mu \nu}$. 
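With $b=1$ and $b^\mu = 0$, \eqref{curv_ex_adm} reduces to $K_{\mu\nu} = \tfrac{1}{2}\partial_y q_{\mu\nu}$, which for the cosmological metric gives the trace $K = N'/N + 3a'/a$. A symbolic sketch of this check (flat spatial slices, $k=0$, assumed here purely for simplicity):

```python
import sympy as sp

t, y = sp.symbols('t y')
N = sp.Function('N')(t, y)
a = sp.Function('a')(t, y)

# Brane-adapted metric q_{mu nu} with b = 1, b^mu = 0 and k = 0:
q = sp.diag(-N**2, a**2, a**2, a**2)

# K_{mu nu} = (1/2) * d_y q_{mu nu}
K = q.diff(y) / 2

# Trace K = q^{mu nu} K_{mu nu} should equal N'/N + 3 a'/a
trace_K = sp.simplify((q.inv() * K).trace())
assert sp.simplify(trace_K - (N.diff(y)/N + 3*a.diff(y)/a)) == 0
```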
After some algebra, it can be shown that by assuming $b$ constant, the $ty$-equation ($\mu=0$) can be solved by taking \begin{equation}\label{ty_solution} \dot{a}' = \dot{a} \frac{N'}{N}. \end{equation} It is somewhat surprising that \eqref{ty_solution} is the same solution that solves the $ty$-field equation in the braneworld Einstein-Hilbert cosmological model \cite{binetruy_1,binetruy_2}. Similarly, variation of \eqref{action_after_assumption} with respect to $b$ gives us the $yy$-field equation \begin{equation}\label{yy_field_eq} \begin{aligned} &6\left(kaN + \frac{\ddot{a} a^2}{N} + \frac{\dot{a}^2 a}{N} - \frac{\dot{a}\dot{N}a^2}{N^2} - a'a^2N' - a'^2 a N\right) + \xi_2 a^3 N G_2 \\&+6\xi_4\left[G_4 \left( k a N + \frac{\ddot{a}a^2}{N} + \frac{\dot{a}^2 a}{N} - \frac{\dot{a} \dot{N} a^2}{N^2}\right) + \left(2 G_{4X} \phi'^2 - G_4\right) (a' a^2 N' + a'^2 a N)\right] \\&- 4\xi_5 G_{5X} \phi'^3 (a'^3 N + 3 a'^2 a N') + \mathcal{C}(t) \phi' =0, \end{aligned} \end{equation} where we have used the definition of $\mathcal{C}(t)$ in \eqref{scalar_field_eq} and set $b=1$. At this point we can revert back to our original metric \eqref{metric}. Now we will derive the $tt$ and $ij$-field equations from the variation of $N$ and $a$ respectively. By varying the action with respect to $N$ we have \begin{equation}\label{tt_field_eq} \begin{aligned} &6 \left(ka + \frac{\dot{a}^2 a}{N^2}- a'' a^2 - a'^2 a\right)+\xi_2 a^3 G_2 +2\xi_3 a^3 \phi'^2\phi'' \left(3F_{3X} + 2\phi'^2 F_{3XX} \right) \\&+6\xi_4\Bigg[\left(2G_{4X} \phi'^2 - G_4 \right) \left(a''a^2 + a'^2a \right) \\&~~~~~+G_4 \left(ka + \frac{\dot{a}^2 a}{N^2} \right) + 2a'a^2\phi' \phi''\left(2G_{4XX}\phi'^2 + G_{4X}\right)\Bigg] \\&+ \xi_5\Bigg[3 \left(2F_{5X} \phi'^2 + F_5\right) \phi''\left(ka + \frac{\dot{a}^2 a}{N^2}\right) \\&~~~- 4 G_{5X} \phi'^3 (a'^3 + 3 a a' a'') - 6 a'^2 a\phi'^2 \phi'' \left(2G_{5XX}\phi'^2 + 3 G_{5X}\right)\Bigg] \\&= 0.
\end{aligned} \end{equation} The variation with respect to $a$, which gives us the $ij$-field equation, is a rather complicated expression. On the other hand, because our ultimate purpose is not to solve the bulk equations but rather to find the effective Friedmann equation on the brane, it will be clear later that the sole purpose of the $ij$-field equation is to find the appropriate metric junction condition on the brane. Hence in the following, we will only calculate the terms that are second order in $y$-derivatives of the metric variables $a$ and $N$. Thus by varying \eqref{action_after_assumption} with respect to $a$, we find \begin{equation}\label{ij_field_eq} \begin{aligned} &- 6 \left(a^2 N'' + 2 a a'' N \right)+6\xi_4\left(2G_{4X} \phi'^2 - G_4\right) \left(2 a'' a N + a^2 N''\right) \\&-12\xi_5 G_{5X} \phi'^3 \left(a'^2 N' + a' a'' N + a a'' N' + a a' N''\right) + \text{ others} =0, \end{aligned} \end{equation} where "others" refers to any other terms that contain $\phi''$ or first order (or lower) derivatives in $y$. Basically, we have obtained all of the bulk field equations. To take the existence of the brane into consideration, we need junction conditions on the brane. First, note that the variation of the four dimensional brane action $S_b$ with respect to some variable, say $a$, contributes to the five dimensional bulk field equations a term of the form \begin{equation}\label{contribution_of_brane} \frac{\delta S_b}{\delta a} \delta(y). \end{equation} A term like this only comes into play when we evaluate the field equations on the brane, where $y=0$. We thus need to look for terms that contribute a Dirac delta function $\delta(y)$ in the bulk field equations. Now, let $\phi''(y=0) = 0$ so that $\phi'$ is continuous on the brane. Notice that this was the reason that we did not compute the terms that contain $\phi''$ in \eqref{ij_field_eq}.
On the other hand, we only require the continuity of $a$ and $N$, so that neither $a'$ nor $N'$ need be continuous. Thus $a''$ and $N''$ will contain distributional parts $[a'']_{\text{D}}$ and $[N'']_{\text{D}}$ respectively, as follows \cite{binetruy_1} \begin{align} a'' &= [a'']_{\text{ND}} + [a'']_{\text{D}}, \\ N'' &= [N'']_{\text{ND}} + [N'']_{\text{D}}, \end{align} where $\text{ND}$ denotes the nondistributional part. For example, if $a' = y^2 + \mathrm{sgn}(y)$, then $[a'']_{\text{ND}} = 2y$. On the other hand, $[a'']_{\text{D}}$ captures the discontinuity of $a'$ at $y=0$ as follows \cite{binetruy_1} \begin{equation} [a'']_{\text{D}} = \Big[a'(y=\epsilon) - a'(y=-\epsilon)\Big]\delta(y), \end{equation} for a small $\epsilon>0$, and similarly for $N$. Finally, by considering the term from the brane action \eqref{contribution_of_brane}, we can integrate the $tt$ and $ij$-field equations \eqref{tt_field_eq}, \eqref{ij_field_eq} to obtain the junction conditions for the metric \begin{align} \Bigg[\frac{a'}{a}B_H - \tilde{\alpha} \left(\frac{a'}{a}\right)^2\Bigg]_{y=0} &= - \frac{\kappa_5^2}{6} \rho, \label{cardassian_junction_1}\\ \Bigg[\left(2\frac{a'}{a} + \frac{N'}{N}\right)B_H - \tilde{\alpha} \left[\left(\frac{a'}{a}\right)^2 + 2 \frac{a'}{a}\frac{N'}{N}\right]\Bigg]_{y=0} &= \frac{\kappa_5^2}{2} p \label{cardassian_junction_2}, \end{align} where we have set $\sigma=0$, applied the $\mathbb{Z}_2$ symmetry on the brane \begin{equation} \begin{aligned} a'(y=\epsilon) &= - a'(y=-\epsilon),\\ N'(y=\epsilon) &= - N'(y=-\epsilon), \end{aligned} \end{equation} and used the following definitions \begin{align} B_H &\equiv \left[1+\xi_4(G_4-2G_{4X}\phi'^2)\right],\\ \tilde{\alpha} &\equiv -2 \xi_5 G_{5X} \phi'^3. \end{align} It can be checked that the previous junction conditions \eqref{cardassian_junction_1}, \eqref{cardassian_junction_2} satisfy the conservation of the energy momentum tensor \begin{equation}\label{conserved_EMT} \dot{\rho} + 3 \frac{\dot{a}}{a} ( \rho + p) = 0.
\end{equation} Finally, to derive the junction condition for the scalar field, consider again the scalar field equation, but this time with the contribution from the brane action \eqref{contribution_of_brane} \begin{equation}\label{junction_scalar_1} \sqrt{-q}\frac{\partial \mathcal{L}_b}{\partial \phi} \delta(y) - \frac{d}{dy} \frac{\partial \mathcal{L}}{\partial \phi'}=0. \end{equation} By integrating and applying the $\mathbb{Z}_2$ symmetry to \eqref{junction_scalar_1} we have \begin{equation}\label{junction_scalar_2} \left[\frac{\partial \mathcal{L}_b}{\partial \phi}\right]_{y=0} = \left[ \frac{2}{\sqrt{-q}} \frac{\partial \mathcal{L}}{\partial \phi'}\right]_{y=0}, \end{equation} so that the junction condition for the scalar field simply says that the brane Lagrangian $\mathcal{L}_b$ contains a term $\ell_b[\phi]$, which is defined as follows \begin{equation} \ell_b[\phi] \equiv \phi \left[\frac{2}{\sqrt{-q}} \frac{\partial \mathcal{L}}{\partial \phi'}\right]_{y=0}. \end{equation} Now, assume that the $\mathcal{L}_5$ Lagrangian is strongly coupled to the rest of the Lagrangian \begin{equation}\label{cardassian_strong_coupling_cond} \left|\tilde{\alpha} \frac{a'}{a}\right| \gg B_H, \end{equation} so that the solution of the first junction condition for this model \eqref{cardassian_junction_1} can be approximated by \begin{equation}\label{cardassian_junction_sol} \left[\frac{a'}{a}\right]_{y=0} = \left(\frac{\tilde{\beta}}{\tilde{\alpha}}\right)^{1/2}, \end{equation} where \begin{equation} \tilde{\beta} \equiv \frac{\kappa_5^2}{6}\rho. \end{equation} Now, following Binetruy, et al.
\cite{binetruy_2}, using \eqref{ty_solution}, it can be shown that the $yy$ \eqref{yy_field_eq} and $tt$ \eqref{tt_field_eq} field equations can be rewritten into first order differential equations \begin{equation} \begin{aligned} &\dot{\chi} = \frac{\dot{a}}{3N} \times \frac{\partial \mathcal{L}}{\partial y} = 0,\\ &\chi' = \frac{a'}{3} \times \frac{\partial \mathcal{L}}{\partial N} = 0, \end{aligned} \end{equation} where $\chi$ is defined through the following relation \begin{equation}\label{cardassian_first_order} \begin{aligned} H^2 \left[B_H - \tilde{\alpha} \frac{a'}{a} \right] &= - \frac{k}{a^2} \left[B_H - \tilde{\alpha} \frac{a'}{a} \right] - B_1 \frac{a'}{a} - B_{2} \left(\frac{a'}{a}\right)^2 - B_3 \left(\frac{a'}{a}\right)^3 \\&~~~+ \frac{\chi}{a^4} - B_0, \end{aligned} \end{equation} with the following definitions \begin{equation}\label{cardassian_definition_B_i} \begin{aligned} B_0 &\equiv \frac{\xi_2}{12}\left(G_2 - 2G_{2X} \phi'^2\right), \\ B_1 &\equiv \frac{2}{3} \xi_3\phi'^3 (3 F_{3X} + 2 \phi'^2 F_{3XX}), \\ B_2 &\equiv -\left[1- \xi_4 \left(4 G_{4X} \phi'^2 + 2 G_{4XX} \phi'^4 - G_4\right)\right], \\ B_3 &\equiv -\frac{2}{3} \xi_5 \phi'^3 ( 2G_{5XX} \phi'^2 + 5 G_{5X}). \end{aligned} \end{equation} Finally, from \eqref{cardassian_first_order} and \eqref{cardassian_junction_sol}, we secure the Friedmann equation for the strongly coupled $\mathcal{L}_5$ \begin{equation}\label{cardassian_friedmann_eq} H^2= - \frac{k}{a^2} + \frac{B_1}{\tilde{\alpha}} + \left(\frac{\kappa_5^2}{6\tilde{\alpha}^3}\right)^{1/2} B_2 \rho^{1/2} + \frac{\kappa_5^2}{6 \tilde{\alpha}^2} B_3 \rho - \left(\frac{6}{\tilde{\alpha} \kappa_5^2}\right)^{1/2} \left(\frac{\chi}{a^4}-B_0\right) \rho^{-1/2}. 
\end{equation} From \eqref{cardassian_friedmann_eq}, we have shown that the Horndeski Lagrangian \eqref{action_after_assumption} with strongly coupled $\mathcal{L}_5$ \eqref{cardassian_strong_coupling_cond} provides one of the specific bulk energy momentum tensors $T_{AB}$ in the braneworld scenario \cite{chung1999cosmological} that generate Cardassian terms \cite{freese2002cardassian} $\rho^n$, with $n=\pm 1/2$, in the four dimensional effective Friedmann equation. Furthermore, the latest combined observational evidence in 2017 from BAO, CMB, SNIa, $f_{\sigma 8}$, and $H_0$ value observations has given the following constraints for the polytropic Cardassian \eqref{cardassian_ansatz_friedmann_poly} \cite{zhai2017evaluation} \begin{equation}\label{cardassian_batas_parameter_m_n} m = 1.1^{+0.8}_{-0.4}, \hspace{5mm} n=0.02^{+0.25}_{-0.41}, \end{equation} so that the $n=-1/2$ term lies quite close to the observed value. However, it should be noted that the modified Friedmann equation \eqref{cardassian_friedmann_eq} is more general than \eqref{cardassian_ansatz_friedmann_poly} and thus requires a numerical evaluation of its own. \section{Modified Friedmann equations for the weak $\mathcal{L}_5$ coupling} In the previous section, we saw that the strongly coupled $\mathcal{L}_5$ case produced Cardassian terms, which are basically lower energy corrections to the matter term. In this section, we will see how the weakly coupled $\mathcal{L}_5$ case generates, among others, the cubic matter term $\rho^3$, which is a high energy correction. We start by assuming that the $\mathcal{L}_5$ Lagrangian is weakly coupled to the rest of the Lagrangian, so that $\xi_5$ is small and $\xi_5^2$ can be neglected. Next, from \eqref{Horndeski_Lagrangian_geometric}, it can be seen that $\mathcal{L}_4$ has an almost identical expression to $R$ \eqref{Ricci_scalar_projection}, differing only by some scalar field function coefficients.
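Both junction-condition approximations used in this paper can be checked symbolically: the strong-coupling root of the previous section and the first-order weak-coupling expansion employed below. A sketch (generic symbols stand in for $B_H$, $\tilde\alpha$, $\tilde\beta$, $\alpha$, and $\kappa_5^2\rho/6$):

```python
import sympy as sp

alpha, beta, eps = sp.symbols('alpha beta epsilon', positive=True)

# Strong coupling: B_H*x - alpha*x**2 = -beta with |alpha*x| >> B_H.
# The positive root tends to sqrt(beta/alpha) as B_H -> 0:
x_strong = (eps + sp.sqrt(eps**2 + 4*alpha*beta)) / (2*alpha)   # B_H = eps
assert sp.simplify(sp.limit(x_strong, eps, 0) - sp.sqrt(beta/alpha)) == 0

# Weak coupling: x + alpha*x**2 = -beta, solved perturbatively in alpha.
# The first-order solution x = -beta*(1 + alpha*beta) leaves a residual
# that is O(alpha^2):
x_weak = -beta * (1 + alpha*beta)
residual = sp.expand(x_weak + alpha*x_weak**2 + beta)
assert residual.coeff(alpha, 0) == 0
assert residual.coeff(alpha, 1) == 0
```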
Thus, in order to make our result tidier, we will also neglect $\mathcal{L}_4$. In the rest of the section, we will identify the correction terms provided by the Friedmann equation obtained from the general scalar-tensor braneworld model \eqref{action_after_assumption} where $\xi_4=0$ and $\xi_5$ is small. Firstly, the junction conditions for $\xi_4=0$ are \begin{align} \Bigg[\frac{a'}{a} + \alpha \left(\frac{a'}{a}\right)^2\Bigg]_{y=0} &= - \frac{\kappa_5^2}{6} \rho \label{tt_junction},\\ \Bigg[2\frac{a'}{a} + \frac{N'}{N} + \alpha \left[\left(\frac{a'}{a}\right)^2 + 2 \frac{a'}{a}\frac{N'}{N}\right]\Bigg]_{y=0} &= \frac{\kappa_5^2}{2} p, \label{ij_junction} \end{align} where \begin{equation} \alpha \equiv 2\xi_5 \phi'^3 G_{5X}. \end{equation} It can be checked that these junction conditions satisfy the conservation of the energy momentum tensor \eqref{conserved_EMT}. Now, assume that the $\mathcal{L}_5$ Lagrangian is weakly coupled to the rest of the Lagrangian, so that $\xi_5$ is small. From \eqref{tt_junction}, expanding $a'/a$ to first order in $\alpha$ gives us the explicit solution of the junction condition \begin{equation}\label{junction_solution} \left[\frac{a'}{a}\right]_{y=0} = - \frac{\kappa_5^2}{6}\rho \left(1+ \alpha\frac{\kappa_5^2}{6}\rho\right). \end{equation} Next, analogous to \eqref{cardassian_first_order}, for $\xi_4=0$ we have the following definition of $\chi$ \begin{equation}\label{first_order} \begin{aligned} \chi &= \left[ka^2 -(a'a)^2 + \frac{(\dot{a}a)^2}{N^2}\right]+ \frac{\xi_2}{12}a^4(G_2 - 2G_{2X}\phi'^2) +\frac{2}{3} \xi_3 \phi'^3 a' a^3 (3F_{3X} + 2 \phi'^2 F_{3XX}) \\&~~+ 2 \xi_5 \phi'^3 G_{5X} (kaa' + a a' \dot{a}^2) - \frac{2}{3} \xi_5 \phi'^3 a a'^3 (2 G_{5XX} \phi'^2 + G_{5X}).
\end{aligned} \end{equation} By evaluating \eqref{first_order} on the brane using the junction condition solution \eqref{junction_solution}, we get the Friedmann equation for this model up to linear order in $\alpha$ \begin{equation}\label{friedmann_horndeski} \begin{aligned} H^2 = - \frac{k}{a^2} + \frac{\kappa_5^2}{6} A_1 \rho + \frac{\kappa_5^4}{36} A_2 \rho^2 + \frac{\kappa_5^6}{216} A_3 \rho^3 + \frac{\chi}{a^4}\left(1+ \frac{\alpha\kappa_5^2}{6}\rho\right) + A_0, \end{aligned} \end{equation} where $N(y=0)=1$ has been taken, and the following definitions have been used \begin{equation}\label{parameter_Ai} \begin{aligned} A_0 &\equiv - \frac{\xi_2}{12}(G_2 - 2G_{2X} \phi'^2),\\ A_1 &\equiv \frac{2}{3} \xi_3 \phi'^3(3F_{3X}+2\phi'^2F_{3XX}) - \frac{\alpha\xi_2}{12}(G_2 - 2G_{2X}\phi'^2),\\ A_2 &\equiv 1+\frac{4\alpha}{3} \xi_3 \phi'^3(3F_{3X}+2\phi'^2F_{3XX}), \\ A_3 &\equiv \frac{4}{3}\left(\alpha - \xi_5 \phi'^5 G_{5XX} \right). \end{aligned} \end{equation} For consistency, it can be checked that by taking \begin{equation} \xi_2 = - 2 \kappa_5^2, \hspace{3mm} \xi_3, ~\xi_5 = 0, \hspace{3mm} G_2 = \Lambda_5, \end{equation} the Friedmann equation of this model \eqref{friedmann_horndeski} reduces to that of the braneworld Einstein-Hilbert model \eqref{friedmann_1_braneworld_EH} with $\sigma=0$ \begin{equation} H^2 = -\frac{k}{a^2} + \frac{\kappa_5^4}{36}\rho^2 + \frac{\chi}{a^4} + \frac{\kappa_5^2}{6}\Lambda_5. \end{equation} The Friedmann equation of this model \eqref{friedmann_horndeski} contains some new correction terms which are not present in the Einstein-Hilbert braneworld model \eqref{friedmann_1_braneworld_EH}. Firstly, notice that every term's coefficient is a function built from the scalar field. In fact, we can identify the four-dimensional Einstein kappa constant as follows \begin{equation}\label{kappa_4d_identification} \frac{\kappa_4^2}{3} = \frac{\kappa_5^2}{6}A_1.
\end{equation} From \eqref{kappa_4d_identification}, it can be inferred that $A_1^{-1}$ is proportional to the radius of the extra dimension. Next, we can see that in addition to the quadratic matter term, we also have another high-energy correction term, the cubic matter term. Interestingly, we also have a term proportional to $\chi a^{-4} \rho$ which mediates the interaction between normal matter and the dark radiation. This interaction is small, however, because $\alpha$ is small by the assumption that $\mathcal{L}_5$ is weakly coupled. Lastly, as usual, we still have the cosmological constant term $A_0$ coming from the scalar field $\phi$ evaluated on the brane. Therefore, assuming $\xi_2<0$, in the cosmological constant domination phase the universe in this model undergoes a de Sitter expansion \begin{equation} a(t) = a_0 \exp\left({\sqrt{A_0}t}\right). \end{equation} Also notice that for $\xi_2 >0$, the scale factor oscillates: $a(t) \propto \exp\left(i \sqrt{-A_0} t\right)$. Before finding the constraints on the values of $A_1$, $A_2$, and $A_3$, we will transform the Friedmann equation \eqref{friedmann_horndeski} into an explicit form in terms of the Hubble function. Define the following density parameters \begin{equation}\label{density_param} \begin{aligned} \Omega_{\rho,0} = \frac{\kappa_4^2}{3} \frac{\rho_0}{H_0^2}, \hspace{3mm} \Omega_{\chi,0} = \frac{\chi}{H_0^2}, \hspace{3mm} \Omega_{k,0} = -\frac{k}{H_0^2}, \hspace{3mm} \Omega_{\Lambda,0} = \frac{A_0}{H_0^2}. \end{aligned} \end{equation} In the previous definitions \eqref{density_param}, $\rho_0$ is the sum of the radiation and dust densities evaluated in the present epoch \begin{equation} \Omega_{\rho,0} = \Omega_{r,0} + \Omega_{m,0}, \hspace{3mm} \text{where} \hspace{3mm} \Omega_{r,0} = \frac{\kappa_4^2}{3} \frac{\rho_{r,0}}{H_0^2}, \hspace{3mm} \Omega_{m,0} = \frac{\kappa_4^2}{3} \frac{\rho_{m,0}}{H_0^2}.
\end{equation} Now, assuming that the spatial curvature of the universe is $\Omega_{k,0} = 0$, in the epoch of single matter domination with equation of state $p=w\rho$, the Friedmann equation \eqref{friedmann_horndeski} can be recast into \begin{equation}\label{friedmann_horndeski_in_z} \begin{aligned} H(z) &= H_0 \Big[\Omega_{\rho,0} (1+z)^{3(w+1)} + H_0^2 A_1^{-2} A_2 \Omega_{\rho,0}^2 (1+z)^{6(w+1)} \\&~~+ H_0^4 A_1^{-3} A_3\Omega_{\rho,0}^3 (1+z)^{9(w+1)} + \Omega_{\chi,0}(1+z)^4 \\&~~+ \alpha H_0^2 A_1^{-1} \Omega_{\chi,0}\Omega_{\rho,0} (1+z)^{(3w+7)} + \Omega_{\Lambda,0}\Big]^{1/2}, \end{aligned} \end{equation} where $z=a^{-1}-1$ is the redshift. Now we are ready to constrain $A_1$, $A_2$, and $A_3$ using the big bang nucleosynthesis (BBN) constraint. The BBN constraint says that the contribution of the high-energy correction terms in \eqref{friedmann_horndeski_in_z} must be negligible relative to that of the linear matter term before BBN, where $z_{\text{BBN}} \simeq 4 \times 10^8$ \cite{maartens_2010}. Because the universe is radiation dominated at $z=z_{\text{BBN}}$, so that $\Omega_{\rho,0} \approx \Omega_{r,0}$, the BBN constraint gives the following conditions \begin{equation} \begin{aligned} \theta_1 &\equiv \frac{H_0^2 A_1^{-2} A_2 \Omega_{r,0}^2 (1+z_{\text{BBN}})^{8}}{\Omega_{r,0} (1+z_{\text{BBN}})^{4}} = H_0^2 A_1^{-2} A_2 \Omega_{r,0} (1+z_{\text{BBN}})^{4} \ll 1,\\ \theta_2 & \equiv \frac{H_0^4 A_1^{-3} A_3\Omega_{r,0}^3 (1+z_{\text{BBN}})^{12}}{\Omega_{r,0} (1+z_{\text{BBN}})^{4}} = H_0^4 A_1^{-3} A_3 \Omega_{r,0}^2 (1+z_{\text{BBN}})^{8} \ll 1. \end{aligned} \end{equation} Note that both $\theta_1$ and $\theta_2$ are dimensionless. Now by taking $H_0 = 67$ km s${}^{-1}$ Mpc${}^{-1}$ $\approx 2\cdot 10^{-18}$ s${}^{-1}$ (as in Ref.
\cite{planck_2015}) and $\Omega_{r,0} \approx 5 \cdot 10^{-5}$, in SI units \begin{equation} \begin{aligned} &\theta_1 \ll 1 \implies A_1^{-2} A_2 \ll 10^5, \\ &\theta_2 \ll 1 \implies A_1^{-3} A_3 \ll 10^{11}. \end{aligned} \end{equation} From the definition \eqref{parameter_Ai} and the small-$\alpha$ condition, $A_2 \approx 1$ s${}^{2}$ m${}^{-2}$. Thus, from the condition of small $\theta_1$, we have $A_1 \gg 10^{-2.5}$ m${}^{-1}$. Now, because from \eqref{kappa_4d_identification} $A_1^{-1}$ is proportional to the extra dimension radius, it can be inferred that the $\theta_1$ condition simply says that the extra dimension radius must be smaller than 10${}^{2.5}$ m, which is a natural condition. Similarly, the $\theta_2$ condition gives the upper bound for $A_3$, that is, $A_3 \ll 10^{11} A_1^{3}$. In principle, the radius of the extra dimension can be very small, even as small as $M_\text{P}^{-1}$ as in the Randall-Sundrum I model \cite{randall_1}. Therefore $A_1$ can have an arbitrarily high value. For example, if we take $A_1 \approx 10^{-2}$ m${}^{-1}$, the $\theta_2$ condition gives $A_3 \ll 10^{5}$ s${}^{2}$ m${}^{-2}$. From the definition of $A_3$ \eqref{parameter_Ai}, the previous condition is a natural one, because in this model we assume that $\alpha$ is small. In conclusion, this model provides high-energy correction terms while naturally satisfying the BBN constraint. This characteristic is not so common and, in fact, is not shared by the braneworld Einstein-Hilbert model \eqref{friedmann_1_braneworld_EH}.
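The order-of-magnitude bounds above can be reproduced numerically. The following sketch (plain Python, SI units, using the rounded values $H_0 \approx 2\cdot 10^{-18}$ s${}^{-1}$, $\Omega_{r,0} \approx 5\cdot 10^{-5}$, and $z_{\text{BBN}} \approx 4\cdot 10^{8}$ quoted in the text) recovers the quoted lower bound on $A_1$ and the upper bound on $A_1^{-3}A_3$:

```python
import math

# BBN bounds of the text, in SI units:
#   theta1 = H0^2 * A1^-2 * A2 * Omega_r0   * (1+z_BBN)^4 << 1
#   theta2 = H0^4 * A1^-3 * A3 * Omega_r0^2 * (1+z_BBN)^8 << 1
H0 = 2e-18            # s^-1  (~67 km/s/Mpc)
Omega_r0 = 5e-5
z_bbn = 4e8

pref1 = H0**2 * Omega_r0 * (1 + z_bbn)**4      # theta1 = pref1 * A2 / A1^2
pref2 = H0**4 * Omega_r0**2 * (1 + z_bbn)**8   # theta2 = pref2 * A3 / A1^3

A1_lower = math.sqrt(pref1)     # with A2 ~ 1: theta1 << 1 requires A1 >> sqrt(pref1)
A3_ratio_upper = 1.0 / pref2    # theta2 << 1 requires A1^-3 A3 << 1/pref2

print(f"A1 >> {A1_lower:.1e} m^-1")         # ~2.3e-3, consistent with 10^-2.5
print(f"A1^-3 A3 << {A3_ratio_upper:.1e}")  # ~3.8e10, consistent with ~10^11
```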
The Friedmann equation in that model for $k=0$ can be recast into \begin{equation} H = H_0 \left[\Omega_{\rho,0}(1+z)^{3(1+w)}+ \frac{3H_0^2}{2 \kappa_4^2 \sigma}\Omega_{\rho,0}^2 (1+z)^{6(1+w)}+ \Omega_{\chi,0}(1+z)^4 + \Omega_{\Lambda,0}\right]^{1/2}, \end{equation} with the following identification \begin{equation} \begin{aligned} \frac{\kappa_4^2}{3}&=\frac{\kappa_5^4\sigma}{18}, \\ \Omega_{\Lambda,0} &= \frac{\kappa_5^2}{6}\left(\frac{\kappa_5^2 \sigma^2}{6}+ \Lambda_5\right). \end{aligned} \end{equation} The BBN condition for this model requires an unnaturally high value of the brane tension \begin{equation} \sigma \gg \frac{3H_0^2}{2 \kappa_4^2} \Omega_{r,0} (1+z_{\text{BBN}})^{4} \approx 10^{20} ~\frac{\text{kg}}{\text{m s}{}^{2}}. \end{equation} To close this section, we will give a visualization of how the dark radiation might affect the evolution of the universe. In principle, dark radiation behaves just like ordinary radiation, but its energy density is not required to be nonnegative. This, in fact, can make the sum of the density parameters of the other components of the universe greater than one, even in a flat universe.
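As a minimal arithmetic illustration of this last point (using the illustrative value $\Omega_{\chi,0}=-0.03$ adopted later for the Hubble-diagram comparison), flatness requires $\Omega_{m,0}+\Omega_{\Lambda,0}+\Omega_{\chi,0}=1$, so a negative dark-radiation density pushes the remaining density parameters above unity:

```python
# Flat-universe closure with dark radiation:
#   Omega_m + Omega_Lambda + Omega_chi = 1.
# A negative dark-radiation density parameter (illustrative value -0.03)
# forces the sum of the remaining components above one.
Omega_chi = -0.03
Omega_m_plus_Lambda = 1.0 - Omega_chi
print(Omega_m_plus_Lambda)  # exceeds 1 whenever Omega_chi < 0
```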
To be more precise, consider the following models \begin{equation}\label{three_model} \begin{aligned} H_\text{EH}(z) &= H_0 \left[\Omega_{m,0}(1+z)^{3}+ \Omega_{\Lambda,0}\right]^{1/2},\\ H_\text{BW}(z) &= H_0 \left[\Omega_{m,0}(1+z)^{3}+ \frac{3H_0^2}{2 \kappa_4^2 \sigma}\Omega_{m,0}^2 (1+z)^{6}+ \Omega_{\chi,0}(1+z)^4 + \Omega_{\Lambda,0}\right]^{1/2},\\ H_\text{HD}(z) &= H_0 \Big[\Omega_{m,0} (1+z)^{3} + H_0^2 A_1^{-2} A_2 \Omega_{m,0}^2 (1+z)^{6} + H_0^4 A_1^{-3} A_3\Omega_{m,0}^3 (1+z)^{9} \\&\hspace{12mm}+ \Omega_{\chi,0}(1+z)^4 + \alpha H_0^2 A_1^{-1} \Omega_{\chi,0}\Omega_{m,0} (1+z)^{7} + \Omega_{\Lambda,0}\Big]^{1/2}, \\ \end{aligned} \end{equation} where $H_\text{EH}(z)$, $H_\text{BW}(z)$, and $H_{\text{HD}}(z)$ refer to the conventional four-dimensional model, the braneworld model with Einstein-Hilbert action \eqref{friedmann_1_braneworld_EH}, and the general scalar-tensor braneworld model \eqref{friedmann_horndeski}, respectively. In those expressions, we have also assumed that $\Omega_{k,0} = 0$ and $\Omega_{r,0} = 0$. \begin{figure}[h!] \centering \includegraphics[scale=0.8]{hubble_l3_braneworld.png} \caption[Comparison of Hubble diagram for conventional four dimensional model, braneworld Einstein-Hilbert model, and general scalar-tensor braneworld model with SNIa data Davis, et al.(2007)]{\small Comparison of the Hubble diagram for the conventional four-dimensional model (EH), the braneworld Einstein-Hilbert model (BW), and the general scalar-tensor braneworld model (HD) with SNIa data, Davis et al.\ (2007) \cite{davis_scrutinizing,vassey_observational,riess_new_hubble}. For EH, we use the cosmological parameters from Planck 2015 \cite{planck_2015}: $H_0 = 67$ km s${}^{-1}$ Mpc${}^{-1}$, $\Omega_{m,0} = 0.308$, $\Omega_{\Lambda,0} = 0.692$. For BW and HD, we use $H_0=66.157$ km s${}^{-1}$ Mpc${}^{-1}$, $\Omega_{m,0} = 0.3453$, $\Omega_{\Lambda,0} = 0.6568$, and $\Omega_{\chi,0} = -0.03$. The numerical values of the remaining parameters are $A_1, A_3, \alpha = 10^{-3}$, $A_2 = 1$, and $\sigma = 10^{22}$.
The chi-square values $\chi_\epsilon^2$ for EH, BW, and HD are 207.8483, 197.1633, and 197.1633, respectively. By taking $\mathcal{N} = 192 -4 = 188$, the values of $\chi_\epsilon^2 \mathcal{N}^{-1}$ for EH, BW, and HD are 1.1056, 1.0487, and 1.0487, respectively.} \label{fig:hubble_diag} \end{figure} The previous models will be compared with the $N_{\text{obs}} = 192$ SNIa data compiled by Davis et al.\ in 2007 \cite{davis_scrutinizing,vassey_observational,riess_new_hubble}. The observational data are given in terms of the distance modulus ($\mu$) against the redshift ($z$). From the various expressions for $H(z)$ in \eqref{three_model}, the distance modulus can be calculated as follows \begin{equation} \mu(z) = 5 \log \left[\frac{(1+z)c}{\text{10 pc}}\int_0^z \frac{dz'}{H(z')}\right], \end{equation} where $c$ is the speed of light in vacuum and pc is parsec. A good model is one with $\chi_\epsilon^2 \mathcal{N}^{-1} \approx 1$, where the chi-squared error $\chi_\epsilon^2$ is defined as follows \cite{numericalrecipes_1992} \begin{equation}\label{epsilon_square} \chi_\epsilon^2 = \sum_{i=1}^{N_{\text{obs}}} \frac{[\mu(z_i) - \mu_{\text{obs}}(z_i)]^2}{\sigma_i^2}, \end{equation} where $\mu_{\text{obs}}(z_i)$ is the distance modulus obtained from observations at redshift $z=z_i$ with error $\sigma_i$, while $\mathcal{N}$ is the difference between the number of data points $N_{\text{obs}}$ and the number of free parameters of the model. The difference between the evolution of the three models \eqref{three_model} and the observational data is shown in Figure \ref{fig:hubble_diag}. It should be noted that supernovae are low-redshift phenomena; for example, the data used in this article have $z_{\text{max}} = 1.8$. Thus, the high-energy correction terms provided by the modified Friedmann equation cannot be detected via supernova observations.
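The fitting procedure above can be sketched as follows (a plain Python sketch; the Hubble function is the conventional model $H_\text{EH}$ with the Planck 2015 parameters quoted in the figure caption, and any data passed to the $\chi^2$ routine is a user-supplied placeholder, not the Davis et al.\ compilation):

```python
import math

C_KM_S = 2.99792458e5  # speed of light [km/s]

def H_EH(z, H0=67.0, Om=0.308, OL=0.692):
    """Hubble function of the conventional flat model [km/s/Mpc]."""
    return H0 * math.sqrt(Om * (1.0 + z) ** 3 + OL)

def mu(z, n_steps=2000):
    """Distance modulus mu(z) = 5 log10[(1+z) c int_0^z dz'/H(z') / 10 pc]."""
    h = z / n_steps
    s = 0.5 * (1.0 / H_EH(0.0) + 1.0 / H_EH(z))  # trapezoidal rule
    for i in range(1, n_steps):
        s += 1.0 / H_EH(i * h)
    d_c = C_KM_S * s * h          # comoving distance [Mpc]
    d_L = (1.0 + z) * d_c         # luminosity distance [Mpc]
    return 5.0 * math.log10(d_L * 1e6 / 10.0)  # Mpc -> pc, 10 pc zero point

def chi2_per_dof(data, n_free=4):
    """data: iterable of (z_i, mu_obs_i, sigma_i); returns chi^2 / N."""
    chi2 = sum((mu(z) - m) ** 2 / s ** 2 for z, m, s in data)
    return chi2 / (len(data) - n_free)
```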
Note also that, practically, all three models have only three free parameters, $H_0$, $\Omega_{m,0}$, and $\Omega_{\Lambda,0}$, while $\Omega_{\chi,0}$ can be approximated from $\Omega_{\chi,0} \approx 1 - \Omega_{m,0} - \Omega_{\Lambda,0}$. From the Hubble diagram of Figure \ref{fig:hubble_diag}, it can be seen that the braneworld Einstein-Hilbert model \eqref{friedmann_1_braneworld_EH} and the general scalar-tensor braneworld model \eqref{friedmann_horndeski}, which have different high-order corrections, cannot be distinguished. Meanwhile, the conventional four-dimensional model can be distinguished from the others by the contribution of the dark radiation term $\Omega_{\chi,0}$. \section{Conclusion and outlook} A braneworld cosmological model with a general scalar-tensor action comprising various Horndeski Lagrangians has been investigated, and the derivation of the corresponding field equations has been given. The resulting Friedmann equation of this model, in the case of the strongly coupled $\mathcal{L}_5$ model, produces Cardassian terms $\rho^n$ with $n=\pm 1/2$, which can serve as an alternative explanation for the accelerated expansion phase of the universe. The latest combined observational facts from BAO, CMB, SNIa, $f_{\sigma_8}$, and $H_0$ value observations suggest that the $n=-1/2$ term lies quite close to the constrained value. On the other hand, the weakly coupled $\mathcal{L}_5$ case has several new correction terms, e.g., the cubic term $\rho^3$ and the dark radiation-matter interaction term $\propto \chi a^{-4} \rho$. Furthermore, this model provides a cosmological constant constructed from the bulk scalar field, requires no brane tension, and satisfies the BBN constraint naturally. As future work, on the computational side it would be interesting to perform a numerical parameter fit for the specific Cardassian model provided in this paper \eqref{cardassian_friedmann_eq}.
On the theoretical side, one might consider braneworld cosmological models with an action more general than Horndeski, for example beyond-Horndeski models or Proca theories. \section*{Acknowledgments} AS and FPZ gratefully acknowledge the support from the Ministry of Research, Technology, and Higher Education of the Republic of Indonesia for the PDUPT Research Grant.
\section{Introduction} Analog Integrated Circuit (IC) design is a complex process involving multiple steps. Billions of nanoscale transistor devices are fabricated on a silicon die and connected via intricate metal layers during those steps. The final product is an IC, which powers much of our life today. An essential aspect of IC design is analog design, which continues to suffer from long design cycles and high design complexity due to the lack of automation in analog Electronic Design Automation (EDA) tools compared to digital flows. In particular, ``circuit sizing'' tends to consume a significant portion of analog designers' time. To tackle this labor-intensive process and meet time-to-market requirements, analog circuit sizing automation has attracted high interest in recent years. Prior work on analog circuit sizing automation can be divided into two categories: knowledge-based and optimization-based methods. In the knowledge-based approach, design experts transcribe their domain knowledge into algorithms and equations \cite{10.1023/A:1015098112015},\cite{JANGKRAJARNG2003237}. However, such methods create dependency on expert human designers, circuit topology, and technology nodes. Thus, these methods are highly time-consuming and not scalable. Optimization-based methods are further categorized into two classes: equation-based and simulation-based methods. Equation-based methods try to express circuit performance via posynomial equations or regression models using simulation data. Then equation-based optimization methods such as Geometric Programming \cite{1196196}, \cite{BoydOpAmpGP} or Semidefinite Programming (SDP) relaxations \cite{6881491} are applied to convex or non-convex formulated problems to find an optimal solution. Although those methods are generally fast, developing accurate expressions for circuit performances is not easy, and the resulting models deviate significantly from the actual values.
On the other hand, simulation-based methods employ black-box or learning-based optimization techniques to explore the design space. These methods make a guided exploration of the search space and target a global minimum using the real evaluations from circuit simulators. Traditionally, various model-free optimization methods have been used, such as particle swarm optimization (PSO) \cite{Vural2012AnalogCS} and advanced differential evolution \cite{Liu:2013:ADA:2526263}. Although these methods have good convergence behavior, they are known to be sample-inefficient (i.e., SPICE simulation intensive). Recently, surrogate-model-based and learning-based methods have become increasingly popular due to their efficiency in exploring the solution space. In surrogate-model-based methods, Gaussian Process Regression (GPR) \cite{10.5555/1162254} is generally used for design space modeling, and the next design point is determined through model predictions. For example, the GASPAD method was introduced for Radio Frequency (RF) IC synthesis, where GPR predictions guide an evolutionary search \cite{GASPAD}. The WEIBO method proposed a GPR-based Bayesian Optimization \cite{NIPS2012_05311655} algorithm where a blended version of weighted Expected Improvement (wEI) and the probability of feasibility is selected as the acquisition function to handle the constrained nature of analog sizing \cite{Lyu:2018:MBO:3195970.3196078}. The main drawback of Bayesian Optimization methods is scalability, as GP modeling has cubic complexity in the number of samples, $\mathcal{O}(N^3)$. Recently, reinforcement learning algorithms have been applied in this area as learning-based methods. GCN-RL \cite{Wang2020GCNRLCD} leverages Graph Neural Networks (GNN) and proposes a transferable framework.
Despite reporting superior results over various methods and human designers, a) it requires thousands of simulations for convergence (without transfer learning), and b) it requires considerable engineering effort for observation-vector design, architecture selection, and reward shaping. AutoCkt \cite{Settaluri2020AutoCktDR} is a sparse sub-sampling RL technique optimizing the circuit parameters by taking discrete actions in the solution space. AutoCkt shows more efficiency than random RL agents and Differential Evolution. Still, it must be trained with thousands of SPICE simulations before deployment, which is costly. In this paper, we introduce DNN-Opt, a two-stage deep learning black-box optimization scheme, where we merge the strengths of Reinforcement Learning (RL), Bayesian Optimization (BO), and population-based techniques in a novel way. The key features of the DNN-Opt framework are as follows. \begin{itemize} \item We tailored a two-stage Deep Neural Network (DNN) architecture for black-box optimization tasks, inspired by the actor-critic algorithms developed in the RL community. \item To leverage the convergence behavior of population-based methods, DNN-Opt adopts a population-based search space control mechanism. \item We introduce a recipe for extending our work to large industrial designs using sensitivity analysis. In collaboration with a design house, we demonstrate that our work can also efficiently size large circuits with tens of thousands of devices in addition to small building blocks. \end{itemize} The rest of the paper is organized as follows. We formulate the analog circuit sizing problem in Section II and introduce DNN-Opt with its RL core and other details. In Section III, the performance of DNN-Opt is demonstrated on small building blocks and large industrial circuits. We also provide performance comparisons of DNN-Opt with other optimization methods. The conclusions are provided in Section IV.
\section{DNN-Opt Framework} \subsection{Analog Circuit Sizing: Problem Formulation} We formulate the analog circuit sizing task as a constrained optimization problem: \begin{equation} \label{eq:prob_formulation} \begin{aligned} \operatorname{minimize}\text{ } & f_{0}(\mathbf{x}) \\ \text { subject to } & f_{i}(\mathbf{x}) \leq 0 \quad \text { for } i=1, \ldots, m \end{aligned} \end{equation} where $\mathbf{x}\in\mathbb{D}^{d}$ is the parameter vector and $d$ is the number of design variables of the sizing task. Thus, $\mathbb{D}^{d}$ is the design space. $f_0(\mathbf{x})$ is the objective performance metric we aim to minimize. Without loss of generality, we denote the $i^\text{th}$ constraint by $f_i(\mathbf{x})$. \subsection{DNN-Opt Core: RL Inspired Two-Stage DNN Architecture} \begin{figure} \centering \includegraphics[scale=0.37]{figures/DNN_Opt_Core_final.pdf} \vspace*{-3mm} \caption{DNN-Opt Framework} \label{fig:DNN_Opt_Core_final} \vspace*{-4mm} \end{figure} The overall framework of DNN-Opt is shown in Figure \ref{fig:DNN_Opt_Core_final}. DNN-Opt comprises a two-stage deep neural network architecture that interacts with a circuit simulator during the optimization process. The flow starts from generated samples in the design space; then, a critic network is used to predict any new design point's performance. This prediction is used by the actor network to propose new candidates for simulation. This search scheme efficiently mimics BO behavior in space exploration. Besides, the sample generation is further optimized by adopting a population control scheme. The two-stage network architecture of our work borrows its structure from the Deep Deterministic Policy Gradient (DDPG) algorithm \cite{journals/corr/LillicrapHPHETS15}, an RL actor-critic algorithm \cite{Konda00actor-criticalgorithms} developed for continuous action spaces.
However, actor-critic algorithms are not directly applicable to analog circuit sizing since it is not a Markov Decision Process (MDP) \cite{10.5555/3312046}, which is a \textit{necessary condition} for any RL problem. Therefore, we adapt the DDPG algorithm with significant modifications tailored for analog circuit sizing. In the context of analog circuit sizing, we will keep some of the RL notation but replace much of it for simplicity and clarity.\\ \textbf{Design}: A design is a set of circuit parameters which we denote by $\mathbf{x}$; it is a vector of size $d$ where each element corresponds to a particular design variable. The optimization goal is to find the optimal $\mathbf{x}_{\text{opt}}$ which satisfies Eq. \ref{eq:prob_formulation}. \\ \textbf{Population}: A population is a set of multiple designs. \\ \textbf{Design Population Matrix}: We define a design population matrix as $\mathbf{X} \in \mathbb{R}^{N{\times}d}$, where $N$ is the population size. The parameters of the $i^\mathrm{th}$ design form a row in the design population matrix $\mathbf{X}$, which is denoted as $\mathbf{x}_i$. \\ \textbf{State Space}: Our work maps the optimization parameters (circuit design variables) to the state representation in RL notation. The state of the $k^\text{th}$ design is $\textbf{s}_{k} = \mathbf{x}_k$.\\ \textbf{Action Space}: Each action $\textbf{a}_{k}$ in our new architecture corresponds to a \textit{change} in the optimization parameter vector $\mathbf{x}_k$, which can be denoted as $\textbf{a}_{k} = \Delta\mathbf{x}_k$. An intuitive explanation of this choice is that an ideal action for an optimization task should propose a change in each design variable that leads to a better design. \\ \textbf{Critic-Network}: Originally, a critic network parameterized by $\theta^Q$ approximates the return value of an MDP, $\mathrm{Return} = Q(s_t,a_t | \theta^Q)$. We modify its role and use this network as a proxy in lieu of the expensive SPICE simulator.
Our modified critic network provides a vector-to-vector mapping by taking an $(\mathbf{x}, \Delta \mathbf{x}) \in \mathbb{D}^{2d}$ as input and providing performance predictions $Q(\mathbf{x},\Delta \mathbf{x} | \theta^Q) \in \mathbb{R}^{m+1}$ at the output, with one dimension for the objective specification and $m$ for the constraint specifications.\\ \textbf{Actor-Network}: An actor network parameterized by $\theta^\mu$ takes a state as its input and determines an action to take, $\mathbf{a}_k = \mu(\mathbf{s}_k | \theta^\mu)$. In the context of analog circuit sizing, the actor network provides the change in the design parameter vector for design $k$ as $\Delta \mathbf{x}_k = \mathbf{a}_k = \mu(\mathbf{x}_k | \theta^\mu)$.\\ \textbf{Critic-Network Training}: We utilize the critic network for modeling the relationship between design variables and circuit performance. For effective training, we use data augmentation techniques to generate $N^2$ \textit{pseudo-samples} $\mathrm{(ps)}$ from the original $N$ samples. In order to generate pseudo-samples, we use two samples $\mathbf{x}_i \text{ and }\mathbf{x}_j$ and the corresponding spec vectors $f(\mathbf{x}_i) \text{ and }f(\mathbf{x}_j)$, as follows: \begin{equation} \vspace{-2mm} \label{eq:pseudo_smp} \begin{aligned} &\mathbf{x}^\mathrm{ps}_{ij} = \left[\mathbf{x}_{i}, \Delta \mathbf{x}_{ij}\right] = \left[\mathbf{x}_i, \mathbf{x}_j - \mathbf{x}_i\right] \\ &f^{\mathrm{ps}}(\mathbf{x}^\mathrm{ps}_{ij}) = f(\mathbf{x}_{j}) \end{aligned} \end{equation} This changes the input dimensionality of the critic network from $d$ to $2d$, since we now use $(\mathbf{x}, \Delta \mathbf{x})$ instead of $\mathbf{x}\text{ or } \mathbf{(x + } \Delta \mathbf{x})$. Our experiments conducted on the Bayesmark \cite{BayesMark} benchmark problems showed that using $\mathrm{2}d$ inputs and training with pseudo-samples boosted the critic network's accuracy significantly over a network trained with $d$ inputs and the original samples.
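The pseudo-sample construction of Eq. \eqref{eq:pseudo_smp} can be sketched as follows (a minimal NumPy sketch, not the production implementation):

```python
import numpy as np

def make_pseudo_samples(X, F):
    """Build N^2 pseudo-samples from N simulated designs (Eq. 2).

    X: (N, d) design matrix; F: (N, m+1) simulated spec vectors.
    Returns critic inputs (N^2, 2d) = [x_i, x_j - x_i] and
    targets (N^2, m+1) = f(x_j), the spec of the end-point design.
    """
    N, d = X.shape
    inputs = np.empty((N * N, 2 * d))
    targets = np.empty((N * N, F.shape[1]))
    for i in range(N):
        for j in range(N):
            inputs[i * N + j] = np.concatenate([X[i], X[j] - X[i]])
            targets[i * N + j] = F[j]   # spec vector of the end-point design
    return inputs, targets
```

Because each target is the already-simulated spec of design $\mathbf{x}_j$, no extra SPICE runs are needed for the augmented set.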
For a batch size of $N_b$ pseudo-samples, the following Mean Squared Error (MSE) loss function is used to train the critic network. \begin{equation} \label{eq:train_crit} \resizebox{0.91\columnwidth}{!}{% $L\left(\theta^{Q}\right)=\frac{1}{N_b (m+1)} \sum^{N_b}_{k=1}\sum^{{m+1}}_{l=1} \left( Q(\mathbf{x}_k, \Delta \mathbf{x}_k)^l - f(\mathbf{x}_k + \Delta \mathbf{x}_k)^l\right)^2 $} \end{equation} where $Q(\mathbf{x}_k, \Delta \mathbf{x}_k)^l$ is the critic network's approximation of the $k^{\text{th}}$ pseudo-sample's $l^{th}$ performance and $f(\mathbf{x}_k + \Delta \mathbf{x}_k)^l$ is the SPICE-simulated value for the same design-performance pair. To clarify, we have SPICE simulation values for the pseudo-samples because of the way they are constructed. \\ \textbf{Actor-Network Training}: The actor network is trained after the critic network is trained and its hyperparameters are fixed. The training of the actor network corresponds to a search in the design space for \textit{better} designs. We define a Figure of Merit (FoM) function, $g(\cdot)$, based on the performance vector to objectively quantify how much better a design is than others. \begin{equation} \label{eq:scalarizationfunc} g\left[f(\mathbf{x})\right] = w_0\times f_0(\mathbf{x}) + \sum_{i=1}^{{m}} \mathrm{min}\left(1, \mathrm{max}(0, w_i \times f_i(\mathbf{x}))\right) \end{equation} where $w_i$ is the weighting factor. Note that the $\mathrm{max}(\cdot)$ clipping is used to equate designs once a constraint is met, and the $\mathrm{min}(\cdot)$ clipping is used for practical purposes to prevent a single constraint violation from dominating the $g(\cdot)$ value. We train the actor network parameters by using the $g(\cdot)$ function and replacing the SPICE simulation values $f(\cdot)$ with the critic network predictions $Q(\mathbf{x},\Delta\mathbf{x})$. We will further use a population of ``elite" solutions (es) of size $N_{\text{es}}$ to restrict the search space of the actor network.
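The FoM of Eq. \eqref{eq:scalarizationfunc} can be transcribed directly (a minimal sketch; the weights $w_i$ are user-chosen and illustrative here):

```python
def fom(f, w):
    """FoM g[f(x)] of Eq. (4): weighted objective plus clipped penalties.

    f: spec vector [f0, f1, ..., fm], constraints satisfied when f_i <= 0;
    w: positive weights [w0, ..., wm] (illustrative, user-chosen).
    """
    objective = w[0] * f[0]
    # max(0, .) zeroes out satisfied constraints; min(1, .) caps each
    # violation so no single constraint dominates the FoM
    penalty = sum(min(1.0, max(0.0, w[i] * f[i])) for i in range(1, len(f)))
    return objective + penalty
```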
The population of elite solutions is a subset of the total population determined by FoM ranking. For a batch size of $N_b$ samples, the following loss function is used to train the actor network. \begin{equation} \label{eq:train_actor} L\left(\theta^{\mu}\right) = \frac{1}{N_b} \sum^{N_b}_{k=1} \left(g\left[ Q(\mathbf{x}_k, \mu(\mathbf{x}_k \mid \theta^\mu)) \right] + \Vert\lambda* \mathrm{viol}_k\Vert_2\right) \end{equation} \noindent where $\mu(\mathbf{x}_k \mid \theta^\mu)$ is the parameter-change vector $\Delta \mathbf{x}_k$ proposed by the actor network. $\left(\lambda * \mathrm{viol}_k\right)$ is an element-wise vector multiplication, where $\lambda$ is a weighting coefficient chosen to be very large to prevent any boundary violation and keep the search in the restricted search region. The total boundary violation $\mathrm{viol}_k$ for action $k$ is defined as follows: \begin{equation} \resizebox{0.91\columnwidth}{!}{% $\mathrm{viol}_k = \mathrm{max}(0,lb_{\mathrm{rest}} - (\mathbf{x}_k + \Delta \mathbf{x}_k)) + \mathrm{max}(0, (\mathbf{x}_k + \Delta \mathbf{x}_k) - ub_{\mathrm{rest}}) $ } \end{equation} where $lb_{\mathrm{rest}}$ and $ub_{\mathrm{rest}}$ are the restriction boundary vectors for the design variables determined by the population of elite solutions, given by: $$ \begin{aligned} lb^i_{\mathrm{rest}} =& \mathrm{min}(\mathbf{x}^i) \;\; \forall i =1, \ldots,d \\ ub^i_{\mathrm{rest}} =& \mathrm{max}(\mathbf{x}^i) \;\; \forall i =1, \ldots,d \end{aligned} $$ where $\mathbf{x}^i$ is the column vector of size $N_{\text{es}}$ consisting of the $i^{th}$ parameter of all designs in the elite population. The hyperparameters (number of layers, number of nodes, learning rate, etc.) of the architecture for the actor and critic networks were found based on empirical studies. \subsection{Sensitivity Analysis} We use sensitivity analysis to prune the design search space for efficiently finding an optimized solution.
A blind search space exploration may lead to wasted circuit simulations during optimization. For example, in a classical seven-transistor Operational Amplifier (OpAmp) \cite{BoydOpAmpGP}, the power dissipation does not depend on the differential pair devices once they are in saturation. Thus, if we want to size a circuit for reduced power, we should not treat the device properties of the differential pair devices as variables. To use sensitivity analysis in practice for any generic circuit, we first traverse the circuit hierarchy and collect all unique device design variables, $d$. Then, we perform sensitivity analysis by perturbing each of the design variables around its nominal value and observing its impact on the objective and constraints, $f_i$. More formally, we compute the sensitivity $\mathcal{S}_{ij}$ as \begin{equation} \label{eq:sens_equation} \mathcal{S}_{ij} = \frac{\delta f_i}{\delta d_j}, \forall i=0,\ldots,m ; j = 1, \ldots, d. \end{equation} We only need to consider design variables for which $\mathcal{S}_{ij} > thresh$, where $thresh$ is a user-defined threshold. Empirically, this analysis prunes the design search space effectively, allowing us to work on large-scale circuits. We are now ready to present the overall framework of DNN-Opt in the next subsection. \subsection{DNN-Opt: Overall Framework} The overall framework of DNN-Opt is provided in Algorithm \ref{alg:dnnopt}. As a prerequisite, we apply sensitivity analysis to a large design and reduce the number of design variables to a workable range. We then randomly sample $N_{\mathrm{init}}$ points from the design search space to build the initial population. For optimization iteration $t$, the first step is to initialize the actor-critic parameters, followed by pseudo-sample generation. Next, the actor network and critic network are trained. After this, an elite population is constructed based on the FoM of the total population (this elite population will be updated over the optimization iterations).
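The elite-population selection and the restricted boundaries $lb_{\mathrm{rest}}$, $ub_{\mathrm{rest}}$ it induces can be sketched as follows (a NumPy sketch, not the production implementation):

```python
import numpy as np

def elite_and_bounds(X_tot, fom_values, n_es):
    """Select the N_es lowest-FoM designs and derive lb_rest / ub_rest.

    X_tot: (N, d) design population matrix; fom_values: (N,) FoM per design
    (smaller is better). Returns elite designs and per-variable bounds.
    """
    order = np.argsort(fom_values)   # ascending FoM: best designs first
    X_es = X_tot[order[:n_es]]
    lb_rest = X_es.min(axis=0)       # element-wise over the elite designs
    ub_rest = X_es.max(axis=0)
    return X_es, lb_rest, ub_rest
```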
The next query point is generated from the elite population, $\mathbf{X}^{\text{es}}$, using the pre-trained actor-critic networks as follows. We use every design, $\mathbf{x}_{i}^{\text{es}}$, in the pool of the elite population as input to the actor network. The output of the actor network, $\Delta \mathbf{x}_i^\mathrm{es} = \mu(\mathbf{x}_{i}^{\mathrm{es}})$, is the proposed change of the design parameters in search of an optimal solution. With the imposed exploration noise $(\mathcal{N})$, a candidate design point is naturally formed as $\mathbf{x}^\mathrm{ca}_i = \mathbf{x}^\text{es}_i + \mu(\mathbf{x}^{es}_i) + \mathcal{N}$. At this step, we have exactly the same number of proposed candidates, $\mathbf{X}^\text{ca} = [\mathbf{x}^\mathrm{ca}_i, \dots, \mathbf{x}^\mathrm{ca}_{N_\text{es}}]$, as the size of the elite population. Once the population pairs, $\mathbf{X}^\text{es}$ and $\mathbf{X}^\text{ca}$, are formed, the next sample point for iteration $t$ is selected using Eq. \ref{eq:find_query}. \begin{equation} \label{eq:find_query} \resizebox{0.91\columnwidth}{!}{% $\mathbf{x}_{t}^\mathrm{sample} = \big[\mathbf{x}^\text{ca}_k \text{ for } k=arg\,min_i\left(g[Q(\mathbf{x}^\text{es}_i, \mathbf{x}^\text{ca}_i - \mathbf{x}^\text{es}_i)] \right)\big] $} \end{equation} \begin{algorithm}[h] \caption{DNN-Opt Algorithm} \label{alg:dnnopt} \begin{algorithmic}[1] \Require Dimensionality reduction with sensitivity analysis \textbf{if} design is \textit{large} \Require An initial sample set $\mathbf{X}^\mathrm{init}$ of $N_\mathrm{init}$ designs and their evaluations $f(\mathbf{X}^\mathrm{init})$ \State Define total population $\mathbf{X}^\mathrm{tot} = \mathbf{X}^\mathrm{init}$ \For {$t = 1, 2, \dots,t_{max}$} \State Initialize actor \& critic network parameters $\theta^\mu$ and $\theta^Q$ \State Generate pseudo-samples using existing design $\mathbf{X}^\mathrm{tot}\rightarrow$ Eqn. \ref{eq:pseudo_smp} \State Train critic-network $\rightarrow$ Eqn.
\ref{eq:train_crit} \State Train actor-network $\rightarrow$ Eqn. \ref{eq:train_actor} \State Calculate FoM for each design by $\text{FoM} = g[f(\mathbf{X}^\text{tot})]$ \State Choose $N_\text{es}$ designs with smallest FoM to form population of elite solutions $\mathbf{X}^\mathrm{es}$. \State Find query point (next sample) $\mathbf{x}_t^\mathrm{sample}$ using actor-model $\rightarrow$ Eqn. \ref{eq:find_query} \State Simulate the query point and obtain specs $f(\mathbf{x}_t^\mathrm{sample})$ via SPICE sims \If {return condition (e.g., specs are met)} \State break \EndIf \State $\mathbf{X}^\mathrm{tot}\text{.append}(\mathbf{x}_t^\mathrm{sample})$ \State Go back to line 3 \EndFor \State \Return The design with lowest FoM \end{algorithmic} \vspace*{0mm} \end{algorithm} \section{Experimental Results \label{sec:experiments}} \vspace{-1.5mm} To demonstrate the reliability and efficiency of DNN-Opt, we apply it to two sets of experiments using six circuit examples. The first experiment set covers small building blocks where every transistor is parameterized and sized; the second includes larger industrial circuits with thousands of nodes and devices. \vspace{-2mm} \subsection{Experiments with Small Building Blocks} \vspace{-1mm} \label{sec:small_building_blocks} We tested DNN-Opt on two small building blocks: a folded cascode amplifier and a strong-arm latch comparator. We included the majority of the circuit performances in the constraint list to mimic real-world design experience. Both designs are implemented in 180nm CMOS technology.
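The candidate-generation and query-selection steps of Algorithm \ref{alg:dnnopt} (Eq. \ref{eq:find_query}) can be sketched as follows. This is a simplified illustration: the `actor` and `critic` lambdas are hypothetical stand-ins for the trained networks, and the FoM mapping $g[\cdot]$ is folded into the critic, which therefore directly returns a predicted FoM (lower is better).

```python
import numpy as np

rng = np.random.default_rng(0)

def select_query(elite, actor, critic, noise_scale=0.0):
    """One DNN-Opt-style query step (sketch): perturb every elite design
    with the actor's proposed delta, then pick the candidate whose
    critic score (predicted FoM, lower is better) is smallest."""
    candidates, scores = [], []
    for x in elite:
        delta = actor(x) + noise_scale * rng.standard_normal(x.shape)
        cand = x + delta                     # x_ca = x_es + mu(x_es) + N
        candidates.append(cand)
        scores.append(critic(x, cand - x))   # Q(x_es, x_ca - x_es)
    return candidates[int(np.argmin(scores))]

# Toy stand-ins (hypothetical): the actor pushes designs halfway toward
# the origin; the critic scores the resulting point by squared distance.
actor = lambda x: -0.5 * x
critic = lambda x, dx: float(np.sum((x + dx) ** 2))
elite = [np.array([2.0, 2.0]), np.array([0.2, 0.0]), np.array([4.0, 1.0])]
print(select_query(elite, actor, critic))  # candidate closest to the origin
```

Only the single selected candidate is sent to the costly SPICE simulation, which is what keeps the per-iteration simulation count at one.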
We compare our algorithm with three other well-known methods: a) a Differential Evolution (DE) method, which is a conventional population-based model-free algorithm; b) Bayesian Optimization with weighted Expected Improvement (BO-wEI)~\cite{Lyu:2018:MBO:3195970.3196078}, a modified version of Bayesian Optimization for constrained problems; and c) the GASPAD method~\cite{GASPAD}, a surrogate-model (GP) assisted evolutionary framework. To account for the randomized techniques involved in all these methods, we repeat the experiments ten times and report each method's statistics. We determine the simulation budgets for our experiments by considering the convergence behavior of the methods. DE has a simulation budget of 10000, while BO-wEI, GASPAD, and DNN-Opt are limited to 500 simulations. All experiments are run on a workstation with an Intel Xeon CPU and 128GB RAM, using a commercial SPICE simulator. We use several metrics to compare the algorithms. We provide statistics of the methods for each example, and we denote the fraction of runs in which a feasible solution is found by the \textit{success rate}. We also share the evolution of the FoM value calculated from Eq. \ref{eq:scalarizationfunc} to demonstrate each algorithm's convergence during runtime. The constraint expressions given in Eq. \ref{specifications_folded} and \ref{specifications:SA} can be trivially readjusted to fit the form of Eq. \ref{eq:prob_formulation}. \textbf{Folded Cascode OTA}: The first test case is a two-stage folded-cascode Operational Transconductance Amplifier (OTA) (Figure \ref{fig:foldedschematic}). It has 20 design variables, and the designer-provided search ranges are shown in Table I.
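Eq. \ref{eq:scalarizationfunc} itself is not reproduced in this excerpt; the sketch below shows one common penalty-style scalarization, included purely as an illustrative assumption of how a constrained FoM (lower is better) can be computed from an objective and a list of constraints.

```python
def fom(objective, constraints):
    """Illustrative constrained FoM (an assumption for illustration only;
    the paper's own scalarization, Eq. eq:scalarizationfunc, is not
    reproduced in this excerpt).

    constraints: list of (value, bound, sense) with sense '>' or '<'.
    Feasible designs score by the objective alone; infeasible ones add
    normalized violations, so lower FoM is always better.
    """
    violation = 0.0
    for value, bound, sense in constraints:
        gap = (bound - value) if sense == '>' else (value - bound)
        violation += max(gap, 0.0) / max(abs(bound), 1.0)   # normalize
    return objective + violation

# A design meeting "gain > 60" scores lower than one that misses it.
print(fom(0.7, [(65.0, 60.0, '>')]))             # → 0.7 (feasible)
print(round(fom(0.7, [(54.0, 60.0, '>')]), 3))   # → 0.8 (0.1 violation)
```

Any scalarization of this shape lets the constrained problem be tracked with a single convergence curve, which is how the FoM plots below are produced.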
\begin{figure}[t] \vspace*{0mm} \centering \vspace*{0mm} \includegraphics[scale=0.45]{figures/FoldedCascodeOTA.pdf} \vspace*{-2mm} \caption{Schematic of the folded-cascode OTA} \label{fig:foldedschematic} \vspace*{-4mm} \end{figure} \vspace*{-3mm} \begin{table}[!h] \centering \caption{Design parameters and ranges for the folded-cascode OTA} \label{table:foldedspace} \vspace*{-4mm} \begin{center} \resizebox{0.9\columnwidth}{!}{% \begin{tabular}{l |l |l| l} \hline Parameter Name & Unit & LB & UB \\ \hline L1-L2-L3-L4-L5-L6-L7& $\mu\text{m}$& 0.18 & 2\\ \hline W1-W2-W3-W4-W5-W6-W7 \;\;\;& $ \mu\text{m}$& 0.24 & 150\\ \hline N1-N2-N8-N9& integer & 1 & 20\\ \hline MCAP & $\text{fF}$ & 100 & 2000\\ \hline Cf & $\text{fF}$ & 100 & 10000\\ \hline \end{tabular}% } \end{center} \vspace*{-3mm} \begin{center} \begin{tablenotes} \small \item W: device width; L: device length; UB: upper bound; LB: lower bound \end{tablenotes} \end{center} \vspace*{-3mm} \end{table} The sizing problem is defined as follows: \vspace*{0mm} \small \begin{equation}\label{specifications_folded} \resizebox{0.92\columnwidth}{!}{% $ \begin{array}{l} {\text { minimize } \text{Power}} \\ {\text { s.t. } \text{\enspace DC Gain} >60 \mathrm{\enspace dB} \qquad {\text{Settling Time}<\mathrm{30} \mathrm{\enspace ns}}} \\ {\qquad \begin{array}{l l} {\text{CMRR}>80 \mathrm{\enspace dB}} & {\text{Saturation Margin}>\mathrm{50 \enspace mV}} \\ {\text{PSRR}>80 \mathrm{\enspace dB}} & {\text{Unity Gain Freq.}>30 \mathrm{\enspace MHz}}\\ {\text{Out. Swing}>2.4 \mathrm{\enspace V}} & {\text{Out. Noise}<\mathrm{30} \mathrm{\enspace mV_{rms}}} \\ {\text{Static error}<0.1} &{\text{Phase Margin}>60 \mathrm{\enspace deg.}}\\ \end{array}} \end{array}% $} \vspace*{0mm} \end{equation} \normalsize In our experiment, the following transistors are required to operate in the saturation region: M1, M3, M4, M7, M9, M10, M12, M13, and [M15-M26]. The total number of design constraints becomes 29.
The statistical results for all the reference algorithms are shown in Table II. DNN-Opt shows high reliability and finds a feasible solution in all of its trials, whereas the other model-based methods, BO-wEI and GASPAD, fail to achieve similar behavior. DE can also find feasible results, but DNN-Opt is 24x more efficient in the number of simulations required to find the first feasible result. Table II also shows that, on average, the final design proposed by DNN-Opt draws up to 43\% less power. The modeling time required by DNN-Opt is up to 50x smaller than that of the other model-based methods, which translates into a 2.5--16x reduction in total runtime. \begin{figure}[t] \centering \includegraphics[scale=0.5]{figures/folded_FoM.pdf} \vspace*{-3mm} \caption{The average FoM (lower is better) curve for 500 simulations} \label{fig:Folded_FoM} \vspace*{-2mm} \end{figure} \begin{table} \begin{centering} \caption{Statistics for different algorithms: Folded Cascode OTA} \end{centering} \label{table:foldedresults} \vspace*{-4mm} \begin{center} \resizebox{0.96\columnwidth}{!}{% \begin{tabular}{|l|cccc|} \hline Algorithm & DE & BO-wEI & GASPAD &\textbf{DNN-Opt} \\ \hline success rate & 10/10 & 2/10 & 4/10 & \textbf{10/10}\\ \hline \# of simulations & 3200 & $>$500 & $>$500 & \textbf{132}\\ \hline Min power (m$W$) & 0.75 & 0.91 & 0.72 & \textbf{0.62}\\ \hline Max power (m$W$) & 1.53 & 1.62 & 1.75 & \textbf{0.77}\\ \hline Mean power (m$W$) & 1.14 & 1.25 & 0.96 & \textbf{0.71}\\ \hline Modeling time (h) &NA & 30 &6.5 & \textbf{0.6}\\ \hline Simulation time (h) &54 &2.7 &2.7 &2.7 \\ \hline Total runtime (h) &54 &32.7 &8.2 &\textbf{3.3} \\ \hline \end{tabular}% } \end{center} \vspace{-7mm} \end{table} Figure \ref{fig:Folded_FoM} shows the FoM curve over iterations, where DNN-Opt exhibits strong convergence behavior and outperforms the other methods.
Across all ten trials, DNN-Opt finds a feasible solution within 205 iterations (marked with a vertical dashed line). Although slow, GASPAD does converge toward the optimal FoM, whereas BO-wEI is often trapped in local optima. \textbf{Strong-Arm Latch Comparator}: The second test case is the SA-Latch Comparator, shown in Figure \ref{fig:SAschematic}. It has 13 design variables, whose names and bounds are listed in Table~\ref{table:SAspace}. \vspace*{-3mm} \begin{table}[!h] \centering \caption{Design parameters and their ranges for SA-Latch Comparator} \label{table:SAspace} \vspace*{-3mm} \begin{center} \resizebox{0.9\columnwidth}{!}{% \begin{tabular}{l |l |l| l} \hline Parameter Name & Unit & LB & UB \\ \hline L1-L2-L3-L4-L5-L6 \; \; \; \; & $\mu\text{m}$& 0.18 & 10\\ \hline W1-W2-W3-W4-W5-W6 \;\;\;\;\;\;\;\;\;\;\;& $\mu\text{m}$& 0.22 & 50\\ \hline CL\_finger& integer & 10 & 300\\ \hline \end{tabular}% } \end{center} \vspace*{-2mm} \vspace*{-2mm} \end{table} \begin{figure}[t!] \centering \includegraphics[scale=0.5]{figures/SA_Latch_FoM.pdf} \vspace*{-3mm} \caption{The average FoM (lower is better) curve for 500 simulations} \label{fig:SA_FoM} \vspace*{0mm} \end{figure} \begin{figure}[t!] \vspace*{0mm} \centering \vspace*{-3mm} \includegraphics[scale=0.45]{figures/SA_Latch_Fig_Final.pdf} \vspace*{-2mm} \caption{Schematic of SA-Latch Comparator} \label{fig:SAschematic} \vspace*{-3mm} \end{figure} The constrained optimization problem consists of 10 constraints in total: \small \begin{equation} \label{specifications:SA} \begin{array}{l}{\text { minimize } \text{Power}} \\ {\text { s.t.
} \text{\enspace Set Delay}<10 \mathrm{\enspace ns}} \\ {\qquad \begin{array}{l}{\text{Reset Delay}<6.5 \mathrm{\enspace ns}} \\ {\text{Area}< \mathrm{26} \mathrm{\enspace \mu m^2}} \\ {\text{Input-referred Noise}< \mathrm{50} \mathrm{\enspace \mu Vrms}} \\ {\text{Differential Reset Voltage}< \mathrm{1} \mathrm{\enspace \mu V}} \\ {\text{Differential Set Voltage}> \mathrm{1.195} \mathrm{\enspace V}} \\ {\text{Positive-Integration Node Reset Voltage}< \mathrm{60} \mathrm{\enspace \mu V}} \\ {\text{Negative-Integration Node Reset Voltage}< \mathrm{60} \mathrm{\enspace \mu V}} \\ {\text{Positive-Output Node Reset Voltage}< \mathrm{0.35} \mathrm{\enspace \mu V}} \\ {\text{Negative-Output Node Reset Voltage}< \mathrm{0.35} \mathrm{\enspace \mu V.}} \\ \end{array}} \end{array} \vspace*{0mm} \end{equation} \normalsize The statistical results for all the reference algorithms are shown in Table~\ref{table:SA_results}. Due to the relatively tighter constraints of the SA-Latch Comparator, the methods typically needed a larger number of simulations to converge. DNN-Opt is the only method that finds a feasible solution in all trials, and it shows more than 30x better sample efficiency than DE. GASPAD shows relatively competitive results, but DNN-Opt finds a solution with 25\% lower power consumption than the successful runs of GASPAD. The runtime observations are similar to the folded-cascode case. FoM curves for the different methods are shown in Figure \ref{fig:SA_FoM}. DNN-Opt finds a feasible solution within 348 simulations, much earlier than the others. BO-wEI shows a similar convergence trend in the initial iterations but then fails to model one of the constraints properly: all BO-wEI runs were unable to meet the input-referred-noise constraint, and some also failed set delay. \vspace{-2mm} \begin{table}[h!]
\centering \caption{SA Latch Comparator Results} \vspace{-2mm} \label{table:SA_results} \resizebox{0.96\columnwidth}{!}{% \begin{tabular}{|c|c|c|c|c|} \hline Algorithm&DE&BO-wEI&GASPAD&\textbf{DNN-Opt} \\ \hline success rate & 5/10 & 0/10 & 6/10 & \textbf{10/10}\\ \hline \# of simulations & $>$10000 & $>$500 & $>$500 & \textbf{330}\\ \hline Min power ($\mu$W) & 2.98 & NA & 3.05 & \textbf{2.50}\\ \hline Max power ($\mu$W) & 4.22 & NA & 3.75 & \textbf{2.75}\\ \hline Mean power ($\mu$W) & 3.57 & NA & 3.45 & \textbf{2.65}\\ \hline Modeling time (h) &NA &17 & 3 & \textbf{0.3}\\ \hline Simulation time (h) &72 &3.6 &3.6 &3.6 \\ \hline Total runtime (h) & 72 &20.6 &6.6 &\textbf{3.9} \\ \hline \end{tabular}% } \vspace{-5mm} \end{table} \subsection{Experiments with Industrial Scale Circuits } We tested DNN-Opt on four industrial circuits designed at a very advanced technology node. These circuits were already in the process of manual sizing by expert analog designers and needed some fine-tuning. For these industrial circuits, we did not have access to the other algorithms (DE, GASPAD, BO-wEI); hence our baseline is a commercial black-box optimizer based on Simulated Annealing. As demonstrated in this section, DNN-Opt performs well on large circuits and is not limited to small examples. Analog designers assisted in selecting permissible parameter ranges of the devices, considering layout impacts and process rules. For the industrial cases, we identify critical devices based on Eq. \ref{eq:sens_equation} for the failing constraints (the $f_i$'s of Eq. \ref{eq:prob_formulation}). Note that MLParest \cite{9218495} was used in the DNN-Opt loop; it helps analog designers estimate post-layout effects early in the design. \textbf{Inverter Chain}: The first case is a simple inverter chain used mainly for tool development and flow testing. We used all eight devices in the four-stage inverter chain. There were only two specs: delay and power.
\textbf{Level Shifter}: Sensitivity analysis identified ten critical devices impacting the failing performances, which led to a design space of size $3.9\times 10^{15}$. There were 60 specs in total, such as delay, rise, fall, power, and current. \textbf{Low-Dropout (LDO) Regulator}: We used sensitivity analysis to identify six critical devices, leading to a search space of size $1.6\times 10^{13}$. The circuit had PSRR, Gain Margin, Phase Margin, DC Gain, and GBW, among its nine constraints. The number of devices is high due to arrayed instances used by the analog engineer. \textbf{Continuous-Time Linear Equalizer (CTLE)}: Sensitivity analysis identified eight critical devices impacting the failing performances. With the design parameters and ranges identified by analog designers, we had a design space of size $3.3\times 10^{25}$. There were 14 constraints in total, such as DC Gain, offset, Nyquist Gain, Fpeak, Peaking Max, and Power. As illustrated in Table~V, DNN-Opt outperforms the commercial optimizer by 5x in the number of simulations required to meet the constraints. We would like to emphasize that we can handle the fairly complex CTLE circuit using 4x fewer costly SPICE simulations. Additionally, the optimal solution proposed by DNN-Opt consumed 8\% less power than the simulated-annealing solution. Our examples represent real use cases where designers had already spent several days' worth of human time fixing constraints. Had we started with designs without any human-designer knowledge baked in, we would have seen even greater gains in sample efficiency, as in Section \ref{sec:small_building_blocks}.
\begin{table} \begin{centering} \caption{DNN-Opt Results on Industrial Circuits} \vspace{-3mm} \end{centering} \label{table:industrialcktresult} \begin{center} \resizebox{0.99\columnwidth}{!}{% \begin{tabular}{|l|c|c|c|c|} \hline Circuit & MOS & Nodes & Simulated Annealing (SA) &\textbf{DNN-Opt} \\ \hline Inverter Chain & 8 & 7 & $>$1000 &\textbf{90}\\ \hline Level Shifter & 1.2k & 3.9k & 1200 &\textbf{195}\\ \hline LDO & 167k & 2.8k & 552 &\textbf{112}\\ \hline CTLE & 173k & 63k & 587 &\textbf{150}\\ \hline \end{tabular}% } \end{center} \vspace*{-3mm} \begin{center} \begin{tablenotes} \small \item Number of SPICE simulations shown in columns SA and DNN-Opt for meeting constraints (lower is better). \end{tablenotes} \end{center} \vspace*{-2mm} \vspace{-4mm} \end{table} \section{Conclusion \label{sec:conclusions}} In this work, we presented DNN-Opt, a novel sample-efficient black-box optimization algorithm that combines the strengths of deep neural networks and the reinforcement-learning paradigm. We also gave a recipe for extending our work to large circuits with thousands of devices. Our algorithm's effectiveness has been successfully demonstrated on various circuit building blocks and large industrial circuits, yielding 5--30x sample efficiency while finding a feasible solution for every circuit-sizing task and showing superior convergence curves compared to other methods. \section*{Acknowledgement} This work is supported in part by NSF under Grant No. 1704758. \bibliographystyle{IEEEtran}
\section{Introduction} Deep Learning has revolutionised many fields of science, including pattern recognition and machine learning. Its tremendous power in learning the non-linear relationships between visible and hidden layers over many layers of Deep Neural Networks (DNNs) makes it suitable for learning the most powerful features. Likewise, Deep Learning has revolutionised the field of speech recognition. A Deep Learning approach called ``Deep Speech''~\cite{DBLP:journals/corr/HannunCCCDEPSSCN14} has significantly outperformed state-of-the-art commercial speech recognition systems, such as Google Speech API and Apple Dictation: it achieves a word error rate of 19.1\%, whereas the best commercial system achieved 30.5\%. We capitalise on this success and aim to develop a deep learning framework for classifying emotion in speech. Emotion classification from speech is a widely studied topic. Before Deep Learning, Gaussian Mixture Models combined with Hidden Markov Models were the most popular method for emotion classification from speech. After the Deep Learning breakthrough in speech recognition, the landscape of emotion recognition from speech started to change, and a number of studies have emerged where Deep Learning is used for speech emotion recognition. Chao et al.~\cite{chao2014improving} use Autoencoders, the simplest form of DNN: a 3-layer neural network where output units are directly connected back to input units. Typically, the number of hidden units is much smaller than the number of visible (input/output) ones. As a result, when data is passed through the network, the input vector is first compressed (encoded) to ``fit'' a smaller representation, and then reconstructed (decoded) back. The task of training is to minimise the reconstruction error, i.e. to find the most efficient compact representation (encoding) of the input data.
Autoencoders are efficient at discriminating some data vectors in favour of others; however, they are not generative. That is, they cannot generate new data, and therefore cannot produce features as rich as those of more advanced deep architectures. Another form of Deep Neural Network is the Deep Belief Network (DBN), which stacks Restricted Boltzmann Machines (RBMs) to form a deep architecture. An RBM has a two-layer construction, a hidden layer and a visible layer, and its learning procedure consists of several steps of Gibbs sampling: in the propagation pass, the hidden layer is sampled given the visible layer, and in the reconstruction pass, the visible layer is sampled given the hidden layer. The propagation and reconstruction passes are repeated while the weights are adjusted to minimise the reconstruction error. Unlike the Autoencoder, the RBM is a generative model: it can generate samples from learned hidden representations. A number of studies have used DBNs for emotion classification from voice. In~\cite{le2013emotion}, a DBN is used in conjunction with a Hidden Markov Model; this work provides insights into similarities and differences between speech and emotion recognition. The authors of~\cite{niu2014acoustic} use a similar DBN-HMM architecture. In~\cite{sanchez2014deep}, the authors test DBNs for feature learning and classification, and also in combination with other classifiers (using DBNs for feature learning only), such as k-Nearest Neighbour (kNN) and Support Vector Machine (SVM), which are widely used for classification~\cite{wei2012distributed}. An interesting approach called Optimized Multi-Channel Deep Neural Network (OMC-DNN) has been proposed in~\cite{stolar2014optimized}, which for speech emotion recognition uses input features generated as simple 2D black-and-white images representing graphs of the MFCC coefficients. Finally, Convolutional Neural Networks (CNNs)~\cite{huang2014speech}, another deep architecture, have also been used for emotion recognition from speech.
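The RBM propagation and reconstruction passes described above can be sketched as a single contrastive-divergence (CD-1) update. This is a minimal illustration with binary units and is not tied to any particular implementation in the cited works.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd1_step(v0, W, b_h, b_v, lr=0.1):
    """One contrastive-divergence (CD-1) update for a binary RBM,
    mirroring the propagation / reconstruction passes described above."""
    # Propagation: sample the hidden units given the visible layer.
    p_h0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
    # Reconstruction: sample the visible units given the hidden layer.
    p_v1 = sigmoid(h0 @ W.T + b_v)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_h)
    # Adjust the weights to reduce the reconstruction error.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    return v1, W

v = np.array([1.0, 0.0, 1.0])              # a 3-unit visible vector
W = 0.01 * rng.standard_normal((3, 2))     # 3 visible x 2 hidden units
v1, W = cd1_step(v, W, np.zeros(2), np.zeros(3))
print(v1.shape, W.shape)
```

Stacking several such RBMs, each trained on the hidden activities of the one below, yields the DBN architecture used in the works above.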
CNNs are somewhat similar to RBMs, but instead of learning a single global weight matrix between two layers, they aim to find a set of locally connected neurons, i.e., neurons that are spatially close to each other. CNNs are mostly used in image recognition. This is because, when dealing with high-dimensional inputs such as images, it is impractical to connect each neuron to all neurons in the previous volume, and such a fully connected architecture would not take the spatial structure of the data into account. Convolutional networks exploit spatially local correlation by enforcing a local connectivity pattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume. For the same reason, CNNs are hardly applicable to input other than images. Furthermore, the work in~\cite{huang2014speech} does not consider noisy speech. Indeed, despite a number of attempts at using DBNs for emotion recognition from speech, no study has particularly looked into emotion recognition from ``noisy speech''. In this paper, we consider this challenge. \section{Work in Progress} \begin{figure}[ht] \centering \includegraphics[width=1\linewidth]{DBNerror} \caption{Emotion recognition in various noisy conditions.} \label{fig:fig2} \end{figure} Apart from their capacity to produce powerful discriminative features, DBNs are also naturally robust to noise. We use simulations to validate the robustness of DBNs. As shown in Figure~\ref{fig:fig1}, artificial noise is added to clean emotional utterances. Utterances are divided into segments and passed to the DBN for segment-level emotion classification. The Berlin emotional speech database~\cite{burkhardt2005database} is used in experiments for classifying discrete emotions. In this database, ten actors (5m/5f) each uttered ten sentences (5 short and 5 longer, typically between 1.5 and 4 seconds) in German to simulate seven different emotions: anger, boredom, disgust, anxiety/fear, happiness, neutral, and sadness.
Utterances scoring higher than 80\% emotion recognition rate in a subjective listening test are included in the database. We classify all seven emotions in this work. The numbers of speech files for these emotion categories in the Berlin database are: anger (127), boredom (81), disgust (46), fear (69), joy (71), neutral (79) and sadness (62). In order to simulate noise-corrupted speech signals, the DEMAND noise database~\cite{thiemann2013diverse} is used in this paper. This database contains 18 types of noise, including white noise and noises recorded in a cafeteria, a car, a restaurant, etc. The DBN implementation has been adopted from~\cite{keyvanrad2014brief}. The DBN used in this paper stacks three RBMs, where the first two RBMs use 1000 hidden units each, and the third RBM uses 2000 hidden units. For each speech segment, a 13-coefficient MFCC vector~\footnote{MFCC features are widely used for acoustic signals~\cite{wei2013real}} is generated and used as the input to the DBN. The output is the classification result. Emotion recognition performance under different noise conditions is shown in Figure~\ref{fig:fig2}. To emphasise robustness, we do not group the results based on the type of emotion; we rather present the overall classification accuracy. For each noise category, the results show the ``percentage difference in accuracy'' between emotion classification from clean and from noisy speech. We made a number of observations: \begin{enumerate} \item The percentage errors can be categorised into three groups: errors below 10\%, errors between 10\% and 20\%, and errors between 20\% and 30\%. \item From listening, the noise types causing the smallest errors (below 10\%) have low magnitude and low variability, such as noises produced inside a car or a kitchen. Noises causing the larger errors have relatively higher variability.
\item In general, accuracy is diminished in the presence of noise, except in the case of ``washing'' noise, where accuracy increased. It is not unnatural for a DBN to perform better in the presence of noise when the noise is not dominant, as the added noise helps avoid overfitting. \end{enumerate} \section{Conclusion and Future Work} We are currently developing methods to achieve greater robustness with DBNs for speech emotion recognition. This involves designing algorithms to obtain the optimal configuration (e.g., number of layers, nodes per layer) of the DBNs. We are also developing an algorithm for optimal feature selection to achieve the best classification performance. Finally, we are conducting an in-depth analysis of the effect of various noise types on speech emotion recognition and of how to overcome those effects. \bibliographystyle{IEEEtran}
\section{Introduction} \input{section-intro} \section{Preliminaries} \label{se:prelim} \input{section-prelim} \section{Conditioning of Tropical Diagrams} \label{se:conditioning} \input{section-conditioning} \bibliographystyle{alpha} \subsection{Motivation} Let $\Xcal\in\prob\<\Gbf\>$ be a $\Gbf$-diagram of probability spaces containing a probability space $U=X_{i_{0}}$ indexed by an object $i_{0}\in\Gbf$. Given an atom $u\in U$, we can define a conditioned diagram $\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u$. If the diagram $\Xcal$ is homogeneous, then the isomorphism class of $\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u$ is independent of $u$, so that $\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u$ is a constant family. On the other hand, we have shown that the power of any diagram can be approximated by homogeneous diagrams, which suggests that in the tropical setting $\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U$ should be a well-defined tropical diagram, rather than a family. Below we give a definition of the tropical conditioning operation and prove its consistency. \subsection{Classical-tropical conditioning} Here we define the operation of conditioning a classical diagram so that the result is a tropical diagram. Let $\Xcal$ be a $\Gbf$-diagram of probability spaces and $U$ a space in $\Xcal$.
We define the conditioning map \[ [\cdot\,|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}}\cdot]: \prob\<\Gbf\> \to \prob[\Gbf] \] by conditioning $\Xcal$ by $u\in U$ and averaging the corresponding tropical diagrams: \[ [\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U] := \int_{u\in U}\bernoulli{(\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u)}\d p_{U}(u) \] where $\bernoulli{(\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u)}$ is the tropical diagram represented by a linear sequence generated by $\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u$, see section~\ref{s:tropical-diagrams}. Note that the integral on the right-hand side is just a finite convex combination of tropical diagrams. Expanding all the definitions we will get for $[\Ycal]:=[\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U]$ the representative sequence \[ \Ycal(n) = \bigotimes_{u\in U}(\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u)^{\lfloor n\cdot p(u)\rfloor} \] \subsection{Properties} \subsubsection{Conditioning of Homogeneous Diagrams} If the diagram $\Xcal$ is \emph{homogeneous}, then for any atom $u\in U$ with positive weight \[ [\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U] \,\aeq\, \bernoulli{(\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u)} \] \subsubsection{Entropy} Recall that earlier we have defined a quantity \[ \ent_{*}(\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U):=\int_{U}\ent_{*}(\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u)\d p_{U}(u) \] Now that $[\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U]$ is a tropical diagram, the expression $\ent_{*}(\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U)$ can be interpreted in two, a priori different, ways: by the formula above and as the entropy of the object introduced in the previous subsection. 
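Spelled out, the two readings can be compared directly. The following is a sketch, writing $\Xcal\mid u$ for the conditioned diagram in place of the document's bar notation, and using the linearity of $\ent_{*}$ together with the fact that a diagram and its representing tropical diagram $\bernoulli{\,\cdot\,}$ have the same entropy:

```latex
\[
  \ent_{*}\big([\Xcal \mid U]\big)
  \;=\; \int_{U} \ent_{*}\big(\bernoulli{(\Xcal \mid u)}\big)\,\d p_{U}(u)
  \;=\; \int_{U} \ent_{*}(\Xcal \mid u)\,\d p_{U}(u)
  \;=\; \ent_{*}(\Xcal \mid U).
\]
```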
Fortunately, the numeric value of it does not depend on the interpretation, since the entropy is a linear functional on $\prob[\Gbf]$. \subsubsection{Additivity} If $\Xcal$ and $\Ycal$ are two $\Gbf$-diagrams with $U:=X_{\iota}$, $V:=Y_{\iota}$ for some $\iota\in\Gbf$, then \[ [(\Xcal\otimes\Ycal)|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} (U\otimes V)] = [\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U] + [\Ycal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} V] \] \begin{proof} \begin{align*} [(\Xcal\otimes&\Ycal)|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} (U\otimes V)] = \int_{U\otimes V} \bernoulli{(\Xcal\otimes\Ycal)|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} (u,v)} \d p(u) \d p(v) \\ &= \int_{U\otimes V} (\bernoulli{\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u} + \bernoulli{\Ycal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} v}) \d p(u) \d p(v) = \int_{U} \bernoulli{\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} u} \d p(u) + \int_{V} \bernoulli{\Ycal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} v} \d p(v) \\ &= [\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U] + [\Ycal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} V] \end{align*} \end{proof} \subsubsection{Homogeneity}\Label{s:cond-homo} It follows that for any diagram $\Xcal$ with a space $U$ and $n\in\Nbb_{0}$ holds \[ [\Xcal^{n}|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U^{n}] = n\cdot [\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U] \] \subsection{Continuity and Lipschitz property} \begin{proposition}{p:cond-lip} Let $\Gbf$ be a complete poset category, $\Xcal,\Ycal\in\prob\<\Gbf\>$ be two $\Gbf$ diagrams, $U:=X_{\iota}$ and $V:=Y_{\iota}$ be two spaces in $\Xcal$ and $\Ycal$, respectively, indexed by some $\iota\in\Gbf$. 
Then \[ \aikd\Big([\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U],\,[\Ycal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} V]\Big) \leq (2\cdot\size{\Gbf}+1)\cdot\ikd\left(\Xcal,\Ycal\right) \] \end{proposition} Using homogeneity property of conditioning, Section~\ref{s:cond-homo}, we can obtain the following stronger inequality. \begin{corollary}{p:cond-lip-aikd} In the setting of Proposition~\ref{p:cond-lip} holds \[ \aikd\Big([\Xcal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} U],\,[\Ycal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} V]\Big) \leq (2\cdot\size{\Gbf}+1)\cdot\aikd\left(\Xcal,\Ycal\right) \] \end{corollary} Before we prove Proposition~\ref{p:cond-lip} we will need some preparatory lemmas. \begin{lemma}{p:dist-cond-types} Let $\Acal$ be a $\Gbf$-diagram of probability spaces and $E$ be a space in it. Let $\qbf: E^{n}\to(\Delta E,\tau_{n})$ be the empirical reduction. Then for any $n\in\Nbb$ and any $\bar {e},\bar {e}'\in E^{n}$ \[ \ikd(\Acal^{n}|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} \bar{e},\Acal^{n}|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} \bar{e}') \leq {n}\cdot\|\ent_{*}(\Acal)\|_{1}\cdot\|\qbf(\bar{e})-\qbf(\bar{e}')\|_{1} \] \end{lemma} \begin{proof} To prove the lemma we construct a coupling between $\Acal^{n}|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} \bar{e}$ and $\Acal^{n}|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} \bar{e}'$ in the following manner. 
Note that there exists a permutation $\sigma\in S_{n}$ such that \[ \big|\!\set{i\;{\bm :}\; e_{i}\neq e_{\sigma i}'}\!\big| = \frac{n}{2}\cdot\|\qbf(\bar e)-\qbf(\bar e')\|_{1} \] Let \begin{align*} I &= \set{i \;{\bm :}\; e_{i}=e'_{\sigma i}} \\ \tilde I &= \set{i \;{\bm :}\; e_{i}\neq e'_{\sigma i}} \end{align*} Using that $|\tilde I|=\frac{n}{2}\cdot\|\qbf(\bar e)-\qbf(\bar e')\|_{1}$ we can estimate \begin{align*} \ikd\Big(\Acal^{n}|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}}\bar e\,,\,\Acal^{n}|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}}\bar e'\Big) &= \ikd\left( \bigotimes_{i=1}^{n}(\Acal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} e_{i}) \,,\, \bigotimes_{i=1}^{n}(\Acal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} e'_{\sigma i})\right) \\ &\leq \sum_{i\in I} \kd(\Acal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} e_{i}\ootoo[=]\Acal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} e'_{\sigma i}) \,+\, \sum_{i\in\tilde I} \kd(\Acal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} e_{i}\ootoo[\otimes]\Acal|}%\mkern-2.5mu\raisebox{-0.57ex}{\rule{0.3em}{0.15ex}} e'_{\sigma i}) \\ &\leq n\cdot\|\ent_{*}(\Acal)\|_{1}\cdot\|\qbf(\bar e)-\qbf(\bar e')\|_{1} \end{align*} where $\Acal\oto[=]\Bcal$ denotes the isomorphism coupling of two naturally isomorphic diagrams, while $\Acal\oto[\otimes]\Bcal$ denotes the ``independence'' coupling. \end{proof} \begin{lemma}{p:int-dist-cond} Let $\Acal$ be a $\Gbf$-diagram of probability spaces and $E$ be a space in $\Acal$. 
Then \[ \int_{E^{n}} \ikd(\Acal^{n},\Acal^{n}\,|\,\bar e)\d p(\bar{e}) \leq 2n\cdot\size{\Gbf}\cdot\ent(E) + \o(n) \] \end{lemma} \begin{proof} First we apply Proposition~\ref{p:slicing}, slicing the first argument \begin{align*} \int_{E^{n}} & \ikd(\Acal^{n},\Acal^{n}\,|\,\bar e) \d p(\bar e) \\ &\leq \int_{E^{n}} \int_{E^{n}} \ikd(\Acal^{n}\,|\,\bar e',\Acal^{n}\,|\,\bar e) \d p(\bar e') \d p(\bar e) + 2n\cdot\size{\Gbf}\cdot\ent(E) \end{align*} We will now argue that the double integral on the right-hand side grows sub-linearly with $n$. We estimate the double integral by applying Lemma~\ref{p:dist-cond-types} to the integrand \begin{align*} \int_{E^{n}} \int_{E^{n}} & \ikd(\Acal^{n}\,|\,\bar e',\Acal^{n}\,|\,\bar e) \d p(\bar e') \d p(\bar e) \\ &\leq \int_{E^{n}} \int_{E^{n}} n\cdot \|\ent_{*}(\Acal)\|_{1}\cdot \|\qbf(\bar e)-\qbf(\bar e')\|_{1} \d p(\bar e') \d p(\bar e) \\ &= n\cdot \|\ent_{*}(\Acal)\|_{1}\cdot\int_{\Delta E} \int_{\Delta E} \|\pi-\pi'\|_{1} \d \tau_{n}(\pi) \d \tau_{n}(\pi') = \o(n) \end{align*} where the convergence to zero of the last double integral follows from Sanov's theorem. \end{proof} \begin{corollary}{p:dist-cond} Let $\Acal$ be a $\Gbf$-diagram and $E$ a probability space included in $\Acal$. Then \[ \aikd\Big(\bernoulli{\Acal},[\Acal\,|\,E]\Big) \leq 2\size{\Gbf}\cdot\ent(E) \] \end{corollary} \begin{proof} Let $n\in\Nbb$.
Then \begin{align*} \aikd\Big(\bernoulli{\Acal},[\Acal\,|\,E]\Big) &= \frac{1}{n} \aikd\Big(\bernoulli{\Acal^{n}},[\Acal^{n}\,|\,E^{n}]\Big) \\ &= \frac1n \aikd\left( \bernoulli{\Acal^{n}}, \int_{E^{n}}\bernoulli{\Acal^{n}\,|\,\bar e}\d p(\bar e) \right) \\ &\leq \frac1n \int_{E^{n}} \aikd\left(\bernoulli{\Acal^{n}},\,\bernoulli{\Acal^{n}\,|\,\bar e}\right) \d p(\bar e) \\ &= \frac1n \int_{E^{n}} \aikd(\Acal^{n},\Acal^{n}\,|\,\bar e) \d p(\bar e) \\ &\leq 2\cdot\size{\Gbf}\cdot \ent(E) + \o(n^0) \end{align*} where we used Lemma \ref{p:int-dist-cond} and the fact that $\aikd\leq\ikd$ in the last line. We finish the proof by taking the limit $n\to\infty$. \end{proof} \begin{proof}[of Proposition~\ref{p:cond-lip}] We start with a note on general terminology: a reduction $f:A\to B$ of probability spaces can also be considered as a fan $\Fcal:=(A\ot[=] A\to[f]B)$. Then the entropy distance of $f$ is \[ \kd(f):=\kd(\Fcal)=\ent A-\ent B \] If the reduction $f$ is a part of a bigger diagram also containing a space $U$, then the following inequality holds \[ \int_{U}\kd(f\,|\,u)\d p(u)\leq \kd(f) \] Let $\Kcal\in\prob\<\Gbf,\ensuremath{\bm\Lambda}_{2}\>$ \[ \Kcal= \left( \begin{cd}[row sep=0mm,column sep=small] \Xcal \& \Zcal \arrow{l}[above]{f} \arrow{r}{g} \& \Ycal \end{cd} \right) \in\prob\<\Gbf,\ensuremath{\bm\Lambda}_{2}\>=\prob\<\ensuremath{\bm\Lambda}_{2},\Gbf\> \] be an optimal coupling between $\Xcal$ and $\Ycal$. It can also be viewed as a $\Gbf$-diagram of two-fans, $\Kcal=\set{\Kcal_{i}}_{i\in\Gbf}$, each of which is a minimal coupling between $X_{i}$ and $Y_{i}$. Among them is the minimal fan $\Wcal:=\Kcal_{\iota}=(U\oot[f_{\iota}] W\too[g_{\iota}] V)$.
We use the triangle inequality to bound the distance $\aikd\Big([\Xcal\,|\,U],[\Ycal\,|\,V]\Big)$ by four summands as follows. \begin{align*} \aikd\Big([\Xcal\,|\,U],[\Ycal\,|\,V]\Big) \leq& \aikd\Big([\Xcal\,|\,U],[\Zcal\,|\,U]\Big) \;+ \aikd\Big([\Zcal\,|\,U],[\Zcal\,|\,W]\Big) \,+ \\& \aikd\Big([\Zcal\,|\,W],[\Zcal\,|\,V]\Big) + \aikd\Big([\Zcal\,|\,V],[\Ycal\,|\,V]\Big) \end{align*} We will estimate each of the four summands separately. The bound for the first one is as follows.
\begin{align*} \aikd\Big(&[\Xcal\,|\,U],[\Zcal\,|\,U]\Big) = \aikd \left( \int_{U} \bernoulli{\Xcal\,|\,u} \d p(u) , \int_{U} \bernoulli{\Zcal\,|\,u} \d p(u) \right) \\\label{eq:cond-case-1-triang} &\leq \int_{U}\aikd\left(\bernoulli{\Xcal\,|\,u},\, \bernoulli{\Zcal\,|\,u}\right)\d p(u) = \int_{U}\aikd\left(\Xcal\,|\,u,\, \Zcal\,|\,u\right)\d p(u) \\ & \leq \int_{U}\ikd\left(\Xcal\,|\,u,\, \Zcal\,|\,u\right)\d p(u) \leq \int_{U}\kd(f\,|\,u)\d p(u) \\ &\leq \sum_{i\in\Gbf}\int_{U}\kd(f_{i}\,|\,u)\d p(u) \leq \sum_{i\in\Gbf}\kd(f_{i}) = \kd(f) \end{align*} An analogous calculation shows that \[ \aikd\Big([\Zcal\,|\,V],[\Ycal\,|\,V]\Big) \leq \kd(g) \] To bound the second summand we will use Corollary~\ref{p:dist-cond} \begin{align*} \aikd\Big([\Zcal\,|\,U],[\Zcal\,|\,W]\Big) &= \aikd \left( \int_{U}\bernoulli{\Zcal\,|\,u}\d p(u) , \int_{W} \bernoulli{\Zcal\,|\,w} \d p(w) \right) \\ &= \aikd \left( \int_{U}\bernoulli{\Zcal\,|\,u} \d p(u) , \int_{U}\int_{W\,|\,
u}\bernoulli{\Zcal\,|\,w}\d p(w|u)\d p(u) \right) \\ &\leq \int_{U} \aikd \left( \bernoulli{\Zcal\,|\,u},\int_{W\,|\,u}\bernoulli{\Zcal\,|\,w} \d p(w|u) \right) \d p(u) \end{align*} We will now use Corollary \ref{p:dist-cond} with $\Acal = \Zcal\,|\,u$ and $E = W\,|\,u$ to estimate the integrand. Then, \begin{align*} \aikd\Big([\Zcal\,|\,U],[\Zcal\,|\,W]\Big) &\leq \int_{U} \aikd\left( \bernoulli{\Zcal\,|\,u},\int_{W\,|\,u}\bernoulli{\Zcal\,|\,w} \d p(w|u) \right) \d p(u) \\ &\leq 2\size{\Gbf} \cdot \int_U \ent( W\,|\,u ) \d p(u) \\ &\leq 2\size{\Gbf} \cdot \ent( W\,|\,U ) \leq 2\size{\Gbf} \cdot \kd(f) \end{align*} Similarly \[ \aikd\Big([\Zcal\,|\,W],[\Zcal\,|\,V]\Big) \leq 2\size{\Gbf} \cdot \kd(g) \] Combining the estimates we get \[ \aikd\Big([\Xcal\,|\,U],[\Ycal\,|\,V]\Big) \leq (2\size{\Gbf}+1)\cdot(\kd(f)+\kd(g)) = (2\size{\Gbf}+1)\cdot\ikd(\Xcal,\Ycal) \] \end{proof} \subsection{Tropical conditioning} Let $[\Xcal]$ be a tropical $\Gbf$-diagram and $[U]=[X_{\iota}]$ for some $\iota\in\Gbf$. Choose a representative $\big(\Xcal(n)\big)_{n\in\Nbb_{0}}$ and denote $U(n):=X_{\iota}(n)$.
We now define a conditioned diagram $[\Xcal\,|\,U]$ by the following limit \[ [\Xcal\,|\,U]:=\lim_{n\to\infty}\frac1n [\Xcal(n)\,|\,U(n)] \] Proposition~\ref{p:cond-lip-aikd} guarantees that the limit exists and is independent of the choice of representative. For a fixed $\iota\in\Gbf$ the conditioning is a linear Lipschitz map \[ [\,\cdot\;|\;\cdot_{\iota}\,]:\prob[\Gbf]\to\prob[\Gbf] \] \subsection{Probability spaces and their diagrams} \subsubsection{Probability spaces} By a \term{finite probability space} we mean a set with a probability measure that has finite support. A \term{reduction} from one probability space to another is an equivalence class of measure-preserving maps. Two maps are equivalent if they coincide on a set of full measure. We call a point $x$ in a probability space $X=(\underline{X},p)$ an \term{atom} if it has positive weight, and we write $x\in X$ to mean $x$ is an atom in $X$ (as opposed to $x\in\underline{X}$ for points in the underlying set). For a probability space $X$ we denote by $|X|$ the cardinality of the support of the probability measure. \subsubsection{Indexing categories} To record the combinatorial structure of commutative diagrams of probability spaces and reductions we use an object that we call an \term{indexing category}. By an indexing category we mean a finite category $\Gbf$ such that for any pair of objects $i,j\in\Gbf$ there is at most one morphism between them either way. In addition, we will assume it satisfies one more property, which we describe after introducing some terminology. For a pair of objects $i,j\in\Gbf$ such that there is a morphism $\gamma_{ij}:i\to j$, object $i$ will be called an \term{ancestor} of $j$ and object $j$ will be called a \term{descendant} of $i$.
The subcategory of all descendants of an object $i\in\Gbf$ is called the \term{ideal} generated by $i$ and will be denoted $\left\lceil i\right\rceil$, while we will call the subcategory consisting of all ancestors of $i$, together with all the morphisms in it, the \term{co-ideal} generated by $i$ and denote it by $\left\lfloor i\right\rfloor$. (The term \term{filter} is also used for a co-ideal in the literature on lattices.) The additional property that an indexing category has to satisfy is that for any pair of objects $i,j\in\Gbf$ there exists a \term{minimal common ancestor} $\hat\imath$, that is, $\hat\imath$ is an ancestor of both $i$ and $j$, and any other common ancestor of $i$ and $j$ is also an ancestor of $\hat\imath$. An equivalent formulation of the property above is the following: the intersection of the co-ideals generated by two objects $i,j\in\Gbf$ is also a co-ideal generated by some object $\hat\imath\in\Gbf$. Any indexing category $\Gbf$ is necessarily \term{initial}, which means that there exists an \term{initial object}, that is, an object $i_{0}$ such that $\Gbf=\left\lceil i_{0}\right\rceil$. A \term{fan} in a category is a pair of morphisms with the same domain. A fan $(i\ot k\to j)$ is called \term{minimal} if for any other fan $(i\ot l\to j)$ included in a commutative diagram \[ \begin{cd}[row sep=-1mm] \mbox{} \& k \arrow{dl} \arrow{dd} \arrow{dr} \\ i \&\& j \\ \& l \arrow{ul} \arrow{ur} \end{cd} \] the vertical arrow must be an isomorphism. For any pair of objects $i,j$ in an indexing category $\Gbf$ there exists a unique minimal fan $(i\ot\hat\imath\to j)$ in $\Gbf$. \subsubsection{Diagrams} We denote by $\prob$ the category of finite probability spaces and reductions. For an indexing category $\Gbf=\set{i;\, \gamma_{ij}}$, a $\Gbf$-diagram is a functor $\Xcal:\Gbf\to\prob$. A reduction $f$ from one $\Gbf$-diagram $\Xcal=\set{X_{i};\,\chi_{ij}}$ to another $\Ycal=\set{Y_{i};\, \upsilon_{ij}}$ is a natural transformation between the functors.
It amounts to a collection of reductions $f_{i}:X_{i}\to Y_{i}$ such that the big diagram consisting of all spaces $X_{i}$, $Y_{i}$ and all morphisms $\chi_{ij}$, $\upsilon_{ij}$ and $f_{i}$ is commutative. The category of $\Gbf$-diagrams and reductions will be denoted $\prob\<\Gbf\>$. The construction of diagrams can be iterated, thus we can consider $\Hbf$-diagrams of $\Gbf$-diagrams and denote the corresponding category $\prob\<\Gbf\>\<\Hbf\>=\prob\<\Gbf,\Hbf\>$. Every $\Hbf$-diagram of $\Gbf$-diagrams can also be considered as a $\Gbf$-diagram of $\Hbf$-diagrams, thus there is a natural equivalence of categories $\prob\<\Gbf,\Hbf\>\cong\prob\<\Hbf,\Gbf\>$. A $\Gbf$-diagram $\Xcal$ will be called \term{minimal} if it maps minimal fans in $\Gbf$ to minimal fans in the target category. The subspace of all minimal $\Gbf$-diagrams will be denoted $\prob\<\Gbf\>_{\msf}$. In~\cite{Matveev-Asymptotic-2018} we have shown that for any fan in $\prob$ or in $\prob\<\Gbf\>$ its minimization exists and is unique up to isomorphism. \subsubsection{Tensor product} The tensor product of two probability spaces $X=(\un X,p)$ and $Y=(\un Y,q)$ is their independent product, $X\otimes Y:=(\un X\times\un Y,p\otimes q)$. For two $\Gbf$-diagrams $\Xcal=\set{X_{i};\,\chi_{ij}}$ and $\Ycal=\set{Y_{i};\,\upsilon_{ij}}$ we define their tensor product to be $\Xcal\otimes\Ycal=\set{X_{i}\otimes Y_{i};\, \chi_{ij}\times\upsilon_{ij}}$. \subsubsection{Constant diagrams} Given an indexing category $\Gbf$ and a probability space $X$ we can form a \term{constant} diagram $X^\Gbf$ that has all spaces equal to $X$ and all reductions equal to the identity isomorphism. Sometimes, when such a constant diagram is included in a diagram together with other $\Gbf$-diagrams (such as, for example, a reduction $\Xcal\to X^{\Gbf}$), we will write simply $X$ in place of $X^{\Gbf}$.
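As a small computational illustration of the tensor product just defined, a finite probability space can be represented as a dictionary mapping atoms to weights (this representation and the helper name \texttt{tensor} are ours, for illustration only):

```python
def tensor(p, q):
    """Independent (tensor) product of two finite probability spaces,
    each represented as a dict mapping atoms to positive weights."""
    return {(x, y): px * qy for x, px in p.items() for y, qy in q.items()}

# Two small spaces and their product; the product has |X|*|Y| atoms
# and its weights multiply, so entropies add.
X = {"a": 0.5, "b": 0.5}
Y = {"u": 0.25, "v": 0.75}
XY = tensor(X, Y)
```

Here the atom $(\mathrm{a},\mathrm{u})$ receives weight $0.5\cdot 0.25=0.125$, and the weights of \texttt{XY} again sum to one.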
\subsubsection{Entropy} Evaluating entropy on every space in a $\Gbf$-diagram we obtain a tuple of non-negative numbers indexed by objects in $\Gbf$; thus entropy gives a map \[ \ent_{*}:\prob\<\Gbf\>\to\Rbb^{\Gbf} \] where the target space $\Rbb^{\Gbf}$ is the space of real-valued functions on the set of objects in $\Gbf$, endowed with the $\ell^{1}$-norm. Entropy is a homomorphism in that it satisfies \[ \ent_{*}(\Xcal\otimes\Ycal)=\ent_{*}(\Xcal)+\ent_{*}(\Ycal) \] \subsubsection{Entropy distance} Let $\Gbf$ be an indexing category and $\Kcal=(\Xcal\ot\Zcal\to\Ycal)$ be a fan of $\Gbf$-diagrams. We define the \term{entropy distance} \[ \kd(\Kcal) := \left\| \ent_{*}\Zcal-\ent_{*}\Xcal \right\|_{1} + \left\| \ent_{*}\Zcal-\ent_{*}\Ycal \right\|_{1} \] The \term{intrinsic entropy distance} between two $\Gbf$-diagrams is defined to be the infimal entropy distance over all fans with terminal diagrams $\Xcal$ and $\Ycal$ \[ \ikd(\Xcal,\Ycal):=\inf\set{\kd(\Kcal)\;{\bm :}\; \Kcal=(\Xcal\ot\Zcal\to\Ycal)} \] The intrinsic entropy distance was introduced in \cite{Kovavcevic-Hardness-2012,Vidyasagar-Metric-2012} for probability spaces. In~\cite{Matveev-Asymptotic-2018} it is shown that the infimum is attained, that the optimal fan is minimal, that $\ikd$ is a pseudo-distance which vanishes if and only if $\Xcal$ and $\Ycal$ are isomorphic, and that $\ent_{*}$ is a 1-Lipschitz linear functional with respect to $\ikd$. \subsection{Diagrams of sets, distributions and empirical reductions} \subsubsection{Distributions on sets} For a set $S$ we denote by $\Delta S$ the collection of all finitely-supported probability distributions on $S$. For a pair of distributions $\pi_{1},\pi_{2}\in\Delta S$ we denote by $\left\| \pi_{1} -\pi_{2}\right\|_{1}$ the \term{total variation distance} between them. For a map $f:S\to S'$ between two sets we denote by $f_{*}:\Delta S\to\Delta S'$ the induced affine map (the map preserving convex combinations).
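The induced affine map $f_{*}$ and the distance $\|\pi_{1}-\pi_{2}\|_{1}$ admit a short computational sketch (the dict-based representation and function names are ours, not the paper's):

```python
from collections import defaultdict

def pushforward(f, pi):
    """Induced affine map f_*: Delta(S) -> Delta(S').
    pi is a dict mapping atoms of S to their weights."""
    out = defaultdict(float)
    for s, w in pi.items():
        out[f(s)] += w
    return dict(out)

def total_variation(p, q):
    """The l1-distance ||p - q||_1 between two finitely supported
    distributions (note: twice the usual total-variation convention)."""
    support = set(p) | set(q)
    return sum(abs(p.get(s, 0.0) - q.get(s, 0.0)) for s in support)
```

For instance, pushing the uniform distribution on $\{0,1,2,3\}$ forward along the parity map $s\mapsto s\bmod 2$ yields the uniform distribution on $\{0,1\}$, and two distributions with disjoint supports have $\ell^{1}$-distance $2$.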
For $n\in\Nbb$ define the \term{empirical map} $\emp:S^{n}\to\Delta S$ by the assignment below. For $\bar s=(s_{1},\dots,s_{n})\in S^{n}$ and $A\subset S$ set \[ \emp(\bar s)(A):=\frac1n \cdot\big|\!\set{k\;{\bm :}\; s_{k}\in A}\!\big| \] For a finite probability space $X=(S,p)$ the \term{empirical distribution} on $\Delta X$ is the push-forward $\tau_{n}:=\emp_{*}p^{\otimes n}$. Thus \[ \emp:X^{n}\to(\Delta X,\tau_{n}) \] is a reduction of finite probability spaces. The construction of empirical reduction is functorial, that is for a reduction between two probability spaces $f:X\to Y$ the diagram of reductions \[ \begin{cd}[row sep=small] X^{n} \arrow{r}{f^{n}} \arrow{d}{\emp} \& Y^{n} \arrow{d}{\emp} \\ (\Delta X,\tau_{n}) \arrow{r}{f_{*}} \& (\Delta Y,\tau_{n}) \end{cd} \] commutes. \subsubsection{Distributions on diagrams of sets} Let $\Set$ denote the category of sets and surjective maps. For an indexing category $\Gbf$, we denote by $\Set\<\Gbf\>$ the category of $\Gbf$-diagrams in $\Set$. That is, objects in $\Set\<\Gbf\>$ are commutative diagrams of sets indexed by $\Gbf$, the spaces in such a diagram are sets and arrows represent surjective maps, subject to commutativity relations. For a diagram of sets $\Scal=\set{S_{i};\sigma_{ij}}$ we define the \term{space of distributions on the diagram} $\Scal$ by \[ \Delta\Scal := \set{(\pi_{i})\in\prod_i\Delta S_{i}\;{\bm :}\; (\sigma_{ij})_{*}\pi_{i}=\pi_{j}} \] If $S_{0}$ is the initial set of $\Scal$, then there is an isomorphism \begin{align*} \tageq{distributions-iso} \Delta S_{0}&\oto[\cong]\Delta\Scal\\ \Delta S_{0}\ni\pi_{0} &\mapsto \set{(\sigma_{0i})_{*}\pi_{0}}\in\Delta\Scal \\ \Delta S_{0} \ni \pi_{0} &\leftmapsto \set{\pi_{i}}\in\Delta\Scal \end{align*} Given a $\Gbf$-diagram of sets $\Scal=\set{S_{i};\sigma_{ij}}$ and an element $\pi\in\Delta\Scal$ we can construct a $\Gbf$-diagram of probability spaces $(\Scal,\pi):=\set{(S_{i},\pi_{i});\sigma_{ij}}$. 
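The empirical map defined in this subsection can be sketched directly (names are ours, for illustration):

```python
from collections import Counter

def emp(sample):
    """Empirical map q: S^n -> Delta(S): send a tuple to the
    distribution of relative frequencies of its entries."""
    n = len(sample)
    return {s: c / n for s, c in Counter(sample).items()}
```

For example, the tuple $(\mathrm{a},\mathrm{b},\mathrm{a},\mathrm{a})$ is sent to the distribution assigning weight $3/4$ to $\mathrm{a}$ and $1/4$ to $\mathrm{b}$; functoriality then amounts to the fact that relabeling the entries of a tuple and taking frequencies commute.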
Note that any diagram $\Xcal$ of probability spaces has this form. \subsection{Conditioning} Consider a $\Gbf$-diagram of probability spaces $\Xcal=(\Scal,\pi)$, where $\Scal$ is a diagram of sets and $\pi\in\Delta\Scal$. Let $X_{0}=(S_{0},\pi_{0})$ be the initial space in $\Xcal$ and $U=X_{i}$ be another space in $\Xcal$. Since $S_{0}$ is initial, there is a map $\sigma_{0,i}:S_{0}\to S_{i}$. Fix an atom $u\in U$ and define the conditioned distribution $\pi_{0}(\cdot\,|\,u)$ on $S_{0}$ as the distribution supported in $\sigma^{-1}_{0,i}(u)$ and for every $s\in\sigma^{-1}_{0,i}(u)$ defined by \[ \pi_{0}(s\,|\,u):=\frac{\pi_{0}(s)}{\pi_{0}(\sigma^{-1}_{0,i}(u))} \] Let $\pi(\cdot\,|\,u)\in\Delta\Scal$ be the distribution corresponding to $\pi_{0}(\cdot\,|\,u)$ under the isomorphism in~(\ref{eq:distributions-iso}). We define the \term{conditioned} $\Gbf$-diagram $\Xcal\,|\,u:=(\Scal,\pi(\cdot\,|\,u))$. \subsubsection{The Slicing Lemma} In~\cite{Matveev-Asymptotic-2018} we prove the so-called Slicing Lemma that allows one to estimate the intrinsic entropy distance between two diagrams in terms of distances between conditioned diagrams. Among the corollaries of the Slicing Lemma is the following inequality. \begin{proposition}{p:slicing} Let $(\Xcal\ot\hat\Xcal\to U^{\Gbf})\in\prob\<\Gbf,\Lambda_{2}\>$ be a fan of $\Gbf$-diagrams of probability spaces and $\Ycal\in\prob\<\Gbf\>$ be another diagram. Then \[ \ikd(\Xcal,\Ycal) \leq \int_{U}\ikd(\Xcal\,|\,u,\Ycal)\d p(u) + 2\size{\Gbf}\cdot\ent U \] \end{proposition} The fan in the assumption of the proposition above can often be constructed in the following manner.
Suppose $\Xcal$ is a $\Gbf$-diagram and $U=X_{\iota}$ is a space in it for some $\iota\in\Gbf$. We can construct a fan $(\Xcal\ot[f]\hat\Xcal\to[g] U^{\Gbf})\in\prob\<\Gbf,\Lambda_{2}\>$ by assigning $\hat X_{i}$ to be the initial space of the (unique) minimal fan in $\Xcal$ with terminal spaces $X_{i}$ and $U$, and $f_{i}$ and $g_{i}$ to be the left and right reductions in that fan, for any $i\in\Gbf$. \subsection{Tropical Diagrams}\Label{s:tropical-diagrams} A detailed discussion of the topics in this section can be found in~\cite{Matveev-Tropical-2019}. The asymptotic entropy distance between two diagrams of the same combinatorial type is defined by \[ \aikd(\Xcal,\Ycal):=\lim\frac1n \ikd(\Xcal^{n},\Ycal^{n}) \] A tropical $\Gbf$-diagram is an equivalence class of certain sequences of $\Gbf$-diagrams of probability spaces. Below we describe the type of sequences and the equivalence relation. A function $\phi:\Rbb_{\geq1}\to\Rbb_{\geq0}$ is called an \term{admissible function} if $\phi$ is non-decreasing and there is a constant $D_{\phi}$ such that for any $t\geq1$ \[ 8t\cdot\int_{t}^{\infty}\frac{\phi(s)}{s^{2}}\d s\leq D_{\phi}\cdot\phi(t) \] An example of an admissible function is $\phi(t)=t^{\alpha}$, for $\alpha\in[0,1)$. A sequence $\bar\Xcal=(\Xcal(n):\,n\in\Nbb_{0})$ of diagrams of probability spaces will be called \term{quasi-linear} with \term{defect} bounded by an admissible function $\phi$ if it satisfies \[ \aikd\big(\Xcal(n+m),\Xcal(n)\otimes\Xcal(m)\big)\leq C\cdot\phi(n+m) \] For example, for a diagram $\Xcal$, the sequence $\bernoulli\Xcal:=(\Xcal^{n}:\, n\in\Nbb_{0})$ is $\phi$-quasi-linear for $\phi\equiv0$ (and for any admissible $\phi$). Such sequences are called \term{linear}.
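The claim above that $\phi(t)=t^{\alpha}$ with $\alpha\in[0,1)$ is admissible can be checked directly:
\[
8t\cdot\int_{t}^{\infty}\frac{s^{\alpha}}{s^{2}}\,\d s
= 8t\cdot\frac{t^{\alpha-1}}{1-\alpha}
= \frac{8}{1-\alpha}\cdot t^{\alpha},
\]
so the defining inequality holds with $D_{\phi}=8/(1-\alpha)$.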
The asymptotic entropic distance between two quasi-linear sequences $\bar\Xcal=\big(\Xcal(n):\,n\in\Nbb_{0}\big)$ and $\bar\Ycal=\big(\Ycal(n):\,n\in\Nbb_{0}\big)$ is defined to be \[ \aikd(\bar\Xcal,\bar\Ycal):=\lim_{n\to\infty}\frac1n \ikd(\Xcal(n),\Ycal(n)) \] and two sequences are called \term{asymptotically equivalent} if $\aikd(\bar\Xcal,\bar\Ycal)=0$. The equivalence class of a sequence $\bar\Xcal$ will be denoted $[\Xcal]$, and the totality of all such classes $\prob[\Gbf]$. The sum of two equivalence classes is defined to be the equivalence class of the sequence obtained by tensor-multiplying representative sequences of the summands term-wise. In addition, there is a doubly transitive action of $\Rbb_{\geq0}$ on $\prob[\Gbf]$. In~\cite{Matveev-Tropical-2019} the following theorem is proven. \begin{theorem}{p:tropical-summary} Let $\Gbf$ be an indexing category. Then \begin{enumerate}\def\theenumi{\roman{enumi}} \item The space $\prob[\Gbf]$ does not depend on the choice of a positive admissible function $\phi$, up to isometry. \item The space $\prob[\Gbf]$ is metrically complete. \item The map $\Xcal\mapsto\bernoulli\Xcal$ is an $\aikd$-$\aikd$-isometric embedding. The space of linear sequences, i.e.~the image of the map above, is dense in $\prob[\Gbf]$. \item There is a distance-preserving homomorphism from $\prob[\Gbf]$ into a Banach space $B$, whose image is a closed convex cone in $B$. \item The entropy functional \begin{align*} \ent_{*}:\prob[\Gbf]&\to\Rbb^{\Gbf}\\ [\big(\Xcal(n)\big)_{n\in\Nbb_{0}}] &\mapsto \lim_{n\to\infty} \frac1n \ent_{*}\Xcal(n) \end{align*} is a well-defined 1-Lipschitz linear map. \end{enumerate} \end{theorem} \subsection{Asymptotic Equipartition Property for Diagrams} Among all $\Gbf$-diagrams there is a special class of maximally symmetric ones. We call such diagrams \term{homogeneous}; see below for the definition.
Homogeneous diagrams come in very handy in many considerations, because their structure is easier to describe than that of general diagrams. We show below that, among tropical diagrams, those that have homogeneous representatives are dense. It means, in particular, that when considering continuous functionals on the space of diagrams, it suffices to only look at homogeneous diagrams. \subsubsection{Homogeneous diagrams} A $\Gbf$-diagram $\Xcal$ is called \term{homogeneous} if the automorphism group $\Aut(\Xcal)$ acts transitively on every space in $\Xcal$, by which we mean that the action is transitive on the support of the probability measure. Homogeneous probability spaces are isomorphic to uniform spaces. For more complex indexing categories this simple description is not sufficient. \subsubsection{Tropical Homogeneous Diagrams} The subcategory of all homogeneous $\Gbf$-diagrams will be denoted $\Prob\<\Gbf\>_{\hsf}$, and we write $\Prob\<\Gbf\>_{\hsf,\msf}$ for the category of minimal homogeneous $\Gbf$-diagrams. These spaces are invariant under the tensor product; thus they are metric Abelian monoids and the general ``tropicalization'' described in~\cite{Matveev-Tropical-2019} can be performed. Passing to the tropical limit we obtain spaces of tropical (minimal) homogeneous diagrams, which we denote by $\Prob[\Gbf]_{\hsf}$ and $\Prob[\Gbf]_{\hsf,\msf}$, respectively. \subsubsection{Asymptotic Equipartition Property} In~\cite{Matveev-Asymptotic-2018} the following theorem is proven. \begin{theorem}{p:aep-complete} Suppose $\Xcal\in\prob\<\Gbf\>$ is a $\Gbf$-diagram of probability spaces for some fixed indexing category $\Gbf$. Then there exists a sequence $\bar\Hcal=(\Hcal_{n})_{n=0}^{\infty}$ of homogeneous $\Gbf$-diagrams such that \[\tageq{quantaep} \frac{1}{n} \ikd (\Xcal^{n},\Hcal_{n}) \leq C(|X_0|,\size{\Gbf}) \cdot \sqrt{\frac{\ln^3 n}{n}} \] where $C(|X_0|, \size{\Gbf})$ is a constant only depending on $|X_0|$ and $\size{\Gbf}$.
\end{theorem} The approximating sequence of homogeneous diagrams is evidently quasi-linear with the defect bounded by the admissible function \[ \phi(t) := 2C(|X_0|,\size{\Gbf})\cdot t^{3/4} \geq 2C(|X_0|,\size{\Gbf})\cdot t^{1/2}\cdot \ln^{3/2}t \] Thus, Theorem~\ref{p:aep-complete} above states that $\lin(\prob\<\Gbf\>)\subset\prob[\Gbf]_{\hsf}$. On the other hand we have shown in~\cite{Matveev-Tropical-2019}, that the space of linear sequences $\lin(\prob\<\Gbf\>)$ is dense in $\prob[\Gbf]$. Combining the two statements we get the following theorem. \begin{theorem}{p:aep-tropical} For any indexing category $\Gbf$, the space $\prob[\Gbf]_{\hsf}$ is dense in $\prob[\Gbf]$. Similarly, the space $\prob[\Gbf]_{\hsf,\msf}$ is dense in $\prob[\Gbf]_{\msf}$. \end{theorem}
\section{Introduction} With the advent of convolutional neural networks, object detection in images has improved significantly, giving rise to several object detection algorithms like YOLO \cite{DBLP:journals/corr/RedmonDGF15}, SSD \cite{DBLP:journals/corr/LiuAESR15}, etc. Most object detection networks work with raw image pixels as inputs. The networks are highly nonlinear in nature, and thus the output predictions depend heavily on image parameters like brightness, contrast, etc. \cite{937690,6521924,Osadchy2004EfficientDU,6115698}. In real-world scenarios, the camera parameters with which the images are taken, like the shutter speeds, gains, etc., strongly affect the performance of an object detection network. A photographer changes many parameters like the shutter speed, voltage gains, etc. \cite{5765998} while capturing images, according to the lighting conditions and the movements of the subject. In autonomous navigation, robotics, etc., there are several instances where the lighting conditions and the subject speed change. In these cases, using fixed shutter-speed and voltage-gain values would result in an image which is not conducive to object detection. Most cameras rely on built-in auto-exposure algorithms to set the exposure parameters of the camera. Although the images obtained from these auto-exposure algorithms may be \textit{pleasing} to a human eye, they may not be the best images to perform object detection on. Also, most object detection networks are trained using images from a dataset which are captured either with a single operation mode \cite{Geiger2013IJRR} or with no control over the parameters of the camera \cite{imagenet_cvpr09,Agustsson_2017_CVPR_Workshops,huiskes08}.
Thus, a pre-trained network may have a larger affinity towards images captured with parameters similar to those of the dataset it was trained on.\\ To tackle the problem of sudden variations in the photography conditions, we propose to train a Reinforcement Learning (RL) agent to digitally transform images in real-time such that the object detection performance is maximised. Although we perform experiments with digital transformations, this method can ideally be extended to choosing the camera parameters with which the images are captured, by using the image formation model proposed by Hasinoff et al. \cite{5540167}. We train the model with images which are digitally distorted, for example by changing brightness, contrast, color, etc. It should be noted that we do not necessarily want the agent to recover the original image. \\ The claimed contribution of the paper is a Deep RL methodology called \textit{ObjectRL} (Object Reinforcement Learning) to change the image digitally, with rewards based on the performance of a pre-trained object detector on the agent-transformed image. An overview of the related work is provided in the next section. The proposed method for \textit{ObjectRL} is described in detail in Section \ref{section:model}, and the experiments to validate the hypotheses along with results are provided in Section \ref{section:experiments} and Section \ref{section:results} respectively. \begin{figure}[!t] \centering \includegraphics[scale=0.4]{figures/flow.png} \caption{The overall training procedure for \textit{ObjectRL}. The image is randomly distorted to simulate the bad images. An episode can be carried out for $n$ steps, which we set to 1 for training stability. Thus, the agent has to take a single action on each image.} \label{fig:flow} \end{figure} \section{Related Works} \label{section:related} We briefly review the literature and the existing methods related to image modifications for object detection improvement. Bychkovsky et al.
\cite{5995332} present a dataset of input and retouched image pairs called MIT-Adobe FiveK, which was created by professional experts. They use this dataset to train a supervised model for color and tone adjustment in images. The main motive of this work is not to improve object detection but to train a model to edit an image according to user preferences. In \cite{DBLP:journals/corr/WuT17} the authors create a dataset of images taken with different combinations of shutter speeds and voltage gains of a camera. They create a performance table, which is a matrix of mean average precision (mAP) for detection of objects in images taken with different combinations of shutter speeds and gains. To choose the optimal parameters to capture images, they propose to choose the combination which gives the maximum precision. One of the problems with this method is that a dataset with images taken with different combinations of shutter speeds, voltage gains and illuminations has to be manually annotated with bounding boxes around the objects, which is quite time-consuming. Also, the dataset consists of images with static objects. Thus, the only effect of changing the shutter speed is on the overall brightness of the image. But one of the main reasons for changing the shutter speed while capturing images is to increase (for artistic purposes) or (preferably) decrease motion blur in the moving objects. In \cite{DBLP:journals/corr/abs-1804-04450} the authors propose a reinforcement learning based method to recover digitally distorted images. The authors model the agent to take actions sequentially by choosing the type of modification (brightness, contrast, color saturation, etc.). The main motive of this model is to recover the distorted images. The reward for the agent is the difference of the mean-squared distances of the images at the current time step and the previous time step.
This work is quite different from our \textit{ObjectRL} model, as our main motive is to maximise the object detection performance of a pre-trained detector. Reinforcement Learning has been used in conjunction with computational photography in recent works by Yang et al. \cite{DBLP:journals/corr/abs-1803-02269} and Hu et al. \cite{10.1145/3181974}, where the authors train RL agents to either capture images or post-process images in such a way that the resultant image is \textit{visually pleasing}. In the former, the agent gets a reward from the users according to their preferences of exposures on cameras, whereas in the latter the agent receives a reward based on the discriminator loss of a Generative Adversarial Network \cite{10.5555/2969033.2969125}. Another area of research orthogonal to ours is using reinforcement learning to obtain region proposals for object-detection and object-localization \cite{DBLP:journals/corr/MatheS14,DBLP:journals/corr/abs-1810-10325,DBLP:journals/corr/CaicedoL15,7780685}. In these works, the main motivation is to make the agent focus its attention on candidate regions to detect objects by sequentially shifting the proposed region and rewarding the agent according to the Intersection over Union ($IoU$, explained in Section \ref{section:IOU}). \section{Background}\label{section:background} \subsection{Reinforcement Learning} \label{section:RL} Reinforcement learning (RL) solves sequential decision problems by learning from trial and error. Consider the standard RL setting, where an agent interacts with an environment $\mathcal{E}$ over discrete time steps. At time step $t$, the agent receives a state $s_t \in \mathcal{S}$ and selects an action $a_t \in \mathcal{A}$ according to its policy $\pi$, where $\mathcal{S}$ and $\mathcal{A}$ denote the sets of all possible states and actions respectively. After the action, the agent observes a scalar reward $r_t$ and receives the next state $s_{t+1}$.
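The interaction loop just described, and the discounted sum of the rewards it produces, can be sketched in a few lines of Python. This is an illustrative stand-in only: `env`, `policy` and the function names are ours, not part of the paper.

```python
def discounted_return(rewards, gamma):
    """R_t computed backwards in time: R_t = r_t + gamma * R_{t+1}."""
    total = 0.0
    for r in reversed(rewards):
        total = r + gamma * total
    return total

def run_episode(env, policy, gamma=0.99):
    """One rollout of the agent-environment loop described above.
    `env` and `policy` are hypothetical stand-ins for E and pi."""
    state = env.reset()
    rewards = []
    done = False
    while not done:
        action = policy(state)                  # a_t ~ pi(.|s_t)
        state, reward, done = env.step(action)  # observe r_t and s_{t+1}
        rewards.append(reward)
    return discounted_return(rewards, gamma)
```

With $\gamma$ close to 0 the agent is myopic; with $\gamma$ close to 1 future rewards weigh almost as much as the immediate one.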
The goal of the agent is to choose actions to maximize the cumulative sum of rewards over time. In other words, the action selection implicitly considers the future rewards. The discounted return is defined as $R_t = \sum_{\tau=t}^{\infty}\gamma^{\tau-t}r_{\tau}$, where $\gamma \in [0, 1]$ is a discount factor that trades off the importance of recent and future rewards. RL algorithms can be divided into two main sub-classes: value-based and policy-based methods. In value-based methods, values are assigned to states by calculating an expected cumulative score from the current state; thus, states which lead to more rewards get higher values. In policy-based methods, the goal is to learn a map from states to actions, which can be stochastic as well as deterministic. A class of algorithms called actor-critic methods \cite{NIPS1999_1786} lies in the intersection of value-based and policy-based methods, where the critic learns a value function and the actor updates the policy in a direction suggested by the critic. \textbf{Proximal Policy Optimization (PPO)}: We use PPO \cite{DBLP:journals/corr/SchulmanWDRK17}, an actor-critic method, for optimising the RL agent. A key point of PPO is that it ensures that an update does not change the current policy too much from the previous policy. This leads to less variance in training at the cost of some bias, but ensures smoother training and also makes sure the agent does not go down an unrecoverable path of taking unreasonable actions. PPO uses a clipped surrogate objective function which is a first-order trust region approximation. The purpose of the clipped surrogate objective is to stabilize training by constraining the policy changes at each step. \subsection{Object Detection}\label{section:IOU} Object recognition is an essential research direction in computer vision.
Most of the successful object recognition algorithms use deep convolutional neural networks which are trained to give the co-ordinates of the bounding boxes around the objects. To decide whether an object is detected or not, we use the Intersection over Union (IoU) criterion. Intersection over Union is the ratio of the area of overlap and the area of union of the predicted and the ground truth bounding boxes. Let $p$ be the predicted box, and $g$ be the ground truth box for the target object. Then, the $IoU$ between $p$ and $g$ is defined as $IoU(p,g)=Area(p\cap g)/Area(p\cup g)$. Generally, an object is said to be a true positive if $IoU> 0.5$. \subsection{Image Distortions}\label{section:img_distortions} Different parameters of an image like brightness, contrast and color can be changed digitally. We describe the formulae used to transform the pixel intensity $(I)$ values at the co-ordinates $(x,y)$. We assume a distortion factor $\alpha\geq0$. \begin{itemize} \item Brightness: The brightness of an image can be changed by a factor $\alpha$ as follows:\\ $I(x,y)\gets \min (\alpha I(x,y),255)$ \item Color: The color of an image is changed by a factor $\alpha$ as follows. We evaluate the gray-scale image as:\\ $gray= (I(r) +I(g) +I(b))/3$, where I(r), I(g) and I(b) are the R, G \& B pixel values respectively.\\ $ I(x,y)\gets \min(\alpha I(x,y) + (1 - \alpha)gray(x,y),255)$ \item Contrast: The contrast in an image is changed by a factor $\alpha$ as follows:\\ $\mu_{gray}=mean(gray)$\\ $I(x,y)\gets\min(\alpha I(x,y) + (1 - \alpha)\mu_{gray},255)$ \end{itemize} \section{Model}\label{section:model} Given an image, the goal of \textit{ObjectRL} is to provide a digital transformation which is applied to the input image. This transformed image should extract maximum performance (F1 score) on object detection when given as an input to a pre-trained object detection network.
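The three transforms and the IoU criterion above can be written down directly. The sketch below uses NumPy and clips to the 8-bit range; it is our re-implementation for illustration, not the authors' code.

```python
import numpy as np

def distort(img, alpha, mode):
    """Apply the brightness / color / contrast formulas from the text.
    img is an HxWx3 RGB uint8 array; alpha >= 0 is the distortion factor."""
    img = img.astype(np.float64)
    gray = img.mean(axis=2, keepdims=True)       # (I(r)+I(g)+I(b))/3
    if mode == "brightness":
        out = alpha * img
    elif mode == "color":
        out = alpha * img + (1 - alpha) * gray
    elif mode == "contrast":
        out = alpha * img + (1 - alpha) * gray.mean()
    else:
        raise ValueError(f"unknown mode: {mode}")
    return np.clip(out, 0, 255).astype(np.uint8)

def iou(p, g):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix = max(0.0, min(p[2], g[2]) - max(p[0], g[0]))
    iy = max(0.0, min(p[3], g[3]) - max(p[1], g[1]))
    inter = ix * iy
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(p) + area(g) - inter
    return inter / union if union > 0 else 0.0
```

Note that $\alpha=1$ leaves the image unchanged for all three modes, which is why the distortion ranges in the next section are centered around 1.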
\subsection{Formulation} We cast the problem of image parameter modifications as a Markov Decision Process (MDP) \cite{Puterman:1994:MDP:528623} since this setting provides a formal framework to model an agent that makes a sequence of decisions. Our formulation considers a single image as the state. To simulate the effect of \textit{bad} images as well as increase the variance in the images in a dataset, we digitally distort the images. These digital distortions are carried out by randomly choosing $\alpha$ for a particular type of distortion (brightness, contrast, color). We have a pre-trained object detection network which could be trained either on the same dataset or any other dataset. \iflogvar \begin{figure*}[] \centering \noindent \includegraphics[width=0.095\textwidth]{figures/Brightness/0.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/1.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/2.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/3.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/4.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/5.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/6.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/7.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/8.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/9.png} \vspace{1mm} \includegraphics[width=0.095\textwidth]{figures/Brightness/10.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/11.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/12.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/13.png} \includegraphics[width=0.095\textwidth]{figures/Brightness/14.png}\hspace{0.5mm}% 
\includegraphics[width=0.095\textwidth]{figures/Brightness/15.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/16.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/17.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/18.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/19.png}\hspace{0.5mm}% \vspace{1mm} \par \caption{Variation in images with varying brightness distortion factor $\alpha$ from 0 to 2 in steps of 0.1.} \label{fig:distortion_range_brightness} \end{figure*} \begin{figure*}[] \centering \noindent \includegraphics[width=0.095\textwidth]{figures/Contrast/0.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/1.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/2.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/3.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/4.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/5.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/6.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/7.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/8.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/9.png} \vspace{1mm} \includegraphics[width=0.095\textwidth]{figures/Contrast/10.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/11.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/12.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/13.png} \includegraphics[width=0.095\textwidth]{figures/Contrast/14.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/15.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/16.png}\hspace{0.5mm}% 
\includegraphics[width=0.095\textwidth]{figures/Contrast/17.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/18.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/19.png}\hspace{0.5mm}% \vspace{1mm} \par \caption{Variation in images with varying contrast distortion factor $\alpha$ from 0 to 2 in steps of 0.1.} \label{fig:distortion_range_contrast} \end{figure*} \begin{figure*}[] \centering \noindent \includegraphics[width=0.095\textwidth]{figures/Color/0.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/1.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/2.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/3.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/4.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/5.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/6.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/7.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/8.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/9.png} \vspace{1mm} \includegraphics[width=0.095\textwidth]{figures/Color/10.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/11.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/12.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/13.png} \includegraphics[width=0.095\textwidth]{figures/Color/14.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/15.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/16.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/17.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/18.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/19.png}\hspace{0.5mm}% \vspace{1mm} \par 
\caption{Variation in images with varying color distortion factor $\alpha$ from 0 to 2 in steps of 0.1.} \label{fig:distortion_range_color} \end{figure*} \fi \subsection{ObjectRL}\label{section:ObjectRLMDP} Formally, the MDP has a set of actions $\mathcal{A}$, a set of states $\mathcal{S}$ and a reward function $\mathcal{R}$, which we define in this section.\\ \textbf{States}: The states for the agent are $128\times128\times3$ RGB images from the PascalVOC dataset \cite{Everingham15} which are distorted by random factors $\alpha$ chosen according to the scale of distortion. We consider only one type of distortion (brightness, color, contrast) at a time, i.e., we train different models for different types of distortion. Combining all the different types of distortions in a single model remains a key direction to explore in future work.\\ \textbf{Scales of Distortion:} We perform experiments with the following two degrees of distortion in the image: \begin{itemize} \item Full-scale distortion: The random distortion in the images $\alpha \in [0,2]$. \item Minor-scale distortion: The random distortion in the images $\alpha \in [0.5,1.8]$. This constraint ensures that the images do not have distortions which cannot be reverted with the action space the agent has access to. \end{itemize} The variation of the distorted images can be seen in Fig \ref{fig:distortion_range_brightness}, \ref{fig:distortion_range_contrast}, \ref{fig:distortion_range_color}.\\ \textbf{Actions}: The agent can choose to change the global parameter (brightness, color, contrast) of the image by giving out a scalar $a_t\in [0,2]$. Here, $a_t$ is equivalent to $\alpha$ in the image distortion equations described in Section \ref{section:img_distortions}. The action $a_t$ can be applied sequentially up to $n$ times. After $n$ steps the episode is terminated.
Here, we set the value of $n=1$ to achieve stability in training, as larger horizons lead to the images getting distorted beyond repair during the initial stages of learning, and hence the agent does not explore the \textit{better} actions.\\ \textbf{Reward}: First, we evaluate scores $d_t$ for the images as follows: \begin{equation} d_t(x) = \gamma (IoU(x)) + (1-\gamma) (F1(x)) \label{eqn:reward} \end{equation} $x$ is the input image to the pre-trained object detector. IoU is the average of the intersection over union values for the bounding boxes predicted in the image and F1 is the F1-score for the image. We set $\gamma=0.1$ because we want to give more importance to the number of correct objects being detected.\\ We evaluate: \begin{itemize} \setlength\itemsep{-1mm} \item $d_{o,t} = d_t(\text{original image})$ \item $d_{d,t} = d_t(\text{distorted image})$ \item $d_{s,t} = d_t(\text{state})$ \end{itemize} where the \textit{original image} is the one before the random distortion, the \textit{distorted image} is the image after the random distortion and the \textit{state} is the image obtained after taking the action proposed by the agent. We define \begin{equation} \beta_t = 2 d_{s,t}-d_{o,t}-d_{d,t} \end{equation} Here, $\beta_t$ is positive if and only if the agent's action leads to an image whose detection score exceeds the average of the scores of the original image and the distorted image. Thus, we give the reward ($r_t$) as follows: \[ r_t = \begin{cases} \text{+1,} &\quad\text{if } \beta_t \ge -\epsilon \\ \text{-1,} &\quad\text{otherwise} \\ \end{cases} \] Note that $d_{o,t} \textrm{ and } d_{d,t}$ do not change in an episode and only $d_{s,t}$ changes over the episode. We set the hyperparameter $\epsilon=0.01$ as we do not want to penalise minor shifts in bounding boxes which result in small changes in IoU in Eqn.~(\ref{eqn:reward}). Fig \ref{fig:flow} shows the training procedure for \textit{ObjectRL}.
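The score and reward definitions above reduce to a few lines. In this sketch, `d_state`, `d_original` and `d_distorted` stand for $d_{s,t}$, $d_{o,t}$ and $d_{d,t}$; the function names are ours, for illustration only.

```python
def detection_score(iou_avg, f1, gamma=0.1):
    """d_t(x) = gamma * IoU(x) + (1 - gamma) * F1(x)."""
    return gamma * iou_avg + (1 - gamma) * f1

def reward(d_state, d_original, d_distorted, eps=0.01):
    """r_t = +1 if beta_t = 2*d_s - d_o - d_d >= -eps, else -1."""
    beta = 2 * d_state - d_original - d_distorted
    return 1 if beta >= -eps else -1
```

The tolerance `eps` keeps the reward at +1 when the agent's image is only marginally worse, e.g. due to a minor shift of a bounding box.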
\subsection{Motivation for ObjectRL} In scenarios where object-detection algorithms are deployed in real-time, for example in autonomous vehicles or drones, lighting conditions and subject speeds can change quickly. If cameras use a single operation mode, the captured image might be quite blurred or dark and hence not ideal for performing object detection. In these cases it would not be possible to create new datasets with images obtained from all the possible combinations of camera parameters and to manually annotate them with bounding-boxes. Also, due to the lack of these annotated images, we cannot fine-tune the existing object-detection networks on the distorted images. Our model leverages digital distortions on existing annotated datasets to learn a policy that can tackle changes in image parameters in real-time and improve the object detection performance.\\ One of the main motivations of \textit{ObjectRL} is to extend it to control camera parameters so as to capture images which are good for object detection in real time. Thus, we propose an extension to \textit{ObjectRL} (for future work) where an RL agent initially captures images by choosing random combinations of camera parameters (exploration phase). A human would then give rewards according to the objects detected in the images in the current buffer. These rewards would then be used to update the policy to improve the choice of camera parameters. This method of assigning a $\{\pm1\}$ reward is much faster than annotating the objects in the images to extend the dataset and training a supervised model on this extended dataset. This methodology is quite similar to the DAgger method (Dataset Aggregation) by Ross et al. \cite{DBLP:journals/corr/abs-1011-0686}, where a human labels the actions in the newly acquired data before adding it into the experience for imitation learning.
\section{Experiments}\label{section:experiments} In this section, we describe the experimental setup for \textit{ObjectRL}. We built our network with PyTorch. For the object detector, we use a Single Shot Detector (SSD) \cite{DBLP:journals/corr/LiuAESR15} and YOLO-v3 \cite{DBLP:journals/corr/RedmonDGF15} trained on the PascalVOC dataset, with a VGG base network \cite{DBLP:journals/corr/SimonyanZ14a} for SSD. We use Proximal Policy Optimization (PPO) \cite{DBLP:journals/corr/SchulmanWDRK17} for optimising the \textit{ObjectRL} agent. We train the agent network on a single NVIDIA GTX 1080Ti with the PascalVOC dataset. \\ Both the actor and the critic networks consist of 6 convolutional layers with (kernel size, stride, number of filters)= \{(4,2,8), (3,2,16), (3,2,32), (3,2,64), (3,1,128), (3,1,256)\} followed by linear layers with output sizes 100, 25 and 1. The agent is updated after 2000 steps for 20 epochs with batch-size=64. We use the Adam Optimizer \cite{Adam} with a learning rate of $10^{-3}$. We use an $\epsilon$-greedy method for exploration, where we anneal $\epsilon$ linearly with the number of episodes until it reaches $0.05$. \begin{figure*}[t] \setlength\tabcolsep{2pt} \centering \begin{tabular}{ccc} \includegraphics[width=0.32\textwidth]{figures/learning_curve.png}& \includegraphics[width=0.32\textwidth]{figures/learning_curve_color.png}& \includegraphics[width=0.32\textwidth]{figures/learning_curve_contrast.png}\\ Brightness & Color & Contrast \end{tabular} \caption{Episodic return of the \textit{ObjectRL} while training, with a moving average of size $30$.
Each iteration represents 1K episodes.} \label{fig:learning_curves} \end{figure*} \section{Results}\label{section:results} \begin{figure*}[t] \setlength\tabcolsep{2pt} \centering \begin{tabular}{cccc} Original& \includegraphics[width=0.30\textwidth]{objectRL/001887o.png}& \includegraphics[width=0.30\textwidth]{objectRL/005647o.png}& \includegraphics[height=0.22\textwidth]{objectRL/006187o.png}\\ Distorted& \includegraphics[width=0.30\textwidth]{objectRL/001887d.png}& \includegraphics[width=0.30\textwidth]{objectRL/005647d.png}& \includegraphics[height=0.22\textwidth]{objectRL/006187d.png}\\ Agent& \includegraphics[width=0.30\textwidth]{objectRL/001887a.png}& \includegraphics[width=0.30\textwidth]{objectRL/005647a.png}& \includegraphics[height=0.22\textwidth]{objectRL/006187a.png}\\ & (a) & (b) & (c) \end{tabular} \caption{A few of the outputs from \textit{ObjectRL} with SSD and minor-scale distortion. The top row contains the original images. The second row contains the distorted images. The bottom row contains the images obtained from the agent. Bounding boxes are drawn over the objects detected by the detector.} \label{fig:objectRL_outputs} \end{figure*} \subsection{Measure for evaluation of ObjectRL: TP-Score} To the best of our knowledge, no suitable measure is defined for this problem, hence we define a measure called \textit{TP-Score(k)} (True Positive Score). This score is the number of images in which $k$ or more true positives were detected that were not detected in the image before the transformation. The \textit{TP-Score(k)} is initialised to zero for a set of images $\mathcal{I}$. For example, let the number of true-positives detected before the transformation be 3 and the number of true-positives detected after the transformation be 5. Then we have one image where 2 extra true-positives were detected which were not detected in the input image. Thus, we increase \textit{TP-Score(1)} and \textit{TP-Score(2)} by one.
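The tally described above can be sketched as follows (an illustrative re-implementation; the list-based bookkeeping is our choice):

```python
def tp_scores(tp_before, tp_after, k_max=5):
    """TP-Score(k): number of images in which k or more extra true
    positives were detected after the transformation.
    tp_before / tp_after: per-image true-positive counts."""
    scores = {k: 0 for k in range(1, k_max + 1)}
    for before, after in zip(tp_before, tp_after):
        gained = after - before
        # an image with g extra detections increments TP-Score(1..g)
        for k in range(1, min(gained, k_max) + 1):
            scores[k] += 1
    return scores
```

Running it on the worked example from the text (3 true positives before, 5 after) increments \textit{TP-Score(1)} and \textit{TP-Score(2)} and nothing else.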
\subsection{Baseline for ObjectRL} To obtain the baselines, we first distort the images in the original dataset. The images are distorted with $\alpha$ randomly chosen from the set $\mathcal{S} = \{0.1,\hdots,1.9,2.0\}$ or $\mathcal{S} = \{0.5,\hdots,1.7, 1.8\}$ depending on the scale. The set of available actions to be applied on these images is $\hat{\mathcal{S}} = \{\frac{1}{s} : s \in \mathcal{S}\}$. We evaluate the \textit{TP-Score(k)} on the distorted images by performing a grid-search over all $\alpha \in \hat{\mathcal{S}}$ and report the scores obtained with the best-performing actions for different types and scales of distortions in Tables \ref{table:objectRL_brightness}, \ref{table:objectRL_color} and \ref{table:objectRL_contrast}. We also report the \textit{TP-Scores} obtained after applying the transformations proposed by \textit{ObjectRL} on the images distorted using the full and minor scales. The scores reported are averaged over 10 image sets $\mathcal{I}$, each containing 10,000 images. Note that the means and standard deviations are rounded to the nearest integers.
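The grid-search baseline amounts to scoring every candidate inverse factor and keeping the best. A minimal sketch, in which `score_fn` is a hypothetical stand-in for running the pre-trained detector on the transformed image and computing the detection score:

```python
def grid_search_action(image, actions, score_fn):
    """Baseline: try every alpha in `actions` and keep the one whose
    transformed image scores highest under the detector."""
    best_alpha, best_score = None, float("-inf")
    for alpha in actions:
        score = score_fn(image, alpha)
        if score > best_score:
            best_alpha, best_score = alpha, score
    return best_alpha, best_score

# The minor-scale action set from the text: inverses of {0.5, ..., 1.8}
minor_scale = [round(0.5 + 0.1 * i, 1) for i in range(14)]
actions = [1.0 / s for s in minor_scale]
```

Each image thus requires one detector pass per candidate factor, which is what makes the baseline an order of magnitude slower than a single agent action.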
\begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{k} & \multicolumn{8}{c|}{\textbf{Brightness}}\\ \cline{2-9} & \multicolumn{4}{c|}{Full-scale} & \multicolumn{4}{c|}{Minor-scale}\\ \cline{2-9} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} \\ \cline{2-9} & \textit{GS} & \textit{ObjectRL} & \textit{GS} & \textit{ObjectRL} & \textit{GS} & \textit{ObjectRL} & \textit{GS} & \textit{ObjectRL}\\ \hline\hline 1 & $955\pm 14$ & $532\pm20$ & $1360\pm 22$ & $976\pm 18$ & $435\pm 25$ & $428\pm23$ & $1025\pm 23$ & $883\pm 24$\\ \hline 2 & $154\pm 6$ & $87\pm3$ & $202\pm 15$ & $118\pm15$ & $87\pm 12$ & $80\pm9$ & $85\pm 15$ & $63\pm15$\\ \hline 3 & $49\pm 3$ & $32\pm4$ & $52\pm 8$ & $18\pm 6$ & $14\pm 5$ & $12\pm3$ & $8\pm 2$ & $5\pm 1$\\ \hline 4 & $18\pm 3$ & $7\pm1$ & $17\pm 2$ & $4\pm 1$ & $5\pm 1$ & $3\pm0$ & $2\pm 0$ & $0$\\ \hline 5 & $7\pm 2$ & $2\pm0$ & $4\pm1$ & $2\pm 0$ & $0$ & $0$ & $0$ & $0$\\ [1ex] \hline \end{tabular} \caption{\textit{TP-Score(k)} with brightness distortion. 
GS stands for Grid-Search.} \label{table:objectRL_brightness} \end{table} \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{k} & \multicolumn{8}{c|}{\textbf{Color}}\\ \cline{2-9} & \multicolumn{4}{c|}{Full-scale} & \multicolumn{4}{c|}{Minor-scale}\\ \cline{2-9} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} \\ \cline{2-9} & \textit{GS} & \textit{ObjectRL} & \textit{GS} & \textit{ObjectRL} & \textit{GS} & \textit{ObjectRL} & \textit{GS} & \textit{ObjectRL}\\ \hline\hline 1 & $973\pm 17$ & $672\pm19$ & $1250\pm 23$ & $1103\pm 21$ & $561\pm 18$ & $532\pm22$ & $974\pm 21$ & $930\pm 22$\\ \hline 2 & $123\pm 7$ & $84\pm4$ & $210\pm 16$ & $135\pm13$ & $43\pm 9$ & $37\pm9$ & $83\pm 12$ & $82\pm12$\\ \hline 3 & $53\pm 4$ & $31\pm3$ & $63\pm 7$ & $23\pm 6$ & $1\pm 0$ & $0$ & $15\pm 2$ & $10\pm 1$\\ \hline 4 & $11\pm 2$ & $3\pm1$ & $19\pm 2$ & $5\pm 1$ & $0$ & $0$ & $6\pm 1$ & $3\pm0$\\ \hline 5 & $5\pm 1$ & $1\pm0$ & $6\pm1$ & $2\pm 0$ & $0$ & $0$ & $0$ & $0$\\ [1ex] \hline \end{tabular} \caption{\textit{TP-Score(k)} with color distortion. 
GS stands for Grid-Search.} \label{table:objectRL_color} \end{table} \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{k} & \multicolumn{8}{c|}{\textbf{Contrast}}\\ \cline{2-9} & \multicolumn{4}{c|}{Full-scale} & \multicolumn{4}{c|}{Minor-scale}\\ \cline{2-9} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} \\ \cline{2-9} & \textit{GS} & \textit{ObjectRL} & \textit{GS} & \textit{ObjectRL} & \textit{GS} & \textit{ObjectRL} & \textit{GS} & \textit{ObjectRL}\\ \hline\hline 1 & $955\pm 15$ & $532\pm20$ & $1360\pm 21$ & $976\pm 19$ & $680\pm 22$ & $663\pm24$ & $1038\pm 23$ & $975\pm 24$\\ \hline 2 & $163\pm 8$ & $101\pm4$ & $213\pm 16$ & $134\pm15$ & $62\pm 10$ & $49\pm9$ & $104\pm 13$ & $85\pm15$\\ \hline 3 & $55\pm 4$ & $36\pm4$ & $67\pm 7$ & $39\pm 6$ & $14\pm 3$ & $6\pm2$ & $19\pm 3$ & $16\pm 2$\\ \hline 4 & $21\pm 2$ & $11\pm1$ & $28\pm 2$ & $13\pm 1$ & $1\pm 0$ & $1\pm0$ & $5\pm 0$ & $3\pm0$\\ \hline 5 & $4\pm 1$ & $2\pm0$ & $5\pm1$ & $2\pm 0$ & $0$ & $0$ & $0$ & $0$\\ [1ex] \hline \end{tabular} \caption{\textit{TP-Score(k)} with contrast distortion. GS stands for Grid-Search.} \label{table:objectRL_contrast} \end{table} As seen in Tables \ref{table:objectRL_brightness}, \ref{table:objectRL_color} and \ref{table:objectRL_contrast}, \textit{ObjectRL} is not able to perform as well as the grid-search for full-scale distortions. The reason for this is that many of the images obtained after the full-scale distortions are not repairable with the action set provided to the agent. But with minor-scale distortions, \textit{ObjectRL} is able to perform as well as the grid-search. The total time taken for the grid-search over all brightness values for one image is $12.5094\pm 0.4103$s for YOLO and $15.1090\pm0.3623$s for SSD on a CPU. The advantage of using \textit{ObjectRL} is that the time taken by the agent is about 10 times less than that of the grid-search.
This latency is quite crucial in applications like surveillance drones and robots, where the lighting conditions can vary quickly and the tolerance for errors in object-detection is low. \subsection{Discussion on the outputs of \textit{ObjectRL}} In this section, we discuss the outputs obtained from \textit{ObjectRL} with SSD and minor-scale distortion, shown in Fig \ref{fig:objectRL_outputs}. In column (a), 4 true positives are detected in the original image, 3 true positives are detected in the distorted image and 4 true positives are detected in the agent-obtained image. The distorted image is slightly darker than the original one. \textit{ObjectRL} is able to recover the object lost after distortion. In column (b), 3 true positives are detected in the original image, 4 true positives are detected in the distorted image and 5 true positives are detected in the agent-obtained image. In this case, even the distorted image performs better than the original image. But the agent-obtained image performs the best, with 5 true-positives. In column (c), 1 true positive is detected in the original image, 1 true positive is detected in the distorted image and 2 true positives are detected in the agent-obtained image. In this case the agent-obtained image outperforms both the distorted and the original image. For a human eye, the agent-obtained image may not look \textit{pleasing} as it is much brighter than the original image. Ideally for a human, the distorted image in column (c) is the most \textit{pleasing}. Column (c) is one of the perfect examples to demonstrate the fact that what looks pleasing to a human eye may not necessarily be optimal for object-detection. Thus, on average, the agent is able to recover either as many objects as detected in the original image or more. According to our experiments, there were $8\pm1$ images with SSD and $34\pm5$ images with YOLO-v3 where the agent-obtained image had fewer true-positives than the original image.
However, this number of true-positives was still greater than the number of true-positives detected in the distorted image. \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{k} & \multicolumn{2}{c|}{\textbf{Brightness}} & \multicolumn{2}{c|}{\textbf{Color}} & \multicolumn{2}{c|}{\textbf{Contrast}}\\ \cline{2-7} & $\pi_{yolo}^{ssd}$ & $\pi_{ssd}^{yolo}$ & $\pi_{yolo}^{ssd}$ & $\pi_{ssd}^{yolo}$ & $\pi_{yolo}^{ssd}$ & $\pi_{ssd}^{yolo}$ \\ \hline\hline 1 & $582\pm 13$ & $1045\pm24$ & $800\pm 15$ & $1249\pm 26$ & $813\pm15$ & $1243\pm26$ \\ \hline 2 & $36\pm 6$ & $73\pm11$ & $72\pm 8$ & $138\pm11$ & $65\pm8$ & $145\pm12$ \\ \hline 3 & $2\pm 0$ & $9\pm4$ & $10\pm1$ & $13\pm3$ & $2\pm0$ & $19\pm4$ \\ [1ex] \hline \end{tabular} \caption{\textit{TP-Score(k)} by crossing the policies.} \label{table:cross_policies} \end{table} \subsection{Crossing Policies} In this section we perform experiments by swapping the detectors for the learned policies. Thus, we use $\pi_{yolo}$ with SSD (denoted as $\pi_{yolo}^{ssd}$) and $\pi_{ssd}$ with YOLO (denoted as $\pi_{ssd}^{yolo}$). In Table \ref{table:cross_policies}, we report the number of images in which the swapped policy detected $k$ or more fewer true positives than the original policy did on the corresponding detector. As shown in Table \ref{table:cross_policies}, $\pi_{ssd}$ on YOLO is worse than $\pi_{yolo}$ on SSD. This is because the range of parameter values for which SSD gives optimal performance is larger than the range for which YOLO gives optimal performance. In essence, YOLO is more sensitive to the image parameters than SSD. \section{Conclusion} This paper proposes the usage of reinforcement learning to improve the object detection performance of a pre-trained object detector network by changing the image parameters (\textit{ObjectRL}). We validate our approach by experimenting with distorted images and making the agent output the actions necessary to improve detection.
Our experiments showed that pre-processing of images is necessary to extract the maximum performance from a pre-trained detector. Future work includes combining all the different distortions in a single model and using it for controlling camera parameters to obtain images. Along with this, local image manipulations, such as changing the image parameters only in certain regions of the image, could be tried out. \section*{Acknowledgements} The first author would like to thank Hannes Gorniaczyk, Manan Tomar and Rahul Ramesh for their insights on the project. \bibliographystyle{splncs04} \section{The problem statement} \begin{frame}{The Problem Statement} \begin{itemize} \item Given an image, find the optimal set of digital transformations to be applied on the image such that the object detection performance of a pre-trained detector improves. \end{itemize} \end{frame} \section{Pipeline} \begin{frame}{Pipeline} \begin{figure} \centering \includegraphics[scale=0.5]{figures/flow.png} \caption{Pipeline} \label{fig:flow} \end{figure} \end{frame} \section{Digital Distortions} \begin{frame}{Digital Distortions} Although we work with digital distortions, ObjectRL can be extended to choose the camera parameters to capture the images by using the image formation model proposed by Hassinoff et al.
\footcite{5540167} \begin{itemize} \item Brightness: $I(x,y)\gets \min (\alpha I(x,y),255)$ \item Color: $gray= (I(r) +I(g) +I(b))/3$, where I(r), I(g) and I(b) are the R, G \& B pixel values respectively.\\ $ I(x,y)\gets \min(\alpha I(x,y) + (1 - \alpha)gray(x,y),255)$ \item Contrast: $\mu_{gray}=mean(gray)$\\ $I(x,y)\gets\min(\alpha I(x,y) + (1 - \alpha)\mu_{gray},255)$ \end{itemize} \end{frame} \begin{frame}{Scales of Distortion} \textbf{Scales of Distortion:} We perform experiments with the following two degrees of distortion in the image: \begin{itemize} \item Full-scale distortion: The random distortion in the images $\alpha \in [0,2]$. \item Minor-scale distortion: The random distortion in the images $\alpha \in [0.5,1.8]$. This constraint ensures that the images do not have distortions which cannot be reverted with the action space the agent has access to. \end{itemize} \end{frame} \section{Distortion Scales} \begin{frame}{Distortion Scales} \begin{figure*} \centering \noindent \includegraphics[width=0.095\textwidth]{figures/Brightness/0.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/1.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/2.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/3.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/4.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/5.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/6.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/7.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/8.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/9.png} \vspace{1mm} \includegraphics[width=0.095\textwidth]{figures/Brightness/10.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/11.png}\hspace{0.5mm}%
\includegraphics[width=0.095\textwidth]{figures/Brightness/12.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/13.png} \includegraphics[width=0.095\textwidth]{figures/Brightness/14.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/15.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/16.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/17.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/18.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Brightness/19.png}\hspace{0.5mm}% \vspace{1mm} \includegraphics[width=0.095\textwidth]{figures/Color/0.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/1.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/2.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/3.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/4.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/5.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/6.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/7.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/8.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/9.png} \vspace{1mm} \includegraphics[width=0.095\textwidth]{figures/Color/10.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/11.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/12.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/13.png} \includegraphics[width=0.095\textwidth]{figures/Color/14.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/15.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/16.png}\hspace{0.5mm}% 
\includegraphics[width=0.095\textwidth]{figures/Color/17.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/18.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Color/19.png}\hspace{0.5mm}% \vspace{1mm} \includegraphics[width=0.095\textwidth]{figures/Contrast/0.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/1.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/2.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/3.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/4.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/5.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/6.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/7.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/8.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/9.png} \vspace{1mm} \includegraphics[width=0.095\textwidth]{figures/Contrast/10.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/11.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/12.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/13.png} \includegraphics[width=0.095\textwidth]{figures/Contrast/14.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/15.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/16.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/17.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/18.png}\hspace{0.5mm}% \includegraphics[width=0.095\textwidth]{figures/Contrast/19.png}\hspace{0.5mm}% \end{figure*} \end{frame} \section{Reward systems} \begin{frame}{Reward} \begin{equation} d_t(x) = \gamma\hdot (IoU(x)) + (1-\gamma)\hdot (F1(x)) \label{eqn:reward} \end{equation} 
Evaluate: \begin{itemize} \setlength\itemsep{-1mm} \item $d_{o,t} = d_t(\text{original image})$ \item $d_{d,t} = d_t(\text{distorted image})$ \item $d_{s,t} = d_t(\text{state})$ \end{itemize} \begin{equation} \beta_t = 2\hdot d_{s,t}-d_{o,t}-d_{d,t} \end{equation} \[ r_t = \begin{cases} \text{+1,} &\quad\text{if } \beta_t \ge -\epsilon \\ \text{-1,} &\quad\text{otherwise} \\ \end{cases} \] \end{frame} \section{Motivation} \begin{frame}{Motivation for ObjectRL} \begin{itemize} \item In real-time detection applications, lighting conditions and subject speeds can change quickly. \item A single operating mode on the camera will not work well in all of these conditions. \item In these cases it would not be possible to create new datasets with images obtained from all the possible combinations of camera parameters, along with manually annotating them with bounding-boxes. \item Also, due to the lack of these annotated images, we cannot fine-tune the existing object-detection networks on the distorted images. \end{itemize} \end{frame} \begin{frame}{Motivation} \begin{itemize} \item We propose an extension to \textit{ObjectRL} (for future work) where an RL agent initially captures images by choosing random combinations of camera parameters (exploration phase). \item A human would then give rewards according to the objects detected in the images in the current buffer, and these rewards would then be used to update the policy to improve the choice of camera parameters. \item This method of assigning a $\{\pm1\}$ reward is much faster than annotating the objects. This methodology is quite similar to the DAgger method (Dataset Aggregation) by Ross et al. \footcite{DBLP:journals/corr/abs-1011-0686} where a human labels the actions in the newly acquired data before adding it to the experience for imitation learning.
\end{itemize} \end{frame} \section{Metric for evaluation} \begin{frame}{Metric: TP-Score} To the best of our knowledge, no suitable measure has been defined for this problem, and hence we define a measure called \textit{TP-Score(k)} (True Positive Score). \begin{itemize} \item TP-Score is the number of images in an image set $\mathcal{I}$ in which $k$ or more true positives were detected which were not detected in the image before transformation. \item The \textit{TP-Score(k)} is initialised to zero for a set of images $\mathcal{I}$. \item For example: let the number of true-positives detected before the transformation be 3 and let the number of true-positives detected after the transformation be 5. Then we have one image where 2 extra true-positives were detected which were not detected in the input image. Thus, we increase \textit{TP-Score(1)} and \textit{TP-Score(2)} by one. \end{itemize} \end{frame} \section{Baselines} \begin{frame}{Baselines} \begin{itemize} \item To obtain the baselines, we first distort the images in the original dataset with $\alpha$ being randomly chosen from the set $\mathcal{S} = \{0.1,\hdots,1.9,2.0\}$ or $\mathcal{S} = \{0.5,\hdots,1.7, 1.8\}$ depending on the scale. The set of available actions to be applied on these images is $\hat{\mathcal{S}} = \{1/s : s \in \mathcal{S}\}$. \item We evaluate the \textit{TP-Score(k)} on the distorted images by performing a grid-search over all $\alpha \in \hat{\mathcal{S}}$ and report the scores obtained with the best-performing actions for different types and scales of distortions.
\end{itemize} \end{frame} \section{Results} \begin{frame}{Learning Curves} \begin{figure*} \setlength\tabcolsep{2pt} \centering \begin{tabular}{ccc} \includegraphics[width=0.32\textwidth]{figures/learning_curve.png}& \includegraphics[width=0.32\textwidth]{figures/learning_curve_color.png}& \includegraphics[width=0.32\textwidth]{figures/learning_curve_contrast.png}\\ Brightness & Color & Contrast \end{tabular} \caption{Episodic return of \textit{ObjectRL} during training, with a moving average of size $30$. Each iteration represents 1K episodes.} \label{fig:learning_curves} \end{figure*} \end{frame} \begin{frame}{} \begin{figure*} \setlength\tabcolsep{2pt} \centering \begin{tabular}{ccc} \includegraphics[width=0.30\textwidth]{objectRL/001887o.png}& \includegraphics[width=0.30\textwidth]{objectRL/005647o.png}& \includegraphics[height=0.22\textwidth]{objectRL/006187o.png}\\ \includegraphics[width=0.30\textwidth]{objectRL/001887d.png}& \includegraphics[width=0.30\textwidth]{objectRL/005647d.png}& \includegraphics[height=0.22\textwidth]{objectRL/006187d.png}\\ \includegraphics[width=0.30\textwidth]{objectRL/001887a.png}& \includegraphics[width=0.30\textwidth]{objectRL/005647a.png}& \includegraphics[height=0.22\textwidth]{objectRL/006187a.png}\\ \end{tabular} \caption{A few of the outputs from \textit{ObjectRL} with SSD and minor-scale distortion. The top row contains the original images. The second row contains the distorted images. The bottom row contains images obtained from the agent.
Bounding boxes are drawn over the objects detected by the detector.} \label{fig:objectRL_outputs} \end{figure*} \end{frame} \begin{frame}{Results} \begin{figure*} \setlength\tabcolsep{2pt} \centering \begin{tabular}{ccc} \includegraphics[width=0.30\textwidth]{objectRL/008499o.png}& \includegraphics[width=0.30\textwidth]{objectRL/001902o.png}& \includegraphics[height=0.22\textwidth]{objectRL/008748o.png}\\ \includegraphics[width=0.30\textwidth]{objectRL/008499d.png}& \includegraphics[width=0.30\textwidth]{objectRL/001902d.png}& \includegraphics[height=0.22\textwidth]{objectRL/008748d.png}\\ \includegraphics[width=0.30\textwidth]{objectRL/008499a.png}& \includegraphics[width=0.30\textwidth]{objectRL/001902a.png}& \includegraphics[height=0.22\textwidth]{objectRL/008748a.png}\\ (a) & (b) & (c) \end{tabular} \label{fig:objectRL_outputs_supp} \end{figure*} \end{frame} \begin{frame}{Results} \begin{table}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{k} & \multicolumn{8}{c|}{\textbf{Brightness}}\\ \cline{2-9} & \multicolumn{4}{c|}{Full-scale} & \multicolumn{4}{c|}{Minor-scale}\\ \cline{2-9} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} \\ \cline{2-9} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}}\\ \hline\hline 1 & $955\pm 14$ & $532\pm20$ & $1360\pm 22$ & $976\pm 18$ & $435\pm 25$ & $428\pm23$ & $1025\pm 23$ & $883\pm 24$\\ \hline 2 & $154\pm 6$ & $87\pm3$ & $202\pm 15$ & $118\pm15$ & $87\pm 12$ & $80\pm9$ & $85\pm 15$ & $63\pm15$\\ \hline 3 & $49\pm 3$ & $32\pm4$ & $52\pm 8$ & $18\pm 6$ & $14\pm 5$ & $12\pm3$ & $8\pm 2$ & $5\pm 1$\\ \hline 4 & $18\pm 3$ & $7\pm1$ & $17\pm 2$ & $4\pm 1$ & $5\pm 1$ & $3\pm0$ & $2\pm 0$ & $0$\\ \hline 5 & $7\pm 2$ & $2\pm0$ & $4\pm1$ & $2\pm 0$ & $0$ & $0$ & $0$ & $0$\\ [1ex]
\hline \end{tabular} } \caption{\textit{TP-Score(k)} with brightness distortion. GS stands for Grid-Search.\footnote{The scores reported are averaged over 10 image sets $\mathcal{I}$, each containing 10,000 images. The means and standard deviations are rounded to the nearest integers.}} \label{table:objectRL_brightness} \end{table} \end{frame} \begin{frame}{Results} \begin{table}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{k} & \multicolumn{8}{c|}{\textbf{Color}}\\ \cline{2-9} & \multicolumn{4}{c|}{Full-scale} & \multicolumn{4}{c|}{Minor-scale}\\ \cline{2-9} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} \\ \cline{2-9} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}}\\ \hline\hline 1 & $973\pm 17$ & $672\pm19$ & $1250\pm 23$ & $1103\pm 21$ & $561\pm 18$ & $532\pm22$ & $974\pm 21$ & $930\pm 22$\\ \hline 2 & $123\pm 7$ & $84\pm4$ & $210\pm 16$ & $135\pm13$ & $43\pm 9$ & $37\pm9$ & $83\pm 12$ & $82\pm12$\\ \hline 3 & $53\pm 4$ & $31\pm3$ & $63\pm 7$ & $23\pm 6$ & $1\pm 0$ & $0$ & $15\pm 2$ & $10\pm 1$\\ \hline 4 & $11\pm 2$ & $3\pm1$ & $19\pm 2$ & $5\pm 1$ & $0$ & $0$ & $6\pm 1$ & $3\pm0$\\ \hline 5 & $5\pm 1$ & $1\pm0$ & $6\pm1$ & $2\pm 0$ & $0$ & $0$ & $0$ & $0$\\ [1ex] \hline \end{tabular} } \caption{\textit{TP-Score(k)} with color distortion. 
GS stands for Grid-Search.} \label{table:objectRL_color} \end{table} \end{frame} \begin{frame}{Results} \begin{table}[t] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{k} & \multicolumn{8}{c|}{\textbf{Contrast}}\\ \cline{2-9} & \multicolumn{4}{c|}{Full-scale} & \multicolumn{4}{c|}{Minor-scale}\\ \cline{2-9} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} & \multicolumn{2}{c|}{SSD} & \multicolumn{2}{c|}{YOLO} \\ \cline{2-9} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}} & \thead{\textit{GS}} & \thead{\textit{ObjectRL}}\\ \hline\hline 1 & $955\pm 15$ & $532\pm20$ & $1360\pm 21$ & $976\pm 19$ & $680\pm 22$ & $663\pm24$ & $1038\pm 23$ & $975\pm 24$\\ \hline 2 & $163\pm 8$ & $101\pm4$ & $213\pm 16$ & $134\pm15$ & $62\pm 10$ & $49\pm9$ & $104\pm 13$ & $85\pm15$\\ \hline 3 & $55\pm 4$ & $36\pm4$ & $67\pm 7$ & $39\pm 6$ & $14\pm 3$ & $6\pm2$ & $19\pm 3$ & $16\pm 2$\\ \hline 4 & $21\pm 2$ & $11\pm1$ & $28\pm 2$ & $13\pm 1$ & $1\pm 0$ & $1\pm0$ & $5\pm 0$ & $3\pm0$\\ \hline 5 & $4\pm 1$ & $2\pm0$ & $5\pm1$ & $2\pm 0$ & $0$ & $0$ & $0$ & $0$\\ [1ex] \hline \end{tabular} } \caption{\textit{TP-Score(k)} with contrast distortion. GS stands for Grid-Search.} \label{table:objectRL_contrast} \end{table} \end{frame} \section{Crossing Policies} \begin{frame}{Cross Policies} \begin{itemize} \item To check for the dependence of the policy learned by the agents on the detector it was trained on, we test $\pi_{yolo}$ with SSD (denoted as $\pi_{yolo}^{ssd}$) and $\pi_{ssd}$ with YOLO (denoted as $\pi_{ssd}^{yolo}$). \item We report the number of images where $k$ or fewer true positives were detected with the swapped policy compared to the original policy on the corresponding detector. \end{itemize}
\end{frame} \begin{frame}{Cross-Policies} \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline \textbf{k} & \multicolumn{2}{c|}{\textbf{Brightness}} & \multicolumn{2}{c|}{\textbf{Color}} & \multicolumn{2}{c|}{\textbf{Contrast}}\\ \cline{2-7} & \thead{$\pi_{yolo}^{ssd}$} & \thead{$\pi_{ssd}^{yolo}$} & \thead{$\pi_{yolo}^{ssd}$} & \thead{$\pi_{ssd}^{yolo}$} & \thead{$\pi_{yolo}^{ssd}$} & \thead{$\pi_{ssd}^{yolo}$} \\ \hline\hline 1 & $582\pm 13$ & $1045\pm24$ & $800\pm 15$ & $1249\pm 26$ & $813\pm15$ & $1243\pm26$ \\ \hline 2 & $36\pm 6$ & $73\pm11$ & $72\pm 8$ & $138\pm11$ & $65\pm8$ & $145\pm12$ \\ \hline 3 & $2\pm 0$ & $9\pm4$ & $10\pm1$ & $13\pm3$ & $2\pm0$ & $19\pm4$ \\ [1ex] \hline \end{tabular} \caption{\textit{TP-Score(k)} by crossing the policies. $\pi_{ssd}$ on YOLO is worse than $\pi_{yolo}$ on SSD. This is because the range of values for which SSD gives optimal performance is bigger than the range of values for which YOLO gives optimal performance. In essence, YOLO is more sensitive to the image parameters than SSD.} \label{table:cross_policies} \end{table} \end{frame} \section{Future work} \begin{frame}{Future work} \begin{itemize} \item Combining all the distortions together instead of one at a time. \item Making local manipulations in the images. \item Extending ObjectRL for choosing camera parameters. \end{itemize} \end{frame} \section{Acknowledgements} \begin{frame}{Acknowledgements} I would like to thank Hannes Gorniaczyk, Rahul Ramesh and Manan Tomar for their insights. \end{frame} \section{End} \begin{frame} \Huge{\centerline{The End}} \end{frame} \end{document}
\section{\label{sec:introduction}Introduction} Confined fluid systems are an important field of study due to the wide range of applications and situations where they can be found. Physically interesting systems in biology or chemistry involve dealing with confined particles, such as carbon nanotubes\cite{Ketal11,MCH11} or biological ion-channels,\cite{DNHEG08} to cite just a couple of examples. In many of these systems, the geometry is so restrictive that they become quasi one-dimensional (Q1D) systems. These Q1D systems can be used to model a wide range of extremely confined two- or three-dimensional systems, in which the space available along one of the dimensions is much larger than along the other ones. The study of this type of fluid is especially interesting from a statistical-mechanical perspective since many such systems are amenable to exact analytical solutions, therefore providing insight into the thermodynamic and structural properties of such systems. An important subset of confined fluids are those under the so-called single-file confinement,\cite{PGIB21,HP18} where particles are inside a pore which is not wide enough to allow particles to either bypass each other or interact with their second nearest neighbors, therefore confining them into a single-file formation. Q1D systems, usually restricted to single-file configurations, constitute an active field of study for both equilibrium\cite{B62,B64b,WPM82,PK92,KP93,P02,KMP04,FMP04,VBG11,GM14,GM15,HFC18,M14b,M15,M20,HBPT20,P20,PBT22,JF22} and nonequilibrium properties,\cite{GM14,FMP04,KMS14,RGM16,TFCM17,WLB20,LG20,HBPT21,RS22,MBGM22,RGIB22,MGB22} as well as for jamming effects,\cite{GM14,ZGM20,I20,LM20,LM22} from different perspectives.
In the case of confined two-dimensional systems, a simple but nevertheless functional way of modeling the particle interaction is by means of the hard-disk interaction potential, in which particles are not allowed to interpenetrate but otherwise do not interact among themselves. The exact thermodynamic solution for the single-file configuration with only nearest-neighbor interactions was constructed long ago via the transfer-matrix method.\cite{B64b,KP93} However, the method involves numerical schemes to solve an eigenvalue equation in order to obtain the equation of state of the system, and no fully analytical solution has yet been found. In this sense, several proposals have been developed during the last few years to obtain analytically accurate approximations to the exact solution, involving first-order approximations of the contact distance of the particles,\cite{VBG11} virial-coefficient expansions,\cite{M14b,M15,M20} or distinguishing between high- and low-pressure regimes.\cite{KMP04,GM14} In this paper, we revisit the exact transfer-matrix solution\cite{KP93} for the single-file Q1D hard-disk fluid and perform a perturbation analysis to calculate the exact third and fourth virial coefficients. Interestingly, they differ from previous evaluations via the standard diagrammatic method,\cite{M14b,M15,M20} the reason being that the textbook cancelation of the so-called reducible diagrams does not hold in the case of confined fluids. We also study the behavior in the close-packing, high-pressure limit, and analyze its asymptotic properties. In view of this, we propose two different analytical approximations for the equation of state and study their behavior against the exact solution. Our basic approximation, despite its simplicity, is able to recover the second virial coefficient, provides reasonable estimates of the third and fourth virial coefficients, and predicts the correct close-packing linear density.
A more sophisticated and accurate advanced approximation improves the estimates of the third and fourth virial coefficients and, moreover, reduces to the exact solution in the close-packing limit. For high pressures and wide pores, the execution times of the basic and advanced approximations are seen to be up to about $10^5$ and $10^3$ times shorter, respectively, than for the exact solution. Our paper is organized as follows: Section~\ref{sec:solution} defines the system and its exact solution, including an analysis of the low- and high-pressure behaviors in Secs.~\ref{sec:system_lowpressure} and~\ref{sec:system_highpressure}, respectively. Section~\ref{sec:approximations} presents our two analytical approximations to the equation of state, while Sec.~\ref{sec:4} performs an assessment of both approximations versus the exact solution. The paper is closed in Sec.~\ref{sec: concl} with some concluding remarks. The most technical details are relegated to Appendices~\ref{app:mapping_onedimension}--\ref{app:numerical_methods}. \section{\label{sec:solution}The Confined Hard-Disk Fluid. Exact Properties} \subsection{System} We consider a system of $N$ hard disks of unit diameter confined in a long channel of length $L \gg 1$ and width $w = 1+\epsilon$, with $0\leq \epsilon \leq \epsilon_{\mathrm{max}}$, where $\epsilon_{\mathrm{max}}=\sqrt{3}/2\simeq 0.866$ in order to ensure the single-file condition and preclude second nearest-neighbor interactions, as depicted in Fig.~\ref{fig:model_image02}(a). As illustrated in Fig.~\ref{fig:model_image02}(b), if the transverse separation between two disks is $s$, their longitudinal separation at contact is \beq \label{eq:a(s)} a(s)\equiv\sqrt{1-s^2}. \eeq \begin{figure} \includegraphics[trim={0cm 3cm 0cm 3cm},clip,width=0.95\columnwidth]{model_image02.eps}\\ \includegraphics[trim={0cm 3cm 0cm 3cm},clip,width=0.95\columnwidth]{model_image01.eps} \caption{Schematic representation of the single-file hard-disk fluid.
Panel (a) shows the maximum allowed value of the pore size, $1+\epsilon_{\max}$ (with $\epsilon_{\mathrm{max}}=\sqrt{3}/2$), beyond which a disk can interact with its second nearest-neighbors, thus violating the single-file condition. Panel (b) depicts a case with $\epsilon<\epsilon_{\mathrm{max}}$, where the two disks on the right show the definition of the longitudinal separation at contact, $a(s)$, while the three disks on the left illustrate the close-packing configuration.} \label{fig:model_image02} \end{figure} The number of disks per unit area is $\rho=N/Lw$. However, due to the Q1D configuration of the system, it is more convenient to characterize the number density through the number of particles per unit length, $\lambda \equiv N/L=\rho w$. Given a value of the excess pore width $\epsilon$, the close-packing value of the number of particles per unit length is $\lambda_{\mathrm{cp}}(\epsilon) = 1/a(\epsilon)$, as inferred from Fig.~\ref{fig:model_image02}(b), at which the particles occupy the maximum available space, so that the pressure diverges at that value. This divergence will be discussed in depth in Sec.~\ref{sec:system_highpressure}. Note that $\lambda_{\text{cp}}(\epsilon_{\mathrm{max}}) = 2$. Due to the anisotropy of the original two-dimensional system, the transverse pressure ($P_\perp$) is different from the longitudinal one ($P_{\|}$). We then define the (reduced) one-dimensional pressure as $p \equiv P_{\|} w$, where henceforth we take $k_BT=1$ as the unit of energy ($k_B$ and $T$ being the Boltzmann constant and the absolute temperature, respectively). \subsection{\label{sec:system_solution}Transfer-Matrix Solution} The exact solution to this system can be obtained via the transfer-matrix method.
In the thermodynamic limit of large $N$, the excess Gibbs free energy per particle, $g^{\text{ex}}(p)$, may be written as\cite{KP93} \begin{equation}\label{eq:gibbs_equation} g^{\text{ex}}(p) = - \ln\frac{\ell(p)}{\epsilon}, \end{equation} where $\ell(p)$ is the maximum eigenvalue corresponding to the problem \begin{equation}\label{eq:eigenvalue_equation} \int\mathrm{d}y_2\, e^{-a(y_1-y_2)p}\phi(y_2)=\ell \phi(y_1), \end{equation} $\phi(y)$ being the associated eigenfunction. Here, we have taken $y=0$ at the midpoint between both walls, which are located at $y=\pm\epsilon/2$. Henceforth, all integrations over the $y$-variable will be understood to run along the interval $-\epsilon/2\leq y\leq \epsilon/2$ and the integration limits will be omitted. With the normalization condition \begin{equation}\label{eq:eigenvalue_normalization} \int\mathrm{d}y\,\phi^2(y)=1, \end{equation} $\phi^2(y)$ represents the probability density along the transverse direction $y$ within this framework. Multiplying both sides of Eq.~\eqref{eq:eigenvalue_equation} by $\phi(y_1)$ and integrating over $y_1$, we get \begin{equation} \label{eq:2.5} \ell = \int\mathrm{d}y_1\int\mathrm{d}y_2\, e^{-a(y_1-y_2)p}\phi(y_1)\phi(y_2), \end{equation} where the normalization condition, Eq.~\eqref{eq:eigenvalue_normalization}, has been used. Of course, both $\ell$ and $\phi(y)$ are functions of $p$. Differentiating both sides of Eq.~\eqref{eq:2.5} with respect to $p$, one gets \bal \label{eq:ell_derivative} \partial_p\ell=&-\int\mathrm{d}y_1\int\mathrm{d}y_2\, e^{-a(y_1-y_2)p}a(y_1-y_2)\phi(y_1)\phi(y_2)\nonumber\\ &+2\int\mathrm{d}y_1\int\mathrm{d}y_2\, e^{-a(y_1-y_2)p}\phi(y_2)\partial_p\phi(y_1). \eal On account of Eq.~\eqref{eq:eigenvalue_equation}, the second term on the right-hand side of Eq.~\eqref{eq:ell_derivative} can be rewritten as $2\ell \int\mathrm{d}y_1\,\phi(y_1) \partial_p \phi(y_1)=\ell \partial_p\int\mathrm{d}y_1\,\phi^2(y_1)=0$.
Thus, $\partial_p\ell$ is only given by the first term on the right-hand side of Eq.~\eqref{eq:ell_derivative}. The compressibility factor $ Z \equiv p/\lambda$ can be obtained from the Gibbs free energy by the thermodynamic relation $Z=1+p\partial_p g^{\text{ex}}(p)=1-(p/\ell)\partial_p\ell$. Making use of Eq.~\eqref{eq:ell_derivative}, one gets \begin{equation}\label{eq:z_exact_01} Z = 1 + \frac{p}{\ell} \,\int\mathrm{d}y_1\int\mathrm{d}y_2 \,e^{-a(y_1-y_2)p}a(y_1-y_2)\phi(y_1)\phi(y_2). \end{equation} Taking into account Eq.~\eqref{eq:2.5}, Eq.~\eqref{eq:z_exact_01} can be rewritten as \begin{equation}\label{eq:z_exact_02} Z = 1 + p \frac{\int\mathrm{d}y_1\int\mathrm{d}y_2 \, e^{-a(y_1-y_2)p}a(y_1-y_2)\phi(y_1)\phi(y_2)}{\int\mathrm{d}y_1\int\mathrm{d}y_2 \, e^{-a(y_1-y_2)p}\phi(y_1)\phi(y_2)}. \end{equation} Note that, in contrast to the form \eqref{eq:z_exact_01}, the eigenfunction $\phi(y)$ in the form \eqref{eq:z_exact_02} does not need to be normalized. While both forms are fully equivalent inasmuch as the exact $\ell$ and $\phi(y)$ are used, they differ in the case of approximations. It is worth remarking that the solution shown here can also be obtained by a mapping of the original two-dimensional system onto a one-dimensional nonadditive mixture of hard rods, as outlined in Appendix~\ref{app:mapping_onedimension}. \subsection{\label{sec:system_lowpressure}Low-Pressure Behavior} Virial expansions are one of the most common methods to describe fluids in low-density (or, equivalently, low-pressure) conditions.\cite{HM13,S16} In general, access to the exact virial coefficients of a given system, at least the lower-order ones, is fundamental to improve the knowledge of the system and also to test the accuracy of approximate methods. The virial coefficients $\{B_n\}$ are defined from the expansion of the compressibility factor in powers of density: \beq \label{eq:virial_density} Z=1+\sum_{n=2}^\infty B_n \lambda^{n-1}. 
\eeq Analogously, one can introduce the expansions of $g^{\text{ex}}$ and $Z$ in powers of pressure, \begin{subequations} \beq \label{eq:2.9a} g^{\text{ex}}=\sum_{n=2}^\infty \frac{B_n'}{n-1} p^{n-1}, \eeq \beq \label{eq:2.9b} Z=1+\sum_{n=2}^\infty B_n' p^{n-1}, \eeq \end{subequations} where \begin{equation} \label{eq:2.10} B'_2 = B_2,\quad B'_3 = B_3-B_2^2,\quad B'_4 = B_4-3 B_2 B_3 + 2B_2^3, \end{equation} and so on. The second virial coefficient has an analytical expression, namely\cite{KMP04,M18} \begin{equation} \label{eq:B2} B_2 = \frac{2}{3}\frac{ \left(1+\frac{\epsilon ^2}{2}\right) \sqrt{1-\epsilon ^2}-1}{\epsilon ^2}+\frac{\sin ^{-1}(\epsilon )}{\epsilon }. \end{equation} Let us introduce the expansions in powers of $p$ of both the eigenvalue and the eigenfunction in Eq.~\eqref{eq:eigenvalue_equation} as \begin{equation}\label{eq:virial_series} \phi(y) = \sum_{n=0}^{\infty} \phi_n(y) p^n, \quad \ell = \sum_{n=0}^{\infty} \ell_n p^n. \end{equation} Inserting the expansion of $\ell$ into Eq.~\eqref{eq:gibbs_equation} and comparing with Eq.~\eqref{eq:2.9a}, we get \beq \label{eq:2.13} B_3'=-2\frac{\ell_2}{\epsilon}+B_2^2, \quad B_4'=-3\frac{\ell_3}{\epsilon}-3B_2\frac{\ell_2}{\epsilon}+B_2^3, \eeq where use has been made of $\ell_0=\epsilon$ and $\ell_1=-\epsilon B_2$ (see Appendix~\ref{app:virial_series_math}). Alternatively, the expansion of $\phi(y)$ provides the expansion of the integral \bal \label{eq:I} I\equiv&\int\mathrm{d}y_1\int\mathrm{d}y_2\, e^{-a(y_1-y_2)p}a(y_1-y_2)\phi(y_1)\phi(y_2)\nonumber\\ =&\sum_{n=0}^\infty I_n p^n. \eal Since $I=-\partial_p\ell$, one has \beq \label{eq:In} I_n=-(n+1)\ell_{n+1}.
\eeq By inserting the series expansions of Eq.~\eqref{eq:virial_series} into both the normalization condition, Eq.~\eqref{eq:eigenvalue_normalization}, and the eigenvalue equation, Eq.~\eqref{eq:eigenvalue_equation}, and equating the coefficients of the same powers of $p$ on both sides of the equation, one can in principle obtain as many terms as desired. Appendix~\ref{app:virial_series_math} shows the calculation of $\{\phi_0,\phi_1,\phi_2\}$ and $\{\ell_0,\ell_1,\ell_2\}$. Also, $\ell_3$ can be obtained from $I_2$. Substitution of $\ell_2$ and $\ell_3$ into Eq.~\eqref{eq:2.13} yields \begin{subequations} \label{eq:B3p&B4p} \bal \label{eq:B3p} B_3'=&-\left(1+2W_2-3B_2^2-\frac{\epsilon^2}{6}\right)\nonumber\\ =&-\frac{\epsilon^4}{80}\left(1 +\frac{41 \epsilon^2}{126} + \frac{349 \epsilon^4}{2520} +\cdots\right), \eal \bal \label{eq:B4p} B_4'=&-\Bigg[\left(12W_2-10B_2^2+\frac{3}{2}-\frac{\epsilon^2}{4}\right)B_2-3W_3\nonumber\\ &+\frac{(1-\epsilon^2)^{5/2}-1-5\epsilon^2}{15\epsilon^2}\Bigg]\nonumber\\ =&-\frac{23\epsilon^6}{15120}\left(1 +\frac{567 \epsilon^2}{920}+\frac{14823 \epsilon^4}{40480} +\cdots\right), \eal \end{subequations} where $W_2$ and $W_3$ are given by Eqs.~\eqref{eq:W_2} and \eqref{eq:W_3}, requiring the numerical evaluation of a single and a double integral, respectively.
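The exact values quoted throughout follow from solving the eigenvalue problem \eqref{eq:eigenvalue_equation} numerically. A minimal Nystr\"om-type sketch of ours (plain trapezoidal grid; not the scheme of Appendix~\ref{app:numerical_methods}) that computes $Z$ from Eq.~\eqref{eq:z_exact_02}:

```python
import numpy as np

def a(s):
    # longitudinal separation at contact, a(s) = sqrt(1 - s^2), Eq. (eq:a(s))
    return np.sqrt(1.0 - s**2)

def compressibility_factor(eps, p, n=400):
    # Discretize the kernel e^{-a(y1-y2)p} on a trapezoidal grid over
    # [-eps/2, eps/2]; the largest eigenvalue/eigenvector of the weighted
    # kernel matrix approximate ell(p) and phi(y).
    y, h = np.linspace(-eps / 2, eps / 2, n, retstep=True)
    w = np.full(n, h)
    w[0] = w[-1] = h / 2                      # trapezoidal weights
    A = a(y[:, None] - y[None, :])            # a(y1 - y2) on the grid
    K = np.exp(-A * p)
    sqw = np.sqrt(w)
    vals, vecs = np.linalg.eigh(sqw[:, None] * K * sqw[None, :])
    phi = vecs[:, -1] / sqw                   # eigenfunction (unnormalized)
    # Z from Eq. (eq:z_exact_02); the normalization of phi cancels in the ratio.
    num = np.einsum('i,j,ij,ij,i,j->', w, w, K, A, phi, phi)
    den = np.einsum('i,j,ij,i,j->', w, w, K, phi, phi)
    return 1.0 + p * num / den
```

For $\epsilon=0.4$ and $p=12$, for instance, this sketch reproduces the exact value $Z=12.774$ quoted in Table~\ref{table:2}.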
\begin{table} \caption{\label{table:1}Comparison between exact and MC values\cite{M20} of $\Delta Z\equiv Z-(1+B_2 p)$ and the truncated series $B_3' p^2$ and $B_3' p^2+B_4' p^3$.} \begin{ruledtabular} \begin{tabular}{cccccccc} $\epsilon$&$p$&$\Delta Z_{\text{exact}}$&$\Delta Z_{\text{MC}}$&$B_3'p^2$&$B_3'p^2$&$B_{3,\text{M}}'p^2$&$B_{3,\text{M}}'p^2$\\ &&&&&$+B_4'p^3$&&$+B_{4,\text{M}}'p^3$\\ \hline $0.4$&$1.2$&$-0.0005$&$-0.0006$&$-0.0005$&$-0.0005$&$-0.0003$&$-0.0003$\\ &$12$&$-0.0631$&$-0.0632$&$-0.0487$&$ -0.0606$&$-0.0269$&$-0.0748$\\ $0.8$&$1.2$&$-0.0108$&$-0.0107$&$-0.0095$&$-0.0107$&$-0.0052$&$-0.0093$\\ &$12$&$-2.6547$&$-2.6546$&$-0.9541$&$-2.0894$&$-0.5195$&$-4.6449$\\ \end{tabular} \end{ruledtabular} \end{table} It is worth noticing that the exact expressions derived here for $B_3'$ and $B_4'$ differ from those recently obtained by Mon\cite{M14b,M15,M20} (here referred to as $B_{3,\text{M}}'$ and $B_{4,\text{M}}'$) via integration of Ree--Hoover diagrams. In particular, the leading terms in the expansions in powers of $\epsilon$ of Mon's coefficients are $B_{3,\text{M}}'=-\frac{\epsilon^4}{144}+\mathcal{O}(\epsilon^6)$ and $B_{4,\text{M}}'=-\frac{\epsilon^6}{160}+\mathcal{O}(\epsilon^8)$, which contrast with the leading terms in Eqs.~\eqref{eq:B3p&B4p}. As a test, let us define $\Delta Z\equiv Z-(1+B_2 p)=B_3' p^2+B_4' p^3+\cdots$. Table \ref{table:1} compares exact results obtained from Eq.~\eqref{eq:z_exact_01} and Monte Carlo (MC) simulation data\cite{M20} of $\Delta Z$ with the values obtained from the truncated series $B_3' p^2$ and $B_3' p^2+B_4' p^3$, when both the exact virial coefficients, Eqs.~\eqref{eq:B3p&B4p}, and Mon's coefficients\cite{M20} are used. We can observe a good behavior of $B_3' p^2$ and $B_3' p^2+B_4' p^3$ (especially in the cases with $p=1.2$) when the exact coefficients are used, but not in the case of Mon's coefficients. 
The origin of the discrepancy between the exact virial coefficients obtained here from the transfer-matrix solution, Eq.~\eqref{eq:z_exact_01}, and those derived from the standard diagrammatic method\cite{M14b,M15,M20} is very subtle and lies in the implicit assumption of a cancelation of the so-called reducible diagrams in the latter method. This cancelation is inherently associated with the factorization property of the reducible diagrams into products of irreducible ones,\cite{S16} as a consequence of the translational invariance of the position of any particle. While this factorization property holds in bulk fluids, it fails in confined fluids, due to a breakdown of the translational invariance along the confined directions. Let us take the coefficient $B_3$ as the simplest example. By assuming cancelation of the reducible diagrams, one would have\cite{M20} \beq B_{3,\text{M}}=-\frac{1}{3}\Sthree. \eeq On the other hand, without presupposing any cancelation, the actual result is \beq B_3=B_{3,\text{M}}+\Delta B_3,\quad \Delta B_3\equiv(\Stwo)^2-\Rthree. \eeq Here, the diagrams have their standard meaning,\cite{S16} except that they are supposed to be divided by $L\epsilon^n$, $n$ being the number of particles represented in the diagram. In a bulk fluid, $\Delta B_3=0$ due to the factorization property of the reducible diagrams. However, in our confined system one has \beq \Stwo=-2B_2,\quad \Rthree=4W_2, \eeq so that $\Delta B_3=4(B_2^2-W_2)\neq 0$. As a byproduct, from Eq.~\eqref{eq:B3p} we obtain \beq \label{eq:B3Mp} B_{3,\text{M}}'=B_3'-\Delta B_3 =-\left(1-2W_2+B_2^2-\frac{\epsilon^2}{6}\right). \eeq This is equivalent to but much more compact than the expression derived in Ref.~\onlinecite{M20}. \subsection{\label{sec:system_highpressure} High-Pressure Behavior} Solving numerically the eigenvalue problem in Eq.~\eqref{eq:eigenvalue_equation} becomes increasingly difficult as pressure grows and the system approaches the close-packing limit.
It is, therefore, of interest to study analytically the limit $p\to\infty$ (or, equivalently, $\lambda\to\lambda_\text{cp}$) in order to understand the full behavior of the system. In this high-pressure limit, particles accumulate more and more near the walls, which means that $\phi(y)$ becomes non-zero only in two symmetric layers near $y=\pm \frac{\epsilon}{2}$. As a consequence, the eigenfunction $\phi(y)$ and the eigenvalue $\ell$ for high values of $p$ adopt the forms (see Appendix \ref{app:C} for details) \begin{subequations} \label{eq:high-p} \begin{equation} \label{eq:phi_high} \phi(y) \to \frac{1}{\sqrt{\mathcal{N}}}\left[ \phi_+(y)+\phi_-(y)\right], \quad \phi_\pm(y)\equiv e^{-a(y\pm\frac{\epsilon}{2})p}, \end{equation} \begin{equation} \label{eq:l_high} \ell \to \frac{a(\epsilon)}{2\epsilon p}e^{-a(\epsilon)p}. \end{equation} \end{subequations} In Eq.~\eqref{eq:phi_high}, the normalization constant is \beq \label{eq:2.22a} \mathcal{N}\to \frac{a(\epsilon)}{\epsilon p}e^{-2a(\epsilon)p}. \eeq Note that, for high $p$, $\phi_\pm(y)$ is practically nonzero only inside a region of width of the order of $a(\epsilon)/\epsilon p$, adjacent to the wall at $y=\pm \frac{\epsilon}{2}$. \begin{table} \caption{\label{table:2}Comparison between exact and MC values\cite{M20} of $Z$ and the high-pressure asymptotic form, Eq.~\eqref{eq:Z_high}.} \begin{ruledtabular} \begin{tabular}{ccccc} $\epsilon$&$p$&$Z_{\text{exact}}$ &$Z_{\text{MC}}$&$2+a(\epsilon)p$\\ \hline $0.4$&$12$&$12.774$&$12.774$&$12.998$\\ &$120$&$112.04$&$112.03$&$111.98$\\ $0.8$&$12$&$9.6547$&$9.6548$&$9.2000$\\ &$120$&$74.017$&$74.016$&$74.000$\\ \end{tabular} \end{ruledtabular} \end{table} As proved in Appendix \ref{app:C}, the high-pressure compressibility factor becomes \beq \label{eq:Z_high} Z\to 2+a(\epsilon)p. \eeq Table \ref{table:2} shows that exact and MC simulation data\cite{M20} confirm the validity of Eq.~\eqref{eq:Z_high} as pressure increases. 
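Taking $a(\epsilon)=\sqrt{1-\epsilon^2}$, the contact distance for unit-diameter disks (an assumption here, since Eq.~\eqref{eq:a(s)} is not reproduced in this section, but one consistent with $\lambda_\text{cp}=1/a(\epsilon)$ and with Table~\ref{table:2}), the asymptote of Eq.~\eqref{eq:Z_high} can be checked in a couple of lines:

```python
from math import sqrt

def a(eps):
    # Contact distance a(s) = sqrt(1 - s^2) for unit-diameter disks,
    # assumed here; consistent with lambda_cp = 1/a(eps) and Table 2.
    return sqrt(1.0 - eps**2)

def Z_high(eps, p):
    # High-pressure asymptote, Eq. (Z_high)
    return 2.0 + a(eps) * p

for eps, p in [(0.4, 12), (0.4, 120), (0.8, 12), (0.8, 120)]:
    print(eps, p, round(Z_high(eps, p), 3))
```

The output matches the last column of Table~\ref{table:2} to the precision quoted there.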
Recalling that $\lambda_\text{cp}=1/a(\epsilon)$, Eq.~\eqref{eq:Z_high} can be recast as \begin{equation}\label{eq:z_residue} Z\to\frac{2}{1-\lambda/\lambda_{\text{cp}}}. \end{equation} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{plot_Z_renormalized.eps} \caption{Normalized compressibility factor $(1-\lambda/\lambda_\text{cp})Z$ versus $\lambda/\lambda_{\text{cp}}$ for (from right to left) $\epsilon=0.3$, $0.4$, \ldots, $0.8$.} \label{fig:z_residue} \end{figure} Equation~\eqref{eq:z_residue} embodies two important features of the high-pressure asymptotic behavior of the compressibility factor. First, $Z$ presents a simple pole at $\lambda=\lambda_{\text{cp}}$, as expected. Second, the residue of the pole is not $1$ (as happens in the hard-rod Tonks gas\cite{T36}), but $2$. These two features are made quite apparent in Fig.~\ref{fig:z_residue}, where the exact \emph{normalized} compressibility factor $(1-\lambda/\lambda_\text{cp})Z$ is plotted as a function of the scaled density $\lambda/\lambda_{\text{cp}}$ for several values of $\epsilon$. It is obvious that the limiting value $(1-\lambda/\lambda_\text{cp})Z\to 2$ requires densities closer and closer to $\lambda_\text{cp}$ as $\epsilon$ decreases. In fact, in the Tonks gas, $\lambda_\text{cp}=1$ and $Z=1/(1-\lambda)$ for any density. This shows that the limits $p\to\infty$ and $\epsilon\to 0$ do not commute. 
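Between the virial and the asymptotic regimes, Eq.~\eqref{eq:eigenvalue_equation} has to be handled numerically. As an illustration only (this is not the scheme of Appendix~\ref{app:numerical_methods}), the following Nystr\"om-type Python sketch assumes the transfer-matrix kernel $e^{-a(y_1-y_2)p}$ with $a(s)=\sqrt{1-s^2}$ for unit-diameter disks, and evaluates $Z$ from the unnormalized ratio form of Eq.~\eqref{eq:z_exact_02}:

```python
import numpy as np

def a(s):
    # Assumed contact distance for unit-diameter disks
    return np.sqrt(1.0 - s**2)

def exact_Z(eps, p, n=200):
    # Nystrom discretization of the kernel on Gauss-Legendre nodes in [-eps/2, eps/2]
    x, w = np.polynomial.legendre.leggauss(n)
    y = 0.5 * eps * x
    w = 0.5 * eps * w
    s = y[:, None] - y[None, :]
    # common factor e^{-a(eps)p} removed: it only rescales the eigenvalue
    K = np.exp(-(a(s) - a(eps)) * p)
    B = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]   # symmetrized matrix
    vals, vecs = np.linalg.eigh(B)
    phi = vecs[:, -1] / np.sqrt(w)      # dominant (unnormalized) eigenfunction
    W = (w * phi)[:, None] * (w * phi)[None, :]
    # ratio form of Eq. (z_exact_02): Z = 1 + p <a> weighted by phi(y1) phi(y2)
    return 1.0 + p * np.sum(W * K * a(s)) / np.sum(W * K)

print(round(exact_Z(0.4, 12.0), 3))   # Table 2 quotes Z_exact = 12.774
```

Under these assumptions, a couple of hundred Gauss--Legendre nodes suffice to reproduce the $Z_{\text{exact}}$ entries of Table~\ref{table:2} to the digits shown.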
\section{Approximate Equations of State}\label{sec:approximations} In order to obtain the exact equilibrium properties of the confined hard-disk system, one needs to solve Eq.~\eqref{eq:eigenvalue_equation}, which, however, has no known analytical solution, so that one must resort to numerical methods.\cite{KP93} Some authors have proposed to simplify the model by replacing $a(s)$ by its linear approximation, Eq.~\eqref{eq:as_series},\cite{VBG11} or by means of fitting parameters.\cite{KMP04} We propose here an alternative approach that does not rely on solving Eq.~\eqref{eq:eigenvalue_equation} or using any fitting parameters, but instead benefits from the study of the physical properties in the low- and high-pressure limits. For this purpose, it is convenient to consider the equation of state as written in Eq.~\eqref{eq:z_exact_02}, where the eigenvalue $\ell$ is not explicitly written and $\phi(y)$ does not need to be normalized. In the following, two different analytic approximations for $\phi(y)$ will be proposed and analyzed, which will be referred to as the \emph{basic} approximation (BA) and the \emph{advanced} approximation (AA). \subsection{Basic Approximation} Under low-pressure (and, therefore, low-density) conditions, particles barely interact with one another and can then move almost freely around the available space. This yields a nearly uniform density profile along the transverse direction. In the limit $p \rightarrow 0$, this density profile is exactly constant, as seen in Appendix~\ref{app:virial_series_math}. Based on this behavior, we construct here the BA by taking $\phi(y)=\text{const}$ not only for $p\to 0$ but for any value of $p$. As we will see, despite its crudeness, the BA can provide reasonable results, except for very high pressures and/or wide pores.
Under this approximation, Eq.~\eqref{eq:z_exact_02} yields \begin{equation}\label{eq:z_basic_00} Z_{\mathrm{BA}} = 1 + p \,\frac{\int\mathrm{d}y_1\int\mathrm{d}y_2 e^{-a(y_1-y_2)p}a(y_1-y_2)}{\int\mathrm{d}y_1\int\mathrm{d}y_2 e^{-a(y_1-y_2)p}}. \end{equation} Then, by setting $s=y_1 - y_2$ and using the mathematical identity \begin{equation}\label{eq:basic_integration_change} \int\mathrm{d}y_1\int\mathrm{d}y_2\,F(y_2-y_1) = \int_{0}^\epsilon \mathrm{d}s \left[ F(s) + F(-s)\right](\epsilon-s), \end{equation} Eq.~\eqref{eq:z_basic_00} can be simplified as \begin{equation} \label{Z_BA} Z_{\mathrm{BA}} = 1+p\frac{\int_0^{\epsilon} \mathrm{d}s \, a(s)(\epsilon-s)e^{-a(s)p}}{\int_0^{\epsilon} \mathrm{d}s \, (\epsilon-s)e^{-a(s)p}}. \end{equation} Expanding in powers of $p$ in both the numerator and the denominator of Eq.~\eqref{Z_BA}, it is not difficult to obtain the virial coefficients in this BA. As expected, the second virial coefficient $B_2$ is recovered, while the higher-order virial coefficients are approximate. In particular, \begin{subequations} \bal B_{3,\text{BA}}'=&-\left(1-B_2^2-\frac{\epsilon^2}{6}\right)\nonumber\\ =&-\frac{7 \epsilon^4}{720}\left(1 + \frac{31 \epsilon^2}{98} + \frac{261 \epsilon^4}{1960}+\cdots\right), \eal \bal B_{4,\text{BA}}'=&B_2^3-B_2\left(\frac{9}{8}-\frac{\epsilon^2}{4}\right)+\frac{1-(1-\epsilon^2)^{5/2}}{20\epsilon^2}\nonumber\\ =&-\frac{11 \epsilon^6}{15120}\left(1 + \frac{543 \epsilon^2}{880} + \frac{14259 \epsilon^4}{38720}+\cdots\right). \eal \end{subequations} In the opposite high-pressure limit, an analysis similar to that described in Appendix \ref{app:C} yields $Z_{\text{BA}}\to 3+a(\epsilon)p$, which implies \begin{equation} Z_{\text{BA}}\to\frac{3}{1-\lambda/\lambda_{\text{cp}}}. \end{equation} Thus, the BA predicts the right pole at $\lambda=\lambda_{\text{cp}}$ but overestimates the residue by $50\%$. 
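The one-dimensional quadrature in Eq.~\eqref{Z_BA} is straightforward to implement. The sketch below (again assuming $a(s)=\sqrt{1-s^2}$ for unit-diameter disks) verifies both the quoted low-pressure coefficient $B_{3,\text{BA}}'$ and the high-pressure behavior $Z_{\text{BA}}\to 3+a(\epsilon)p$:

```python
import numpy as np

def a(s):
    # Assumed contact distance for unit-diameter disks
    return np.sqrt(1.0 - s**2)

def Z_BA(eps, p, n=400):
    # Gauss-Legendre quadrature of Eq. (Z_BA) on s in [0, eps]; the common
    # factor e^{-a(eps)p} is pulled out to avoid underflow at high pressure.
    x, w = np.polynomial.legendre.leggauss(n)
    s = 0.5 * eps * (x + 1.0)
    w = 0.5 * eps * w
    g = w * (eps - s) * np.exp(-(a(s) - a(eps)) * p)
    return 1.0 + p * np.sum(g * a(s)) / np.sum(g)

eps = 0.4
# B2 as the p -> 0 limit of the ratio in Eq. (Z_BA)
x, w = np.polynomial.legendre.leggauss(400)
s, w = 0.5 * eps * (x + 1.0), 0.5 * eps * w
B2 = np.sum(w * (eps - s) * a(s)) / np.sum(w * (eps - s))
# (Z_BA - 1 - B2 p)/p^2 at small p approaches B3'_BA = -(7 eps^4/720)(1 + ...)
print((Z_BA(eps, 0.1) - 1.0 - B2 * 0.1) / 0.1**2)
# High pressure: Z_BA -> 3 + a(eps) p, i.e., residue 3 instead of the exact 2
print(Z_BA(eps, 120.0), 3.0 + a(eps) * 120.0)
```

At $p=120$ the quadrature result sits within a few tenths of the asymptote, the residual difference being the expected $\mathcal{O}(1/p)$ correction.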
\subsection{Advanced Approximation} In a different vein, the AA is constructed by taking for $\phi(y)$ the same functional form as in the limit $p\to\infty$, Eq.~\eqref{eq:phi_high}, which, when inserted into Eq.~\eqref{eq:z_exact_02}, yields \begin{widetext} \begin{equation}\label{eq:z_AA} Z_{\text{AA}} = 1 + p \frac{\int\mathrm{d}y_1\int\mathrm{d}y_2 \, e^{-a(y_1-y_2)p}a(y_1-y_2)\phi_+(y_1)\left[\phi_+(y_2)+\phi_-(y_2)\right]}{\int\mathrm{d}y_1\int\mathrm{d}y_2 \, e^{-a(y_1-y_2)p}\phi_+(y_1)\left[\phi_+(y_2)+\phi_-(y_2)\right]}, \end{equation} \end{widetext} where use has been made of the property $\phi_-(y)=\phi_+(-y)$. Even though the AA is inspired by the exact high-pressure behavior, Eq.~\eqref{eq:z_AA} makes sense even for low $p$. In fact, since $\lim_{p\to 0}\phi_\pm(y)=1$, both the AA and the BA yield the exact second virial coefficient. Expanding the numerator and the denominator in Eq.~\eqref{eq:z_AA} in powers of $p$, and after some algebra, one finds \begin{subequations} \bal B_{3,\text{AA}}'=&-\left[1-\frac{\epsilon^2}{6}-2B_2^2-2B_2\frac{1-(1-\epsilon^2)^{3/2}}{3\epsilon^2}+2U_2\right]\nonumber\\ =&-\frac{\epsilon^4}{80} \left(1 + \frac{8 \epsilon^2}{21} + \frac{58 \epsilon^4}{315}+\cdots\right), \eal \bal B_{4,\text{AA}}'=&\frac{15}{4}B_2^3 -B_2\left(\frac{4+2\epsilon^2+\epsilon^4}{4\epsilon^2}+6U_2\right)+ \frac{2}{3}+U_3\nonumber\\ &+\left(7B_2^2-\frac{1}{3}-4U_2+\frac{2}{\epsilon^2}B_2\right)\frac{1-(1-\epsilon^2)^{3/2}}{3\epsilon^2}\nonumber\\ =&-\frac{\epsilon^6}{504} \left(1 + \frac{279 \epsilon^2}{400} + \frac{2041 \epsilon^4}{4400}+\cdots\right), \eal \end{subequations} where \begin{subequations} \bal \label{eq:U2} U_2\equiv&\frac{1}{\epsilon}\int\mathrm{d}y\,\psi_1(y)a\left(y+\frac{\epsilon}{2}\right)\nonumber\\ =&1 - \frac{\epsilon^2}{4} - \frac{13 \epsilon^4}{720} - \frac{23 \epsilon^6}{3360}+\cdots, \eal \bal U_3\equiv&\frac{1}{2\epsilon^2}\int\mathrm{d}y_1\int\mathrm{d}y_2\,a(y_1-y_2)a\left(y_1+\frac{\epsilon}{2}\right)\nonumber\\
&\times\left[a\left(y_2+\frac{\epsilon}{2}\right)+a\left(y_2-\frac{\epsilon}{2}\right)\right]\nonumber\\ =&1 - \frac{5 \epsilon^2}{12} - \frac{17 \epsilon^6}{2880}+\cdots. \eal \end{subequations} In Eq.~\eqref{eq:U2}, the function $\psi_1(y)$ is defined by Eq.~\eqref{eq:virial_int_00}. \section{Assessment of the Basic and Advanced Approximations} \label{sec:4} The main idea behind both the BA and AA is that it is possible to approximate the numerator and denominator integrals in Eq.~\eqref{eq:z_exact_02} by replacing the actual eigenfunction $\phi(y)$ by simple approximations. It is now convenient to study how well the system is described by these two approximations, as well as their range of validity. For that purpose, we analyze in this section several properties of the system, comparing the proposed approximations with the numerical solution corresponding to the exact description presented in Sec.~\ref{sec:solution}. Some technical details about our numerical solution of the eigenvalue problem, Eq.~\eqref{eq:eigenvalue_equation}, and the numerical evaluation of the compressibility factor from Eqs.~\eqref{eq:z_exact_01}, \eqref{Z_BA}, and \eqref{eq:z_AA} are given in Appendix \ref{app:numerical_methods}. \subsection{Density Profiles} Figure~\ref{fig:profile_advanced} shows a comparison between the exact numerical density profile coming from Eq.~\eqref{eq:eigenvalue_equation} and the AA analytical density profile, Eq.~\eqref{eq:phi_high}, for $\epsilon=0.4$ and some representative values of $p$. Note that here the normalization constant $\mathcal{N}$ is not given by Eq.~\eqref{eq:2.22a} but is instead obtained by requiring fulfillment of Eq.~\eqref{eq:eigenvalue_normalization}. Although this normalization constant is not needed in Eq.~\eqref{eq:z_AA}, it is needed in Fig.~\ref{fig:profile_advanced}.
We observe that, even though the AA was based on the exact high-pressure limit behavior, a good agreement with the numerical solution is reached for all pressure ranges, including the low-pressure regime, where the solution $\phi \approx \text{const}$ is recovered. In fact, we find that the worst agreement occurs in the intermediate-pressure regime. Similar results can also be found for other values of the width parameter $\epsilon$. \begin{figure} \includegraphics[height=0.6\columnwidth]{plot_profile_normal.eps} \includegraphics[height=0.6\columnwidth]{plot_profile_log.eps} \caption{Plot of the density profile $\phi^2(y)$ as obtained from the numerical solution of Eq.~\eqref{eq:eigenvalue_equation} (solid lines) and as given by the AA, Eq.~\eqref{eq:phi_high}, (dashed lines) for $\epsilon=0.4$ and several values of $p$. In panels (a) and (b), the vertical axis is in normal and logarithmic scale, respectively. Note that, due to symmetry, only the region $0\leq y\leq\frac{\epsilon}{2}$ is considered.} \label{fig:profile_advanced} \end{figure} \subsection{Virial Coefficients} \begin{figure} \includegraphics[height=0.57\columnwidth]{plot_B3p_comparison.eps} \includegraphics[height=0.57\columnwidth]{plot_B4p_comparison.eps} \caption{Plot of (a) $B_3'/\epsilon^4$ and (b) $B_4'/\epsilon^6$ as functions of the excess pore width $\epsilon$. The solid, dashed, and dash-dotted lines correspond to the exact, AA, and BA results, respectively.} \label{fig:B3p&B4p} \end{figure} Figure~\ref{fig:B3p&B4p} compares the exact and approximate values of $B_3'/\epsilon^4$ and $B_4'/\epsilon^6$. As can be observed, the AA predictions are more accurate than the BA ones. On the other hand, since $B_3'$ and $B_4'$ are rather small, the conventional virial coefficients $B_3$ and $B_4$ are dominated by $B_2^2$ and $B_2^3$, respectively [see Eq.~\eqref{eq:2.10}]. Thus, the impact on $B_3$ and $B_4$ of the deviations observed in Fig.~\ref{fig:B3p&B4p} is very small.
At the maximum width, $\epsilon_{\max}=\sqrt{3}/2\simeq 0.866$, the relative deviations in $B_3$ are approximately $0.3\%$ (BA) and $-0.03\%$ (AA), while in the case of $B_4$ they are approximately $-0.5\%$ (BA) and $0.04\%$ (AA). \subsection{Equation of State} The equation of state involves performing the integrals in Eq.~\eqref{eq:z_exact_02} once the density profiles (either exact or approximate) are known. Figure~\ref{fig:comparison_z} depicts the comparison between the two proposed approximations and the results coming from both the numerical evaluation of the exact solution and independently calculated MC simulations.\cite{M20} It shows a good agreement of the BA under low-pressure and/or narrow-pore conditions, and a very good agreement of the AA for practically all ranges of pressure and pore sizes. In the case of the AA, the results deviate visibly from the exact solution only within a small region of medium pressures for large values of the pore size. It is interesting to note that the compressibility factor, especially with an excess pore width $\epsilon=0.80$, presents two inflection points, a feature captured even by the BA. Although the system lacks a true phase transition, those two inflection points can be seen as precursors of the phase transition in genuine two-dimensional systems.\cite{BK11,GM14} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{plot_Z_comparison.eps} \caption{Compressibility factor as a function of the longitudinal density $\lambda$ for different values of the excess pore width $\epsilon$.
The circles represent MC data,\cite{M20} while the solid, dashed, and dash-dotted lines correspond to exact, AA, and BA results, respectively.} \label{fig:comparison_z} \end{figure} Even though the transfer-matrix solution and our approximations were developed only for nearest-neighbor interactions (single-file condition), which precludes an excess width of the channel larger than $\epsilon_{\mathrm{max}}=\sqrt{3}/2$, it is also of interest to study how the theoretical treatments behave when this limit is exceeded.\cite{KP93} In that case, the function $a(s)$ defined by Eq.~\eqref{eq:a(s)} must be supplemented with $a(s)=0$ for $s>1$.\cite{KP93} A comparison with MC simulation data\cite{KP93} for $\epsilon=1$ and $1.118$ is shown in Fig.~\ref{fig:Z_morethanemax}. We observe that, as density or pressure increases, none of the three methods is accurate. Paradoxically, however, the BA does a reasonable job and is perhaps the most reliable approximation in the case $\epsilon=1.118$. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{plot_Z_morethan_epsmax.eps} \caption{Compressibility factor as a function of the longitudinal density $\lambda$ for two values of $\epsilon$ beyond the nearest-neighbor condition: $\epsilon=1$ and $1.118$. The symbols represent MC data,\cite{KP93} while the solid, dashed, and dash-dotted lines correspond to results from the solution of the eigenvalue problem, Eq.~\eqref{eq:eigenvalue_equation}, the AA, and the BA, respectively.} \label{fig:Z_morethanemax} \end{figure} \subsection{Execution Times} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{cputime_timeratio.eps} \caption{Wall time ratios between both approximations and the exact solution versus $p$ for some representative values of $\epsilon$. Closed and open symbols represent the BA and AA values, respectively.
Lines are guides to the eye.} \label{fig:cpu_times} \end{figure} Even though formally analytical expressions exist for the exact solution as well as for the approximations, in all the cases the final computation of $Z$ must be performed numerically (see Appendix \ref{app:numerical_methods}). It is then worth studying the different execution times (the so-called wall times\cite{WallTime}) in order to assess the cost of using the exact solution against any of the two approximations proposed in this paper. Figure~\ref{fig:cpu_times} shows the BA-to-exact and AA-to-exact wall time ratios. We clearly see that both approximations are much faster than the exact evaluation for all ranges of pressure and pore sizes, this wall time advantage increasing with increasing pressure and pore width. For the AA, this is especially relevant in the case of large pore sizes and high pressures, where the performance of the AA is excellent (see Fig.~\ref{fig:comparison_z}). In the case of the BA, the gain in wall time is still very remarkable even for small pore sizes and small or moderate pressures, where both the exact solution and the BA practically yield the same results (see again Fig.~\ref{fig:comparison_z}). \section{Concluding Remarks} \label{sec: concl} In this work we have started from the exact equation of state of the single-file hard-disk confined fluid, as derived from the transfer-matrix method.\cite{KP93} We showed that the same result is also obtained by mapping the original system onto a one-dimensional polydisperse mixture of nonadditive hard rods and just making some mean-field-like assumptions on the latter. From the exact solution we then explored the low-pressure regime by using a perturbation scheme to obtain the exact third and fourth virial coefficients, which, to the best of our knowledge, were still unknown. 
The results differ from a recent alternative derivation\cite{M20} based on the conventional Ree--Hoover diagrams, thus showing that the conventional cancelation of the reducible diagrams does not hold for confined fluids, a fact usually overlooked in the literature.\cite{M14b,M15,M20} The high-pressure regime, near the close-packing region, was also studied in order to get the asymptotic behavior of the equation of state, which is seen to present a simple pole at $\lambda=\lambda_{\text{cp}}$ with a residue equal to $2$, in contrast to the residue equal to $1$ in the Tonks gas.\cite{T36} The study of the exact physical properties of the system allowed us to propose two different approximations for the equation of state, namely the BA and the AA. The first one has a much simpler form than the second one but its range of validity is restricted to narrow pores and/or low pressures, whereas the AA is valid throughout the entire range of pore sizes and pressures, yielding results which are virtually indistinguishable from the exact solution, except in a small region of large pore sizes and intermediate pressures. The usefulness and reliability of the approximations were tested for different quantities, such as the transverse density profile, the virial coefficients, and the equation of state. In the case of the latter quantity, we also considered situations beyond the nearest-neighbor constraint $\epsilon\leq \epsilon_{\mathrm{max}}$ and even beyond the single-file condition $\epsilon\leq 1$. Tests regarding execution times of the exact solution, on the one hand, and the two approximations, on the other hand, were done in order to assess the practical convenience of using the approximate methods instead of the exact solution. Execution times for the approximate compressibility factors were found to be $10$--$10^3$ times and $10^2$--$10^{5}$ times faster in the cases of the AA and BA, respectively.
We plan to exploit the one-dimensional mapping to obtain the structural correlation functions of the confined hard-disk fluid. Also, the extensions of the BA and AA for the hard-sphere fluid confined in a narrow cylindrical pore will be undertaken in the near future. \acknowledgments The authors acknowledge financial support from Grant No.~PID2020-112936GB-I00 funded by MCIN/AEI/10.13039/501100011033, and from Grants No.~IB20079 and No.~GR21014 funded by Junta de Extremadura (Spain) and by ERDF ``A way of making Europe.'' A.M.M. is grateful to the Spanish Ministerio de Ciencia e Innovaci\'on for a predoctoral fellowship PRE2021-097702.
\section{Introduction} The last decade has seen a period of intense scrutiny for the flavour structure of the Standard Model (SM). The plethora of experimental results has served as a catalyst to investigate the different sectors, both from the SM point of view as well as looking at possible effects of New Physics (NP) on them. Some of the different measurements which drive the interest in the theoretical community include the anomalous magnetic moment of the muon\cite{Abi:2021gix,Bennett:2006fi}, decays corresponding to $b\to s$ transitions ($B$~physics) \cite{LHCb:2014vgu,LHCb:2017avl,Belle:2019oag,LHCb:2019hip,LHCb:2021trn}, $b\to c$ transitions \cite{BaBar:2013mob,Belle:2015qfa,LHCb:2015gmp,LHCb:2017smo,Belle:2019gij,LHCb:2022piu}, nuclear beta decays and implications for $V_{ud}$ \cite{Hardy:2020qwl}, as well as decays corresponding to $s \to d$ transitions (kaon physics) \cite{NA62:2021zjw,Bician:2020ukv,Ahn:2018mvc}. Many of these experiments have hinted at the possibility of non-standard physics in their respective data analysed thus far. Among the most interesting hints of New Physics is the observation of lepton flavour universality violation (LFUV) in rare $B$~decays \cite{LHCb:2014vgu,LHCb:2017avl,Belle:2019oag,LHCb:2019hip,LHCb:2021trn}. Global fits using the effective field theory approach proved useful in constructing the appropriate beyond the SM scenarios to explain the LFUV~\cite{Hurth:2021nsi,Alguero:2021anc,Altmannshofer:2021qrr,Ciuchini:2020gvn,Geng:2021nhg,Datta:2019zca,Alok:2019ufo,Kowalska:2019ley,DAmico:2017mtc}. It is natural to expect that these NP effects would also impact operators contributing to kaon decays. This provides a strong motivation to consider an effective theory fit dedicated to kaons and to possibly achieve a similar sensitivity to the corresponding analyses in $B$~decays. There are several observables in $K$~systems that have the capability to make an individual impact on the eventual global fits. 
Recently, there has been a significant (and ongoing) experimental effort focused on the measurements of the branching fractions of $K^+\to \pi^+ \nu \bar{\nu}$ (NA62 at CERN~\cite{NA62:2021zjw}) and $K_L \to \pi^0 \nu \bar{\nu}$ (KOTO at J-PARC~\cite{Ahn:2018mvc}). Both of these decays have negligible uncertainty from long-distance contributions, making them highly sensitive to non-standard physics. However, any analysis involving only these decays proves inadequate to make concrete claims about LFUV effects in kaons, thereby prompting the addition of new observables. In the context of direct sensitivity to LFUV, $K^+\to\pi^+\ell\bar{\ell}$ offers an exciting prospect. It has been shown that the difference of the leading order polynomial expansion coefficients of the vector form factor in these decays is sensitive to short-distance flavour violating effects \cite{Crivellin:2016vjc}. The parameter space that is permitted by these three observables can be further limited by the consideration of decays like $K_{L,S}\to \ell \bar{\ell}$ and $K_L\to \pi^0\ell\bar{\ell}$. With the exception of $K_L\to\mu\bar{\mu}$, only experimental upper bounds exist for the others. While these bounds are weak at present, they are still useful in drawing attention to a specific part of the parameter space of the NP Wilson coefficients. Each of them ($K_{L,S}\to \ell \bar{\ell}$, $K_L\to \pi^0\ell\bar{\ell}$) is characterised by dominant long-distance effects, making it relatively more challenging to extract non-standard physics. This translates into a limitation on the existing accuracy in their SM computation. However, there exists a well-defined experimental program for each of these decays. This may either imply an improvement in the existing sensitivities or a measurement at the SM level, as outlined in the third column of Table~\ref{tab:data}.
In light of several ongoing and planned measurements for decays in the kaon sector, this paper intends to demonstrate the complementary capability for LFUV measurements in kaon systems. We analyse each of these decays and make a careful evaluation of the theoretical uncertainties. Using updated CKM and other input parameters, the uncertainties are computed using a Monte Carlo approach. An interesting difference from past literature is observed for the $K_L\to \mu\bar{\mu}$ decay, which is described by asymmetric uncertainties. Integrating these decays into {\tt{SuperIso}}~\cite{Mahmoudi:2007vz,Mahmoudi:2008tp,Mahmoudi:2009zz,Neshatpour:2021nbn}, the relevant parameter space of the New Physics Wilson coefficients is identified. Guided by a well-defined strategy for the measurement of many of these decays, the experimental uncertainties are used appropriately. Furthermore, for decays for which no such well-defined strategy exists, we also present projections on the progress on the experimental side. Demonstrating a rich yield of interesting physics could motivate modified strategies for such decays in the future. This is particularly true for the measurement of vector form factors in $K^+\to\pi^+\ell \bar{\ell}$. While measurements of these form factors exist for both the electron~\cite{E865:1999ker,NA482:2009pfe,NA482:2010zrc} and the muon~\cite{Bician:2020ukv}, a strong case for higher precision measurements of these quantities is presented in this work. Similarly, our results highlight the need for a reduction in the error on the theoretical computation of $K_L\to\mu \bar{\mu}$. The paper is organised as follows: In Section~\ref{sec:2} we analyse the decay modes of interest in considerable detail.
The considered processes are $K^+(K_L)\to \pi^+(\pi^0)\nu\bar\nu$ in Section~\ref{sec:ktopinunu}, lepton flavour universality violation (LFUV) in $K^+\to\pi^+\ell\ell$ decays in Section~\ref{sec:lfuv}, $K_{S,L}\to \mu \bar{\mu}$ in Section~\ref{sec:Ksmumu} and $K_L\to\pi^0\ell \bar{\ell}$ in Section~\ref{sec:KLtopill}. The analyses in each of these subsections (along with the appendices) are self-contained and offer an up-to-date evaluation of the SM values as well as the corresponding uncertainties. In Section~\ref{sec:global} we present a global picture involving all the decays, which illustrates the existing bounds from the different observables. Section~\ref{sec:global1} is devoted to the description of the methodology of our fit. In Section~\ref{sec:global2} we perform a global fit to current experimental data. Section~\ref{sec:global3} offers possible improvements in the fits at the end of the run for most of the experiments. This includes using the official projections for some observables as well as choosing optimistic reaches for the others. Finally, we conclude in Section~\ref{sec:conc}. \section{Theoretical framework} \label{sec:2} In this section, we set up the convention to be followed for the rest of the paper. The $s \to d$ transitions can be parameterised by the following effective Hamiltonian: \begin{equation}\label{eq:Heff} \mathcal{H}_{\rm eff}=-\frac{4G_F}{\sqrt{2}}\lambda_t^{sd}\frac{\alpha_e}{4\pi}\sum_k C_k^{\ell}O_k^{\ell}\,, \end{equation} where $\lambda_t^{sd}\equiv V^*_{ts}V_{td}$ and the relevant effective operators are \begin{align}\nonumber &{O}_9^{\ell} = (\bar{s} \gamma_\mu P_L d)\,(\bar{\ell}\gamma^\mu \ell)\,, &&{O}_{10}^{\ell} = (\bar{s} \gamma_\mu P_L d)\,(\bar{\ell}\gamma^\mu\gamma_5 \ell)\,, \end{align} \begin{equation} {O}_L^{\ell} = (\bar{s} \gamma_\mu P_L d)\,(\bar{\nu}_\ell\,\gamma^\mu(1-\gamma_5)\, \nu_\ell)\,, \label{eq:operators} \end{equation} with $P_L=(1-\gamma_5)/2$. 
The most general Hamiltonian also includes scalar and pseudoscalar operators, as well as the chirality-flipped counterpart of the above operators where the quark currents are right-handed. In this instance, we focus on this small subset of operators which have the same structure as the most relevant operators for explaining the neutral current $B$-anomalies~\cite{LHCb:2014vgu,LHCb:2017avl,Belle:2019oag,LHCb:2019hip,LHCb:2021trn}. The Wilson coefficients $C_k^{\ell}$ include any potential (flavour violating) New~Physics contribution parameterised~as\footnote{Within the considered basis, a real $\delta C_i$ results in both real and imaginary short-distance contributions in the effective Hamiltonian.} \begin{equation} C_k^{\ell} = C_{k,{\rm SM}}^{\ell}+ \delta C_{k}^{\ell}\,. \end{equation} In recent years, there has been much progress in the measurements of rare kaon decays. However, several of the rare kaon decays have still not been observed, and only upper bounds are available for them. In general, different New Physics contributions with various combinations of the operator structures of Eq.~\eqref{eq:operators} can contribute to kaon decays. Nonetheless, given the rather limited experimental data currently available for rare kaon decays and the fact that New Physics is more conveniently explored in the chiral basis, we limit our analysis to the class of NP scenarios where the charged and neutral leptons are related to each other by the SU(2)$_{\rm L}$ gauge symmetry. As we consider only left-handed quark currents, the Wilson coefficients are related to each other as $\delta C_{L}^{\ell} \equiv \delta C_9^{\ell} = - \delta C_{10}^{\ell}$. With this background, we set up the theoretical description of the different decay modes in the following subsections.
\subsection{\texorpdfstring{$K^+\to \pi^+\nu \bar{\nu}$}{K+ -> pi+ nu nu} and \texorpdfstring{$K_L\to \pi^0\nu \bar{\nu}$}{KL -> pi0 nu nu}}\label{subsec:Kpinunu} \label{sec:ktopinunu} These rare decay modes receive dominant short-distance (SD) contributions. Their high sensitivity to NP effects, combined with very small theoretical uncertainties, justifies their status among the most eagerly awaited measurements from the corresponding experiments \cite{NA62:2021zjw,Ahn:2018mvc}. In the notation discussed above, the branching fractions for these modes are given as~\cite{Bobeth:2017ecx} \begin{align} \label{eq:Br-KLpinunu} {\rm BR}(K_L \to \pi^0 \nu \bar{\nu}) & = \frac{\kappa_L }{\lambda^{10}}\frac{1}{3}s_W^4 \sum_{\ell} {\rm Im}^2 \left[\lambda_t C_L^{\ell} \right]\,, \\ \label{eq:Br-Kppipnunu} {\rm BR}(K^+ \to \pi^+ \nu \bar{\nu}) & = \frac{\kappa_+ (1 + \Delta_{\rm EM})}{\lambda^{10}}\frac{1}{3} s_W^4 \sum_{\ell} \left[ {\rm Im}^2 \Big(\lambda_t C_L^{\ell} \Big) + {\rm Re}^2 \Big(-\frac{\lambda_c X_{c}}{s_W^2} + \lambda^{sd}_t C_L^{\ell} \Big)\right]\,, \end{align} where the sum is over the three neutrino flavours. The short-distance SM contribution is given by $X_c$ and $C_{L,{\rm SM}}^{\ell} = C_{L,{\rm SM}} ={-X(x_t)}/{ s_W^2}$ (see Appendix~\ref{app:Xxt}) with the relevant input parameters collected in Appendix~\ref{app:inputs}. The SM values of the branching fractions corresponding to these inputs are given in Table~\ref{tab:data}, where the theory uncertainties are estimated using a Monte Carlo method, assuming Gaussian errors for the input parameters. \begin{figure}[t!] \begin{center} \includegraphics[width=0.48\textwidth]{images/Kppinunu_CLe_CLmuEqualCLtau_exp.png}\quad \includegraphics[width=0.49\textwidth]{images/KLpinunu_CLe_CLmuEqualCLtau_exp.png} \caption{\small BR($K^+\to \pi^+ \bar{\nu}\nu$) (left) and BR($K_L\to \pi^0 \bar{\nu}\nu$) (right) as a function of $\delta C_L^e$ and $\delta C_L^\mu=\delta C_L^\tau$.
The dotted grey line represents the lepton flavour universality scenario. In the left plot, the brown solid (dotted) line corresponds to the measured central value ($1\sigma$ experimental uncertainty) by NA62~\cite{NA62:2021zjw}. In the right plot, the upper bound on BR($K_L\to \pi^0 \bar{\nu}\nu$) is not visible for the scanned values. \label{fig:Kppinunu_CLeCLmu}} \end{center} \end{figure} An interesting feature of these decay modes is that an experimental result consistent with the SM prediction does not necessarily imply the absence of NP. This is due to the fact that the summation over the three species of neutrinos can result in a relative cancellation between the corresponding NP Wilson coefficients. This is illustrated in Fig.~\ref{fig:Kppinunu_CLeCLmu} for $K^+ \to \pi^+ \nu \bar{\nu}$ (left) and $K_L \to \pi^0 \nu \bar{\nu}$ (right). For simplicity, we have set $\delta C^\tau_L=\delta C^\mu_L$. This facilitates a visual comparison of the departures from lepton flavour universality, given by the dotted grey line\footnote{An alternative situation with $\delta C^\tau_L=\delta C^e_L$ is illustrated in Appendix~\ref{app:otherpossibility}.}. The figure shows concentric circles, centered at $(\delta C_L^\mu,\delta C_L^e)=(8.5,9.0)$ on the left, and at $(\delta C_L^\mu,\delta C_L^e)=(6.5, 6.5)$ on the right. The steady darkening of the annuli on moving away from the centre represents an increase in the value of the corresponding branching fraction. At the centre, the contributions due to the SM and the NP Wilson coefficients cancel exactly, resulting in a null value for the branching fractions. Moving along the circumference of any circle corresponds to the same value of the branching fraction. In the left plot, the brown solid line represents the current measured value for $K^+ \to \pi^+ \nu \bar{\nu}$ with the corresponding $\pm 1\sigma$ uncertainties given by the dotted lines.
In the right plot, the upper bound for $K_L \to \pi^0 \nu \bar{\nu}$ \cite{Ahn:2018mvc} is not visible for the regions scanned in the $(\delta C_L^\mu,\delta C_L^e)$ plane. In this study, we re-estimate the SM predictions for these branching fractions using the updated inputs, as given in Table~\ref{tab:inputs}. They are represented by a cross in Fig.~\ref{fig:Kppinunu_CLeCLmu}. The corresponding theory uncertainty is not visible on the scale of the figure and the evaluated numbers are quoted below: \begin{align} \text{BR}(K^+\to \pi^+\nu \bar{\nu})^{\rm SM} &= (7.86 \pm 0.61)\times 10^{-11}\,,\nonumber\\ \text{BR}(K_L\to \pi^0\nu \bar{\nu})^{\rm SM} &= (2.68 \pm 0.30) \times 10^{-11}\,. \end{align} We are in agreement with the corresponding evaluation in \cite{Brod:2021hsj}. Fig.~\ref{fig:Kppinunu_CLeCLmu} also illustrates that an observation of either of these decays in agreement with the SM prediction cannot, by itself, conclusively establish lepton flavour universality. This can be seen by comparing the current measurement for $K^+ \to \pi^+ \nu \bar{\nu}$ with the corresponding SM value as given in Table~\ref{tab:data}. The orange band represents the $1\sigma$ region consistent with the current measurement. Although the SM prediction, which implies flavour universality, is in agreement with the experimental measurement, combinations of possibly LFUV NP contributions to $\delta C_L^{e,\mu}$ in the $[-11,29]$ range also result in theoretical predictions within the $1\sigma$ range of the measured value. This prompts the inclusion of other decay modes for the kaons. \subsection{LFUV in \texorpdfstring{$K^+\to \pi^+\ell \bar{\ell}$}{K+ -> pi+ ll}} \label{sec:lfuv} In the search for observables that may provide conclusive evidence of NP, and in particular of lepton flavour universality violation, it is natural to draw motivation from $B$~physics.
The $R_{K}$ ratios for testing universality are constructed~\cite{Hiller:2003js} using the $B\to H \ell \bar{\ell}$ processes for $H=(K^{(*)},\phi,...)$. An analogous mode in kaons is $K^+\to\pi^+\ell \bar{\ell}$. Thus it is natural to explore these modes to construct similar observables in kaon systems. The branching fraction for the $K^+\to\pi^+\ell \bar{\ell}$ decay is dominated by the long-distance contribution $K^+\to\pi^+\gamma^*$, which can be approximated by the following amplitude: \begin{equation} A_V^{K^+\to\pi^+\gamma^*}=-\frac{G_F\alpha}{4\pi}V_+(z)\bar u_l(p_-)(\gamma_\mu k^\mu+\gamma_\mu p^\mu)v_l(p_+)\,, \end{equation} where $V_+$ is the vector form factor approximated as \begin{equation} V_+(z)=a_++b_+z+V_+^{\pi\pi}(z)\,, \end{equation} with $z=\frac{(p_{\ell}+p_{\bar{\ell}})^2}{M_K^2}$ and $V_+^{\pi\pi}(z)$ describing the contribution from the two-pion intermediate state~\cite{DAmbrosio:1998gur} with input from the external parameter fit to $K\to\pi\pi\pi$ data~\cite{Kambor:1991ah,Bijnens:2002vr}, while the parameters $a_+$ and $b_+$ are determined via a fit to experimental data on $K^+\to\pi^+\ell \bar{\ell}$. This can then be used for the SM computations of the corresponding branching fractions~\cite{DAmbrosio:1998gur,Cirigliano:2011ny}. The assumption of a SM-like pattern while estimating the coefficients $a_+$ and $b_+$ is reasonable on account of the dominant long-distance effects. Thus, any information regarding New Physics contributions due to short-distance physics is hidden and not immediately apparent from the individual values of the branching fraction for each channel. Nonetheless, a key point here is that the long-distance effects are purely universal, i.e. the same for all lepton flavours. Thus any deviation from this paradigm is necessarily due to NP contributions.
A convenient representation is to take the difference of the coefficients as \cite{Crivellin:2016vjc} \begin{align}\label{eq:K_LFUV} a_+^{\mu\mu}-a_+^{ee} = - \sqrt{2}\,{\rm Re}\left[V_{td}V^*_{ts} (C^{\mu}_{9}-C^{e}_{9}) \right] \,, \end{align} where the long-distance part cancels out and one is only sensitive to short-distance effects, if any. This is also a measure of non-universality between the leptons. \begin{table}[t] \renewcommand{\arraystretch}{1.3} \centering \scalebox{0.75}{ \begin{tabular}{cccr} \multicolumn{4}{c}{\emph{Historical progression}}\\\hline\hline Channel & $a_+$ & $b_+$ & Reference \\ \hline $ee$ & $-0.587\pm 0.010$ & $-0.655\pm 0.044$ & E865~\cite{E865:1999ker}\\ $ee$ & $-0.578\pm 0.016$ & $-0.779\pm 0.066$ & NA48/2~\cite{NA482:2009pfe}\\ $\mu\mu$ & $-0.575\pm 0.039$ & $-0.813\pm 0.145$ & NA48/2~\cite{NA482:2010zrc}\\ \end{tabular} \quad \begin{tabular}{cccr} \multicolumn{4}{c}{\emph{Current situation}}\\\hline\hline Channel & $a_+$ & $b_+$ & Reference \\ \hline \multirow{2}{*}{$ee$} & \multirow{2}{*}{$-0.561\pm 0.009$} & \multirow{2}{*}{$-0.694\pm 0.040$} & \multirow{2}{*}{comb.~\cite{DAmbrosio:2018ytt}}\\ & & & \\ $\mu\mu$ & $-0.592\pm 0.015$ & $-0.699\pm 0.058$ & NA62~\cite{Bician:2020ukv}\\ \end{tabular} } \caption{\small Summary of the estimation of vector form factors for $K^+\to \pi^+ \ell \bar\ell$. The left panel gives the historical progression and the right panel gives the current status.} \label{tab:a+b+} \end{table} In the past, the extraction of $a_+$ for the electron and the muon has been performed from experimental data in Refs.~\cite{E865:1999ker,NA482:2009pfe,NA482:2010zrc}, as shown in the left panel of Table~\ref{tab:a+b+}. The central values and the corresponding uncertainties led to the conclusion that the measurements are consistent with lepton flavour universality.
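As a simple numerical cross-check, the LFUV-sensitive difference and its uncertainty can be evaluated directly from the current determinations in the right panel of Table~\ref{tab:a+b+}; the short sketch below (with the two errors combined in quadrature, assuming they are uncorrelated) reproduces the value $-0.031\pm 0.017$ quoted later in Table~\ref{tab:data}:

```python
import math

# Current determinations of the vector form factor parameter a_+
# (right panel of Table 2): central value, uncertainty
a_ee, da_ee = -0.561, 0.009   # combined e+e- fit
a_mm, da_mm = -0.592, 0.015   # NA62 mu+mu- fit

# LFUV-sensitive difference; the universal long-distance part cancels
diff = a_mm - a_ee
err = math.hypot(da_mm, da_ee)  # uncorrelated errors added in quadrature

print(f"a_+^mumu - a_+^ee = {diff:.3f} +/- {err:.3f}")
# -> a_+^mumu - a_+^ee = -0.031 +/- 0.017
```

The correlations between the individual fit parameters are neglected in this sketch; the full analysis propagates them through the fit.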
The most recent determination of the vector form factor for muons is from the NA62 experiment~\cite{Bician:2020ukv}, as given in the right panel of Table~\ref{tab:a+b+}. Comparing with the number due to NA48/2~\cite{NA482:2010zrc}, we find that while the central value remains largely unchanged, the uncertainties have been reduced by more than a factor of 2. With the ongoing program, further improvements are expected in the future. For the electron sector, there are two measurements, by the E865~\cite{E865:1999ker} and NA48/2~\cite{NA482:2009pfe} experiments. The parameters of the form factor~$V_+(z)$ are individually fitted to the two available data sets. The data sets are in agreement for most values of $z$ except for those around $z=0.3$ \cite{DAmbrosio:2018ytt}. However, a rescaling of the errors in that region by a factor of about 2.5 leads to an agreement between the two. Thus the combination, using the rescaling at $z=0.3$, leads to the numbers in the right panel of Table~\ref{tab:a+b+}. Similar to Fig.~\ref{fig:Kppinunu_CLeCLmu}, we represent the results in the $(\delta C_L^e,\delta C_L^\mu)$ plane in Fig.~\ref{fig:K_LFUV}. Using the updated values in Table~\ref{tab:a+b+} and Eq.~\ref{eq:K_LFUV}, we obtain the region consistent with the measurements. The SM point $(0,0)$ is about $1.5\sigma$ away from the region consistent with the measured values. As illustrated by the green band (within $1\sigma$ for one degree of freedom), the non-universality can be explained by a broad range of values. However, a key point to note is that a vanishing electron contribution would require an unreasonably large contribution from the muon Wilson coefficient. \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{images/Kaon_LFUV.png} \caption{\small Region consistent with the estimation of the LFUV variable in $K^+ \to \pi^+ \ell \bar{\ell}$ decays.
\label{fig:K_LFUV}} \end{center} \end{figure} \subsection{BR(\texorpdfstring{$K_{S,L}\to \mu \bar{\mu}$}{KS,KL -> mu mu}), their interference and theoretical errors} \label{sec:Ksmumu} The branching ratios of the $K_{S,L} \to \mu \bar{\mu}$ decays are interesting in several respects. The precise determination of $K_L\to\mu \bar{\mu}$ \cite{PDG2020}, in addition to the ongoing efforts in $K_S\to\mu \bar{\mu}$ by LHCb \cite{LHCb:2020ycd}, prompts the inclusion of these decay modes in the observables of interest. The analytic form of the branching fractions, in the absence of right-handed and (pseudo)scalar operators, and adapted to our notation, is given by~\cite{Isidori:2003ts,Chobanova:2017rkj} \begin{align} \label{eq:brKSmumu} {\rm BR}(K_{S} \to \mu \bar{\mu} ) &= \tau_{S} \frac{ f_K^2 m_K^3 \beta_{\mu}} { 16 \pi} \left\{ \beta_\mu^2 \left|N_{S}^{\rm LD}\right|^2 + \left( \frac{2m_\mu}{m_K}\frac{G_F \alpha_e }{\sqrt{2}\pi} \right)^2 {\rm Im}^2 \left[ -\lambda_c \frac{Y_c}{s_W^2} +\lambda_t C_{10}^{\ell}\right] \right\}, \end{align} and for the branching ratio of the $K_{L} \to \mu \bar{\mu}$ decay we have \begin{align} \label{eq:brKLmumu} {\rm BR}(K_{L} \to \mu \bar{\mu} ) &= \tau_{L} \frac{ f_K^2 m_K^3 \beta_{\mu}} { 16 \pi} \left| N_{L}^{\rm LD} - \left( \frac{2m_\mu}{m_K}\frac{G_F \alpha_e }{\sqrt{2}\pi} \right) {\rm Re}\!
\left[ -\lambda_c \frac{Y_c}{s_W^2} +\lambda_t C_{10}^{\ell} \right] \right|^2, \end{align} where the short-distance SM contribution is given by $Y_c$ and $C_{10,{\rm SM}}^{\ell}=C_{10,{\rm SM}} ={-Y(x_t)}/{ s_W^2}$ (see Appendix~\ref{app:Yxt}) and the long-distance contributions as extracted in~\cite{Chobanova:2017rkj} from~\cite{Ecker:1991ru, Isidori:2003ts, DAmbrosio:2017klp, Mescia:2006jd}: \begin{align} N_{S}^{\rm LD} & = (-2.65 + 1.14 i )\times 10^{-11} \,({\rm GeV})^{-2}\,,\\ N_{L}^{\rm LD} & = \pm \left[0.54(77) - 3.95 i\right]\times 10^{-11} \,({\rm GeV})^{-2}\,, \label{eq:LDKmumu} \end{align} with $N_{L}^{\rm LD}$ having an unknown sign (see Appendix~\ref{app:NLLD} for further details). Our SM evaluations for $K_{S,L}\to\mu \bar{\mu}$ using the updated inputs, with the corresponding uncertainties, are given below: \begin{align} \text{BR}(K_S\to \mu \bar{\mu})^{\rm SM} &= (5.15\pm1.50)\times 10^{-12}\,, \\ \text{BR}(K_L \to \mu \bar{\mu})^{\rm SM} &= \begin{cases} {\rm LD}(+)\!:\; \left(6.82^{+0.77}_{-0.24}\pm0.04\right)\times 10^{-9}\,,\\[4pt] {\rm LD}(-)\!:\; \left(8.04^{+1.46}_{-0.97}\pm0.09\right)\times 10^{-9}\,. \end{cases} \end{align} The estimation for BR($K_S\to \mu \bar{\mu}$) is in perfect agreement with past literature \cite{Ecker:1991ru,Isidori:2003ts,Gorbahn:2006bm,DAmbrosio:2017klp}. The corresponding evaluation of $K_L\to\mu \bar{\mu}$ leads to `two' SM predictions, each corresponding to a given sign of $N_{L}^{\rm LD}$. The point of interest is the evaluation of the corresponding asymmetric error, emerging mainly from the uncertainty in the long-distance contribution. The existing computations quote symmetric errors, which lead to a 1$\sigma$ agreement for both signs of $N_{L}^{\rm LD}$ with the corresponding experimental measurement. In this work, we investigate the asymmetric uncertainty of the branching fraction. \begin{figure}[bth!]
\begin{center} \includegraphics[width=0.55\textwidth]{images/BRKLmumuAsymmErrPlot.pdf} \caption{\small The SM theoretical error of BR($K_L \to \mu \bar{\mu}$) due to the uncertainty in the long-distance contribution. \label{fig:KLmumu_err}} \end{center} \end{figure} The dotted lines in Fig.~\ref{fig:KLmumu_err} show the variation of BR($K_L\to\mu \bar{\mu}$) as $N_{L}^{\rm LD}$ is varied over the 1$\sigma$ interval: blue (orange) corresponds to the +($-$) sign of $N_{L}^{\rm LD}$. The grey shaded region is the experimental measurement within the allowed $1\sigma$ error bars. The central values are represented by orange and blue horizontal lines in the coloured regions. Considering all inputs and assuming a Gaussian distribution of the errors of the inputs, we estimate the errors for each sign of $N_{L}^{\rm LD}$ with a Monte Carlo analysis (see Appendix~\ref{app:error} for details). Two points stand out at this juncture:~A)~the asymmetric pattern of the errors about the central values and~B)~the minor disagreement (slightly above $1\sigma$) of the negative sign of $N_{L}^{\rm LD}$ with the experimental measurement. The large uncertainty in the long-distance contribution results in quite asymmetric uncertainties in the branching ratio of $K_L \to \mu \bar{\mu}$ (see Fig.~\ref{fig:KLmumu_err}). The asymmetry is also reflected in the computation of $K_L\to\mu \bar{\mu}$ with the inclusion of NP. The left plot of Fig.~\ref{fig:KLmumu_CL} gives the computation of BR$(K_L\to\mu \bar{\mu})$ as a function of $\delta C_L^\mu (\equiv -\delta C_{10}^\mu)$ for both signs of the long-distance contributions. The widths of the coloured bands represent the $1\sigma$ theoretical uncertainties. The band has a non-uniform width, which appears pinched at $\delta C_L\simeq -5$, corresponding to the negligible lower uncertainty at that point.
As noted before and in Table~\ref{tab:data}, the experimental measurement of BR($K_{L} \to \mu \bar{\mu}$) is precise, with less than 2\% uncertainty, and is shown by the grey band in the figure. Thus, irrespective of the large theory uncertainty and the unknown sign of the long-distance contributions from $A_{L\gamma\gamma}^\mu$, the NP contribution to $C_L^\mu$ is limited to the $[-13.4,3.4]$ range at 1$\sigma$. \begin{figure}[htb!] \begin{center} \includegraphics[width=0.48\textwidth]{images/KLmumuAsymmNPErr.png}\quad \includegraphics[width=0.48\textwidth]{images/KSmumu_CLmu_current_exp_log.png} \caption{\small BR($K_L\to \mu\bar{\mu}$) as a function of $\delta C_L^\mu(\equiv \delta C_9^\mu=-\delta C_{10}^\mu)$ assuming both possible signs for the long-distance contribution from $A_{L\gamma\gamma}^\mu$ on the left panel. BR($K_S\to \mu \bar{\mu}$) as a function of NP contributions in $\delta C_L^\mu$ on the right panel. In the left (right) panel, the grey band indicates the experimental measurement (upper limit) while the coloured bands correspond to the theoretical uncertainties. The LHCb bound and prospect for BR($K_S\to \mu \bar{\mu}$) are from Ref.~\cite{LHCb:2020ycd} and Ref.~\cite{LHCb:2018roe}, respectively. \label{fig:KLmumu_CL}} \end{center} \end{figure} The measurement of BR($K_{S} \to \mu \bar{\mu}$) is still at a preliminary stage. This is illustrated by the right plot of Fig.~\ref{fig:KLmumu_CL}, where the grey band gives the current upper bound from LHCb \cite{LHCb:2020ycd} and the pink region gives the computation for a broad range of values of $\delta C_L$. The varying width corresponds to the varying uncertainty as a function of $\delta C_L$. Note that even with the projected reach of LHCb with an integrated luminosity of 300 fb$^{-1}$ of data, this decay mode on its own is not sensitive to the regions of $\delta C_L$ permitted by BR($K_{L} \to \mu \bar{\mu}$).
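To make the interplay between the long- and short-distance pieces of Eq.~\ref{eq:brKLmumu} explicit, the sketch below evaluates the SM branching ratio for the positive sign of $N_L^{\rm LD}$. All numerical inputs here (CKM combinations, $Y$ functions, decay constant, lifetime) are approximate central values quoted for illustration only; they are not the Table~\ref{tab:inputs} values used in our actual evaluation:

```python
import math

# Approximate central values, for illustration only (GeV units)
GF      = 1.16638e-5            # Fermi constant, GeV^-2
alpha   = 1 / 127.9             # alpha_e(M_Z), assumed value
sw2     = 0.231                 # sin^2(theta_W)
mK, mmu = 0.49761, 0.10566      # kaon and muon masses
fK      = 0.1557                # kaon decay constant
tauL    = 5.116e-8 / 6.582e-25  # K_L lifetime converted to GeV^-1
lam_c   = -0.219                # Re(V_cs* V_cd), approximate
lam_t   = -3.2e-4               # Re(V_ts* V_td), approximate
Yc, Yt  = 2.9e-4, 0.95          # charm and top Y functions (approximate)

C10  = -Yt / sw2                        # C_10,SM = -Y(x_t)/s_W^2
beta = math.sqrt(1 - 4 * mmu**2 / mK**2)  # phase-space factor beta_mu
ReNLD, ImNLD = 0.54e-11, -3.95e-11      # N_L^LD, positive-sign solution

# short-distance piece entering Eq. (brKLmumu)
k  = (2 * mmu / mK) * GF * alpha / (math.sqrt(2) * math.pi)
sd = -lam_c * Yc / sw2 + lam_t * C10    # Re[-lam_c Y_c/s_W^2 + lam_t C_10]

amp2 = (ReNLD - k * sd)**2 + ImNLD**2
BR = tauL * fK**2 * mK**3 * beta / (16 * math.pi) * amp2
print(f"BR(KL->mumu) ~ {BR:.2e}")       # close to the quoted 6.8e-9
```

With these indicative inputs the short-distance term partially cancels the real part of $N_L^{\rm LD}$, and the result lands near the ${\rm LD}(+)$ central value quoted above; the asymmetric error then follows from varying $N_L^{\rm LD}$ within its uncertainty.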
This prompts us to include the interference effects with $K_{L} \to \mu \bar{\mu}$ in $K_S\to\mu \bar{\mu}$, as proposed in~\cite{DAmbrosio:2017klp}. However, it should be noted that the future measurement of BR($K_S \to \mu \bar\mu$) at LHCb will be a powerful probe of New Physics scenarios involving scalar and pseudoscalar contributions~\cite{Chobanova:2017rkj}. Fig.~\ref{fig:KSmumuEff} gives the impact of the interference of $K_L\to \mu\bar{\mu}$ on the effective branching fraction of $K_S\to\mu\bar{\mu}$. The results are presented for the two extreme values of the dilution factor $D=\tfrac{K^0-\bar K^0}{K^0+\bar K^0}$, which is a measure of the initial asymmetry of $K^0$ and $\bar K^0$. The left (right) column corresponds to $A_{L\gamma\gamma}<0$ ($A_{L\gamma\gamma}>0$) and the shaded region in each plot represents the region ruled out by the measurement of BR($K_L\to\mu \bar\mu$). Comparing with Fig.~\ref{fig:KLmumu_CL}, we note that the inclusion of interference effects makes the $\delta C_L^\mu\sim\mathcal{O}(1)$ region accessible to the high luminosity phase of LHCb. \begin{figure}[thb!] \begin{center} \includegraphics[width=0.48\textwidth]{images/KsmumuEffZoomedLDnegDp1.png}\quad\includegraphics[width=0.48\textwidth]{images/KsmumuEffZoomedLDposDp1.png}\\ \includegraphics[width=0.48\textwidth]{images/KsmumuEffZoomedLDnegDm1.png}\quad \includegraphics[width=0.48\textwidth]{images/KsmumuEffZoomedLDposDm1.png} \caption{\small The impact of the interference of $K_L\to \mu\bar{\mu}$ on the effective branching fraction of $K_S\to\mu\bar{\mu}$. The left and the right panels correspond to negative and positive signs for $A_{L\gamma\gamma}$, respectively. The upper and lower panels correspond to dilution factor $D=+1$ and $D=-1$, respectively.
\label{fig:KSmumuEff}} \end{center} \end{figure} \subsection{\texorpdfstring{$K_L\to \pi^0\ell \bar{\ell}$}{KL -> pi0 ll}} \label{sec:KLtopill} These processes have long been considered a smoking gun for the detection of direct CP violation. The description is composed of three contributions: a CP-conserving long-distance contribution, which proceeds through the two-photon process $K_L\to \pi^0\gamma^*\gamma^*$; an indirect CP-violating contribution, proportional to the CP-conserving branching fraction BR($K_S\to\pi^0 \ell \bar{\ell}$) and to $\epsilon$, which parameterises the $K^0-\bar K^0$ mixing; and finally the direct CP-violating contribution. Considering the different contributions, the branching fraction of $K_L \to \pi^0 \ell \bar{\ell}$ can be expressed as~\cite{Buchalla:2003sj,Isidori:2004rb,Mescia:2006jd} \begin{align} {\rm BR}(K_L \to \pi^0 \ell \bar{\ell}) = \left( C_{\rm dir}^\ell \pm C_{\rm int}^\ell|a_S| + C_{\rm mix}^\ell|a_S|^2 + C_{\gamma \gamma}^\ell \right)\cdot 10^{-12}\,, \end{align} with $|a_S| = 1.20 \pm 0.20$, extracted from experimental results on the branching fractions of $K_S \to \pi^0 e \bar{e}$ and $K_S \to \pi^0 \mu \bar{\mu}$. The numerical values of the different components are given in~\cite{Mescia:2006jd} as collected below: \begin{table}[H] \renewcommand{\arraystretch}{1.3} \centering \scalebox{0.99}{ \begin{tabular}{c|c|c|c|c} & $C_{\rm dir}^\ell$ & $C_{\rm int}^\ell$ & $C_{\rm mix}^\ell$ & $C_{\gamma \gamma}^\ell$ \\ \hline $\ell=e$ & $(4.62\pm 0.24) (w_{7V}^2 + w_{7A}^2)$ & $(11.3\pm 0.3) w_{7V}$ & $14.5\pm 0.5$ & $ \approx 0$ \\ $\ell=\mu$ & $(1.09\pm 0.05) (w_{7V}^2 + 2.32w_{7A}^2)$ & $(2.63\pm 0.06) w_{7V}$ & $3.36 \pm 0.20$ & $5.2 \pm 1.6$ \\ \end{tabular} } \end{table} \noindent where $C_{\rm dir}$ corresponds to the direct CP-violating term determined by short-distance contributions proportional to Im($\lambda_t^{sd}$) in the SM (and minimal flavour violating scenarios).
The $C_{\rm mix}^\ell$ term indicates the indirect CP-violating contribution and $C_{\rm int}^\ell$ corresponds to the interference between the direct and indirect CP-violating contributions. The sign of this interference contribution is unclear, although constructive interference is preferred~\cite{Buchalla:2003sj}\footnote{In this paper we consider constructive interference when investigating NP.}. Finally, the $C_{\gamma \gamma}^\ell$ ($\equiv C_{\rm CPC}$) term corresponds to the CP-conserving contribution from the two-photon intermediate states, which can be deduced from the measurement of the $K_L\to\pi^0\gamma\gamma$ spectrum~\cite{NA48:2002xke}. The fact that this contribution is negligible for the electron mode strengthens the idea that the electron mode, in particular, could be an incontrovertible signal for the presence of direct CP violation. As indicated in \cite{Buchalla:2003sj}, 40$\%$ of the contribution to the branching fraction is due to the clean short-distance physics, primarily driven by the interference with the indirect CP-violating part. The $C^\ell_{\rm dir}$ and $C^\ell_{\rm int}$ contributions are parameterised by the factors $w_{7V,7A}$, which encode the short-distance SM and NP effects. They are defined as (see e.g.~\cite{Bobeth:2016llm}) \begin{align} w_{7V} = \frac{1}{2\pi}{\rm Im}\left[ \frac{\lambda_t^{sd}}{1.407\times 10^{-4}} C_9 \right]\,,\quad w_{7A} = \frac{1}{2\pi}{\rm Im}\left[ \frac{\lambda_t^{sd}}{1.407\times 10^{-4}} C_{10} \right]\,, \end{align} where $1.407\times 10^{-4}$ corresponds to the input used by~\cite{Mescia:2006jd} for $\lambda_t^{sd}$.
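The decomposition above can be evaluated numerically. The sketch below takes indicative SM values $w_{7V}\simeq 0.73$ and $w_{7A}\simeq -0.68$ (our assumption for illustration only; the actual evaluation uses the inputs of Table~\ref{tab:inputs}) together with $|a_S|=1.20$ and the central values of the coefficients in the table above, assuming constructive interference:

```python
# Indicative SM values for the short-distance factors (assumed here
# for illustration; the paper's evaluation uses the Table 1 inputs)
w7V, w7A = 0.73, -0.68
aS = 1.20  # |a_S| extracted from K_S -> pi0 l l data

def br_KL_pi0_ll(cV, cA, cint, cmix, cgg, sign=+1):
    """BR(KL -> pi0 l l) in units of 1e-12, per the decomposition above.
    cV, cA multiply w7V^2 and w7A^2 in C_dir; sign selects the interference."""
    Cdir = cV * w7V**2 + cA * w7A**2
    Cint = cint * w7V
    return Cdir + sign * Cint * aS + cmix * aS**2 + cgg

# electron mode: C_dir = 4.62 (w7V^2 + w7A^2), C_int = 11.3 w7V, C_mix = 14.5
br_ee = br_KL_pi0_ll(4.62, 4.62, 11.3, 14.5, 0.0) * 1e-12
# muon mode: C_dir = 1.09 (w7V^2 + 2.32 w7A^2), C_int = 2.63 w7V, C_mix = 3.36
br_mm = br_KL_pi0_ll(1.09, 1.09 * 2.32, 2.63, 3.36, 5.2) * 1e-12

print(f"BR(KL->pi0ee)   ~ {br_ee:.2e}")  # ~3.5e-11, cf. the quoted 3.46e-11
print(f"BR(KL->pi0mumu) ~ {br_mm:.2e}")  # ~1.4e-11, cf. the quoted 1.38e-11
```

The small differences with respect to the quoted central values come from the assumed $w_{7V,7A}$; the sketch nonetheless shows that the interference and mixing terms dominate the electron mode, while the two-photon term is sizeable for the muon mode.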
Using the updated inputs in Table~\ref{tab:inputs} we find for constructive (destructive) interference \begin{align} {\rm BR}^{\rm SM}(K_L \to \pi^0 e \bar{e}) &= 3.46^{+0.92}_{-0.80} \left( 1.55^{+0.60}_{-0.48} \right) \times 10^{-11}\,,\\ {\rm BR}^{\rm SM}(K_L \to \pi^0 \mu \bar{\mu}) &= 1.38^{+0.27}_{-0.25} \left( 0.94^{+0.21}_{-0.20} \right) \times 10^{-11}\,, \end{align} while the current experimental bounds from KTeV~\cite{KTeV:2003sls,KTEV:2000ngj} at 90\% confidence level (CL) are one order of magnitude larger \begin{align} {\rm BR}^{\rm exp}(K_L \to \pi^0 e \bar{e}) &< 28 \times 10^{-11}\qquad \text{at 90\% CL}\,,\\ {\rm BR}^{\rm exp}(K_L \to \pi^0 \mu \bar{\mu}) &< 38 \times 10^{-11}\qquad \text{at 90\% CL}\,. \end{align} It is expected that these decay modes will be observed in the hybrid phase of the future CERN kaon program~\cite{Goudzovski:2022vbt}. Currently, the dominant theoretical uncertainty is due to $|a_S|$~\cite{Buchalla:2008jp}, followed by the uncertainty on the two-photon intermediate state contribution in the muon mode, which for destructive interference is as large as the uncertainty due to $|a_S|$. The prospect for the $|a_S|$ form factor is a 10\% statistical precision with LHCb Upgrade II~\cite{LHCb:2018roe,Alves:2018npj}. \begin{figure}[htbp] \begin{center} \includegraphics[width=0.6\textwidth]{images/KLpi0ee_KLpi0mumu_Err.png} \caption{\small BR($K_L \to \pi^0 \ell \bar\ell$) as a function of $\delta C_L^\ell (\equiv \delta C_9^\ell=-\delta C_{10}^\ell)$ assuming constructive interference between direct and indirect CP-violating contributions. \label{fig:KLpill_pos}} \end{center} \end{figure} The effect of NP in $\delta C_L$ on the branching fraction of $K_{L} \to \pi^0 \ell \bar{\ell}$ is shown in Fig.~\ref{fig:KLpill_pos} for the electron and the muon sectors. Since there are currently only upper bounds from experiments, these two observables do not put stringent constraints on possible NP contributions.
For the muon sector, the upper bound gives a much weaker constraint compared to BR($K_L \to \mu \bar{\mu}$) as given in the previous subsection. Nonetheless, it is impressive that for the electron sector the current upper limit on BR($K_{L} \to \pi^0 e \bar{e}$), which is about one order of magnitude larger than the SM prediction, is already competitive with the constraint obtained from BR($K^+\to \pi^+ \nu \bar{\nu}$), indicating $\delta C_L^e \lesssim 28$ at 90\% CL. \begin{table}[t] \renewcommand{\arraystretch}{1.5} \begin{center} \setlength\extrarowheight{1pt} \scalebox{0.68}{ \hspace*{-1.cm} \begin{tabular}{|l|l|lc|l|}\hline \bf{Observable} & \bf{SM prediction}& \bf{Exp results} & \bf{Ref.}& \bf{Experimental Err. Projections} \\ \hline BR$(K^+\to \pi^+\nu\nu)$ & $(7.86 \pm 0.61)\times 10^{-11}$ & $(10.6^{+4.0}_{-3.5} \pm 0.9 ) \times 10^{-11}$ & \cite{NA62:2021zjw}& 10\%(@2025)\,5\%(CERN; long-term)~\cite{Goudzovski:2022vbt} \\ BR$(K^0_L\to \pi^0\nu\nu)$ & $(2.68 \pm 0.30) \times 10^{-11}$ & $ <3.0\times 10^{-9} $ @$90\%$ CL & \cite{Ahn:2018mvc}& $20\%$(CERN; long-term ~\cite{Goudzovski:2022vbt})\, 15\% (KOTO~\cite{NA62:2020upd}) \\ LFUV($a_+^{\mu\mu}-a_+^{ee}$)&\multicolumn{1}{c|}{0}&$-0.031\pm 0.017$&\cite{DAmbrosio:2018ytt,Bician:2020ukv}&$\pm0.007$ (assuming $\pm0.005$ for each mode)\\ BR$(K_L\to \mu\mu)$ ($+$) & $(6.82^{+0.77}_{-0.29})\times 10^{-9}$ & \multirow{2}{*}{$(6.84\pm0.11)\times 10^{-9}$} & \multirow{2}{*}{\cite{PDG2020}} & \multirow{2}{*}{ {\small experimental uncertainty kept to current value}}\\ BR$(K_L\to \mu\mu)$ ($-$) & $ (8.04^{+1.47}_{-0.98})\times 10^{-9}$ & & & \\ BR$(K_S\to \mu\mu)$ & $(5.15\pm1.50)\times 10^{-12}$ & $ < 2.1(2.4)\times 10^{-10}$ @$90(95)\%$ CL & \cite{LHCb:2020ycd} & $<8\times10^{-12}$ @$95\%$ CL (CERN; long-term~\cite{LHCb:2018roe})\\ BR$(K_L\to \pi^0 ee)(+)$ & $(3.46^{+0.92}_{-0.80})\times 10^{-11}$ & \multirow{2}{*}{$ < 28\times 10^{-11}$ @$90\%$ CL} & \multirow{2}{*}{\cite{KTeV:2003sls}}& \multirow{4}{*}{observation (CERN;
long-term~\cite{Goudzovski:2022vbt})}\\ BR$(K_L\to \pi^0 ee)(-)$ & $(1.55^{+0.60}_{-0.48})\times 10^{-11}$ & & & \\ BR$(K_L\to \pi^0 \mu\mu)(+)$ & $(1.38^{+0.27}_{-0.25})\times 10^{-11}$ & \multirow{2}{*}{$ < 38\times 10^{-11}$ @$90\%$ CL} & \multirow{2}{*}{\cite{KTEV:2000ngj}} & {\footnotesize (we assume 100\% error)} \\ BR$(K_L\to \pi^0 \mu\mu)(-)$ & $(0.94^{+0.21}_{-0.20})\times 10^{-11}$ & & & \\ \hline \end{tabular}} \caption{\small Comparison of the SM values and the experimental status for the different observables. The SM values and the corresponding uncertainties in the first two columns are evaluated with the updated inputs using {\tt{SuperIso}}~\cite{Mahmoudi:2008tp}. The third column gives the projections in experimental sensitivity for the corresponding experiments (see also~\cite{Goudzovski:2022}). For the cases with more than one projection, the CERN long-term ones are used. \label{tab:data}} \end{center} \end{table} \section{Global picture} \label{sec:global} Table~\ref{tab:data} summarises the results of the last section. The first column gives our evaluated SM values and the second the current experimental results. The last column gives the experimental projections for the measurement of these observables and is detailed in Section~\ref{sec:global1}. It is convenient to present a unified picture of the topics discussed thus far. \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{images/Kaon_current_CLe_CLmuEqualCLtau_KLmumuPM.png} \includegraphics[width=0.48\textwidth]{images/Kaon_current_Zoom_CLe_CLmuEqualCLtau_KLmumuPM.png} \caption{\small The bounds from individual observables. The right panel is a zoomed version of the left panel. The coloured regions correspond to $68\%$ CL when there is a measurement and the dashed ones to upper limits at 90\% CL. $K_L \to \mu \bar{\mu}$ has been shown for both signs of the long-distance contribution.
For $K_L \to \pi^0 e \bar{e}$ and $K_L \to \pi^0 \mu \bar{\mu}$, constructive interference between direct and indirect CP-violating contributions has been assumed. \label{fig:all_obs_individually}} \end{center} \end{figure} In Fig.~\ref{fig:all_obs_individually}, for each individual observable, we show the $68\%$ CL regions in the $(\delta C_L^e,\delta C_L^\mu)$ plane for those observables which have been measured, as well as the 90\% CL upper limits for the observables where there are only upper bounds. Note the mild tension in the upper part of the 68\% CL region of $K^+\to \pi^+\nu\bar\nu$ (in maroon) with the upper bound on $K_L\to \pi^0 e \bar{e}$ (dashed blue line). This is specific to the case where we choose $\delta C_L^\mu=\delta C_L^\tau$. A contrasting picture corresponding to $\delta C_L^e = \delta C_L^\tau$ is presented in Appendix~\ref{app:otherpossibility}. A zoomed version is shown in the right plot. Other decays include $K_L\to\mu \bar{\mu}$, shown with the orange (blue) band for negative (positive) long-distance contributions. The upper bound from $K_L\to\pi^0\nu\bar\nu$ is indicated by the dashed green line. The region of intersection of the horizontal blue dashed and the vertical red dashed lines represents the parameter space allowed by $K_L\to\pi^0\ell \bar{\ell}$. While this is instructive, it would be useful to find the region in the $(\delta C_L^\mu=\delta C_L^\tau,\,\delta C_L^e)$ plane which is consistent with all observables. This prompts the implementation of a global fit, similar to those employed for $B$~decays. However, given the limited experimental data for most observables, we adopt a multi-pronged strategy for the fits, taking into account both the current and future possibilities for many of these observables.
\subsection{Global Fits } \label{sec:global1} We begin with the definition of the $\chi^2$ statistic as follows: \begin{equation} \chi^2 = \sum_{i,j =1}^{N} \left( O_i^{\rm th}(\delta C_L^{e,\mu}) - O_i^{\rm exp} \right) \; C_{ij}^{-1} \; \left( O_j^{\rm th}(\delta C_L^{e,\mu}) - O_j^{\rm exp} \right)\,, \label{eq:chisq} \end{equation} where $C_{ij}$ denotes the total (theoretical and experimental) covariance matrix. Note that an observable ${O}_i^{\rm th}$ in Eq.~\ref{eq:chisq} is expressed as a function of $\delta C_L^{e,\mu}$, and the contribution due to $\delta C_L^\tau$ is in principle not fixed in a two-dimensional fit to $\delta C_L^{e,\mu}$. Henceforth, unless otherwise stated, we stick to the convention $\delta C_L^{\tau}=\delta C_L^{\mu}$. While this choice is motivated by the convention followed in the paper thus far, the future phase of data accumulation for each of these experiments would enable us to make a more adequate choice. To ensure clarity, we divide the discussion that follows into two parts: fits with current data and projected fits. \subsection{Fits with current data} \label{sec:global2} We first perform a New Physics fit of $\delta C_L^e$ and $\delta C_L^\mu = \delta C_L^\tau$ to the current experimental data on rare kaon decays (collected in the second column of Table~\ref{tab:data}). The results of the fit are given in Fig.~\ref{fig:fitcomparion}, where the 68~and~95\% CL regions are given in light and dark purple, respectively. Due to the ambiguity in the sign of the long-distance contribution from $A_{L\gamma\gamma}$ to $K_L\to \mu \bar{\mu}$, the fit results are given for both signs, with $A_{L\gamma\gamma}<0\; (>0)$ on the left (right). The purple cross in each plot represents the corresponding best-fit point.
While the fits are qualitatively similar, we note the appearance of a wall-shaped feature on the left side of the fit for $A_{L\gamma\gamma}>0$~(right), which is better defined than the one corresponding to $A_{L\gamma\gamma}<0$~(left). Its origin can be traced back to the blue band in Fig.~\ref{fig:all_obs_individually}. More generally, the difference in shapes between the fits for positive and negative values of $A_{L\gamma\gamma}$ shows that a future improvement in the sensitivity can lead to a resolution of the sign ambiguity. \begin{figure}[t!] \begin{center} \includegraphics[width=0.48\textwidth]{images/Kaon_current_All_LDm_tulEqualmuonOrZero.png}\quad \includegraphics[width=0.48\textwidth]{images/Kaon_current_All_LDp_tulEqualmuonOrZero.png} \caption{\small Comparison of the global fits between scenarios $\delta C_L^{\tau}=\delta C_L^{\mu}$ (purple regions) and $\delta C_L^{\tau}=~\!\!0$ (black solid and dashed lines). The fits are implemented using existing data with the long-distance contribution to $K_L\to \mu \bar{\mu}$ having a negative (positive) sign, on the left (right). \label{fig:fitcomparion}} \end{center} \end{figure} One of the defining features of kaon decays is that they allow a ``relatively clean'' possibility for identifying the extent of contributions due to effective operators involving tau leptons. Note that ``relatively clean'' refers to both the status of the SM computation as well as the future projections for the measurement of observables where operators involving the tau play a direct role. These operators contribute to the computation of $K^+\to\pi^+\nu\bar \nu$ and $K_L\to\pi^0\nu\bar \nu$. As explained in detail in Section~\ref{sec:2}, both are characterised by highly precise SM computations, owing to negligible long-distance uncertainties. Furthermore, there exists a well-defined strategy for a highly precise measurement of both decays~\cite{Goudzovski:2022,NA62:2021zjw,Ahn:2018mvc}.
It is natural to expect insight into the extent of the contributions due to the operators involving tau leptons by means of global analyses which include these decay modes. This can be illustrated by a comparison with the fits in the absence of $\delta C_L^\tau$, shown with dashed and solid black lines corresponding to the 68 and 95\% CL regions, respectively. The black cross in Fig.~\ref{fig:fitcomparion} indicates the best-fit point for the case of $\delta C_L^\tau = 0$. A feature common to both signs of $A_{L\gamma\gamma}$ is that the $\delta C_L^{\tau}=\delta C_L^{\mu}$ case prefers a smaller parameter space, translating into a strong lower bound on $\delta C_L^\mu$. The larger spread in $\delta C_L^\mu$ for the fit with $\delta C_L^{\tau}=0$ can be attributed to the larger contribution required from $\delta C_L^\mu$ to be consistent with the observation of $K^+\to\pi^+\nu \bar{\nu}$. Another noticeable feature is the shape of the 68$\%$ CL contours on the right of both plots. The depression in the centre can be attributed to the region allowed by $K^+\to\pi^+\nu \bar{\nu}$, shown in maroon in Fig.~\ref{fig:all_obs_individually}. Furthermore, the wall-like feature on the left side of the global fit in the right plot reflects the region allowed by $K_L\to\mu \bar{\mu}$, shown by the blue band in Fig.~\ref{fig:all_obs_individually}. Visually, the left plot reveals a greater level of discrimination between the two scenarios. This is because of a relatively stronger lower bound on $\delta C^{\mu}_L$ for the negative sign of $A_{L\gamma\gamma}$, as shown by the orange region in Fig.~\ref{fig:all_obs_individually}. However, for either plot, the two scenarios are statistically equivalent with the present data. A concrete discrimination may be possible with future runs of many of these experiments. This information would be available by $\sim 2035$, the projected end of data accumulation for NA62/KOTO, by which time the required precision should be reached.
This strengthens the possibility of kaon experiments offering a handle on effective operators involving the tau. \subsection{Projection fits} \label{sec:global3} The results of the above fit arouse a natural curiosity about the possible impact of the future runs of the experiments. Given the preliminary stage of the run of most of these experiments, any projection on the fits requires both the possible measured value as well as the experimental precision. The latter is rather straightforward, as we can adopt the intended long-term experimental precision, which is available for some of the decay modes. In the case of $K^+(K_L)\to\pi^+(\pi^0)\nu\bar\nu$, there is a well-defined sensitivity goal due to NA62/KOTO. In the case of LFUV in $K^+ \to \pi^+ \ell \bar{\ell}$, we assume the uncertainty on $a_+^{\ell\ell}$ to become less than half the current value, reaching $0.005$ for either of the electron and muon modes. As for $K_L\to\pi^0\ell \bar{\ell}$, as mentioned in Section~\ref{sec:KLtopill}, these decays are expected to be observed in the future CERN kaon program; however, in the absence of a well-defined projection for the uncertainty, we assume a 100$\%$ error. All the numbers are collected in the last column of Table~\ref{tab:data}. A prediction of the possible measured central value, on the other hand, is not possible, especially for those observables with only an existing upper bound. In light of this, we present a two-faceted approach. In the first approach, labelled projection~\textbf{A}, the predicted central values for those observables with only an upper bound are projected to be the same as the SM prediction. On the other hand, the corresponding values for $K^+\to\pi^+\nu\bar\nu $, LFUV in $K^+ \to \pi^+ \ell \bar{\ell}$ and $K_L\to\mu \bar{\mu}$ are chosen to be the same as the existing measurement. In the second approach, labelled projection~\textbf{B}, the central values for all of the observables are projected to be the best-fit points obtained from the fits with the existing data.
\begin{figure}[b!] \begin{center} \includegraphics[width=0.48\textwidth]{images/Kaon_projections_LDm.png}\quad \includegraphics[width=0.48\textwidth]{images/Kaon_projections_LDp.png} \caption{\small Projected fit to rare kaon observables within 68 and 95\% CL. Lighter shades of red: projection \textbf{A} with current measurements (whenever there is one) or SM predictions (if currently not observed). Darker shades of red: projection \textbf{B} with current best-fit point used for all projections. The contours of the fit to current data (corresponding to the purple regions in Fig.~\ref{fig:fitcomparion}) are shown with purple solid lines. \label{fig:fitprojections}} \end{center} \end{figure} The results of the fits, for projections~\textbf{A} and~\textbf{B}, are illustrated in Fig.~\ref{fig:fitprojections} with $A_{L\gamma\gamma} < 0\;(>0)$ on the left (right). The 68 and 95\% CL regions are shown with lighter shades of red for projection~\textbf{A} and, similarly, with darker shades of red for projection~\textbf{B}. A~comparison with the fit to current data, indicated by the purple solid contours (coinciding with the 68 and 95\% contours of the purple regions in Fig.~\ref{fig:fitcomparion}), reveals a significant reduction in the parameter space for both projections. Projection~\textbf{A} leads to an overall consistency with the SM, represented by $(0,0)$, up to $3\sigma$. This can be expected with the choice of the predicted central values being the same as the SM for those observables not currently measured. Projection~\textbf{B}, on the other hand, predicts an overwhelming departure from the SM. This is also in line with our expectation, as the best-fit points of the fit to the current data (purple crosses in Fig.~\ref{fig:fitcomparion}) already present a significant departure from the corresponding SM predictions once the projected sensitivities are assumed.
The entire discussion of Section~\ref{sec:global} can be conveniently encapsulated by presenting a summary plot in Fig.~\ref{fig:combined}. The left~(right) plot corresponds to $A_{L\gamma\gamma}<0~(>0)$. Either plot gives the results of the current fit along with the two approaches for the projected global fits: projection~A~(lighter shade of red) and B~(darker shade of red). They are overlaid on the results of Fig.~\ref{fig:all_obs_individually}. This gives us an illustrative understanding of the observables that are the driving force behind the fits. As expected, $K^+\to\pi^+\nu \bar{\nu}$, shown by the ``maroon-doughnut'' shape, plays the most significant role in determining the shape of the regions. This is true for both the current as well as the projected fits. In an ideal case, one would expect the regions to be concentrated towards the top left for the projected fit due to the impact of the LFUV observables. However, the dominance of the theoretical errors due to $K_L\to\mu \bar{\mu}$ makes its effect less pronounced. \begin{figure}[thb!] \begin{center} \includegraphics[width=0.48\textwidth]{images/Combined_KLmumuLDminus.png}\quad \includegraphics[width=0.48\textwidth]{images/Combined_KLmumuLDplus.png}\\ \caption{\small Global fit with current data (in purple) and projection A (lighter shades of red) and projection B (darker shades of red) together with 68\% CL regions for individual observables with current data. The purple cross indicates the best-fit point with the current data. \label{fig:combined}} \end{center} \end{figure} \section{Conclusions} \label{sec:conc} This work, motivated by the $B$-anomalies, presents a possible new way to analyse rare kaon decays in the search for LFUV: we have performed global fits to the Wilson coefficients of the operators contributing to kaon decays. The road leading to the fits is set up by a careful re-examination of the different observables involved.
This includes a computation of the SM values and the theoretical uncertainties. In the case of the latter, asymmetric uncertainties were observed for some of the modes, in particular for $K_L\to\mu \bar{\mu}$. These inputs were then used to construct a global picture and develop a strategy for the fits, which were divided into two parts. The first part gave a glimpse into the existing parameter space using the current experimental information. This was then followed by a ``projection-fit'' which took into account the future sensitivity and measurement goals for many of the observables while assuming realistic projections for the rest (\textit{e.g.}~vector form factors in $K^+\to\pi^+\ell \bar{\ell}$). Given the uncertainty in the experimental central values for the observables for which only an upper bound exists, we adopted two methodologies: A) assuming the SM predictions as the central values and B) assuming the best-fit values from the existing fits as the central values. The results of the projection-fit highlighted the need to achieve a better accuracy in the theoretical computation of $K_L\to\mu \bar{\mu}$. Our $K_L\to \mu\bar\mu$ analysis, leading to asymmetric errors, indicates quantitatively how to improve the theoretical error strategically in order to resolve the ambiguity in the sign of the long-distance contributions (it can be resolved if there is an improvement of about $\sim 50\%$). Although not considered in our global fit, we also demonstrated that the interference of $K_S \to \mu\bar\mu$ and $K_L \to \mu\bar\mu$, besides giving a handle on the sign of the SM long-distance contributions in the latter, can be an effective probe of NP in the muon sector (in the case of an experimental setup with a large dilution factor). This analysis presented a global picture of the physics goals that can be achieved with kaon experiments, and the possibility to probe lepton flavour universality violating New Physics.
\section*{Acknowledgements} We thank Baptiste Filoche for his collaboration in the initial stages of the project. AMI would like to thank IP2I Lyon for the hospitality where the initial parts of the project were discussed. AMI also acknowledges support from the CEFIPRA under the project ``Composite Models at the Interface of Theory and Phenomenology'' (Project No. 5904-C). We would like to thank Prof.~Luca Lista for fruitful discussions, particularly regarding the study of asymmetric uncertainties via Monte Carlo error analysis. \clearpage \begin{appendix} \section*{Appendix} \section{Input parameters} \label{app:inputs} Table~\ref{tab:inputs} gives the input values used in the computation of the different observables. \begin{table}[htb!] \begin{center} \footnotesize{\begin{tabular}{|lr|lr|}\hline $m_{K^\pm} = 493.677 \pm 0.016 $ MeV & \cite{PDG2020} & $\lambda = 0.22650(48)$ & \cite{PDG2020}\\ $m_{K}= 497.611 \pm 0.013$ MeV & \cite{PDG2020} & $A = 0.790(17)$ & \cite{PDG2020}\\ $m_c(m_c)= 1.27 \pm 0.02$ GeV & \cite{PDG2020} & $\bar{\rho} = 0.141(17)$ & \cite{PDG2020}\\ $m_b(m_b)= 4.18 ^{+0.03}_{-0.02}$ GeV & \cite{PDG2020} & $\bar{\eta} = 0.357(11)$ & \cite{PDG2020} \\ $m_t^{pole}= 172.69 (0.30) $ GeV & \cite{PDG2020} & {$\lambda_t^{sd} = \left[-3.11 (15) + i \,1.36 (7)\right]\times 10^{-4}$} &\\ $ f_K= 155.7(3)$ MeV & \cite{FlavourLatticeAveragingGroup:2019iem} & &\\ \hline \end{tabular}} \caption{Input parameters used in this work. \label{tab:inputs}} \end{center} \end{table} \section{Results with \texorpdfstring{$\delta C_L^e=\delta C_L^\tau $}{CLe=CLtau}} \label{app:otherpossibility} In this section, we provide the results of the scan corresponding to the possibility where the NP Wilson coefficients for the electron and tau are set equal to each other: $\delta C_L^e=\delta C_L^\tau $. As the tau contribution is relevant only for the decays involving neutrinos, we present this other possibility for Figs.~\ref{fig:Kppinunu_CLeCLmu} and~\ref{fig:all_obs_individually}.
\begin{figure}[htb!] \begin{center} \includegraphics[width=0.48\textwidth]{images/Kppinunu_CLmu_CLeEqualCLtau_exp.png}\quad \includegraphics[width=0.49\textwidth]{images/KLpinunu_CLmu_CLeEqualCLtau_exp.png} \caption{BR($K^+\to \pi^+ \nu \bar{\nu}$) (left) and BR($K_L\to \pi^0 \nu \bar{\nu}$) (right) as a function of $\delta C_L^e=\delta C_L^\tau$ and $\delta C_L^\mu$. In the left plot, the solid (dashed) line corresponds to the measured central value ($1\sigma$ experimental uncertainty) by NA62~\cite{NA62:2021zjw}. In the right plot, the upper bound on BR($K_L\to \pi^0 \nu \bar{\nu}$) is not visible for the values scanned. \label{fig:Kppinunu_CLeCLmub}} \end{center} \end{figure} \begin{figure}[htb!] \begin{center} \includegraphics[width=0.48\textwidth]{images/Kaon_current_CLmu_CLeEqualCLtau_KLmumuPM.png} \includegraphics[width=0.48\textwidth]{images/Kaon_current_Zoom_CLmu_CLeEqualCLtau_KLmumuPM.png} \caption{\small The bounds from individual observables. The right panel is the zoomed version of the left panel. The coloured regions correspond to $68\%$ CL when there is a measurement and the dashed ones to upper limits. $K_L \to \mu \bar{\mu}$ is shown for both signs of the long-distance contribution. For $K_L \to \pi^0 e \bar{e}$ and $K_L \to \pi^0 \mu \bar{\mu}$, constructive interference between direct and indirect CP-violating contributions has been assumed. \label{fig:all_obs_individuallyb}} \end{center} \end{figure} In the case of the former, Fig.~\ref{fig:Kppinunu_CLeCLmub} illustrates the corresponding regions when the New Physics Wilson coefficients for the electron and tau are set equal to each other. This has implications when we present the combination of all observables, as shown in Fig.~\ref{fig:all_obs_individuallyb}. In comparison with Fig.~\ref{fig:all_obs_individually}, note the flattening of the maroon ellipse, which makes the upper bound for $K_L\to\pi^0e \bar{e}$ less powerful in constraining the regions of the parameter space that were scanned.
\section{Calculations of Theory Error}\label{app:error} The theoretical errors for $K_L\to\mu \bar{\mu}$ and $K_L\to\pi^0\ell \bar{\ell}$ are characterised by asymmetric uncertainties. In particular, for the former the degree of departure from symmetric Gaussian errors was significant, in contrast with the values quoted thus far. In this section, we elaborate on the reasons for this departure and argue why a symmetric Gaussian uncertainty does not accurately reflect the true theoretical uncertainty. For any given observable, we take the latest values of the inputs and the corresponding errors, assume Gaussian distributions for them, and employ a Monte Carlo simulation for the uncertainty propagation. The blue (orange) distribution in Fig.~\ref{fig:error1} gives the probability distribution function (PDF) for the positive (negative) sign of the long-distance contribution for $K_L\to\mu \bar{\mu}$. The central values quoted in Table~\ref{tab:data} reflect the value estimated by using the central values of the inputs given in Table~\ref{tab:inputs}, while the asymmetric uncertainties are calculated by considering the boundaries between which the area under the PDF curve is $0.68$. This is to be compared with the solid lines, which reflect the Gaussian description where $\mu$ and $\sigma$ have been naively calculated from the Monte Carlo distribution without taking into account the asymmetric nature of the distribution. We emphasise the large departure of the symmetric description from the Monte Carlo distribution, especially for the case of a negative sign for the long-distance contributions ($A_{L\gamma\gamma}^\mu<0$, shown in orange). \begin{figure}[htb!] \begin{center} \includegraphics[width=0.57\textwidth]{images/PDF_KLmumu.png} \caption{\small Comparison of the Monte Carlo PDF for $K_L\to\mu \bar{\mu}$ with the Gaussian description of the uncertainty.
\label{fig:error1}} \end{center} \end{figure} A similar situation, albeit to a much lesser degree, is also noted for $K_L\to\pi^0\ell \bar{\ell}$, as shown in Fig.~\ref{fig:error2}. For completeness, we show the distributions for both signs of the interference between the direct and indirect CP-violating terms. The difference between the actual distribution and the corresponding naive Gaussian description is only marginal. The values in Table~\ref{tab:data} reflect the values obtained from the actual asymmetric distribution. \begin{figure}[hbt!] \begin{center} \includegraphics[width=0.48\textwidth]{images/PDF_KLpimumu.png} \includegraphics[width=0.48\textwidth]{images/PDF_KLpiee.png} \caption{\small Comparison of the Monte Carlo PDF for $K_L\to\pi^0\ell \bar{\ell}$ with the Gaussian description of the uncertainty for both positive and negative interference. \label{fig:error2}} \end{center} \end{figure} \section{\texorpdfstring{$Y(x_t)$}{Y(xt)} and \texorpdfstring{$Y_c$}{Yc} expressions}\label{app:Yxt} The SM expression for $Y(x_t)$ is given in Ref.~\cite{Buchalla:1998ba} \begin{align}\label{eq:Yfull} Y(x_t) = Y_0(x_t) + \frac{\alpha_s(\mu_t)}{4\pi} Y_1(x_t), \end{align} where the LO gauge-independent combination $Y_0(x_t)\equiv C_0(x_t)-B_0(x_t)$ reads \begin{align} Y_0(x_t) = \frac{x_t}{8}\left[\frac{4-x_t}{1-x_t}+\frac{3x_t}{(1-x_t)^2}\log x_t\right] \end{align} and \begin{align} Y_1(x_{t}) &= \frac{10x_{t} + 10 x_{t}^2 + 4x_{t}^3}{3(1-x_{t})^2} - \frac{2x_{t} - 8x_{t}^2-x_{t}^3-x_{t}^4}{(1-x_{t})^3} \log x_{t}\nonumber\\ &+\frac{2x_{t} - 14x_{t}^2 + x_{t}^3 - x_{t}^4}{2(1-x_{t})^3} \log^2 x_{t} + \frac{2x_{t} + x_{t}^3}{(1-x_{t})^2} {\rm Li_2}(1-x_{t})\nonumber\\ &+8x_{t} \frac{\partial Y_0}{\partial x_{t}} \log \frac{\mu^2}{M^2_W}. \end{align} For the charm contributions, we have $Y_c = \lambda^4 P_c(Y)$, where $P_c(Y)$ is calculated at NNLO in QCD~\cite{Gorbahn:2006bm}.
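The LO loop function $Y_0$ given above is simple enough to evaluate directly; the sketch below uses an illustrative $x_t=\overline{m}_t^2/m_W^2$ (the mass values here are assumptions for the example, not the inputs of Table~\ref{tab:inputs}):

```python
import math

def Y0(x):
    # LO gauge-independent loop function Y0(x_t) defined above
    return (x / 8.0) * ((4.0 - x) / (1.0 - x)
                        + 3.0 * x / (1.0 - x) ** 2 * math.log(x))

xt = (162.8 / 80.38) ** 2  # illustrative: MSbar top mass over W mass, squared
print(Y0(xt))  # ~0.94 for x_t ~ 4.1
```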
The analytic expression is not given; however, an approximate formula with the $\lambda$-dependence is offered \begin{align}\label{eq:PcYlambda} P_c(Y) & = 0.115 \pm 0.008_{\rm theor} \pm 0.008_{m_c} \pm 0.001_{\alpha_s} = \left ( 0.115 \pm 0.018 \right ) \left ( \frac{0.225}{\lambda} \right)^4 \, . \end{align} Another approximate formula with an accuracy of better than $\pm1.0\%$ in the ranges $1.15 \! \text{ GeV} \le m_c (m_c) \le 1.45 \! \text{ GeV}$, $0.1150 \le \alpha_s (M_Z) \le 0.1230$, $1.0 \! \text{ GeV} \le \mu_c \le 3.0 \! \text{ GeV}$ and $2.5 \! \text{ GeV} \le \mu_b \le 10.0 \! \text{ GeV}$ is also given in Ref.~\cite{Gorbahn:2006bm} \begin{align}\label{eq:PcYscales} P_c(Y) & = 0.1198 \left ( \frac{m_c (m_c)}{1.30 \! \text{ GeV}} \right )^{2.3595} \left ( \frac{\alpha_s (M_Z)}{0.1187} \right )^{6.6055} \nonumber\\ &\times \left ( 1 + \sum_{i,j,k,l} \kappa_{ijkl} L_{m_c}^i L_{\alpha_s}^j L_{\mu_c}^k L_{\mu_b}^l \right )\left(\frac{0.225}{\lambda} \right)^4 , \end{align} where \begin{align} L_{m_c} & = \ln \left ( \frac{m_c (m_c)}{1.30 \! \text{ GeV}} \right ) \, , & L_{\alpha_s} & = \ln \left ( \frac{\alpha_s (M_Z)}{0.1187} \right ) \, , \nonumber\\ L_{\mu_c} & = \ln \left ( \frac{\mu_c}{1.5 \! \text{ GeV}} \right ) \, , & L_{\mu_b} & = \ln \left ( \frac{\mu_b}{5.0 \! \text{ GeV}} \right ) \, , \end{align} with \begin{align} \kappa_{1000} &= -0.5373, && \kappa_{0100} = -6.0472, && \kappa_{0010} = -0.0956, \nonumber\\[-2mm] \kappa_{0001} &= 0.0114, && \kappa_{1100} = 3.9957, && \kappa_{1010} = 0.3604, \nonumber\\[-2mm] \kappa_{0110} &= 0.0516, && \kappa_{0101} = -0.0658, && \kappa_{2000} = -0.1767, \nonumber\\[-2mm] \kappa_{0200} &= 16.4465, && \kappa_{0020} = -0.1294, && \kappa_{0030} = 0.0725.
\end{align} \section{\texorpdfstring{$X(x_t)$}{X(xt)} and \texorpdfstring{$X_c$}{Xc} expressions}\label{app:Xxt} The short-distance contribution $X(x_t)$ in the SM (extracted from the original papers~\cite{Buchalla:1993bv, Misiak:1999yg, Buchalla:1998ba,Brod:2010hi}) is given in~\cite{Buras:2015qea} \begin{align} X(x_t) = X_0(x_t) + \frac{\alpha_s(\mu_t)}{4\pi}X_1(x_t) + \frac{\alpha}{4\pi}X_{\rm EW}(x_t), \end{align} where $X_0$ is the leading order result, and $X_1$, $X_{\rm EW}$ are the NLO QCD and EW corrections, respectively. The coupling constants $\alpha_s$ and $\alpha$, as well as the parameter $x_t = m_t^2/m_W^2$, have to be evaluated at scale $\mu\sim\mathcal{O}(M_t)$. The LO expression is the gauge-independent linear combination $X_0(x_t) \equiv C(x_t) - 4 B(x_t)$~\cite{Inami:1980fz,Buchalla:1990qz} \begin{equation}\label{X01} X_0(x_t) = \frac{x_t}{8}\left[\frac{x_t+2}{x_t-1} + \frac{3x_t-6}{(x_t-1)^2}\log x_t\right]. \end{equation} The NLO QCD correction \cite{Buchalla:1993bv,Misiak:1999yg,Buchalla:1998ba}, in the $\overline{\text{MS}}$ scheme reads \begin{equation}\begin{aligned} X_1(x_t) &= -\frac{29x_t - x_t^2 - 4x_t^3}{3(1-x_t)^2} - \frac{x_t + 9x_t^2 - x_t^3 - x_t^4}{(1-x_t)^3}\log x_t\\ &+ \frac{8x_t + 4x_t^2 + x_t^3 - x_t^4}{2(1-x_t)^3}\log^2 x_t - \frac{4x_t - x_t^3}{(1-x_t)^2}{\rm Li}_2(1-x_t)\\ &+ 8x_t\frac{\partial X_0}{\partial x_t}\log\frac{\mu^2}{M_W^2}, \end{aligned}\end{equation} where $\mu$ is the renormalisation scale. The 2-loop EW correction $X_{\rm EW}$ has been calculated in \cite{Brod:2010hi}. The charm contributions, $X_c^\nu(\equiv \lambda^4 P_c(X))$, are described via \begin{align} P_c(X)= P_c^{\rm SD}(X) + \delta P_{c,u} \end{align} where $\delta P_{c,u}=0.04\pm0.02$ corresponds to the long-distance contributions as calculated in Ref.~\cite{Isidori:2003ts}. 
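As a numerical cross-check of Eq.~\ref{X01}, $X_0$ can be coded directly (the $x_t$ value below is illustrative, not a fit input):

```python
import math

def X0(x):
    # LO loop function X0(x_t) = C(x_t) - 4 B(x_t) of Eq. (X01)
    return (x / 8.0) * ((x + 2.0) / (x - 1.0)
                        + (3.0 * x - 6.0) / (x - 1.0) ** 2 * math.log(x))

xt = (162.8 / 80.38) ** 2  # illustrative x_t = m_t^2 / m_W^2
print(X0(xt))  # ~1.48 for x_t ~ 4.1
```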
The short-distance contribution of the charm quark $P_c(X)$ including the NNLO correction is calculated in Ref.~\cite{Brod:2008ss}, but the explicit analytical expression is not given. However, an approximate formula is given by\footnote{ The approximate formula in Ref.~\cite{Brod:2008ss} is given for $\lambda=0.2255$; to take into account changes of $\lambda$, it should be multiplied by $(\lambda/0.2255)^4$. } \begin{equation}\label{eq:PCSD} \begin{split} P_c^{\rm SD}(X) &= 0.38049 \left( \frac{m_c (m_c)}{1.30\,\textrm{GeV}} \right)^{0.5081} \left( \frac{\alpha_s (M_Z)}{0.1176} \right )^{1.0192} \left( 1 + \sum_{i,j} \kappa_{ij} L_{m_c}^i L_{\alpha_s}^j \right) \\ &\pm 0.008707 \left( \frac{m_c (m_c)}{1.30\,\textrm{GeV}} \right)^{0.5276} \left( \frac{\alpha_s (M_Z)}{0.1176} \right )^{1.8970} \left( 1 + \sum_{i,j} \epsilon_{ij} L_{m_c}^i L_{\alpha_s}^j \right) \, , \end{split} \end{equation} where \begin{equation} \label{eq:defLs} L_{m_c} = \ln \left( \frac{m_c (m_c)}{1.30\,\textrm{GeV}} \right) \, , \qquad L_{\alpha_s} = \ln \left( \frac{\alpha_s (M_Z)}{0.1176} \right) \, , \end{equation} and \begin{align} &\kappa_{10} = 1.6624,\quad \kappa_{01} = -2.3537,\quad \kappa_{11} = -1.5862,\quad \kappa_{20} = 1.5036,\quad \kappa_{02} = -4.3477,\nonumber\\ &\epsilon_{10} = -0.3537,\quad \epsilon_{01} = 0.6003,\quad\epsilon_{11} = -4.7652,\quad\epsilon_{20} = 1.0253,\quad\epsilon_{02} = 0.8866. \end{align} The NP effects that are neutrino-flavour dependent contribute via SM$\times$NP interference terms, besides the NP$\times$NP terms. Thus, for these types of NP effects in ${\rm BR}(K^+ \to \pi^+ \nu \bar{\nu})$ we need the NNLO charm contributions for the different neutrino flavours in separated form (see Eq.~\ref{eq:Br-KppipnunuExpanded} below).
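Eq.~\ref{eq:PCSD} translates directly into code; the sketch below reproduces the central value and error with the coefficients listed above (the default inputs are illustrative choices, not necessarily those of Table~\ref{tab:inputs}):

```python
import math

KAPPA = {(1, 0): 1.6624, (0, 1): -2.3537, (1, 1): -1.5862,
         (2, 0): 1.5036, (0, 2): -4.3477}
EPSILON = {(1, 0): -0.3537, (0, 1): 0.6003, (1, 1): -4.7652,
           (2, 0): 1.0253, (0, 2): 0.8866}

def Pc_SD_X(mc=1.27, alpha_s=0.1179, lam=0.2250):
    # Approximate P_c^SD(X) of Eq. (PCSD), rescaled by (lam/0.2255)^4
    # as noted in the footnote; returns (central value, error).
    Lmc = math.log(mc / 1.30)
    Las = math.log(alpha_s / 0.1176)
    corr = lambda coeffs: 1.0 + sum(c * Lmc**i * Las**j
                                    for (i, j), c in coeffs.items())
    scale = (lam / 0.2255) ** 4
    central = (0.38049 * (mc / 1.30) ** 0.5081
               * (alpha_s / 0.1176) ** 1.0192 * corr(KAPPA) * scale)
    error = (0.008707 * (mc / 1.30) ** 0.5276
             * (alpha_s / 0.1176) ** 1.8970 * corr(EPSILON) * scale)
    return central, error

print(Pc_SD_X())
```

At the reference point $m_c(m_c)=1.30$ GeV, $\alpha_s(M_Z)=0.1176$, $\lambda=0.2255$, all logarithms vanish and the function returns the quoted prefactors exactly.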
However, since the charm contributions are not available separately at NNLO as given in Eq.~\ref{eq:PCSD}, for the SM$\times$NP interference terms we use the NLO results from Appendix~C.2 of Ref.~\cite{Bobeth:2016llm}, which are given for $\mu_c = 1.3$ GeV \begin{align}\label{eq:Xcflavour} X_c^{e/\mu}&= 10.05\times 10^{-4}, && X_c^{\tau}= 6.64\times 10^{-4}. \end{align} The SM$\times$NP interference terms are clearly seen when writing Eqs.~\ref{eq:Br-KLpinunu} and~\ref{eq:Br-Kppipnunu} in their expanded form \begin{align} \label{eq:Br-KLpinunuExpanded} & {\rm BR}(K_L \to \pi^0 \nu \bar{\nu}) = \frac{\kappa_L }{\lambda^{10}}\frac{1}{3}s_W^4 \sum_{\nu_\ell} {\rm Im}^2 \left[\lambda_t C_L^{\nu_\ell} \right] \\[4pt]\nonumber &= {\rm BR}(K_L \to \pi^0 \nu \bar{\nu})_{\rm SM} + \frac{\kappa_L }{\lambda^{10}}\frac{1}{3} s_W^4 \Bigg[ \sum_{\nu_\ell} \mbox{Im}^2\left( \lambda_t\, C_{L,{\rm NP}}^{\nu_\ell} \right) +2\,\mbox{Im}\left(\lambda_t\, C_{L,{\rm SM}} \right) \sum_{\nu_\ell} \mbox{Im}\left(\lambda_t \,C_{L,{\rm NP}}^{\nu_\ell} \right) \Bigg] \\[20pt] \label{eq:Br-KppipnunuExpanded} &{\rm BR}(K^+ \to \pi^+ \nu \bar{\nu}) = \frac{\kappa_+ (1 + \Delta_{\rm EM})}{\lambda^{10}}\frac{1}{3} s_W^4 \sum_{\nu_\ell} \left[ {\rm Im}^2 \Big(\lambda_t C_L^{\nu_\ell} \Big) + {\rm Re}^2 \Big(-\frac{\lambda_c X_{c}}{s_W^2} + \lambda_t C_L^{\nu_\ell} \Big)\right]\\[4pt]\nonumber &={\rm BR}(K^+ \to \pi^+ \nu \bar{\nu})_{\rm SM} + \frac{\kappa_+ (1 + \Delta_{\rm EM})}{\lambda^{10}}\frac{1}{3} s_W^4 \Bigg[ \sum_{\nu_\ell} \mbox{Im}^2\left( \lambda_t C_{L,{\rm NP}}^{\nu_\ell} \right) + 2\,\mbox{Im}\left(\lambda_t C_{L,{\rm SM}} \right)\sum_{\nu_\ell} \mbox{Im}\left(\lambda_t C_{L,{\rm NP}}^{\nu_\ell} \right) \\[-6pt]\nonumber &\quad+ \sum_{\nu_\ell} \mbox{Re}^2\left( \lambda_t C_{L,{\rm NP}}^{\nu_\ell} \right) +2\,\mbox{Re}\left(\lambda_t C_{L,{\rm SM}} \right)\sum_{\nu_\ell} \mbox{Re}\left(\lambda_t C_{L,{\rm NP}}^{\nu_\ell} \right) -2\sum_{\nu_\ell} \mbox{Re}\left(\frac{\lambda_c
X_c^\nu}{s_W^2} \right) \mbox{Re}\left(\lambda_t C_{L,{\rm NP}}^{\nu_\ell} \right) \Bigg]\nonumber \end{align} \section{The long-distance contribution to \texorpdfstring{$K_L \to \mu \bar{\mu}$}{KL -> mu mu}}\label{app:NLLD} The long-distance contributions given in Eq.~\ref{eq:LDKmumu} can be written as~\cite{DAmbrosio:2017klp,Chobanova:2017rkj} \begin{align}\label{eq:NSLD} N_{L}^{\rm LD} & = \frac{\pm4\, \alpha_{\rm em}\, m_\mu}{\pi\, f_K\, M_K^2\, } \sqrt{\frac{2\pi}{M_K}\frac{{\rm Br}(K_L^0 \to \gamma \gamma)^{\rm EXP}}{\tau_L}} (\chi_{\rm disp} + i\chi_{\rm abs}) \end{align} with ${\rm Br}(K_L^0 \to \gamma \gamma)^{\rm EXP} = (5.47 \pm 0.04)\times 10^{-4}$~\cite{PDG2020}, and $(\chi_{\rm disp} + i\chi_{\rm abs}) $ corresponding to the $2\gamma$ intermediate state given by~\cite{DAmbrosio:1986zin,GomezDumm:1998gw,Knecht:1999gb,Isidori:2003ts} \begin{align} \chi_{\rm abs} & = {\rm Im}\left(C_{\gamma\gamma}\right) = \frac{\pi}{2\beta_{\mu,K}}\ln\left( \frac{1-\beta_{\mu,K}}{1+\beta_{\mu,K}} \right)\,,\\ \chi_{\rm disp} &= \chi_{\gamma \gamma}(\mu) - \frac{5}{2} +\frac{3}{2}\ln\left(\frac{m_\mu^2}{\mu^2}\right) + {\rm Re}\left( C_{\gamma\gamma} \right)\,, \end{align} with \begin{align} \beta_{\mu,K} &= \sqrt{1-4m_\mu^2/M_K^2}\,, \\ C_{\gamma\gamma} &= \frac{1}{\beta_{\mu,K}}\left[ {\rm Li}_2\left( \frac{\beta_{\mu,K} -1}{\beta_{\mu,K} +1} \right) + \frac{\pi^2}{3} + \frac{1}{4}\ln^2 \left(\frac{\beta_{\mu,K} -1}{\beta_{\mu,K} +1} \right)\right]\,, \end{align} where the low-energy coupling $\chi_{\gamma \gamma}(\mu)$, which depends on the behaviour of the $K_L \to \gamma \gamma$ form factor outside the physical region, is estimated in Ref.~\cite{Isidori:2003ts} \begin{align} \chi_{\gamma \gamma}(M_\rho) = 5.83 \pm 0.15_{\rm exp} \pm 1.0_{\rm th}\,, \end{align} resulting in $(\chi_{\rm disp} + i\chi_{\rm abs}) = (0.71 \pm 0.15 \pm 1.0) +i(-5.21)$~\cite{Mescia:2006jd}. \end{appendix}
\section{Discount Values and Weights in Modified Kneser Ney} \label{app:kmn} The discount value $D(c)$ used in formula~(\ref{eq:smoothing:mod-kneser-ney-high}) is defined as~\cite{J:CSL:1999:ChenG}: {\small \begin{equation}\label{eq:smoothing:mod-kneser-ney-d} D(c)= \begin{cases}\ 0 & \text{if}\ c=0 \\ D_1 & \text{if}\ c=1 \\ D_2 & \text{if}\ c=2 \\ D_{3+} & \text{if}\ c>2 \\ \end{cases} \end{equation} }% The discounting values $D_1$, $D_2$, and $D_{3+}$ are defined as~\cite{chen1998empirical} {\small \begin{subequations}\label{eq:smoothing:mod-kneser-ney-ds} \begin{align} D_1=1-2Y\frac{n_2}{n_1}\\ D_2=2-3Y\frac{n_3}{n_2}\\ D_{3+}=3-4Y\frac{n_4}{n_3} \end{align} \end{subequations} }% where $Y=\frac{n_1}{n_1+n_2}$ and $n_i$ denotes the total number of $n$-grams which appear exactly $i$ times in the training data. The weight $\gamma_{high}(w_{i-n+1}^{i-1})$ is defined as: {\small \begin{align}\label{eq:smoothing:mod-kneser-ney-gamma-high2} \gamma_{high}&(w_{i\!-\!n\!+\!1}^{i\!-\!1})= \frac{D_1N_1(w_{i\!-\!n\!+\!1}^{i\!-\!1}\!\bullet)\!+\!D_2N_2(w_{i\!-\!n\!+\!1}^{i\!-\!1}\!\bullet)\!+\!D_{3+}N_{3+}(w_{i\!-\!n\!+\!1}^{i\!-\!1}\!\bullet)}{c(w_{i\!-\!n\!+\!1}^{i\!-\!1})} \end{align} }% The weight $\gamma_{mid}(w_{i-n+1}^{i-1})$ is defined as: {\small \begin{align}\label{eq:smoothing:mod-kneser-ney-gamma-low1} \gamma_{mid}&(w_{i\!-\!n\!+\!1}^{i\!-\!1})= \frac{D_1N_1(w_{i\!-\!n\!+\!1}^{i\!-\!1}\!\bullet)\!+\!D_2N_2(w_{i\!-\!n\!+\!1}^{i\!-\!1}\!\bullet)\!+\!D_{3+}N_{3+}(w_{i\!-\!n\!+\!1}^{i\!-\!1}\!\bullet)}{N_{1+}(\bullet\! w_{i\!-\!n\!+\!1}^{i\!-\!1}\bullet)} \end{align} }% where $N_1(w_{i-n+1}^{i-1}\bullet)$, $N_2(w_{i-n+1}^{i-1}\bullet)$, and $N_{3+}(w_{i-n+1}^{i-1}\bullet)$ are defined analogously to $N_{1+}(w_{i-n+1}^{i-1}\bullet)$. \section{Experimental Setup and Data Sets}\label{sec:method} To evaluate the quality of our generalized language models, we empirically compare their ability to explain sequences of words.
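The discounts of Eqs.~(\ref{eq:smoothing:mod-kneser-ney-ds}) depend only on the count-of-counts $n_i$; a minimal sketch (with hypothetical toy bigram counts, for illustration only) is:

```python
from collections import Counter

def kn_discounts(ngram_counts):
    # D1, D2, D3+ from the count-of-counts n_i (number of n-grams
    # occurring exactly i times in the training data)
    n = Counter(ngram_counts.values())
    n1, n2, n3, n4 = n[1], n[2], n[3], n[4]
    Y = n1 / (n1 + n2)
    return (1 - 2 * Y * n2 / n1,
            2 - 3 * Y * n3 / n2,
            3 - 4 * Y * n4 / n3)

# hypothetical bigram frequencies
counts = {("a", "b"): 1, ("a", "c"): 1, ("c", "c"): 1,
          ("b", "c"): 2, ("c", "a"): 2, ("a", "a"): 3, ("b", "b"): 4}
print(kn_discounts(counts))  # (0.2, 1.1, 0.6) up to rounding
```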
To this end we use text corpora, split them into test and training data, build language models as well as generalized language models over the training data, and apply them to the test data. We employ established metrics, such as cross entropy and perplexity. In the following we explain the details of our experimental setup. \subsection{Data Sets} For evaluation purposes we employed eight different data sets. The data sets cover different domains and languages. As languages we considered English (\emph{en}), German (\emph{de}), French (\emph{fr}), and Italian (\emph{it}). As a general domain data set we used the full collection of articles from Wikipedia (\emph{wiki}) in the corresponding languages. The download dates of the dumps are displayed in Table~\ref{tab:wikidates}. \begin{table}[h] \centering \begin{tabularx}{0.47\textwidth}{c|c|c|c} de & en & fr & it \\ \hline Nov 22\textsuperscript{nd} & Nov 4\textsuperscript{th} & Nov 20\textsuperscript{th} & Nov 25\textsuperscript{th} \\ \end{tabularx} \caption{Download dates of Wikipedia snapshots in November 2013.} \label{tab:wikidates} \end{table} Special purpose domain data are provided by the multi-lingual JRC-Acquis corpus of legislative texts (\emph{JRC})~\cite{P:LREC:2006:SteinbergerPWI}. Table~\ref{tab:dataSetStats} gives an overview of the data sets and provides some simple statistics of the covered languages and the size of the collections. \begin{table}[h] \centering \begin{tabularx}{0.45\textwidth}{Y | c c } \toprule \multicolumn{1}{c}{} & \multicolumn{2}{c}{Statistics} \\ Corpus & total words & unique words \\ & in Mio. & in Mio.
\\ \midrule wiki-de & 579 & 9.82 \\ JRC-de & 30.9 & 0.66 \\ \hline wiki-en & 1689 & 11.7 \\ JRC-en & 39.2 & 0.46 \\ \hline wiki-fr & 339 & 4.06 \\ JRC-fr & 35.8 & 0.46 \\ \hline wiki-it & 193 & 3.09 \\ JRC-it & 34.4 & 0.47 \\ \bottomrule \end{tabularx} \caption{Word statistics and size of the evaluation corpora} \label{tab:dataSetStats} \end{table} The data sets come in the form of structured text corpora which we cleaned from markup and tokenized to generate word sequences. We filtered the word tokens by removing all character sequences which did not contain any letter, digit, or common punctuation mark. Eventually, the word token sequences were split into word sequences of length $n$, which provided the basis for the training and test sets for all algorithms. Note that we neither performed case-folding nor applied stemming algorithms to normalize the word forms; our evaluation thus uses case-sensitive training and test data. Additionally, we kept all tokens for named entities such as names of persons or places. \subsection{Evaluation Methodology} All data sets have been randomly split into a training and a test set on a sentence level. The training sets consist of 80\% of the sentences, which have been used to derive $n$-grams, skip $n$-grams, and corresponding continuation counts for values of $n$ between 1 and 5. Note that we have trained a prediction model for each data set individually. From the remaining 20\% of the sequences we have randomly sampled a separate set of $100,000$ sequences of $5$ words each. These test sequences have also been shortened to sequences of lengths $3$ and $4$ and provide a basis to conduct our final experiments to evaluate the performance of the different algorithms. We learnt the generalized language models on the same split of the training corpus as the standard language model using modified Kneser-Ney smoothing, and we used the same set of test sequences for a direct comparison.
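The sentence-level split and the sampling of fixed-length test sequences described above can be sketched as follows. This is a simplified illustration with function names of our own choosing, not the actual pipeline of our published toolkit:

```python
import random

def split_and_sample(sentences, train_frac=0.8, num_test=100_000, seed=42):
    """Sentence-level train/test split and sampling of 5-word test
    sequences, shortened also to lengths 4 and 3 (simplified sketch)."""
    rng = random.Random(seed)
    sents = list(sentences)
    rng.shuffle(sents)
    cut = int(train_frac * len(sents))
    train, held_out = sents[:cut], sents[cut:]

    # Enumerate all 5-word sequences in the held-out sentences
    # and sample the test set from them.
    candidates = []
    for sent in held_out:
        words = sent.split()
        for i in range(len(words) - 4):
            candidates.append(words[i:i + 5])
    test5 = rng.sample(candidates, min(num_test, len(candidates)))

    # The shortened test sets reuse the same sampled sequences.
    test4 = [seq[:4] for seq in test5]
    test3 = [seq[:3] for seq in test5]
    return train, test3, test4, test5
```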
To ensure rigour and openness of research, the training data set, the test sequences, and the entire source code are publicly available.\footnote{http://west.uni-koblenz.de/Research}\footnote{https://github.com/renepickhardt/generalized-language-modeling-toolkit}\footnote{http://glm.rene-pickhardt.de} We compared the probabilities of our language model implementation (which is a subset of the generalized language model) using KN as well as MKN smoothing with the Kyoto Language Model Toolkit\footnote{http://www.phontron.com/kylm/}. Since we obtained the same results for small $n$ and small data sets, we are confident that our implementation is correct. In a second experiment we investigated the impact of the size of the training data set. The Wikipedia corpus consists of $1.7$ bn. words; thus, the $80\%$ split used for training consists of $1.3$ bn. words. We iteratively created smaller training sets by decreasing the split factor by an order of magnitude, i.e.\ an $8\%$ / $92\%$ split, a $0.8\%$ / $99.2\%$ split, and so on. We stopped at the $0.008\%$ / $99.992\%$ split, as the training data set in this case consisted of fewer words than the 100k test sequences, which we still randomly sampled from the test data of each split. We then trained a generalized language model as well as a standard language model with modified Kneser-Ney smoothing on each of these samples of the training data. Again, we evaluated these language models on the same random sample of $100,000$ sequences as mentioned above. \subsection{Evaluation Metrics} As evaluation metric we use \emph{perplexity}, a standard measure in the field of language models~\cite{Manning:1999:LM}.
First we calculate the \emph{cross entropy} of a trained language model given a test set using \begin{equation} H(P_{\tt{alg}}) = - \sum_{s\in T}P_{\tt{MLE}}(s) \cdot \log_2{P_{\tt{alg}}(s)} \end{equation} where $P_{\tt{alg}}$ is replaced by the probability estimates provided by our generalized language models or by the estimates of a language model using modified Kneser-Ney smoothing, $P_{\tt{MLE}}$ is the maximum likelihood estimate of the probability of a test sequence in the test corpus, and $T$ is the set of test sequences. The perplexity is defined as: \begin{equation} \textit{Perplexity}(P_{\tt{alg}}) = 2^{H(P_{\tt{alg}})} \end{equation} Lower perplexity values indicate better results. \section{Discussion}\label{sec:discussion} In our experiments we have observed an improvement of our generalized language models over classical language models using Kneser-Ney smoothing. The improvements have been observed for different languages, different domains, as well as different sizes of the training data. In the experiments we have also seen that the GLM performs well in particular for small training data sets and sparse data, confirming our initial motivation. This feature of the GLM is of particular value, as data sparsity becomes an increasingly severe problem for higher values of $n$. This well-known fact is also underlined by the statistics shown in Table~\ref{tab:percentage-enwiki-complete}. The fraction of total $n$-grams which appear only once in our Wikipedia corpus increases for higher values of $n$. However, for the same value of $n$ the skip $n$-grams are less rare. Our generalized language models leverage this additional information to obtain more reliable estimates for the probability of word sequences.
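The cross entropy and perplexity computations defined in the evaluation metrics above can be sketched as follows; `p_alg` stands for any trained model's probability estimator and is passed in as a function:

```python
from collections import Counter
from math import log2

def perplexity(test_sequences, p_alg):
    """Perplexity 2^H(P_alg) over a test set, weighting each distinct
    sequence by its empirical (maximum likelihood) probability P_MLE."""
    counts = Counter(test_sequences)
    total = sum(counts.values())
    entropy = 0.0
    for seq, count in counts.items():
        p_mle = count / total
        entropy -= p_mle * log2(p_alg(seq))
    return 2 ** entropy
```

For instance, a model assigning probability $0.25$ to each of four equally frequent test sequences has cross entropy $2$ and hence perplexity $4$.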
\begin{table}[ht] \centering \begin{tabularx}{0.4\textwidth}{Y | c | c} \toprule \textbf{$w_1^n$}&\textbf{total}&\textbf{unique}\\ \midrule $w_1$&$0.5\%$&$64.0\%$\\ \midrule $w_1w_2$&$5.1\%$&$68.2\%$\\ $w_1\_w_3$&$8.0\%$&$79.9\%$\\ $w_1\_\_w_4$&$9.6\%$&$72.1\%$\\ $w_1\_\_\_w_5$&$10.1\%$&$72.7\%$\\ \midrule $w_1w_2w_3$&$21.1\%$&$77.5\%$\\ $w_1\_w_3w_4$&$28.2\%$&$80.4\%$\\ $w_1w_2\_w_4$&$28.2\%$&$80.7\%$\\ $w_1\_\_w_4w_5$&$31.7\%$&$81.9\%$\\ $w_1\_w_3\_w_5$&$35.3\%$&$83.0\%$\\ $w_1w_2\_\_w_5$&$31.5\%$&$82.2\%$\\ \midrule $w_1w_2w_3w_4$&$44.7\%$&$85.4\%$\\ $w_1\_w_3w_4w_5$&$52.7\%$&$87.6\%$\\ $w_1w_2\_w_4w_5$&$52.6\%$&$88.0\%$\\ $w_1w_2w_3\_w_5$&$52.3\%$&$87.7\%$\\ \midrule $w_1w_2w_3w_4w_5$&$64.4\%$&$90.7\%$\\ \bottomrule \end{tabularx} \caption{Percentage of generalized $n$-grams which occur only once in the English Wikipedia corpus. Total means a percentage relative to the total amount of sequences. Unique means a percentage relative to the amount of unique sequences of this pattern in the data set.} \label{tab:percentage-enwiki-complete} \end{table} Beyond the general improvements there is an additional path for benefitting from generalized language models. As it is possible to better leverage the information in smaller and sparse data sets, we can build smaller models of competitive performance. For instance, when looking at Table~\ref{tab:fullPerplexityDataSize} we observe the $3$-gram MKN approach on the full training data set to achieve a perplexity of $586.9$. This model has been trained on $7$ GB of text and the resulting model has a size of $15$ GB and $742$ Mio.\ entries for the count and continuation count values. Looking for a GLM with comparable but better performance we see that the $5$-gram model trained on $1\%$ of the training data has a perplexity of $528.7$. This GLM model has a size of $9.5$ GB and contains only $427$ Mio. entries. 
So, using a far smaller set of training data we can build a smaller model which still demonstrates a competitive performance. \section{Conclusion and Future Work}\label{sec:future} \subsection{Conclusion} We have introduced a novel generalized language model as the systematic combination of skip $n$-grams and modified Kneser-Ney smoothing. The main strength of our approach is the combination of a simple and elegant idea with an empirically convincing result. Mathematically one can see that the GLM includes the standard language model with modified Kneser-Ney smoothing as a sub-model and is consequently a real generalization. In an empirical evaluation, we have demonstrated that for higher orders the GLM outperforms MKN for all test cases. The relative improvement in perplexity is up to $12.7\%$ for large data sets. GLMs also perform particularly well on small and sparse sets of training data. On a very small training data set we observed a reduction of perplexity by $25.7\%$. Our experiments underline that the generalized language models overcome in particular the weaknesses of modified Kneser-Ney smoothing on sparse training data. \subsection{Future Work} A desirable extension of our current definition of GLMs is the combination of the different lower order models in our generalized language model using different weights for each model. Such weights can be used to model the statistical reliability of the different lower order models. The value of the weights would have to be chosen according to the probability or counts of the respective skip $n$-grams. Another important step that has not been considered yet is compressing and indexing generalized language models to improve the performance of the computation and to be able to store them in main memory.
Regarding the scalability of the approach to very large data sets we intend to apply the MapReduce techniques from~\cite{P:ACL:2013:HeafieldPCK} to our generalized language models in order to obtain a more scalable calculation. This will also open the path to another interesting experiment. Goodman~\cite{Tech:2001:Goodman} observed that increasing the length of $n$-grams in combination with modified Kneser-Ney smoothing did not lead to improvements for values of $n$ beyond 7. We believe that our generalized language models could still benefit from such an increase. They suffer less from the sparsity of long $n$-grams and can overcome this sparsity by interpolating with the lower order skip $n$-grams while benefiting from the larger context. Finally, it would be interesting to see how applications of language models---like next word prediction, machine translation, speech recognition, text classification, or spelling correction---benefit from the better performance of generalized language models. \section{Introduction and Motivation} Language models are a probabilistic approach for predicting the occurrence of a sequence of words. They are used in many applications, e.g.\ word prediction~\cite{P:HLT:2005:BickelHS}, speech recognition~\cite{rabiner1993fundamentals}, machine translation~\cite{brown1990statistical}, or spelling correction~\cite{mays1991context}. The task language models attempt to solve is the estimation of the probability of a given sequence of words $w_{1}^{l} = w_1,\dots,w_l$. The probability $P(w_{1}^{l})$ of this sequence can be broken down into a product of conditional probabilities: {\small \begin{align} P(w_{1}^{l})=&P(w_1) \cdot P(w_2|w_1) \cdot \ldots \cdot P(w_l| w_1\cdots w_{l-1}) \nonumber\\ =&\prod_{i=1}^{l}P(w_i|w_1\cdots w_{i-1}) \label{eq:probseq} \end{align} }% Because of combinatorial explosion and data sparsity, it is very difficult to reliably estimate the probabilities that are conditioned on a longer subsequence.
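The chain-rule decomposition in equation~(\ref{eq:probseq}) can be sketched as follows; the conditional probability function is a placeholder for whatever estimator a concrete language model provides:

```python
def sequence_probability(words, cond_prob):
    """P(w_1^l) as the product of conditional probabilities
    P(w_i | w_1 ... w_{i-1}), following the chain rule."""
    p = 1.0
    for i, w in enumerate(words):
        # cond_prob(word, context) is supplied by the language model.
        p *= cond_prob(w, tuple(words[:i]))
    return p
```

With a toy estimator that always returns $0.5$, a three-word sequence receives probability $0.5^3 = 0.125$.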
Therefore, a Markov assumption is made: the true probability of a word sequence is approximated by restricting the conditional probabilities to depend only on a local context $w_{i-n+1}^{i-1}$ of $n-1$ preceding words rather than on the full sequence $w^{i-1}_1$. The challenge in the construction of language models is to provide reliable estimators for the conditional probabilities. While the estimators can be learnt---using, e.g., a maximum likelihood estimator over $n$-grams obtained from training data---the obtained values are not very reliable for events which may have been observed only a few times or not at all in the training data. Smoothing is a standard technique to overcome this data sparsity problem. Various smoothing approaches have been developed and applied in the context of language models. Chen and Goodman~\cite{J:CSL:1999:ChenG} introduced modified Kneser-Ney smoothing, which has been considered the state-of-the-art method for language modelling for the last 15 years. Modified Kneser-Ney smoothing is an interpolating method which combines the estimated conditional probabilities $P(w_{i}|w_{i-n+1}^{i-1})$ recursively with lower order models involving a shorter local context $w_{i-n+2}^{i-1}$ and their estimate for $P(w_{i}|w_{i-n+2}^{i-1})$. The motivation for using lower order models is that shorter contexts may be observed more often and, thus, suffer less from data sparsity. However, a single rare word towards the end of the local context will always cause the context to be observed rarely in the training data and hence will lead to an unreliable estimation. Because of Zipfian word distributions, most words occur very rarely and hence their true probability of occurrence may be estimated only very poorly.
One word that appears at the end of a local context $w^{i-1}_{i-n+1}$ and for which only a poor approximation exists may adversely affect the conditional probabilities in language models of all lengths---leading to severe errors even for smoothed language models. Thus, the idea motivating our approach is to involve several lower order models which systematically leave out one position in the context (one may think of replacing the affected word in the context with a wildcard) instead of shortening the sequence only by one word at the beginning. This concept of introducing gaps in $n$-grams is referred to as skip $n$-grams~\cite{J:CLS:1994:NeyEK,J:CSL:1993:HuangFHM}. Among other techniques, skip $n$-grams have also been considered as an approach to overcome problems of data sparsity~\cite{Tech:2001:Goodman}. However, to the best of our knowledge, language models making use of skip $n$-gram models have never been investigated to their full extent and over different levels of lower order models. Our approach differs as we consider all possible combinations of gaps in a local context and interpolate the higher order model with all possible lower order models derived from adding gaps in all different ways. In this paper we make the following contributions: \begin{enumerate} \item We provide a framework for using modified Kneser-Ney smoothing in combination with a systematic exploration of lower order models based on skip $n$-grams. \item We show how our novel approach can indeed easily be interpreted as a generalized version of the current state-of-the-art language models. \item We present a large scale empirical analysis of our generalized language models on eight data sets spanning four different languages, namely, a Wikipedia-based text corpus and the JRC-Acquis corpus of legislative texts.
\item We empirically observe that introducing skip $n$-gram models may reduce perplexity by $12.7\%$ compared to the current state-of-the-art using modified Kneser-Ney models on large data sets. Using small training data sets we observe even higher reductions of perplexity of up to $25.7\%$. \end{enumerate} The rest of the paper is organized as follows. We start with reviewing related work in Section~\ref{sec:relwork}. We will then introduce our generalized language models in Section~\ref{sec:notation}. After explaining the evaluation methodology and introducing the data sets in Section~\ref{sec:method} we will present the results of our evaluation in Section~\ref{sec:eval}. In Section~\ref{sec:discussion} we discuss why a generalized language model performs better than a standard language model. Finally, in Section~\ref{sec:future} we summarize our findings and conclude with an overview of further interesting research challenges in the field of generalized language models. \section*{Acknowledgements} We would like to thank Heinrich Hartmann for a fruitful discussion regarding the notation of the skip operator for $n$-grams. The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013), REVEAL (Grant agreement number 610928). \bibliographystyle{plain} \section{Generalized Language Models}\label{sec:notation} \subsection{Notation for Skip $n$-gram with $k$ Skips} We express skip $n$-grams using an operator notation. The operator $\skp{i}$ applied to an $n$-gram removes the word at the $i$-th position. For instance: $\skp{3} w_1 w_2 w_3 w_4 = w_1 w_2 \_ w_4 $, where $\_$ is used as a wildcard placeholder to indicate a removed word. The wildcard operator allows for a larger number of matches.
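The skip operator can be sketched as a small function on word tuples. The underscore string plays the role of the wildcard; the special case for position 1, which removes the word instead of inserting a wildcard, follows the convention for aligning with standard language models described in the text:

```python
def skip(i, ngram, wildcard="_"):
    """Apply the skip operator to an n-gram: position i (1-based) is
    replaced by a wildcard. Position 1 is removed entirely, matching
    the lower order models of standard language models."""
    if i == 1:
        return ngram[1:]
    return ngram[:i - 1] + (wildcard,) + ngram[i:]
```

For example, `skip(3, ("w1", "w2", "w3", "w4"))` yields `("w1", "w2", "_", "w4")`.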
For instance, when $c(w_1 w_2 w_{3a} w_4) = x$ and $c(w_1 w_2 w_{3b} w_4)=y$ then $c(w_1 w_2 \_ w_4)\geq x+y$ since at least the two sequences $w_1 w_2 w_{3a} w_4$ and $w_1 w_2 w_{3b} w_4$ match the sequence $w_1 w_2 \_ w_4$. In order to align with standard language models, the skip operator applied to the first word of a sequence will remove the word instead of introducing a wildcard. In particular, the equation $\skp{1} w_{i-n+1}^{i} = w_{i-n+2}^{i}$ holds, where the right hand side is the subsequence of $w_{i-n+1}^{i}$ omitting the first word. We can thus formulate the interpolation step of modified Kneser-Ney smoothing using our notation as ${\hat P}_{\text{MKN}}(w_i|w_{i-n+2}^{i-1}) = {\hat P}_{\text{MKN}}(w_i| \skp{1} w_{i-n+1}^{i-1})$. Thus, our skip $n$-grams correspond to $n$-grams of which we only use $k$ words, obtained by applying the skip operators $\skp{i_1}\dots\skp{i_{n-k}}$. \subsection{Generalized Language Model} Interpolation with lower order models is motivated by the problem of data sparsity in higher order models. However, lower order models omit only the first word in the local context, which might not necessarily be the cause for the overall $n$-gram to be rare. This is the motivation for our generalized language models to interpolate not only with one lower order model, where the first word in a sequence is omitted, but also with all other skip $n$-gram models, where one word is left out. Combining this idea with modified Kneser-Ney smoothing leads to a formula similar to~(\ref{eq:smoothing:mod-kneser-ney-high}).
{\small
\begin{align}
\label{eq:smoothing:glm-mkn-high}
P_{\text{GLM}}(w_i|w_{i-n+1}^{i-1})=&\frac{\text{max}\{ c(w_{i-n+1}^i)-D(c(w_{i-n+1}^i)),0\}}{c(w_{i-n+1}^{i-1})} \nonumber\\
&+\gamma_{high}(w_{i-n+1}^{i-1}) \sum_{j=1}^{n-1} \frac{1}{n\!-\!1} {\hat P}_{\text{GLM}}(w_i| \skp{j} w_{i-n+1}^{i-1})
\end{align}
}%
The difference between formula (\ref{eq:smoothing:mod-kneser-ney-high}) and formula (\ref{eq:smoothing:glm-mkn-high}) is the way in which lower order models are interpolated. Note the sum over all possible positions $j$ in the context $w_{i-n+1}^{i-1}$ at which we can skip a word, and the corresponding lower order models ${\hat P}_{\text{GLM}}(w_i| \skp{j} (w_{i-n+1}^{i-1}))$. We give all lower order models the same weight $\frac{1}{n-1}$. The same principle is recursively applied in the lower order models in which some words of the full $n$-gram are already skipped. As in modified Kneser-Ney smoothing we use continuation counts for the lower order models, incorporating the skip operator also for these counts. Incorporating this directly into modified Kneser-Ney smoothing leads, for the second highest order model, to:
{\small
\begin{align}
{\hat P}_{\text{GLM}}(w_i| \skp{j}(w_{i\!-\!n\!+\!1}^{i-1}))=&\frac{\text{max}\{ N_{1+}(\skp{j}(w_{i\!-\!n}^i))-D(c(\skp{j}(w_{i\!-\!n\!+\!1}^i))),0\}}{N_{1+}(\skp{j}(w_{i\!-\!n\!+\!1}^{i\!-\!1})\bullet)} \nonumber\\
&+\!\gamma_{mid}(\skp{j}(w_{i\!-\!n\!+\!1}^{i\!-\!1}))\!\sum_{k=1 \atop k\neq j}^{n-1} \frac{1}{n\!-\!2}{\hat P}_{\text{GLM}}(w_i|\skp{j}\skp{k}(w_{i\!-\!n\!+\!1}^{i\!-\!1})) \nonumber
\end{align}
}%
Given that we skip words at different positions, we have to extend the notion of the count function and the continuation counts. The count function applied to a skip $n$-gram is given by $c(\skp{j}(w_{\!i-\!n}^i))\!=\!\sum_{w_{j}}c(w_{\!i-\!n}^i)$, i.e.\ we aggregate the count information over all words which fill the gap in the $n$-gram.
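The aggregated count of a skip $n$-gram described above can be sketched as follows; the raw counts are held in a plain dictionary, and `None` plays the role of the wildcard in a skip pattern:

```python
def skip_count(counts, pattern):
    """Count of a skip n-gram: sum the raw n-gram counts over all
    n-grams matching the pattern, where None marks a skipped position."""
    total = 0
    for ngram, c in counts.items():
        if len(ngram) == len(pattern) and all(
            p is None or p == w for p, w in zip(pattern, ngram)
        ):
            total += c
    return total
```

For example, if `("w1","w2","a","w4")` occurs 3 times and `("w1","w2","b","w4")` occurs twice, the pattern `("w1","w2",None,"w4")` has an aggregated count of 5.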
Regarding the continuation counts we define:
{\small
\begin{align}
N_{1+}(\skp{j}(w_{\!i-\!n}^i)) &= |\{w_{i\!-\!n\!+\!j\!-\!1}\!:\!c(w_{i\!-\!n}^{i})\!>\!0\}| \\
N_{1+}(\skp{j}(w_{i\!-\!n}^{i\!-\!1})\bullet) &= |\{(w_{i\!-\!n\!+\!j\!-\!1},w_i)\!:\!c(w_{i\!-\!n}^{i})\!>\!0\}|
\end{align}
}%
As lowest order model we use---just as done for traditional modified Kneser-Ney~\cite{J:CSL:1999:ChenG}---a unigram model interpolated with a uniform distribution for unseen words. The overall process is depicted in Figure~\ref{fig:glmSmoothing}, illustrating how the higher level models are recursively smoothed with several lower order ones. \begin{figure}[btph] \includegraphics[width=0.9\columnwidth]{glm-pascal-interpolation.eps} \centering \caption{Interpolation of models of different order and using skip patterns. The value of $n$ indicates the length of the raw $n$-grams necessary for computing the model, the value of $k$ indicates the number of words actually used in the model. The wild card symbol \_ marks skipped words in an $n$-gram. The arrows indicate how a higher order model is interpolated with lower order models which each skip one word. The bold arrows correspond to the interpolation of models in traditional modified Kneser-Ney smoothing. The lighter arrows illustrate the additional interpolations introduced by our generalized language models. } \label{fig:glmSmoothing} \end{figure} \section{Related Work}\label{sec:relwork} Work related to our generalized language model approach can be divided into two categories: various smoothing techniques for language models and approaches making use of skip $n$-grams. Smoothing techniques for language models have a long history. Their aim is to overcome data sparsity and provide more reliable estimators---in particular for rare events.
The Good-Turing estimator~\cite{good1953population}, deleted interpolation~\cite{P:PRP:1980:JelinekM}, Katz backoff~\cite{J:TASSP:1987:Katz} and Kneser-Ney smoothing~\cite{P:ICASSP:1995:KneserN} are just some of the approaches to be mentioned. Common strategies of these approaches are to either back off to lower order models when a higher order model lacks sufficient training data for good estimation, to interpolate between higher and lower order models, or to interpolate with a prior distribution. Furthermore, the estimation of the amount of unseen events from rare events aims to find the right weights for interpolation as well as for discounting probability mass from unreliable estimators and to retain it for unseen events. The state of the art is a modified version of Kneser-Ney smoothing introduced in~\cite{J:CSL:1999:ChenG}. The modified version implements a recursive interpolation with lower order models, making use of different discount values for more or less frequently observed events. This variation has been compared to other smoothing techniques on various corpora and has been shown to outperform competing approaches. We will review modified Kneser-Ney smoothing in Section~\ref{subsec:MKN} in more detail as we reuse some ideas to define our generalized language model. Smoothing techniques which do not rely on using lower order models involve clustering~\cite{J:CL:1990:BrownSMP,J:CLS:1994:NeyEK}, i.e.\ grouping together similar words to form classes of words, as well as skip $n$-grams~\cite{J:CLS:1994:NeyEK,J:CSL:1993:HuangFHM}. Yet other approaches make use of permutations of the word order in $n$-grams~\cite{schukat1995permugram,Tech:2001:Goodman}. Skip $n$-grams are typically used to incorporate long distance relations between words. Introducing the possibility of gaps between the words in an $n$-gram allows for capturing word relations beyond the level of $n$ consecutive words without an exponential increase in the parameter space.
However, with their restriction to a subsequence of words, skip $n$-grams are also used as a technique to overcome data sparsity~\cite{Tech:2001:Goodman}. In related work different terminology and different definitions have been used to describe skip $n$-grams. Variations modify the number of words which can be skipped between elements in an $n$-gram as well as the manner in which the skipped words are determined (e.g.\ fixed patterns~\cite{Tech:2001:Goodman} or functional words~\cite{P:IJCNLP:2004:GaoS}). The impact of various extensions and smoothing techniques for language models is investigated in~\cite{Tech:2001:Goodman,P:ICASSP:2000:Goodman}. In particular, the authors compared Kneser-Ney smoothing, Katz backoff smoothing, caching, clustering, inclusion of higher order $n$-grams, sentence mixture, and skip $n$-grams. They also evaluated combinations of techniques, for instance, using skip $n$-gram models in combination with Kneser-Ney smoothing. The experiments in this case followed two paths: (1) interpolating a $5$-gram model with lower order distributions introducing a single gap and (2) interpolating higher order models with skip $n$-grams which retained only combinations of two words. Goodman reported results on small data sets, with a moderate improvement of cross entropy in the range of $0.02$ to $0.04$ in the best case. In~\cite{P:LREC:2006:GuthrieALG}, the authors investigated the increase of observed word combinations when including skips in $n$-grams. The conclusion was that using skip $n$-grams is often more effective for increasing the number of observations than increasing the corpus size. This observation aligns well with our experiments. \subsection{Review of Modified Kneser-Ney Smoothing} \label{subsec:MKN} We briefly recall modified Kneser-Ney smoothing as presented in~\cite{J:CSL:1999:ChenG}. Modified Kneser-Ney implements smoothing by interpolating between higher and lower order $n$-gram language models.
The highest order distribution is interpolated with the lower order distribution as follows:
{\small
\begin{align}
\label{eq:smoothing:mod-kneser-ney-high}
P_{\text{MKN}}(w_i|w_{i-n+1}^{i-1})=&\frac{\text{max}\{ c(w_{i-n+1}^i)-D(c(w_{i-n+1}^i)),0\}}{c(w_{i-n+1}^{i-1})} \nonumber\\
&+\gamma_{high}(w_{i-n+1}^{i-1}){\hat P}_{\text{MKN}}(w_i|w_{i-n+2}^{i-1})
\end{align}
}%
where $c(w_{i-n+1}^i)$ provides the frequency count of the sequence $w_{i-n+1}^i$ in the training data, $D$ is a discount value (which depends on the frequency of the sequence) and $\gamma_{high}$ depends on $D$ and is the interpolation factor to mix in the lower order distribution\footnote{The factors $\gamma$ and $D$ are quite technical and lengthy. As they do not play a significant role for understanding our novel approach we refer to Appendix~\ref{app:kmn} for details.}. Essentially, interpolation with a lower order model corresponds to leaving out the first word in the considered sequence. The lower order models are computed differently, using the notion of continuation counts rather than absolute counts:
{\small
\begin{align}
{\hat P}_{\text{MKN}}(w_i|w_{i-n+1}^{i-1})=&\frac{\text{max}\{ N_{1+}(\bullet w_{i-n+1}^i)-D(c(w_{i-n+1}^i)),0\}}{N_{1+}(\bullet w_{i-n+1}^{i-1}\bullet)} \nonumber\\
&+\gamma_{mid}(w_{i-n+1}^{i-1}) {\hat P}_{\text{MKN}}(w_i|w_{i-n+2}^{i-1})
\end{align}
}%
where the continuation counts are defined as $N_{1+}(\bullet w_{i-n+1}^i) = |\{w_{i-n}: c(w_{i-n}^{i})>0\}|$, i.e.\ the number of different words which precede the sequence $w_{i-n+1}^i$. The term $\gamma_{mid}$ is again an interpolation factor which depends on the discounted probability mass $D$ in the first term of the formula. \section{Results}\label{sec:eval} \subsection{Baseline} As a baseline for our generalized language model (GLM) we have trained standard language models using modified Kneser-Ney smoothing (MKN). These models have been trained for model lengths $3$ to $5$. For unigram and bigram models MKN and GLM are identical.
\subsection{Evaluation Experiments} The perplexity values for all data sets and various model orders can be seen in Table~\ref{tab:fullPerplexity}. In this table we also present the relative reduction of perplexity in comparison to the baseline. \begin{table}[tbhp] \centering \begin{tabularx}{0.45\textwidth}{Y | c c c} \toprule \multicolumn{1}{c}{} & \multicolumn{3}{c}{model length} \\ Experiments & $n=3$ & $n=4$ & $n=5$ \\ \midrule wiki-de MKN & 1074.1 & 778.5 & 597.1 \\ wiki-de GLM & \textbf{1031.1} & \textbf{709.4} & \textbf{521.5} \\ rel. change & 4.0\% & 8.9\% & 12.7\% \\ \midrule JRC-de MKN & 235.4 & 138.4 & 94.7 \\ JRC-de GLM & \textbf{229.4} & \textbf{131.8} & \textbf{86.0} \\ rel. change & 2.5\% & 4.8\% & 9.2\% \\ \hline wiki-en MKN & 586.9 & 404 & 307.3 \\ wiki-en GLM & \textbf{571.6} & \textbf{378.1} & \textbf{275} \\ rel. change & 2.6\% & 6.1\% & 10.5\% \\ \midrule JRC-en MKN & 147.2 & 82.9 & 54.6 \\ JRC-en GLM & \textbf{145.3} & \textbf{80.6} & \textbf{52.5} \\ rel. change & 1.3\% & 2.8\% & 3.9\% \\ \hline wiki-fr MKN & 538.6 & 385.9 & 298.9 \\ wiki-fr GLM & \textbf{526.7} & \textbf{363.8} & \textbf{272.9} \\ rel. change & 2.2\% & 5.7\% & 8.7\% \\ \midrule JRC-fr MKN & 155.2 & 92.5 & 63.9 \\ JRC-fr GLM & \textbf{153.5} & \textbf{90.1} & \textbf{61.7} \\ rel. change & 1.1\% & 2.5\% & 3.5\% \\ \hline wiki-it MKN & 738.4 & 532.9 & 416.7 \\ wiki-it GLM & \textbf{718.2} & \textbf{500.7} & \textbf{382.2} \\ rel. change & 2.7\% & 6.0\% & 8.3\% \\ \midrule JRC-it MKN & 177.5 & 104.4 & 71.8 \\ JRC-it GLM & \textbf{175.1} & \textbf{101.8} & \textbf{69.6} \\ rel. change & 1.3\% & 2.6\% & 3.1\% \\ \bottomrule \end{tabularx} \caption{Absolute perplexity values and relative reduction of perplexity from MKN to GLM on all data sets for models of order $3$ to $5$} \label{tab:fullPerplexity} \end{table} As we can see, the GLM clearly outperforms the baseline for all model lengths and data sets. 
In general we see a larger improvement in performance for models of higher orders ($n=5$). The gain for 3-gram models, by contrast, is negligible. For German texts the increase in performance is the highest ($12.7\%$) for a model of order $5$. We also note that GLMs seem to work better on broad domain text rather than special purpose text, as the reduction on the wiki corpora is consistently higher than the reduction of perplexity on the JRC corpora. We made consistent observations in our second experiment where we iteratively shrank the size of the training data set. We calculated the relative reduction in perplexity from MKN to GLM for various model lengths and the different sizes of the training data. The results for the English Wikipedia data set are illustrated in Figure~\ref{fig:GLMCorpusSize}. \begin{figure*}[tbhp] \centering \includegraphics[width=0.9\textwidth]{rel-perplex-glm.eps} \caption{Relative reduction in perplexity from MKN to GLM for varying sizes of the training data, evaluated on 100k test sequences from the English Wikipedia data set for different model lengths.} \label{fig:GLMCorpusSize} \end{figure*} We see that the GLM performs particularly well on small training data. As the size of the training data set becomes smaller (even smaller than the evaluation data), the GLM achieves a reduction of perplexity of up to $25.7\%$ compared to language models with modified Kneser-Ney smoothing on the same data set. The absolute perplexity values for this experiment are presented in Table~\ref{tab:fullPerplexityDataSize}. \begin{table}[tbhp] \centering \begin{tabularx}{0.45\textwidth}{Y | c c c} \toprule \multicolumn{1}{c}{} & \multicolumn{3}{c}{model length} \\ Experiments & $n=3$ & $n=4$ & $n=5$ \\ \midrule $80\%$ MKN & 586.9 & 404 & 307.3 \\ $80\%$ GLM & \textbf{571.6} & \textbf{378.1} & \textbf{275} \\ rel. change & 2.6\% & 6.5\% & 10.5\% \\ \midrule $8\%$ MKN & 712.6 & 539.8 & 436.5 \\ $8\%$ GLM & \textbf{683.7} & \textbf{492.8} & \textbf{382.5} \\ rel.
change & 4.1\% & 8.7\% & 12.4\% \\ \midrule $0.8\%$ MKN & 894.0 & 730.0 & 614.1 \\ $0.8\%$ GLM & \textbf{838.7} & \textbf{650.1} & \textbf{528.7} \\ rel. change & 6.2\% & 10.9\% & 13.9\% \\ \midrule $0.08\%$ MKN & 1099.5 & 963.8 & 845.2 \\ $0.08\%$ GLM & \textbf{996.6} & \textbf{820.7} & \textbf{693.4} \\ rel. change & 9.4\% & 14.9\% & 18.0\% \\ \midrule $0.008\%$ MKN & 1212.1 & 1120.5 & 1009.6 \\ $0.008\%$ GLM & \textbf{1025.6} & \textbf{875.5} & \textbf{750.3} \\ rel. change & 15.4\% & 21.9\% & 25.7\% \\ \bottomrule \end{tabularx} \caption{Absolute perplexity values and relative reduction of perplexity from MKN to GLM on shrunk training data sets for the English Wikipedia for models of order $3$ to $5$} \label{tab:fullPerplexityDataSize} \end{table} Our theory as well as the results so far suggest that the GLM performs particularly well on sparse training data. This conjecture has been investigated in a final experiment. For each model length we have split the test data of the largest English Wikipedia corpus into two disjoint evaluation data sets. The data set \textit{unseen} consists of all test sequences which have never been observed in the training data. The set \textit{observed} consists only of test sequences which have been observed at least once in the training data. Again we have calculated the perplexity of each set. For reference, the values of the \textit{complete} test data set are also shown in Table~\ref{tab:sparsity}. \begin{table}[tbhp] \centering \begin{tabularx}{0.47\textwidth}{Y | c c c} \toprule \multicolumn{1}{c}{} & \multicolumn{3}{c}{model length} \\ Experiments & $n=3$ & $n=4$ & $n=5$ \\ \midrule MKN\textsuperscript{complete} & 586.9 & 404 & 307.3 \\ GLM\textsuperscript{complete} & \textbf{571.6} & \textbf{378.1} & \textbf{275} \\ rel. change & 2.6\% & 6.5\% & 10.5\% \\ \midrule MKN\textsuperscript{unseen} & 14696.8 & 2199.8 & 846.1 \\ GLM\textsuperscript{unseen} & \textbf{13058.7} & \textbf{1902.4} & \textbf{714.4} \\ rel.
change & 11.2\% & 13.5\% & 15.6\% \\ \midrule MKN\textsuperscript{observed} & \textbf{220.2} & \textbf{88.0} & \textbf{43.4} \\ GLM\textsuperscript{observed} & 220.6 & 88.3 & 43.5 \\ rel. change & $-0.16\%$ & $-0.28\%$ & $-0.15\%$ \\ \bottomrule \end{tabularx} \caption{Absolute perplexity values and relative reduction of perplexity from MKN to GLM for the complete test data set and for its split into observed and unseen sequences, for models of order $3$ to $5$. The data set is the largest English Wikipedia corpus.} \label{tab:sparsity} \end{table} As expected, the overall perplexity values rise for the \textit{unseen} test case and decline for the \textit{observed} test case. More interestingly, the relative reduction of perplexity of the GLM over MKN increases from $10.5\%$ to $15.6\%$ on the \textit{unseen} test case. This indicates that the superior performance of the GLM on small training corpora and for higher-order models indeed comes from its good performance properties with regard to sparse training data. It also confirms that our motivation to produce lower-order $n$-grams by omitting not only the first word of the local context but systematically any word has been fruitful. However, for the \textit{observed} sequences the GLM performs slightly worse than MKN, although we find the relative change to be negligible.
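The quantities reported in the tables above follow standard definitions; the following minimal Python sketch illustrates them, assuming per-token log-probabilities are available from a trained model (function names are ours for illustration, and the final call uses the $n=5$ values from the $80\%$ row as an example):

```python
import math

def perplexity(log_probs):
    """Perplexity of a test set from per-token natural-log probabilities:
    PP = exp(-(1/N) * sum_i log p(w_i | context))."""
    return math.exp(-sum(log_probs) / len(log_probs))

def relative_reduction(pp_baseline, pp_model):
    """Relative reduction in perplexity from a baseline (e.g. MKN)
    to a model (e.g. GLM), as reported in the tables."""
    return (pp_baseline - pp_model) / pp_baseline

def split_by_coverage(test_sequences, training_sequences):
    """Partition test sequences into those observed at least once in
    the training data and those never observed ('unseen')."""
    seen = set(training_sequences)
    observed = [s for s in test_sequences if s in seen]
    unseen = [s for s in test_sequences if s not in seen]
    return observed, unseen

# Example with the n=5 values of the 80% experiment: MKN = 307.3, GLM = 275.0
print(f"{100 * relative_reduction(307.3, 275.0):.1f}%")  # -> 10.5%
```

Note that the relative reduction is computed against the baseline's perplexity, so identical absolute differences weigh more heavily when the baseline perplexity is low.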
\section{Introduction} While the academic discussion of algorithmic bias has a history of more than 20 years~\cite{friedman1996bias}, we have now reached a transitional phase in which this debate has taken a practical turn. The growing awareness of algorithmic bias and the need to responsibly build and deploy Artificial Intelligence (AI) have led increasing numbers of practitioners to focus their work and careers on translating these calls to action within their domains~\cite{madaio20, holstein2019improving}. New AI and machine learning (ML) responsibility or fairness roles and teams are being announced, product and API interventions are being presented, and the first public successes \textemdash and lessons learned \textemdash are being disseminated~\cite{haydn2020}. However, practitioners still face considerable challenges in attempting to turn theoretical understanding of potential inequities into concrete action~\cite{holstein2019improving, Krafft20}. Gaps exist between what academic research prioritizes and what practitioners need. The latter includes developing organizational tactics and stakeholder management~\cite{holstein2019improving, tutorialfat} rather than technical methods alone. Beyond the need for domain-specific translation, methods, and technical tools, responsible AI initiatives also require operationalization within \textemdash or around \textemdash existing corporate structures and organizational change. Industry professionals, who are increasingly tasked with developing accountable and responsible AI processes, need to grapple with inherent dualities in their role~\cite{metcalfowners}: they are both agents for change, based on their own values and/or their official role, and workers with careers in an organization with potentially misaligned incentives that may not reward or welcome change~\cite{madaio20}.
Most commonly, practitioners have to navigate the interplay of their organizational structures and algorithmic responsibility efforts with relatively little guidance. As Orlikowski points out, whether designing, appropriating, modifying, or even resisting technology, human agents are influenced by the properties of their organizational context~\cite{orlikowski1992duality}. This also means that some organizations can be differentially successful at implementing organizational changes. Individuals' strategies must adapt to the organizational context and follow what is seen as successful and effective behavior within that setting. Meyerson, for example, describes the concept of "tempered radicals." These are employees who slowly but surely create corporate change by pushing organizations through persistent small steps. Advocating for socially responsible business practices became part of tempered radicals' role over time. These employees create both individual and collective action, relying on their own perceived legitimacy, influence, and support built within their organizational context~\cite{meyerson2004tempered}. Interestingly, the tension between academic research and industry practice is visible in research communities such as FAccT, AIES, and CSCW, where people answering calls to action with practical methods are sometimes met with explicit discomfort or disapproval from practitioners working within large corporate contexts. Vice versa, practitioners effecting concrete change in practice may have achieved such results in ways that do not fit external research community expectations or norms. Within the discourse on unintended consequences of ML-driven systems, we have seen both successes and very public failures \textemdash even within the same corporation~\cite{haydn2020} \textemdash making it imperative to understand such dynamics.
This paper builds on the prior literature in both organizational change and algorithmic responsibility in practice to better understand how these still relatively early efforts are taking shape within organizations. We know that attention to the potential negative impacts of machine learning is growing within organizations, but how to leverage this growing attention to effectively drive change in the AI industry remains an open question. To this end, we present a study involving 26 semi-structured interviews with professionals in roles that involve concrete projects related to investigating responsible AI concerns or "fair-ML" (fairness-aware machine learning~\cite{selbst2019fairness}) in practice. We intend this to refer not only to fairness-related projects but also, more broadly, to projects related to the work on responsible AI and accountability of ML products and services, given the high degree of overlap in goals, research, and people working on these topics. Using the data from the semi-structured qualitative interviews to compare across organizations, we describe prevalent, emergent, and aspirational future states of organizational structure and practices in the responsible AI field, based on how often respondents identified the practice during the interview and whether the practice currently exists or is a desired future change. We investigate practitioners' perceptions of their own role, the role of the organizational structures in their context, and how those structures interact with adopting responsible AI practices. Based on those answers, we identify four major questions that organizations must now answer as responsible AI initiatives scale. Furthermore, we describe how respondents perceived transitions occurring within their current contexts, focusing on organizational barriers and enablers for change.
Finally, we present the outcome of a workshop where attendees reflected upon early insights of this study through a structured design activity. The main contribution of our work is the qualitative analysis of semi-structured interviews about the responsible AI work practices of practitioners in industry. We found that, most commonly, practitioners have to grapple with a lack of accountability, ill-informed performance trade-offs, and misaligned incentives within decision-making structures that are only reactive to external pressure. Emerging practices that are not yet widespread include the use of organization-level frameworks and metrics, structural support, and proactive evaluation and mitigation of issues as they arise. For the future, interviewees aspired to have organizations invest in anticipating and avoiding harms from their products, redefine results to include societal impact, integrate responsible AI practices throughout all parts of the organization, and align decision-making at all levels with an organization's mission and values. Preliminary findings were shared at an interactive workshop during a large machine learning conference, which yielded organization-level recommendations to (1) create veto ability across levels, (2) coordinate internal and external pressures, (3) build robust communication channels between and within levels of an organization, and (4) design initiatives that account for the interdependent nature of the responsible AI work practices we have heretofore discussed. \section{Literature review} \subsection{Algorithmic responsibility in practice} An almost overwhelming collection of principles and guidelines has been published to address the ethics and potential negative impact of machine learning. Mittelstadt et al.~\cite{mittelstadt2019ai} discuss over sixty sets of ethical guidelines, Zeng et al.~\cite{zeng2018linking} provide a taxonomy of 74 sets of principles, while Jobin et al.
find 84 different sets of principles~\cite{jobin2019global}. Even if there is relative high-level agreement between most of these abstract guidelines~\cite{jobin2019global, zeng2018linking}, how they are translated into practice in each context remains very unclear~\cite{mittelstadt2019ai}. Insight is available from how companies changed their practices in domains such as privacy and compliance in response to legislative directives~\cite{privacybook}. The active debate on how requirements in the EU's GDPR are to be interpreted~\cite{kaminski2020multi, malgieri2020concept}, however, illustrates the challenges of turning still nascent external guidance into concrete requirements. Krafft et al.~\cite{Krafft20} point out that, even between experts, there is a disconnect between policymakers' and researchers' definitions of such foundational terms as `AI'. This makes the application of abstract guidelines even more challenging and raises the concern that focus may be put on future, similarly abstract technologies rather than on current, already pressing problems. The diverse breadth of application domains for machine learning suggests that requirements for applying guidelines in practice should be steered by the specific elements of the technologies used, specific usage contexts, and relevant local norms~\cite{mittelstadt2019ai}. Practitioners encounter a host of challenges when trying to perform such work in practice~\cite{holstein2019improving}. Organizing and getting stakeholders on board are necessary to be able to drive change~\cite{tutorialfat}. This includes dealing with imperfection, and realizing that tensions and dilemmas may occur when "doing the right thing" does not have an obvious and widely agreed-upon answer~\cite{Fazelpour20, cramer2018assessing}.
It can be hard to foresee all potential consequences of systems while building them, and it can be equally difficult to identify how to overcome unwanted side-effects, or even why they occur technically~\cite{googleAudit2020}. A fundamental challenge is that such assessment should not simply be about technical, statistical disparities, but rather about active engagement to overcome the lack of guidance decision-makers have on what constitutes "just" outcomes in non-ideal practice~\cite{Fazelpour20}. Additional challenges include organizational pressures for growth, common software development approaches such as agile working that focus on rapid releases of minimum viable products, and incentives that motivate a focus on revenue within corporate environments~\cite{holstein2019improving, madaio20, haydn2020}. Taking inspiration from other industries where auditing processes are standard practice still means that auditing procedures have to be adjusted to product and organizational contexts, and require defining the goal of the audit in context~\cite{googleAudit2020}. This means that wider organizational change is necessary to translate calls to action into actual process and decision-making. \subsection{Organizational change and internal/external dynamics} Current challenges faced by responsible AI efforts can be compared to a wide selection of related findings in domains such as legal compliance~\cite{trevino1999managing}, where questions arise regarding whether compliance processes actually lead to more ethical behavior~\cite{krawiec2003cosmetic}, diversity and inclusion in corporate environments~\cite{barak2016managing, kalev2006best}, and corporate privacy practices~\cite{privacybook}.
All of these domains appear to have gone through a process that is mirrored in current algorithmic responsibility discussions: publication of high-level principles and values by a variety of actors, the creation of dedicated roles within organizations, and urgent questions about overcoming challenges, achieving "actual" results in practice, and avoiding investment in processes that are costly but do not deliver beyond cosmetic impact. As Weaver et al. pointed out in 1999, in an analysis of Fortune 1000 ethics practices~\cite{weaver1999corporate}, success relies not only on centralized principles but also on their diffusion into managerial practices in the wider organization. Interestingly, while external efforts can effectively put reputational and legislative pressure on companies, internal processes and audits are just as important, and they all interact. Internally, this is apparent in the legitimization of the work of `tempered radicals' in Meyerson's work~\cite{meyerson2004tempered}, as described in the introduction, and in these radicals' internal journey. External forces can help in more or less productive ways in that process. As discussed by Bamberger and Mulligan~\cite{privacybook}, for corporate privacy efforts in particular, both external and internal forces are necessary for work on corporate responsibility to be effective. Internally, they suggest focusing on getting onto board-level agendas to ensure attention and resourcing, having a specific boundary-spanning privacy professional to lead adoption of work practices, and ensuring `managerialization' of privacy practices by increasing expertise within business units and integration within existing practices. Externally, they suggest that creating positive ambiguity by keeping legislation broad can push more accountability onto firms for their specific domains, which can create communities and promote sharing around privacy failures.
They found that ambiguity in external privacy discussions could foster reliance on internal professionals' judgements, and thus created autonomy and power for those professionals identified as leading in privacy protection. In this way, they illustrate how ambiguity \textemdash rather than a fully defined list of requirements \textemdash can actually help promote more reflection and ensure that efforts go beyond compliance. A similar internal/external dynamic is visible within the algorithmic responsibility community. For example, in the Gender Shades project, Buolamwini and Gebru~\cite{buolamwini2018gender} presented not only an external audit of facial recognition APIs, but also reactions from the companies whose services were audited, to illustrate more and less effective responses. Such external audits can result in momentum inside companies to respond to external critique and, in selected cases, to make concrete changes to their products. Internal efforts in turn have access to more data, ensure that auditing can be completed before public releases, develop processes for companies, and allow companies to take responsibility for their impact~\cite{googleAudit2020}. Successes are beginning to emerge, and have ranged from positive changes in policy and process resulting from corporate activism, to tooling built for clients or internal purposes, to direct product "fixes" in response to external critique~\cite{haydn2020}. For example, Raji et al.~\cite{googleAudit2020} present an extensive algorithmic auditing framework developed by a small team within the larger corporate context of Google. They offer general methods such as data and model documentation~\cite{modelcards} and also tools such as metrics to enable auditing in specific contexts like image search~\cite{Mitchell20metrics}.
Implementing these methods and tools then requires corporate processes to provide the resources for such auditing and to ensure that results of audits impact decisions within the larger organizational structure. \subsubsection{Organizational research and structures} To situate our work in this broader context, we will briefly examine different perspectives on organizational structures. First, it is worthwhile to revisit what organizational theorist Wanda Orlikowski~\cite{orlikowski1992duality} called the duality of technology in organizations. Orlikowski discusses how people in organizations create and recreate meaning, power, and norms. Orlikowski's `structurational' model of technology comprises these human agents, the technology that mediates their task execution, and the properties of organizations. The latter institutional properties range from internal business strategies, control mechanisms, ideology, culture, division of labor and procedures, and communication patterns, to outside pressures such as governmental regulation, competition and professional norms, and wider socio-economic conditions. People's actions are then enabled and constrained by these structures, which are themselves the product of previous human actions. This perspective was augmented by Orlikowski~\cite{Orlikowski2000} to include a practice orientation; repeated interactions with technologies within specific circumstances also enact and form structures. Similarly, Dawson provides an extensive review of perspectives in studies on organizational change~\cite{dawson2019reshaping} and discusses the `process turn', where organizations are seen as ever-changing rather than as existing in discrete states; what may appear as stable routines may in actuality be fluid. Dawson emphasizes the socially constructed process and the subjective lived experiences: actors' collaborative efforts in organizations unfold over time, and dialogue between them shapes interpretations of changes.
Such dynamics are also present in what organizational theorist Richard Scott~\cite{scott2015organizations} summarized as the rational, natural, and open perspectives on organizations. `Rational' organizations were seen as `the machine', best suited to industries such as assembly-line manufacturing where tasks are specified by pre-designed workflow processes. The `natural' organization signified a shift in organizational ideology. No longer were people seen as mere appendages to the machines, but rather as crucial learners in relationship with machines. The metaphor is that of the organization as an `organism' with a strong interior vs. exterior boundary and a need to `survive'. Similar to an organism, the organization grows, learns, and develops. As a consequence of the survival ideology, the exterior environment can be seen as a threat against which the organism must adapt to survive. Scott, however, describes how the notion of `environment as threat' was replaced by the realization that environmental features are the conditions for survival. The central insight emerging from `open' systems thinking is that all organizations are incomplete and depend on exchanges with other systems. The metaphor became that of an `ecology'. Open systems are characterized by (1) interdependent flows of information and (2) interdependent activities, performed by (3) a shifting coalition of participants by way of (4) linking actors, resources and institutions, in order to (5) solve problems in (6) complex environments. For responsible AI efforts to succeed, then, organizations must successfully navigate the changes necessary within `open' systems. \subsubsection{Multi-stakeholder communities as meta-organizational structures} The described `ecologies', particularly in `open' systems, contain formal and informal meta-organizational structures, which have been studied in other contexts and are of growing importance to the field of responsible AI.
Organizations often interact with each other through standards bodies, communities, processes, and partnerships. These meta-processes can have as goals (1) producing responses to proposed regulations, standards, and best practices, (2) fostering idea exchange between silos, and (3) self-regulation. Organizations participate in multi-stakeholder initiatives to achieve a number of their own goals, including advocating for business interests, keeping up to date on industry trends, and having a voice in shaping standards or regulations that they will then be subjected to. Berkowitz~\cite{Berkowitz2018} discusses the shift towards governance in sustainability contexts, and the key role that meta-organizations can have in facilitating meta-governance of corporate responsibility beyond simply complying with legislation. She identifies six capabilities needed for sustainable innovations: (1) anticipation of changes and negative impacts of innovation, (2) resilience to changes, (3) reflexivity, (4) responsiveness to external pressures and changing circumstances, (5) inclusion of stakeholders beyond immediate decision makers, and (6) comprehensive accountability mechanisms. Meta-organizations can promote inter-organizational learning and the building of these six capabilities. Similarly, within the field of AI, multi-stakeholder organizations, standards, and self-organized projects have been created in recent years to acknowledge the need for interdisciplinary expertise to grapple with the wide-reaching impacts of AI on people. Many AI researchers have been vocal proponents of expanding the number of perspectives consulted and represented, including stakeholders such as policymakers, civil society, academics from other departments, impacted users, and impacted nonusers. Reconciling perspectives from diverse stakeholders presents its own set of challenges that change depending on the structure of the organization.
Participatory action offers relevant frameworks for characterizing options for decision making in multi-stakeholder contexts. Decision making can be centralized within a formal organization, with stakeholders being informed, consulted, involved, or collaborated with, or else stakeholders can self-organize informally to achieve the same levels of participation. The structures present at a meta-organizational level will differ and enable the application of different group-level decision-making processes. For example, ad hoc groups of researchers have self-organized to create unconference events and write multi-stakeholder reports, including reports with large groups of authors (e.g.~\cite{brundage2020trustworthy}) based originally on discussions within workshops held under the Chatham House Rule, while others have created new formal organizations, conferences such as AIES or FAccT, or research institutes. In a similar manner to Berkowitz~\cite{Berkowitz2018}, we focus here on the "how" of achieving wider adoption of responsible AI work practices in industry. We further investigate how practitioners experience these changes within the context of different organizational structures, and what they see as the shifts that drive or hinder their work within their organizations. \section{Study and Methods} \label{studyandmethods} Our motivation for this work was to identify enablers that could shift organizational change towards adopting responsible AI practices. Responsible AI research has influenced organizational practices in recent years, with individuals and groups within companies increasingly tasked with translating research into action, whether formally or informally. Our research applies theories and frameworks of organizational structure and change management to characterize the growing practice of applied responsible AI.
To better understand the implications of organizational structure on day-to-day responsible AI work and outcomes, we interviewed practitioners who are actively involved in these initiatives, either by themselves or within a larger team. We conducted 26 semi-structured interviews with people based on four continents, from 19 organizations. Except for two 30-minute interviews, all other interviews lasted between 60 and 90 minutes. Participants were given a choice of whether to allow researchers to record the interview for note-taking purposes. A total of 11 interviews were recorded. In cases where the interview was not recorded, we relied on writing down the respondents' answers to the questions during the course of the interview. In several cases, participants requested to additionally validate any written notes and make necessary clarifications before their use in the study to ensure that their anonymity was not compromised. \input{table_respondents_roles} \subsubsection{Sampling technique} Participants were recruited through convenience sampling combined with snowball sampling, in which participants recommended other interviewees. Three recruiting criteria were used to find interviewees: (1) did they work closely with product, policy, and/or legal teams, (2) did the outputs of their work have a direct impact on ML products and services, and (3) were some aspects of their work related to the field of responsible AI. We filtered out individuals whose roles were solely research-focused, although interviewees may also be active contributors to responsible AI research in addition to their existing work stream. Through the ongoing conversations we had with practitioners before as well as after conducting the qualitative interviews, we aimed to establish a substantial level of trust and transparency, which we felt was necessary given the sensitive nature of the topics discussed.
This allowed for more open, nuanced, and in-depth discussions where practitioners felt that there was a shared understanding between interviewers and interviewees about the often unvoiced challenges in responsible AI work. We intentionally sought to interview as diverse a group of practitioners as possible to capture perspectives from a broad range of organizational contexts. In \emph{Table~\ref{tab:roles-table}} we summarize the functional roles of the interviewees who participated in the project and how they describe their responsible AI work. Participants came from a wide variety of functions, including AI Strategy, Engineering, Human Resources, Legal, Marketing and Sales, Machine Learning Research, Policy, and Product Management. Among the 26 participants, ten had an educational background in Social Science, eight in Computer Science, seven in Law and Policy, and one practitioner had a degree in Economics. The majority of respondents were geographically located in the US (21 out of 26), two participants were in the UK, and the rest of the respondents were based in Australia, Denmark, and Japan. The average tenure of interviewees at their organization was 5 years and 5 months; more than one third of the practitioners (9 people) had been with their company for more than five years, and 2 people had spent decades with their organization. Lastly, in terms of organizational sectors, 11 practitioners worked in business-to-business organizations, 2 in business-to-consumer organizations, and 13 in organizations which were both business-to-business and business-to-consumer. \subsubsection{Interview protocol} The script and questions for the semi-structured interviews were reviewed by an industrial-organizational psychologist and by responsible AI practitioners within three different organizations.
Questions were grouped into different sections, exploring the current state of responsible AI work, the evolution of the work over time, how the work is situated within the organization, how responsibility and accountability for the work are distributed, performance review processes and incentives, and what desired aspirational future structures and processes would enable more effective work. The semi-structured nature of the interview provided standard questions that were asked of all participants while allowing interviewers the flexibility to follow up on interesting insights as they arose during interviews. The full set of questions can be found in \emph{Appendix \ref{questionnaire}}. \subsubsection{Analysis} To analyze the interview data, we utilized a standard methodology from contextual design: interpretation sessions and affinity diagramming~\cite{holtzblatt1997contextual}. Through a bottom-up affinity diagramming approach, we iteratively assigned codes to various concepts and themes shared by the interviewees. We iteratively grouped these codes into successive higher-level themes and studied the relationships between them. \subsubsection{Workshop} In addition to semi-structured interviews, we organized a workshop at a machine learning conference attended by a highly self-selected group of people interested in responsible AI from industry, academia, government, and civil society. The first half of the workshop was a presentation of preliminary insights from the literature review and results sections of this paper. We then conducted an interactive design exercise where participants were organized into 13 groups of 4 to 6 individuals each. Each group was given a scenario description of an AI organization that exemplified the prevalent work practices discussed in the \emph{Results: Interviews} section below.
The facilitators guided groups through a whiteboard discussion of the following questions: \begin{itemize} \item What are examples of emerging responsible AI work practices in the context of the scenario? \item What are examples of structures or processes in the prevalent organizational structure which are outside of the scope of responsible AI work but which act to protect and enable emerging fair-ML practices? \item What are examples of outlier practices outside of the prevalent practices in the scenario? \item What connections exist between these practices and organizational structures? \item What practices or organizational structures could enable positive self-reinforcing outcomes through making the connections stronger? \end{itemize} The workshop activity was designed to allow participants to (1) gain a deeper understanding of the responsible AI challenges by connecting study findings to their own experiences, (2) collaboratively explore which organizational structures could enable the hypothetical organization developing AI products and services to resolve those challenges, and (3) map interdependencies and feedback loops that exist between practices to identify potentially effective recommendations to address the challenges of implementing responsible AI initiatives. \section{Results: Interviews} \label{results} We start with a high-level overview of our findings, followed by a discussion of the key themes that emerged from the conducted interviews. \subsection{Overview} About a quarter of the participants (7 out of 26) had initiated their responsible AI work in their current organization within the past year, while 73\% (19 out of 26) worked on efforts that had started more than a year earlier. More than half of the interviewees (14 out of 26) worked on their initiatives as individuals and not as part of a team.
About 40\% of the respondents (11 out of 26) reported that they volunteer time outside of their official job function to do their work on responsible AI initiatives, while the remaining 15 participants had official roles related to responsible AI. Among the 15 interview participants with official roles related to responsible AI, 8 individuals were externally hired into their current role, while 7 transitioned into it from other roles within their organization. Interviewees who changed the focus of their existing roles or transitioned into a responsible AI-related role were most commonly previously in project management roles (4 out of 7), followed by research (2 out of 7) and legal (1 out of 7). The majority of participants who had official responsible AI-related roles reported benefiting from an organizational structure that allowed them to craft their own role in a very dynamic and context-specific way. Since the beginning of our conversations, we noticed that practitioners used different language in the way they described their work and how it relates to responsible AI. We observed commonalities in the way practitioners from each function framed their responsible AI work (see \emph{Table~\ref{tab:roles-table}}). For example, while project managers described their work in terms of product life-cycles and industry trends, legal practitioners discussed the responsible AI aspects of their role in terms of comprehensive pillars and ethical governance guidelines. We note that a few interviewees described going through stress-related challenges in relation to their responsible AI work. During some of the interviews, we noticed a change of tone in the interviewees' voices when discussing questions related to ethical tensions, accountability, risk culture, and other topics. Furthermore, some respondents had left their organizations between when we conducted the interviews in late 2019 and when we submitted this paper in October 2020.
While we acknowledge the nascent state of responsible AI functions, these observations could point to opportunities for further study. There were various common perspectives that we heard practitioners express repeatedly. We saw the need for a multi-faceted thematic analysis which encompasses three intuitive clusters of data: (1) currently dominant or prevalent practices, (2) emerging practices, and (3) the aspirational future context for responsible AI work practices in industry: \begin{itemize} \item The prevalent practices comprise what we saw most commonly in the data. \item The set of emerging practices includes practices which are shared among practitioners but less common than prevalent practices. \item The aspirational future consists of the ideas and perspectives practitioners shared when explicitly asked about what they envision for the ideal future state of their work within their organizational context. \end{itemize} Within the thematic analysis (see \emph{Table~\ref{table:1_overview}}), we found four related but distinct key questions that every organization must develop processes and structures to answer: \begin{itemize} \item When and how do we act? \item How do we measure success? \item What are the internal structures we rely on? \item How do we resolve tensions? \end{itemize} As organizations seek to scale responsible AI practices, they will have to transition from the prevalent or emerging ways of answering these questions to the structures and processes of the aspirational future. It is important to note that not all emerging practices we found in the data will necessarily lead to the aspirational future. In what follows, we provide details about practitioners' personal perspectives and experiences within the individual themes and questions. \input{table_overview} \subsection{When and how do we act?} One transition we identified in the data is how organizations choose when and how to act.
This includes questions of who chooses to prioritize what information within which decision-making processes. We found that many organizations behave reactively, fewer are now proactive, and respondents aspire for their organizations to become anticipatory in the future. \subsubsection{Prevalent work practices} Most commonly, interviewees described responsible AI work in their organizations as reactive. The most prevalent incentives for action were catastrophic media attention and decreasing media tolerance for the status quo. Many participants reported that responsible AI work can be perceived as a "taboo topic" in their organizations. Raising awareness when bringing up topics about algorithmic fairness or inequity in harm at work was a challenge for one interviewee, who shared: "It was an organizational challenge for us, it's hard as when something is so new - we run into 'Whose job is this?'" We found that the uncertainty and unwillingness to engage in a deeper understanding of responsible AI issues may lead to unproductive discussions or outright dismissal of important but often unvoiced concerns. Responsible AI work is often not compensated, as in the case of the 40\% of respondents volunteering their time to work on responsible AI initiatives, or is perceived as ambiguous or too complicated for the organization's current level of resources. In response to the question about how interviewees are recognized for their work, one interviewee shared: "many of the people volunteering with our team had trouble figuring out how to put this work in the context of their performance evaluation." In several cases, the formation of a full-time team to conduct responsible AI work was only catalyzed by the results from \textit{volunteer}-led investigations of potential bias issues within models that were en route to deployment.
The volunteers for these investigations went far beyond their existing role descriptions, sometimes risking their own career progression, to take on additional uncompensated labor to prevent negative outcomes for the company. This highlights the reactive nature of organizational support for responsible AI work in prevalent practice. Legal compliance was another factor that participants said could motivate organizational action. Beyond legal concerns, some practitioners reported being able to use reputational risk as leverage to increase investment in responsible AI work, bringing hypothetical questions like "What if ProPublica found out about ...?" into decision-making meetings. Participant responses in this section illustrate how a reactive organizational stance towards responsible AI work shifts the labor and cost of identifying and addressing issues onto the individual worker. \subsubsection{Emerging work practices} In emerging practices on how and when to act, a few organizations have implemented proactive responsible AI evaluation and review processes for their ML systems, with the work and accountability often distributed across several teams. For example, some respondents reported support and oversight from legal teams. In a few cases, interviewees spoke with enthusiasm about the growing number of both internal and external educational initiatives. These included onboarding and upskilling employees through internal responsible AI curricula to educate employees about responsible AI-related issues and risks, as well as externally facing materials to educate consumers and customers. Respondents referred to these efforts as an organization-level proactive investment to set up the organization to better address future responsible AI issues.
Furthermore, a few participants described the availability of, or their involvement in preparing, externally facing materials to educate their organization's customers or potential customers about responsible AI considerations in practice. A small number of interviewees reported that their work on responsible AI is acknowledged and explicitly part of their compensated role, in contrast to the volunteers in the prevalent practices theme; this is another organization-level difference between prevalent and emerging practices. On the other hand, emerging practices still show how individuals, rather than organizational processes or structures, remain the engine of proactive practices. In a few cases, proactive champions organizing grassroots actions and internal advocacy with leadership have made responsible AI a company-wide priority, which then sometimes made it easier for people to get resourcing for responsible AI initiatives and to establish proactive organization-wide processes. Some participants reported leveraging existing internal communication channels to organize responsible AI discussions. One participant even captured screenshots of problematic algorithmic outcomes and circulated them among key internal stakeholders to build support for responsible AI work. Similar to prevalent practices, these individuals are tasked with the labor of using existing organizational structures to build organizational support for their responsible AI work in addition to doing the responsible AI work itself. The difference in emerging organizational practice is that these individuals are finding more success in instilling a proactive, rather than reactive, mindset for approaching algorithmic responsibility. \subsubsection{Mapping the aspirational future} In an ideal future, many interviewees envisioned organizational frameworks that encourage an anticipatory approach.
In the future state, an individual wanting to engage with algorithmic responsibility issues would not necessarily need to do the organizational labor of changing structures, as in the prevalent and emerging practices, but rather be supported by organization-wide resources and processes to focus their efforts directly on responsible AI work. In this aspirational future, respondents envisioned technical tools to enable large-scale implementation of responsible AI evaluations both internally and externally. Internally, well-integrated technical tools would assess algorithmic models developed by product teams and feed seamlessly into organization-wide evaluation processes that identify and address risks of pending ML systems before they go live in products. Externally, customers using the algorithmic models in different contexts would have oversight through explicit assessments, which would feed information about identified risks back to the organization. Their organizations would utilize clear and transparent communication strategies to explain the process and results of these evaluations both internally within the entire organization and externally with customers and other stakeholders. One practitioner questioned if their team should even engage with customers who do not agree to deploy an assessment framework ex-ante, suggesting a new baseline expectation for customers to also play their role in faster feedback loops for identifying and mitigating risk. Respondents reported that in the ideal future, product managers would have an easier way to understand responsible AI concerns relevant to their products without needing to regularly read large numbers of research papers, which could be supported by organization-level teams, tools, and/or education to synthesize and disseminate relevant knowledge.
Several participants expressed that the traditional engineering mindset would need to become better aligned with the dynamic nature of responsible AI issues, which cannot be captured by predefined quantitative metrics. Anticipatory responsible AI frameworks could allow organizations to respond to responsible AI challenges in ways which uphold the organization's code of ethics and society's values at large. \subsection{How do we measure success?} \label{measuresuccess} Another transition we saw our respondents navigating in their work and organizations is how organizations measure success. Many responsible AI initiatives are relatively new and aim to measure the societal impact of technology, which is a departure from traditional business metrics like revenue or profitability. Learning organizations need to make an active change to better account for this shift. Respondents reported that many challenges in their prevalent work practices arise from the inability to adequately use existing metrics to account for the goals of responsible AI work, while emerging practices aim to begin rewarding success that falls outside of pre-existing narrow definitions. In an aspirational future, organizations value responsible AI work, and processes reflect that at every level. \subsubsection{Prevalent work practices} The majority of respondents reported that one of the biggest challenges for their responsible AI work is the lack of metrics that adequately capture its true impact. The majority of respondents also expressed at least some degree of difficulty in communicating the impact of their work. Combined, these factors hinder them from fully illustrating the importance of responsible AI work for the organization's success, which in turn keeps them from being able to receive adequate credit and compensation for their true impact. The challenges of measuring the impact of responsible AI are a deeply researched topic in the field of fairness, accountability, and transparency of ML.
Through our interview questions, we have tried to further disentangle the perspectives on this challenge in industry. For example, some industry practitioners reported that the use of inappropriate and misleading metrics is a bigger threat than the lack of metrics. Respondents shared that academic metrics are very different from industry metrics, which include benchmarks and other key performance indicators tracked by product teams, such as metrics related to customer retention and development (click rate, time spent using a product, etc.). Project managers reported trying to implement academic metrics in order to both leverage academic research and facilitate a collaboration between research and product teams within their organization. One of the interviewees shared that in their personal perspective, "industry-specific product-related problems may not have sufficient research merit or more specifically an ability for the researcher to publish, sometimes because of privacy reasons data used in the research experiments may not allow researchers to be recognized for their work." Such constraints, whether due to the nature of the problem or to privacy restrictions on data, may ultimately discourage researchers from investigating real-world responsible AI issues. Practitioners embedded in product teams explained that they often need to distill what they do into standard metrics such as number of clicks, user acquisition, or churn rate, which may not apply to their work. Most commonly, interviewees reported being measured on delivering work that generates revenue.
They spoke at length about the difficulties of measuring responsible AI impact in terms of impact on the business "bottom line." In some cases, practitioners framed their impact in terms of profitability by arguing that mitigating responsible AI risks prior to launch is much cheaper than waiting for and fixing problems that arise after launch, where real-world harm and reputational risk come into play. Again, the prevalent work practices reveal individuals working on responsible AI taking on the extra labor of trying to translate their work into ill-fitting terms and metrics that are not designed to measure or motivate success on responsible AI outcomes. Respondents struggled to communicate the impact of their work; the metrics-related challenges they described included: (1) product teams often have short-term development timelines and thus do not consider metrics that aim to encompass long-term outcomes; (2) time pressure within fast-paced development cycles leads individuals to focus on short-term and easier-to-measure goals; (3) qualitative work is not prioritized because it requires skills that are often not present within engineering teams; (4) leadership teams may have an expectation for "magic," such as finding easy-to-implement solutions, which in reality may not exist or work; (5) organizations do not measure leadership qualities and (6) do not reward the visionary leaders who proactively address the responsible AI issues that arise; (7) performance evaluation processes do not account for responsible AI work, making it difficult or impossible for practitioners to be rewarded or recognized for their responsible AI contributions. \subsubsection{Emerging work practices} A few interviewees reported that their organizations have implemented metrics frameworks and processes in order to evaluate responsible AI risks in products and services.
Practitioners talked enthusiastically about how their organizations have moved beyond ethics washing \cite{bietti2020ethics} in order to accommodate diverse and long-term goals aligned with algorithmic responsibility and harm mitigation, the goals of a responsible AI practice. Interviewees identified the following enablers for this shift in organizational culture: (1) rewarding a broad range of efforts focused on internal education; (2) rewarding risk-taking for the public good; (3) following up on potential issues with internal investigations; (4) creating organizational mechanisms that enable cross-functional collaboration. These emerging organizational enablers begin to set up the organizational scaffolding of a work environment that supports individuals working on responsible AI as they seek to change how their organization assigns value to work to better align with societally-focused outcomes. \subsubsection{Mapping the aspirational future} In an aspirational future where responsible AI work is effective and fully supported by organizational structures, interviewees reported that their organizations would measure success very differently than in today's prevalent practices: (1) their organizations would have a tangible strategy to incorporate responsible AI practices and issues into the key performance indicators of product teams; (2) teams would employ a data-driven approach to manage ethical challenges and ethical decisions in product development; (3) employee performance evaluation processes would be redefined to encompass qualitative work; (4) organizational processes would enable practitioners to collaborate more closely with marginalized communities, while taking into account legal and other socio-technical considerations; (5) what is researched in academic institutions would be more aligned with what is needed in practice; (6) collaboration mechanisms would be broadly utilized.
Specifically, participants discussed two kinds of mechanisms to enable collaboration: (1) working with external groups and experts in the field to define benchmarks prior to deployment, and (2) working with external groups to continuously monitor performance from multiple perspectives after deployment. \subsection{What are the internal structures we rely on?} In order to better enable responsible AI work, individuals need to reexamine the properties of their organizations. This involves leveraging what Orlikowski called the "structurational" model of technology in a specific applied context~\cite{Orlikowski2000}. In the prevalent practices, organizations do not have internal structures to ensure accountability for responsible AI work, which can then be neglected without consequences due to role uncertainty. Distributed accountability on top of existing structures was reported in emerging practices, while in the aspirational future, responsible AI work would become integrated into all product-related processes to ensure accountability. \subsubsection{Prevalent work practices} Most commonly, participants reported ambiguity and uncertainty about role definitions and responsibilities within responsible AI work at their organization, sometimes due to how rapidly the work is evolving. Multiple practitioners expressed that their responsible AI related concerns were heard on account of their seniority in their team and organization. In response to "Do you have autonomy to make impactful decisions?", one data science practitioner who was volunteering time with the responsible AI team shared, "More senior people are making the decisions. I saw ethical concerns but there was difficulty in communicating between my managers and the [responsible AI] team. People weren't open for scrutinization."
This illustrates the fragility of the prevalent practice, since accountability relies on the individual's own resources, interests, and situational power rather than on scalable and systemic organizational structures and processes that would ensure the desired outcomes. Several interviewees talked about the lack of accountability across different parts of their organization, naming reputational risk as the biggest incentive their leadership sees for responsible AI work, again tying accountability to individual incentives to take responsibility rather than ensuring accountability through organization-wide processes and policies. \subsubsection{Emerging work practices} Interviewees shared these emerging organizational structures as enablers for responsible AI work: (1) flexibility to craft their roles dynamically in response to internal and external factors; (2) distributed accountability across organizational structures and among teams working across the entire product life cycle; (3) accountability integrated into workflows; (4) processes to hold teams accountable for what they committed to; (5) escalation of responsible AI issues to management; (6) responsible AI research groups that contribute to spreading internal awareness of issues and potential solutions; (7) internal review boards that oversee responsible AI topics; (8) publication and release norms that are consistently and widely followed; (9) cross-functional responsible AI roles that work across product groups, are embedded in product groups, and/or collaborate closely with legal or policy teams. Participants also reported being increasingly cognizant of external drivers for change, such as cities and governments partnering to create centers of excellence, for example, New York's Capital District AI Center of Excellence.
As before, these emerging structures begin to shift the locus of responsibility for managing organizational change away from the individual who seeks to do responsible AI work (which is not necessarily the same as organizational change management work) and onto organizational processes and structures that can distribute that labor in an appropriate manner. \subsubsection{Mapping the aspirational future} In the future, interviewees envisioned internal organizational structures that would enable responsible AI responsibilities to be integrated throughout all business processes related to the work of product teams. One practitioner suggested that while a product is being developed, there could be a parallel development of product-specific artefacts that assess and mitigate potential responsible AI issues. The majority of interviewees imagined that responsible AI reviews and reports would be required prior to the release of new features. New ML operations roles would be created as part of responsible AI audit teams. Currently, this work falls within ML engineering, but respondents identified the need for new organizational structures that would ensure that responsible AI concerns are being addressed while allowing ML engineers to be creative and experiment. For example, one practitioner suggested that a responsible AI operations role could act as a safeguard and ensure that continuous responsible AI assessments are being executed once a system is deployed. Some interviewees described the need for organizational structures that enable external critical scrutiny. Scale could be achieved through partnership-based and multistakeholder frameworks. In the future, public shaming of high-stakes AI failures would provide motivation towards building shared industry benchmarks, and structures would exist to allow organizations to share benchmark data with each other.
External or internal stakeholders would need to call out high-impact failure use cases to enable industry-wide learning from individual mistakes. Industry-wide standards could be employed to facilitate distributed accountability and the sharing of data, guidelines, and best practices. Of note is that in the aspirational future, organizational structures and processes incorporate external parties and perspectives, providing organizations better channels to understand their societal impact. \subsection{How do we resolve tensions?} Lastly, responsible AI work brings new types of tensions that organizations may not yet have processes to resolve, especially related to questions of ethics and unintended consequences of socio-technical systems like AI. This requires organizations to update their prevalent practices in their transitions to better enable responsible AI work. Resolving tensions requires organizations to choose what to prioritize in situations where trade-offs are necessary. The practices described below show the different approaches that organizations are taking in prevalent practices, in emerging practices, and in the aspirational future. \subsubsection{Prevalent work practices} The majority of respondents reported that they see misalignment between individual, team, and organizational level incentives and mission statements within their organization. Often, individuals reported doing ad hoc work based on their own values and personal assessment of relative importance. Similarly, the spread of information relies on individual relationships. Practitioners reported relying on their personal relationships and ability to navigate multiple levels of obscured organizational structures to drive responsible AI work. Related to the question "What are the ethical tensions that you/your team faces?", one of the interviewees shared, "We often work on prototypes for specific geographic units which are not meant to be scaled, it's really meant not to be scaled.
We need to step in and make that clear. Also sometimes people state the model is complete, we need a disclaimer that we're still updating and validating it, it is work in progress." Many of the interviewees had to navigate tensions related to scale and expectations on a daily basis. As in the other transitions, this highlights a prevalent practice of relying on individuals to decide how to resolve tensions rather than on organizational processes that would support individuals in evaluating tensions in alignment with the organization's mission or values. This creates additional labor and uncertainty for individuals doing responsible AI work in organizations exhibiting prevalent practices. \subsubsection{Emerging work practices} One of the biggest challenges practitioners reported was that as responsible AI ethical tensions are identified, overly rigid organizational incentives may demotivate addressing them, compounded by organizational inertia which sustains those rigid incentives. In this case, although the organizational structures in the emerging work practice shift labor away from individuals onto organization-wide processes, the processes themselves are not sufficiently aligned with the ultimate goals of responsible AI. This makes the transition from prevalent to emerging practice one that steers the organization away from, rather than towards, the aspirational future where organizations resolve tensions in a way that encourages responsible AI work. Respondents described that in this situation, research and product teams struggle to justify research agendas related to responsible AI. This was caused by competing priorities that may align better with existing incentives and metrics for success, which, as reported in \emph{Section \ref{measuresuccess}: How do we measure success?}, do not adequately account for the impact of responsible AI initiatives.
Interviewees identified several factors that limit an organization's ability to resolve tensions in a manner that enables, instead of hinders, responsible AI work: (1) incentives that reward complexity whether or not it is needed - individuals are rewarded for complex technical solutions; (2) lack of clarity around expectations and internal or external consequences; (3) the impact of responsible AI work being perceived as diffuse and hard to identify; (4) lack of adequate support and communication structures - whether interviewees were able to address responsible AI tensions often depended on their network of high-trust relationships within the organization; (5) lack of data for sensitive attributes, which can make it impossible to evaluate certain responsible AI concerns. \subsubsection{Mapping the aspirational future} When asked about their vision for the future of their responsible AI initiatives, several respondents wanted responsible AI tensions to be addressed in better alignment with organization-level values and mission statements. They imagined that organizational leadership would understand, support, and engage deeply with responsible AI concerns, contextualized for their specific organization. Responsible AI would be prioritized as part of the high-level organizational mission and then translated into actionable goals down to the individual level through established processes. Respondents wanted the spread of information to go through well-established channels so that people know where to look and how to share information. With communication and prioritization processes in place, finding a solution or best practice in one team or department would lead to rapid scaling via existing organizational protocols and internal infrastructure for communications, training, and compliance, in contrast to the current prevalent situation that respondents described.
Respondents wanted organizational culture to be transformed to enable (1) releasing the fear of being scrutinized as a roadblock for allowing external critical review and (2) distributing accountability for responsible AI concerns across different organizational functions. In the future state, every single person in the organization would understand risk, teams would have a collective understanding of risk, and organizational leadership would talk about risk publicly, admit when failures happen, and take responsibility for broader socioeconomic and socio-cultural implications. \section{Results: Interdisciplinary Workshop} As described in \emph{Section \ref{studyandmethods}: Study and Methods}, after the interviews with practitioners were completed, a workshop was held at a responsible AI oriented venue [anonymized for review]. Each of the four key organizational questions we identified in \emph{Section \ref{results}: Results: Interviews} needs to be considered within the unique socio-technical context of specific teams and organizations: (1) \emph{When and how do we act?} (2) \emph{How do we measure success?} (3) \emph{What are the internal structures we rely on?} and (4) \emph{How do we resolve tensions?} However, the literature and interview findings suggest that there are likely similar steps or tactics that could lead to positive outcomes. The workshop activity allowed groups to create landscapes of practices based on their own experiences and then illuminate connections and feedback loops between different practices. Participants were given a simple scenario describing the prevalent work practices and organizational structure of an AI product company in industry, as described in \emph{Section \ref{studyandmethods}: Study and Methods}. They then engaged in identifying enablers and tensions, elucidating current barriers and pointing the way towards possible solutions.
The following themes emerged in the insights participants shared during the activity: \subsubsection{The importance of being able to veto an AI system} Multiple groups mentioned that before considering how the fairness or societal implications of an AI system can be addressed, it is crucial to ask whether an AI system is appropriate in the first place. It may not be appropriate due to risks of harm, or because the problem does not need an AI solution. Crucially, if the answer is negative, then work must stop. They recommended designing a veto power that is available to people and committees across many different levels, from individual employees via whistleblower protections, to internal multidisciplinary oversight committees, to external investors and board members. The most important design feature is that the decision to cease further development is respected and cannot be overruled by other considerations. \subsubsection{The role and balance of internal and external pressure to motivate corporate change} The different and synergistic roles of internal and external pressure were another theme across multiple groups' discussions. Internal evaluation processes have more access to information and may provide higher levels of transparency, while external processes can leverage more stakeholders and increase momentum by building coalitions. External groups may be able to apply pressure more freely than internal employees, who may worry about repercussions for speaking up. \subsubsection{Building channels for communication between people (employees and leadership, leadership and board, users and companies, impacted users and companies)} Fundamentally, organizations are groups of people, and creating opportunities for different sets of people to exchange perspectives was another key enabler identified by multiple groups. One group recommended a regular town hall for employees to be able to provide input into organization-wide values in a semi-public forum.
\subsubsection{Sequencing these actions will not be easy because they are highly interdependent} Many of the groups identified latent implementation challenges because the discussed organizational enablers work best in tandem. For example, whistleblower protections for employees, and a culture that supports their creation, would be crucial to ensure that people feel safe speaking candidly at town halls. It is interesting to observe that workshop discussion groups identified organization-level structures and processes that support and amplify individual efforts as one of the key enablers for responsible AI work. Additionally, these themes are shared as a starting point to spark experimentation. Further pooling of results from trying these recommendations would accelerate learning and progress for all towards achieving positive societal outcomes through scaling responsible AI practices. \section{Discussion and Conclusion} As ML systems become more pervasive, there is growing interest and attention in protecting people from harms while also equitably distributing the benefits from these systems. This has led researchers to focus on algorithmic accountability and transparency as intermediary goals on the path to better outcomes. However, corporate responsibility and organizational change are not new themes. The processes elucidated by Orlikowski~\cite{Orlikowski2000} and Meyerson~\cite{meyerson2004tempered} also apply to responsible AI. Meyerson described how tempered radicals forge collective action through clarifying issues and creating movements, with a focus on internal culture and actively soliciting support using small but persistent steps. Our interviews and workshop discussions echo similar processes. The results suggest that what individuals working on responsible AI need is for the organizational structures around them to adapt in order to support rather than hinder their work.
This can happen as a product of their own advocacy, demonstrated early successes, and/or from leadership proactively steering into these transitions. As Meyerson points out, tempered radicals should be aware of new opportunities or threats during their work to elevate social responsibility to an internal corporate priority, and frame their work so it appeals to organizational interests. The resulting tensions in how labor and responsibility are distributed between individuals compared to supporting processes or structures were also prevalent in our findings. In order to succeed, practitioners have to map out a route from \textit{prevalent work practices} to their \textit{aspirational future state} goals. Along the way, they need to leverage existing practices to build momentum for \textit{emerging work practices} that can lead them there. Similarly, it is essential that practitioners are able to identify and avoid creating \textit{emerging work practices} that work against their desired long-term outcomes. They need to have a clear enough view of what the \textit{aspirational future} should be, while adjusting to changing circumstances. This means maintaining alignment with the existing organizational state while keeping a long-term goal orientation. Our interviews and workshop discussions identified the resulting tensions in getting to that aspirational state. Throughout the four key organizational questions in which transitions are necessary to accommodate responsible AI work, we saw that prevalent practices can place the burden of responsibility and labor squarely on individuals to identify issues and try to change outcomes within existing structures. This means pushing for changes in those structures and processes, as their goals may be antithetical to what is currently supported by the organizational structure.
Thus, individuals who want to bring responsible AI issues into their work must do their own jobs, do the responsible AI work if that is not their official job, do the difficult work of redesigning organizational structures around them to accommodate the responsible AI work, and on top of it all, do the change management to get those new organizational practices to be adopted. As a result, incentives may appear misaligned between individuals and their organizational context. This can make it challenging to create adequate support, which should come from communicating the (sometimes small) steps that together make up the larger organizational successes. This can leave individuals feeling unclear about expectations and impact. As summarized in \emph{Table \ref{table:1_overview}}, our participants had to decide when and how to act, how to reframe success, orient themselves within internal structures, and resolve tensions between incentives. Navigating these questions in organizations exhibiting prevalent practices requires skills that are not necessarily part of the regular conversation at academic venues. Perhaps then, rather than focusing on technical complexity, or on calls to (ideal) action alone, we should as a research community also prioritize providing researchers and practitioners with the tools and organizational insight to ensure that they have clear strategies to face this challenge. Researchers who make transitions to industry need to communicate the impact of their work in ways that build support within organizations, and legitimacy along the way. They need to have the skills and tools to navigate internal structures and tensions. This requires training, mentorship and sponsorship much beyond technical or research skills. The most effective approaches for a particular organization may not always be perfectly aligned with research community norms; perhaps there lies another tension.
Rather than having individuals find out the organizational work required on their own, and encounter pitfalls anew as individuals, we can provide support as an insights community, but only when we take this less public work very seriously as a core field of inquiry, as well as education. Perhaps then, this could help us as a wider community to move towards an ``open'' system, as described in other organizational settings by Scott~\cite{scott2015organizations}, linking different actors, resources and institutions, and solving complex problems, in similarly complex environments. We observed that organizations exhibiting the emerging practices were beginning to implement new structures and processes or adapt existing ones, although some emerging structures, like rigid organizational incentives within high-inertia contexts, can hinder rather than support responsible AI work. The remaining emerging work practices better enabled responsible AI work, often by reducing the labor burden on individuals, especially the burden of identifying what organizational processes and policies are necessary to support their responsible AI work and of managing the transition to those new work practices. This frees up time that individuals can reclaim to focus on the responsible AI work itself. In the aspirational future, organizational structures and processes would fully provide mechanisms for monitoring and adapting system-level practices to incorporate and address emergent ethical concerns, so individuals who care about algorithmic responsibility issues can easily devote their time and labor to making progress on the specific issues within their functions. The internal advocacy and change management work would be full-time roles given to people with the skills, training, and desire to focus on that work, who could also offer expertise and mentorship to other individuals as they band together to create system-level change inside and beyond their organizations.
Individuals working on responsible AI would then be free to focus on their specific job, rather than on changing the job environment to make it possible to do their job. The impact of ML systems on people cannot be changed without considering the people who build them and the organizational structure and culture of the human systems within which they operate. A qualitative methodological approach has allowed us to build rich context around the people and organizations building and deploying ML technology in industry. We have utilized this qualitative approach here in order to investigate the organizational tensions that practitioners need to navigate in practice. We describe existing enablers and barriers for the uptake of responsible AI practices and map a transition towards an aspirational future that practitioners describe for their work. In line with earlier organizational research, we emphasize such transitions are not to be seen as linear movements from one fixed state to another, rather they represent persistent steps and coalition building within the ever-changing nature of organizational contexts themselves. \bibliographystyle{ACM-Reference-Format} \section{Questionnaire} \label{questionnaire} \subsection{Describing current work practices related to fairness, accountability, and transparency of ML products and services} \subsubsection{Describe your role} \begin{enumerate} \item What is your formal role by title? \item How would you describe your role? \item Is your formal role matched to your actual role? \begin{itemize} \item If not, how is it not? \end{itemize} \item Is your organization flexible in the way it sees roles? \begin{itemize} \item If not, what is it like? \end{itemize} \item How did you assume your role? \begin{itemize} \item If hired in, were you hired externally or transitioned? \item If you transitioned, where did you transition from? \item Does your company generally move people around fluidly? 
\item Does your company reward broad knowledge/skills across different industries or specializations? \end{itemize} \item How did your role change over time? \begin{itemize} \item From a responsibility perspective? \item From a people perspective? \end{itemize} \item Is role scope change typical at your company? \item If yes, what does it typically look like - is it… \begin{itemize} \item Scope creep? \item Is it explicitly within your job description? \item Planned role expansion? \item Stretch assignments? \end{itemize} \item Do you have autonomy to make impactful decisions? \begin{itemize} \item If yes, how? \item If no, what is the case instead? \end{itemize} \end{enumerate} \subsubsection{How did your fairML effort start?} \begin{enumerate} \item Was it officially sponsored? \begin{itemize} \item If yes, by whom - what level of leader? \item If no, who launched the effort - was it a team? A motivated individual? What level of leadership? \end{itemize} \item Why did the effort start? \item Was the effort communicated to employees? \begin{itemize} \item Who was it communicated to? \item How was it communicated? \end{itemize} \item Is the effort part of a program or stand-alone? \item Is it tied to a specific product’s development or launch? \begin{itemize} \item What is the product? \item What is its primary use case? \item Who is the primary end user? \item When is it slated to launch? \end{itemize} \item Are you part of a team or doing this kind of work by yourself? \item Is it a volunteering effort? \begin{itemize} \item If so, are you getting rewarded or recognized for your time? How? \end{itemize} \item What types of activities have been done or are planned? \item Are you actively collaborating with external groups? What groups and why? \end{enumerate} \subsubsection{Responsibility and accountability} \begin{enumerate} \item Who is accountable for aspects around risk or unintended consequence… \begin{itemize} \item Identifying risk? 
\item Solutioning against risk? \item Fixing mistakes? \item Avoiding negative impact, including press? \end{itemize} \item Is your sponsor connected to risk management? \begin{itemize} \item How so? \item What is their level of accountability relative to risk? \item What are they responsible for doing? \end{itemize} \item Who are your main stakeholders? \item What are the other departments you work with? \begin{itemize} \item How has that changed since the effort launched? \item What was the business case for the fairML work/team? \item What teams are adjacent to this effort (ie, not directly involved but “friends and family”) - is there an active compliance function, eg? \item (if no answer, probe for e.g. product teams, compliance, trust and safety type teams, or value-based design efforts, ethics grassroots etc) \item What are other efforts in your organization that are similar to Accountability work, for instance Diversity \& Inclusion, and what does that look like? Is there general support for this type of effort? \end{itemize} \item Do you feel there is support for this effort? \begin{itemize} \item Why or why not? \item Who supports it (what company career level, function, role and/or geography)? \item Who doesn’t support it? \end{itemize} \item Would you say this effort aligns to company culture? How or how not? \item Is scaling possible? \begin{itemize} \item If so, do you intend to scale? \item If not, why not? \end{itemize} \end{enumerate} \subsubsection{Performance, rewards, and incentives} \begin{enumerate} \item How is performance for your Algorithmic Accountability effort defined at your company? \item What are you evaluated on in your role? \item What works about the way performance is measured? What are some flaws? \item What does your performance management system/compensation structure seek to incentivize people to do (what is the logic behind the approach)?
\item What does your performance management system/compensation structure actually incentivize people to do? \item What kind of person gets consistently rewarded and incentivized? \end{enumerate} \subsubsection{Risk culture} \begin{enumerate} \item How do you work with external communication teams - PR, Policy? \begin{itemize} \item Who owns that relationship - is it a centralized team? \item What is that comms team’s primary accountability (eg, press releases, think pieces, etc)? \item Has the team managed risk before? \item Is the team mobilized to manage risk? \end{itemize} \item How do you work with Legal? \begin{itemize} \item Is it a visible function in the organization? \item Does it have authority to make decisions and company policy, from your PoV? \end{itemize} \item How do you engage with communities? \begin{itemize} \item What types of communities? \item What does this look like? \item What types of communication have you set up? \end{itemize} \item What are the ethical tensions that you/your team faces? \item On a scale of 1-5, what is your level of perception of your company’s risk tolerance? \end{enumerate} \subsection{Future dream state - a structured mapping of the desired future state} \begin{enumerate} \item What is your company’s current state for fairML practice? (people, process, technology) \item What is your vision for the future state of the fairML practices? \item What do you need to change to get to the future state? \item What do you need to retire to get to the future state? \item What can be salvaged/repurposed? \end{enumerate} \subsection{Ending notes} \begin{enumerate} \item What is the best about your current setup? \item How would you summarize the largest challenges? Aka what do you like least? \item Is there anything that I should have asked about?
\end{enumerate} \section{Scenarios} \subsection{OKLinger} A company called OKLinger is developing a dating service which uses ML systems to make recommendations for a user’s entire dating life to maximize a user’s “happiness score” - from who to ask out and where to go to dinner to whether to go home together, and even when to end the relationship. Work on fairness, accountability, and transparency of their algorithmic systems is not very common, but a few team members have been bringing it up again and again during meetings. OKLinger’s leadership has not spoken about this topic internally or externally. Some employees have expressed their concerns that there's a lack of understanding about the potential issues, the impacts of the unintended consequences of their technology, as well as latent uncertainty about the alignment between OKLinger’s products and its employees’ broader societal values and beliefs. \subsection{BreatheResponsibly} A company called BreatheResponsibly develops a carbon offsets platform which makes recommendations on where to offset carbon. Carbon offsets compensate for your emissions by canceling out greenhouse gas emissions somewhere else in the world. The money you pay to buy offsets supports programs designed to reduce emissions. Work on fairness, accountability, and transparency of their recommendation engine is not very common; however, a few team members have been bringing it up again and again during meetings. BreatheResponsibly's leadership has not spoken about this topic internally or externally. Some employees have expressed their concerns that there's a lack of understanding about the potential issues, the impacts of the unintended consequences of their technology, as well as latent uncertainty about the alignment between BreatheResponsibly's products and its employees’ broader societal values and beliefs. \subsection{FeetMirror} A company called FeetMirror is building a social media platform to engage youth in South Africa.
They have an advertisement-based business model and are promoting western brands using ML to target advertisements to their users. Work on fairness, accountability, and transparency of their algorithmic systems is not very common, but a few team members have been bringing it up again and again during meetings. FeetMirror's leadership has not spoken about this topic internally or externally. Some employees have expressed their concerns that there's a lack of understanding about the potential issues, the impacts of the unintended consequences of their technology, as well as latent uncertainty about the alignment between FeetMirror's products and its employees’ broader societal values and beliefs. \subsection{EmptyFrisk} A company called EmptyFrisk is developing a risk assessment tool - a predictive model to assess the risk that a defendant fails to appear in court to aid in decision-making at various stages of the criminal justice pipeline. The output of the model is a risk score number. This risk assessment tool is used to determine whether defendants should be released pretrial and whether bail should be required. In most jurisdictions, EmptyFrisk’s risk scores are presented to a judge as a recommendation. Work on fairness, accountability, and transparency of EmptyFrisk’s algorithmic systems is not very common, but a few team members have been bringing it up again and again during meetings. EmptyFrisk's leadership has not spoken about this topic internally or externally. Some employees have expressed their concerns that there's a lack of understanding about the potential issues, the impacts of the unintended consequences of their technology, as well as latent uncertainty about the alignment between EmptyFrisk’s products and its employees’ broader societal values and beliefs.
\section{\label{sec:Intro}Introduction} Spin-dependent transport through nanostructures such as quantum dots has recently created a lot of interest due to potential spintronics applications.~\cite{fabian_semiconductor_2007,barnas_spin_2008} Of particular interest are quantum-dot spin valves~\cite{konig_interaction-driven_2003,braun_theory_2004,braig_rate_2005} that consist of a single-level quantum dot tunnel coupled to ferromagnetic electrodes with magnetizations pointing in arbitrary directions, cf. Fig.~\ref{fig:model}. On the one hand, these systems show a spin accumulation due to spin-dependent tunneling that has the tendency to block transport through the device. On the other hand, there is spin precession in an energy-dependent exchange field generated by virtual tunneling between the dot and the leads that lifts this blockade. The interplay between these two effects gives rise to a number of distinctive transport signatures such as a broad area of negative differential conductance,~\cite{braun_theory_2004,sothmann_probing_2010} characteristic features in the finite-frequency noise at the Larmor frequency associated with the spin precession,~\cite{braun_frequency-dependent_2006,sothmann_influence_2010,sothmann_transport_2010} a splitting of the Kondo resonance~\cite{martinek_kondo_2003,martinek_kondo_2003-1,martinek_gate-controlled_2005,utsumi_nonequilibrium_2005,sindel_kondo_2007} and a nonequilibrium spin-precession resonance.~\cite{hell_spin_2014} Other studies of such systems investigated the dependence of the current on the angle between the magnetizations,~\cite{fransson_angular_2005,pedersen_noncollinear_2005,fransson_angular_2005-1,weymann_cotunneling_2005,weymann_cotunneling_2007} the full counting statistics of electron transport,~\cite{lindebaum_spin-induced_2009} adiabatic pumping~\cite{splettstoesser_adiabatic_2008} and the possibility to generate a spin accumulation in a thermoelectric fashion.~\cite{muralidharan_thermoelectric_2013}
Experimentally, quantum dots coupled to ferromagnetic electrodes have been realized in a number of different ways, e.g., by using metallic nanoparticles,~\cite{deshmukh_using_2002,bernand-mantel_evidence_2006,wei_saturation_2007,mitani_current-induced_2008,bernand-mantel_anisotropic_2009,birk_spin-polarized_2009,birk_magnetoresistance_2010,bernand-mantel_anisotropic_2011} quantum dots defined in semiconductor nanowires,~\cite{hofstetter_ferromagnetic_2010} carbon nanotubes,~\cite{jensen_hybrid_2004,jensen_magnetoresistance_2005,sahoo_electric_2005,liu_spin-dependent_2006,hauptmann_electric-field-controlled_2008,merchant_current_2009,aurich_permalloy-based_2010,feuillet-palma_conserved_2010,gaass_universality_2011} self-assembled semiconductor quantum dots~\cite{hamaya_spin_2007,hamaya_electric-field_2007,hamaya_kondo_2007,hamaya_oscillatory_2008,hamaya_tunneling_2008,hamaya_spin-related_2009} and even single molecules.~\cite{pasupathy_kondo_2004,yoshida_gate-tunable_2013} The investigation of electronic waiting times in transport through nanostructures is another field that recently generated a lot of interest. 
Waiting-time distributions have been studied for systems that can be described by generalized master equations,~\cite{koch_full_2005,brandes_waiting_2008,welack_waiting_2009,albert_distributions_2011,rajabi_waiting_2013,thomas_electron_2013} scattering matrix theory~\cite{albert_electron_2012,dasenbrook_floquet_2014,albert_waiting_2014,haack_distributions_2014} as well as in terms of noninteracting tight-binding chains.~\cite{thomas_waiting_2014} Waiting times were shown to provide information about the short-time behaviour of transport processes that cannot be obtained from other quantities such as the zero-frequency current noise or the full counting statistics.~\cite{albert_electron_2012} Furthermore, waiting times contain information about the coherent internal dynamics of quantum systems~\cite{brandes_waiting_2008,welack_waiting_2009,rajabi_waiting_2013,thomas_electron_2013} and can serve to characterize recently developed single-electron sources.~\cite{albert_distributions_2011,dasenbrook_floquet_2014,albert_waiting_2014} Here, we study the waiting-time distribution of electron transport through a quantum-dot spin valve. We focus on the regime where sequential tunneling dominates transport and the system can be described in terms of a generalized master equation with transition rates obtained via a real-time diagrammatic approach.~\cite{konig_zero-bias_1996,konig_resonant_1996,braun_theory_2004} Our main aim is to demonstrate how the rich transport physics of the quantum-dot spin valve can be detected in the waiting-time distribution. In addition, we want to elucidate the conditions that need to be fulfilled in order to observe a specific transport feature in the waiting-time distribution and compare to the corresponding conditions required to observe the same feature in other transport properties such as zero- and finite-frequency noise or the full counting statistics. The paper is organized as follows. 
In Sec.~\ref{sec:Model} we introduce the model of a quantum-dot spin valve. Section~\ref{sec:Theory} describes the theoretical approach to calculate the waiting-time distribution within the framework of a real-time diagrammatic approach. We present our results in Sec.~\ref{sec:Results} and conclude with a summary in Sec.~\ref{sec:Summary}. \section{\label{sec:Model}Model} \begin{figure} \centering\includegraphics[width=\columnwidth]{Model2-1.pdf} \caption{\label{fig:model}Schematic sketch of a quantum-dot spin valve. A single-level quantum dot (blue) with excitation energies $\varepsilon$ and $\varepsilon+U$ is tunnel coupled with coupling strength $\Gamma_r$ to two ferromagnetic electrodes (yellow) with noncollinear magnetizations pointing along $\vec n_r$ and enclosing an angle $\varphi_\text{L}+\varphi_\text{R}$.} \end{figure} We consider a single-level quantum dot coupled to two ferromagnetic electrodes $r=\text{L,R}$ with magnetizations pointing in arbitrary directions $\vec n_r$. The Hamiltonian of the system can be written as \begin{equation} H=\sum_r H_r+H_\text{dot}+H_\text{tun}. \end{equation} Here, \begin{equation} H_r=\sum_{\vec k\sigma}\varepsilon_{r\vec k\sigma}a_{r\vec k\sigma}^\dagger a_{r\vec k\sigma} \end{equation} represents the two ferromagnetic electrodes in terms of a simple Stoner model as noninteracting electrons with a constant but spin-dependent density of states $\rho_{r\sigma}$. For each electrode, we choose the spin quantization axis along the magnetization of the respective lead such that $\sigma=\pm$ refers to the majority (minority) spin electrons. The spin dependence of $\rho_{r\sigma}$ can be conveniently parametrized in terms of the polarization $p_r=(\rho_{r+}-\rho_{r-})/(\rho_{r+}+\rho_{r-})$ where $p_r=0$ refers to a normal metal and $p_r=1$ to a half-metallic ferromagnet. In the following, we will assume both leads to have the same polarization, $p_\text{L}=p_\text{R}\equiv p$. 
The quantum dot is described in terms of a single spin-degenerate level with gate-tunable energy $\varepsilon$ as \begin{equation} H_\text{dot}=\sum_\sigma \varepsilon c_\sigma^\dagger c_\sigma +Uc_\uparrow^\dagger c_\uparrow c_\downarrow^\dagger c_\downarrow. \end{equation} Here, $U$ denotes the Coulomb energy of the dot that is needed in order to occupy the quantum dot with two electrons at the same time. For later convenience, we quantize the dot spin along the direction $\vec n_\text{L}\times\vec n_\text{R}$ perpendicular to the lead magnetizations. For the chosen quantization axes, the tunnel Hamiltonian takes the form \begin{align}\label{eq:Htun} \begin{split} H_\text{tun}=\sum_{r\vec k}\frac{t_r}{\sqrt{2}}\left[a_{r\vec k+}^\dagger(e^{i\varphi_r/2} c_\uparrow+e^{-i\varphi_r/2}c_\downarrow)\right.\\ \left.+a_{r\vec k-}^\dagger(-e^{i\varphi_r/2} c_\uparrow+e^{-i\varphi_r/2}c_\downarrow)\right]+\text{H.c.}, \end{split} \end{align} i.e., it couples majority and minority spin electrons of the lead to both spin-up and spin-down electrons on the dot. In Eq.~\eqref{eq:Htun}, $\varphi_r$ denotes the angle between $\vec n_r$ and $\vec n_\text{L}+\vec n_\text{R}$. The tunnel matrix elements $t_r$ are related to the spin-dependent tunnel coupling strengths via $\Gamma_{r\sigma}=2\pi|t_r|^2\rho_{r\sigma}$ with $\Gamma_r=(\Gamma_{r+}+\Gamma_{r-})/2$. We furthermore introduce the total tunnel coupling $\Gamma=\Gamma_\text{L}+\Gamma_\text{R}$. \section{\label{sec:Theory}Theory} In order to describe transport through our system, we employ a real-time diagrammatic technique~\cite{konig_zero-bias_1996,konig_resonant_1996} in its extension to systems with ferromagnetic leads.~\cite{braun_theory_2004,braun_frequency-dependent_2006} The central idea of this approach is to split the system into the strongly interacting quantum dot with a few degrees of freedom and the noninteracting electrodes with many degrees of freedom.
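As a quick consistency check (our own, not part of the original derivation), the amplitudes multiplying $c_\uparrow$ and $c_\downarrow$ in Eq.~\eqref{eq:Htun} should form an orthonormal pair of eigenspinors of $\vec\sigma\cdot\vec n_r$ with eigenvalues $\pm 1$. A minimal sketch, where the sign convention for the azimuth of $\vec n_r$ is an assumption of ours chosen to match these amplitudes:

```python
import cmath

def lead_spinors(phi):
    # amplitudes multiplying (c_up, c_down) for the majority (+) and
    # minority (-) lead species; names and structure are ours
    s = 1 / 2 ** 0.5
    up, dn = cmath.exp(1j * phi / 2), cmath.exp(-1j * phi / 2)
    return (s * up, s * dn), (-s * up, s * dn)

def sigma_dot_n(phi):
    # Pauli vector projected on an in-plane magnetization direction;
    # the sign of the azimuth is our own convention
    return [[0, cmath.exp(1j * phi)], [cmath.exp(-1j * phi), 0]]

phi = 0.73  # arbitrary test angle
plus, minus = lead_spinors(phi)
M = sigma_dot_n(phi)

overlap = plus[0].conjugate() * minus[0] + plus[1].conjugate() * minus[1]
norm_plus = abs(plus[0]) ** 2 + abs(plus[1]) ** 2
M_plus = (M[0][0] * plus[0] + M[0][1] * plus[1],
          M[1][0] * plus[0] + M[1][1] * plus[1])
```

The check passes for any angle, confirming that majority and minority lead electrons couple to mutually orthogonal dot-spin combinations.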
The latter are integrated out to obtain a description of the quantum dot in terms of its reduced density matrix $\rho^\text{red}$ with density matrix elements $P^{\chi_1}_{\chi_2}=\bra{\chi_1}\rho^\text{red}\ket{\chi_2}$. The time evolution of the reduced density matrix is given by a generalized master equation of the form $\dot {\vec P}=\vec W\vec P$. Here, $\vec P$ is a vector containing the various density matrix elements of $\rho^\text{red}$. $\vec W$ is a matrix of generalized transition rates that are given by irreducible self-energies of the quantum-dot propagator on the Keldysh contour. They can be evaluated in a systematic expansion in the tunnel couplings while taking into account interaction and nonequilibrium effects exactly.~\cite{braun_theory_2004,braun_frequency-dependent_2006} In the following, we restrict ourselves to first-order terms only, which is a good approximation as long as $\Gamma_r\ll k_\text{B}T$. For the quantum-dot spin valve, we can rewrite the generalized master equation in a physically intuitive form by introducing the probabilities to find the dot empty $P_0$, singly occupied $P_1=P_\uparrow+P_\downarrow$ or doubly occupied $P_d$ as well as the quantum-statistical average of the dot spin $S_x=(P^\uparrow_\downarrow+P^\downarrow_\uparrow)/2$, $S_y=(P^\uparrow_\downarrow-P^\downarrow_\uparrow)/(2i)$ and $S_z=(P_\uparrow-P_\downarrow)/2$.
The generalized master equation can then be split into one set of equations for the occupation probabilities and another one for the average spin.~\cite{braun_theory_2004} The first set of equations is given by \begin{widetext} \begin{equation} \left( \begin{array}{c} \dot P_0 \\ \dot P_1 \\ \dot P_d \end{array} \right) = \sum_r \Gamma_r \left( \begin{array}{ccc} -2f_r^+(\varepsilon) & f_r^-(\varepsilon) & 0 \\ 2f_r^+(\varepsilon) & -f_r^-(\varepsilon)-f_r^+(\varepsilon+U) & 2f_r^-(\varepsilon+U) \\ 0 & f_r^+(\varepsilon+U) & -2f_r^-(\varepsilon+U) \end{array} \right) \left( \begin{array}{c} P_0 \\ P_1 \\ P_d \end{array} \right) +\sum_r 2p\Gamma_r \left( \begin{array}{c} f_r^-(\varepsilon) \\ -f_r^-(\varepsilon)+f_r^+(\varepsilon+U) \\ -f_r^+(\varepsilon+U) \end{array} \right) \vec S\cdot \vec n_r \end{equation} \end{widetext} with the Fermi function $f_r^+(\omega)=1-f_r^-(\omega)=1/\{\exp[(\omega-V_r)/k_\text{B}T]+1\}$, where $V_r$ denotes the voltage applied to lead $r$ and $T$ is the electrode temperature, assumed to be equal for both leads. Due to the ferromagnetic electrodes, the occupation probabilities not only couple to each other but also couple to the spin accumulation on the dot. The equation governing the spin dynamics reads \begin{equation} \left(\frac{d\vec S}{dt}\right)=\left(\frac{d\vec S}{dt}\right)_\text{acc}+\left(\frac{d\vec S}{dt}\right)_\text{rel}+\left(\frac{d\vec S}{dt}\right)_\text{prec} \end{equation} where \begin{multline} \left(\frac{d\vec S}{dt}\right)_\text{acc}=\sum_rp\vec n_r\Gamma_r\left[f_r^+(\varepsilon)P_0+\frac{f_r^+(\varepsilon+U)-f_r^-(\varepsilon)}{2}P_1\right.\\\left.-f_r^-(\varepsilon+U)P_d\vphantom{\frac{1}{2}}\right] \end{multline} describes the accumulation of spin on the dot due to spin-polarized tunneling on and off the dot. 
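In the nonmagnetic limit $p=0$ the spin term drops out and the occupation sector closes on itself. The following sketch builds the corresponding $3\times 3$ generator for $(P_0,P_1,P_d)$ and checks that every column sums to zero, i.e., that total probability is conserved (function names and parameter values below are our own choices, not from the paper):

```python
import math

def fermi(w, V, kT):
    # f_r^+(w): Fermi function of lead r with electrochemical potential V
    return 1.0 / (math.exp((w - V) / kT) + 1.0)

def occupation_rate_matrix(eps, U, kT, leads):
    # 3x3 generator for (P_0, P_1, P_d) in the limit p = 0, where the
    # occupation sector decouples from the dot spin; `leads` holds
    # (Gamma_r, V_r) pairs
    W = [[0.0] * 3 for _ in range(3)]
    for G, V in leads:
        fpe, fpu = fermi(eps, V, kT), fermi(eps + U, V, kT)
        fme, fmu = 1.0 - fpe, 1.0 - fpu
        W[0][0] -= 2 * G * fpe
        W[0][1] += G * fme
        W[1][0] += 2 * G * fpe
        W[1][1] -= G * (fme + fpu)
        W[1][2] += 2 * G * fmu
        W[2][1] += G * fpu
        W[2][2] -= 2 * G * fmu
    return W

# symmetric bias, illustrative parameter values (ours)
W = occupation_rate_matrix(eps=0.0, U=5.0, kT=1.0,
                           leads=[(0.5, 2.0), (0.5, -2.0)])
col_sums = [sum(W[i][j] for i in range(3)) for j in range(3)]
```

At finite polarization the same matrix appears, supplemented by the $\vec S\cdot\vec n_r$ coupling of the equation above.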
Similarly, \begin{equation} \left(\frac{d\vec S}{dt}\right)_\text{rel}=-\sum_r\Gamma_r\left[f_r^-(\varepsilon)+f_r^+(\varepsilon+U)\right]\vec S \end{equation} represents the relaxation of the dot spin due to electron tunneling. Finally, \begin{equation} \left(\frac{d\vec S}{dt}\right)_\text{prec}=\sum_r \vec B_r\times \vec S \end{equation} characterizes a precession of the dot spin in the effective exchange field \begin{equation} \vec B_r=\frac{p\Gamma_r\vec n_r}{\pi}\int' d\omega \left(\frac{f_r^-(\omega)}{\omega-\varepsilon-U}+\frac{f_r^+(\omega)}{\omega-\varepsilon}\right) \end{equation} generated by spin-dependent virtual tunneling between the dot and the electrodes. Here, the prime on the integral denotes the Cauchy principal value. We now detail how to calculate the waiting-time distribution between tunneling events. In particular, we focus on tunneling out of the dot into the right lead. The central object in the calculation of the waiting-time distribution is the matrix $\vec W^X$ that can be obtained from $\vec W$ in a straightforward way: One simply multiplies each diagram encountered in the evaluation of $\vec W$ by a factor of $+1$ if the corresponding process transfers an electron from the dot into the right lead and by a factor of $0$ otherwise. The waiting-time distribution is then given by~\cite{brandes_waiting_2008} \begin{equation} w(\tau)=\frac{\vec e^T \vec W^X\exp[(\vec W-\vec W^X)\tau]\vec W^X \rho^\text{stat}}{\vec e^T\vec W^X \rho^\text{stat}}, \end{equation} where $\rho^\text{stat}$ is the stationary density matrix satisfying $0=\vec W\rho^\text{stat}$ and $\vec e^T$ is a vector that picks out the diagonal density matrix elements. In general, it is not possible to obtain compact analytical expressions for the waiting-time distribution of a quantum-dot spin valve.
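To illustrate the formula, consider a reduced toy model (ours, not the full spin-valve generator): a single nonmagnetic level with unidirectional transport, so that only the empty and singly occupied states matter. There the waiting-time distribution also follows analytically from the rate equations, which provides a check on a direct numerical evaluation of the formula above; all names and parameter values below are our own:

```python
import math

def mat_vec(M, v):
    # matrix-vector product for plain nested lists
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def expm_vec(M, v, terms=60):
    # exp(M) @ v via a plain Taylor series (adequate for these small matrices)
    out, term = list(v), list(v)
    for k in range(1, terms):
        term = [x / k for x in mat_vec(M, term)]
        out = [a + b for a, b in zip(out, term)]
    return out

def waiting_time(W, WX, rho_stat, tau):
    # w(tau) per the formula above; W is the full generator, WX the jump
    # part counting tunneling into the right lead; summing all components
    # plays the role of e^T here since both states are occupations
    n = len(W)
    W0 = [[W[i][j] - WX[i][j] for j in range(n)] for i in range(n)]
    Wt = [[W0[i][j] * tau for j in range(n)] for i in range(n)]
    v = expm_vec(Wt, mat_vec(WX, rho_stat))
    return sum(mat_vec(WX, v)) / sum(mat_vec(WX, rho_stat))

# toy check: states (empty, occupied); GL in from the left, GR out to the right
GL, GR = 1.0, 0.4
W = [[-GL, GR], [GL, -GR]]
WX = [[0.0, GR], [0.0, 0.0]]            # only the "occupied -> empty" jump counts
rho = [GR / (GL + GR), GL / (GL + GR)]  # stationary state, W rho = 0
tau = 1.3
w_num = waiting_time(W, WX, rho, tau)
w_ana = GL * GR * (math.exp(-GR * tau) - math.exp(-GL * tau)) / (GL - GR)
```

The numerical and analytic results agree within numerical accuracy; for the spin valve one replaces $\vec W$, $\vec W^X$ and $\rho^\text{stat}$ by their larger counterparts.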
However, in the limiting case that transport occurs through the empty and singly-occupied dot only, $f_\text{L}^+(\varepsilon)=f_\text{R}^-(\varepsilon)=1$ and $f_\text{L}^+(\varepsilon+U)=f_\text{R}^+(\varepsilon+U)=0$, and the quantum dot is symmetrically coupled to the electrodes, $\Gamma_\text{L}=\Gamma_\text{R}=\Gamma/2$, we find for parallel magnetizations \begin{multline} w_P(\tau)=\frac{(1-p)^2}{1+p}\Gamma e^{-(1-p)\Gamma\tau}+\frac{(1+p)^2}{1-p}\Gamma e^{-(1+p)\Gamma\tau}\\ -2\frac{1+3p^2}{1-p^2}\Gamma e^{-2\Gamma\tau}, \end{multline} antiparallel magnetizations \begin{multline} w_{AP}(\tau)=(1-p)\Gamma e^{-(1-p)\Gamma\tau}+(1+p)\Gamma e^{-(1+p)\Gamma\tau}\\ -2\Gamma e^{-2\Gamma\tau}, \end{multline} and arbitrarily oriented magnetizations, neglecting the exchange field in the calculation, \begin{multline} w_\varphi^{B_r=0}(\tau)=\frac{(1-p)(1-p\cos\varphi)}{1+p}\Gamma e^{-(1-p)\Gamma\tau}\\ +\frac{(1+p)(1+p\cos\varphi)}{1-p}\Gamma e^{-(1+p)\Gamma\tau}\\ -2\frac{1+p^2(1+2\cos\varphi)}{1-p^2}\Gamma e^{-2\Gamma\tau}. \end{multline} In all three cases, the first term arises from the tunneling out of majority spin electrons into the drain. Similarly, the second term is due to the tunneling out of minority spin electrons. The last term describes tunneling in of electrons from the source. As both majority and minority spin electrons can tunnel into the empty dot, the time scale for this exponential decay does not depend on the polarization. \section{\label{sec:Results}Results} In the following, we analyze the waiting-time distribution for three different transport regimes that illustrate characteristic transport features of the quantum-dot spin valve. 
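Two quick sanity checks on the closed-form expressions above (our own, in a few lines): each distribution integrates to one, and it vanishes at $\tau=0$, because immediately after an electron has left, the dot is empty and must first be refilled before the next out-tunneling event. A sketch for $w_P$ (variable names ours):

```python
import math

def w_P(tau, p, Gamma=1.0):
    # parallel-magnetization waiting-time distribution quoted above
    a = (1 - p) ** 2 / (1 + p) * math.exp(-(1 - p) * Gamma * tau)
    b = (1 + p) ** 2 / (1 - p) * math.exp(-(1 + p) * Gamma * tau)
    c = -2 * (1 + 3 * p ** 2) / (1 - p ** 2) * math.exp(-2 * Gamma * tau)
    return Gamma * (a + b + c)

p = 0.6  # illustrative polarization
# term-by-term normalization: the three exponentials integrate to
# 1/[(1-p)Gamma], 1/[(1+p)Gamma] and 1/(2 Gamma), respectively
total = ((1 - p) / (1 + p)
         + (1 + p) / (1 - p)
         - (1 + 3 * p ** 2) / (1 - p ** 2))
```

The same two checks pass for $w_{AP}$ and $w_\varphi^{B_r=0}$ by the analogous algebra.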
First, we will consider electron bunching due to a dynamical channel blockade~\cite{cottet_positive_2004-1,cottet_positive_2004,cottet_dynamical_2004,belzig_full_2005,elste_transport_2006,urban_tunable_2009} which occurs for electrodes with parallel magnetizations.~\cite{bulka_current_2000,braun_frequency-dependent_2006} Second, we show how the spin precession in the exchange field that occurs for noncollinear magnetizations~\cite{konig_interaction-driven_2003,braun_theory_2004,braun_frequency-dependent_2006} manifests itself in the waiting-time distribution. Finally, we discuss signatures of a recently discovered dynamical spin resonance that occurs for nearly antiparallel magnetizations.~\cite{hell_spin_2014} \subsection{Electron bunching} \begin{figure} \includegraphics[width=\columnwidth]{WT_Parallel.pdf} \caption{\label{fig:Parallel}Waiting-time distribution for a symmetric quantum-dot spin valve, $\Gamma_\text{L}= \Gamma_\text{R}=\Gamma/2$, in the parallel geometry, $\varphi_r=0$. The bias voltage is chosen such that transport occurs through the empty and singly-occupied state only. For large polarizations a crossover between two different exponential decays indicates bunching of electron transport.} \end{figure} For parallel magnetizations, the average charge current is independent of the polarization $p$ because the current contribution of majority spin electrons increases with $p$ while that of minority spin electrons decreases with $p$ by the same amount. However, as $p$ increases, the current becomes less regular due to electron bunching. While majority spin electrons can tunnel easily on and off the dot, the corresponding rates for minority spin electrons are suppressed by $1-p$. Thus, minority spin electrons dynamically block transport and chop the current into bunches of majority spin electrons flowing through the dot.
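This bunching picture can be made quantitative with a short numerical sketch (an illustration with $\Gamma=1$; the crossover-time estimate below is an assumption obtained by simply equating the majority- and minority-spin terms of $w_P(\tau)$, which gives $\tau^*=\frac{3}{2p\Gamma}\ln\frac{1+p}{1-p}$):

```python
import numpy as np
from scipy.integrate import quad

G = 1.0  # Gamma sets the time scale
p = 0.9  # polarization

def w_P(tau):
    # waiting-time distribution for parallel magnetizations
    return ((1-p)**2/(1+p) * G*np.exp(-(1-p)*G*tau)
            + (1+p)**2/(1-p) * G*np.exp(-(1+p)*G*tau)
            - 2*(1+3*p**2)/(1-p**2) * G*np.exp(-2*G*tau))

assert abs(quad(w_P, 0, np.inf)[0] - 1.0) < 1e-8   # properly normalized
assert abs(w_P(0.0)) < 1e-12                       # dot must refill first: w(0) = 0

def tau_star(pol):
    # estimate: time at which the majority- and minority-spin terms coincide
    return 3*np.log((1+pol)/(1-pol)) / (2*pol*G)

# the crossover moves to later times as the polarization grows
ps = np.array([0.5, 0.7, 0.9, 0.99])
assert np.all(np.diff(tau_star(ps)) > 0)

t_maj = (1+p)**2/(1-p) * G*np.exp(-(1+p)*G*tau_star(p))
t_min = (1-p)**2/(1+p) * G*np.exp(-(1-p)*G*tau_star(p))
assert abs(t_maj - t_min) < 1e-9 * t_maj           # the two terms are equal at tau*
```

For $p=0.9$ the estimate gives $\tau^*\approx 4.9/\Gamma$, in line with the trend discussed in the text.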
In the waiting-time distribution, this dynamical channel blockade shows up as a crossover between two exponential decays with two different time scales. At short times, the exponential decay of the waiting-time distribution is dominated by the tunneling of majority spin electrons, $e^{-(1+p)\Gamma\tau}$~\footnote{The contribution from tunneling in of electrons decays even faster as $e^{-2\Gamma\tau}$. However, its coefficient is always smaller than that of $e^{-(1+p)\Gamma\tau}$ such that it never dominates the exponential decay of the waiting-time distribution.}. At longer times, the contribution due to the tunneling of minority spin electrons, $e^{-(1-p)\Gamma\tau}$, which describes the occasional blockade of the dot, takes over. As can be seen in Fig.~\ref{fig:Parallel}, the crossover time increases as the polarization $p$ is increased. Furthermore, the crossover becomes more pronounced as $p$ grows because of the larger difference between the decay rates $(1+p)\Gamma$ and $(1-p)\Gamma$. In addition, as $p$ is increased the crossover occurs at smaller values of $w(\tau)$, which renders an experimental observation more challenging as it requires a high statistics of tunneling events. These findings are in agreement with the picture of bunching due to dynamical channel blockade. As $p$ grows, the blockade events due to minority spin electrons on the dot become increasingly rare. In addition, the average duration of an individual blockade event becomes longer as well due to the reduced tunneling-out rate of minority spins. We finally compare our results to the signatures of the dynamical channel blockade in the zero-frequency noise~\cite{braun_frequency-dependent_2006}. In the regime where transport occurs through the empty and singly occupied dot only, the zero-frequency Fano factor, i.e.,
the ratio between the current noise and the average current, \begin{equation} F=\frac{4(1+p^2)\Gamma_\text{L}^2+(1-p^2)\Gamma_\text{R}^2}{(1-p^2)(2\Gamma_\text{L}+\Gamma_\text{R})^2} \end{equation} becomes super-Poissonian, $F>1$, for polarizations larger than~\cite{sothmann_mesoscopic_2012} \begin{equation} p^*=\sqrt{\frac{\Gamma_\text{R}}{2\Gamma_\text{L}+\Gamma_\text{R}}}. \end{equation} The Fano factor characterizes the average number of electrons transferred through the dot within one bunch. We remark that in general a larger polarization is needed to observe the electron bunching in the waiting-time distribution than in the Fano factor. \subsection{Spin precession} We now turn to the case of a quantum-dot spin valve with noncollinearly magnetized leads. In this situation, the dot spin precesses in an exchange field due to virtual tunneling between the dot and the leads. This spin precession gives rise to characteristic features in the finite-frequency noise at the Larmor frequency of the exchange field~\cite{braun_frequency-dependent_2006,sothmann_transport_2010}. In the following, we demonstrate how the spin precession manifests itself in the waiting-time distribution. \begin{figure} \includegraphics[width=\columnwidth]{WT_perp_V.pdf} \caption{\label{fig:Perpendicular1}Waiting-time distribution as a function of bias voltage for a quantum-dot spin valve with perpendicular magnetizations, $\varphi_\text{L}=-\varphi_\text{R}=\pi/4$. Parameters are $\Gamma_\text{L}=10\Gamma_\text{R}$, $\varepsilon=U/4$, $p=1$. White dashed lines mark the times after which the spin of the dot has precessed by an angle $(2n+1)\pi$.} \end{figure} Figure~\ref{fig:Perpendicular1} shows the waiting-time distribution as a function of the bias voltage for perpendicular magnetizations. For a given voltage, the waiting-time distribution shows oscillations with a frequency given by the Larmor frequency of the spin precession.
The mechanism that gives rise to these oscillations is the following. Electrons preferably enter the dot with a spin pointing along the magnetization of the source electrode. On the dot, the electron spin precesses in the exchange field. After a precession by an angle of $(2n+1)\pi$, the overlap between the dot spin and the majority spins of the drain electrode becomes maximal. Hence, there is an increased probability for the electron to tunnel out of the dot, which leads to local maxima of the waiting-time distribution. We remark that the position of the maxima is not precisely given by $\omega\tau=(2n+1)\pi$ as it takes a finite time for an electron to tunnel into the empty dot. As the Larmor frequency depends on the energy-dependent exchange field, the frequency of the oscillations changes with bias voltage as can be clearly seen in Fig.~\ref{fig:Perpendicular1}. At the particle-hole symmetric point, $V=2\varepsilon+U$, the exchange field does not affect the dot spin at all and hence the oscillations vanish at this point. \begin{figure} \includegraphics[width=\columnwidth]{WT_Perp_p.pdf} \caption{\label{fig:Perpendicular2}Waiting-time distribution for a quantum-dot spin valve with perpendicular magnetizations. $V=3U/4$, other parameters as in Fig.~\ref{fig:Perpendicular1}.} \end{figure} In Fig.~\ref{fig:Perpendicular2}, we plot the waiting-time distribution for a fixed bias voltage and different values of the polarization. While for half-metallic leads the oscillations in the waiting times are well pronounced, they quickly get washed out as the polarization is reduced and disappear around $p=0.6$ because of the strong decoherence due to tunneling events. Hence, in order to observe the spin precession in the waiting times, highly polarized electrodes are needed. This is in contrast to the signatures in the finite-frequency noise, which are observable even for moderate polarizations of $p\approx 0.3$ achievable, e.g., with electrodes made from Fe, Ni or Co~\cite{monsma_spin_2000}.
\subsection{Spin resonance} \begin{figure} \includegraphics[width=\columnwidth]{WT_SpinResonance.pdf} \caption{\label{fig:SpinResonance}Waiting-time distribution as a function of applied bias voltage and time. Parameters are $\Gamma_\text{L}=10\Gamma_\text{R}$, $\varepsilon=-U/4$, $p=1$ and $\varphi_\text{L}=-\varphi_\text{R}=0.495\pi$. White dashed lines mark again the times after which the spin has precessed by an angle $(2n+1)\pi$ in the exchange field.} \end{figure} As a last transport feature, we discuss how a recently predicted spin resonance~\cite{hell_spin_2014} manifests itself in the waiting-time distribution. To this end, we consider transport inside the Coulomb-blockade region where the dot is preferably singly occupied. In principle, one needs to perform a second-order calculation here. However, we expect that our sequential tunneling approximation still captures the essential features of the spin resonance and its signatures in the waiting-time distribution. As adding or removing an extra electron from the dot is energetically unfavorable in the Coulomb-blockade regime, electrons on the dot have a long lifetime such that the waiting-time distribution decays exponentially on a very long time scale. We now focus on the case of leads with nearly antiparallel magnetizations. In this situation, a spin blockade arises on top of the Coulomb blockade, as majority spin electrons from the source are minority spin electrons in the drain and hence have a suppressed probability of leaving the dot. In the nearly antiparallel geometry, the exchange field typically has a large component along the spin accumulation on the dot and only a small component perpendicular to it. Hence, there is only a very weak precession of the dot spin, which cannot lift the spin blockade. In consequence, the waiting-time distribution shows a slow exponential decay with weak oscillations superimposed, cf.~Fig.~\ref{fig:SpinResonance} at $V=7.75 U$.
However, as the exchange field contributions from the left and right lead depend independently on the level position and bias voltage, it is possible to make the exchange field component along the spin accumulation vanish. In this case, the spin precesses in the small remaining exchange field perpendicular to the spin accumulation. This precession periodically lifts the spin blockade of the dot and increases the current from its suppressed value in the antiparallel geometry back to its value in the parallel geometry~\cite{hell_spin_2014}. In the waiting-time distribution, the resonance shows up as strong oscillations with the Larmor frequency on top of a slow exponential decay, cf.~Fig.~\ref{fig:SpinResonance} at $V=7.35 U$. Hence, as the system is tuned through the resonance, the current exhibits a peak (not shown) while the waiting-time distribution shows an increase of the period and amplitude of its oscillations. \section{\label{sec:Summary}Summary} We have analyzed the electronic waiting times of a quantum-dot spin valve. We obtained analytical results for the waiting-time distribution for collinear magnetizations as well as in the noncollinear case when neglecting the effect of a tunneling induced exchange field. We then discussed signatures of characteristic transport features of a quantum-dot spin valve in the waiting-time distribution. In particular, we showed that the electron bunching due to dynamical channel blockade leads to two different exponential decays of the waiting-time distribution. The spin precession for noncollinear setups gives rise to characteristic oscillations of the waiting-time distribution. Finally, we demonstrated that a recently predicted spin resonance in the Coulomb-blockade regime gives rise to pronounced oscillations in the waiting-time distribution on a very long time scale.
\acknowledgments I thank Christian Flindt for valuable feedback on the manuscript and acknowledge financial support from the Swiss NSF via the NCCR QSIT.
\section{Introduction} The horizontal motion of particles of an ideal fluid on a free surface obeys a set of nonlinear ordinary differential equations, which only depend on the surface and its space-time gradient and curvature. \cite{John1953} derived the equations of motion for such particles on the zero-stress surface of two-dimensional (2-D) gravity waves, and~\cite{sclav05} generalized them to three-dimensional (3-D) waves. In particular, given a Cartesian reference system $(x,y,z)$, where $z$ is along the vertical direction, he exploited the property that the zero-stress free surface $z=\zeta(x,y,t)$ is an iso-pressure surface, and thus the hydrodynamic pressure gradient $\nabla p$ is collinear with the outward normal $\mathbf{n}\sim\nabla(z-\zeta)$ to the surface, where $\nabla=\left(\partial_{x},\partial_{y},\partial_{z}\right)$. This implies that on the free surface \begin{equation} \nabla(z-\zeta)\times\nabla p=\mathbf{0},\qquad z=\zeta.\label{cross} \end{equation} From Euler\textquoteright s equations, the acceleration of a fluid particle in a 3-D flow satisfies \begin{equation*} \frac{{\rm d}^{2}\mathbf{r}}{{\rm d}t^{2}}=-\frac{1}{\rho}\nabla p+\mathbf{f},\label{Euler2} \end{equation*} where $\mathbf{r}=(x(t),y(t),z(t))$ is the instantaneous vector position of the fluid particle and $\mathbf{f}=(0,0,-\gr)$ is the force due to gravitational acceleration $\gr$. Then, Eq.~(\ref{cross}) can be written as \begin{equation} \left(-\partial_x\zeta\mathbf{i}-\partial_y\zeta\mathbf{j}+\mathbf{k}\right)\times\left(-\frac{{\rm d}^{2}\mathbf{r}}{{\rm d}t^{2}}+\mathbf{f}\right)=\mathbf{0},\label{cross2} \end{equation} where $({\bf i},{\bf j},{\bf k})$ are unit vectors along the $x,y$ and $z$ directions, respectively. The $z$ component of the cross product (\ref{cross2}) is redundant as it is a linear combination of the $x$ and $y$ components.
These yield the coupled equations \begin{equation} \begin{array}{c} \partial_y\zeta\left(\frac{{\rm d}^{2}z}{{\rm d}t^{2}}+\gr\right)+\frac{{\rm d}^{2}y}{{\rm d}t^{2}}=0,\\ \\ \partial_x\zeta\left(\frac{{\rm d}^{2}z}{{\rm d}t^{2}}+\gr\right)+\frac{{\rm d}^{2}x}{{\rm d}t^{2}}=0. \end{array}\label{JS1} \end{equation} Since the fluid particle is constrained on the free surface $\zeta$, its vertical velocity $\dot{z}=\frac{{\rm d}z}{{\rm d}t}$ and acceleration $\ddot{z}=\frac{{\rm d}^{2}z}{{\rm d}t^{2}}$ depend on the horizontal motion $\mathbf{x}=(x(t),y(t))$. In particular, $\ddot{z}$ follows from differentiating $z(t)=\zeta(x(t),y(t),t)$ twice with respect to time. Substituting the resulting $\ddot{z}$ in Eq.~(\ref{JS1}) yields the John-Sclavounos (JS) equations~[see Eqs.~(2.17)-(2.18) in~\cite{sclav05}] \begin{equation} \begin{array}{c} \left(1+\zeta_{,x}^{2}\right)\ddot{x}+\zeta_{,x}\zeta_{,y}\ddot{y}+\left(\zeta_{,tt}+2\zeta_{,xt}\dot{x}+2\zeta_{,yt}\dot{y}+\zeta_{,xx}\dot{x}^{2}+2\zeta_{,xy}\dot{x}\dot{y}+\zeta_{,yy}\dot{y}^{2}+\gr\right)\zeta_{,x}=0,\\ \\ \left(1+\zeta_{,y}^{2}\right)\ddot{y}+\zeta_{,x}\zeta_{,y}\ddot{x}+\left(\zeta_{,tt}+2\zeta_{,xt}\dot{x}+2\zeta_{,yt}\dot{y}+\zeta_{,xx}\dot{x}^{2}+2\zeta_{,xy}\dot{x}\dot{y}+\zeta_{,yy}\dot{y}^{2}+\gr\right)\zeta_{,y}=0, \end{array}\label{JS} \end{equation} for the evolution of the horizontal fluid particle trajectories driven by the free-surface elevation and its Eulerian temporal and spatial derivatives.
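The substitution of $\ddot{z}$ that leads from Eq.~(\ref{JS1}) to Eq.~(\ref{JS}) can be verified symbolically. The sketch below uses one concrete test surface (an arbitrary choice; any smooth $\zeta$ would do) and includes the factors of two on the mixed derivatives $\zeta_{,xt}\dot{x}$ and $\zeta_{,yt}\dot{y}$ required by the chain rule:

```python
import sympy as sp

t, g, a, k, w = sp.symbols('t g a k omega', real=True)
xs, ys = sp.symbols('x y', real=True)
X, Y = sp.Function('X')(t), sp.Function('Y')(t)

# concrete test surface (arbitrary; any smooth zeta(x, y, t) works)
zeta = a*sp.sin(k*xs - w*t) + sp.Rational(1, 10)*xs*ys**2

# partial derivatives of zeta, evaluated on the trajectory x = X(t), y = Y(t)
d = lambda e, *v: e.diff(*v).subs({xs: X, ys: Y})
zx, zy = d(zeta, xs), d(zeta, ys)
zxx, zyy, zxy = d(zeta, xs, 2), d(zeta, ys, 2), d(zeta, xs, ys)
zxt, zyt, ztt = d(zeta, xs, t), d(zeta, ys, t), d(zeta, t, 2)

Z = zeta.subs({xs: X, ys: Y})          # particle height z(t) = zeta(x(t), y(t), t)
Xd, Yd = X.diff(t), Y.diff(t)
Xdd, Ydd, Zdd = X.diff(t, 2), Y.diff(t, 2), Z.diff(t, 2)   # Zdd via full chain rule

# bracketed term of the JS equations (note the chain-rule factors of 2)
Q = ztt + 2*zxt*Xd + 2*zyt*Yd + zxx*Xd**2 + 2*zxy*Xd*Yd + zyy*Yd**2 + g
js_x = (1 + zx**2)*Xdd + zx*zy*Ydd + Q*zx
js_y = (1 + zy**2)*Ydd + zx*zy*Xdd + Q*zy

assert sp.simplify(zx*(Zdd + g) + Xdd - js_x) == 0
assert sp.simplify(zy*(Zdd + g) + Ydd - js_y) == 0
```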
\textcolor{black}{Here and in the following, the subscripted commas denote partial derivatives, i.e., $\zeta_{,x}=\partial \zeta/\partial x$.} \textcolor{black}{We point out that, as opposed to Euler's equations, the JS equations are a set of ordinary differential equations (ODEs) describing the kinematics of a single fluid particle; as such, they generate a finite-dimensional dynamical system.} To the best of the authors' knowledge, the properties and the structure of the JS equations have not been investigated in detail. In this work, we derive and study these equations using first principles in order to gain mathematical and physical insights into the dynamics of ocean waves and the inception of wave breaking. \section{Main findings} \textcolor{black}{ We demonstrate that the JS equations are more general than initially thought, as they can be derived from first principles using Lagrangian and Hamiltonian formalisms. The derivation of~\cite{John1953} assumes that the free surface $\zeta$ is generated by an inviscid and irrotational fluid. The derivation of \cite{sclav05}, however, does not assume irrotationality. As we show in section~\ref{sec:vor}, the same equations can be derived from an action principle describing the constrained motion of a frictionless particle on an unsteady surface and subject to gravity. In other words, the free surface can be any moving membrane and does not necessarily need to be formed by a fluid.}
Indeed, only if the initial particle velocity is set as that induced by the irrotational flow do the JS equations describe the kinematics of fluid particles. \textcolor{black}{Our main result is presented in section~\ref{sec:symplectic}; it builds on a mathematical description of the vorticity created on unsteady free surfaces, presented in subsection~\ref{sec:vorticity}.} In particular, we find that vorticity created at a zero-stress free surface vanishes at a wave crest when the horizontal particle velocity equals the propagation speed of the crest. This is the kinematic criterion for wave breaking presented in subsection~\ref{sec:kinematic}~\citep{Perlin2013,Shemer2014,Shemer2015}. Drawing on~\cite{cartan1922lessons}~(chapter II, p. 20), further insights into the particle kinematics are gained by exploiting the relation between the symplectic structure of the JS equations and the physical vorticity as explored by~\cite{Bridges_vorticity_2005} for the shallow water equations. In particular, in subsection~\ref{sec:symvor} our analysis of the Hamiltonian structure of the JS equations reveals that the associated symplectic one-form is the physical fluid circulation and certain terms of the associated two-form relate to the vorticity created on the zero-stress free surface. If the kinematic criterion for wave breaking holds for the largest crest, then the symplectic two-form instantaneously reduces to that associated with the motion of a particle in free flight, as if the free surface and vorticity did not exist. In this regard, recent studies indicate that the inception of breaking of the largest crest of unsteady wave groups occurs when the particle velocity $u_x$ exceeds about $0.84$ times the crest velocity $V_c$~\citep{Barthelemy2015,BannerSaket2015}.
In particular, none of the non-breaking or recurrent groups reach the threshold $B_x=u_x/V_c=0.84$, while all marginal breaking cases exceed the threshold~\citep{Barthelemy2015,BannerSaket2015} and eventually the particle speed $u_x$ overcomes the wave crest speed $V_c$ (see Figure 3 in~\cite{Barthelemy2015} and~\cite{Shemer2014}). This observation motivates a close examination of the space-time transport of wave energy near a large unsteady crest and of possible local superharmonic instabilities that are triggered as the threshold $B_x$ is exceeded, leading to breaking, similar to those found for steep steady waves~\citep{Longuet-HigginspartI1978,BridgesJFMhomoclinic}. Our results in section~\ref{sec:slow} suggest that as a wave crest grows and approaches breaking, the local kinetic energy $K_{e}$ on the free surface increases much faster than the potential energy $\rho \gr\zeta$, and the normal kinetic energy flux velocity $C_{K_{e}}$ tends to decrease, approaching the normal fluid speed $u_{n}$. Equivalently, the Lagrangian kinetic energy flux speed $C_{K_{e}}-u_{n}$ seen by a fluid particle is practically zero. Consequently, the accumulation of potential energy on the surface is strongly attenuated. Thus, at these special instants of time fluid particles on the surface behave like particles in free flight as if the free surface did not exist, in agreement with the analysis of the symplectic structure of the particle kinematics. Further studies on the coupling of the kinematics of surface fluid particles with the evolution of the wave field are desirable using Zakharov's (1968) Hamiltonian formalism (\nocite{Zakharov1968}\cite{Krasitskii1994,Zakharov1999}). Finally, the Hamiltonian formulation of the JS equations also helps gain significant insight into the possibility of singular behavior of particle trajectories and trapping regions, as conjectured by Bridges~(see contributed appendix in~\cite{sclav05}).
For instance, in section~\ref{sec:blowup} we exploit the conservation and special form of the Hamiltonian function for steady surfaces and traveling waves and prove that particle velocities stay bounded at all times, ruling out the possibility of finite-time blowup of solutions. The same argument does not rule out the possible occurrences of finite-time blowups on unsteady surfaces. We also identify regions where particles are trapped and remain so at all times if their initial velocity is bounded by a prescribed value~(see section~\ref{sec:trapping}). \section{Hamiltonian properties of the JS equations}\label{sec:vor} \textcolor{black}{ In the following, we first derive the JS equations from first principles using a Lagrangian formalism applied to the motion of a single frictionless particle constrained on an unsteady surface and subject to gravity (subsection~\ref{sec:lag}). In subsection~\ref{sec:Dirac} we demonstrate that the associated Hamiltonian structure follows from the Legendre transformation and is also confirmed using Dirac's theory of constrained Hamiltonian systems. Finally, in subsection~\ref{sec:symplectic2} the symplectic one- and two-forms are derived. Note that the JS equations describe the kinematics of a single inviscid particle; as a result the associated phase-space dynamics is finite-dimensional. Further insights into the particle kinematics on a zero-stress free surface are to be gained from the analysis of the symplectic structure of the JS equations and associated differential forms, as discussed in later sections.
} \subsection{Lagrangian formalism}\label{sec:lag} The Lagrangian for a free particle subject to gravity in ${\mathbb R}^3$ is given by \begin{equation*} \mathcal{L}({\bf r},\dot{\bf r})=K-P,\label{L} \end{equation*} where the kinetic and potential energies are \begin{equation*} K=\frac{1}{2}\left(\dot{x}^{2}+\dot{y}^{2}+\dot{z}^{2}\right),\qquad P=\gr z,\label{KP} \end{equation*} and $\mathbf{r}=(x(t),y(t),z(t))$ is the instantaneous vector particle position. Minimizing the action $\mathcal{A}=\int\mathcal{L}{\rm d}t$ over all possible paths yields the Euler--Lagrange equations \begin{equation*} \frac{\delta \cal A}{\delta {\bf r}}=\frac{{\rm d}}{{\rm d}t}\left(\frac{\partial \cal L}{\partial\dot{{\bf r}}}\right)-\frac{\partial \cal L}{\partial {\bf r}}=0,\label{Euler} \end{equation*} or equivalently, $\ddot{\mathbf{r}}=\mathbf{f}$, where $\mathbf{f}=(0,0,-\gr)$. We now assume that the particle is constrained to move on an unsteady surface $z=\zeta(x,y,t)$. Thus, the horizontal particle motion is coupled with that of the evolving surface. The associated dynamical equations follow from the constrained Lagrangian \begin{equation} \mathcal{L}_{\rm c}={\cal L}+\lambda\left[z-\zeta(x,y,t)\right], \label{Lc} \end{equation} where we have introduced the Lagrange multiplier $\lambda$ to impose that the particle always stays on the surface $z=\zeta$.
Minimizing the action with respect to $x,y,z$ and $\lambda$ yields the set of Euler--Lagrange equations \begin{align} \frac{{\rm d}}{{\rm d}t}\frac{\partial {\mathcal L}_{\rm c}}{\partial \dot x}-\frac{\partial {\mathcal L}_{\rm c}}{\partial x} & =\ddot{x}-\lambda_{x}(z-\zeta)+\lambda\zeta_{,x}=0,\label{Leq_1}\\ \frac{{\rm d}}{{\rm d}t}\frac{\partial {\mathcal L}_{\rm c}}{\partial \dot y}-\frac{\partial {\mathcal L}_{\rm c}}{\partial y} & =\ddot{y}-\lambda_{y}(z-\zeta)+\lambda\zeta_{,y}=0,\label{Leq_2}\\ \frac{{\rm d}}{{\rm d}t}\frac{\partial {\mathcal L}_{\rm c}}{\partial \dot z}-\frac{\partial {\mathcal L}_{\rm c}}{\partial z} & =\ddot{z}+\gr-\lambda=0,\label{Leq_3}\\ \frac{\partial {\mathcal L}_{\rm c}}{\partial\lambda} & =z-\zeta=0. \label{Leq_4} \end{align} Here, the last equation imposes the constraint $z=\zeta$, which can be differentiated once and twice with respect to time to yield the vertical particle velocity \begin{equation} \dot{z}=\zeta_{,x}\dot{x}+\zeta_{,y}\dot{y}+\zeta_{,t},\label{zdot} \end{equation} and acceleration \begin{equation} \ddot{z}=\zeta_{,x}\ddot{x}+\zeta_{,y}\ddot{y}+2\zeta_{,xt}\dot{x}+2\zeta_{,yt}\dot{y}+\zeta_{,xx}\dot{x}^{2}+2\zeta_{,xy}\dot{x}\dot{y}+\zeta_{,yy}\dot{y}^{2}+\zeta_{,tt},\label{zdotdot} \end{equation} as a function of the horizontal variables $(x,y,\dot{x},\dot{y})$. Then, from Eqs.~(\ref{Leq_1})-(\ref{Leq_2}) the horizontal trajectories satisfy the coupled ordinary differential equations (ODEs) \begin{equation} \begin{array}{c} \ddot{x}+\lambda\zeta_{,x}=0,\\ \\ \ddot{y}+\lambda\zeta_{,y}=0. \end{array}\label{xysec1} \end{equation} The multiplier $\lambda$ satisfies the implicit equation \begin{equation} \lambda=\ddot{z}+\gr,\label{Multi} \end{equation} which follows from Eq.~\eqref{Leq_3}.
In particular, from Eqs.~\eqref{zdotdot} and \eqref{xysec1} the explicit expression for the multiplier follows as \begin{equation*} \lambda=\frac{2\zeta_{,xt}\dot{x}+2\zeta_{,yt}\dot{y}+\zeta_{,xx}\dot{x}^{2}+2\zeta_{,xy}\dot{x}\dot{y}+\zeta_{,yy}\dot{y}^{2}+\zeta_{,tt}+\gr}{1+\zeta_{,x}^2+\zeta_{,y}^2}.\label{lambda} \end{equation*} Furthermore, Eqs.~(\ref{xysec1}) can be written as \begin{equation*} \begin{array}{c} \ddot{x}+\left(\ddot{z}+\gr\right)\zeta_{,x}=0,\\ \\ \ddot{y}+\left(\ddot{z}+\gr\right)\zeta_{,y}=0, \end{array}\label{xy2} \end{equation*} which, after substituting~Eq.~\eqref{zdotdot}, are identical to the JS equations given in~Eq.~\eqref{JS}~(see Introduction). The JS equations can also be obtained by minimizing the action associated with the reduced Lagrangian \begin{equation*} \mathcal{\widetilde{L}}_c=\frac{1}{2}\left(\dot{x}^{2}+\dot{y}^{2}+\left(\zeta_{,t}+\zeta_{,x}\dot{x}+\zeta_{,y}\dot{y}\right)^{2}\right)-\gr\zeta,\label{Lcons} \end{equation*} which follows from the augmented Lagrangian in Eq.~\eqref{Lc} setting $z=\zeta$ and $\dot{z}$ equal to Eq.~\eqref{zdot}.
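The explicit multiplier can be recovered with a few lines of computer algebra (a sketch treating the surface derivatives as independent symbols, with the chain-rule factors of two on the mixed space-time derivatives): substitute $\ddot{x}=-\lambda\zeta_{,x}$ and $\ddot{y}=-\lambda\zeta_{,y}$ from Eq.~(\ref{xysec1}) into the chain-rule expression for $\ddot{z}$ and solve Eq.~(\ref{Multi}) for $\lambda$:

```python
import sympy as sp

lam, g = sp.symbols('lambda g')
zx, zy, zxx, zyy, zxy, zxt, zyt, ztt, xd, yd = sp.symbols(
    'zeta_x zeta_y zeta_xx zeta_yy zeta_xy zeta_xt zeta_yt zeta_tt xdot ydot')

xdd, ydd = -lam*zx, -lam*zy              # horizontal accelerations, Eq. (xysec1)
# chain-rule expression for the vertical acceleration
zdd = (zx*xdd + zy*ydd + 2*zxt*xd + 2*zyt*yd
       + zxx*xd**2 + 2*zxy*xd*yd + zyy*yd**2 + ztt)

sol = sp.solve(sp.Eq(lam, zdd + g), lam)[0]   # implicit equation lambda = zdd + g

expected = (2*zxt*xd + 2*zyt*yd + zxx*xd**2 + 2*zxy*xd*yd + zyy*yd**2
            + ztt + g) / (1 + zx**2 + zy**2)
assert sp.simplify(sol - expected) == 0
```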
In matrix form \begin{equation*} \mathcal{\widetilde{L}}_c=\frac{1}{2}\mathbf{\dot{x}}^{T}\mathbf{B}\dot{\mathbf{x}}+\mathbf{\boldsymbol{\alpha}}^{T}\dot{\mathbf{x}}-\gr\zeta+\frac{1}{2}\zeta_{,t}^{2},\label{Lmat} \end{equation*} where $\mathbf{x}=(x(t),y(t))$ is the horizontal vector of position and \begin{equation} \mathbf{B}=\left[\begin{array}{cc} 1+\zeta_{,x}^{2} & \zeta_{,x}\zeta_{,y}\\ \zeta_{,x}\zeta_{,y} & 1+\zeta_{,y}^{2} \end{array}\right],\qquad\boldsymbol{\alpha}=\zeta_{,t}\left[\begin{array}{c} \zeta_{,x}\\ \zeta_{,y} \end{array}\right].\label{Bmat} \end{equation} We note that $\mathbf{B}$ is symmetric and positive-definite with real eigenvalues \begin{equation*} \lambda_1=1,\quad\quad \lambda_2=|\mathbf{B}|=1+\zeta_{,x}^2+\zeta_{,y}^2, \label{eig} \end{equation*} and the corresponding orthogonal eigenvectors \begin{equation*} \mathbf{w}_1=(-\zeta_{,y},\zeta_{,x})=\nabla^{\perp}\zeta,\quad\quad \mathbf{w}_2=(\zeta_{,x},\zeta_{,y})=\nabla\zeta. \label{eigv} \end{equation*} These will be useful later in the analysis of the finite time blowup of the JS equations (cf. Section~\ref{sec:blowup}). The generalized momentum $\mathbf{p}=(p_x,p_y)$ is a function of the horizontal particle velocity $\mathbf{\dot{x}}$ as \begin{equation} \mathbf{p}=\mathbf{B}\mathbf{\dot{x}}+\boldsymbol{\alpha}, \label{pmon1} \end{equation} where \begin{equation} p_{x}=\frac{\partial\mathcal{\widetilde{L}}_c}{\partial\dot{x}}=\left(1+\zeta_{,x}^{2}\right)\dot{x}+\zeta_{,x}\zeta_{,y}\dot{y}+\zeta_{,x}\zeta_{,t}, \label{px1} \end{equation} and \begin{equation} p_{y}=\frac{\partial\mathcal{\widetilde{L}}_c}{\partial\dot{y}}=\left(1+\zeta_{,y}^{2}\right)\dot{y}+\zeta_{,y}\zeta_{,x}\dot{x}+\zeta_{,y}\zeta_{,t}. 
\label{py1} \end{equation} Then $(\mathbf{p},\mathbf{x})$ are canonically conjugate variables and the Hamiltonian follows from the Legendre transform of $\mathcal{\widetilde{L}}_c$ as~\citep{Morrison1998} \begin{equation} {\mathcal{H}}_c=p_{x}\dot{x}+p_{y}\dot{y}-\mathcal{\widetilde{L}}_c=\mathbf{p}^T\mathbf{\dot{x}}-\mathcal{\widetilde{L}}_c. \label{HC} \end{equation} From Eq.~(\ref{pmon1}) the horizontal particle velocity $\mathbf{\dot{x}}$ can be written as a function of the canonical momentum $\mathbf{p}$, and the Hamiltonian can be recast as \begin{equation} \mathcal{H}_c=\frac{1}{2}\left(\mathbf{p}-\boldsymbol{\alpha}\right)^{T}\mathbf{B}^{-1}\left(\mathbf{p}-\boldsymbol{\alpha}\right)+\gr\zeta-\frac{1}{2}\zeta_{,t}^{2}.\label{HV} \end{equation} Note that for unsteady surfaces, $\mathcal{H}_c$ is not conserved as the particle behaves as an open system exchanging energy with the moving surface. The Lagrangian formalism developed above highlights a fundamental property of the JS equations. On the one hand, these are originally derived from the dynamical condition that the zero-stress free surface $z=\zeta$ is an iso-pressure surface~\citep{sclav05}. On the other hand, we have derived the same equations from an action principle for the constrained motion of a frictionless particle subject to gravity on an unsteady surface. The unsteady surface is arbitrary and can be generated by many physical processes. If the interest is in the kinematics of fluid particles on the free surface of gravity water waves, one must know the irrotational velocity field beneath the waves. Indeed, only if the initial particle velocity is set as that induced by the irrotational flow do the JS equations describe the kinematics of fluid particles. A rigorous proof of the previous statement is beyond the scope of this paper.
We only point out that the horizontal velocity $\dot{\mathbf{x}}$ of a fluid particle on an irrotational water surface satisfies \begin{equation} \dot{\mathbf{x}}=\mathbf{U}_h(\mathbf{x}(t),\zeta(x,y,t),t),\label{xdot} \end{equation} where the horizontal Eulerian velocity $\mathbf{U}_h=\nabla\phi=(\phi_{,x},\phi_{,y})$ is given in terms of the velocity potential $\phi(x,y,z,t)$. Thus, we expect that the JS equations~\eqref{JS} can also be derived using Eq.~\eqref{xdot} and the Stokes equations (see section \ref{sec:kinematic}, and in particular Eqs.~\eqref{B},~\eqref{B1a}). For instance, the JS equations for the case of steady irrotational flows are derived in Appendix~\ref{app:Stokes}. \subsection{Hamiltonian formalism via Dirac Theory}\label{sec:Dirac} The Lagrangian formalism developed in the previous section yields the Hamiltonian structure of the JS equations~(\ref{JS}) in terms of the canonical variables $(\mathbf{p},\mathbf{x})$. A non-canonical structure in terms of the original physical variables (position $\mathbf{x}$ and velocity $\mathbf{u}$) can be derived within the framework of Dirac's (1950) theory of constrained Hamiltonian systems (see also~\cite{dira58}). The transformation~\eqref{pmon1} between the non-canonical and canonical variables follows from Darboux's theorem for finite-dimensional Hamiltonian systems~(see, e.g.,~\cite{Morrison1998}). \subsubsection{Dirac theory: an introduction} An alternative way to constrain a Hamiltonian system is to work directly within the Hamiltonian structure and to add the constraints, with their Lagrange multipliers, to the Hamiltonian: $$ H_*=H+\lambda_\alpha \Phi_\alpha, $$ where $\lambda_\alpha$ are the Lagrange multipliers, $\Phi_\alpha$ are the constraints, and summation over the constraint label $\alpha$ is implied.
In the case under consideration, there are two constraints: the first imposes that the particle lies on the surface at a given time (i.e., $z=\zeta$), and the second imposes that the velocity of the particle coincides with that of the surface at the given position and time (i.e., $u_z=\dot{z}$ equals $\mathrm{d}\zeta/\mathrm{d}t$). The advantage of working within the Hamiltonian framework is to obtain the expression of the constrained system within the same set of dynamical variables. For instance, in the case we consider, the dynamical variables are the positions and velocities of the particles. Imposing the constraints within the Hamiltonian framework allows one to obtain the constrained dynamics also in terms of positions and velocities. In a way very similar to the Lagrangian framework, the Lagrange multipliers are obtained by imposing that the constraints are conserved quantities in the dynamics given by $H_*$, i.e., ${\mathrm d}{\Phi_\alpha}/{\mathrm d}t=0$. Consider a parent (unconstrained) Hamiltonian system defined by the Poisson bracket \begin{equation} \{F,G\}=\nabla F \cdot {\mathbb J}({\bf z})\nabla G, \label{bracket} \end{equation} and Hamiltonian $\mathcal{H}({\bf z})$ with dynamical variables ${\bf z}=(z_1,\ldots,z_N)$, where ${\mathbb J}({\bf z})$ is the $N\times N$ Poisson matrix and $\nabla =(\partial_{z_1},\ldots,\partial_{z_N})$. We recall that the Poisson bracket is an antisymmetric bilinear operator, \begin{equation} \{F,G\}=-\{G,F\},\label{bilin} \end{equation} that it satisfies the Leibniz rule \begin{equation} \{F_1F_2,F_3\}=F_1\{F_2,F_3\}+\{F_1,F_3\}F_2,\label{Leib} \end{equation} and the Jacobi identity \begin{equation} \{\{F_1,F_2\},F_3\}+\{\{F_3,F_1\},F_2\}+\{\{F_2,F_3\},F_1\}=0,\label{Jacob} \end{equation} for all observables $F_1({\bf z})$, $F_2({\bf z})$ and $F_3({\bf z})$, scalar functions of the dynamical variables.
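These axioms are straightforward to verify for a canonical Poisson matrix; the sketch below checks antisymmetry and the Jacobi identity on a few arbitrarily chosen sample observables:

```python
import sympy as sp

zv = sp.symbols('x y z u_x u_y u_z')     # dynamical variables
J = sp.zeros(6)
for i in range(3):
    J[i, i+3], J[i+3, i] = 1, -1         # canonical Poisson matrix

def pb(F, G):
    # Poisson bracket {F, G} = grad(F) . J . grad(G)
    gF = sp.Matrix([F.diff(v) for v in zv])
    gG = sp.Matrix([G.diff(v) for v in zv])
    return (gF.T * J * gG)[0, 0]

x, y, z, ux, uy, uz = zv
F1, F2, F3 = x**2*uy, z*ux + uz**2, sp.sin(y)*uz   # arbitrary sample observables

assert sp.simplify(pb(F1, F2) + pb(F2, F1)) == 0   # antisymmetry
jac = pb(pb(F1, F2), F3) + pb(pb(F3, F1), F2) + pb(pb(F2, F3), F1)
assert sp.simplify(jac) == 0                       # Jacobi identity
```

For a constant Poisson matrix the Jacobi identity holds automatically; for the state-dependent Dirac bracket constructed below it is the nontrivial property proved by Dirac.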
For the particle kinematics on a free surface, the dynamical variables are ${\bf z}=(x,y,z,u_x,u_y,u_z)$ and the Poisson matrix is the canonical one: $$ {\mathbb J}=\left( \begin{array}{cc} 0 & {\mathbb I}_3\\ -{\mathbb I}_3 & 0 \end{array} \right), $$ leading to the well-known Hamilton's equations; the equation of motion of any observable $F$ is given by ${\mathrm d}F/{\mathrm d}t=\{F,H\}$ for the unconstrained dynamics generated by $H$, or by ${\mathrm d}F/{\mathrm d}t=\{F,H_*\}$ for the constrained dynamics generated by $H_*$. The Lagrange multipliers are obtained from $\{\Phi_\alpha,H_*\}=0$ and are defined by the set of equations $$ \{\Phi_\alpha,\Phi_\beta\}\lambda_\beta+\{\Phi_\alpha, H\}=0, $$ using the bilinearity of the Poisson bracket in~Eq.~\eqref{bilin} and the associated Leibniz rule in~Eq.~\eqref{Leib}. This equation is valid on the surface defined by the constraints $\Phi_\alpha=0$. In order to solve for the Lagrange multipliers, we define the matrix ${\mathbb C}$ with elements $C_{\alpha \beta}=\{\Phi_\alpha , \Phi_\beta\}$. If this matrix is invertible, we denote by ${\mathbb D}$, with elements $D_{\alpha \beta}$, its inverse, and the Lagrange multipliers are given by $\lambda_\beta =-D_{\beta \gamma}\{\Phi_\gamma,H\}$. Therefore the equations of motion ${\mathrm d}F/{\mathrm d}t=\{F,H_*\}$ in the constrained system become \begin{equation} \label{eq:FHD} \dot{F}=\{F,H\}-\{F,\Phi_\alpha\}D_{\alpha \beta}\{\Phi_\beta,H\}, \end{equation} using again the bilinearity and the Leibniz rule of the Poisson bracket $\{\cdot,\cdot\}$~(see Eqs.~\eqref{bilin} and \eqref{Leib}). In the same way as above, these equations of motion are valid on the surface defined by the constraints $\Phi_\alpha=0$.
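This construction can be carried out explicitly for the surface constraints introduced below. The sketch that follows (sympy; the variable names and the gravity stand-in $g$ are ours) builds the matrix ${\mathbb C}$, its inverse ${\mathbb D}$, and the right-hand side of Eq.~\eqref{eq:FHD}, and checks that the constraints are conserved by the constrained dynamics:

```python
import sympy as sp

# phase-space variables (x, y, z, u_x, u_y, u_z); zeta(x, y, t) is the surface
x, y, z, ux, uy, uz, t = sp.symbols('x y z u_x u_y u_z t')
g = sp.Symbol('g', positive=True)
zeta = sp.Function('zeta')(x, y, t)

q, p = [x, y, z], [ux, uy, uz]

def pb(F, G):  # canonical parent bracket
    return sp.expand(sum(sp.diff(F, a)*sp.diff(G, b) - sp.diff(F, b)*sp.diff(G, a)
                         for a, b in zip(q, p)))

# constraints: particle on the surface, vertical velocity matching the surface
Phi = [z - zeta,
       uz - ux*sp.diff(zeta, x) - uy*sp.diff(zeta, y) - sp.diff(zeta, t)]

C = sp.Matrix(2, 2, lambda a, b: pb(Phi[a], Phi[b]))
D = C.inv()

def rhs(F, H):  # constrained equation of motion, Eq. (eq:FHD)
    corr = sum(pb(F, Phi[a])*D[a, b]*pb(Phi[b], H)
               for a in range(2) for b in range(2))
    return sp.simplify(pb(F, H) - corr)

H = (ux**2 + uy**2 + uz**2)/2 + g*z   # free particle in gravity (our stand-in)
# C_{12} = 1 + zeta_x^2 + zeta_y^2, and both constraints are conserved
assert sp.simplify(C[0, 1] - (1 + sp.diff(zeta, x)**2 + sp.diff(zeta, y)**2)) == 0
assert rhs(Phi[0], H) == 0 and rhs(Phi[1], H) == 0
```

The vanishing of $\dot\Phi_\alpha$ is automatic once ${\mathbb D}$ is the exact inverse of ${\mathbb C}$, which is precisely how the Lagrange multipliers were fixed above.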
Following~\cite{dira50,dira58}, Eq.~(\ref{eq:FHD}) suggests defining a new bracket for the constrained system as \begin{equation} \label{eq:expDB} \{F,G\}_*=\{F,G\}-\{F,\Phi_{\alpha}\}D_{\alpha \beta}\{\Phi_{\beta},G\}, \end{equation} such that the equations of motion for the constrained system are given by ${\mathrm d}F/{\mathrm d}t=\{F,H\}_*$, i.e., with the original Hamiltonian $H$ but a different bracket. The highly non-trivial feature is that this bracket is a Poisson bracket, i.e., it satisfies the Jacobi identity, as proved by Dirac. As a consequence, the constrained system defined by the Hamiltonian $H$ and the bracket $\{\cdot,\cdot\}_*$ is a Hamiltonian system. \subsubsection{Non-canonical Hamiltonian of the JS equations} The two constraints we consider are explicitly written as \begin{equation} \Phi_1=z-\zeta(x,y,t)=0,\quad\quad \Phi_2=u_z-u_x\zeta_{,x}-u_y \zeta_{,y}-\zeta_{,t}=0. \label{con} \end{equation} The matrix ${\mathbb C}$ is invertible since \begin{equation} \label{eq4C} C_{11}=C_{22}=0,\quad\quad C_{12}=-C_{21}=\{\Phi_1,\Phi_2\}=1+\zeta_{,x}^2+\zeta_{,y}^2. \end{equation} The Dirac bracket~\eqref{eq:expDB} specializes to \begin{equation} \{F,G\}_*=\nabla F \cdot \overline{\mathbb J}_*\nabla G,\label{dbracket} \end{equation} where $\nabla=\partial/\partial {\bf z}$ and ${\bf z}=(x,y,t,u_x,u_y,E)$. The Poisson matrix is given by \begin{equation} \label{eq:DBzeta} \overline{\mathbb J}_*=\left( \begin{array}{cc} 0 & \overline{\bf B}^{-1}\\ -(\overline{\bf B}^{-1})^T & \overline{\cal B} \end{array} \right), \end{equation} with $$ \overline{\bf B}= \left( \begin{array}{ccc} 1+\zeta_{,x}^2 & \zeta_{,x} \zeta_{,y} & \zeta_{,x}\zeta_{,t} \\ \zeta_{,x}\zeta_{,y} & 1+\zeta_{,y}^2 & \zeta_{,y}\zeta_{,t} \\ 0 & 0 & 1 \end{array} \right), $$ and $$ \overline{\cal B}=\left( \begin{array}{ccc} 0 & -b_3 & b_2 \\ b_3 & 0 & -b_1 \\ -b_2 & b_1 & 0 \end{array} \right).
$$ The vector ${\bf b}_{\rm m}=(b_1,b_2,b_3)$ is given by \begin{equation} {\bf b}_{\rm m}=\frac{\overline{\nabla} \zeta \times \overline{\nabla}\left( u_x\zeta_{,x}+u_y\zeta_{,y}+\zeta_{,t} \right)}{1+\vert \nabla\zeta \vert^2} =\frac{\overline{\nabla} \zeta \times\left[ \left(\overline u\cdot\overline{\nabla}\right) \overline{\nabla}\zeta\right]}{1+\vert \nabla\zeta \vert^2}.\label{bb} \end{equation} Here $\overline{\nabla}$ designates the gradient in space-time variables $(x,y,t)$ whereas $\nabla$ is the gradient in space variables $(x,y)$ and $\overline u=(u_x,u_y,1)$. The matrix $\overline{\bf B}$ is always invertible and its eigenvalues are $1+\zeta_{,x}^2+\zeta_{,y}^2$ and $1$ (of multiplicity two). The dynamical variable $E$ is canonically conjugate to time and corresponds to an energy variable, the amount of energy brought in by the time-dependence of the surface. More details on the computation of the Dirac bracket are given in Appendix~\ref{app:Dirac}. The Hamiltonian formulation of the reduced bracket in the physical variables $(x,y,t,u_x,u_y,E)$ is non-canonical. The constrained Hamiltonian $\overline{\mathcal H}_c$ is obtained from the free-particle Hamiltonian, replacing $z$ by $\zeta$ and $u_z$ by $u_x\zeta_{,x}+u_y\zeta_{,y}+\zeta_{,t}$ (see Appendix~\ref{app:Dirac}) \begin{equation} \label{Hc} \overline{\mathcal H}_c=\frac{u_x^2+u_y^2+(\zeta_{,x} u_x+\zeta_{,y} u_y+\zeta_{,t})^2}{2}+\gr \zeta +E. \end{equation} Then, the equations of motion are given by \begin{equation} \frac{{\rm d} \overline{F}}{{\rm d} \tau}=\{\overline{F},\overline{\mathcal{H}}_c\}_*, \label{Fbrack} \end{equation} where $\overline{F}$ is any function of the dynamical variables. It follows that, as expected, $$ \frac{{\rm d}t}{\rm d \tau}=\{t,\overline{\mathcal{H}}_c\}_*=1, $$ i.e., $t=\tau$ with a proper choice of the initial time.
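The invertibility claim for $\overline{\bf B}$ can be confirmed symbolically. In the sketch below (ours), the surface derivatives $\zeta_{,x},\zeta_{,y},\zeta_{,t}$ are replaced by plain real symbols, and the characteristic polynomial is checked against the stated eigenvalues:

```python
import sympy as sp

zx, zy, zt = sp.symbols('zeta_x zeta_y zeta_t', real=True)
B_bar = sp.Matrix([[1 + zx**2, zx*zy, zx*zt],
                   [zx*zy, 1 + zy**2, zy*zt],
                   [0, 0, 1]])

# determinant = product of the stated eigenvalues: (1 + zx^2 + zy^2) * 1 * 1 > 0,
# so B_bar is always invertible
assert sp.simplify(B_bar.det() - (1 + zx**2 + zy**2)) == 0

# characteristic polynomial factors as (lam - 1)^2 (lam - (1 + zx^2 + zy^2))
lam = sp.Symbol('lambda')
char = B_bar.charpoly(lam).as_expr()
assert sp.expand(char - (lam - 1)**2*(lam - (1 + zx**2 + zy**2))) == 0
```

Note that the time derivative $\zeta_{,t}$ drops out of the spectrum entirely: it enters only the nilpotent upper-triangular part of $\overline{\bf B}$.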
Then, the JS equations~\eqref{JS} are given by $\dot{x}=u_x$, $\dot{y}=u_y$ and $$ \frac{{\rm d}u_x}{{\rm d} t}=\{u_x,\overline{\mathcal{H}}_c\}_*,\quad\quad \frac{{\rm d}u_y}{{\rm d} t}=\{u_y,\overline{\mathcal{H}}_c\}_*. $$ Furthermore, we get an equation for the evolution of the energy $E$ as $$ \frac{\dot{E}}{\zeta_{,t}}=\frac{\dot{u}_x}{\zeta_{,x}}=\frac{\dot{u}_y}{\zeta_{,y}}.\label{zz} $$ For a time-independent surface, the Poisson bracket can be further simplified, since the variables $(t,E)$ can be dropped. In this case, the Poisson matrix reduces to a $4\times 4$ matrix $$ {\mathbb J}_1=\left( \begin{array}{cc} 0 & {\bf B}^{-1}\\ -({\bf B}^{-1})^T & {\cal B} \end{array} \right), $$ where ${\bf B}$ is given by Eq.~(\ref{Bmat}) and $$ {\cal B}=b_3\left( \begin{array}{cc} 0 & -1\\ 1 & 0 \end{array} \right). $$ \subsubsection{Canonical Hamiltonian via Darboux theorem} \textcolor{black}{ Following Darboux's theorem for finite-dimensional Hamiltonian systems~(see, e.g.,~\cite{Morrison1998}), it is possible to transform the Poisson bracket defined by the Poisson matrix~(\ref{eq:DBzeta}) into a canonical form. In principle, the canonical and non-canonical coordinates are equivalent; in practice, however, one set may be favored over the other. For instance, working with physical variables has the advantage of lending itself to a better intuition. Working with a canonical bracket, on the other hand, has its own advantages, e.g., allowing the use of symplectic algorithms developed for finite-dimensional canonical Hamiltonian systems. } Here we apply Darboux's algorithm by modifying the momenta $u_x$, $u_y$ and $E$. In order to find the new momenta $p_x$, $p_y$ and $\tilde{E}$ which are canonically conjugate to $x$, $y$ and $t$ respectively, one has to solve first-order linear partial differential equations of the kind $\{x,p_x\}=1$, e.g., using the method of characteristics.
We restrict the search for these new variables to $p_x=p_x(x,y,t,u_x,u_y)$, $p_y=p_y(x,y,t,u_x,u_y)$ and $\tilde{E}=E+\varepsilon(x,y,t,u_x,u_y)$. After some algebra, the change of variables reads \begin{eqnarray} && p_x=(1+\zeta_{,x}^2)u_x+\zeta_{,x}\zeta_{,y} u_y +\zeta_{,x}\zeta_{,t}, \nonumber \\ && p_y=\zeta_{,x}\zeta_{,y} u_x+(1+\zeta_{,y}^2) u_y+\zeta_{,y}\zeta_{,t},\label{darboux}\\ && \tilde{E}=E+\zeta_{,t}(u_x\zeta_{,x}+ u_y\zeta_{,y}+\zeta_{,t}). \nonumber \end{eqnarray} The first two equations yield the generalized momentum $\mathbf{p}=(p_x,p_y)$ as a function of the horizontal particle velocity $\mathbf{u}_h=\left(u_x,u_y\right)$ as in Eq.~\eqref{pmon1}, i.e.\ $\mathbf{p}=\mathbf{B}\mathbf{u}_h+\boldsymbol{\alpha}$, where $\boldsymbol{\alpha}$ and ${\bf B}$ are given by Eq.~(\ref{Bmat}). The Hamiltonian~\eqref{Hc} in terms of the canonically conjugate variables $\left(\mathbf{x},t\right)$ and $(\mathbf{p},\tilde{E})$ becomes \begin{equation*} \overline{\mathcal{H}}_c=\frac{1}{2}({\bf p}-\boldsymbol{\alpha}) \cdot {\bf B}^{-1}({\bf p}-\boldsymbol{\alpha})+\gr\zeta -\frac{\zeta_{,t}^2}{2}+\tilde{E}. \label{Hdarboux} \end{equation*} This coincides with the Hamiltonian in Eq.~\eqref{HC} derived from the Lagrangian formalism, except for the extra variable $\tilde{E}$, canonically conjugate to the time $t$. This extra variable is needed to make the system autonomous, as $\tilde{E}$ is the energy that the particle exchanges with the moving surface. In the one-dimensional case, i.e., when $\zeta_{,y}=0$, the Hamiltonian simplifies to $$ \mathcal{H}_c=\frac{({p_x}-\zeta_{,t}\zeta_{,x})^2}{2(1+\zeta_{,x}^2)}+\frac{{p_y}^2}{2}+\gr\zeta-\frac{\zeta_{,t}^2}{2}+\tilde{E}. $$ Since the potential does not depend on $y$, the momentum $p_y$ is constant, so the motion in the $y$-direction is trivial.
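The change of variables~\eqref{darboux} can be verified symbolically. In the sketch below (ours), ${\bf B}$ and $\boldsymbol{\alpha}$ are written out in the explicit form consistent with the momenta in Eq.~\eqref{darboux}, and the canonical Hamiltonian is checked against the constrained Hamiltonian~\eqref{Hc}:

```python
import sympy as sp

ux, uy, E, g, zeta, zx, zy, zt = sp.symbols(
    'u_x u_y E g zeta zeta_x zeta_y zeta_t', real=True)

# assumed explicit forms of B and alpha, matching Eq. (darboux)
B = sp.Matrix([[1 + zx**2, zx*zy], [zx*zy, 1 + zy**2]])
alpha = sp.Matrix([zx*zt, zy*zt])
u = sp.Matrix([ux, uy])

p = B*u + alpha                      # generalized momenta, Eq. (pmon1)
E_t = E + zt*(ux*zx + uy*zy + zt)    # shifted energy, Eq. (darboux)

assert sp.expand(p[0] - ((1 + zx**2)*ux + zx*zy*uy + zx*zt)) == 0
assert sp.expand(p[1] - (zx*zy*ux + (1 + zy**2)*uy + zy*zt)) == 0

# canonical Hamiltonian evaluated on the new variables...
Hcan = ((p - alpha).T*B.inv()*(p - alpha))[0]/2 + g*zeta - zt**2/2 + E_t
# ...agrees with the constrained Hamiltonian (Hc)
Hc = (ux**2 + uy**2 + (zx*ux + zy*uy + zt)**2)/2 + g*zeta + E
assert sp.simplify(Hcan - Hc) == 0
```

The term $-\zeta_{,t}^2/2$ in the canonical Hamiltonian is exactly what compensates the shift of $E$ to $\tilde{E}$.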
In the non-trivial direction, the reduced one-dimensional Hamiltonian becomes $$ \mathcal{H}_{1D}=\frac{(p_x-\zeta_{,t}\zeta_{,x})^2}{2(1+\zeta_{,x}^2)}+\gr\zeta-\frac{\zeta_{,t}^2}{2}, $$ where we have removed $\tilde{E}$ to consider the non-autonomous Hamiltonian (which is now not conserved). In the time-independent case ($\zeta_{,t}=0$), the additional variables $(t,E)$ can be eliminated since the set of observables $F(x,y,{p_x},{p_y})$ constitutes a Poisson sub-algebra. The resulting Hamiltonian then reads $$ \mathcal{H}_c=\frac{1}{2}{\bf p}\cdot {\bf B}^{-1}{\bf p}+\gr\zeta, $$ with ${\bf p}={\bf B} {\bf u}_h$. This Hamiltonian resembles that of a free particle, except that the metric for the kinetic energy is defined by ${\bf B}^{-1}$. Another case of interest is the traveling wave $\zeta(x,y,t)=\overline{\zeta}(x-ct,y)$. Changing the dynamics to the moving frame with velocity $c$ is a time-dependent change of coordinates, so it has to be performed in the autonomous framework. We perform a canonical transformation defined by $\overline{x}=x-ct$ and $\overline{E}=E+c {p_x}$, while the other variables remain unchanged. Being canonical, this change of variables does not modify the expression of the bracket. The reduced (time-independent) Hamiltonian becomes \begin{equation} \mathcal{H}_c=\frac{1}{2}({\bf p}-\boldsymbol{\alpha}) \cdot {\bf B}^{-1}({\bf p}-\boldsymbol{\alpha})+\gr\overline{\zeta} -c^2\frac{\overline{\zeta}_x^2}{2}-c{p_x}, \label{eq:H_tw} \end{equation} with $\boldsymbol{\alpha}=-c\overline{\zeta}_x(\overline{\zeta}_x,\overline{\zeta}_y)$, and the canonically conjugate variables are $(\overline{x},{p_x})$ and $(y,{p_y})$. Here, the matrix ${\bf B}$ is given by Eq.~(\ref{Bmat}) where $\zeta$ is replaced by $\overline{\zeta}$.
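As a numerical illustration of the steady reduced Hamiltonian above, the sketch below (ours; the test profile $\zeta=a\cos kx$ and all numerical values are assumptions) integrates the one-dimensional canonical equations with a hand-rolled RK4 stepper and checks that $\mathcal{H}_c$ is conserved:

```python
import numpy as np
import sympy as sp

# steady 1-D surface zeta(x) = a*cos(k*x) (our test profile) and the reduced
# Hamiltonian H = p^2 / (2*(1 + zeta_x^2)) + g*zeta from the text
xs, ps = sp.symbols('x p')
a, k, g = 0.1, 2.0, 9.81
zeta = a*sp.cos(k*xs)
H = ps**2/(2*(1 + sp.diff(zeta, xs)**2)) + g*zeta

Hfun = sp.lambdify((xs, ps), H)
dHdx = sp.lambdify((xs, ps), sp.diff(H, xs))
dHdp = sp.lambdify((xs, ps), sp.diff(H, ps))

def rhs(y):  # Hamilton's equations: xdot = dH/dp, pdot = -dH/dx
    return np.array([dHdp(*y), -dHdx(*y)])

def rk4_step(y, dt):
    k1 = rhs(y); k2 = rhs(y + dt/2*k1)
    k3 = rhs(y + dt/2*k2); k4 = rhs(y + dt*k3)
    return y + dt/6*(k1 + 2*k2 + 2*k3 + k4)

y = np.array([0.1, 0.5])
E0 = Hfun(*y)
for _ in range(2000):
    y = rk4_step(y, 1e-3)
assert abs(Hfun(*y) - E0) < 1e-6   # energy conserved to integration accuracy
```

A symplectic integrator, as mentioned in the Darboux discussion above, would preserve a nearby Hamiltonian exactly; the plain RK4 used here merely keeps the drift below the stated tolerance over this short integration.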
Hamiltonian~\eqref{eq:H_tw} can be written in the form \begin{equation} \mathcal{H}_c=\frac{1}{2}({\bf p}-\boldsymbol{\alpha}-c\mathbf B\mathbf e_1) \cdot {\bf B}^{-1}({\bf p}-\boldsymbol{\alpha}-c\mathbf B\mathbf e_1)+\gr\overline{\zeta} -\frac{1}{2}c^2, \label{eq:H_tw2} \end{equation} with $\mathbf e_1=(1,0)^T$. Next, we express the Hamiltonian in terms of the particle velocity in the co-moving frame, $\overline{\mathbf u}_h=(\dot{\overline x},\dot y)$. From the fact that $\dot{\overline x}=\partial \mathcal{H}_c/\partial p_x$ and $\dot{y}=\partial \mathcal{H}_c/\partial p_y$, we have \begin{equation*} \overline{\mathbf u}_h=\mathbf B^{-1}\left({\mathbf p}-\pmb{\alpha}-c\mathbf B\mathbf e_1 \right). \end{equation*} Substitution in Eq.~\eqref{eq:H_tw2} yields \begin{equation} \mathcal{H}_c=\frac{1}{2}\overline{\bf u}_h\cdot {\bf B}\overline{\bf u}_h+\gr\overline{\zeta} -\frac{1}{2}c^2. \label{Htw3} \end{equation} This form of the Hamiltonian will prove helpful in our analysis of the finite-time blowup of the JS equations. {\em Remark: Physical interpretation of the vector ${\bf b}_{\rm m}$ in Eq.~\eqref{bb}.} The Poisson structure of particle motion on an unsteady surface bears some similarities with the motion of a charged particle in electromagnetic fields. In terms of the physical variables (position $\bf x$ and velocity $\bf u$), the Poisson bracket of a charged particle in a magnetic field is non-canonical with a part of the form ${\bf b}_{\rm m}\cdot (\partial_{\bf u}F \times \partial_{\bf u} G)$, called the gyrobracket (responsible for the gyration motion of the particle around magnetic field lines), where ${\bf b}_{\rm m}$ is the magnetic field. In canonical coordinates, the velocity ${\bf u}$ has to be shifted by the vector potential ${\bf A}_{\rm m}$, which satisfies ${\bf b}_{\rm m}=\overline{\nabla} \times {\bf A}_{\rm m}$ (see~\cite{litt79} for more details).
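This analogy can be made concrete for our ${\bf b}_{\rm m}$: with $u_x,u_y$ treated as parameters, $(1+\vert\nabla\zeta\vert^2){\bf b}_{\rm m}$ is the space-time curl of $-(u_x\zeta_{,x}+u_y\zeta_{,y}+\zeta_{,t})\overline{\nabla}\zeta$, as the following sympy sketch (ours) confirms:

```python
import sympy as sp

x, y, t, ux, uy = sp.symbols('x y t u_x u_y')
zeta = sp.Function('zeta')(x, y, t)
X = (x, y, t)                                  # space-time variables

def grad(f):                                   # space-time gradient
    return sp.Matrix([sp.diff(f, v) for v in X])

def curl(F):                                   # space-time curl in (x, y, t)
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], t),
                      sp.diff(F[0], t) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

f = ux*sp.diff(zeta, x) + uy*sp.diff(zeta, y) + sp.diff(zeta, t)
h = 1 + sp.diff(zeta, x)**2 + sp.diff(zeta, y)**2

b_m = grad(zeta).cross(grad(f))/h              # Eq. (bb), u_x, u_y as parameters
A_m = -f*grad(zeta)                            # candidate vector potential

assert (curl(A_m) - h*b_m).applyfunc(sp.expand) == sp.zeros(3, 1)
```

The identity holds because $\overline{\nabla}\times(f\,\overline{\nabla}\zeta)=\overline{\nabla}f\times\overline{\nabla}\zeta$, the curl of a gradient being zero.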
Our vector ${\bf b}_{\rm m}$ in Eq.~\eqref{bb} can be interpreted as a magnetic field in the extended phase space and the associated vector potential follows from \begin{equation*} (1+\vert \nabla\zeta \vert^2) {\bf b}_{\rm m}=\overline{\nabla} \times {\bf A}_{\rm m}, \end{equation*} with \begin{equation*} {\bf A}_{\rm m}=-(u_x\zeta_{,x}+u_y\zeta_{,y}+\zeta_{,t})\overline{\nabla} \zeta. \end{equation*} Notice that in general ${\bf b}_{\rm m}$ is not divergence-free because of the factor $(1+\vert \nabla\zeta \vert^2)$. Furthermore, in Eq.~(\ref{eq:DBzeta}), the term $\overline{\cal B}$ generates a term ${\bf b}_{\rm m}\cdot (\partial_{\bf u}F \times \partial_{\bf u} G)$ in the Poisson bracket, since we notice that $\overline{\cal B}$ can be written as $\overline{\cal B}={\bf b}_{\rm m}\times$, i.e.\ it maps a vector ${\bf v}$ into ${\bf b}_{\rm m}\times {\bf v}$. When the Poisson bracket is canonical, the momenta instead have to be shifted by the ``vector potential'' $\boldsymbol{\alpha}$ [see Eq.~\eqref{pmon1}]. \subsection{Symplectic structure}\label{sec:symplectic2} The symplectic one-form \begin{equation} \omega^1={p_x}{\rm d}x+{p_y}{\rm d}y+\tilde{E}{\rm d}t \label{C_1_form} \end{equation} is given in terms of the canonically conjugate variables $(\mathbf{x},t,{\mathbf{p}},\tilde{E})$. The associated two-form $\omega^2={\rm d}\omega^1$, which provides the symplectic structure of the dynamics, follows by exterior differentiation of~Eq.~\eqref{C_1_form} as \begin{equation} \omega^2={\rm d}{p_x}\wedge {\rm d}x+{\rm d} {p_y}\wedge {\rm d}y+{\rm d}\tilde{E}\wedge {\rm d}t.\label{2form} \end{equation} To gain physical insights into the inviscid kinematics of fluid particles near large crests, it is convenient to write the above symplectic forms in terms of the non-canonical variables $\overline{\bf z}=(x,y,u_x,u_y,t,E)$.
Using the transformations~\eqref{darboux}, Eq.~\eqref{C_1_form} yields \begin{eqnarray} \omega^{1} & = & \left((1+\zeta_{,x}^{2})u_{x}+\zeta_{,x}\zeta_{,y}u_{y}+\zeta_{,x}\zeta_{,t}\right){\rm d}x+\left(\zeta_{,x}\zeta_{,y}u_{x}+(1+\zeta_{,y}^{2})u_{y}+\zeta_{,y}\zeta_{,t}\right){\rm d}y\nonumber\\ & & \qquad\qquad+\left(E+\zeta_{,t}(\zeta_{,x}u_{x}+\zeta_{,y}u_{y}+\zeta_{,t})\right){\rm d}t,\label{omega1} \end{eqnarray} and Eq.~\eqref{2form} becomes \begin{eqnarray} \omega^{2} & = & -(1+\vert\nabla\zeta\vert^{2})b_{3}{\rm d}x\wedge{\rm d}y+ (1+\vert\nabla\zeta\vert^{2})b_{2}{\rm d}x\wedge{\rm d}t-(1+\vert\nabla\zeta\vert^{2})b_{1}{\rm d}y\wedge{\rm d}t\nonumber\\ & &+(1+\zeta_{,x}^{2}){\rm d}u_{x}\wedge{\rm d}x+\zeta_{,x}\zeta_{,y}{\rm d}u_{y}\wedge{\rm d}x+\zeta_{,x}\zeta_{,y}{\rm d}u_{x}\wedge{\rm d}y+(1+\zeta_{,y}^{2}){\rm d}u_{y}\wedge{\rm d}y\nonumber\\ & & +\zeta_{,x}\zeta_{,t}{\rm d}u_{x}\wedge{\rm d}t+\zeta_{,y}\zeta_{,t}{\rm d}u_{y}\wedge{\rm d}t+{\rm d}E\wedge{\rm d}t.\label{omega2} \end{eqnarray} Note that the two-form can also be obtained from the Lagrange matrix as \begin{equation*} \omega^2=\overline{L}_*^{\alpha \beta} {\rm d}z_\alpha \wedge {\rm d}z_\beta/2, \label{w2q} \end{equation*} where $\overline{L}_*^{\alpha \beta}$ is the inverse of the Dirac-Poisson matrix~\eqref{eq:DBzeta}, that is $$ \overline{\mathbb L}_*=\left( \begin{array}{cc} (1+\vert \nabla \zeta\vert^2)\overline{\cal B} & -\overline{\bf B}^T\\ \overline{\bf B} & 0 \end{array} \right). $$ \section{Physical interpretation of the symplectic structure}\label{sec:symplectic} \textcolor{black}{ In this section, we study in detail the symplectic structure of the JS equations obtained above. In particular, we provide a physical interpretation of the one- and two-forms~\eqref{omega1}~and~\eqref{omega2} in terms of circulation and vorticity created on the zero-stress free surface~(\cite{cartan1922lessons}, chapter II, p. 20, see also~\cite{Bridges_vorticity_2005}). 
} \textcolor{black}{ First, in subsection~\ref{sec:vorticity} we present the mathematical description of vorticity generated on a zero-stress free surface. In particular, we draw on~\cite{Longuet_Higgins_curvature1998} and extend his formulation for steady surfaces to the unsteady case. The associated velocity circulation is also derived. Then in subsection~\ref{sec:kinematic} we show that the classical kinematic criterion for wave breaking~\citep{Perlin2013} follows from the condition of vanishing vorticity at a wave crest. } \textcolor{black}{ Finally, in subsection~\ref{sec:symvor} our analysis reveals that the symplectic one-form of the JS equations obtained in section \ref{sec:symplectic2} is the physical fluid circulation and certain terms of the associated two-form relate to the vorticity created on the zero-stress free surface. Furthermore, if the kinematic criterion for wave breaking holds for the largest crest, then the two-form instantaneously reduces to that associated with the motion of a particle in free flight, as if the free surface and vorticity did not exist. } \subsection{Vorticity generated at a zero-stress free surface}\label{sec:vorticity} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{FIG1new} \caption{Reference coordinate system: in the global frame $(x,y,z)$, $\boldsymbol{\mathbf{r}}_{\Sigma}(x,y,t)$ is a point of the free surface $z=\zeta(x,y,t)$, and $(\mathbf{s},\mathrm{\mathbf{b}},\mathbf{n})$ is a local frame on the surface. } \label{FIG1} \end{figure} In general, vorticity is generated at free surfaces whenever there is flow past regions of surface curvature~\citep{Wu1995,Lundgren}. This non-zero vorticity resides in a vortex sheet along the free-surface even when the flow field beneath the free surface is irrotational~\citep{Longuet_Higgins_curvature1998}. The condition of zero shear stress determines the strength of the vorticity at the surface. 
In the global frame $(x,y,z)$, a point $\boldsymbol{\mathbf{r}}_{\Sigma}$ of the free-surface $\Sigma$ can be parametrized as \[ \boldsymbol{\mathbf{r}}_{\Sigma}(x,y,t)=\left(\begin{array}{c} x\\ y\\ \zeta(x,y,t) \end{array}\right), \] where $x$ and $y$ are the parameters. Here we consider single valued surfaces so that $z=\zeta(x,y,t)$ is well defined at any point $(x,y)$ and time $t$. The local frame $(\mathbf{s},\mathrm{\mathbf{b}},\mathbf{n})$ on the surface is given by \[ \mathbf{s}=\frac{\partial_{x}\mathbf{r}_{\Sigma}}{\left|\partial_{x}\mathbf{r}_{\Sigma}\right|},\qquad\mathbf{b}=\frac{\partial_{y}\mathbf{r}_{\Sigma}}{\left|\partial_{y}\mathbf{r}_{\Sigma}\right|},\qquad\mathbf{n}=\frac{\partial_{x}\mathbf{r}_{\Sigma}\times\partial_{y}\mathbf{r}_{\Sigma}}{\left|\partial_{x}\mathbf{r}_{\Sigma}\times\partial_{y}\mathbf{r}_{\Sigma}\right|}, \] where $\boldsymbol{\mathbf{s}}$ and $\boldsymbol{\mathbf{b}}$ are unit vectors tangent to the surface and $\mathbf{n}$ is the unit vector of the outward normal (see Fig.~\ref{FIG1}). More explicitly, \begin{equation} \boldsymbol{\mathbf{s}}=\frac{1}{\sqrt{h_1}}\left(\begin{array}{c} 1\\ 0\\ \zeta_{,x} \end{array}\right),\qquad\boldsymbol{\mathbf{b}}=\frac{1}{\sqrt{h_2}}\left(\begin{array}{c} 0\\ 1\\ \zeta_{,y} \end{array}\right),\qquad\mathbf{n}=\frac{1}{\sqrt{h}}\left(\begin{array}{c} -\zeta_{,x}\\ -\zeta_{,y}\\ 1 \end{array}\right),\label{sbn} \end{equation} where \[ h_1=\left|\partial_{x}\mathbf{r}_{\Sigma}\right|^{2}=1+\zeta_{,x}^{2},\qquad h_2=\left|\partial_{y}\mathbf{r}_{\Sigma}\right|^{2}=1+\zeta_{,y}^{2}, \] and \[ h=\left|\partial_{x}\mathbf{r}_{\Sigma}\times\partial_{y}\mathbf{r}_{\Sigma}\right|^{2}=1+\zeta_{,x}^{2}+\zeta_{,y}^{2}. 
\] Note that for a 2-D surface, $\boldsymbol{\mathbf{s}}$ and $\boldsymbol{\mathbf{b}}$ are in general not orthogonal as \begin{equation} \alpha=\mathbf{s}\cdot\mathbf{b}=\frac{\zeta_{,x}\zeta_{,y}}{\sqrt{h_1 h_2}}\label{asb} \end{equation} vanishes only at crests, troughs and saddles. We also consider the intrinsic curvilinear coordinates $s$ and $b$ on the surface (see Fig.~\ref{FIG1}) defined as \[ s(x,y)=\int_{0}^{x}\sqrt{h_1(x',y)}\mathrm{d}x',\qquad b(x,y)=\int_{0}^{y}\sqrt{h_2(x,y')}\mathrm{d}y', \] and the infinitesimal arc-lengths \begin{equation} \mathrm{d}s=\sqrt{h_1}\mathrm{d}x,\quad\quad \mathrm{d}b=\sqrt{h_2}\mathrm{d}y. \label{dsdb} \end{equation} In the global frame, the components of the horizontal particle velocity $\mathbf u_h=(u_x,u_y)$ are given by \begin{equation} u_{x}=\dot{x},\qquad u_{y}=\dot{y}. \label{xy} \end{equation} The vertical particle velocity, dictated by the free-surface motion, is given by \begin{equation} \dot{\zeta}=\frac{{\rm d}\zeta}{{\rm d}t}=\zeta_{,t}+\dot{x}\zeta_{,x}+\dot{y}\zeta_{,y}=\zeta_{,t}+u_{x}\zeta_{,x}+u_{y}\zeta_{,y}. \label{zetadot} \end{equation} The particle velocity vector written in the global coordinate frame, \begin{equation} \mathbf{u}=u_{x}\mathbf{i}+u_{y}\mathbf{j}+\dot{\zeta}\mathbf{k},\label{uu} \end{equation} must coincide with its expression in the local frame, \begin{equation} \mathbf{u}=u_{s}\mathbf{s}+u_{b}\mathbf{b}+u_{n}\mathbf{n},\label{um} \end{equation} where $u_{s}$ and $u_{b}$ are the velocity components tangential to the surface, and satisfy \begin{equation} u_{s}=\frac{U_{s}-\alpha U_{b}}{1-\alpha^{2}},\qquad u_{b}=\frac{U_{b}-\alpha U_{s}}{1-\alpha^{2}},\label{usub} \end{equation} while \begin{equation} u_{n}=\mathbf{u\cdot}\boldsymbol{\mathbf{n}}=\frac{-\zeta_{,x}u_{x}-\zeta_{,y}u_{y}+\dot{\zeta}}{\sqrt{h}} =\frac{\zeta_{,t}}{\sqrt{h}}\label{un} \end{equation} is the particle velocity component orthogonal to the surface.
Here, $U_{s}$ and $U_{b}$ are the projections of $\mathbf{u}$ onto $\mathbf{s}$ and $\mathbf{b}$ respectively, namely \begin{equation} U_{s}=\mathbf{u\cdot}\boldsymbol{\mathbf{s}}=\frac{u_{x}+\dot{\zeta}\zeta_{,x}}{\sqrt{h_1}}=\frac{(1+\zeta_{,x}^{2})u_{x}+\zeta_{,x}\zeta_{,y}u_{y}+\zeta_{,x}\zeta_{,t}}{\sqrt{1+\zeta_{,x}^2}},\label{ut} \end{equation} \begin{equation} U_{b}=\mathbf{u\cdot}\boldsymbol{\mathbf{b}}=\frac{u_{y}+\dot{\zeta}\zeta_{,y}}{\sqrt{h_2}}=\frac{(1+\zeta_{,y}^{2})u_{y}+\zeta_{,x}\zeta_{,y}u_{x}+\zeta_{,y}\zeta_{,t}}{\sqrt{1+\zeta_{,y}^2}}.\label{ub} \end{equation} Note that the denominators in Eq.~\eqref{usub} never vanish as, from Eq.~\eqref{asb}, \[ 1-\alpha^{2}=\frac{h}{h_1 h_2}=\frac{1+\zeta_{,x}^{2}+\zeta_{,y}^{2}}{\left(1+\zeta_{,x}^{2}\right)\left(1+\zeta_{,y}^{2}\right)}>0. \] Clearly, $U_{s}$ and $U_{b}$ coincide with $u_{s}$ and $u_{b}$ on the surface when $\mathbf{s}$ and $\mathbf{b}$ are orthogonal, i.e., $\alpha=0$. Note that $u_{n}$ vanishes if the surface is steady or in the comoving frame of a traveling wave. \textcolor{black}{ Drawing on~\cite{Longuet_Higgins_curvature1998}, on the assumption of a zero-stress free surface any line of inviscid fluid particles parallel to a principal axis of strain must stretch and be in rotation with angular velocity $\frac{1}{2}\mathbf{\boldsymbol{\omega}}$, where $\boldsymbol{\omega}$ is the vorticity vector. Since one axis of strain is always normal to the free surface, the unit normal $\mathbf{n}$ rotates according to \begin{equation} \frac{{\rm d}\mathbf{n}}{{\rm d}t}=\frac{1}{2}\boldsymbol{\mathbf{\omega}}\times\mathbf{n}.\label{dndt} \end{equation} } We then decompose the vorticity as \[ \mathbf{\boldsymbol{\omega}}=\mathbf{\mathbf{\boldsymbol{\omega}}}_{\mathrm{\parallel}}+\omega_{\perp}\mathbf{n}, \] into its tangential component $\pmb{\omega}_\parallel$ and its normal component $\omega_\perp\mathbf n$ to the surface.
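The frame relations and velocity projections above lend themselves to a direct symbolic check. The sketch below (ours) verifies the unit normal, the overlap $\alpha=\mathbf s\cdot\mathbf b$, and the projections~\eqref{ut},~\eqref{ub}~and~\eqref{un}:

```python
import sympy as sp

x, y, t, ux, uy = sp.symbols('x y t u_x u_y')
zeta = sp.Function('zeta')(x, y, t)
zx, zy, zt = sp.diff(zeta, x), sp.diff(zeta, y), sp.diff(zeta, t)
h1, h2, h = 1 + zx**2, 1 + zy**2, 1 + zx**2 + zy**2

# local frame, Eq. (sbn)
s = sp.Matrix([1, 0, zx])/sp.sqrt(h1)
b = sp.Matrix([0, 1, zy])/sp.sqrt(h2)
n = sp.Matrix([-zx, -zy, 1])/sp.sqrt(h)

zdot = zt + ux*zx + uy*zy                 # vertical velocity, Eq. (zetadot)
u = sp.Matrix([ux, uy, zdot])             # particle velocity, Eq. (uu)

assert sp.simplify(n.dot(n)) == 1
assert sp.simplify(s.dot(b) - zx*zy/(sp.sqrt(h1)*sp.sqrt(h2))) == 0  # Eq. (asb)

# projections, Eqs. (ut), (ub), (un)
assert sp.simplify(u.dot(s) - ((1 + zx**2)*ux + zx*zy*uy + zx*zt)/sp.sqrt(h1)) == 0
assert sp.simplify(u.dot(b) - ((1 + zy**2)*uy + zx*zy*ux + zy*zt)/sp.sqrt(h2)) == 0
assert sp.simplify(u.dot(n) - zt/sp.sqrt(h)) == 0
```

In particular, the last assertion reproduces the fact that the normal component of the particle velocity reduces to $\zeta_{,t}/\sqrt{h}$, hence vanishes for steady surfaces.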
Note that \begin{equation} \mathbf{n}\times\left(\mathbf{\boldsymbol{\omega}}\times\mathbf{n}\right)=\left(\mathbf{n}\cdot\mathbf{n}\right)\mathbf{\boldsymbol{\omega}}-\left(\mathbf{n}\cdot\boldsymbol{\omega}\right)\mathbf{n}=\mathbf{\boldsymbol{\omega}}-\omega_{\perp}\mathbf{n}=\mathbf{\mathbf{\boldsymbol{\omega}}}_{\mathrm{\parallel}},\label{OMs} \end{equation} gives the vorticity aligned along the surface. The normal vorticity $\omega_{\perp}\mathbf{n}$ cannot be generated by the surface motion, but it depends upon both the fluid flows above and below the surface. For example, for irrotational and inviscid water wave fields $\omega_{\perp}=0$ as there is no discontinuity across the surface since vorticity is divergence-free. However, there is no restriction on the vorticity $\boldsymbol{\omega}_{\parallel}$ generated by the surface motion, which is indeed discontinuous as it is stored in a vortex sheet along the surface. From Eqs.~\eqref{dndt} and \eqref{OMs}, the tangential component $\pmb{\omega}_{\parallel}$ of vorticity generated on the free surface is given by (\cite{Longuet_Higgins_curvature1998}) \begin{equation} \pmb{\omega}_{\parallel}=2\mathbf{n}\times\frac{{\rm d}\mathbf{n}}{{\rm d}t}.\label{wpp} \end{equation} From Eq. (\ref{sbn}), \[ \frac{{\rm d}\mathbf{n}}{{\rm d}t}=\frac{\mathbf{a}}{\sqrt{h}}-\frac{\dot{h}}{2h}\mathbf{n}, \] where \[ \mathbf{a}=-\left(\begin{array}{c} \partial_{x}\dot{\zeta}\\ \partial_{y}\dot{\zeta}\\ 0 \end{array}\right), \] $\dot{h}=2\nabla\zeta\cdot\nabla\dot{\zeta}$ and $\nabla=\left(\partial_{x},\partial_{y}\right)$ is the 2-D space gradient.
Thus, Eq.~\eqref{wpp} yields \begin{equation} \pmb{\omega}_{\parallel}=2\mathbf{n}\times\frac{\mathbf{a}}{\sqrt{h}} =\frac{2}{h}\left(\begin{array}{c} \partial_{y}\dot{\zeta}\\ \\ -\partial_{x}\dot{\zeta}\\ \\ \zeta_{,x}\partial_{y}\dot{\zeta}-\zeta_{,y}\partial_{x}\dot{\zeta} \end{array}\right).\label{Ws} \end{equation} The $z$-component $\omega_{3}$ can be written in the compact form \textcolor{black}{ \begin{equation} \omega_{3}=\boldsymbol{\omega}_{\parallel}\cdot\mathbf{k}=\frac{2}{h}\left(\zeta_{,x}\partial_{y}\dot{\zeta}- \zeta_{,y}\partial_{x}\dot{\zeta}\right)=\frac{2}{1+\left|\nabla\zeta\right|^{2}}(\nabla\zeta\times\nabla\dot{\zeta})\cdot {\bf k}\label{W3}, \end{equation} } where $\dot{\zeta}$ follows from Eq.~\eqref{zetadot}. This observation is useful to interpret certain terms of the symplectic 2-form given in Section~\ref{sec:symplectic2}: The $b_3$ component of ${\bf b}_{\rm m}$ can be written as \begin{equation} b_3=\frac{\nabla \zeta \times \nabla (u_x\zeta_{,x}+u_y\zeta_{,y}+\zeta_{,t})}{1+\vert \nabla\zeta \vert^2}\cdot {\bf k},\label{b3} \end{equation} where we have used the two-dimensional cross-product. Comparing Eq.~\eqref{b3} to Eq.~\eqref{W3}, we observe that $b_3=\omega_3/2$ is half the vertical $z$~component of the vorticity created on the free-surface $z=\zeta(x,y,t)$. Note that $b_3$ vanishes when the kinematic criterion~\eqref{b41} for wave breaking holds. We will not dwell too much on the geometric meaning of the components $b_1$ and $b_2$. We only point out that one can show that $b_1$ ($b_2$) is the $z$-component of space-time vorticity created on the space-time surface $z=\zeta(x,y,t)$. Thus, if we imagine trajectories $\overline{\bf z}(\tau)$ as those of ``phase-space parcels'' transported by the Hamiltonian flow velocity ${\rm d}\overline{\bf z}/{\rm d}\tau$, then the vector ${\bf b}_{\rm m}$ can be interpreted as space-time vorticity generated by the Hamiltonian flow. 
These observations will be useful below to interpret the symplectic forms associated with the Hamiltonian equations. In the local frame \begin{equation} \mathbf{\boldsymbol{\omega}}_{\parallel}=\omega_{s}\mathbf{s}+\omega_{b}\mathbf{b},\label{omegapar} \end{equation} where \begin{equation} \omega_{s}=\frac{\Omega_{s}-\alpha\Omega_{b}}{1-\alpha^{2}},\qquad\omega_{b}= \frac{\Omega_{b}-\alpha\Omega_{s}}{1-\alpha^{2}}. \label{omegasb} \end{equation} The quantities $\Omega_{s}$ and $\Omega_{b}$ are the projections of $\mathbf{\mathbf{\boldsymbol{\omega}}_{\parallel}}$ onto $\mathbf{s}$ and $\mathbf{b}$, respectively. That is \begin{equation} \Omega_{s}=\mathbf{\mathbf{\boldsymbol{\omega}}_{\parallel}\cdot}\boldsymbol{\mathbf{s}}= 2\frac{\sqrt{h_1}\partial_{y}\dot{\zeta}-\alpha\sqrt{h_2}\partial_{x}\dot{\zeta}}{h} \label{ws}, \end{equation} and \begin{equation} \Omega_{b}=\mathbf{\mathbf{\boldsymbol{\omega}}_{\parallel}\cdot}\boldsymbol{\mathbf{b}} =-2\frac{\sqrt{h_2}\partial_{x}\dot{\zeta}-\alpha\sqrt{h_1}\partial_{y}\dot{\zeta}}{h}. \label{wb} \end{equation} At the points on the surface where $\mathbf{s}$ and $\mathbf{b}$ are orthogonal ($\alpha=0$), $\Omega_{s}$ and $\Omega_{b}$ coincide with $\omega_{s}$ and $\omega_{b}$, respectively. Vorticity created on the free-surface $\Sigma$ implies that there is non-zero circulation of the velocity $\mathbf{u}=(u_{x},u_{y},\dot{\zeta})$ along any closed path $\gamma(\mu,t)=\left(x(\mu,t),y(\mu,t),\zeta(x(\mu,t),y(\mu,t))\right)$ on $\Sigma$, parametrized by $\mu$, and it is conserved by Kelvin's theorem (see, e.g., \cite{Eyink_notes}). 
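The projections~\eqref{ws}~and~\eqref{wb} follow from Eq.~\eqref{Ws} by direct computation; a symbolic check (ours, with a generic field standing in for $\dot\zeta$) is:

```python
import sympy as sp

x, y, t = sp.symbols('x y t')
zeta = sp.Function('zeta')(x, y, t)
zdot = sp.Function('zetadot')(x, y, t)     # generic stand-in for the field zeta-dot
zx, zy = sp.diff(zeta, x), sp.diff(zeta, y)
h1, h2, h = 1 + zx**2, 1 + zy**2, 1 + zx**2 + zy**2
alpha = zx*zy/(sp.sqrt(h1)*sp.sqrt(h2))    # Eq. (asb)
zdx, zdy = sp.diff(zdot, x), sp.diff(zdot, y)

omega_par = (2/h)*sp.Matrix([zdy, -zdx, zx*zdy - zy*zdx])   # Eq. (Ws)
s = sp.Matrix([1, 0, zx])/sp.sqrt(h1)
b = sp.Matrix([0, 1, zy])/sp.sqrt(h2)

Omega_s = 2*(sp.sqrt(h1)*zdy - alpha*sp.sqrt(h2)*zdx)/h     # Eq. (ws)
Omega_b = -2*(sp.sqrt(h2)*zdx - alpha*sp.sqrt(h1)*zdy)/h    # Eq. (wb)
assert sp.simplify(omega_par.dot(s) - Omega_s) == 0
assert sp.simplify(omega_par.dot(b) - Omega_b) == 0
```

Both projections are linear in $\partial_x\dot\zeta$ and $\partial_y\dot\zeta$, so the check with a generic stand-in field covers the actual $\dot\zeta$ of Eq.~\eqref{zetadot} as well.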
From Eq.~(\ref{zetadot}) and the relation $\mathrm{d}z=\zeta_{,x}\mathrm{d}x+\zeta_{,y}\mathrm{d}y$, the circulation around $\gamma$ \begin{equation} \oint_{\gamma(t)}\mathbf{u}\cdot\mathbf{dx}=\oint_{\gamma(t)}u_{x}\mathrm{d}x+u_{y}\mathrm{d}y+\dot{\zeta}\mathrm{d}z,\label{CIRC} \end{equation} can be expressed in terms of the projections $U_{s}$ and $U_{b}$ of the particle velocity $\mathbf{u}$ as (see Eqs.~(\ref{ut}) and (\ref{ub})) \begin{equation} \oint_{\gamma(t)}\mathbf{u}\cdot\mathbf{dx}=\oint_{\widetilde{\gamma}(t)}\sqrt{h_1}U_{s}\mathrm{d}x+\sqrt{h_2}U_{b}\mathrm{d}y=\oint_{\widetilde{\widetilde{\gamma}}(t)}U_{s}\mathrm{d}s+U_{b}\mathrm{d}b,\label{CIRC1} \end{equation} where we have used Eq.~\eqref{dsdb}, and $\widetilde{\gamma}(t)=\left(x(\mu,t),y(\mu,t)\right)$ and $\widetilde{\widetilde{\gamma}}(t)=\left(s(\mu,t),b(\mu,t)\right)$ are the projected paths of $\gamma$ onto the $x-y$ and $s-b$ planes, respectively. Comparing Eqs.~\eqref{px1},~\eqref{py1} with Eqs.~\eqref{ut},~\eqref{ub}, we note that the infinitesimal circulation in Eq.~\eqref{CIRC1} can be written in terms of generalized momenta as \begin{equation} U_s \mathrm{d}s + U_b \mathrm{d}b=p_x\mathrm{d}x+p_y\mathrm{d}y, \label{inc} \end{equation} where the arclengths $\mathrm{d}s$ and $\mathrm{d}b$ relate to $\mathrm{d}x$ and $\mathrm{d}y$ via Eq.~\eqref{dsdb}. Thus, the scaled generalized momenta $(p_x/\sqrt{h_1},p_y/\sqrt{h_2})$ are equal to the particle velocity projections $(U_s,U_b)$. \subsection{Kinematic criterion for wave breaking}\label{sec:kinematic} \textcolor{black}{In this section, we will show that the classical kinematic criterion for wave breaking~\citep{Perlin2013} follows from the condition of vanishing vorticity at a wave crest.} First, consider the special case of unidirectional waves propagating along $x$ and the associated 1-D surface $z=\zeta(x,t)$. In this case, $\mathbf{b}=\mathbf{j}$ is aligned along $y$ and orthogonal to $\mathbf{s}$ (see Fig.~\ref{FIG1}).
Then, from Eq.~(\ref{omegasb}) vorticity created on the surface is aligned along $y$ and it is given by \begin{equation} \omega_{b}=\Omega_{b}=-\frac{2}{h_1}\partial_{x}\dot{\zeta}= -\frac{2}{1+\zeta_{,x}^2}\left(\zeta_{,xt}+u_{x}\zeta_{,xx}\right). \label{wb1} \end{equation} This can be written as (\cite{Lundgren}) \begin{equation} \omega_{b}=-2\left(\frac{\mathrm{d}u_{n}}{\mathrm{d}s}+u_{s}K\right),\label{WLH} \end{equation} where \[ K=\frac{\zeta_{,xx}}{h_1^{3/2}}=\frac{\zeta_{,xx}}{\left(1+\zeta_{,x}^2\right)^{3/2}} \] is the surface curvature. The tangential particle velocity $u_{s}$ follows from Eq.~(\ref{ut}) as \[ u_{s}=U_{s}=\frac{h_1 u_{x}+\zeta_{,x}\zeta_{,t}}{\sqrt{h_1}}, \] and the rate of change of the normal particle velocity $u_{n}=\zeta_{,t}/\sqrt{h_1}$ along the intrinsic curvilinear coordinates $s$ on the surface is given by \[ \frac{\mathrm{d}u_{n}}{\mathrm{d}s}=\frac{\mathrm{d}u_{n}}{\mathrm{d}x}\frac{\mathrm{d}x}{\mathrm{d}s}=\frac{\zeta_{,xt}}{h_1}-\frac{\zeta_{,t}\zeta_{,x}\zeta_{,xx}}{h_1^2}, \] where the infinitesimal arclength $\mathrm{d}s=\sqrt{h_1}\mathrm{d}x$ (see Eq.~\eqref{dsdb}). For steady surfaces, $u_{n}=0$ and Eq.~\eqref{WLH} reduces to Longuet-Higgins' (1988) result \[ \omega_{b}=-2u_{s}K. \] Thus, in a comoving frame where travelling waves are steady, at crests vorticity is positive or counter-clockwise (\cite{Longuet_Higgins_JFM_bores}). For unsteady surfaces the normal velocity $u_{n}$ does not vanish as it balances the horizontal water flow underneath, leading to convergence (growing crests) or divergence (decaying crests). In particular, at a crest of a wave $\frac{\mathrm{d}u_{n}}{\mathrm{d}s}>0$ since the wave travels forward as a result of the downward (upward) mass flow before (after) the crest. Thus, the convergence/divergence of the flow induced by unsteady surfaces creates negative vorticity that can counterbalance that generated by the surface curvature.
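The equivalence between the slope form $-({2}/{h_1})\partial_x\dot\zeta$ of Eq.~\eqref{wb1} and the curvature form~\eqref{WLH} can be checked symbolically for a generic 1-D surface (sketch ours; $u_x$ is held fixed at the evaluation point):

```python
import sympy as sp

x, t, ux = sp.symbols('x t u_x')
zeta = sp.Function('zeta')(x, t)
zx, zt = sp.diff(zeta, x), sp.diff(zeta, t)
h1 = 1 + zx**2

zdot = zt + ux*zx                                # zeta-dot with u_x held fixed

un = zt/sp.sqrt(h1)                              # normal velocity
us = (h1*ux + zx*zt)/sp.sqrt(h1)                 # tangential velocity, Eq. (ut)
K = sp.diff(zeta, x, 2)/h1**sp.Rational(3, 2)    # surface curvature

dun_ds = sp.diff(un, x)/sp.sqrt(h1)              # d/ds = h1^(-1/2) d/dx, Eq. (dsdb)
wb_curv = -2*(dun_ds + us*K)                     # curvature form, Eq. (WLH)
wb = -(2/h1)*sp.diff(zdot, x)                    # slope form, Eq. (wb1)
assert sp.simplify(wb_curv - wb) == 0
```

The $\zeta_{,t}\zeta_{,x}\zeta_{,xx}$ terms from ${\rm d}u_n/{\rm d}s$ and from $u_sK$ cancel exactly, leaving only $\zeta_{,xt}+u_x\zeta_{,xx}$ in the numerator.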
Indeed, from Eq.~\eqref{wb1} vorticity vanishes at a crest, where $\zeta_{,x}=0$, when \begin{equation} \zeta_{,xt}+u_{x}\zeta_{,xx}=0.\label{B1} \end{equation} A physical interpretation of this condition is as follows. Consider the horizontal speed ${V}_{c}=\dot{X}_{c}$ of a crest located at $X_{c}(t)$ at time $t$. Since at a crest $\zeta_{,x}=0$, we have~\citep{Fedele2014_EPL} \[ \frac{\mathrm{d}}{\mathrm{d}t}\zeta_{,x}(X_{c}(t),t)=\zeta_{,xt}+\dot{X}_{c}\zeta_{,x}=0, \] which implies \begin{equation} V_{c}=\dot{X}_{c}=-\frac{\zeta_{,xt}}{\zeta_{,xx}}.\label{Vc1} \end{equation} Thus, condition~\eqref{B1} of vanishing vorticity holds when \begin{equation} u_{x}=V_{c},\label{b2} \end{equation} or equivalently when the horizontal particle velocity $u_{x}$ equals the horizontal crest speed $V_{c}$. A similar result holds in three dimensions. From Eq.~\eqref{omegapar} vorticity created on a 2-D surface vanishes when \begin{equation} \zeta_{,xt}+u_{x}\zeta_{,xx}+u_{y}\zeta_{,xy}=0,\qquad\zeta_{,yt}+u_{x}\zeta_{,xy}+u_{y}\zeta_{,yy}=0,\label{b4} \end{equation} or equivalently when the horizontal particle velocity ${\mathbf{u}}_h=(u_{x},u_{y})$ equals the horizontal crest speed $\mathbf{V}_{c}=(\dot{X}_{c},\dot{Y}_{c})$, where $(X_{c}(t),Y_{c}(t))$ is the horizontal crest position. At a crest where $\nabla\zeta=\mathbf{0}$ \[ \frac{\mathrm{d}}{\mathrm{d}t}\nabla\zeta\left(X_{c}(t),Y_{c}(t),t\right)=\nabla\dot{\zeta}=\mathbf{0}, \] or equivalently \begin{equation} \zeta_{,xt}+\dot{X}_{c}\zeta_{,xx}+\dot{Y}_{c}\zeta_{,xy}=0,\qquad\zeta_{,yt}+\dot{X}_{c}\zeta_{,xy}+\dot{Y}_{c}\zeta_{,yy}=0.\label{b41} \end{equation} Clearly, Eq. (\ref{b41}) reduces to condition~\eqref{b4} of vanishing vorticity if \begin{equation} \mathbf{u}_h=\mathbf{V}_{c}. \label{br2} \end{equation} Equations~\eqref{b2}~and~\eqref{br2} are the kinematic thresholds defined as potential breaking criteria for uni- and multidirectional water waves (see, for example~\cite{Perlin2013}). 
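As a quick numerical sanity check of Eqs.~\eqref{Vc1}~and~\eqref{b2} (our own illustration, not part of the derivation above), consider the hypothetical monochromatic surface $\zeta=a\cos(kx-\omega t)$, whose crests travel at the phase speed $\omega/k$:

```python
import math

# Our own numerical illustration (not from the derivation): for the sample
# surface zeta = a*cos(k*x - w*t) the crest travels at the phase speed w/k,
# which Eq. (Vc1), V_c = -zeta_xt/zeta_xx, should reproduce; with u_x = w/k
# the vanishing-vorticity combination of Eq. (B1) is then zero at the crest.
a, k, w = 0.1, 2.0, 3.0

def zeta(x, t):
    return a * math.cos(k * x - w * t)

def ddx(f, x, t, h=1e-5):
    return (f(x + h, t) - f(x - h, t)) / (2 * h)

def ddt(f, x, t, h=1e-5):
    return (f(x, t + h) - f(x, t - h)) / (2 * h)

t0 = 0.4
xc = w * t0 / k                            # crest position: k*x - w*t = 0
zeta_x = lambda x, t: ddx(zeta, x, t)

zeta_xt = ddt(zeta_x, xc, t0)
zeta_xx = ddx(zeta_x, xc, t0)
Vc = -zeta_xt / zeta_xx                    # Eq. (Vc1)
residual = zeta_xt + (w / k) * zeta_xx     # Eq. (B1) with u_x = w/k
print(Vc, w / k, residual)
```

For this surface the condition \eqref{b2} is met exactly: setting $u_x=\omega/k$ makes the combination $\zeta_{,xt}+u_x\zeta_{,xx}$ vanish at the crest, so no vorticity is created there.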
In particular, recent experimental results by~\cite{Shemer2014} and~\cite{Shemer2015} showed that as the largest crest of a focusing wave group grows in time the crest speed decreases, while water particles at the crest accelerate. Spilling breakers appear to occur when the horizontal particle velocity exceeds the crest speed, thus confirming the kinematic criterion for the inception of wave breaking (see also \cite{Shemer_kinematic2013,Duncan_JFM2001_spilling_1,Duncan_spilling_profile}). \subsection{Symplecticity and vorticity}\label{sec:symvor} To gain some intuition on the meaning of the differential one- and two-forms~\eqref{omega1}~and~\eqref{omega2}, we interpret the high-dimensional vector $\overline{\bf z}=(z_{\alpha})$ as the trajectory of a `fluid parcel' that is transported through the extended phase space by the Hamiltonian flow velocity \begin{equation*} {\rm v}_H(\tau)=\frac{{\rm d}\overline{\bf z}}{{\rm d}\tau}=\left(\frac{{\rm d} z_{\alpha}}{{\rm d} \tau}\right), \end{equation*} where $z_{\alpha}$ is any of the non-canonical variables $(x,y,u_x,u_y,t,E)$ and the associated velocity \[ \frac{{\rm d} z_{\alpha}}{{\rm d} \tau}=\{z_{\alpha},\overline{\mathcal{H}}_c\}_*, \] follows from the non-canonical Dirac bracket~\eqref{dbracket} (see also Eq.~\eqref{Fbrack}). Then, the symplectic one-form~\eqref{omega1} associated with the Hamiltonian flow can be interpreted as the circulation of the velocity ${\rm v}_H$ along the infinitesimal path~${\rm d}\overline{\bf z}$. On the slice $t=\mbox{const}$ of the extended phase space, the non-canonical one-form~\eqref{omega1} simplifies to \begin{equation*} \omega^1=\left(\mathbf B{\mathbf u}_h+\pmb{\alpha}\right)\cdot\mathrm d\mathbf x, \label{1f} \end{equation*} where we have used the identity in Eq.~\eqref{pmon1} and $\mathbf{u}_h=\left(u_x,u_y\right)$ is the horizontal particle velocity. The one-form $\omega^1$ is invariant along closed material lines. 
This implies that if $\gamma(t)$ is a closed material line, the quantity \begin{equation*} \mathcal C(t)=\oint_{\gamma(t)}\left(\mathbf B{\mathbf u}_h+\pmb{\alpha}\right)\cdot\mathrm d\mathbf x, \label{eq:generalKCT} \end{equation*} is constant, i.e., it does not vary in time. Clearly, $\mathcal C(t)$ is the physical circulation induced by the particle motion given in Eq.~\eqref{CIRC1}, and is conserved by Kelvin's theorem (see, e.g.~\cite{Eyink_notes}). Furthermore, on $t=\rm{const.}$ slices, the non-canonical two-form~\eqref{omega2} reduces to \begin{eqnarray} \omega^{2} & = & -(1+\vert\nabla\zeta\vert^{2})b_{3}{\rm d}x\wedge{\rm d}y+(1+\zeta_{,x}^{2}){\rm d}u_{x}\wedge{\rm d}x\nonumber\\ & & +\zeta_{,x}\zeta_{,y}{\rm d}u_{y}\wedge{\rm d}x+\zeta_{,x}\zeta_{,y}{\rm d}u_{x}\wedge{\rm d}y+(1+\zeta_{,y}^{2}){\rm d}u_{y}\wedge{\rm d}y. \label{omega2a} \end{eqnarray} Note that the coefficient $b_3$ of ${\rm d}x\wedge{\rm d}y$ is half the vertical component of the physical vorticity created on the slanted infinitesimal area $dS=(1+\vert\nabla\zeta\vert^{2}){\rm d}x\wedge{\rm d}y$ of the free surface $z=\zeta(x,y,t)$ [see Eqs.~\eqref{b3} and~\eqref{W3}]. In Section~\ref{sec:kinematic} we have shown that vorticity vanishes at a surface crest, where $\zeta_{,x}=\zeta_{,y}=0$, when the horizontal particle velocity $\mathbf{u}_{h}$ equals the propagation speed $\mathbf{V}_c$ of the crest [see Eq.~\eqref{b41}], or equivalently when the kinematic criterion~\eqref{b4} for wave breaking holds. 
In this case the two-form~\eqref{omega2a} further simplifies to \begin{equation} \omega^2={\rm d}u_x\wedge {\rm d}x+ {\rm d}u_y\wedge {\rm d}y, \label{om} \end{equation} and the associated Hamiltonian~\eqref{Hc} reduces to \begin{equation} \overline{\mathcal H}_c=\frac{u_x^2+u_y^2+\zeta_{,t}^2}{2}+\gr \zeta+E.\label{HH} \end{equation} This implies that if the kinematic criterion~\eqref{b4} is attained at the largest crest, i.e.\ when $\zeta_{,t}=0$, then the two-form~\eqref{om} and the associated Hamiltonian $\overline{\mathcal H}_c$ in~\eqref{HH} are those of a particle in free-flight, as if the surface on which the motion is constrained is non-existent and, as a result, vorticity is not created. Clearly, in realistic oceanic waves the large crest eventually breaks and energy of fluid particles is dissipated to turbulence as a clear manifestation of time irreversibility. This appears analogous to a flight--crash event in fluid turbulence, where a particle flies with a large velocity before suddenly losing energy~\citep{Falkovich2014}. Clearly, the Hamiltonian particle kinematics associated with the Euler or Zakharov~(1968) equations is time-reversible~\citep{Chabchoub2014} and it may reveal the inviscid mechanism of breaking inception before turbulent dissipative effects take place. To do so, the fluid particle kinematics on the free-surface must be coupled with the dynamics of the irrotational wave field that generates the surface exploiting Zakharov's (1968) Hamiltonian formalism. \section{Crest slowdown and wave breaking}\label{sec:slow} \textcolor{black}{In this section, we discuss the relevance of the kinematic criterion for wave breaking~\citep{Perlin2013,Shemer2014,Shemer2015}. Recent studies point at the crest slowdown as what appears to be the underlying inviscid mechanism from which breaking onset initiates. 
In particular, the multifaceted study by~\cite{Banner_PRL2014} on unsteady highly nonlinear wave packets highlights the existence of a generic oscillatory crest leaning mode that leads to a systematic crest speed slowdown of approximately $20\%$ below the linear phase speed at the dominant wavelength~(\cite{Fedele2014_EPL}, see also~\cite{Shemer2014}). This explains why initial breaking wave crest speeds are observed to be approximately $80\%$ of the linear carrier-wave speed (\cite{RappMelville,Stansell_MacFarlaneJPO2002}).} Both the particle kinematics on the free surface and the energetics of the wave field that generates the surface should be considered to establish if the kinematic criterion for incipient breaking is valid. Recent studies show that the breaking onset of the largest crest of unsteady wave groups initiates before the horizontal particle velocity $u_x$ reaches the crest speed $V_c$, with $x$ being the direction of wave propagation. More specifically, it has been observed that wave breaking initiates when the particle velocity reaches about $0.84$ times the crest velocity (\cite{Barthelemy2015,BannerSaket2015},~see also~\cite{KurniaVanGroesen2014}). In fact, none of the recurrent groups reach the threshold $B_x=u_x/V_c=0.84$, while all marginal breaking cases exceed the threshold. \cite{SongBannerJPO2002}, and more recently~\cite{Barthelemy2015}, explored the existence of an energy flux threshold related to the breaking onset. This suggests looking at the space-time transport of wave energy fluxes near a large crest of an unsteady wave group and possible local superharmonic instabilities that initiate as the threshold $B_x$ is exceeded leading to breaking, similar to those found for steady steep waves~\citep{Longuet-HigginspartI1978}. In the following we study the wave energy transport below a crest and the relation to the crest slowdown.
The irrotational Eulerian velocity field $\mathbf{U}=(U,V,W)=(\phi_{,x},\phi_{,y},\phi_{,z})$ that generates the free surface $\zeta$ is given by the gradient of the potential $\phi$. From Eq.~\eqref{uu} the velocity $\mathbf u=(u_x,u_y,u_z)$ of a fluid particle that at time $t$ passes through the point $\mathbf x_P$ is $\mathbf u(t)=\mathbf U(\mathbf x_P,t)$. Besides the Laplace equation to impose fluid incompressibility in the flow domain, $\phi$ satisfies the dynamic Bernoulli and kinematic conditions on the free surface (see, e.g., \cite{Zakharov1968,Zakharov1999}) \begin{equation} \rho\phi_{,t}+\rho \gr\zeta+K_{e}=0,\qquad z=\zeta,\label{B} \end{equation} and \begin{equation} \phi_{,z}=\zeta_{,t}+U\zeta_{,x}+V\zeta_{,y},\qquad z=\zeta,\label{B1a} \end{equation} where $K_{e}=\rho\mathbf{\left|U\right|}^{2}/2$ is the kinetic energy density. Drawing on \cite{Tulin2007}, consider the transport equation \begin{equation} \partial_{t}K_{e}+\nabla\cdot\mathbf{F}_{K_{e}}=0\label{Ke} \end{equation} and the associated flux \begin{equation} \mathbf{F}_{K_{e}}=-\rho\phi_{,t}\mathbf{U}.\label{Flux} \end{equation} Equation~(\ref{Ke}) can be written as \begin{equation} \partial_{t}K_{e}+\nabla\cdot\left(\mathbf{C}_{K_{e}}K_{e}\right)=0,\label{Ke-1} \end{equation} where we have defined the Eulerian kinetic energy flux velocity \begin{equation} \mathbf{C}_{K_{e}}=\frac{\mathbf{F}_{K_{e}}}{K_{e}}=-\frac{\rho\phi_{,t}}{K_{e}}\mathbf{U}.\label{cflux} \end{equation} \begin{comment} The Lagrangian kinetic energy flux seen by a fluid particle is \begin{equation} \mathbf{F}_{K_{e},L}=-\left(\rho\phi_{,t}+K_{e}\right)\mathbf{U}\label{Flux-1} \end{equation} and the associated Lagrangian speed \[ \mathbf{C}_{K_{e},L}=-\left(1+\frac{\rho\phi_{,t}}{K_{e}}\right)\mathbf{U}. 
\] \end{comment} At the free-surface, the kinetic energy flux in Eq.~(\ref{Flux}) can be written as \begin{equation} \mathbf{F}_{K_{e}}=\mathbf{U}\left(\rho g\zeta+K_{e}\right),\qquad z=\zeta,\label{Flux-1-1} \end{equation} where we have used the Bernoulli equation (\ref{B}). Then, the rate of change of the surface potential energy density $P_{e}=\rho g\zeta^{2}/2$ \citep{Tulin2007} \begin{equation} \partial_{t}P_{e}=\mathbf{F}_{K_{e}}\cdot\mathbf{n}/\cos\theta\label{Pe} \end{equation} is due to the flux of kinetic energy into the moving interface $\zeta$ \begin{equation} \mathbf{F}_{K_{e}}\cdot\mathbf{n}=U_{n}\left(\rho g\zeta+K_{e}\right),\qquad z=\zeta,\label{Peflux} \end{equation} where $U_{n}=\mathbf{U}\cdot\mathbf{n}$ is the fluid velocity normal to the surface and $\theta$ the angle between $\mathbf{n}$ and the vertical (at a wave crest, $\theta=0$). The sum of the total kinetic energy $K_{e}$ integrated over the wave domain and the potential energy $P_{e}$ integrated over the surface is conserved. Clearly, a wave crest grows when the adjacent kinetic energy flux behind the crest is larger than the flux after the crest. For unidirectional waves $\zeta(x,t)$, the kinematic condition (\ref{B1a}) reduces to \[ \zeta_{,t}=W-U\zeta_{,x}, \] and \[ \zeta_{,xt}=\partial_x W+\left(\partial_z W-\partial_x U\right)\zeta_{,x}-\partial_z U\,\zeta_{,x}^{2}-U\zeta_{,xx}. \] Then, at $\zeta_{,x}=0$ the crest speed in Eq.~\eqref{Vc1} can be written as \begin{equation} V_{c}=-\frac{\zeta_{,xt}}{\zeta_{,xx}}=U-\frac{\partial_x W}{\zeta_{,xx}}=U-\frac{\partial_z U}{\zeta_{,xx}},\label{VCC} \end{equation} where $\partial_x W=\partial_z U$ because of irrotationality. At a crest $\zeta_{,xx}<0$ and the vertical gradient $\partial_z U>0$ as indicated by measurements and simulations~\citep{Barthelemy_slowdown2015,Barthelemy2015}. As a result, for smooth wave fields the crest speed $V_{c}$ is always larger than the horizontal fluid velocity $U$.
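To make Eq.~\eqref{VCC} concrete, here is a small sketch (our own illustration) under the linear deep-water Airy approximation, an assumption of this example only and not used in the derivation above: $\zeta=a\cos(kx-\omega t)$ with $\omega^2=\gr k$ and $\phi=(a\omega/k)\,{\rm e}^{kz}\sin(kx-\omega t)$, evaluated at a crest:

```python
import math

# Sketch under the linear deep-water (Airy) approximation -- an assumption of
# this illustration only: zeta = a*cos(k*x - w*t) with w**2 = g*k and velocity
# potential phi = (a*w/k)*exp(k*z)*sin(k*x - w*t), so U = phi_x and dU/dz = k*U.
g = 9.81
a, k = 0.05, 2.0
w = math.sqrt(g * k)

# evaluate at a crest (k*x - w*t = 0) on the surface z = zeta = a
U = a * w * math.exp(k * a)      # horizontal fluid velocity at the crest
dUdz = k * U                     # vertical shear (irrotational: dW/dx = dU/dz)
zeta_xx = -a * k * k             # surface curvature at the crest (negative)

Vc = U - dUdz / zeta_xx          # crest speed from Eq. (VCC)
print(U, Vc, U / Vc)
```

For this gently sloped wave the ratio $U/V_c$ is far below the breaking threshold $B_x=0.84$; it grows as the crest steepens ($|\zeta_{,xx}|$ large) or as the shear $\partial_z U$ flattens, consistent with the discussion above.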
According to Eq.~(\ref{VCC}), only when the crest becomes steep ($\left|\zeta_{,xx}\right|\gg1$) or the horizontal velocity profile flattens near the crest ($\partial_z U\ll1$) is the crest speed $V_{c}$ closer to the particle speed $u_x=U$. Thus, the observation that the initiation of breaking occurs when the particle speed $u_x=U$ is actually $0.84$ times the crest speed $V_{c}$ is the kinematic manifestation of the space-time transport of kinetic energy below the crest~\citep{Barthelemy2015,BannerSaket2015}. Indeed, from Eq.~(\ref{Peflux}) the normal velocity $C_{K_{e}}$ of kinetic energy into the moving surface is given by \begin{equation} C_{K_{e}}=\frac{\mathbf{F}_{K_{e}}\cdot\mathbf{n}}{K_{e}}=U_{n}\left(1+\frac{\rho \gr\zeta}{K_{e}}\right).\label{Peflux-1} \end{equation} At a crest, where $\zeta>0$, $C_{K_{e}}$ is always larger than the fluid speed $U_{n}$ normal to the surface. However, we expect that as the wave crest grows toward breaking, the local kinetic energy $K_{e}$ increases much faster than the potential energy $\rho \gr\zeta$, so that $C_{K_{e}}$ tends to $U_{n}$ and the accumulation of potential energy into the surface is largely attenuated. Equivalently, the Lagrangian kinetic energy flux speed $C_{K_{e}}-U_{n}$ seen by fluid particles on the surface is practically null. \section{There are no finite-time blowups}\label{sec:blowup} In the appendix of~\cite{sclav05}, contributed by Bridges, the possibility of finite-time blowup of solutions of the JS equations is discussed. Bridges studies the special case of the particle kinematics on a 1D surface, i.e., when $\zeta_{,y}\equiv 0$. The equations of motion in Eqs.~\eqref{JS} then reduce to \begin{subequations} \begin{equation*} \dot x=u_x,\quad \dot y=u_y, \end{equation*} \begin{equation*} \dot u_x=-\frac{2\zeta_{,x}\zeta_{,xt}}{1+\zeta_{,x}^2}u_x-\frac{\zeta_{,x}\zeta_{,xx}}{1+\zeta_{,x}^2}u_x^2 -\frac{\zeta_{,x}(\zeta_{,tt}+\gr)}{1+\zeta_{,x}^2}, \quad \dot u_y=0.
\end{equation*} \label{eq:bridges} \end{subequations} It is then argued that under the further simplifying assumption that the matrix \begin{equation*} \frac{\zeta_{,x}}{1+\zeta_{,x}^2} \begin{pmatrix} \zeta_{,xx} & \zeta_{,xt}\\ \zeta_{,xt} & \zeta_{,tt}+\gr\\ \end{pmatrix}, \label{eq:briddges_assume} \end{equation*} is constant along trajectories $x(t)$, the horizontal velocity $u_x$ is likely to grow unbounded in finite time. These assumptions are highly specific and unrealistic. Nevertheless, Bridges' observation raises the fundamental question of whether the JS equations are well-posed. In fact, the right-hand side of the JS equations (cf. Eq.~\eqref{JS}) is not Lipschitz continuous due to the presence of the quadratic terms in $\dot{x}$ and $\dot{y}$. Therefore, the elementary results from ODE theory (i.e., Picard's existence and uniqueness theorem) do not rule out the finite-time blowup scenario. \textcolor{black}{Note that even though the free surface $\zeta$ is bounded, the particle velocities obtained from the JS equations~\eqref{JS} could in principle exhibit singular behavior.} Our Hamiltonian formulation for the 3-D particle kinematics shows that for smooth steady surfaces (i.e., when $\zeta=\zeta(x,y)$ has bounded partial derivatives), the finite-time blowup never occurs. As we show in Appendix~\ref{app:u2}, the mere conservation of a Hamiltonian function does not generally rule out the finite-time blowup. However, the particular form of the Hamiltonian function~\eqref{HV} leads to a finite bound on particle speed. To see this, note that the Hamiltonian $\mathcal{H}_c=\mathcal{H}_c(\mathbf{x},{\mathbf{p}})$ derived in Eq.~(\ref{HV}) is conserved along the trajectories $(\mathbf{x}(t),\mathbf{p}(t))$. More precisely, \begin{equation} \mathcal{H}_c(\mathbf{x}(t),{\mathbf{p}}(t))= \mathcal{H}_c(\mathbf{x}(0),{\mathbf{p}}(0))=\mathcal{H}_{0}<\infty, \label{eq:dHdt=0} \end{equation} for all $t$ and finite initial data $(\mathbf x(0),{\mathbf p}(0))$.
On the other hand, \begin{equation} \mathcal H_c(\mathbf x,{\mathbf p})= \frac{1}{2}{\mathbf p}\cdot \mathbf B^{-1}{\mathbf p}+\gr \zeta(\mathbf x) \geq \frac{|{\mathbf p}|^2}{2(1+|\nabla\zeta(\mathbf x)|^2)}+\gr \zeta(\mathbf x), \label{eq:Ham_ineq} \end{equation} where the inequality follows from the fact that $\mathbf B^{-1}$ is symmetric, positive-definite with the smallest eigenvalue equal to $(1+|\nabla\zeta|^2)^{-1}$. Now assume that there exists a finite time $t_{0}$ such that $\lim_{t\to t_{0}}|{\mathbf{p}}(t)|=\infty$, i.e., there is a blowup at time $t_{0}$. Since $\zeta$ and $|\nabla\zeta|$ are bounded, inequality~\eqref{eq:Ham_ineq} implies that $\lim_{t\to t_{0}}\mathcal{H}_c(\mathbf{x}(t),{\mathbf{p}}(t))=\infty$. This, however, contradicts the conservation law~\eqref{eq:dHdt=0}. By definition of the canonical momentum \eqref{pmon1}, we have ${\mathbf p}=\mathbf B{\mathbf u}_h$. This in turn implies $$|{\mathbf p}|^2={\mathbf u}_h\cdot \mathbf B^2{\mathbf u}_h\geq |{\mathbf u}_h|^2,$$ where the inequality follows from the fact that $\mathbf B$ is positive definite with the smallest eigenvalue equal to $1$. Since $|{\mathbf p}|$ is bounded, so is $|{\mathbf u}_h|$, ruling out the finite-time blowup for the particle velocity. In summary, in the autonomous case (where the smooth surface $\zeta$ is time-independent) the equations of motion~\eqref{JS} are well-posed and finite-time blowup cannot occur. For traveling waves, i.e., $\zeta(x,y,t)=\overline\zeta(x-ct,y)$, one can also show that there are no finite-time blowups. The proof is similar to the steady case, except that for the traveling waves the conserved Hamiltonian is given by Eq.~\eqref{Htw3}. Namely, in the co-moving frame $\overline{\mathbf x}=(x-ct,y)$, we have \begin{equation*} \mathcal H_c(\overline{\mathbf x}(t),{\mathbf p}(t))\geq \frac{|{\mathbf p}-\pmb{\alpha}-c\mathbf B\mathbf e_1|^2}{2(1+|\nabla\zeta(\overline{\mathbf x})|^2)}+ \gr\overline\zeta(\overline{\mathbf x})-\frac{1}{2}c^2. 
\end{equation*} As in the steady case, blowup of ${\mathbf p}$ violates the conservation of the Hamiltonian function. For the general non-autonomous case, where $\zeta$ is time-dependent, the finite-time blowup may not be ruled out by the above argument. \section{Trapping regions for steady flows and traveling waves}\label{sec:trapping} As mentioned earlier, the JS equations are very general as they describe the friction-less motion of a particle on a given surface. Using the Hamiltonian structure in Eq.~\eqref{HV}, we show that the horizontal motion of a particle on a steady surface (i.e., $\zeta=\zeta(x,y)$) or on a traveling wave (i.e., $\zeta=\overline{\zeta}(x-ct,y)$) is always trapped in a subset of the two-dimensional $x-y$ plane. Since the Hamiltonian is conserved, the phase space $(x,y,u_x,u_y)\in \mathbb R^4$ is foliated by the invariant hypersurfaces $\mathcal{H}=\mbox{const.}$ These hypersurfaces are three-dimensional, and therefore, the particle trajectories can be chaotic. It turns out that one can deduce more from the Hamiltonian structure. Namely, we show that, based on their initial conditions, the trajectories are confined to a subset of the configuration space $(x,y)$. We first consider the steady case $\zeta_{,t}=0$, where the Hamiltonian~\eqref{Hc} can be written as \begin{equation} \mathcal H(\mathbf x,\mathbf u)=\gr \zeta(\mathbf x)+\frac{1}{2}|\mathbf u|^2+\frac{1}{2}|\mathbf u\cdot \nabla\zeta (\mathbf x)|^2. \label{eq:simple_Ham} \end{equation} Note that the energy $E$ is omitted since the system is autonomous. In this steady case, the following result holds. \begin{theorem} Consider the motion of a particle constrained to the smooth steady surface $\zeta=\zeta(\mathbf x)$. 
Denote the initial condition of the particle by $(\mathbf x_0,\mathbf u_0)$ and define \begin{equation} D_0:=\left\{\mathbf x=(x,y)\in\mathbb R^2| \zeta(\mathbf x)\leq \zeta(\mathbf x_0)+ \frac{1}{2\gr}|\mathbf u_0|^2+\frac{1}{2\gr}|\mathbf u_0\cdot \nabla\zeta (\mathbf x_0)|^2\right\}. \label{D0} \end{equation} The position of the particle is bound to the subset $D_0$, i.e., $(x(t),y(t))\in D_0$ for all times $t$. \label{thm:trapping} \end{theorem} \begin{proof} Hamiltonian~\eqref{eq:simple_Ham} is conserved along particle trajectories $(\mathbf x(t),\mathbf u(t))$. Hence we have $$\gr \zeta(\mathbf x(t))\leq \mathcal H(\mathbf x(t),\mathbf u(t))=\mathcal H(\mathbf x_0,\mathbf u_0).$$ \end{proof} Note that the above theorem does not imply that the subset $D_0$ is invariant. In fact, particles initiated outside $D_0$ can very well enter (and exit) the set. Instead, the set $D_0$ is a \emph{trapping region}, i.e., particles starting in $D_0$ with initial conditions $(\mathbf x_0,\mathbf u_0)$ stay in $D_0$ for all times. For a given surface, the trapping region $D_0$ is entirely determined by the initial position $\mathbf x_0$ and the initial velocity $\mathbf u_0$ of the particle. An interesting special case is to consider the motion of the particle from rest, i.e., zero initial velocity. Then Theorem~\ref{thm:trapping} implies the following. \begin{corollary} Consider the motion of a particle that is initially at rest and moves on a smooth steady surface $\zeta=\zeta(x,y)$. Denote the initial position of the particle by $(x_0,y_0)$ and define \begin{equation*} D_0:=\left\{(x,y)\in\mathbb R^2| \zeta(x,y)\leq \zeta(x_0,y_0) \right\}. \label{D0_0} \end{equation*} The position of the particle is bound to the subset $D_0$, i.e. $(x(t),y(t))\in D_0$ for all times $t$. \label{cor:trapping} \end{corollary} \begin{proof} This is a direct consequence of Theorem~\ref{thm:trapping} with the initial velocity ${\mathbf u}_0=\mathbf 0$. 
\end{proof} Theorem~\ref{thm:trapping} and Corollary~\ref{cor:trapping} hold for traveling waves, $\zeta(x,y,t)=\overline{\zeta}(x-ct,y)$. The statements are identical except that the coordinate $x$ and the velocity $u_x$ are replaced with the co-moving coordinate $\bar x = x-ct$ and velocity ${\bar u}_x=\dot x-c$, respectively. The proofs are similar and therefore omitted here. The trapping region in Eq.~\eqref{D0} is now given by \begin{equation*} D_0:=\left\{\mathbf x=(\bar x,y)\in\mathbb R^2| \overline{\zeta}(\mathbf x)\leq \overline{\zeta}(\mathbf x_0)+ \frac{1}{2\gr}|\mathbf u_0-c\mathbf e_1|^2+\frac{1}{2\gr}|(\mathbf u_0-c\mathbf e_1)\cdot \nabla \overline{\zeta} (\mathbf x_0)|^2 \right\}, \label{D01} \end{equation*} where $\mathbf e_1$ is the unit vector along $\bar x$ and the initial particle velocity $\mathbf u_0$ is that in the fixed reference frame. \section{Concluding remarks} We have investigated the properties of the JS equations for the kinematics of fluid particles on the sea surface. We showed that the JS equations can be derived from an action principle describing the motion of a frictionless particle constrained on an unsteady surface and subject to gravity. Further, for a zero-stress free surface the classical kinematic criterion for wave breaking is deduced from the condition of vanishing of vorticity generated at a crest. If this holds for the largest crest, the Hamiltonian structure of the JS equations reveals that the associated symplectic two-form instantaneously reduces to that of the motion of a particle in free flight, as if the constraint to be on the free surface did not exist. In realistic oceanic fields the large crest eventually breaks and energy of fluid particles is dissipated to turbulence, which is a time-irreversible mechanism. We speculate that this behavior appears analogous to a flight--crash event in fluid turbulence, where a particle flies with a large velocity before suddenly losing energy~\citep{Falkovich2014}. 
Clearly, the Hamiltonian particle dynamics associated with the inviscid Euler or Zakharov~(1968) equations is time-reversible~\citep{Chabchoub2014}. Then, the instantaneous vanishing of vorticity at large crests may reveal the inviscid mechanism of breaking inception before turbulent dissipative effects take place. This necessitates a further study of the dynamics and energetics of the wave field that generates the free surface to verify if the kinematic breaking criterion is valid. Finally, the conservation and special form of the Hamiltonian function for steady surfaces and traveling waves implies that particle velocities remain bounded at all times, ruling out the finite-time blowup of solutions. \section*{Acknowledgments} FF acknowledges the Georgia Tech graduate courses `Classical Mechanics II' taught by Jean Bellissard in Spring 2013 and `Nonlinear dynamics: Chaos, and what to do about it?' taught by Predrag Cvitanovi\'c in Spring 2012. FF also thanks Jean Bellissard for stimulating discussions on differential geometry and classical mechanics as well as for a revision of an early draft of the manuscript. The authors are also grateful to Jean Bellissard, Predrag Cvitanovi\'c and Rafael De La Llave for stimulating discussions on symplectic geometry and Hamiltonian dynamics. \begin{appendices} \section{JS equations for steady irrotational flows}\label{app:Stokes} Consider a one-dimensional, semi-infinite, steady, irrotational flow constrained to the wave surface $\zeta=\zeta(x)$. These assumptions imply $\phi_{,y}=\phi_{,t}=0$ (where $\phi$ is the velocity potential) and $\zeta_{,y}=\zeta_{,t}=0$.
Since the vertical particle velocity satisfies $$\dot z=\phi_{,z}(x(t),z(t)),$$ the respective acceleration is given by $$\ddot z = \phi_{,xz}\dot x + \phi_{,zz}\dot z.$$ For particles on the surface, $z(t)=\zeta(x(t))$, which implies $$\dot z = \zeta_{,x} \dot x,$$ and $$\ddot z = \zeta _{,xx} \dot x^2+\zeta_{,x} \ddot x.$$ Therefore, $$\zeta _{,xx} \dot x^2+\zeta_{,x} \ddot x=\phi_{,xz}\dot x + \phi_{,zz}\zeta_{,x}\dot x,$$ which upon multiplying by $\zeta_{,x}$ and rearranging terms gives \begin{equation} \zeta_{,x}^2 \ddot x=-\zeta _{,xx}\zeta_{,x} \dot x^2+\phi_{,xz}\zeta_{,x}\dot x + \phi_{,zz}\zeta_{,x}^2\dot x.\label{EQ1} \end{equation} On the other hand, the Bernoulli equation~\eqref{B} reads $$\gr\zeta(x(t))+\frac{1}{2}\left(\dot x^{2}+\phi_{,z}^2(x(t),\zeta(x(t)))\right)=0.$$ Taking the derivative with respect to time and dividing by $\dot x$, we obtain $$\ddot x=-\gr\zeta_{,x}-\phi_{,z}\big(\phi_{,xz}+\phi_{,zz}\zeta_{,x}\big).$$ Using $\phi_{,z}=\dot z=\zeta_{,x}\dot x$ implies \begin{equation} \ddot x=-\gr\zeta_{,x}-\phi_{,xz}\zeta_{,x}\dot x-\phi_{,zz}\zeta_{,x}^2\dot x. \label{EQ2} \end{equation} Adding Eqs.~\eqref{EQ1}~and~\eqref{EQ2} gives the JS equations~\eqref{JS} in the case of 1-D steady flows. \section{Computation of the Dirac bracket} \label{app:Dirac} Since the surface is time-dependent, the resulting constraints have an explicit time-dependence. We first autonomize the system of the free particle in three dimensions by adding a pair of canonically conjugate variables $(t,E)$, where $E$ is the energy exchanged by the particle with the moving surface. Indeed, the particle behaves as an open system if the motion is on unsteady surfaces. Constraints are now functions of the dynamical variables \[ \overline{\bf z}=(x,y,z,t,u_x,u_y,u_z,E), \] as required by Dirac's theory, and $\overline{\bf z}(\tau)$ is a generic trajectory in the extended phase space, parametrized by $\tau$, which plays the role of time for the autonomous system.
The autonomized Hamiltonian of the free particle in three dimensions subjected to gravity is \begin{equation*} \overline{\mathcal{H}}=\frac{u_x^2+u_y^2+u_z^2}{2}+\gr z +E.\label{Hfree} \end{equation*} The two constraints are given by \begin{equation*} \Phi_1=z-\zeta(x,y,t)=0,\quad\quad \Phi_2=u_z-u_x\zeta_{,x}-u_y \zeta_{,y}-\zeta_{,t}=0. \end{equation*} The $2\times 2$ matrix ${\mathbb D}$ follows from the inverse of the symplectic matrix ${\mathbb C}$ given by Eq.~(\ref{eq4C}): $$ D_{11}=D_{22}=0,\quad\quad D_{21}=-D_{12}=1/(1+\zeta_{,x}^2+\zeta_{,y}^2). $$ The Poisson matrix associated with the Dirac bracket is computed from \begin{equation} \label{eq:Jstar} {\mathbb J}_*={\mathbb J}-{\mathbb J}\hat{\cal Q}^\dagger{\mathbb D}\hat{\cal Q}{\mathbb J}, \end{equation} where the $K\times N$ matrix $\hat{\cal Q}$ has elements $$ \hat{\cal Q}_{\alpha l}=\frac{\partial \Phi_\alpha}{\partial z_l}, $$ and $\dagger$ denotes Hermitian transposition. Since ${\mathbb C}=\hat{\cal Q}{\mathbb J}\hat{\cal Q}^\dagger$, the Poisson matrix of the Dirac bracket can be computed algebraically by way of a projector~\citep{chan13a} $$ {\cal P}_*={\mathbb I}_N-\hat{\cal Q}^\dagger{\mathbb D}\hat{\cal Q}{\mathbb J}, $$ where ${\mathbb I}_N$ is the $N\times N$ identity matrix. If ${\mathbb C}$ is invertible, ${\cal P}_*\hat{\cal Q}^\dagger=0$, which is an alternative way to characterize the fact that the constraints are actually Casimir invariants of the Dirac bracket. Indeed, the matrix ${\mathbb D}$ is defined such that the constraints are Casimir invariants of the Dirac bracket, i.e., $\{F,\Phi_{\alpha}\}_*=0$ for all observables $F$. As a result, the Dirac bracket is a Poisson bracket that satisfies the Jacobi identity~\citep{chan13b}. The Dirac projector ${\cal P}_*$ projects the dynamics onto the surface defined by the constraints. The expression of ${\mathbb J}_*$ is given by ${\mathbb J}_*={\mathbb J}{\cal P}_*={\cal P}_*^\dagger {\mathbb J}{\cal P}_*$.
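The algebraic procedure above is easy to check numerically. The following sketch (our own verification, with a hypothetical sample surface $\zeta$ and finite-difference constraint gradients) builds ${\mathbb J}$, $\hat{\cal Q}$, ${\mathbb C}$, ${\mathbb D}$ and ${\mathbb J}_*$ for the ordering $\overline{\bf z}=(x,y,z,t,u_x,u_y,u_z,E)$, and verifies both the entry $C_{12}=1+\zeta_{,x}^2+\zeta_{,y}^2$ and the Casimir property ${\mathbb J}_*\hat{\cal Q}^\dagger=0$:

```python
import numpy as np

h = 1e-4

def zeta(x, y, t):
    # hypothetical smooth unsteady surface, used only for this check
    return 0.2 * np.sin(x) * np.cos(y) + 0.1 * np.sin(t)

def pd(f, v, i):
    # central finite difference of f with respect to component i of v
    vp = np.array(v, dtype=float); vp[i] += h
    vm = np.array(v, dtype=float); vm[i] -= h
    return (f(vp) - f(vm)) / (2 * h)

def Phi1(zb):                      # zb = (x, y, z, t, ux, uy, uz, E)
    return zb[2] - zeta(zb[0], zb[1], zb[3])

def Phi2(zb):
    zx = pd(lambda w: zeta(w[0], w[1], w[3]), zb, 0)
    zy = pd(lambda w: zeta(w[0], w[1], w[3]), zb, 1)
    zt = pd(lambda w: zeta(w[0], w[1], w[3]), zb, 3)
    return zb[6] - zb[4] * zx - zb[5] * zy - zt

zb = np.array([0.3, -0.7, 0.0, 0.5, 0.4, -0.2, 0.1, 0.0])
zb[2] = zeta(zb[0], zb[1], zb[3])        # place the particle on the surface

# canonical Poisson matrix for the pairs (x,u_x), (y,u_y), (z,u_z), (t,E)
J = np.block([[np.zeros((4, 4)), np.eye(4)],
              [-np.eye(4), np.zeros((4, 4))]])
Q = np.array([[pd(P, zb, i) for i in range(8)] for P in (Phi1, Phi2)])

C = Q @ J @ Q.T                          # symplectic matrix of the constraints
D = np.linalg.inv(C)
Jstar = J - J @ Q.T @ D @ Q @ J          # Poisson matrix of the Dirac bracket

zx, zy = -Q[0, 0], -Q[0, 1]              # grad Phi1 = (-zeta_x, -zeta_y, 1, -zeta_t, 0, ...)
print(C[0, 1], 1 + zx**2 + zy**2)        # matches Eq. (eq4C)
print(np.abs(Jstar @ Q.T).max())         # constraints are Casimirs: ~ 0
```

The identity ${\mathbb J}_*\hat{\cal Q}^\dagger=0$ holds to machine precision whenever ${\mathbb C}$ is invertible, independently of the finite-difference accuracy, since ${\mathbb J}\hat{\cal Q}^\dagger-{\mathbb J}\hat{\cal Q}^\dagger{\mathbb D}{\mathbb C}=0$ by construction.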
This provides a systematic and algebraic procedure to compute Dirac brackets. The resulting Poisson matrix does not explicitly depend on $z$ and $u_z$. As a consequence, the Poisson bracket of two functions of $(x,y,t,u_x,u_y,E)$ is again a function of $(x,y,t,u_x,u_y,E)$. In other words, the algebra of observables $F(x,y,t,u_x,u_y,E)$ is a Poisson sub-algebra. In this way, one can omit $z$ and $u_z$ (since their dynamics is quite trivially given by the constraints, which are Casimir invariants of the Dirac bracket) and the phase-space dimension is reduced by two. This leads to the expression of the Dirac bracket given by Eq.~(\ref{eq:DBzeta}). \section{The Hamiltonian structure of a prototype blowup problem}\label{app:u2} As a toy problem, Bridges considers the simplest second-order ODE of Riccati type, which can be written as \begin{equation} \dot{x}=u,\quad\quad\dot{u}=u^2. \label{eq:ftbu} \end{equation} Although the JS equations cannot be reduced to this form, we discuss its properties for completeness. The system~\eqref{eq:ftbu} possesses the Hamiltonian \[ H=u{\rm e}^{-x}, \] which is of course an invariant of the dynamics. The non-canonical Poisson bracket is given by $$ \{F,G\}=u {\rm e}^x \left( \frac{\partial F}{\partial x}\frac{\partial G}{\partial u}-\frac{\partial F}{\partial u}\frac{\partial G}{\partial x} \right). $$ The canonical structure of the system is obtained in the variables $(x,{\rm e}^{-x} \ln u)$. For initial conditions $\left(x_{0},u_{0}\right)$ at $t=0$ \[ x(t)=x_{0}+\ln\frac{1}{1-u_{0}t},\qquad u(t)=\frac{u_{0}}{1-u_{0}t}. \] Clearly, for positive initial velocities ($u_{0}>0$), all solutions blow up in finite time, with the time of blowup inversely proportional to the initial velocity $u_{0}$. On the other hand, trajectories are bounded for negative initial velocities and they exist for all time.
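The closed-form solution above can be verified directly (our own check, not from the source):

```python
import math

# Verification sketch (ours) of the closed-form solution of Eq. (eq:ftbu):
# x(t) = x0 + ln(1/(1 - u0*t)), u(t) = u0/(1 - u0*t), valid for t < 1/u0
# when u0 > 0, with H = u*exp(-x) invariant along the orbit.
x0, u0 = 0.5, 2.0

def x(t): return x0 + math.log(1.0 / (1.0 - u0 * t))
def u(t): return u0 / (1.0 - u0 * t)

H0 = u0 * math.exp(-x0)
h = 1e-6
errs = []
for t in (0.0, 0.2, 0.4, 0.49):              # blowup time is 1/u0 = 0.5
    xdot = (x(t + h) - x(t - h)) / (2 * h)
    udot = (u(t + h) - u(t - h)) / (2 * h)
    errs.append(abs(xdot - u(t)) / (1 + abs(u(t))))       # xdot = u
    errs.append(abs(udot - u(t) ** 2) / (1 + u(t) ** 2))  # udot = u^2
    errs.append(abs(u(t) * math.exp(-x(t)) - H0))         # H is conserved
print(max(errs), u(0.499999))                # u blows up as t -> 1/u0
```

As expected, for $u_0>0$ the velocity diverges at $t=1/u_0$ while $H$ stays constant to machine precision, since ${\rm e}^{-x}$ decays at exactly the rate at which $u$ grows.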
The finite-time singularity of the system can be explained by exploiting the time invariance of the Hamiltonian $H=u{\rm e}^{-x}$. As $u$ tends to infinity when $t$ tends to the blowup time $t_{0}$, $x$ also tends to infinity, but only logarithmically, in such a way that the product of $u$ and ${\rm e}^{-x}$ remains constant. This is possible because ${\rm e}^{-x}$ is not bounded from below by a strictly positive quantity. Contrast this with Eq.~\eqref{eq:Ham_ineq}, where the quadratic part of the Hamiltonian is positive definite, and hence bounded from below by a positive constant. \end{appendices} \bibliographystyle{jfm}
\section{Introduction}\label{sec:intro} Let $X=(X_n: n\geq 0)$ be a Markov chain taking values in a state space $S$. For the purpose of this paper, the state space $S$ may be discrete or continuous. In many application settings, it is natural to consider the behavior of $X$ as a function of a parameter $\theta$ that affects the transition dynamics of the process. In particular, suppose that for each $\theta$ in some open neighborhood of $\theta_0\in \mathbb{R}^d$, $P(\theta) = (P(\theta, x, dy): x,y\in S)$ defines the one-step transition kernel of $X$ associated with parameter choice $\theta$. In such a setting, computing the derivative of some application-specific expectation is often of interest. Such derivatives play a key role when one is numerically optimizing an objective function, defined as a Markov chain's expected value, over the decision parameter $\theta$. In addition, such derivatives describe the sensitivity of the expected value under consideration to perturbations in $\theta$. Such sensitivities are valuable in statistical applications, and arise when one applies (for example) the ``delta method'' in conjunction with estimating equations involving some expectation of the observed Markov chain; see, for example, \cite{lehmann06}. More generally, sensitivity analysis is important when one is interested in understanding how robust the model is to uncertainties in the input parameters. In particular, suppose that $\theta$ is a vector of statistical parameters, and that a data set of size $n$ has been collected to estimate the underlying true parameter $\theta^*$. 
In significant generality, the associated estimator $\hat \theta_n$ for $\theta^*$ will satisfy a central limit theorem (CLT) of the form $$ n^{1/2} (\hat \theta_n - \theta^*) \Rightarrow N(0,C) $$ as $n\to \infty$, where $\Rightarrow$ denotes weak convergence and $N(0,C)$ is a normally distributed random column vector with mean $0$ and covariance matrix $C$; see, for example, \cite{ibragimov81}. In many applications, one wishes to understand how the uncertainty in our estimator $\hat \theta_n$ of $\theta^*$ propagates through the model associated with $X$ to produce uncertainty in output measures of interest. Suppose, for example, that the decision-maker focuses her attention on a performance measure of the form $\alpha(\theta) = \mathbf{E}_\theta Z$, where $Z$ is some appropriately chosen random variable (rv) and $\mathbf{E}_\theta(\cdot)$ is the expectation operator under which $X$ evolves according to $P(\theta)$. If $\alpha(\cdot)$ is differentiable at $\theta^*$, then $$ n^{1/2}(\alpha(\hat \theta_n)-\alpha(\theta^*)) \Rightarrow \nabla \alpha(\theta^*) N(0,C) $$ as $n\to \infty$, where $\nabla\alpha(\theta)$ is the (row) gradient vector evaluated at $\theta$; see p.122 of \cite{serfling80}. If, in addition, $\nabla \alpha(\cdot)$ is continuous at $\theta^*$ and $C$ can be consistently estimated from the observed data via an estimator $C_n$, the interval \begin{equation}\label{eqn_1_1} \left[\alpha(\hat \theta_n) - z\frac{\sigma_n}{\sqrt{n}},\ \alpha(\hat \theta_n) + z\frac{\sigma_n}{\sqrt{n}}\right] \end{equation} is an asymptotic $100(1-\delta)$\% confidence interval for $\alpha(\theta^*)$ (provided $\nabla \alpha(\theta^*)C\nabla \alpha(\theta^*)^T > 0$), where $z$ is chosen so that $P(-z\leq N(0,1)\leq z) = 1-\delta$ and $\sigma_n = \sqrt{\nabla \alpha(\hat \theta_n)C_n \nabla \alpha(\hat \theta_n)^T}$. 
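The construction of the interval (\ref{eqn_1_1}) is mechanical once $\alpha(\hat\theta_n)$, $\nabla\alpha(\hat\theta_n)$, and $C_n$ are in hand. A minimal sketch using only standard-library tools, with hypothetical numerical inputs:

```python
import math
from statistics import NormalDist

def delta_method_ci(alpha_hat, grad_hat, C_n, n, delta=0.05):
    """Asymptotic 100(1-delta)% CI (1.1): alpha(theta_hat) -/+ z * sigma_n / sqrt(n),
    where sigma_n = sqrt(grad C_n grad^T) and grad_hat is the row gradient (a list)."""
    d = len(grad_hat)
    var = sum(grad_hat[i] * C_n[i][j] * grad_hat[j]
              for i in range(d) for j in range(d))
    sigma_n = math.sqrt(var)
    z = NormalDist().inv_cdf(1.0 - delta / 2.0)  # P(-z <= N(0,1) <= z) = 1 - delta
    half = z * sigma_n / math.sqrt(n)
    return alpha_hat - half, alpha_hat + half

# Hypothetical one-dimensional example
lo, hi = delta_method_ci(alpha_hat=1.5, grad_hat=[2.0], C_n=[[0.25]], n=400)
```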
The confidence interval (\ref{eqn_1_1}) provides the modeler with the desired sensitivity and robustness of the model described by $X$ to the statistical uncertainties present in the estimation of $\theta^*$. Of course, this approach rests on the differentiability of $\alpha(\cdot)$ and on one's ability to compute the gradient. This paper provides conditions guaranteeing differentiability in the general state space Markov chain setting and provides representations for those derivatives suitable for computation. The problem of determining such differentiability has a long history and has been addressed through various approaches including weak differentiation (\citealp{vazquez1992, pflug1992}), likelihood ratio \citep{glynn95}, measure-valued differentiation (\citealp{heidergott06}, \linebreak\citealp{heidergott2006B}), and derivative regeneration (\citealp{glasserman1993}). However, most of the previous approaches are limited to special classes of problems. For example, the results in \linebreak\cite{vazquez1992} and \cite{pflug1992} are limited to bounded performance functionals; \cite{glasserman1993} imposes special structure on the transition dynamics of the Markov chains and their parametrization; \cite{glynn95} assumes for random horizon expectations that the associated stopping times have finite exponential moments, and for stationary expectations that the Markov chain is geometrically ergodic. \cite{heidergott2006B} provide weaker conditions for random horizon performance measures based on the measure-valued differentiation approach, but their sufficient conditions are difficult to verify in general and still require that the associated stopping times possess finite (at least) second moments. Also based on measure-valued differentiation, \cite{heidergott06} study stationary expectations. However, the sufficient conditions verifiable based on the model building blocks in the paper require geometric ergodicity of the Markov chain. 
In this paper, on the other hand, we provide easily verifiable sufficient conditions for random horizon expectations that do not require any moment conditions for the associated stopping times---hence allowing even infinite horizon expectations. For stationary expectations, we provide (again, easily verifiable) sufficient conditions that do not require geometric ergodicity. We illustrate the sharpness of our differentiability criteria with the example of waiting times of G/G/1 queues with heavy-tailed service times. The rest of the paper is organized as follows. Section 2 develops a preliminary theory for both random-horizon expectations and stationary expectations based on simple and clean operator-theoretic arguments. Section 3 provides more general criteria for differentiability of random horizon expectations based on stochastic Lyapunov-type inequalities. In Section 4, again taking the Lyapunov-inequality approach, we establish differentiability criteria for stationary expectations. \section{Operator-theoretic Criteria for Differentiability}\label{sec:operator} We start by studying differentiability in a setting in which one can use operator arguments to establish existence of derivatives. In this operator setting, the proofs and theorem statements are especially straightforward. Consider a Markov chain $X=(X_n:n\geq 0)$ living on state space $S$, with one step transition kernel $P=(P(x,dy): x,y\in S)$, where $$ P(x,dy) = P(X_{n+1}\in dy|X_n = x) $$ for $x,y\in S$. We focus first on expectations of the form \begin{equation}\label{def2:u_star} u^*(x) = \mathbf{E}_x \sum_{j=0}^{T-1} \exp(\sum_{k=0}^{j-1} g(X_k)) f(X_j) + \exp(\sum_{k=0}^{T-1}g(X_k)) f(X_T), \end{equation} where $T = \inf\{n\geq 0: X_n \in C^c\}$ is the first hitting time of the ``target set'' $C^c\subseteq S$, $f:S\to\mathbb{R}_+$, $g:S\to \mathbb{R}$, and $\mathbf{E}_x(\cdot)\triangleq \mathbf{E}(\cdot|X_0=x)$. 
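As a concrete instance of (\ref{def2:u_star}), take $g\equiv 0$, $f=1$ on $C$, and $f=0$ on $C^c$, so that $u^*(x)=\mathbf{E}_x T$ is an expected hitting time. The sketch below (a hypothetical three-state chain, not from the paper) estimates $u^*$ by simulation and compares it with the exact linear-algebra solution:

```python
import random
import numpy as np

# Hypothetical chain on S = {0, 1, 2} with C = {0, 1} and target set C^c = {2}
P = [[0.5, 0.3, 0.2],
     [0.2, 0.4, 0.4],
     [0.0, 0.0, 1.0]]  # state 2 made absorbing for simulation purposes

def hitting_time(x, rng):
    t = 0
    while x != 2:
        x = rng.choices([0, 1, 2], weights=P[x])[0]
        t += 1
    return t

rng = random.Random(12345)
n = 100_000
mc = sum(hitting_time(0, rng) for _ in range(n)) / n  # Monte Carlo estimate of u*(0)

# Exact value: u* solves (I - P_C) u* = 1 on C, where P_C is P restricted to C
P_C = np.array(P)[:2, :2]
exact = np.linalg.solve(np.eye(2) - P_C, np.ones(2))  # exact[0] = 3.75
assert abs(mc - exact[0]) < 0.1
```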
In (\ref{def2:u_star}), we permit the possibility that $C^c = \emptyset$, in which case $T=\infty$ a.s., and $u^*$ is then to be interpreted as the ``infinite horizon discounted reward'' \begin{align*} u^*(x) = \mathbf{E} _x \sum_{j=0}^\infty \exp\left(\sum_{k=0}^{j-1} g(X_k)\right) f(X_j). \end{align*} In addition to subsuming infinite horizon discounted rewards, (\ref{def2:u_star}) also includes expected hitting times ($g\equiv 0, f=1$ on $C$ and $f=0$ on $C^c$), exit probabilities ($g\equiv 0$, $f=0$ on $C$, and $f(x) = I(x\in B)$ for $x\in C^c$, when one is considering $P(X_T\in B | X_0 = x)$), and many other natural Markov chain expectations. It is easy to verify that \begin{equation}\label{eq:u_star} u^* = \sum_{n=0}^\infty K^n \tilde f \end{equation} where $K = (K(x,dy): x, y\in C)$ is the non-negative kernel for which \begin{equation}\label{eq2:2.3} K(x,dy) = \exp(g(x)) P(x,dy) \end{equation} for $x,y\in S$, and \begin{equation*} \tilde f(x) = f(x) + \int_{C^c} \exp(g(x)) P(x,dy)f(y) \end{equation*} for $x \in C$. Here, we are taking advantage of the (common) notational convention that for a function $h:B \to \mathbb{R}$, a measure $\eta$ on $B$, and kernels $Q_1$ and $Q_2$ on $B$, the scalar $\eta h$, the function $Q_1 h$, the measure $\eta Q_1$, and the kernel $Q_1 Q_2$ are respectively defined via $$\eta h = \int_B h(y) \eta(dy),$$ $$(Q_1 h)(x) = \int_B h(y) Q_1(x,dy),$$ $$(\eta Q_1)(A) = \int_B \eta(dx) Q_1 (x,A),$$ $$(Q_1 Q_2)(x,A) = \int_B Q_1(x,dy) Q_2(y,A),$$ whenever the right-hand sides are well-defined. Furthermore, we define the kernels $Q^n$ via $Q^0 (x,dy) = \delta_x(dy)$ (where $\delta_x(\cdot)$ is a unit point mass at $x$), and $Q^n = Q Q^{n-1}$ for $n\geq1$. Our goal is to use operator-theoretic tools to study the differentiability of (\ref{eq:u_star}). To this end, we start by defining the appropriate linear spaces that underlie this approach. 
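When $C$ is finite, the representation $u^* = \sum_{n\geq 0} K^n \tilde f$ is just a Neumann series of matrices. A minimal numerical sketch (with hypothetical two-state data) confirming that the truncated series agrees with solving the linear system $(I-K)u^*=\tilde f$:

```python
import numpy as np

# Hypothetical two-state example on C: sub-stochastic P restricted to C, discount g
P_C = np.array([[0.4, 0.3],
                [0.2, 0.5]])        # remaining mass exits to C^c
g = np.array([-0.1, -0.2])
f_tilde = np.array([1.0, 2.0])      # assumed already assembled from f as in the text
K = np.exp(g)[:, None] * P_C        # K(x, y) = exp(g(x)) P(x, y)

# u* via the Neumann series, versus solving (I - K) u* = f_tilde directly
u_series = sum(np.linalg.matrix_power(K, j) @ f_tilde for j in range(200))
u_solve = np.linalg.solve(np.eye(2) - K, f_tilde)
assert np.allclose(u_series, u_solve)
```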
Given a measurable space $(B, \mathcal B)$, measurable $w:B\to[1,\infty)$, and $h:B\to\mathbb{R}$, let $\|h\|_w = \sup\{ |h(x)|/w(x): x\in B\}$ and $L_w=\{h\in L:\|h\|_w < \infty\}$, where $L$ is the set of measurable functions. For a linear operator $Q:L_w\to L_w$ and a functional $\eta:L_w\to \mathbb{R}$, set $$\vertiii{Q}_w = \sup_{h\in L_w: \|h\|_w \neq 0} \frac{\|Qh\|_w}{\|h\|_w}$$ and $$\|\eta\|_{w} = \sup\{|\eta h|: h\in L_w, \|h\|_w \leq 1\}.$$ Then, let $\mathcal L_w = \{Q\in \mathcal L: \vertiii{Q}_w < \infty\}$ and $\mathcal M_w = \{\eta\in \mathcal M: \|\eta\|_w < \infty\}$, where $\mathcal L$ and $\mathcal M$ are the sets of kernels and measures, respectively. Each of the spaces $L_w$, $\mathcal L_w$, and $\mathcal M_w$ is a Banach space under its respective norm and addition / scalar multiplication operations; see Appendix~\ref{appendix:B}. Furthermore, for $Q_1, Q_2 \in \mathcal L_w$, $h\in L_w$, and $\eta \in \mathcal M_w$, it is easy to show that \begin{equation}\label{eq:2.4} \vertiii{Q_1Q_2}_w \leq \vertiii{Q_1}_w\cdot\vertiii{Q_2}_w \end{equation} and \begin{equation}\label{eq:2.5} \begin{aligned} \|Q_1h\|_w &\leq \vertiii{Q_1}_w \cdot\|h\|_w,\\ \|\eta Q_1\|_w &\leq \|\eta\|_w \cdot \vertiii{Q_1}_w,\\ |\eta h| &\leq \|\eta\|_w \cdot \|h \|_w; \end{aligned} \end{equation} see, for example, \cite{Dunford71} for the special case $w\equiv 1$. 
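In the finite-state case these norms are easy to compute explicitly: for a kernel (matrix) $Q$, $\vertiii{Q}_w=\max_x \frac{1}{w(x)}\sum_y |Q(x,y)|\,w(y)$. A brief sketch (hypothetical data) illustrating the submultiplicativity (\ref{eq:2.4}):

```python
import numpy as np

def op_norm_w(Q, w):
    # |||Q|||_w = max over x of (sum_y |Q(x,y)| w(y)) / w(x)
    return np.max((np.abs(Q) @ w) / w)

Q = np.array([[0.5, 0.4],
              [0.3, 0.6]])
w = np.array([1.0, 2.0])
assert abs(op_norm_w(Q, w) - 1.3) < 1e-12                   # (0.5*1 + 0.4*2) / 1
assert op_norm_w(Q @ Q, w) <= op_norm_w(Q, w) ** 2 + 1e-12  # submultiplicativity
```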
In view of (\ref{eq:2.4}), if $\vertiii{Q^m}_w < 1$ for some $m\geq 1$, then $(I-Q)$ is invertible on $\mathcal L_w$ and $$ (I-Q)^{-1} = \sum_{n=0}^\infty Q^n.$$ Given a parametrized family of kernels $(Q(\theta)\in \mathcal L_w: \theta \in (a,b))$, we say that $Q(\cdot)$ is \emph{continuous} in $\mathcal L_w$ at $\theta_0 \in (a,b)$ if $\vertiii{Q(\theta_0 + h)-Q(\theta_0) }_w \to 0$ as $h\to 0$, and \emph{differentiable} in $\mathcal L_w$ at $\theta_0 \in (a,b)$ with derivative $Q'(\theta_0)$ if there exists a kernel $Q'(\theta_0) \in \mathcal L_w$ for which $$\vertiii{\frac{Q(\theta_0 + h)-Q(\theta_0)}{h} - Q'(\theta_0)}_w \to 0$$ as $h\to 0$. If $Q(\cdot)$ is differentiable in a neighborhood of $\theta_0$ with derivative $Q'(\cdot)$, and $Q'(\cdot)$ is continuous in $\mathcal L_w$, then we say that $Q(\cdot)$ is continuously differentiable at $\theta_0$. Similarly, given families $(f(\theta)\in L_w: \theta\in (a,b))$ and $(\eta(\theta) \in \mathcal M_w: \theta\in (a,b))$, we say that $f(\cdot)$ is \emph{continuous} in $ L_w$ at $\theta_0$ if $\left\|f(\theta_0+h) - f(\theta_0)\right\|_w \to 0$ as $h\to 0$, and \emph{differentiable} in $ L_w$ at $\theta_0$ if there exists $f'(\theta_0)\in L_w$ such that $$\left\|\frac{f(\theta_0+h) - f(\theta_0)}{h} - f'(\theta_0)\right\|_w \to 0$$ as $h\to 0$; and $\eta(\cdot)$ is \emph{continuous} in $\mathcal M_w$ at $\theta_0$ if $ \left\|\eta(\theta_0 + h) - \eta(\theta_0)\right\|_w \to 0$ as $h\to0$, and differentiable in $\mathcal M_w$ at $\theta_0$ if there exists $\eta'(\theta_0) \in \mathcal M_w$ such that $$ \left\|\frac{\eta(\theta_0 + h) - \eta(\theta_0)}{h} - \eta'(\theta_0)\right\|_w \to 0$$ as $h\to0$. As in $\mathcal L_w$, if $f(\cdot)$ and $\eta(\cdot)$ are differentiable and their derivatives are continuous at $\theta_0$ in $L_w$ and $\mathcal M_w$ respectively, we say that they are continuously differentiable. 
Assuming that $(Q(\theta): \theta \in (a,b))$ is $n$-times differentiable in some neighborhood $\mathcal N$ of $\theta_0$, with derivative $(Q^{(n)}(\theta):\theta\in\mathcal N)$, we say that $Q(\cdot)$ is $(n+1)$-times differentiable in $\mathcal L_w$ at $\theta_0$ if $(Q^{(n)}(\theta): \theta\in \mathcal N)$ is differentiable at $\theta_0$, with corresponding derivative $Q^{(n+1)}(\theta_0)$. We can analogously define $f^{(n+1)}(\theta_0)$ and $\eta^{(n+1)}(\theta_0)$ in the spaces $L_w$ and $\mathcal M_w$, respectively. (We restrict our discussion in this paper to scalar $\theta$, since the vector case introduces no new mathematical issues.) We can now state our first result, pertaining to differentiability of $u^*$. \begin{theorem}\label{thm:01} Suppose there exists $w:C\to[1,\infty)$ and $\theta_0 \in (a,b)$ for which: \begin{enumerate} \item[(a)] $\vertiii{K^m(\theta_0)}_w < 1$ for some $m\geq 1$; \item[(b)] $K(\cdot)$ is \what{(continuously)} differentiable in $\mathcal L_w$ at $\theta_0$, with derivative $K'(\theta_0)$; \item[(c)] $\tilde f(\cdot)$ is \what{(continuously)} differentiable in $L_w$ at $\theta_0$, with derivative $\tilde f'(\theta_0)$. \end{enumerate} Then: \begin{enumerate} \item[(i)] $(I-K(\theta))$ is invertible on $L_w$ for $\theta$ in a neighborhood of $\theta_0$; \item[(ii)] Setting $G(\theta) = (I-K(\theta))^{-1}$, $G(\cdot)$ is \what{(continuously)} differentiable in $\mathcal L_w$ at $\theta_0$, and \begin{equation*} G'(\theta_0) = G(\theta_0) K'(\theta_0) G(\theta_0); \end{equation*} \item[(iii)] $u^*(\theta) = \sum_{n=0}^\infty K^n(\theta) \tilde f(\theta)$ is \what{(continuously)} differentiable in $L_w$ at $\theta_0$, with \begin{equation}\label{eq:4.3a} (u^*)'(\theta_0) = G'(\theta_0) \tilde f(\theta_0) + G(\theta_0)\tilde f'(\theta_0). 
\end{equation} \end{enumerate} If, in addition, $K(\cdot)$ and $\tilde f(\cdot)$ are $n$-times \what{(continuously)} differentiable in $\mathcal L_w$ and $L_w$, respectively, at $\theta_0$, then $G(\cdot)$ and $u^*(\cdot)$ are $n$-times \what{(continuously)} differentiable at $\theta_0$ in $\mathcal L_w$ and $L_w$, respectively, and $G^{(n)}(\theta_0)$ and $(u^*)^{(n)}(\theta_0)$ can be recursively computed via \begin{align}\label{eq2:D} G^{(n)}(\theta_0) = \sum_{j=0}^{n-1} {n\choose j}G^{(j)}(\theta_0)K^{(n-j)}(\theta_0) G(\theta_0) \end{align} and \begin{align}\label{eq2:E} (u^*)^{(n)}(\theta_0) = G(\theta_0) \bigg(\tilde f^{(n)}(\theta_0) + \sum_{j=0}^{n-1}{n\choose j} K^{(n-j)} (\theta_0) (u^*)^{(j)}(\theta_0)\bigg) \end{align} where, as usual, $K^{(0)}(\theta)\equiv K(\theta)$ and $\tilde f^{(0)}(\theta) = \tilde f(\theta)$. \end{theorem} \begin{proof}{Proof.} Part (i) is \rvout{well known: see for example, \what{[reference]}}\rvin{obvious}. For part (ii), note that assumptions (a) and (b) imply that there exists a neighborhood $\mathcal N$ of $\theta_0$ for which $\sup_{\theta\in \mathcal N}\vertiii{K^m(\theta)}_w < 1$ and $\sup_{\theta\in \mathcal N} \vertiii{K(\theta)}_w < \infty$, from which it follows that $\sup_{\theta\in \mathcal N}\vertiii{G(\theta)}_w < \infty$. Furthermore, since $(I-K(\theta_0+h)) G(\theta_0+h) = G(\theta_0+h) (I-K(\theta_0+h)) = I$, evidently \begin{equation*} (G(\theta_0+h) - G(\theta_0)) (I-K(\theta_0)) = G(\theta_0+h) (K(\theta_0+h) - K(\theta_0)), \end{equation*} so that \begin{equation}\label{eq2:2.9} G(\theta_0 + h) - G(\theta_0) = G(\theta_0 + h) (K(\theta_0+h) - K(\theta_0)) G(\theta_0). \end{equation} Clearly, this implies that $\vertiii{G(\theta_0+h) - G(\theta_0)}_w \leq \vertiii{G(\theta_0+h)}_w\vertiii{K(\theta_0+h)- K(\theta_0)}_w\vertiii{G(\theta_0)}_w \to 0$ as $h \to 0$, so $G(\cdot)$ is continuous in $\mathcal L_w$ at $\theta_0$. 
Consequently, (\ref{eq2:2.9}) implies that $G(\cdot)$ is differentiable in $\mathcal L_w$ at $\theta_0$, with $G'(\theta_0) = G(\theta_0) K'(\theta_0)G(\theta_0)$. \what{In case $K'$ is continuous, continuity of $G'$ is also immediate from this expression.} For part (iii), the result follows analogously from the identity \begin{equation*} u^*(\theta_0+h) - u^*(\theta_0) = G(\theta_0) (\tilde f(\theta_0+h) - \tilde f(\theta_0)) + (G(\theta_0+h) - G(\theta_0)) \tilde f(\theta_0+h). \end{equation*} The proof for the $n$-fold derivatives for $n\geq 2$ is very similar and therefore omitted. \end{proof} \begin{remark} Suppose that $K(\cdot)$ possesses a density $(k(\cdot, x, y): x,y \in C)$ that is $n$-times differentiable (with (pointwise) derivative $(k^{(n)}(\cdot, x,y):x,y\in C))$. For $\epsilon>0$ and $0\leq j \leq n$, let $\omega_\epsilon^{(j)}(x,y) = \sup_{|\theta-\theta_0|<\epsilon} |k^{(j)}(\theta, x, y)|$ for $x,y\in C$, with $\tilde \omega_\epsilon^{(j)}$ defined analogously for $x\in C$ and $y\in C^c$. Then, the conditions \begin{align} &\sup_{x\in C} \int_C K^m(\theta_0, x, dy) \frac{w(y)}{w(x)} < 1\quad \text{for some } m\geq 1,\label{eq:02.10}\\ &\sup_{x\in C} \int_C \omega_\epsilon^{(j)}(x,y)\frac{w(y)}{w(x)} K(\theta_0, x, dy) < \infty,\label{eq:02.11}\\ &\sup_{x\in C} \int_{C^c} (1+\tilde \omega_\epsilon^{(j)}(x,y))\frac{|f(y)|}{w(x)}K(\theta_0, x, dy) < \infty,\label{eq:02.12} \end{align} for $j=0,\ldots,n$ imply (a), (b), and (c) of Theorem~\ref{thm:01}, and hence the validity of (\ref{eq2:D}) and (\ref{eq2:E}). \end{remark} There is an analogous differentiability result for measures. For a given initial distribution $\mu$ on $C$, let $\nu$ be the measure defined by \begin{equation*} \nu(dy) = \mathbf{E}_\mu \sum_{j=0}^{T-1} \exp \left(\sum_{k=0}^{j-1} g(X_k)\right) \mathbb{I}(X_j \in dy) \end{equation*} for $y\in S$, where $\mathbf{E}_\mu(\cdot) \triangleq \int_C \mu(dx) \mathbf{E}_x(\cdot)$. Then, \begin{equation*} \nu = \sum_{n=0}^\infty \mu K^n, \end{equation*} where $K$ is defined as in (\ref{eq2:2.3}). 
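In the finite-state case, the derivative formula $G'=GK'G$ of Theorem~\ref{thm:01}(ii) and the higher-order recursion (\ref{eq2:D}) are exact matrix identities, and can be sanity-checked against finite differences. A sketch with a hypothetical kernel $K(\theta)=\theta K_1$ (so that $K'=K_1$ and $K''=0$):

```python
import numpy as np

K1 = np.array([[0.5, 0.3],
               [0.2, 0.6]])

def G(theta):
    # G(theta) = (I - K(theta))^{-1} with K(theta) = theta * K1
    return np.linalg.inv(np.eye(2) - theta * K1)

theta0, h = 0.8, 1e-5
G0 = G(theta0)

G_prime = G0 @ K1 @ G0                 # G' = G K' G, with K' = K1 here
fd1 = (G(theta0 + h) - G(theta0 - h)) / (2 * h)
assert np.allclose(G_prime, fd1, atol=1e-6)

# Recursion with n = 2 and K'' = 0: G'' = 2 G' K' G = 2 G K' G K' G
G_second = 2 * G0 @ K1 @ G0 @ K1 @ G0
fd2 = (G(theta0 + h) - 2 * G0 + G(theta0 - h)) / h ** 2
assert np.allclose(G_second, fd2, atol=1e-3)
```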
Assume that $\mu(\cdot)$ and $K(\cdot)$ now depend on the parameter $\theta$ (so that $\nu$ does as well). The following result has a proof identical to that of Theorem~\ref{thm:01}, and is therefore omitted. \begin{theorem}\label{thm:02} Suppose there exists $w:C\to[1,\infty)$ and $\theta_0 \in (a,b)$ for which: \begin{itemize} \item[(a)] $\vertiii{K^m(\theta_0)}_w < 1$ for some $m\geq 1$; \item[(b)] $K(\cdot)$ is (continuously) differentiable in $\mathcal L_w$ at $\theta_0$, with derivative $K'(\theta_0)$; \item[(c)] $\mu(\cdot)$ is (continuously) differentiable in $\mathcal M_w$ at $\theta_0$, with derivative $\mu'(\theta_0)$. \end{itemize} Then, $\nu(\theta) = \sum_{n=0}^\infty \mu(\theta)K^n(\theta)$ is (continuously) differentiable in $\mathcal M_w$ at $\theta_0$, with \begin{equation*} \nu'(\theta_0) = \mu'(\theta_0)G(\theta_0) + \nu(\theta_0) K'(\theta_0)G(\theta_0). \end{equation*} If, in addition, $K(\cdot)$ and $\mu(\cdot)$ are $n$-times (continuously) differentiable in $\mathcal L_w$ and $\mathcal M_w$, respectively, at $\theta_0$, then $\nu(\cdot)$ is $n$-times (continuously) differentiable in $\mathcal M_w$, and $\nu^{(n)}(\theta_0)$ can be recursively computed via \begin{equation*} \nu^{(n)} (\theta_0) = \left(\mu^{(n)}(\theta_0) + \sum_{j=0}^{n-1}{n \choose j}\nu^{(j)}(\theta_0)K^{(n-j)}(\theta_0)\right)G(\theta_0). \end{equation*} \end{theorem} We finish this section with a short operator-theoretic argument establishing existence of a derivative for the stationary distribution under the assumption of geometric ergodicity (see condition (a) below, which is the key Lyapunov condition that implies geometric ergodicity in Chapter 15 of \cite{meyn09}). 
\begin{theorem}\label{thm:4} Suppose that there exists a subset $A\subseteq S$, $\epsilon,c > 0$, $\lambda, r \in (0,1)$, an integer $m\geq 1$, a probability measure $\varphi$ on $S$, and $w:S\to[1,\infty)$ such that: \begin{enumerate} \item[(a)] $(P(\theta_0) w)(x) \leq rw(x) + cI(x\in A)$ \quad for $x\in S$; \item[(b)] $P^m(\theta,x,dy) \geq \lambda \varphi(dy)$ for $x\in A$, $y\in S$, and $|\theta-\theta_0|<\epsilon$; \item[(c)] $P(\cdot)$ is \what{(continuously)} differentiable in $\mathcal L_w$ at $\theta_0$. \end{enumerate} Then, $X$ is positive Harris recurrent for $\theta$ in a neighborhood of $\theta_0$, and the stationary distributions $\pi(\theta)\in \mathcal M_w$ for $\theta$ in a neighborhood of $\theta_0$ are \what{(continuously)} differentiable in $\mathcal M_w$ at $\theta_0$. Furthermore, if $\Pi(\theta_0)$ is the kernel defined by $\Pi(\theta_0, x, dy) = \pi(\theta_0, dy)$ for $x,y\in S$, $(I-P(\theta_0)+ \Pi(\theta_0))$ has an inverse on $\mathcal L_w$ and \begin{equation}\label{eq2:2.13} \pi'(\theta_0) = \pi(\theta_0) P'(\theta_0) (I-P(\theta_0)+\Pi(\theta_0))^{-1}. \end{equation} If, in addition, $P(\cdot)$ is $n$-times \what{(continuously)} differentiable in $\mathcal L_w$ at $\theta_0$, then $\pi(\cdot)$ is $n$-times \what{(continuously)} differentiable in $\mathcal M_w$ at $\theta_0$, and $\pi^{(n)}(\theta_0)$ can be recursively computed via \begin{equation*} \pi^{(n)}(\theta_0) = \sum_{j=0}^{n-1} {n \choose j} \pi^{(j)}(\theta_0) P^{(n-j)}(\theta_0) (I-P(\theta_0) + \Pi(\theta_0))^{-1}. \end{equation*} \end{theorem} \begin{remark} Note that Theorem 4 of \cite{glynn95} is closely related to the above theorem. See also Remark 11 and the Kendall set assumption in \cite{glynn95}. \cite{heidergott06} also imposes a similar assumption to establish the measure-valued derivative of the stationary distribution. 
\end{remark} \begin{proof}{Proof.} In view of (a) and (c), there exists $r'<1$ such that \begin{equation}\label{eq2:2.14} (P(\theta_0+h) w)(x) \leq r' w(x) + cI(x\in A) \end{equation} for $x\in S$ and $|h|$ sufficiently small. Assumptions (a) and (b), together with the fact that $w\geq 1$, imply that $X$ is positive Harris recurrent for $\theta$ in a neighborhood of $\theta_0$. We can now appeal to Theorem 2.3 of \cite{glynn96} to establish that $(I-P(\theta_0) + \Pi(\theta_0))$ is invertible on $\mathcal L_w$, with $(I-P(\theta_0) + \Pi(\theta_0))^{-1} \in \mathcal L_w$. Furthermore, according to \cite{glynn08}, (\ref{eq2:2.14}) implies that $\pi(\theta_0+h) w \leq c/(1-r')$, and hence $\|\pi(\theta_0+h)\|_w \leq c/(1-r').$ Also, \begin{align*} (\pi(\theta_0+h) - \pi(\theta_0)) (I-P(\theta_0)) &= \pi(\theta_0+h) (I-P(\theta_0))\\ &= \pi(\theta_0+h) (P(\theta_0+h) - P(\theta_0)). \end{align*} In addition, $\nu \Pi(\theta_0) = \pi(\theta_0) $ for any probability $\nu$ on $S$. So $(\pi(\theta_0+h) - \pi(\theta_0))\Pi(\theta_0) = 0$. Consequently, \begin{equation*} (\pi(\theta_0 + h) - \pi(\theta_0)) (I-P(\theta_0)+\Pi(\theta_0)) = \pi(\theta_0+h) (P(\theta_0+h)-P(\theta_0)), \end{equation*} from which it follows that \begin{equation}\label{eq2:2.15} \pi(\theta_0 + h) - \pi(\theta_0) = \pi(\theta_0+h) (P(\theta_0+h)-P(\theta_0))(I-P(\theta_0)+\Pi(\theta_0))^{-1}. \end{equation} Thus, \begin{equation}\label{continuity_of_pi} \|\pi(\theta_0+h) - \pi(\theta_0)\|_w \leq \frac{c}{1-r'}\vertiii{P(\theta_0+h) - P(\theta_0)}_w\cdot \vertiii{(I-P(\theta_0)+\Pi(\theta_0))^{-1}}_w. \end{equation} Since $P(\cdot)$ is differentiable in $\mathcal L_w$, $\vertiii{P(\theta_0+h)-P(\theta_0)}_w \to 0$ as $h \to 0$, so $\pi(\theta_0+h) \to \pi(\theta_0)$ in $\mathcal M_w$ as $h\to 0$. Letting $h\to 0$ in $(\ref{eq2:2.15})$ then yields (\ref{eq2:2.13}). 
\what{ For the continuity of the derivative in case $P(\cdot)$ is continuously differentiable, note first that (a) and (b) imply that $\vertiii{(P(\theta_0)-\Pi(\theta_0))^m}_w<1$ for some $m\geq 1$; this along with the continuity of $P(\cdot)$ and $\pi(\cdot)$, in turn, implies that $\sup_{|h|\leq h_0}\vertiii{(P(\theta_0+h)-\Pi(\theta_0+h))^m}_w<1$ for a small enough $h_0$. Therefore, we conclude that $\vertiii{(I-P(\theta_0+h) + \Pi(\theta_0+h))^{-1}}_w$ is bounded (uniformly w.r.t.\ $h$). From this, it is easy to see that the same argument as for (\ref{eq2:2.13}) works with $\theta = \theta_0+h$ instead of $\theta_0$ and proves that \begin{equation}\label{pi_prime_general} \pi'(\theta_0+h) = \pi(\theta_0+h) P'(\theta_0+h) (I-P(\theta_0+h)+\Pi(\theta_0+h))^{-1}. \end{equation} Now, \begin{align*} \pi'(\theta_0+h) - \pi'(\theta_0) &= \left(\pi'(\theta_0+h) - \frac{\pi(\theta_0)-\pi(\theta_0+h)}{-h}\right) - \left(\pi'(\theta_0)- \frac{\pi(\theta_0+h)-\pi(\theta_0)}{h} \right) = \text{(I)} - \text{(II)} \end{align*} where we have already seen that (II) converges to 0. To show that (I) also vanishes, note that \begin{align*} (\pi(\theta_0) - \pi(\theta_0+h))(I-P(\theta_0+h)+\Pi(\theta_0+h)) & =(\pi(\theta_0) - \pi(\theta_0+h))(I-P(\theta_0+h)) \\ & = \pi(\theta_0)(I-P(\theta_0+h)) = \pi(\theta_0)(P(\theta_0)-P(\theta_0+h)), \end{align*} and hence, \begin{equation}\label{pi_finite_difference} \frac{\pi(\theta_0) - \pi(\theta_0+h)}{-h} = \pi(\theta_0)\frac{P(\theta_0+h)-P(\theta_0)}{h}(I-P(\theta_0+h)+\Pi(\theta_0+h))^{-1}. \end{equation} From (\ref{pi_prime_general}), (\ref{pi_finite_difference}), the continuity of $\pi(\cdot)$, the continuous differentiability of $P(\cdot)$, and the uniform boundedness of the norm of $(I-P(\theta_0+h)+\Pi(\theta_0+h))^{-1}$, we conclude that (I) vanishes. Therefore, $\pi'(\cdot)$ is continuous at $\theta_0$. 
} Finally, as in Proposition 2, the proof for the $n$-fold derivatives for $n\geq 2$ follows similar lines, and is therefore omitted. \end{proof} This result establishes, in the presence of a single Lyapunov function $w$, the $n$-fold differentiability of the stationary distribution $\pi(\cdot)$ in $\mathcal M_w$. Of course, the simplicity of the result comes at the cost of assuming geometric ergodicity of $X$. \section{Lyapunov Criteria for Differentiability of Random Horizon Expectations}\label{sec:random_horizon} Let $\Lambda = (a,b)$ be an open interval containing $\theta_0$. For each $\theta \in \Lambda$, let $\mathbf{E}_x^\theta (\cdot) \triangleq \mathbf{E}^\theta(\cdot|X_0 = x)$ be the expectation operator associated with $X$, when $X$ is driven by the one-step transition kernel $P(\theta)$. As in Section~\ref{sec:operator}, we consider \begin{align}\label{def:u_star} u^*(\theta, x) = \mathbf{E}_x^\theta \sum_{j=0}^{T-1} \exp\left(\sum_{k=0}^{j-1} g(X_k)\right)f(X_j) + \exp\left(\sum_{k=0}^{T-1} g(X_k)\right)f(X_T) \end{align} for each $x\in C$ given $f:S\to \mathbb{R}$, $g:S\to \mathbb{R}$, $\emptyset \neq C\subseteq S$, and $T=\inf \{n\geq 0: X_n \in C^c\}$. Our goal, in this section, is to provide Lyapunov conditions under which $u^*(\theta) = (u^*(\theta,x): x \in C)$ is differentiable at $\theta_0$, and to provide an expression for the derivative ${u^*}'(\theta)$. Note that if $f$ is non-negative, then $u^*(\theta)$ is always well-defined. 
Furthermore, by conditioning on $X_1$, it is easily seen that \begin{align*} u^*(\theta, x) = f(x) + \int_{C^c} \exp(g(x)) P(\theta,x,dy)f(y) + \int_C \exp(g(x))P(\theta, x,dy) u^*(\theta,y) \end{align*} for $x\in C$, and hence \begin{equation}\label{eq:u_star} u^*(\theta) = \tilde f(\theta) + K(\theta)u^*(\theta), \end{equation} where as in Section~\ref{sec:operator} \begin{equation*} \tilde f(\theta,x) = f(x) + \int_{C^c} \exp(g(x)) P(\theta,x,dy)f(y) \end{equation*} for $x\in C$, and $K(\theta) = (K(\theta, x, dy): x, y \in C)$ is the non-negative kernel on $C$ for which \begin{equation*} K(\theta, x, dy ) = \exp(g(x))P(\theta,x,dy). \end{equation*} Given (\ref{eq:u_star}), formal differentiation of both sides of the equation yields \begin{equation} {u^*}'(\theta_0) = \tilde f'(\theta_0) + K'(\theta_0){u^*}(\theta_0) + K(\theta_0) {u^*}'(\theta_0), \end{equation} so that ${u^*}'(\theta)$ should satisfy the linear system \begin{equation}\label{eq:2.3a} (I-K(\theta_0)){u^*}'(\theta_0) = \tilde f'(\theta_0) + K'(\theta_0)u^*(\theta_0). \end{equation} When $|C|$ is finite, it will frequently be the case that the matrix $K(\theta_0)$ has spectral radius less than $1$, in which case $I-K(\theta_0)$ is invertible and \begin{equation}\label{eq:potential} (I-K(\theta_0))^{-1} = \sum_{n=0}^\infty K^n(\theta_0) \end{equation} In this case, \begin{equation*} {u^*}'(\theta_0) = \sum_{n=0}^\infty K^n(\theta_0)\left(\tilde f'(\theta_0) + K'(\theta_0) u^*(\theta_0)\right). \end{equation*} But (\ref{eq:u_star}) and (\ref{eq:potential}) further imply that \begin{equation}\label{rep:u_star} u^*(\theta_0) = \sum_{n=0}^\infty K^n(\theta_0)\tilde f(\theta_0), \end{equation} and hence we arrive at the formula \begin{equation}\label{eq:2.5} {u^*}'(\theta_0) = \sum_{m=0}^\infty\sum_{n=0}^\infty K^m(\theta_0) K'(\theta_0) K^n(\theta_0) \tilde f(\theta_0) + \sum_{m=0}^\infty K^m(\theta_0) \tilde f'(\theta_0). 
\end{equation} The remainder of this section is largely concerned with rigorously extending the formula (\ref{eq:2.5}) to the general state space setting, under Lyapunov criteria that are close to minimal (and easily checkable from the model building blocks). We start by observing that when $f$ is non-negative, Fubini's theorem implies that \begin{align} u^*(\theta, x) &= \sum_{j=0}^\infty \mathbf{E}_x^\theta \exp\left(\sum_{k=0}^{j-1} g(X_k)\right)f(X_j)I(T>j)\nonumber\\ &\qquad + \sum_{j=0}^\infty \mathbf{E}_x^{\theta} \exp\left( \sum_{k=0}^{j-1} g(X_k)\right) I(T\geq j) \hcancel{e^{g(X_j)}}f(X_j) \mathbb{I}(X_j \in C^c)\nonumber\\ &= \sum_{j=0}^\infty (K^{j}(\theta)\tilde f(\theta))(x),\label{eq:2.5a} \end{align} thereby rigorously verifying (\ref{rep:u_star}). To simplify the notation in the remainder of this paper, we set $K = K(\theta_0)$ and put \begin{equation}\label{eq:2.6} G = \sum_{n=0}^\infty K^n. \end{equation} Our path to providing rigorous conditions under which (\ref{eq:2.5}) holds involves the following key ``absolute continuity'' assumption: \begin{itemize} \item[A1.] The kernels $(K(\theta): \theta \in \Lambda)$ are absolutely continuous with respect to $K$, in the sense that there exists a (measurable) density $(k(\theta, x,y): x,y\in C)$ such that \begin{equation*} K(\theta, x,dy) = k(\theta, x, y) K(x,dy) \end{equation*} for $\theta\in \Lambda$, $x,y\in C$. \end{itemize} Our absolute continuity condition is often a mild hypothesis. For example, when $X$ has a transition density with respect to a reference measure $\eta$, A1 is in force when the support of the density is independent of $\theta$. We also need to assume that $K(\theta)$ is suitably differentiable at $\theta_0$. \begin{itemize} \item[A2.] There exists $\epsilon>0$ such that for each $x,y\in C$, $k(\cdot, x,y)$ is continuously differentiable, with derivative $k'(\cdot, x,y)$, in $[\theta_0-\epsilon, \theta_0+\epsilon]$. 
\end{itemize} Set $\omega_\epsilon(x,y) = \sup\{|k'(\theta,x,y)|: |\theta-\theta_0|<\epsilon\}$, $k'(x,y) = k'(\theta_0, x,y)$, and $K'(x,dy) = k'(x,y) \allowbreak K(x,dy)$. (Note that $K'$ is a signed kernel, and not non-negative.) Our hypotheses are stated in terms of $K(\theta)$, not $P(\theta)$, in order to offer the extra generality needed to cover settings in which derivatives involving parameters in the discount factor $\exp(g(\cdot))$ are of interest. Such derivatives are commonly considered in the finance literature when attempting to hedge uncertainty in the so-called ``short rate.'' (The resulting derivative is called \emph{rho} in the finance context.) Finally, we also need to assume $\tilde f(\theta)$ is suitably differentiable at $\theta_0$. To permit derivatives in parameters that involve the discount factor, we write $\tilde f(\theta)$ in the form \begin{equation} \tilde f(\theta, x) = f(x) + \int_{C^c} K(\theta, x, dy) f(y). \end{equation} \begin{itemize} \item[A3.] The family of measures $(K(\theta, x, dy): \theta\in \Lambda, x \in C, y \in C^c)$ is absolutely continuous with respect to $(K(\theta_0, x, dy): x\in C, y \in C^c)$, in the sense that there exists a (measurable) density $(k(\theta, x,y): x\in C, y \in C^c)$ such that \begin{equation*} K(\theta, x, dy) = k(\theta, x, y) K(\theta_0, x, dy) \end{equation*} for $\theta \in \Lambda$, $x \in C$, $y\in C^c$. Furthermore, there exists $\epsilon>0$ such that for $x \in C$, $y\in C^c$, $k(\cdot, x,y)$ is continuously differentiable, with derivative $k'(\cdot, x,y)$, in $[\theta_0-\epsilon, \theta_0 +\epsilon]$. Also, we assume that \begin{equation*} \tilde r_\epsilon(x) \triangleq \int_{C^c} \tilde \omega_\epsilon(x,y)|f(y)|K(\theta_0,x,dy)<\infty \end{equation*} for $x \in C$, where \begin{equation*} \tilde \omega_\epsilon(x,y) = \sup_{|\theta - \theta_0 | < \epsilon} |k'(\theta, x,y)|. 
\end{equation*} \end{itemize} In many applications, $\tilde f(\theta)$ is independent of $\theta$ and A3 need not be verified (e.g. expected hitting times). For $x\in C$, $y\in C^c$, set $K(x,dy)= K(\theta_0, x, dy)$ and $K'(x,dy)= k'(\theta_0, x,y) K(x,dy)$. We are now ready to state the main theorem of this section. \begin{theorem}\label{thm:1} Assume A1, A2, and A3. Suppose there exists $\epsilon > 0$ and two finite-valued non-negative functions $v_0$ and $v_1$ defined on $C$ for which \begin{equation}\label{eq:2.7} (K(\theta)v_0)(x) \leq v_0(x) - |\tilde f(\theta, x)| \end{equation} for $x\in C$ and $|\theta - \theta_0| < \epsilon$, and \begin{equation}\label{eq:2.8} (Kv_1)(x) \leq v_1(x) - \int_C \omega_\epsilon(x,y) v_0(y) K(x,dy) - \tilde r_\epsilon(x) \end{equation} for $x\in C$. Then, $u^*(\cdot,x)$ is differentiable at $\theta_0$ and \begin{equation}\label{eq:2.9} {u^*}'(\theta_0) = \int_C\int_C\int_C G(x,dy) K'(y,dz) G(z, dw) f(w) + \int_C \int_{C^c} G(x, dy) K'(y,dz) f(z). \end{equation} \what{% If, in addition, \begin{equation}\label{cond:random-horizon-C1-2} \int_C \omega_\epsilon(x,y)v_1(y)K(x,dy)<\infty \end{equation} and (\ref{eq:2.8}) holds in a neighborhood of $\theta_0$, i.e., for $\theta\in[\theta_0-\epsilon, \theta_0+\epsilon]$ \begin{equation}\label{cond:random-horizon-C1-3}\tag{\ref*{eq:2.8}$'$} (K(\theta)v_1)(x) \leq v_1(x) - \int_C \omega_\epsilon(x,y) v_0(y) K(\theta,x,dy) - \int_{C^c} \tilde \omega_\epsilon(x,y)|f(y)|K(\theta,x,dy) \end{equation} then ${u^*}'(\cdot,x)$ is continuous on $[\theta_0-\epsilon, \theta_0+\epsilon]$. } \end{theorem} Recalling the definition of $G$, we see that (\ref{eq:2.9}) is indeed the general state space analog of (\ref{eq:2.5}). The functions $v_0$ and $v_1$ appearing in Theorem~\ref{thm:1} are often called (stochastic) Lyapunov functions. 
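The derivative formula (\ref{eq:2.9}) is easy to sanity-check numerically on a finite state space, where $G$ and $K'$ are just matrices and $G$ can be applied via a truncated Neumann series. The sketch below is purely illustrative (all numerical values are invented, and $\tilde f$ is taken independent of $\theta$ so that the boundary term drops out); it compares the formula against a central finite difference.

```python
# Finite-state sanity check of Theorem 1's derivative formula, assuming f_tilde
# does not depend on theta (so only the interior term of (2.9) survives).
# Here C = {0, 1} and K(theta) = theta * M with M strictly substochastic.
M = [[0.2, 0.3],
     [0.4, 0.1]]
f_tilde = [1.0, 2.0]
theta0 = 0.9

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

def G_apply(theta, v, n_terms=200):
    # (sum_{n >= 0} K(theta)^n) v, via a truncated Neumann series
    out, term = list(v), list(v)
    for _ in range(n_terms):
        term = [theta * t for t in mat_vec(M, term)]
        out = [a + b for a, b in zip(out, term)]
    return out

def u_star(theta):
    # u*(theta) = sum_n K(theta)^n f_tilde
    return G_apply(theta, f_tilde)

# Formula: u*'(theta0) = G K' u*(theta0), where K' = dK/dtheta = M here.
deriv = G_apply(theta0, mat_vec(M, u_star(theta0)))

# Central finite-difference comparison
h = 1e-6
fd = [(a - b) / (2 * h) for a, b in zip(u_star(theta0 + h), u_star(theta0 - h))]
```

The truncation level of the Neumann series is harmless here because the spectral radius of $K(\theta)$ is well below one for the chosen matrix.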
A standard means of guessing good choices for $v_0$ and $v_1$ is to recognize that $u^*(\theta_0)$ satisfies (\ref{eq:2.7}) with equality (if $\tilde f$ is non-negative), while \begin{equation*} \int_C \left[\int_C K(y, dz) \omega_\epsilon(y,z) v_0(z) + \tilde r_\epsilon(y)\right] G(x,dy) \end{equation*} satisfies (\ref{eq:2.8}) with equality. When $C\subseteq \mathbb{R}^m$ is unbounded, one can often approximate the large $x$ behavior of these functions, and use these approximations as choices for $v_0$ and $v_1$, respectively. The proof of Theorem~\ref{thm:1} rests on the following easy bound. \begin{proposition}\label{prop:1} Suppose that $Q=(Q(x,dy): x,y\in C)$ is a non-negative kernel and that $f: C\to \mathbb{R}_+$. If $v:C\to \mathbb{R}_+$ is a finite-valued function for which \begin{equation}\label{eq:2.10} Qv \leq v - f, \end{equation} then \begin{equation}\label{eq:2.11} \sum_{n=0}^\infty Q^n f \leq v. \end{equation} \end{proposition} \begin{proof}{Proof.} Note that (\ref{eq:2.10}) implies that $Qv \leq v$, and hence $Q^nv \leq v$ for $n\geq 0$. It follows that $Q^nv$ is finite-valued for $n\geq 0$. Inequality (\ref{eq:2.10}) can be re-written as \begin{equation}\label{eq:2.12} f \leq v - Qv. \end{equation} Applying $Q^j$ to both sides of (\ref{eq:2.12}), we get \begin{equation}\label{eq:2.13} Q^jf\leq Q^j v - Q^{j+1}v. \end{equation} Summing both sides of (\ref{eq:2.13}) over $j = 0, 1, \ldots, n$, we find that \begin{equation*} \sum_{j=0}^n Q^j f \leq v - Q^{n+1} v \leq v. \end{equation*} Sending $n \to \infty$ yields (\ref{eq:2.11}). \end{proof} \begin{proof}{Proof of Theorem~\ref{thm:1}.} For the purposes of this proof, $\epsilon$ is taken as the smallest of the $\epsilon$'s appearing in A2, A3, and the statement of the theorem.
We start by observing that Proposition~\ref{prop:1}, applied to the Lyapunov bound (\ref{eq:2.7}), guarantees that \begin{equation*} \sum_{n=0}^\infty K^n(\theta) |\tilde f(\theta) | \leq v_0 \end{equation*} and hence Fubini's theorem implies that $u^*(\theta)$ is finite-valued, $u^*(\theta) = \sum_{n=0}^\infty K^n(\theta)\tilde f(\theta)$, and $|u^*(\theta)|\leq v_0$. Since $u^*(\theta)$ is finite-valued (as is $K(\theta)u^*(\theta)$), we can write \begin{equation*} u^*(\theta_0+h) - u^*(\theta_0) = K(\theta_0+h) u^*(\theta_0+h) - K(\theta_0) u^*(\theta_0) + \tilde f(\theta_0 + h) - \tilde f(\theta_0) \end{equation*} and hence \begin{equation}\label{eq:2.14} (I-K) \big(u^*(\theta_0+h)- u^*(\theta_0)\big) = \big(K(\theta_0 + h)- K(\theta_0) \big) u^*(\theta_0 + h) + \big(\tilde f(\theta_0 + h) - \tilde f(\theta_0)\big). \end{equation} For $|h| < \epsilon$, \begin{align*} &\left| \int_C (K(\theta_0 + h, x, dy) - K(\theta_0, x,dy))u^*(\theta_0+h,y) \right|\\ &\leq \int_C | k(\theta_0 + h, x, y) - k(\theta_0 , x , y)| K(x,dy) v_0(y)\\ &\leq |h| \int_C \sup_{|\theta - \theta_0| < \epsilon} |k'(\theta, x,y) | K(x,dy) v_0(y)\\ &= |h| \int_C \omega_\epsilon(x,y) K(x,dy) v_0(y). \end{align*} Similarly, for $|h|<\epsilon$, \begin{align*} &|\tilde f(\theta_0 + h, x) - \tilde f(\theta_0, x)|\\ &\leq |h| \int_{C^c} \tilde \omega_\epsilon (x,y)K(x,dy)|f(y)|\\ &\leq |h| \tilde r_\epsilon (x). \end{align*} Consequently, Proposition~\ref{prop:1}, together with the Lyapunov bound (\ref{eq:2.8}), ensures that \begin{equation*} \int_C G(x,dy)\bigg( \left| \int_C (K(\theta_0 + h, y, dz) - K(\theta_0, y, dz) ) u^*(\theta_0+h, z) \right| +\left|\tilde f(\theta_0+h,y) - \tilde f(\theta_0, y)\right|\bigg) \leq |h|v_1(x). \end{equation*} It follows from (\ref{eq:2.14}) that $u^*(\theta,x)$ is continuous at $\theta_0$ and \begin{align*} \frac{u^*(\theta_0+h,x) - u^*(\theta_0,x)}{h} &= \int_C G(x,dy) \left[\int_C \frac{k(\theta_0+h,y,z) - k(\theta_0, y,z)}{h} u^*(\theta_0+h,z)K(y,dz) \right.\\ &\qquad\qquad\quad\qquad+ \left.\int_{C^c} \frac{k(\theta_0 + h, y,z) - k(\theta_0, y,z)}{h} f(z) K(y,dz) \right]. \end{align*} But \begin{equation}\label{eq:2.15} \frac{k(\theta_0+h,y,z)-k(\theta_0, y, z)}{h} \to k'(y,z) \end{equation} and \begin{equation}\label{eq:2.16} u^*(\theta_0+h,z) \to u^*(\theta_0, z) \end{equation} as $h\to 0$. Also, \begin{equation}\label{eq:2.17} \left|\frac{k(\theta_0+h,y,z) - k(\theta_0,y,z)}{h}u^*(\theta_0+h,z)\right| \leq \omega_\epsilon(y,z) v_0(z) \end{equation} for $y,z\in C$, and \begin{equation}\label{eq:2.18} \frac{|k(\theta_0+h,y,z) - k(\theta_0,y,z)|}{h} \leq \tilde \omega_\epsilon(y,z) \end{equation} for $y\in C$, $z\in C^c$. The Lyapunov bound (\ref{eq:2.8}), together with Proposition~\ref{prop:1}, guarantees that \begin{equation}\label{eq:2.19} \int_C G(x,dy)\left(\int_C \omega_\epsilon(y,z) v_0(z) K(y,dz) + \int_{C^c} \tilde \omega_\epsilon (y,z) |f(z)| K(y,dz) \right) < \infty. \end{equation} In view of (\ref{eq:2.15}) through (\ref{eq:2.19}), the Dominated Convergence Theorem therefore establishes that $u^*(\theta, x)$ is differentiable at $\theta_0$, and \begin{equation}\label{eq:sec3:u-star-prime-in-the-proof} {u^*}'(\theta_0, x) = \int_C G(x,dy) \int_C k'(y,z) u^*(\theta_0, z) K(y,dz) + \int_C G(x,dy)\int_{C^c} k'(y,z)f(z) K(y,dz), \end{equation} which is equivalent to (\ref{eq:2.9}).
\what{% Turning to the continuity of ${u^*}'(\cdot, x)$, note that one can easily check that $$ {u^*}'(\theta) = \tilde f'(\theta) + K'(\theta)u^*(\theta) + K(\theta){u^*}'(\theta) $$ for $\theta\in[\theta_0-\epsilon, \theta_0+\epsilon]$ where $K'(\theta)u^*(\theta,x) = \int_C k'(\theta, x,y) u^*(\theta,y)K(x,dy)$, and hence, \begin{align*} {u^*}'(\theta+h) - {u^*}'(\theta) &= G(\theta) \big( \tilde f'(\theta+h) - \tilde f'(\theta) \big) + G(\theta) \big( (K'(\theta+h)-K'(\theta))u^*(\theta+h) \big)\\ &\hspace{20pt} + G(\theta) \big( K'(\theta)(u^*(\theta+h)-u^*(\theta)) \big) + G(\theta) \big( (K(\theta+h)-K(\theta)){u^*}'(\theta+h) \big). \end{align*} Now, a similar argument (via dominated convergence and the Lyapunov conditions) as the one that leads to (\ref{eq:sec3:u-star-prime-in-the-proof})---along with (\ref{cond:random-horizon-C1-2}), and (\ref{cond:random-horizon-C1-3})---shows that ${u^*}'(\theta+h) - {u^*}'(\theta) \to 0$ for $\theta\in[\theta_0-\epsilon, \theta_0+\epsilon]$. } \end{proof} Our proof also yields the following (computable) bound on ${u^*}'(\theta_0)$, namely, \begin{equation}\label{eq:2.20} |{u^*}'(\theta_0, x) | \leq v_1(x) \end{equation} for $x \in C$. In many applications, the parameter $\theta$ enters the dynamics in a very specific way, which allows further simplification of the result. In particular, whenever $S$ is a separable metric space, we can always express $X$ as the solution to a stochastic recursion; see, for example, \cite{kifer1986}. Namely, we can find a mapping $r:S\times S' \to S$ and a sequence $(Z_n:n\geq 1)$ of independent and identically distributed (iid) $S'$-valued random elements such that \begin{equation}\label{eqA} X_{n+1} = r(X_n, Z_{n+1}) \end{equation} for $n\geq 0$. Suppose that $\theta$ affects the dynamics of $X$ only through the distribution of the $Z_n$'s. 
Assume that for $z\in S'$, \begin{equation}\label{eqAB} P^\theta(Z_1 \in dz) = p(\theta, z) P^{\theta_0} (Z_1\in dz), \end{equation} where $p(\cdot,z)$ is continuously differentiable for $z\in S'$. If $u^*(\theta, x)$ is defined as in (\ref{def:u_star}), then $u^*(\cdot, x)$ is differentiable at $\theta_0$ and ${u^*}'(\theta_0,x)$ is given by (\ref{eq:2.9}) (where $K'(x,dy) = \mathbf{E}^{\theta_0}\mathbb{I}(r(x,Z_1)\in dy)p'(\theta_0, Z_1)$), provided that there exist $\epsilon>0$ and finite-valued non-negative functions $v_0$ and $v_1$ defined on $C\subseteq S$ for which \begin{equation}\label{eqC} \mathbf{E}^{\theta_0} v_0(r(x, Z_1)) p(\theta, Z_1) \leq v_0(x) - |\tilde f(\theta, x)| \end{equation} for $x \in C$ and $|\theta - \theta_0| < \epsilon$, and \begin{align*} \mathbf{E}^{\theta_0} v_1 (r(x,Z_1)) \leq v_1 (x) &- \mathbf{E} ^{\theta_0}v_0(r(x,Z_1)) \sup_{|\theta - \theta_0|<\epsilon} |p'(\theta,Z_1)| \mathbb{I}(r(x,Z_1) \in C) \\ &- \mathbf{E}^{\theta_0} |f(r(x,Z_1)) |\sup_{|\theta - \theta_0|< \epsilon} |p'(\theta,Z_1)| \mathbb{I}(r(x,Z_1)\in C^c) \end{align*} for $x\in C$; the proof is essentially identical to that of Theorem~\ref{thm:1} and is omitted. According to Theorem~\ref{thm:1}, for functions $f$ satisfying the Lyapunov bound, \begin{equation*} {u^*}'(\theta_0, x) = \int_S \nu'(x,dy)f(y) \end{equation*} where \begin{align*} \nu'(w,dz) = \begin{cases} \int_C G(w,dx)\int_C K'(x,dy)\, G(y,dz), & w,z\in C\\ \int_C G(w,dx)\, K'(x,dz), & w\in C, z \in C^c. \end{cases} \end{align*} Hence, our derivative can be represented in terms of a signed measure. (In general, $\nu'(x,S)$ is non-zero in this setting.) The above approach also extends, in a straightforward way, to higher-order derivatives.
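To make the stochastic-recursion specialization concrete, the toy sketch below (a hypothetical two-state recursion with $Z_1\sim\mathrm{Bernoulli}(\theta)$; all details are invented for illustration) computes $K'(x,dy) = \mathbf{E}^{\theta_0}\mathbb{I}(r(x,Z_1)\in dy)\,p'(\theta_0,Z_1)$ and checks it against direct differentiation of the transition probabilities.

```python
# For Z_1 ~ Bernoulli(theta), the density ratio relative to theta0 is
# p(theta, z) = (theta/theta0)^z ((1-theta)/(1-theta0))^(1-z), so its derivative
# at theta0 is the usual score: p'(theta0, z) = z/theta0 - (1 - z)/(1 - theta0).
theta0 = 0.3

def r(x, z):
    # toy recursion map on S = {0, 1}: stay put if z = 0, flip if z = 1
    return x if z == 0 else 1 - x

def score(z):
    return z / theta0 - (1 - z) / (1 - theta0)

def K_prime(x, y):
    # K'(x, {y}) = E^{theta0}[ I(r(x, Z_1) = y) p'(theta0, Z_1) ]
    return sum(prob * score(z)
               for z, prob in ((0, 1.0 - theta0), (1, theta0))
               if r(x, z) == y)

# Direct check: P(theta, x, {1-x}) = theta for this map, so the theta-derivative
# of the transition law is +1 at y = 1 - x and -1 at y = x, for each x.
```

Note that $K'(x,\cdot)$ sums to zero over $y$, reflecting the fact that it is a signed kernel, not a probability kernel.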
Formal differentiation of (\ref{eq:u_star}) $n$ times yields the identity \begin{equation*} u^{*(n)}(\theta) = \tilde f^{(n)} (\theta) + \sum_{j=0}^n \binom{n}{j} K^{(n-j)}(\theta) u^{*(j)}(\theta), \end{equation*} which suggests that the $n$\textsuperscript{th} order derivative $u^{*(n)}(\theta)$ can then be recursively computed from $u^{*(0)}(\theta)$, \ldots, $u^{*(n-1)}(\theta)$ by solving the linear (integral) equation \begin{equation} (I-K(\theta))u^{*(n)} (\theta) = \tilde f^{(n)}(\theta) + \sum_{j=0}^{n-1}\binom{n}{j} K^{(n-j)} (\theta) u^{*(j)}(\theta). \end{equation} In particular, it should follow that \begin{equation}\label{eq:2.21} u^{*(n)}(\theta) = G\left(\tilde f^{(n)}(\theta)+\sum_{j=0}^{n-1}\binom{n}{j} K^{(n-j)}(\theta) u^{*(j)}(\theta)\right). \end{equation} Rigorous verification of (\ref{eq:2.21}) can be implemented with a family $v_0, v_1, \ldots, v_n$ of Lyapunov functions. Specifically, assume that the densities $k(\cdot, x, y)$ (for $x\in C$, $y \in S$) are $n$-times continuously differentiable in some neighborhood $[\theta_0-\epsilon, \theta_0 + \epsilon]$ of $\theta_0$, and set \begin{equation*} \omega_\epsilon^{(j)}(x,y) = \sup_{|\theta-\theta_0|<\epsilon} |k^{(j)}(\theta, x, y)| \end{equation*} for $x,y\in C$ and \begin{equation*} \tilde \omega_\epsilon^{(j)}(x,y) = \sup_{|\theta-\theta_0|<\epsilon} |k^{(j)}(\theta, x, y)| \end{equation*} for $x\in C$, $y\in C^c$.
\begin{theorem}\label{thm:2} Suppose that there exist $\epsilon > 0$ and a family of finite-valued non-negative functions $v_0, v_1, \ldots, v_n$ defined on $C$ for which \begin{equation*} (K(\theta)v_0)(x) \leq v_0(x) - |\tilde f(\theta,x)| \end{equation*} for $x\in C$ and $|\theta - \theta_0| < \epsilon$; \begin{align*} (K(\theta)v_l)(x) \leq v_l(x) - \sum_{j=0}^{l-1} \binom{l}{j} \int_C \omega_\epsilon^{(l-j)} (x,y) v_j(y)K(\theta,x,dy) - \int_{C^c} \tilde \omega_\epsilon^{(l)}(x,y) |f(y)| K(\theta,x,dy) \end{align*} for $x\in C$, $|\theta - \theta_0| < \epsilon$, and $1\leq l\leq n$; \what{and $$ \int_C \omega_\epsilon^{(n)}(x,y)v_n(y)K(x,dy) < \infty $$ for $x\in C$. } Then, $u^*(\cdot, x)$ is $n$-times continuously differentiable at $\theta_0$, and the derivatives can be recursively computed from the equations \begin{align*} u^{*(l)}(\theta_0,x) &= \int_C G(x,dy) \sum_{j=0}^{l-1}\binom{l}{j}\int_C k^{(l-j)} (\theta_0, y, z) u^{*(j)}(\theta_0, z)K(y,dz)\\ &\qquad\quad+\int_C G(x,dy)\int_{C^c} k^{(l)}(\theta_0, y,z)f(z) K(y,dz) \end{align*} for $1\leq l\leq n$. \end{theorem} The proof of Theorem~\ref{thm:2} mirrors that of Theorem~\ref{thm:1}, and is therefore omitted. As in the proof of Theorem~\ref{thm:1}, the argument establishes the bound $|u^{*(n)}(\theta_0, x)| \leq v_n(x)$ for $x\in C$ on the $n$\textsuperscript{th} order derivative. \section{Lyapunov Criteria for Differentiability of Stationary Expectations}\label{sec:equilibrium} Perhaps the most commonly occurring expectations that arise in applications are those associated with steady-state behavior. Our Lyapunov approach is also well-suited to establishing differentiability in this context. As in Section~\ref{sec:random_horizon}, it is informative to first study the problem non-rigorously. A stationary distribution $\pi(\theta) = (\pi(\theta, dx): x \in S)$ of the Markov chain $X$ associated with one-step transition kernel $P(\theta)$ will satisfy \begin{equation}\label{eq:3.1a} \pi(\theta) = \pi(\theta)P(\theta).
\end{equation} Differentiating both sides of (\ref{eq:3.1a}) with respect to $\theta$, we obtain \begin{equation*} \pi'(\theta) = \pi'(\theta)P(\theta) + \pi(\theta)P'(\theta), \end{equation*} which leads to the equation \begin{equation*} \pi'(\theta) (I-P(\theta))=\pi(\theta)P'(\theta). \end{equation*} This equation is similar to (\ref{eq:2.3a}). However, unlike (\ref{eq:2.3a}), the operator $I-P(\theta)$ appearing here will never be invertible, even when $|S|<\infty$. In addition, $I - P(\theta)$ is acting on a measure rather than a function in this setting. Thus, a different approach is needed here. For a given function $f: S\to \mathbb{R}$, set $\alpha(\theta) = \pi(\theta) f$. Then, \begin{align} \alpha(\theta_0 + h) - \alpha (\theta_0) &= \pi(\theta_0+h)f-\pi(\theta_0)f\nonumber\\ &= \pi(\theta_0+h)f_c,\label{eq:3.1} \end{align} where $f_c(x) = f(x) - \pi(\theta_0) f$. While $I-P(\theta_0)$ is singular, \emph{Poisson's equation} \begin{equation}\label{eq:3.2} (I-P(\theta_0))g = f_c \end{equation} is, under suitable technical conditions, generally solvable for $g$ (because of the special structure of the right-hand side, namely $\pi(\theta_0)f_c= 0$). Substituting (\ref{eq:3.2}) into (\ref{eq:3.1}), we get \begin{align} \alpha(\theta_0 + h) - \alpha(\theta_0) &= \pi(\theta_0 + h) (I-P(\theta_0))g\nonumber\\ &= \pi(\theta_0 + h) (P(\theta_0+h) - P(\theta_0))g.\label{eq:3.3a} \end{align} This suggests that \begin{equation}\label{eq:3.3} \alpha'(\theta_0) = \pi(\theta_0) P'(\theta_0)g. \end{equation} We now turn to making this argument rigorous. We start by assuming that $(P(\theta): \theta \in \Lambda)$ itself satisfies the absolute continuity condition: \begin{itemize} \item[A4.]
The family of one-step transition kernels $(P(\theta): \theta \in \Lambda)$ is absolutely continuous with respect to $P(\theta_0)$, in the sense that there exists a density $(p(\theta, x, y): \theta\in \Lambda, x,y \in S)$ for which \begin{equation*} P(\theta, x, dy) = p(\theta, x, y) P(\theta_0, x, dy) \end{equation*} for $x, y \in S$, and $\theta\in \Lambda$. Furthermore, there exists $\epsilon > 0$ for which $p(\cdot, x, y)$ is continuously differentiable on $[\theta_0-\epsilon, \theta_0 + \epsilon]$ for each $x,y\in S$. \end{itemize} Set $\omega_\epsilon(x,y) = \sup_{|\theta-\theta_0|<\epsilon}|p'(\theta,x,y)|$. Our next assumption involves a (uniform) minorization condition over a set $A$, which is standard in the theory of Harris recurrent Markov chains; see, for example, p.~102 of \cite{meyn09}. \begin{itemize} \item[A5.] There exists $\epsilon > 0$, a subset $A\subseteq S$, an integer $n\geq 1$, $\lambda>0$, and a probability measure $\varphi$ for which \begin{equation*} P^n(\theta, x, dy) \geq \lambda \varphi(dy) \end{equation*} for $x\in A$, $y\in S$, and $|\theta-\theta_0| < \epsilon$. \end{itemize} For $a, b\in \mathbb{R}$, let $a\vee b \triangleq \max(a,b)$. We can now state our main theorem on differentiability of stationary expectations. \begin{theorem}\label{thm:3} Assume that A4 and A5 hold. Let $\kappa: \mathbb{R}_+ \to \mathbb{R}_+$ be a function for which $\kappa(x)\geq x$ and $\kappa(x)/x \to\infty$ as $x \to \infty$.
Suppose that there exist positive constants $\epsilon$, $c_0$, and $c_1$, and non-negative finite-valued functions $q$, $v_0$, and $v_1$ for which \begin{align} (P(\theta)v_0)(x) &\leq v_0(x) - (q(x)\vee 1) + c_0 \mathbb{I}(x\in A), \label{eq:3.4}\\ (P(\theta)v_1)(x) &\leq v_1(x) - \kappa\left( \int_S ( 1 \vee \omega_\epsilon(x,y) ) (v_0(y) + 1) P(\theta, x, dy)\right) + c_1 \mathbb{I}(x \in A), \label{eq:3.5} \end{align} for $x\in S$, $|\theta-\theta_0|<\epsilon$, and \begin{equation}\label{bound:sup_v_0} \sup_{x\in A} v_0(x) < \infty. \end{equation} Then: \begin{itemize} \item[(i)] There exists an open interval $\mathcal N$ containing $\theta_0$ for which $X$ is a positive recurrent Harris chain under $P(\theta)$ for each $\theta\in \mathcal N$; \item[(ii)] There exists a unique stationary distribution $\pi(\theta)$ satisfying $\pi(\theta) = \pi(\theta)P(\theta)$ for each $\theta\in \mathcal N$ and $\pi(\theta)q \leq c_0$ for $\theta\in \mathcal N$; \item[(iii)] For each $f$ such that $|f(x)|\leq q(x) \vee 1$ for $x\in S$, there exists a solution $g$ (denoted $g=\Gamma f$) of Poisson's equation satisfying \begin{equation*} ((I-P(\theta_0))g)(x) = f(x) - \pi(\theta_0) f \end{equation*} for $x\in S$, and $|g(x)| = |(\Gamma f)(x)| \leq a(v_0(x) + 1)$ for $x \in S$, where $a$ is a finite constant; \item[(iv)] For each $f$ such that $|f(x)| \leq q(x) \vee 1$, $\alpha(\theta) = \pi(\theta)f$ is \what{continuously} differentiable at $\theta_0$, and \begin{equation}\label{C} \alpha'(\theta_0) = \int_S \pi(\theta_0, dx) \int_S p'(\theta_0, x,y) (\Gamma f) (y) P(\theta_0, x, dy). \end{equation} \end{itemize} \end{theorem} \begin{proof}{Proof.} It is a standard fact that A5, (\ref{eq:3.4}), and (\ref{bound:sup_v_0}) imply that $X$ is a positive recurrent Harris chain under $P(\theta)$ for $\theta \in \mathcal N$ (where $\mathcal N$ is selected so that A5, (\ref{eq:3.4}) and (\ref{bound:sup_v_0}) are all in force); see, for example, \citet[p.313]{meyn09}.
As a consequence, there exists a unique stationary distribution $\pi(\theta)$ for each $\theta \in \mathcal N$. Furthermore, (\ref{eq:3.4}) implies that the bound $\pi(\theta)q\leq c_0$ holds for $\theta\in \mathcal N$; see, for example, Corollary 4 of \cite{glynn08}. Because $X$ is Harris recurrent (and (\ref{eq:3.4}) holds), one can now invoke Theorem 2.3 of \cite{glynn96} to obtain (iii). Turning to (iv), note that (\ref{eq:3.5}) guarantees that $\pi(\theta)v_0 < \infty $ for $\theta\in \mathcal N$, so that $\pi(\theta)|\Gamma f| < \infty$. In particular, since $\pi(\theta)$ is stationary for $P(\theta)$ and $\pi(\theta)|\Gamma f|<\infty$, Fubini's theorem yields $\pi(\theta)(P(\theta)\Gamma f) = \pi(\theta)\Gamma f$, which validates (\ref{eq:3.3a}). With the above conclusions having been verified, we can now appeal to (\ref{eq:3.3a}) to write \begin{align} \pi(\theta_0+h) f - \pi(\theta_0) f &= \pi(\theta_0 + h) \big( P(\theta_0+h) - P(\theta_0)\big) \Gamma f\nonumber\\ &=\int_S \pi(\theta_0 + h, dx) \int_S \big(P(\theta_0+h, x, dy) - P(\theta_0, x, dy)\big)(\Gamma f)(y).\label{eq:3.6} \end{align} Set $s(x) = \int_S \omega_\epsilon(x,y) (v_0(y) + 1) P(\theta_0, x, dy)$ and put $\mathbb{I}_m(x) = \mathbb{I}(s(x) \geq m)$, $\mathbb{I}_m^c(x) = \mathbb{I}(s(x) < m)$.
Note also that, since $\kappa(t)\geq t$, the Lyapunov bound (\ref{eq:3.5}) and Corollary 4 of \cite{glynn08} together imply that $\pi(\theta)s \leq c_1$ for $\theta\in\mathcal N$. Observe that since $|p(\theta_0+h, x, y) - p(\theta_0, x, y) |/h \leq \omega_\epsilon(x,y)$, and $|(\Gamma f)(y)| \leq a(v_0(y) +1 )$, \begin{align} &\int_S \pi(\theta_0 + h, dx) \mathbb{I}_m(x) \bigg|\left(\frac{P(\theta_0+h)-P(\theta_0)}{h} (\Gamma f)\right)(x)\bigg|\nonumber\\ &\leq \int_S \pi(\theta_0 + h, dx) \mathbb{I}_m(x) \int_S \omega_\epsilon(x,y) \,a(v_0(y)+1) P(\theta_0, x, dy)\nonumber\\ &\leq a\int_S \pi(\theta_0 + h, dx) \mathbb{I}_m(x) s(x)\nonumber\\ &\leq \frac{a}{\inf\{ \frac{\kappa(s(y))}{s(y)}: s(y)\geq m\}} \int_S \pi(\theta_0+h, dx) \frac{\kappa(s(x))}{s(x)}s(x)\nonumber\\ &\leq \frac{a}{\inf\{ \frac{\kappa(s(y))}{s(y)}: s(y)\geq m\}}\int_S \pi(\theta_0+h,dx) \kappa\left(\int_S (1\vee\omega_\epsilon(x,y))(v_0(y) + 1) P(\theta_0, x, dy)\right)\nonumber\\ &\leq \frac{a}{\inf\{ \frac{\kappa(s(y))}{s(y)}: s(y)\geq m\}} c_1, \label{eq:3.7} \end{align} where the last inequality follows from (\ref{eq:3.5}) and Corollary 4 of \cite{glynn08}. On the other hand, \begin{align*} \int_S \pi(\theta_0+h,dx) \mathbb{I}_m^c (x) \frac{P(\theta_0+h)-P(\theta_0)}{h}(\Gamma f) (x) \triangleq \int_S \pi(\theta_0+h, dx) s_h^m(x) = \pi(\theta_0+h)s_h^m, \end{align*} where \begin{align*} |s_h^m(x) | \leq a\int_S \omega_\epsilon(x,y) (v_0(y)+1) P(\theta_0, x, dy)\mathbb{I}(s(x) < m) \leq am, \end{align*} so $s_h^m$ is bounded. It follows that \begin{equation*} \pi(\theta_0 + h) s_h^m - \pi(\theta_0)s_h^m = \pi(\theta_0 + h) (P(\theta_0+h)-P(\theta_0))(\Gamma s_h^m).
\end{equation*} Note that $\left|\frac{\Gamma s_h^m}{am}\right| \leq a(v_0+1)$ (because $\left| \frac{s_h^m}{am} \right|\leq q \vee 1$), and hence \begin{align} |\pi(\theta_0 + h) s_h^m - \pi(\theta_0) s_h^m| &\leq a^2 m|h| \int_S \pi(\theta_0+h,dx) \int_S \omega_\epsilon(x,y) P(\theta_0, x,dy) (v_0(y) + 1)\nonumber\\ &\leq a^2 m|h| \int_S \pi(\theta_0 + h, dx) s(x)\nonumber\\ &\leq a^2 m|h| c_1\to 0 \label{eq:3.9} \end{align} as $h\to 0$. Finally, \begin{equation*} \int_S \pi(\theta_0, dx) s_h^m(x) = \int_S \pi(\theta_0, dx) \mathbb{I}_m^c(x)\int_S \frac{p(\theta_0+h, x, y) - p(\theta_0, x, y)}{h} P(\theta_0, x, dy) (\Gamma f) (y) \end{equation*} and \begin{equation*} \frac{p(\theta_0+h, x, y) - p(\theta_0, x, y)}{h} \to p'(\theta_0, x, y) \end{equation*} as $h \searrow 0$. Furthermore, $|p(\theta_0+h, x, y) - p(\theta_0, x, y) |/h \leq \omega_\epsilon(x,y)$, $|(\Gamma f)(y)| \leq a(v_0(y) +1 )$, and \begin{align*} \int_S \pi(\theta_0, dx) \int_S \omega_\epsilon(x,y) P(\theta_0, x, dy) (v_0(y) + 1) \leq \int_S \pi(\theta_0, dx) s(x) \leq c_1, \end{align*} so the Dominated Convergence Theorem implies that \begin{equation}\label{eq:3.10} \int_S \pi(\theta_0, dx) s_h^m(x) \to \int_S \pi(\theta_0, dx) \mathbb{I}_m^c(x) \cdot \int_S p'(\theta_0, x, y) P(\theta_0, x, dy) (\Gamma f) (y) \end{equation} as $h\searrow 0$. If we first let $h\to 0$ and then let $m \to \infty$, (\ref{eq:3.6}) through (\ref{eq:3.10}) imply part (iv) of our theorem. \what{ Finally, turning to the continuity of the derivative, note that exactly the same argument as above gives $ \alpha'(\theta_0+h) = \int_S \pi(\theta_0+h, dx)\int_S p'(\theta_0+h,x,y) (\Gamma_{\theta_0+h} f)(y) P(\theta_0,x,dy) $ where $\Gamma_{\theta_0+h}f$ is the solution $g$ of the Poisson equation $g - P(\theta_0+h)g = f - \pi(\theta_0+h) f$.
Since $$ \alpha'(\theta_0+h) - \alpha'(\theta_0) = \left(\alpha'(\theta_0+h) - \frac{\alpha((\theta_0+h)+(-h)) - \alpha(\theta_0+h)}{-h}\right) - \left(\alpha'(\theta_0) - \frac{\alpha(\theta_0+h)-\alpha(\theta_0)}{h}\right) $$ and the second term has been seen to vanish as $h\to 0$, we are done if we show that the first term also vanishes. As in (\ref{eq:3.3a}), $ \alpha(\theta_0) - \alpha(\theta_0+h) = \pi(\theta_0) (P(\theta_0)-P(\theta_0+h)) \Gamma_{\theta_0+h}f $. Therefore, \begin{align} &\alpha'(\theta_0+h) - \frac{\alpha((\theta_0+h)+(-h)) - \alpha(\theta_0+h)}{-h} = \alpha'(\theta_0+h) + \frac{\alpha(\theta_0) - \alpha(\theta_0+h)}{h} \nonumber \\ & = \int_S \pi(\theta_0+h, dx)\int_S p'(\theta_0+h,x,y) (\Gamma_{\theta_0+h} f)(y) P(\theta_0,x,dy) \nonumber \\ & \qquad + \int_S \pi(\theta_0, dx)\int_S \frac{p(\theta_0,x,y) - p(\theta_0+h,x,y)}{h} (\Gamma_{\theta_0+h} f)(y) P(\theta_0,x,dy) \nonumber \\ & = \int_S \big(\pi(\theta_0+h, dx) - \pi(\theta_0, dx)\big)\int_S p'(\theta_0+h,x,y) (\Gamma_{\theta_0+h} f)(y) P(\theta_0,x,dy) \label{eq:first_term_pf_t41} \\ & \qquad + \int_S \pi(\theta_0, dx)\int_S \left(p'(\theta_0+h,x,y)+\frac{p(\theta_0,x,y) - p(\theta_0+h,x,y)}{h}\right) (\Gamma_{\theta_0+h} f)(y) P(\theta_0,x,dy). \label{eq:second_term_pf_t41} \end{align} Upon a perusal of the proof of Theorem 2.3 of \cite{glynn96}, one can see that the uniform minorization condition A5 and the uniform Lyapunov inequality (\ref{eq:3.4}) imply $|\Gamma_{\theta_0+h} f(x)| \leq a(v_0(x)+1)$ with the same constant $a$ as in (iii). One can prove that (\ref{eq:first_term_pf_t41}) vanishes as $h\to 0$ by the same argument as (\ref{eq:3.7}) and (\ref{eq:3.9}). On the other hand, (\ref{eq:second_term_pf_t41}) vanishes by the continuous differentiability of $p$ (condition A4) and dominated convergence, along with (\ref{eq:3.5}).
} \end{proof} As for Theorem~\ref{thm:1}, the proof also establishes a computable bound on $|\alpha'(\theta_0)|$, namely $|\alpha'(\theta_0)|\leq a c_1$, where $a$ is the constant in (iii). Also, as in Section~\ref{sec:random_horizon}, we can further simplify the condition when $X$ is the solution to the stochastic recursion (\ref{eqA}), in which the parameter $\theta$ affects only the distribution of $Z_1$. When $p(\cdot,z)$ is continuously differentiable, (\ref{eq:3.5}) may be simplified as \begin{equation} (P(\theta)v_1) (x) \leq v_1(x) - \kappa\left(\mathbf{E} ^{\theta_0}\Big(1\vee\sup_{|\theta-\theta_0|<\epsilon}|p'(\theta,Z_1) |\Big)\big(v_0(r(x,Z_1))+1\big)p(\theta,Z_1)\right) + c_1 \mathbb{I}(x\in A). \end{equation} With A5, (\ref{eq:3.4}), and (\ref{bound:sup_v_0}) also in force, this ensures the differentiability of $\alpha(\cdot)$ at $\theta_0$, with $\alpha'(\theta_0)$ given by \begin{equation} \alpha'(\theta_0) = \int_S \pi(\theta_0, dx) \mathbf{E}^{\theta_0} (\Gamma f) (r(x,Z_1))p'(\theta_0,Z_1). \end{equation} A useful example on which to illustrate the above theory (and an important model in its own right) is that of the waiting time sequence $W=(W_n: n\geq 0)$ for the single-server G/G/1 queue, with first-come, first-served queue discipline. Let $V_n$ be the service time for the $n$\textsuperscript{th} customer, and let $\chi_{n+1}$ be the inter-arrival time that elapses between the arrival of the $n$\textsuperscript{th} and $(n+1)$\textsuperscript{st} customers. If $W_n$ is the waiting time (exclusive of service) for customer $n$, the $W_n$'s satisfy the stochastic recursion \begin{equation} W_{n+1} = [W_n + V_n - \chi_{n+1}]^+ \end{equation} for $n\geq 0$, where $[x]^+ \triangleq \max(x,0)$. Assume that the $V_n$'s are iid, independent of the $\chi_n$'s (which are also assumed iid). Then, $W$ is a Markov chain taking values in $S = [0,\infty)$.
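The waiting-time recursion above is straightforward to simulate. The sketch below (with exponential service and inter-arrival distributions chosen purely for illustration; the paper's example uses Pareto service times) generates a trajectory of $W$ via the Lindley recursion.

```python
import random

# Lindley recursion for G/G/1 waiting times: W_{n+1} = [W_n + V_n - chi_{n+1}]^+.
def waiting_times(n, service, interarrival, seed=42):
    rng = random.Random(seed)
    w, path = 0.0, [0.0]          # W_0 = 0
    for _ in range(n - 1):
        w = max(w + service(rng) - interarrival(rng), 0.0)
        path.append(w)
    return path

# Illustrative stable example: mean service 1/1.2 < mean inter-arrival time 1.
path = waiting_times(10_000,
                     service=lambda rng: rng.expovariate(1.2),
                     interarrival=lambda rng: rng.expovariate(1.0))
```

Positive recurrence in this stable regime manifests itself in the simulated trajectory returning to the regeneration state $\{0\}$ infinitely often.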
It is well known that $W$ is a positive recurrent Harris chain if $\mathbf{E} V_0 < \mathbf{E} \chi_1$, and that $\mathbf{E} V_0^{p+1}<\infty$ is then a necessary and sufficient condition for guaranteeing the finiteness of $\pi f_p$, where $f_p(x) = x^p$ (with $p>0$); see, for example, \cite{kiefer1956}. This suggests that the $p$\textsuperscript{th} moment should then typically be differentiable in $\theta$ when $\mathbf{E} V_0 ^{p+1}< \infty$. We consider this problem in the special case in which the service times are finite mean Pareto random variables (rv's), and $\theta$ influences the scale parameter of the Pareto distribution. In other words, we consider the setting in which \begin{equation*} P^\theta(V_0 > v) = (1+\theta v)^{-\alpha} \end{equation*} for $\alpha > 1$. (Note that $\mathbf{E}^\theta V_0 = \frac{1}{\theta(\alpha-1)}$.) In this case, the density of $V_0$ under $P^\theta$ is given by $\theta h_V(\theta v)$, where $h_V(v) = \alpha(1+v)^{-\alpha -1}$, so that \begin{equation*} p(\theta, v) = \left(\frac{\theta}{\theta_0}\right) \left(\frac{1+\theta v}{1+\theta_0 v}\right)^{-\alpha-1} \end{equation*} and \begin{equation*} p'(\theta, v) = p(\theta, v) \left(\frac{1}{\theta} - (\alpha + 1)\frac{v}{(1 + \theta v)}\right). \end{equation*} Note that both the density $p$ and its derivative (with respect to $\theta$) are bounded functions. Furthermore, the rv $p'(\theta_0, V_i)$ has mean zero under $P^{\theta_0}$. For any $c>0$, the set $A=[0,c]$ is easily seen to satisfy condition A5, and A4 is trivially verified (with $\omega_\epsilon(\cdot)$ bounded). Then, if $v_0(x) = a_1 x^{p+1}$, $v_1(x) = a_2 x^{r+2}$, and $\kappa(x)=x^{\frac{1+r}{1+p}}$ (with $r>p$ and $a_1$, $a_2$ chosen suitably), we see that (\ref{eq:3.4}), (\ref{eq:3.5}), and (\ref{bound:sup_v_0}) all hold, guaranteeing the differentiability of $\pi(\theta)f_p$ (according to Theorem~\ref{thm:3}).
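The Pareto computations above are easy to check numerically. The sketch below (parameter values invented for illustration) verifies by midpoint quadrature that the density of $V_0$ integrates to one and that $p'(\theta_0, V_0)$ has mean zero under $P^{\theta_0}$, as claimed.

```python
# Density of V_0 under theta0 is theta0 * h_V(theta0 v), h_V(v) = alpha (1+v)^(-alpha-1);
# the score is p'(theta0, v) = 1/theta0 - (alpha + 1) v / (1 + theta0 v).
alpha, theta0 = 2.5, 1.0

def density(v):
    return theta0 * alpha * (1.0 + theta0 * v) ** (-alpha - 1.0)

def score(v):
    return 1.0 / theta0 - (alpha + 1.0) * v / (1.0 + theta0 * v)

# Midpoint quadrature on [0, 1000]; the neglected tail mass is O(1000^{-alpha}).
dv = 0.01
grid = [(i + 0.5) * dv for i in range(100_000)]
total_mass = sum(density(v) for v in grid) * dv
mean_score = sum(score(v) * density(v) for v in grid) * dv
```

Both quantities come out correct to quadrature accuracy: the mass is essentially one and the mean score essentially zero.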
For example, to verify (\ref{eq:3.4}), we note that \begin{equation*} x^{-p}\big[(P(\theta)v_0)(x) - v_0(x)\big] = a_1 x \mathbf{E} ^{\theta_0} \left(\left[1+\frac{1}{x} \left(\frac{\theta_0}{\theta} V_0 - \chi_1\right)\right]^+\right)^{p+1} - a_1 x. \end{equation*} Observe that as $x \to \infty$, \begin{align*} &x f_{p+1}\left(\left[1+\frac 1 x \left(\frac{\theta_0}{\theta} V_0 - \chi_1\right)\right]^+\right) - x\\ &= x \left(f_{p+1}(1) + f_{p+1}'(1)\left(\frac{1}{x}\right)\left(\frac{\theta_0}{\theta} V_0 - \chi_1\right)\right) - x + o(1) \qquad a.s.\\ &= (p+1)\left(\frac{\theta_0}{\theta} V_0 - \chi_1\right) + o(1) \qquad a.s. \end{align*} where $o(1)$ represents a function $k(x)$ such that $k(x) \to 0$ as $x \to \infty$ uniformly in a neighborhood of $\theta_0$. In addition, note that for $p>0$ and $x>0$, the mean value theorem implies that $f_{p+1}(1+x) = f_{p+1}(1) + f'_{p+1}(1+\xi)x$ for some $\xi \in [0,x]$, so that $f_{p+1}(1+x) = f_{p+1}(1) + (p+1) (1+\xi)^{p} x \leq 1 + (p+1)(1+x)^px$.
Consequently, \begin{align*} &x\left( \left[ 1+\frac{1}{x} \left( \frac{\theta_0}{\theta} V_0 - \chi_1 \right) \right]^+ \right)^{p+1} - x\\ &\leq x \left( 1+\frac 1 x\frac{\theta_0}{\theta} V_0 \right)^{p+1} - x\\ &\leq x\left(1 + (p+1) \left(1 + \frac{1}{x}\frac{\theta_0}{\theta}V_0\right)^p \frac{1}{x}\frac{\theta_0}{\theta} V_0\right) - x\\ &\leq (p+1) \left(1 + \frac{1}{x}\frac{\theta_0}{\theta}V_0\right)^{p}\frac{\theta_0}{\theta}V_0. \end{align*} Since $\mathbf{E} V_0^{p+1}< \infty$, Fatou's lemma applies to ensure that \begin{align*} \limsup_{x\to\infty}\,\sup_{\theta} \mathbf{E}^{\theta_0} \left(x f_{p+1} \left(\left[1+\frac{1}{x}\left(\frac{\theta_0}{\theta}V_0 - \chi_1\right)\right]^+\right)-x\right) &\leq (p+1)\mathbf{E} ^{\theta_0}\sup_{\theta} \left(\frac{\theta_0}{\theta}V_0 - \chi_1\right) \\ &= (p+1)\sup_{\theta}\left(\frac1 {\theta(\alpha - 1)} - \mathbf{E} \chi_1\right), \end{align*} where the suprema are taken over $|\theta - \theta_0| < \epsilon$.
If we choose $a_1$ so that $a_1 (p+1)\sup_{\theta}\left(\frac{1}{\theta(\alpha - 1)} - \mathbf{E} \chi_1\right) \leq -2$ and $c$ so that \begin{equation*} a_1\sup_{\theta}\mathbf{E}^{\theta_0} \left(xf_{p+1}\left(\left[1+\frac{1}{x}\left(\frac{\theta_0}{\theta}V_0 - \chi_1\right)\right]^+\right)-x\right) \leq -1 \end{equation*} for $x\geq c$, then (\ref{eq:3.4}) is validated. A similar argument applies to (\ref{eq:3.5}), in view of the boundedness of $\omega_\epsilon(\cdot)$. Our argument therefore establishes that $\pi f_p$ is differentiable if $\mathbf{E} V_0 ^q < \infty$ for some $q>p+2$. This is not quite the ``correct'' result (in that we previously argued that $\mathbf{E} V_0^{p+1}<\infty$ should be sufficient). The reason that our argument fails to provide the optimal condition here has to do with the special random walk structure present in the process $W$, which is difficult for general machinery to exploit. The challenge arises at (\ref{eq:3.3a}) above. Note that the argument just provided for $W$ involves using $v_0=a_1 f_{p+1}$ as a bound on the solution $g$ to Poisson's equation for $f_p$. (As we shall see in a moment, $g$ is indeed exactly of order $x^{p+1}$.) The problem is that neither $P(\theta_0+h) f_{p+1}$ nor $P(\theta_0)f_{p+1}$ in (\ref{eq:3.3a}) is integrable with respect to $\pi(\theta_0+h)$ unless $\mathbf{E} V_0 ^{p+2}<\infty$. This is what leads to the extra moment appearing in our argument for $W$ above. Thus, any argument that yields differentiability under the hypothesis $\mathbf{E} V_0^{p+1}<\infty$ must take advantage of the fact that the random walk structure of $W$ yields the integrability of $(P(\theta_0 + h) - P(\theta_0))g$ under $\mathbf{E} V_0^{p+1}<\infty$ without demanding the integrability of $P(\theta_0)g$ and $P(\theta_0+h)g$ separately.
It is shown in \cite{glynn96} that, in view of the fact that $W$ regenerates at hitting times of $\{0\}$, the solution $g$ to Poisson's equation for $f_p$ can be expressed as \begin{equation}\label{A} g(x) = \mathbf{E}_x ^{\theta_0} \sum_{j=0}^{\tau(0)-1} (f_p(W_j)-\pi(\theta_0)f_p), \end{equation} where $\tau(0) = \inf\{n\geq 1: W_n = 0\}$ is the hitting time of $\{0\}$. Let $Z_j = V_{j-1} - \chi_j$, $S_j = Z_1 + \cdots + Z_j$ (for $j\geq 1$), $\tau_x(0) = \inf\{j\geq 1: x + S_j \leq 0\}$, $\mu = \mathbf{E} Z_1$, and note that (\ref{A}) implies that \begin{align} (P(\theta_0 + h)g)(x) - (P(\theta_0)g)(x) &= \mathbf{E}^{\theta_0} g(W_1) [p(\theta_0 + h, V_0)-1]\label{B}\\ &= \mathbf{E}^{\theta_0} \sum_{j=1}^{\tau_x(0)-1} [(x+S_j)^p - \pi(\theta_0) f_p] (p(\theta_0 + h, V_0) -1 ) \mathbb{I}(x + Z_1 > 0).\nonumber \end{align} But \begin{align*} \sum_{j=1}^{\tau_x(0)-1} (x+S_j)^p (p(\theta_0+h,V_0)-1) &= x^p\sum_{j=1}^{\tau_x(0)-1} [(1+\frac{S_j-V_0}{x})^p+p\xi_j(x)^{p-1}\frac{V_0}{x}](p(\theta_0 + h, V_0)-1) \end{align*} where $\xi_j(x)$ lies between $1+S_j/x-V_0/x$ and $1+S_j/x$. It is easily argued, based on Riemann sum approximations, that \begin{align*} \sum_{j=1}^{\tau_x(0)-1} \xi_j(x)^{p-1} \frac{1}{x} &\to \int_0^{1/|\mu|} (1+\mu s)^{p-1}ds \qquad a.s.\\ &=\frac{1}{|\mu|}\cdot \frac{1}{p} \end{align*} as $x \to \infty$. Furthermore, $p(\theta_0+h,V_0)-1$ is a mean zero rv that is independent of $(1+(S_j-V_0)/x)^p$ for $j\geq 1$ and $\mathbf{E} \tau_x(0) \sim x / |\mu|$ as $x\to \infty$ (where $a_1(x) \sim a_2(x)$ as $x\to \infty$ means that $a_1(x)/a_2(x) \to 1$ as $x \to \infty$). In view of (\ref{B}), this suggests that \begin{equation*} (P(\theta_0+h)g)(x) - (P(\theta_0)g)(x) \sim \frac{x^p}{|\mu|} \mathbf{E} V_0 (p(\theta_0+h,V_0)-1) \end{equation*} as $x\to \infty$ (i.e., one power lower than the growth of $g$ itself).
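The Riemann-sum limit above is easy to probe by simulation. The sketch below is an illustration only, not part of the proof: it assumes, purely for concreteness, that $V_0$ is exponential with mean $1$ and $\chi_1$ is exponential with mean $1.5$ (so $\mu = \mathbf{E} Z_1 = -1/2$), takes $p = 2$, and uses $1 + S_j/x$ in place of $\xi_j(x)$ (the two agree to $O(1/x)$); the averaged sum is then compared with $\int_0^{1/|\mu|} (1+\mu s)^{p-1}\,ds = 1/(|\mu| p)$.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 2.0
mu = -0.5                       # E Z_1 = E V_0 - E chi_1 = 1 - 1.5 (illustrative choice)

def riemann_sum(x):
    """sum_{j=1}^{tau_x(0)-1} (1 + S_j/x)^{p-1} / x along one random-walk path,
    using 1 + S_j/x as a proxy for xi_j(x)."""
    total, S = 0.0, 0.0
    while True:
        S += rng.exponential(1.0) - rng.exponential(1.5)   # Z_j = V_{j-1} - chi_j
        if x + S <= 0:                                     # tau_x(0) has been reached
            return total
        total += (1.0 + S / x) ** (p - 1) / x

x = 2000.0
est = np.mean([riemann_sum(x) for _ in range(100)])        # average over 100 paths
exact = 1.0 / (abs(mu) * p)     # = int_0^{1/|mu|} (1 + mu*s)^{p-1} ds
print(est, exact)               # the two agree for large x
```

For large $x$ the simulated sum concentrates near the integral, consistent with the almost-sure convergence claimed above.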
Thus, this style of argument can successfully deal with the integrability issue discussed earlier, and leads to a validation of the derivative formula (\ref{C}) for $W$ under the assumption $\mathbf{E} V_0^{p+1}<\infty$. A rigorous statement and the remaining details of the proof can be found in the Appendix. This differentiation result for $W$ can also be found in \cite{heidergott09}, with a different (and longer) proof, and with some steps that appear to be incomplete. (In particular, the paper asserts that $\mathbf{E}_x ^{\theta_0} \sum_{j=0}^{\tau(0)-1} f_p(W_j)$ is bounded for any fixed $\theta$ and $p$, which implies that our function $g$ grows at most linearly regardless of $p$; see p.~248.)
\section{Introduction} \label{sec:Introduction} High dimensional multivariate data is becoming increasingly prevalent, with the estimation of the covariance matrix for such data sets being an important fundamental problem. The classical estimator, i.e.\ the sample covariance matrix, though, is known to be highly non-robust under longer tailed alternatives to the multivariate normal distribution, as well as being highly non-resistant to outliers in the data. Consequently, there have been numerous proposals for robust alternatives to the sample covariance matrix, with one of the earliest alternatives being the $M$-estimators of multivariate scatter \cite{Maronna_1976, Huber_1981}. As with the multivariate $M$-estimators of scatter, most of the subsequent proposals for robust estimators of multivariate scatter are affine equivariant. However, for sparse multivariate data, that is when the sample size $n$ is less than or not much larger than the dimension of the data $q$, such estimators of scatter do not differ greatly from the sample covariance matrix, and for the case $q \le n$, they are simply proportional to the sample covariance, see \cite{Tyler_2010}. Even when the distribution is normal and there are no outliers in the data set, the sample covariance matrix can still be unreliable for sparse data sets due to the large number of parameters being estimated, namely $q(q+1)/2$. Consequently, one may wish to model the covariance matrix using fewer parameters, or one may wish to give preference to certain covariance structures and pull the estimator towards such structures via penalization or regularization techniques. Traditionally, research on robust estimators of multivariate scatter has not taken these concerns into account, and the statistics literature has focused primarily on the unrestricted robust estimation of the scatter matrix.
Within the signal processing community, though, there has been an increasing interest in the $M$-estimators of multivariate scatter \cite{Abramovich_etal_2013, Besson_etal_2013, Conte_etal_2002, Gini_Greco_2002, Ollila_Koivunen_2003, Ollila_Koivunen_2009, Ollila_Tyler_2012, Ollila_etal_2003, Ollila_etal_2012, Pascal_etal_2008, Soloveychik_Wiesel_2013, Wiesel_2012, Zhang_etal_2013} and more recently an interest in developing regularized versions of them \cite{Chen_etal_2011, Couillet_McKay_2014, Ollila_Tyler_2014, Pascal_etal_2014, Wiesel_2012, Wiesel_2012b}. An important mathematical contribution arising from the area of signal processing is the realization in \cite{Wiesel_2012} that treating the multivariate scatter matrices as elements in a Riemannian manifold and using the notion of geodesic convexity can be very useful, leading to elegant theory as well as new results. These concepts had been applied previously within the statistics literature \cite{Auderset_etal_2005}, but only for the specific case of the distribution free $M$-estimator of multivariate scatter. More recently they have been used in \cite{Sra_Hosseini_2013} and implicitly in the survey paper \cite{Duembgen_etal_2015} on $M$-functionals of multivariate scatter. The purpose of the present paper is threefold. We first review the standard Riemannian geometry on the space of symmetric positive definite matrices and the notion of geodesic convexity in Section~\ref{sec:G-Convexity}. In particular we introduce and utilize first and second order Taylor expansions of such functions with respect to geodesic parametrizations. Such expansions allow us to introduce sufficient conditions for a function to be geodesically convex. In addition we introduce the concept of geodesic coercivity, which is important in establishing the existence of both the $M$-estimators of scatter and their regularized versions. 
As in classical convex analysis, a real valued function on the space of symmetric positive definite matrices which is continuous, strictly geodesically convex and coercive has a unique minimizer. Our second contribution is a general analysis of regularized $M$-estimators of multivariate scatter with respect to geodesic convexity and coercivity in Section~\ref{sec:Regularized.scatter}. Our starting point is the results of \cite{Wiesel_2012, Zhang_etal_2013} and \cite{Duembgen_etal_2015} which show that the log-likelihood type functions underlying $M$-estimators of multivariate scatter are geodesically convex under rather general conditions. We show that various penalty functions favoring matrices which are close to the identity matrix or to multiples of the identity matrix are geodesically convex. This leads to a rather complete picture concerning existence and uniqueness of regularized $M$-functionals of scatter. It also provides new results on regularized sample covariance matrices when using penalty functions which are geodesically convex but not convex in the inverse of the covariance matrix. Furthermore, we propose a cross-validation method for choosing a scaling parameter for the penalty function. Finally, we present a general partial Newton algorithm to minimize a smooth and strictly geodesically convex function in Section~\ref{sec:Algorithm}. This algorithm is a generalization of the partial Newton method of \cite{Duembgen_etal_2016} with guaranteed convergence. We illustrate this method with a numerical example in Section~\ref{sec:Example}. All proofs and some auxiliary results are deferred to Section~\ref{sec:Proofs} and to a supplement \ref{sec:Auxiliary}. We begin with some notation and a brief background review.
\section{Background and Notation} \label{sec:Background} Let the space of symmetric matrices in $\mathbb{R}^{q\times q}$ be denoted by $\R_{\rm sym}^{q\times q}$, and let $\R_{{\rm sym},+}^{q\times q}$ stand for its subset of positive definite matrices, i.e.\ symmetric matrices with eigenvalues in $\mathbb{R}_+ := (0,\infty)$. For a distribution $Q$ on $\mathbb{R}^q$ with given center $0$ and a function $\rho : [0,\infty) \to \mathbb{R}$, an $M$-functional of multivariate scatter can be defined as a matrix which minimizes the objective function \begin{equation} \label{eq:Lrho} L_\rho(\Sigma,Q) \ := \ \int \bigl[ \rho(x^\top\Sigma^{-1}x) - \rho(\|x\|^2) \bigr] \, Q(dx) + \log \det(\Sigma) \end{equation} over $\Sigma \in \R_{{\rm sym},+}^{q\times q}$. When $Q = Q_n$ represents an empirical distribution, then the minimizer defines an $M$-estimator of scatter, and the objective function can be viewed as a generalization of the negative log-likelihood function arising from an elliptical distribution \cite{Maronna_1976}. The term $\rho(\|x\|^2)$ is not needed when working with empirical distributions. In general, though, this term allows us to be able to consider distributions $Q$ for which $\int | \rho(\|x\|^2) | \, Q(dx) = \infty$. For continuous $\rho$ with sill $a_o > q$, defined below, a minimizer $\Sigma \in \R_{{\rm sym},+}^{q\times q}$ to $L_\rho(\Sigma,Q_n)$ is known to exist, provided no subspace contains too many data points, or specifically if the following condition holds for $Q=Q_n$ \cite{Kent_Tyler_1991}. \paragraph{Condition~1.} For all linear subspaces $\mathbb{V} \subset \mathbb{R}^q$ with $0 \le \dim(\mathbb{V}) < q$, \[ Q(\mathbb{V}) < 1 - \frac{\{q - \dim(\mathbb{V})\}}{a_o} , \] where $a_o = \sup\{a : s^a\exp\{-\rho(s)\} \to 0 \ \text{as} \ s \to \infty \}$. (Note that the function $\rho$ in the present paper corresponds to $2\rho$ in \cite{Kent_Tyler_1991} and other publications.)
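For an empirical distribution $Q_n$, the objective \eqref{eq:Lrho} reduces to $n^{-1}\sum_{i=1}^n \rho(x_i^\top\Sigma^{-1}x_i) + \log\det(\Sigma)$, the centering term $\rho(\|x\|^2)$ being a constant that drops out of the minimization. The following minimal numerical sketch evaluates this empirical objective; the particular choice $\rho(s) = (\nu+q)\log(\nu+s)$, which corresponds to the elliptical $t_\nu$ likelihood, and the simulated data are illustrative assumptions, not prescriptions.

```python
import numpy as np

def L_rho(Sigma, X, rho):
    """Empirical objective: mean_i rho(x_i' Sigma^{-1} x_i) + log det(Sigma)."""
    s = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)  # Mahalanobis-type forms
    return rho(s).mean() + np.linalg.slogdet(Sigma)[1]

nu, q = 3.0, 2
rho_t = lambda s: (nu + q) * np.log(nu + s)   # rho'(s) = (nu+q)/(nu+s)

rng = np.random.default_rng(1)
X = rng.standard_normal((500, q))             # illustrative data spanning R^q
I = np.eye(q)
val_I, val_10I = L_rho(I, X, rho_t), L_rho(10 * I, X, rho_t)
print(val_I, val_10I)                         # a badly scaled Sigma gives a larger value
```

The objective is finite on $\R_{{\rm sym},+}^{q\times q}$ for such data, and inflating the scatter matrix away from the data's scale increases it, as expected of a well-posed minimization.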
If $\rho$ is differentiable, then the critical points, and hence any minimizer, of \eqref{eq:Lrho} satisfy the $M$-estimating equations \begin{equation} \label{eq:Mee} \Sigma \ = \ \int u(x^\top\Sigma^{-1}x) xx^\top \, Q_n(dx) \end{equation} where $u(s) := \rho'(s)$. Furthermore, if we define $\psi(s) := su(s)$, then the sill $a_o$ equals the limit $\psi(\infty) = \lim_{s \to \infty} \psi(s)$ whenever the latter exists. To assure the uniqueness of a minimizer to $L_\rho(\Sigma,Q_n)$ or a unique solution to the $M$-estimating equations \eqref{eq:Mee}, further conditions on the function $\rho$ are needed. It has been known since the introduction of the $M$-estimators of scatter \cite{Maronna_1976, Huber_1981} that one such sufficient condition is the following. \paragraph{Condition~2.} The function $\rho$ is differentiable, with $u(s)$ being non-increasing and $\psi(s)$ being non-decreasing and strictly increasing for $\psi(s) < \psi(\infty)$. \smallskip \noindent The proof of uniqueness given in \cite{Maronna_1976, Huber_1981} assumes more restrictive conditions on the distribution $Q$ than that given by Condition~1, although it is shown in \cite{Kent_Tyler_1991} that Conditions~1 and 2 are sufficient for the existence of a unique solution to \eqref{eq:Mee}, i.e.\ for the existence and uniqueness of the $M$-estimator of scatter. Some common examples of $M$-estimators satisfying Condition~2 are Huber's $M$-estimator for which $\psi(s) = K\min(s/c,1)$ with tuning constants $c > 0$ and $K > q$, and the maximum likelihood estimators derived from an elliptical t-distribution with $\nu > 0$ degrees of freedom, for which $\psi(s) = (\nu+q)s/(\nu+s)$. The above conditions lack some intuition as to why \eqref{eq:Lrho} has a unique minimum. The proofs of uniqueness given in \cite{Maronna_1976, Huber_1981, Kent_Tyler_1991} are based on a study of the $M$-estimating equations \eqref{eq:Mee}.
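A solution of \eqref{eq:Mee} can be computed by the classical fixed-point iteration $\Sigma \leftarrow n^{-1}\sum_{i} u(x_i^\top\Sigma^{-1}x_i)\,x_i^{}x_i^\top$. The sketch below implements this textbook iteration (not the partial Newton method of Section~\ref{sec:Algorithm}) for the $t_\nu$ weight $u(s) = (\nu+q)/(\nu+s)$; the data, seed, and stopping rule are illustrative assumptions.

```python
import numpy as np

def m_scatter_t(X, nu, n_iter=500, tol=1e-12):
    """Fixed-point iteration for the t_nu M-estimator of scatter:
    Sigma <- (1/n) sum_i u(s_i) x_i x_i',  s_i = x_i' Sigma^{-1} x_i,
    with weight u(s) = (nu + q)/(nu + s)."""
    n, q = X.shape
    Sigma = np.cov(X, rowvar=False, bias=True)          # starting value
    for _ in range(n_iter):
        s = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)
        w = (nu + q) / (nu + s)
        new = (w[:, None] * X).T @ X / n                # (1/n) sum_i w_i x_i x_i'
        if np.linalg.norm(new - Sigma) <= tol * np.linalg.norm(Sigma):
            return new
        Sigma = new
    return Sigma

rng = np.random.default_rng(2)
X = rng.standard_normal((1000, 3))
S_hat = m_scatter_t(X, nu=3.0)
# residual of the M-estimating equation at the computed fixed point:
s = np.einsum('ij,jk,ik->i', X, np.linalg.inv(S_hat), X)
resid = np.linalg.norm((((3.0 + 3) / (3.0 + s))[:, None] * X).T @ X / 1000 - S_hat)
print(resid)                                            # essentially zero
```

The small residual confirms that the returned matrix satisfies \eqref{eq:Mee} up to the tolerance; Conditions~1 and 2 guarantee that this solution is the unique one.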
Recall that for the classical case when $L_\rho(\Sigma,Q_n)$ corresponds to the negative log-likelihood under a $q$-dimensional normal distribution with mean zero and covariance $\Sigma$, i.e.\ when $\rho(s) = s$, then $L_\rho(\Sigma,Q_n)$ is strictly convex in $\Sigma^{-1}$ and hence has a unique minimizer, namely the sample covariance matrix. For general $\rho$, however, $L_\rho(\Sigma,Q_n)$ tends not to be convex in $\Sigma^{-1}$. Important insight into the function $L_\rho(\Sigma,Q_n)$ has recently been given within the area of signal processing. In particular, it is shown in \cite{Zhang_etal_2013} that if the function $\rho(e^x)$ is convex in $x \in \mathbb{R}$, then $L_\rho(\Sigma,Q_n)$ is geodesically convex in $\Sigma \in \R_{{\rm sym},+}^{q\times q}$, and that if the function $\rho(e^x)$ is strictly convex in $x \in \mathbb{R}$, then $L_\rho(\Sigma,Q_n)$ is strictly geodesically convex in $\Sigma \in \R_{{\rm sym},+}^{q\times q}$ provided the data span $\mathbb{R}^q$. Consequently, when Condition~1 holds, then the minimizer set for $L_\rho(\Sigma,Q_n)$ is a geodesically convex set when $\rho(e^x)$ is convex, and the minimizer is unique when $\rho(e^x)$ is strictly convex. The results on geodesic convexity, or g-convexity, not only give a mathematically elegant insight into uniqueness, but they also yield more general results. For example, $\rho(s)$ need not be differentiable. Also, when $\rho(s)$ is differentiable, then $\rho(e^x)$ is (strictly) convex in $x \in \mathbb{R}$ if and only if $\psi(s)$ is (strictly) increasing, with no additional conditions on $u(s)$ being needed, i.e. $u(s)$ need not be non-increasing. The notion of g-convexity also allows for the development of new results regarding minimizing $L_\rho(\Sigma,Q)$ over a g-convex subset of $\R_{{\rm sym},+}^{q\times q}$, as well as minimizing a penalized objective function when the penalty function is also g-convex. 
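The criterion ``$\rho(e^x)$ convex $\Leftrightarrow$ $\psi$ increasing'' is easy to probe numerically. For the $t_\nu$ example, $\rho(s) = (\nu+q)\log(\nu+s)$ (so that $\psi(s) = s\rho'(s) = (\nu+q)s/(\nu+s)$ as above), the sketch below checks that $x \mapsto \rho(e^x)$ has nonnegative discrete second differences and that $\psi$ is strictly increasing; the grids and parameter values are illustrative.

```python
import numpy as np

nu, q = 3.0, 2.0
rho = lambda s: (nu + q) * np.log(nu + s)     # elliptical t_nu choice of rho
psi = lambda s: (nu + q) * s / (nu + s)       # psi(s) = s * rho'(s)

x = np.linspace(-5.0, 5.0, 1001)
d2 = np.diff(rho(np.exp(x)), 2)               # discrete second differences of rho(e^x)
print(d2.min())                               # nonnegative: rho(e^x) is convex in x

s = np.linspace(1e-3, 50.0, 1001)
print(np.all(np.diff(psi(s)) > 0))            # psi strictly increasing
```

Note that $u(s) = (\nu+q)/(\nu+s)$ happens to be non-increasing here as well, but, as remarked above, only the monotonicity of $\psi$ is needed for (strict) g-convexity.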
Before addressing these problems, though, we provide a thorough review and present some new results on the notion of geodesic convexity. \begin{Remark} Note that our objective function \eqref{eq:Lrho} assumes $0$ to be the center of the distribution $Q$. In various applications in signal processing the center of $Q$ is often known or hypothesized, and consequently all the aforementioned signal processing references presume a known center. In more traditional location-scatter problems, one could embed the location-scatter problem in dimension $q$ into a scatter-only problem in dimension $q+1$ as explained in \cite{Kent_Tyler_1991, Duembgen_etal_2015}. But regularization in this setting is less clear. If the location parameter is merely a nuisance parameter, then one can first center the data using an auxiliary estimate of location. Alternatively, the location parameter can be removed by symmetrization, i.e.\ instead of $Q$ one considers the symmetrized distribution $\mathcal{L}(X - X')$ with independent random vectors $X, X' \sim Q$; see \cite{Duembgen_1998, Duembgen_etal_2015} for further details. \end{Remark} \section{Geodesic Convexity} \label{sec:G-Convexity} \subsection{A Riemannian geometry for scatter matrices} We collect a few basic ideas about positive definite matrices and their geometry. For a full treatment we refer to \cite{Bhatia_2007}. The Euclidean norm of a vector $v \in \mathbb{R}^q$ is denoted by $\|v\| = \sqrt{v^\top v}$. For matrices $A, B$ with identical dimensions we write \[ \langle A, B\rangle \ := \ \mathop{\mathrm{tr}}\nolimits(A^\top B) \quad\text{and}\quad \|A\| \ := \ \sqrt{\langle A, A\rangle} , \] so $\|A\|$ is the Frobenius norm of $A$. Equipped with this inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$, the matrix space $\R_{\rm sym}^{q\times q}$ is a Euclidean space of dimension $q(q+1)/2$, and $\R_{{\rm sym},+}^{q\times q}$ is an open subset thereof.
But in the context of scatter estimation an alternative geometry turns out to be useful. Let $\widehat{\Sigma}_n$ be the sample covariance matrix of independent random vectors $X_1, X_2, \ldots, X_n$ with distribution $\mathcal{N}_q(\mu,\Sigma)$ with $\mu \in \mathbb{R}^q$ and $\Sigma \in \R_{{\rm sym},+}^{q\times q}$. It is well known that \[ \widehat{\Sigma}_n \ =_{\mathcal{L}}^{} \ \Sigma^{1/2} (I_q + A_n) \Sigma^{1/2} \] with the identity matrix $I_q \in \R_{}^{q\times q}$ and a random matrix $A_n \in \R_{\rm sym}^{q\times q}$. The distribution of $A_n$ depends only on $n$ and is invariant under transformations $A_n \mapsto U A_n U^\top$ with $U \in \R_{\rm orth}^{q\times q}$, the set of orthogonal matrices in $\R_{}^{q\times q}$. Moreover, $A_n \to_p 0$ as $n \to \infty$. Thus one could measure the distance between $\widehat{\Sigma}_n$ and $\Sigma$ by \[ \|A_n\| \ = \ \|\widehat{\Sigma}_n - \Sigma\|_{\Sigma} \] with the local norm \[ \|\Delta\|_\Sigma \ := \ \|\Sigma^{-1/2} \Delta \Sigma^{-1/2}\| \ = \ \sqrt{ \mathop{\mathrm{tr}}\nolimits(\Delta \Sigma^{-1} \Delta \Sigma^{-1})} \] corresponding to the local inner product \[ \langle \Delta, \widetilde{\Delta}\rangle_\Sigma \ := \ \langle \Sigma^{-1/2}\Delta\Sigma^{-1/2}, \Sigma^{-1/2}\widetilde{\Delta}\Sigma^{-1/2}\rangle = \mathop{\mathrm{tr}}\nolimits(\Delta\Sigma^{-1}\widetilde{\Delta}\Sigma^{-1}) \] of matrices $\Delta, \widetilde{\Delta} \in \R_{\rm sym}^{q\times q}$. To define a distance between two arbitrary matrices $\Sigma_0, \Sigma_1 \in \R_{{\rm sym},+}^{q\times q}$, we consider a smooth path $M$ connecting them. That means, $M : [0,1] \to \R_{{\rm sym},+}^{q\times q}$ is piecewise continuously differentiable with $M(0) = \Sigma_0$ and $M(1) = \Sigma_1$. Then we define the length of $M$ to be \[ L(M) \ := \ \int_0^1 \|\dot{M}(t)\|_{M(t)} \, dt . 
\] Denoting with $\R_{\rm ns}^{q\times q}$ the set of nonsingular matrices in $\R_{}^{q\times q}$, one can easily verify that for any $B \in \R_{\rm ns}^{q\times q}$, the new path \[ M_B(t) \ := \ B M(t) B^\top \] connects the matrices $B\Sigma_0B^\top$ and $B\Sigma_1B^\top$ and has length \[ L(M_B) \ = \ L(M) . \] Here is a well-known key result about shortest paths in $\R_{{\rm sym},+}^{q\times q}$. For the reader's convenience we provide a self-contained proof in Supplement~\ref{sec:Auxiliary}. \begin{Theorem} \label{thm:geodesics} Let $M : [0,1] \to \R_{{\rm sym},+}^{q\times q}$ be a path connecting $M(0) = \Sigma_0$ and $M(1) = \Sigma_1$. Then \[ L(M) \ \ge \ \bigl\| \log(\Sigma_0^{-1/2} \Sigma_1^{} \Sigma_0^{-1/2}) \bigr\| \] with equality if, and only if, \[ M(t) \ = \ \Sigma_0^{1/2} \, (\Sigma_0^{-1/2} \Sigma_1^{} \Sigma_0^{-1/2})_{}^{u(t)} \, \Sigma_0^{1/2} \] for some non-decreasing, piecewise continuously differentiable function $u : [0,1] \to \mathbb{R}$ with $u(0) = 0$ and $u(1) = 1$. \end{Theorem} Note that for a shortest path $M$, its track $\{M(t) : t \in [0,1]\}$ does not depend on the function $u$ but is equal to $\{N(u) : u \in [0,1]\}$ with the special path $N : [0,1] \to \R_{{\rm sym},+}^{q\times q}$ given by $N(u) := \Sigma_0^{1/2} \, (\Sigma_0^{-1/2} \Sigma_1^{} \Sigma_0^{-1/2})_{}^{u} \, \Sigma_0^{1/2}$. Indeed $M(t) = N(u(t))$, and the path $N$ has constant geodesic speed in the sense that for all $u \in [0,1]$, \[ \|\dot{N}(u)\|_{N(u)} \ = \ L(N) = L(M) . \] The preceding considerations involve matrix powers and logarithms. 
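Theorem~\ref{thm:geodesics} can also be illustrated numerically. The sketch below (an illustration only, using NumPy/SciPy matrix functions) discretizes the constant-speed path $N(u) = \Sigma_0^{1/2} \exp(u \log(\Sigma_0^{-1/2}\Sigma_1^{}\Sigma_0^{-1/2})) \Sigma_0^{1/2}$, numerically integrates its local speed $\|\dot N(u)\|_{N(u)}$, and recovers the minimal length $\|\log(\Sigma_0^{-1/2}\Sigma_1^{}\Sigma_0^{-1/2})\|$; the random test matrices are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm, logm, sqrtm

rng = np.random.default_rng(3)
q = 3
B0, B1 = rng.standard_normal((q, q)), rng.standard_normal((q, q))
S0 = B0 @ B0.T + np.eye(q)                    # two random positive definite matrices
S1 = B1 @ B1.T + np.eye(q)

S0h = sqrtm(S0).real
A = logm(np.linalg.inv(S0h) @ S1 @ np.linalg.inv(S0h)).real  # log(S0^{-1/2} S1 S0^{-1/2})
d = np.linalg.norm(A, 'fro')                  # claimed minimal length

def speed(M, Mdot):
    """Local geodesic speed ||Mdot||_M = sqrt(tr(Mdot M^{-1} Mdot M^{-1}))."""
    Minv = np.linalg.inv(M)
    return np.sqrt(np.trace(Mdot @ Minv @ Mdot @ Minv))

# integrate the speed of N(u) = S0^{1/2} exp(uA) S0^{1/2} over u in [0,1]:
h, L = 1.0 / 400, 0.0
for k in range(400):
    M0 = S0h @ expm(k * h * A) @ S0h
    M1 = S0h @ expm((k + 1) * h * A) @ S0h
    L += speed((M0 + M1) / 2, (M1 - M0) / h) * h
print(L, d)                                   # numerically integrated length matches d
```

The integrand is (numerically) constant in $u$, reflecting the constant geodesic speed of $N$ noted above.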
In general, a real valued function $h : \mathbb{R} \to \mathbb{R}$ can be extended to a matrix-valued function $h: \R_{\rm sym}^{q\times q} \rightarrow \R_{\rm sym}^{q\times q}$ in the following manner: Let $A \in \R_{\rm sym}^{q\times q}$ have spectral decomposition $A = U D(\lambda) U^\top$ with a matrix $U \in \R_{\rm orth}^{q\times q}$ of orthonormal eigenvectors of $A$ and a diagonal matrix $D(\lambda)$ with diagonal elements given by $\lambda = (\lambda_i)_{i=1}^q \in \mathbb{R}^q$ , then \[ h(A) \ := \ U D(h(\lambda)) U^\top , \] using the convention $h(\lambda) := (h(\lambda_i))_{i=1}^q$. If $h$ is defined only on $\mathbb{R}_+$, then we restrict $A$ to $\R_{{\rm sym},+}^{q\times q}$ and obtain a matrix-valued function $h: \R_{{\rm sym},+}^{q\times q} \rightarrow \R_{\rm sym}^{q\times q}$. So, for $\lambda \in \mathbb{R}_+^q$, \[ A^s \ := \ U D(\lambda^s) U^\top \quad\text{for} \ s \in \mathbb{R} \] and \[ \log(A) \ := \ U D(\log \lambda) U^\top . \] Also, for $A \in \R_{\rm sym}^{q\times q}$, \[ \exp(A) \ := \ U D(e^\lambda) U^\top . \] This is consistent with the more general definition of a matrix exponential \[ \exp(A) \ := \ \sum_{k=0}^\infty \frac{A^k}{k!} \] which is defined for any arbitrary matrix $A \in \R_{}^{q\times q}$. Analogous to the real setting, $\exp : \R_{\rm sym}^{q\times q} \to \R_{{\rm sym},+}^{q\times q}$ is a bijection with inverse mapping $\log : \R_{{\rm sym},+}^{q\times q} \to \R_{\rm sym}^{q\times q}$. For $A \in \R_{{\rm sym},+}^{q\times q}$, \[ A^s \ = \ \exp(s \log(A)) . \] Hence Theorem~\ref{thm:geodesics} shows that a shortest path between two matrices $\Sigma_0, \Sigma_1 \in \R_{{\rm sym},+}^{q\times q}$ is given by \[ M(t) \ := \ \Sigma_0^{1/2} \exp \bigl( t \log(\Sigma_0^{-1/2} \Sigma_1^{} \Sigma_0^{-1/2}) \bigr) \Sigma_0^{1/2} , \quad t \in [0,1] . \] Sometimes it is convenient to consider other factorizations of $\Sigma_0$, i.e.\ other square roots.
If we write $\Sigma_0 = BB^\top$ for some $B \in \R_{\rm ns}^{q\times q}$, then \[ M(t) \ = \ B \exp(tA) B^\top \quad\text{with}\quad A \ := \ \log(B^{-1} \Sigma_1 B^{-\top}) \] and $B^{-\top} := (B^\top)^{-1} = (B^{-1})^\top$. The function $M(t)$ does not depend on the particular choice for $B$ since $B = \Sigma_0^{1/2} V$ for some $V \in \R_{\rm orth}^{q\times q}$. In particular, let $\Sigma_0^{-1/2} \Sigma_1^{} \Sigma_0^{-1/2} = V D(\eta) V^\top$ with $V \in \R_{\rm orth}^{q\times q}$ and $\eta \in \mathbb{R}_+^q$ containing the eigenvalues of $\Sigma_1 \Sigma_0^{-1}$. Then $\Sigma_0 = BB^\top$ and $\Sigma_1 = B D(\eta) B^\top$ with $B = \Sigma_0^{1/2} V$. For this choice and $\gamma := \log\eta$ we obtain the expression \begin{equation} \label{eq:BDB} M(t) \ = \ B D(\eta)^t B^\top \ = \ B \exp(t D(\gamma) ) B^\top , \end{equation} which leads to a simple interpretation of the geodesic path from $\Sigma_0$ to $\Sigma_1$. Namely, after jointly diagonalizing $\Sigma_0$ and $\Sigma_1$, the geodesic path corresponds to the linear path connecting the logs of the diagonal elements. \begin{Lemma}[Geodesic curves and $q$-dimensional surfaces] \label{lem:geodesic.curves} Let $B$ be an arbitrary matrix in $\R_{\rm ns}^{q\times q}$. For $A \in \R_{\rm sym}^{q\times q}$ and $t \in \mathbb{R}$ let \[ \Sigma(t) \ := \ B \exp(tA) B^\top . \] This defines a geodesic curve in the following sense: For arbitrary different numbers $t_0, t_1$, a shortest path connecting $\Sigma(t_0)$ and $\Sigma(t_1)$ is given by \[ [0,1] \ni u \ \mapsto \ \Sigma((1 - u) t_0 + u t_1) . \] For $x \in \mathbb{R}^q$ let \[ \Gamma(x) \ := \ B D(e^x) B^\top = B \exp(D(x)) B^\top . \] This defines a $q$-dimensional geodesic surface in the following sense: For arbitrary $x_0, x_1 \in \mathbb{R}^q$, a shortest path connecting $\Gamma(x_0)$ and $\Gamma(x_1)$ is given by \[ [0,1] \ni u \ \mapsto \ B D \bigl( \exp((1 - u) x_0 + u x_1) \bigr) B^\top . 
\] \end{Lemma} \paragraph{Local geodesic parametrizations.} Closely related to the geodesic paths just described are the following local parametrizations of subsets of $\R_{{\rm sym},+}^{q\times q}$. For any matrix $\Sigma = BB^\top$ with $B \in \R_{\rm ns}^{q\times q}$ one may write \[ \R_{{\rm sym},+}^{q\times q} \ = \ \bigl\{ B \exp(A) B^\top : A \in \R_{\rm sym}^{q\times q} \bigr\} . \] These parametrizations are particularly useful in connection with first and second order Taylor expansions of smooth functions on $\R_{{\rm sym},+}^{q\times q}$. \begin{Definition}[Geodesically convex sets] \label{def:g-convex.sets} A subset $C$ of $\R_{{\rm sym},+}^{q\times q}$ is called \textsl{geodesically convex} (\textsl{g-convex}) if for arbitrary $\Sigma_0, \Sigma_1 \in C$ the whole geodesic path connecting them is contained in $C$. That means, for $0 \le t \le 1$, \[ \Sigma_t^{} := \Sigma_0^{1/2} \bigl( \Sigma_0^{-1/2} \Sigma_1^{} \Sigma_0^{-1/2} \bigr)^t \Sigma_0^{1/2} \ \in \ C . \] \noindent In other words, for arbitrary $B \in \R_{\rm ns}^{q\times q}$ and $A \in \R_{\rm sym}^{q\times q}$ such that both $BB^\top$ and $B\exp(A)B^\top$ belong to $C$, \[ B \exp(tA) B^\top \ \in \ C \quad\text{for} \ 0 \le t \le 1 . \] \end{Definition} \paragraph{Examples.} Lemma~\ref{lem:geodesic.curves} implies that for arbitrary $B \in \R_{\rm ns}^{q\times q}$ the following sets are g-convex: \[ \bigl\{ B \exp(tA) B^\top : t \in \mathcal{T}\} \] with $A \in \R_{\rm sym}^{q\times q}$ and an interval $\mathcal{T} \subset \mathbb{R}$, and \[ \bigl\{ B D(e^x) B^\top : x \in \mathcal{X}\} \] with a convex set $\mathcal{X} \subset \mathbb{R}^q$. Moreover, for any number $c > 0$, the set \[ \{\Sigma \in \R_{{\rm sym},+}^{q\times q} : \det(\Sigma) = c\} \] is easily shown to be g-convex. \paragraph{Geodesic distance.} The geodesic distance between two matrices $\Sigma_0, \Sigma_1 \in \R_{{\rm sym},+}^{q\times q}$ is defined to be the length of the geodesic path connecting them, i.e. 
\[ d_g(\Sigma_0,\Sigma_1) \ := \ \bigl\| \log(\Sigma_0^{-1/2} \Sigma_1^{} \Sigma_0^{-1/2}) \bigr\| . \] If, as in \eqref{eq:BDB}, we express $\Sigma_0 = BB^\top$ and $\Sigma_1 = B \exp(D(\gamma)) B^\top$, then \[ d_g(\Sigma_0,\Sigma_1)^2 \ = \ \|\gamma\|^2 \ = \ \sum_{i=1}^q \gamma_i^2 . \] Obviously $d_g(\Sigma_0,\Sigma_1) \ge 0$ with equality if, and only if, $\Sigma_0^{-1/2} \Sigma_1^{} \Sigma_0^{-1/2} = I_q$ which is equivalent to $\Sigma_0 = \Sigma_1$. The interpretation of $d_g(\Sigma_0,\Sigma_1)$ as the length of a shortest path between $\Sigma_0$ and $\Sigma_1$ implies that $d_g(\cdot,\cdot)$ is a metric on $\R_{{\rm sym},+}^{q\times q}$. As to symmetry, $d_g(\Sigma_1,\Sigma_0) = d_g(\Sigma_0,\Sigma_1)$, because any path $M$ from $\Sigma_0$ to $\Sigma_1$ defines a path $\widetilde{M}(t) := M(1 - t)$ from $\Sigma_1$ to $\Sigma_0$ such that $L(\widetilde{M}) = L(M)$. As to the triangle inequality, for a third matrix $\Sigma_2 \in \R_{{\rm sym},+}^{q\times q}$ let $M_{01}$ be a shortest path from $\Sigma_0$ to $\Sigma_1$ and let $M_{12}$ be a shortest path from $\Sigma_1$ to $\Sigma_2$. Then \[ M(t) \ := \ \begin{cases} M_{01}(2t) & \text{for} \ 0 \le t \le 1/2 \\ M_{12}(2t-1) & \text{for} \ 1/2 \le t \le 1 \end{cases} \] defines a path from $\Sigma_0$ to $\Sigma_2$ such that $L(M) = L(M_{01}) + L(M_{12})$. Thus $d_g(\Sigma_0,\Sigma_2) \le L(M) = d_g(\Sigma_0,\Sigma_1) + d_g(\Sigma_1,\Sigma_2)$. Two additional facts are that \[ d_g(B \Sigma_0 B^\top, B \Sigma_1 B^\top) \ = \ d_g(\Sigma_0,\Sigma_1) \ = \ d_g(\Sigma_0^{-1},\Sigma_1^{-1}) . \] The first equality follows from the fact that any path $M$ from $\Sigma_0$ to $\Sigma_1$ gives rise to the path $M_B$ from $B\Sigma_0 B^\top$ to $B\Sigma_1 B^\top$ with $L(M_B) = L(M)$. Moreover, one can easily verify that $\widetilde{M}(t) := M(t)^{-1}$ defines a path from $\Sigma_0^{-1}$ to $\Sigma_1^{-1}$ with $L(\widetilde{M}) = L(M)$.
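These invariance properties of $d_g$ are easy to confirm numerically. The following sketch (an illustration only; the random test matrices and seed are assumptions) computes $d_g$ from its closed form and checks symmetry, invariance under congruence $\Sigma \mapsto B\Sigma B^\top$, and invariance under inversion.

```python
import numpy as np
from scipy.linalg import logm, sqrtm

def d_g(S0, S1):
    """Geodesic distance d_g(S0, S1) = ||log(S0^{-1/2} S1 S0^{-1/2})||_F."""
    S0hi = np.linalg.inv(sqrtm(S0).real)
    return np.linalg.norm(logm(S0hi @ S1 @ S0hi).real, 'fro')

rng = np.random.default_rng(5)
q = 3
def rand_spd():
    B = rng.standard_normal((q, q))
    return B @ B.T + np.eye(q)

S0, S1 = rand_spd(), rand_spd()
B = rng.standard_normal((q, q))               # nonsingular with probability one

e_sym = abs(d_g(S0, S1) - d_g(S1, S0))
e_cong = abs(d_g(S0, S1) - d_g(B @ S0 @ B.T, B @ S1 @ B.T))
e_inv = abs(d_g(S0, S1) - d_g(np.linalg.inv(S0), np.linalg.inv(S1)))
print(e_sym, e_cong, e_inv)                   # all three should be numerically zero
```

Congruence invariance is exactly the property $L(M_B) = L(M)$ noted earlier, now read off at the level of the induced metric.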
\paragraph{Matrices with determinant one.} In connection with scale-invariant functionals, the submanifold \[ \mathbb{M}^{(q)} \ := \ \bigl\{ \Sigma \in \R_{{\rm sym},+}^{q\times q} : \det(\Sigma) = 1 \bigr\} \] of $\R_{{\rm sym},+}^{q\times q}$ plays a prominent role. Note that any $\Sigma \in \mathbb{M}^{(q)}$ may be represented as $\Sigma = BB^\top$ with $B \in \R_{}^{q\times q}$ satisfying $\det(B) = \pm 1$, and then \[ \mathbb{M}^{(q)} \ = \ \bigl\{ B \exp(A) B^\top : A \in \mathbb{W}^{(q)} \bigr\} \] with the linear subspace \[ \mathbb{W}^{(q)} \ := \ \bigl\{ A \in \R_{\rm sym}^{q\times q} : \mathop{\mathrm{tr}}\nolimits(A) = 0 \bigr\} \] of $\R_{\rm sym}^{q\times q}$. An arbitrary matrix $\Sigma \in \R_{{\rm sym},+}^{q\times q}$ may be written as $\Sigma = e^{a} \Gamma$ with $a := q^{-1} \log\det(\Sigma) \in \mathbb{R}$ and $\Gamma := \det(\Sigma)^{-1/q} \Sigma \in \mathbb{M}^{(q)}$. Then indeed \[ \min_{G \in \mathbb{M}^{(q)}} d_g(\Sigma,G) \ = \ d_g(\Sigma,\Gamma) \ = \ q^{1/2} |a| \ = \ q^{-1/2} |\log \det(\Sigma)| . \] This follows from a more general observation: Let $\Sigma_0, \Sigma_1 \in \R_{{\rm sym},+}^{q\times q}$ be written as $\Sigma_j = e^{a_j}\Gamma_j$ with $a_j \in \mathbb{R}$ and $\Gamma_j \in \mathbb{M}^{(q)}$. Then \[ \log(\Sigma_0^{-1/2} \Sigma_1^{} \Sigma_0^{-1/2}) \ = \ (a_1 - a_0) I_q + \log(\Gamma_0^{-1/2} \Gamma_1^{} \Gamma_0^{-1/2}) , \] and it follows from $\langle I_q, \log(\Gamma_0^{-1/2} \Gamma_1^{} \Gamma_0^{-1/2})\rangle = \log \det(\Gamma_0^{-1} \Gamma_1) = 0$ that \[ d_g(\Sigma_0, \Sigma_1)^2 \ = \ q (a_1 - a_0)^2 + d_g(\Gamma_0,\Gamma_1)^2 . \] \subsection{Geodesically convex functions} \begin{Definition}[Geodesically convex functions] \label{def:g-convex.functions} Let $C \subset \R_{{\rm sym},+}^{q\times q}$ be g-convex.
A function $f : C \to \mathbb{R}$ is called \textsl{geodesically convex} (\textsl{g-convex}) if for arbitrary matrices $\Sigma_0,\Sigma_1 \in C$ and $0 < t < 1$, \[ f(\Sigma_t) \ \le \ (1 - t) f(\Sigma_0) + t f(\Sigma_1) , \] where $\Sigma_t$ is defined as in Definition~\ref{def:g-convex.sets}. If the preceding inequality is strict whenever $\Sigma_0 \ne \Sigma_1$, the function $f$ is called \textsl{strictly geodesically convex} (\textsl{strictly g-convex}). \noindent Equivalently, $f : C \to \mathbb{R}$ is (strictly) g-convex if for arbitrary $B \in \R_{\rm ns}^{q\times q}$ and $A \in \R_{\rm sym}^{q\times q} \setminus \{0\}$ such that both $BB^\top$ and $B \exp(A) B^\top$ belong to $C$, \[ f(B \exp(tA) B^\top) \ \text{is (strictly) convex in} \ t \in [0,1] . \] \end{Definition} \begin{Example} \label{ex0} The function $f(\Sigma) := \log \det(\Sigma)$ is geodesically convex on $\R_{{\rm sym},+}^{q\times q}$. It is even geodesically linear in the sense that \[ f(B \exp(A) B^\top) \ = \ f(BB^\top) + \mathop{\mathrm{tr}}\nolimits(A) \ = \ f(BB^\top) + \langle I_q,A\rangle \] for arbitrary $B \in \R_{\rm ns}^{q\times q}$ and $A \in \R_{\rm sym}^{q\times q}$. \end{Example} By means of Lemma~\ref{lem:geodesic.curves} one can easily derive the following result. 
\begin{Lemma} \label{lem:criteria.g-convexity} For a function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ the following three properties are equivalent: \noindent \textbf{(a)} \ $f$ is (strictly) geodesically convex; \noindent \textbf{(b)} \ For arbitrary $B \in \R_{\rm ns}^{q\times q}$ and $A \in \R_{\rm sym}^{q\times q} \setminus \{0\}$, the function \[ \mathbb{R} \ni t \ \mapsto \ f(B \exp(tA) B^\top) \] is (strictly) convex; \noindent \textbf{(b')} \ For arbitrary $B \in \R_{\rm ns}^{q\times q}$ and $x \in \mathbb{R}^q \setminus \{0\}$, the function \[ \mathbb{R} \ni t \ \mapsto \ f(B D(e^{tx}) B^\top) \] is (strictly) convex; \noindent \textbf{(c)} \ For arbitrary $B \in \R_{\rm ns}^{q\times q}$, the function \[ \mathbb{R}^q \ni x \ \mapsto \ f(B D(e^x) B^\top) \] is (strictly) convex. \end{Lemma} Obviously, Property~(b') is a special case of Property~(b), because $D(e^{tx}) = \exp(t D(x))$. On the other hand we may write $A \in \R_{\rm sym}^{q\times q} \setminus \{0\}$ as $A = U D(x) U^\top$ for some $U \in \R_{\rm orth}^{q\times q}$ and $x \in \mathbb{R}^q \setminus \{0\}$. Then $B \exp(tA) B^\top = (BU) D(e^{tx}) (BU)^\top$, whence Property~(b') implies Property~(b). \begin{Example} \label{ex1} For any vector $v \in \mathbb{R}^q \setminus \{0\}$, the function \[ \Sigma \ \mapsto \ v^\top \Sigma v \] is g-convex, and the function \[ \Sigma \ \mapsto \ \mathop{\mathrm{tr}}\nolimits(\Sigma) \] is strictly g-convex. To verify these claims we use criterion (c) in Lemma~\ref{lem:criteria.g-convexity}: For $B \in \R_{\rm ns}^{q\times q}$ and $x \in \mathbb{R}^q$, \[ v^\top B D(e^x) B^\top v \ = \ \sum_{i=1}^q e^{x_i} (B^\top v)_i^2 \] is obviously convex in $x$, because $\exp : \mathbb{R} \to \mathbb{R}$ is convex. Similarly, \[ \mathop{\mathrm{tr}}\nolimits(B D(e^x) B^\top) \ = \ \sum_{j=1}^q e^{x_j} w_j \quad\text{with} \ w_j = \sum_{i=1}^q B_{ij}^2 . 
\] This is even strictly convex in $x$, because $\exp : \mathbb{R} \to \mathbb{R}$ is strictly convex and all weights $w_j$ are strictly positive. \end{Example} \begin{Example} \label{exlog} For any vector $v \in \mathbb{R}^q \setminus \{0\}$, the function \[ \Sigma \ \mapsto \ \log (v^\top \Sigma v) \] is g-convex. To verify this claim we use criterion (b') in Lemma~\ref{lem:criteria.g-convexity}: For $B \in \R_{\rm ns}^{q\times q}$, $x \in \mathbb{R}^q \setminus \{0\}$, and $t \in \mathbb{R}$, \[ g(t) \ = \ \log\left( v^\top B D(e^{tx}) B^\top v \right) \ = \ \log \sum_{i=1}^q e^{t x_i} a_i \] with $a_i = (B^\top v)_i^2 \ge 0$. Evaluating its second derivative gives \[ g^{\prime\prime}(t) \ = \ \frac{\sum_{i=1}^q e^{t x_i} a_i x_i^2}{\sum_{i=1}^q e^{t x_i} a_i} - \biggl\{ \frac{\sum_{i=1}^q e^{t x_i} a_i x_i}{\sum_{i=1}^q e^{t x_i} a_i} \biggr\}^2 , \] and so, by the Cauchy--Schwarz inequality, $g^{\prime\prime}(t) \ge 0$, with equality if, and only if, all the $x_i$ are equal for those $i$ with $a_i > 0$. Furthermore, suppose that $\rho : \mathbb{R}_+ \to \mathbb{R}$ is g-convex, which is equivalent to $h(t) := \rho(e^t)$ being convex in $t \in \mathbb{R}$, and that $\rho$ is non-decreasing. Then the function \[ \Sigma \ \mapsto \ \rho(v^\top \Sigma^{-1} v) \] is g-convex. This follows by expressing $\rho(v^\top \Sigma^{-1} v) = h(f(\Sigma^{-1}))$ with $f(\Sigma) := \log(v^\top \Sigma v)$ and then applying the two remarks given below. \end{Example} \begin{Remark}[G-convexity and inversion] \label{rem:Inversion} If $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ is geodesically convex, then $\widetilde{f}(\Sigma) := f(\Sigma^{-1})$ defines a geodesically convex function, too. This follows essentially from the fact that \[ \bigl( B \exp(tA) B^\top \bigr)^{-1} \ = \ B^{-\top} \exp(- tA) B^{-1} \ = \ \widetilde{B} \exp(t \widetilde{A}) \widetilde{B}^\top \] with $\widetilde{B} := B^{-\top}$ and $\widetilde{A} := -A$.
\end{Remark} \begin{Remark}[G-convexity and compositions] \label{rem:Transformation} Let $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ be geodesically convex with values in an interval $\mathcal{T} \subset \mathbb{R}$, and let $h : \mathcal{T} \to \mathbb{R}$ be convex and non-decreasing. Then $\widetilde{f}(\Sigma) := h(f(\Sigma))$ defines a geodesically convex function, too. For if $\Sigma_0, \Sigma_1, \Sigma_t$ are as in Definition~\ref{def:g-convex.sets}, then \begin{align*} \widetilde{f}(\Sigma_t) \ &= \ h(f(\Sigma_t)) \\ &\le \ h \bigl( (1 - t)f(\Sigma_0) + t f(\Sigma_1) \bigr) \qquad(\text{g-convexity of} \ f, \ \text{monotonicity of} \ h) \\ &\le \ (1 - t) h(f(\Sigma_0)) + t h(f(\Sigma_1)) \qquad(\text{convexity of} \ h) \\ &= \ (1 - t) \widetilde{f}(\Sigma_0) + t \widetilde{f}(\Sigma_1) . \end{align*} The function $\widetilde{f}$ is even strictly g-convex if $f$ is strictly g-convex and $h$ is strictly increasing. \end{Remark} \subsection{Minimizers and geodesic coercivity} \label{subsec:Minimizers.g-coercivity} Suppose we want to minimize a g-convex function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$. As in classical convex analysis, a minimizer of $f$ may be characterized by means of the one-sided directional derivatives \[ \lim_{t \to 0\,+} \frac{f(B \exp(tA) B^\top) - f(BB^\top)}{t} \] for $B \in \R_{\rm ns}^{q\times q}$ and $A \in \R_{\rm sym}^{q\times q}$. The latter limit exists in $\mathbb{R}$, because g-convexity of $f$ implies convexity of $f(B \exp(tA) B^\top)$ in $t \in \mathbb{R}$. \begin{Lemma}[Characterizing minimizers] \label{lem:minimizer.g-convex} A matrix $\Sigma = BB^\top$ with $B \in \R_{\rm ns}^{q\times q}$ minimizes a g-convex function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ if, and only if, \begin{equation} \label{eq:minimizer.g-convex} \lim_{t \to 0\,+} \frac{f(B \exp(tA) B^\top) - f(BB^\top)}{t} \ \ge \ 0 \quad\text{for all} \ A \in \R_{\rm sym}^{q\times q} .
\end{equation} \end{Lemma} This lemma provides an explicit criterion to check whether a certain point $\Sigma$ is a minimizer of a differentiable and g-convex function on $\R_{{\rm sym},+}^{q\times q}$. But it is not clear under what conditions a minimizer has to exist. In this context a key property of $f$ is coercivity in the following sense. \begin{Definition}[Geodesic coercivity] A function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ is called \textsl{geodesically coercive} (\textsl{g-coercive}) if \[ f(\Sigma) \ \to \ \infty \quad\text{as} \ \|\log(\Sigma)\| \to \infty . \] \noindent In other words, a function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ is g-coercive if, and only if, the function $\R_{\rm sym}^{q\times q} \ni A \mapsto f(\exp(A))$ is coercive in the usual sense, that is, $f(\exp(A)) \to \infty$ as $\|A\| \to \infty$. \end{Definition} Note that $\|\log(\Sigma)\| \to \infty$ is equivalent to $\|\Sigma\| + \|\Sigma^{-1}\| \to \infty$. Various authors have realized that any continuous function $f$ on $\R_{{\rm sym},+}^{q\times q}$ with the latter property has a compact set of minimizers, e.g.\ \cite{Sra_Hosseini_2015}. The following lemma and its corollary explain the relation between g-coercivity and the existence of minimizers in case of g-convex functions. In particular, the corollary shows that a continuous and strictly g-convex function has a unique minimizer if, and only if, it is g-coercive. \begin{Lemma}[Existence of minimizers] \label{lem:existence.minimizers} Let $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ be a continuous and geodesically convex function. \noindent \textbf{(i)} \ The set $\mathcal{S}_*$ of its minimizers is a closed and geodesically convex subset of $\R_{{\rm sym},+}^{q\times q}$. It is possibly empty. \noindent \textbf{(ii)} \ If $f$ is g-coercive, then $\mathcal{S}_*$ is nonvoid and compact. 
\noindent \textbf{(iii)} \ If $f$ fails to be g-coercive but $\mathcal{S}_*$ is nonvoid, then $\mathcal{S}_*$ is geodesically unbounded, that means, \[ \sup_{\Sigma_1, \Sigma_2 \in \mathcal{S}_*} d_g(\Sigma_1,\Sigma_2) \ = \ \infty . \] \end{Lemma} \begin{Corollary}[Existence of a unique minimizer] \label{cor:uniqueness.minimizer} Let $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ be a continuous and strictly geodesically convex function. \noindent \textbf{(i)} \ If $f$ is g-coercive, it has a unique minimizer. \noindent \textbf{(ii)} \ If $f$ fails to be g-coercive, it has no minimizer at all. \end{Corollary} Corollary~\ref{cor:uniqueness.minimizer} follows easily from Lemma~\ref{lem:existence.minimizers}. Note that a strictly g-convex function $f$ can have at most one minimizer. For if $\Sigma_0, \Sigma_1$ are two different matrices with $f(\Sigma_0) = f(\Sigma_1)$, then $f$ attains strictly smaller values along the geodesic path connecting $\Sigma_0$ and $\Sigma_1$. Since a geodesically unbounded set is necessarily infinite, a continuous and strictly g-convex function which is not g-coercive cannot have a minimizer. The next lemma provides an equivalent characterization for g-coercivity: \begin{Lemma}[Characterizing g-coercivity] \label{lem:g-coercivity} Let $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ be continuous and geodesically convex. Then $f$ is geodesically coercive if, and only if, for any fixed $A \in \R_{\rm sym}^{q\times q} \setminus \{0\}$, \[ \lim_{t \to \infty} \ \lim_{u \to t\,+} \frac{f(\exp(uA)) - f(\exp(tA))}{u - t} \ > \ 0 . \] \end{Lemma} \subsection{Differentiability} The next lemma establishes a connection between differentiability in the usual sense and differentiability with respect to local geodesic coordinates.
\begin{Lemma}[1st order smoothness] \label{lem:smoothness1} For a function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ the following two conditions are equivalent: \noindent \textbf{(S1.i)} \ $f$ is differentiable with gradient $\nabla f : \R_{{\rm sym},+}^{q\times q} \to \R_{\rm sym}^{q\times q}$. \noindent \textbf{(S1.ii)} \ For each $B \in \R_{\rm ns}^{q\times q}$ there exists a matrix $G(B) \in \R_{\rm sym}^{q\times q}$ such that for $A \in \R_{\rm sym}^{q\times q}$, \[ f(B \exp(A) B^\top) \ = \ f(BB^\top) + \langle A, G(B)\rangle + o(\|A\|) \quad\text{as} \ A \to 0 . \] \noindent In case of (S1.i-ii), \begin{align*} G(B) \ &= \ B^\top \nabla f(BB^\top) B \quad\text{for} \ B \in \R_{\rm ns}^{q\times q} , \\ \nabla f(\Sigma) \ &= \ \Sigma^{-1/2} G(\Sigma^{1/2}) \Sigma^{-1/2} \quad\text{for} \ \Sigma \in \R_{{\rm sym},+}^{q\times q} . \end{align*} \end{Lemma} In particular, a function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ is continuously differentiable if, and only if, its ``geodesic gradient (g-gradient)'' $G(B)$ is continuous in $B \in\R_{\rm ns}^{q\times q}$. It is well-known from convex analysis that a differentiable convex function $f$ on $\mathbb{R}^d$ is minimal at a certain point $x \in \mathbb{R}^d$ if, and only if, $\nabla f(x) = 0$. The same is true for differentiable g-convex functions: \begin{Corollary}[Characterizing minimizers] \label{cor:minimizers.g-convex} Let $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ be differentiable and geodesically convex. Then for $\Sigma = BB^\top$, $B \in \R_{\rm ns}^{q\times q}$, the following three conditions are equivalent: \noindent \textbf{(a)} \ $\Sigma$ is a minimizer of $f$; \noindent \textbf{(b)} \ $\nabla f(\Sigma) = 0$; \noindent \textbf{(b')} \ $G(B) = 0$. 
\end{Corollary} This corollary follows directly from Lemmas~\ref{lem:minimizer.g-convex} and \ref{lem:smoothness1}, noting that \[ \lim_{t \to 0\,+} \frac{f(B\exp(tA)B^\top) - f(BB^\top)}{t} \ = \ \langle A, G(B)\rangle \] for $A \in \R_{\rm sym}^{q\times q}$ and $B \in \R_{\rm ns}^{q\times q}$. Moreover, for different real numbers $t, u$ and $B := \exp((t/2)A)$, \[ \frac{f(\exp(uA)) - f(\exp(tA))}{u - t} \ = \ \frac{f \bigl( B\exp((u - t)A) B \bigr) - f(BB)}{u - t} \ \to \ \langle A, G(B) \rangle \] as $u \to t$. Hence for differentiable and g-convex functions $f$ the criterion for g-coercivity in Lemma~\ref{lem:g-coercivity} can be reformulated as follows: \begin{Corollary}[Characterizing g-coercivity] \label{cor:g-coercivity} Let $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ be differentiable and geodesically convex. Then $f$ is geodesically coercive if, and only if, for any fixed $A \in \R_{\rm sym}^{q\times q} \setminus \{0\}$, \[ \lim_{t \to \infty} \ \frac{d}{dt} f(\exp(tA)) \ > \ 0 \] which is equivalent to \[ \lim_{t \to \infty} \langle A, G(\exp(tA))\rangle \ > \ 0 . \] \end{Corollary} \subsection{Second order smoothness} Verifying g-convexity of a function $f$ on $\R_{{\rm sym},+}^{q\times q}$ is not trivial. Many authors use direct calculations case by case \cite{Wiesel_2012} or use advanced matrix inequalities \cite{Sra_Hosseini_2013, Sra_Hosseini_2015}. Convexity of functions can be easily characterized in terms of second derivatives. The same is true for g-convexity if one uses local geodesic coordinates.
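The identity $\lim_{t\to0+} t^{-1}\{f(B\exp(tA)B^\top) - f(BB^\top)\} = \langle A, G(B)\rangle$ is easy to confirm numerically. The following sketch is ours (assuming NumPy/SciPy) and uses the strictly g-convex function $f(\Sigma) = \mathop{\mathrm{tr}}(\Sigma)$ from Example~\ref{ex1}, for which $\nabla f \equiv I_q$ and hence $G(B) = B^\top \nabla f(BB^\top) B = B^\top B$ by Lemma~\ref{lem:smoothness1}.

```python
import numpy as np
from scipy.linalg import expm

# f(Sigma) = tr(Sigma): grad f = I_q, so the geodesic gradient is
# G(B) = B^T B, and d/dt f(B exp(tA) B^T)|_{t=0} = <A, G(B)> = tr(A B^T B).
rng = np.random.default_rng(1)
q = 3
B = rng.standard_normal((q, q))
A = rng.standard_normal((q, q)); A = (A + A.T) / 2  # symmetric direction

def f_along(t):
    return float(np.trace(B @ expm(t * A) @ B.T))

eps = 1e-6
fd = (f_along(eps) - f_along(-eps)) / (2 * eps)  # central difference at t = 0
assert abs(fd - np.sum(A * (B.T @ B))) < 1e-4    # matches <A, G(B)>
```

Since $A$ and $B^\top B$ are symmetric, `np.sum(A * (B.T @ B))` equals $\mathop{\mathrm{tr}}(A B^\top B) = \langle A, G(B)\rangle$.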
\begin{Lemma}[Conditions for g-convexity] \label{lem:g-convexity} Let $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ satisfy the following condition: For each $B \in \R_{\rm ns}^{q\times q}$ there exist a matrix $G(B) \in \R_{\rm sym}^{q\times q}$ and a quadratic form $H(\cdot,B)$ on $\R_{\rm sym}^{q\times q}$ such that for $A \in \R_{\rm sym}^{q\times q}$, \begin{equation} \label{eq:Taylor2} f(B \exp(A) B^\top) \ = \ f(BB^\top) + \langle A, G(B)\rangle + 2^{-1} H(A,B) + o(\|A\|^2) \quad\text{as} \ A \to 0 . \end{equation} Then the function $f$ is geodesically convex if, and only if, \begin{equation} \label{eq:H.pos.semidefinite} H(A,B) \ \ge \ 0 \quad\text{for all} \ B \in \R_{\rm ns}^{q\times q} \ \text{and} \ A \in \R_{\rm sym}^{q\times q} . \end{equation} It is strictly geodesically convex if \begin{equation} \label{eq:H.pos.definite} H(A,B) \ > \ 0 \quad\text{for all} \ B \in \R_{\rm ns}^{q\times q} \ \text{and} \ A \in \R_{\rm sym}^{q\times q} \setminus \{0\} . \end{equation} \end{Lemma} \begin{Example} \label{ex2} The function $\Sigma \mapsto \log \mathop{\mathrm{tr}}\nolimits(\Sigma)$ is geodesically convex. 
For if $B \in \R_{\rm ns}^{q\times q}$ and $A \in \R_{\rm sym}^{q\times q}$, then \begin{align*} \log \mathop{\mathrm{tr}}\nolimits(B \exp(A) B^\top) \ = \ &\log \bigl( \mathop{\mathrm{tr}}\nolimits(BB^\top) + \mathop{\mathrm{tr}}\nolimits(BAB^\top) + 2^{-1} \mathop{\mathrm{tr}}\nolimits(B A^2 B^\top) + O(\|A\|^3) \bigr) \\ = \ &\log \mathop{\mathrm{tr}}\nolimits(\Sigma) + \log \Bigl( 1 + \frac{\mathop{\mathrm{tr}}\nolimits(BAB^\top)}{\mathop{\mathrm{tr}}\nolimits(BB^\top)} + 2^{-1} \frac{\mathop{\mathrm{tr}}\nolimits(B A^2 B^\top)}{\mathop{\mathrm{tr}}\nolimits(BB^\top)} + O(\|A\|^3) \Bigr) \\ = \ &\log \mathop{\mathrm{tr}}\nolimits(\Sigma) + \frac{\mathop{\mathrm{tr}}\nolimits(BAB^\top)}{\mathop{\mathrm{tr}}\nolimits(BB^\top)} \\ &+ \ 2^{-1} \Bigl( \frac{\mathop{\mathrm{tr}}\nolimits(B A^2 B^\top)}{\mathop{\mathrm{tr}}\nolimits(BB^\top)} - \Bigl( \frac{\mathop{\mathrm{tr}}\nolimits(BAB^\top)}{\mathop{\mathrm{tr}}\nolimits(BB^\top)} \Bigr)^2 \Bigr) + O(\|A\|^3) \end{align*} as $A \to 0$, where $\Sigma := BB^\top$, so \begin{align*} G(B) \ &= \ \mathop{\mathrm{tr}}\nolimits(BB^\top)^{-1} B^\top B \ = \ \|B\|^{-2} B^\top B , \\ H(A,B) \ &= \ \frac{\mathop{\mathrm{tr}}\nolimits(B A^2 B^\top)}{\mathop{\mathrm{tr}}\nolimits(BB^\top)} - \Bigl( \frac{\mathop{\mathrm{tr}}\nolimits(BAB^\top)}{\mathop{\mathrm{tr}}\nolimits(BB^\top)} \Bigr)^2 \ = \ \langle A^2, G(B)\rangle - \langle A, G(B)\rangle^2 . \end{align*} Obviously, $H(I_q,B) = 0$. But $H(A,B) > 0$ for all $A \not\in \{t I_q : t \in \mathbb{R}\}$. To show this let $A = U D(x) U^\top$ with $U \in \R_{\rm orth}^{q\times q}$ and $x \in \mathbb{R}^q$. Then for any integer $s \ge 0$, \[ \mathop{\mathrm{tr}}\nolimits(B A^s B^\top) \ = \ \mathop{\mathrm{tr}}\nolimits(BU D(x^s) (BU)^\top) \ = \ \sum_{j=1}^q w_j x_j^s \] with $w_j := \sum_{i=1}^q (BU)_{ij}^2 > 0$.
Consequently, \[ H(A,B) \ = \ \frac{\sum_{j=1}^q w_j x_j^2}{\sum_{j=1}^q w_j} - \biggl\{ \frac{\sum_{j=1}^q w_j x_j}{\sum_{j=1}^q w_j} \biggr\}^2 \ > \ 0 \] unless $x_1 = x_2 = \cdots = x_q$. But the latter condition would be equivalent to $A$ being a multiple of the identity matrix. \end{Example} \begin{Remark}[Smoothness and inversion] \label{rem:Inversion2} Suppose that $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ satisfies the second order smoothness assumption in Lemma~\ref{lem:g-convexity}. Then $\widetilde{f}(\Sigma) := f(\Sigma^{-1})$ satisfies this assumption, too: For any $B \in \R_{\rm ns}^{q\times q}$, as $\R_{\rm sym}^{q\times q} \ni A \to 0$, \[ \widetilde{f} \bigl( B \exp(A) B^\top \bigr) \ = \ \widetilde{f}(BB^\top) + \langle A, \widetilde{G}(B)\rangle + 2^{-1} \widetilde{H}(A, B) + o(\|A\|^2) \] with \begin{align*} \widetilde{G}(B) \ &:= \ - G(B^{-\top}) , \\ \widetilde{H}(A,B) \ &:= \ H(A,B^{-\top}) . \end{align*} \end{Remark} \begin{Remark}[Smoothness and exponential or power transformations] \label{rem:Transformation2} Suppose that a function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ satisfies the second order smoothness assumption in Lemma~\ref{lem:g-convexity}. For $c > 0$ let \[ f_c(\Sigma) \ := \ \exp(c f(\Sigma))/c . \] Then for any $B \in \R_{\rm ns}^{q\times q}$, as $\R_{\rm sym}^{q\times q} \ni A \to 0$, \[ f_c(B \exp(A) B^\top) \ = \ f_c(BB^\top) + \langle A, G_c(B)\rangle + 2^{-1} H_c(A, B) + o(\|A\|^2) \] with \begin{align*} G_c(B) \ &:= \ \exp(c f(BB^\top)) G(B) , \\ H_c(A,B) \ &:= \ \exp(c f(BB^\top)) \bigl( H(A,B) + c \langle A, G(B)\rangle^2 \bigr) .
\end{align*} Similarly, if $f > 0$ and \[ f_\gamma(\Sigma) \ = \ f(\Sigma)^\gamma/\gamma \] for $\gamma > 1$, then \[ f_\gamma(B \exp(A) B^\top) \ = \ f_\gamma(BB^\top) + \langle A, G_\gamma(B)\rangle + 2^{-1} H_\gamma(A, B) + o(\|A\|^2) \] with \begin{align*} G_\gamma(B) \ &:= \ f(BB^\top)^{\gamma-1} G(B) , \\ H_\gamma(A,B) \ &:= \ f(BB^\top)^{\gamma-1} H(A,B) + (\gamma - 1) f(BB^\top)^{\gamma-2} \langle A, G(B)\rangle^2 . \end{align*} \end{Remark} \begin{Remark}[Orthogonal transformations] \label{rem:Orthogonal transformations} For matrices $B, \widetilde{B} \in \R_{\rm ns}^{q\times q}$, the equation $BB^\top = \widetilde{B}\widetilde{B}^\top$ is equivalent to $\widetilde{B} = BU$ for some orthogonal matrix $U \in \R_{\rm orth}^{q\times q}$. For any function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ satisfying the second order smoothness assumption in Lemma~\ref{lem:g-convexity}, \begin{align*} G(BU) \ &= \ U^\top G(B) U \quad\text{and} \\ H(A,BU) \ &= \ H(UAU^\top, B) \quad\text{for} \ A \in \R_{\rm sym}^{q\times q} . \end{align*} In particular, neither the eigenvalues of $G(B)$ nor the set $\bigl\{ H(A,B) : A \in \R_{\rm sym}^{q\times q}, \|A\| = 1 \bigr\}$ change when $B$ is replaced with $BU$. The equations for $G(BU)$ and $H(\cdot,BU)$ follow from the fact that $BU \exp(A) (BU)^\top = B \exp(UAU^\top) B^\top$. Thus \[ f \bigl( BU \exp(A) (BU)^\top \bigr) \ = \ f(BB^\top) + \langle A, G(BU)\rangle + 2^{-1} H(A, BU) + o(\|A\|^2) \] coincides with \begin{align*} f \bigl( B \exp(UAU^\top) B^\top \bigr) \ &= \ f(BB^\top) + \langle UAU^\top, G(B)\rangle + 2^{-1} H(UAU^\top, B) + o(\|A\|^2) \\ &= \ f(BB^\top) + \langle A, U^\top G(B)U\rangle + 2^{-1} H(UAU^\top, B) + o(\|A\|^2) . \end{align*} \end{Remark} As explained in Supplement~\ref{sec:Auxiliary}, existence of second order Taylor expansions alone does not imply twice differentiability. But this is true under an additional continuity requirement on the quadratic terms.
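The expansions in Remark~\ref{rem:Transformation2} can also be confirmed by finite differences. The sketch below is ours (assuming NumPy/SciPy) and takes $f(\Sigma) = \log\mathop{\mathrm{tr}}(\Sigma)$, so $f_c(\Sigma) = \mathop{\mathrm{tr}}(\Sigma)^c/c$, with $G(B)$ and $H(A,B)$ as computed in Example~\ref{ex2}.

```python
import numpy as np
from scipy.linalg import expm

# For f(Sigma) = log tr(Sigma): G(B) = B^T B / tr(BB^T) and
# H(A,B) = <A^2, G(B)> - <A, G(B)>^2 (Example ex2). Then
# G_c(B) = tr(BB^T)^c G(B) and H_c(A,B) = tr(BB^T)^c (H + c <A,G>^2).
rng = np.random.default_rng(6)
q, c = 3, 2.0
B = rng.standard_normal((q, q))
A = rng.standard_normal((q, q)); A = (A + A.T) / 2

S = B @ B.T
G = (B.T @ B) / np.trace(S)
H = np.sum((A @ A) * G) - np.sum(A * G) ** 2
Gc = np.trace(S) ** c * G
Hc = np.trace(S) ** c * (H + c * np.sum(A * G) ** 2)

def fc(t):  # f_c along the geodesic t -> B exp(tA) B^T
    return float(np.trace(B @ expm(t * A) @ B.T) ** c / c)

eps = 1e-4
d1 = (fc(eps) - fc(-eps)) / (2 * eps)            # ~ <A, G_c(B)>
d2 = (fc(eps) - 2 * fc(0.0) + fc(-eps)) / eps**2  # ~ H_c(A, B)
assert abs(d1 - np.sum(A * Gc)) < 1e-3 * (1 + abs(d1))
assert abs(d2 - Hc) < 1e-3 * (1 + abs(d2))
```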
\begin{Lemma}[2nd order smoothness] \label{lem:smoothness2} For a function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ the following two conditions are equivalent: \noindent \textbf{(S2.i)} \ $f$ is twice continuously differentiable with gradient $\nabla f(\Sigma) \in \R_{\rm sym}^{q\times q}$ and Hessian operator $D^2f(\Sigma) : \R_{\rm sym}^{q\times q} \to \R_{\rm sym}^{q\times q}$ at $\Sigma \in \R_{{\rm sym},+}^{q\times q}$. \noindent \textbf{(S2.ii)} \ For each $B \in \R_{\rm ns}^{q\times q}$ there exist a matrix $G(B) \in \R_{\rm sym}^{q\times q}$ and a quadratic form $H(\cdot,B)$ on $\R_{\rm sym}^{q\times q}$ such that expansion \eqref{eq:Taylor2} is valid. Moreover, $H(A,B)$ is continuous in $B \in \R_{\rm ns}^{q\times q}$ for any fixed $A \in \R_{\rm sym}^{q\times q}$. \noindent In case of (S2.i-ii), for $A \in \R_{\rm sym}^{q\times q}$, \begin{align*} H(A,B) \ &= \ \langle A^2, G(B) \rangle + \langle BAB^\top, D^2f(BB^\top) BAB^\top\rangle \quad\text{for} \ B \in \R_{\rm ns}^{q\times q} , \\ \langle A, D^2f(\Sigma) A\rangle \ &= \ H(\Sigma^{-1/2}A\Sigma^{-1/2}, \Sigma^{1/2}) - \langle A\Sigma^{-1}A, \nabla f(\Sigma)\rangle \quad\text{for} \ \Sigma \in \R_{{\rm sym},+}^{q\times q} . \end{align*} \end{Lemma} \subsection{Scale-invariant functions} \label{subsec:Scale-invariance} Sometimes we consider \textsl{scale-invariant} functions $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ in the sense that \[ f(c\Sigma) \ = \ f(\Sigma) \quad\text{for arbitrary} \ \Sigma \in \R_{{\rm sym},+}^{q\times q} \ \text{and} \ c > 0 . \] If the function $f$ is differentiable, this property is equivalent to the following condition on its g-gradients $G(B)$: \[ \mathop{\mathrm{tr}}\nolimits(G(B)) \ = \ 0 \quad\text{for all} \ B \in \R_{\rm ns}^{q\times q} .
\] This follows essentially from the fact that for $t \in \mathbb{R}$, \begin{align*} f(e^tBB^\top) \ = \ f(B \exp(t I_q) B^\top) \ &= \ f(BB^\top) + \langle t I_q, G(B)\rangle + o(t) \\ &= \ f(BB^\top) + \mathop{\mathrm{tr}}\nolimits(G(B)) t + o(t) \end{align*} as $t \to 0$. If $f$ even satisfies the second order smoothness assumption in Lemma~\ref{lem:g-convexity}, then \[ H(I_q,B) \ = \ 0 \quad\text{for all} \ B \in \R_{\rm ns}^{q\times q} , \] because \[ f(e^tBB^\top) \ = \ f(BB^\top) + \mathop{\mathrm{tr}}\nolimits(G(B)) t + H(I_q,B) t^2/2 + o(t^2) . \] A scale-invariant function $f$ on $\R_{{\rm sym},+}^{q\times q}$ is geodesically convex if, and only if, $f$ is geodesically convex on the g-convex submanifold $\mathbb{M}^{(q)} = \{\Sigma \in \R_{{\rm sym},+}^{q\times q} : \det(\Sigma) = 1\}$ introduced earlier. For if $\Sigma_t = B \exp(tA) B^\top$ for $t \in \mathbb{R}$ with arbitrary $B \in \R_{\rm ns}^{q\times q}$ and $A \in \R_{\rm sym}^{q\times q}$, then $\det(\Sigma_t) = \det(B)^2 \exp(t \mathop{\mathrm{tr}}\nolimits(A))$, and \[ f(\Sigma_t) \ = \ f \bigl( \det(\Sigma_t)^{-1/q} \Sigma_t \bigr) \ = \ f(B_o^{} \exp(t A_o) B_o^\top) \] with $B_o := |\det(B)|^{-1/q} B$ satisfying $\det(B_o) = \pm 1$ and $A_o := A - (\mathop{\mathrm{tr}}\nolimits(A)/q) I_q$ belonging to the subspace $\mathbb{W}^{(q)}$ of symmetric matrices with trace $0$. To minimize a scale-invariant function $f$, one may restrict one's attention to matrices in $\mathbb{M}^{(q)}$. Then the previous considerations can be adapted as follows: \paragraph{A criterion for strict g-convexity.} Suppose that $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ is scale-invariant and satisfies the second order smoothness assumption of Lemma~\ref{lem:g-convexity}. Then it is strictly geodesically convex on $\mathbb{M}^{(q)}$ if $H(A,B) > 0$ for all $B \in \R_{\rm ns}^{q\times q}$ and $A \in \mathbb{W}^{(q)} \setminus \{0\}$.
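A small numerical illustration of scale invariance (ours, assuming NumPy): the function $f(\Sigma) = \log\mathop{\mathrm{tr}}(\Sigma) - q^{-1}\log\det(\Sigma)$ is scale-invariant, and combining Examples~\ref{ex0} and \ref{ex2}, its g-gradient $G(B) = \|B\|^{-2} B^\top B - q^{-1} I_q$ indeed has trace zero.

```python
import numpy as np

# f(Sigma) = log tr(Sigma) - (1/q) log det(Sigma) is scale-invariant;
# its g-gradient G(B) = B^T B / tr(BB^T) - I_q / q has trace zero.
rng = np.random.default_rng(2)
q = 4
B = rng.standard_normal((q, q))
S = B @ B.T

def f(M):
    return float(np.log(np.trace(M)) - np.linalg.slogdet(M)[1] / q)

for c in (0.1, 3.0, 100.0):        # f(c Sigma) = f(Sigma) for all c > 0
    assert abs(f(c * S) - f(S)) < 1e-10

G = (B.T @ B) / np.trace(S) - np.eye(q) / q
assert abs(np.trace(G)) < 1e-12    # tr(G(B)) = 0, as the theory predicts
```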
\paragraph{Minimizers and g-coercivity.} All results of Section~\ref{subsec:Minimizers.g-coercivity} carry over with the following modifications: We restrict our attention to matrices $\Sigma \in \mathbb{M}^{(q)}$, to matrices $B \in \R_{\rm ns}^{q\times q}$ with $\det(B) = \pm 1$ and to matrices $A \in \mathbb{W}^{(q)}$. In particular, a matrix $\Sigma = BB^\top \in \mathbb{M}^{(q)}$ minimizes a g-convex function $f$ on $\mathbb{M}^{(q)}$ if, and only if, \[ \lim_{t \to 0\,+} \frac{f(B\exp(tA) B^\top) - f(BB^\top)}{t} \ \ge \ 0 \quad\text{for all} \ A \in \mathbb{W}^{(q)} . \] A function $f$ is said to be geodesically coercive on $\mathbb{M}^{(q)}$ if \[ f(\exp(A)) \ \to \ \infty \quad\text{as} \ \|A\| \to \infty, A \in \mathbb{W}^{(q)} . \] In case of a continuous and g-convex function $f$, a necessary and sufficient condition for this is \[ \lim_{t \to \infty} \ \lim_{u \to t\,+} \frac{f(\exp(uA)) - f(\exp(tA))}{u - t} \ > \ 0 \quad\text{whenever} \ A \in \mathbb{W}^{(q)} \setminus \{0\} . \] \section{Regularized $M$-estimators of scatter} \label{sec:Regularized.scatter} \subsection{Scatter functionals} \label{subsec:M-scatter} We now apply the results of the previous section to the problem of regularized $M$-functionals and $M$-estimators of scatter. Before doing so, we first briefly consider the non-penalized case, i.e.\ minimizing \[ L_\rho(\Sigma,Q) \ := \ \int \bigl[ \rho(x^\top\Sigma^{-1}x) - \rho(\|x\|^2) \bigr] \, Q(dx) + \log \det(\Sigma) . \] In what follows we summarize various results from \cite{Zhang_etal_2013} and \cite{Duembgen_etal_2015} in a slightly more general setting. The former paper considered only empirical distributions $Q = Q_n$ whereas the latter survey paper considered general distributions $Q$ but only differentiable functions $\rho$ satisfying additional constraints. Throughout we assume that $\rho(s)$ is non-decreasing and g-convex in $s > 0$, that means, $h(x) := \rho(e^x)$ is non-decreasing and convex in $x \in \mathbb{R}$. 
In particular, $\rho$ is continuous with left- and right-sided derivatives on $\mathbb{R}_+$, and \[ \psi(s) \ := \ \begin{cases} 0 & \text{if} \ s = 0 , \\ s \rho'(s\,+) & \text{if} \ s > 0 \end{cases} \] defines a non-decreasing function on $[0,\infty)$. Note that $\psi(e^x) = h'(x\,+)$ for $x \in \mathbb{R}$. Thus strict g-convexity of $\rho$ on $\mathbb{R}_+$ is equivalent to $\psi$ being strictly increasing on $[0,\infty)$. The next proposition clarifies under which conditions on $\rho$ and $Q$ the objective function $L_\rho(\Sigma,Q)$ is well-defined for arbitrary $\Sigma \in \R_{{\rm sym},+}^{q\times q}$. In particular, a sufficient condition for that is $\psi(\infty) < \infty$ or $Q$ having bounded support. \begin{Proposition} \label{prop:existence} The integral $\int \bigl| \rho(x^\top\Sigma^{-1}x) - \rho(\|x\|^2) \bigr| \, Q(dx)$ is finite for arbitrary $\Sigma \in \R_{{\rm sym},+}^{q\times q}$ if, and only if, \begin{equation} \label{eq:existence} \int \psi(\lambda \|x\|^2) \, Q(dx) \ < \ \infty \quad\text{for arbitrary} \ \lambda \ge 1 . \end{equation} In case of $\rho'(s\,+)$ being non-increasing in $s > 0$, the latter condition is equivalent to \[ \int \psi(\|x\|^2) \, Q(dx) \ < \ \infty . \] \end{Proposition} The following theorem regarding the g-convexity of $L_\rho(\Sigma,Q)$ follows essentially from examples \ref{ex0} and \ref{exlog} plus some extra arguments, see Supplement~\ref{sec:Auxiliary}. It is an extension of Theorem~1(a) of \cite{Zhang_etal_2013}, who considered the case $Q = Q_n$, and of Proposition~5.4 of \cite{Duembgen_etal_2015}, who considered differentiable functions $\rho$: \begin{Theorem} \label{thm:Mfunc} Under Condition~\eqref{eq:existence}, $L_\rho(\Sigma,Q)$ is continuous and geodesically convex in $\Sigma \in \R_{{\rm sym},+}^{q\times q}$. Furthermore, \noindent \textbf{(a)} suppose that $\rho(s)$ is strictly g-convex in $s > 0$. 
Then $L_\rho(\cdot,Q)$ is strictly geodesically convex if, and only if, \[ Q(\mathbb{V}) \ < \ 1 \] for any linear subspace $\mathbb{V}$ of $\mathbb{R}^q$ with $\dim(\mathbb{V}) < q$. \noindent \textbf{(b)} suppose that $\rho(s) = q \log s$ for $s > 0$. Then $L_\rho(\cdot,Q)$ is strictly geodesically convex on $\mathbb{M}^{(q)}$ if, and only if, \[ Q(\mathbb{V} \cup \mathbb{W}) \ < \ 1 \] for arbitrary linear subspaces $\mathbb{V}, \mathbb{W} \subsetneq \mathbb{R}^q$ with $\mathbb{V} \cap \mathbb{W} = \{0\}$. \end{Theorem} The special function $\rho(s) = q \log s$ in part~(b) corresponds to the distribution-free $M$-estimator of scatter introduced in \cite{Tyler_1987a}, and it is the setting for which geodesic convexity was first applied to $M$-estimation \cite{Auderset_etal_2005,Wiesel_2012}. The corresponding objective function $L_\rho(\cdot,Q)$ is scale-invariant if $Q(\{0\}) = 0$. Results on the g-coercivity of $L_\rho(\Sigma,Q)$ can be obtained by extending Lemma 2.2 of \cite{Kent_Tyler_1991} from $Q_n$ to general $Q$, see also Theorem~1(b) of \cite{Zhang_etal_2013} and Proposition~5.5 of \cite{Duembgen_etal_2015}. Lemma~\ref{lem:g-coercivity} allows for a complete answer in the present general framework, starting from the following proposition. \begin{Proposition} \label{prop:g-coercivity} Let $A = U D(-\gamma) U^\top$ with $U = [u_1, u_2, \ldots, u_q] \in \R_{\rm orth}^{q\times q}$ and $\gamma \in \mathbb{R}^q$ satisfying $\gamma_1 \le \gamma_2 \le \cdots \le \gamma_q$. Then \begin{align} \nonumber \lim_{t \to \infty} \ &\lim_{u \to t\,+} \, \frac{L_\rho(\exp(uA),Q) - L_\rho(\exp(tA),Q)}{u - t} \\ \label{eq:g-coercivity} &= \ \sum_{j=1}^q Q(\mathbb{V}_j\setminus\mathbb{V}_{j-1}) \bigl( \psi(\infty) \gamma_j^+ - \psi(0\,+) \gamma_j^- \bigr) - \sum_{j=1}^q \gamma_j , \end{align} where $\mathbb{V}_0 := \{0\}$ and $\mathbb{V}_j = \mathrm{span}(u_1,u_2,\ldots,u_j)$ for $1 \le j \le q$. Furthermore, $a^{\pm} := \max\{\pm a, 0\}$ for $a \in \mathbb{R}$. 
\noindent \textbf{(a)} \ Specifically let $\psi(0\,+) = 0 < \psi(\infty)$. Then the previous limit may be rewritten as \[ \sum_{k=0}^{q-1} \bigl( (1 - Q(\mathbb{V}_k)) \psi(\infty) - q + k \bigr) (\gamma_{k+1}^+ - \gamma_k^+) + \sum_{j=1}^q \gamma_j^- . \] \noindent \textbf{(b)} \ Specifically let $\rho(s) := q \log s$ for $s > 0$. Then $\psi \equiv q$ on $\mathbb{R}_+$, and the previous limit may be rewritten as \[ q \sum_{k=1}^{q-1} (k/q - Q(\mathbb{V}_k)) (\gamma_{k+1} - \gamma_k) - q Q(\{0\}) \gamma_1 . \] \end{Proposition} This proposition will be used later in connection with regularized scatter functionals. In the present context it implies necessary and sufficient conditions for g-coercivity in the following two settings: \bigskip \noindent \textbf{Setting 0.} \ $\rho(s) = q \log s$ for $s > 0$, and $Q(\{0\}) = 0$. \noindent \textbf{Setting 1.} \ $\psi(0\,+) = 0$, $q < \psi(\infty) \le \infty$, and $Q$ satisfies \eqref{eq:existence}. \bigskip \begin{Theorem}\strut \label{thm:g-coercivity} \noindent \textbf{(a)} \ In Setting~1, $L_\rho(\cdot,Q)$ is geodesically coercive if, and only if, \begin{equation} \label{eq:g-coercivity1} Q(\mathbb{V}) \ < \ 1 - \frac{\{q - \dim(\mathbb{V})\}}{\psi(\infty)} \end{equation} for all linear subspaces $\mathbb{V} \subset \mathbb{R}^q$ with $0 \le \dim(\mathbb{V}) < q$. If in addition $\psi$ is strictly increasing on $\{s \ge 0: \psi(s) < \psi(\infty)\}$, then $L_\rho(\cdot,Q)$ has a unique minimizer. \noindent \textbf{(b)} \ In Setting~0, $L_\rho(\cdot,Q)$ is geodesically coercive on $\mathbb{M}^{(q)}$ if, and only if, \begin{equation} \label{eq:g-coercivity0} Q(\mathbb{V}) \ < \ \frac{\dim(\mathbb{V})}{q} \end{equation} for all linear subspaces $\mathbb{V} \subset \mathbb{R}^q$ with $1 \le \dim(\mathbb{V}) < q$. In this case, $L_\rho(\cdot,Q)$ has a unique minimizer on $\mathbb{M}^{(q)}$. 
\end{Theorem} Note that the condition in part~(a) of Theorem~\ref{thm:g-coercivity} is precisely Condition~1 mentioned in Section~\ref{sec:Background}. The additional assumption for uniqueness of the minimizer covers $M$-estimators of scatter as proposed in \cite{Maronna_1976,Huber_1981} with functions $\rho$ which are not strictly g-convex on the whole positive half-line. In part~(b) the condition $Q(\{0\}) = 0$ can be eliminated by replacing $Q$ with $\mathcal{L}(X \,|\, X \ne 0)$, $X \sim Q$. The conclusion of part~(b) is well known, see \cite{Duembgen_Tyler_2005} and \cite{Duembgen_etal_2015}. In connection with the algorithms introduced later we need objective functions $L_\rho(\cdot,Q)$ which are twice continuously differentiable. In Setting~0 this is the case, but Setting~1 will be replaced with the following one: \bigskip \noindent \textbf{Setting~2.} \ $\rho$ is twice continuously differentiable on $\mathbb{R}_+$ such that $\psi(s) \ := \ s \rho'(s)$ is strictly increasing in $s \in \mathbb{R}_+$ with limits $\psi(0) = 0$ and $\psi(\infty) \in (q, \infty]$. Moreover, for some constant $\kappa > 0$, $s \psi'(s) \le \kappa \psi(s)$ for all $s \in \mathbb{R}_+$. \bigskip \begin{Lemma}[cf.\ \cite{Duembgen_etal_2015}] \label{lem:g-convexity.log.likelihood} For $B \in \R_{\rm ns}^{q\times q}$ and $A \in \R_{\rm sym}^{q\times q}$, under Settings~0 and 2, \[ L_\rho(B \exp(A) B^\top, Q) - L_\rho(BB^\top,Q) \ = \ \langle A, G_\rho(Q_B)\rangle + 2^{-1} H_\rho(A, Q_B) + o(\|A\|^2) \] as $A \to 0$, where \[ Q_B \ := \ \mathcal{L}(B^{-1} X), X \sim Q , \] and \begin{align*} G_\rho(Q) \ &:= \ I_q - \Psi_\rho(Q), \\ \Psi_\rho(Q) \ &:= \ \int \rho'(\|x\|^2) xx^\top \, Q(dx) , \\ H_\rho(A,Q) \ &:= \ \langle A^2, \Psi_\rho(Q)\rangle + \int \rho''(\|x\|^2) (x^\top Ax)^2 \, Q(dx) . 
\end{align*} Moreover, $H_\rho(A,Q) \ge 0$ with equality if, and only if, \[ \begin{cases} Q \bigl( \bigcup_{j=1}^m \mathbb{V}_j \bigr) = 1 & \text{in Setting~0} , \\ Q(\mathcal{N}_A) = 1 & \text{in Setting~2} . \end{cases} \] Here $\mathbb{V}_1, \mathbb{V}_2, \ldots, \mathbb{V}_m$ are the different eigenspaces of $A$, and $\mathcal{N}_A := \{x \in \mathbb{R}^q : Ax = 0\}$. \end{Lemma} \subsection{Regularization} \label{subsec:Regularization} As noted in the introduction, most research on robust estimation of scatter has centered on the unrestricted estimation of the scatter matrix. But the previous results imply that a unique minimizer of $L_\rho(\cdot,Q)$ can only exist if $Q(\mathbb{V}) < 1$ for any proper linear subspace $\mathbb{V}$ of $\mathbb{R}^q$. This excludes empirical distributions $Q_n$ with sample size $n < q$. Some previous work on regularization does exist, with one approach being to introduce a regularization or shrinkage term to the $M$-estimating equations \eqref{eq:Mee}, as is done for the special function $\rho(s) = q \log s$ in \cite{Chen_etal_2011, Couillet_McKay_2014, Pascal_etal_2014, Wiesel_2012b} and for more general $M$-estimates in \cite{Abramovich_etal_2013, Besson_etal_2013}. Proving existence and/or uniqueness of solutions to regularized $M$-estimation equations, though, is not straightforward, and most of the work using this approach does not include conditions to insure such properties. Here, we consider a penalized objective function approach, that is, we aim to minimize over $\Sigma \in \R_{{\rm sym},+}^{q\times q}$ the function \begin{equation} \label{eq:Lpen} f_\alpha(\Sigma) \ := \ L_\rho(\Sigma,Q) + \alpha \pi(\Sigma) \end{equation} for some tuning parameter $\alpha > 0$ and penalty function $\pi : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$. 
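For concreteness, the criterion \eqref{eq:Lpen} is easily evaluated for an empirical distribution $Q_n$. The following numpy sketch is purely illustrative: the function names and the normalization $L_\rho(\Sigma,Q_n) = n^{-1} \sum_i \rho(x_i^\top \Sigma^{-1} x_i^{}) + \log\det(\Sigma)$ are our choices, with Tyler's $\rho(s) = q \log s$ and the simple penalty $\mathop{\mathrm{tr}}\nolimits(\Sigma^{-1})$.

```python
import numpy as np

def L_rho(Sigma, X, rho):
    # empirical M-functional: average of rho(x' Sigma^{-1} x) plus log det(Sigma)
    s = np.einsum('ij,jk,ik->i', X, np.linalg.inv(Sigma), X)
    return rho(s).mean() + np.linalg.slogdet(Sigma)[1]

def f_alpha(Sigma, X, rho, alpha, pi):
    # penalized objective: L_rho(Sigma, Q_n) + alpha * pi(Sigma)
    return L_rho(Sigma, X, rho) + alpha * pi(Sigma)

q = 5
rho_tyler = lambda s: q * np.log(s)            # Tyler's rho(s) = q log s
pi_inv = lambda S: np.trace(np.linalg.inv(S))  # penalty tr(Sigma^{-1})

rng = np.random.default_rng(0)
X = rng.standard_normal((20, q))               # empirical Q_n with n = 20

# Tyler's L_rho is scale-invariant, so only the penalty reacts to rescaling:
v1 = L_rho(np.eye(q), X, rho_tyler)
v2 = L_rho(3.0 * np.eye(q), X, rho_tyler)
```

Since $L_\rho(\cdot,Q_n)$ is scale-invariant for this $\rho$, only the penalty term changes under rescaling of $\Sigma$; this is what a penalty must compensate for to obtain coercivity.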
For the special function $\rho(s) = q \log s$, the empirical version of this approach has been considered in \cite{Wiesel_2012b} for certain g-convex penalties, although coercivity is not treated and consequently conditions for existence are not given. The empirical version is also studied in \cite{Ollila_Tyler_2014} for general g-convex $\rho$-functions and general g-convex penalties, but conditions for coercivity are only given for the penalty function $\mathop{\mathrm{tr}}\nolimits(\Sigma^{-1})$. \begin{Remark}[The graphical lasso] A popular penalty function is the $l_1$ penalty on the off-diagonal elements of $\Sigma^{-1}$, i.e.\ when $\pi(\Sigma) = \sum_{i<j} | (\Sigma^{-1})_{ij}|$. In the classical setting, i.e.\ when $L_\rho(\Sigma,Q)$ is taken to be proportional to the multivariate normal negative log-likelihood functional, the problem of minimizing \eqref{eq:Lpen} using this $l_1$ penalty is commonly referred to as a graphical lasso. For this case, as $\alpha$ increases the solutions produce a path of increasing zeros in the off-diagonal elements of $\Sigma^{-1}$. A robust graphical lasso can be constructed by considering general $L_\rho(\Sigma,Q)$, as has been proposed e.g.\ in \cite{Finegold_Drton_2011} for the case when $L_\rho(\Sigma,Q)$ is proportional to the negative log-likelihood of an elliptical t-distribution. One drawback to this approach is that when using $\rho$-functions which yield bounded influence estimators, the function $L_\rho(\Sigma,Q)$ is not convex in $\Sigma^{-1}$ and consequently as $\alpha$ increases the solution path may not yield increasing zeros in the off-diagonal elements of $\Sigma^{-1}$. Moreover, as shown in Supplement~\ref{sec:Auxiliary}, this $l_1$ penalty is not g-convex. So even when $L_\rho(\Sigma,Q)$ is strictly g-convex, the uniqueness of a solution to \eqref{eq:Lpen} is not guaranteed. 
\end{Remark} Here, we are interested in considering \eqref{eq:Lpen} for the case when both $L_\rho(\Sigma,Q)$ and $\pi(\Sigma)$ are g-convex. Obviously this implies that the penalized objective function $f$ is g-convex, too. Moreover, if either $L_\rho(\Sigma,Q)$ or $\pi(\Sigma)$ is strictly g-convex, then $f$ is strictly g-convex as well. Note that these considerations apply to the special case when $L_\rho(\Sigma,Q)$ is taken to be proportional to the multivariate normal negative log-likelihood functional, i.e.\ $\rho(s) = s$. For this case, $L_\rho(\Sigma,Q)$ is not only strictly convex in $\Sigma^{-1}$, it is also strictly g-convex in $\Sigma^{-1}$ and hence in $\Sigma$. Thus, in this classical setting, in addition to penalty functions which are convex in $\Sigma^{-1}$, penalty functions which are g-convex in $\Sigma$ also ensure the uniqueness of a minimum to \eqref{eq:Lpen}, provided a minimum exists. The existence of a minimizer to \eqref{eq:Lpen} depends on the geodesic coercivity of $f(\Sigma)$, which in turn depends on the behavior of $L_\rho(\Sigma,Q)$ and $\pi(\Sigma)$ as $\|\log(\Sigma)\| \to \infty$. For $L_\rho(\Sigma,Q)$, Proposition~\ref{prop:g-coercivity} provides a complete answer, so it remains to specify and investigate the penalties $\pi(\Sigma)$. \paragraph{Shrinkage towards $I_q$.} Functions which penalize deviations from $I_q$ are \begin{align*} \Pi_0(\Sigma) \ &:= \ \mathop{\mathrm{tr}}\nolimits(\Sigma) + \mathop{\mathrm{tr}}\nolimits(\Sigma^{-1}) \ = \ \sum_{i=1}^q (\sigma_i + \sigma_i^{-1}) , \\ \Pi_1(\Sigma) \ &:= \ \log\det(\Sigma) + \mathop{\mathrm{tr}}\nolimits(\Sigma^{-1}) \ = \ \sum_{i=1}^q (\log \sigma_i + \sigma_i^{-1}) , \\ \Pi_2(\Sigma) \ &:= \ \| \log(\Sigma) \|^2 \ = \ \sum_{i=1}^q (\log \sigma_i)^2 , \end{align*} where $\sigma_1 \ge \cdots \ge \sigma_q > 0$ are the eigenvalues of $\Sigma$. In all three cases, $\Sigma = I_q$ is the unique minimizer. 
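All three penalties are functions of the eigenvalues of $\Sigma$ alone and are therefore cheap to evaluate. A minimal numpy sketch (the helper name is ours), mirroring the eigenvalue formulas above:

```python
import numpy as np

def Pi_penalties(Sigma):
    # evaluate the shrinkage penalties Pi_0, Pi_1, Pi_2 via the eigenvalues of Sigma
    sig = np.linalg.eigvalsh(Sigma)        # eigenvalues of a symmetric matrix
    Pi0 = np.sum(sig + 1.0 / sig)          # tr(Sigma) + tr(Sigma^{-1})
    Pi1 = np.sum(np.log(sig) + 1.0 / sig)  # log det(Sigma) + tr(Sigma^{-1})
    Pi2 = np.sum(np.log(sig) ** 2)         # || log(Sigma) ||^2
    return Pi0, Pi1, Pi2
```

At $\Sigma = I_q$ the three penalties attain their minimal values $2q$, $q$ and $0$, respectively, and any other $\Sigma$ yields strictly larger values.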
Note that $\Pi_2(\Sigma)$ is just the square of the geodesic distance $d_g(I_q,\Sigma)$. While $\Pi_0$ and $\Pi_2$ satisfy the symmetry relation $\Pi(\Sigma^{-1}) = \Pi(\Sigma)$, the penalty $\Pi_1(\Sigma)$ is non-symmetric, penalizing very small eigenvalues more severely than very large ones. It corresponds to the Kullback-Leibler divergence between $\mathcal{N}_q(0,\Sigma)$ and $\mathcal{N}_q(0,I_q)$ and has been previously considered in \cite{Sun_etal_2014}. In principle one could also use the penalty $\Pi_1'(\Sigma) = \Pi_1(\Sigma^{-1})$, but from a statistical perspective this seems to be less reasonable. The next lemma summarizes the essential properties of these penalties. \begin{Lemma} \label{lem:all.about.Pi} For $k = 0,1,2$, the penalty function $\Pi_k$ is twice continuously differentiable and strictly geodesically convex on $\R_{{\rm sym},+}^{q\times q}$ with a unique minimum at $I_q$. \noindent Precisely, for any $B \in \R_{\rm ns}^{q\times q}$, as $\R_{\rm sym}^{q\times q} \ni A \to 0$, \[ \Pi_k(B \exp(A) B^\top) \ = \ \Pi_k(BB^\top) + \langle A, G_k(B)\rangle + 2^{-1} H_k(A,B) + o(\|A\|^2) \] with $G_k(B)$ and $H_k(A,B)$ given in the following table: \[ \begin{array}{|c||c|c|} \hline k & G_k(B) & H_k(A,B) \\ \hline\hline 0_{\strut}^{\strut} & B^\top B - B^{-1} B^{-\top} & \langle A^2, B^\top B + B^{-1} B^{-\top} \rangle \\ \hline 1_{\strut}^{\strut} & I_q - B^{-1} B^{-\top} & \langle A^2, B^{-1} B^{-\top} \rangle \\ \hline 2_{\strut}^{\strut} & 2 \log(B^\top B) & 2 \sum_{i,j=1}^q W_{ij}(\lambda) (v_i^\top A v_j)^2 \\ \hline \end{array} \] Here $B^\top B = V D(e^\lambda) V^\top$ with $V = [v_1, v_2, \ldots, v_q] \in \R_{\rm orth}^{q\times q}$ and $\lambda \in \mathbb{R}^q$, and \[ W_{ij}(\lambda) \ := \ \frac{(\lambda_i - \lambda_j)/2}{ \tanh((\lambda_i - \lambda_j)/2)} \ \ge \ 1 \] with the convention $0/\tanh(0) := 1$. In particular, $H_k(A,B) > 0$ whenever $A \ne 0$. 
\noindent Moreover, if $A = U D(-\gamma) U^\top$ with $U \in \R_{\rm orth}^{q\times q}$ and $\gamma \in \mathbb{R}^q \setminus \{0\}$ such that $\gamma_1 \le \gamma_2 \le \cdots \le \gamma_q$, then \[ \lim_{t \to \infty} \, \frac{d}{dt} \Pi_k(\exp(tA)) \ = \ \begin{cases} \infty &\text{if} \ k = 0 , \\ 1_{[\gamma_q > 0]} \infty - \sum_{i=1}^q \gamma_i &\text{if} \ k = 1 , \\ \infty &\text{if} \ k = 2 . \end{cases} \] \end{Lemma} This lemma and Theorem~\ref{thm:Mfunc} together show that using any of the penalties $\Pi_0$, $\Pi_1$ or $\Pi_2$ together with a g-convex function $\rho$ yields an objective function $f$ in \eqref{eq:Lpen} which is strictly g-convex. In particular, by Corollary~\ref{cor:uniqueness.minimizer}, \eqref{eq:Lpen} has a unique minimizer or no minimizer. With $\Pi_0$ or $\Pi_2$, g-coercivity and thus existence of a unique minimizer is guaranteed, regardless of $Q$. This is in contrast to the non-regularized case for which conditions on $Q$ are needed to insure the existence of a minimizer. Shrinkage towards a different given matrix $\Sigma_o \in \R_{{\rm sym},+}^{q\times q}$ is obtained by replacing $\Sigma$ in $\Pi_k(\Sigma)$ with $\Sigma_o^{-1/2} \Sigma \Sigma_o^{-1/2}$. \paragraph{Shrinkage towards multiples of $I_q$.} Functions which penalize large condition numbers $\sigma_1/\sigma_q$ of $\Sigma$ are given by \begin{align*} \pi_0(\Sigma) \ &:= \ \log\mathop{\mathrm{tr}}\nolimits(\Sigma) + \log\mathop{\mathrm{tr}}\nolimits(\Sigma^{-1}) \ = \ \log \Bigl( \sum_{i=1}^q \sigma_i \Bigr) + \log \Bigl( \sum_{i=1}^q \sigma_i^{-1} \Bigr) , \\ \pi_1(\Sigma) \ &:= \ q^{-1} \log\det(\Sigma) + \log\mathop{\mathrm{tr}}\nolimits(\Sigma^{-1}) \ = \ q^{-1} \sum_{i=1}^q \log \sigma_i + \log \Bigl( \sum_{i=1}^q \sigma_i^{-1} \Bigr) , \\ \pi_2(\Sigma) \ &:= \ \Pi_2(\det(\Sigma)^{-1/q} \Sigma) \ = \ \sum_{i=1}^q \Bigl( \log \sigma_i - q^{-1} \sum_{j=1}^q \log \sigma_j \Bigr)^2 . 
\end{align*} All three functions are scale-invariant with $\Sigma$ minimizing $\pi_j(\Sigma)$ if, and only if, $\Sigma$ is a positive multiple of $I_q$. Moreover, $\pi_0$ and $\pi_2$ satisfy the symmetry relation $\pi(\Sigma^{-1}) = \pi(\Sigma)$, whereas $\pi_1(\Sigma)$ penalizes relatively small eigenvalues more severely than relatively large ones. Here are the main facts: \begin{Lemma} \label{lem:all.about.pi} For $k = 0,1,2$, the penalty function $\pi_k$ is scale-invariant, twice continuously differentiable and geodesically convex. On $\mathbb{M}^{(q)}$ it is strictly geodesically convex with a unique minimum at $I_q$. \noindent Precisely, for any $B \in \R_{\rm ns}^{q\times q}$, as $\R_{\rm sym}^{q\times q} \ni A \to 0$, \[ \pi_k(B \exp(A) B^\top) \ = \ \pi_k(BB^\top) + \langle A, G_k(B)\rangle + 2^{-1} H_k(A,B) + o(\|A\|^2) \] with $G_k(B)$ and $H_k(A,B)$ given in the following table: \[ \begin{array}{|c||c|c|} \hline k & G_k(B) & H_k(A,B) \\ \hline\hline 0^{\strut} & N(B^\top B) - N(B^{-1} B^{-\top}) & \langle A^2, N(B^\top B) \rangle - \langle A, N(B^\top B)\rangle^2 \\ \strut_{\strut} & & + \ \langle A^2, N(B^{-1}B^{-\top})\rangle - \langle A, N(B^{-1} B^{-\top})\rangle^2 \\ \hline 1_{\strut}^{\strut} & q^{-1} I_q - N(B^{-1}B^{-\top}) & \langle A^2, N(B^{-1} B^{-\top}) \rangle - \langle A, N(B^{-1} B^{-\top}) \rangle^2 \\ \hline 2_{\strut}^{\strut} & 2 \log(B^\top B)^o & 2 \sum_{i,j=1}^q W_{ij}(\lambda) (v_i^\top A^o v_j)^2 \\ \hline \end{array} \] Here $N(\Sigma) := \mathop{\mathrm{tr}}\nolimits(\Sigma)^{-1} \Sigma$, $C^o := C - q^{-1} \mathop{\mathrm{tr}}\nolimits(C) I_q$ for $C \in \R_{\rm sym}^{q\times q}$, and $V = [v_1, \ldots, v_q]$, $\lambda$, $W_{ij}(\lambda)$ are defined as in Lemma~\ref{lem:all.about.Pi}. In particular, $H_k(A,B) > 0$ whenever $A \in \mathbb{W}^{(q)} \setminus \{0\}$. 
\noindent Moreover, if $A = U D(-\gamma) U^\top$ with $U \in \R_{\rm orth}^{q\times q}$ and $\gamma \in \mathbb{R}^q$ such that $\gamma_1 \le \gamma_2 \le \cdots \le \gamma_q$ and $\gamma_1 < \gamma_q$, \[ \lim_{t \to \infty} \, \frac{d}{dt} \pi_k(\exp(tA)) \ = \ \begin{cases} \gamma_q - \gamma_1 &\text{if} \ k = 0 \\ \gamma_q - \bar{\gamma} &\text{if} \ k = 1 \\ \infty &\text{if} \ k = 2 \end{cases} \] with $\bar{\gamma} := q^{-1} \sum_{i=1}^q \gamma_i$. \end{Lemma} Of course one could replace any of these penalties $\pi_k(\Sigma)$ with a non-decreasing convex function thereof. As pointed out in Remark~\ref{rem:Transformation}, this would preserve geodesic convexity. \paragraph{A scale-invariant example.} We consider the special case where $\rho(s) = q \log s$ for $s > 0$ and $Q(\{0\}) = 0$. Since $L_\rho(\Sigma,Q)$ is scale-invariant, it is natural to choose a penalty which is scale-invariant, too, and to treat $f$ as a function on $\mathbb{M}^{(q)}$. If $\pi$ is strictly g-convex on the latter set, then $f$ inherits this property. As to g-coercivity, let $A = U D(-\gamma) U^\top$ with $U = [u_1, \ldots, u_q] \in \R_{\rm orth}^{q\times q}$ and $\gamma \in \mathbb{R}^q$ such that $\gamma_1 \le \cdots \le \gamma_q$ and $\gamma_1 < \gamma_q$. If $\pi = \pi_0$, then \begin{align*} \lim_{t \to \infty} \frac{d}{dt} f(\exp(tA)) \ &= \ q \sum_{k=1}^{q-1} (k/q - Q(\mathbb{V}_k))(\gamma_{k+1} - \gamma_k) + \alpha (\gamma_q - \gamma_1) \\ &= \ q \sum_{k=1}^{q-1} \bigl( (k + \alpha)/q - Q(\mathbb{V}_k) \bigr) (\gamma_{k+1} - \gamma_k) . \end{align*} Thus $f$ is g-coercive on $\mathbb{M}^{(q)}$ if, and only if, \[ Q(\mathbb{V}) \ < \ (\dim(\mathbb{V}) + \alpha)/q \] for any subspace $\mathbb{V}$ of $\mathbb{R}^q$ with $1 \le \dim(\mathbb{V}) < q$. 
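The scale-invariant penalties entering these calculations are again functions of the eigenvalues only. A minimal numpy sketch (the helper name is ours) which also verifies the scale invariance noted above:

```python
import numpy as np

def pi_penalties(Sigma):
    # evaluate the scale-invariant penalties pi_0, pi_1, pi_2 via eigenvalues
    sig = np.linalg.eigvalsh(Sigma)
    log_sig = np.log(sig)
    pi0 = np.log(sig.sum()) + np.log((1.0 / sig).sum())
    pi1 = log_sig.mean() + np.log((1.0 / sig).sum())
    pi2 = np.sum((log_sig - log_sig.mean()) ** 2)
    return np.array([pi0, pi1, pi2])

S = np.diag([4.0, 2.0, 1.0, 0.5])
vals = pi_penalties(S)
vals_scaled = pi_penalties(7.5 * S)   # scale invariance: identical values
```

At multiples of $I_q$ (here with $q = 4$) the penalties attain their minimal values $2\log q$, $\log q$ and $0$, respectively.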
If $\pi = \pi_1$, then \begin{align*} \lim_{t \to \infty} \frac{d}{dt} f(\exp(tA)) \ &= \ q \sum_{k=1}^{q-1} (k/q - Q(\mathbb{V}_k))(\gamma_{k+1} - \gamma_k) + \alpha (\gamma_q - \bar{\gamma}) \\ &= \ q \sum_{k=1}^{q-1} (k/q - Q(\mathbb{V}_k))(\gamma_{k+1} - \gamma_k) + \alpha \sum_{k=1}^{q-1} \frac{k}{q} (\gamma_{k+1} - \gamma_k) \\ &= \ q \sum_{k=1}^{q-1} \bigl( (k/q) (1 + \alpha/q) - Q(\mathbb{V}_k) \bigr) (\gamma_{k+1} - \gamma_k) . \end{align*} Thus $f$ is g-coercive on $\mathbb{M}^{(q)}$ if, and only if, \[ Q(\mathbb{V}) \ < \ (1 + \alpha/q) \dim(\mathbb{V}) / q \] for any subspace $\mathbb{V}$ of $\mathbb{R}^q$ with $1 \le \dim(\mathbb{V}) < q$. In case of \[ \lim_{t \to \infty} \frac{d}{dt} \pi(\exp(tA)) \ = \ \infty \] for any fixed $A \in \mathbb{W}^{(q)} \setminus \{0\}$, the function $f$ is g-coercive on $\mathbb{M}^{(q)}$ without further constraints on $Q$. This is the case, for instance, if $\pi(\Sigma) = \pi_2(\Sigma)$ or \[ \pi(\Sigma) \ = \ \psi \bigl( \pi_k(\Sigma) - \pi_k(I_q) \bigr) \] for $k = 0,1$ with a non-decreasing convex function $\psi : [0,\infty) \to [0,\infty)$ such that $\psi(t)/t \to \infty$ as $t \to \infty$. Explicit examples for such functions $\psi$ are \begin{align*} \psi(s) \ &:= \ (1 + s)^\gamma/\gamma , \quad \gamma > 1 , \\ \psi(s) \ &:= \ \exp(c s), \quad c > 0 . \end{align*} \subsection{Cross validation} Rather than choose $\alpha$ in \eqref{eq:Lpen} beforehand, one can use data dependent methods for selecting $\alpha$. One possible approach is to use an oracle type estimator for $\alpha$, as is done in \cite{Chen_etal_2011,Ollila_Tyler_2014}. Such an approach is based upon minimizing the mean square error under a specific distribution with the method being dependent on the choice of the penalty $\pi$ and the $\rho$-function. A more universal approach is to use cross-validation. Here we propose a leave-one-out cross validation approach for the current problem as follows. 
Let $Q_{n,(i)}$ denote the empirical distribution when the $i$th data point is removed, and for a given $\alpha$ define \[ \widehat{\Sigma}_{\alpha,(i)} \ := \ \mathop{\mathrm{arg\,min}} \bigl\{ L_\rho(\Sigma,Q_{n,(i)}) + \alpha \pi(\Sigma) \bigr\} , \] with the minimum being taken over $\Sigma \in \R_{{\rm sym},+}^{q\times q}$. Next, define an aggregate robust measure of how well $\widehat{\Sigma}_{\alpha,(i)}$ reflects the left-out observation $x_i$ by \[ \mathrm{CV}(\alpha) := \ \sum_{i=1}^n \bigl\{ \rho(x_i^\top \widehat{\Sigma}_{\alpha,(i)}^{-1} x_i^{}) + \log \det(\widehat{\Sigma}_{\alpha,(i)}^{}) \bigr\} . \] The objective is then to minimize $\mathrm{CV}(\alpha)$ over $\alpha \ge 0$. In practice, this would be done over some finite set of values for $\alpha$. Some examples are given in Section~\ref{sec:Example}. Since the cross validation approach can be computationally intensive, we first discuss algorithms for computing the regularized $M$-estimators of scatter. \section{Algorithms} \label{sec:Algorithm} There is a rich literature on optimization on Riemannian manifolds, see \cite{Ring_Wirth_2012} and the references therein. For the special case of functions on $\R_{{\rm sym},+}^{q\times q}$, \cite{Sra_Hosseini_2013, Sra_Hosseini_2015} propose various fixed-point and gradient descent methods. Newton-Raphson algorithms would be another possibility but may be inefficient due to the high dimension of Hessian operators. For the minimization of a smooth and g-convex function we propose a partial Newton-Raphson algorithm which is similar to a method of \cite{Duembgen_etal_2016} for pure $M$-functionals of scatter. While the latter method has been designed for special settings in which a certain fixed-point algorithm serves as a fallback option with guaranteed convergence, the present approach is more general. 
We consider a twice continuously differentiable function $f : \R_{{\rm sym},+}^{q\times q} \to \mathbb{R}$ such that \[ H(A,B) \ > \ 0 \quad\text{for any} \ A \in \R_{\rm sym}^{q\times q} \setminus \{0\} \ \text{and} \ B \in \R_{\rm ns}^{q\times q} . \] In particular, $f$ is strictly g-convex. Furthermore we assume that $f$ is g-coercive, so \[ \Sigma_* \ := \ \mathop{\mathrm{arg\,min}}_{\Sigma \in \R_{{\rm sym},+}^{q\times q}} f(\Sigma) \] exists. Finally we assume that $G(B)$ and $H(A,B)$ are continuous in $B \in \R_{\rm ns}^{q\times q}$ for any fixed $A \in \R_{\rm sym}^{q\times q}$. Under these conditions on $f$ one can devise an iterative algorithm to compute the minimizer $\Sigma_*$. According to Lemma~\ref{lem:minimizer.g-convex}, this is equivalent to finding a matrix $B_* \in \R_{\rm ns}^{q\times q}$ such that $G(B_*) = 0$. \paragraph{Algorithmic mappings.} To compute $\Sigma_*$ we iterate a certain mapping \[ \phi : \R_{{\rm sym},+}^{q\times q} \to \R_{{\rm sym},+}^{q\times q} \] such that $\phi(\Sigma_*) = \Sigma_*$ and $f(\phi(\Sigma)) < f(\Sigma)$ whenever $\Sigma \ne \Sigma_*$. If we replace the latter condition by a somewhat stronger constraint, iterating the mapping $\phi$ yields sequences with guaranteed convergence to $\Sigma_*$. \begin{Lemma} \label{lem:algorithm} Suppose that $\phi : \R_{{\rm sym},+}^{q\times q} \to \R_{{\rm sym},+}^{q\times q}$ satisfies $\phi(\Sigma_*) = \Sigma_*$ and \[ \limsup_{\Sigma \to \Sigma_o} f(\phi(\Sigma)) \ < \ f(\Sigma_o) \quad\text{for any} \ \Sigma_o \in \R_{{\rm sym},+}^{q\times q} \setminus \{\Sigma_*\} . \] Let $\Sigma_1 \in \R_{{\rm sym},+}^{q\times q}$ be an arbitrary starting point, and define inductively $\Sigma_{k+1} := \phi(\Sigma_k)$ for $k = 1,2,3,\ldots$. Then \[ \lim_{k \to \infty} \Sigma_k \ = \ \Sigma_* . \] \end{Lemma} This lemma belongs to the folklore in optimization theory. For the reader's convenience we provide its short proof in Supplement~\ref{sec:Auxiliary}. 
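As a toy numerical illustration of Lemma~\ref{lem:algorithm}, the sketch below iterates a particularly simple algorithmic mapping: a plain geodesic gradient step with small fixed step size (rather than the partial Newton mapping constructed next), applied to the strictly g-convex and g-coercive function $f(\Sigma) = \mathop{\mathrm{tr}}\nolimits(\Sigma) + \mathop{\mathrm{tr}}\nolimits(\Sigma^{-1})$ with unique minimizer $\Sigma_* = I_q$; by Lemma~\ref{lem:all.about.Pi}, choosing $B = \Sigma^{1/2}$ yields the gradient $G(B) = \Sigma - \Sigma^{-1}$. The step size and helper names are our illustrative choices.

```python
import numpy as np

def sym_fun(S, fun):
    # apply a scalar function to a symmetric matrix via its eigendecomposition
    lam, V = np.linalg.eigh(S)
    return (V * fun(lam)) @ V.T

def phi(Sigma, t=0.1):
    # one geodesic gradient step for f(S) = tr(S) + tr(S^{-1});
    # with B = Sigma^{1/2}, the local gradient is G = Sigma - Sigma^{-1}
    B = sym_fun(Sigma, np.sqrt)
    G = Sigma - np.linalg.inv(Sigma)
    return B @ sym_fun(-t * G, np.exp) @ B

Sigma = np.diag([3.0, 0.5, 1.2])
for _ in range(200):
    Sigma = phi(Sigma)   # iterates converge to the minimizer I_q
```

With a fixed small step size the iterates decrease $f$ and converge to $I_q$; the step size correction below makes such a scheme robust without manual tuning.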
\paragraph{Construction of $\phi$.} Let $\Sigma = BB^\top$ with $B \in \R_{\rm ns}^{q\times q}$ be our current candidate for $\Sigma_*$. Note that the quadratic term $H(A,B)$ may be rewritten as \[ H(A,B) \ = \ \langle A, H_B A\rangle \] for a self-adjoint linear operator $H_B : \R_{\rm sym}^{q\times q} \to \R_{\rm sym}^{q\times q}$ with strictly positive eigenvalues. Thus a promising new candidate for $\Sigma_*$ would be \[ \phi_{\rm fN}(\Sigma) \ := \ B \exp(A_{\rm fN}) B^\top \] with \[ A_{\rm fN} \ := \ \mathop{\mathrm{arg\,min}}_{A \in \R_{\rm sym}^{q\times q}} \bigl( \langle A, G(B)\rangle + 2^{-1} H(A, B) \bigr) \ = \ - H_B^{-1} G(B) , \] a full Newton step in local geodesic coordinates. Computing $A_{\rm fN}$ would require substantial memory and computation time, though. Alternatively one could try a gradient descent step: \[ \phi_{\rm G}(\Sigma) \ := \ B \exp(A_{\rm G}) B^\top \] with \[ A_{\rm G} \ := \ \mathop{\mathrm{arg\,min}}_{A \in \{t G(B) : t \in \mathbb{R}\}} \bigl( \langle A, G(B)\rangle + 2^{-1} H(A, B) \bigr) \ = \ - \frac{\|G(B)\|^2}{H(G(B),B)} \, G(B) . \] As a compromise between a full Newton and a mere gradient step we propose a partial Newton step: To this end we consider a spectral decomposition \[ G(B) \ = \ U D(\lambda) U^\top \] with an orthogonal matrix $U = U(B) \in \R_{\rm orth}^{q\times q}$ and a vector $\lambda = \lambda(B) \in \mathbb{R}^q$. Then we define \[ \phi_{\rm pN}(\Sigma) \ := \ B \exp(A_{\rm pN}) B^\top \] with \[ A_{\rm pN} = A_{\rm pN}(B,U) \ := \ \mathop{\mathrm{arg\,min}}_{A \in \{U D(x) U^\top : x \in \mathbb{R}^q\}} \bigl( \langle A, G(B)\rangle + 2^{-1} H(A, B) \bigr) . \] This may be computed explicitly: Since \[ \langle U D(x) U^\top, G(B)\rangle + 2^{-1} H(U D(x) U^\top, B) \ = \ x^\top \lambda(B) + 2^{-1} x^\top \underline{H}(BU) x \] for a certain matrix $\underline{H}(BU) \in \R_{{\rm sym},+}^{q\times q}$, we may write \[ A_{\rm pN} \ = \ - U D \bigl( \underline{H}(BU)^{-1} \lambda(B) \bigr) U^\top . 
\] If $\Sigma = BB^\top$ is far from $\Sigma_*$, the matrix $\phi_{\rm pN}(\Sigma)$ need not be better than $\Sigma$ itself. To avoid poor steps we introduce a simple step size correction and define finally \[ \phi(\Sigma) \ := \ B \exp \bigl( 2^{-m(BU)} A_{\rm pN} \bigr) B^\top \ = \ BU D \bigl( \exp \bigl( - 2^{-m(BU)} \underline{H}(BU)^{-1} \lambda(B) \bigr) \bigr) (BU)^\top \] with $m(BU)$ being the smallest integer $m \ge 0$ such that \[ f \bigl( B \exp(2^{-m} A_{\rm pN}) B^\top \bigr) - f(\Sigma) \ \le \ 2^{-m} \langle A_{\rm pN}, G(B) \rangle / C \] for a given $C > 2$. The rationale behind this definition is the fact that \[ \min_{x \in \mathbb{R}^q} \bigl( \langle UD(x)U^\top, G(B)\rangle + 2^{-1} H(UD(x)U^\top, B) \bigr) \ = \ \langle A_{\rm pN}, G(B)\rangle / 2 \] and \[ \lim_{m \to \infty} \frac{f \bigl( B \exp(2^{-m} A_{\rm pN}) B^\top \bigr) - f(\Sigma)}{2^{-m}} \ = \ \langle A_{\rm pN}, G(B)\rangle . \] Note that $\phi(\Sigma) = \Sigma$ whenever $G(B) = 0$, which is equivalent to $\Sigma = \Sigma_*$. Otherwise \[ \langle A_{\rm pN}, G(B)\rangle \ = \ - \lambda(B)^\top \underline{H}(BU)^{-1} \lambda(B) \ < \ 0 . \] This algorithmic mapping $\phi$ has the desired properties, no matter how the factor $B$ of $\Sigma = BB^\top$ and the orthogonal matrix $U$ in the spectral decomposition $G(B) = U D(\lambda) U^\top$ are chosen. \begin{Theorem} \label{thm:phi} The algorithmic mapping just defined has the properties described in Lemma~\ref{lem:algorithm}. Moreover, if $\Sigma = BB^\top$ is sufficiently close to $\Sigma_*$, then the number $m(BU)$ in the step size correction equals $0$, whence $\phi(\Sigma) = \phi_{\rm pN}(\Sigma) = B \exp(A_{\rm pN}) B^\top$. 
\end{Theorem} \paragraph{Pseudo-code for $\phi(\cdot)$.} One may interpret our algorithmic mapping $\phi$ such that the factor $B$ of our current candidate $\Sigma = BB^\top$ for $\Sigma_*$ is replaced with a new matrix \[ B_{\rm new} \ = \ B \exp(2^{-m(BU)} A_{\rm pN}/2) \, U \ = \ B U D \bigl( \exp( - 2^{-m(BU)} \underline{H}(BU)^{-1} \lambda(B)/2 ) \bigr) , \] and $\phi(\Sigma) = B_{\rm new}^{} B_{\rm new}^\top$. Here is the corresponding pseudo-code for the computation of $B_{\rm new}$: \begin{align*} &(U,\lambda) \ \leftarrow \ \text{eigen}(G(B)) \\ &a \ \leftarrow \ \underline{H}(BU)^{-1} \lambda \\ &\epsilon \ \leftarrow \ a^\top \lambda \\ &\text{while} \ f(BB^\top) - f(BU D(\exp(-a)) (BU)^\top) < \epsilon/C \ \text{do} \\ &\quad a \ \leftarrow \ a/2 \\ &\quad \epsilon \ \leftarrow \ \epsilon/2 \\ &\text{end while} \\ &B_{\rm new} \ \leftarrow \ B U D(\exp(- a/2)) \end{align*} \section{Numerical Example} \label{sec:Example} We illustrate the proposed methods in the case of $\rho(s) = q \log s$ and \[ \pi(\Sigma) \ := \ \exp(\pi_1(\Sigma) - \pi_1(I_q)) \ = \ \det(\Sigma)^{1/q} \mathop{\mathrm{tr}}\nolimits(\Sigma^{-1})/q . \] The resulting functional $f_\alpha(\Sigma) = L_\rho(\Sigma,Q) + \alpha \pi(\Sigma)$ is strictly g-convex and g-coercive on $\mathbb{M}^{(q)}$ for any value $\alpha > 0$. Precisely, we chose $q = 50$ and simulated a random sample of size $n = 30$ from the multivariate Cauchy distribution with center $0$ and scatter matrix \[ \Sigma \ = \ D(10,5,3,2,1,1,\ldots,1)^2 . \] Then we computed the minimizer $\widehat{\Sigma}(\alpha)$ of $f_\alpha$ with $Q$ being the empirical distribution of this sample for $\alpha = 2^z$ with $z = 1,2,\ldots,15$. 
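The sampling step of this experiment can be sketched via the standard representation of the multivariate Cauchy distribution as an elliptical $t$-distribution with one degree of freedom; the function name in the following sketch is ours.

```python
import numpy as np

def rmvcauchy(n, scatter_sqrt, rng):
    # multivariate Cauchy with center 0: x = A z / |w|, where
    # z ~ N(0, I_q) and w ~ N(0, 1) are independent and A is a
    # square root of the scatter matrix (|w| = sqrt of a chi^2_1 variable)
    q = scatter_sqrt.shape[0]
    Z = rng.standard_normal((n, q))
    w = rng.standard_normal(n)
    return (Z / np.abs(w)[:, None]) @ scatter_sqrt.T

rng = np.random.default_rng(1)
A = np.diag([10.0, 5.0, 3.0, 2.0] + [1.0] * 46)  # Sigma = A @ A.T as in the text
X = rmvcauchy(30, A, rng)                        # data matrix of shape (30, 50)
```

Each observation is a spherical Gaussian vector divided by an independent absolute standard Gaussian and then transformed by a square root of $\Sigma$, which produces the heavy tails that motivate the robust, regularized fit.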
Table~\ref{tab:CV.errors} shows the resulting values $\mathrm{CV}(\alpha)$ and the following estimation errors: \begin{align*} \epsilon_0(\alpha) \ &: \quad \text{Euclidean distance between first eigenvectors of} \ \Sigma, \widehat{\Sigma}(\alpha) , \\ \epsilon_1(\alpha) \ &: \quad \text{Euclidean distance between} \ \log \lambda(S), \log \lambda(\widehat{S}(\alpha)) , \\ \epsilon_2(\alpha) \ &: \quad \text{geodesic distance between} \ S, \widehat{S}(\alpha) , \end{align*} where $S := \det(\Sigma)^{-1/q} \Sigma$, $\widehat{S}(\alpha) := \det(\widehat{\Sigma}(\alpha))^{-1/q} \widehat{\Sigma}(\alpha)$, and $\lambda(B)$ refers to the vector of the ordered eigenvalues of a symmetric matrix $B$. Note that our cross-validation criterion yields $\alpha = 2^7$, which is a reasonable choice in view of the estimation errors. Figure~\ref{fig0} shows a bar plot of the log-transformed eigenvalues of $S$ and of $\widehat{S}(2^7)$. \begin{table}[h!] \[ \begin{array}{|c||c||c|c|c|} \hline \log_2(\alpha) & \mathrm{CV}(\alpha) & \epsilon_0(\alpha) & \epsilon_1(\alpha) & \epsilon_2(\alpha) \\ \hline\hline 1 & 11670.248 & 0.164 & 20.817 & 118.797 \\ \hline 2 & 10658.798 & 0.164 & 16.985 & 74.729 \\ \hline 3 & 9704.005 & 0.163 & 13.278 & 46.696 \\ \hline 4 & 8883.141 & 0.160 & 9.793 & 28.871 \\ \hline 5 & 8282.730 & 0.158 & 6.660 & 17.781 \\ \hline 6 & 7933.924 & 0.158 & 4.141 & 11.518 \\ \hline 7 & 7816.171 & 0.160 & 2.899 & 8.787 \\ \hline 8 & 7883.674 & 0.165 & 3.307 & 8.098 \\ \hline 9 & 8079.799 & 0.173 & 4.260 & 8.137 \\ \hline 10 & 8321.868 & 0.183 & 5.035 & 8.295 \\ \hline 11 & 8515.666 & 0.190 & 5.499 & 8.407 \\ \hline 12 & 8633.030 & 0.194 & 5.740 & 8.467 \\ \hline 13 & 8695.983 & 0.196 & 5.859 & 8.497 \\ \hline 14 & 8728.327 & 0.197 & 5.918 & 8.513 \\ \hline 15 & 8744.677 & 0.198 & 5.947 & 8.520 \\ \hline \end{array} \] \caption{Cross-validation criterion and estimation errors for one data matrix. } \label{tab:CV.errors} \end{table} \begin{figure}[b!] 
\centering \includegraphics[width=0.99\textwidth]{logEigenvalues} \caption{Log-eigenvalues of $S$ (green) and $\widehat{S}(2^7)$ (blue).} \label{fig0} \end{figure} This simulation was repeated 100 times, and in all cases the minimizer of $\mathrm{CV}(\alpha)$ on the given grid turned out to be $2^7 = 128$. Figure~\ref{fig1} shows box plots of $\mathrm{CV}(\alpha)$ and the estimation errors $\epsilon_0(\alpha)$, $\epsilon_1(\alpha)$, $\epsilon_2(\alpha)$ for these simulations. \begin{figure}[b!] \includegraphics[width=0.49\textwidth]{CVplot} \hfill \includegraphics[width=0.49\textwidth]{E0plot} \includegraphics[width=0.49\textwidth]{E1plot} \hfill \includegraphics[width=0.49\textwidth]{E2plot} \caption{Cross-validation measures $\mathrm{CV}(\alpha)$ (upper left) and estimation errors $\epsilon_0(\alpha)$ (upper right), $\epsilon_1(\alpha)$ (lower left), $\epsilon_2(\alpha)$ (lower right) versus $\log_2(\alpha)$.} \label{fig1} \end{figure} \section{Proofs} \label{sec:Proofs} \subsection{Proofs for Section~\ref{sec:G-Convexity}} \begin{proof}[\bf Proof of Lemma~\ref{lem:geodesic.curves}] For $B \in \R_{\rm ns}^{q\times q}$ and $A_0,A_1 \in \R_{\rm sym}^{q\times q}$ define $\Sigma_j := B \exp(A_j) B^\top$. Then $\Sigma_0 = B_0^{} B_0^\top$ with $B_0 := B \exp(A_0/2)$, and this implies that \[ \Sigma_0^{1/2} \ = \ B_0 V \ = \ V^\top B_0^\top \] for some $V \in \R_{\rm orth}^{q\times q}$. Hence \[ \Sigma_0^{-1/2} \ = \ V^\top B_0^{-1} \ = \ B_0^{-\top} V , \] and for $u \in [0,1]$, \begin{align*} \Sigma_0^{1/2} & (\Sigma_0^{-1/2} \Sigma_1 \Sigma_0^{-1/2})_{}^u \Sigma_0^{1/2} \\ &= \ B_0 V \, (V^\top B_0^{-1} \Sigma_1^{} B_0^{-\top} V)_{}^u \, V^\top B_0^{-1} \\ &= \ B_0 \, (B_0^{-1} \Sigma_1^{} B_0^{-\top})_{}^u \, B_0^{-1} \\ &= \ B \exp(A_0/2) \bigl( \exp(- A_0/2) \exp(A_1) \exp(-A_0/2) \bigr)_{}^u \exp(A_0/2) B^\top . 
\end{align*} If $A_0A_1 = A_1A_0$, the right hand side may be simplified further and we obtain \begin{align*} \Sigma_0^{1/2} (\Sigma_0^{-1/2} \Sigma_1 \Sigma_0^{-1/2})_{}^u \Sigma_0^{1/2} \ &= \ B \exp(A_0/2) \exp(A_1 - A_0)^u \exp(A_0/2) B^\top \\ &= \ B \exp(A_0/2) \exp(uA_1 - uA_0) \exp(A_0/2) B^\top \\ &= \ B \exp((1 - u)A_0 + u A_1) B^\top . \end{align*} This may be applied to the curve $t \mapsto \Sigma(t)$ with $A_j = t_j A$ as well as to the surface $x \mapsto \Gamma(x)$ with $A_j = D(x_j)$. \end{proof} \begin{proof}[\bf Proof of Lemma~\ref{lem:minimizer.g-convex}] If $\Sigma = BB^\top$ minimizes $f$, then obviously \eqref{eq:minimizer.g-convex} has to hold true. On the other hand, suppose that $\Sigma = BB^\top$ is not a minimizer of $f$. That means, $f(B \exp(A) B^\top) < f(BB^\top)$ for some $A \in \R_{\rm sym}^{q\times q}$. But $h(t) := f(B \exp(t A) B^\top)$ is a convex function of $t \in \mathbb{R}$, so \[ \lim_{t \to 0\,+} \frac{h(t) - h(0)}{t} \ \le \ h(1) - h(0) \ < \ 0 . \]\\[-5ex] \end{proof} \begin{proof}[\bf Proof of Lemma~\ref{lem:g-coercivity}] The result and its proof generalize Proposition~5.5 in \cite{Duembgen_etal_2015}. Recall first that for any $A \in \R_{\rm sym}^{q\times q}$, the function $\mathbb{R} \ni t \mapsto f(\exp(tA))$ is convex with right-sided derivative \[ g(t,A) \ := \ \lim_{u \to t\,+} \frac{f(\exp(uA)) - f(\exp(tA))}{u - t} . \] Moreover, $g(t,A)$ is non-decreasing in $t \in \mathbb{R}$ with limit $g(\infty,A) \in (-\infty,\infty]$ as $t \to \infty$. Thus we have to show that $f$ is g-coercive if, and only if, $g(\infty,A) > 0$ for any $A \in \R_{\rm sym}^{q\times q} \setminus \{0\}$. Suppose that $f$ is not g-coercive. Then there exists a sequence $(A_k)_k$ in $\R_{\rm sym}^{q\times q}$ such that $\lim_{k \to \infty} \|A_k\| = \infty$ but $f(\exp(A_k)) \le C$ for all indices $k$ and some real constant $C$. 
Writing $A_k = \|A_k\| N_k$ for a matrix $N_k$ with norm one, we may even assume that $\lim_{k \to \infty} N_k = N$ with $N \in \R_{\rm sym}^{q\times q}$, $\|N\| = 1$. Now for any fixed $t > 0$, \begin{align*} g(t,N) \ &\le \ f(\exp((t+1)N)) - f(\exp(tN)) \\ &= \ \lim_{k \to \infty} \bigl( f(\exp((t+1)N_k)) - f(\exp(tN_k)) \bigr) \\ &\le \ \limsup_{k \to \infty} \frac{ f(\exp(\|A_k\|N_k)) - f(\exp(t N_k))}{\|A_k\| - t} \\ &\le \ \limsup_{k \to \infty} \frac{ C - f(\exp(t N_k))}{\|A_k\| - t} \\ &\le \ 0 . \end{align*} In the first and third steps we used convexity of $t \mapsto f(\exp(tN))$ and $t \mapsto f(\exp(tN_k))$, respectively; the second and last steps rely on continuity of $f$ and the choice of $(A_k)_k$. These considerations show that $g(\infty,N) \le 0$. On the other hand, suppose that $f$ is g-coercive. Then for any $A \in \R_{\rm sym}^{q\times q} \setminus \{0\}$ and sufficiently large $r > 0$, \[ 0 \ < \ \frac{f(\exp(rA)) - f(I_q)}{r} \ = \ \frac{f(\exp(rA)) - f(\exp(0A))}{r} \ \le \ g(r,A) \ \le \ g(\infty,A) . \]\\[-5ex] \end{proof} \begin{proof}[\bf Proof of Lemma~\ref{lem:existence.minimizers}] By continuity of $f$, the set $\mathcal{S}_*$ is closed, and by g-convexity of $f$ it is g-convex. Obviously, the set $\mathcal{S}_*$ is identical with the set of minimizers of $f$ on the closed set $\mathcal{K} := \bigl\{ \Sigma \in \R_{{\rm sym},+}^{q\times q} : f(\Sigma) \le f(I_q) \bigr\}$. If $f$ is also g-coercive, the set $\mathcal{K}$ is even compact, and $\mathcal{S}_*$ is a nonvoid and closed subset of $\mathcal{K}$, so it is compact itself. Now suppose that $f$ has a minimizer $\Sigma_* = BB^\top$, $B \in \R_{\rm ns}^{q\times q}$. Note that g-coercivity is equivalent to \[ f(B \exp(A) B^\top) \ \to \ \infty \quad\text{as} \ \|A\| \to \infty . \] This follows from the inequality \begin{equation} \label{eq:triangle?} \bigl| \|\log(B\exp(A)B^\top)\| - \|A\| \bigr| \ \le \ \|\log(\Sigma_*)\| \end{equation} which will be proved later. 
Now suppose that $f$ is minimal at $\Sigma_*$ but not g-coercive. That means, there exists a sequence $(A_k)_k$ in $\R_{\rm sym}^{q\times q}$ with $\lim_{k \to \infty} \|A_k\| = \infty$ but $f(B \exp(A_k) B^\top) \le C$ for all indices $k$ and some real constant $C$. Writing $A_k = \|A_k\| N_k$ for a matrix $N_k$ with norm one, we may even assume that $\lim_{k \to \infty} N_k = N$ with $N \in \R_{\rm sym}^{q\times q}$, $\|N\| = 1$. Since $h_k(t) := f(B \exp(t N_k) B^\top)$ is convex in $t \in \mathbb{R}$, we may conclude that for any fixed $t > 0$, \begin{align*} \frac{f(B \exp(tN) B^\top) - f(\Sigma_*)}{t} \ &= \ \lim_{k \to \infty} \frac{f(B \exp(tN_k) B^\top) - f(\Sigma_*)}{t} \\ &= \ \lim_{k \to \infty} \frac{h_k(t) - h_k(0)}{t} \\ &\le \ \limsup_{k \to \infty} \frac{h_k(\|A_k\|) - h_k(0)}{\|A_k\|} \\ &= \ \limsup_{k \to \infty} \frac{f(B \exp(A_k) B^\top) - f(\Sigma_*)}{\|A_k\|} \\ &\le \ 0 . \end{align*} This implies that $f(B \exp(tN) B^\top) = f(\Sigma_*)$ for all $t > 0$, so $\mathcal{S}_*$ is geodesically unbounded. It remains to prove inequality \eqref{eq:triangle?} which is related to geodesic distances. On the one hand, \begin{align*} \| \log(B \exp(A) B^\top) \| \ &= \ d_g(I_q, B \exp(A) B^\top) \\ &\le \ d_g(I_q,BB^\top) + d_g(BB^\top, B \exp(A) B^\top) \\ &= \ \|\log(\Sigma_*)\| + d_g(I_q, \exp(A)) \\ &= \ \|\log(\Sigma_*)\| + \|A\| . \end{align*} On the other hand, \begin{align*} \|A\| \ &= \ d_g(I_q, \exp(A)) \\ &\le \ d_g(I_q, (B^\top B)^{-1}) + d_g((B^\top B)^{-1}, \exp(A)) \\ &= \ d_g(I_q, B^\top B) + d_g(B^{-1} B^{-\top}, \exp(A)) \\ &= \ \|\log(B^\top B)\| + d_g(I_q, B\exp(A)B^\top) \\ &= \ \|\log(\Sigma_*)\| + \|\log(B\exp(A)B^\top)\| . \end{align*} In the last step we utilized that $B^\top B$ and $BB^\top = \Sigma_*$ have the same eigenvalues, which follows from the singular value decomposition of $B$.
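Inequality \eqref{eq:triangle?} can also be checked numerically. The following sketch (an illustration only, assuming the Frobenius norm for $\|\cdot\|$ and computing the symmetric matrix exponential and logarithm via eigendecompositions) tests it on random instances:

```python
import numpy as np

def expm_sym(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def logm_spd(S):
    """Matrix logarithm of a symmetric positive definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

rng = np.random.default_rng(0)
q = 5
for _ in range(200):
    B = rng.standard_normal((q, q))      # nonsingular with probability one
    A = rng.standard_normal((q, q)); A = A + A.T
    Sigma = B @ B.T
    M = B @ expm_sym(A) @ B.T
    M = (M + M.T) / 2                    # symmetrize against round-off
    gap = abs(np.linalg.norm(logm_spd(M)) - np.linalg.norm(A))
    assert gap <= np.linalg.norm(logm_spd(Sigma)) + 1e-8
print("inequality verified on 200 random instances")
```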
\end{proof} \begin{proof}[\bf Proof of Lemma~\ref{lem:g-convexity}] This criterion follows from the fact that for $t,\delta \in \mathbb{R}$, \[ B \exp((t+\delta)A) B^\top \ = \ B_t^{} \exp(\delta A) B_t^\top \quad\text{with} \ B_t := B \exp((t/2)A) , \] so \[ f \bigl( B \exp((t+\delta)A) B^\top \bigr) \ = \ f(B_t^{}B_t^\top) + \langle A, G(B_t)\rangle \delta + 2^{-1} H(A,B_t) \delta^2 + o(\delta^2) \] as $\delta \to 0$. By means of Lemma~\ref{lem:convexity} in Supplement~\ref{sec:Auxiliary}, this shows that $f(B \exp(tA) B^\top)$ is convex in $t \in \mathbb{R}$, provided that $H(A,B_t) \ge 0$ for all $t \in \mathbb{R}$. This convexity is strict if $H(A,B_t) > 0$ for all $t \in \mathbb{R}$. If $H(A,B) < 0$ for some $B \in \R_{\rm ns}^{q\times q}$ and $A \in \R_{\rm sym}^{q\times q}$, then for sufficiently small $\delta > 0$, \[ f(B \exp(\pm \delta A) B^\top) \ < \ f(BB^\top) \pm \delta \langle A, G(B)\rangle . \] Hence \[ f(BB^\top) = f(B \exp(0A) B^\top) \ > \ 2^{-1} f(B \exp(-\delta A) B^\top) + 2^{-1} f(B \exp(\delta A) B^\top) . \] Thus $f(B \exp(tA) B^\top)$ is not convex in $t \in \mathbb{R}$, so $f$ is not geodesically convex. \end{proof} \subsection{Proofs for Section~\ref{subsec:Regularization}} \begin{proof}[\bf Proof of Lemma~\ref{lem:all.about.Pi}] That $\Sigma = I_q$ is the unique minimizer of $\Pi_k(\Sigma)$ follows from the fact that $x + x^{-1} > 2$, $\log x + x^{-1} > 1$, $x - \log x > 1$ and $(\log x)^2 > 0$ for $x \in \mathbb{R}_+ \setminus \{1\}$. Note first that $f(\Sigma) := \mathop{\mathrm{tr}}\nolimits(\Sigma)$ satisfies the expansion \begin{align*} f(B \exp(A) B^\top) \ &= \ f(BB^\top) + \mathop{\mathrm{tr}}\nolimits(BAB^\top) + 2^{-1} \mathop{\mathrm{tr}}\nolimits(BA^2 B^\top) + o(\|A\|^2) \\ &= \ f(BB^\top) + \langle A, B^\top B\rangle + 2^{-1} \langle A^2, B^\top B\rangle + o(\|A\|^2) . 
\end{align*} This and Remark~\ref{rem:Inversion2} imply that $G_0(B) = B^\top B - B^{-1} B^{-\top}$ while $H_0(A,B)$ is given by $\langle A^2, B^\top B + B^{-1} B^{-\top}\rangle$. The inequality $H_0(A,B) > 0$ for $A \ne 0$ can be proved similarly to the inequality $H(A,B) \ge 0$ in Example~\ref{ex2}. In case of $A = U D(-\gamma) U^\top$ with an orthogonal matrix $U$ and a vector $\gamma \in \mathbb{R}^q$ with non-decreasing components, \[ \frac{d}{dt} \Pi_0(\exp(tA)) \ = \ \sum_{i=1}^q \gamma_i (e^{t\gamma_i} - e^{-t \gamma_i}) \ \to \ \infty \] as $t \to \infty$, unless $\gamma = 0$. As to $\Pi_1$, it follows from the previous considerations and Example~\ref{ex0} that $G_1(B) = I_q - B^{-1} B^{-\top}$ and $H_1(A,B) = \langle A^2, B^{-1} B^{-\top}\rangle$. Again $H_1(A,B) > 0$ for $A \ne 0$. Moreover, if $A = U D(-\gamma) U^\top$ as before, as $t \to \infty$, \[ \frac{d}{dt} \Pi_1(\exp(tA)) \ = \ \sum_{i=1}^q \gamma_i (e^{t \gamma_i} - 1) \ \to \ 1_{[\gamma_q > 0]}^{} \infty - \sum_{i=1}^q \gamma_i . \] For $\Pi_2$ the expansion is a consequence of Corollary~\ref{cor:penalties} in Supplement~\ref{sec:Auxiliary}. Just note that we may write $B = U D(\mu)^{1/2} V^\top$ with $U, V \in \R_{\rm orth}^{q\times q}$ and $\mu = e^\lambda$, $\lambda \in \mathbb{R}^q$, and \[ \Pi_2(B \exp(A) B^\top) \ = \ \Pi_2 \bigl( D(\mu)^{1/2} \exp(V^\top A V) D(\mu)^{1/2} \bigr) . \] Moreover, $\Pi_2(\exp(tA)) = t^2 \|A\|^2$, so $d \Pi_2(\exp(tA)) / dt = 2 t \|A\|^2$. \end{proof} \begin{proof}[\bf Proof of Lemma~\ref{lem:all.about.pi}] Elementary considerations reveal that all penalty functions $\pi_k$ are scale-invariant. Next we show that a matrix $\Sigma \in \R_{{\rm sym},+}^{q\times q}$ with eigenvalues $\sigma_1 \ge \cdots \ge \sigma_q > 0$ minimizes $\pi_k(\Sigma)$ if, and only if, $\sigma_1/\sigma_q = 1$.
On the one hand, \[ \pi_0(\Sigma) \ = \ \log \Bigl( \sum_{i=1}^q \sigma_i^{} \sum_{j=1}^q \sigma_j^{-1} \Bigr) \ = \ \log \Bigl( \frac{1}{2} \sum_{i,j=1}^q \bigl( \frac{\sigma_i}{\sigma_j} + \frac{\sigma_j}{\sigma_i} \Bigr) \Bigr) \ \ge \ \log(q^2) \] with equality if, and only if, $\sigma_i/\sigma_j = 1$ for all indices $i,j$. This follows from $x + x^{-1} > 2$ for arbitrary $x \in \mathbb{R}_+ \setminus \{1\}$. In case of $\pi_1(\Sigma)$, note that by Jensen's inequality and strict concavity of $\log$ on $\mathbb{R}_+$, \[ \pi_1(\Sigma) \ = \ - q^{-1} \sum_{i=1}^q \log(\sigma_i^{-1}) + \log \Bigl( q^{-1} \sum_{i=1}^q \sigma_i^{-1} \Bigr) + \log q \ \ge \ \log(q) \] with strict inequality unless all $\sigma_i$ are identical. Finally, \[ \pi_2(\Sigma) \ = \ \sum_{i=1}^q \Bigl( \log \sigma_i - q_{}^{-1} \sum_{j=1}^q \log \sigma_j \Bigr)^2 \ \ge \ 0 \] with equality if, and only if, all $\sigma_i$ are identical. Next we verify the geodesic second order Taylor expansions of $\pi_k(\Sigma)$. It follows from Examples~\ref{ex0} and \ref{ex2} and Remark~\ref{rem:Inversion2} that \begin{align*} G_0(B) \ &= \ N(B^\top B) - N(B^{-1} B^{-\top}) , \\ G_1(B) \ &= \ q^{-1} I_q - N(B^{-1} B^{-\top}) , \end{align*} and \begin{align*} H_0(A,B) \ &= \ \langle A^2, N(B^\top B)\rangle - \langle A, N(B^\top B)\rangle^2 + \langle A^2, N(B^{-1}B^{-\top})\rangle - \langle A, N(B^{-1}B^{-\top})\rangle^2 , \\ H_1(A,B) \ &= \ \langle A^2, N(B^{-1}B^{-\top})\rangle - \langle A, N(B^{-1}B^{-\top})\rangle^2 \end{align*} with $N(\Sigma) := \mathop{\mathrm{tr}}\nolimits(\Sigma)^{-1} \Sigma$. The considerations to Example~\ref{ex2} reveal that both $H_0(A,B)$ and $H_1(A,B)$ are strictly positive whenever $A \not\in \{t I_q : t \in \mathbb{R}\}$. The expansion for $\pi_2$ follows from Corollary~\ref{cor:penalties} with the same arguments as in the proof of Lemma~\ref{lem:all.about.Pi}. 
In particular, \[ H_2(A,B) \ = \ \sum_{i,j=1}^q W_{ij}(\lambda) (v_i^\top A^o v_j)^2 \ \ge \ \|A^o\|^2 \] with $A^o = A - q^{-1}\mathop{\mathrm{tr}}\nolimits(A) I_q$. Concerning coercivity, let $A = V D(-\gamma) V^\top$ with $\gamma_1 \le \ldots \le \gamma_q$ and $\gamma_q > \gamma_1$. Then for $\xi = \pm 1$, \[ \frac{d}{dt} q^{-1} \log \det(\exp(tA)^{\xi}) \ = \ - \xi \bar{\gamma} \] and \[ \frac{d}{dt} \log \mathop{\mathrm{tr}}\nolimits(\exp(tA)^{\xi}) \ = \ - \xi \sum_{i=1}^q \gamma_i e^{- \xi t\gamma_i} \Big/ \sum_{i=1}^q e^{- \xi t\gamma_i} \ \to \ \begin{cases} - \gamma_1 & \text{if} \ \xi = +1 , \\ \ \gamma_q & \text{if} \ \xi = -1 , \end{cases} \] as $t \to \infty$. This implies for $k = 0,1$ the asserted limits of $d \pi_k(\exp(tA)) / dt$. For $k = 2$ the claim follows from \[ \pi_2(\exp(tA)) \ = \ t^2 \sum_{i=1}^q (\gamma_i - \bar{\gamma})^2 . \]\\[-5ex] \end{proof} \subsection{Proofs for Section~\ref{sec:Algorithm}} Our proof of Theorem~\ref{thm:phi} is based on two elementary inequalities for the accuracy of Taylor expansions of $f$ which are derived in Supplement~\ref{sec:Auxiliary}: \begin{Lemma} \label{lem:remainders} For $\Sigma \in \R_{{\rm sym},+}^{q\times q}$ and $\delta > 0$ let \begin{align*} \Lambda_{\rm max}(\Sigma,\delta) \ &:= \ \max_{A, C \in \R_{\rm sym}^{q\times q} \,:\, \|A\| \le 1, \|C\| \le \delta} H(A, \Sigma^{1/2} \exp(C/2)) , \\ N(\Sigma,\delta) \ &:= \ \max_{A, C \in \R_{\rm sym}^{q\times q} \,:\, \|A\| \le 1, \|C\| \le \delta} \bigl| H(A, \Sigma^{1/2} \exp(C/2)) - H(A, \Sigma^{1/2}) \bigr| . \end{align*} For arbitrary $\Sigma = BB^\top$ with $B \in \R_{\rm ns}^{q\times q}$ and $A \in \R_{\rm sym}^{q\times q} \setminus \{0\}$, \[ f(B \exp(A) B^\top) - f(\Sigma) - \langle A, G(B)\rangle \ \le \ 2^{-1} \|A\|^2 \Lambda_{\rm max}(\Sigma,\|A\|) \] and \[ \bigl| f(B \exp(A) B^\top) - f(\Sigma) - \langle A, G(B)\rangle - 2^{-1} H(A,B) \bigr| \ \le \ 2^{-1} \|A\|^2 N(\Sigma,\|A\|) . 
\] \end{Lemma} \begin{proof}[\bf Proof of Theorem~\ref{thm:phi}] From the continuity of $H(A,B)$ in $B \in \R_{\rm ns}^{q\times q}$ for fixed $A \in \R_{\rm sym}^{q\times q}$, and since $\R_{\rm sym}^{q\times q}$ is finite-dimensional, one can deduce that both $\Lambda_{\rm max}(\Sigma,\delta)$ and $N(\Sigma,\delta)$ are continuous in $(\Sigma,\delta) \in \R_{{\rm sym},+}^{q\times q} \times [0,\infty)$, where $N(\Sigma,0) = 0$. Additional quantities we shall use repeatedly are \[ \Lambda_{\rm min}(\Sigma) \ := \ \min \bigl\{ H(A, \Sigma^{1/2}) : A \in \R_{\rm sym}^{q\times q}, \|A\| = 1 \bigr\} > 0 \] and $\|G(\Sigma^{1/2})\|$. Both are continuous in $\Sigma$. For arbitrary $\Sigma = BB^\top$, $B \in \R_{\rm ns}^{q\times q}$, we have \[ \|A_{\rm pN}\| \ = \ \| \underline{H}(BU)^{-1} \lambda(B) \| \ \le \ \frac{\|\lambda(B)\|}{\lambda_{\rm min}(\underline{H}(BU))} \ \le \ \frac{\|G(\Sigma^{1/2})\|}{\Lambda_{\rm min}(\Sigma)} \ =: \ R_1(\Sigma) , \] because $\|\lambda(B)\| = \|G(B)\| = \|G(\Sigma^{1/2})\|$ and \[ \lambda_{\rm min}(\underline{H}(BU)) \ \ge \ \min \bigl\{ H(A, BU) : A \in \R_{\rm sym}^{q\times q}, \|A\| = 1 \bigr\} \ = \ \Lambda_{\rm min}(\Sigma) . \] On the other hand, \[ \langle A_{\rm pN}, G(B)\rangle \ = \ - \lambda(B)^\top \underline{H}(BU)^{-1} \lambda(B) \ \le \ - \frac{\|\lambda(B)\|^2}{\lambda_{\rm max}(\underline{H}(BU))} \ \le \ - \frac{\|G(\Sigma^{1/2})\|^2}{\Lambda_{\rm max}(\Sigma,0)} . \] Hence it follows from Lemma~\ref{lem:remainders} that for any fixed integer $m \ge 0$, \begin{align*} f(B & \exp(2^{-m}A_{\rm pN}) B^\top) - f(\Sigma) - \langle 2^{-m} A_{\rm pN}, G(B)\rangle/C \\ &\le \ 2^{-2m-1} \|A_{\rm pN}\|^2 \Lambda_{\rm max}(\Sigma,2^{-m} \|A_{\rm pN}\|) + (1 - C^{-1}) \langle 2^{-m} A_{\rm pN}, G(B)\rangle \\ &\le \ 2^{-m} \|G(\Sigma^{1/2})\|^2 \Bigl( \frac{\Lambda_{\rm max}(\Sigma, 2^{-m} R_1(\Sigma))} {2^{m+1} \Lambda_{\rm min}(\Sigma)^2} - \frac{1}{\Lambda_{\rm max}(\Sigma,0)} \Bigr) \\ &=: \ R_{2,m}(\Sigma) .
\end{align*} Note that $R_{2,m}(\Sigma)$ is continuous in $\Sigma$. Moreover, for any fixed $\Sigma_o \ne \Sigma_*$ there is an integer $m_o \ge 0$ such that $R_{2,m_o}(\Sigma_o) < 0$. Consequently, if $\Sigma$ is sufficiently close to $\Sigma_o$, then the integer $m(BU)$ in $\phi(\Sigma)$ satisfies $m(BU) \le m_o$, and \[ f(\phi(\Sigma)) - f(\Sigma) \ \le \ 2_{}^{-m_o} \langle A_{\rm pN}, G(B)\rangle / C \ \le \ - \frac{\|G(\Sigma^{1/2})\|^2}{2_{}^{m_o} \Lambda_{\rm max}(\Sigma,0) C} . \] This shows that \[ \limsup_{\Sigma \to \Sigma_o} f(\phi(\Sigma)) - f(\Sigma_o) \ \le \ - \frac{\|G(\Sigma_o^{1/2})\|^2}{2_{}^{m_o} \Lambda_{\rm max}(\Sigma_o,0) C} \ < \ 0 . \] For $\Sigma$ close to $\Sigma_*$ we only consider $m = 0$ and utilize the second bound in Lemma~\ref{lem:remainders}. Namely, \begin{align*} f(B \exp(A_{\rm pN}) B^\top) - f(\Sigma) \ &\le \ 2^{-1} \langle A_{\rm pN}, G(B)\rangle + 2^{-1} \|A_{\rm pN}\|^2 N(\Sigma, \|A_{\rm pN}\|) \\ &\le \ 2^{-1} \langle A_{\rm pN}, G(B)\rangle + \|G(\Sigma^{1/2})\|^2 \frac{N(\Sigma, R_1(\Sigma))} {2 \Lambda_{\rm min}(\Sigma)^2} . \end{align*} Consequently, \begin{align*} f(B & \exp(A_{\rm pN}) B^\top) - f(\Sigma) - \langle A_{\rm pN}, G(B)\rangle/C \\ &\le \ (2^{-1} - C^{-1}) \langle A_{\rm pN}, G(B)\rangle + \|G(\Sigma^{1/2})\|^2 \frac{N(\Sigma, R_1(\Sigma))} {2 \Lambda_{\rm min}(\Sigma)^2} \\ &\le \ \|G(\Sigma^{1/2})\|^2 \Bigl( \frac{N(\Sigma, R_1(\Sigma))}{2 \Lambda_{\rm min}(\Sigma)^2} - \frac{2^{-1} - C^{-1}}{\Lambda_{\rm max}(\Sigma,0)} \Bigr) . \end{align*} But $R_1(\Sigma) \to 0$ as $\Sigma \to \Sigma_*$ and $N(\Sigma_*,0) = 0$, so \[ \lim_{\Sigma \to \Sigma_*} \Bigl( \frac{N(\Sigma, R_1(\Sigma))}{2 \Lambda_{\rm min}(\Sigma)^2} - \frac{2^{-1} - C^{-1}}{\Lambda_{\rm max}(\Sigma,0)} \Bigr) \ = \ - \frac{2^{-1} - C^{-1}}{\Lambda_{\rm max}(\Sigma_*,0)} \ < \ 0 . \] Consequently, $m(BU) = 0$ if $\Sigma$ is sufficiently close to $\Sigma_*$.
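The step-size halving at the heart of $\phi$ can be illustrated with a small numerical sketch. The following code is a simplified variant only: it assumes the hypothetical test function $f(\Sigma) = \mathop{\mathrm{tr}}\nolimits(\Sigma) - \log\det(\Sigma)$ (minimized at $I_q$, with geodesic gradient $G(B) = B^\top B - I_q$) and a plain geodesic gradient direction in place of the partial Newton step, while the acceptance rule $f(B\exp(2^{-m}A)B^\top) \le f(\Sigma) + 2^{-m}\langle A, G(B)\rangle/C$ mirrors the one in the algorithm:

```python
import numpy as np

def expm_sym(A):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh((A + A.T) / 2)
    return (V * np.exp(w)) @ V.T

def f(Sigma):
    """Test functional tr(Sigma) - log det(Sigma), minimized at Sigma = I_q."""
    return np.trace(Sigma) - np.linalg.slogdet(Sigma)[1]

rng = np.random.default_rng(1)
q, C = 4, 3.0                     # sufficient-decrease constant C > 2
B = rng.standard_normal((q, q))   # current iterate is Sigma = B B^T
for _ in range(60):
    Sigma = B @ B.T
    G = B.T @ B - np.eye(q)       # geodesic gradient of f at Sigma (for this f)
    if np.linalg.norm(G) < 1e-12:
        break
    A = -G                        # descent direction; <A, G> = -||G||^2 < 0
    m = 0                         # halve the step until sufficient decrease holds
    while f(B @ expm_sym(2.0**-m * A) @ B.T) > f(Sigma) + 2.0**-m * np.sum(A * G) / C:
        m += 1
    B = B @ expm_sym(2.0**-(m + 1) * A)   # new Sigma equals B exp(2^-m A) B^T

print(np.linalg.norm(B @ B.T - np.eye(q)) < 1e-6)
```

The iterates decrease $f$ monotonically and converge to the unique minimizer $I_q$; the inner loop terminates because the acceptance rule holds for sufficiently small steps whenever $\langle A, G(B)\rangle < 0$.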
\end{proof} \addcontentsline{toc}{section}{References} \input{PenalizedM.bbl} \clearpage
\section{Introduction} Randomness is a central concept and resource in various fields of research in computer science, information theory and physics, in both the classical and the quantum realm. It is an ingredient in (quantum) algorithm design, a core element in coding and communication protocols, and plays a central role in fundamental aspects of statistical mechanics. In the quantum context, randomness is also increasingly being seen as a valuable resource. A natural question that arises in this context is then how much of it is required to implement a given physical process on a quantum system. Another important question is to what extent the required amount of randomness differs depending on whether an \emph{implicit} or an \emph{explicit} model of randomness is employed. Here, an implicit model of randomness considers the \emph{source of randomness (SoR)} as a black box that provides coin flips, while an explicit model takes into account the fact that, fundamentally, all systems including the ones provided by the SoR are quantum systems, and hence models the randomness as a quantum state. In this work, we give a complete answer to both of the above questions. We provide, for both the implicit and explicit model, optimal and tight bounds on the amount of randomness required to implement physical processes on quantum systems. Moreover, we show a strict separation between the above models, in the sense that every physical process can be implemented in the explicit model by using only half the amount of randomness that is required in the implicit model. Specifically, we use a model of noisy processes---processes that require randomness---known as \emph{noisy operations}~\cite{Gour2015Resource}. We study the minimal amount of noise required to implement a large variety of noisy processes and construct protocols that saturate the lower bounds imposed by quantum mechanics.
These processes include \emph{dephasing and equilibration}~\cite{LongReview,Linden2012}, \emph{decoherence}~\cite{Zurek2003,Zeh}, the \emph{implementation of measurements}~\cite{HolevoBook,Nielsen2000,Zeh}, any \emph{transition} between two quantum states that requires randomness~\cite{Gour2015Resource}, as well as the novel construction of \emph{private quantum channels}~\cite{Ambainis2000,Boykin2003Optimal}. It is an important aspect of our work that, by virtue of an explicit model, these saturated lower bounds also translate into bounds on the \emph{physical size} of an SoR. This insight allows us to construct, for particular processes, the \emph{smallest decohering environment} or measurement device compatible with quantum mechanics~\cite{Zurek2003}. Put in a different language, it provides an understanding of the \emph{smallest equilibrating environment}~\cite{LongReview} possible. The surprisingly small size that suffices for an environment to be equilibrating challenges the commonly held view that such decohering baths should necessarily feature a large dimension. A further notable feature of the protocols that we construct is that they are \emph{catalytic}: The same unit of randomness can be \emph{re-used} for different processes~\cite{Mueller2017}. They are also \emph{robust}, in the sense that we do not require perfect control of either the states prepared by the SoR or the timing of the process, and further \emph{recurrent}, in the sense that, for large system dimension $d$, continuous-time versions of our noisy processes maintain a state close to the desired final state for times $\tau \propto \sqrt{d}$, at which point the system recurs to its initial state. \section{Classical versus quantum noise}\label{sec:classicalvsquantum} Let us begin by discussing in more detail the difference between classical and quantum uses of randomness. Consider initial and final (mixed) states $\rho,\rho'$ on a Hilbert space $\mc H_S$ of dimension $\dim( \mc H_S)=d$.
We are concerned with the possibility of implementing a transition $\mc E(\rho)=\rho'$, where $\mc E$ represents a noisy process. There exist different ways of modeling the maps $\mc E$, which we now explain in detail. In a classical, implicit model of the SoR one assumes a discrete random variable $J$ that is uniformly distributed over $m$ possible values. Depending on the value $j$ of $J$ one implements a given unitary transformation $U_j$, which gives rise to the operations \begin{align}\label{eq:def:classicalnoise} \mc E_{\rm C}^m(\:\cdot \:)=\frac{1}{m} \sum_{j=1}^{m} U_j \cdot U_j^{\dagger}. \end{align} If there exists such an $\mc E_{\rm C}^m$ making the transition possible, we simply denote this by $\rho \overset{m}{\to}_C \rho'$. In contrast, in an explicit quantum model, the SoR is a quantum system $R$ in the maximally mixed state of dimension $m$, which we denote by ${\mathbb{I}}_m \coloneqq \frac{1}{m} \mathbbm{1}$, with $\mathbbm{1}$ being the identity matrix. In this model, noisy processes are any effect of a unitary joint evolution of the compound, \begin{align}\label{eq:def:quantumnoise} \mc E_{\rm Q}^m(\:\cdot \:)= \Tr_R[U(\:\cdot \: \otimes {\mathbb{I}}_m)U^\dagger]. \end{align} As in the classical case, we write $\rho \overset{m}{\to}_Q \rho'$ whenever the transition is possible. The sets of transitions that can be implemented with classical and quantum noise coincide if the amount of noise---quantified by the dimension $m$---is unbounded. In this case we have \begin{align} \label{eq:equiv_for_infty} \rho \overset{\infty}{\to}_C \rho' \Leftrightarrow \rho \overset{\infty}{\to}_Q \rho' \Leftrightarrow \rho \succ \rho' \end{align} where we use the symbol ``$\succ$'' to indicate that $\rho$ majorizes $\rho'$~\cite{Marshall2011Inequalities}.
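The classical model \eqref{eq:def:classicalnoise} can be sketched numerically. The following illustration (assuming nothing beyond NumPy; the unitaries are arbitrary random ones) applies a uniform mixture of unitary conjugations to a random density matrix and checks that the input majorizes the output, consistent with \eqref{eq:equiv_for_infty}:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m = 3, 4

def random_unitary(n, rng):
    """Random unitary: QR of a complex Ginibre matrix with phase fix."""
    Q, R = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
    return Q * (np.diag(R) / np.abs(np.diag(R)))

G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real                      # random density matrix

Us = [random_unitary(d, rng) for _ in range(m)]
rho_out = sum(U @ rho @ U.conj().T for U in Us) / m   # classical noisy operation

def majorizes(p, q_):
    """Check whether the vector p majorizes q_ (both sum to one)."""
    p, q_ = np.sort(p)[::-1], np.sort(q_)[::-1]
    return bool(np.all(np.cumsum(p) >= np.cumsum(q_) - 1e-12))

print(majorizes(np.linalg.eigvalsh(rho).real, np.linalg.eigvalsh(rho_out).real))
```

The printed value is True for any choice of unitaries, since a uniform mixture of unitary conjugations acts doubly stochastically on the spectrum.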
The set of transitions $\rho \overset{\infty}{\to}_Q \rho'$ has been extensively studied as \emph{noisy operations}~\cite{Gour2015Resource}, where the noise is treated as a free resource and the main concern is to study the possible transitions with unbounded $m$. In contrast, here we are concerned with treating noise as a valuable resource and focus on the following question: What is the minimal amount of noise---quantified by $m$---that serves to implement any possible transition between pairs of $d$-dimensional quantum states fulfilling $\rho \succ \rho'$? We denote these minimal values of $m$ for the classical and quantum case by $m^*_C(d)$ and $m^*_Q(d)$, respectively. At first glance, one might suspect that $m^*_C(d)=m^*_Q(d)$, with quantum noise offering no advantage over its classical counterpart. That intuition comes from the fact that, although one writes a full quantum description in~\eqref{eq:def:quantumnoise}, the state of $R$, given by ${\mathbb{I}}_m$, is nevertheless a quasi-classical state. Hence it seems reasonable that it could be recast as a classical variable, much as in~\eqref{eq:def:classicalnoise}. However, treating the noise as a quantum state allows one to access its quantum degrees of freedom, for example to create entanglement between $S$ and $R$. In other words, one could in principle use quantum correlations to make a more efficient use of the noise, yielding $m^*_C(d)>m^*_Q(d)$. One of the main results of this work is to show that there is indeed a gap between the classical and quantum case. We find that $m^*_C(d)=d \:{>}\:\lceil d^{1/2}\rceil=m^*_Q(d)$ and, more importantly, we construct protocols that saturate those bounds. In this way, we provide protocols that use the noise optimally for a large variety of tasks. These protocols also have a number of useful properties, such as allowing one to re-use the noise or being robust under different classes of imperfections.
In the subsequent section, we present the key lemma to construct such optimal protocols and then turn to discuss applications and properties in Section~\ref{sec:applications}. \section{An optimal dephasing map} \label{sec:an_optimal_dephasing_map} For any state transition $\rho\to \rho'$ that is possible under either quantum or classical noisy processes, there exists a corresponding map $\mc E(\rho) = \rho'$ such that \begin{align} \label{eq:decomp_channel} \mc E(\cdot) = \mc U' \circ \pi_{A} \circ \mc U(\cdot). \end{align} Here $\mc U', \mc U$ are unitary channels that depend on $\rho$ and $\rho'$. The map $\pi_{A}$ is the dephasing map in a fixed orthonormal basis $A = \{\ket{i}\}_{i=1}^d$, defined as \begin{align} \bra{i}\pi_A(\rho) \ket{j} = \bra{i}\rho\ket{j} \delta_{i,j}, \end{align} with $\delta_{i,j}$ being the Kronecker delta. This follows from the Schur--Horn theorem~\cite{Horn1954Doubly} together with~\eqref{eq:equiv_for_infty} and was already used to bound the required randomness for noisy processes in Ref.\ \cite{Scharlau2016Quantum}. Since the unitary channels $\mc U', \mc U$ do not require the use of any SoR by definition, we see from~\eqref{eq:decomp_channel} that noise is required only for the implementation of the dephasing map $\pi_A$. In turn, \eqref{eq:decomp_channel} implies that whether $\mc E$ represents a quantum noisy process or a classical one depends only on the particular implementation of this dephasing map: Any construction of $\pi_A$ in the form of~\eqref{eq:def:quantumnoise} with an $m$-dimensional SoR implies that $\mc{E}$ is a map $\mc E_Q^m$, while any construction of it in the form of~\eqref{eq:def:classicalnoise} implies that $\mc{E}$ is of the form $\mc E_C^m$. Understanding the amount of randomness required to implement the dephasing map is therefore key to understanding the amount of randomness required to implement any noisy process.
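The Schur half of the Schur--Horn theorem invoked above can be illustrated numerically: the diagonal of a density matrix, which is the spectrum of $\pi_A(\rho)$, is always majorized by the eigenvalues of $\rho$. A minimal sketch, assuming only NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
rho = G @ G.conj().T
rho /= np.trace(rho).real                 # random density matrix on C^d

spec = np.sort(np.linalg.eigvalsh(rho))[::-1]   # eigenvalues of rho
diag = np.sort(np.real(np.diag(rho)))[::-1]     # spectrum of the dephased state

# partial sums of the spectrum dominate those of the diagonal: rho majorizes pi_A(rho)
print(bool(np.all(np.cumsum(spec) >= np.cumsum(diag) - 1e-12)))
```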
The following lemma provides a protocol implementing a dephasing map in any basis, using an explicit model of noise and requiring a SoR of dimension $m=\lceil d^{1/2} \rceil$. \begin{lemma}[Catalytic quantum dephasing]\label{lem:dephasing_basic} For any integer $d$ and basis $A$ there exists a unitary $U$ so that \begin{align} \label{eq:dephasingcondition1} &\operatorname{tr}_R[U \: (\cdot \: \otimes\: {\mathbb{I}}_{\lceil d^{1/2}\rceil} )\: U^\dagger] = \pi_A(\cdot),\\ \label{eq:dephasingcondition2} &\operatorname{tr}_S[U (\rho \otimes {\mathbb{I}}_{\lceil d^{1/2}\rceil} ) U^\dagger] = {\mathbb{I}}_{\lceil d^{1/2}\rceil}\:\: \forall \rho. \end{align} \end{lemma} \begin{proof} Assume first that $ \sqrt{d} = m \in \mathbb{N}$. Now, let $\{U_i\}$ be a unitary operator basis for $\mc B(\mc H_R)$, that is, a collection of $m^2=d$ unitary operators $U_i \in \mc B(\mc H_R)$ such that \begin{align} \label{eq:prop_unitary_basis} \frac{1}{m}\operatorname{tr}(U_i U_j^\dagger) &= \delta_{i,j} \end{align} for all $i,j$. Such a basis exists for every $m$~\cite{Schwinger1960,Werner2001All}. We now define the unitary \begin{align} \label{eq:dephasing_unitary_basic} U = \sum_{i=1}^d \proj{i} \otimes U_i, \end{align} where the $\{\ket{i}\}$ are elements of the basis $A$ in which we intend to pinch. Then, for any density matrix $\rho$ on $\mc H_S$, \begin{align} \Tr_R[U (\rho \otimes {\mathbb{I}}_m )U^\dagger] &= \sum_{i,j} \proj{i}\rho \proj{j} \frac{1}{m}\operatorname{tr}(U_i U^\dagger_j) \\ &= \sum_{i,j} \proj{i}\rho \proj{j} \delta_{i,j} = \pi_A(\rho). \end{align} Lastly, note that Eq.~\eqref{eq:dephasingcondition2} follows simply by \begin{align}\label{eq:map_on_noise} \Tr_S[U (\rho \otimes {\mathbb{I}}_m ) U^\dagger] &=\sum_i \bra{i}\rho \ket{i} U_i {\mathbb{I}}_m U_i^\dagger={\mathbb{I}}_m .
\end{align} In the case where $\sqrt{d}$ is not an integer, we can use the same construction with a source of randomness of dimension $m =\lceil d^{1/2} \rceil$ by simply not exhausting all $m^2$ possible unitaries $U_i$ on $R$. \end{proof} The protocol of Lemma~\ref{lem:dephasing_basic} is optimal, in the sense that it is impossible to implement the dephasing map with $m<\lceil d^{1/2}\rceil$. This can be seen by noting that for any basis $A$ one can always choose an initial pure state $\rho$ so that $\pi_A(\rho)={\mathbb{I}}_d$. Using the preservation of the von Neumann entropy under unitaries and the Araki-Lieb triangle inequality one finds that $m\geq \sqrt{d}$ (see Appendix~\ref{sec:necessary}). This implementation of the dephasing map improves on the best value known to date, $m=d$, proven in Ref.~\cite{Scharlau2016Quantum}, whose implementation can in fact be shown to correspond to a classical noisy operation of the form~\eqref{eq:def:classicalnoise}, as we will see later. \subsection{Catalyticity} \label{sub:catalyticity} Equation~\eqref{eq:dephasingcondition2} states that the dephasing operation defined in Lemma~\ref{lem:dephasing_basic} leaves the state of $R$ invariant, or in other words, that the noise is catalytic~\cite{JonathanPlenio, Mueller2017,Ng14,PhysRevLett.85.437}. This property has numerous useful applications. For instance, an immediate corollary of the lemma is that one can locally dephase an arbitrarily large number of uncorrelated systems, each of them of dimension at most $d$, by using a single noise system $R$ of dimension $\lceil d^{1/2} \rceil$. More formally, we have that for any set of states $\{\rho^i\}_{i=1}^{N}$ there exists a unitary $U$ so that \begin{align}\label{eq:local_dephasing} \Tr_R[U (\rho^1_{S_1} \otimes \cdots \otimes \rho^N_{S_N}\otimes {\mathbb{I}}_{\lceil d^{1/2}\rceil}) \: U^\dagger]= \rho'_{S_1,\ldots,S_N} \end{align} where $\rho'_{S_i}=\pi_{A_i}(\rho^i_{S_i})$.
This follows by simply iterating the unitaries of Lemma~\ref{lem:dephasing_basic} over all the subsystems and re-using the noise as illustrated in the top of Fig.~\ref{fig:dephasing}. In contrast, if the noise did not have the property of being catalytic, then it would be necessary to employ a new mixed state for each of the subsystems, in which case an amount of randomness proportional to $N$ would be required (bottom of Fig.~\ref{fig:dephasing}). It is important to note, however, that reusing the randomness comes at the cost of correlating the subsystems amongst each other. Hence, if a protocol requires the individual systems to remain uncorrelated, one still has to resort to a scheme whose required randomness scales linearly with the number of subsystems. \begin{figure}[tb] \includegraphics[width=0.4\textwidth]{catalytic-new2.pdf} \includegraphics[width=0.4\textwidth]{catalytic-new1.pdf} \caption{Two possible ways of dephasing and the resulting correlation structure. Top: A sequence of systems in state $\rho$ is dephased using a single state of randomness, with correlations being established between all systems involved. The local margins of the resulting global state \eqref{eq:local_dephasing} are the dephased initial states. Bottom: In order to avoid correlations between the systems, one can instead use additional and unused randomness.} \label{fig:dephasing} \end{figure} As sketched already, dephasing can be related to many processes that require noise, both in engineered and in naturally equilibrating quantum processes. In the remainder of this work, we discuss and present applications of Lemma~\ref{lem:dephasing_basic} to these processes. \section{Applications} \label{sec:applications} \subsection{Minimal noise for state transitions} \label{sub:first_application_noisy_operations} As a first application, we prove the tight bounds for noisy operations presented in Section~\ref{sec:classicalvsquantum}.
Formally, given a Hilbert space $\mc H_S$ with $\dim(\mc H_S)=d$, we define the minimal noise for the classical and quantum case as \begin{align} \label{eq:def:minimalclassical}m^*_C(d)\coloneqq &\argmin_m \: \rho \overset{m}{\to}_C \rho' \:\:\:\forall \rho,\rho' \in \mc B(\mc H_S)\: | \: \rho \succ \rho' ,\\ \label{eq:def:minimalquantum}m^*_Q(d)\coloneqq &\argmin_m \: \rho \overset{m}{\to}_Q \rho' \:\:\:\forall \rho,\rho' \in \mc B(\mc H_S) \: | \: \rho \succ \rho'. \end{align} In the following lemma we find the values of the above quantities, thus providing the smallest SoR that suffices to perform any transition between two states $\rho \succ \rho'$. Note, however, that it is possible for particular transitions to require even less randomness or none at all. \begin{lemma}[Optimal {source of randomness} for state transitions]\label{lemma:optimaldimensions} Any state transition of a $d$-dimensional system that is possible under noisy processes, in the sense of~\eqref{eq:def:minimalclassical} and~\eqref{eq:def:minimalquantum}, can be implemented using an amount of classical and quantum noise given by \begin{align} \label{eq:optimalclassical}&m^*_C(d)=d,\\ \label{eq:optimalquantum}&m^*_Q(d)=\lceil d^{1/2} \rceil. \end{align} \end{lemma} \begin{proof} Here, we only prove that the above values are sufficient. For the corresponding necessary conditions (and $\epsilon$-approximate versions of the above) see Appendix~\ref{sec:necessary}. Eq.~\eqref{eq:optimalquantum} follows from combining~\eqref{eq:decomp_channel} with the dephasing construction in Lemma~\ref{lem:dephasing_basic}. To see~\eqref{eq:optimalclassical}, consider the {unitary \begin{align} V= \sum_{i=1}^d \proj{i}_S \otimes X^i_R, \end{align} where $X$ is the generalized Pauli matrix defined as \begin{align}\label{X} X \ket{i} = \ket{(i + 1) \text{ mod } d}.
\end{align} As shown in Ref.~\cite{Scharlau2016Quantum}, this unitary implements the dephasing map \begin{align} \Tr_R(V (\rho \otimes {\mathbb{I}}_d )V^\dagger) &= \frac{1}{d} \sum_{i,j}\bra{i}\rho\ket{j}\ketbra{i}{j} \Tr(X^{i-j}) \\ &= \pi_A(\rho). \end{align} $V$ is the local Fourier transform of a unitary leading to a channel of the form~\eqref{eq:def:classicalnoise}: there exists a unitary $F$ and a basis $\{\ket{ \tilde j}=F^\dagger \ket{j}\}$ such that \begin{align} \tilde{V}\coloneqq (\mathbbm{1} \otimes F) V (\mathbbm{1}\otimes F^\dagger) =\sum_{j=1}^d Z^j \otimes \proj{\tilde{j}}. \end{align} Here, \begin{align}\label{Z} Z=\sum_j \omega_d^j \proj{j} \end{align} is the generalized Pauli matrix conjugate to $X$ and $\omega_d$ the $d$-th root of unity. Since the maximally mixed state is unitarily invariant, $\tilde{V}$ implements the dephasing map and its action on the system $S$ can be represented as \begin{align} \rho \mapsto \Tr_R(\tilde V (\rho\otimes {\mathbb{I}}_d) {\tilde V}^\dagger) = \frac{1}{d}\sum_{j=1}^d Z^j \rho Z^{-j}. \end{align} Thus the dephasing map can be implemented with a classical SoR of dimension $d$. } \end{proof} This lemma proves a conjecture in Ref.~\cite{Scharlau2016Quantum}, where the possibility of strengthening their bound $m^*_Q(d)\le d$ to the present one was already raised. In complete analogy to the discussion in Section~\ref{sub:catalyticity} and Fig.~\ref{fig:dephasing}, we can also use the catalytic properties of the source of randomness to implement state transitions locally, starting from an initially uncorrelated state and using a fixed-size source of randomness. More concretely, let $\{\rho^i\}_{i=1}^N$ and $\{\sigma^i\}_{i=1}^N$ be $d$-dimensional quantum states such that $\rho^i \succ \sigma^i$ for all $i=1,\ldots,N$.
Then there exists a unitary $U$ such that \begin{align}\label{eq:local_transition} \Tr_R[U (\rho^1_{S_1} \otimes \cdots \otimes \rho^N_{S_N}\otimes {\mathbb{I}}_{\lceil d^{1/2}\rceil}) \: U^\dagger]= \rho'_{S_1,\ldots,S_N}, \end{align} with $\rho'_{S_i} = \sigma^i$. To see this, we recall from the discussion in Section~\ref{sub:catalyticity} that the transition $\rho^i \rightarrow \sigma^i$ can be implemented by composing unitary channels and dephasing maps. Hence, $\mc{E}(\rho^{1}_{S_1} \otimes \cdots \otimes \rho^{N}_{S_N} ) = \sigma^{1}_{S_1} \otimes \cdots \otimes \sigma^{N}_{S_N}$ with \begin{align}\label{eq:local_shur_horn} \mc{E} = \bigotimes_{i=1}^N \mc{U}'_{S_i} \circ \bigotimes_{i=1}^N\pi_{A_i} \circ \bigotimes_{i=1}^N\mc{U}_{S_i} . \end{align} Now, using Eq.\ \eqref{eq:local_dephasing} we see that it is possible to dephase locally ---that is, perform locally the same transition as the one implemented by the second map on the r.h.s. of \eqref{eq:local_shur_horn}--- using a single source of randomness of dimension $\lceil d^{1/2}\rceil$, at the cost of creating correlations between the subsystems. Hence, composing the local unitaries with the local dephasing of \eqref{eq:local_dephasing}, we obtain a map that locally implements the same transition as $\mc{E}$, as captured by \eqref{eq:local_transition}. \subsection{Smallest possible decohering environment and measurement device} \label{ssub:measurements} A further application of our results is to the physical mechanism of decoherence and to the implementation of a measurement in quantum mechanics, which can indeed be seen as a special case of a noisy operation, since it requires randomness. Both applications follow from the fact that a quantum source of randomness can be seen as half of a maximally entangled system. It is useful to first discuss decoherence.
To do so, we make use of the fact that the usual decoherence mechanism is, in a sense, simply a purified version of the system-environment interactions that are toy-modelled by noisy operations. Let $\ket{\psi} \in \mc H_S$ be an initial state vector of a $d$-dimensional system and $\ket{\phi}$ be the initial state vector of the environment. According to the decoherence mechanism, the unitary joint evolution of system and environment is generated by a Hamiltonian whose interaction term picks out, or einselects, a preferred basis in which it decoheres the system~\cite{Zurek2003}. We are now interested in the smallest possible size of the environment that achieves this. Let us label the system basis that is einselected by $A = \{ \ket{i}\}$ and assume that $\ket{\phi}$ is a maximally entangled $d$-dimensional and bi-partite state vector over systems $E_1$ and $E_2$. We then define the unitary \begin{align} \label{eq:unitary_deco} U = U_{SE_1} \otimes {\mathbbm 1}_{E_2}, \end{align} where $U_{SE_1}$ is the unitary defined in~\eqref{eq:dephasing_unitary_basic} that acts on systems $S$ and $E_1$. As is clear from the above, this unitary will have the effect that \begin{align} \Tr_E[U \proj{\psi} \otimes \proj{\phi} U^\dagger] = \pi_A(\proj{\psi}), \end{align} meaning that even in this purified picture only an environment of the size of the system is required to produce decoherence. Let us now turn to the smallest possible measurement device. For simplicity, we only consider projective measurement schemes: Suppose we are given a system in some initial state vector $\ket{\psi}$ and some set of projective measurement operators $\{M_i = \proj{i}\}, i \in \{1, \dots, d\}$. 
Then a measurement process consists of the following ingredients: a bi-partite measurement device, initially in state vector $\ket{\phi}$, consisting of a $d$-dimensional pointer system $P$ and a remainder $R$, whose dimension we are interested in bounding; and a unitary $W$ with the effect that \begin{align} \label{eq:measurement_scheme} \Tr_R[W \proj{\psi} \otimes \proj{\phi} W^\dagger] = \sum_i p_i \proj{i, P_i}, \end{align} where $p_i = \operatorname{tr}(M_i \proj{\psi})$ and $\{\ket{P_i}\}$ form an orthonormal basis for the pointer system. Using the above results, we can easily construct a measurement process as follows: Let the initial state vector of the measurement device be $\ket{\phi} = \ket{0}_P \otimes \ket{\phi^+}_R$, where $\ket{\phi^+}$ is a bi-partite, $d$-dimensional, maximally entangled state vector. Further, let $\{V_i\}$ be unitaries defined by the action \begin{align} V_i \ket{i, 0} = \ket{i,P_i}. \end{align} Finally, define the unitary \begin{align} W = \sum_i \proj{i}\otimes V_i \otimes (U_i)_{R_1} \otimes \mathbbm{1}_{R_2}, \end{align} where the unitaries $U_i$ form an operator basis as before. Then, it is easy to verify that $\ket{\phi}$ and $W$ together satisfy~\eqref{eq:measurement_scheme}. This shows that in principle one requires a measurement device (including the pointer variable) whose size is only twice that of the system to be measured to implement a projective measurement as a physical process. Using entropic arguments one can again show that this is also the smallest possible measurement device. Note that the register $R$ is exclusively used as a source of randomness in this protocol. Thus if we are willing to give up the assumption that the initial state of the measurement device is pure, then it suffices to only keep part $R_1$ in a maximally mixed state. Clearly, these results can also be read as providing the minimal dimension of an environment that equilibrates a quantum system of dimension $d$~\cite{LongReview,Linden2012}.
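The decoherence mechanism discussed above admits a minimal numerical sketch. The assumptions here are for illustration only: a system of dimension $d=4$, environment halves $E_1,E_2$ of dimension $m=2$, and the Pauli-type operator basis $\{\mathbbm{1},X,Z,XZ\}$ standing in for the unitary operator basis $\{U_i\}$ of the dephasing unitary. The environment starts in a maximally entangled pure state, yet tracing it out leaves the system exactly dephased in the einselected basis.

```python
import numpy as np

d, m = 4, 2  # system dimension d; environment halves E1, E2 of dimension m

# Pauli-type unitary operator basis on E1 with Tr(U_i U_j^dag) = m * delta_ij
# (an illustrative choice, not necessarily the basis used in the paper).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
U_ops = [np.eye(2, dtype=complex), X, Z, X @ Z]

# U_{S E1} = sum_i |i><i| (x) U_i, extended trivially to E2.
U_SE1 = np.zeros((d * m, d * m), dtype=complex)
for i in range(d):
    P = np.zeros((d, d), dtype=complex); P[i, i] = 1.0
    U_SE1 += np.kron(P, U_ops[i])
U = np.kron(U_SE1, np.eye(m, dtype=complex))   # index ordering: S (x) E1 (x) E2

# Random pure system state and a maximally entangled environment state on E1 E2.
rng = np.random.default_rng(0)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
phi = np.zeros(m * m, dtype=complex)
for k in range(m):
    phi[k * m + k] = 1.0 / np.sqrt(m)

vec = np.kron(psi, phi)
out = (U @ np.outer(vec, vec.conj()) @ U.conj().T).reshape(d, m * m, d, m * m)
rho_S = np.einsum('ikjk->ij', out)             # trace out the environment E = E1 E2

# The system ends up exactly in the pinched (einselected) state.
dephased = np.diag(np.diag(np.outer(psi, psi.conj())))
print(np.allclose(rho_S, dephased))            # prints True
```

The key point visible in the sketch is that $E_1$ alone is effectively maximally mixed (it is half of an entangled pair), so an environment of total dimension $d$ suffices even in this purified picture.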
\subsection{A universal dephasing machine} \label{sub:universaldecoherence} In Section~\ref{sec:an_optimal_dephasing_map} we have shown that with the aid of a noise system $R$ in state ${\mathbb{I}}_{\lceil d^{1/2} \rceil}$ it is possible to perform a protocol $U$ which has the effect of implementing the dephasing map $\pi_A$ on the system $S$. We will now investigate which map is induced on $S$ if the same unitary is applied with a system $R$ in a state $\sigma$ different from ${\mathbb{I}}_{\lceil d^{1/2}\rceil}$. We will show that $U$ brings the system closer to $\pi_A(\rho)$ for any initial states $\rho$ and $\sigma$. Also, we find that iterating the same protocol $U$ with a sufficiently large sequence of imperfect noise states of $R$ brings the system $S$ exponentially close (in the number of iterations) to its dephased state. In this sense, $U$ acts as a universal dephasing machine (Fig.~\ref{fig:machine1} and Fig.~\ref{fig:machine}): an iterated use of the same protocol $U$ dephases the state of $S$ for large families of states on $R$ acting as a SoR. Hence one can implement this protocol universally as a ``black box'', without having to know the actual state of $R$. \subsubsection{Imperfect noise and convergence to the dephased state}\label{sec:convergence} Let $ \mathcal{D}_\sigma(\cdot)$ denote the map \begin{align} \label{eq:dephasing_map} \mathcal{D}_\sigma(\cdot) \coloneqq \operatorname{tr}_R[U \: (\cdot \: \otimes \: \sigma )\: U^\dagger] \end{align} where $U$ is the unitary of Lemma~\ref{lem:dephasing_basic}. In Appendix~\ref{app:universal_dephasing_machine} we show that, for any $\rho$ and $\sigma$, \begin{align} \mathcal{D}_\sigma(\pi(\rho)) = \pi(\mathcal{D}_\sigma(\rho)) = \pi(\rho),\\ \norm{\mathcal{D}_\sigma(\rho) - \pi(\rho)}_1 \leq \norm{\sigma - {\mathbb{I}}_{\lceil d^{1/2} \rceil}}_1, \end{align} where we have dropped the subscript $A$. 
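These two relations, as well as the advertised exponential approach to the dephased state under iteration, can be illustrated numerically. The sketch below assumes, for illustration, $d=4$, $m=2$ and the Pauli-type operator basis $\{\mathbbm{1},X,Z,XZ\}$ for the dephasing unitary; for the iterated distance it verifies a crude but rigorous estimate of the form $\norm{\sigma - {\mathbb{I}}_m}_1^n$ times the off-diagonal weight of $\rho$, rather than the sharper bound proven in the appendix.

```python
import numpy as np

d, m = 4, 2  # illustrative dimensions
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
U_ops = [np.eye(2, dtype=complex), X, Z, X @ Z]  # Tr(U_i U_j^dag) = m * delta_ij

U = np.zeros((d * m, d * m), dtype=complex)
for i in range(d):
    P = np.zeros((d, d), dtype=complex); P[i, i] = 1.0
    U += np.kron(P, U_ops[i])

def trace_norm(A):
    return np.linalg.svd(A, compute_uv=False).sum()

def D(rho, sigma):
    """D_sigma(rho) = Tr_R[U (rho (x) sigma) U^dag]."""
    out = (U @ np.kron(rho, sigma) @ U.conj().T).reshape(d, m, d, m)
    return np.einsum('ikjk->ij', out)

def rand_state(n, rng):
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

rng = np.random.default_rng(1)
rho, sigma = rand_state(d, rng), rand_state(m, rng)
pinched = np.diag(np.diag(rho))

ok_diag = np.allclose(np.diag(D(rho, sigma)), np.diag(rho))  # diagonal preserved
ok_fix = np.allclose(D(pinched, sigma), pinched)             # pinched state is a fixed point

# Iterating contracts monotonically towards the pinched state; each off-diagonal
# entry is multiplied by Tr(U_i sigma U_j^dag), of modulus at most ||sigma - 1/m||_1.
r = trace_norm(sigma - np.eye(m) / m)
off = np.abs(rho - pinched).sum()
state, dists = rho.copy(), []
for n in range(1, 6):
    state = D(state, sigma)
    dists.append(trace_norm(state - pinched))
print(ok_diag, ok_fix, dists[-1] <= r**5 * off + 1e-9)
```

Since $r<1$ for any mixed $\sigma$, the printed distances decay geometrically, which is exactly the ``black box'' behavior described above.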
These properties imply that, independently of the actual state $\sigma$, the system $S$ is brought closer to the dephased state $\pi(\rho)$ while keeping its diagonal invariant. This follows from the data-processing inequality~\cite{Nielsen2000} \begin{align} \norm{\mathcal{D}_\sigma(\rho) - \pi(\rho)}_1 = \norm{\mathcal{D}_\sigma(\rho) - \mathcal{D}_\sigma(\pi(\rho))}_1 \leq \norm{\rho - \pi(\rho)}_1.\nonumber \end{align} Using those properties, one can show that by repeating the process sequentially (see Fig.~\ref{fig:machine1} (top)) the system is eventually dephased for large classes of states $\sigma$. In fact, one can show that (see again Appendix~\ref{app:universal_dephasing_machine}) \begin{align}\label{eq:convergence_system} \norm{\mathcal{D}^n_\sigma(\rho) - \pi(\rho)}_1 \leq \norm{\sigma - {\mathbb{I}}_{\lceil d^{1/2} \rceil}}_1^n , \end{align} where $\mathcal{D}^n_\sigma(\rho)$ denotes the repeated application of $\mathcal{D}_\sigma$. This means that, given $\sigma$ such that $\norm{\sigma - {\mathbb{I}}_{\lceil d^{1/2} \rceil}}_1<1$, the dephased state is approached exponentially fast. Note that another corollary of the above properties is that the map $\mc{D}_{\sigma}$ can only increase the von Neumann entropy of its input, which is formally proven in Appendix \ref{app:robustness_noise}. \begin{figure}[!t] \includegraphics[width=0.4\textwidth]{univdephmach.pdf} \caption{Single instance of ``universal dephasing machine''. 
We interpret the process $\rho \otimes \sigma \to U(\rho \otimes \sigma)U^\dagger$ as a dephasing machine that takes the state $\sigma$ as fuel and transforms the input state $\rho$ into the output state $\mathcal{D}_\sigma(\rho)$ and ``waste'' $\tilde{\mathcal{D}}_\rho(\sigma)$.} \label{fig:machine1} \end{figure} \subsubsection{Reusing the randomness} \label{ssub:reusing_the_randomness} \begin{figure*}[!t] \includegraphics[width=0.8\textwidth]{udm-sequence.pdf} \caption{Top: Repeated application on a single input state approximates the dephasing map. Bottom: Producing the dephased state when there is no SoR. If $\norm{\rho-{\mathbb{I}}_d}_1 < 1$, then the necessary amount of randomness for dephasing can be distilled by repeated application of the universal dephasing machine.} \label{fig:machine} \end{figure*} In the case of $R$ being in the state ${\mathbb{I}}_{\lceil d^{1/2} \rceil}$, we have shown in Section~\ref{sub:catalyticity} that it remains unchanged and thus the noise is re-usable. A natural question is then what happens to the state of $R$ when it is in an arbitrary state $\sigma$. Let $\tilde{\mathcal{D}}_\rho$ denote the map \begin{align} \tilde{\mathcal{D}}_\rho(\cdot) \coloneqq \operatorname{tr}_S[U \: (\rho \: \otimes \: \cdot \:) U^\dagger]. \end{align} It follows simply from Eq.~\eqref{eq:map_on_noise} that $\tilde{\mathcal{D}}_\rho$ is just a mixture of unitaries, hence bringing $R$ closer to the maximally mixed state.
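This contraction of the noise register can be made concrete in a small numerical sketch. The assumptions are again for illustration: $d=4$, $m=2$, a Pauli-type operator basis, and the mixture-of-unitaries form $\sum_i \rho_{ii}\, U_i \sigma U_i^\dagger$ for the map on $R$, which follows from the block structure of the dephasing unitary.

```python
import numpy as np

d, m = 4, 2
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
U_ops = [np.eye(2, dtype=complex), X, Z, X @ Z]  # illustrative operator basis

def trace_norm(A):
    return np.linalg.svd(A, compute_uv=False).sum()

def D_tilde(sigma, rho):
    # Map induced on the noise register: a mixture of unitaries weighted by the
    # diagonal entries of rho.
    return sum(rho[i, i].real * U_ops[i] @ sigma @ U_ops[i].conj().T
               for i in range(d))

rng = np.random.default_rng(2)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T; rho /= np.trace(rho).real
B = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
sigma = B @ B.conj().T; sigma /= np.trace(sigma).real

# A mixture of unitaries is unital, so the distance of the noise register to the
# maximally mixed state cannot grow.
before = trace_norm(sigma - np.eye(m) / m)
after = trace_norm(D_tilde(sigma, rho) - np.eye(m) / m)
print(after <= before + 1e-12)                 # prints True
```

The check only asserts the (always valid) non-increase of the distance; the stronger quantitative bounds stated next require the particular basis choices discussed in the appendix.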
Indeed, following arguments analogous to the ones of Section~\ref{sec:convergence} (see Appendix~\ref{app:universal_dephasing_machine}) one can show that there exist choices for the unitary operator basis of Lemma~\ref{lem:dephasing_basic} so that the final state of $R$ fulfills \begin{align} \norm{\tilde{\mathcal{D}}_\rho(\sigma) - {\mathbb{I}}_{\lceil d^{1/2} \rceil}}_1 \leq \norm{\rho - {\mathbb{I}}_d}_1, \end{align} and analogously it converges as \begin{align}\label{eq:convergencenoise} \norm{\tilde{\mathcal{D}}^n_\rho(\sigma) - {\mathbb{I}}_{\lceil d^{1/2} \rceil}}_1 \leq \norm{\rho - {\mathbb{I}}_d}_1^n. \end{align} Altogether we conclude not only that the noise can be re-used, but furthermore, that it improves its quality, converging exponentially fast to a state of perfect noise, provided that the initial state $\rho$ is mixed enough to start with (as given by the condition $\norm{\rho - {\mathbb{I}}_d}_1<1$). The fact that the noise system is brought closer to the maximally mixed state allows one to implement a distillation protocol such as the one depicted in Fig.~\ref{fig:machine} (bottom). There, one has a single source providing copies of a given initial state $\rho$. One aims at dephasing each subsystem locally, similarly to what is done with a perfect noise system in Eq.~\eqref{eq:local_dephasing}. Here, one can take one copy of $\rho$ to play the role of $R$ for some iterations until it is brought close enough to the maximally mixed state, which will happen exponentially quickly, given~\eqref{eq:convergencenoise}. Then, using Eq.~\eqref{eq:convergence_system} one can ensure that all the new copies of $\rho$ can be locally dephased. \subsubsection{Time control for the dephasing machine and recurrence} \label{ssub:time_control_for_the_dephasing_machine} So far we have left unspecified how the dephasing of the machine would physically be implemented.
One concern here may be that the dephasing properties heavily rely on very precise time control of the evolution under the associated Hamiltonian $H = \mathrm{i} \log(U)$. However, the numerical simulations depicted in Fig.~\ref{fig:robust_simulation} strongly indicate that, as the system dimension becomes large, $H$ produces an evolution that is close to $\mc D_\sigma(\cdot)$ for a time-span that scales exponentially with the size of $S$. Indeed, for prime power dimensions and the case $\sigma = {\mathbb{I}}_{\lceil d^{1/2} \rceil}$, we find analytically that integer iterations of the application of the dephasing unitary always yield the exact dephasing map, up to a recurrence point, at which the original state is returned. See Appendix~\ref{app:timing} for details. The numerical simulations above complement this and suggest that this recurrence property holds not only for integer iterations of the application of the dephasing unitary, but also for intermediate times. We hence expect that in the limit of very large dimensions, this \emph{equilibrating} behavior~\cite{LongReview,Linden2012} becomes arbitrarily good and the state $\rho(t)$ remains close to the equilibrium state $\pi(\rho)$ for a time exponential in the system size. This means that the universal dephasing machine can be made robust in time, in the sense that it does not require exact control over the timing and the dephasing is maintained for long time scales. \begin{figure}[!tbp] \includegraphics[width=0.4\textwidth]{RecurrencesNew2.pdf} \caption{Numerical simulations of the dephasing map that is induced by the noisy operation~\eqref{eq:dephasing_unitary_basic} for continuous time and system dimensions $d=m^2=9,25,49,121$ (red,green,yellow,blue). Shown is the trace-norm distance between the time evolved state $\rho(t)$ and the pinched state $\pi(\rho)$ as a function of rescaled time $t/m$. The initial state is a maximally coherent state $\frac{1}{\sqrt{d}}\sum_i \ket{i}$. 
The graph shows that, while for integer times (with respect to the dimension of the environment) the dephasing is always exact, for non-integer times the deviation from exact dephasing becomes small with increasing dimension. The numerically obtained deviation at $t/m=0.5$ seems compatible with a scaling as $1/m=1/\sqrt{d}$, but we leave the derivation of the exact scaling behaviour open.} \label{fig:robust_simulation} \end{figure} \subsection{An entanglement-assisted private quantum channel} \label{sec:a_modified_private_quantum_channel} In this section, we apply our results to the construction of a cryptographic protocol known as a private quantum channel (PQC). In a PQC setting, two parties, Alice and Bob, would like to communicate quantum data privately, that is, without an eavesdropper being able to intercept and retrieve the data. To achieve this they share a secret key. We will now first briefly explain PQCs using classical secret keys and then provide a construction where the classical key $k$ is replaced by a ``quantum key'' in the form of a minimal number of entangled bits (ebits). In the following, we denote by $\mc S(\mc H)$ the set of normalized quantum states on the Hilbert space $\mc H$. Formally, in the classical-key setting, a $(\delta, \epsilon)$-PQC is a set of pairs of encoding and decoding CPTP-maps $\mc X_k: \mc S(\mc H_A) \to \mc S(\mc H_{A'})$ and $\mc Y_k: \mc S(\mc H_{A'}) \to \mc S(\mc H_A)$ that can be locally implemented by the sending and receiving parties respectively, where $k$ denotes the secret key that is shared by Alice and Bob. We think of the key $k$ as a random variable that occurs with probability $p_k$. These channels then have to fulfill the following conditions \cite{Ambainis2009}.
Firstly, there exists a fixed element $\tau \in \mc S(\mc H_{A'})$, such that \begin{align} \label{eq:pqc_reliability} \sup_{\rho_{A,B} \: \in \: \mc S(\mc H_A \otimes \mc H_B)} \norm{ \left(\sum_k p_k\mc X_k \otimes \text{id}\right)(\rho_{A,B}) - \tau \otimes \rho_B}_1 \leq \epsilon, \end{align} where $\rho_{A,B}$ is any extension of the input state $\rho_A$ to a larger Hilbert space and $\rho_B = \Tr_A(\rho_{A,B})$. And secondly, \begin{align} \label{eq:pqc_security} \sup_{\rho \: \in \: \mc S(\mc H_A)} \norm{\sum_k p_k \mc Y_k \circ \mc X_k (\rho) - \rho}_1 \leq \delta. \end{align} Eq.~\eqref{eq:pqc_reliability} ensures (approximate) security from eavesdropping, while~\eqref{eq:pqc_security} ensures the channel's (approximate) reliability. The reason that the security is defined over all possible extensions is that the eavesdropper may initially be entangled with part of the unencrypted message. Finally, a $(0,0)$-PQC is called an \emph{ideal} PQC. PQCs have been well-studied for the case in which Alice and Bob share a classical key~\cite{Boykin2003Optimal,Ambainis2000,Hayden2004Randomizinga,Ambainis2009,Portmann2017,Hayden2016Universal}. In this case{, and if $\mc X_k$ is unitary,} the encoding corresponds to a classical noisy process and a key of length at least $(2-O(\epsilon))n$ is necessary for the $\epsilon$-secure transmission of $n$ qubits~\cite{Ambainis2009,Boykin2003Optimal,Ambainis2000,Note1}. Here, in contrast, we consider a setting in which Alice and Bob share a ``quantum key'' in the form of entangled quantum states. We use our dephasing map to construct an ideal private quantum channel that requires $n$ shared ebits of entanglement to transmit $n$ qubits of quantum data. As with the dephasing map, this value can again be shown to be optimal, in the sense that no implementation of an ideal PQC as a noisy operation can require fewer ebits (a result that extends to approximately ideal PQCs).
It improves on the only other discussion of entanglement-based PQCs known to the authors, that of Ref.\ \cite{Leung2002Quantum}. There, an ideal PQC is constructed that applies techniques from classical PQCs and hence achieves only ``classical'' efficiency by requiring $2n$ ebits for $n$ transmitted qubits. \begin{figure}[!t] \includegraphics[width=0.28\textwidth]{pqc2.pdf} \caption{Illustration of our quantum PQC for the case $n=2$. To encode a two-qubit state $\rho$ (blue), Alice applies the dephasing unitaries $U_I$ and $U_J$ to the system and one half of an ebit (red) each, where $I$ and $J$ can be any mutually unbiased bases. This maps $\rho$ into the maximally mixed state exactly, so that an eavesdropper cannot learn anything about $\rho$ even if she was initially entangled with part of it. Bob, in order to decode, applies the conjugate of the two above unitaries and thereby retrieves the state exactly.} \label{fig:pqc} \end{figure} The idea behind our construction is straightforward (see Fig.~\ref{fig:pqc}). Given an $n$-qubit system $S$, let $U_I$ and $U_J$ denote the dephasing unitaries~\eqref{eq:dephasing_unitary_basic} whose projective part corresponds to the two orthonormal bases $I = \{\ket{i}\}_{i=1}^d$ and $J = \{\ket{j}\}_{j=1}^d$ for $\mc H_S$. If Alice and Bob share $n$ ebits, and assuming for convenience that $n$ is even, Alice can split the ebits into two halves, which we call $E_1$ and $E_2$. She then applies $U_I$ to $S$ and her local share of $E_1$, followed by applying $U_J$ to $S$ and her half of $E_2$. It is easy to check that if $I$ and $J$ are mutually unbiased, that is, if \begin{align} |\braket{i}{j}|^2 = \frac{1}{d}, \quad \forall i,j, \end{align} then this results in the completely depolarizing channel.
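Before formalizing this, here is a minimal numerical check of the claim at the level of the induced dephasing maps (a simplification of the actual protocol, which acts with the unitaries $U_I$, $U_J$ and shared ebits). The assumptions are for illustration: $d=4$, with the computational basis as $I$ and the Fourier basis as $J$, which are mutually unbiased. Dephasing successively in two mutually unbiased bases wipes out all information, producing the maximally mixed state.

```python
import numpy as np

d = 4
# Fourier (DFT) basis: its columns are mutually unbiased w.r.t. the computational basis.
F = np.array([[np.exp(2j * np.pi * a * b / d) for b in range(d)]
              for a in range(d)]) / np.sqrt(d)
assert np.allclose(np.abs(F) ** 2, 1 / d)      # |<a|f_b>|^2 = 1/d for all a, b

def pinch(rho, B):
    """Dephasing in the orthonormal basis given by the columns of B."""
    out = np.zeros_like(rho)
    for j in range(d):
        v = B[:, j]
        out += (v.conj() @ rho @ v) * np.outer(v, v.conj())
    return out

rng = np.random.default_rng(3)
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# Dephasing first in the computational basis and then in the Fourier basis
# completely depolarizes any input state.
out = pinch(pinch(rho, np.eye(d, dtype=complex)), F)
print(np.allclose(out, np.eye(d) / d))         # prints True
```

The mechanism is transparent in the sketch: after the first pinching the state is diagonal in $I$, and every vector of a mutually unbiased basis then has expectation value exactly $1/d$.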
That is, the map \begin{align} \mc X(\cdot) \coloneqq \Tr_{E}(U_J U_I( \cdot \otimes \:\proj{\phi^+}_{E_1} \otimes \proj{\phi^+}_{E_2}) U^\dagger_I U^\dagger_J), \end{align} where $\ket{\phi^+}$ represents an $n/2$-ebit state vector, has the property that \begin{align} \mc X(\rho) = {\mathbb{I}}_d, \quad \forall \rho \in \mc S(\mc H_S). \end{align} This ensures perfect secrecy, since the completely depolarizing channel necessarily also removes all correlations to other systems \cite{Hayden2004Randomizinga}. Upon receipt of $S$, Bob can then apply the complex conjugate of the encoding unitaries to his share of the ebits to retrieve the original state. See Appendix~\ref{app:pqc} for the formal proofs. This construction has a number of interesting features, some of which, however, are already present in the construction of Ref.\ \cite{Leung2002Quantum}. For instance, it is catalytic in the sense that, at the end of the transmission process, in case no eavesdropper has interacted with the sent data, all of the entanglement is returned in its initial state and can be reused for future rounds of transmission. Moreover, the scheme allows for error correction, efficient authentication and recycling of some of the entanglement in case eavesdropping has occurred. We refer the reader to Appendix~\ref{app:pqc} for a discussion of these properties. \section{Dephasing with quantum expanders}\label{sec:expandergraphs} The protocol presented in Lemma~\ref{lem:dephasing_basic} allows one to perfectly dephase a $d$-dimensional system given a SoR of dimension $m=\lceil d^{1/2}\rceil$. This very same protocol, when applied to an imperfect SoR of dimension $m$ that is not in the maximally mixed state, yields, as shown in Section~\ref{sec:convergence}, a convergence to the dephased state when the protocol is iterated.
In this section we study a complementary protocol that provides fast convergence when we have states of the SoR that are maximally mixed, but of dimension significantly smaller than $m$. We find a protocol whose convergence to the dephased state, measured in the $2$-norm, is exponential in the system size (i.e., the logarithm of the Hilbert space dimension) of the SoR. This is remarkable, in that it shows that one can obtain an equilibration in $2$-norm exponentially quickly in the ancillary system size. This insight may be seen as being at odds with the intuition that an equilibrating environment should naturally have a large physical dimension. Our approach is based on the machinery of \emph{quantum expanders} \cite{PhysRevA.76.032315,Margulis,HarrowExpander}. The key insight is that one can trade residual correlations still present in the system for the dimension required for the mixing environment. \begin{theorem}[Dephasing with quantum expanders]\label{thm:expanders} For any $d$-dimensional state, $d=e^2$ with $d$ odd, and an integer $k$, there exists an $8^k$-dimensional quantum system $R$ and a unitary $U\in \mathrm{U}(d 8^k)$ such that \begin{equation} \|{\rm tr}_R \left(U(\rho \otimes {\mathbb{I}}_{8^k}) U^\dagger\right) - \pi(\rho)\|_2 \leq \sqrt{2 {d^3}} \left( \frac{ 5 \sqrt{2}}{8}\right)^k. \end{equation} \end{theorem} The restriction on the dimension is made purely for conceptual simplicity. The argument for the proof, presented in Appendix~\ref{sec:expanders_app}, follows from a construction of a classical random walk that acts on the vertices of an expander graph, a \emph{Margulis expander} \cite{MargulisExpander}. In the present construction, the vertices of the Margulis expander are seen as lines labeled by $q=1,\dots, d$ in a $d\times d$-dimensional quantum phase space of the $d$-dimensional quantum system.
The central insight is that classical random walks on such lattices are reflected by random walks on Wigner functions defined on $d\times d$-dimensional phase spaces, which in turn give rise to random unitary channels on quantum states in $d$ dimensions. The construction laid out in detail in the appendix builds upon and draws inspiration from the scheme of Ref.~\cite{Margulis}, but is in several important ways a new scheme, in particular in that each line in phase space is treated separately. In this way, the strong mixing properties of the random walk of the Margulis expander graph are not used to show rapid mixing to a maximally mixed state, but in fact to a quantum state with vanishing off-diagonal elements. \section{Summary and conclusions} We have studied the problem of implementing state transitions under noisy processes, that is, processes that require randomness. We solve this problem completely by providing optimal protocols both for the case of an implicit, classical model of randomness and for that of an explicit, quantum model of randomness. The main building block behind these protocols is a construction that performs a dephasing map on an arbitrary quantum state using a SoR of the smallest possible dimension, both for the quantum and classical case. We find that a quantum SoR is quadratically more efficient than its classical counterpart due to quantum correlations, and hence show that an explicit model is strictly more powerful for any dimension $d > 2$. {Once the optimal protocols for dephasing were established, we studied applications such as state transitions in noisy operations, decoherence and quantum measurements, providing optimal protocols for all of them.
An interesting feature of our protocol is that the SoR is not altered during the protocol, meaning that it can be re-used to implement further iterations of the above tasks.} {We have also extended our discussion to the case of imperfect noise and used our results to construct a universal dephasing machine that exhibits robustness both with respect to the noise that fuels it and with respect to the control over timing when running it. Moreover, we have used our dephasing as a primitive to construct a novel, ideal private quantum channel. Finally, by putting it into the context of expander graphs, we have seen how such a dephasing is possible very economically, leading to an approximate dephasing in $2$-norm.} {Besides the foundational interest of our construction, which makes precise the way in which the relationship between correlations and randomness in quantum mechanics differs from that in classical mechanics, we expect our dephasing protocol to improve bounds in noisy processes that we have not discussed here, to the extent that they introduce a new primitive for constructions in quantum information. Given the pivotal status of randomness in protocols of quantum information processing and in notions of quantum thermodynamics, these results promise a significant number of further practical applications.} \emph{Acknowledgements:} P.~B.~thanks Lluis Masanes, Markus M\"{u}ller, Jon Richens and Ingo Roth for interesting conversations and especially Jonathan Oppenheim for suggesting cryptographic applications of the results. We acknowledge funding from the ERC (TAQ), the DFG (EI 519/14-1, CRC183), the Templeton Foundation, and the Studienstiftung des Deutschen Volkes. \bibliographystyle{unsrtnat}
\section{Introduction} \label{intro} Transport systems, along with other infrastructures like power grids or communication networks, are fundamental elements of our societies and economies. They guarantee the high level of mobility that we all experience, and which is vital for the cohesion of markets and for the quality of life of citizens. Moreover, transport systems enable socio-economic growth and job creation. When such fundamental infrastructures experience a random failure or are intentionally targeted by terrorist attacks, the whole society is severely affected. The air transport system is no exception. In $2008$ this industry generated $32$ million jobs worldwide, of which $5.5$ million were direct, and contributed USD $408$ billion to the global gross product \cite{Atag08}. On the other hand, its vulnerability and the consequences for citizens' mobility clearly appear when a strike or the eruption of a volcano interrupt the normal behavior of the system \cite{Bolic11,Mazzocchi10}. The theory and application of complex networks has experienced tremendous growth in the last decade \cite{Boccaletti06,Albert02}. In spite of its young age, the great variety of tools developed for the analysis of different topologies \cite{Costa07} has favored a better understanding of the structure and dynamics of many real-world systems \cite{Costa11}. Remarkably, the complex networks approach has explained the appearance of emergent phenomena in many systems composed of a large set of interacting elements. Well-known examples include social systems, e.g. the study of networks of acquaintances or the diffusion of contagious diseases \cite{Liljeros01}, the Internet \cite{Satorras01}, and applications to neural dynamics \cite{Bullmore09,Sporns04}.
It is not surprising that the complex network methodology has been successfully applied to different transportation modes, including streets \cite{Crucitti06,Porta06}, railways \cite{Sem02}, or subways \cite{Latora02,Angeloudis06}. In this paper we present a review of the literature related to the application of complex network theory to the air transport system. As will become clear, several problems have been investigated so far. For instance, the description of the topological and metric structure of the network is of great importance for understanding the business strategies adopted by different airlines, for assessing passengers' mobility in the presence of direct and indirect connections, or for investigating the time evolution of air transport, while it adapts to changes in the passengers' demand and reacts to external economic forces, such as deregulation. Another aspect of interest is the dynamics taking place on the network. A paradigmatic example, reviewed in this paper, is the spreading of infectious diseases worldwide and the role that air transport has in enhancing the speed of epidemic propagation. The future presents many challenges for the air transport system and complex network theory is likely to play an increasingly significant role in tackling these challenges. First of all, air transport is increasing worldwide at a very fast pace. Policy makers are aware of the fact that the current system will be at its capacity limits in a few years because of increasing traffic demand and new business challenges. For this reason, large investment programs like SESAR in Europe and NextGen in the US have been launched. Also, policy makers have stressed the importance of fostering the resilience of the system and its capacity to recover the required mobility after an external shock \cite{Eur11}. Moreover, the future will require an increasing degree of integration among different transportation modes.
This problem finds a natural description in terms of multi-layer representation of complex networks \cite{Kurant06}. Clearly, all these issues are not only relevant for the air transport itself, but they have important implications for the society as a whole. This review is organized as follows. Section \ref{constr_net} describes the main components involved in the air transport system, which are the basis for the construction of different network representations. Section \ref{topol} reports the most important facts about the topologies of such networks, including their dynamical evolution, and the models that have been developed to explain these characteristics. Section \ref{dyn_net} reviews the main dynamics that have been studied on top of this network and Section \ref{resil} discusses the role of network topology in the resilience and vulnerability of air transport system. Finally, Section \ref{concl} draws some final conclusions, and presents some open lines of research. \section{Networks for the air transport} \label{constr_net} Many complex systems can naturally be represented by one or more networks. For instance, the Internet can be represented as a set of nodes (the webpages) connected by links (the hyperlinks) \cite{Adamic99,Albert01}. The same system can also be represented by considering the routers as nodes and their physical connections as links \cite{Vazquez02}. This multiple network representation property is shared also by the air transport system, and therefore one should first decide which network is investigated. The air transport system is composed of a large number of different elements, interacting and working together. The mobility of passengers is just the final result and it is clearly of high importance from a social point of view. Therefore it is not surprising that most analyses have been focused on the mobility of people, disregarding other technical details. 
When this point of view is followed, the construction of the network is straightforward. Nodes represent airports, and a link between two nodes is created whenever there exists a direct flight between the two airports associated with the nodes. From this point of view the airport network is the projection of a bipartite network, whose first set of nodes is composed of airports and whose second set is composed of flights, with a link between a flight and an airport if that flight departs from or arrives at that airport. Clearly, in this way additional sources of information, like scheduling, types of flights, or airlines, are disregarded. The projected network of airports is naturally a directed graph, where, in general, two directed links can exist between two nodes $A$ and $B$, one describing the flights from $A$ to $B$ and one those from $B$ to $A$. The projected network has also a natural weighting scheme, given by the number of flights that are present (in the investigated time period) between the two airports. These networks are termed flight networks. Additionally, not all flights are equivalent. The number of available seats in each aircraft can be very different, ranging from the 50 passengers of a small regional jet up to the 853 seats of an Airbus A380. As a consequence, it has been proposed to associate weights to links, proportional either to the frequency of connections or to the number of transported persons - see Section \ref{WNA}. As usual, from a weighted directed graph one can construct other graphs by neglecting information. For example, taking the difference between the weights from $A$ to $B$ and from $B$ to $A$, one can construct a directed network where only one directed link exists between two nodes. By neglecting the weights one can obtain an unweighted network, where only topology matters, and by neglecting also directionality one can obtain a simple binary graph. All these alternatives have been investigated in the literature.
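The projection just described can be sketched in a few lines of code. The following is a minimal illustration, not taken from any of the cited papers; the flight records and airport codes are purely hypothetical example data.

```python
from collections import defaultdict

# Hypothetical flight records: (origin, destination) pairs, one per operated flight.
flights = [
    ("MAD", "LHR"), ("MAD", "LHR"), ("LHR", "MAD"),
    ("LHR", "FRA"), ("FRA", "MAD"),
]

# Directed weighted flight network: weight[a][b] = number of flights from a to b
# in the investigated time period.
weight = defaultdict(lambda: defaultdict(int))
for origin, dest in flights:
    weight[origin][dest] += 1

# Unweighted, undirected projection: keep only the existence of a connection.
undirected = {frozenset((a, b)) for a in weight for b in weight[a]}

print(weight["MAD"]["LHR"])   # 2: two flights from MAD to LHR
print(len(undirected))        # 3 distinct airport pairs
```

From the same `weight` structure one can also derive the other representations mentioned above, e.g. a single directed link per pair by taking the difference of the two weights.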
In any form, the flight network is probably the most investigated network in air traffic studies, and most of this paper is devoted to reviewing the properties of these networks. However, while complex networks are traditionally considered as static objects, it is clear that time is an important feature of any movement. This is especially true in the case of the air transport, because passengers may have to use several flights to get to their destination. If one considers only static representations of the network, there is no way of knowing the real dynamics of passengers, i.e., if one needs to wait $2$ or $10$ hours in an airport before taking the next connecting flight. In Section \ref{dyn_net} we will review some solutions that have been proposed to investigate the problem of indirect connectivity of passengers. Another important aspect of flight networks is that they can be naturally decomposed into many subnetworks. For example, flight networks can be decomposed by considering separately one subnetwork for each airline. The analysis of the networks corresponding to a single airline has been performed, for instance, in Refs. \cite{Han04,Li04}. To the best of our knowledge, no paper has been published on the analysis of interdependencies between the subnetworks corresponding to different airlines, or to different alliances of airlines. We expect this topic to attract an increasing interest, especially because it can be considered a special case of a multi-layer representation of complex networks \cite{Kurant06}. This framework would also allow the study of the relations between the air and other transportation modes \cite{Qu10,Xu11}. It is important to stress that networks different from the flight networks can be constructed and are likely to be very important for the understanding and modeling of air transport. The structure of the airspace is one of these cases.
Nowadays aircraft do not travel along the straight line (geodesic) connecting the departure and destination airports. On the contrary, they must follow some fixed {\it airways}, defined as unions of consecutive segments between pairs of navigation aids ({\it navaids} in short). While such constraints are actually imposed in order to improve safety (as it is easier to control ordered flows of aircraft) and capacity (the workload of controllers is reduced, and thus they can control a higher density of aircraft), bottlenecks may appear in some central zones of the airspace, or where several busy airways converge. Navaids can be used to create a network, where nodes are navaids and links represent airways. To the best of our knowledge, only very few studies have considered this type of network. For example, Cai and coworkers \cite{Cai12} investigated the Chinese air route network. Another example of networks considers {\it reactionary delays} and their effect on passengers. These are situations in which a flight cannot take off on time because of a delay in another flight. This occurs, for instance, because of the late arrival of the aircraft, or of the crew itself \cite{Pyrgiotis11}. Different networks may be created to study this phenomenon. Nodes may represent crews, with a link between them when they share the same aircraft. Alternatively, nodes may represent airports, connected whenever the same aircraft has to serve them in a sequential fashion. The identification of the central elements of these graphs may help in highlighting the critical points for the dynamics of the system, and would thus allow the creation of better mitigation strategies. To the best of our knowledge, this topic has not yet been studied within this framework. Finally, we mention a very recent approach to construct networks of air traffic safety events \cite{Lillo08}.
When two aircraft are too close, an automatic alarm, termed Short Term Conflict Alert (STCA), is activated and the air traffic controller is supposed to give instructions to the two pilots in order to avoid a collision. One important question is whether STCAs are isolated events or whether aircraft initially involved in a STCA are likely to be involved in other STCAs with other aircraft in the near future and so on, creating a cascade of events. This possibility signals the fact that the controller suggests a local solution without forecasting unintended consequences of her instructions. By using a dataset of automatically recorded STCAs, the authors of \cite{Lillo08} mapped this problem into a network of STCAs, which in turn can be mapped into a network of aircraft, where two nodes (aircraft) are connected if they were involved together in a STCA. These networks show topological regularities and might shed light on the aircraft conflict resolution dynamics. \section{Topological analysis} \label{topol} \subsection{Unweighted air transport networks} \label{simple_proj} The analysis of the structure of the flight network in air transport, especially when focused on individual airlines, began years before the formalization of the complex network theory. This type of analysis was motivated by the aim of defining the most efficient structures of flights for a given airline \cite{Bania98,Alderighi07}, both in terms of yields (and, thus, profit) and of passengers' mobility. The proposed solutions can be grouped into two classes: \begin{description} \item[{\it Point to point}:] in this configuration, a different aircraft serves each pair of airports in the network, or at least those pairs where the passengers' demand is enough to justify the connection. While it has the advantage of offering direct connections to all passengers, it also requires a high number of aircraft for covering all the possible routes.
For a completely connected network the number of connections increases with the square of the number of airports. This strategy of connections was common in the United States before the deregulation of the late 1970s \cite{Chou90}, and is still used nowadays by several low-cost airlines - see left panel of Fig. 1. \item[{\it Hub-and-spoke}:] in this case, connections are structured like a chariot wheel (or a collection of such structures), in which all traffic moves along {\it spokes} connected to the {\it hub(s)} at the centre. While most passengers must take (at least) two different flights to reach their destination, this strategy presents several benefits for the airline. In fact, a lower number of aircraft is required, flights usually have a higher occupation rate, and the expansion of the network to a new airport only requires one new additional flight \cite{OKelly94,Berry96}. Today the hub-and-spoke configuration is used by most major airlines all around the world - see right panel of Fig. 1. \end{description} While earlier studies were mostly theoretical, the possibilities offered by the analysis of real systems through the complex networks methodology, and the ever-increasing computational capabilities of modern computers, have enabled a better understanding of the structure of real air transport networks. It is interesting to notice that some network characteristics have been confirmed in all studied networks. One important aspect of flight networks is the fact that they are scale-free. This implies the presence of a few hubs with a very high number of connections, confirming the predominance of a hub-and-spoke topology. An example can be seen in Fig. 2, which shows the cumulative probability distribution of degree of all European airports, considering only internal flights (i.e., flights whose origin or destination is outside Europe are disregarded). The right panel of Fig.
2 shows a zoom of the extreme tail of the distribution, and it is clear how a few airports have direct connections with a large number of the destinations in the network, performing a hub function. \begin{table} \caption{Examples of different topological metrics of flight networks, as reported in several research papers. The asterisk in the Links column indicates that the number refers to the number of flights, while in all the other cases the column reports the number of connections.} \label{tab:Topologies} \begin{tabular}{llrrlllllll} \hline\noalign{\smallskip} Country & Period & Nodes & Links & $\gamma$ & $\gamma_B$ & $L$ & $L_{rand}$ & $C$ & $C_{rand}$ & Refs. \\ \noalign{\smallskip}\hline\noalign{\smallskip} World & 11/2000 & 3883 & 27051 & 1.0 & 0.9 & 4.4 & --- & 0.62 & 0.049 & \cite{Guimera05} \\ World & 11/2002 & 3880 & 18810 & 2.0 & --- & 4.37 & --- & --- & --- & \cite{Barrat04,Barrat05} \\ US & --- & 215 & $^*$116725 & 2.0 & --- & 1.403 & --- & 0.618 & 0.065 & \cite{LiPing03} \\ US & 10-12/2005 & 272 & 6566 & 2.63 & --- & 1.9 & 1.81 & 0.73 & 0.19 & \cite{Xu08} \\ Austria & --- & 134 & 9560 & 2.32 & --- & --- & --- & 0.206 & 0.01 & \cite{Han04} \\ China & --- & 128 & 1165 & 4.161 & --- & 2.067 & --- & 0.733 & --- & \cite{Li04} \\ China & 28/11/2007-29/3/2008 & 144 & 1018 & --- & --- & 2.23 & 1.88 & 0.69 & 0.098 & \cite{Wang11} \\ India & 12/1/2004 & 79 & 442 & 2.2 & --- & 2.259 & 2.493 & 0.657 & 0.0731 & \cite{Bagler08} \\ India & 12/2010 & 84 & $^*$13909 & 0.71 & 0.54 & 2.17 & 2.55 & 0.645 & 0.18 & \cite{Sapre11} \\ Italy & 16/7-14/8/2005 & 42 & --- & 1.6 & 0.4 & 1.987 & 3.74 & 0.10 & 0.17 & \cite{Guida07} \\ Italy & 11/2005 & 42 & --- & 1.1 & 0.5 & 2.14 & 3.64 & 0.07 & 0.14 & \cite{Guida07} \\ Italy & 6/2005-5/2006 & 42 & 310 & 1.7 & 0.4 & 1.97 & --- & 0.1 & --- & \cite{Quartieri08} \\ Italy & --- & 33 & 105 & --- & --- & 1.92 & --- & 0.418 & --- & \cite{Zanin08} \\ Spain & --- & 35 & 123 & --- & --- & 1.84 & --- & 0.738 & --- & \cite{Zanin08} \\
\noalign{\smallskip}\hline \end{tabular} \end{table} \begin{figure} \begin{center} \resizebox{0.50\columnwidth}{!}{ \includegraphics{RYR.eps} } \resizebox{0.40\columnwidth}{!}{ \includegraphics{DLH.eps} } \caption{Representation of the networks corresponding to Ryanair (Left) and Lufthansa (Right), as of the $1^{st}$ of June 2011. Notice how the network of Lufthansa, which is a major airline, is centered around a few main airports (hubs), while the structure of Ryanair, the biggest European low-cost company, has a densely connected core.} \end{center} \label{fig:DLH} \end{figure} \begin{figure} \begin{center} \resizebox{0.45\columnwidth}{!}{ \includegraphics{EuroDD.eps} } \resizebox{0.45\columnwidth}{!}{ \includegraphics{EuroDDZoom.eps} } \caption{Cumulative probability distribution of degrees of European airports. The network has been constructed considering only commercial (both regular and charter) flights operated between two European airports on the $1^{st}$ of June 2011. (Right) Zoom of the right part of the distribution. The hubs of the network are indicated by red circles.} \end{center} \label{fig:Euro} \end{figure} In the literature we surveyed, different methodologies have been used for creating and analyzing flight networks. In Table \ref{tab:Topologies} we report the values of classic complex network metrics for the air transport systems of different countries, considering an unweighted representation. The meaning of these metrics is reported below: \begin{description} \item[$\gamma$:] in scale-free networks \cite{Barabasi09} the asymptotic behavior of the cumulative node degree distribution has the functional form $P(k>x) \sim x^{-\gamma}$. It has been pointed out that the real degree distribution of the worldwide flight network is a truncated power-law, i.e., it is asymptotically better explained by the function $P(k>x) \propto x^{-\gamma}f(x/k_m)$, where $f$ is an exponential truncation function and $k_m$ is a truncation parameter. The values reported in Refs.
\cite{Guimera05,Barrat04} correspond to the exponent $\gamma$. \item[$\gamma_{B}$:] the {\it betweenness} of a node is a centrality measure quantifying how important a node is for movements inside the network. Node betweenness is defined as the proportion of shortest paths, among all possible origins and destinations, that pass through a node \cite{Freeman77}. The exponent $\gamma_{B}$ is the exponent of a power-law fit of the betweenness distribution. When the distribution of centrality is asymptotically a power-law function, a high exponent $\gamma_{B}$ indicates that few nodes are responsible for the efficient routing in the network. \item[$L$ and $L_{rand}$:] $L$ is the mean length of the shortest paths between pairs of nodes of the network, i.e., \begin{equation} L = \frac{1}{n^2}\sum\limits_{i,j} d_{ij} \end{equation} where $i$ and $j$ are two nodes of the network, $n$ is the number of nodes, and $d_{ij}$ is the length of the shortest (topological) path between nodes $i$ and $j$. The value of $L$ is usually compared with $L_{rand}$, that is, the mean value obtained in different networks that have the same number of nodes and links, but a completely random structure. These random networks are also called Erd\H{o}s-R\'enyi graphs \cite{Erdos60}. It is worth noticing that the Table shows how $L$ is often lower than the corresponding $L_{rand}$, indicating, as expected, that air transport networks are engineered to efficiently reduce the number of connections needed by passengers. \item[$C$ and $C_{rand}$:] the {\it clustering coefficient} $C$, and its randomized counterpart $C_{rand}$, measure the number of triangles that can be found in the network \cite{Costa07}. The clustering coefficient assesses the probability that two nodes, which are connected to a third node, also share a direct connection. Similarly to $L_{rand}$, $C_{rand}$ corresponds to the mean clustering coefficient of an ensemble of randomized networks.
\end{description} The reader may easily notice how the obtained values are very heterogeneous. For instance, the exponent of the degree distribution $\gamma$ varies from $1.0$ up to $4.161$, and the clustering coefficient $C$ from $0.07$ to $0.738$. This variability is mainly due to two factors. Firstly, there are important differences in the methods used for the construction of the networks, which are usually not fully explained in the papers. The time window represented by the network may not be reported \cite{Han04,Li04,Zanin08}, and no details are given about the types of flights considered (regular passenger flights, charters, cargo flights). Secondly, most of the studies have investigated national networks, covering a few tens of airports. It is well known that some complex network properties, such as the scale-free distribution of degrees, are meaningful and can be correctly assessed only in large networks, where finite size effects are negligible \cite{Stumpf05}. Moreover, the degree of network heterogeneity is very different if one considers a regional airport network or the worldwide network. \subsection{Weighted network analysis} \label{WNA} As explained in Section \ref{simple_proj}, the analyses described above are based on unweighted projections of the air transport system, that is, only the existence of direct connections between pairs of nodes is taken into account. On the other hand, it can be expected that the structure of frequencies of flights may unveil interesting information, especially related to the main routes of movement chosen by passengers. Table \ref{tab:WTopologies} shows the values of some metrics obtained for different weighted networks \cite{Newman04}. When a link between two nodes $i$ and $j$ has a weight $w_{ij}$, it is possible to calculate a weighted version of the degree of a node, called {\it strength}, as $s_i = \sum_{j} w_{ij}$.
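As a minimal sketch, degree $k_i$ and strength $s_i$ can be computed directly from a weighted adjacency structure. The toy four-airport network and its weights below are invented for illustration and do not correspond to any dataset cited above.

```python
# Toy weighted flight network: w[i][j] = number of flights between airports i and j.
# All airport names and weights are hypothetical.
w = {
    "HUB": {"A": 10, "B": 8, "C": 12},
    "A": {"HUB": 10},
    "B": {"HUB": 8, "C": 2},
    "C": {"HUB": 12, "B": 2},
}

degree = {i: len(w[i]) for i in w}             # k_i: number of connections
strength = {i: sum(w[i].values()) for i in w}  # s_i = sum_j w_ij

print(degree["HUB"], strength["HUB"])   # 3 30
```

Even in this toy example the hub dominates both metrics, which is the qualitative pattern reported for real flight networks.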
Notice that the variability observed in the metrics of the unweighted networks is amplified in the weighted networks, because several variables can be used to define the value of $w_{ij}$. For example, one can consider the number of flights, the number of offered seats, or the number of passengers transported, obtaining different weighted networks. The definition of the metrics shown in Table \ref{tab:WTopologies} is reported here: \begin{description} \item[$\beta$:] the relation between the strength $s$ (number of flights) of each node and its degree $k$ (number of connections) is typically well fitted by a power law $s(k) \approx k ^ {\beta}$. This relation unveils relevant information about how capacities are distributed through the airport network. \item[$\beta_b$:] if one is interested in assessing the centrality of airports from the point of view of passengers' movements, it is possible to relate the strength of a node with its betweenness, i.e., $s(b) \approx b ^ {\beta_b}$. \item[$\theta$:] in order to check whether there is a relation between the frequency of connections between two airports and their connectivity (in terms of number of destinations directly served), the weight of each link has been related with the degrees of the connected nodes, leading to $w_{ij} \approx (k_i k_j) ^ {\theta}$. \end{description} The main conclusion that can be drawn from Table \ref{tab:WTopologies} is that there exists a strong correlation between the degree of a node and the quantity of flights and passengers going through it. This fact is in agreement with the hub-and-spoke structure of the network. The more connections a node has, the more passengers will use that node to reach other destinations, and thus the frequencies of such connections strongly increase.
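Exponents such as $\beta$ are usually estimated with a linear least-squares fit in log-log scale. The following sketch applies this standard procedure to synthetic $(k, s)$ pairs generated with a known exponent, so that the fit can be checked against the true value; it is an illustration of the method, not an analysis of real data.

```python
import math

# Synthetic (degree, strength) pairs generated with s = 2 * k**1.5,
# so the fitted exponent should recover beta = 1.5.
samples = [(k, 2.0 * k**1.5) for k in (2, 4, 8, 16, 32)]

# Ordinary least-squares slope of log(s) versus log(k).
xs = [math.log(k) for k, s in samples]
ys = [math.log(s) for k, s in samples]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

print(round(beta, 3))   # 1.5
```

On real data the points scatter around the power law, and the same slope estimate gives the values of $\beta$ reported in Table \ref{tab:WTopologies}.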
\begin{table} \caption{Topological properties of different weighted flight networks.} \label{tab:WTopologies} \begin{tabular}{llllll} \hline\noalign{\smallskip} Country & Weight & $\beta$ & $\beta_b$ & $\theta$ & Refs. \\ \noalign{\smallskip}\hline\noalign{\smallskip} Worldwide & Available seats & 1.5 & 0.8 & 0.5 & \cite{Barrat04,Barrat05,Wu06} \\ US & Number of passengers & 1.8 & --- & --- & \cite{Xu08} \\ India & Number of flights & 1.43 & --- & --- & \cite{Bagler08} \\ 4 European airlines & Number of flights & $(1.06 - 1.18)$ & --- & --- & \cite{Han09} \\ China & Number of flights & --- & --- & 0.5 & \cite{Li04} \\ Europe & Number of flights & 1.39 & --- & --- & \cite{Lillo11} \\ \noalign{\smallskip}\hline \end{tabular} \end{table} \subsection{Short and long term evolution of the network} Although the scheduling of flights of regular airlines should be defined and published almost one year before the actual day of operation, the air transport system is far from being a static structure. On the contrary, airlines constantly adapt their scheduling to changes in the passengers' demand, both on the short and on the long term. On the short term, the network evolves to meet the different needs of two groups of passengers, namely those moving for work, usually traveling from Monday to Friday, and those moving for leisure, traveling mainly during the week-end. It is well known that the first group is less sensitive to price, but highly values short travel durations. The result is that, during the week-end, the number of flights is reduced, and the network assumes a more star-like shape around the airports of cities of tourist interest. Fig. 3 reports the evolution of two metrics, namely the mean degree $\langle k \rangle$ and the exponent of the power-law fit of the degree distribution $\gamma$, for the Austrian and Chinese air networks at different days of the week.
\begin{figure} \begin{center} \resizebox{0.50\columnwidth}{!}{ \includegraphics{WEMeanK.eps} } \resizebox{0.45\columnwidth}{!}{ \includegraphics{WEGamma.eps} } \caption{Evolution of the mean degree $\langle k \rangle$ (Left) and of the exponent $\gamma$ of the power-law fit of the degree distribution (Right) for the Austrian (black squares and solid lines) and Chinese (blue triangles and dashed lines) air transport networks, through different days of the week. Information about the Austrian and Chinese networks has been obtained from Refs. \cite{Han04} and \cite{Li04} respectively.} \end{center} \label{fig:WeekEvol} \end{figure} A long term adaptation of the network has also been studied in several papers. While short term variations are easy to predict, long term oscillations in the demand are more complicated to forecast, because they are the result of significant changes in macro-economic factors, of the competition between different airlines, and of the limitations imposed by policy-makers. The evolution of the structure of the network due to changes in the regulations has been extensively studied in the case of the European deregulation started in 1986 \cite{Barrett90} (see also Section \ref{dyn_pass}). While several papers have focused on the aeronautical and economic consequences of the deregulation \cite{Berechman96,Betancor97,Rey03}, only a few have applied complex network concepts to this problem. Specifically, in Ref. \cite{Burghouwt01}, the evolution of the European air network is studied between 1990 and 1998, with one network created for each year. The dynamics of some basic network metrics have been investigated, the most important of them being the classification of airports according to their number of connections, and the evolution of the number of small and big (hub) airports through time. Results identify two different dynamical patterns.
On one side, medium size airports have attracted most of the intra-European traffic, creating specialized internal hubs. On the other side, the intercontinental traffic was also concentrated, but on different hubs, usually big airports. In summary, the global structure of the network has promoted a hub-and-spoke structure, but with different hubs for internal and intercontinental flights. An interesting case study is represented by China. Since the economic reforms started in 1978, which slowly reduced the control of the State over the economy, the size of the air network and the number of passengers transported have increased at a fast pace \cite{Jin04} (see Fig. 4). Specifically, the number of passengers grew at an average annual rate of $17\%$, mainly driven by business and tourist motivations. As a consequence of this expansion, the number of airports also increased, from 69 in 1980 to 137 in 1998. As forecasted by the classical theory \cite{Chou90}, the structure of the network changed from a point-to-point to a hub-and-spoke configuration. Nevertheless, this hub-and-spoke topology presents some interesting specificities \cite{Wang07}. First of all, there are three main hubs, corresponding to the headquarters of the three main national airlines, namely Beijing (Air China), Shanghai (China East) and Guangzhou (China South). The relative importance of these airports has significantly changed over time, reflecting the changes in the international standing of the corresponding cities. While Beijing was the main hub in 1990, it passed its role to Shanghai in 2005 \cite{Ma08}. Finally, these three airports form a peculiar {\it triangular subnetwork}, accounting for $37.3\%$ of the transported passengers. These characteristics are mainly the result of regional economic and social inequalities within the Chinese territory, and of China's involvement in the world economy. After 2000, the main topological properties of the network have remained stationary.
However several changes have occurred, mainly related to the appearance and extinction of small airports and routes between them \cite{Zhang10}. \begin{figure} \begin{center} \resizebox{0.50\columnwidth}{!}{ \includegraphics{EvolutionChina.eps} } \caption{Evolution of the number of airports in China's air network, and of the number of passengers, from 1980 to 2005. Data correspond to Table 1 of Ref. \cite{Wang07}.} \end{center} \label{fig:ChinaEvol} \end{figure} To conclude this Section on the evolution of air networks, it is worth mentioning Ref. \cite{Rocha09}, in which the evolution of the Brazilian air network between 1995 and 2006 is studied. Similarly to the case of China, Brazil experienced an important growth of its economy. Within this period, the total number of passengers transported in Brazil increased from 18 million to 43 million. Nevertheless, and in contrast to what was found in the case of the Chinese network, the number of airports served by major airlines strongly decreased (from 211 to 142), along with the mean degree (from 13.19 to 10.28). Therefore, while the traffic increased, the network reduced the number of connections between secondary airports, getting closer to a pure hub-and-spoke configuration centered around S\~{a}o Paulo, Bras\'ilia and Salvador. \section{Dynamics on the air network} \label{dyn_net} Up to now we have considered the topological and metric properties of the flight network, discussing also the dynamics {\it of the} network over short and long time scales. However, the air traffic network intrinsically describes the space where something, for example passengers or goods, moves. Therefore it is natural to consider the problem of the characterization, modeling, and control of the dynamics {\it on the} flight network.
In this Section we review some recent approaches to dynamics on the air transport network, focusing our attention on the problem of the dynamics of passengers when connections are not direct, the emergence of traffic jams, and the propagation of epidemics through the air traffic network. \subsection{Indirect connectivity and passengers dynamics} \label{dyn_pass} Probably the most important entities moving on the air transport network are the passengers. The structure of the air network strongly affects the capability of a passenger to reach destination B starting from origin A in the shortest possible time and in the most direct way. The transition from the point-to-point system to the hub-and-spoke system had several effects on passengers \cite{OKelly94,Berry96}. On one side, it has been argued that the hub-and-spoke system increased airlines' efficiency and therefore, in a competitive market, it led to lower prices. On the other side, it is clear that spoke cities risk being marginalized, being connected in a more indirect way to the rest of the system. The emergence of low cost carriers, thanks to the deregulation of the market, created cheaper opportunities for flying and created additional hub-and-spoke subnetworks, where the hubs are different from the ones used by the main airlines \cite{Dobruszkes06}. These qualitative considerations on the effect of the hub-and-spoke system on passengers have recently been investigated in a more quantitative way by making use of network theory, as we will show in the following. Many papers have focused on the dynamics of the air transport network toward a higher spatial concentration, which means that the topology of the network becomes more and more similar to a collection of star-like structures. For example, several authors have investigated the change of topology of the American \cite{Goetz97} and European \cite{Burghouwt01,Button02} air transport networks during the transition from the point-to-point to the hub-and-spoke structure.
However, as pointed out, for example, in Ref. \cite{Burghouwt05}, a hub-and-spoke network requires a concentration of traffic both in space and in time. Temporal concentration means that the flight schedule must be organized in such a way as to allow passengers to travel between two (or more) spoke cities in a relatively short time, thus avoiding long waiting times at the hubs. In Ref. \cite{Burghouwt05} the authors considered the airline network configurations in Europe between 1990 and 1999, and investigated to which extent a temporal concentration trend can be observed after deregulation, by focusing on the mechanism of wave systems. In a hub-and-spoke traffic network, airlines typically operate synchronized waves of flights through their hubs. The aim of such a wave-system structure is to optimize the number and quality of the connections offered, in the attempt to minimize waiting times for indirect connections. By comparing the flows of departing and arriving flights in an airport with an ideal-type connection wave (given some transfer times), the authors were able to identify the presence of waves and study how the structure and number of waves in an airport changed during the 1990s, when the air transport network became more and more similar to a hub-and-spoke system. The authors concluded that in the late 1990s major hubs had a clear wave system in place. For example, in Paris CDG airport six clear daily waves could be identified. The presence of these waves was then used to assess their effects on the quality of indirect connectivity. By introducing a suitable airport metric measuring the number of indirect connections weighted by transfer time and a routing factor\footnote{The routing factor is the ratio between the actual in-flight time of the indirect connection and the estimated in-flight time of the direct connection based on great circle distance.
It measures the quality of the indirect connection.}, Burghouwt {\it et al.} \cite{Burghouwt05} concluded that in 1999 only a few airline hubs were highly competitive in the indirect market (essentially Frankfurt, Paris CDG, London Heathrow, and Amsterdam). Moreover, they showed evidence that European airlines have increasingly adopted wave-system structures, or intensified the existing structure, during the 1990s. Finally, they found evidence that, given a certain number of direct flights, airports adopting a wave-system structure generally offer better indirect connections than airports without one. In order to assess the role of the European hub-and-spoke network, Malighetti {\it et al.} \cite{Malighetti08} investigated different notions of centrality in the flight network, by considering the point of view of a passenger who must reach a destination in the shortest possible time. They first considered two purely topological measures. The first is the average shortest path length for an airport $i$, defined as the average of the minimum number of flights needed to reach an airport $j$ from airport $i$. The second metric is the betweenness of an airport $i$, i.e., the number of minimal paths that pass through airport $i$. Since there are typically many shortest paths connecting two airports (and passing through $i$), the authors also defined the essential betweenness as the number of {\it unavoidable} minimal paths passing through an airport $i$. A high value of the essential betweenness indicates that the airport is a bottleneck for the traffic in the system. Analyzing the European air transport network in 2007, they found a large heterogeneity of the average shortest path length, with different values for the same airport depending on whether one considers only European flights or also world flights. This result is somewhat expected and typically differentiates the hubs of the main airlines from the hubs of the low cost carriers.
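The distinction between ordinary and essential betweenness can be illustrated on a toy network by brute-force enumeration of shortest paths (feasible only for very small graphs; large-scale analyses rely on efficient algorithms). The six-airport network and its names below are hypothetical: H is a hub bridging two sides, while the link D-E provides an alternative route on one side.

```python
from itertools import permutations

# Toy undirected flight network (hypothetical airports).
adj = {
    "A": {"H"}, "B": {"H"},
    "H": {"A", "B", "C", "D"},
    "C": {"H", "E"}, "D": {"H", "E"}, "E": {"C", "D"},
}

def shortest_paths(s, t):
    """All shortest paths from s to t, found by expanding BFS layer by layer."""
    paths, frontier = [], [[s]]
    while frontier and not paths:
        nxt = []
        for p in frontier:
            for v in adj[p[-1]]:
                if v in p:
                    continue
                (paths if v == t else nxt).append(p + [v])
        frontier = nxt
    return paths

def betweenness(v):
    """Fraction of shortest paths through v, plus the 'essential' count:
    ordered pairs whose every shortest path is forced through v."""
    through = essential = total = 0
    for s, t in permutations(adj, 2):
        if v in (s, t):
            continue
        sp = shortest_paths(s, t)
        total += len(sp)
        hits = sum(1 for p in sp if v in p)
        through += hits
        if hits == len(sp):
            essential += 1
    return through / total, essential

b_hub, ess_hub = betweenness("H")
b_e, ess_e = betweenness("E")
print(ess_hub, ess_e)   # 14 0: the hub is unavoidable, E never is
```

Here the pair C-D has two shortest paths (through H and through E), so neither node is essential for it; all pairs involving A or B, instead, are forced through H, which is what a high essential betweenness captures.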
The authors argue that, despite being interesting for the characterization of the centrality properties of the flight network, these topological measures do not help much in assessing passengers' needs. In fact, a short path (in number of flights) can be useless for a passenger if its component flights are infrequent or if their scheduling makes the connection impossible. For this reason Malighetti {\it et al.} \cite{Malighetti08} considered a specific day and, for each pair of airports, calculated the shortest travel time between them for a passenger who wants to leave at a specific time and arrive at the destination within the same day. They used scheduled flight time data and allowed at least one hour for a flight connection. Finally, they took the minimum over the possible starting times within the day and defined the optimal path between the two airports as the one achieving this minimum travel time. This value is a passenger-oriented metric for the distance between two airports on a given day. It is clear that this metric implicitly takes into account the geography of Europe. In fact, most of the airports with the smallest average optimal path time are also those in the central part of Europe, in part simply because flights are shorter from there than from a peripheral airport. The authors also compared the connectivity offered by the three big alliance networks (Oneworld, SkyTeam, and Star Alliance) with that of the overall network. They found that roughly two thirds of the fastest indirect connections are not operated by the alliance system. The interpretation provided is that this could be exploited to enable a new passenger strategy of ``self-help hubbing". However, as the authors warned, this result does not properly take into account travelers' utility, since the analysis is focused on time and neglects other important variables, such as prices, number of flights, loyalty programs, etc.
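The passenger-oriented shortest-travel-time computation can be sketched as follows; the timetable, airport codes, and times (in minutes after midnight) are invented for illustration, while the one-hour minimum connection time follows the rule stated above.

```python
MIN_CONNECT = 60  # at least one hour allowed for a flight connection

# Invented same-day timetable: (origin, destination, departure, arrival),
# with times expressed in minutes after midnight.
timetable = [
    ("LIS", "MAD", 8 * 60, 9 * 60 + 20),
    ("MAD", "FRA", 11 * 60, 13 * 60 + 30),
    ("LIS", "FRA", 14 * 60, 17 * 60),
    ("MAD", "FRA", 9 * 60 + 40, 12 * 60),  # too tight after the 9:20 arrival
]

def earliest_arrival(timetable, origin, dest, start):
    """Earliest same-day arrival at `dest`, leaving `origin` no earlier than
    `start`, with MIN_CONNECT minutes between connecting flights."""
    # ready[a] = earliest time at which the passenger can board at airport a
    ready = {origin: start}
    improved = True
    while improved:                     # relax until no arrival improves
        improved = False
        for o, d, dep, arr in timetable:
            if o in ready and dep >= ready[o]:
                if arr + MIN_CONNECT < ready.get(d, float("inf")):
                    ready[d] = arr + MIN_CONNECT
                    improved = True
    return ready[dest] - MIN_CONNECT if dest in ready else None

print(earliest_arrival(timetable, "LIS", "FRA", 8 * 60))  # 810, i.e. 13:30
```

Scanning the result over all starting times in the day then yields the optimal path time between the two airports, in the sense described above.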
A similar, but mathematically more sophisticated, approach to the indirect connectivity problem has been used in Ref. \cite{Zanin09}, where {\it scheduled networks} have been introduced. A scheduled network for the air transport system is constructed starting from the ordinary flight network by adding additional nodes on the links connecting two airports. The number of additional (secondary) nodes on a link is proportional to the time needed to travel the route associated with the link. The full network, composed of primary (airport) and secondary nodes, makes it possible to compute the real time needed to go from one airport to another, even if they are only indirectly connected. The main advantage of this approach is that several standard network metrics, such as mean path length, giant component, clustering coefficient, and tolerance to errors and attacks, can be adapted to scheduled networks in order to use real travel time (and not just topological distance) as a metric and, more importantly, to properly take into account indirect connectivity. The authors then applied their method to a small sample of flight data of 40 European airports and measured the efficiency of the network as a function of the time of day. As expected, they found a very low efficiency during the night, meaning that passengers would have to wait in the airport for some connections until the next morning. Moreover, they identified three peaks of connectivity during daytime, corresponding to the moments of high network connectivity (see Figure 5). \begin{figure} \begin{center} \vspace{2.0cm} \resizebox{0.50\columnwidth}{!}{ \includegraphics{EvolEfficiency.eps} } \caption{Evolution of three metrics for the 40 busiest European airports, at different hours of the day.
The metrics are the proportion of active connections (black solid line, left scale), the efficiency of the network (blue dashed line, left scale), and the minimum {\it rotation} time, i.e., the minimum time required for a crew to return to the departure airport (red dotted line, right scale). Reprinted with permission from Ref. \cite{Zanin09}. \copyright 2009 by the American Institute of Physics.} \end{center} \label{fig:EvolEfficiency} \end{figure} \subsection{Air traffic jams} Commercial airline traffic consists of scheduled flights. However, due to several possible causes, such as adverse weather conditions, operational problems, and high traffic volume, the actual dynamics of flights can be drastically different from the scheduled one. Due to the strong interconnectedness of the air traffic system, deviations from the schedule are likely to propagate in space and time, i.e., to other airports or routes in the near future. A big engineering challenge is to design the whole system so that it is resilient to these shocks, i.e., able to return quickly to a normal state after a shock. Moreover, the air traffic system is a specific instance of traffic where jams can appear for no apparent reason. A small set of recent papers has started to investigate this problem theoretically and empirically. In Ref. \cite{Lacasa09}, for example, the authors proposed a network-based model of the air transport system that simulates the effect of traffic dynamics and shows the appearance of jams. Specifically, in the model a random (Erd\H{o}s-R\'enyi) network describes the topology of a set of interconnected airports. Each airport is characterized by an exogenously given capacity, i.e., the maximal number of aircraft per unit time that the airport can handle in an ideal situation. This ideal capacity is perturbed (diminished) by a random noise term.
Moreover, each link of the network is weighted, and the weight measures the number of time steps that an aircraft needs in order to complete the route. A series of simple queuing rules describes the interplay between incoming and outgoing aircraft flows in an airport. Simplifying a bit, if the input flow is larger than the capacity, the output flow will be equal to the capacity and the remaining aircraft will wait in a queue. Only when the input flow becomes smaller than the capacity does it become possible to remove the waiting aircraft from the queue. The model is simulated with Monte Carlo methods. The key system indicator, $P$, is the percentage of aircraft that are not stuck in a node's queue, measured in the steady state. Note that $P$ actually measures the network's efficiency, in that it gives the flow rate which is diffusing as compared to the flow rate which is stuck. The key finding of Ref. \cite{Lacasa09} is that by increasing the aircraft density (number of aircraft), the system undergoes a phase transition. This is evidenced by the fact that the expected value of $P$ sharply deviates from the efficient phase $P=1$ when the aircraft density becomes larger than a threshold, after which $P$ declines, as expected. Correspondingly, the variance of $P$ jumps abruptly from zero to a high value when the aircraft density exceeds this threshold. These and other evidences indicate that the system undergoes a jamming transition, similar to what is observed in other traffic systems \cite{Helbing01}. One may ask whether network topology plays a role in this observation. The authors considered the topology of the (real) European air traffic system, composed of $858$ airports and $11170$ flights, and found qualitatively the same result, i.e., the emergence of a jamming phase transition for a given value of the aircraft density (see Figure 6).
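The spirit of the queuing dynamics can be conveyed with a deliberately stripped-down Monte Carlo sketch (our simplification, not the model of Ref. \cite{Lacasa09} itself): travel is instantaneous, all airports share the same capacity, and departing aircraft are redistributed uniformly at random.

```python
import random

def simulate(n_airports, n_aircraft, capacity, steps, seed=0):
    """Return P, the fraction of aircraft not stuck in a queue, after
    iterating the simplified queue dynamics for `steps` time steps."""
    rng = random.Random(seed)
    # queue[i] = number of aircraft currently waiting at airport i
    queue = [0] * n_airports
    for _ in range(n_aircraft):
        queue[rng.randrange(n_airports)] += 1
    moving = 0
    for _ in range(steps):
        # Each airport can dispatch at most `capacity` aircraft per step;
        # the rest remain stuck in the queue.
        departures = [min(q, capacity) for q in queue]
        moving = sum(departures)
        queue = [q - d for q, d in zip(queue, departures)]
        for d in departures:             # redistribute departing aircraft
            for _ in range(d):
                queue[rng.randrange(n_airports)] += 1
    return moving / n_aircraft

# Below the jamming threshold almost all aircraft keep flowing; well above
# it, throughput is capped at n_airports * capacity aircraft per step.
low_density = simulate(n_airports=20, n_aircraft=30, capacity=2, steps=200)
high_density = simulate(n_airports=20, n_aircraft=400, capacity=2, steps=200)
print(low_density, high_density)
```

Even this toy version reproduces the qualitative picture: $P$ stays close to $1$ at low aircraft density and collapses once the density exceeds the total service capacity of the network.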
\begin{figure} \begin{center} \resizebox{0.50\columnwidth}{!}{ \includegraphics{Jamming.eps} } \caption{Phase diagram relating the percentage of aircraft not stuck in a node's queue, $P$, to the network aircraft density, for the European air transport network composed of 858 nodes. Reprinted with permission from Ref. \cite{Lacasa09}. \copyright 2009 by Elsevier.} \end{center} \label{fig:Jamming} \end{figure} \subsection{Epidemic spreading} Another recent stream of research has considered the role of the air transportation network in the propagation of global epidemics and in the assessment of their predictability. The majority of ``classical" models of epidemic spreading considers a set of individuals located very close to one another, so that the connections between individuals that can propagate the epidemic are short range (in space). However, the air transport network provides a mechanism for long-range heterogeneous connections that can dramatically change the diffusion properties of an epidemic. In Ref. \cite{Colizza06} Colizza {\it et al.} developed a model of a set of more than $3000$ large cities worldwide in which a major airport is present. They used IATA data on the number of available seats on any given flight connection for the year 2002, complemented with census data on the population of the large metropolitan area served by each airport. Interestingly, they found that the number of passengers (seats) scales as the square of the population. For each city, they simulated a classical SIR epidemic model, where each individual is either susceptible (S), infected (I), or recovered (R). They then used the air traffic flow data to simulate the mobility of individuals from one airport to another. In this way they were able to simulate epidemic spreading worldwide and to assess the role of the air transportation network for the global pattern of emerging diseases.
To this end they compared the simulations calibrated on the real air traffic data with simulations of two benchmarks. In the first one, the air traffic network is a random Erd\H{o}s-R\'enyi graph, while in the second one the network has the same topology as the real system but fluxes and populations are taken as uniform and equal to the average values of the corresponding real variables. In this way the authors could assess the relative importance of topological (network) and metric (population, SIR rates, etc.) variables by comparing simulations of the two benchmark cases to simulations of the model fully calibrated on real data. The striking result is that the model where the topology of the air network is faithfully reproduced but the weights are homogeneous produces simulations much more similar to those of the fully calibrated model than the simulations of the random graph model. This finding strongly indicates that (at least in the framework of the model) ``the air-transportation-network properties are responsible for the global pattern of emerging diseases" \cite{Colizza06}. The same result holds if one considers the predictability of epidemic spreading. By defining a suitable measure of the sensitivity of simulations to random perturbations, Colizza {\it et al.} \cite{Colizza06} found that the predictability (or, more precisely, the reproducibility under random perturbations) of the random graph model is much higher than that of the fully calibrated model. Again, the model where only the real air transport network topology is reproduced displays patterns similar to the fully calibrated model. These results point out that the topology of the air transport network plays an important role, not only for the mobility of people, but also for the dynamics of entities that depend on human mobility.
\section{Resilience and vulnerability} \label{resil} In this last Section we review some results on the resilience of the air transport network, i.e., its ability to adjust its functioning prior to, during, and following internal and external disturbances, so that it can sustain required operations under both expected and unexpected conditions \cite{Hollnagel11}. For example, a few preliminary studies (for example, \cite{Lillo11}) found a positive correlation between a node's topological properties and its typical fraction of delayed flights. However, in spite of its relevance for passengers and society in general \cite{Eur11}, little effort has been devoted to understanding the relationships between the topology of air transport networks and the vulnerability of their dynamics. Two significant exceptions can be found in the literature. The first one, by Chi and coworkers \cite{Chi04}, analyzes how the main topological properties of the US air transport network are changed by random failures and attacks. The former effect is analyzed by de-activating airports at random, thus simulating random disturbances such as emergency situations or adverse weather. The latter is investigated by deleting the most connected nodes (as in an intentional terrorist attack). As known in more general settings \cite{Albert00}, scale-free networks are extremely resilient to random failures. However, this comes at a high price, because they are also extremely vulnerable to targeted attacks: the de-activation of $10\%$ of the most connected airports is enough to reduce the topological efficiency \cite{Latora03} of the network by $25\%$. The resilience of the air transport system to random failures, as obtained in Ref. \cite{Chi04}, is partly aligned with the experience that we all have as passengers. In spite of multiple failures that may appear in small airports across the network, the dynamics of the system as a whole is seldom disturbed.
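The attack-versus-failure experiment can be reproduced in miniature. The sketch below (our illustration; the growth rule and network size are not those of Ref. \cite{Chi04}) grows a scale-free-like network by preferential attachment and compares the topological efficiency, the mean of inverse distances as in Ref. \cite{Latora03}, after removing the most recently attached (peripheral) node versus the most connected one.

```python
import random
from collections import deque

def preferential_attachment(n, seed=0):
    """Grow a tree where new nodes attach to existing ones with probability
    proportional to degree (a minimal scale-free-like generator)."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}
    ends = [0, 1]                   # each node listed once per link end
    for new in range(2, n):
        target = rng.choice(ends)
        adj[new] = {target}
        adj[target].add(new)
        ends += [new, target]
    return adj

def efficiency(adj):
    """Global efficiency: average of 1/d(i,j) over ordered pairs,
    with unreachable pairs contributing zero."""
    total = 0.0
    for s in adj:
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(1.0 / d for x, d in dist.items() if x != s)
    n = len(adj)
    return total / (n * (n - 1))

def remove(adj, node):
    return {u: {v for v in nbrs if v != node}
            for u, nbrs in adj.items() if u != node}

G = preferential_attachment(60)
hub = max(G, key=lambda u: len(G[u]))
e_failure = efficiency(remove(G, 59))   # failure of the newest leaf node
e_attack = efficiency(remove(G, hub))   # targeted attack on the main hub
print(e_failure, e_attack)
```

Removing the hub fragments the network and cuts the efficiency far more than removing a peripheral node, which is the signature of the robust-yet-fragile behavior of scale-free topologies.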
Yet, a complementary problem is represented by those events that push the dynamics of the system far away from its normal point of operation. For instance, black swans such as large strikes or the eruption of a volcano dramatically affect the performance of the system. The eruption of the Eyjafjallaj\"okull volcano in 2010 is a clear example of such random events with larger-than-expected consequences. This problem has been tackled in Ref. \cite{Wilkinson12}. Specifically, a set of random networks has been created in which the main properties of the topology (i.e., its scale-free nature) and the spatial position of nodes are maintained as found in the European air transport network. Results indicate that the severe disruptions observed in 2010 are explained by the geographical correlation of the disturbances (which are not, thus, completely random, as in the model of Ref. \cite{Chi04}), and by the geographical correlation of hubs, which are concentrated in the centre of Europe. The proposed solution is to move some hub airports from Germany to peripheral regions. However, although this may improve the resilience against black swans, the economic consequences for airlines would probably exceed the expected benefits. \section{Conclusions and open lines of research} \label{concl} In conclusion, we have presented a short review of the recent use of complex network methods for the characterization of the structure of air transport and of its dynamics. We have shown that most published studies have focused on the topological and metric properties of flight networks, where nodes represent airports, and links are created between pairs of them, bringing information on the presence and the frequency of flights. Specifically, these papers can be classified into three main families. Firstly, some of them propose a simple characterization of the topology of the networks, without considering their evolution through time.
The recent change from a point-to-point to a hub-and-spoke system has triggered a series of studies aimed at identifying and characterizing this transition, by monitoring the evolution of network characteristics through time. Finally, some works have focused on the dynamics on the network, as, for instance, the movement of passengers and epidemic spreading. However, we believe that this is just the starting point of a fruitful cross-fertilization between air transport science and complex network theory. Many different types of networks can be defined by taking into account variables or phenomena that have not yet been investigated with networks. Therefore, we anticipate that many interesting contributions will be published in the near future about air transport networks involving, for example, airways and navpoints, delays, safety events, crews and physical aircraft, sectors, etc. In the near future our society is likely to face a significant growth of air traffic. Radical organizational and institutional changes are already taking place to accommodate this increase, for instance with the progressive integration of the still nationally fragmented EU airspace management, thanks to the Single European Sky initiative and the SESAR program. The current system will drastically change, thus making the development of innovative network management methods and tools a primary research area. We believe that complex network theory can give a significant contribution to this challenge. \section{Acknowledgements} The authors acknowledge S. Miccich\`e, R. N. Mantegna, and S. Pozzi for useful discussions. This work has been developed as part of the activities of the ComplexWorld Network (www.complexworld.eu). The work presented herein was co-financed by EUROCONTROL on behalf of the SESAR Joint Undertaking in the context of SESAR Work Package E, project ELSA. The paper reflects only the authors' views.
EUROCONTROL is not liable for any use that may be made of the information contained therein.
\section{Introduction} Let $L$ be a $2$-component link in $S^3$ with trivial linking number. Choose a Seifert surface for each component of $L$ that misses the other component and such that the surfaces intersect transversely. The intersection of the two Seifert surfaces gives a framed link in $S^3$. Such a framed link determines a homotopy class of maps $S^3 \rightarrow S^2$ by the Pontryagin-Thom construction. \begin{definition} The Sato-Levine invariant of $L$ is the corresponding group element of $\pi_3(S^2) = \mathbb{Z}$. \end{definition} This definition first appears in \cite{Sato}. The non-vanishing of the Sato-Levine invariant of $L$ provides an obstruction to the link $L$ bounding disjoint locally flat discs in the $4$-ball (in other words, an obstruction to $L$ being slice). In this paper we give a combinatorially-defined obstruction $\phi(L) \in \mathbb{Z}/4\mathbb{Z}$ to $L$ being slice. It turns out to be equal to the modulo $4$ reduction of the Sato-Levine invariant. Nevertheless, the proofs of the well-definedness and properties of $\phi$ are straightforward and direct. The intermediate construction used in the proofs is a flat $SO(3)$ connection on a $4$-manifold. The result follows from an application of the Dold-Whitney theorem (which classifies all $SO(3)$ bundles over a $4$-complex by their characteristic classes). \begin{theorem}[Dold-Whitney \cite{DW}] Let $X$ be a 4-dimensional CW-complex. A principal $SO(3)$ bundle $E$ over $X$ is determined by the pair consisting of its Pontryagin class $p_1(E) \in H^4(X;\mathbb{Z})$ and second Stiefel-Whitney class $w_2(E) \in H^2(X; \mathbb{Z}/2\mathbb{Z})$. Furthermore, there is an $SO(3)$ bundle $E$ realizing $p_1(E) = a$ and $w_2(E) = b$ exactly when \[ \bar{a} = b^2 \in H^4(X; \mathbb{Z}/4\mathbb{Z}) \] where we write $\bar{a}$ for the reduction of $a$ and where the squaring of $b$ is the Pontryagin squaring operation.
\end{theorem} In essence, we are giving an essentially 4-dimensional proof of the invariance and properties of a reduction of the Sato-Levine invariant. \subsection*{Acknowledgements} We thank the Max Planck Institute for their hospitality and thank the colleagues there who showed such an interest in this interpretation of a well-known invariant. \section{Definition and properties} Let $L$ be an oriented link in $S^3$ of trivial linking number comprising two components $K_1$ and $K_2$. Then there certainly exist two disjoint locally flat immersed discs in the $4$-ball $B^4$, bounded by $L$, where the discs are boundary-transverse and oriented consistently with $L$. Let $D_1$ and $D_2$ be two such discs. \begin{definition} \label{defn:whatitis} To each self-intersection point $p \in B^4$ of $D_1$ or $D_2$ we associate a number $i(p) \in \{ -1, 0 , 1 \}$ as follows. Let $\{ s,t \} = \{ 1,2 \}$, and suppose that $p$ is a self-intersection point of $D_s$. Choose a loop $l$ which starts and ends at $p \in B^4$, staying on $D_s$ and starting and ending on different branches of the intersection. Then we set \[ w(p) : = [l] \in H_1(B^4 \setminus D_t ; \mathbb{Z}/2\mathbb{Z}) = \mathbb{Z}/2\mathbb{Z} = \{ 0 , 1 \} {\rm .} \] Note that this is independent of the choice of $l$. We define \[ i(p) = w(p) \sigma(p) \] where $\sigma(p) = \pm 1$ is the sign of the intersection at $p$. \end{definition} \begin{definition} We define \[ \phi(L, D_1, D_2) = \sum_p i(p) \in \mathbb{Z}/4\mathbb{Z} \] \noindent where the sum is taken over all the self-intersections $p$ of $D_1$ and $D_2$. \end{definition} \begin{remark} The fact that $\phi$ is the reduction of the Sato-Levine invariant may be deduced from this definition and the crossing-change formula due to Jin \cite{Jin} and Saito \cite{Saito}. \end{remark} We shall show the following \begin{proposition} \label{prop:gluing} Suppose that $L$ bounds the two pairs of disjoint locally flat immersed discs $(D_1, D_2)$ and $(D'_1, D'_2)$. 
Then there exists a closed $4$-manifold $X$ with a flat $SO(3)$-bundle $E \rightarrow X$ with \[ \phi(L, D_1, D_2) - \phi(L,D'_1,D'_2) = w_2^2(E) = p_1(E) = 0 \in \mathbb{Z} / 4\mathbb{Z} = H^4(X ; \mathbb{Z} / 4\mathbb{Z}) \rm{.} \] \end{proposition} From this proposition we immediately obtain a corollary. \begin{corollary} \label{cor:obstruction} The quantity $\phi(L, D_1, D_2)$ depends only on the link $L$. So we can write $\phi(L) = \phi(L,D_1,D_2)$. Furthermore, if $\phi(L) \not= 0$ then $L$ does not bound two disjoint embedded locally flat discs in $B^4$. \qed \end{corollary} We note that the content of the equation in Proposition \ref{prop:gluing} is the first equality sign, the second being the Dold-Whitney theorem (the squaring operation here is the Pontryagin square, a $\mathbb{Z}/4\mathbb{Z}$ lift of the cup product), and the third being a consequence of the flatness of the bundle $E$. \begin{remark} \label{rem:saito} Work by Saito \cite{Saito} gives a $\mathbb{Z}/4\mathbb{Z}$-valued extension of the Sato-Levine invariant for links of even linking number. Saito's invariant is constructed via considering the framed intersection of possibly non-orientable Seifert surfaces, and is distinct from that which we consider. \end{remark} We devote the following section to the description of the manifold $X$ and the $SO(3)$-bundle $E \rightarrow X$. \section{Construction of a 4-manifold with an $SO(3)$-bundle} Given an immersed locally-flat $2$-link $\Lambda \subseteq S^4$ of two components with no intersections between distinct components of the link, we give a construction of a closed diagonal $4$-manifold $X_\Lambda$. Suppose that $\Lambda$ has $n_-$ negative and $n_+$ positive intersection points. Then we blow-up each negative intersection point by taking connect sum with $\overline{\mathbb{P}}^2$ and each positive intersection point by taking connect sum with $\mathbb{P}^2$. 
Let \[ \overline{\Lambda} \hookrightarrow n_-\bar{\mathbb{P}}^2 \# n_+\mathbb{P}^2 \] be the proper transform of $\Lambda$. Because of the way we chose to blow-up the negative and positive intersections respectively, each exceptional sphere intersects $\bar{\Lambda}$ in two points, once negatively, and once positively. Furthermore, since the self-intersections of $\Lambda$ do not occur between the distinct components of $\Lambda$, each exceptional sphere intersects exactly one component of $\bar{\Lambda}$. This means that each component of $\bar{\Lambda}$ is trivial homologically, and so has a trivial $D^2$-neighborhood. This allows us to do surgery by removing a neighborhood $\bar{\Lambda} \times D^2$ and gluing in two copies of $D^3 \times S^1$. We call the resulting manifold $X_\Lambda$. Now we collect some information about the algebraic topology of $X_\Lambda$. \begin{prop} \label{prop:algtop} The 4-manifold $X_\Lambda$ has diagonal intersection form and satisfies \begin{align*} H_1(X_\Lambda ; \mathbb{Z}) = \mathbb{Z}^2, \,\,\,& H_2(X_\Lambda ; \mathbb{Z}) = \mathbb{Z}^{n_+ + n_-},\\ b_2^+ = n_+,\,\,\,& b_2^- = n_- \rm{.} \end{align*} \end{prop} \begin{proof} We shall display $n_- + n_+$ disjoint embedded tori in $X_\Lambda$, $n_-$ of which have self-intersection $-1$ and $n_+$ of which have self-intersection $+1$. Using a simple argument counting handles and computing Euler characteristics, it is easy then to deduce the statement of the proposition. Each exceptional sphere $E \subset n_-\bar{\mathbb{P}}^2 \# n_+\mathbb{P}^2$ intersects $\bar{\Lambda}$ transversely in two points. Connect these two points by a path on $\bar{\Lambda}$. The $D^2$-neighborhood of $\bar{\Lambda}$ pulls back to a trivial $D^2$-bundle over the path. The fibers over the two endpoints can be identified with neighborhoods on $E$. 
Removing these neighborhoods from $E$ we get a sphere with two discs removed and we take the union of this with the $S^1$ boundaries of all the fibers of the $D^2$-bundle over the path. This either gives a torus or a Klein bottle. Because $E$ intersects $\bar{\Lambda}$ once positively and once negatively, we see that we in fact get a torus which has self-intersection $\pm 1$. Finally note that we can certainly choose paths on $\bar{\Lambda}$ for each exceptional sphere which are disjoint. \end{proof} \section{A flat connection and the Dold-Whitney theorem} This section considers the characteristic classes of $SO(3)$-bundles, but in fact we shall only be concerned with those bundles whose structure group can be restricted to a small subgroup of $SO(3)$. \begin{definition} \label{def:klein4groupthing} Let $V_4 \subseteq SO(3)$ be the Klein $4$-group \[ V_4 = \left\{ \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array} \right), \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & -1 \end{array} \right), \left( \begin{array}{ccc} -1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & -1 \end{array} \right), \left( \begin{array}{ccc} -1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{array} \right) \right\} { \rm .}\] In future, we write $x_1, x_2, x_3$ for the non-identity elements. \end{definition} We begin with a well-known (in certain circles) lemma about a flat $SO(3)$-connection on the torus. \begin{lemma} \label{lem:flat_connection_torus} Let $T^2$ be a torus and let $\eta : \pi_1(T^2) \rightarrow SO(3)$ be defined by $\eta(a) = x_1$ and $\eta(b) = x_2$ where $a, b$ is a basis for $\pi_1(T^2) = H_1(T^2; \mathbb{Z}) = \mathbb{Z} \oplus \mathbb{Z}$. Writing $E_\eta$ for the associated (flat) $SO(3)$-bundle, we have \[ w_2 (E_\eta) = 1 \in H^2(T^2; \mathbb{Z}/2) = \mathbb{Z}/2 {\rm .} \] \end{lemma} \begin{proof} Note that the matrices of $V_4$ are all diagonal with entries in $\mathbb{Z}/2\mathbb{Z} = O(1)$.
Hence, thinking of $E_\eta$ as an $O(3)$-bundle, we can write $E_\eta = L_1 \oplus L_2 \oplus L_3$ where $L_i$ is the (flat) real line bundle determined by the representation \[ \pi_1(T^2) \stackrel{\eta}{\longrightarrow} V_4 \stackrel{p_i}{\longrightarrow} \mathbb{Z}/2\mathbb{Z} = O(1) {\rm ,} \] where $p_i$ is given by the $(ii)$ matrix entry. Each $L_i$ is the pullback of a M\"obius line bundle over a circle by a map $T^2 \rightarrow S^1$ (depending on $i$) which is a projection map onto an $S^1$ factor of $T^2$. We compute then that \[ w_1(L_1) = \bar{a}, \, \, w_1(L_2) = \bar{b}, \, \, {\rm and} \, \, \, w_1(L_3) = \bar{a} + \bar{b} {\rm ,} \] where we write $\bar{a}, \bar{b} \in H^1(T^2; \mathbb{Z}/2\mathbb{Z})$ for the reductions of the Poincar\'e duals of $a$ and $b$ respectively. Then we compute via the cup-product formula for the Stiefel-Whitney class of a sum of bundles: \[ w_2(E_\eta) = \bar{a}\cup\bar{b} + \bar{b}\cup(\bar{a} + \bar{b}) + (\bar{a} + \bar{b})\cup\bar{a} = \bar{a} \cup \bar{b} = 1 \in H^2(T^2 ; \mathbb{Z}/2\mathbb{Z}) {\rm .} \] \end{proof} \begin{remark} \label{rem:surjectivity_torus} For representations $\eta : \pi_1(T^2) \rightarrow V_4$, Lemma \ref{lem:flat_connection_torus} says that $w_2(E_\eta)$ is non-trivial exactly when $\eta$ is surjective (note that if $\eta$ is not surjective then $E_\eta$ is the pullback of a bundle over a circle). \end{remark} Suppose now that we are in the situation of the hypotheses of Proposition \ref{prop:gluing}. By gluing together the two pairs of disks $(D_1, D_2)$ and $(D'_1, D'_2)$ along their boundary $L \subset S^3$, we get a $2$-component locally-flat immersed link $\Lambda \subset S^4$. We write $\Lambda_j$ for the sphere resulting from gluing together $D_j$ and $D'_j$ for $j = 1,2$. In performing this gluing we of course reverse the orientation of the second 4-ball. 
This has the effect that positive/negative self-intersections of $(D'_1, D'_2)$ become negative/positive self-intersections of $\Lambda$ respectively. We write $X = X_\Lambda$, and now give a flat $SO(3)$ connection on $X$. Let $\theta : \pi_1 (X) \rightarrow SO(3)$ be a representation that factors through an onto map $\bar{\theta}: H_1(X ; \mathbb{Z}) = \mathbb{Z} \oplus \mathbb{Z} \rightarrow V_4$. We define $\theta$ by setting $\bar{\theta} : m_j \mapsto x_j$ where $m_j$ is a meridian of $\Lambda_j$ for $j = 1,2$. We write $E_\theta$ for the associated (flat) $SO(3)$-bundle over $X$. We are interested in the characteristic classes $w_2(E_\theta) \in H^2(X;\mathbb{Z}/2\mathbb{Z})$ and $p_1(E_\theta) \in H^4(X;\mathbb{Z})$. In the case we consider in this paper, we know immediately that $p_1(E_\theta) = 0$ since the bundle admits a flat connection. Proposition \ref{prop:gluing} now follows by computing $w_2^2(E_\theta)$ using our basis of tori representing the second homology of $X$. \begin{proof}[Proof of Proposition \ref{prop:gluing}] As noted before, the content of the proposition is in the first equality sign, namely that we have \[ w_2^2(E_\theta) = \phi(L, D_1, D_2) - \phi(L, D'_1, D'_2) \in H^4(X ; \mathbb{Z}/4\mathbb{Z}) {\rm .} \] We compute $w_2(E_\theta) \in H^2(X ; \mathbb{Z}/2\mathbb{Z})$ by pulling back the representation $\theta$ to each torus representing a basis element of $H_2(X ; \mathbb{Z})$. Let $T_p \subseteq X$ be a torus as constructed in Proposition \ref{prop:algtop} coming from a self-intersection point $p \in \Lambda_j$ for some $j \in \{1,2\}$. We wish to give a pair of $H_1(T_p ; \mathbb{Z})$-generating circles on $T_p$. The first of these circles we take to be a meridian $m_p$ to $\Lambda_j$. The other we take to be any circle $l_p$ on $T_p$ which is dual to $m_p$. Then the restriction of $\theta$ to $\pi_1(T_p) = H_1(T_p ; \mathbb{Z})$ is determined by $\bar{\theta}(m_p)$ and $\bar{\theta}(l_p)$. 
We know by the definition of $\theta$ that we have $\bar{\theta}(m_p) = x_j$. On the other hand, $\bar{\theta}(l_p)$ is determined by the class of $l_p$ in $H_1(X ; \mathbb{Z}/2\mathbb{Z})$. Consider $w(p)$ as given in Definition \ref{defn:whatitis}. If we have $w(p) = 0$ then $\bar{\theta}(l_p) \in \{ 1, x_j \}$, but if $w(p) = 1$ then $\bar{\theta}(l_p) \notin \{ 1, x_j \}$. In consequence, $\theta \vert_{\pi_1(T_p)}$ maps onto $V_4$ if and only if $w(p) = 1$. In light of Remark \ref{rem:surjectivity_torus}, it follows that $w_2(E_\theta \vert_{T_p}) = w(p) \in \mathbb{Z}/2\mathbb{Z} = H^2(T, \mathbb{Z}/2\mathbb{Z})$. The equation we wish to prove then follows since, computing in $H^4(X, \mathbb{Z}/4\mathbb{Z})$, we have \begin{align*} p_1(E_\theta) &= w_2^2(E_\theta) = \left( \sum_p (w_2(E_\theta)[T_p])\overline{[T_p]} \right)^2 \\ &= \sum_p (w_2(E_\theta \vert_{T_p})[T_p])(\overline{[T_p]} \cup \overline{[T_p]}) = \sum_p w(p)(\overline{[T_p]} \cup \overline{[T_p]}) \\ &= \phi(L, D_1, D_2) - \phi(L,D'_1,D'_2) {\rm ,} \end{align*} where we write $[T_p]$ for the fundamental class of $T_p$ and the overline denotes the Poincar\'e dual. We use here that the Pontryagin square of the $\mathbb{Z}/2\mathbb{Z}$ reduction of an integral class is the $\mathbb{Z}/4\mathbb{Z}$ reduction of the usual square of that integral class. \end{proof} \begin{remark} It is possible to give a more complicated construction along the lines above, which should extend the invariant to $2$-component links of even linking number. This recovers the $\mathbb{Z}/4\mathbb{Z}$ reduction of the Sato-Levine invariant due to Akhmetiev and Repovs \cite{ZhRe} for this class of links. The construction above starts with two pairs of discs $(D_1, D_2)$ and $(D'_1, D'_2)$. In the case of a link $L$ of non-zero linking number $2n$ we start rather with two immersed concordances from $L$ to the $(2,4n)$-torus link.
These may then be glued end-to-end and the resulting immersed surface resolved by blow-up in order to give two embedded tori $\Lambda$ of self-intersection $0$ in a blow-up of $S^1 \times S^3$. Surgery may be done on $\Lambda$ in order to give a closed $4$-manifold $X$. The main subtleties in this new situation are in performing the surgery so that one obtains $X$ with the correct algebraic topology, and in dealing with an intersection form that is no longer diagonal. \end{remark} \bibliographystyle{amsplain}
\section{Introduction} In \cite{Pandharipande:2014fk}, Pandharipande, Solomon and Tessler constructed a rigorous intersection theory on the moduli space of the disk, and proved that the generating function for these numbers obeys a number of constraint conditions that are direct analogues of the KdV equation and Virasoro constraints for intersection theory on moduli spaces of closed Riemann surfaces (c.f. \cite{Kontsevich:1992fv, MR1144529, MR1083914}). These constraints uniquely specify the higher-genus intersection numbers, and led to conjectural equations for the resulting generating functions. In particular, they conjectured a Virasoro constraint condition, and a solution of an integrable system that they termed the open KdV equation. Later, it was shown by Buryak \cite{Buryak:2014kx} that these two systems of differential equations are, in fact, compatible. In so doing, he demonstrated that the open KdV equations form part of a larger hierarchy, called the Burgers-KdV hierarchy, which he conjectured to be the correct framework for introducing descendent integrals for the marked points appearing on the boundary of the surfaces. These conjectures have been proven in \cite{Buryak:2015uq,Tessler:2015ys}, with the caveat that the rigorous construction of the necessary moduli spaces has been announced by Solomon and Tessler, but as of the writing of this paper, has not yet appeared. Alexandrov \cite{Alexandrov:2015kq} constructed a solution to the Burgers-KdV hierarchy based on the Kontsevich-Penner matrix model. The partition function for this matrix model was shown to satisfy the so-called MKP hierarchy. In addition, Alexandrov \cite{Alexandrov:2014gfa} constructed a $W$-algebra constraint on this function. In the present work, we take Alexandrov's $W$-constraint as the starting point, and show that it is equivalent to a topological recursion equation for the generating function of open intersection numbers. 
We formulate this recursion in two ways: first as a master equation, in the sense of Kazarian and Zograf \cite{Kazarian:2014ys} (also present in \cite{Eynard:2007kx}), and then as a residue calculation. The spectral curve turns out to be the same as the spectral curve for the Witten-Kontsevich generating function (c.f. \cite{MR2944483, Bergere:2009ve}), namely \begin{align*} x &= \frac{1}{2}z^2 \\ y &= z, \end{align*} however, the topological recursion formula itself is rather unusual, in that it combines aspects of the topological recursion formula for curves with higher-order branch points (c.f. \cite{Bouchard:2013ss, MR3147410, Ferrer:2010fk}), as well as $\beta$-deformed topological recursion, as constructed in \cite{Bergere:2012dp, MR3165804}. We also explore a potential refinement of the open intersection numbers by incorporating a grading parameter $Q$, which separates the components of a given moduli space by the number of boundary components in the underlying surface. We conjecture that the resulting generating function is given by a parametrized version of the Kontsevich-Penner matrix model. If true, it would immediately imply a quantum curve equation for the principal specialization of the generating function: \begin{equation*} \left(\hbar^3 \frac{d^3}{dx^3} - 2\hbar x \frac{d}{dx} + 2\hbar(Q-1)\right) e^{\Psi_Q} = 0. \end{equation*} Note that the quantum curve for open intersection numbers (without the $Q$ grading) is given by substituting $Q=1$. The paper is organized as follows. In Section~\ref{sect:W-Constraint}, we review the recent results on open intersection numbers and, in particular, Alexandrov's construction of $W$-constraints on the generating function of open intersection numbers. Then, in Section~\ref{sect:TR} we introduce the tools and notation used in topological recursion calculations. In Section~\ref{sect:MasterEquation} we formulate the $W$-constraint condition as an equivalent system of master equations. 
In Section~\ref{sect:Residue} we write the master equation as a residue integral. In Section~\ref{sect:Q-Grading} we conjecture a refinement of the generating function which tracks the intersection numbers for moduli spaces of surfaces with different numbers of boundary components. Finally, in Section~\ref{sect:QuantumCurve}, we derive the quantum curve for the $Q$-graded correlators. \begin{acknowledgement} The author thanks Bertrand Eynard for useful discussions. During the preparation of this paper, the author received support from the Max Planck Institute for Mathematics in Bonn, the Institut des Hautes \'Etudes Scientifiques, and the National Science Foundation through grants 1002477 and DMS-1308604. \end{acknowledgement} \section{Open intersection numbers and Alexandrov's W-constraints} \label{sect:W-Constraint} There are many effective methods for computing so-called descendent integrals on moduli spaces of closed Riemann surfaces: \begin{equation*} \braket{\tau_{a_1} \cdots \tau_{a_l}}_g = \int_{\widebar{\mathcal{M}}_{g,l}} \psi_1^{a_1} \cdots \psi_{l}^{a_l}, \end{equation*} where $\psi_i$ is the first Chern class of the natural line bundle on $\widebar{\mathcal{M}}_{g,l}$ given by taking the cotangent space of the curve at the $i$-th marked point. The approach pioneered by Witten is to collect them into generating functions \begin{align*} F_g^{\text{WK}} (T_0, T_1, \ldots) &= \sum_{n=0}^{\infty} \frac{1}{n!}\braket{\gamma^n}_g, \\ F^{\text{WK}} &= \sum_{g=0}^{\infty} u^{2g-2} F_g^{\text{WK}}, \\ \tau_{\text{WK}} & = e^{F^{\text{WK}}}, \end{align*} for $\gamma = \sum T_k \tau_k$. He conjectured \cite{MR1144529} (proven by Kontsevich \cite{Kontsevich:1992fv}) that $\tau_{\text{WK}}$ is a $\tau$-function for the KdV hierarchy, with KP times $\{t_k\}$ given by $T_k = (2k+1)!! t_{2k+1}$. 
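For orientation, the genus-zero part of this story is completely explicit: Witten's closed formula gives $\braket{\tau_{a_1} \cdots \tau_{a_n}}_0 = (n-3)!/\prod_i a_i!$ whenever $\sum_i a_i = n-3$ (and zero otherwise), and the string equation coming from the $L_{-1}^{\text{WK}}$ constraint can then be verified directly. A minimal Python sketch, our own illustration rather than code from any of the cited works:

```python
from math import factorial

def tau_g0(a):
    """Genus-zero descendent integral <tau_{a_1} ... tau_{a_n}>_0.

    Witten's closed formula: (n-3)!/(a_1! ... a_n!) when sum(a) = n - 3
    and n >= 3; the integral vanishes otherwise.
    """
    n = len(a)
    if n < 3 or sum(a) != n - 3:
        return 0
    value = factorial(n - 3)
    for ai in a:
        value //= factorial(ai)   # multinomial coefficient, always an integer
    return value

def string_rhs(a):
    """RHS of the string equation:
    <tau_0 tau_{a_1} ... tau_{a_n}>_0 = sum_i <... tau_{a_i - 1} ...>_0."""
    return sum(tau_g0(a[:i] + [ai - 1] + a[i+1:])
               for i, ai in enumerate(a) if ai >= 1)

print(tau_g0([0, 0, 0]))       # 1: three marked points on the sphere
print(tau_g0([0, 0, 0, 1]))    # 1
# Spot-check the string equation on a few index tuples:
for a in ([0, 0, 1], [0, 0, 0, 2], [0, 0, 1, 1], [1, 1, 1]):
    assert tau_g0([0] + a) == string_rhs(a)
print("string equation verified")
```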
Equivalently, there is a family of differential operators $L_n^{\text{WK}}$ ($n \geq -1$) that annihilate the generating function: \begin{equation*} L^{\text{WK}}_n \tau_{\text{WK}} = 0, \end{equation*} and satisfy the commutation relations of (half of) the Virasoro algebra $[L^{\text{WK}}_n, L^{\text{WK}}_m] = (n-m) L^{\text{WK}}_{n+m}$. In \cite{Pandharipande:2014fk}, Pandharipande, Solomon, and Tessler began extending descendent integration to moduli spaces of open Riemann surfaces, or more precisely, Riemann surfaces with boundary. In particular, a Riemann surface with boundary $(X, \partial X)$ is a 1-dimensional complex manifold with finitely many circular boundaries, each with a holomorphic collar structure. The double of $(X, \partial X)$, denoted $D(X, \partial X)$, is the closed Riemann surface obtained by Schwarz reflection across the boundary of $X$. If $X$ has genus $g$ and $b$ boundary components then we define the augmented genus of $X$, $\mathfrak{h}(X) = g + b/2$. The genus of the double of $X$ is given by $2\mathfrak{h}(X) - 1$. We define $\mathcal{M}_{h, k, l}$ to be the moduli space of (possibly open) Riemann surfaces $X$ with $\mathfrak{h}(X)= h$, $k$ marked points on the boundary of $X$, and $l$ interior marked points. We note the slightly different conventions than those used by \cite{Pandharipande:2014fk}, where in particular we use the parameter $\mathfrak{h}(X)$ instead of the genus of the doubled surface, and we consider the moduli space of closed Riemann surfaces $\mathcal{M}_{h,l}$ to be a connected component of $\mathcal{M}_{h, 0, l}$. It is also worth pointing out that $h$ can be any non-negative integer or half-integer. The space $\mathcal{M}_{h, k, l}$ is a real orbifold of dimension $6h - 6 + k + 2l$. 
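A quick sanity check of this dimension count (our own example, not taken from \cite{Pandharipande:2014fk}): for the disk with three boundary marked points we have $g = 0$, $b = 1$, hence $h = \frac{1}{2}$, with $k = 3$ and $l = 0$.

```latex
% Disk with k = 3 boundary marked points and l = 0 interior marked points:
% h = g + b/2 = 0 + 1/2, so the dimension formula gives
\[
  \dim_{\mathbb{R}} \mathcal{M}_{\frac{1}{2}, 3, 0}
    = 6 \cdot \tfrac{1}{2} - 6 + 3 + 2 \cdot 0
    = 0,
\]
% consistent with the fact that the automorphism group of the disk acts
% simply transitively on positively ordered triples of boundary points.
```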
At interior marked points we have cotangent line bundles \begin{equation*} \LL_i \rightarrow \widebar{\mathcal{M}}_{h, k, l} \quad\quad i=1, \ldots, l, \end{equation*} and we wish to consider $\psi_i = c(\LL_i) \in H^2(\widebar{\mathcal{M}}_{h,k,l})$. Furthermore, one can construct cotangent line bundles \begin{equation*} {\skew{-5}{\widetilde}{\mathbb{L}}}_j \rightarrow \widebar{\mathcal{M}}_{h, k, l} \quad\quad j=1, \ldots, k \end{equation*} for the marked points on the boundary and consider $\phi_j = c({\skew{-5}{\widetilde}{\mathbb{L}}}_j) \in H^2(\widebar{\mathcal{M}}_{h, k, l})$. If such constructions can be made rigorous, then one could calculate descendent integrals \begin{equation} \braket{\tau_{a_1} \ldots \tau_{a_l} \sigma_{b_1} \cdots \sigma_{b_k}}_h^{\mathfrak{o}} = \int_{\widebar{\mathcal{M}}_{h,k,l}} \psi_1^{a_1} \cdots \psi_l^{a_l} \phi_1^{b_1} \cdots \phi_k^{b_k}, \end{equation} and the resulting generating functions \begin{align*} F_h(T_0, T_1, \ldots ; S_0, S_1, \ldots) &= \sum_{k,l=0}^{\infty} \frac{1}{k! l!} \braket{\gamma^k \lambda^l}_h^\mathfrak{o} \\ F &= \sum_{2h \in \mathbb{Z}_{\geq 0}} u^{2h-2} F_h \\ \tau_\mathfrak{o} &= e^{F}, \end{align*} where $\gamma = \sum T_j \tau_j$ and $\lambda = \sum S_j \sigma_j$. Although it is not yet possible to define open intersection numbers in full generality, \cite{Pandharipande:2014fk} contains a rigorous treatment for $h=0, \frac{1}{2}$ and $S_i = 0 \forall i \geq 1$, while the construction for arbitrary $h$ and $S_i$ has been announced by Solomon and Tessler. Based on their analysis of descendent integrals on the disk, Pandharipande, Solomon and Tessler \cite{Pandharipande:2014fk} conjectured a Virasoro constraint and open KdV equation for the generating function of open intersection numbers. 
In particular, for $Z = \tau_\mathfrak{o}(T_0, T_1, \ldots, S_0=S, S_1=0, S_2=0, \ldots)$ they conjectured that \begin{equation*} \cL_n Z = 0 \quad \text{for $n=-1, 0, \ldots$}, \end{equation*} where \begin{multline*} \cL_n = - \frac{(2n+3)!!}{2^{n+1}} \frac{\partial}{\partial T_{n+1}} + \sum_{a=0}^{\infty} \frac{(2(a+n) + 1)!!}{2^{n+1} (2a-1)!!} T_a \frac{\partial}{\partial T_{a+n}} \\ + \frac{u^2}{2^{n+1}} \sum_{a+b = n-1} (2a+1)!! (2b+1)!! \frac{\partial^2}{\partial T_a \partial T_b} + \delta_{n, -1} u^{-2} \frac{T_0^2}{2} + \delta_{n, 0} \frac{1}{16} \\ + u^n S \frac{\partial^{n+1}}{\partial S^{n+1}} + \frac{3n + 3}{4}u^n \frac{\partial^n}{\partial S^n} \end{multline*} and these operators satisfy the commutation relations $[\cL_n, \cL_m] = (n-m)\cL_{n+m}$. In addition, they conjectured that the generating function satisfies the following open KdV equations \begin{multline*} (2n+1) \dbraket{\tau_n}^\mathfrak{o} = u \dbraket{\tau_{n-1}\tau_0} \dbraket{\tau_0}^\mathfrak{o} + 2 \dbraket{\tau_{n-1}}^\mathfrak{o} \dbraket{\sigma_0}^\mathfrak{o} \\ + 2 \dbraket{\tau_{n-1}\sigma_0}^\mathfrak{o} - \frac{u}{2}\dbraket{\tau_{n-1} \tau_0^2}, \end{multline*} where \begin{align*} \dbraket{\tau_{a_1} \dots \tau_{a_l}} &= \frac{\partial}{\partial T_{a_1}} \cdots \frac{\partial}{\partial T_{a_l}} F^{\text{WK}}(T_0, T_1, \ldots) \\ \dbraket{\tau_{a_1} \dots \tau_{a_l} \sigma_0^k}^\mathfrak{o} &= \frac{\partial}{\partial T_{a_1}} \cdots \frac{\partial}{\partial T_{a_l}} \frac{\partial^k}{\partial S^k} F^{\mathfrak{o}}(T_0, T_1, \ldots; S_0=S, S_1=0, S_2=0, \ldots). \end{align*} Many details of the proofs of these conjectures are in \cite{Buryak:2015uq, Tessler:2015ys}, with the remaining part (chiefly the construction of the compact moduli spaces and extensions of the line bundles to the boundary) announced by Solomon and Tessler. 
Independent of the conjectures themselves, Buryak proved \cite{Buryak:2014kx} that the Virasoro constraint and open KdV equations are consistent and compatible with each other. In so doing, he introduced an extended $\tau$-function with additional parameters $S_1, S_2, \ldots$, which satisfies the Burgers-KdV hierarchy. He conjectured that these parameters introduce descendent integration with respect to the cotangent line bundles at the boundary marked points. Using the solution of the Burgers-KdV hierarchy as the starting point, Alexandrov \cite{Alexandrov:2015kq, Alexandrov:2014gfa} related the generating function of open intersection numbers to the Kontsevich-Penner matrix model \begin{equation} \label{eqn:MatrixIntegral} \tau_Q = \det(\Lambda)^Q \mathcal{C}^{-1} \int_{\cH_N} [d\Phi] \exp \left( -\Tr \Bigl( \frac{\Phi^3}{3!} - \frac{\Lambda^2\Phi}{2} + Q \log \Phi \Bigr) \right), \end{equation} where $Q \in \mathbb{Z}$, $\Lambda = \diag(\lambda_1, \ldots, \lambda_N)$, the integral is over the space of Hermitian $N \times N$ matrices, and \begin{equation*} \mathcal{C} = e^{\Tr \Lambda / 3} \int d\Phi \exp \left(-\Tr \frac{\Lambda \Phi^2}{2} \right). \end{equation*} In particular, he conjectured that $\tau_{Q=1} = \tau_\mathfrak{o}$. Note that $\tau_{Q=0} = \tau_{\text{WK}}$. In Section~\ref{sect:Q-Grading}, we formulate a conjectural relationship between $\tau_Q$ and a $Q$-graded refinement of the open intersection generating function. Using standard matrix model techniques, Alexandrov constructed families of operators that annihilate $\tau_1$, and satisfy the commutation relations of the $W^{(3)}$ algebra. He also showed that $\tau_1$ is a $\tau$-function for the KP-hierarchy, while the parameter $Q$ plays the role of a discrete time, making $\tau_Q$ a solution of the modified KP (MKP) hierarchy (c.f. \cite{Alexandrov:2013fj,MR723457}). 
Note that the natural KP times $\{t_k\}$ are related to the parameters $\{T_i\}$, $\{ S_i\}$ for the generating function by \begin{align*} T_i &= (2i+1)!! t_{2i+1} \\ S_i &= 2^{i+1} (i+1)! t_{2i+2}. \end{align*} For the $W$-constraints of $\tau_1$, we define \begin{equation} \label{eqn:LkOpenHat} {\Lhat}^{\mathfrak{o}}_k = L_{2k} + (k+2)J_{2k} + \delta_{k,0} ( \frac{1}{8} + \frac{3}{2}) - J_{2k+3}, \end{equation} and \begin{multline} \label{eqn:MkOpenHat} {\Mhat}^{\mathfrak{o}}_k = M_{2k} + 2(k+3)L_{2k} - 2L_{2k+3} - 2(k+3)J_{2k+3} \\ + (\frac{95}{12} + 6k + \frac{4}{3}k^2)J_{2k} + \frac{23}{4}\delta_{k,0} + J_{2k+6}, \end{multline} where \begin{equation} \label{eqn:J-operator} J_{k} = \begin{cases} u^{1-k/3}\frac{\partial}{\partial t_k} & \text{ if $k>0$} \\ 0 & \text{ if $k=0$} \\ (-k)u^{-k/3-1} t_{-k} & \text{ if $k<0$,} \end{cases} \end{equation} and $L_k$ and $M_k$ are the standard generators of the Virasoro and $W^{(3)}$-algebras respectively. Namely \begin{equation} \label{eqn:L-operator} L_k = \frac{1}{2}\sum_{a+b=k} \normalorder{J_a J_b}, \end{equation} \begin{equation} \label{eqn:M-operator} M_k = \frac{1}{3}\sum_{a+b+c=k} \normalorder{J_a J_b J_c}, \end{equation} and $\normalorder{A}$ is the normal ordering of the operator $A$. In particular, $\normalorder{AB} = \normalorder{BA}$, while \begin{equation*} \normalorder{J_a J_b} = \begin{cases} J_a J_b & \text{ if $a \leq b$ } \\ J_b J_a & \text{otherwise.} \end{cases} \end{equation*} Then Alexandrov proved \cite{Alexandrov:2014gfa} that for all $k \geq 0$ \begin{equation} \label{eqn:W-constraint} {\Lhat}^{\mathfrak{o}}_k \tau_1 = 0 = {\Mhat}^{\mathfrak{o}}_k \tau_1. 
\end{equation} In addition, these operators satisfy the commutation relations of generators of the $W^{(3)}$-algebra: \begin{align*} [{\Lhat}^{\mathfrak{o}}_k, {\Lhat}^{\mathfrak{o}}_m] &= 2(k-m){\Lhat}^{\mathfrak{o}}_{k+m}\\ [{\Mhat}^{\mathfrak{o}}_k, {\Lhat}^{\mathfrak{o}}_m] &= 2(k-2m) {\Mhat}^{\mathfrak{o}}_{k+m} + 4m(m+1) {\Lhat}^{\mathfrak{o}}_{k+m}. \end{align*} As is the case with the $W^{(3)}$-algebra in general, the commutator $[{\Mhat}^{\mathfrak{o}}_k, {\Mhat}^{\mathfrak{o}}_m]$ cannot be represented as a linear combination of generators ${\Lhat}^{\mathfrak{o}}_\ell$ and ${\Mhat}^{\mathfrak{o}}_\ell$, but is quadratic in ${\Lhat}^{\mathfrak{o}}_\ell$. It turns out for our purposes to be more convenient to work with the following shifted operators: \begin{align} {\widehat{L}}_k &= {\Lhat}^{\mathfrak{o}}_k \nonumber \\ \label{eqn:hatL} &= L_{2k} + (k+2)J_{2k} + \delta_{k,0} ( \frac{1}{8} + \frac{3}{2}) - J_{2k+3} \\ {\widehat{M}}_k &= -{\Mhat}^{\mathfrak{o}}_k + 2(k+2){\widehat{L}}_k \nonumber \\ \label{eqn:hatM} &= -M_{2k} + 2(L_{2k+3} - L_{2k}) + 2J_{2k+3} \\ & \quad \ + (\frac{2}{3}k^2 + 2k + \frac{1}{12})J_{2k} + \frac{3}{4}\delta_{k,0} - J_{2k+6} \nonumber \end{align} \section{Topological recursion} \label{sect:TR} Topological recursion, as developed by Chekhov, Eynard and Orantin \cite{MR2222762,Eynard:2007kx}, originated as a method for calculating correlation functions for matrix models. However, it was realized to be a more general construction, valid for any spectral curve (defined below), even those not arising from matrix models. It was subsequently found to determine a large number of interesting enumerative and geometric invariants (c.f. \cite{MR3335006,MR2746135,Do:2012lh,Dumitrescu:2013zr,MR2855174,MR2849645,Eynard:2007if,MR3339157,MR3268770,MR2944483,Dunin-Barkowski:2014ij}). 
The initial data needed to apply topological recursion consists of a \emph{spectral curve} $(C, x, y)$, where $C$ is a Torelli marked compact Riemann surface, and $x$ and $y$ are meromorphic functions on $C$. We require that $x$ and $y$ generate the function field of $C$ (i.e. $K(C) = \CC(x, y)$), and that $x$ and $y$ separate tangents, so that we do not have $dx(p) = 0 = dy(p)$ at any point $p\in C$. We suppose that $x$ has degree $r$, and we call the zeros of $dx$ the \emph{branch points} of the spectral curve, denoted $\{a_1, \ldots, a_d\}$. Given the data of a spectral curve, topological recursion allows for the construction of an infinite family of \emph{correlation functions} $W_{g,n+1}(z_0, \ldots, z_n)$, defined for any $g,n \geq 0$. With the exception of $W_{0,1}$ and $W_{0,2}$, any correlation function $W_{g,n}$ is a symmetric meromorphic $n$-differential, with poles only at the branch points. Although topological recursion is defined for spectral curves of arbitrary degree $r$, we restrict to the case of $r\leq 3$ for simplicity, and because that is sufficient for the example at hand. We will also make the unnecessary, though simplifying, assumption that the spectral curve has genus 0. Given a spectral curve, the base cases for the correlation functions are given by \begin{align*} W_{0,1}(z) &= y(z)\,dx(z) \\ W_{0,2}(z_0, z_1) &= B(z_0, z_1) = \frac{dz_0 \, dz_1}{(z_0 - z_1)^2}. \end{align*} We use topological recursion to calculate the remainder of the correlation functions, which utilizes the following constructions. 
Given a family of symmetric $n$-differentials $\{W_{g,n}\}$, and $\vec{z} = (z_1, \ldots, z_n)$, we define \begin{multline*} \cE^{(2)}W_{g,n+1}(v, w; \vec{z}) = W_{g-1, n+2}(v, w, \vec{z}) \\ + \sum_{\substack{g_1+g_2 = g \\ Z_1 \sqcup Z_2 = \vec{z}}} W_{g_1, \abs{Z_1}+1}(v, Z_1) W_{g_2, \abs{Z_2}+1}(w, Z_2), \end{multline*} \begin{multline*} \cE^{(3)}W_{g, n+1}(w_1, w_2, w_3; \vec{z}) = W_{g-2, n+3}(w_1, w_2, w_3, \vec{z}) \\ + \sum_{\substack{g_1+g_2 = g-1 \\ Z_1 \sqcup Z_2 = \vec{z}}} W_{g_1, \abs{Z_1}+2}(w_1, w_2, Z_1) W_{g_2, \abs{Z_2}+1}(w_3, Z_2) \\ + \sum_{\substack{g_1+g_2+g_3 = g \\ Z_1 \sqcup Z_2 \sqcup Z_3 = \vec{z}}} \prod_{i=1}^{3}W_{g_i, \abs{Z_i}+1}(w_i, Z_i), \end{multline*} and \begin{equation*} P_{g,n+1}^{(k)}(z, \vec{z}) = \sum_{\{w_1, \ldots, w_k\} \subset x^{-1}(x(z))} \cE^{(k)}W_{g,n+1}(w_1, \ldots, w_k; \vec{z}). \end{equation*} We also define the recursion kernel \begin{equation*} K(z_0, z) = \frac{1}{r W_{0,1}(z)^{r-1}} \int_{\zeta=0}^z B(z_0,\zeta). \end{equation*} Then we have the following topological recursion equation, called the global recursion in \cite{Bouchard:2013ss}. \begin{theorem} The correlation functions for a spectral curve of degree $r$ satisfy \begin{equation*} 0 = \frac{1}{2\pi i} \oint_{\Gamma} K(z_0, z) \sum_{m=2}^{r} W_{0,1}(z)^{r-m} P_{g, n+1}^{(m)}(z, \vec{z}), \end{equation*} where $\Gamma$ is a contour that encloses all the branch points $\{a_1, \ldots, a_d\}$. \end{theorem} \section{Master equation for open intersection numbers} \label{sect:MasterEquation} In this section, we reframe Alexandrov's $W$-constraint condition \eqref{eqn:W-constraint} as a \emph{master equation}, in the sense of Kazarian and Zograf \cite{KazarianMasterEqNotes,Kazarian:2014ys}. We should point out that although this approach to topological recursion was formalized by Kazarian and Zograf, the idea was already present to some degree in \cite{Eynard:2007kx}. The operators used to express the master equation are as follows. 
\begin{align*} \delta^{(2)} &= \sum_{k=0}^{\infty} \frac{dz}{z^{2k+2}}\frac{\partial}{\partial t_{2k+1}} \\ \delta^{(3)} &= \sum_{k=1}^{\infty} \frac{dz}{z^{2k+1}} \frac{\partial}{\partial t_{2k}} \\ \delta &= \delta^{(2)} + \delta^{(3)} \\ \mathbf{t}^{(2)} &= \sum_{k=0}^{\infty} (2k+1)t_{2k+1} z^{2k}dz \\ \mathbf{t}^{(3)} &= \sum_{k=1}^{\infty} 2k t_{2k} z^{2k-1}dz \\ \mathbf{t} &= \mathbf{t}^{(2)} + \mathbf{t}^{(3)} \end{align*} Note that given any polynomial in $t$, the operator $\delta$ produces a \emph{Laurent differential} in $z$, which is a formal expression \begin{equation*} \sum_{k=-m}^{\infty} a_k(t) z^k\,dz, \end{equation*} for some finite $m$. As well, we define projection operators on the space of Laurent differentials. In particular, $\mathcal{P}^{(2)}$ is the projection to the linear span of $\{ \frac{dz}{z^{2a+2}} \}_{a=0}^{\infty}$, while $\mathcal{P}^{(3)}$ is the projection to the linear span of $\{ \frac{dz}{z^{2a+3}} \}_{a=0}^{\infty}$. We then form the sum \begin{align*} \cL &= \sum_{k=-1}^{\infty} u^{2k/3} \frac{dz}{z^{2k+4}}{\widehat{L}}_k\\ &= \sum_{k=-1}^{\infty} u^{2k/3} \frac{dz}{z^{2k+4}} \left( -J_{2k+3} + (k+2)J_{2k} + L_{2k} + \frac{13}{8}\delta_{k,0} \right). 
\end{align*} Term-by-term, we find that \begin{equation*} -\sum_{k=-1}^{\infty} u^{2k/3} \frac{dz}{z^{2k+4}} J_{2k+3} = -\sum_{k=0}^{\infty} u^{(2k-2)/3} \frac{dz}{z^{2k+2}} u^{1 - (2k+1)/3} \frac{\partial}{\partial t_{2k+1}} = -\delta^{(2)} \end{equation*} \begin{align*} \sum_{k=-1}^{\infty} u^{2k/3} \frac{dz}{z^{2k+4}}(k+2) J_{2k} &= 2u^{-1} t_2 \frac{dz}{z^2} + \sum_{k=1}^{\infty}(k+2) u^{2k/3} \frac{dz}{z^{2k+4}} u^{1 - 2k/3} \frac{\partial}{\partial t_{2k}} \\ &= 2u^{-1} t_2 \frac{dz}{z^2} + u\left(-\frac{1}{2z^2}\frac{d}{dz} + \frac{3}{2z^3}\right) \delta^{(3)} \end{align*} \begin{multline*} \sum_{k=-1}^{\infty} u^{2k/3} \frac{dz}{z^{2k+4}} L_{2k} = u^{-2} \frac{t_1^2}{2}\frac{dz}{z^2} + \sum_{k=-1}^{\infty} \frac{1}{z^2 dz} \sum_{-a + b = 2k} \frac{dz^2}{z^{-a+b+2}}a t_a \frac{\partial}{\partial t_b} \\ + \sum_{k=1}^{\infty} u^2 \frac{1}{2z^2\,dz} \sum_{a+b=2k} \frac{dz^2}{z^{a+b+2}} \frac{\partial^2}{\partial t_a \partial t_b}. \end{multline*} We observe that the second term of the last equality is even in $z$, with order at most $z^{-2}$. 
Hence we find that \begin{multline*} \cL = -\delta^{(2)} + \frac{13}{8}\frac{dz}{z^4} + u^{-1} 2t_2 \frac{dz}{z^2} + u \left( -\frac{1}{2z^2}\frac{d}{dz} + \frac{3}{2z^3} \right) \delta^{(3)} \\ + \frac{u^2}{2z^2 dz} \left( \bigl( \delta^{(2)} \bigr)^2 + \bigl(\delta^{(3)} \bigr)^2 \right) + u^{-2} \frac{t_1^2 \, dz}{2z^2} \\ + \mathcal{P}^{(2)} \left[ \frac{1}{z^2\,dz} \bigl( \mathbf{t}^{(3)}\delta^{(3)} + \mathbf{t}^{(2)} \delta^{(2)} \bigr) \right] \end{multline*} In addition, we define \begin{equation*} \cM = \sum_{k=-2}^{\infty} u^{2k/3 + 1} \frac{dz}{z^{2k+7}} {\widehat{M}}_k, \end{equation*} which, by a similar calculation as was done for $\cL$, reduces to \begin{multline*} \cM = -\delta^{(3)} + \frac{dz}{z^3} (2u^{-1}t_1 - 4u^{-1} t_2^2 - 6u^{-1}t_1t_3 - 5t_4 - 2u^{-2}t_1^2 t_2 ) \\ + \frac{dz}{z^5} \left(- \frac{5}{2}t_2 - u^{-1}t_1^2 \right) + \frac{3u\,dz}{4z^7} + \frac{2u}{z^3}\delta^{(2)} + \frac{u^2}{6z^4} \left( \frac{d^2}{dz^2} - \frac{3}{z}\frac{d}{dz} - \frac{9}{2z^2} \right) \delta^{(3)} \\ + \frac{u^2}{z^2 dz} \left( \delta^{(2)} \delta^{(3)} + \delta^{(3)} \delta^{(2)} \right) + \mathcal{P}^{(3)} \left[ \frac{2}{z^2\,dz} \bigl( \mathbf{t}^{(2)}\delta^{(3)} + \mathbf{t}^{(3)} \delta^{(2)} \bigr) \right] \\ - \frac{u^3}{z^5\,dz} \left( \bigl(\delta^{(2)} \bigr)^2 + \bigl( \delta^{(3)} \bigr)^2 \right) - \mathcal{P}^{(3)} \left[ \frac{2u}{z^5\,dz} \bigl( \mathbf{t}^{(2)} \delta^{(2)} + \mathbf{t}^{(3)} \delta^{(3)} \bigr) \right] \\ - \mathcal{P}^{(3)} \left[ \frac{1}{z^4\,dz^2} \bigl(2 \mathbf{t}^{(2)} \mathbf{t}^{(3)}\delta^{(2)} + \mathbf{t}^{(2)} \mathbf{t}^{(2)} \delta^{(3)} + \mathbf{t}^{(3)} \mathbf{t}^{(3)}\delta^{(3)} \bigr) \right] \\ - \mathcal{P}^{(3)} \left[ \frac{u^2}{z^4\,dz^2} \bigl( \mathbf{t}^{(2)} \delta^{(2)} \delta^{(3)} + \mathbf{t}^{(2)}\delta^{(3)}\delta^{(2)} + \mathbf{t}^{(3)} \delta^{(2)} \delta^{(2)} + \mathbf{t}^{(3)} \delta^{(3)} \delta^{(3)} \bigr) \right] \\ - \frac{u^4}{3 z^4\, dz^2} \biggl( 
\delta^{(2)}\delta^{(2)}\delta^{(3)} + \delta^{(2)} \delta^{(3)} \delta^{(2)} + \delta^{(3)} \delta^{(2)} \delta^{(2)} + \delta^{(3)} \delta^{(3)} \delta^{(3)} \biggr) \end{multline*} If we define $\cU^{(i)} = \delta^{(i)}F$, for $i=2,3$, then the fact that $\cL e^F = 0 = \cM e^F$ implies \begin{multline*} \cU^{(2)} = \frac{13}{8}\frac{dz}{z^4} + 2u^{-1}t_2 \frac{dz}{z^2} + u^{-2} t_1^2\frac{dz}{2z^2} + u \left( -\frac{1}{2z^2}\frac{d}{dz} + \frac{3}{2z^3} \right) \cU^{(3)} \\ + \frac{u^2}{2z^2\,dz} \left( \bigl( \cU^{(2)} \bigr)^2 + \bigl( \cU^{(3)} \bigr)^2 + \delta^{(2)}\cU^{(2)} + \delta^{(3)} \cU^{(3)} \right) \\ + \mathcal{P}^{(2)} \left[ \frac{1}{z^2\,dz} \bigl( \mathbf{t}^{(2)} \cU^{(2)} + \mathbf{t}^{(3)} \cU^{(3)} \bigr) \right], \end{multline*} and \begin{multline*} \cU^{(3)} = \frac{3}{4} u \frac{dz}{z^7} - \left( 5t_4 \frac{dz}{z^3} + \frac{5}{2}t_2 \frac{dz}{z^5} \right) + u^{-1} \left( (2t_1 - 4t_2^2 - 6t_1t_3) \frac{dz}{z^3} - t_1^2 \frac{dz}{z^5} \right) \\ - 2u^{-2} t_1^2 t_2 \frac{dz}{z^3} + \frac{2u}{z^3}\cU^{(2)} + \frac{u^2}{6z^4} \left( \frac{d^2}{dz^2} - \frac{3}{z}\frac{d}{dz} - \frac{9}{2z^2} \right) \cU^{(3)} \\ + \frac{u^2}{z^2\,dz} \bigl( 2\cU^{(2)} \cU^{(3)} + \delta^{(2)}\cU^{(3)} + \delta^{(3)}\cU^{(2)} \bigr) \\ - \frac{u^3}{z^5\,dz} \left( \bigl(\cU^{(2)} \bigr)^2 + \bigl( \cU^{(3)} \bigr)^2 + \delta^{(2)}\cU^{(2)} + \delta^{(3)} \cU^{(3)} \right) \\ - \frac{u^4}{3z^4\,dz^2} \biggl[ \bigl( \bigl(\delta^{(3)}\bigr)^2 + \bigl(\delta^{(2)}\bigr)^2 \bigr) \cU^{(3)} + \bigl( \delta^{(2)}\delta^{(3)} + \delta^{(3)}\delta^{(2)} \bigr) \cU^{(2)} + \bigl(\cU^{(3)}\bigr)^3 \\ + 3\bigl(\cU^{(2)}\bigr)^2 \cU^{(3)} + 3 \delta^{(2)}\bigl( \cU^{(2)}\cU^{(3)} \bigr) + \frac{3}{2} \delta^{(3)}\bigl( {\cU^{(2)}}^2 + {\cU^{(3)}}^2 \bigr) \biggr] \\ + \mathcal{P}^{(3)} \biggl[ \frac{2}{z^2\,dz} \bigl( \mathbf{t}^{(2)} \cU^{(3)} + \mathbf{t}^{(3)} \cU^{(2)} \bigr) - \frac{2u}{z^5\,dz} \bigl( \mathbf{t}^{(2)} \cU^{(2)} + \mathbf{t}^{(3)}\cU^{(3)} \bigr) \\ - 
\frac{1}{z^4\,dz^2} \Bigl( 2\mathbf{t}^{(2)} \mathbf{t}^{(3)}\cU^{(2)} + \bigl(\mathbf{t}^{(2)}\bigr)^2\cU^{(3)} + \bigl(\mathbf{t}^{(3)}\bigr)^2 \cU^{(3)} \Bigr) \\ - \frac{u^2}{z^4\,dz^2} \Bigl( \mathbf{t}^{(2)} \bigl( 2\cU^{(2)} \cU^{(3)} + \delta^{(2)}\cU^{(3)} + \delta^{(3)}\cU^{(2)} \bigr) \\ + \mathbf{t}^{(3)} \bigl( \bigl(\cU^{(2)}\bigr)^2 + \bigl(\cU^{(3)}\bigr)^2 + \delta^{(2)}\cU^{(2)} + \delta^{(3)}\cU^{(3)} \bigr) \Bigr) \biggr]. \end{multline*} These equations simplify substantially if we introduce \begin{equation*} \tilde{\cU} = \cU - u^{-2}z^2\,dz + u^{-2}\mathbf{t} + u^{-1} \frac{dz}{z}, \end{equation*} with $\cU = \cU^{(2)} + \cU^{(3)}$. In fact, we have \begin{multline*} \mathcal{P}^{(2)} \left[ \frac{u^2}{2z^2\,dz} \bigl((\tilde{\cU} - \cU)^2 + u^{-1}dz (- \frac{d}{dz} + \frac{1}{z})(\tilde{\cU} - \cU) \bigr) \right] \\ = \frac{u^{-2} t_1^2}{2z^2}dz + \frac{2u^{-1}t_2}{z^2}dz + \frac{3dz}{2z^4} \end{multline*} and \begin{multline*} \mathcal{P}^{(3)} \left[ -\frac{u^4}{3z^4\,dz^2} \bigl( (\tilde{\cU} - \cU)^3 - u^{-2} \frac{dz^2}{2}(\frac{d^2}{dz^2} - \frac{3}{z}\frac{d}{dz} + \frac{3}{2z^2} ) (\tilde{\cU} - \cU) \bigr) \right] \\ = \Biggl\{ \frac{3}{4}u \frac{dz}{z^7} - \left( 5 t_4 \frac{dz}{z^3} + \frac{5}{2}t_2 \frac{dz}{z^5} \right) + u^{-1} \left( (2t_1 - 6t_1 t_3 - 4 t_2^2) \frac{dz}{z^3} - t_1^2 \frac{dz}{z^5} \right) \\ - 2u^{-2} t_1^2 t_2 \frac{dz}{z^3}\Biggr\}. \end{multline*} This allows us to write \begin{equation*} \mathcal{P}^{(2)} \left[ \frac{u^2}{2z^2\,dz} \biggl( \tilde{\cU}^2 + \delta \cU + u^{-1} dz \left(- \frac{d}{dz} + \frac{1}{z} \right) \tilde{\cU} \biggr) \right] = -\frac{dz}{8z^4} \end{equation*} and \begin{equation*} \mathcal{P}^{(3)} \left[ -\frac{u^4}{3z^4\,dz^2} \biggl( \tilde{\cU}^3 + 3\tilde{\cU}\delta\cU + \delta^2\cU - u^{-2} \frac{dz^2}{2} \left( \frac{d^2}{dz^2} - \frac{3}{z}\frac{d}{dz} + \frac{3}{2z^2} \right) \tilde{\cU} \biggr) \right] = 0. 
\end{equation*} An alternative formulation of the above master equations is obtained through ``renormalization.'' In effect, one redefines the indeterminate form \begin{equation*} \delta \mathbf{t} = \frac{dz^2}{z^2} \sum_{k=1}^{\infty} k, \end{equation*} and instead sets it equal to $ \frac{dz^2}{4z^2}$. Then we have \begin{equation*} \mathcal{P}^{(2)} \left[ \frac{1}{2\eta} \bigl( \tilde{\cU}^2 + \delta\tilde{\cU} + u^{-1}\mathcal{D}_1\tilde{\cU} \bigr) \right] = 0, \end{equation*} and \begin{equation*} \mathcal{P}^{(3)} \left[ \frac{1}{3\eta^2} \bigl( \tilde{\cU}^3 + \tfrac{3}{2}\delta\tilde{\cU}^2 + \delta^2\tilde{\cU} - u^{-2}\mathcal{D}_2 \tilde{\cU} \bigr) \right] = 0, \end{equation*} where \begin{align*} \eta &= -z^2\,dz \\ \mathcal{D}_1 &= dz \left(-\frac{d}{dz} + \frac{1}{z}\right) \\ \mathcal{D}_2 &= \frac{dz^2}{2} \left( \frac{d^2}{dz^2} - \frac{3}{z}\frac{d}{dz} + \frac{3}{z^2} \right). \end{align*} Although the renormalization seems arbitrary, it is well justified when one transforms these master equations into an equivalent residue equation, as is done in Section~\ref{sect:Residue}. We can make a further reduction by defining $U = \tilde{\cU} + \delta$. Then we have proven \begin{theorem} \begin{align} \label{eq:MasterEquationSimple1} \mathcal{P}^{(2)} \left[ \frac{1}{2\eta}(U^2 + u^{-1}\mathcal{D}_1 U) \cdot 1 \right] &= 0 \\ \label{eq:MasterEquationSimple2} \mathcal{P}^{(3)} \left[ \frac{1}{3\eta^2} (U^3 - u^{-2}\mathcal{D}_2 U) \cdot 1 \right] &= 0. \end{align} \end{theorem} \begin{remark} Kazarian \cite{KazarianMasterEqNotes} has shown that the master equation for the Witten-Kontsevich $\tau$ function can be written in the form \begin{equation*} \mathcal{P}^{(2)} \left[ \frac{1}{2\eta}U^2 \cdot 1 \right] = 0, \end{equation*} with identical initial conditions as the open intersection case, except for the lack of the term $u^{-1}\frac{dz}{z}$, and with the same renormalization condition on $\delta\mathbf{t}$. 
\end{remark} \section{Residue formulation} \label{sect:Residue} In order to reformulate the master equations \eqref{eq:MasterEquationSimple1}, \eqref{eq:MasterEquationSimple2} as an example of Eynard-Orantin topological recursion \cite{Eynard:2007kx}, we must decompose the equation by genus and marked points, convert the projection operators into residue integrals and, finally, write the equations using symmetric correlation differentials. \begin{lemma} Given a Laurent differential $\gamma(z)\,dz$, we have \begin{align*} \mathcal{P}^{(2)}\left[ \gamma(z)\,dz \right] &= \Res_{w\rightarrow 0} \left[ \frac{1}{2}\left( \frac{1}{z-w} - \frac{1}{z+w} \right) \gamma(w)\,dw \right] \\ \mathcal{P}^{(3)}\left[ \gamma(z)\, dz \right] &= \Res_{w\rightarrow 0} \left[ \frac{1}{2} \left(\frac{1}{z-w} - \frac{1}{z} + \frac{1}{z+w} - \frac{1}{z} \right) \gamma(w)\,dw \right] \end{align*} \end{lemma} \begin{proof} This is an easy consequence of the definition of a Laurent differential, together with a direct calculation of the residues against the Laurent differential $z^k\,dz$ for any integer $k$. \end{proof} \begin{remark} The integral operators used to replace the projection operators can be expressed as \begin{equation*} p^{(2)}(z,w) = \frac{1}{2}\int_{\zeta=0}^{w} B(z,\zeta) - \frac{1}{2}\int_{\zeta=0}^{-w} B(z, \zeta), \end{equation*} and \begin{equation*} p^{(3)}(z, w) = \frac{1}{2}\int_{\zeta=0}^{w} B(z, \zeta) + \frac{1}{2}\int_{\zeta=0}^{-w} B(z, \zeta), \end{equation*} respectively, where $B(z_1,z_2)$ is the normalized canonical bilinear differential of the second kind defined on $\PP^1$ (c.f. \cite{MR0335789}), and $w \mapsto -w$ is the unique involutive mapping preserving the spectral curve $x = \frac{1}{2}w^2$ around the branch point at $w=0$. In other words, our construction is quite general, and not necessarily restricted to the example at hand. 
\end{remark} To decompose the partition function by genus and marked points, we set \begin{equation*} \tau_1 = e^F, \end{equation*} and \begin{align*} F(u, t) &= \sum_{h=0}^{\infty} u^{h-2} F_{h/2}(t) \\ F_{g}(t) &= \sum_{n=1}^{\infty} F_{g,n}(t), \end{align*} where $F_{g,n}(t)$ is homogeneous of degree $n$ in the variables $t_1, t_2, \ldots$. Then we set \begin{align*} \cU_{0,1} &= -z^2\,dz \\ \cU_{0,2} &= \mathbf{t} \\ \cU_{\frac{1}{2}, 1} &= \frac{dz}{z} \\ \cU_{g, n+1} &= \delta F_{g,n+1} \quad \text{if $2g-2+n+1 > 0$ and $n, 2g \in \mathbb{Z}_{\geq 0}$}. \end{align*} After decomposing the master equations \eqref{eq:MasterEquationSimple1}, \eqref{eq:MasterEquationSimple2} by degree in both $u$ and $t$, and replacing the projection operators $\mathcal{P}^{(i)}$ with the appropriate residue integrals, we arrive at the following recursion relation, valid for all $g, n \geq 0$ with $2g-1+n > 0$ and $2g, n \in \mathbb{Z}$: \begin{multline} \label{eqn:TopologicalRecursionU} \cU_{g,n+1}(z) = \Res_{w\rightarrow 0} \Biggl\{ K^{(2)}(z, w) \biggl[ \delta \cU_{g-1, n+2}(w, w) \\ + \sum_{\substack{ g_1 + g_2 = g \\ n_1+n_2 = n+2 }}^{\text{no $(g,n+1)$ terms}} \cU_{g_1,n_1}(w) \cU_{g_2, n_2}(w) + \mathcal{D}_1 \cU_{g-\frac{1}{2}, n+1}(w) \biggr] \\ + K^{(3)}(z, w) \biggl[ \delta^2\cU_{g-2, n+3}(w, w, w) + \frac{3}{2}\delta \sum_{\substack{g_1 + g_2 = g-1 \\ n_1 + n_2 = n+3}} \cU_{g_1, n_1}(w) \cU_{g_2, n_2}(w) \\ + \sum_{\substack{g_1 + g_2 + g_3 = g \\ n_1 + n_2 + n_3 = n+3 }}^{\text{no $(g,n+1)$ terms}} \prod_{i=1}^{3} \cU_{g_i, n_i}(w) - \mathcal{D}_2 \cU_{g-1, n+1}(w) \biggr] \Biggr\} , \end{multline} where \begin{equation*} K^{(j)}(z, w) = \left( (-1)^j \int_{\zeta=0}^{-w} B(z, \zeta) - \int_{\zeta=0}^{w} B(z, \zeta) \right) \frac{1}{2j (-w^2\,dw)^{j-1}}, \end{equation*} and \begin{equation*} B(z_1, z_2) = \frac{dz_1 dz_2}{(z_1 - z_2)^2}. \end{equation*} Note that this formula is only valid under the convention that $\delta\cU_{0,2} = \frac{dz^2}{4z^2}$.
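Since everything downstream depends on the kernels $K^{(j)}$, it may help to record their closed forms, which follow from the given $B(z_1, z_2)$ (our computation, not stated in this form in the text): stripping the differentials, $K^{(2)}(z,w) = \frac{1}{2w(z^2 - w^2)}\,\frac{dz}{dw}$ and $K^{(3)}(z,w) = -\frac{1}{3 w^2 z (z^2 - w^2)}\,\frac{dz}{dw^2}$. A sympy sketch verifying this:

```python
# Our verification (not from the paper) of closed forms for the recursion
# kernels K^(2), K^(3); all differentials dz, dw are stripped off.
import sympy as sp

z, w, zeta = sp.symbols('z w zeta')
B = 1/(z - zeta)**2                 # B(z, zeta) divided by dz dzeta

antider = sp.integrate(B, zeta)     # primitive of B in the second variable
I_plus = antider.subs(zeta, w) - antider.subs(zeta, 0)    # int_0^w  B(z, .)
I_minus = antider.subs(zeta, -w) - antider.subs(zeta, 0)  # int_0^{-w} B(z, .)

K2 = (I_minus - I_plus) / (2*2*(-w**2))        # coefficient of dz/dw
K3 = (-I_minus - I_plus) / (2*3*(-w**2)**2)    # coefficient of dz/dw^2

assert sp.simplify(K2 - 1/(2*w*(z**2 - w**2))) == 0
assert sp.simplify(K3 + 1/(3*w**2*z*(z**2 - w**2))) == 0
```

In particular, $K^{(2)}$ is odd and $K^{(3)}$ even under the involution $w \mapsto -w$, matching the parity of the two residue operators in the lemma.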
We now construct the correlation functions appearing in topological recursion by setting \begin{equation*} \delta_i = \sum_{j=1}^{\infty} \frac{dz_i}{z_i^{j+1}} \frac{\partial}{\partial t_j} \end{equation*} and defining \begin{equation*} W_{g, n}(z_1, \ldots, z_n) = \delta_1 \cdots \delta_n F_{g,n}(t),\quad \text{if $2g-2+n>0$.} \end{equation*} The unstable correlation functions are defined as $W_{0,1}(z) = -z^2\,dz$, $W_{\frac{1}{2},1}(z) = \frac{dz}{z}$, and \begin{align*} W_{0,2}(z_1, z_2) &= \delta_2 \cU_{0,2}(z_1) \\ &= \sum_{k=1}^{\infty} k \frac{z_1^{k-1}}{z_2^{k+1}} dz_1 dz_2 \\ &= \frac{dz_1 dz_2}{(z_1 - z_2)^2}. \end{align*} We also need the following renormalized correlation functions: \begin{equation*} \widetilde{W}_{g,n}(z_1, \ldots, z_n) = W_{g,n}(z_1, \ldots, z_n) - \delta_{g,0} \delta_{n,2} \frac{dx(z_1) dx(z_2)}{(x(z_1) - x(z_2))^2}, \end{equation*} where $x = z^2/2$, and $\delta_{n,m}$ is the Kronecker delta, not the operator $\delta_i$ appearing elsewhere in the paper. We note, in particular, that \begin{equation*} \widetilde{W}_{0,2}(z, z) = \frac{dz^2}{4z^2}, \end{equation*} which is exactly the renormalized behavior we need for the topological recursion formula.
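Both computations above admit a direct symbolic check (ours, added for the reader's convenience): the geometric-series identity behind $W_{0,2}$, valid for $|z_1| < |z_2|$, and the diagonal value of the renormalized kernel.

```python
# Our check of the two computations above (differentials stripped off).
import sympy as sp

z1, z2, z = sp.symbols('z1 z2 z')

# sum_{k>=1} k z1^{k-1}/z2^{k+1} agrees with 1/(z1 - z2)^2 for |z1| < |z2|:
N = 8
partial = sum(k * z1**(k - 1) / z2**(k + 1) for k in range(1, N + 1))
expansion = sp.series(1/(z1 - z2)**2, z1, 0, N).removeO()
assert sp.expand(partial - expansion) == 0

# Renormalized kernel: B(z1, z2) - dx(z1) dx(z2)/(x(z1) - x(z2))^2 with
# x = z^2/2 collapses to 1/(z1 + z2)^2, hence 1/(4 z^2) on the diagonal.
Wt = 1/(z1 - z2)**2 - (z1 * z2) / ((z1**2/2 - z2**2/2)**2)
assert sp.simplify(sp.cancel(Wt) - 1/(z1 + z2)**2) == 0
diag = sp.cancel(Wt).subs({z1: z, z2: z})
assert sp.simplify(diag - 1/(4*z**2)) == 0
```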
To make the formulas a little more digestible, we use the notation $\vec{z} = (z_1, \ldots, z_n)$, \begin{multline*} \cR^{(2)}W_{g, n+1}(w; \vec{z}) = \widetilde{W}_{g-1, n+2}(w, w, \vec{z}) \\ + \sum_{\substack{g_1 + g_2 = g \\ Z_1 \sqcup Z_2 = \vec{z}}}^{\text{no $(g,n+1)$ terms}} W_{g_1, \abs{Z_1}+1}(w, Z_1) W_{g_2, \abs{Z_2}+1}(w, Z_2), \end{multline*} and \begin{multline*} \cR^{(3)}W_{g, n+1}(w; \vec{z}) = W_{g-2, n+3}(w, w, w, \vec{z}) \\ +3 \sum_{\substack{g_1 + g_2 = g-1 \\ Z_1 \sqcup Z_2 = \vec{z}}} W_{g_1, \abs{Z_1}+1}(w, Z_1) \widetilde{W}_{g_2, \abs{Z_2}+2}(w, w, Z_2) \\ + \sum_{\substack{g_1 + g_2 + g_3 = g \\ Z_1 \sqcup Z_2 \sqcup Z_3 = \vec{z}}}^{\text{no $(g, n+1)$ terms}} \prod_{i=1}^{3} W_{g_i, \abs{Z_i}+1}(w, Z_i) \end{multline*} If we apply the operator $\delta_1 \cdots \delta_n$ to \eqref{eqn:TopologicalRecursionU}, then we find the following \begin{theorem} The correlation functions $W_{g,n}$ for open intersection numbers obey the topological recursion formula \begin{multline} \label{eqn:TopologicalRecursion} W_{g,n+1}(z_0, \ldots, z_n) = \Res_{w\rightarrow 0} \Biggl\{ K^{(2)}(z_0, w) \biggl[ \cR^{(2)}W_{g, n+1}(w; \vec{z}) + \mathcal{D}_1 W_{g-1/2, n+1}(w, \vec{z}) \biggr] \\ + K^{(3)}(z_0, w) \biggl[ \cR^{(3)}W_{g, n+1}(w; \vec{z}) - \mathcal{D}_2 W_{g-1, n+1}(w, \vec{z}) \biggr] \Biggr\}, \end{multline} with initial conditions \begin{align*} W_{0,1}(z) &= -z^2\,dz \\ W_{\frac{1}{2},1}(z) &= \frac{dz}{z} \\ W_{0,2}(z_1, z_2) &= B(z_1, z_2), \end{align*} where \begin{align*} \mathcal{D}_1 &= dz \left(-\frac{d}{dz} + \frac{1}{z}\right) \\ \mathcal{D}_2 &= \frac{dz^2}{2} \left( \frac{d^2}{dz^2} - \frac{3}{z}\frac{d}{dz} + \frac{3}{z^2} \right). \end{align*} \end{theorem} \section{A conjectural refinement} \label{sect:Q-Grading} One issue that arises in the above formulation of topological recursion for open intersection numbers is that the correlators combine intersection numbers from a union of disjoint moduli spaces. 
In particular, a given function $F_{h, n}$, for $2h, n \in \mathbb{Z}_{\geq 0}$, has contributions from moduli spaces of genus $g$ curves with $b$ boundary components for all $g, b$ with $g + b/2 = h$. If the contribution to $F_{h,n}$ from genus $g$ surfaces with $b$ boundary components is $F_{g,b,n}$, it is natural to try to introduce a parameter $Q$, and write \begin{equation*} F_{h,n}(t, Q) = \sum_{\substack{g \in \mathbb{Z} \\ 0 \leq g \leq h}} Q^{2(h-g)} F_{g, 2(h-g), n}(t), \end{equation*} as well as the associated correlators \begin{equation*} W_{g, k, n} = \delta_1 \cdots \delta_n F_{g, k, n}. \end{equation*} If such a decomposition is possible, we can resum to obtain correlators with only integral genus parameter \begin{equation*} \Omega_{g,n} = \sum_{k=0}^{\infty} Q^k W_{g, k,n}. \end{equation*} This form of the correlators would be required if one were to find a spectral curve whose correlators, under the standard theory of topological recursion, calculate open intersection numbers. It is tempting to try to insert the $Q$ grading into the topological recursion formula \eqref{eqn:TopologicalRecursion} by replacing $W_{\frac{1}{2},1} \mapsto Q W_{\frac{1}{2},1}$ and $\mathcal{D}_i \mapsto Q^i \mathcal{D}_i$, as this would produce correlation functions with terms of the correct degree in $Q$. Unfortunately, this proves unsuccessful, as the $Q$-graded correlators $W_{g,n}(Q; \vec{z})$ with $2g-2 + n \geq 3$ are not symmetric. However, correlators with $2g-2 + n < 3$ match exactly the correlators coming from $\tau_Q$, the Kontsevich-Penner matrix model. This motivates the following \begin{conjecture} \label{conj:QGrading} Let $\tau_Q = e^{F_Q}$, with $\tau_Q$ given by the Kontsevich-Penner matrix model \eqref{eqn:MatrixIntegral}.
Then the correlators $W_{g,n}(Q; \vec{z}) = \delta_1 \cdots \delta_n F_{g,n}(Q; t)$ given by the expansion \begin{equation*} F_Q(t) = \sum_{g,n} u^{2g-2}F_{g,n}(Q;t) \end{equation*} are the $Q$-graded correlators for open intersection numbers. \end{conjecture} The conjecture is true for $g = 0, \frac{1}{2}, 1$. In addition, the correlators of all augmented genera for $F_Q$ exhibit the proper degree behavior in $Q$. One difficulty that arises is that moduli spaces with different numbers of boundary components can share boundary components under the compactification of Solomon and Tessler. Moreover, the boundary contributes non-trivially to the intersection numbers, realized in the form of nodal ribbon graphs contributing to the total in Tessler's combinatorial model \cite{Tessler:2015ys}. However, what seems to hold true, at least in the low genus examples that can be calculated by hand, is that the ribbon graphs naturally sort themselves into neat piles, each contributing to a term with fixed degree in $Q$, with the resulting expression matching the one predicted by the conjecture. So, while the evidence in support of Conjecture~\ref{conj:QGrading} is hardly definitive, it certainly seems promising enough to warrant further investigation. \section{Quantum curve equation} \label{sect:QuantumCurve} In this section we derive a quantum curve for open intersection numbers. In topological recursion, the quantum curve is obtained from the spectral curve via \emph{quantization}, whereby we replace $y$ with $\hbar \frac{d}{dx}$. Then, in many cases, and with the proper choice of ordering of the now non-commuting variables, one obtains the quantum curve equation \begin{equation*} P(\hat{x}, \hat{y})e^{\Psi} = 0, \end{equation*} where $\Psi$ is the \emph{principal specialization} of the partition function.
In our case, it can be realized by substituting \begin{align*} \tilde\Psi_Q &= F\biggr|_{\substack{u \mapsto \hbar \\ t_k \mapsto \frac{\hbar}{k z^k}}} \\ \Psi_Q &= \tilde\Psi_Q + \frac{z^3}{3} - \frac{3}{4}\log \frac{z^2}{2}. \end{align*} It also corresponds to taking $N=1$ in the matrix integral \eqref{eqn:MatrixIntegral}. Hence, by the work of Brezin and Hikami \cite{MR2874239}, we have the quantum curve equation \begin{theorem} If $\Psi_Q$ is the principal specialization of $F_Q$ and $x = \frac{1}{2}z^2$, then \begin{equation*} \left(\hbar^3 \frac{d^3}{dx^3} - 2\hbar x \frac{d}{dx} + 2\hbar(Q-1)\right) e^{\Psi_Q} = 0. \end{equation*} The semi-classical limit is, for all values of $Q$, \begin{equation*} y^3 - 2xy = 0. \end{equation*} \end{theorem} We note in particular that the spectral curve is reducible and has degree 3. Since there is currently no known way of calculating topological recursion for a general reducible spectral curve, it is not surprising that the formulas we obtain are not in exact correspondence with the standard topological recursion. Whether or not this example can be generalized to other reducible curves is a topic for future work. However, the fact that the curve is degree 3 does help explain why the formula we obtain most closely matches topological recursion in the rank 3 case. \bibliographystyle{plain}
\section*{Acknowledgements} \vspace{-0.2cm} This work was supported by Naver Labs and Institute for Information \& Communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) [2017-0-01779, 2017-0-01780]. \section{Experiments}\label{sec:exp} This section describes our setting for training and evaluation and presents the experimental results of our algorithm in comparison to the existing methods. We also analyze various aspects of the proposed network. \subsection{Training} \label{sub:training} We use Selective Search~\cite{uijlings2013selective} for generating bounding box proposals. All fully connected layers in the detection and the IMG modules are initialized randomly using a Gaussian distribution $(0, 0.01^2)$. The learning rate is $0.001$ at the beginning and reduced to $0.0001$ after 90K iterations. The weight decay hyper-parameter is $0.0005$, the batch size is 2, and the total number of training iterations is 120K. We use 5 image scales of $\{480, 576, 688, 864, 1000\}$, which are based on the shorter side of an image, for data augmentation and ensembling in training and testing. The NMS threshold for selecting foreground proposals is 0.3 and $\xi$ in Eq.~(\ref{eq:gt}) is set to $0.4$ following MNC~\cite{dai2016instance}. For regression, a proposal is associated with a pseudo-ground-truth if the IoU is larger than 0.6. The output size $T$ of the IMG and instance segmentation modules is 28. Our model is implemented in PyTorch and the experiments are conducted on a single NVIDIA Titan XP GPU. \subsection{Datasets and Evaluation Metrics} We use the PASCAL VOC 2012 segmentation dataset~\cite{everingham15pascal} to evaluate our algorithm. The dataset is composed of 1,464, 1,449, and 1,456 images for training, validation, and testing, respectively, for 20 object classes.
We use the standard augmented training set (\textit{trainaug}) with 10,582 images to learn our network, following the prior segmentation research~\cite{ahn2019weakly,chen2018deeplab,cholakkal2019object,ge2019label,bharath2011semantic,zhou2018weakly,zhu2019learning}. In our weakly supervised learning scenario, we only use image-level class labels to train the model. Detection and instance segmentation accuracies are measured on the PASCAL VOC 2012 segmentation validation (\textit{val}) set. We employ the standard mean average precision (mAP) to evaluate object detection performance, where a bounding box is regarded as a correct detection if it overlaps with a ground-truth more than a threshold, \textit{i.e.} IoU $> 0.5$. CorLoc~\cite{deselaers2012weakly} is also used to evaluate the localization accuracy on the \textit{trainaug} dataset. For the instance segmentation task, we evaluate the performance of an algorithm using mAPs at IoU thresholds 0.25, 0.5 and 0.75. We also use Average Best Overlap (ABO) to present the overall instance segmentation performance of our model. \subsection{Comparison with Other Algorithms} \label{sec:comp} We compare our algorithm with existing weakly supervised instance segmentation approaches~\cite{cholakkal2019object,ge2019label,zhou2018weakly,zhu2019learning}. Table~\ref{tab:inst} shows that our algorithm generally outperforms the prior art even without post-processing. Note that our post-processing using MCG proposals~\cite{arbelaez2014multiscale} improves mAP at high thresholds and ABO significantly, and leads to outstanding performance in terms of both mAP and ABO after all. We believe that such large gaps come from the effective regularization given by our community learning. The accuracy of our model is not as high as that of the methods based on Mask R-CNN re-training~\cite{ahn2019weakly,issam2019where}, but a direct comparison is not fair due to the re-training issue.
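For concreteness, the IoU criterion underlying the mAP metric can be sketched as follows (a minimal illustration of the standard definition, not the official evaluation code; boxes are corner coordinates $(x_1, y_1, x_2, y_2)$):

```python
# Minimal sketch (ours) of the IoU-based detection criterion used for mAP.
# Boxes are (x1, y1, x2, y2) corner coordinates.
def iou(a, b):
    # Intersection rectangle, clipped to zero when the boxes are disjoint.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def is_correct_detection(det, gt, thresh=0.5):
    # A detection counts as correct when its IoU with a ground-truth box
    # exceeds the threshold (0.5 for the standard mAP metric).
    return iou(det, gt) > thresh
```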
Table~\ref{tab:inst_train} illustrates that our model outperforms the methods without re-training on the \textit{train} split. \begin{table}[t] \begin{center} \caption{Instance segmentation results on the PASCAL VOC 2012 segmentation \textit{train} set. For \cite{ahn2019weakly,issam2019where}, we report the results without Mask R-CNN, obtained from their original papers.} \label{tab:inst_train} \vspace{0.2cm} \scalebox{0.85}{ \begin{tabular}{cccc} \toprule & WISE~\cite{issam2019where} & IRN~\cite{ahn2019weakly} & Ours\\ \hline\hline mAP$_{0.5}$ & 25.8 & 37.7 & \bf{39.2} \\ \bottomrule \end{tabular} } \end{center} \vspace{-0.3cm} \end{table} \begin{table}[t] \begin{center} \caption{Contribution of the individual components integrated into our algorithm. The evaluation is performed on the PASCAL VOC 2012 segmentation \textit{val} set for mAP and the \textit{trainaug} set for CorLoc (* indicates that detection bounding boxes are used as segmentation results as well).} \label{tab:abl_arch} \scalebox{0.85}{ \begin{tabular}{cccc} \toprule \multirow{2}{*}{\large Architecture} & $\begin{array}{c} \text{Instance} \\ \text{Segmentation} \end{array}$ &\multicolumn{2}{c}{$\begin{array}{c} \text{Object} \\ \text{Detection} \end{array}$} \\ & mAP$_{0.5}$ & mAP & CorLoc \\ \hline\hline Detector & \ 18.8$^*$ &45.3 & 63.6 \\ Detector + IMG &32.8 &48.6 & 66.3 \\ Detector + IMG + IS & 33.7 & 49.2 & 66.8 \\ Detector + REG + IMG + IS& \bf{35.9}&\bf{53.2}& \bf{70.8} \\ \bottomrule \end{tabular} } \end{center} \vspace{-0.2cm} \end{table} \subsection{Ablation Study} \label{sub:abl} We discuss the contribution of each component in the network and the effectiveness of our training strategy. We also compare two different regression strategies---class-agnostic vs. class-specific---using detection scores. Note that we present the results without post-processing for the ablation study to verify the contribution of each component clearly.
\subsubsection{Network Components} We analyze the effectiveness of the individual modules for instance segmentation and object detection. For comparisons, we measure mAP$_{0.5}$ for instance segmentation and mAP for object detection on the PASCAL VOC 2012 segmentation \textit{val} set while computing CorLoc on the \textit{trainaug} set. Note that the instance segmentation accuracy of the detection-only model is given by using detected bounding boxes as segmentation masks. All models are trained on the PASCAL VOC 2012 segmentation \textit{trainaug} set. Table~\ref{tab:abl_arch} shows that the IMG and Instance Segmentation (IS) modules are particularly helpful to improve accuracy for both tasks. By adding the two components, our model achieves accuracy gains in detection of 3.9\% and 3.2\% points in terms of mAP and CorLoc, respectively, compared to the baseline detector. Additionally, bounding box regression (REG) enhances performance by generating better pseudo-ground-truths. \begin{table}[t!] \begin{center} \caption{Accuracy of variants of the IMG module with background class (BG), weighted GAP (wGAP), and feature smoothing (FS), based on the ResNet50 backbone without REG} \label{tab:cam_comp} \vspace{0.1cm} \scalebox{0.85}{ \begin{tabular}{cccccc} \toprule & BG & BG + wGAP & BG + FS & wGAP + FS & All \\ \hline\hline mAP$_{0.5}$&28.8&30.0&31.8&27.4 & \bf{33.7} \\ \bottomrule \end{tabular} } \end{center} \vspace{-0.3cm} \end{table} \subsubsection{IMG module} We further investigate the components in the IMG module and summarize the results in Table~\ref{tab:cam_comp}. All results are from the experiments without bounding box regression to demonstrate the impact of the individual components clearly. All three tested components contribute substantially to the performance improvement. The background class activation map models the background likelihood within a bounding box explicitly and facilitates the comparison with foreground counterparts.
The feature smoothing regularizes excessively discriminative activations in the inputs to the CAM module while the weighted GAP pays more attention to the proper region for segmentation. \subsubsection{Comparison to a Simple Algorithm Combination} To demonstrate the benefit of our unified framework, we compare the proposed algorithm with a straightforward combination of weakly supervised object detection and semantic segmentation methods. Table~\ref{tab:end2end} presents the result of combining a weakly supervised object detection algorithm, OICR~\cite{tang2017multiple}, with a weakly supervised semantic segmentation algorithm, AffinityNet~\cite{ahn2018learning}. Note that both OICR and AffinityNet are competitive approaches in their target tasks. We train the two models independently and combine their results by providing a segmentation label map using AffinityNet for each detection result obtained from OICR. The proposed algorithm based on unified end-to-end training outperforms the simple combination of two separate modules even without post-processing. \begin{table}[t] \begin{center} \caption{Comparison of our model with a combination of OICR and AffinityNet on the PASCAL VOC 2012 segmentation \textit{val} set} \label{tab:end2end} \vspace{0.2cm} \scalebox{0.85}{ \begin{tabular}{cccc} \toprule Model & $\begin{array}{c} \text{OICR} \\ \text{+ AffinityNet} \end{array}$ & $\begin{array}{c} \text{OICR (ResNet50)} \\ \text{+ AffinityNet} \end{array}$ & Ours\\ \hline\hline mAP$_{0.5}$ & 27.3 & 33.3 & \bf{35.9} \\ \bottomrule \end{tabular} } \end{center} \vspace{-0.2cm} \end{table} {\setlength{\tabcolsep}{0.5em} \begin{table}[t] \begin{center} \caption{Comparison of the class-agnostic regressor and the class-specific regressor in our algorithm in terms of detection performance.
The evaluation is performed on the PASCAL VOC 2012 segmentation \textit{val} set for mAP and the \textit{trainaug} set for CorLoc.} \label{tab:reg} \vspace{0.2cm} \scalebox{0.85}{ \setlength\tabcolsep{15pt} \begin{tabular}{cccc} \toprule Model & mAP & CorLoc\\ \hline\hline Ours w/o REG & 49.2 & 66.8 \\ Ours (class-specific) &50.4 & 68.4 \\ Ours (class-agnostic) & \textbf{53.2} & \textbf{70.1} \\ \bottomrule \end{tabular} } \end{center} \vspace{-0.2cm} \end{table} } \begin{figure*}[t] \begin{center} \includegraphics[width=0.85\linewidth]{assets/qual.pdf} \end{center} \vspace{-0.3cm} \caption{Instance segmentation results on the PASCAL VOC 2012 segmentation \textit{val} set. The green rectangle is a detected object bounding box.} \label{fig:seg} \vspace{-0.1cm} \end{figure*} \begin{figure*}[t!] \begin{center} \includegraphics[width=0.85\linewidth]{./assets/det_qual.pdf} \end{center} \vspace{-0.2cm} \caption{Qualitative results of detection on the PASCAL VOC 2012 segmentation \textit{val} set. The green rectangle is generated by our model and the yellow one indicates the output of the detector-only model (OICR~\cite{tang2017multiple} based on ResNet50). } \label{fig:det} \vspace{-0.3cm} \end{figure*} \subsubsection{Comparison to Class-Specific Box Regressor} We compare the results from the class-agnostic and class-specific bounding box regressors in terms of mAP and CorLoc. Table~\ref{tab:reg} shows that the bounding box regressors turn out to be learned effectively despite incomplete supervision. It further shows that the class-agnostic bounding box regressor clearly outperforms the class-specific version. We believe that this is partly because sharing a regressor over all classes reduces the bias observed in individual classes and regularizes overly discriminative representations. \subsection{Qualitative Results} Figure~\ref{fig:seg} shows instance segmentation results from our model after post-processing and the identified bounding boxes on the PASCAL VOC 2012 segmentation \textit{val} set.
Refer to our supplementary material for more details. Our model successfully segments whole regions of objects and discriminates each object in the same class within an input image via predicted object proposals. Figure~\ref{fig:det} compares detection results from our full model and a detector-only model, OICR with the ResNet50 backbone network, on the same dataset. Our model is more robust in localizing a whole object since the features are better regularized by the joint learning of the Object Detection, IMG, and Instance Segmentation modules. \section*{Appendix} \section{Details of Our Framework} This section discusses more details regarding our feature extractor, object detection and instance mask generation modules, which are described in our main paper. \subsection{Feature Extractor} \label{sub:backbone} We use ResNet50~\cite{he2016deep} as a backbone network, which is pretrained on ImageNet. For object detection, one SPP layer is attached after \textsf{res4}, followed by \textsf{res5}. The output of the last residual block is shared with the IMG and segmentation modules through upsampling. The IMG module employs multiple levels of $28\times 28$ features from the outputs of the SPP layers attached to \textsf{res3} and \textsf{res4}, and the upsampled \textsf{res5} output. These features are given to the weighted GAP and the classification layers following one convolution layer for each level of the CAM subnetwork. For instance segmentation, the upsampled output of \textsf{res5} is used. In our implementation, batch normalization is replaced with group normalization~\cite{wu2018group} due to the small batch size. \subsection{Object Detection Module} The object detection module is composed of detector and regressor parts. Note that any weakly supervised object detection algorithm can be used as the detector in the proposed framework. \subsubsection{Detector}% We adopt OICR~\cite{tang2017multiple} for the detector.
OICR is one of the most commonly used algorithms for weakly supervised object detection relying on multiple instance learning~\cite{tang2018pcl,tang2017multiple,zhang2018w2f}. The model has two parts: a multiple instance detection network (MIDN) and refinement layers.% \subsubsection{MIDN} MIDN is based on the Weakly Supervised Deep Detection Network (WSDDN)~\cite{bilen2016weakly}, which has two parallel fully connected layers for classification and detection, respectively, followed by two separate softmax layers. For classification, the softmax layer is given by \begin{equation} [\sigma_{\text{cls}}(\mathbf{x}^c)]_{ij} = \frac{e^{x_{ij}^c}}{\sum_{k=1}^C e^{x^c_{kj}}}, \end{equation} where $x^c_{ij}$ denotes the classification score for the $i^\text{th}$ class of the $j^\text{th}$ proposal and $C$ denotes the number of classes. On the other hand, the softmax layer for the detection branch is given by \begin{equation} [\sigma_{\text{det}}(\mathbf{x}^d)]_{ij} = \frac{e^{x_{ij}^d}}{\sum_{k=1}^{|{R}|} e^{x^d_{ik}}}, \end{equation} where $x^d_{ij}$ denotes the detection score for the $i^\text{th}$ class of the $j^\text{th}$ proposal and $|{R}|$ is the number of proposals. The final score, $\mathbf{z} \in \mathbb{R}^{C \times |{R}|}$, is defined as \begin{equation} \mathbf{z} = \sigma_{\text{cls}}(\mathbf{x}^c) \odot \sigma_{\text{det}}(\mathbf{x}^d), \end{equation} where $\odot$ is the Hadamard product. The image-level classification score $\boldsymbol{\phi}$ is given by the sum of $\mathbf{z}$ over all proposals. By using the image-level score, the loss from MIDN, $\mathcal{L}_\text{cls}$, is defined as an image-level cross-entropy, which is described in Eq.~3 in our main paper. \subsubsection{Refinement Layer} Once MIDN predicts a class of each proposal, a refinement layer revises the labels by leveraging the object classification scores from the previous stage. The refinement layer finds the proposal with the highest rank in each class, which is considered as a seed.
Each proposal is given a label from the highest overlapping seed if its IoU (Intersection over Union) with the seed is higher than a threshold, 0.5; otherwise, it is labeled as a background class. The weight of the proposal, $w_r$, is given by the class score of the seed. Hence, the loss of the $k^{\text{th}}$ refinement layer, $\mathcal{L}_\text{refine}^k$, is defined as a weighted cross-entropy loss as described in Eq.~4 in our main paper. \subsubsection{Regressor} For bounding box regression, we attach two fully connected layers after \textsf{res5}, which has 2048 channels. The final output of our regressor has dimension 4 in the class-agnostic setting, instead of $4C$, where $C$ is the number of classes, as in the traditional class-specific setting. This means that the class-agnostic regressor is shared across all classes. During training, each pair $(\text{p}, g)$ of a proposal and its nearest pseudo-ground-truth proposal is converted to a regression offset $t = [t_{x},t_{y},t_{w},t_{h}]$ as follows: \begin{equation} \begin{split} t_{x} &= (g_{x} - \text{p}_{x}) / \text{p}_{w},\\ t_{y} &= (g_{y} - \text{p}_{y}) / \text{p}_{h}, \\ t_{w} &= \log(g_{w}/ \text{p}_{w}), \\ t_{h} &= \log(g_{h}/ \text{p}_{h}), \end{split} \label{eq:reg_offset} \end{equation} where $g = [g_{x}, g_{y}, g_{w}, g_{h}]$ is the target pseudo-ground-truth proposal for a proposal $\text{p} = [ \text{p}_{x}, \text{p}_{y}, \text{p}_{w}, \text{p}_{h}]$. \vspace{0.5cm} \subsection{Instance Mask Generation (IMG) Module} We use CAM~\cite{zhou2016learning} for the instance mask generation module. It can be substituted by other object localization algorithms based on image-level labels, such as Grad-CAM~\cite{selvaraju2017grad} and Grad-CAM++~\cite{chattopadhyay2017grad}. \subsubsection{Class Activation Map (CAM)} CAM~\cite{zhou2016learning} highlights areas of discriminative parts of objects for each class and is often used as the pseudo-ground-truth for weakly supervised semantic segmentation.
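The offset encoding of Eq.~(\ref{eq:reg_offset}) amounts to a few lines of code; the sketch below is our own illustration (boxes as $(x, y, w, h)$ tuples, matching the notation above), not the actual training code:

```python
# Sketch (ours) of the proposal-to-target offset encoding in
# Eq. (eq:reg_offset); p and g are boxes (x, y, w, h).
import math

def encode_offsets(p, g):
    px, py, pw, ph = p
    gx, gy, gw, gh = g
    return [(gx - px) / pw,      # t_x
            (gy - py) / ph,      # t_y
            math.log(gw / pw),   # t_w
            math.log(gh / ph)]   # t_h
```

An identical proposal and pseudo-ground-truth yield a zero offset, so the regressor's target is zero whenever the proposal is already correct.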
CAM is built on a classification task leveraging Global Average Pooling (GAP)~\cite{lin2013network}. It is applied to the last convolutional layer, followed by a fully connected layer and a softmax layer to predict image-level class labels. For each class $c$, the CAM $\mathbf{M}_{c}(x,y)$ is defined as follows: \begin{equation} \mathbf{M}_{c}(x,y) = \mathbf{w}_{c}^T\cdot \mathbf{F}(x,y), \end{equation} where $\mathbf{F}(x,y)$ is a feature vector from the last convolutional layer with respect to the spatial grid $(x,y)$, and $\mathbf{w}_c$ is a weight vector of the fully connected layer. \section{Time Cost of Post Processing} Note that our model without post-processing has competitive results compared to existing methods, and our post-processing finds the best-matching MCG proposal for each predicted mask. The computational cost of post-processing is not significant compared to our main algorithm based on a deep neural network. Specifically, the inference through our network takes 4 seconds per image (5 multi-scales with flip) on a single TITAN Xp GPU but the post-processing takes $0\sim4$ seconds on a CPU. \begin{figure}[t] \begin{center} \includegraphics[width=0.85\linewidth]{./assets/supp_non-linear.pdf} \end{center} \vspace{-0.2cm} \caption{Comparison between the outputs of the conventional CAM network (middle) and one with feature smoothing (right) for two images.} \label{fig:non-linear} \end{figure} \begin{figure*}[t] \begin{center} \includegraphics[width=0.95\linewidth]{./assets/supp_reg_cls_qual.pdf} \end{center} \caption{Qualitative results regarding class-specific and class-agnostic regressors on the PASCAL VOC 2012 segmentation \textit{val} set.
The red rectangle is a ground-truth, the blue rectangle represents the output of our model with the class-agnostic regressor, and the orange one is our model with the class-specific regressor.} \label{fig:reg_result_2} \end{figure*} {\setlength{\tabcolsep}{0.5em} \begin{table}[t] \begin{center} \caption{Accuracy for various numbers of CAMs in the IMG module, based on ResNet50 without the REG and IS modules, on the PASCAL VOC 2012 segmentation \textit{val} set.} \label{tab:multi_cams} \vspace{0.3cm} \scalebox{1.0}{ \setlength\tabcolsep{5pt} \begin{tabular}{ccccc} \toprule The number of CAMs & 1 & 2 & 3 (ours) & 4\\ \hline\hline mAP$_{0.5}$ & 29.7 & 30.6 & \textbf{32.8} & 31.3 \\ \bottomrule \end{tabular} } \end{center} \vspace{-0.2cm} \end{table} } \section{Additional Ablation Study} \label{sec:abl} \subsection{Multiple CAMs} We present the instance segmentation performance at mAP$_{0.5}$ with respect to the number of CAMs in Table~\ref{tab:multi_cams}. The results come from our Detector + IMG module, which does not have the REG and IS modules or the post-processing, to directly show the effectiveness of multiple CAMs. The multi-scale representations are helpful to capture whole objects rather than discriminative parts only. \section{Additional Qualitative Results} \label{sec:qual} \subsection{Instance Segmentation} Figure~\ref{fig:seg_qual} shows additional instance segmentation results. Images in the first two rows are success cases and those in the last row are failure cases. In the failure cases, the model confuses dog and cat, fails to detect human hands and legs as well as a dark sheep, cannot differentiate three adjacent sheep, and fails to remove a false positive. \subsection{Feature Smoothing} To prevent CAM from focusing excessively on discriminative parts of target objects, we smooth the input features to the CAM networks using a non-linear activation function.
As illustrated in Figure~\ref{fig:non-linear}, the function helps produce more spatially regularized activation maps, which are more appropriate for enclosing entire target objects in segmentation. \subsection{Bounding Box Regression} \label{subsubsec:qual_reg} We qualitatively compare our model with the class-agnostic regressor and with the class-specific regressor in Figure~\ref{fig:reg_result_2}. Our model with the class-agnostic regressor achieves better performance than with the class-specific regressor. The difference between the two regressors is remarkable for the ``cat'' and ``dog'' classes. With the class-agnostic regressor, our model detects their entire bodies, while the model with the class-specific counterpart still spotlights their discriminative parts, the faces. Figure~\ref{fig:reg_supp} presents the effectiveness of our class-agnostic regressor compared to our model without a regressor on the PASCAL VOC segmentation \textit{val} set. \begin{figure*}[] \begin{center} \includegraphics[width=0.95\linewidth]{./assets/supp_reg.pdf} \end{center} \caption{Qualitative results of detection on the PASCAL VOC 2012 segmentation \textit{val} set. The red rectangle indicates a ground-truth, the green rectangle is generated by our model without a regressor, and the blue one represents the output of our model with the class-agnostic regressor.} \label{fig:reg_supp} \end{figure*} \section{Introduction}\label{sec:intro} Object detection and semantic segmentation algorithms have achieved great success in recent years thanks to the advent of large-scale datasets~\cite{everingham15pascal,lin2014microsoft} as well as the development of deep learning technologies~\cite{girshick2015fast,he2017maskrcnn,redmon2018yolo,ren2015faster}. However, most existing image datasets have relatively simple forms of annotations, such as image-level class labels, while many practical tasks require more sophisticated information such as bounding boxes and areas corresponding to object instances.
Unfortunately, the acquisition of such complex labels requires significant human effort, and it is challenging to construct a large-scale dataset containing such comprehensive annotations. Instead of standard supervised learning formulations~\cite{chen2018masklab,dai2016instance,hayder2017boundary,he2017maskrcnn}, we tackle a more challenging task, weakly supervised instance segmentation, which relies only on image-level class labels for instance-wise segmentation. This task shares critical limitations with many weakly supervised object recognition problems; trained models typically focus too much on discriminative parts of objects in the scene and, consequently, fail to identify whole object regions and extract accurate object boundaries. Moreover, there are additional challenges in handling the two problems, weakly supervised object detection and semantic segmentation, jointly; incomplete ground-truths incur noisy estimation of labels in both tasks, which makes it difficult to take advantage of the joint learning formulation. For example, although object detection methods typically employ object proposals to provide rough information about object location and size, a na\"ive application of an instance segmentation module to weakly supervised object detection results may not be successful in practice due to noise in the object proposals. \begin{figure}[t] \begin{center} \includegraphics[width=1\linewidth]{./assets/concept.pdf} \end{center} \vspace{-0.2cm} \caption{The proposed community learning framework for weakly supervised instance segmentation. Our model is composed of an object detection module, an instance mask generation module, an instance segmentation module, and a feature extractor, which together construct a positive feedback loop within a community. It first identifies positive detection bounding boxes from the detection module and generates pseudo-ground-truths of instance segmentation using class activation maps.
The model is trained with multi-task loss of the three components using the pseudo-ground-truths. The final segmentation masks are obtained from the ensemble of outputs from instance mask generation and segmentation modules. } \label{fig:concept} \vspace{-0.3cm} \end{figure} Our approach aims to realize the goal using a deep neural network with multiple interacting task-specific components that construct a positive feedback loop. The whole model is trained in an end-to-end manner and boosts performance of individual modules, leading to outstanding segmentation accuracy. We call such a learning concept {\it community learning}, and Figure~\ref{fig:concept} illustrates its application to weakly supervised instance segmentation. The community learning is different from multi-task learning that attempts to achieve multiple objectives in parallel without tight interaction between participating modules. The contributions of our work are summarized below: \begin{itemize} \item[$\bullet$] We introduce a deep community learning framework for weakly supervised instance segmentation, which is based on an end-to-end trainable deep neural network with active interactions between multiple tasks: object detection, instance mask generation, and object segmentation. \item[$\bullet$] We incorporate two empirically useful techniques for object localization, class-agnostic bounding box regression and segmentation proposal generation, which are performed without full supervision. \item[$\bullet$] The proposed algorithm achieves substantially higher performance than the existing weakly supervised approaches on the standard benchmark dataset without post-processing. \end{itemize} The rest of the paper is organized as follows. We briefly review related works in Section~\ref{sec:related} and describe our algorithm with community learning in Section~\ref{sec:method}. Section~\ref{sec:exp} analyzes the experimental results on a benchmark dataset. 
\section{Related Works}\label{sec:related} This section reviews existing weakly supervised algorithms for object detection, semantic segmentation, and instance segmentation. \subsection{Weakly Supervised Object Detection} Weakly Supervised Object Detection (WSOD) aims to localize objects in a scene only with image-level class labels. Most existing methods formulate WSOD as Multiple Instance Learning (MIL) problems~\cite{dietterich1997solving} and attempt to learn detection models by extracting pseudo-ground-truth labels~\cite{bilen2016weakly,tang2018pcl,tang2017multiple,zhang2018w2f}. WSDDN~\cite{bilen2016weakly} combines classification and localization tasks to identify object classes and their locations in an input image. However, this technique is conceptually designed to find only a single object class and instance, and often fails to solve problems involving multiple labels and objects. Various approaches~\cite{kantorov2016contextlocnet,son2018forget,tang2018pcl,tang2017multiple,wan2018min} tackle this issue by incorporating additional components, but they are still prone to focus on the discriminative parts of objects instead of whole object regions. Recently, several studies have integrated semantic segmentation to improve detection performance~\cite{diba2017weakly,li2019weakly,shen2019cyclic,wei2018ts2c}. WCCN~\cite{diba2017weakly} and TS2C~\cite{wei2018ts2c} filter out object proposals using semantic segmentation results, but still have trouble in identifying spatially overlapped objects of the same class. Meanwhile, SDCN~\cite{li2019weakly} utilizes the semantic segmentation result to refine pseudo-ground-truths. WS-JDS~\cite{shen2019cyclic} leverages a weakly supervised semantic segmentation module that estimates the importance of object proposals.
Although the core idea is valuable and the segmentation module improves detection performance, the instance segmentation performance improvement is marginal compared to the simple box masking of its baselines~\cite{bilen2016weakly,kantorov2016contextlocnet}. \subsection{Weakly Supervised Semantic Segmentation} Weakly Supervised Semantic Segmentation (WSSS) is a task to estimate pixel-level semantic labels in an image based on image-level class labels only. Class Activation Map (CAM)~\cite{zhou2016learning} is widely used for WSSS because it generates class-specific likelihood maps using the supervision for image classification. SPN~\cite{kwak2017weakly}, one of the early works that exploit CAM for WSSS, combines CAM with a superpixel segmentation result to extract accurate class boundaries in an image. AffinityNet~\cite{ahn2018learning} propagates the estimated class labels using semantic affinities between adjacent pixels. Ge~\etal~\cite{ge2018multi} employ a pretrained object detector to obtain segmentation labels. Recent approaches~\cite{huang2018weakly,kwak2017weakly,lee2019ficklenet,lee2019frame,wang2018weakly,zeng2019joint} often train their models end-to-end. DSRG~\cite{huang2018weakly} and MCOF~\cite{wang2018weakly} propose iterative refinement procedures starting from CAM. FickleNet~\cite{lee2019ficklenet} performs stochastic feature selection in its convolutional layers and captures the regularized shapes of objects. \subsection{Instance Segmentation} Instance segmentation can be regarded as a combination of object localization and semantic segmentation, which needs to identify individual object instances. There exist several fully supervised approaches~\cite{chen2018masklab,dai2016instance,hayder2017boundary,he2017maskrcnn}. Hayder~\etal~\cite{hayder2017boundary} utilize a Region Proposal Network (RPN)~\cite{ren2015faster} to detect individual instances and leverage an Object Mask Network (OMN) for segmentation.
Mask R-CNN~\cite{he2017maskrcnn}, Masklab~\cite{chen2018masklab} and MNC~\cite{dai2016instance} have similar procedures to predict their pixel-level segmentation labels. There have been recent works for Weakly Supervised Instance Segmentation (WSIS) based on image-level class labels only~\cite{ahn2019weakly,ge2019label,issam2019where,zhou2018weakly,zhu2019learning}. Peak Response Map (PRM)~\cite{zhou2018weakly} takes the peaks of an activation map as pivots for individual instances and estimates the segmentation mask of each object using the pivots. Instance Activation Map (IAM)~\cite{zhu2019learning} selects pseudo-ground-truths out of precomputed segment proposals based on PRM to learn segmentation networks. Label-PEnet~\cite{ge2019label} combines various components with different functionalities to obtain the final segmentation masks. However, it involves many duplicate operations across the components and requires very complex training pipeline. There are a few attempts to generate pseudo-ground-truth segmentation maps based on weak supervision and forward them to the well-established network~\cite{he2017maskrcnn} for instance segmentation~\cite{ahn2019weakly,issam2019where}. To improve accuracy, the algorithms often employ post-processing such as MCG proposals~\cite{arbelaez2014multiscale} or denseCRF~\cite{krahenbuhl2011efficient}. \section{Proposed Algorithm}\label{sec:method} This section describes our community learning framework based on an end-to-end trainable deep neural network for weakly supervised instance segmentation. \subsection{Overview and Motivation} \label{sub:network} One of the most critical limitations in a na\"ive combination of detection and segmentation networks for weakly supervised instance segmentation is that the learned models often attend to small discriminative regions of objects and fail to recover missing parts of target objects. 
This is partly because segmentation networks rely on noisy detection results without proper interactions and the benefit of the iterative label refinement procedure is often saturated in the early stage due to the strong correlation between outputs from two modules. To alleviate this drawback, we propose a deep neural network architecture that constructs a circular chain along with the components and generates desirable instance detection and segmentation results. The chain facilitates the interactions along individual modules to extract useful information. Specifically, the object detector generates proposal-level pseudo-ground-truth labels. They are used to create pseudo-ground-truth masks for instance segmentation module, which estimates the final segmentation labels of individual proposals using the masks. These three network components make up a community and collaborate to update the weights of the backbone network for feature extraction, which leads to regularized representations robust to overfitting to poor local optima. \begin{figure*}[t] \begin{center} \includegraphics[width=0.9\linewidth] {./assets/architecture.pdf} \end{center} \caption{ The proposed network architecture for weakly supervised instance segmentation. Our end-to-end trainable network consists of four parts: (a) \textit{feature extraction network} computes the shared feature maps and provides proposal-level features with the other networks, (b) \textit{object detection network} identifies the location of objects and gives a pseudo-label of object class to each proposal, (c) \textit{instance mask generation network} constructs the class activation map for given proposals using predicted pseudo-labels from the detector, (d) \textit{instance segmentation network} predicts segmentation masks and is learned with the outputs of the above networks as pseudo-ground-truths. 
} \label{fig:model} \end{figure*} \subsection{Network Architecture} Figure~\ref{fig:model} presents the network architecture of our weakly supervised object detection and segmentation algorithm. As mentioned earlier, the proposed network consists of four parts: feature extractor, object detector with bounding box regressor, instance mask generator and instance segmentation module. Our feature extraction network is made of shared fully convolutional layers, where the feature of each proposal is obtained from the Spatial Pyramid Pooling (SPP)~\cite{he2014spatial} layers on the shared feature map and fed to the other modules. \paragraph*{3.2.1 Object Detection Module\vspace{0.3cm}\newline}% \label{par:detection_module} For object detection, a $7\times 7$ feature map is extracted from the SPP layer for each object proposal and forwarded to the last residual block (\textsf{res5}). Then, we pass these features to both the detector and the regressor. Since this process is compatible with any end-to-end trainable object detection network based on weak supervision, we adopt one of the most popular weakly supervised object detection networks, referred to as OICR~\cite{tang2017multiple}, which has three refinement layers after the base detector. For each image-level class label, we extract foreground proposals based on their estimated scores corresponding to the label and apply non-maximum suppression (NMS) to reduce redundancy. Background proposals are randomly sampled from the proposals that overlap with foreground proposals below a threshold. Among the foreground proposals, the one with the highest score for each class is selected as a pseudo-ground-truth bounding box. Bounding box regression is typically conducted under full supervision to refine the proposals corresponding to objects.
However, learning a regressor in our problem is particularly challenging since it is prone to be biased toward discriminative parts of objects; such a characteristic is difficult to control in a weakly supervised setting and is aggravated in class-specific learning. Hence, unlike~\cite{girshick2015fast,girshick2014rich,ren2015faster}, we propose a class-agnostic bounding box regressor based on pseudo-ground-truths to avoid overly discriminative representation learning and provide a better regularization effect. Note that a class-agnostic regressor has not been actively explored yet, since fully supervised models can exploit accurate bounding box annotations and learning a regressor with weak labels only is not common. If a proposal has a higher IoU with its nearest pseudo-ground-truth proposal than a threshold, the proposal and the pseudo-ground-truth proposal are paired to learn the regressor. \paragraph*{3.2.2 Instance Mask Generation (IMG) Module\vspace{0.3cm}\newline} \label{par:cam_module} This module constructs pseudo-ground-truth masks for instance segmentation using the proposal-level class labels given by our object detector. It takes the feature of each proposal from the SPP layers attached to multiple convolutional layers as shown in Figure~\ref{fig:model}. Since the IMG module utilizes hierarchical representations from different levels in a backbone network, it can deal with multi-scale objects effectively. We construct pseudo-ground-truth masks for individual proposals by integrating the following additional features into CAM~\cite{zhou2016learning}. First, we compute a background class activation map by augmenting a channel corresponding to the background class. This map is useful for distinguishing objects from the background. Second, instead of the Global Average Pooling (GAP) adopted in the standard CAM, we employ the weighted GAP to give more weights to the center pixels within proposals.
It computes a weighted average of the input feature maps, where the weights are given by an isotropic Gaussian kernel. Third, we convert input features $f$ of the CAM module to log scale values, \ie, $\log(1+f)$, which penalizes excessively high peaks in the CAM and leads to spatially regularized feature maps appropriate for robust segmentation. The output of the IMG module, denoted by $\mathbf{M}$, is an average of three CAMs to which min-max normalizations~\cite{patro2015normalization} are applied. For each selected proposal, the pseudo-ground-truth mask $\mathbf{\widetilde{M}} \in \mathbb{R}^{(C+1) \times T^2}$ for instance segmentation is given by the following equation using the three CAMs, $\mathbf{M}_k$ $(k = 1, 2, 3)$, \begin{equation} \mathbf{\widetilde{M}} = \delta \left[ \frac{1}{3}\sum_{k=1}^3 \mathbf{M}_k > \xi \right], \label{eq:gt} \end{equation} where $\mathbf{M}_k \in \mathbb{R}^{(C+1) \times T^2}$ is the $k^{\text{th}}$ CAM whose size is $T\times T$ for all classes including background, $\delta[\cdot]$ is an element-wise indicator function, and $\xi$ is a predefined threshold. \paragraph*{3.2.3 Instance Segmentation Module\vspace{0.3cm}\newline} \label{par:segmentation_module} For instance segmentation, the output of the \textsf{res5} block is upsampled to $T\times T$ activation maps and provided to four convolutional layers along with ReLU layers and the final segmentation output layer as illustrated in Figure~\ref{fig:model}. This module learns a pixel-wise binary classification label for each proposal based on the pseudo-ground-truth mask $\mathbf{\widetilde{M}}^c$, provided by the IMG module. The predicted mask of each proposal is a class-specific binary mask, where the class label $c$ is determined by the detector. Note that our model is compatible with any semantic segmentation network. 
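The mask construction of Eq.~\eqref{eq:gt} amounts to averaging the three normalized CAMs and thresholding element-wise; a minimal NumPy sketch (the shapes and the threshold value $\xi$ are illustrative assumptions, not the paper's exact settings):

```python
import numpy as np

def pseudo_gt_mask(cams, xi=0.5):
    """Binarize the average of per-level CAMs, as in Eq. (gt).

    cams: list of three arrays of shape (C+1, T, T), each assumed to be
          min-max normalized to [0, 1] (one per feature level).
    xi:   threshold (illustrative value; the paper's setting may differ).
    """
    avg = np.mean(np.stack(cams, axis=0), axis=0)  # (C+1, T, T)
    return (avg > xi).astype(np.float32)           # element-wise indicator

# Example with 20 object classes + background on a 7x7 output grid.
cams = [np.random.rand(21, 7, 7) for _ in range(3)]
mask = pseudo_gt_mask(cams)
```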
\subsection{Losses} \label{sub:losses} The overall loss function of our deep community learning framework is given by the sum of losses from the three modules as \begin{equation} \mathcal{L} = \mathcal{L}_{\text{det}} + \mathcal{L}_{\text{img}} + \mathcal{L}_{\text{seg}}, \label{eq:loss} \end{equation} where $\mathcal{L}_{\text{det}}$, $\mathcal{L}_{\text{img}}$, and $\mathcal{L}_{\text{seg}}$ denote detection loss, instance mask generation loss, and instance segmentation loss, respectively. The three terms interact with each other to train the backbone network including the feature extractor in an end-to-end manner. \paragraph*{3.3.1 Object Detection Loss\vspace{0.3cm}\newline} \label{par:loss_object_detection} The object detection module is trained with the sum of classification loss $\mathcal{L}_{\text{cls}}$, refinement loss $\mathcal{L}_{\text{refine}}$, and bounding box regression loss $\mathcal{L_{\text{reg}}}$. The features extracted from the individual object proposals are given to the detection module based on OICR~\cite{tang2017multiple}. Image classification loss $\mathcal{L}_\text{cls}$ is calculated by computing the cross-entropy between image-level ground-truth class label $\boldsymbol{y} = (y_1, \dots, y_C)^\text{T}$ and its corresponding prediction $\boldsymbol{\phi} = (\phi_1, \dots, \phi_C)^\text{T}$, which is given by \begin{equation} \mathcal{L}_{\text{cls}} = - \sum_{c=1}^{C} \left[ y_c \log \phi_c + (1-y_c)\log(1-\phi_c) \right] \text{,} \label{eq:Lcls} \end{equation} where $C$ is the number of classes in a dataset. As in the original OICR, the pseudo-ground-truth of each object proposal in the refinement layers is obtained from the outputs of their preceding layers, where the supervision of the first refinement layer is provided by WSDDN~\cite{bilen2016weakly}.
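The image classification loss of Eq.~\eqref{eq:Lcls} is a per-class binary cross-entropy; a small self-contained sketch (the class count and probability values are illustrative):

```python
import math

def image_classification_loss(y, phi, eps=1e-12):
    """Binary cross-entropy over C classes, as in Eq. (Lcls).

    y:   image-level ground-truth labels y_c in {0, 1}, length C.
    phi: predicted class probabilities phi_c in (0, 1), length C.
    eps: numerical guard against log(0).
    """
    return -sum(
        yc * math.log(pc + eps) + (1 - yc) * math.log(1 - pc + eps)
        for yc, pc in zip(y, phi)
    )

# Confident, correct predictions incur a much smaller loss.
good = image_classification_loss([1, 0, 1], [0.9, 0.1, 0.8])
bad = image_classification_loss([1, 0, 1], [0.2, 0.9, 0.3])
```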
The loss of the $k^\text{th}$ refinement layer is computed by a weighted sum of losses over all proposals as \begin{equation} \mathcal{L}_\text{refine}^k = - \frac{1}{|R|}\sum_{r=1}^{|R|}\sum_{c=1}^{C+1}w_r^k y_{cr}^k \log x_{cr}^{k}, \label{eq:Loicr} \end{equation} where $x_{cr}^{k}$ denotes a score of the $r^\text{th}$ proposal with respect to class $c$ in the $k^\text{th}$ refinement layer, $w_r^k$ is a proposal weight obtained from the prediction score in the preceding refinement layer, and $|R|$ is the number of proposals. In the refinement loss function, there are $C+1$ classes because we also consider background class. Regression loss $\mathcal{L_{\text{reg}}} $ employs smooth $\ell_1$-norm between a proposal and its matching pseudo-ground-truth, following the bounding box regression literature~\cite{girshick2015fast,ren2015faster}. The regression loss is defined as follows: \begin{equation} \mathcal{L_{\text{reg}}} =\frac{1}{|R|} \sum_{r=1}^{|R|} \sum_{j=1}^{|\mathcal{G}|} q_{rj} \sum_{k \in \{x,y,w,h\}} \text{smooth}_{\ell_1}(t_{rjk} - v_{rk}), \end{equation} where $\mathcal{G}$ is a set of pseudo-ground-truths, $q_{rj}$ is an indicator variable denoting whether the $r^\text{th}$ proposal is matched with the $j^\text{th}$ pseudo-ground-truth, $v_{rk}$ is a predicted bounding box regression offset of the $k^\text{th}$ coordinate for $r^\text{th}$ proposal and $t_{rjk}$ is the desirable offset parameter of the $k^\text{th}$ coordinate between the $r^\text{th}$ proposal and the $j^\text{th}$ pseudo-ground-truth as in R-CNN~\cite{girshick2014rich}. The detection loss $\mathcal{L}_\text{det}$ is the sum of image classification loss, bounding box regression loss, and $K$ refinement losses, which is given by \begin{equation} \mathcal{L}_\text{det} = \mathcal{L}_\text{cls}+ \mathcal{L}_\text{reg} + \sum_{k=1}^K \mathcal{L}_\text{refine}^k, \label{eq:Ldet} \end{equation} where $K=3$ in our implementation. 
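The smooth $\ell_1$ term of the regression loss above follows its standard definition; a minimal sketch for one matched proposal/pseudo-ground-truth pair (the offset values are illustrative):

```python
def smooth_l1(x):
    """Standard smooth L1: 0.5 * x^2 if |x| < 1, else |x| - 0.5."""
    ax = abs(x)
    return 0.5 * ax * ax if ax < 1.0 else ax - 0.5

def proposal_regression_loss(pred, target):
    """Sum of smooth L1 over the four offsets (x, y, w, h) of one
    matched proposal / pseudo-ground-truth pair, i.e., the inner sum
    of the regression loss."""
    return sum(smooth_l1(t - v) for t, v in zip(target, pred))

loss = proposal_regression_loss(pred=(0.1, 0.2, 0.0, 0.0),
                                target=(0.0, 0.0, 0.0, 2.0))
# Contributions: 0.005 + 0.02 + 0.0 + 1.5 = 1.525
```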
\paragraph*{3.3.2 Instance Mask Generation Loss\vspace{0.3cm}\newline} \label{par:loss_cam} For training CAMs in the IMG module, we adopt average classification scores from the three refinement branches of our detection network. The loss function of the $k^\text{th}$ CAM network, denoted by $\mathcal{L}_{\text{cam}}^k$, is given by a binary cross entropy loss as \begin{align} \mathcal{L}_\text{cam}^k = -\frac{1}{|R|} \sum_{r=1}^{|R|}\sum_{c=1}^{C+1} \left[ \widetilde{y}_{rc} \log p_{rc}^k + (1-\widetilde{y}_{rc})\log(1-p_{rc}^k) \right] \text{,} \label{eq:Lcam_k} \end{align} where $\widetilde{y}_{rc}$ is a one-hot encoded pseudo-label from the detection branch of the $r^{\text{th}}$ proposal for class $c$, and $p_{rc}^k$ is a softmax score of the same proposal for the same class obtained by the weighted GAP from the last convolutional layer. The instance mask generation loss is the sum of all the CAM losses as shown in the following equation: \begin{equation} \mathcal{L}_{\text{img}} = \sum_{k=1}^3 \mathcal{L}_{\text{cam}}^k \text{.} \label{eq:Lcam} \end{equation} \paragraph*{3.3.3 Instance Segmentation Loss\vspace{0.3cm}\newline} \label{par:loss_instance_segmentation} The loss in the segmentation network is obtained by comparing the network outputs with the pseudo-ground-truth $\mathbf{\widetilde{M}}$ using a pixel-wise binary cross entropy loss for each class, which is given by \begin{align} \mathcal{L}_{\text{seg}} = -\frac{1}{T^2}\sum_{r=1}^{|R|}\sum_{c=1}^{C+1} \hspace{-0.2cm} &\sum_{(i,j) \in T \times T} \hspace{-0.3cm} \Big[ m_{rc}^{ij} \log s_{rc}^{ij} \\ &+ (1-m_{rc}^{ij})\log\left(1 - s_{rc}^{ij} \right) \Big], \nonumber \label{eq:Lseg} \end{align} where $m_{rc}^{ij}$ denotes the binary element at $(i,j)$ of $\mathbf{\widetilde{M}}$ for the $r^\text{th}$ proposal, and $s_{rc}^{ij}$ is the output value of the segmentation network, $\mathbf{S} \in \mathbb{R}^{|R| \times (C+1) \times T^2}$, at location $(i,j)$ of the $r^\text{th}$ proposal. \begin{table*}[t!]
\caption{Instance segmentation results on the PASCAL VOC 2012 segmentation \textit{val} set with two different types of supervision ($\mathcal{I}$: image-level class label, $\mathcal{C}$: object count). The numbers in red and blue denote the best and the second best scores without Mask R-CNN re-training, respectively.} \label{tab:inst} \begin{center} \scalebox{0.85}{ \renewcommand{\arraystretch}{1.1} \setlength\tabcolsep{12pt} \begin{tabular}{ccc|ccccc} \toprule Method & Supervision & Post-processing & $\text{mAP}_{0.25}$ & $\text{mAP}_{0.5}$ & $\text{mAP}_{0.75}$ & ABO \\ \hline\hline WISE~\cite{issam2019where} w/ Mask R-CNN & $\mathcal{I}$ & \checkmark & 49.2 & 41.7 & 23.7 & 55.2 \\ IRN~\cite{ahn2019weakly} w/ Mask R-CNN & $\mathcal{I}$ & \checkmark & - & 46.7 & - & -\\ \hline Cholakkal \etal~\cite{cholakkal2019object} &$\mathcal{I} + \mathcal{C}$ &\checkmark & 48.5 & 30.2 & 14.4 &44.3 \\\hdashline PRM~\cite{zhou2018weakly} & $\mathcal{I}$& \checkmark & 44.3 & 26.8 & 9.0 & 37.6 \\ IAM~\cite{zhu2019learning} & $\mathcal{I}$ &\checkmark & 45.9 & 28.3 & 11.9 & 41.9 \\ Label-PEnet~\cite{ge2019label} & $\mathcal{I}$ &\checkmark & 49.2 & 30.2 & {\color{red}\textbf{12.9}} & 41.4 \\ \hdashline \multirow{2}{*}{Ours}& $\mathcal{I}$ && {\color{red}\textbf{57.0}} & {\color{blue}\textbf{35.9}}& 5.8 & {\color{blue}\textbf{43.8}} \\ & $\mathcal{I}$ & \checkmark & {\color{blue}\textbf{56.6}} & {\color{red}\textbf{38.1}}& {\color{blue}\textbf{12.3}} & {\color{red}\textbf{48.2}} \\\Xhline{0.6pt} \end{tabular} } \end{center} \vspace{-0.2cm} \end{table*} \subsection{Inference} \label{sub:inference} Our model sequentially predicts object detection and instance segmentation for each proposal in a given image. For object detection, we use the average scores of the three refinement branches in the object detection module. Each regressed proposal is labeled as the class that corresponds to the maximum score. We apply non-maximum suppression with an IoU threshold of 0.3 to the proposals.
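The greedy NMS step just described can be sketched in pure Python as follows (boxes given as (x1, y1, x2, y2) corners; the coordinates and scores are illustrative):

```python
def iou(a, b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.3):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)             # highest-scoring remaining box
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) <= thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)]
scores = [0.9, 0.8, 0.7]
kept = nms(boxes, scores, thresh=0.3)
# The two heavily overlapping boxes collapse to the higher-scoring one;
# the distant box survives, so kept == [0, 2].
```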
The surviving proposals are regarded as detected objects and used to estimate pseudo-labels for instance segmentation. For instance segmentation, we select the foreground activation map of the identified class $c$, $\mathbf{M}^c$, from the IMG module and the corresponding segmentation score map, $\mathbf{S}^c$, from the instance segmentation module for each detected object. The final instance segmentation label is given by the ensemble of the two results, \begin{equation} \mathbf{O}^c = {\delta} \left[ \frac{\mathbf{M}^c + \mathbf{S}^c}{2} > \xi \right], \label{eq:inf} \end{equation} where $\mathbf{O}^c$ is a binary segmentation mask for the detected class $c$, $\delta[\cdot]$ is an element-wise indicator function, and $\xi$ is the same threshold as used in Eq.~\eqref{eq:gt}. For post-processing, we utilize Multiscale Combinatorial Grouping (MCG) proposals~\cite{arbelaez2014multiscale} as used in PRM~\cite{zhou2018weakly}. Each instance segmentation mask is replaced with the MCG proposal that has the maximum overlap. Since an MCG proposal is a group of superpixels, it contains boundary information; hence, if a segmentation output covers the overall shape of an object well, the MCG proposal helps capture its details. \section{Conclusion}\label{sec:conclusion} We presented a unified end-to-end deep neural network for weakly supervised instance segmentation via community learning. Our framework jointly trains three subnetworks with a shared feature extractor, which perform object detection with bounding box regression, instance mask generation, and instance segmentation. These components interact closely with each other and form a positive feedback loop with cross-regularization for improving the quality of the individual tasks. Our class-agnostic bounding box regressor successfully regularizes the object detector even with weak supervision only, while the post-processing based on MCG mask proposals improves accuracy substantially.
The proposed algorithm outperforms the previous state-of-the-art weakly supervised instance segmentation methods and the weakly supervised object detection baseline on PASCAL VOC 2012 with a simple segmentation module. Since our framework does not rely on particular network architectures for the object detection and instance segmentation modules, using a better detector or segmentation network would further improve its performance.
\section*{\refname \@mkboth{\MakeUppercase{\refname}}{\MakeUppercase{\refname}}} } \makeatother \providecommand{\keywords}[1]{ \small\textbf{Keywords:~~} #1} \usepackage{listings,xcolor} \lstdefinelanguage{ttl}{ sensitive=true, morekeywords={nif,itsrdf,es,dbo,penn,olia,rdfs}, morecomment=[l][\color{black}]{@}, morecomment=[l][\color{darkgreen}]{\# }, morecomment=[l][\color{black}]{\#char }, morestring=[b][\color{blue}]\", } \lstset{language=ttl, morekeywords={PREFIX,rdf,rdfs,url,nif,itsrdf,es,penn,olia,rdfs}, basicstyle=\footnotesize} \definecolor{darkgreen}{RGB}{0, 128, 0} \definecolor{lightblue}{RGB}{50,155,230} \definecolor{black}{RGB}{0,0,0} \newcommand{\todo}[1]{ \vspace{0.1cm}\textcolor{red}{[\textbf{TODO:} #1]} } \begin{document} \title{DBpedia NIF: Open, Large-Scale and Multilingual Knowledge Extraction Corpus} \author[1,2]{Milan Dojchinovski} \author[1,3]{Julio Hernandez} \author[1]{Markus Ackermann} \author[1]{Amit Kirschenbaum} \author[1]{Sebastian Hellmann} \affil[1]{Agile Knowledge Engineering and Semantic Web (AKSW) \authorcr InfAI, Leipzig University, Germany\authorcr \texttt{\{dojchinovski,ackermann,amit,hellmann\}@informatik.uni-leipzig.de}} \affil[2]{Web Intelligence Research Group\authorcr Faculty of Information Technology\authorcr Czech Technical University in Prague, Czech Republic\authorcr \texttt{milan.dojchinovski@fit.cvut.cz}} \affil[3]{Center for Research and Advanced Studies\authorcr National Polytechnic Institute of Mexico\authorcr \texttt{nhernandez@tamps.cinvestav.mx}} \date{} \setcounter{footnote}{0} \maketitle \begin{abstract} In the past decade, the DBpedia community has put a significant amount of effort into developing technical infrastructure and methods for the efficient extraction of structured information from Wikipedia.
These efforts have been primarily focused on harvesting, refining and publishing semi-structured information found in Wikipedia articles, such as information from infoboxes, categorization information, images, wikilinks and citations. Nevertheless, a vast amount of valuable information is still contained in the unstructured Wikipedia article texts. In this paper, we present \textit{DBpedia NIF} - a large-scale and multilingual knowledge extraction corpus. The aim of the dataset is two-fold: to dramatically broaden and deepen the amount of structured information in DBpedia, and to provide a large-scale and multilingual language resource for the development of various NLP and IR tasks. The dataset provides the content of all articles for 128 Wikipedia languages. We describe the dataset creation process and the NLP Interchange Format (NIF) used to model the content, links and structure of the information in the Wikipedia articles. The dataset has been further enriched with about 25\% more links, and selected partitions have been published as Linked Data. Finally, we describe the maintenance and sustainability plans, and selected use cases of the dataset from the TextExt knowledge extraction challenge. \keywords{DBpedia, NLP, IE, Linked Data, training, corpus} \end{abstract} \section{Introduction} In the past decade, the Semantic Web community has put significant effort into developing technical infrastructure for efficient Linked Data publishing. These efforts gave birth to 1,163 Linked Datasets (as of February 2017\footnote{\url{http://lod-cloud.net/}}), an overall growth of 296\% compared to the number of datasets published in September 2011. Since the beginning of the Linked Open Data (LOD) initiative, DBpedia \cite{dbpedia} has served as a central interlinking hub for the emerging Web of Data. The ultimate goal of the DBpedia project was, and still is, to extract structured information from Wikipedia and to make this information available on the Web.
In the past ten years, the main focus of the DBpedia project has been to extract the available structured information found in Wikipedia articles, such as information from infoboxes, categorization information, images, wikilinks and citations. Nevertheless, a huge amount of highly valuable information is still hidden in the free text of the Wikipedia articles. The Wikipedia article texts represent the largest part of the articles in terms of time spent on writing, informational content and size. This content and the information it provides can be exploited in various use cases, such as fact extraction and validation, training of various multilingual NLP tasks, development of multilingual language resources, etc. In past years, there have been several attempts \cite{richman2008mining,toral2006,kazama2007exploiting,mihalcea2007wikify,NOTHMAN2013151,Mendes2011spotlight, HAHM14,POLYGLOT} at extracting the free text content of Wikipedia articles. However, none of these works has achieved significant impact and recognition within the Semantic Web and NLP communities, since the generated datasets have not been properly maintained and have been developed without a sustainable development and maintenance strategy. Thus, there is a need to develop a robust process for extracting the information from the free text of the Wikipedia articles, to semantically describe the extracted information and make it available to the Semantic Web community, and to put in place a sustainable development and maintenance strategy for the dataset. In order to achieve these goals, we have developed the \textit{DBpedia NIF} dataset - a large-scale, open and multilingual knowledge extraction dataset which provides information extracted from the unstructured text found in the Wikipedia articles. It provides the Wikipedia article texts of 128 Wikipedia editions.
The dataset captures the structure of the articles as they are organized in Wikipedia, including the sections, paragraphs and corresponding titles. We also capture the links present in the articles, their surface forms (anchors) and their exact location within the text. Since Wikipedia contributors must follow strict guidelines when adding new links, many links are missing from the content. In order to overcome this problem, we further enriched the content with new links and significantly increased their number; for example, the number of links in the English Wikipedia has been increased by 31.36\%. The dataset is provided in a machine-readable format, the NLP Interchange Format (NIF) \cite{hellmann2013NIF}, which is used to semantically describe the structure of the text documents and the annotations. The dataset has been enriched, selected partitions have been published according to the Linked Data principles, and new, updated versions of the dataset are provided along with each DBpedia release. We have evaluated the data for its syntactic validity and semantic accuracy, and the results of these evaluations confirm the quality of the dataset. The remainder of the paper is structured as follows. Section~\ref{sec.background-and-motivations} provides the necessary background information and motivates the work. Section~\ref{sec.dataset} describes the dataset, the extraction process, the data model, the availability, and the maintenance and sustainability plans for the dataset. Section~\ref{sec.enrichment} describes the enrichment process of the dataset. Section~\ref{sec.quality} discusses the quality of the dataset. Section~\ref{sec.use-cases} presents selected use cases of the dataset at the TextExt challenge. Finally, Section~\ref{sec.conclusion} concludes the paper. \section{Background and Motivations} \label{sec.background-and-motivations} DBpedia is a community project which aims at publishing structured knowledge extracted from Wikipedia.
Since its inception, the DBpedia project has primarily focused on the extraction of knowledge from semi-structured sections of Wikipedia articles, such as infoboxes, categorization information, images, wikilinks, etc. Nevertheless, a huge amount of information is still hidden in the text of the Wikipedia articles. This information can not only increase the coverage of DBpedia, but can also support other relevant tasks, such as the validation of DBpedia facts or the training of various NLP and IR systems. In the past, Wikipedia and its article texts have been widely exploited as a resource for many NLP and IR tasks \cite{richman2008mining,toral2006,kazama2007exploiting,mihalcea2007wikify,NOTHMAN2013151,Mendes2011spotlight, HAHM14,POLYGLOT}. In \cite{richman2008mining} the authors develop a multilingual NER system which is trained on data extracted from Wikipedia article texts; Wikipedia has been used to create an annotated training corpus in seven languages by exploiting the wikilinks within the text. In \cite{toral2006} the authors propose the use of Wikipedia for the automatic creation and maintenance of gazetteers for NER. Similarly, \cite{kazama2007exploiting} used Wikipedia, particularly the first sentence of each article, to create lists of entities. Another prominent work which uses Wikipedia is Wikify! \cite{mihalcea2007wikify}, a system for the enrichment of content with Wikipedia links; it uses Wikipedia to create a vocabulary of all surface forms collected from the Wikipedia articles. DBpedia Spotlight \cite{Mendes2011spotlight} is a system for the automatic annotation of text documents with DBpedia URIs; it uses Wikipedia article texts to collect surface forms, which are then used for the entity spotting and disambiguation tasks. In \cite{HAHM14}, the authors also construct an NE corpus using Wikipedia, parsing the XML dumps and extracting the wikilink annotations.
In \cite{NOTHMAN2013151}, the authors present an approach for the automatic construction of an NER training corpus out of Wikipedia, which has been generated for nine languages based on the textual and structural features present in the Wikipedia articles. Similarly, in \cite{POLYGLOT} the authors develop an NER system which is trained on data from Wikipedia. Their training corpus has been generated for 40 languages by exploiting the Wikipedia link structure and the internal links embedded in the Wikipedia articles to detect named entity mentions. Although these works have shown promising results and confirmed the high potential of the Wikipedia texts, several crippling problems surface. Most of the datasets are not available, they are not regularly updated nor properly maintained, and they have been developed without a clear sustainability plan and roadmap strategy. Most datasets are provided in just a few languages, with the exception of \cite{POLYGLOT}, which is provided in 40 languages. The data is not semantically described, which makes it hard to query, analyze and consume. Thus, there is a need to develop a knowledge extraction process for extracting information from Wikipedia article texts, to semantically describe the extracted information, to make it available to the Semantic Web and other communities, and to establish sustainable development and maintenance strategies for the dataset. \section{The DBpedia NIF Dataset} \label{sec.dataset} The DBpedia knowledge base is created using the DBpedia Extraction Framework\footnote{\url{https://github.com/dbpedia/extraction-framework/}}. In our work, we appropriately extended the framework and implemented a knowledge extraction process for extracting information from Wikipedia article texts. \subsection{Knowledge Extraction} Although Wikipedia article texts can be harvested using the official Wikipedia API\footnote{\url{https://en.wikipedia.org/w/api.php}}, it is not recommended to use it at a large scale due to crawling ethics.
The content behind Wikipedia is also provided as XML dumps\footnote{\url{http://dumps.wikimedia.org/}} in which the content is represented using Wikitext\footnote{\url{https://en.wikipedia.org/wiki/Help:Wikitext}} (also known as Wiki markup). Wikitext is a special markup which defines the syntax and keywords used by the MediaWiki software to format Wikipedia pages. Apart from text formatting, it also provides support for Lua scripts\footnote{\url{https://en.wikipedia.org/wiki/Wikipedia:Lua}} and special Wikipedia templates which further manipulate the content and prepare it for rendering. These add-ons add additional complexity to the process of rendering and parsing Wikipedia articles. Although several tools for rendering Wikipedia articles have been developed, no tool implements all templates and Lua scripts. MediaWiki\footnote{\url{http://www.mediawiki.org/}} is the only available tool which produces high-quality content from Wikitext markup; thus, in our work we rely on MediaWiki. We use MediaWiki to expand all templates in a Wikipedia article and render its HTML. Next, we clean the HTML page based on pre-defined CSS selectors. We define three types of selectors: \begin{itemize} \item \texttt{search selectors} - to find special HTML elements, \item \texttt{remove selectors} - to remove specific HTML elements, and \item \texttt{replace selectors} - to replace specific HTML elements, for example with a newline character. \end{itemize} The specification of the CSS selectors is a manual task which is outsourced to the community. While there are CSS selectors which are valid for all Wikipedias, specific CSS selectors need to be specified for individual Wikipedia languages. After the HTML has been successfully cleaned, it is traversed, also with the help of the search selectors, and its structure is captured.
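The remove/replace cleaning step can be sketched in plain Python. Note that this is an illustrative sketch only: the actual framework matches full CSS selectors, whereas the sketch below matches bare tag names, and the selector sets are hypothetical examples.

```python
from html.parser import HTMLParser

REMOVE = {"script", "style", "table"}   # hypothetical "remove selectors"
REPLACE = {"br": "\n", "hr": "\n"}      # hypothetical "replace selectors"

class Cleaner(HTMLParser):
    """Collects the text of an HTML fragment, dropping removed
    subtrees and substituting replaced elements."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.depth = 0          # > 0 while inside a removed subtree
        self.parts = []

    def handle_starttag(self, tag, attrs):
        if tag in REMOVE:
            self.depth += 1
        elif self.depth == 0 and tag in REPLACE:
            self.parts.append(REPLACE[tag])

    def handle_endtag(self, tag):
        if tag in REMOVE and self.depth > 0:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth == 0:
            self.parts.append(data)

def clean(html: str) -> str:
    cleaner = Cleaner()
    cleaner.feed(html)
    return "".join(cleaner.parts)
```

For instance, `clean("<p>Hello <script>x=1</script>world<br>!</p>")` drops the script subtree and replaces the `<br>` with a newline.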
During the traversal, the article is split into (sub-)\texttt{sections} and \texttt{paragraphs}, and the text of the article is accumulated. Figure~\ref{fig.html-example} provides an example. Note that the order of the sections and paragraphs is kept as in the corresponding Wikipedia article. Section \texttt{titles} are also captured and stored. The paragraphs are further parsed, and for every \texttt{<a>} HTML element its start and end offsets, surface form and URL are captured. If a table or equation is spotted, it is also extracted and transformed; equations are stored as MathML\footnote{\url{https://www.w3.org/Math/}}. Finally, the cleaned textual content of the sections, paragraphs and titles is semantically modeled using NIF \cite{hellmann2013NIF}, as described in the following section. \begin{figure*}[t!] \centering \includegraphics[width=\textwidth]{dbpedia-nif-illustration.pdf} \caption{Data model illustrated on the Wikipedia article for United States. Source: \url{https://en.wikipedia.org/wiki/United_States}.} \label{fig.html-example} \vspace{-10pt} \end{figure*} \subsection{Data Model in NIF} \label{sec.nif-model} The NLP Interchange Format (NIF)\footnote{\url{http://persistence.uni-leipzig.org/nlp2rdf/}} \cite{hellmann2013NIF} is an RDF/OWL-based format that aims to achieve interoperability between Natural Language Processing (NLP) tools, language resources and annotations. It enables the modeling of text documents and the description of strings within the documents. In our work we use a selected subset of the NIF concepts to model the extracted content and its structure. A \textbf{document} and its main content are represented using the \texttt{nif:Context} class\footnote{In this article, the \texttt{nif} prefix is associated with the URI \url{http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core\#}.}, while the associated \texttt{nif:isString} property holds the actual text.
The \texttt{nif:beginIndex} and \texttt{nif:endIndex} properties describe the position of the string using offsets and denote its length. The provenance information is captured with \texttt{nif:sourceUrl}, where the URL identifies the document the context was extracted from, and \texttt{nif:predLang} describes the predominant language of the text. The \textbf{structure} of the document is captured with the \texttt{nif:Section} and \texttt{nif:Paragraph} classes. From the \texttt{nif:Context}, the first section is referenced using \texttt{nif:firstSection}, the last section using \texttt{nif:lastSection}, and all other contained sections using the \texttt{nif:hasSection} property. Subsections within a section are modeled in the same manner. Sections also have a begin and end index, and they reference the \texttt{nif:Context} with the \texttt{nif:referenceContext} property. Each section references all contained paragraphs using the \texttt{nif:firstParagraph}, \texttt{nif:lastParagraph} and \texttt{nif:hasParagraph} properties. Paragraphs, like sections, are described with a begin and end index and reference the context with the \texttt{nif:referenceContext} property. The reference to the enclosing section is provided with the \texttt{nif:superString} property, which is used to express that one string is contained in another. The \textbf{links} extracted from the wiki text are described using the \texttt{nif:Word} and \texttt{nif:Phrase} classes: \texttt{nif:Word} describes a link with a single-token anchor text, while \texttt{nif:Phrase} describes a link with a multi-token anchor text. The anchor text of a link is provided with \texttt{nif:anchorOf}, together with its position in the string of the referenced \texttt{nif:Context}. Each link resource references the \texttt{nif:Context} using \texttt{nif:referenceContext} and the \texttt{nif:Paragraph} using \texttt{nif:superString}.
The actual link (URL) is described with the \texttt{itsrdf:taIdentRef} property from the Internationalization Tag Set (ITS) Version 2.0\footnote{\url{https://www.w3.org/TR/its20/}} standard. The following listing provides an RDF excerpt from the dataset which represents the content and the structure of the HTML from Figure~\ref{fig.html-example}. It describes a section, the first paragraph within the section, and a link. \begin{lstlisting}[captionpos=b, language=ttl, showstringspaces=false, stepnumber=1, basicstyle=\scriptsize\ttfamily, xleftmargin=\parindent, numbers=left, breaklines=true, numberstyle=\scriptsize, caption=Excerpt from the dataset illustrating the RDF generated for the example from Figure~\ref{fig.html-example}., keywordstyle=\color{lightblue}]
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix itsrdf: <http://www.w3.org/2005/11/its/rdf#> .
@prefix nif: <http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#> .
@prefix ex: <http://nif.dbpedia.org/wiki/en/> .

ex:United_States?dbpv=2016-10&nif=context
  a nif:Context ;
  nif:beginIndex "0"^^xsd:nonNegativeInteger ;
  nif:endIndex "104211"^^xsd:nonNegativeInteger ;
  nif:firstSection ex:United_States?dbpv=2016-10&char=0,4241 ;
  nif:lastSection ex:United_States?dbpv=2016-10&char=103211,104211 ;
  nif:hasSection ex:World_War_II?dbpv=2016-10&char=0,5001 ;
  nif:sourceUrl ex:United_States?oldid=745182619 ;
  nif:predLang <http://lexvo.org/id/iso639-3/eng> ;
  nif:isString "...The first inhabitants of North America migrated from Siberia by way of the Bering land bridge ..." .
ex:United_States?dbpv=2016-10&char=7745,9418
  a nif:Section ;
  nif:beginIndex "7745"^^xsd:nonNegativeInteger ;
  nif:endIndex "9418"^^xsd:nonNegativeInteger ;
  nif:hasParagraph ex:United_States?dbpv=2016-10&char=7860,8740 ;
  nif:lastParagraph ex:United_States?dbpv=2016-10&char=8741,9418 ;
  nif:nextSection ex:United_States?dbpv=2016-10&char=9420,12898 ;
  nif:referenceContext ex:United_States?dbpv=2016-10&nif=context ;
  nif:superString ex:United_States?dbpv=2016-10&char=7548,7743 .

ex:United_States?dbpv=2016-10&nif=paragraph&char=7860,8740
  a nif:Paragraph ;
  nif:beginIndex "7860"^^xsd:nonNegativeInteger ;
  nif:endIndex "8740"^^xsd:nonNegativeInteger ;
  nif:nextParagraph ex:United_States?dbpv=2016-10&char=8741,9418 ;
  nif:referenceContext ex:United_States?dbpv=2016-10&nif=context ;
  nif:superString ex:United_States?dbpv=2016-10&char=7745,9418 .

ex:United_States?dbpv=2016-10&char=7913,7920
  a nif:Word ;
  nif:anchorOf "Siberia" ;
  nif:beginIndex "7913"^^xsd:nonNegativeInteger ;
  nif:endIndex "7920"^^xsd:nonNegativeInteger ;
  nif:referenceContext ex:United_States?dbpv=2016-10&nif=context ;
  nif:superString ex:United_States?dbpv=2016-10&char=7860,8740 ;
  itsrdf:taIdentRef <http://dbpedia.org/resource/Siberia> .
\end{lstlisting} \label{lst.nif-example} \subsection{Coverage and Availability} The DBpedia NIF dataset is the first of its kind, providing structured information for 128 Wikipedia languages. It contains over 9 billion triples, bringing the overall DBpedia triple count up to 23 billion. Table~\ref{tbl.orig-dataset-stats} provides statistics for the top 10 Wikipedia languages: the number of articles, paragraphs and links (annotations), and the mean and median number of annotations per article.
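Because annotations are addressed purely by character offsets into the context string, the model is straightforward to consume and check programmatically. A minimal plain-Python sketch (the toy text mirrors the listing above; the helper names are ours, not part of NIF):

```python
def link_class(anchor: str) -> str:
    """Links with a single-token anchor are nif:Word;
    multi-token anchors are nif:Phrase."""
    return "nif:Word" if len(anchor.split()) == 1 else "nif:Phrase"

def anchor_is_consistent(context_text: str, begin: int, end: int,
                         anchor: str) -> bool:
    """A link annotation is consistent when nif:anchorOf equals the
    slice of the context's nif:isString between nif:beginIndex and
    nif:endIndex."""
    return context_text[begin:end] == anchor

# Toy context mirroring the United States example
text = "The first inhabitants of North America migrated from Siberia."
begin = text.index("Siberia")
end = begin + len("Siberia")
```

With these helpers, `link_class("Siberia")` yields `nif:Word`, `link_class("North America")` yields `nif:Phrase`, and the offset pair `(begin, end)` passes the consistency check against the context text.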
\begin{table}[!h]
\caption{Dataset statistics for the top 10 languages.}
\begin{center}
\begin{tabular}{ >{\raggedleft\arraybackslash} m{17mm} >{\raggedleft\arraybackslash} m{17mm} >{\raggedleft\arraybackslash} m{20mm} >{\raggedleft\arraybackslash} m{22mm} >{\centering\arraybackslash} m{18mm} >{\centering\arraybackslash} m{18mm} }
\specialrule{1pt}{0pt}{1pt}
\textbf{Language} & \textbf{Articles} & \textbf{Paragraphs} & \textbf{Links} & \textbf{Mean per article} & \textbf{Median per article} \\
\specialrule{1pt}{1pt}{3pt}
English & 4,909,454 & 40,939,057 & 127,227,173 & 26.20 & 13 \\
Cebuano & 3,071,209 & 6,434,998 & 24,878,067 & 8.14 & 7 \\
Swedish & 3,297,038 & 7,918,709 & 36,372,087 & 11.07 & 9 \\
German & 1,734,835 & 14,897,948 & 50,116,852 & 29.08 & 19 \\
French & 1,680,645 & 15,833,816 & 55,347,176 & 33.73 & 17 \\
Dutch & 1,799,619 & 6,537,238 & 23,107,130 & 13.17 & 4 \\
Russian & 1,172,548 & 10,327,544 & 31,759,092 & 27.37 & 15 \\
Italian & 1,124,751 & 7,837,457 & 30,996,231 & 27.79 & 12 \\
Spanish & 1,166,614 & 9,221,266 & 31,123,375 & 26.89 & 16 \\
Polish & 1,106,247 & 5,740,870 & 21,793,337 & 19.84 & 12 \\
\specialrule{1pt}{1pt}{1pt}
Total & 21,062,960 & 125,688,903 & 432,720,520 & 223.28 & 124 \\
\specialrule{1pt}{0pt}{1pt}
\end{tabular}
\end{center}
\label{tbl.orig-dataset-stats}
\end{table}
According to the statistics presented in Table~\ref{tbl.orig-dataset-stats}, although the English Wikipedia is the largest, the German and French Wikipedia articles are enriched with more links on average than the English ones: the French Wikipedia contains 33.73 links per article (mean count), the German Wikipedia 29.08, and the English Wikipedia 26.20.
Also, although the Cebuano Wikipedia is the second largest Wikipedia by number of articles, it has the fewest links, likely due to the nature of its creation: its articles were created automatically by a bot\footnote{\url{https://www.quora.com/Why-are-there-so-many-articles-in-the-Cebuano\%2Dlanguage-on-Wikipedia}}. Since the amount of data is considerably large, only the English subset of the dataset is published according to the Linked Data principles, with dereferenceable URIs. The publication of other language versions of the dataset will be considered only if there is demand for it within the community. For all resources we mint URIs in the DBpedia namespace (http://nif.dbpedia.org/wiki/\{lang\}/\{name\}). This namespace gives us the flexibility to publish different language versions of the dataset, as well as any other NIF dataset. Information about the latest news, releases, changes and downloads is provided on the main dataset page at \url{http://wiki.dbpedia.org/nif-abstract-datasets}. General information about the dataset is provided in Table~\ref{tbl.general-dataset-info}.
\begin{table}[!h]
\caption{Details for the DBpedia NIF dataset.}
\begin{center}
\begin{tabular}{ >{\raggedright\arraybackslash} m{27mm} >{\raggedright\arraybackslash} m{87mm} }
\specialrule{1pt}{0pt}{1pt}
\textbf{Name} & DBpedia NIF dataset\\
\textbf{URL} & \url{http://wiki.dbpedia.org/nif-abstract-datasets}\\
\textbf{Ontology} & NLP Interchange Format (NIF) version 2.1\\
\textbf{Version} & 1.0\\
\textbf{Release Date} & July 2017\\
\textbf{License} & Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) \\
\specialrule{1pt}{0pt}{1pt}
\end{tabular}
\end{center}
\label{tbl.general-dataset-info}
\end{table}
Currently, the DBpedia NIF dataset is released together with the regular bi-annual DBpedia releases.
Nevertheless, in the past few months the core DBpedia team has done a considerable amount of work to streamline the DBpedia extraction process and convert many of the extraction tasks into an ETL setting. The ultimate goal behind these efforts is to increase the frequency of the DBpedia releases, including the DBpedia NIF dataset. \subsection{Maintenance and Sustainability Plans} The DBpedia Association has provided us with computational resources for the creation of the dataset and persistent web space for hosting it, which guarantees persistent URI identifiers for the dataset resources. The ongoing maintenance of the dataset is an iterative process, and feedback from its users is captured via several communication channels: the DBpedia-discussion mailing list\footnote{\url{https://lists.sourceforge.net/lists/listinfo/dbpedia-discussion}}, the TextExt challenge (see next section) and the DBpedia Extraction Framework issue tracker\footnote{\url{https://github.com/dbpedia/extraction-framework/}}. In order to ensure a sustainable, regularly updated and maintained dataset, the \textbf{TextExt: DBpedia Open Extraction Challenge}\footnote{\url{http://wiki.dbpedia.org/textext}} has been established. \textit{TextExt} is a knowledge extraction challenge whose ultimate goal is to spur knowledge extraction from Wikipedia article texts, in order to dramatically broaden and deepen the amount of structured DBpedia/Wikipedia data and to provide a platform for training and benchmarking various NLP tools. It is a continuous challenge focused on sustainably advancing the state of the art and systematically putting advanced knowledge extraction technologies into action.
The challenge uses nine language versions of the DBpedia NIF dataset, and participants are asked to apply their knowledge extraction tools in order to extract i) facts, relations, events, terminology or ontologies for the \textit{triples track}, and ii) NLP annotations such as POS tags, dependencies or co-references for the \textit{annotations track}. The submissions are evaluated twice a year, and the challenge committee selects a winner who receives a reward. The knowledge extraction tools developed by the challenge participants are executed at regular intervals, and the extracted knowledge is published as part of the DBpedia core dataset. \section{Dataset Enrichment} \label{sec.enrichment} Wikipedia defines guidelines which authors should follow when writing content and adding links. The main principles described in the guidelines\footnote{\url{https://en.wikipedia.org/wiki/Wikipedia:Manual_of_Style/Linking\#Principles}} are as follows: \begin{enumerate} \item a link should appear only once in an article, \item avoid linking subjects known by most readers, \item avoid overlinking, which makes it difficult to identify useful links, and \item add links specific to the topic of the article. \end{enumerate} These guidelines manifest themselves in certain properties of the dataset: \begin{enumerate} \item If a concept is linked once, its further occurrences are not linked. \item The concept described in the article is never linked to itself. \item Relevant subjects related to the article are linked. \end{enumerate} While the third property is important for the dataset, the first and second properties negatively influence the number of links and the richness of the dataset. In our work, we addressed these two properties and further enriched the dataset with links.
The main goal of the enrichment process is to annotate those words or phrases which have been annotated at least once in an article but whose subsequent occurrences lack links. The enrichment workflow which creates new links within the Wikipedia articles is defined as follows. First, all links and their corresponding anchor texts found in the article are collected. Next, the link-anchor pairs are sorted by anchor text length, from the longest to the shortest anchor. Following the ordered list, full string matching is applied over the whole article content, starting with a lookup of the longest anchors. In case of overlapping matches, the longest match has priority and the others are discarded. For example, if a link with the anchor text ``East Berlin'' already exists, then a link over ``Berlin'' is omitted. We have applied the enrichment process to the content extracted from the top ten Wikipedias, ranked by number of articles. Note that no enrichment has been applied within the ``See also'', ``Notes'', ``Bibliography'', ``References'' and ``External Links'' sections. Table~\ref{tbl.enrichment-stats} summarizes the results of the enrichment.
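The workflow above can be sketched in a few lines of Python. This is a simplified sketch: it re-annotates every non-overlapping occurrence of a collected anchor (including the already linked one) and omits the per-section exclusions described above.

```python
import re

def enrich(text, links):
    """Link-propagation sketch: `links` maps the anchor texts collected
    from an article's existing links to their target URLs. Anchors are
    matched longest-first over the whole text; shorter matches that
    overlap an accepted span are discarded."""
    taken = []       # character spans already covered by an accepted link
    new_links = []
    for anchor in sorted(links, key=len, reverse=True):
        for m in re.finditer(re.escape(anchor), text):
            begin, end = m.start(), m.end()
            # longest-match priority: skip "Berlin" inside "East Berlin"
            if any(b < end and begin < e for b, e in taken):
                continue
            taken.append((begin, end))
            new_links.append((anchor, links[anchor], begin, end))
    return new_links
```

On the paper's example, `enrich("East Berlin lies east of Berlin.", ...)` annotates ``East Berlin'' at offsets 0-11 and the free-standing ``Berlin'' at 25-31, while the overlapping ``Berlin'' inside ``East Berlin'' is discarded.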
\begin{table}[!t]
\caption{Dataset enrichment statistics.}
\begin{center}
\begin{tabular}{ >{\raggedleft\arraybackslash} m{16mm} >{\raggedleft\arraybackslash} m{25mm} >{\raggedleft\arraybackslash} m{25mm} >{\raggedleft\arraybackslash} m{25mm} >{\raggedleft\arraybackslash} m{25mm} }
\specialrule{1pt}{0pt}{1pt}
\textbf{Language} & \multicolumn{1}{>{\centering\arraybackslash}m{23mm}}{\textbf{Annotations before enrichment}} & \multicolumn{1}{>{\centering\arraybackslash}m{23mm}}{\textbf{Unique annotations}} & \multicolumn{1}{>{\centering\arraybackslash}m{23mm}}{\textbf{Annotations after enrichment}} & \multicolumn{1}{>{\centering\arraybackslash}m{23mm}}{\textbf{\% of new annotations}} \\
\specialrule{1pt}{1pt}{0pt}
English & 127,227,173 & 17,322,066 & 168,988,631 & 31.36 \% \\
Cebuano & 24,878,067 & 3,077,130 & 26,222,416 & 5.40 \% \\
Swedish & 36,372,087 & 2,974,858 & 41,368,833 & 13.74 \% \\
German & 50,116,852 & 6,642,511 & 63,347,163 & 26.39 \% \\
French & 55,347,176 & 6,110,952 & 74,843,900 & 35.23 \% \\
Dutch & 23,107,130 & 3,332,920 & 26,873,294 & 16.29 \% \\
Russian & 31,759,092 & 5,386,638 & 37,215,365 & 17.18 \% \\
Italian & 30,996,231 & 3,674,701 & 39,605,915 & 27.78 \% \\
Spanish & 31,123,375 & 4,116,251 & 40,125,126 & 28.92 \% \\
Polish & 21,793,337 & 3,532,420 & 24,374,548 & 11.84 \% \\
\specialrule{1pt}{1pt}{1pt}
Total & 432,720,520 & 56,170,447 & 542,965,191 & 25.48 \% \\
\specialrule{1pt}{0pt}{1pt}
\end{tabular}
\end{center}
\label{tbl.enrichment-stats}
\end{table}
Overall, the enrichment process generated over 110 million links, more than 25\% on top of the number of links present in Wikipedia. Most new links have been generated for the French Wikipedia (35.23\%), followed by the English (31.36\%), Spanish (28.92\%), Italian (27.78\%) and German (26.39\%).
The reasons for the varying percentages of newly generated links could be the language itself, how strictly the Wikipedia editors have followed the guidelines, or the cultural background of the editors. For example, the content of the Cebuano Wikipedia has been primarily generated automatically by a bot, which directly influences the enrichment process (only 5.4\% new links) due to the low number of links initially present in the content. We have evaluated the enrichment process and report the results of the evaluation in Section~\ref{sec.sem-quality-eval}. \section{Dataset Quality} \label{sec.quality} According to the 5-star dataset classification system defined by Tim Berners-Lee \cite{berners2006linked}, the DBpedia NIF dataset classifies as a five-star dataset. The five stars are credited for the open license, the availability in a machine-readable format, the use of open standards, the use of URIs for identification, and the links to other LOD datasets: links to DBpedia and, indirectly, to many other datasets. In \cite{zaveri2015quality} the authors describe a list of indicators for evaluating the intrinsic quality of Linked Data datasets. We have checked the data against each of the applicable metrics described for syntactic validity and semantic accuracy. \subsection{Syntactic Validity of the Dataset} \label{sec.syn-quality-eval} Considering the size and the complexity of the task of extracting knowledge from HTML, we have put significant effort into checking the syntactic validity of the dataset. The following three syntactic validation checks have been executed: \begin{itemize} \item The \textit{Raptor}\footnote{\url{http://librdf.org/raptor/}} RDF syntax parsing and serializing utility was used to ensure the RDF is syntactically correct.
\item The GNU command line tools \textit{iconv}\footnote{\url{https://www.gnu.org/software/libiconv/}} and \textit{wc}\footnote{\url{https://www.gnu.org/software/coreutils/manual/html_node/wc-invocation.html}} were used to make sure the files contain only valid Unicode codepoints. Iconv was executed with the \texttt{-c} parameter, which drops inconvertible and wrongly encoded characters, and the resulting character counts were compared with those of the original files. \item \textit{RDFUnit}\footnote{\url{https://github.com/AKSW/RDFUnit/}} \cite{Kontokostas2014} is a validation tool designed to read and produce RDF that complies with an ontology. RDFUnit was used to ensure adherence to the NIF format \cite{hellmann2013NIF}. Tests such as checking that the string indicated by the \texttt{nif:anchorOf} property equals the part of the \texttt{nif:isString} string (the article text) between the begin and end offsets have been applied. \end{itemize} \subsection{Semantic Accuracy of the Dataset Enrichment} \label{sec.sem-quality-eval} In order to evaluate the semantic accuracy of the enrichment process, we have crowdsourced a randomly created subset of the dataset and asked annotators to check the enrichments, i.e. the newly generated links. The goal of this evaluation was to check the quality of the enrichments, and in particular to assess i) whether the anchors (annotations) are at the correct position within the text, and ii) whether the link associated with the anchor text is correct. \noindent \textbf{Evaluation setup.} For the evaluation, we have used the top-10 articles for English and German according to their PageRank score. \textit{Notice: for review purposes, we have temporarily published these documents (original and enriched versions) at} \url{http://nlp2rdf.aksw.org/dbpedia-nif-examples/}.
The scores were retrieved from the DBpedia SPARQL endpoint\footnote{\url{http://dbpedia.org/sparql}}, found in the \texttt{\url{http://people.aifb.kit.edu/ath/\#DBpedia_PageRank}} graph and described with the \texttt{\url{http://purl.org/voc/vrank\#rankValue}} property. Since the number of links in these articles is high (over 1,500 links per article), we decided to randomly select 30 links per article for the evaluation. These articles were then submitted to ten annotators, where each annotator processed at least 150 links from five articles. For each annotation, we collected three judgments. The evaluation sheets contained the list of enrichments, each described with an ``\texttt{anchor text}'', the ``\texttt{link}'' associated with the anchor, and a ``\texttt{context}'' text (15 words) which contains the link. Since the provided context text is short, we also provided an HTML version of the article with the links highlighted in it. Along with the evaluation sheets, we provided strict annotation guidelines, where each annotator was asked to check the following: \begin{itemize} \item \textbf{anchor-eval (0 or 1):} whether the anchor delimits the complete word or phrase that identifies an entity/concept/term in relation to the context, \item \textbf{link-eval (0 or 1):} whether the link (i.e. the Wikipedia page) describes the anchor, \item \textbf{comment:} a placeholder for an optional comment. \end{itemize} The annotators were instructed to provide an explanation in the \textit{comment} field if they felt unsure. The composition of the annotators was as follows: five computer science PhD students, four undergraduate bachelor computer science students, and one master of arts student with strong knowledge in computer science. All annotators were non-native English speakers, and eight of them were native German speakers. \textbf{Results.} Table~\ref{tbl.enrichment-eval-results} shows the results of the evaluation.
We report the fraction of correctly generated new links among the total number of generated links; a link is considered correct if at least two out of three annotators provided the same judgment. We report the fractions of correctly identified anchors, correctly identified links, and their combination. In addition, for each evaluation mode we report the inter-annotator agreement (IAA) in terms of the Fleiss' kappa \cite{fleiss1971measuring} metric.
\newcolumntype{P}[1]{>{\centering\arraybackslash}p{#1}}
\begin{table}[!h]
\caption{Results from the evaluation of the dataset enrichment.}
\begin{center}
\begin{tabular}{ P{1.1cm} P{1.5cm} P{1.5cm} P{1.5cm} P{1.5cm} P{1.5cm} P{1.5cm} }
\specialrule{1pt}{0pt}{1pt}
& \multicolumn{1}{>{\centering\arraybackslash}m{15mm}}{\textbf{IAA for anchors}} & \multicolumn{1}{>{\centering\arraybackslash}m{15mm}}{\textbf{IAA for links}} & \multicolumn{1}{>{\centering\arraybackslash}m{20mm}}{\textbf{IAA for anchors and links}} & \multicolumn{1}{>{\centering\arraybackslash}m{15mm}}{\textbf{Correct anchors}} & \multicolumn{1}{>{\centering\arraybackslash}m{15mm}}{\textbf{Correct links}} & \multicolumn{1}{>{\centering\arraybackslash}m{20mm}}{\textbf{Correct anchors and links}} \\
\specialrule{1pt}{1pt}{3pt}
English & 0.4929 & 0.6538 & 0.5435 & 0.7933 & 0.6633 & 0.6133 \\
\specialrule{1pt}{1pt}{3pt}
German & 0.4828 & 0.6183 & 0.5177 & 0.8723 & 0.8511 & 0.7908 \\
\specialrule{1pt}{1pt}{1pt}
\end{tabular}
\end{center}
\label{tbl.enrichment-eval-results}
\end{table}
According to the agreement interpretation table defined in \cite{landis1977measurement}, we obtained ``substantial agreement'' (0.61-0.80) for the links: 0.6538 for English and 0.6183 for German. For the rest, the anchors and the combination of anchors and links, we obtained ``moderate agreement'' (0.41-0.60). The highest agreement score was obtained for the English links (0.6538), and the lowest for the German anchors (0.4828).
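The IAA figures above follow the standard Fleiss' kappa computation for a fixed number of raters per item. A minimal sketch (the example judgment counts are illustrative only, not the actual evaluation data):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of items, each given as per-category
    judgment counts; every item must be rated by the same number of
    raters (three in our setup)."""
    n = sum(ratings[0])                     # raters per item
    N = len(ratings)                        # number of rated items
    k = len(ratings[0])                     # number of categories
    # proportion of all judgments falling into each category
    p_j = [sum(item[j] for item in ratings) / (N * n) for j in range(k)]
    # per-item observed agreement
    P_i = [(sum(c * c for c in item) - n) / (n * (n - 1))
           for item in ratings]
    P_bar = sum(P_i) / N                    # mean observed agreement
    P_e = sum(p * p for p in p_j)           # expected chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Illustrative: three raters, binary (0/1) judgments for four links
toy = [[3, 0], [0, 3], [2, 1], [1, 2]]
```

For the `toy` counts the observed agreement is 2/3 against a chance agreement of 1/2, giving a kappa of 1/3; perfect agreement (e.g. `[[3, 0], [0, 3]]`) yields a kappa of 1.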
The results also show that the enrichment process performed well in terms of accuracy. The best results were achieved for the German anchors, with a fraction of 0.8723 correct anchors, while the worst were for the English links, with a fraction of 0.6633 correct links. We assume that the low fraction of correct links for English is due to the character of the Wikipedia guidelines on linking. In our future work, we will focus on improving the enrichment process, with a particular focus on link validation. \section{Selected Dataset Use Cases from the TextExt Challenge} \label{sec.use-cases} In this section we report on two use cases of the dataset, both of which won the TextExt challenge in 2017. \subsection{Linked Hypernyms Dataset: entity typing using Wikipedia} The Linked Hypernyms Dataset (LHD) \cite{KLIEGR201647} was the first TextExt challenge winner to use the DBpedia NIF dataset. The LHD dataset has been developed on top of the DBpedia NIF dataset with the goal of completing missing types in DBpedia and mining more specific types. It has been generated via extraction of the hypernyms found in the first sentence of each article. The extracted hypernyms are treated as the entity type for the entity described in the article. Finally, the hypernyms are mapped to DBpedia ontology types, which are further analyzed so that the most specific type can be selected. LHD has been processed for English, German and Dutch, and it is already integrated as part of the DBpedia core dataset. \subsection{Lector: facts extraction from Wikipedia text} Lector \cite{Cannaviccio:2016} was the second and most recent winner of the TextExt challenge. Lector is a knowledge extraction tool used to harvest highly accurate facts from Wikipedia article text. The tool has been adopted and applied on DBpedia NIF to extract new facts. 
The approach applied on DBpedia NIF is as follows: in the first phase, all articles are processed in order to harvest the most common typed phrases used to express facts present in DBpedia; in the second phase, the most common phrases are used to extract novel facts. Lector has been applied on five different DBpedia NIF languages, and in the near future it will be integrated as part of the DBpedia core dataset. \section{Conclusions} \label{sec.conclusion} Unstructured Wikipedia article texts provide a vast amount of valuable information which has not yet been exploited in DBpedia. In this paper, we have presented the \textit{DBpedia NIF} dataset, a large-scale, open and multilingual knowledge extraction corpus. The ultimate goal of the dataset is to broaden and deepen the information in DBpedia and to provide a large-scale, multilingual linguistic resource for training various NLP and IR tasks. The dataset provides the content of Wikipedia article texts in 128 languages, and it captures the structure of the articles as they appear in Wikipedia, including the sections, paragraphs and links. We use the NIF format to model the data and represent the information. We have further enriched the dataset and increased the number of links in the content by 25\%. The dataset has been checked for its syntactic validity and semantic accuracy, and the results from these evaluations confirm the quality of the dataset. We have also validated and shown the potential of the dataset on two selected use cases from the TextExt challenge. The presented maintenance and sustainability plans will assure continuous growth of the dataset in terms of size, quality, coverage and usage. In our future work, we will focus on improving the extraction process and the data model, and we will continue to disseminate the dataset and extract additional knowledge from it through the TextExt challenge. 
~ \noindent \textbf{Acknowledgement.} We thank all contributors to the dataset and especially Martin Br\"ummer for the initial implementation and Markus Freudenberg for the integration of the extraction as part of the DBpedia Extraction framework. Also, thanks to the annotators for their help with the evaluation. This work was partially funded by a grant from the EU's H2020 Programme for the ALIGNED project (GA-644055) and a grant from the Federal Ministry for Economic Affairs and Energy of Germany (BMWi) for the SmartDataWeb project (GA-01MD15010B). \bibliographystyle{plainnat} \input{dbpedia-nif.bbl} \end{document}
\section{Introduction} \label{sec:intro} Extracting information from business documents remains a significant burden for all large enterprises. Existing extraction pipelines are complex, fragile, highly engineered, and yield poor accuracy. These systems are typically trained using annotated document images whereby human annotators have labeled the tokens or words that need to be extracted. The need for human annotation is the biggest barrier to the broad application of machine learning. Business documents often have complex recursive structures whose understanding is critical to accurate information extraction, e.g. invoices contain a variable number of line-items, each of which may have multiple (sometimes varying) fields to be extracted. This remains a largely unsolved problem. Deep Learning has revolutionized sequence modeling, resulting in systems that are trained end-to-end and have improved accuracy while reducing system complexity for a variety of applications \cite{NIPS2017_3f5ee243}. Here we leverage these benefits in an end-to-end system for information extraction from complex business documents. This system does not use human annotations, making it easily adaptable to new document types while respecting the recursive structure of many business documents. 
\begin{figure}[tb] \centering \scriptsize \begin{subfigure}{0.4\columnwidth} \centering \begin{tabular}{| l r r r |} \hline & & & \\ \multicolumn{4}{|r|}{ \fbox{Invoice} \fbox{No:} \fbox{12345} } \\ \multicolumn{4}{|r|}{ \fbox{Date:} \fbox{12/13/2019} } \\ & & & \\ \multicolumn{1}{|c}{ \fbox{Description} } & \multicolumn{1}{c}{ \fbox{Qty} } & \multicolumn{1}{c}{ \fbox{Rate} } & \multicolumn{1}{c|}{ \fbox{Amt} } \\ & & & \\ \fbox{Apples} & \fbox{5} & \fbox{\$1} & \fbox{\$5} \\ \fbox{Chicken} \fbox{Wings} & \fbox{10} & \fbox{\$2} & \fbox{\$20} \\ & & &\\ \multicolumn{4}{|r|}{ \fbox{Total:} \fbox{\$25} } \\ & & & \\ \hline \end{tabular} \caption{Example of an invoice} \label{tbl:invoice_bb} \end{subfigure} \begin{subfigure}{0.56\columnwidth} \centering \begin{tabular}{|c c c c c|} \hline \multicolumn{5}{|l|}{Invoice Relational Table} \\ \hline \textit{InvoiceID} & \multicolumn{2}{c}{\textit{Date}} & \multicolumn{2}{c|}{\textit{TotalAmt}} \\ 12345 & \multicolumn{2}{c}{12/13/2019} & \multicolumn{2}{c|}{\$25} \\ \hline \multicolumn{5}{|l|}{Line-items Relational Table} \\ \hline \textit{InvoiceID} & \textit{Desc} & \textit{Qty} & \textit{Rate} & \textit{Amt} \\ 12345 & Apples & 5 & \$1 & \$5\\ 12345 & Chicken Wings & 10 & \$2 & \$20 \\ \hline \end{tabular} \caption{Invoice record in relational tables} \label{tbl:record} \end{subfigure} \caption{Invoice image and invoice record} \end{figure} We apply this system to invoices which embody many of the challenges involved: the information they contain is highly structured, they are highly varied, and information is encoded in their 2D layout and not just in their language. A simplified example of an invoice is shown in Figure \ref{tbl:invoice_bb} with the corresponding relational-record shown in Figure \ref{tbl:record}. 
The relational-record contains structured information including header fields that each appear once in the invoice and an arbitrary number of line-items, each containing a description, quantity, rate, and amount\footnote{Other business documents are significantly more complicated, often involving recursively defined structure.}. While human annotations of document images are rarely available, these relational-records are commonly stored in business systems such as Enterprise Resource Planning (ERP) systems. Figure \ref{tbl:invoice_bb} illustrates bounding boxes surrounding each of the tokens in the invoice. Some of these bounding boxes, e.g. ``Description'', provide context by which other bounding boxes, e.g. ``Apples'', can be interpreted as corresponding to a description field. We assume that these bounding boxes are provided by an external Optical Character Recognition (OCR) process and given as input to our information extraction pipeline. Unlike alternative approaches which require annotated bounding boxes to train a bounding box classification model, our approach avoids the use of annotations for each box while requiring only relational-records. A common approach to information extraction is to classify these bounding boxes individually using machine learning and then to use heuristics to group extracted fields to recover the recursive structure. This approach has several shortcomings: 1) It requires annotated bounding boxes, where each annotation is tagged with the coordinates of the box and a known label, to train the classification model. 2) It requires heuristic post-processing to reconstruct the structured records and to resolve ambiguities. These post-processing pipelines tend to be brittle, hard to maintain, and highly tuned to specific document types. Adapting these systems to new document types is an expensive process requiring new annotations and re-engineering of the post-processing pipeline. 
We address these issues using a structured prediction approach. We model the information to be extracted using a context free grammar (CFG) which allows us to capture even complex recursive document structures. The CFG is built by adapting the record schema for the information to be extracted. Information is extracted by parsing documents with this CFG in 2D. It is important to note that the CFG is not based on the layout of the document, as with previous work in this field, but on the structure of the information to be extracted. We do not have to fully describe the document, we only describe the information that needs to be extracted. This allows us to be robust to a wide variety of document layouts. There may be many valid parses of a document. We resolve this ambiguity by extending Conditional Probabilistic Context Free Grammars (CPCFG) \cite{sutton2004conditional} in 3 directions. 1) We use deep neural networks to provide the conditional probabilities for each production rule. 2) We extend CPCFGs to 2D parsing. 3) We train them using structured prediction \cite{10.3115/1118693.1118694}. This results in a computationally tractable algorithm for handling even complex documents. Our core contributions are: 1) A method for end-to-end training in complex information extraction problems requiring no post-processing and no annotated images or any annotations associated with bounding boxes. 2) A tractable extension of CFGs to parse 2D images where the grammar reflects the structure of the extracted information rather than the layout of the document. 3) We demonstrate state-of-the-art performance in an invoice reading system, especially the problem of extracting line-item fields and line-item grouping. \section{Related work} \label{sec:related} We identify 3 areas of prior work related to ours: information extraction from invoices using machine learning, grammars for parsing document layout, and structured prediction using Deep Learning. 
The key innovation of our approach is that we use a grammar based on the information to be extracted rather than the document layout resulting in a system that requires no hand annotations and can be trained end-to-end. There has been a lot of work on extracting information from invoices. In particular a number of papers investigate using Deep Learning to classify bounding boxes. \cite{8892875} generated invoice data and used a graph convolutional neural network to predict the class of each bounding box. \cite{layoutlm} uses BERT\cite{devlin-etal-2019-bert} that integrates 2D positional information to produce contextualized embeddings to classify bounding boxes and extract information from receipts\footnote{Receipts are a simplified form of invoice.}. \cite{majumder-etal-2020-representation} also used BERT and neighborhood encodings to classify bounding boxes. \cite{denk2019bertgrid,katti-etal-2018-chargrid} use grid-like structures to encode information about the positions of bounding boxes and then to classify those bounding boxes. \cite{8892875,majumder-etal-2020-representation,layoutlm,yu2020pick} predict the class of bounding boxes and then post-process to group them into records. \cite{yu2020pick} uses graph convolutional networks together with a graph learning module to provide input to a Bidirectional-LSTM and CRF decoder which jointly labels bounding boxes. This works well for flat (non-recursive) representations such as the SROIE dataset \cite{8977955} but we are not aware of their application to hierarchical structures in more complex documents, especially on recursive structures such as line-items. \cite{hwang2019post,hwang2020spatial} provide a notable line of work which addresses the line-item problem. \cite{hwang2019post} reduce the 2D layout of the invoice to a sequence labeling task, then use Beginning-Inside-Outside (BIO) Tags to indicate the boundaries of line-items. 
\cite{hwang2020spatial} treats the line-item grouping problem as that of link prediction between bounding boxes. \cite{hwang2020spatial} jointly infers the class of each box, and the presence of a link between the pair of boxes in a line-item group. However, as with all of the above-cited work on information extraction from invoices, both \cite{hwang2019post,hwang2020spatial} require hand annotation for each bounding box, and they require post-processing in order to group and order bounding boxes into records. Documents are often laid out hierarchically and this recursive structure has been addressed using parsing. 2D parsing has been applied to images \cite{10.1145/2543581.2543593,10.5555/2986459.2986468,10.5555/2981780.2982029}, where the image is represented as a 2D matrix. The regularity of the 2D matrix allows parsing to be extended directly from 1D. The 2D approaches to parsing text documents most related to ours are from \cite{10.1109/ICDAR.2005.98,10.1109/ICCV.2005.140,Tomita1991}. \cite{Tomita1991} described a 2D-CFG for parsing document layout which parsed 2D regions aligned to a grid. \cite{10.1109/ICDAR.2005.98} describe 2D-parsing of document layout based on context free grammars. Their Rectangle Hull Region approach is similar to our 2D-parsing algorithm but yields a $\mathcal{O}(n^5)$ complexity compared to our $\mathcal{O}(n^3)$. \cite{10.1109/ICCV.2005.140} extends \cite{10.1109/ICDAR.2005.98} to use conditional probabilistic context free grammars based on Boosting and the Perceptron algorithm, while ours is based on Deep Learning with back-propagation through the parsing algorithm. Their work relies on hand-annotated documents. All of this work requires grammars describing the document layout and seeks to fully describe that layout. On the other hand, our approach provides end-to-end extraction from a variety of layouts simply by defining a grammar for the information to be extracted and without a full description of the document layout. 
To the best of our knowledge, no other work in the space of Document Intelligence takes this approach. \cite{LARI199035} provides the classical Probabilistic Context Free Grammar (PCFG) trained with the Inside-Outside algorithm, which is an instance of the Expectation-Maximization algorithm. This work has been extended in a number of directions that are related to our work. \cite{sutton2004conditional} extended PCFGs to the Conditional Random Field \cite{10.5555/645530.655813} setting. \cite{drozdov-etal-2019-unsupervised-latent} use inside-outside applied to constituency parsing where deep neural networks are used to model the conditional probabilities. Like us they train using backpropagation, however, they use the inside-outside algorithm while we use structured prediction. Other work \cite{AlvarezMelis2017TreestructuredDW} considers more general applications of Deep Learning over tree structures. While \cite{drozdov-etal-2019-unsupervised-latent,LARI199035,sutton2004conditional} are over 1D sequences, here we use deep neural networks to model the conditional probabilities in a CPCFG over 2D structures. Finally, we refer the reader to \cite{subramani2020survey} for a recent survey on Deep Learning for document understanding. \section{Problem description} \label{sec:problem} We define an information extraction problem by a universe of documents $D$ and a schema describing the structure of records to be extracted from each document $d \in D$. Each document is a single image corresponding to a page in a business document (extensions to multi-page documents are trivial). We assume that all documents $d$ are processed (e.g. by OCR software) to produce a set of bounding boxes $b = (content, x_1, y_1, x_2, y_2)$ with top left coordinates $(x_1, y_1)$ and bottom right coordinates $(x_2, y_2)$. The schema describes a tuple of named fields each of which contains a value. 
Values correspond to base types (e.g., an Integer), a list of values, or a recursively defined tuple of named fields. These schemas will typically be described by a JSON Schema, an XML Schema or via the schema of a document database. More generally, the schema is a context free grammar $G=(V, \Sigma, R, S)$ where $\Sigma$ are the terminals in the grammar and correspond to the base types or tokens, $V$ are the non-terminals and correspond to field names, $R$ are the production rules and describe how fields are constructed either from base types or recursively, and $S$ is the start symbol corresponding to a well-formed extracted record. \begin{figure}[htb] \centering \begin{tabular}{|r l l|} \hline \textbf{Invoice} & \textbf{:=} & \textbf{(InvoiceID Date LineItems TotalAmt)} ! \\ \textbf{InvoiceID} & \textbf{:=} & \textbf{STRING} $|$ (N InvoiceID) ! $|~\epsilon$ \\ \textbf{Date} & \textbf{:=} & \textbf{STRING $|$ Date Date} $|$ (N Date) ! $|~\epsilon$ \\ \textbf{TotalAmt} & \textbf{:=} & \textbf{MONEY} $|$ (N TotalAmt) ! $|~\epsilon$ \\ \textbf{LineItems} & \textbf{:=} & \textbf{LineItems LineItem} $|$ \textbf{LineItem} \\ \textbf{LineItem} & \textbf{:=} & \textbf{(Desc Qty Rate Amt)} ! $|$ (N LineItem) ! \\ \textbf{Desc} & \textbf{:=} & \textbf{STRING $|$ Desc Desc} $|$ (N Desc) !\\ \textbf{Qty} & \textbf{:=} & \textbf{NUMBER} $|$ (N Qty) ! $|~\epsilon$ \\ \textbf{Rate} & \textbf{:=} & \textbf{NUMBER} $|$ (N Rate) ! $|~\epsilon$ \\ \textbf{Amt} & \textbf{:=} & \textbf{MONEY} $|$ (N Amt) ! $|~\epsilon$ \\ N & := & N N $|$ STRING \\ \hline \end{tabular} \caption{Grammar} \label{tbl:grammar} \end{figure} An example grammar is illustrated in Figure \ref{tbl:grammar}. Reading only the content in \textbf{bold} gives the rules of $G$, which represent the main information that we want to extract. Here $\Sigma=\{$STRING, NUMBER, 
MONEY$\}$, $V=\{$Invoice, InvoiceID, Date, TotalAmt, LineItems, LineItem, Desc (Description), Qty (Quantity), Rate, Amt$\}$\footnote{Note that this is not in CNF but the conversion is straightforward \cite{cole2007converting}.}. The goal of information extraction is to find the parse tree $t_d \in G$ corresponding to the record of interest. \section{Approach} \label{sec:approach} We augment $G$ to produce $G'=(V', \Sigma', R', S')$, a CPCFG whose parse trees $t'_d$ can be traversed to produce a $t_d \in G$. Below we assume (without loss of generality) that all grammars are in Chomsky Normal Form (CNF) and hence all parse trees are binary. The set of bounding boxes for a document $d$ may be recursively partitioned to produce a binary tree. We only consider partitions that correspond to vertical or horizontal partitions of the document region so that each subtree corresponds to a rectangular region $B$ of the document image\footnote{This approach is built on the assumption (by analogy with sentences) that documents are read contiguously. This assumption likely does not hold for general images where occlusion may require non-contiguous scanning to produce an interpretation.}. Each such region contains a set of bounding boxes. We consider any two regions $B_1$ and $B_2$ equivalent if they contain exactly the same set of bounding boxes. The leaves of a partition tree each contain single bounding boxes. We extend the tree by appending a final node to each leaf that is the bounding box itself. We refer to this extension as the partition tree for the remainder of the paper. The contents of these bounding boxes are mapped to the terminals $\Sigma'=\Sigma \cup \{$NOISE$, \epsilon\}$. The special NOISE token is used for any bounding box contents that do not map to a token in $\Sigma$. The special $\epsilon$ token is used to indicate that some of the Left-Hand-Side (LHS) non-terminals can derive the empty string. 
Document fields indicated with $\epsilon$ are optional and can be missing from the document. We handle this special case within the parsing algorithm. We augment the non-terminals $V'=V \cup \{$N$\}$ where non-terminal N captures regions of the image that contain no extractable information. We augment the rules $R$ by adding rules dealing with the N and NOISE symbols. Every rule $X\rightarrow YZ$ is augmented with a production $X \rightarrow NX$ and every rule $A \rightarrow \alpha$ is augmented with a rule $A \rightarrow NA$. In many cases the record information in a document may appear in arbitrary order. For example, the order of line-items in an invoice is irrelevant to the meaning of the data. We introduce the suffix ``!'' on a production to indicate that all permutations of the preceding list of non-terminals are valid. This is illustrated in Figure \ref{tbl:grammar} where the modifications are not in \textbf{bold}. Leaves of a partition tree are labeled with the terminals mapped to their bounding boxes. We label the internal nodes of a partition tree with non-terminals in $V'$ bottom up. Node $i$ corresponding to region $B_i$ is labeled with any $X \in V'$ where $\exists (X \rightarrow YZ) \in R'$ such that the children of node $i$ are labeled with $Y$ and $Z$. We restrict our attention to document partition trees for which such a label exists and refer to the set of all labels of such trees as $T'_d$ and a single tree as $t'_d$ (with a minor abuse of notation). We recover a tree $t_d \in G$ from $t'_d$ by removing all nodes labeled with N or NOISE and re-connecting in the obvious way. We say that such a $t_d$ is compatible with $t'_d$. By adding weights to each production, we convert $G'$ to a CPCFG. Trees are assigned a score $s(t'_d)$ by summing the weights of the productions at each node. 
Here the weights are modeled by a deep neural network $m_{X\rightarrow YZ}(B_1, B_2)$ applied to a node labeled by $X$ with children labeled by $Y$ and $Z$ and contained in regions $B_1$ and $B_2$ respectively. \subsection{Parsing} \label{sec:parsing} \begin{figure}[tb] \centering \includegraphics[width=\textwidth]{imgs/cyk2d} \caption{An example of CYK 2D parsing on a simple document and grammar. The valid parse tree is the one on the left with \textbf{bolded arrows} $\rightarrow$.} \label{fig:cyk2d} \end{figure} We can now solve the information extraction problem by finding the tree $t'_d$ with the highest score by running a chart parser. A typical 1D chart parser creates a memo $c[$sentence span$][X]$ where $X \in V$ and ``sentence span'' corresponds to a contiguous span of the sentence being parsed and is usually represented by indexes $i,j$ for the start and end of the span so that the memo is more typically described as $c[i][j][X]$. The memo contains the score of the highest scoring subtree that could produce the sentence span and be generated from the non-terminal $X$. The memo can be constructed (for a sentence of length $n$) top down starting from $c[0][n][S]$ with the recursion: \begin{align} \label{eqn:dp_1} c[i][j][X] = \max_{(X \rightarrow YZ) \in R} ~ \max_{i \leq k < j} \Big( w_{X \rightarrow YZ}(i,k,j) + c[i][k][Y] + c[k][j][Z] \Big) \end{align} where $w_{X \rightarrow YZ}$ is the weight associated with the rule $X \rightarrow YZ$. It is easy to see that the worst-case time complexity of this algorithm is $O(n^3 |R|)$. We extend this algorithm to deal with 2D documents. In this case the memo $c[B][X]$ contains the score of the highest scoring sub-tree that could produce the region $B$ of the document image from the non-terminal $X$. 
This results in a top down algorithm recursively defined as follows: \begin{align} \label{eqn:dp_2} c[B][X] = \max_{(X \rightarrow YZ) \in R'} ~ \max_{B_1, B_2 \in Part(B)} \Big(& m_{X\rightarrow YZ}(B_1, B_2) \\ \nonumber &\quad + c[B_1][Y] + c[B_2][Z] \Big) \end{align} where we consider $Part(B)$ defined as partitions of $B$ obtained by splitting horizontally or vertically between adjacent pairs of bounding boxes in $B$, i.e. $B = B_1 \cup B_2, B_1 \cap B_2 = \emptyset$. There are $n-1$ such horizontal splits and $n-1$ such vertical splits. The worst-case time complexity of this algorithm is $O(n^3 |R'|)$ \footnote{The cost of calculating $c$ can be further reduced by using beam search.}. Overloading notation, we say $s(d) = c[d][S']$ provides the score for the highest scoring tree for $d$. We can recover the tree itself by maintaining back-pointers as in a typical chart parser. We assign a score to $t_d \in G$ as the maximum score over all $t'_d$ with which it is compatible. One may refer to Figure \ref{fig:cyk2d} for an illustration of how 2D parsing is done using the CYK algorithm. 
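To make the recursion in Equation \ref{eqn:dp_2} concrete, the following is a deliberately simplified, illustrative sketch (not the system's implementation): regions are sets of boxes, $Part(B)$ enumerates straight horizontal and vertical cuts between adjacent boxes, fixed scalar rule weights stand in for the neural models $m_{X \rightarrow YZ}$, and back-pointers for tree recovery are omitted. The function name `parse` and the box representation are our own choices for the sketch.

```python
from functools import lru_cache

def parse(boxes, binary_rules, lexical_rules, start):
    """Score of the best 2D parse of a set of OCR boxes, or None.

    boxes:         tuple of (token, x, y) triples.
    binary_rules:  {(X, Y, Z): weight} for CNF productions X -> Y Z.
    lexical_rules: {(X, token): weight} for productions X -> token.
    """
    def splits(region):
        # Part(B): straight cuts between adjacent boxes, first sorted
        # by x (vertical cuts), then by y (horizontal cuts).
        for axis in (1, 2):
            order = sorted(region, key=lambda b: b[axis])
            for k in range(1, len(order)):
                if order[k - 1][axis] < order[k][axis]:  # clean cut only
                    yield frozenset(order[:k]), frozenset(order[k:])

    @lru_cache(maxsize=None)            # the memo c[B][X]
    def best(region, symbol):
        if len(region) == 1:            # leaf: a single bounding box
            (box,) = region
            return lexical_rules.get((symbol, box[0]))
        candidates = []
        for b1, b2 in splits(region):
            for (x, y, z), w in binary_rules.items():
                if x != symbol:
                    continue
                s1, s2 = best(b1, y), best(b2, z)
                if s1 is not None and s2 is not None:
                    candidates.append(w + s1 + s2)
        return max(candidates, default=None)

    return best(frozenset(boxes), start)
```

The memoization over (region, symbol) pairs mirrors the chart $c[B][X]$; the real system additionally stores back-pointers and embeddings, and scores each split with the learned models rather than a weight table.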
\subsection{Learning} \label{sec:learning} \begin{algorithm}[tb] \caption{The neural network model of the unary rules} \label{alg:f_unary} \begin{algorithmic} \Function{$m_{X \rightarrow x}$}{$bb, lm$} \Comment{$bb$: the bounding box, $lm$: the language model} \State $h_0$ = forward($lm, bb$) \Comment{Get language model vector} \State $h$ = GELU($W_0^{X \rightarrow x} h_0 + b_0^{X \rightarrow x}$) \Comment{Hidden vector} \State $s$ = $W_1^{X \rightarrow x} h + b_1^{X \rightarrow x}$ \Comment{Score for Non-terminal $X$} \State \Return $s, h$ \EndFunction \end{algorithmic} \end{algorithm} \begin{algorithm}[tb] \caption{The neural network model of the binary rules} \label{alg:f_binary} \begin{algorithmic} \Function{$m_{X \rightarrow YZ}$}{$B, B_1, B_2, h_d, M$} \Comment{$B$: the region under consideration.}\\ \Comment{$B_1, B_2$: the sub-regions for one of the partitions of $B$.}\\ \Comment{$h_d$: the embedding representing the direction of the partition.}\\ \Comment{$M$: the memoization of the dynamic program.} \State $h_1, h_2 = M[B_1][Y][h], M[B_2][Z][h]$ \State $h$ = GELU($W_1^{X \rightarrow YZ} h_1 + W_2^{X \rightarrow YZ} h_2 + W_d^{X \rightarrow YZ} h_d + b_0^{X \rightarrow YZ}$) \Comment{Hidden vector} \State $s = W_3^{X \rightarrow YZ} h + b_3^{X \rightarrow YZ}$ \Comment{Score for Non-terminal $X$} \State \Return $s, h$ \EndFunction \end{algorithmic} \end{algorithm} Given training data consisting of pairs $(d, t_d)$ with $d \in D$ a document and $t_d \in G$ our goal is to learn the parameters of the models $m_r$ such that $t_d$ is the highest scoring tree for $d$. We achieve this following the structured prediction approach \cite{10.3115/1118693.1118694} and minimizing the structured prediction loss. 
\begin{equation} \label{eqn:obj2} \sum_{d \in D} s(\hat{t'_d}) - s(\bar{t'_d}) \end{equation} Here $\bar{t'_d}$ is the highest scoring tree compatible\footnote{We will address compatibility in Section \ref{sec:compatibility}.} with $t_d$ (the correct tree) and $\hat{t'_d}$ is the highest scoring tree from the dynamic program. Intuitively we aim to increase the scores of correct trees and decrease the scores of incorrect trees. We perform this minimization using gradient descent and back-propagation on Equation \ref{eqn:obj2}. Each model $m_r$ is a deep neural network and the score $s(t'_d)$ is computed recursively as a function of these models. We can back-propagate through this recursion to jointly train all of the $m_r$. It remains to describe the models $m_r$. Each model outputs both a score for the production at a given tree node and an embedding meant to represent the sub-tree under that node. The models for terminal rules $X \rightarrow x$ where $x \in \Sigma$ take as input a target bounding box and any context that might be relevant to labeling that bounding box, such as the coordinates of the bounding box. Intuitively these models predict which non-terminal labels a given bounding box. For simplicity of presentation, we describe a relatively simple class of models. Algorithm \ref{alg:f_unary} shows the model $m_{X \rightarrow x}$ used in Equation \ref{eqn:dp_1}. The forward function in Algorithm \ref{alg:f_unary} produces a vectorized representation of the given bounding box. One could either use a language model with pretrained weights or train the model end-to-end as part of the learning process. Many architectures are possible for such a model including ones based on language embeddings (e.g., BERT \cite{devlin-etal-2019-bert}) that embed only the contents of the bounding box, and ones which aim to take document image, layout, and format into account (e.g., LayoutLM \cite{layoutlm}). 
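For intuition about minimizing Equation \ref{eqn:obj2}: since a tree's score is a sum over the productions at its nodes, the gradient of the loss with respect to the score of a given rule is simply how often that rule appears in the predicted tree minus how often it appears in the compatible tree. With plain per-rule scalar weights standing in for the neural models $m_r$ (an illustrative simplification, not the system described here), one gradient step reduces to a perceptron-style update:

```python
def structured_step(weights, predicted_rules, compatible_rules, lr=0.1):
    """One gradient step on loss = s(predicted) - s(compatible), where
    the score of a tree is the sum of per-rule weights over its nodes.

    weights: dict rule -> scalar weight (updated in place and returned).
    predicted_rules / compatible_rules: lists of the productions used in
    the highest-scoring tree and in the best compatible tree.
    """
    for r in predicted_rules:       # d(loss)/d(w_r) gets +1 per use here
        weights[r] = weights.get(r, 0.0) - lr
    for r in compatible_rules:      # ... and -1 per use in the gold tree
        weights[r] = weights.get(r, 0.0) + lr
    return weights
```

When the predicted tree equals the compatible tree the two updates cancel and the loss contribution is zero, which is the fixed point that training seeks; replacing the scalar weights with neural scores turns the same subgradient into ordinary back-propagation.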
When we integrate our DeepCPCFG model and a language model, it results in an encoder-decoder architecture. The encoder is the language model (e.g. LayoutLM), which takes as input the bounding box coordinates and text and produces a word embedding for each bounding box. The word embeddings are given to the decoder, DeepCPCFG, which produces a parse tree reflecting the document hierarchy. Geometric information is captured explicitly by LayoutLM and then implicitly again during 2D parsing. The models for non-terminal rules $X \rightarrow YZ$ take as inputs the sub-regions $B_1, B_2 \in Part(B)$ such that $B = B_1 \cup B_2, B_1 \cap B_2 = \emptyset$ where $B, B_1, B_2$ are labeled with $X, Y, Z$ respectively, and output a score for $X$ and an embedding vector for $B$ representing $X$. Algorithm \ref{alg:f_binary} shows the implementation of the function $m_{X \rightarrow YZ}$ (used in Equation \ref{eqn:dp_2}), as a derivation of a Tree Convolutional Block \cite{harer2019treetransformer}. The matrices $W_i^{r}$ with biases $b_i^r$ are the learned parameters of each model $m_r$ and $M[B][X][h]$ is the embedding for the best scoring sub-tree associated with $B$ and generated by $X$. These embedding vectors are stored in the memo $M$ of the chart parser. \subsection{Compatible tree for structured prediction} \label{sec:compatibility} We showed in Equation \ref{eqn:obj2} of Section \ref{sec:learning} that a compatible tree $\bar{t'_d}$ with the annotation of $d$ is required as part of learning the parameters for a Structured Prediction model. We define a tree $\bar{t'_d}$ to be most \emph{compatible} if it has the \emph{smallest} tree edit-distance \cite{zhang1989simple} compared with the hierarchical structure derived from the relational-record of the document $d$. However, the ordering of columns (tree branches) within a relational database can interfere with how tree edit-distance is computed. 
Therefore, we propose to relax the ordering of fields within the document, and the fields within the recurrent line-items to derive a variant of tree edit-distance, which we call Hierarchical Edit-Distance (HED). In HED, we only require the ordering of line-items within a document and words within a field remain the same, while the ordering of fields within a line-item may be permuted without impacting the distance. \begin{align} \label{eqn:hed} \operatorname{HED}(x, y) = \sum_{f \in H} \operatorname{SED}(x_f, y_f) + \operatorname{LiSeqED}(x_{\text{li}}, y_{\text{li}}) \end{align} $H$ refers to the set of header fields: \{InvoiceID, Date, TotalAmt\}. SED stands for Levenshtein String Edit Distance. LiSeqED (Line-item sequence edit-distance) is defined by Equation \ref{eqn:liseqed}. $x_{\text{li}}$ and $y_{\text{li}}$ represent the line-items of $x$ and $y$. \begin{align} \label{eqn:liseqed} \operatorname{LiSeqED}(x, y) = \begin{cases} \sum_{i=1}^{|x|} \operatorname{LiED}(x_{i}, \emptyset) \quad \text{if} ~ |y| = 0 \\ \sum_{i=1}^{|y|} \operatorname{LiED}(\emptyset, y_{i}) \quad \text{if} ~ |x| = 0 \\ \text{otherwise:} \\ \min \begin{cases} \operatorname{LiED}(x_1, y_1) + \operatorname{LiSeqED}(\operatorname{tail}(x), \operatorname{tail}(y)) \\ \operatorname{LiED}(x_1, \emptyset) + \operatorname{LiSeqED}(\operatorname{tail}(x), y) \\ \operatorname{LiED}(\emptyset, y_1) + \operatorname{LiSeqED}(x, \operatorname{tail}(y)) \\ \end{cases} \end{cases} \end{align} where LiED is defined by Equation \ref{eqn:lied}, tail is a function that returns the rest of the list except the first element in the list. $\emptyset$ represents an empty line-item. \begin{align} \label{eqn:lied} \operatorname{LiED}(x, y) = \sum_{f \in G} \operatorname{SED}(x_f, y_f) \end{align} where $G$ represents the set of line-item fields \{Desc, Qty, Rate, Amt\}. 
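Equations \ref{eqn:hed}--\ref{eqn:lied} translate directly into code. The sketch below is an illustrative transcription, not the evaluation code: records are plain dicts keyed by the schema's field names, missing fields are treated as empty strings, and line-items are kept in document order as Equation \ref{eqn:liseqed} requires.

```python
from functools import lru_cache

HEADER_FIELDS = ("InvoiceID", "Date", "TotalAmt")   # H in Eq. (4)
LINEITEM_FIELDS = ("Desc", "Qty", "Rate", "Amt")    # G in Eq. (6)

def sed(a, b):
    """Levenshtein string edit-distance (SED)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def lied(x, y):
    """Eq. (6): field-wise SED between two line-items; field order is
    irrelevant by construction, matching HED's relaxed ordering."""
    return sum(sed(x.get(f, ""), y.get(f, "")) for f in LINEITEM_FIELDS)

def liseq_ed(xs, ys):
    """Eq. (5): order-preserving alignment of two line-item sequences."""
    empty = {}
    @lru_cache(maxsize=None)
    def d(i, j):
        if i == len(xs):
            return sum(lied(empty, y) for y in ys[j:])
        if j == len(ys):
            return sum(lied(x, empty) for x in xs[i:])
        return min(lied(xs[i], ys[j]) + d(i + 1, j + 1),
                   lied(xs[i], empty) + d(i + 1, j),
                   lied(empty, ys[j]) + d(i, j + 1))
    return d(0, 0)

def hed(x, y):
    """Eq. (4): header-field SEDs plus the line-item sequence distance."""
    return (sum(sed(x.get(f, ""), y.get(f, "")) for f in HEADER_FIELDS)
            + liseq_ed(tuple(x.get("LineItems", ())),
                       tuple(y.get("LineItems", ()))))
```

A distance of zero means the two records agree exactly on every header field and every line-item, which is the criterion used both for selecting the compatible tree and for evaluation.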
Using HED, we can obtain the compatible (smallest edit-distance) tree $\bar{t'_d}$ in the same way as we obtained the highest scoring parse tree $\hat{t'_d}$ by using HED as the scoring function then taking the minimum instead of maximum in Equations \ref{eqn:dp_1} and \ref{eqn:dp_2}. We will re-use HED when evaluating the results of our experiments. \section{Experiments} \label{sec:experiments} \begin{figure}[tb] \centering \begin{subfigure}[b]{0.3\textwidth} \includegraphics[width=\textwidth]{imgs/cord-example1} \caption{CORD receipt} \label{fig:cord-example1} \end{subfigure} \begin{subfigure}[b]{0.6\textwidth} \includegraphics[width=\textwidth]{imgs/ARIA_CDIP_Test_91545246} \caption{RVL-CDIP invoice} \label{fig:ARIA_CDIP_Test_91545246} \end{subfigure} \caption{Examples of structured documents} \label{fig:example_docs} \end{figure} Prior research on business documents, particularly invoices, has been limited by available public datasets. We are not aware of public datasets that provide structured extraction from business documents or invoices. In the FUNSD dataset \cite{jaume2019} annotations reflect record linkages rather than hierarchical structures. The RVL-CDIP \cite{harley2015icdar} dataset provides class labels for a document classification task rather than line-item annotations for information extraction. \cite{Gralinski2020KleisterAN} provides a dataset of business documents for information extraction but its documents do not contain line-items. The receipts used in SROIE \cite{8977955} have line-items but those are not annotated. \cite{denk2019bertgrid,katti-etal-2018-chargrid,yu2020pick,majumder-etal-2020-representation} report results only on proprietary sets of invoices. To evaluate our model on line-items we ran our experiments on three datasets summarized in Table \ref{tbl:dataset}. The first dataset from \cite{hwang2019post,hwang2020spatial} consists of the CORD receipt data. 
The second consists of proprietary invoices for which we have both hand annotations and relational-records. We created the third dataset by hand annotating invoices in the RVL-CDIP collection \cite{harley2015icdar}. We use the RVL-CDIP invoices solely for testing and welcome other researchers to compare with our results \footnote{We release RVL-CDIP data and metric code at \url{https://github.com/deepcpcfg}}. \begin{table}[t] \centering \caption{Dataset sizes} \label{tbl:dataset} \begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{} & Receipts & \multicolumn{2}{c|}{Invoices} \\ \cline{2-4} & CORD & Proprietary & RVL-CDIP \\ \hline Training & 800 & 17938 & 0 \\ Validation & 100 & 2085 & 0 \\ Testing & 100 & 2516 & 869 \\ \hline \end{tabular} \end{table} Our preliminary experiments demonstrated that the language model used in Algorithm \ref{alg:f_unary} should be fine-tuned on relevant documents. In the experiments below we fine-tune Layoutlm\cite{layoutlm} as follows: using the untrained CFG we derive compatible trees from the proprietary invoices' relational-records. For each invoice $d$, we obtain the compatible tree $\bar{t'_d}$, which provides classes for each bounding box (token) from the leaves of the compatible trees. These leaves are used to fine-tune a LayoutLM model on the token classification task. \subsection{Results on CORD} Table \ref{tbl:cord} provides a comparison between \cite{hwang2020spatial} and DeepCPCFG. We compare against the best results reported in Table 9 of \cite{hwang2020spatial} using the SPADE metric described in the appendix of \cite{hwang2020spatial}\footnote{At the time of writing, \cite{hwang2020spatial} have not released their code. We re-implemented SPADE metric to the best of our understanding based on communication with the authors. We release the metric implementation and output files at \url{ https://github.com/deepcpcfg} for anyone to compare or verify our results.}. 
The SPADE (Spatial Dependency Parsing) metric can be seen as a special case of HED if SED in Equations \ref{eqn:hed} and \ref{eqn:lied} is implemented using exact string match as follows: \begin{align} \operatorname{SED}_{SPADE}(x,y) = \begin{cases} 0 & \quad \text{if} ~ x = y \\ 1 & \quad \text{if} ~ x \neq y \end{cases} \end{align} Using the dataset provided by \cite{hwang2020spatial} we derive relational-records (see Figure \ref{tbl:record}) for our DeepCPCFG model. In Table \ref{tbl:cord}, \cite{hwang2020spatial} is trained using hand annotations while DeepCPCFG makes no use of those annotations and is therefore at a significant disadvantage. Although DeepCPCFG can leverage hand annotations when they are present, we chose not to use them to emphasize the power of DeepCPCFG when trained end-to-end. Overall DeepCPCFG achieves comparable results to \cite{hwang2020spatial} despite not being trained on hand annotations. Given that there are only 100 receipts in the holdout test set, the numbers we report depend substantially on 1 or 2 receipts. When we inspected DeepCPCFG's errors we found rotated or distorted receipts (see Figure \ref{fig:cord-example1}). DeepCPCFG's performance is compelling given that DeepCPCFG focuses on learning an end-to-end model for formally scanned business documents while \cite{hwang2020spatial} specializes in extraction from photos taken using handheld cameras. \begin{table}[tb] \centering \caption{F1 results on CORD using SPADE metric} \label{tbl:cord} \begin{tabular}{|m{0.25\columnwidth}|J|L|L|L|L|K|} \hline Model & {\bf Overall} & Desc & Qty & Rate & Amt & TotalAmt \\ \hline Hwang et al.
\cite{hwang2020spatial} & 90.1 & \textbf{91.6} & \textbf{92.1} & 91.6 & \textbf{93.4} & \textbf{96.9} \\ \hline DeepCPCFG & \textbf{92.2} & 88.7 & 90.6 & \textbf{96.4} & 91.7 & 95.0 \\ \hline \end{tabular} \end{table} \subsection{Results on invoices} Scanned invoices reflect the primary objective and motivation for our research, that is, information extraction from complex business documents. By comparison, the CORD receipt dataset is much smaller and simpler. In Table \ref{tbl:hed-proprietary-rvl-cdip} we report results of 3 models trained on our proprietary invoices. The models are evaluated on a holdout test set of proprietary invoices and on an unseen set of invoices from RVL-CDIP. In all cases we train only on relational-records and do not make use of hand annotations. For invoices, we report results based on HED which allows for mismatches due to OCR errors. HED is implemented by tracking the number of unchanged characters (true positives), insertions (false negatives) and deletions (false positives) required to transform each prediction into its respective annotation. Replacements are treated as deletion/insertion pairs. These counts then allow us to derive precision, recall and f1 metrics. First, we investigate DeepCPCFG with pre-trained Layoutlm as the language model. We compare this to DeepCPCFG using fine-tuned Layoutlm and note a significant improvement in performance on both datasets.\footnote{In each case the optimal number of epochs was chosen using the validation set.} We also examine the performance of untrained DeepCPCFG (Epoch 0) and observe that training DeepCPCFG provides a dramatic improvement in performance especially for the Desc field which is the most dependent on structure as it is composed of multiple bounding boxes. 
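The character-level bookkeeping described above can be sketched as follows. Since replacements are decomposed into deletion/insertion pairs, the unchanged characters of an optimal edit script form a longest common subsequence; this is an illustrative reimplementation, not our exact evaluation code.

```python
# Sketch of deriving precision/recall/F1 from edit operations.  With
# replacements treated as deletion/insertion pairs, the unchanged characters
# are exactly a longest common subsequence of prediction and annotation.

def edit_counts(pred, gold):
    m, n = len(pred), len(gold)
    lcs = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            lcs[i + 1][j + 1] = lcs[i][j] + 1 if pred[i] == gold[j] \
                else max(lcs[i][j + 1], lcs[i + 1][j])
    tp = lcs[m][n]   # unchanged characters (true positives)
    fp = m - tp      # deletions from the prediction (false positives)
    fn = n - tp      # insertions from the annotation (false negatives)
    return tp, fp, fn

def prf1(pairs):
    # aggregate counts over (prediction, annotation) pairs, then derive metrics
    tp = fp = fn = 0
    for pred, gold in pairs:
        t, p, f = edit_counts(pred, gold)
        tp, fp, fn = tp + t, fp + p, fn + f
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```
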
\begin{table}[tb] \centering \caption{F1 results on proprietary invoices using HED metric} \label{tbl:hed-proprietary-rvl-cdip} \begin{tabular}{|m{0.32\columnwidth}|L|Z|Z|Z|Z|J|Z|J|} \hline \multicolumn{1}{|c|}{Model} & \scriptsize{\bf Overall} & Desc & Qty & Rate & Amt & \scriptsize{InvoiceID} & Date & \scriptsize{TotalAmt} \\ \hline \hline \multicolumn{9}{|c|}{Proprietary Invoices} \\ \hline DeepCPCFG pre-trained LayoutLM (Epoch 3) & 67.2 & 61.7 & 70.0 & 70.7 & 77.2 & 77.8 & 86.2 & 84.1 \\ \hline DeepCPCFG fine-tuned LayoutLM (Epoch 0) & 73.5 & 68.7 & 81.2 & 83.4 & 81.8 & 92.8 & 86.3 & 80.9 \\ \hline DeepCPCFG fine-tuned LayoutLM (Epoch 1) & 82.2 & 79.1 & 86.8 & 89.5 & 88.6 & 93.0 & 88.2 & 83.9 \\ \hline Compatible Trees & 95.6 & 97.3 & 91.5 & 95.2 & 92.8 & 94.8 & 89.6 & 87.5 \\ \hline \hline \multicolumn{9}{|c|}{RVL-CDIP Invoices} \\ \hline DeepCPCFG pre-trained LayoutLM (Epoch 3) & 55.2 & 52.0 & 38.4 & 45.6 & 60.0 & 53.6 & 75.2 & 68.1 \\ \hline DeepCPCFG fine-tuned LayoutLM (Epoch 0) & 63.1 & 60.0 & 48.7 & 57.2 & 66.8 & 75.5 & 83.7 & 68.5 \\ \hline DeepCPCFG fine-tuned LayoutLM (Epoch 1) & 70.5 & 69.0 & 55.2 & 57.0 & 73.5 & 74.6 & 84.5 & 71.6 \\ \hline Compatible Trees & 89.9 & 92.3 & 80.0 & 88.2 & 83.3 & 85.2 & 88.5 & 80.7 \\ \hline \end{tabular} \end{table} The HED metric is sensitive to OCR or annotation errors. The rows ``Compatible Trees'' in Table \ref{tbl:hed-proprietary-rvl-cdip} show the quality of the compatible trees derived using the relational-records annotations. These values reflect the best results possible on the holdout test set given OCR and annotation errors. The RVL-CDIP dataset is rather old and its scanned images are typically noisy or of poor quality (see Figure \ref{fig:ARIA_CDIP_Test_91545246}) resulting in diminished OCR quality. This leads to deteriorated performance for the RVL-CDIP dataset in Table \ref{tbl:hed-proprietary-rvl-cdip}. 
\subsubsection{Relation to bounding box classification} While our goal is to extract structured information from complex documents in an end-to-end fashion it is informative to compare against methods that classify bounding boxes based on hand annotations. In Table \ref{tbl:hed-proprietary-rvl-cdip1} we compare DeepCPCFG against a Layoutlm based bounding box classifier. Note that these results measure the number of bounding boxes where the model and the human annotator disagree on their label. We first examine the performance of bounding box classifications produced by taking the leaves from the compatible tree (``Leaf-annotations''). This process uses the true relational-records on the test data and identifies the bounding boxes that best recover those records. This performs poorly particularly on fields like Rate, Amt, Date, and TotalAmt whose values may appear multiple times in an invoice, due to significant annotation errors. This illustrates the difference between ``ground truth" when evaluating against relational-records rather than human annotations. Relational-records better reflect real-world objectives in most applications and this result suggests that human annotations are a rather poor proxy for evaluating these objectives. Next, we examine the performance of DeepCPCFG which has been trained based on the compatible trees from the training data, as such it is penalized in this evaluation in the same way that the ``Leaf-annotations" are. Notably DeepCPCFG obtains results comparable to and sometimes better than the ``Leaf-annotations" on the test data. From these experiments we see that the problem of information extraction is substantially different from the problem of classifying bounding boxes. Despite this we see that DeepCPCFG is quite effective at classifying bounding boxes even when trained without any hand annotations. 
\begin{table}[tb] \centering \caption{F1 classification results on proprietary invoices evaluated on hand-annotations as ground truth} \label{tbl:hed-proprietary-rvl-cdip1} \begin{tabular}{|m{0.32\columnwidth}|L|Z|Z|Z|Z|J|Z|J|} \hline \multicolumn{1}{|c|}{Model} & \scriptsize{\bf Overall} & Desc & Qty & Rate & Amt & \scriptsize{InvoiceID} & Date & \scriptsize{TotalAmt} \\ \hline LayoutLM with token classification from Hand-annotations & 93.3 & 92.0 & 96.4 & 95.3 & 96.5 & 96.6 & 97.2 & 89.3 \\ \hline Leaf-annotations & 86.4 & 95.6 & 87.9 & 61.3 & 65.4 & 88.8 & 72.2 & 52.7 \\ \hline DeepCPCFG fine-tuned with LayoutLM using leaf-annotations (Epoch 1) & 81.4 & 88.1 & 87.3 & 58.2 & 64.0 & 88.7 & 75.9 & 52.1 \\ \hline \end{tabular} \end{table} \section{Conclusion} We have described and demonstrated a method for end-to-end structured information extraction from business documents such as invoices and receipts. This work enhances existing capabilities by removing the need for brittle post-processing, and by reducing the need for annotated document images. Our method ``parses'' a document image using a grammar that describes the information to be extracted. The grammar does not describe the layout of the document or significantly constrain that layout. This method yields compelling results. However, research on this important problem is limited by the lack of available benchmark data sets which has slowed development and stymied comparisons. In order to alleviate this, we released a new public evaluation set based on the RVL-CDIP data. Related ``2D parsing'' approaches have been previously explored for image analysis and we believe that the effectiveness of our approach suggests a broader re-examination of grammars in image understanding particularly in combination with Deep Learning. We have gone beyond existing work in this space by \cite{8892875,10.1109/ICDAR.2005.98,hwang2020spatial,subramani2020survey,layoutlm,yu2020pick}. 
The success of DeepCPCFG, together with earlier work in this space, shows the value of combining structured models with Deep Learning. Our experimental results illustrate the significant gap between information extraction and recovering labels from hand annotations. We believe that evaluations based on recovering relational-records best reflect real-world use cases. In ongoing work, we are applying this technique to a wide variety of structured business documents including tax forms, proofs of delivery, and purchase orders. \subsubsection{Disclaimer} The views reflected in this article are the views of the authors and do not necessarily reflect the views of the global EY organization or its member firms. \subsubsection{Acknowledgements} The authors would like to thank the following colleagues: David Helmbold, Ashok Sundaresan, Larry Kite, Chirag Soni, Mehrdad Gangeh, Tigran Ishkhanov and Hamid Motahari. \bibliographystyle{splncs04}
\title{Analog models of computations \& Effective Church Turing Thesis: \\ Efficient simulation of Turing machines by the General Purpose Analog Computer} \maketitle \begin{abstract} \emph{Are analog models of computations more powerful than classical models of computations?} From a series of recent papers, it is now clear that many realistic analog models of computations are provably equivalent to classical digital models of computations from a \emph{computability} point of view. Take, for example, probably the most realistic model of analog computation, the General Purpose Analog Computer (GPAC) model from Claude Shannon, a model for Differential Analyzers, which are analog machines used from the 1930s to the early 1960s to solve various problems. It is now known that functions computable by Turing machines are provably exactly those that are computable by GPACs. This paper is about the next step: understanding whether this equivalence also holds at the \emph{complexity} level. In this paper we show that a realistic model of analog computation -- namely the General Purpose Analog Computer (GPAC) -- can simulate Turing machines in a computationally efficient manner. More concretely, we show that, modulo polynomial reductions, computations of Turing machines can be simulated by GPACs, without the need of using more (space) resources than those used in the original Turing computation. As we recently proved that functions computable by a GPAC in polynomial time with similar restrictions can always be simulated by a Turing machine, this opens the way to, first, a proof that realistic analog models of computations do satisfy the effective Church Turing thesis, and second, to a well founded theory of complexity for analog models of computations. \end{abstract} \section{Introduction} The Church-Turing thesis is a cornerstone result in theoretical computer science.
It states that any (discrete time, digital) computational model which captures the notion of algorithm is computationally equivalent to the Turing machine (see e.g.~\cite{Odi89}, \cite{Sip05}). It also relates various aspects of models in a very surprising and strong way. When considering non-discrete time or non-digital models, the situation is far from being so clear. In particular, when considering models working over real numbers, several models are clearly not equivalent \cite{Bra00}. However, a question of interest is whether physically \emph{realistic} models of computation over the real numbers are equivalent, or can be related. Some of the results of non-equivalence involve models, like the BSS model \cite{BSS89}, \cite{BCSS98}, which are claimed not to be physically realistic \cite{Bra00} (although they certainly are interesting from an algebraic perspective), or models which depend critically on the use of exact precision computation for obtaining super-Turing power, e.g.~\cite{AM98}, \cite{BBKT01}. Realistic models of computation over the reals clearly include the \emph{General Purpose Analog Computer (GPAC)}, an analog continuous-time model of computation and \emph{Computable Analysis}. The GPAC is a mathematical model introduced by Shannon \cite{Sha41} of an earlier analog computer, the Differential Analyzer. The first general-purpose Differential Analyzer is generally attributed to Vannevar Bush \cite{Bus31}. Differential Analyzers have been used intensively up to the 1950's as computational machines to solve various problems from ballistic to aircraft design, before the era of the digital computer \cite{Nyc96}. Computable analysis, based on Turing machines, can be considered as today's most used model for talking about computability and complexity over reals. In this approach, real numbers are encoded as sequences of discrete quantities and a discrete model is used to compute over these sequences. 
More details can be found in the books \cite{PR89}, \cite{Ko91}, \cite{Wei00}. Since this model is clearly based on classical (digital and discrete-time) models like Turing machines, and since such models are accepted as realistic models of today's computers, this approach can clearly be considered to deal with a realistic model of computation. Understanding whether there could exist something similar to a Church-Turing thesis for analog models of computation, or whether analog models of computation could be more powerful than today's classical models of computation, motivated us to try to relate GPAC computable functions to functions computable in the sense of recursive analysis. The paper \cite{BCGH07} was a first step towards the objective of obtaining a version of the Church-Turing thesis for physically feasible models over the real numbers. This paper proves that, from a computability perspective, Computable Analysis and the GPAC are equivalent: GPAC computable functions are computable and, conversely, functions computable by Turing machines or in the computable analysis sense can be computed by GPACs. However this is about \emph{computability}, and not \emph{complexity}. This proves that one cannot solve more problems using analog models. But this leaves open the intriguing question of whether one could solve some problems \emph{faster} using analog models of computations (see e.g. what happens for quantum models of computations\dots). In other words, the question of whether the above models are equivalent at a computational complexity level remained open. Part of the difficulty stems from finding an appropriate notion of complexity (see e.g.~\cite{SBF99}, \cite{BSF02}) for analog models of computations. In the present paper we study both the GPAC and Computable Analysis at a complexity level. In particular, we introduce measures for space complexity and show that, using these measures, both models are equivalent, even at a computational complexity level.
Since we have already shown in our previous paper \cite{BGP12} that Turing machines can efficiently simulate GPACs, we will prove in this paper the missing link: GPACs can simulate Turing machines in an efficient manner. More concretely we show that, modulo polynomial reductions, computations of Turing machines can be simulated by GPACs, without the need of using more (space) resources than those used in the original Turing computation. In a schematic view, here is the situation that we reach when relating the constructions presented in this paper with already known results, where PIVP stands for \emph{Polynomial Initial Value Problems}, known to be equivalent to GPACs (see \cite{GC03} and Section \ref{section:GPAC}).\medskip \begin{center} \begin{tikzpicture}[scale=1] \draw (0,0) rectangle (3,3); \draw (1.5,1.5) node {$\begin{array}{c}\text{Turing Machine}\\\\\text{Time: }T\\\text{Space: }S\end{array}$}; \draw[->,line width=2pt] (3,1.5) to node[sloped,above] {Submission \cite{BGP12}} (6,1.5); \draw (6,0) rectangle (9,3); \draw (7.5,1.5) node {$\begin{array}{c}\text{PIVP/GPAC}\\\\\text{Time: }T\\\text{Space: }poly(S,T)\end{array}$}; \draw[->,line width=2pt] (9,1.5) to node[sloped,above] {ICALP'12} (11,1.5); \draw (11,0) rectangle (14,3); \draw (12.5,1.5) node {$\begin{array}{c}\text{Turing Machine}\\\\\text{Time: }poly(S,T)\\\text{Space: }poly(S,T)\end{array}$}; \end{tikzpicture}\medskip \end{center} We believe that these results open the way to state that realistic analog models do satisfy the classical Church-Turing thesis in a provable way, both at the computability and the complexity level, hence both for the Church-Turing thesis and for the effective Church-Turing thesis. We believe that this opens the way to a well founded complexity theory for analog models of computations and for continuous dynamical systems.
Notice that it has been observed in several papers that, since continuous time systems might undergo arbitrary space and time contractions, Turing machines, as well as even accelerating Turing machines\footnote{Similar possibilities of simulating accelerating Turing machines through quantum mechanics are discussed in \cite{CP01}.} \cite{Davies01}, \cite{Cop98}, \cite{Cop02} or even oracle Turing machines, can actually be simulated by ordinary differential equations in an arbitrarily short time or space. This is sometimes also called \textit{Zeno's phenomenon}: an infinite number of discrete transitions may happen in a finite time: see e.g. \cite{CIEChapter2007}. Such constructions or facts have been deep obstacles to various attempts to build a well founded complexity theory for analog models of computations: see \cite{CIEChapter2007} for discussions. One way to interpret our results is then the following: all these time and space phenomena, or Zeno's phenomena, do not hold (or, at least, they do not hold in a problematic manner) for ordinary differential equations corresponding to GPACs, that is to say for \emph{realistic} models. This has already been stated in various places. The novelty is first a statement of what ``realistic'' includes, and second a formal proof of it. \section{GPAC} \subsection{Preliminaries} Throughout the paper we will use the following notation: $\infnorm{(x_1,\ldots,x_n)}=\max_{1\leqslant i\leqslant n}|x_i|$ and $\lVert(x_1,\ldots,x_n)\rVert=\sqrt{|x_1|^2+\cdots+|x_n|^2}$.
We will also use the following shortcuts: $\proj{i}{x_1,\ldots,x_k}=x_i$, $\operatorname{int}(x)=\lfloor x\rfloor$, $\operatorname{frac}(x)=x-\lfloor x\rfloor$, $\operatorname{int}_n(x)=\min(n,\operatorname{int}(x))$, $\operatorname{frac}_n(x)=x-\operatorname{int}_n(x)$, and \[\fiter{f}{n}= \begin{cases} \operatorname{id} & \text{if }n=0\\ f\circ\fiter{f}{n-1} & \text{otherwise} \end{cases}\] In this section, we consider the following ODE \begin{equation}\left\{\begin{array}{@{}c@{}l}\dot{y}&=p(y)\\y(t_0)&=y_0\end{array}\right.\label{eq:ode}\end{equation} where $p:\mathbb{R}^d\rightarrow\mathbb{R}^d$ is a vector of polynomials. This is motivated by the fact that it is known \cite{GC03} that a function is generable by a GPAC iff it is a component of the solution to the initial-value problem \eqref{eq:ode}. If $p:\mathbb{R}^d\rightarrow\mathbb{R}$ is a polynomial, we write $p(X_1,\ldots,X_d)=\sum_{|\alpha|\leqslant k}a_\alpha X^\alpha$ where $k$ is the degree of $p$, which we denote by $\degp{p}$. We also take, as usual, $|\alpha|=\alpha_1+\cdots+\alpha_d$. We also write $\sigmap{p}=\sum_{|\alpha|\leqslant k}|a_\alpha|$. If $p:\mathbb{R}^d\rightarrow\mathbb{R}^d$ is a vector of polynomials, we write $\degp{p}=\max(\degp{p_1},\ldots,\degp{p_d})$ and $\sigmap{p}=\max(\sigmap{p_1},\ldots,\sigmap{p_d})$. When it is not ambiguous, for any constant $A\in\mathbb{R}$ we identify $A$ with the constant function $\lambdafun{x_1,\ldots,x_k}{A}$. \subsection{Basic properties}\label{section:GPAC} It is known \cite{GC03} that a function is generable by a GPAC iff it is a component of the solution to the initial-value problem \eqref{eq:ode}: formally, a function $f: \mathbb{R} \to \mathbb{R}$ is generable by a GPAC if it belongs to the following class $\operatorname{GPAC}(I)$: \begin{definition}\label{def:gpac_class} Let $I\subseteq\mathbb{R}$ be an open interval and $f:I\rightarrow\mathbb{R}$.
We say that $f\in\operatorname{GPAC}(I)$ if there exists $d\in\mathbb{N}$, a vector of polynomials $p$, $t_0\in I$ and $y_0\in\mathbb{R}^d$ such that $\forall t\in I, f(t)=y_1(t)$, where $y:I\rightarrow\mathbb{R}$ is the unique solution over $I$ of \[\left\{\begin{array}{@{}c@{}l}\dot{y}&=p(y)\\y(t_0)&=y_0\end{array}\right.\] \end{definition} We want to single out a subclass of GPAC generable functions that permits us to talk about complexity. This leads to the following definition. \begin{definition}\label{def:gspace_class} Let $I\subseteq\mathbb{R}$ be an open interval, $f,g:I\rightarrow\mathbb{R}$. We say that $f\in\GSpace{I}{g}$ if there exists $d\in\mathbb{N}$, a vector of polynomials $p$, $t_0\in I$ and $y_0\in\mathbb{R}^d$ such that $\forall t\in I, f(t)=y_1(t)$ and $\infnorm{y(t)}\leqslant g(t)$, where $y:I\rightarrow\mathbb{R}$ is the unique solution over $I$ of \[\left\{\begin{array}{@{}c@{}l}\dot{y}&=p(y)\\y(t_0)&=y_0\end{array}\right.\] Let $f:I\rightarrow\mathbb{R}^d$. We say that $f\in\GSpace{I}{g}$ if $\forall i, (f_i:I\rightarrow\mathbb{R})\in\GSpace{I}{g}$. \end{definition} The following can be proved (non-trivial missing proofs are in the appendix). \begin{lemma}\label{lem:gspace_class_stable} Let $I,J\subseteq\mathbb{R}$, $f\in\GSpace{I}{s_f}$ and $g\in\GSpace{J}{s_g}$. Then: \begin{itemize} \item $f+g, f-g\in\GSpace{I\cap J}{s_f+s_g}$ \item $fg\in\GSpace{I\cap J}{\max(s_f,s_g,s_fs_g)}$ \item $f\circ g\in\GSpace{J}{\max(s_g,s_f\circ s_g)}$ if $g(J)\subseteq I$ \end{itemize} \end{lemma} \begin{definition}\label{def:gspace_ext_class} Let $I\subseteq\mathbb{R}^d$ be an open set and $f,s_f:I\rightarrow\mathbb{R}$.
We say that $f\in\GSpace{I}{s_f}$ if \[\forall J\subseteq\mathbb{R}\text{ open interval },\forall (g:J\rightarrow\mathbb{R}^d)\in\GSpace{J}{s_g}\text{ such that }g(J)\subseteq I,\quad f\circ g\in\GSpace{J}{\max(s_g,s_f\circ s_g)}\] \end{definition} \begin{remark} In the special case of $I\subseteq\mathbb{R}$, \defref{def:gspace_ext_class} matches \defref{def:gspace_class} because of \lemref{lem:gspace_class_stable}. \end{remark} \begin{lemma}\label{lem:gspace_ext_class_stable} Let $I,J\subseteq\mathbb{R}^d$ be open sets, $(f:I\rightarrow\mathbb{R}^n)\in\GSpace{I}{s_f}$ and $(g:J\rightarrow\mathbb{R}^m)\in\GSpace{J}{s_g}$. Then: \begin{itemize} \item $f+g, f-g\in\GSpace{I\cap J}{s_f+s_g}$ if $n=m$ \item $fg\in\GSpace{I\cap J}{\max(s_f,s_g,s_fs_g)}$ if $n=m$ \item $f\circ g\in\GSpace{J}{\max(s_g,s_f\circ s_g)}$ if $m=d$ and $g(J)\subseteq I$ \end{itemize} \end{lemma} \begin{definition} Let $d,e\in\mathbb{N}$ and $(i_1,\ldots,i_e)\in\llbracket 1,d\rrbracket^e$ we define \[\pi_d(i_1,\ldots,i_e):\left\lbrace\begin{array}{ccc} \mathbb{R}^d&\rightarrow&\mathbb{R}^e\\(x_1,\ldots,x_d)&\mapsto&(x_{i_1},\ldots,x_{i_e}) \end{array}\right.\] \end{definition} \begin{example} \[\pi_4(1,3):\left\lbrace\begin{array}{ccc} \mathbb{R}^4&\rightarrow&\mathbb{R}^2\\(x_1,x_2,x_3,x_4)&\mapsto&(x_1,x_3) \end{array}\right.\] \end{example} \begin{lemma}\label{lembug} For any $d,e\in\mathbb{N}$ such that $e\leqslant d$, for any $(i_1,\ldots,i_e)\in\llbracket 1,d\rrbracket^e$ and for any $I\subseteq\mathbb{R}^d$, \[\pi_d(i_1,\ldots,i_e)\in\GSpace{I}{0}\] \end{lemma} \begin{lemma}\label{lem:gspace_ext_comp_poly} Let $I\subseteq\mathbb{R}^c$ and $J\subseteq\mathbb{R}^e$ be open sets, $(f:I\rightarrow\mathbb{R}^d)\in\GSpace{I}{s_f}$ and $p:J\rightarrow I$ a vector of polynomials. Then $f\circ p\in\GSpace{J}{s_f\circ p}$. 
\end{lemma} \begin{lemma}\label{lem:elem_fun_gspace} $\sin,\tanh\in\GSpace{\mathbb{R}}{1}$ \end{lemma} \subsection{Main result} The main result of this paper is the following ($\operatorname{poly}(S(e)+T)$ stands for a polynomial in $S(e)+T$): \begin{theorem} Let $M$ be a Turing Machine. Then there exists a vector of polynomials $p$ such that, for any input $e$ and time $T$, the solution $y$ of \eqref{eq:ode} with initial condition $y(0)=\langle\phi(e),\psi(S(e),T),\ldots\rangle$ ($\phi$ and $\psi$ define a simple encoding scheme), where $S(e)$ is the space used by $M$ on input $e$, satisfies the following properties: \begin{itemize} \item for any integer time $t\leqslant T$, $y(t)$ fully and unambiguously describes the state of $M$ on input $e$ at step $t$ \item for any $0\leqslant t\leqslant T$, $\infnorm{y(t)}\leqslant\operatorname{poly}(S(e)+T)$ \end{itemize} \end{theorem} \section{Turing Machines Simulations} In this section we explain how to simulate a Turing Machine with a GPAC. We would like to simulate a Turing Machine with a polynomially bounded GPAC. As a matter of comparison, it is already known how to simulate any Turing Machine for an arbitrary number of steps using an exponentially bounded GPAC \cite{GCB08}. Our simulations are different from the already known ones in several ways: \begin{itemize} \item The simulation will only be valid for a certain number of steps: this will be sufficient as we want to talk about (time) complexity, and hence we mostly have a bound on the time of computation. \item The values of the components of the system will be polynomially bounded. \end{itemize} \subsection{Helper functions} Our simulation will be performed on a real domain and may be subject to (small) errors. Thus, to simulate a Turing machine over a large number of steps, we need tools which allow us to keep errors under control. In this section we present functions which are specially designed to fulfill this objective. We call these functions \emph{helper functions}.
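As a concrete illustration of Definition \ref{def:gpac_class}, $\sin$ is GPAC-generable: it is the first component of the PIVP $\dot{y}_1=y_2$, $\dot{y}_2=-y_1$, $y(0)=(0,1)$, whose right-hand side is a vector of polynomials. The following sketch integrates this PIVP numerically; the integrator and step size are arbitrary choices for illustration.

```python
import math

# Illustration of Definition def:gpac_class: sin is GPAC-generable, being the
# first component of the PIVP  y1' = y2,  y2' = -y1,  y(0) = (0, 1),
# whose right-hand side is a vector of (degree-1) polynomials.
# The integrator and step size are arbitrary choices for illustration.

def p(y):
    y1, y2 = y
    return (y2, -y1)

def rk4(f, y0, t_end, h=1e-3):
    # classical 4th-order Runge-Kutta for the autonomous system y' = f(y)
    y = list(y0)
    n = len(y)
    for _ in range(int(round(t_end / h))):
        k1 = f(y)
        k2 = f([y[i] + h / 2 * k1[i] for i in range(n)])
        k3 = f([y[i] + h / 2 * k2[i] for i in range(n)])
        k4 = f([y[i] + h * k3[i] for i in range(n)])
        y = [y[i] + h / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]) for i in range(n)]
    return y

y = rk4(p, (0.0, 1.0), 1.0)  # y[0] approximates sin(1), y[1] approximates cos(1)
```

The bound $\infnorm{y(t)}\leqslant1$ along this trajectory is an instance of Lemma \ref{lem:elem_fun_gspace}.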
Notice that since functions generated by GPACs are analytic, all helper functions are required to be analytic. As a building block for creating more complex functions, it will be useful to obtain analytic approximations of the functions $\operatorname{int}(x)$ and $\operatorname{frac}(x)$. Notice that we are only concerned with nonnegative numbers, so there is no need to discuss the definition of these functions on negative numbers. A graphical representation of the various helper functions we will introduce in this section can be found in \figref{fig:xi}, \figref{fig:sigma} and \figref{fig:theta}. \begin{definition} For any $x,y,\lambda\in\mathbb{R}$ define $\xi(x,y,\lambda)=\tanh(xy\lambda)$ \end{definition} The following can be proved (see the appendix). \begin{lemma}\label{lem:xi} For any $x\in\mathbb{R}$ and $\lambda>0,y\geqslant1$, \[|\sgn{x}-\xi(x,y,\lambda)|<1/2\] Furthermore if $|x|\geqslant\lambda^{-1}$ then \[|\sgn{x}-\xi(x,y,\lambda)|<e^{-y}\] and $\xi\in\GSpace{\mathbb{R}^{3}}{1}$. \end{lemma}\vspace{-9mm} \begin{figure} \begin{center} \includegraphics{Fig1} \end{center} \caption{Graphical representation of $\xi$ and $\sigma_1$} \label{fig:xi} \end{figure}\vspace{-7mm} \begin{definition} For any $x,y,\lambda\in\mathbb{R}$, define $\sigma_1(x,y,\lambda)=\frac{1+\xi(x-1,y,\lambda)}{2}$ \end{definition} \begin{corollary}\label{cor:sigma1} For any $x\in\mathbb{R}$ and $y>0,\lambda>2$, \[|\operatorname{int}_1(x)-\sigma_1(x,y,\lambda)|\leqslant1/2\] Furthermore if $|1-x|\geqslant\lambda^{-1}$ then \[|\operatorname{int}_1(x)-\sigma_1(x,y,\lambda)|<e^{-y}\] and $\sigma_1\in\GSpace{\mathbb{R}^{3}}{1}$.
\end{corollary} \begin{definition} For any $p\in\mathbb{N}$, $x,y,\lambda\in\mathbb{R}$, define $\sigma_p(x,y,\lambda)=\sum_{i=0}^{p-1}\sigma_1(x-i,y+\ln p,\lambda)$ \end{definition} \begin{lemma}\label{lem:sigmap} For any $p\in\mathbb{N}$, $x\in\mathbb{R}$ and $y>0,\lambda>2$, \[|\operatorname{int}_p(x)-\sigma_p(x,y,\lambda)|\leqslant1/2+e^{-y}\] Furthermore if $x<1-\lambda^{-1}$ or $x>p+\lambda^{-1}$ or $d(x,\mathbb{N})>\lambda^{-1}$ then \[|\operatorname{int}_p(x)-\sigma_p(x,y,\lambda)|<e^{-y}\] and $\sigma_p\in\GSpace{\mathbb{R}^{3}}{p}$. \end{lemma} Finally, we build a square-wave-like function which will be useful later on. \begin{definition} For any $t\in\mathbb{R}$, and $\lambda>0$, define $\theta(t,\lambda)=e^{-\lambda\left(1-\sin\left(2\pi t\right)\right)^2}$ \end{definition} \begin{lemma}\label{lem:theta} For any $\lambda>0$, $\theta(\cdot,\lambda)$ is a positive and $1$-periodic function bounded by $1$, furthermore \[\forall t\in[1/2,1], |\theta(t,\lambda)|\leqslant e^{-\lambda}\] \[\int_0^{\frac{1}{2}}\theta(t,\lambda)dt\geqslant\frac{(e\lambda)^{-\frac{1}{4}}}{\pi}\] and $\theta\in\GSpace{\mathbb{R}\times\mathbb{R}^{*}_{+}}{\lambdafun{t,\lambda}{\max(1,\lambda)}}$. \end{lemma} \subsubsection{Polynomial interpolation} In order to implement the transition function of the Turing Machine, we will use polynomial interpolation (Lagrange interpolation). But since our simulation may have to deal with some amount of error in inputs, we have to investigate how this error propagates through the interpolating polynomial.
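Before turning to interpolation, the helper functions above can be sanity-checked numerically. The following Python sketch (the values of $y$, $\lambda$ and the test points are arbitrary illustrative choices, not part of the construction) checks the error bounds for $\xi$, $\sigma_1$ and $\sigma_p$:

```python
import math

def xi(x, y, lam):
    # xi(x, y, lambda) = tanh(x * y * lambda): analytic approximation of sgn(x)
    return math.tanh(x * y * lam)

def sigma1(x, y, lam):
    # sigma_1(x, y, lambda) = (1 + xi(x - 1, y, lambda)) / 2: approximates int_1
    return (1 + xi(x - 1, y, lam)) / 2

def sigma_p(p, x, y, lam):
    # sigma_p(x, y, lambda) = sum_{0 <= i < p} sigma_1(x - i, y + ln(p), lambda)
    return sum(sigma1(x - i, y + math.log(p), lam) for i in range(p))

# arbitrary test values: accuracy y = 8, sharpness lambda = 100
y, lam = 8.0, 100.0
for x in (0.05, 0.5, -0.05, -2.0):       # all satisfy |x| >= 1/lambda
    sgn = 1.0 if x > 0 else -1.0
    assert abs(sgn - xi(x, y, lam)) < math.exp(-y)
```

The same kind of check applies to $\sigma_1$ and $\sigma_p$ at points whose distance to the integers exceeds $\lambda^{-1}$.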
\begin{lemma}\label{lem:diff_prod} Let $n\in\mathbb{N}$, $x,y\in\mathbb{R}^n$, $K>0$ be such that $\infnorm{x},\infnorm{y}\leqslant K$, then \[\left|\prod_{i=1}^nx_i-\prod_{i=1}^ny_i\right|\leqslant K^{n-1}\sum_{i=1}^n|x_i-y_i|\] \end{lemma} \begin{definition}[Lagrange polynomial] Let $d\in\mathbb{N}$ and $f:G\rightarrow\mathbb{R}$ where $G$ is a finite subset of $\mathbb{R}^d$, we define \[L_f(x)=\sum_{\bar{x}\in G}f(\bar{x})\prod_{i=1}^d\prod_{\substack{y\in G\\y\neq\bar{x}}}\frac{x_i-y_i}{\bar{x}_i-y_i}\] \end{definition} We recall that by definition, for all $x\in G$, $L_f(x)=f(x)$, so the interesting part is to know what happens for values of $x$ not in $G$ but close to $G$, that is, to relate $L_f(x)-L_f(\tilde{x})$ with $x-\tilde{x}$. \begin{lemma}\label{lem:interp_L} Let $d\in\mathbb{N}$, $K>0$ and $f:G\rightarrow\mathbb{R}$, where $G$ is a finite subset of $\mathbb{R}^d$. Then \[\forall x,z\in[-K,K]^d, |L_f(x)-L_f(z)|\leqslant A\infnorm{x-z}\qquad\text{and}\qquad L_f\in\GSpace{[-K,K]^d}{B}\] where \[\delta=\min_{x\neq x'\in G}\min_{i=1}^d|x_i-x'_i|\qquad F=\max_{x\in G}|f(x)|\qquad M=K+\max_{x\in G}\infnorm{x}\] \[A=|G|F\left(\frac{M}{\delta}\right)^{d(|G|-1)-1}d(|G|-1)\qquad B=|G|F\left(\frac{M}{\delta}\right)^{d(|G|-1)}\] \end{lemma} \subsection{Turing Machine}\label{sec:tm} \subsubsection{Assumptions} Let $\mathcal{M}=(Q,\Sigma,b,\delta,q_0,F)$ be a Turing Machine which will be fixed for the whole simulation. Without loss of generality we assume that: \begin{itemize} \item When the machine reaches a final state, it stays in this state: so $F$ is useless in what follows. \item $Q=\llbracket 0,m-1\rrbracket$ \item $\Sigma=\llbracket 0, k-2\rrbracket$ and $b=0$ \item $\delta:Q\times\Sigma\rightarrow Q\times\Sigma\times\{L,R\}$, and we identify $\{L,R\}$ with $\{0,1\}$ ($L=0$ and $R=1$). \end{itemize} Consider a configuration $c=(x,\sigma,y,q)$ of the machine as described in \figref{fig:tm_config}.
We could encode it as a triple of integers as done in \cite{GCB08} (e.g. if $x_0,x_1,\ldots$ are the digits of $x$ in base $k$, encode $x$ as the number $x_0+x_1k+x_2k^2+\cdots+x_nk^n$), but this encoding is not suitable for our needs. We instead define the \emph{rational encoding} $[c]$ of $c$ as follows. \begin{definition} Let $c=(x,s,y,q)$ be a configuration of $\mathcal{M}$, we define the \emph{rational encoding} $[c]$ of $c$ as $[c]=(0.x,s,0.y,q)$ where: \[0.x=x_0k^{-1}+x_1k^{-2}+\cdots+x_nk^{-n-1}\in\mathbb{Q}\qquad\text{if}\qquad x=x_0+x_1k+\cdots+x_nk^n\in\mathbb{N}.\] \end{definition} The following lemma explains the consequences on the rational encoding of configurations of the assumptions we made for $\mathcal{M}$. \begin{lemma}\label{lem:config_range} Let $c$ be a reachable configuration of $\mathcal{M}$ and $[c]=(0.x,\sigma,0.y,q)$, then $0.x\in[0,\frac{k-1}{k}]$ and similarly for $y$. \end{lemma}\vspace{-4mm} \begin{figure} \begin{center} \begin{tikzpicture} \draw (0,0) rectangle (0.5,0.5); \draw (0.25,0.25) node {$\sigma$}; \draw (0.5,0) rectangle (1,0.5); \draw (0.75,0.25) node {$y_0$}; \draw (1,0) rectangle (1.5,0.5); \draw (1.25,0.25) node {$y_1$}; \draw (1.5,0) rectangle (4.5,0.5); \draw (3,0.25) node {$\ldots$}; \draw (4.5,0) rectangle (5,0.5); \draw (4.75,0.25) node {$y_k$}; \draw (0,0) rectangle (-0.5,0.5); \draw (-0.25,0.25) node {$x_0$}; \draw (-0.5,0) rectangle (-1,0.5); \draw (-0.75,0.25) node {$x_1$}; \draw (-1,0) rectangle (-4,0.5); \draw (-2.5,0.25) node {$\ldots$}; \draw (-4,0) rectangle (-4.5,0.5); \draw (-4.25,0.25) node {$x_k$}; \draw[->,very thick] (0.25,-0.5) -- (0.25,0); \draw (0.25,-0.6) node {head}; \end{tikzpicture} \end{center} \caption{Turing Machine configuration} \label{fig:tm_config} \end{figure}\vspace{-7mm} \subsubsection{Simulation by iteration} The first step towards a simulation of a Turing Machine $\mathcal{M}$ using a GPAC is to simulate the transition function of $\mathcal{M}$ with a GPAC-computable function 
$\operatorname{step}_\mathcal{M}$. The next step is to iterate the function $\operatorname{step}_\mathcal{M}$ with a GPAC. Instead of considering configurations $c$ of the machine, we will consider its rational configurations $[c]$ and use the helper functions defined previously. Theoretically, because $[c]$ is rational, we just need the simulation to work over rationals. But, in practice, because errors are allowed on inputs, the function $\operatorname{step}_\mathcal{M}$ has to simulate the transition function of $\mathcal{M}$ in a manner which tolerates small errors on the input. We recall that $\delta$ is the transition function of $\mathcal{M}$ and we write $\delta_i$ for the $i^{th}$ component of $\delta$. \begin{definition} We define: \[\operatorname{step}_\mathcal{M}:\left\{\begin{array}{ccc} \mathbb{R}^4&\longrightarrow&\mathbb{R}^4\\ \begin{pmatrix}x\\s\\y\\q\end{pmatrix}&\mapsto&\begin{pmatrix} \operatorname{choose}\left[\operatorname{frac}(kx),\frac{x+L_{\delta_2}(q,s)}{k}\right]\\ \operatorname{choose}\left[\operatorname{int}(kx),\operatorname{int}(ky)\right]\\ \operatorname{choose}\left[\frac{y+L_{\delta_2}(q,s)}{k},\operatorname{frac}(ky)\right]\\ L_{\delta_1}(q,s) \end{pmatrix} \end{array}\right.\] where \[\operatorname{choose}[a,b]=(1-L_{\delta_3}(q,s))a+L_{\delta_3}(q,s)b\] \end{definition} The function $\operatorname{step}_\mathcal{M}$ simulates the transition function of the Turing Machine $\mathcal{M}$, as shown in the following result. \begin{lemma} Let $c_0,c_1,\ldots$ be the sequence of configurations of $\mathcal{M}$ starting from $c_0$. Then \[\forall n\in\mathbb{N}, [c_n]=\fiter{\operatorname{step}_\mathcal{M}}{n}([c_0])\] \end{lemma} Now we want to extend the function $\operatorname{step}_\mathcal{M}$ to work not only on rationals coding configurations but also on reals close to configurations, in a way which tolerates small errors on the input. That is, we want to build a robust approximation of $\operatorname{step}_\mathcal{M}$.
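To make the rational encoding and the exact transition concrete, here is an exact-arithmetic Python sketch of one step on rational configurations, branching directly on the move direction rather than via $\operatorname{choose}$; the toy machine, its transition table and the tape contents are hypothetical, chosen only for illustration:

```python
from fractions import Fraction

k = 4  # base: alphabet {0,..,k-2} = {0,1,2}, blank b = 0

def delta(q, s):
    # hypothetical transition table: state 0 overwrites nonblank cells with 1
    # while moving right; on a blank it enters the final state 1 (L=0, R=1)
    if q == 0 and s != 0:
        return (0, 1, 1)
    return (1, s, 0)

def encode(digits):
    # rational encoding 0.x = x0/k + x1/k^2 + ... of one half of the tape
    return sum((Fraction(d, k ** (i + 1)) for i, d in enumerate(digits)),
               Fraction(0))

def step(c):
    # exact version of one machine step on a rational configuration (0.x, s, 0.y, q)
    x, s, y, q = c
    q2, s2, move = delta(q, s)
    if move == 1:  # head moves right: push written symbol onto x, pop from y
        return ((x + s2) / k, int(k * y), k * y - int(k * y), q2)
    else:          # head moves left: pop from x, push written symbol onto y
        return (k * x - int(k * x), int(k * x), (y + s2) / k, q2)

# head reads 1, right tape holds digits [1, 2], left tape is empty
c = (encode([]), 1, encode([1, 2]), 0)
for _ in range(3):
    c = step(c)
# after 3 steps the left tape encodes [1, 1, 1] and the head reads the blank
```

Because everything is done with exact rationals, the iterates match the machine's configurations exactly; the robustness questions below only arise once these rationals are replaced by nearby reals.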
We already have some results on $L$ thanks to \lemref{lem:interp_L}. We also have some results on $\operatorname{int}(\cdot)$ and $\operatorname{frac}(\cdot)$. However, we need to pay attention to the case of nearly empty tapes. Indeed, consider the case where, say, the tape on the left of the head contains a single character $s$. Then $x=0.s=sk^{-1}$. Now assume that the head moves left. After the move we should be in the configuration $(0,s,\_,\_)$. Looking at the definition of $\operatorname{step}_\mathcal{M}$, one sees that this works because $\operatorname{int}(kx)=\operatorname{int}(s)=s$. However if one perturbs $x$ a little bit to $x-\epsilon$, then $\operatorname{int}(k(x-\epsilon))=s-1$. To overcome this difficulty, we will use to our advantage the choice of $k$ and its main consequence, \lemref{lem:config_range}. Indeed, if we shift $kx$ by a small amount, we can allow a small perturbation. Since integers are the worst case scenario for our approximation of $\operatorname{int}(\cdot)$, we center the unreachable range for the tape on it, thus considering $\operatorname{int}(kx+\frac{1}{2k})$. The same applies to $\operatorname{frac}(kx)$ with a caveat: we will use $kx-\operatorname{int}(kx+\frac{1}{2k})$ instead.
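The effect of the $\frac{1}{2k}$ shift can be checked numerically. In the sketch below, the base $k$, the perturbation $\epsilon$ and the accuracy parameters are arbitrary illustrative values:

```python
import math

k = 4

def sigma1(x, y, lam):
    # sigma_1(x, y, lambda) = (1 + tanh((x - 1) y lambda)) / 2
    return (1 + math.tanh((x - 1) * y * lam)) / 2

def sigma_p(p, x, y, lam):
    # sigma_p(x, y, lambda) = sum_{0 <= i < p} sigma_1(x - i, y + ln(p), lambda)
    return sum(sigma1(x - i, y + math.log(p), lam) for i in range(p))

# left tape holds the single symbol s = 2, so x = 0.s = s/k,
# perturbed by a small eps
s, eps = 2, 0.01
x_bar = s / k - eps

# the naive integer part is off by one...
assert int(k * x_bar) == s - 1
# ...while the shifted analytic version still recovers s
approx = sigma_p(k, k * x_bar + 1 / (2 * k), 10.0, 40.0)
```

The shifted argument lands safely away from the integers, so the approximation of $\operatorname{int}(\cdot)$ is accurate despite the perturbation.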
\begin{definition} Define: \[\overline{\operatorname{step}}_\mathcal{M}(\tau,\lambda):\left\{\begin{array}{ccc} \mathbb{R}^4&\longrightarrow&\mathbb{R}^4\\ \begin{pmatrix}x\\s\\y\\q\end{pmatrix}&\mapsto&\begin{pmatrix} \operatorname{choose}\left[\overline{\operatorname{frac}}(kx),\frac{x+L_{\delta_2}(q,s)}{k},q,s\right]\\ \operatorname{choose}\left[\overline{\operatorname{int}}(kx),\overline{\operatorname{int}}(ky),q,s\right]\\ \operatorname{choose}\left[\frac{y+L_{\delta_2}(q,s)}{k},\overline{\operatorname{frac}}(ky),q,s\right]\\ L_{\delta_1}(q,s) \end{pmatrix} \end{array}\right.\] where \[\operatorname{choose}[a,b,q,s]=(1-L_{\delta_3}(q,s))a+L_{\delta_3}(q,s)b\] \[\overline{\operatorname{int}}(x)=\sigma_{k}\left(x+\frac{1}{2k},\tau,\lambda\right)\] \[\overline{\operatorname{frac}}(x)=x-\overline{\operatorname{int}}(x)\] \end{definition} We now show that $\overline{\operatorname{step}}_\mathcal{M}$ is a robust version of $\operatorname{step}_\mathcal{M}$. We begin with a lemma about the function $\operatorname{choose}$. \begin{lemma}\label{lem:choose} There exist $A_3>0$ and $B_3>0$ such that $\forall q,\bar{q},s,\bar{s},a,b,\bar{a},\bar{b}\in\mathbb{R}$, if \[\infnorm{(\bar{a},\bar{b})}\leqslant M\qquad\text{and}\qquad q\in Q,s\in\Sigma\qquad\text{and}\qquad\infnorm{(q,s)-(\bar{q},\bar{s})}\leqslant 1\] then \[\left|\operatorname{choose}[a,b,q,s]-\operatorname{choose}[\bar{a},\bar{b},\bar{q},\bar{s}]\right|\leqslant\infnorm{(a,b)-(\bar{a},\bar{b})}+2MA_3\infnorm{(q,s)-(\bar{q},\bar{s})}\] Furthermore, $\operatorname{choose}\in\GSpace{\mathbb{R}^2\times[-m,m]\times[-k,k]}{\lambdafun{a,b,q,s}{(1+B_3)(a+b)}}$.
\end{lemma} \begin{lemma}\label{lem:step_bar} There exist $A_1,A_2,A_3,B_1,B_2,B_3>0$ such that for any $\tau,\lambda>0$, any valid rational configuration $c=(x,s,y,q)\in\mathbb{R}^4$ and any $\bar{c}=(\bar{x},\bar{s},\bar{y},\bar{q})\in\mathbb{R}^4$, if \[\infnorm{(x,y)-(\bar{x},\bar{y})}\leqslant \frac{1}{2k^2}-\frac{1}{k\lambda}\qquad\text{and}\qquad\infnorm{(q,s)-(\bar{q},\bar{s})}\leqslant 1\] then \[ \begin{array}{r@{\hspace{0.3em}}l} \text{for }p\in\{1,3\}\quad|\operatorname{step}_\mathcal{M}(c)_p-\overline{\operatorname{step}}_\mathcal{M}(\tau,\lambda)(\bar{c})_p|&\leqslant k\infnorm{(x,y)-(\bar{x},\bar{y})}+(1+2A_3)\left(e^{-\tau}+\frac{A_2}{k}\infnorm{(q,s)-(\bar{q},\bar{s})}\right)\\ |\operatorname{step}_\mathcal{M}(c)_2-\overline{\operatorname{step}}_\mathcal{M}(\tau,\lambda)(\bar{c})_2|&\leqslant2A_3k\infnorm{(q,s)-(\bar{q},\bar{s})}+e^{-\tau}\\ |\operatorname{step}_\mathcal{M}(c)_4-\overline{\operatorname{step}}_\mathcal{M}(\tau,\lambda)(\bar{c})_4|&\leqslant A_1\infnorm{(q,s)-(\bar{q},\bar{s})} \end{array}\] Furthermore, \[\overline{\operatorname{step}}_\mathcal{M}\in\GSpace{(\mathbb{R}_+^*)^2\times[-1,1]\times[-m,m]\times[-1,1]\times[-k,k]}{B_1+(1+B_3)(2k+1+B_2k^{-1})}\] \end{lemma} \begin{proof} To apply \lemref{lem:choose}, we need two kinds of results: bounds on the differences between the arguments of $\operatorname{choose}$ and bounds on the arguments themselves. Let $A_1$ and $A_2$ be the constants coming from \lemref{lem:interp_L} applied to $\delta_1$ and $\delta_2$ (because $Q$ and $\Sigma$ are finite and thus bounded and $(\bar{q},\bar{s})$ is bounded by hypothesis). To improve readability, we write $\Delta=\infnorm{(q,s)-(\bar{q},\bar{s})}$. \begin{itemize} \item We first show that $|\operatorname{int}(kx)-\overline{\operatorname{int}}(k\bar{x})|\leqslant e^{-\tau}$: since $c$ is a valid configuration, $x=0.u z$ where $u\in\llbracket 0,k-2\rrbracket$ and $z\in[0,\frac{k-1}{k}]$ by \lemref{lem:config_range}.
By hypothesis, $|x-\bar{x}|\leqslant\frac{1}{2k^2}-\frac{1}{k\lambda}$ so $|kx-k\bar{x}|\leqslant\frac{1}{2k}-\frac{1}{\lambda}$. Notice that $kx=u.z$ thus $kx\in[u,u+\frac{k-1}{k}]$ and as a consequence $k\bar{x}\in[u-\frac{1}{2k}+\frac{1}{\lambda},u+\frac{k-1}{k}+\frac{1}{2k}-\frac{1}{\lambda}]$. Finally, $k\bar{x}+\frac{1}{2k}\in[u+\frac{1}{\lambda},u+1-\frac{1}{\lambda}]$. By \lemref{lem:sigmap}, $|\sigma_k(k\bar{x}+\frac{1}{2k},\tau,\lambda)-\operatorname{int}_k(k\bar{x}+\frac{1}{2k})|\leqslant e^{-\tau}$. But $k\bar{x}+\frac{1}{2k}\in[u+\frac{1}{\lambda},u+1-\frac{1}{\lambda}]$ so $\operatorname{int}_k(k\bar{x}+\frac{1}{2k})=u=\operatorname{int}(kx)$. \item It is then easy to see that $|\operatorname{frac}(kx)-\overline{\operatorname{frac}}(k\bar{x})|\leqslant k|x-\bar{x}|+e^{-\tau}$. \item By \lemref{lem:interp_L}, $|L_{\delta_2}(\bar{q},\bar{s})-L_{\delta_2}(q,s)|\leqslant A_2\Delta$. \item Thus, $\left|\frac{x+L_{\delta_2}(q,s)}{k}-\frac{\bar{x}+L_{\delta_2}(\bar{q},\bar{s})}{k}\right|\leqslant\frac{|x-\bar{x}|+A_2\Delta}{k}$ \item As a consequence, $|\overline{\operatorname{int}}(k\bar{x})|\leqslant k-2+e^{-\tau}$ since $|\overline{\operatorname{int}}(k\bar{x})|\leqslant|\operatorname{int}(kx)|+|\operatorname{int}(kx)-\overline{\operatorname{int}}(k\bar{x})|$ and $|\operatorname{int}(kx)|\leqslant k-2$ \item Similarly, $|\overline{\operatorname{frac}}(k\bar{x})|\leqslant \frac{k-1}{k}+k|x-\bar{x}|+e^{-\tau}\leqslant1+e^{-\tau}$ since $|\operatorname{frac}(kx)|\leqslant\frac{k-1}{k}$ by \lemref{lem:config_range}. \item Also, $|L_{\delta_2}(\bar{q},\bar{s})|\leqslant k-2+A_2\Delta$ since $|L_{\delta_2}(q,s)|\leqslant k-2$. \item Finally, $\left|\frac{\bar{x}+L_{\delta_2}(\bar{q},\bar{s})}{k}\right|\leqslant\frac{|x|+|x-\bar{x}|+k-2+A_2\Delta}{k}\leqslant\frac{\frac{k-1}{k}+\frac{1}{2k^2}+k-2+A_2\Delta}{k}\leqslant1+\frac{A_2}{k}$ since $\Delta\leqslant1$. \end{itemize} Now applying \lemref{lem:choose} four times gives the result.
We write the computation for the two interesting cases: \begin{align*} |\operatorname{step}_\mathcal{M}(c)_1-\overline{\operatorname{step}}_\mathcal{M}(\tau,\lambda)(\bar{c})_1| &\leqslant\max\left[k|x-\bar{x}|+e^{-\tau},\frac{|x-\bar{x}|+A_2\Delta}{k}\right]+2A_3\max\left[1+e^{-\tau},1+\frac{A_2}{k}\right]\Delta\\ &\leqslant k|x-\bar{x}|+e^{-\tau}+\frac{A_2\Delta}{k}+2A_3\left(1+e^{-\tau}+\frac{A_2}{k}\right)\Delta\\ &\leqslant k|x-\bar{x}|+(1+2A_3)\left(e^{-\tau}+\frac{A_2}{k}\Delta\right) \end{align*} \begin{align*} |\operatorname{step}_\mathcal{M}(c)_2-\overline{\operatorname{step}}_\mathcal{M}(\tau,\lambda)(\bar{c})_2| &\leqslant e^{-\tau}+2A_3(k-2+e^{-\tau})\Delta\\ &\leqslant e^{-\tau}+2A_3k\Delta \end{align*} \end{proof} We summarize the previous lemma into the following simpler form. \begin{corollary}\label{cor:tm_simul_iter} For any $\tau,\lambda>0$, any valid rational configuration $c=(x,s,y,q)\in\mathbb{R}^4$ and any $\bar{c}=(\bar{x},\bar{s},\bar{y},\bar{q})\in\mathbb{R}^4$, if \[\infnorm{(x,y)-(\bar{x},\bar{y})}\leqslant \frac{1}{2k^2}-\frac{1}{k\lambda}\qquad\text{and}\qquad\infnorm{(q,s)-(\bar{q},\bar{s})}\leqslant 1\] then \[\infnorm{\operatorname{step}_\mathcal{M}(c)-\overline{\operatorname{step}}_\mathcal{M}(\tau,\lambda)(\bar{c})}\leqslant O(1)(e^{-\tau}+\infnorm{c-\bar{c}})\] Furthermore, \[\overline{\operatorname{step}}_\mathcal{M}\in\GSpace{(\mathbb{R}_+^*)^2\times[-1,1]\times[-m,m]\times[-1,1]\times[-k,k]}{O(1)}\] \end{corollary} \subsection{Iterating functions with differential equations}\label{sec:it_fn_diff_eq} We will use a special kind of differential equations to perform the iteration of a map with differential equations. In essence, it relies on the following core differential equation \begin{equation}\label{eq:branicky_simple}\tag{Reach} \dot{x}(t)=A\phi(t)(g-x(t)) \end{equation} We will see that with proper assumptions, the solution converges very quickly to the \emph{goal} g. 
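As a quick numerical check of this claim, the following Python sketch integrates \eqref{eq:branicky_simple} with an explicit Euler scheme, taking $\phi\equiv1$ and arbitrary illustrative constants, and checks that the solution reaches the goal exponentially fast:

```python
import math

def reach(x0, g, A, T, phi=lambda t: 1.0, dt=1e-4):
    # explicit Euler integration of  x'(t) = A * phi(t) * (g - x(t))
    x, t = x0, 0.0
    while t < T:
        x += dt * A * phi(t) * (g - x)
        t += dt
    return x

# with phi = 1 and A = lambda / T, the closed-form solution gives
# |x(T) - g| = |g - x0| * e^{-lambda}
lam, T, x0, g = 5.0, 1.0, 0.0, 3.0
x_T = reach(x0, g, lam / T, T)
```

With $\lambda=5$ the final distance to the goal is about $|g-x_0|e^{-5}$, i.e. roughly $0.7\%$ of the initial distance (up to the small Euler discretization error).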
However, \eqref{eq:branicky_simple} is a simplistic idealization of the system so we need to consider a perturbed equation where the goal is not a constant anymore and the derivative is subject to small errors \begin{equation}\label{eq:branicky_perturbed}\tag{ReachPerturbed} \dot{x}(t)=A\phi(t)(\bar{g}(t)-x(t))+E(t) \end{equation} We will again see that, with proper assumptions, the solution converges quickly to the \emph{goal} within a small error. Finally we will see how to build a differential equation which iterates a map within a small error. We first focus on \eqref{eq:branicky_simple} and then \eqref{eq:branicky_perturbed} to show that they behave as expected. In this section we assume $\phi$ is a $C^1$ function. \begin{lemma}\label{lem:branicky_simple} Let $x$ be a solution of \eqref{eq:branicky_simple}, let $T,\lambda>0$ and assume $A\geqslant\frac{\lambda}{\int_0^T\phi(u)du}$ then $|x(T)-g|\leqslant|g-x(0)|e^{-\lambda}$. \end{lemma} \begin{proof} Check that $x(t)=g+(x(0)-g)e^{-A\int_0^t\phi(u)du}$ is the unique solution of \eqref{eq:branicky_simple}, which gives the result immediately. \end{proof} \begin{lemma}\label{lem:branicky_perturbed} Let $T,\lambda>0$ and let $x$ be the solution of \eqref{eq:branicky_perturbed} with initial condition $x(0)=x_0$. Assume $|\bar{g}(t)-g|\leqslant\eta$, $A\geqslant\frac{\lambda}{\int_0^T\phi(u)du}$ and $E(t)=0$ for $t\in[0,T]$. Then \[|x(T)-g|\leqslant\eta(1+e^{-\lambda})+|x_0-g|e^{-\lambda}\] \end{lemma} \begin{proof} Let $x^{+},x^{-}$ be the respective solutions of $\dot{x}=A\phi(t)(g\pm\eta-x(t))$ with initial condition $x(0)=x_0$. We will show that $\forall t\in[0,T], x^{-}(t)\leqslant x(t)\leqslant x^{+}(t)$. Consider $f(t,u)=A\phi(t)(\bar{g}(t)-u)+E(t)$ and $f^{\pm}(t,u)=A\phi(t)(g\pm\eta-u)$. Then $x$ satisfies $\dot{x}(t)=f(t,x(t))$ and $x^{\pm}$ satisfy $\dot{x}^\pm(t)=f^\pm(t,x^\pm(t))$. It is easy to see that for any $t\in[0,T]$ and any $u\in\mathbb{R}$, $f^{-}(t,u)\leqslant f(t,u)\leqslant f^{+}(t,u)$.
Thus, by a classical result on differential inequalities, $x^{-}(t)\leqslant x(t)\leqslant x^{+}(t)$ for any $t\in[0,T]$. We prove that $|x^{\pm}(T)-g|\leqslant\eta(1+e^{-\lambda})+|x_0-g|e^{-\lambda}$ using \lemref{lem:branicky_simple} and the fact that $x^{-}(t)\leqslant x(t)\leqslant x^{+}(t)$. \end{proof} We can now define a system that simulates the iteration of a function using a system based on \eqref{eq:branicky_perturbed}. \begin{definition} Let $d\in\mathbb{N}$, $F:\mathbb{R}^d\rightarrow\mathbb{R}^d$, $\lambda\geqslant1,\mu\geqslant0$, we define \begin{equation}\label{eq:simul_iter}\tag{Iterate} \left\{\begin{array}{r@{}l} A&=10(\lambda+\mu)^2\\ B&=4(\lambda+\mu)\\ \dot{z}_i(t)&=A\theta(t,B)(F_i(u(t))-z_i(t))\\ \dot{u}_i(t)&=A\theta(t-1/2,B)(z_i(t)-u_i(t)) \end{array}\right. \end{equation} \end{definition} \begin{theorem}\label{th:fn_simul_diff_eq} Let $d\in\mathbb{N}$, $F:\mathbb{R}^d\rightarrow\mathbb{R}^d$, $\lambda\geqslant1$, $\mu\geqslant0$, $c_0\in\mathbb{R}^d$. Assume $z,u$ are solutions to \eqref{eq:simul_iter} and let $\Delta F$ and $M\geqslant 1$ be such that \[\forall k\in\mathbb{N},\forall \varepsilon>0,\forall x\in]-\varepsilon,\varepsilon[^d,\infnorm{\fiter{F}{k+1}(c_0)-F\left(\fiter{F}{k}(c_0)+x\right)}\leqslant\Delta F(\varepsilon)\] \[\forall t\geqslant 0, \infnorm{u(t)},\infnorm{z(t)},\infnorm{F(u(t))}\leqslant M=e^\mu\] and consider \[\left\{\begin{array}{@{}c@{}l} \varepsilon_0&=\infnorm{u(0)-c_0}\\ \varepsilon_{k+1}&=(1+3e^{-\lambda})\Delta F(\varepsilon_k+2e^{-\lambda})+5e^{-\lambda} \end{array}\right.\] Then \[\forall k\in\mathbb{N},\infnorm{u(k)-\fiter{F}{k}(c_0)}\leqslant\varepsilon_k\] Furthermore, if $F\in\GSpace{[-M,M]^d}{s_F}$ then $u\in\GSpace{\left(\mathbb{R}_+^*\right)^3}{\lambdafun{\lambda,\mu,t}{\max(1,4(\lambda+\mu),s_F(M))}}$.
\end{theorem} \begin{proof} First we show that $AMe^{-B}\leqslant e^{-\lambda}$: \[ \frac{e^{B-\lambda}}{AM}\geqslant\frac{e^{3(\lambda+\mu)}}{10(\lambda+\mu)^2}\geqslant1\qquad\text{because }\lambda+\mu\geqslant1\text{ by the study of }\frac{e^{3x}}{10x^2} \] Second we show that $A\geqslant(\lambda+\mu)\pi(eB)^\frac{1}{4}$: \[ \frac{A}{(\lambda+\mu)\pi(eB)^\frac{1}{4}}=\frac{10(\lambda+\mu)^\frac{3}{4}}{\pi (4e)^{1/4}}\geqslant\frac{10}{\pi (4e)^{1/4}}\geqslant1\qquad\text{because }\lambda+\mu\geqslant1 \] We prove the result by induction on $k$. There is nothing to prove for $k=0$ since $\fiter{F}{0}(c_0)=c_0$. Now let $k\in\mathbb{N}$ and assume $\infnorm{u(k)-\fiter{F}{k}(c_0)}\leqslant\varepsilon_k$. We will work in two steps: first we consider the evolution of $u$ and $z$ on $[k,k+1/2]$ and then $[k+1/2,k+1]$. \begin{itemize} \item Notice that for any $t\in[k,k+1/2]$, by \lemref{lem:theta}, $|\theta(t-1/2,B)|\leqslant e^{-B}$. Thus $|\dot{u}_i(t)|\leqslant Ae^{-B}2M\leqslant 2e^{-\lambda}$. Hence $|u_i(t)-u_i(k)|\leqslant2e^{-\lambda}$. By induction hypothesis, $\infnorm{u(k)-\fiter{F}{k}(c_0)}\leqslant\varepsilon_k$ thus $\infnorm{u(t)-\fiter{F}{k}(c_0)}\leqslant\varepsilon_k+2e^{-\lambda}$. Finally, by hypothesis, $\infnorm{F(u(t))-\fiter{F}{k+1}(c_0)}\leqslant\Delta F(\varepsilon_k+2e^{-\lambda})$. Now, thanks to \lemref{lem:theta}, $\int_k^{k+1/2}\theta(t,B)dt\geqslant\frac{(eB)^{-\frac{1}{4}}}{\pi}\geqslant\frac{\lambda+\mu}{A}$. Thus the hypotheses of \lemref{lem:branicky_perturbed} are met and we have $\infnorm{z(k+1/2)-\fiter{F}{k+1}(c_0)}\leqslant(1+e^{-\lambda-\mu})\Delta F(\varepsilon_k+2e^{-\lambda})+2Me^{-\lambda-\mu}\leqslant(1+e^{-\lambda})\Delta F(\varepsilon_k+2e^{-\lambda})+2e^{-\lambda}$. \item Similarly, for any $t\in[k+1/2,k+1]$, by \lemref{lem:theta}, $\infnorm{z(t)-z(k+1/2)}\leqslant2e^{-\lambda}$ thus $\infnorm{z(t)-\fiter{F}{k+1}(c_0)}\leqslant(1+e^{-\lambda})\Delta F(\varepsilon_k+2e^{-\lambda})+4e^{-\lambda}$.
Now apply \lemref{lem:branicky_perturbed} as before to $u$ and we get that \begin{align*} \infnorm{u(k+1)-\fiter{F}{k+1}(c_0)}&\leqslant(1+e^{-\lambda})\left((1+e^{-\lambda})\Delta F(\varepsilon_k+2e^{-\lambda})+2e^{-\lambda}\right)+2e^{-\lambda}\\ &\leqslant(1+3e^{-\lambda})\Delta F(\varepsilon_k+2e^{-\lambda})+5e^{-\lambda} \end{align*} \end{itemize} \end{proof} \subsection{Turing Machine simulation with differential equations} In this section, we will use the results of both \secref{sec:tm} and \secref{sec:it_fn_diff_eq} to simulate Turing Machines with differential equations. Indeed, in \secref{sec:tm} we showed that we could simulate a Turing Machine by iterating a robust real map, and in \secref{sec:it_fn_diff_eq} we showed how to efficiently iterate a robust map with differential equations. Now we just have to connect these results. \begin{lemma}\label{lem:rec_seq_geom_arith} Let $a>1$ and $b\geqslant0$, assume $u\in\mathbb{R}^\mathbb{N}$ satisfies $u_{n+1}\leqslant au_n+b$ for $n\geqslant0$. Then \[u_n\leqslant a^nu_0+b\frac{a^n-1}{a-1},\quad n\geqslant0\] \end{lemma} \begin{theorem}\label{th:tm_simual_diff_eq} Let $\mathcal{M}$ be a Turing Machine as in \secref{sec:tm}, then there is $f_\mathcal{M}\in\GSpace{\left(\mathbb{R}_+^*\right)^3}{s_f}$ such that for any sequence $c_0,c_1,\ldots,$ of configurations of $\mathcal{M}$ starting from the initial configuration $c_0$, \[\forall S, T\in\mathbb{R}_+^*,\forall n\leqslant T,\infnorm{[c_n]-f_\mathcal{M}(S,T,n)}\leqslant e^{-S}\] and \[\forall S, T\in\mathbb{R}_+^*,\forall n\leqslant T, s_f(S,T,n)=O(\operatorname{poly}(S,T))\] \end{theorem} \begin{proof} Let $\tau>0$ (to be fixed later) and apply \thref{th:fn_simul_diff_eq} to $F=\overline{\operatorname{step}}_\mathcal{M}(\tau,4k)$. By \corref{cor:tm_simul_iter}, $\exists K_1,K_2$ such that \[\Delta F(\varepsilon)=K_1(e^{-\tau}+\varepsilon)\] and \[\forall x\in\Lambda=[-1,1]\times[-m,m]\times[-1,1]\times[-k,k], \infnorm{F(x)}\leqslant K_2\] Let $M=K_2+1$.
The recurrence relation of $\varepsilon$ \[\left\{\begin{array}{@{}c@{}l} \varepsilon_0&=\infnorm{u(0)-c_0}\\ \varepsilon_{k+1}&=(1+3e^{-\lambda})\Delta F(\varepsilon_k+2e^{-\lambda})+5e^{-\lambda} \end{array}\right.\] now simplifies to (choosing $\tau=\lambda$, so that $e^{-\tau}\leqslant e^{-\lambda}$, and using that $e^{-\lambda}\leqslant1$) \begin{align*} \varepsilon_{k+1} &\leqslant(1+3e^{-\lambda})K_1(e^{-\tau}+\varepsilon_k+2e^{-\lambda})+5e^{-\lambda}\\ &\leqslant K_1(1+3e^{-\lambda})\varepsilon_k+3K_1(1+3e^{-\lambda})e^{-\lambda}+5e^{-\lambda}\\ &\leqslant \underbrace{K_1(1+3e^{-\lambda})}_{a}\varepsilon_k+\underbrace{(12K_1+5)e^{-\lambda}}_{b} \end{align*} Now apply \lemref{lem:rec_seq_geom_arith} to get an explicit expression \[\varepsilon_n\leqslant a^n\varepsilon_0+b\frac{a^n-1}{a-1}\] If we take as initial condition the exact rational configuration $[c_0]$, we immediately get that $\varepsilon_0=0$. Let $K_3=4K_1$, then $a\leqslant K_3$. Pick $\lambda=S+T\log(K_3)+\log(12K_1+5)$. Then $\varepsilon_T\leqslant e^{-S}$. We check with \thref{th:fn_simul_diff_eq} that $\infnorm{u(t)},\infnorm{z(t)}\leqslant M$ for $t\leqslant T$ since $\varepsilon_T\leqslant1$. Finally, $u\in\GSpace{\left(\mathbb{R}_+^*\right)^3}{\underbrace{\lambdafun{\lambda,\mu,t}{\max(1,4(\lambda+\mu),s_F(M))}}_{s_f}}$ and $s_f=O(\operatorname{poly}(S,T))$. \end{proof} \section{Acknowledgments} D.S. Gra\c{c}a was partially supported by \emph{Funda\c{c}\~{a}o para a Ci\^{e}ncia e a Tecnologia} and EU FEDER POCTI/POCI via SQIG - Instituto de Telecomunica\c{c}\~{o}es through the FCT project PEst-OE/EEI/LA0008/2011. Olivier Bournez and Amaury Pouly were partially supported by ANR project SHAMAN, by DGA, and by DIM LSC DISCOVER project. \bibliographystyle{plain}
\section{Introduction} In the past few years, scene text detection and recognition have received a lot of attention from both academia and industry, due to their numerous potential applications in image understanding and computer vision systems. Detecting text in natural scenes is an open issue in the computer vision field because texts may appear in various forms and the background may be very complex. From a system perspective, a text detector which can detect individual words directly while being robust to complex backgrounds is preferable, as it will greatly simplify the processing of the later recognizer \cite{DBLP:journals/corr/abs-1710-03425}. Owing to this, recently, many state-of-the-art text detectors \cite{Jaderberg2016,Liao2016TextBoxes,zhong2016deeptext} based on advanced general object detection techniques \cite{NIPS2015_5638,liu2016ssd}, or box-based text detectors, have been proposed, which take words as the detection targets and thus make individual word detection feasible. Generally, they directly output word bounding boxes by jointly predicting text presence and coordinate offsets to anchor boxes \cite{NIPS2015_5638} at multiple scales. In this way, they have remarkably improved the detection performance, in terms of accuracy and robustness. However, we argue that the current box-based frameworks are still inefficient and unsatisfactory, for two main reasons. First, it is not efficient to handle multi-scale texts by traversing all the possible scales (see Figure 1). The current box-based text detectors employ fixed-size anchors to match the words, and the box regression can only adjust the sizes of anchors to some extent; the effect is rather minor. Due to the diversity of text sizes, they have to preset massive anchor boxes of different scales to match the underlying text shapes, which results in high computational cost.
For example, in \cite{Liao2016TextBoxes} $6$ scales (implemented with $6$ layered feature maps) are used and each cell is associated with fixed-size anchors. Secondly, it is unreasonable to match texts of all possible scales with a limited number of discrete scaled anchor boxes. This fact has been observed in \cite{Liao2016TextBoxes,zhong2016deeptext}. In these works, though $3\sim6$ scales are adopted to produce multi-scale anchors, some texts are still missed when no appropriately designed scale is applicable. Therefore, fixed-size anchors have become the bottleneck for the box-based text detection framework, though they are widely adopted currently. To overcome the above limitations, in this work, we propose a novel box-based text detector with scale-adaptive anchors, where the scales of anchors are dynamically adjusted according to the sizes of texts. Specifically, we introduce an additional scale regression layer to the basic box-based framework and use it to learn the scales of anchors in an implicit way, such that extra training supervision of object size is avoided. With the proposed scale-adaptive anchors, we only need to preset a few initial anchors of different aspect ratios at $1$ scale, thus largely reducing the number of anchors. Meanwhile, the learned scale value is continuous, which is more suitable for detecting various texts than several discrete scales, especially for small texts. \begin{figure*}[t] \includegraphics[height=3in, width=5.8in]{figure1} \caption{A comparison between our model and a representative box-based model (TextBoxes) \cite{Liao2016TextBoxes}. TextBoxes adds 6 convolutional feature layers (green layers) decreasing in size progressively to the end of the VGG16 model, and thus allows predictions of detections at $6$ scales. Differently, we only make predictions on the $Conv 4\_3$ layer and add an additional scale regression layer behind it to predict the text scale for each location of $Conv 4\_3$.
Then the learnt scales are encoded into the $Conv 4\_3$ layer to achieve scale-adaptive detection (the anchor boxes per location can be enlarged or shrunk according to the scale $s$). By using the learnt scale to replace multiple preset discrete scales, we improve the computational efficiency (reducing the running time from 0.73s to 0.28s), while keeping competitive accuracy. } \end{figure*} Additionally, we argue that when making predictions on a single feature map (\textit{i.e.}, a single scale), when the scale of an anchor is updated, the size of the corresponding receptive field should change synchronously. However, for a given anchor, regardless of size, the standard convolutions in CNNs \cite{NIPS2012_4824} can only assign a fixed-size receptive field to it. To tackle this problem, we propose Anchor convolution to dynamically adjust the sizes of receptive fields according to the learned scales of anchors, to ensure the integrity and richness of the feature information of each anchor. To summarize, the contributions of this paper are as follows: \begin{itemize} \item We propose scale-adaptive anchors which largely reduce the computational cost and improve the robustness against multi-scale texts, especially small scales. The whole framework is end-to-end, simple and easy to train. \item We propose Anchor convolution to dynamically adjust the sizes of receptive fields, to ensure the integrity and richness of the feature information of each anchor. \item We evaluate the proposed method on two real-world text detection datasets, \textit{i.e.}, ICDAR11 \cite{shahab2011icdar} and ICDAR13 \cite{karatzas2013icdar}, and demonstrate that while keeping competitive accuracy with the state-of-the-art, it is more efficient, taking only 0.28s per image, which is important to real systems, especially mobile applications. \end{itemize} \begin{figure*}[t] \includegraphics[height=3in, width=7in]{figure2} \caption{Comparisons of our framework (Top) with TextBoxes (Bottom).
(a) the initial anchors we preset on $Conv 4\_3$; all the anchors have an initial scale. (b) the generated scale map, which predicts the text size at each feature map location. (c) the sizes of the initial anchors are dynamically adjusted according to the text scales learned by the scale map. (d)-(g) the fixed-size anchors preset by TextBoxes. To handle multi-scale texts, TextBoxes requires 6 feature maps to produce anchors with different scales, while we only need one single map, reducing the time complexity from $O(n)$ to $O(1)$; the details can be seen in Sec 3.1. The green anchor and blue anchor can match the large text and small text of the input image respectively. } \end{figure*} \section{Related Work} Scene text detection has been extensively studied for a few years in the computer vision community, and a large number of methods have been proposed. Traditional methods \cite{epshtein2010detecting,huang2013text,6945320,10.1007/978-3-319-10593-2_33} usually deal with this issue by first detecting individual characters or coarse text regions, then following with a sequential processing of grouping or segmentation to form text lines or blocks. However, such post-processing steps are difficult to design because they require exploring many low-level image cues and various heuristic rules, which also makes the whole system highly complicated and unreliable. Owing to the strong representation capability of deep Convolutional Neural Networks (CNN), more and more deep learning based methods have developed rapidly. A number of recent approaches were built on Fully Convolutional Networks \cite{Long_2015_CVPR}, by treating text detection as a semantic segmentation problem. For example, Yao \textit{et al.} \cite{DBLP:journals/corr/YaoBSZZC16} propose to directly run the algorithm on the full images and produce global, pixel-wise prediction maps, in which detections are subsequently formed.
Zhang \textit{et al.} \cite{Zhang_2016_CVPR} propose the Text-Block FCN for generating text saliency maps and the Character-Centroid FCN for predicting the centroids of characters. However, current FCN-based methods fail to produce accurate word-level predictions with a single model, and they still require multiple bottom-up steps to construct words. Recently, inspired by the great progress of deep learning methods \cite{NIPS2015_5638} \cite{liu2016ssd} for general object detection, many box-based text detectors have been proposed and have advanced the performance of text detection considerably. For example, Jaderberg \textit{et al.} \cite{Jaderberg2016} propose an R-CNN-based \cite{7410526} framework, which first generates word candidates with a proposal generator and then adopts a convolutional neural network (CNN) to refine the word bounding boxes. Liao \textit{et al.} \cite{Liao2016TextBoxes} propose an end-to-end trainable network named \textit{TextBoxes}, which directly outputs word bounding boxes by jointly predicting text presence and coordinate offsets to anchor boxes \cite{NIPS2015_5638} at multiple scales. However, most of these methods rely on presetting a large number of anchors with different scales to discover texts in scene images, thus leading to high computational cost. \section{Approach} In this section, we propose a novel end-to-end text detector, which can automatically adjust the sizes of both anchors and receptive fields according to the scales of texts. The whole framework is illustrated in the top row of Figure 1. Initially, an input image is forwarded through the convolution layers of VGG16 \cite{simonyan2014very} and the $Conv 4\_3$ feature maps are produced. We add an additional scale regression layer behind the $Conv 4\_3$ feature maps to generate a scale map, which indicates the text size at each location. Next, the scale map is passed to the $Conv 4\_3$ layer to produce scale-adaptive anchors and flexible-size receptive fields.
Finally, these anchors are classified and refined via the detection module, which contains a classification layer and a bounding box regression layer, similar to the SSD detector \cite{liu2016ssd}. The details of each phase are described in the following. \subsection{Scale-adaptive Anchors} The basic idea of box-based detectors is to associate a set of anchor boxes with every map location to coarsely match the ground truth texts, and then predict both classification scores and shape offsets for each anchor to obtain the final text locations. In natural scene images, texts are usually presented at various scales. To match multi-scale texts, most current box-based detectors employ multiple feature maps from different levels to simultaneously make detection predictions. For example, as shown in the bottom of Figure 1, TextBoxes \cite{Liao2016TextBoxes} adds 6 convolutional feature layers (green layers), decreasing in size progressively, to the end of the VGG16 model, and thus allows predictions of detections at $6$ scales. To better describe this, we present its working principle in the bottom of Figure 2, where pictures (d)-(g) correspond to the green layers (from $Conv 4\_3$ to $Pool 11$) in Figure 1. We can see that different layers represent different scales, and the anchors of the earlier layers are used to match small-scale texts. Although searching over all possible anchors can handle most multi-scale texts, it is obviously inefficient. Different from previous box-based frameworks, we add an additional scale regression layer behind the $Conv 4\_3$ layer (as shown in Figure 1) to generate a scale map with one channel. The scale map has the same size as the $Conv 4\_3$ layer and encodes the predicted text size for each location. Then, the scale map is used to obtain the scale-adaptive anchors. The top row of Figure 2 gives the working principle of the proposed scale-adaptive anchors.
At each feature map cell of the $Conv 4\_3$ layer, we place several initial anchors. Then, with the generated scale map, the anchors of each cell can be enlarged or shrunk according to the assigned scale values. Specifically, for a given anchor, we denote its initial size as $({x_0},{y_0},{w_0},{h_0})$, where ${x_0},{y_0}$ are the coordinates of its center, ${w_0}$ is the width and ${h_0}$ is the height. Suppose the learned scale corresponding to this anchor is $s$; then the updated anchor's size $(x',y',w',h')$ is computed as follows: \begin{equation} \begin{array}{l} x' = {x_0}, y' = {y_0}\\ w' = {w_0} \times s, h' = {h_0} \times s \end{array} \end{equation} The detailed learning process will be introduced in Section 3.3. Given an input image, previous box-based methods tend to preset all possible anchors, and the number of anchors can be represented as: \begin{displaymath} {N_a} = \sum_{i=1}^{n_F} {D_i} \times {N_c} \end{displaymath} where $n_F$ is the number of green layers (as illustrated in Figure 1; the $n_F$ of TextBoxes is 6), $D_i$ represents the size of the $i$th feature map (e.g., $Conv 4\_3$ is $38 \times 38$), and $N_c$ represents the number of anchors in each cell. $D_i$ is a constant. Besides, the anchors preset in each cell of the $i$th feature map have different aspect ratios (e.g. 1, 3, 5, 7, 10), so $N_c$ is also a constant. Therefore, the time complexity of this kind of algorithm is $O(n)$. As shown in Figure 1, we only employ one single layer to handle multi-scale texts, so the number of anchors of our detector can be represented as: \begin{displaymath} {N'_a} = {D_1} \times {N_c} \end{displaymath} where $D_1$ represents the size of the $Conv 4\_3$ feature map, and the setting of $N_c$ is the same as above. Therefore, our algorithm reduces the time complexity from $O(n)$ to $O(1)$, and thus improves the computational efficiency.
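To make Eq. 1 and the anchor-count comparison concrete, here is a minimal Python sketch (our own illustration, not the authors' implementation); only the $38 \times 38$ size of $Conv 4\_3$ and $N_c = 5$ aspect ratios per cell come from the text, while the sizes of the other five feature maps are SSD-style assumptions:

```python
# Illustration of Eq. 1 and of the anchor counts N_a vs N'_a.
# Only Conv4_3 (38 x 38) and n_c = 5 come from the paper; the sizes of
# the other five feature maps are SSD-style assumptions for illustration.

def scale_anchor(x0, y0, w0, h0, s):
    """Eq. 1: the center stays fixed; width and height are multiplied
    by the learned scale s."""
    return x0, y0, w0 * s, h0 * s

feature_map_sizes = [38, 19, 10, 5, 3, 1]  # the n_F = 6 "green" layers
n_c = 5                                    # anchors (aspect ratios) per cell

# Multi-layer scheme: N_a = sum_i D_i * N_c, i.e. O(n) in the layer count.
n_a_multi = sum(d * d * n_c for d in feature_map_sizes)

# Single-map scheme: N'_a = D_1 * N_c, i.e. O(1) in the layer count.
n_a_single = feature_map_sizes[0] ** 2 * n_c
```

With these numbers $N_a = 9700$ and $N'_a = 7220$; the essential point is that $N'_a$ does not grow as more feature layers are added to the backbone.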
More importantly, we propose the adaptive scale to replace the previous search strategy, which traverses all possible scales (essentially an exhaustive search), and thus allow other box-based methods to handle multi-scale texts in a more efficient way. \subsection{Anchor Convolution} In the previous section, we proposed a novel scale regression layer for dynamically adjusting the scales of anchors. In this section, we propose Anchor convolution to dynamically adjust the sizes of receptive fields and thus exploit scaled features for each anchor. The standard convolution \cite{NIPS2012_4824} assigns a fixed-size receptive field to each anchor and computes a feature vector for later classification and box regression. Different from standard convolution, the proposed Anchor convolution dynamically adjusts the sizes of receptive fields according to the scales of anchors and extracts the necessary feature information for improving the performance of subsequent classification and regression. Next, we introduce its working principle (see Figure 3). In the convolution layers, each pixel of the output feature maps corresponds to a fixed-size receptive field ${\emph P}$ (${\emph P}$ is part of the input layer, also known as the convolutional patch). Generally, a total of ${k_h} \times {k_w}$ elements are selected from ${\emph P}$ to construct a feature vector and perform convolution operations with the kernel, where ${k_{h}}$ and $k_{w}$ are the height and width of the kernel, respectively. In standard convolution, for each pixel, the size of the corresponding ${\emph P}$ is $\left( {\left( {{k_h} - 1} \right){d_h} + 1,\left( {{k_w} - 1} \right){d_w} + 1} \right)$, where ${\emph P}$ is a rectangular receptive field with center coordinates $\left( {{c_h},{c_w}} \right)$, and ${d_h}$, ${d_w}$ are the dilation parameters. For integers $i \in \left[ { - \left\lfloor {{k_h}/2} \right\rfloor ,\left\lfloor {{k_h}/2} \right\rfloor } \right]$ and $j \in \left[ { - \left\lfloor {{k_w}/2} \right\rfloor ,\left\lfloor {{k_w}/2} \right\rfloor } \right]$, the coordinates of the selected elements from ${\emph P}$ are: \begin{equation} {h_{ij}} = {c_h} + i{d_h},{w_{ij}} = {c_w} + j{d_w} \end{equation} Let ${\emph I} = {\emph P}\left( {{h_{ij}},{w_{ij}}} \right)$ denote the feature vector constructed by these elements. Then, ${\emph I}$ is used to perform element-wise multiplication with the kernel. For Anchor convolution, suppose the output layer has the same size as the scale map and all channels share one scale map. Thus, each pixel of the output map corresponds to a scale coefficient. Let the receptive field here be ${\emph P}'$, which is also a rectangle with the same center as ${\emph P}$. Differently, the size of ${\emph P}'$ changes to $\left( {\left( {{k_h} - 1} \right){d_h}{s} + 1,\left( {{k_w} - 1} \right){d_w}{s} + 1} \right)$ along with the scale coefficient $s$. Then, a total of ${k_{h} \times {k_{w}}}$ elements are selected from ${\emph P}'$ to construct ${\emph I}$. Inspired by \cite{Zhang_2017_ICCV}, the coordinates of these elements are changed to: \begin{equation} {h'_{ij}} = {c_h} + i{d_h}{s},{w'_{ij}} = {c_w} + j{d_w}{s} \end{equation} We adopt an irregular kernel \cite{Szegedy_2015_CVPR} in this work, setting ${k_{h}} = 1$ and ${k_{w}} = 5$. Since $i$ is an integer and $i \in \left[ { - \left\lfloor {{k_h}/2} \right\rfloor ,\left\lfloor {{k_h}/2} \right\rfloor } \right]$, we have $i \equiv 0$ and ${h'_{ij}} = {c_h}$. In order to use scale information in the height direction, we define the feature vector ${\emph I} = {{\tilde {\emph P}}_{ij}}$ and construct it in a different way, which is formulated as: \begin{equation} \begin{array}{l} {{\tilde {\emph P}}_{ij}} = \left( {1 - \alpha } \right){\emph P}'\left( {{c_h},{{w'}_{ij}}} \right) + {\mathbb I}\left( {s > 1} \right) \\ \cdot\left(\frac{\alpha }{2}{\emph P}'\left( {{c_h} - \frac{{{s} - 1}}{2},{{w'}_{ij}}} \right) + \frac{\alpha }{2}{\emph P}'\left( {{c_h} + \frac{{{s} - 1}}{2},{{w'}_{ij}}} \right)\right) \end{array} \end{equation} where $\alpha$ is a weight parameter and ${\mathbb I}\left( \cdot \right)$ is the indicator function (1 if the condition holds and 0 otherwise). In Anchor convolution, the receptive field ${\emph P}'$ can be shrunk or expanded according to the different values of the scale coefficients. As illustrated in Figure 3, when $scale = 1$, Anchor convolution is the same as the standard convolution. However, when $scale > 1$, the receptive field ${\emph P}$ is expanded to ${\emph P}'$. In this case, we select $3\times5$ sampling elements to construct the feature vector according to Eq.4. In our work, Anchor convolution is applied in the $reg$ layer and the $cls$ layer. \begin{figure} \includegraphics[height=1.6in, width=3in]{figure3} \caption{Working principle of Anchor convolution. (a): scale = 1. (b): scale $> 1$.} \end{figure} \subsection{Scale Learning} We introduce a scale regression layer to generate the scale map, which is applied to both scale-adaptive anchor and Anchor convolution learning. Thus, the derivation of the loss function w.r.t. scale follows the chain rule.
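These derivations pass through the sampling coordinates of Eq. 3. As a minimal sketch of that coordinate rule (our own Python illustration, not the authors' code, assuming the $1 \times k_w$ kernel described above):

```python
# Sketch of the sampling grid of Eqs. 2-3 for a 1 x k_w kernel
# (k_h = 1, so i = 0 and the row coordinate stays at c_h).
def sample_coords(c_h, c_w, k_w=5, d_w=1, s=1.0):
    """Return the (h, w) sampling positions for one output pixel.
    s = 1.0 reproduces the standard convolution grid (Eq. 2);
    other values of s dilate or shrink it (Eq. 3)."""
    r = k_w // 2
    return [(float(c_h), c_w + j * d_w * s) for j in range(-r, r + 1)]
```

For $s = 2$ the five columns spread to twice the spacing while keeping the same center, which is exactly how ${\emph P}'$ covers a larger patch without adding kernel weights.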
The objective loss function is defined as follows: \begin{displaymath} L\left( {x,c,l,g} \right) = \frac{1}{N}\left( {{L_{conf}}\left( {x,c} \right) + \beta {L_{loc}}\left( {x,l,g} \right)} \right), \end{displaymath} where $x = 1$ denotes positives and $x = 0$ denotes negatives, $N$ is the number of matched anchors, $c$ is the confidence, $l$ is the predicted box, $g$ is the ground truth box, and $L_{conf}$ is a 2-class softmax loss. The smooth-L1 loss defined in \cite{girshick2015fast} is applied to the localization loss $L_{loc}$. Now, we derive the gradients w.r.t. the scale coefficients. For brevity, we omit the standard derivations applied in the network. \textbf{In scale-adaptive anchors.} We define the predicted box as $l=\left(x,y,w,h\right)$, which is computed as: \begin{equation} \begin{array}{l} x = x' + w'\Delta x\\ y = y' + h'\Delta y\\ w = w'\exp \left( {\Delta w} \right)\\ h = h'\exp \left( {\Delta h} \right) \end{array} \end{equation} where $\left( {\Delta x,\Delta y,\Delta w,\Delta h} \right)$ are offsets relative to the matched anchor, learned from the $reg$ layer. According to Eq.1, the gradient of $\left(x, y, w, h\right)$ (written $\left(.\right)$ for brevity) w.r.t. $s$ is obtained as \begin{equation} \frac{{\partial \left( {.} \right)}}{{\partial {s} }} = \left( {\Delta x + \exp \left( {\Delta w} \right)} \right){w_0} + \left( {\Delta y + \exp \left( {\Delta h} \right)} \right){h_0} \end{equation} If we use $n_{p}$ to denote the number of anchors at position $t$, then the gradient of $l$ w.r.t. $s_{t}$ is ${\sum\limits_{n_p}{\frac{{\partial \left( {.} \right)}}{{\partial {s} }}}}$. \textbf{In Anchor convolution.} We first formulate the forward propagation of the proposed Anchor convolution, and then give the formulations to update the scale. Let $H$, $W$ and $C$ denote the height, width and channel number of a feature map, respectively. We also use the subscripts $in$ and $out$ to distinguish inputs and outputs.
Suppose the convolution kernels in Anchor convolution are denoted by $\emph K \in \mathbb{R}^{({C_{out}}) \times ({C_{in}} \times {k_{h}} \times {k_{w}})}$ and the bias by $b \in \mathbb{R}^{C_{out}}$. We further define the subscripts $x \in \left[ {1,{C_{out}}} \right]$, $y \in \left[ {1,{H_{out}} \times {W_{out}}} \right]$ and $c \in \left[ {1,{C_{in}}} \right]$. Regarding $\emph K$ as a partitioned matrix, each of its blocks ${\Phi_{xc}}^T \in {\mathbb{R}^{\left( {{k_h} \times {k_w}} \right)}}$ is a vector, corresponding to one of the convolution kernels. For any element ${O_{xy}}$ in the convolution output, we have: \begin{equation} {O_{xy}} = \sum\limits_{c = 1}^{{C_{in}}} {{\Phi_{xc}}{{\emph I}_{cy}}} + {b_x} \end{equation} In Anchor convolution, we compute the coordinates of the feature vector via Eq.3. We denote the scale coefficient corresponding to ${O_{xy}}$ as ${s_{y}}$. Since $s_{y}$ is a floating-point value, the coordinates ${w'_{ij}}$ and ${c_h} \pm \frac{{{s_y} - 1}}{2}$ may not be integers. Here, inspired by the Spatial Transformer Networks \cite{NIPS2015_5854}, we obtain ${{\tilde {\emph P}}_{ij}}$ through bilinear interpolation. Let ${{\emph I}_{cy}}$ be the feature vector corresponding to ${{\tilde {\emph P}}_{ij}}$; then the forward propagation of the convolution is computed via Eq.7. During backward propagation, after obtaining $g\left(O_{xy}\right)$ from the $loss$ layer, the gradients w.r.t. ${\emph I}_{cy}$, $\Phi_{xc}$ and $b_{x}$ are derived as \begin{eqnarray} g\left( {{{\emph I}_{cy}}} \right) &=& \sum\limits_x {\Phi _{xc}^Tg\left( {{O_{xy}}} \right)} \\ g\left( {{\Phi _{xc}}} \right) &=& \sum\limits_y {g\left( {{O_{xy}}} \right){\emph I}_{cy}^T} \\ g\left( {{b_x}} \right) &=& \sum\limits_y {g\left( {{O_{xy}}} \right)} \end{eqnarray} According to Eq.3, we can also obtain the gradients $\frac{{\partial {{\emph I}_{cy}}}}{{\partial {{h'}_{ij}}}}$ and $\frac{{\partial {{\emph I}_{cy}}}}{{\partial {{w'}_{ij}}}}$.
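These partial derivatives come from the bilinear interpolation step just described; a minimal sketch of that step (our own code, with zero padding outside the map as an assumption of the sketch):

```python
import math

def bilinear(img, h, w):
    """Sample a 2-D map at fractional coordinates (h, w) by bilinear
    interpolation, as in Spatial Transformer Networks; out-of-range
    pixels are treated as zero (an assumption of this sketch).
    The result is differentiable in h and w, which is what makes the
    gradients w.r.t. the sampling coordinates well defined."""
    h0, w0 = int(math.floor(h)), int(math.floor(w))
    dh, dw = h - h0, w - w0

    def px(i, j):
        # zero padding outside the feature map
        if 0 <= i < len(img) and 0 <= j < len(img[0]):
            return img[i][j]
        return 0.0

    return (px(h0, w0) * (1 - dh) * (1 - dw)
            + px(h0, w0 + 1) * (1 - dh) * dw
            + px(h0 + 1, w0) * dh * (1 - dw)
            + px(h0 + 1, w0 + 1) * dh * dw)
```

Because the interpolation weights are linear in the coordinates, the derivatives of the sampled value w.r.t. $h$ and $w$ are piecewise constant, and can be chained with the coordinate derivatives to reach the scale coefficient.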
Since the coordinates ${{h'}_{ij}}$ and ${{w'}_{ij}}$ rely on the scale coefficient $s_{y}$, to obtain the gradient of $s_{y}$ we first compute the partial derivatives of the coordinates as follows: \begin{equation} \frac{{\partial {{h'}_{ij}}}}{{\partial {s_y}}} = 0 ~ or ~ \frac{\alpha }{2} ~ or ~ - \frac{\alpha }{2}, ~ \frac{{\partial {{w'}_{ij}}}}{{\partial {s_y}}} = j{d_w} \end{equation} Thus the final gradient of $s_{y}$ is obtained as \begin{equation} g\left( {{s_y}} \right) = \sum\limits_c {\sum\limits_{i,j} {{{\left( {\frac{{\partial {{h'}_{ij}}}}{{\partial {s_y}}}\frac{{\partial {{\emph I}_{cy}}}}{{\partial {{h'}_{ij}}}} + \frac{{\partial {{w'}_{ij}}}}{{\partial {s_y}}}\frac{{\partial {{\emph I}_{cy}}}}{{\partial {{w'}_{ij}}}}} \right)}^T}} g\left( {{{\emph I}_{cy}}} \right)} \end{equation} According to Eq.6 and Eq.12, the gradients of the scale coefficients can be calculated automatically from the gradients of the following layers. In other words, the scale map is obtained in a data-driven manner and we do not need any extra supervision. All the derived formulations above can be computed efficiently and implemented in parallel on GPUs. In practice, we constrain the scale coefficients to be greater than zero and smaller than the size of the image. \section{Experiments} We implement the proposed algorithm with Caffe \cite{Jia:2014:CCA:2647868.2654889} in Python. All the experiments are conducted on a regular server (3.3GHz 20-core CPU, 64GB RAM, NVIDIA TITAN GPU and Linux 64-bit OS) and each routine runs on a single GPU. \subsection{Datasets and Experimental Settings} {\bfseries VGG SynthText-Part.} The VGG SynthText dataset \cite{gupta2016synthetic} consists of approximately $800k$ synthetic scene-text images. For efficiency, we randomly select $500k$ images for training and refer to this subset as VGG SynthText-Part.
{\bfseries ICDAR13.} The ICDAR13 dataset is from the ICDAR 2013 Robust Reading Competition \cite{karatzas2013icdar}, with $229$ natural images for training and $233$ images for testing. {\bfseries ICDAR11.} The ICDAR11 dataset is from the ICDAR 2011 Robust Reading Competition \cite{shahab2011icdar}, with $229$ natural images for training and $255$ images for testing. Our model is trained with $300 \times 300$ images using stochastic gradient descent (SGD). Momentum and weight decay are set to $0.9$ and $5 \times 10^{-4}$, respectively. The learning rate is initially set to $10^{-3}$, and decayed to $10^{-4}$ after $20k$ training iterations. We first train our model on VGG SynthText-Part for $40k$ iterations, and then finetune it on ICDAR13 for $2k$ iterations. Compared to previous box-based methods, the number of anchor boxes in our algorithm is largely reduced, so we employ all anchors for training without negative mining. Accordingly, we add a balance parameter to our loss function to balance the ratio of positive and negative anchors. To further boost detection recall, we rescale the input image to 6 resolutions, and the total running time is 0.28s. \subsection{Visualization and Analysis} To verify the ability of our network to learn the scales of texts, we visualize the scale maps after different numbers of training iterations. As shown in Figure 4, as the number of iterations increases, the scale maps gradually exhibit structures similar to the texts in the images. Specifically, large texts have large scale values, whereas small texts have small scale values. Besides, for each text area, the scale values associated with the center points are slightly larger, while the scale values close to the boundaries are slightly smaller. \begin{figure}[h] \includegraphics[height=1.6in, width=3in]{figure4} \caption{Visualization of scale maps after different numbers of training iterations.
Brightness of the green color denotes the level of scale, with brighter color representing a larger value. (a) Input images; (b) 500 iterations; (c) 5,000 iterations; (d) 30,000 iterations.} \end{figure} \subsection{Evaluation on Anchor Convolution} In this part, we investigate the effect of Anchor convolution, which is used to adjust the size of the receptive field of each anchor and obtain richer feature information. Two models are trained using $50k$ images of SynthText-Part and refined on ICDAR13. One model is with Anchor convolution (denoted AC-model) while the other is not (denoted WAC-model). We evaluate the two models on ICDAR13 and the results are tabulated in Table 1. Here P, R and F are abbreviations for Precision, Recall, and F-measure, respectively. \begin{figure*}[t] \includegraphics[height=3in, width=6in]{figure5} \caption{Our detection results on several challenging images. The green bounding boxes are correct detections.} \end{figure*} \begin{table}[h] \caption{Impact of Anchor convolution on text detection. $\Delta F$ is the improvement of the AC-model over the WAC-model. } \label{tab:freq} \begin{tabular}{ccccl} \toprule Model & P & R & F & $\Delta F$ \\ \midrule WAC-model & 77\%& 76\% & 76\% & - \\ AC-model & 89\%& 83\% & 86\% & 10\% \\ \bottomrule \end{tabular} \end{table} From Table 1 we can see that the F-measure with Anchor convolution (AC-model) is $86\%$, a $10\%$ improvement over the WAC-model. This verifies that Anchor convolution, by adjusting the receptive fields dynamically, is effective in exploiting the feature information necessary for detecting texts of various sizes.
\subsection{Evaluation for Full Text Detection} \begin{table*} \caption{Experimental results on the ICDAR11 and ICDAR13 datasets} \label{tab:commands} \begin{tabular}{cccccccl} \toprule Datasets & \multicolumn{3}{|c|}{ICDAR11} & \multicolumn{3}{|c|}{ICDAR13} & \multirow{2}*{Runtime/s} \\ \cline{1-7} Methods & P & R & F & P & R & F & \\ \midrule TextBoxes \cite{Liao2016TextBoxes} & 88\% & 82\% & 85\% & 88\% & 83\% & 85\% & 0.73 \\ Yao \textit{et al.}\cite{DBLP:journals/corr/YaoBSZZC16} & - & - & - & 89\% & 80\% & 84\% & 0.62 \\ MCLAB\_FCN \cite{Zhang_2016_CVPR} & - & - & - & 78\% & 88\% & 82\% & 2.1 \\ RRPN \cite{DBLP:journals/corr/MaSYWWZX17} & - & - & - & 90\% & 72\% & 80\% & - \\ TextFlow \cite{Tian:2015:TFU:2919332.2920067} & 86\% & 76\% & 81\% & 85\% & 76\% & 80\% & 1.4 \\ Lu \textit{et al.} \cite{Lu2015} & - & - & - & 89\% & 70\% & 80\% & - \\ Neumann \textit{et al.} \cite{neumann2015efficient} & - & - & - & 82\% & 72\% & 77\% & 0.8 \\ FASText \cite{Buta:2015:FEU:2919332.2919945} & - & - & - & 84\% & 69\% & 75\% & 0.55 \\ Ours & 89\% & 82\% & \textbf{85\%} & 89\% & 83\% & \textbf{86\%} & \textbf{0.28} \\ \bottomrule \end{tabular} \end{table*} We evaluate our detector on two benchmarks: ICDAR11 and ICDAR13. The comparison results with state-of-the-art methods, including traditional methods and box-based methods, are tabulated in Table 2. We can see that our method achieves F-measures of $85\%$ and $86\%$, slightly superior to state-of-the-art approaches, while costing much less time, which is important for real systems, especially mobile applications. Some detection examples are given in Figure 5. The results show that our model is extremely robust against multiple text variations, cluttered backgrounds, and challenging conditions such as strong light and blurring. \begin{figure} \includegraphics[height=2in, width=3in]{figure6} \caption{Comparisons of our method (Top row) with TextBoxes (Bottom row). The red bounding boxes are the detection results.
Our method handles small texts better, as marked by the yellow bounding boxes. } \end{figure} {\bfseries Advantages on Small Texts.} We argue that our method is superior in detecting small texts. To verify this, we compare it with the representative box-based method, i.e., \textit{TextBoxes} \cite{Liao2016TextBoxes}. To detect small texts, \textit{TextBoxes} needs to resize the input image to $1600 \times 1600$ pixels before feeding it into the network, which is very time-consuming. In contrast, we can cover most of the small texts with only $800 \times 800$ input images. With such settings, as shown in Figure 6, our model is more reliable and finds all the small texts, while \textit{TextBoxes} misses some of them. We attribute the advantages of our method on small texts to the scale-adaptive anchors and Anchor convolution. First, different from the fixed-size anchors of several discrete scales used in \textit{TextBoxes}, the proposed scale-adaptive anchors can change their sizes continuously and thus have more potential to match the shapes of small texts. Moreover, with its current settings, even the smallest anchor in \textit{TextBoxes} may be much bigger than the small texts in some test images. Second, the proposed Anchor convolution is able to shrink the receptive fields for small texts adaptively; therefore we can focus on the texts while removing the side effects of background in feature extraction. {\bfseries Performance Analysis.} By producing scale-adaptive anchors to replace the presetting of all possible anchors of different scales, as employed in most box-based methods, we improve the computational efficiency (reducing the time complexity from $O(n)$ to $O(1)$; see Sec 3.1 for details), and reduce the running time from 0.73s to 0.28s while keeping competitive accuracy. Our running time includes generating the scale map and matching anchors. Therefore, the savings in time will be more significant as the networks go deeper.
Furthermore, the proposed adaptive scale allows other box-based methods to handle multi-scale texts in a more efficient way and further improves their performance. \section{Conclusions} In this paper, we have presented an end-to-end text detector with scale-adaptive anchors. It largely reduces the number of anchors and thus improves the computational efficiency. Meanwhile, it also eliminates the unreliability of detection caused by discrete scales, and is more effective at handling multi-scale texts, especially small texts. Additionally, Anchor convolution is proposed to further improve the detection performance by exploiting the necessary features for each anchor. Furthermore, the proposed adaptive scale can also be applied to other methods, allowing them to handle multi-scale texts in a more efficient way. Experimental results show that our approach is fast while maintaining high accuracy on ICDAR11 and ICDAR13. In the future, we are interested in applying our method to the arbitrary-oriented text detection task. \begin{acks} The authors would like to thank the associate editor and the anonymous reviewers for their constructive suggestions. \end{acks}
\section{Cycles assigned to ${\rm Conf}^+_w({\cal A}^n, {\cal B}, {\cal B})({\mathbb Z}^t)$} \label{sec2.3.3} Given an element $w\in W$, there are moduli spaces $$ {\rm Conf}_{w}({\cal A}^n, {\cal B}, {\cal B}) \subset {\rm Conf}({\cal A}^n, {\cal B}, {\cal B}), ~~~{\rm Conf}^{\cal O}_{w}({\cal A}^n, {\cal B}, {\cal B})\subset {\rm Conf}^{\cal O}({\cal A}^n, {\cal B}, {\cal B}) $$ determined by the condition that the last two flags belong to the ${\rm G}$-orbit $({\cal B}\times {\cal B})_w \subset {\cal B}\times {\cal B}$ parametrised by $w$. So there are decompositions into disjoint unions $$ {\rm Conf}({\cal A}^n, {\cal B}, {\cal B}) = \coprod_{w\in W}{\rm Conf}_{w}({\cal A}^n, {\cal B}, {\cal B}), ~~~~ {\rm Conf}^{\cal O}({\cal A}^n, {\cal B}, {\cal B}) = \coprod_{w\in W} {\rm Conf}^{\cal O}_{w}({\cal A}^n, {\cal B}, {\cal B}). $$ Similarly, there are moduli spaces ${\rm Conf}_{w}({\rm Gr}^n, {\cal B}, {\cal B})$, and a canonical map $$ \kappa_w: {\rm Conf}^{\cal O}_{w}({\cal A}^n, {\cal B}, {\cal B}) \longrightarrow {\rm Conf}_{w}({\rm Gr}^n, {\cal B}, {\cal B}). $$ The subgroup ${\rm B}_w:= {\rm B}\cap w{\rm B}w^{-1}$ is the stabiliser of a point for the action of the group ${\rm G}$ on $({\cal B}\times {\cal B})_w$. So one has \begin{equation} \label{stack10} {\rm Conf}_{w}({\rm Gr}^n, {\cal B}, {\cal B})= {\rm B}_w({\cal K})\backslash {\rm Gr}^{n}, ~~~~{\rm B}_w:= {\rm B}\cap w{\rm B}w^{-1}. \end{equation} For $w=e$ we get \begin{equation} \label{stack1} {\rm Conf}_{e}({\rm Gr}^n, {\cal B}, {\cal B})= {\rm Conf}({\rm Gr}^n, {\cal B}). \end{equation} The space ${\rm Conf}_{w}({\cal A}, {\cal B}, {\cal B})$ is birationally isomorphic to a subgroup ${\rm U}_{(w)}:= {\rm U} \cap w^{-1}{\rm B}^-w \subset {\rm U}$. The latter has a positive structure defined by Lusztig \cite{L} using reduced decompositions of $w$. Combined with the standard construction, we arrive at a positive atlas on ${\rm Conf}_{w}({\cal A}^n, {\cal B}, {\cal B})$.
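For instance, for ${\rm G}={\rm SL}_2$, where $W=\{e,s\}$, the stabilisers ${\rm B}_w$ can be computed directly (a sanity-check example):

```latex
% Example: {\rm G} = {\rm SL}_2, W = \{e, s\}.
% For w = e one has {\rm B}_e = {\rm B}, recovering (\ref{stack1}); for w = s,
\[
{\rm B}_s \;=\; {\rm B}\cap s{\rm B}s^{-1} \;=\; {\rm B}\cap {\rm B}^- \;=\; {\rm H},
\]
% the maximal torus, so that
% {\rm Conf}_{s}({\rm Gr}^n, {\cal B}, {\cal B}) = {\rm H}({\cal K})\backslash {\rm Gr}^{n}.
```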
There is a potential ${\cal W}$ on ${\rm Conf}_{w}({\cal A}^n, {\cal B}, {\cal B})$, defined by restricting the usual one. So the set ${\rm Conf}^+_{w}({\cal A}^n, {\cal B}, {\cal B})({\mathbb Z}^t)$ is defined. The canonical map $\kappa_w$ provides a collection of cycles \begin{equation} \label{MVGCyy} {\cal M}_l^\circ\subset {\rm Conf}_{w}({\rm Gr}^n, {\cal B}, {\cal B}), ~~~~l\in {\rm Conf}^+_{w}({\cal A}^n, {\cal B}, {\cal B})({\mathbb Z}^t). \end{equation} \vskip 3mm Notice that our approach makes the map $\kappa$ transparent, and allows one to avoid any kind of explicit parametrisation in its definition. It makes obvious a parametrisation of generalized MV cycles, defined as components of $\overline{{\rm S}_{e}^{\lambda}\cap {\rm S}_{w}^{\mu}}$ for arbitrary $w\in W$ -- one needs to use the whole configuration space ${\rm Conf}({\cal A}, {\cal B}, {\cal B})$, not only its generic part. \paragraph{Constructible functions $D_F$.} Let $F$ be a rational function on ${\cal A}^{n}\times ({\cal B}\times {\cal B})_w$, invariant under the left diagonal action of ${\rm G}$. Using the isomorphism $ {\mathbb Q}({\cal A}^{n}\times ({\cal B}\times {\cal B})_w)^{\rm G} = {\mathbb Q}({\cal A}^{n})^{{\rm B}_w}, $ we realize $F$ as a ${\rm B}_w$-invariant rational function on ${\cal A}^{n}$. Define a function $D_F$ on ${\rm G}({\cal K})^{n}$ by \begin{equation} \label{dff} D_F(g_1(t), ..., g_{n}(t)):= {\rm val}~F(g_1(t)A_1, ..., g_{n}(t)A_n) ~~~\mbox{\rm for some $A_1, ..., A_{n} \in {\cal A}({\mathbb C})$}. \end{equation} It is left ${\rm B}_w({\cal K})$-invariant and right ${\rm G}({\cal O})^{n}$-invariant, and so descends to a function $$ D_F: {\rm B}_w({\cal K})\backslash {\rm Gr}^{n}\longrightarrow {\mathbb Z}. $$ {\bf Remark}. The function $D_F$ assigned to a positive rational function $F$ on ${\rm Conf}({\cal A}^n, {\cal B}, {\cal B})$ is not a function on the whole space ${\rm Conf}({\rm Gr}^n, {\cal B}, {\cal B})$, only on its generic part.
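Recall the standard properties of the valuation on ${\cal K}$ which underlie the comparison of $D_F$ with the tropicalization $F^t$ (a reminder for the reader's convenience):

```latex
% For f, g \in {\cal K}^*:
\[
{\rm val}(fg)\;=\;{\rm val}(f)+{\rm val}(g),\qquad
{\rm val}(f+g)\;\geq\;\min\big({\rm val}(f),{\rm val}(g)\big),
\]
% with equality in the second relation whenever the lowest-order terms do not
% cancel.  For a subtraction-free (positive) rational function $F$ such
% cancellation does not occur generically, so {\rm val} turns products into
% sums and sums into minima; this is precisely the rule defining $F^t$.
```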
One has $$ {\rm Conf}({\rm Gr}^n, {\cal B}, {\cal B}) =\coprod_{w\in W}{\rm Conf}_{w}({\rm Gr}^n, {\cal B}, {\cal B}), $$ and one needs to use the positive rational functions on the strata ${\rm Conf}_{w}({\cal A}^n, {\cal B}, {\cal B})$ to define constructible functions on the strata ${\rm Conf}_{w}({\rm Gr}^n, {\cal B}, {\cal B})$. \begin{theorem} \label{5.8.10.45xw} Let $l\in{\rm Conf}_w^+({\cal A}^{n}, {\cal B}, {\cal B})({\mathbb Z}^t)$, and $F\in {{\mathbb Q}}_{+}({\rm Conf}_w({\cal A}^{n}, {\cal B}, {\cal B}))$. Then we have $$ D_F\big({\cal M}_l^\circ\big)\equiv F^t(l). $$ \end{theorem} \section{Weight multiplicities and tensor product multiplicities} \label{sec10} The following theorem is due to \cite[Section 8]{L}. We provide a simple proof by using the set ${\bf B}_{\lambda}^\mu$. \begin{theorem} [\cite{L}] \label{8.21.wtm.h} The weight multiplicity $b_{\lambda}^{\mu}:={\rm dim}~V_{\lambda}^{\mu}$ is equal to the cardinality of the subset \begin{equation} \label{13.2.1.519h} \{l\in {\bf A}_{\lambda-\mu}~|~ {\cal L}_i^t(l)\leq \langle \lambda, \alpha_{i^*}\rangle,~\forall i\in I\}\subset {\rm U}^+_{\chi}({\mathbb Z}^t). \end{equation} \end{theorem} \begin{proof} Recall the positive birational isomorphism $$ \alpha_2=(\pi_{12}, p_2): {\rm Conf}({\cal A}^2;{\cal B})\longrightarrow {\rm H}\times {\rm U},~~~({\rm A}_1,{\rm A}_2,{\rm B}_3)\longmapsto (h_{{\rm A}_1,{\rm A}_2}, u_{{\rm B}_1,{\rm B}_3}^{{\rm A}_2}). $$ The potential ${\cal W}$ on ${\rm Conf}({\cal A}^2;{\cal B})$ induces a positive function ${\cal W}_{\alpha_2}={\cal W}\circ \alpha_2^{-1}: {\rm H}\times {\rm U} \rightarrow {\Bbb A}^1.$ By Lemma \ref{lem1}, we get $$ {\cal W}_{\alpha_2}(h,u)=\sum_{i\in I}\frac{\alpha_{i^*}(h)}{{\cal L}_i(u)}+\chi(u).
$$ Its tropicalization becomes $$ {\cal W}_{\alpha_2}^{t}(\lambda, l)=\min_{i\in I}\{\langle \lambda, \alpha_{i^*}\rangle- {\cal L}_{i}^{t}(l), \chi^t(l)\}. $$ Therefore, under the map $p_2^t$, the set ${\bf B}_{\lambda}^{\mu}$ is identified with the set \eqref{13.2.1.519h}. Note that the set ${\bf B}_{\lambda}^\mu$ parametrizes the MV basis of the weight space $V_{\lambda}^{\mu}$ of $V_{\lambda}$. Therefore the weight multiplicity $b_{\lambda}^{\mu}$ is equal to the cardinality of the set ${\bf B}_{\lambda}^\mu$. \end{proof} Below we give a (rather simple) proof of \cite[Corollary 3.4]{BZ}, based on Proposition \ref{13.2.1.5.29h}. \begin{theorem}[\cite{BZ}] \label{COMBZ} The tensor product multiplicity $c_{\lambda, \nu}^{\mu}$ is equal to the cardinality of the set \begin{equation} \label{13.2.1.6.13h} \{l\in {\bf A}_{\lambda+\nu-\mu}~|~ {\cal L}_i^t(l)\leq \langle \lambda, \alpha_{i^*}\rangle \text{ and } {\cal R}_i^t(l)\leq \langle \nu, \alpha_{i}\rangle\text{ for all } i\in I\}\subset {\rm U}^+_{\chi}({\mathbb Z}^t) \end{equation} \end{theorem} \begin{proof} Recall the positive birational isomorphism $$ \alpha_2=(\pi_{12}, p_2, \pi_{23}): {\rm Conf}_3({\cal A})\stackrel{\sim}{\longrightarrow} {\rm H}\times{\rm U} \times {\rm H},~~~({\rm A}_1,{\rm A}_2,{\rm A}_3)\longmapsto (h_{{\rm A}_1,{\rm A}_2}, u_{{\rm B}_1,{\rm B}_3}^{{\rm A}_2}, h_{{\rm A}_2,{\rm A}_3}). $$ By Theorem \ref{13.2.1.6.05h}, the function ${\cal W}_{\alpha_2}={\cal W}\circ \alpha_2^{-1}$ becomes \begin{equation} \label{25huh} {\cal W}_{\alpha_2}(h_1,u,h_2)=\sum_{i\in I}\frac{\alpha_i(h_1)}{{\cal L}_{i^*}(u)}+\chi(u)+\sum_{i\in I}\frac{\alpha_i(h_2)}{{\cal R}_i(u)}. \end{equation} We tropicalize (\ref{25huh}): $$ {\cal W}_{\alpha_2}^t(\lambda, l, \nu)=\min \big\{\min_{i\in I}\{\langle \lambda, \alpha_{i^*}\rangle-{\cal L}_{i}^t(l)\},~\chi^t(l),~\min_{i\in I}\{\langle \nu, \alpha_i\rangle-{\cal R}_{i}^t(l)\}\big\}.
$$ Therefore, under $p_2^t$, the set ${\bf C}_{\lambda, \nu}^{\mu}$ is identified with \eqref{13.2.1.6.13h}. The rest is due to Proposition \ref{13.2.1.5.29h}. \end{proof} \section{Geometric crystal structure on ${\rm Conf}({\cal A}^{n}, {\cal B})$} \label{sec9} We construct a geometric crystal structure on ${\rm Conf}({\cal A}^{n}, {\cal B})$. See \cite{BK3} for the definition of geometric crystals. By the positive birational isomorphism $p: {\rm Conf}({\cal A}^2, {\cal B})\rightarrow {\rm B}^{-}$, it recovers the crystal structure on ${\rm B}^{-}$ defined in Example 1.10 of {\it loc.cit}. Using the machinery of tropicalization, the subset ${\bf B}_{\lambda}^{\mu}\subset {\rm Conf}({\cal A}^2,{\cal B})({\mathbb Z}^t)$ becomes a crystal basis. We refer the reader to Section 2 of {\it loc.cit.} for details concerning the tropicalization of geometric crystals. Braverman and Gaitsgory \cite{BG} construct a crystal structure for the MV basis. As a direct consequence of this paper, the MV basis is parametrized by the set ${\bf B}_{\lambda}^{\mu}$. Therefore we get a direct isomorphism between the crystals of \cite[Theorem 6.15]{BK2} and \cite{BG} without using the uniqueness theorem from \cite{J}. The tensor product of crystals can be interpreted as the tropicalization of the convolution product $*$ from Section \ref{sec7.1h}. Thanks to the geometric nature of the configuration spaces, the definitions and proofs become simple. \subsection{Geometric crystal structure on ${\rm Conf}({\cal A}^n, {\cal B})$} \label{sec7.01h} Let $x=({\rm A}_1,\ldots, {\rm A}_{n}, {\rm B}_{n+1})\in {\rm Conf}({\cal A}^{n}, {\cal B})$. Let $i\in I$. Recall the following positive maps: \begin{itemize} \item $p: {\rm Conf}({\cal A}^n,{\cal B})\rightarrow {{\rm B}^-}$ such that $p(x)=b^{{\rm A}_1,{\rm A}_n}_{{\rm B}_{n+1}}$.
\item $\mu: {\rm Conf}({\cal A}^{n},{\cal B})\rightarrow {\rm H}$ such that $\mu(x)=\mu_{{\rm B}_{n+1}}^{{\rm A}_{1}, {\rm A}_{n}}.$ \item $\varphi_i, \varepsilon_i, {\cal W}: {\rm Conf}({\cal A}^{n},{\cal B})\rightarrow {\Bbb A}^1$ such that $$\varphi_i(x)={\cal L}_i(u_{{\rm B}_{n+1}, {\rm B}_{n}}^{{\rm A}_1}), ~~\varepsilon_i(x)={\cal R}_i(u_{{\rm B}_{1}, {{\rm B}}_{n+1}}^{{\rm A}_{n}}), ~~{\cal W}(x)=\sum_{k=1}^{n}\chi(u_{{\rm B}_{k-1}, {\rm B}_{k+1}}^{{\rm A}_{k}}).$$ Here the index $k$ is taken modulo $n+1$. \item $e_i^{\cdot}: {\Bbb G}_m\times {\rm Conf}({\cal A}^{n},{\cal B})\rightarrow {\rm Conf}({\cal A}^{n},{\cal B})$, acting only on the last flag: $e_i^c({\rm A}_1,\ldots, {\rm A}_n,{\rm B}_{n+1})=({\rm A}_1,\ldots, {\rm A}_n, \widetilde{{\rm B}}_{n+1})$, where $\widetilde{{\rm B}}_{n+1}$ is determined by the action on ${\rm Conf}({\cal A}^2,{\cal B})$: $$ e_i^c({\rm A}_1,{\rm A}_n, {\rm B}_{n+1})=({\rm A}_1,{\rm A}_n,\widetilde{{\rm B}}_{n+1}). $$ The action $e_i$ on ${\rm Conf}({\cal A}^2,{\cal B})$ is defined by \eqref{13.4.22.2108h}. \end{itemize} \begin{theorem} The 6-tuple $\big({\rm Conf}({\cal A}^{n}, {\cal B}), \mu, {\cal W}, \varphi_i, \varepsilon_i, e_i^{\cdot}|i\in I\big)$ is a positive decorated geometric crystal. \end{theorem} {\bf Warning.} The maps $\varphi_i$, $\varepsilon_i$ are the inverses of the maps $\varphi_i$, $\varepsilon_i$ in \cite[Definition 1.3]{BK3}. \begin{proof} By \cite[Definitions 1.3 $\&$ 2.7]{BK3}, it remains to show the following lemmas. \begin{lemma} \label{8.13.4.7h} Let $\alpha_i^\vee$ and $\alpha_i$ be the simple coroot and the simple root corresponding to $i\in I$. Let $x=({\rm A}_1,\ldots, {\rm A}_n, {\rm B}_{n+1})$. Let $c\in {\Bbb G}_m$. Then \begin{itemize} \item[1.]$\mu(e_i^c(x))=\alpha_i^\vee (c)\mu(x)$. \item[2.] $\varepsilon_i(x)\alpha_i(\mu(x))=\varphi_i(x)$. \item[3.] $\varphi_i(e_i^c(x))=c\varphi_i(x)$, $\varepsilon_i(e_i^c(x))=c^{-1}\varepsilon_i(x)$. \item[4.] ${\cal W}(e_i^c(x))={\cal W}(x)+(c-1)\varphi_i(x)+(c^{-1}-1)\varepsilon_i(x)$.
\end{itemize} \end{lemma} \begin{proof} We pick ${{\rm A}_{n+1}}$ such that $\pi({\rm A}_{n+1})={\rm B}_{n+1}.$ By \eqref{8.13.4a}, $$ \chi_{i^*}(u_{{\rm B}_n,{\rm B}_1}^{{\rm A}_{n+1}})=\frac{\alpha_{i}(h_{{\rm A}_1,{\rm A}_{n+1}})}{{\cal L}_i(u_{{\rm B}_{n+1},{\rm B}_n}^{{\rm A}_1})}=\frac{\alpha_i(h_{{\rm A}_n,{\rm A}_{n+1}})}{{\cal R}_i(u_{{\rm B}_1,{\rm B}_{n+1}}^{{\rm A}_n})}. $$ Therefore, \begin{equation} \label{13.6.9.2059h} \frac{\varphi_i(x)}{\varepsilon_i(x)}=\frac{{\cal L}_i(u_{{\rm B}_{n+1},{\rm B}_n}^{{\rm A}_1})}{{\cal R}_i(u_{{\rm B}_1,{\rm B}_{n+1}}^{{\rm A}_n})}=\alpha_{i}(h_{{\rm A}_1,{\rm A}_{n+1}}h_{{\rm A}_n,{\rm A}_{n+1}}^{-1})=\alpha_i(\mu(x)). \end{equation} The last identity is due to Property 4 of Lemma \ref{12.12.11.h}. Thus 2 follows. Recall the proof of Lemma \ref{13.6.9.1548h}. Let ${\bf i}=(i_1,\ldots, i_m)$ be a reduced word for $w_0$ such that $i_1=i$. Assume $p(x)$ is expressed by \eqref{8.15.3.0h}. Recall the positive coroots $\beta_k^{\bf i}$ in Lemma \ref{8.13.1.20h}. In particular $\beta_1^{\bf i}=\alpha_{i}^\vee$. By Lemma \ref{8.13.1.20h} and property 4 of Lemma \ref{12.12.11.h}, we have $$ \mu(x)=h_{{\rm A}_1, {\rm A}_n}\beta(u_{{\rm B}_1, {\rm B}_{n+1}}^{{\rm A}_n})=h\prod_{k=1}^m\beta_{k}^{\bf i}(b_k^{-1}). $$ Similarly, by \eqref{13.4.22.1550h}, we have $$\mu(e_i^c(x))=h\beta_1^{\bf i}(cb_1^{-1})\prod_{k=2}^m\beta_{k}^{\bf i}(b_k^{-1})=\alpha_i^\vee(c)\mu(x). $$ Thus 1 follows. By \eqref{13.4.22.1550h} and definitions of the functions $\varepsilon_i$, $\varphi_i$, ${\cal W}$, we get 3 and 4. \end{proof} \begin{lemma} \label{13.4.22.2051h} For two different $i,j \in I$, set $a_{ij}=\langle\alpha_i, \alpha_j^{\vee}\rangle$. 
We have the following relations: \begin{align} &e_i^{c_1}e_j^{c_2}=e_j^{c_2}e_i^{c_1}~~~\mbox{if }a_{ij}=0;\\ &e_i^{c_1}e_j^{c_1c_2}e_i^{c_2}=e_j^{c_2}e_{i}^{c_1c_2}e_{j}^{c_1}~~~\mbox{if }a_{ij}=a_{ji}=-1;\\ &e_i^{c_1}e_j^{c_1^2c_2}e_i^{c_1c_2}e_j^{c_2}=e_j^{c_2}e_i^{c_1c_2}e_{j}^{c_1^2c_2}e_{i}^{c_1}~~~\mbox{if } a_{ij}=-1, ~a_{ji}=-2;\\ &e_i^{c_1}e_j^{c_1^3c_2}e_i^{c_1^2c_2}e_j^{c_1^3c_2^2}e_i^{c_1c_2}e_j^{c_2}=e_j^{c_2}e_i^{c_1c_2}e_j^{c_1^3c_2^2}e_i^{c_1^2c_2}e_j^{c_1^3c_2}e_i^{c_1}~~~\mbox{if }a_{ij}=-1, ~a_{ji}=-3. \end{align} \end{lemma} \begin{proof} By the definition of the action $e_i^{\cdot}$, it is enough to prove the case when $n=2$, i.e. ${\rm Conf}({\cal A}, {\cal A}, {\cal B})$. By \eqref{13.4.22.1550h}, we reduce the Lemma to the case when ${\rm G}$ is of rank 2. The first identity is clear. For the second identity, one can check the ${\rm G}={\rm PGL}_3$ case directly. The third and the fourth identities can be reduced to the simply-laced case by ``folding''. See \cite[Section 5.2]{BK1} for details. \end{proof} \end{proof} \begin{theorem} \label{8.18h.con.pro}Let $a\in {\rm Conf}^*({\cal A}^{m+1},{\cal B})$ and let $b\in {\rm Conf}^*({\cal A}^{n+1},{\cal B})$. Recall the convolution product $*$ from Section \ref{sec7.1h}. The following identities hold: \begin{itemize} \item[1.] $p(a*b)=p(a)p(b)$, $\mu(a*b)=\mu(a)\mu(b)$. \item[2.] ${\cal W}(a*b)={\cal W}(a)+{\cal W}(b)$. \item[3.] $\varphi_i(a*b)=\frac{\varphi_i(a)\varphi_i(b)}{\varepsilon_i(a)+\varphi_i(b)}$, $\varepsilon_i(a*b)=\frac{\varepsilon_i(a)\varepsilon_i(b)}{\varepsilon_i(a)+\varphi_i(b)}$. \item[4.] $e_i^c(a*b)=e_i^{c_1}(a)*e_i^{c_2}(b)$, where $c_1=\frac{\varepsilon_i(a)+c\varphi_i(b)}{\varepsilon_i(a)+\varphi_i(b)}$, $c_2=\frac{\varepsilon_i(a)+\varphi_i(b)}{c^{-1}\varepsilon_i(a)+\varphi_i(b)}$. \end{itemize} \end{theorem} \begin{proof} 1-2. These follow from Lemma \ref{LEMMA.4.2}. \vskip 2mm 3. We prove the second formula. The first one follows similarly.
By Figure \ref{convmap}, it suffices to prove the case when $a=({\rm A}_1, {\rm A}_2, {\rm B}_4), b=({\rm A}_2, {\rm A}_3, {\rm B}_4)\in {\rm Conf}({\cal A}^2,{\cal B})$. Then $a*b=({\rm A}_1,{\rm A}_2,{\rm A}_3,{\rm B}_4)$. Pick ${\rm A}_4\in{\cal A}$ such that $\pi({\rm A}_4)={\rm B}_4$. By \eqref{13.6.9.2059h}, $\alpha_i(h_{{\rm A}_2,{\rm A}_4})={\alpha_i(h_{{\rm A}_3,{\rm A}_4})\varphi_i(b)}/{\varepsilon_i(b)}.$ By \eqref{8.13.4a}, we have $$ \chi_{i^*}(u_{{\rm B}_3,{\rm B}_2}^{{\rm A}_4})=\frac{\alpha_i(h_{{\rm A}_3,{\rm A}_4})}{\varepsilon_i(b)},~~ \chi_{i^*}(u_{{\rm B}_2,{\rm B}_1}^{{\rm A}_4})=\frac{\alpha_i(h_{{\rm A}_2,{\rm A}_4})}{\varepsilon_i(a)}=\frac{\alpha_i(h_{{\rm A}_3,{\rm A}_4})\varphi_i(b)}{\varepsilon_i(a)\varepsilon_i(b)},~~ \chi_{i^*}(u_{{\rm B}_3,{\rm B}_1}^{{\rm A}_4})=\frac{\alpha_i(h_{{\rm A}_3,{\rm A}_4})}{\varepsilon_i(a*b)}. $$ Since $\chi_{i^*}(u_{{\rm B}_3,{\rm B}_1}^{{\rm A}_4})=\chi_{i^*}(u_{{\rm B}_3,{\rm B}_2}^{{\rm A}_4})+\chi_{i^*}(u_{{\rm B}_2,{\rm B}_1}^{{\rm A}_4})$, the formula follows. \vskip 2mm 4. It suffices to prove the case when $a=({\rm A}_1, {\rm A}_2, {\rm B}_4), b=({\rm A}_2, {\rm A}_3, {\rm B}_4)$. Let $e_i^c(a*b)=({\rm A}_1,{\rm A}_2,{\rm A}_3,{{\rm B}}_4')$. Recall the definition of $e_i$ by the cross-ratio in Section \ref{sec7.01h}. Let ${\rm P}\in {\cal P}_i $ be the parabolic subgroup containing ${\rm B}_4$ and ${\rm B}_4'$. Let ${\rm B}_1',{\rm B}_2', {\rm B}_3'\in {\cal B}_{\rm P}$ be such that $$ {\rm B}_4\stackrel{s_i}{\longrightarrow}{{\rm B}_k'}\stackrel{s_iw_0}{\longrightarrow}\pi({\rm A}_k),~~~~k=1,2,3. $$ We have \begin{equation} \label{13.4.22.2139h} c=r({\rm B}_4, {\rm B}_4'; {\rm B}_1', {\rm B}_3' )=r({\rm B}_4, {\rm B}_4'; {\rm B}_1', {\rm B}_2' )r({\rm B}_4, {\rm B}_4'; {\rm B}_2', {\rm B}_3' )=c_1c_2.
\end{equation} Note that \begin{equation} \label{13.4.22.2140h} \varepsilon_i(a)+\varphi_i(b)=\chi(u_{{\rm B}_1', {\rm B}_4}^{{\rm A}_2})+\chi(u_{{\rm B}_4, {\rm B}_3'}^{{\rm A}_2})=\chi(u_{{\rm B}_1', {\rm B}_3'}^{{\rm A}_2})= \chi(u_{{\rm B}_1', {\rm B}_4'}^{{\rm A}_2})+\chi(u_{{\rm B}_4', {\rm B}_3'}^{{\rm A}_2})=c_1^{-1}\varepsilon_i(a)+c_2\varphi_i(b). \end{equation} Combining \eqref{13.4.22.2139h} and \eqref{13.4.22.2140h}, we get 4. \end{proof} {\bf Remark.} This Theorem recovers the properties in \cite[Lemma 3.9]{BK1}. It is analogous to the tensor product of Kashiwara's crystals. \begin{figure}[ht] \epsfxsize350pt \centerline{\epsfbox{convmap.eps}} \caption{Convolution products of configurations.} \label{convmap} \end{figure} \subsection{A simple illustration of the tensor product of crystals} Recall the subsets ${\bf B}_{\underline{\lambda}}^{\mu}$ and ${\bf C}_{\underline{\lambda}}^{\nu}$. We tropicalize the map $$ c_{1,n+1}: {\rm Conf}({\cal A}^{n+1}, {\cal B})\longrightarrow {\rm Conf}_{n+1}({\cal A}) \times {\rm Conf}({\cal A}^2,{\cal B}),$$ $$({\rm A}_1,\ldots,{\rm A}_{n+1},{\rm B}_{n+2})\longmapsto ({\rm A}_1,\ldots, {\rm A}_{n+1})\times ({\rm A}_1,{\rm A}_{n+1},{\rm B}_{n+2}). $$ It provides a canonical decomposition \begin{equation} \label{12.12.18.3h18} {\bf B}_{\underline{\lambda}}^{\mu}=\bigsqcup_{\nu} {\bf C}_{\underline{\lambda}}^{\nu}\times {\bf B}_{\nu}^{\mu}. \end{equation} Recall the decomposition \eqref{13.2.1.7.01h}. As illustrated by Figure \ref{pd}, we get \begin{theorem} \label{12.18.thm8.9} There is a canonical bijection $$ \bigsqcup_{\mu_1+\ldots+\mu_{n}=\mu}{\bf B}_{\lambda_1}^{\mu_1} \times \ldots \times {\bf B}_{\lambda_{n}}^{\mu_{n}}=\bigsqcup_{\nu} {\bf C}_{\lambda_1,\ldots, \lambda_n}^{\nu}\times {\bf B}_{\nu}^{\mu}. $$ \end{theorem} We show that ${\bf B}_{\lambda}^{\mu}$ parametrizes a crystal basis. Figure \ref{pd} illustrates the tensor product of crystals. When $n=2$, it recovers \cite[Theorem 3.2]{BG}.
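On the level of cardinalities, Theorem \ref{12.18.thm8.9} expresses the familiar identity between weight multiplicities of a tensor product computed directly and via its decomposition into irreducibles. The following script (a numerical illustration only, not part of the text; it uses the standard ${\rm SL}_2$ weight and Clebsch-Gordan multiplicities, with weights identified with integers) checks this counting identity:

```python
# SL2 weight multiplicities: b(lam, mu) = dim of the mu-weight space of V_lam
def b(lam, mu):
    return 1 if abs(mu) <= lam and (lam - mu) % 2 == 0 else 0

# SL2 tensor product multiplicities (Clebsch-Gordan rule)
def c(l1, l2, nu):
    return 1 if abs(l1 - l2) <= nu <= l1 + l2 and (l1 + l2 - nu) % 2 == 0 else 0

# both sides of the bijection have the same cardinality:
# sum over mu_1 + mu_2 = mu of b * b   equals   sum over nu of c * b
for l1 in range(6):
    for l2 in range(6):
        for mu in range(-l1 - l2, l1 + l2 + 1):
            lhs = sum(b(l1, m1) * b(l2, mu - m1) for m1 in range(-l1, l1 + 1))
            rhs = sum(c(l1, l2, nu) * b(nu, mu) for nu in range(l1 + l2 + 1))
            assert lhs == rhs
```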
\begin{figure}[ht] \epsfxsize400pt \centerline{\epsfbox{pd.eps}} \caption{Tensor product structure of the crystal basis} \label{pd} \end{figure} \subsubsection{Landau-Ginzburg mirror of a simple split group ${\rm G}$} In this Section we interpret a split simple group ${\rm G}$ as a configuration space and, using our standard toolbox, deduce its Landau-Ginzburg mirror from Conjecture \ref{MIRRORDUALa}. The companion conjecture asserts that the mirror of the maximal double Bruhat cell for ${\rm G}$ is the maximal double Bruhat cell for ${\rm G}^L$. \vskip 3mm Denote by ${\rm Conf}^\times({\cal B}, {\cal A}, {\cal B}, {\cal A})$ the space parametrising configurations $({\rm B}_1, {\rm A}_2, {\rm B}_3, {\rm A}_4)$ where all four consecutive pairs are generic. There is a potential given by the sum of the potentials at the ${\rm A}$-vertices: $$ {\cal W}_{2,4}({\rm B}_1, {\rm A}_2, {\rm B}_3, {\rm A}_4):= \chi_{{\rm A}_2}({\rm B}_1, {\rm A}_2, {\rm B}_3) + \chi_{{\rm A}_4}({\rm B}_3, {\rm A}_4, {\rm B}_1). $$ The space with potential is illustrated on the left of Fig \ref{untitle}. Let us describe its mirror. \epsfxsize250pt \begin{figure}[ht] \centerline{\epsfbox{fig8.eps}} \caption{The Landau-Ginzburg model (left) dual to ${{\rm G}^L}$ (right).} \label{untitle} \end{figure} Recall the isomorphism $ \alpha: {\rm Conf}^\times({\cal A}, {\cal A}) \longrightarrow {\rm H}. $ Consider the moduli space of configurations \begin{equation} \label{G} ({\rm A}_1, {\rm A}_2, {\rm A}_3, {\rm A}_4) \in {\rm Conf}_4({\cal A}_L)~|~ \mbox{$({\rm A}_1, {\rm A}_2)$, $({\rm A}_3, {\rm A}_4)$ are generic}; ~\alpha({\rm A}_1, {\rm A}_2) = \alpha({\rm A}_3, {\rm A}_4) =e. \end{equation} The picture on the right of Fig \ref{untitle} illustrates this moduli space. \begin{lemma} The moduli space (\ref{G}) is isomorphic to the group ${\rm G}^L$. \end{lemma} \begin{proof} Pick a generic pair $\{{\rm A}_1, {\rm A}_2\}$ with $\alpha({\rm A}_1, {\rm A}_2)=e$.
Then for each ${\rm G}^L$-orbit in (\ref{G}) there is a unique representative $\{{\rm A}_1, {\rm A}_2, {\rm A}_3, {\rm A}_4\}$ where $\{{\rm A}_1, {\rm A}_2\}$ is the chosen pair. There is a unique $g \in {\rm G}^L$ such that $ g\{{\rm A}_1, {\rm A}_2\} = \{{\rm A}_3, {\rm A}_4\}. $ The map $({\rm A}_1, {\rm A}_2, {\rm A}_3, {\rm A}_4) \to g$ provides the isomorphism. \end{proof} \begin{conjecture} \label{ConG} The mirror to a split semisimple algebraic group ${\rm G}^L$ over ${\mathbb Q}$ is the pair \begin{equation} \label{G4s} ({\rm Conf}^\times({\cal B}, {\cal A}, {\cal B}, {\cal A}), {\cal W}_{2,4}). \end{equation} \end{conjecture} \paragraph{Example.} Let ${\rm G}^L=PGL_2$, so ${\rm G}=SL_2$. Then ${\cal A}= {\Bbb A}^2 -\{0\}$, ${\cal B}= {\Bbb P}^1$, and \begin{equation} \label{10.16.14.1} {\rm Conf}^\times({\cal B}, {\cal A}, {\cal B}, {\cal A}) = \{(L_1, v_2, L_3, v_4)\}/SL_2. \end{equation} Here $L_1, L_3$ are one-dimensional subspaces of a two-dimensional vector space $V_2$, and $v_2, v_4$ are non-zero vectors in $V_2$. The pairs $(L_1, v_2)$, $(v_2, L_3)$, $(L_3, v_4)$, $(v_4, L_1)$ are generic, i.e. the corresponding pairs of lines are distinct. Pick non-zero vectors $l_1\in L_1$ and $l_3\in L_3$. Then $$ {\cal W}_{2,4} = \frac{\Delta(l_1, l_3)}{\Delta(l_1, v_2)\Delta(v_2, l_3)} + \frac{\Delta(l_1, l_3)}{\Delta(l_3, v_4)\Delta(l_1, v_4)}. $$ It is a regular function on (\ref{10.16.14.1}), independent of the choice of vectors $l_1, l_3$. To calculate it, set \begin{equation} l_1 = (1,0), ~~~~ v_2 = (x,1/p), ~~~~ l_3 = (1,y/p), ~~~~ v_4 = (0,1). \label{10.19.14h} \end{equation} Then $$ {\rm Conf}^\times({\cal B}, {\cal A}, {\cal B}, {\cal A}) = \{(x, y, p) \in {\Bbb A}^1 \times {\Bbb A}^1 \times {\Bbb G}_m - (xy-1= 0)\}. $$ \begin{equation} \label{10.19.14.1h} {\cal W}_{2,4} = \frac{y/p}{1/p\cdot (xy/p-1/p)} + \frac{y/p}{1\cdot 1} = \frac{yp}{xy-1} +\frac{y}{p}.
\end{equation} The case ${\rm G}=PGL_2$, ${\rm G}^L=SL_2$ is similar, except that now ${\cal A}_{PGL_2} = ({\Bbb A}^2-\{0\})/\pm 1$. \vskip 3mm Let us explain how this conjecture can be deduced from our general conjecture. \epsfxsize250pt \begin{figure}[ht] \centerline{\epsfbox{fig9.eps}} \caption{Duality between configurations of decorated flags for ${\rm G}$ and ${\rm G}^L$.} \label{dual2} \end{figure} \paragraph{Step 1.} Conjecture \ref{MIRRORDUALa} gives us the mirror duality, illustrated on Fig \ref{dual2}: $$ ({\rm Conf}^\times_4({\cal A}), {\cal W}_{1,2,3,4})\leftrightarrow {\rm Conf}_4({\cal A}_L). $$ \paragraph{Step 2.} We alter the pair $({\rm Conf}^\times_4({\cal A}), {\cal W}_{1,2,3,4})$ by removing the potentials at the vertices ${\rm A}_1$ and ${\rm A}_3$. This reduces the potential ${\cal W}_{1,2,3,4}$ to a new potential: $$ {\cal W}_{2,4}({\rm A}_1, {\rm A}_2, {\rm A}_3, {\rm A}_4): = \chi_{{\rm A}_2}({\rm B}_1, {\rm A}_2, {\rm B}_3) + \chi_{{\rm A}_4}({\rm B}_3, {\rm A}_4, {\rm B}_1). $$ In the dual picture this amounts to removing two divisors from ${\rm Conf}_4({\cal A}_L)$, illustrated by two punctured edges on the right of Fig \ref{dual3}, dual to the vertices ${\rm A}_1$ and ${\rm A}_3$ on the left. Precisely, we introduce the subspace $\widetilde {\rm Conf}_4({\cal A}_L)$ where the pairs of decorated flags at the punctured sides are generic. The obtained dual pair is illustrated on Fig \ref{dual3}. \epsfxsize250pt \begin{figure}[ht] \centerline{\epsfbox{fig10.eps}} \caption{Dual pair of spaces obtained in Step 2.} \label{dual3} \end{figure} In particular there is a projection provided by the two punctured sides: \begin{equation} \label{G2} \widetilde {\rm Conf}_4({\cal A}_L) \longrightarrow {\rm H}_L^2.
\end{equation} \paragraph{Step 3.} The group ${\rm H}\times {\rm H}$ acts on ${\rm Conf}^\times_4({\cal A})$, preserving the potential ${\cal W}_{2,4}$, by $({\rm A}_1, {\rm A}_2, {\rm A}_3, {\rm A}_4) \longmapsto (h_1 \cdot {\rm A}_1, {\rm A}_2, h_2 \cdot {\rm A}_3, {\rm A}_4).$ The quotient is the space (\ref{G4s}): $$ ({\rm Conf}^\times_4({\cal A}), {\cal W}_{2,4})/ ({\rm H}\times {\rm H}) = ({\rm Conf}^\times({\cal B}, {\cal A}, {\cal B}, {\cal A}), {\cal W}_{2,4}). $$ \paragraph{Step 4.} The action of the group ${\rm H}\times {\rm H}$ is dual to the projection (\ref{G2}). The quotient by the ${\rm H}\times {\rm H}$-action is dual to the fiber over $e \in {\rm H}_L\times {\rm H}_L$. The fiber is just the space (\ref{G}). On the level of pictures, this is how we go from Fig \ref{dual3} to Fig \ref{untitle}. This way we arrive at Conjecture \ref{ConG}. \paragraph{Canonical basis motivation.} Let us explain how the positive integral tropical points of the space from Conjecture \ref{ConG} parametrise a canonical basis in ${\cal O}({\rm G}^L)$. One has $ {\cal O}({\rm G}^L) = \bigoplus_{\lambda\in {\rm P}^+}V_\lambda \otimes V_{\lambda}^*. $ \epsfxsize200pt \begin{figure}[ht] \centerline{\epsfbox{G2.eps}} \caption{The (tropicalised) Landau-Ginzburg model dual to ${\rm G}^L$ is obtained by gluing the two LG models dual to ${\cal A}_L$ along their ``vertical sides'', as shown on the left.} \label{figG2} \end{figure} Recall that $ {\cal O}({\cal A}_L) = \bigoplus_{\lambda\in {\rm P}^+}V_\lambda$. The decomposition of ${\cal O}({\cal A}_L)$ into irreducible ${\rm G}^L$-modules is provided by the ${\rm H}_L$-action on ${\cal A}_L$. According to our general picture, $$ {\cal A}_L = {\rm Conf}_{w_0}({\cal B}_L, {\cal A}_L, {\cal A}_L) ~~ \mbox{is mirror dual to} ~~({\rm Conf}^\times({\cal B}, {\cal A}, {\cal A}), {\cal W}_{2,3}).
$$ The canonical basis in $V_\lambda$ is parametrised by the fiber of the projection $ {\rm Conf}^+({\cal B}, {\cal A}, {\cal A})({\mathbb Z}^t) \longrightarrow {\rm P}^+ $ over $\lambda \in {\rm P}^+$. This projection is the tropicalisation of the positive rational map $ {\rm Conf}({\cal B}, {\cal A}, {\cal A}) \longrightarrow {\rm Conf}({\cal A}, {\cal A}). $ Therefore the tensor product of the canonical bases in $V_\lambda \otimes V_\lambda^*$ is parametrised by the fiber over $\lambda$ of the tropicalisation of the positive rational map $ {\rm Conf}({\cal B}, {\cal A}, {\cal B}, {\cal A}) \longrightarrow {\rm Conf}({\cal A}, {\cal A}). $ \begin{lemma} The space ${\rm Conf}^\times({\cal B},{\cal A}, {\cal B},{\cal A})$ is isomorphic to the open double Bruhat cell of ${\rm G}$. \end{lemma} \begin{proof} Note that ${\rm Conf}^\times({\cal B},{\cal A}, {\cal B},{\cal A})$ is isomorphic to the moduli space parametrizing the configurations $({\rm A}_1,{\rm A}_2, {\rm A}_3, {\rm A}_4)\in {\rm Conf}_4({\cal A})$ such that $\alpha({\rm A}_1, {\rm A}_2)=\alpha({\rm A}_4, {\rm A}_3)=e$ and each consecutive pair $({\rm A}_i, {\rm A}_{i+1})$ is generic. There is a unique element $g\in {\rm G}$ such that $ \{g\cdot{\rm A}_1, g\cdot{\rm A}_2\}= \{{\rm A}_4, {\rm A}_3\}. $ Let $\pi({\rm A}_1)= {\rm B}$ and $\pi({\rm A}_2)={\rm B}^-$. Then we have $$ \{{\rm A}_1, {\rm A}_4\}= \{{\rm A}_1, g\cdot {\rm A}_1\} \mbox{ is generic} ~~~ \Longleftrightarrow ~~~g\in {\rm B} w_0{\rm B}, $$ $$ \{{\rm A}_2, {\rm A}_3\}= \{{\rm A}_2, g\cdot {\rm A}_2\} \mbox{ is generic} ~~~\Longleftrightarrow ~~~g\in {\rm B}^- w_0{\rm B}^-. $$ So the space is isomorphic to the open double Bruhat cell ${\rm B} w_0{\rm B}\cap {\rm B}^- w_0{\rm B}^-$. \end{proof} \begin{conjecture} The open double Bruhat cell of ${\rm G}$ is mirror to the open double Bruhat cell of ${\rm G}^L$.
\begin{figure}[ht] \epsfxsize300pt \centerline{\epsfbox{conjecture11.eps}} \label{bruhat} \end{figure} \end{conjecture} Below we investigate the case when ${\rm G}={\rm SL}_2$. \paragraph{An example: the open double Bruhat cell of ${\rm SL}_2$.} It consists of elements $$ \begin{bmatrix} x & p \\ q & y \\ \end{bmatrix} \in {\rm SL}_2, \quad \quad xy-1=pq, ~~p, q \in {\Bbb G}_m. $$ Let $$ D=\{xy-1=0\}\subset {\Bbb A}^2, ~~~~X={\Bbb A}^2\backslash D. $$ The open double Bruhat cell of ${\rm SL}_2$ is isomorphic to $X\times {\Bbb G}_m$. It is a cluster ${\cal A}$-variety with two seeds $p \longleftarrow x \longrightarrow q$ and $p \longleftarrow y \longrightarrow q$. Here $p, q$ are the frozen variables. The seeds are related by the cluster transformation $$ x=\frac{1+pq}{y}. $$ Note that the coordinates here are compatible with \eqref{10.19.14h}. In particular, the potential function \eqref{10.19.14.1h} becomes $$ {\cal W}=\frac{y}{q}+\frac{y}{p}. $$ \paragraph{An example: the open double Bruhat cell of ${\rm PGL}_2$.} It consists of elements $$ \begin{bmatrix} px & 1 \\ p & y \\ \end{bmatrix} \in {\rm GL}_2, \quad \quad xy-1\neq 0,~~ p\in {\Bbb G}_m. $$ It is again isomorphic to $X\times {\Bbb G}_m$. So we expect that $X\times {\Bbb G}_m$ is mirror to itself. This open double Bruhat cell admits a cluster ${\cal X}$-structure. We set $$ x_1=px, \quad\quad x_2=xy-1, \quad \quad x_3=x. $$ It gives rise to a cluster ${\cal X}$-structure $ x_1\longleftarrow x_2 \longrightarrow x_3. $ Mutation at $x_2$ delivers $ x_1'\longleftarrow x_2' \longrightarrow x_3'. $ Here $$ x_1'=x_1(1+x_2)^{-1}=p/y, $$ $$ x_2'=x_2^{-1}=\frac{1}{xy-1}, $$ $$ x_3'=x_3(1+x_2)^{-1}=1/y. $$ The example $X={\Bbb A}^2\backslash D$ is considered by Auroux in Section 5 of \cite{Au1}. See also Section 2 of \cite{P}. Finally, let us recall that, in general, when ${\rm G}$ is simply connected, the open double Bruhat cell is a cluster ${\cal A}$-variety.
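The ${\rm SL}_2$ computations above can be verified with exact rational arithmetic. The script below (a numerical illustration only, not part of the text) checks, at sample values of $(x,y,p)$, the determinant formula for ${\cal W}_{2,4}$ in the coordinates \eqref{10.19.14h}, its rewriting as $y/q+y/p$ using $pq=xy-1$, and the cluster ${\cal X}$-mutation formulas:

```python
from fractions import Fraction

def det(u, v):
    # Delta(u, v): determinant of the 2x2 matrix with columns u, v
    return u[0] * v[1] - u[1] * v[0]

x, y, p = Fraction(2), Fraction(3), Fraction(5)

# the coordinates (10.19.14h)
l1, v2, l3, v4 = (1, 0), (x, 1 / p), (1, y / p), (0, 1)

W = det(l1, l3) / (det(l1, v2) * det(v2, l3)) \
    + det(l1, l3) / (det(l3, v4) * det(l1, v4))
assert W == y * p / (x * y - 1) + y / p        # formula (10.19.14.1h)

q = (x * y - 1) / p                            # the relation pq = xy - 1
assert W == y / q + y / p                      # potential on the SL2 cell

# cluster X-coordinates and the mutation at x2
x1, x2, x3 = p * x, x * y - 1, x
assert x1 / (1 + x2) == p / y                  # x1'
assert 1 / x2 == 1 / (x * y - 1)               # x2'
assert x3 / (1 + x2) == 1 / y                  # x3'
```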
The open double Bruhat cell of ${\rm G}^L$ is a cluster ${\cal X}$-variety. \subsubsection{Landau-Ginzburg mirror of ${\rm G}^n$} Since ${\rm G}^n$ is a semisimple group, the previous discussion applies. However our general approach leads to a slightly different mirror dual, which has an additional symmetry: the group ${\mathbb Z}/(n+1){\mathbb Z}$ acts naturally on each of the spaces. It starts from the dual pair $$ {\rm Conf}^\times_{2n+2}({\cal A}) ~~\mbox{mirror dual to}~~ {\rm Conf}^\times_{2n+2}({\cal A}_L). $$ Let ${\rm Conf}^\times_{2n+2}({\cal A}, {\cal B}, \ldots , {\cal A}, {\cal B})$ be the space parametrising configurations $({\rm A}_1, {\rm B}_2, {\rm A}_3, {\rm B}_4, \ldots , {\rm A}_{2n+1}, {\rm B}_{2n+2})$ such that any consecutive pair is generic. There is a potential $$ {\cal W}_{1,3, ..., 2n+1}({\rm A}_1, {\rm B}_2, \ldots , {\rm A}_{2n+1}, {\rm B}_{2n+2}):= \sum_{i=1, 3, ..., 2n+1}\chi_{{\rm A}_i}({\rm B}_{i-1}, {\rm A}_i, {\rm B}_{i+1}). $$ The dual space parametrises configurations \begin{equation} \label{Gcon2} ({\rm A}_1, \ldots , {\rm A}_{2n+2}) \in {\rm Conf}_{2n+2}({\cal A}_L)~~\mbox{such that $({\rm A}_{2k+1}, {\rm A}_{2k+2})$ are generic, and $\alpha({\rm A}_{2k+1}, {\rm A}_{2k+2})=e$}. \end{equation} The group ${\mathbb Z}/(n+1){\mathbb Z}$ acts by automorphisms of this pair of spaces. The dual pair is illustrated on Fig \ref{ababab}. \begin{lemma} The space (\ref{Gcon2}) is isomorphic to $({\rm G}^L)^n$. \end{lemma} \begin{proof} For any given collection $\{{\rm A}_1, \ldots , {\rm A}_{2n+2}\}$ representing a point in the moduli space (\ref{Gcon2}) there is a unique $g_k\in {\rm G}^L$ such that $\{{\rm A}_{2k+1}, {\rm A}_{2k+2}\} = g_k \{{\rm A}_{1}, {\rm A}_{2}\}$. So picking a representative with the first pair $\{{\rm A}_{1}, {\rm A}_{2}\}$ provided by a pinning in ${\rm G}^L$, we get an isomorphism with $({\rm G}^L)^n$.
\end{proof} \begin{conjecture} \label{ConGn} The mirror to $({\rm G}^L)^n$ is the pair \begin{equation} \label{G4sn} ({\rm Conf}^\times_{2n+2}({\cal A}, {\cal B}, \ldots , {\cal A}, {\cal B}), {\cal W}_{1,3, ..., 2n+1}). \end{equation} \end{conjecture} As in the $n=1$ case, Conjecture \ref{ConGn} can be deduced from Conjecture \ref{MIRRORDUALa} asserting that \begin{equation} \label{G4s1} ({\rm Conf}^\times_{2n+2}({\cal A}), {\cal W}_{1,2,..., 2n+2}) ~~\mbox{mirror dual to}~~ {\rm Conf}_{2n+2}({\cal A}_L). \end{equation} Indeed, starting with the duality (\ref{G4s1}), we turn off the potentials at the even vertices. Then the group ${\rm H}^{n+1}$ acts by automorphisms of the space with the new potential. The quotient is the pair (\ref{G4sn}). On the other hand, turning off the potentials at the even vertices amounts on the dual side to removing the divisors from ${\rm Conf}_{2n+2}({\cal A}_L)$ assigned to the sides of the $(2n+2)$-gon dual to those vertices. The obtained space is fibered over ${\rm H}_L^{n+1}$. The fiber over $e$ is the space (\ref{Gcon2}). \epsfxsize300pt \begin{figure}[ht] \centerline{\epsfbox{fig13Gn.eps}} \caption{Getting the mirror dual for ${\rm G}^n$ from configurations.} \label{ababab} \end{figure} \subsection{Mixed configurations and a generalization of Mirkovi\'{c}-Vilonen cycles} \label{sec2.3} In this Section we discuss several other examples. Each of them fits in the general scheme of Section \ref{sec1.2}. We show how to encode all the data in a polygon. \subsubsection{Mixed configurations and the map $\kappa$} \label{sec2.3.1} \begin{definition} \label{10.14.12.1a} i) Given a subset ${\rm I} \subset [1,n]$, the moduli space ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$ parametrizes configurations $(x_1, ..., x_n)$, where $x_i\in {\cal A}$ if $i\in {\rm I}$, otherwise $x_i\in {\cal B}$.
ii) Given subsets ${\rm J} \subset {\rm I} \subset [1, n]$, the moduli space ${\rm Conf}_{{\rm J} \subset {\rm I}}({\rm Gr}; {\cal A}, {\cal B})$ parametrizes configurations $(x_1, ..., x_n)$ where $$ x_i\in {\rm Gr}~~\mbox{ if}~~ i\in {\rm J},~~~~ x_i\in {\cal A}({\cal K})~~\mbox{ if}~~ i\in {\rm I} -{\rm J}, ~~~~ x_i\in {\cal B}({\cal K}) ~~\mbox{ otherwise}. $$ We set ${\rm Conf}_{\rm I}({\rm Gr}; {\cal B}):= {\rm Conf}_{{\rm I} \subset {\rm I}}({\rm Gr}; {\cal A}, {\cal B}).$ \end{definition} A positive structure on the space ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$ is defined in Section \ref{proofmth1}. This positive structure is invariant under a cyclic twisted shift. See Lemma \ref{13.1.10.1h} for the precise statement. \begin{definition} \label{aointegral} Let ${\rm J} \subset {\rm I} \subset [1, n]$. A configuration in ${\rm Conf}_{\rm I}({\cal A}; {\cal B})({\cal K})$ is called ${\cal O}$-integral relative to ${\rm J}$ if \begin{enumerate} \item For all $j\in {\rm J}$ and $k\not=j$, the pairs $({\rm A}_j, {\rm B}_k)$ are generic. Here ${\rm B}_k=\pi({\rm A}_k)$ if $k\in {\rm I}$. \item The lattices ${\rm L}_j:={\rm L}({\rm A}_j, {\rm B}_k)$ given by the above pairs depend only on $j$. \end{enumerate} Denote by ${\rm Conf}^{\cal O}_{{\rm J} \subset {\rm I}}({\cal A}; {\cal B})$ the moduli space of such configurations. \end{definition} By the very definition, there is a canonical map \begin{equation} \label{5.12.12.2} \kappa: {\rm Conf}^{\cal O}_{{\rm J} \subset {\rm I}}({\cal A}; {\cal B}) \longrightarrow {\rm Conf}_{{\rm J} \subset {\rm I}}({\rm Gr}; {\cal A}, {\cal B}). \end{equation} It assigns to ${\rm A}_j$ the lattice ${\rm L}_j$ when $j\in {\rm J}$ and keeps the rest intact. \vskip 2mm Recall $u_j\in {\rm U}_{{\rm A}_j}$ in (\ref{7.20.9.8}). The potential ${\cal W}_{\rm J}$ on ${\rm Conf}_{{\rm I}}({\cal A}; {\cal B})$ is the function \begin{equation} \label{13.3.1.527h} {\cal W}_{\rm J}:=\sum_{j\in {\rm J}}\chi_{{\rm A}_j}(u_j).
\end{equation} Positivity of ${\cal W}_{\rm J}$ is proved in Section \ref{sec6.4}. The next Theorem generalizes Theorem \ref{8.27.17.08hh}; its proof is the same. See Section \ref{sec6.4}. \begin{theorem} \label{13.2.22.2226h} Let $l\in {\rm Conf}_{\rm I}({\cal A}; {\cal B})({\mathbb Z}^t)$. A configuration in ${\cal C}_l^\circ$ is ${\cal O}$-integral relative to ${\rm J}$ if and only if ${\cal W}_{\rm J}^t(l)\geq 0$. \end{theorem} Denote by ${\rm Conf}^+_{{\rm J}\subset {\rm I}}({\cal A}; {\cal B})({\mathbb Z}^t)$ the set of points $l\in{\rm Conf}_{\rm I}({\cal A};{\cal B})({\mathbb Z}^t)$ such that ${\cal W}_{\rm J}^t(l)\geq 0$. Set \begin{equation} \label{MVGCyyyxs} {\cal M}^\circ_l := \kappa({\cal C}^\circ_l)\subset {\rm Conf}_{{\rm J}\subset {\rm I}}({\rm Gr};{\cal A}, {\cal B}), ~~~~ l\in {\rm Conf}^+_{{\rm J} \subset {\rm I}}({\cal A}; {\cal B})({\mathbb Z}^t). \end{equation} These cycles generalize the Mirkovi\'{c}-Vilonen cycles, as we will see in Section \ref{mdgmc}. \subsubsection{Basic invariants} \label{sec2.3.2} Recall the isomorphism (\ref{conf2A}): \begin{equation} \label{conf2A2} \alpha: {\rm Conf}^*({\cal A}, {\cal A}) \stackrel{\sim}{\longrightarrow} {\rm H}, ~~~~\alpha({\rm A}_1\cdot h_1, {\rm A}_2\cdot h_2) = h_1^{-1}w_0(h_2) \alpha({\rm A}_1, {\rm A}_2). \end{equation} Given a generic triple $\{{\rm A}_1, {\rm B}_2, {\rm A}_3\}$, we choose a decorated flag ${\rm A}_2$ over the flag ${\rm B}_2$, and set $$ \mu({\rm A}_1, {\rm B}_2, {\rm A}_3):= \alpha({\rm A}_1, {\rm A}_2)\alpha({\rm A}_3, {\rm A}_2)^{-1}\in {\rm H}. $$ Due to (\ref{conf2A2}), it does not depend on the choice of ${\rm A}_2$. We illustrate the invariant $\mu$ by a pair of red dashed arrows on the left in Fig \ref{tpm14}.
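Indeed, the independence of the choice of ${\rm A}_2$ is a one-line check: replacing ${\rm A}_2$ by ${\rm A}_2\cdot h$ rescales both factors by $w_0(h)$ thanks to \eqref{conf2A2}, and since ${\rm H}$ is abelian, $$ \alpha({\rm A}_1, {\rm A}_2\cdot h)\,\alpha({\rm A}_3, {\rm A}_2\cdot h)^{-1} = w_0(h)\alpha({\rm A}_1, {\rm A}_2)\,\big(w_0(h)\alpha({\rm A}_3, {\rm A}_2)\big)^{-1} = \alpha({\rm A}_1, {\rm A}_2)\,\alpha({\rm A}_3, {\rm A}_2)^{-1}. $$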
Given a generic configuration $({\rm A}_1, {\rm B}_2, {\rm B}_3, {\rm A}_4)$, see the right of Fig \ref{tpm14}, choose decorated flags ${\rm A}_2, {\rm A}_3$ over the flags ${\rm B}_2, {\rm B}_3$, and set $$ \mu({\rm A}_1, {\rm B}_2, {\rm B}_3, {\rm A}_4):= \alpha({\rm A}_2, {\rm A}_1)\alpha({\rm A}_2, {\rm A}_3)^{-1}\alpha({\rm A}_4, {\rm A}_3)\in {\rm H}. $$ These invariants coincide with the similar ${\rm H}$-valued $\mu$-invariants from Section \ref{sec1.4}. \begin{figure}[ht] \centerline{\epsfbox{tpm14.eps}} \caption{The invariants $\mu({\rm A}_1, {\rm B}_2, {\rm A}_3)\in {\rm H}$ and $\mu({\rm A}_1, {\rm B}_2, {\rm B}_3, {\rm A}_4)\in {\rm H}$.} \label{tpm14} \end{figure} \vskip 3mm There are canonical isomorphisms: \begin{align}\label{caniso} &\pi_{\rm Gr}: {\rm Conf}({\rm Gr}, {\rm Gr}) \stackrel{=}{\longrightarrow}{\rm P}^+,\nonumber\\ &\alpha_{\rm Gr}: {\rm Conf}({\cal A}, {\rm Gr}) \stackrel{=}{\longrightarrow} {\rm P},\nonumber\\ &\alpha_{\rm Gr}': {\rm Conf}({\rm Gr}, {\cal A})\stackrel{=}{\longrightarrow} {\rm P}. \end{align} The first map uses the decomposition ${\rm G}({\cal K}) = {\rm G}({\cal O})\cdot {\rm H}({\cal K})\cdot {\rm G}({\cal O})$: $$ {\rm Conf}({\rm Gr}, {\rm Gr}) = {\rm G}({\cal O})\backslash {\rm G}({\cal K})/{\rm G}({\cal O}) = W\backslash{\rm H}({\cal K})/{\rm H}({\cal O}) = {\rm P}^{+}. $$ The second map uses the Iwasawa decomposition ${\rm G}({\cal K}) = {\rm U}({\cal K})\cdot {\rm H}({\cal K})\cdot {\rm G}({\cal O})$: $$ {\rm Conf}({\cal A}, {\rm Gr}) = {\rm G}({\cal K})\backslash \Bigl({\rm G}({\cal K})/{{\rm U}}({\cal K}) \times {\rm G}({\cal K})/{\rm G}({\cal O})\Bigr) = {\rm U}({\cal K})\backslash {\rm G}({\cal K})/{\rm G}({\cal O}) = {\rm H}({\cal K})/{\rm H}({\cal O}) = {\rm P}. $$ The third map is a cousin of the second one: $$ \alpha_{\rm Gr}'({\rm L}, {\rm A}):=- w_0\big(\alpha_{\rm Gr}({\rm A}, {\rm L})\big).
$$ {\bf Remark.} These isomorphisms parametrize the ${\rm G}({\cal O})$-, ${\rm U}({\cal K})$- and ${\rm U}^-({\cal K})$-orbits on ${\rm Gr}$. Each coweight $\lambda \in {\rm P}={\rm H}({\mathbb Z}^t)={\rm H}({\cal K})/{\rm H}({\cal O})$ corresponds to an element $t^\lambda$ of ${\rm Gr}$. Then \begin{align} \label{13.3.13.246h} &\pi_{\rm Gr}([1], g\cdot t^\lambda)=\lambda, ~~~\forall g\in {\rm G}({\cal O}); \nonumber\\ &\alpha_{\rm Gr}({\rm U}, u\cdot t^{\lambda})=\lambda,~~\forall u\in {{\rm U}}({\cal K}); \nonumber\\ &\alpha_{\rm Gr}'(v\cdot t^{-\lambda}, \overline{w}_0\cdot {\rm U})=\lambda,~~~\forall v\in {\rm U}^{-}({\cal K}). \end{align} We define Grassmannian versions of the $\mu$-invariants: $$ \mu_{\rm Gr}: {\rm Conf}({\rm Gr}, {\cal B}, {\rm Gr}) \longrightarrow {\rm P}, ~~~~ \mu_{\rm Gr}: {\rm Conf}({\rm Gr}, {\cal B}, {\cal B}, {\rm Gr}) \longrightarrow {\rm P}, $$ $$ \mu_{\rm Gr}({\rm L}_1, {\rm B}_2, {\rm L}_3):= \alpha_{\rm Gr}'({\rm L}_1, {\rm A}_2)- \alpha_{\rm Gr}'({\rm L}_3, {\rm A}_2)\in {\rm P}. $$ $$ \mu_{\rm Gr}({\rm L}_1, {\rm B}_2, {\rm B}_3, {\rm L}_4):= \alpha_{\rm Gr}({\rm A}_2, {\rm L}_1) - {\rm val}\circ \alpha({\rm A}_2, {\rm A}_3) +\alpha_{\rm Gr}'({\rm L}_4, {\rm A}_3)\in {\rm P}. $$ Let ${\rm pr}:{\rm B}^-({\cal K})\rightarrow {\rm H}({\cal K})\rightarrow {\rm P}$ be the composite of the standard projections. The first of these maps has an equivalent description: $$\mu_{\rm Gr} ([b_1],{\rm B}^{-}, [b_2])={\rm pr}(b_1^{-1}b_2),~~~b_1, b_2 \in {\rm B}^{-}({\cal K}). $$ \vskip 3mm More generally, take a chain of flags starting and ending with a decorated flag, pick an alternating sequence of arrows, and write an alternating product of the $\alpha$-invariants.
We get regular maps \begin{equation} \label{invmu1} \mu: {\rm Conf}^*({\cal A}, {\cal B}^{2n+1}, {\cal A}) \longrightarrow {\rm H}, \end{equation} $$ ({\rm A}_1, {\rm B}_2, ..., {\rm B}_{2n+2}, {\rm A}_{2n+3}) \longmapsto \frac{\alpha({\rm A}_1, {\rm A}_2)}{\alpha({\rm A}_3, {\rm A}_2)} \frac{\alpha({\rm A}_3, {\rm A}_4)}{\alpha({\rm A}_5, {\rm A}_4)} \ldots \frac{\alpha({\rm A}_{2n+1}, {\rm A}_{2n+2})}{\alpha({\rm A}_{2n+3}, {\rm A}_{2n+2})} . $$ \begin{equation} \label{invmu2} \mu: {\rm Conf}^*({\cal A}, {\cal B}^{2n}, {\cal A}) \longrightarrow {\rm H}, \end{equation} $$ ({\rm A}_1, {\rm B}_2, ..., {\rm B}_{2n+1}, {\rm A}_{2n+2}) \longmapsto \frac{\alpha({\rm A}_2, {\rm A}_1)}{\alpha({\rm A}_2, {\rm A}_3)} \frac{\alpha({\rm A}_4, {\rm A}_3)}{\alpha({\rm A}_4, {\rm A}_5)}\ldots \alpha({\rm A}_{2n+2}, {\rm A}_{2n+1}). $$ Given a cyclic collection of an even number of flags, there is an invariant which for $n=2$ and ${\rm G}={\rm SL}_2$ recovers the cross-ratio: $$ {\rm Conf}^*_{2n}({\cal B}) \longrightarrow {\rm H}, ~~~~({\rm B}_1, ..., {\rm B}_{2n}) \longmapsto \frac{\alpha({\rm A}_1, {\rm A}_2)}{\alpha({\rm A}_3, {\rm A}_2)}\frac{\alpha({\rm A}_3, {\rm A}_4)}{\alpha({\rm A}_5, {\rm A}_4)} \ldots \frac{\alpha({\rm A}_{2n-1}, {\rm A}_{2n})}{\alpha({\rm A}_1, {\rm A}_{2n})} . $$ Here ${\rm A}_i$ is any decorated flag over ${\rm B}_i$; the alternating ratios do not depend on the choices. One gets Grassmannian versions by replacing ${\cal A}$ by ${\rm Gr}$, and $\alpha$ by one of the maps (\ref{caniso}). \vskip 3mm \begin{figure}[ht] \centerline{\epsfbox{tpm11.eps}} \caption{Generalized MV cycles ${\cal M}_{l}\subset {\rm Gr}^3 = {\rm Conf}_{w_0}({\cal A}, {\rm Gr}^3, {\cal B})$.} \label{tpm11} \end{figure} These invariants provide decompositions for both spaces in \eqref{MVGCyyyxs}. Let us encode all the data in a polygon, as illustrated on Fig \ref{tpm11}. Let $l\in {\rm Conf}^+_{{\rm J} \subset {\rm I}}({\cal A}; {\cal B})({\mathbb Z}^t)$. We show on the left an element of ${\cal C}_{l}^{\circ}$. Flags or decorated flags are assigned to the vertices of a convex polygon. 
The vertices labeled by ${\rm J}$ are boldface. Note that although we order the vertices by choosing a reference vertex, due to the twisted cyclic invariance the story does not depend on its choice. The solid blue sides are labeled by a pair of decorated flags. There is an invariant $\lambda_E \in {\rm P}$ assigned to such a side $E$. It is provided by the tropicalization of the isomorphism (\ref{conf2A2}) evaluated on ${l}$. The collection of dashed edges determines an invariant $\mu \in {\rm P}$. Recall the cone ${\rm R}^+ \subset {\rm P}$ generated by the positive coroots. The ${\cal O}$-integrality imposes restrictions on the basic invariants, summarized in Lemma \ref{restr}, and illustrated on Fig \ref{tpm15}. \begin{lemma} \label{restr} i) Let $({\rm A}_1, {\rm A}_2, {\rm B}_3) \in {\cal C}_l^{\circ}\subset{\rm Conf}^{\cal O}({\cal A}, {\cal A}, {\cal B})$. Then ${\rm val}\circ \alpha({\rm A}_1, {\rm A}_2) \in {\rm P}^+. $ ii) Let $({\rm B}_1, {\rm A}_2, {\rm B}_3) \in {\cal C}_l^{\circ} \subset {\rm Conf}^{\cal O}({\cal B}, {\rm A}, {\cal B})$. Then ${\rm val}\circ \mu({\rm A}_2, {\rm B}_1, {\rm B}_3, {\rm A}_2) \in {\rm R}^+. $ \end{lemma} \begin{proof} Here i) follows from Lemma \ref{9.21.17.56h}, and ii) follows from Lemmas \ref{8.13.1.20h} \& \ref{12.12.11.h}(4). \end{proof} \begin{figure}[ht]\centerline{\epsfbox{tpm15.eps}} \caption{One has $\lambda \in {\rm P}^+$ and $\mu \in {\rm R}^+$.}\label{tpm15} \end{figure} Applying the map $\kappa$, we replace the decorated flag at each boldface vertex by the corresponding lattice. The other vertices remain intact. We use the notation $\underline{\cal A}$ for the decorated flags which do not contribute the character $\chi_{{\rm A}}$ to the potential -- they are assigned to the unmarked vertices. 
For example, we associate to the polygons on Fig \ref{tpm11} the following maps $$ \kappa: {\cal C}_{l}^\circ \longrightarrow {\rm Conf}({\cal A}, {\rm Gr}^3, {\cal B}), ~~l\in {\rm Conf}^+(\underline{\cal A}, {\cal A}^3, {\cal B})({\mathbb Z}^t). $$ $$ \pi: {\rm Conf}^*(\underline{{\cal A}}, {\cal A}^3, {\cal B}) \longrightarrow {\rm H}^3, ~~~ \mu: {\rm Conf}^*(\underline{{\cal A}}, {\cal A}^3, {\cal B}) \longrightarrow {\rm H}, $$ \begin{equation} \label{restr1} (\pi^t, \mu^t): {\rm Conf}^{+}(\underline{{\cal A}}, {\cal A}^3, {\cal B})({\mathbb Z}^t) \longrightarrow {\rm P}\times ({\rm P}^+)^2 \times {\rm P}, \end{equation} $$ (\pi_{\rm Gr}, \mu_{\rm Gr}): {\rm Conf}(\underline{{\cal A}}, {\rm Gr}^3, {\cal B}) \longrightarrow {\rm P}\times ({\rm P}^+)^2\times {\rm P}. $$ It is easy to check that the targets of the invariants assigned to configurations of flags are the same as the targets of their Grassmannian counterparts. \subsubsection{Generalized Mirkovi\'{c}-Vilonen cycles} \label{mdgmc} Let us recall the standard definition of {\it Mirkovi\'{c}-Vilonen cycles} following \cite{MV}, \cite{A}, \cite{K}. For $w\in W$, let ${\rm U}_{w}=w{\rm U}w^{-1}$. For $w\in W$ and $\mu\in {\rm P}$ define the {\it semi-infinite cells} \begin{equation} {\rm S}_{w}^{\mu}:={\rm U}_w({\cal K})t^{\mu}. \end{equation} Let $\lambda, \mu \in {\rm P}$. The closure $\overline{{\rm S}_{e}^{\lambda}\cap {\rm S}_{w_0}^{\mu}}$ is non-empty if and only if $\lambda-\mu\in {\rm R}^+$. In that case, it is also well known that $\overline{{\rm S}_{e}^{\lambda}\cap {\rm S}_{w_0}^{\mu}}$ has pure dimension ${\rm ht}(\lambda-\mu):=\langle\rho, \lambda-\mu\rangle$. \begin{definition} A component of $\overline{{\rm S}_{e}^{\lambda}\cap {\rm S}_{w_0}^{\mu}} \subset {\rm Gr}$ is called an {\it MV cycle} of coweight $(\lambda,\mu)$. 
\end{definition} Since ${\rm H}$ normalizes ${\rm U}_w$, for each $h\in {\rm H}({\cal K})$ such that $[h]=t^{\nu}$, we have $h \cdot {\rm S}_{w}^{\mu}={\rm S}_{w}^{\mu+\nu}$. Therefore if $V$ is an MV cycle of coweight $(\lambda,\mu)$, then $h\cdot V$ is an MV cycle of coweight $(\lambda+\nu,\mu+\nu)$. The ${\rm H}({\cal K})$-orbit of an MV cycle of coweight $(\lambda, \mu)$ is called a {\it stable MV cycle} of coweight $\lambda-\mu$. \vskip 2mm Let $\underline{\lambda}=(\lambda_1,\ldots, \lambda_n)\in ({\rm P}^+)^n$. Consider the convolution variety \begin{equation} \label{con.var.247} {\rm Gr}_{\underline{\lambda}}=\{({\rm L}_1, {\rm L}_2, \ldots, {\rm L}_n)~|~[1]\stackrel{\lambda_1}{\longrightarrow}{\rm L}_1\stackrel{\lambda_2}{\longrightarrow}\ldots\stackrel{\lambda_n}{\longrightarrow}{\rm L}_n\}\subset {\rm Gr}^n. \end{equation} Let ${\rm pr}_n: {\rm Gr}^n\rightarrow {\rm Gr}$ be the projection onto the last factor. Set \begin{equation} \label{13.3.13.407h} {\rm Gr}_{\underline{\lambda}}^{\mu}:={\rm Gr}_{\underline{\lambda}}\cap {\rm pr}_n^{-1}\big({\rm S}_{w_0}^\mu\big). \end{equation} When $n=1$, under the geometric Satake correspondence, the components of ${\rm Gr}_{\lambda}^\mu$ give a basis (the MV basis) for the weight space $V_{\lambda}^{(\mu)}$, see \cite[Corollary 7.4]{MV}. It is easy to see that they are precisely the MV cycles of coweight $(\lambda,\mu)$ contained in $\overline{{\rm Gr}_{\lambda}}$, see \cite[Proposition 3]{A}. \vskip 3mm Now we restrict the constructions of the preceding subsections to four main examples associated to an $(n+2)$-gon. The $n=1$ case recovers the above three versions of {\rm MV} cycles. In this sense, the following can be viewed as a generalization of ${\rm MV}$ cycles. 
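As a sanity check on these definitions (our illustration, using only standard facts about semi-infinite cells), consider the extreme case of the coweights:

```latex
\emph{Example.} If $\lambda = \mu$, then ${\rm ht}(\lambda-\mu)=0$ and
$$
\overline{{\rm S}_{e}^{\lambda}\cap {\rm S}_{w_0}^{\lambda}} = \{t^{\lambda}\},
$$
so there is exactly one MV cycle of coweight $(\lambda, \lambda)$: the point $t^{\lambda}$.
For ${\rm G}={\rm SL}_2$ one has $\langle \rho, \alpha^{\vee}\rangle = 1$, so an MV cycle of
coweight $(\lambda,\mu)$ with $\lambda-\mu = m\,\alpha^{\vee}$, $m\geq 0$, has dimension $m$.
```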
\paragraph{Example 1: ${\rm J}=[2,n+1]\subset{\rm I}=[1,n+1]$.} Let ${\rm Conf}_{w_0}({\cal A}, {\rm Gr}^n, {\cal B})\subset {\rm Conf}_{{\rm J}\subset {\rm I}}({\rm Gr};{\cal A}, {\cal B})$ be the substack parametrizing configurations $({\rm A}_1,{\rm L}_2,\ldots, {\rm L}_{n+1}, {\rm B}_{n+2})$ where $({\rm A}_1, {\rm B}_{n+2})$ is generic. Recall ${\cal F}_{{\rm G}}$ in Definition \ref{torsorF}. Then $$ {\rm Conf}_{w_0}({\cal A}, {\rm Gr}^n, {\cal B})={\rm G}({\cal K})\backslash\big({\cal F}_{\rm G}({\cal K})\times {\rm Gr}^n\big). $$ Since ${\cal F}_{\rm G}$ is a ${\rm G}$-torsor, we get an isomorphism \begin{equation} \label{10.9.12.2a} i: {\rm Gr}^n \stackrel{=}{\longrightarrow}{\rm Conf}_{w_0}({\cal A}, {\rm Gr}^n, {\cal B}),~~~({\rm L}_1, \ldots, {\rm L}_{n})\longmapsto ({\rm U}, {\rm L}_1,\ldots,{\rm L}_{n}, {\rm B}^-). \end{equation} From now on we identify ${\rm Gr}^n$ with ${\rm Conf}_{w_0}({\cal A}, {\rm Gr}^n, {\cal B})$. There is a map, whose construction is illustrated on the right of Fig \ref{tpm11}: $$ \pi_{\rm Gr}: {\rm Conf}_{w_0}({\cal A}, {\rm Gr}^{n}, {\cal B}) \longrightarrow {\Bbb P}:= {\rm P}\times ({\rm P}^+)^{n-1}\times {\rm P}. $$ Its fibers are finite dimensional subvarieties ${\rm Gr}_{\lambda; \underline{\lambda}}^\mu$: \begin{equation} \label{MVlm} {\rm Gr}^n = \coprod {\rm Gr}_{\lambda; \underline{\lambda}}^\mu, ~~~~\mbox{where} ~~ (\lambda, \underline{\lambda},\mu)\in {\rm P}\times ({\rm P}^+)^{n-1}\times {\rm P}. \end{equation} By \eqref{13.3.13.246h} we see that $$ {\rm Gr}_{\lambda;\underline{\lambda}}^{\mu}=\{({\rm L}_1,\ldots, {\rm L}_n)\in {\rm Gr}^n~|~ {\rm L}_1\stackrel{\lambda_2}{\longrightarrow}\ldots\stackrel{\lambda_n}{\longrightarrow}{\rm L}_n,~{\rm L}_1\in {\rm S}_e^{\lambda},~ {\rm L}_n\in {\rm S}_{w_0}^{\mu}\},~~~~~\underline{\lambda}:=(\lambda_2,\ldots, \lambda_n). $$ When $n=1$, it is the intersection ${\rm S}_{e}^{\lambda}\cap {\rm S}_{w_0}^{\mu}$. 
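To make the semi-infinite cells entering this description explicit in the simplest case, here is the standard ${\rm SL}_2$ computation (our normalizations; we write $t^{m\alpha^{\vee}} = \alpha^{\vee}(t)^m$):

```latex
$$
\alpha^{\vee}(t)^{-m}\, x(g)\, \alpha^{\vee}(t)^{m} = x\big(g\, t^{-2m}\big),
\qquad x(g) = \begin{pmatrix} 1 & g \\ 0 & 1 \end{pmatrix},
$$
so $x(f)\cdot t^{m\alpha^{\vee}} = x(f')\cdot t^{m\alpha^{\vee}}$ in ${\rm Gr}$ if and only if
$f - f' \in t^{2m}{\cal O}$. Hence
$$
{\rm S}_{e}^{m\alpha^{\vee}} = \big\{\, x(f)\cdot t^{m\alpha^{\vee}} ~|~ f \in {\cal K}/t^{2m}{\cal O} \,\big\},
$$
and similarly for ${\rm S}_{w_0}^{m\alpha^{\vee}}$, with $y(g)\in {\rm U}^-({\cal K})$ in place of $x(f)$.
```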
Note that the very notion of MV cycles depends on the choice of the pair ${\rm H}\subset {\rm B}$. We transport the MV cycles to ${\rm Conf}_{w_0}({\cal A},{\rm Gr},{\cal B})$ by the isomorphism \eqref{10.9.12.2a}. The transported cycles are then independent of the chosen pair. In general we define \begin{definition} \label{MVlm1ayxsh} The irreducible components of $\overline{{\rm Gr}_{\lambda; \underline{\lambda}}^\mu}$ are called the {\it generalized Mirkovi\'{c}-Vilonen cycles} of coweight $(\lambda, \underline{\lambda}, \mu)$. \end{definition} Similarly, the left of Fig \ref{tpm11} provides a map \begin{equation} \label{13.2.22.852h} \pi^t: {\rm Conf}^+(\underline{\cal A}, {\cal A}^{n}, {\cal B})({\mathbb Z}^t) \longrightarrow {\rm P}\times ({\rm P}^+)^{n-1}\times {\rm P}. \end{equation} Let ${\bf P}_{\lambda; \underline{\lambda}}^{\mu}:={\rm Conf}^+(\underline{\cal A}, {\cal A}^{n}, {\cal B})({\mathbb Z}^t)_{\lambda; \underline{\lambda}}^\mu$ be the fiber of map \eqref{13.2.22.852h} over $(\lambda, \underline{\lambda}, \mu)$. Then \begin{equation} \label{13.2.22.902h} {\rm Conf}^+(\underline{\cal A}, {\cal A}^{n}, {\cal B})({\mathbb Z}^t)= \coprod {\bf P}_{\lambda; \underline{\lambda}}^{\mu}~~~~\mbox{where} ~~ (\lambda, \underline{\lambda},\mu)\in {\rm P}\times ({\rm P}^+)^{n-1}\times {\rm P}. \end{equation} By definition $\pi^t\circ {\rm val}$ and $\pi_{\rm Gr}\circ \kappa$ deliver the same map from ${\cal C}_{l}^\circ$ to ${\Bbb P}$. Thus we arrive at \begin{equation} \label{MVlm1} {\cal M}_l:=\overline{{\cal M}^\circ_l} \subset \overline{{\rm Gr}_{\lambda; \underline{\lambda}}^\mu}, ~~~~ l\in {\bf P}_{\lambda; \underline{\lambda}}^{\mu}={\rm Conf}^+(\underline{\cal A}, {\cal A}^{n}, {\cal B})({\mathbb Z}^t)_{\lambda; \underline{\lambda}}^\mu. \end{equation} \begin{theorem} \label{MVlm1a} The cycles (\ref{MVlm1}) are precisely the generalized MV cycles of coweight $(\lambda, \underline{\lambda},\mu)$. 
\end{theorem} \paragraph{Example 2: ${\rm J}={\rm I}=[2,n+1]$.} Let ${\rm Conf}_{w_0}({\cal B}, {\rm Gr}^n, {\cal B})\subset {\rm Conf}_{{\rm J}\subset {\rm I}}({\rm Gr};{\cal A}, {\cal B})$ be the substack parametrizing configurations $({\rm B}_1,{\rm L}_2,\ldots, {\rm L}_{n+1}, {\rm B}_{n+2})$ where $({\rm B}_1, {\rm B}_{n+2})$ is generic. Similarly, we get an isomorphism of stacks \begin{equation} \label{10.9.12.2} i_s: {\rm H}({\cal K})\backslash{\rm Gr}^n\stackrel{=}{\longrightarrow}{\rm Conf}_{w_0}({\cal B},{\rm Gr}^n, {\cal B}),~~~({\rm L}_1,\ldots, {\rm L}_n)\longmapsto ({\rm B}, {\rm L}_1,\ldots, {\rm L}_n, {\rm B}^-). \end{equation} Here the group ${\rm H}({\cal K})$ acts diagonally on ${\rm Gr}^n$. Let $h\in {\rm H}({\cal K})$. If $[h]=t^{\mu}$, then $h\cdot \overline {{\rm Gr}_{\lambda;\underline{\lambda}}^\nu}=\overline {{\rm Gr}_{\lambda+\mu;\underline{\lambda}}^{\nu+\mu}}$. This provides a bijection between the sets of components of the two varieties. \begin{definition} The ${\rm H}({\cal K})$-orbit of a generalized MV cycle of coweight $(\lambda,\underline{\lambda}, \nu)$ is called a {\it generalized stable MV cycle} of coweight $(\underline{\lambda},\lambda-\nu)$. \end{definition} When $n=1$, it recovers the usual stable MV cycles. The generalized stable MV cycles live naturally on the stack ${\rm H}({\cal K})\backslash {\rm Gr}^n$. The isomorphism \eqref{10.9.12.2} transports them to ${\rm Conf}_{w_0}({\cal B},{\rm Gr}^n, {\cal B})$. \vskip 2mm The solid blue arrows and the triple of dashed red arrows on Fig \ref{tpm13a} provide a canonical projection $$(\pi^t,\mu^t): {\rm Conf}^+({\cal B}, {\cal A}^{n}, {\cal B})({\mathbb Z}^t) \longrightarrow ({\rm P}^+)^{n-1}\times {\rm P}.$$ Let ${\bf A}_{\underline{\lambda}}^\mu:= {\rm Conf}^+({\cal B}, {\cal A}^{n}, {\cal B})({\mathbb Z}^t)_{\underline{\lambda}}^\mu$ be its fiber over $(\underline{\lambda}, \mu)$. 
Then \begin{equation} {\rm Conf}^+({\cal B}, {\cal A}^{n}, {\cal B})({\mathbb Z}^t)= \coprod {\bf A}_{\underline{\lambda}}^{\mu}~~~~\mbox{where} ~~ \underline{\lambda}\in ({\rm P}^+)^{n-1}, ~~\mu \in {\rm P}. \end{equation} On the other hand, our general construction provides us with the irreducible cycles \begin{equation} \label{MVlm1s} {\cal M}_l:=\overline{{\cal M}_l^\circ} \subset {\rm H}({\cal K})\backslash{\rm Gr}^n= {\rm Conf}_{w_0}({\cal B}, {\rm Gr}^{n}, {\cal B}), ~~~~ l\in {\bf A}_{\underline{\lambda}}^\mu. \end{equation} \begin{theorem} \label{MVlm1sa} The cycles (\ref{MVlm1s}) are precisely the generalized stable MV cycles of coweight $(\underline{\lambda},\mu)$. \end{theorem} \begin{figure}[ht] \centerline{\epsfbox{tpm13a.eps}} \caption{Generalized stable MV cycles ${\cal M}_{l}\subset {\rm Conf}({\cal B}, {\rm Gr}^3, {\cal B}) = {\rm H}({\cal K})\backslash {\rm Gr}^3$.} \label{tpm13a} \end{figure} \paragraph{Example 3: ${\rm J}={\rm I}=[1,n+1]$.} By the Iwasawa decomposition we get an isomorphism \begin{equation} \label{stackk1} i_b: {\rm B}^-({\cal O})\backslash {\rm Gr}^n \stackrel{=}{\longrightarrow} {\rm Conf}({\rm Gr}^{n+1},{\cal B}),~~~({\rm L}_1,\ldots, {\rm L}_n)\longmapsto ([1], {\rm L}_1,\ldots, {\rm L}_n, {\rm B}^{-}). \end{equation} There are two projections, illustrated on Fig \ref{tpm12}: \begin{equation} \label{lm} (\pi_{\rm Gr}, \mu_{\rm Gr}): {\rm Conf}({\rm Gr}^{n+1}, {\cal B}) \longrightarrow ({\rm P}^+)^n \times {\rm P}, \end{equation} \begin{equation} \label{mpp} (\pi^t, \mu^t): {\rm Conf}^{+}({\cal A}^{n+1}, {\cal B})({\mathbb Z}^t) \longrightarrow ({\rm P}^+)^n \times {\rm P}. \end{equation} Their fibers over $(\underline \lambda, \mu)\in ({\rm P}^+)^n\times {\rm P}$ provide decompositions \begin{equation} \label{decompo} {\rm Conf}({\rm Gr}^{n+1}, {\cal B}) = \coprod_{\underline \lambda, \mu }{\rm Conf}({\rm Gr}^{n+1}, {\cal B})^\mu_{\underline \lambda}. 
\end{equation} \begin{equation} {\rm Conf}^+({\cal A}^{n+1}, {\cal B})({\mathbb Z}^t) = \coprod_{\underline \lambda, \mu }{\rm Conf}^+({\cal A}^{n+1}, {\cal B})({\mathbb Z}^t)^\mu_{\underline \lambda}. \end{equation} By definition, these decompositions are compatible under the map $\kappa$. We get irreducible cycles \begin{equation}\label{MVn+1bb} {\cal M}_l:=\overline{{\cal M}^\circ_l} \subset {\rm B}^-({\cal O})\backslash {\rm Gr}^n={\rm Conf}({\rm Gr}^{n+1}, {\cal B}), ~~~~ l \in {\bf B}^\mu_{\underline \lambda}:={\rm Conf}^+({\cal A}^{n+1}, {\cal B})({\mathbb Z}^t)^\mu_{\underline \lambda}. \end{equation} \begin{figure}[ht] \centerline{\epsfbox{tpm12.eps}} \caption{Generalized MV cycles ${\cal M}_{l}\subset {\rm Conf}({\rm Gr}^4, {\cal B}) = {\rm B}^-({\cal O})\backslash {\rm Gr}^3$.} \label{tpm12} \end{figure} The connected group ${\rm B}^-({\cal O})$ acts diagonally on ${\rm Gr}^n$. It preserves the components of the subvarieties $\overline{{\rm Gr}_{\underline{\lambda}}^\mu}$ in \eqref{13.3.13.407h}. Hence these components live naturally on the stack ${\rm B}^-({\cal O})\backslash {\rm Gr}^n$. We transport them to ${\rm Conf}({\rm Gr}^{n+1},{\cal B})$ by \eqref{stackk1}. \begin{theorem} \label{MVn+1bbb} The cycles (\ref{MVn+1bb}) are precisely the components of ${\rm B}^{-}({\cal O})\backslash\overline{{\rm Gr}_{\underline{\lambda}}^\mu}$. \end{theorem} \paragraph{Example 4: ${\rm J}={\rm I}=[1,n+2]$.} There is an isomorphism \begin{equation} i_g: {\rm G}({\cal O})\backslash {\rm Gr}^{n+1}\stackrel{=}{\longrightarrow}{\rm Conf}_{n+2}({\rm Gr}),~~~({\rm L}_1,\ldots, {\rm L}_{n+1})\longmapsto ([1],{\rm L}_1,\ldots, {\rm L}_{n+1}). \end{equation} We arrive at irreducible cycles defined in Definition \ref{9.22.12.13h}: $$ {\cal M}_l:=\overline{{\cal M}^\circ_l} \subset {\rm G}({\cal O})\backslash {\rm Gr}^{n+1}= {\rm Conf}_{n+2}({\rm Gr}), ~~~~ l \in {\bf C}_{\underline \lambda}:= {\rm Conf}^{+}_n({\cal A})({\mathbb Z}^t)_{\underline \lambda}. 
$$ This example recovers Theorem \ref{5.8.10.45b}. \vskip 2mm \begin{figure}[ht] \centerline{\epsfbox{tpm7.eps}} \caption{Mirkovi\'{c}-Vilonen cycles ${\cal M}_{l}\subset {\rm Conf}_{w_0}({\cal A}, {\rm Gr}, {\cal B}) = {\rm Gr}$.} \label{tpm7} \end{figure} \begin{figure}[ht] \centerline{\epsfbox{tpm8.eps}} \caption{Stable Mirkovi\'{c}-Vilonen cycles ${\cal M}_{l}\subset {\rm Conf}_{w_0}({\cal B}, {\rm Gr}, {\cal B}) = {\rm H}({\cal K})\backslash {\rm Gr}$.} \label{tpm8} \end{figure} \begin{figure}[ht] \centerline{\epsfbox{tpm9.eps}} \caption{MV cycles which lie in ${\rm Gr}_\lambda$ are the cycles ${\cal M}^\circ_{l}\subset {\rm Conf}({\rm Gr}, {\rm Gr}, {\cal B})_\lambda = {\rm B}^-({\cal O})\backslash {\rm Gr}_\lambda$.} \label{tpm9} \end{figure} \begin{figure}[ht] \centerline{\epsfbox{tpm10.eps}} \caption{Generalized MV cycles ${\cal M}_{l}\subset {\rm Conf}({\rm Gr}, {\rm Gr}, {\rm Gr})$.} \label{tpm10} \end{figure} Specializing Theorems \ref{MVlm1a}-\ref{MVn+1bbb} to $n=1$, we get \begin{theorem} \label{kth} 1) Mirkovi\'{c}-Vilonen cycles of coweight $(\lambda, \mu)$ are precisely the cycles $$ {\cal M}_l \subset {\rm Gr}, ~~~~l\in {\bf P}_{\lambda}^{\mu}:={\rm Conf}^+(\underline {\cal A}, {\cal A}, {\cal B})({\mathbb Z}^t)^\mu_\lambda~~\mbox{ for}~~~ {\cal W} = \chi_{{\rm A}_2}.$$ 2) Stable Mirkovi\'{c}-Vilonen cycles of coweight $\mu$ are precisely the cycles $$ {\cal M}_l \subset {\rm H}({\cal K})\backslash {\rm Gr}, ~~~~l\in {\bf A}_{\mu}:= {\rm Conf}^+({\cal B}, {\cal A}, {\cal B})({\mathbb Z}^t)^\mu~~\mbox{ for}~~~ {\cal W} = \chi_{{\rm A}_2} .$$ 3) Mirkovi\'{c}-Vilonen cycles of coweight $(\lambda, \mu)$ which lie in $\overline {\rm Gr}_\lambda \subset {\rm Gr}$ are precisely the cycles $$ {\cal M}_l\subset {\rm B}^-({\cal O})\backslash {\rm Gr}, ~~~~l\in {\bf B}_{\lambda}^{\mu}:= {\rm Conf}^+({\cal A}, {\cal A}, {\cal B})({\mathbb Z}^t)_\lambda^\mu~~\mbox{ for}~~~ {\cal W} = \chi_{{\rm A}_1}+\chi_{{\rm A}_2}. $$ \end{theorem} Theorem \ref{kth} is 
proved in Section \ref{sec12.1.1}. Note that there is a positive birational isomorphism ${\rm Conf}({\cal B}, {\cal A}, {\cal B}) \stackrel{\sim}{=} {\rm U}$. Thus we identify ${\rm Conf}^+({\cal B}, {\cal A}, {\cal B})({\mathbb Z}^t)$ with the subset of ${\rm U}({\mathbb Z}^t)$ used by Lusztig \cite{L}, \cite{L1} to parametrize the canonical basis in Lemma \ref{8.20.3.42h}. Then Theorem \ref{kth} is equivalent to the main results of Kamnitzer's paper \cite{K}. Our approach, using the moduli space ${\rm Conf}({\cal B}, {\cal A}, {\cal B})$ rather than ${\rm U}$, makes the parametrization of the MV cycles more natural and transparent, and puts it into the general framework of this paper. \vskip 3mm To summarize, there are four different versions of the cycles relevant to representation theory, related to mixed configurations of triples, as illustrated on Fig \ref{tpm7}-\ref{tpm10}. \subsubsection{Constructible equations for the cycles ${\cal M}_l^\circ$} \label{sec2.3.2-} Let $F$ be a rational function on the stack ${\rm Conf}_{\rm I}({\cal A};{\cal B})$. We generalize the construction of $D_F$ from Section \ref{sec2.2.5}. As an application, it implies that the cycles ${\cal M}_l^{\circ}$ in \eqref{MVGCyyyxs} are disjoint. Given ${\rm J}\subset {\rm I}\subset [1,n]$, let $m$ be the cardinality of ${\rm J}$. We write ${\rm J}=\{j_1,\ldots, j_m\}$. Consider the space $$ \mathfrak{X}:=X_1\times \ldots \times X_{n},~~~\mbox{where } X_i =\left\{\begin{array}{ll}{\rm G} ~~~&\text{ if }i\in {\rm J}, \\ {\cal A} &\text{ if } i\in {{\rm I}-{\rm J}},\\ {\cal B} &\text{ otherwise}.\end{array}\right. $$ Let $\mathfrak{X}_*$ be its subset consisting of collections $\{x_1,\ldots, x_{n}\}$ whose subcollection $\{x_{i_1},\ldots, x_{i_{n-m}}\}$, $i_s\not \in {\rm J}$, is generic. 
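For instance (an illustration with ${\rm J}$ and ${\rm I}$ chosen by us), let $n=3$, ${\rm I}=[1,2]$ and ${\rm J}=\{2\}$, so that $m=1$:

```latex
$$
\mathfrak{X} = {\cal A} \times {\rm G} \times {\cal B},
$$
since $1\in {\rm I}-{\rm J}$, $2\in {\rm J}$ and $3\notin {\rm I}$. Here $\mathfrak{X}_*$
consists of the collections $\{x_1, x_2, x_3\}$ such that the pair
$\{x_1, x_3\}\in {\cal A}\times{\cal B}$ is generic; no condition is imposed on $x_2\in {\rm G}$.
```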
Given a rational function $F$ on ${\rm Conf}_{\rm I}({\cal A};{\cal B})$, each $x=\{x_1,\ldots, x_{n}\}\in \mathfrak{X}_*({\cal K})$ provides a function $F_x$ on ${\cal A}^m$, whose value on $\{{\rm A}_{j_1},\ldots, {\rm A}_{j_m}\} \in {\cal A}^m$ is \begin{equation} \label{13.1.30.8.43h} F_x({\rm A}_{j_1},\ldots, {\rm A}_{j_m}):=F(x_1',\ldots, x_{n}')\in {\cal K},~~~x_i'=\left\{\begin{array}{cl} x_i\cdot {\rm A}_{i} &\text{ if } i\in {\rm J},\\ x_i &\text{ otherwise}. \end{array}\right. \end{equation} Then $F_x\in {\cal K}({\cal A}^m)$. Recall the map ${\rm val}: {\cal K}({\cal A}^m)^\times \to {\mathbb Z}$. We get a ${\mathbb Z}$-valued function \begin{equation} \label{13.1.26.10.21hhh} D_F: \mathfrak{X}_*({\cal K}) \longrightarrow {\mathbb Z}, ~~~~D_F(x):={\rm val}(F_x). \end{equation} Recall the right action of ${\rm G}^m$ on ${\mathbb C}({\cal A}^m)$. Thanks to Lemma \ref{13.1.26.11.18h} and the fact that $F\in {\mathbb Q}({\rm Conf}_{\rm I}({\cal A};{\cal B}))$, we have \begin{equation} \label{13.2.22.1854h} \forall g\in {\rm G}({\cal K}),~\forall h\in {\rm G}({\cal O})^m,~~~{\rm val}(F_{g\cdot x}\circ h)={\rm val}(F_x). \end{equation} Thus $D_F$ descends to \begin{equation} D_F: {\rm Conf}_{{\rm J}\subset {\rm I}}^*({\rm Gr}; {\cal A},{\cal B})\longrightarrow {\mathbb Z}. \end{equation} Here ${\rm Conf}_{{\rm J}\subset{\rm I}}^*({\rm Gr};{\cal A}, {\cal B})$ is a subspace of ${\rm Conf}_{{\rm J}\subset{\rm I}}({\rm Gr};{\cal A}, {\cal B})$ consisting of the configurations whose subconfigurations of flags and decorated flags are generic. By definition, ${\cal M}_{l}^{\circ}$ in \eqref{MVGCyyyxs} are contained in ${\rm Conf}_{{\rm J}\subset {\rm I}}^*({\rm Gr}; {\cal A},{\cal B})$. The following theorem is a generalization of Theorem \ref{5.8.10.45a}. See Section \ref{sec7} for its proof. \begin{theorem} \label{13.1.30.742h} Let $l\in {\rm Conf}_{{\rm J}\subset{\rm I}}^+({\cal A}; {\cal B})({\mathbb Z}^t)$. 
Let $F\in {\mathbb Q}_+({\rm Conf}_{\rm I}({\cal A};{\cal B}))$. Then $D_{F}({\cal M}_l^{\circ})\equiv F^t(l)$. \end{theorem} \section{Main definitions and results: the disc case} \label{sec2} \subsection{Configurations of decorated flags, the potential ${\cal W}$, and tensor product invariants} \label{sec2.1} \subsubsection{Positive spaces and their tropical points} \label{sec2.1.2} Below we briefly recall the main definitions, following \cite[Section 1]{FG2}. \paragraph{Positive spaces.} A positive rational function on a split algebraic torus ${\rm T}$ is a nonzero rational function on ${\rm T}$ which, in a coordinate system given by a set of characters of ${\rm T}$, can be presented as a ratio of two polynomials with positive integral coefficients. A {\it positive rational morphism} $\varphi: {\rm T}_1\rightarrow {\rm T}_2$ of two split tori is a morphism such that for each character $\chi$ of ${\rm T}_2$ the function $\chi\circ \varphi$ is a positive rational function. \vskip 2mm A {\it positive atlas} on an irreducible space (i.e. variety / stack) ${\cal Y}$ over ${\mathbb Q}$ is given by a non-empty collection $\{{\bf c}\}$ of birational isomorphisms over ${\mathbb Q}$ $$ \alpha_{\bf c}: {\rm T} \longrightarrow {\cal Y}, $$ where ${\rm T}$ is a split algebraic torus, satisfying the following conditions: \begin{itemize} \item For any pair ${\bf c}, {\bf c'}$ the map $\varphi_{\bf c, \bf c'} := \alpha_{\bf c}^{-1}\circ \alpha_{\bf c'}$ is a positive birational isomorphism of ${\rm T}$. \item Each map $\alpha_{\bf c}$ is regular on a complement to a divisor given by a positive rational function. \end{itemize} A {\it positive space} is a space with a positive atlas. A split algebraic torus ${\rm T}$ is the simplest example of a positive space. It has a single positive coordinate system, given by the torus itself. 
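A minimal example of a positive atlas with two genuinely different charts, in the spirit of cluster mutations (our illustration, not taken from the text): on ${\cal Y} = {\Bbb G}_m^2$ take $\alpha_{\bf c} = {\rm id}$ and $\alpha_{\bf c'} = \alpha_{\bf c}\circ \mu$, where

```latex
$$
\mu: {\Bbb G}_m^2 \longrightarrow {\Bbb G}_m^2, \qquad (x, y) \longmapsto \Big(x, \; \frac{1+x}{y}\Big).
$$
```

The map $\mu$ is an involution, and both $\mu$ and $\mu^{-1}=\mu$ are given by positive rational functions, so $\varphi_{\bf c, \bf c'}=\mu$ is a positive birational isomorphism of ${\Bbb G}_m^2$; it is regular away from the divisor $\{1+x = 0\}$, the zero locus of a positive rational function.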
A {\it positive rational function} $F$ on ${\cal Y}$ is a rational function given by a subtraction free rational function in one, and hence in all coordinate systems of the positive atlas on ${\cal Y}$. A {\it positive rational map} ${\cal Y} \to {\cal Z}$ is a rational map given by positive rational functions in one, and hence in all positive coordinate systems. \paragraph{Tropical points.} The tropical semifield ${\mathbb Z}^t$ is the set ${\mathbb Z}$ equipped with tropical addition and multiplication given by $$ a+_t b=\min\{a,b\}, \quad a\cdot_t b=a+b, \quad a, b\in {\mathbb Z}. $$ This definition can be motivated as follows. Consider the semifield ${\mathbb R}_{+}((t))$ of Laurent series $f(t)$ with {\it positive} leading coefficients: there is no ``$-$" operation in ${\mathbb R}_{+}((t))$. Then the valuation map $f(t) \mapsto {\rm val}(f)$ is a homomorphism of semifields ${\rm val}: {\mathbb R}_{+}((t)) \to {\mathbb Z}^t$. Denote by $X_*({\rm T})= {\rm Hom}({\Bbb G}_m, {\rm T})$ and $X^*({\rm T})= {\rm Hom}({\rm T}, {\Bbb G}_m)$ the lattices of cocharacters and characters of a split algebraic torus ${\rm T}$. There is a pairing $\langle\ast, \ast\rangle: X^*({\rm T}) \times X_*({\rm T}) \to {\mathbb Z}$. The set of ${\mathbb Z}^t$-points of a split torus ${\rm T}$ is defined to be its lattice of cocharacters: $$ {\rm T}({\mathbb Z}^t):= X_*({\rm T}) $$ A positive rational function $F$ on ${\rm T}$ gives rise to its tropicalization $F^t$, which is a ${\mathbb Z}$-valued function on the set ${\rm T}({\mathbb Z}^t)$. Its definition is clear from the following example: $$ F = \frac{x_1x_2^2 + 3 x_2x_3^5}{x_2x_4}, ~~~ F^t= {\rm min}\{x_1+2x_2, x_2+5x_3\} - {\rm min}\{x_2+x_4\}. $$ Similarly, a positive morphism $\varphi:{\rm T}\rightarrow {\rm S}$ of two split tori gives rise to a piecewise linear morphism $\varphi^t: {\rm T}({\mathbb Z}^t)\rightarrow {\rm S}({\mathbb Z}^t)$. 
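Tropicalization of a positive morphism of tori works coordinate-wise on the characters; for example (our illustration, in the same style as the function example above):

```latex
$$
\varphi: {\Bbb G}_m^2 \longrightarrow {\Bbb G}_m^2, \qquad (x,y) \longmapsto \big(xy, \; x + y^2\big),
$$
$$
\varphi^t: {\mathbb Z}^2 \longrightarrow {\mathbb Z}^2, \qquad (x,y) \longmapsto \big(x+y, \; \min\{x,\, 2y\}\big).
$$
```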
\vskip 2mm There is a unique way to assign to a positive space ${\cal Y}$ a set ${\cal Y}({\mathbb Z}^t)$ of its ${\mathbb Z}^t$-points such that \begin{itemize} \item Each of the coordinate systems ${\bf c}$ provides a canonical isomorphism $$ \alpha^t_{\bf c}: {\rm T}({\mathbb Z}^t) \stackrel{\sim}{\longrightarrow} {\cal Y}({\mathbb Z}^t). $$ \item These isomorphisms are related by piecewise-linear isomorphisms $\varphi^t_{\bf c, \bf c'}$: $$ \alpha^t_{\bf c'}(l) = \alpha^t_{\bf c}\circ \varphi^t_{\bf c, \bf c'}(l). $$ \end{itemize} \vskip 2mm This process extends to the category of positive spaces: it gives a functor, called {\it tropicalization}, from the category of positive spaces to the category of sets. For each positive morphism $f:{\cal Y}\rightarrow {\cal Z}$, denote by $f^t: {\cal Y}({\mathbb Z}^t)\rightarrow {\cal Z}({\mathbb Z}^t)$ the corresponding tropicalized morphism. Pick a basis of cocharacters of ${\rm T}$. Then, assigning to each positive coordinate system ${\bf c}$ a set of integers $(l^{\bf c}_1, ..., l^{\bf c}_n)\in {\mathbb Z}^n$ related by piecewise-linear isomorphisms $\varphi^t_{\bf c, \bf c'}$, we get an element $$ l = \alpha^t_{\bf c}(l^{\bf c}_1, ..., l^{\bf c}_n)\in {\cal Y}({\mathbb Z}^t). $$ For a variety ${\cal Y}$ with a positive atlas, the set ${\cal Y}({\mathbb Z}^t)$ can be interpreted as the set of {\it transcendental cells} of the infinite dimensional variety ${\cal Y}\big({\mathbb C}((t))\big)$, as we will explain in Section \ref{sec2.2.1}. \paragraph{The set of positive tropical points.} Let $({\cal Y}, {\cal W})$ be a pair given by a positive space ${\cal Y}$ equipped with a positive rational function ${\cal W}$. Let us tropicalize this function, getting a map $$ {\cal W}^t: {\cal Y}({\mathbb Z}^t)\longrightarrow {\mathbb Z}. $$ We define the set of {\it positive tropical points}: $$ {\cal Y}_{\cal W}^{+}({\mathbb Z}^t):=\{l\in {\cal Y}({\mathbb Z}^t)~|~{\cal W}^t(l)\geq 0\}. 
$$ {\bf Example.} The Cartan group ${\rm H}$ of ${\rm G}$ is a split torus and hence has a standard positive structure. The set ${\rm H}({\mathbb Z}^t)=X_*({\rm H})$ is the coweight lattice of ${\rm G}$. Let $\{\alpha_i\}$ be the set of simple positive roots indexed by $I$. We define \begin{equation}\label{n=2pot} {\cal W}: {\rm H}\longrightarrow {\Bbb A}^1,~~~h\longmapsto \sum_{i\in I}\alpha_i(h). \end{equation} The set of positive tropical points is the positive Weyl chamber in $X_*({\rm H})$: $$ {\rm H}^+({\mathbb Z}^t):={\rm H}_{\cal W}^+({\mathbb Z}^t)=\{\lambda\in X_*({\rm H})~|~\langle \lambda, \alpha_i\rangle\geq 0, ~\forall i\in I\}. $$ \subsubsection{Basic notations for a split reductive group ${\rm G}$} \label{sec2.1.1} Denote by ${\rm H}$ the Cartan group of ${\rm G}$, and by ${\rm H}^L$ the Cartan group of the Langlands dual group ${\rm G}^L$. There is a canonical isomorphism $X^*({\rm H}^L) = X_*({\rm H}). $ Denote by $\Delta^+ \subset X^*({\rm H})$ the set of positive roots for ${\rm G}$, and by $\Pi:= \{\alpha_i\}\subset \Delta^+$ the subset of simple positive roots, indexed by a finite set ${I}$. We sometimes use ${\rm P}$ instead of $X_*({\rm H})$. Denote by ${\rm P}^+$ the positive Weyl chamber in ${\rm P}$. It is also the cone of dominant weights for the dual group ${\rm G}^L$. Denote by $V_\lambda$ the irreducible finite dimensional ${\rm G}^L$-module parametrized by $\lambda\in {\rm P}^+$. Let ${\rm U}^{\pm}_{i}~ (i\in I)$ be the simple root subgroup of ${\rm U}^{\pm}$. Let $\alpha_i^{\vee}: \mathbb{G}_m \rightarrow {\rm H}$ be the simple coroot corresponding to the root $\alpha_i: {\rm H}\rightarrow \mathbb{G}_m$. 
For all $i\in I$, there are isomorphisms $x_i: \mathbb{G}_{a}\rightarrow {\rm U}_{i}^{+}$ and $y_{i}: \mathbb{G}_a\rightarrow {\rm U}_i^{-}$ such that the maps \begin{equation} \label{pinning} \begin{pmatrix} 1 & a \\ 0 & 1 \\ \end{pmatrix} \longmapsto x_i(a), \quad \begin{pmatrix} 1 & 0 \\ b & 1 \\ \end{pmatrix} \longmapsto y_i(b), \quad \begin{pmatrix} t & 0 \\ 0 & t^{-1} \\ \end{pmatrix} \longmapsto \alpha_i^{\vee}(t) \end{equation} provide homomorphisms $\phi_i: {\rm SL}_2 \rightarrow {\rm G}.$ Let $s_i ~(i\in I)$ be the simple reflections generating the Weyl group. Set $\overline{s}_i:=y_i(1)x_i(-1)y_i(1).$ The elements $\overline{s}_i$ satisfy the braid relations. So we can associate to each $w\in W$ its representative $\overline{w}$ in such a way that for any reduced decomposition $w=s_{i_1}\ldots s_{i_k}$ one has $\overline{w}=\overline{s}_{i_1}\ldots \overline{s}_{i_k}$. Denote by $w_0$ the longest element of the Weyl group. Set $s_{\rm G}:=\overline{w}_0^2$. It is an order two central element in ${\rm G}$. For ${\rm G=SL}_2$ it is the element $-{\rm Id}$. For an arbitrary reductive ${\rm G}$ the element $s_{\rm G}$ is the image of the element $s_{{\rm SL}_2}$ under a principal embedding ${\rm SL}_2 \hookrightarrow {\rm G}$. For example, $s_{{\rm SL}_m} = (-1)^{m-1}{\rm Id}$. See \cite[Section 2.3]{FG1} for a proof. \subsubsection{Lusztig's positive atlas of ${\rm U}$ and the character $\chi_{\rm A}$} \label{sec4.1} Let $w_0=s_{i_1}\ldots s_{i_m}$ be a reduced decomposition. It is encoded by the sequence ${\bf i}=(i_1, i_2,\ldots, i_m)$. It provides a regular map \begin{equation} \label{11.20.11.191} \phi_{\bf i}: ({\Bbb G}_m)^m \longrightarrow {\rm U}, ~~~ (a_1, ..., a_m)\longmapsto x_{i_1}(a_1)\ldots x_{i_m}(a_m). \end{equation} The map $\phi_{\bf i}$ is an open embedding \cite{L}, and a birational isomorphism. Thus it provides a rational coordinate system on ${\rm U}$. 
It was shown in {\it loc.cit.} that the collection of these rational coordinate systems forms a positive atlas of ${\rm U}$, which we call {\it Lusztig's positive atlas}. There is a similar positive atlas on ${{\rm U}^{-}}$ provided by the maps $y_i$. \vskip 2mm The choice of the maps $x_i$, $y_i$ in (\ref{pinning}) provides the standard character: \begin{equation} \label{10.1.chi} \chi: {\rm U}\longrightarrow {\Bbb A}^1,~~~x_{i_1}(a_1)\ldots x_{i_m}(a_m)\longmapsto \sum_{j=1}^m a_j. \end{equation} It is evidently a positive function in Lusztig's positive atlas. Moreover it is independent of the sequence ${\bf i}$ chosen. Similarly, there is a character $ \chi^{-}: {\rm U}^{-} \to {\Bbb A}^1$, $y_{i_1}(b_1)\ldots y_{i_m}(b_m) \mapsto \sum_{j=1}^m b_j$, which is positive in the positive atlas on ${{\rm U}^{-}}$. Let ${\rm A}:=g\cdot {\rm U}$ be a decorated flag. Its stabilizer is ${\rm U}_{{\rm A}}=g{\rm U} g^{-1}$. The associated character is $$ \chi_{{\rm A}}: {\rm U}_{{\rm A}}\longrightarrow {\Bbb A}^{1}, ~~~u\longmapsto \chi(g^{-1}ug). $$ For example, for an $h\in {\rm H}$, the character $\chi_{h \cdot {\rm U}}$ is given by $ x_{i_1}(a_1)\ldots x_{i_m}(a_m) \longmapsto \sum_{j=1}^m a_j/\alpha_{i_j}(h). $ \subsubsection{The potential ${\cal W}$ on the moduli space ${\rm Conf}_n({\cal A})$.} \label{sec2.1.4} Given a group ${\rm G}$ and ${\rm G}$-sets $X_1, ..., X_n$, orbits of the diagonal ${\rm G}$-action on $X_1\times ...\times X_n$ are called {\it configurations}. Denote by $\{x_1,\ldots, x_n\}$ a collection of points, and by $(x_1,\ldots, x_n)$ its configuration. We usually denote a decorated flag by ${\rm A}_i$ and the corresponding flag $\pi({\rm A}_i)$ by ${\rm B}_i$. Denote the set $\{1,\ldots, n\}$ of consecutive integers by $[1,n]$. \begin{definition} \label{6.3.12.1} A pair $\{{\rm B}_1, {\rm B}_2\}\in {\cal B}\times{\cal B}$ of Borel subgroups is {\em generic} if ${\rm B}_1 \cap {\rm B}_2$ is a Cartan subgroup in ${\rm G}$. 
A collection $\{{\rm A}_1,\ldots, {\rm B}_{m+n}\}\in {\cal A}^n\times {\cal B}^m$ is {\em generic} if for any distinct $i,j\in [1, m+n]$, the pair $\{{\rm B}_i, {\rm B}_j\}$ is generic. \end{definition} Set ${\rm Conf}({\cal A}^n, {\cal B}^m):={{\rm G}}\backslash ({\cal A}^n\times {\cal B}^m)$. Note that if $\{{\rm A}_1,\ldots, {\rm B}_{m+n}\}$ is generic, then so is $g\cdot \{{\rm A}_1, \ldots, {\rm B}_{m+n}\}$ for any $g\in {\rm G}$. Denote by ${\rm Conf}^*({\cal A}^n, {\cal B}^m)$ the subset of generic configurations. \begin{definition} \label{torsorF} A frame for a split reductive algebraic group ${\rm G}$ over ${\mathbb Q}$ is a generic pair $\{{\rm A},{\rm B}\}\in {\cal A}\times {\cal B}$. Denote by ${\cal F}_{{\rm G}}$ the moduli space of frames. \end{definition} The space ${\cal F}_{{\rm G}}$ is a left ${\rm G}$-torsor. If ${\rm G} = {\rm SL}_m$, then a $K$-point of ${\cal F}_{{\rm G}}$ is the same thing as a unimodular frame in a vector space over $K$ of dimension $m$ with a volume form. If ${\rm G}$ is an adjoint group, then a frame is the same thing as a pinning. \vskip 3mm Let $\{{\rm A}_1, \ldots, {\rm A}_n\}$ be a generic collection of decorated flags. For each $j\in [1,n]$, take the triple $\{{\rm B}_{j-1}, {\rm A}_j, {\rm B}_{j+1}\}$. Since ${\cal F}_{\rm G}$ is a ${\rm G}$-torsor, there is a unique $u_j\in {\rm U}_{{\rm A}_j}$ such that \begin{equation} \label{7.20.9.8} \{ {\rm A}_j,{\rm B}_{j+1}\} = u_j \cdot \{ {\rm A}_j, {\rm B}_{j-1}\}. \end{equation} Consider the following rational function on ${\cal A}^n$, whose definition is illustrated on Fig \ref{cal-1}: \begin{equation} \label{potential} {\cal W}({\rm A}_1,\ldots, {\rm A}_n): = \sum_{j=1}^n \chi_{{\rm A}_j}(u_j). \end{equation} \begin{lemma} \label{9.21.17.00h} For any $g\in {\rm G}$, we have ${\cal W}(g{\rm A}_1,\ldots, g{\rm A}_n)={\cal W}({\rm A}_1,\ldots, {\rm A}_n)$. \end{lemma} \begin{proof} Clearly $\{g{\rm A}_j, g{\rm B}_{j+1} \}=gu_jg^{-1}\cdot\{g{\rm A}_j, g{\rm B}_{j-1}\}$. 
The Lemma follows from (\ref{obvious}). \end{proof} \begin{figure}[ht] \epsfxsize130pt \centerline{\epsfbox{cal-1.eps}} \caption{The potential is a sum of the contribution at the vertices.} \label{cal-1} \end{figure} Since ${\cal W}$ is invariant under the ${\rm G}$-diagonal action on ${\cal A}^n$, we define \begin{definition} The potential ${\cal W}$ is a rational function on ${\rm Conf}_n({\cal A})$, given by (\ref{potential}). \end{definition} \begin{theorem} \label{mth1} The potential ${\cal W}$ is a positive rational function on the space ${\rm Conf}_n({\cal A})$, $n>2$. \end{theorem} Theorem \ref{mth1} is a non-trivial result. It is based on two facts: the character $\chi$ is a positive function on ${\rm U}$, and the positive structure on ${\rm Conf}_n({\cal A})$ is twisted cyclic invariant, see Section \ref{sec2.1.6}. We prove Theorem \ref{mth1} in {Section \ref{sec6.4}}. \vskip 3mm Therefore we arrive at the set of positive tropical points of ${\rm Conf}_n({\cal A})$: \begin{equation} \label{11.15.11.1} {\rm Conf}^+_n({\cal A})({\mathbb Z}^t):= \{l \in {\rm Conf}_n({\cal A})({\mathbb Z}^t) ~|~ {\cal W}^t(l) \geq 0\}, \quad n>2. \end{equation} \vskip 3mm {\bf Example}. Let ${\rm G}={\rm SL}_2$. The space ${\rm Conf}_3({\cal A})$ parametrizes configurations $(v_1, v_2, v_3)$ of vectors in a two dimensional vector space with a volume form $\omega$. Set $\Delta_{i,j}:= \langle v_i \wedge v_j, \omega\rangle$. Then \begin{equation} \label{FW} {\cal W}(v_1, v_2, v_3) = \frac{\Delta_{1,3}}{\Delta_{1,2} ~\Delta_{2,3}}+ \frac{\Delta_{1,2}}{\Delta_{2,3}~\Delta_{1,3}}+ \frac{\Delta_{2,3}}{\Delta_{1,3}~\Delta_{1,2}}. \end{equation} Therefore tropicalizing the function (\ref{FW}) we get $$ {\rm Conf}^+_3({\cal A}_{{\rm SL}_2})({\mathbb Z}^t) = \{a,b,c\in {\mathbb Z} ~~|~~ a \geq b+c, ~b \geq a+c, ~c \geq a+b\}. $$ Notice that the inequalities imply $a,b,c\in {\mathbb Z}_{\leq 0}$. 
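Tropicalizing (\ref{FW}) term by term (a quotient of monomials tropicalizes to a difference of valuations, and a sum to a minimum), membership in ${\rm Conf}^+_3({\cal A}_{{\rm SL}_2})({\mathbb Z}^t)$ becomes the three inequalities above. A quick sketch, with our choice of names, writing $a,b,c$ for the valuations of $\Delta_{1,3}$, $\Delta_{1,2}$, $\Delta_{2,3}$:

```python
# Sketch (illustrative only): the tropicalized SL_2 potential of (FW).
# Each summand Delta/(Delta' Delta'') tropicalizes to a difference of
# valuations, and the sum of the three terms to their minimum.

def trop_potential(a, b, c):
    return min(a - b - c, b - c - a, c - a - b)

def in_positive_part(a, b, c):
    """Membership in Conf_3^+(A_{SL_2})(Z^t): W^t >= 0."""
    return trop_potential(a, b, c) >= 0

# (0, 0, 0) and (-2, -1, -1) satisfy all three inequalities; any triple
# with a positive entry fails, as noted in the text.
```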
\subsubsection{Parametrization of a canonical basis in tensor products invariants} \label{sec2.1.5} By Bruhat decomposition, for each $({\rm A}_1,{\rm A}_2)\in {\rm Conf}_2^*({\cal A})$, there is a unique $h_{{\rm A}_1,{\rm A}_2}\in {\rm H}$ such that $$ ({\rm A}_1, {\rm A}_2)=({\rm U}, h_{{\rm A}_1,{\rm A}_2}\overline{w}_0\cdot {\rm U}). $$ It provides an isomorphism, which induces a positive structure on ${\rm Conf}_2({\cal A})$: \begin{equation} \label{conf2A} \alpha: {\rm Conf}_2^*({\cal A}) \stackrel{\sim}{\longrightarrow} {\rm H}, ~~~~({\rm A}_1,{\rm A}_2)\longrightarrow h_{{\rm A}_1,{\rm A}_2}. \end{equation} We extend definition (\ref{11.15.11.1}) to $n=2$ using the potential \eqref{n=2pot}, so that one has an isomorphism $$ \alpha^t: {\rm Conf}^+_2({\cal A})({\mathbb Z}^t) \stackrel{\sim}{\longrightarrow} {\rm H}^{+}({\mathbb Z}^t)={\rm P}^+. $$ See more details in Section \ref{proofmth1}, formula (\ref{8.28.10.28h}), and \cite{FG1}. \paragraph{The restriction maps $\pi_{ij}$.} We picture configurations $({\rm A}_1, ..., {\rm A}_n)$ at the labelled vertices $[1,n]$ of a convex $n$-gon $P_n$. Each pair of distinct $i,j\in [1,n]$ gives rise to a map $$ \pi_{ij}: {\rm Conf}_n({\cal A}) \longrightarrow {\rm Conf}_2({\cal A}), ~~~ ({\rm A}_1, ..., {\rm A}_n) \longrightarrow \left\{ \begin{array}{cl}({\rm A}_i, {\rm A}_j)~~~~&\text{if $i<j$},\\ (s_{{\rm G}}\cdot {\rm A}_i, {\rm A}_j) &\text{if $i>j$}. \end{array} \right. $$ The maps $\pi_{ij}$ are positive \cite{FG1}, and therefore can be tropicalized: $$ \begin{array}{ccc} {\rm Conf}_n({\cal A})({\mathbb Z}^t) &\stackrel{\pi_{ij}^t}{\longrightarrow} &{\rm Conf}_2({\cal A})({\mathbb Z}^t) = {\rm P}\\ \cup &&\cup\\ {\rm Conf}^+_n({\cal A})({\mathbb Z}^t) &\stackrel{\pi_{ij}^t}{\longrightarrow} & {\rm Conf}^+_2({\cal A})({\mathbb Z}^t) = {\rm P}^+ \end{array} $$ The fact that $\pi_{ij}^t\big({\rm Conf}_n^+({\cal A})({\mathbb Z}^t)\big)\subseteq {\rm P}^{+}$ is due to Lemma \ref{9.21.17.56h}. 
In particular, the oriented sides of the polygon $P_n$ give rise to a positive map \begin{equation} \pi=(\pi_{12},\pi_{23},\ldots, \pi_{n,1}):{\rm Conf}_{n}({\cal A})\longrightarrow\big({\rm Conf}_2({\cal A})\big)^n\simeq {\rm H}^n. \end{equation} \paragraph{A decomposition of ${\rm Conf}_n^{+}({\cal A})({\mathbb Z}^t)$.} Given $\underline{\lambda}:= (\lambda_{1}, \ldots , \lambda_n)\in ({\rm P}^{+})^n$, define \begin{equation} \label{11.20.11.1} {\bf C}_{\underline{\lambda}} = \{l\in {\rm Conf}^+_n({\cal A})({\mathbb Z}^t)~|~ \pi^t(l) = \underline{\lambda}\}. \end{equation} The weights $\underline{\lambda}$ of ${\rm G}^L$ are assigned to the oriented sides of $P_n$, as shown on Fig \ref{tmp1}. Such sets provide a canonical decomposition \begin{equation} \label{9.21.18.17h} {\rm Conf}_n^+({\cal A})({\mathbb Z}^t)=\bigsqcup_{\underline{\lambda}\in ({\rm P}^{+})^n} {\bf C}_{\underline{\lambda}}. \end{equation} \begin{figure}[ht] \epsfxsize130pt \centerline{\epsfbox{tmp1.eps}} \caption{Dominant weights labels of the polygon sides for the set ${\bf C}_{\lambda_1, \lambda_2, \lambda_{3}, \lambda_4}$.} \label{tmp1} \end{figure} \paragraph{Tensor products invariants.} Here is one of our main results. \begin{theorem} \label{11.18.11.1} Let $\lambda_{1}, \ldots , \lambda_n\in {\rm P}^+$. The set ${\bf C}_{\lambda_1, ..., \lambda_{n}}$ parametrizes a canonical basis in the space of invariants $ \big(V_{\lambda_1} \otimes \ldots \otimes V_{\lambda_n}\big)^{{\rm G}^L}. $ \end{theorem} \noindent Theorem \ref{11.18.11.1} follows from Theorem \ref{5.8.10.45b} and geometric Satake correspondence, see Section \ref{sec2.2.4}. \vskip 2mm {Alternatively, there is a similar set, defined by reversing the order of the side $(1,n)$: \begin{equation} \label{11.20.11.2} {\bf C}_{\lambda_1, ..., \lambda_{n-1}}^{\lambda_n}:= \{l\in {\rm Conf}^+_n({\cal A})({\mathbb Z}^t)~|~ \pi^t_{i, i+1}(l) = \lambda_i, ~~ i=1, ..., n-1, ~ \pi_{1, n}^t(l) = \lambda_n\}. 
\end{equation} Then $$ {\bf C}_{\lambda_1, ..., \lambda_{n}} = {\bf C}_{\lambda_1, ..., \lambda_{n-1}}^{-w_0(\lambda_n)}. $$ The set ${\bf C}_{\lambda_1, ..., \lambda_{n-1}}^{\lambda_n}$ parametrizes a basis in the space of tensor product multiplicities \begin{equation} \label{11.18.11.2a} {\rm Hom}_{{\rm G}^L}(V_{\lambda_n}, V_{\lambda_1} \otimes \ldots \otimes V_{\lambda_{n-1}}). \end{equation}} \subsubsection{Some features of the set ${\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$.} \label{sec2.1.6} Here are some features of the set ${\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$. All of them follow immediately from the definition of the potential ${\cal W}$ and basic facts about the positive structure on ${\rm Conf}_n(\cal A)$. One of the most crucial is twisted cyclic invariance, so we start from it. \paragraph{The twisted cyclic shift.} It was proved in \cite[Section 8]{FG1} that the positive atlas on ${\rm Conf}_n({\cal A})$ defined there is invariant under the {\it twisted cyclic shift} $$ t: {\rm Conf}_n({\cal A}) \longrightarrow {\rm Conf}_n({\cal A}), ~~~({\rm A}_1, ...,{\rm A}_n) \longmapsto ({\rm A}_2, ..., {\rm A}_n, {\rm A}_1\cdot s_{\rm G}). $$ Its tropicalization is a cyclic shift on the space of tropical points: \begin{equation} \label{11.18.11.9} t: {\rm Conf}_n({\cal A})({\mathbb Z}^t) \longrightarrow {\rm Conf}_n({\cal A})({\mathbb Z}^t). \end{equation} \begin{itemize} \item {\bf Twisted cyclic shift invariance}. {\it The potential ${\cal W}$ is evidently invariant under the twisted cyclic shift. Therefore the set (\ref{11.15.11.1}) is invariant under the tropical cyclic shift (\ref{11.18.11.9})}. \end{itemize} \vskip 2mm Given a triangle $t = \{i_1< i_2<i_3\}$ inscribed into the polygon $P_n$, there is a positive map $$ \pi_t: {\rm Conf}_n({\cal A}) \longrightarrow {\rm Conf}_3({\cal A}), ~~~ ({\rm A}_1, \ldots, {\rm A}_n) \longmapsto ({\rm A}_{i_1}, {\rm A}_{i_2}, {\rm A}_{i_3}). 
$$ Each triangulation $T$ of $P_n$ gives rise to a positive injection $ \pi_T: {\rm Conf}_n({\cal A}) \to \prod_{t \in T}{\rm Conf}_3({\cal A}), $ where the product is over all triangles $t$ of $T$. Set its image \begin{equation} \label{5.12.31.1} {\rm Conf}_T({\cal A}):= {\rm Im}\pi_T \subset \prod_{t \in T}{\rm Conf}_3({\cal A}). \end{equation} For each pair $(t,d)$, where $t \in T$ and $d$ is a side of $t$, there is a map given by obvious projections $$ p(t,d): \prod_{t\in T}{\rm Conf}_3({\cal A})\stackrel{{\rm pr}_{t}}{\longrightarrow} {\rm Conf}_3({\cal A}) \stackrel{{\rm pr}_d}{\longrightarrow} {\rm Conf}_2({\cal A}). $$ For each diagonal $d$ of $T$, there are two triangles, $t^d_1$ and $t^d_2$, sharing $d$. A point $x$ of ${\rm Conf}_{T}({\cal A})$ is described by the condition that $p(t^d_1,d)(x) = p(t^d_2,d)(x)$ for all diagonals $d$ of $T$. \begin{proposition} {\bf \cite{FG1}} There is an isomorphism of positive moduli spaces $$ \pi_{T}: {\rm Conf}_n({\cal A}) \stackrel{\sim}{\longrightarrow} {\rm Conf}_{T}({\cal A}). $$ \end{proposition} It leads to an isomorphism of sets of their ${\mathbb Z}$-tropical points: \begin{equation} \label{6.1.12.1} \pi^t_{T}: {\rm Conf}_n({\cal A})({\mathbb Z}^t) \stackrel{\sim}{\longrightarrow} {\rm Conf}_{T}({\cal A})({\mathbb Z}^t). \end{equation} \vskip 3mm Some important features of the potential ${\cal W}$ are the following: \begin{itemize} \item {\bf Scissor congruence invariance}. {\it For any triangulation $T$ of the polygon, the potential ${\cal W}_n$ on ${\rm Conf}_n({\cal A})$ is a sum over the triangles $t$ of $T$}: \begin{equation} \label{11.18.11.5} {\cal W}_n = \sum_{t \in T} {\cal W}_3\circ \pi_t. \end{equation} \end{itemize} This follows immediately from the fact that $\chi_{\rm A}$ is a character of the subgroup ${\rm U}_{\rm A}$. Combining this with the isomorphism (\ref{6.1.12.1}) we get \begin{itemize} \item \label{DI} {\bf Decomposition isomorphism}. 
{\it Given a triangulation $T$ of $P_n$, one has an isomorphism} $$ i^{t,+}_{T}: {\rm Conf}^+_n({\cal A})({\mathbb Z}^t) \stackrel{\sim}{\longrightarrow} {\rm Conf}^+_{T}({\cal A})({\mathbb Z}^t). $$ \end{itemize} So one can think of the data describing a point of ${\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$ as a collection of similar data assigned to the triangles $t$ of a triangulation $T$ of the polygon, which match at the diagonals. Therefore each triangulation $T$ provides a further decomposition of the set \eqref{11.20.11.1}. By Lemma \ref{9.21.17.56h}, the weights of ${\rm G}^L$ assigned to the sides and diagonals of the polygon are dominant. \vskip 2mm Consider an algebra with a linear basis $e_\lambda$ parametrized by dominant weights $\lambda$ of ${\rm G}^L$ with the structure constants given by the cardinality of the set ${\bf C}_{\lambda_1, \lambda_2}^{\mu}$: \begin{equation} \label{11.18.11.8} e_{\lambda_1} \ast e_{\lambda_2} = \sum_{\mu \in {\rm P}^+}|{\bf C}_{\lambda_1, \lambda_2}^{\mu}|e_{\mu}. \end{equation} The following basic property is evident from our definition of the set ${\bf C}_{\lambda_1, \lambda_2}^{\mu}$: \begin{itemize} \item {\bf Associativity}. {\it The product $\ast$ is associative}. \end{itemize} The associativity follows from the fact that the set ${\rm Conf}^+_4({\cal A})({\mathbb Z}^t)$ admits two different decompositions, corresponding to the two triangulations of the $4$-gon. \begin{figure}[ht] \centerline{\epsfbox{tpm2.eps}} \caption{The associativity.} \label{tpm2} \end{figure} \paragraph{A simple proof of Knutson-Tao-Woodward's theorem \cite{KTW}.} That theorem asserts the associativity of the similar $\ast$-product whose structure constants are given by the number of hives. The associativity in our set-up, where the structure constants are given by the number of positive integral tropical points, is obvious for any group ${\rm G}$. 
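For instance, for ${\rm G}^L={\rm SL}_2$ the structure constants $|{\bf C}_{\lambda_1,\lambda_2}^{\mu}|$ reduce to the classical Clebsch--Gordan rule (each equal to $0$ or $1$), and the associativity of $\ast$ can be checked directly. A sketch, with our encoding of formal sums as dictionaries:

```python
from itertools import product

# Sketch (illustrative only): the *-product (11.18.11.8) for G^L = SL_2,
# where the structure constants are the Clebsch-Gordan multiplicities:
# V_a (x) V_b contains V_m exactly once for m = |a-b|, |a-b|+2, ..., a+b.

def star(x, y):
    """Multiply two formal sums, encoded as dicts {highest weight: coefficient}."""
    out = {}
    for (a, ca), (b, cb) in product(x.items(), y.items()):
        for m in range(abs(a - b), a + b + 1, 2):
            out[m] = out.get(m, 0) + ca * cb
    return out

def e(l):
    return {l: 1}

# Associativity: both bracketings of e_2 * e_3 * e_4 agree, and the total
# dimension matches dim(V_2 (x) V_3 (x) V_4) = 3 * 4 * 5.
assert star(star(e(2), e(3)), e(4)) == star(e(2), star(e(3), e(4)))
```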
So to prove the theorem we just need to relate hives to positive integral tropical points for ${\rm G=GL}_m$, which is done in Section \ref{KT}. \subsection{Parametrization of top components of fibers of convolution morphisms} \label{sec2.2} \subsubsection{Transcendental cells and integral tropical points} \label{sec2.2.1} For a non-zero $C=\sum_{k\geq p} c_kt^k\in {\cal K}$ with $c_p\neq 0$, we define its {\it valuation} and {\it initial term}: $$ {{\rm val}}(C):= p, ~~~ {\rm in}(C):=c_p. $$ \paragraph{A decomposition of ${\rm T}({\cal K})$.} For each split torus ${\rm T}$, there is a natural projection, which we call the valuation map: $$ {\rm val}: {\rm T}({\cal K})\longrightarrow {\rm T}({\cal K})/{\rm T}({\cal O})={\rm T}({\mathbb Z}^t). $$ Given an isomorphism ${\rm T}=({\Bbb G}_m)^k$, the map is expressed as $(C_1,\ldots, C_k)\mapsto \big({\rm val}(C_1),\ldots, {\rm val}(C_k)\big)$. Each $l \in {\rm T}({\mathbb Z}^t)$ gives rise to a cell $$ {\rm T}_l:= \{x\in {\rm T}({\cal K}) ~|~ {\rm val}(x) =l\}. $$ It is a projective limit of irreducible algebraic varieties: each of them is isomorphic to $ ({\Bbb G}_m)^k \times{ \Bbb A}^N. $ Therefore ${\rm T}_l$ is an irreducible proalgebraic variety, and ${\rm T}({\cal K})$ is a disjoint union of them: $$ {\rm T}({\cal K}) = \coprod_{l \in {\rm T}({\mathbb Z}^t)} {\rm T}_l. $$ \paragraph{Transcendental ${\cal K}$-points of ${\rm T}$.} Let us define an initial term map for ${\rm T}({\cal K})$ in coordinates: $$ {\rm in}: {\rm T}({\cal K})\longrightarrow {\rm T}({\mathbb C}),~~~(C_1,\ldots, C_k)\longmapsto \big({\rm in}(C_1),\ldots, {\rm in}(C_k)\big). $$ A subset $\{c_1,\ldots, c_q\}\subset{\mathbb C}$ is {\em algebraically independent} if $P(c_1,\ldots, c_q)\not =0$ for any non-zero polynomial $P\in {\mathbb Q}[X_1,\ldots, X_q]$. \begin{definition} A point $C\in {\rm T}({\cal K})$ is {\em transcendental} if its initial term ${\rm in}(C)$ is algebraically independent as a subset of ${{\mathbb C}}$. 
Denote by ${\rm T}^{\circ}({\cal K})$ the set of transcendental points in ${\rm T}({\cal K})$. Set $$ {\rm T}_l^{\circ}:={\rm T}_l\bigcap {\rm T}^{\circ}({\cal K}). $$ \end{definition} \begin{lemma} \label{9.21.14.8h} Let $F$ be a positive rational function on ${\rm T}$. For any $C\in {\rm T}^{\circ}({\cal K})$, we have $$ {\rm val} \big(F(C)\big)=F^t\big({\rm val}(C)\big). $$ \end{lemma} \begin{proof} It is clear. \end{proof} \paragraph{Transcendental ${\cal K}$-cells of a positive space ${\cal Y}$.} \begin{definition} A birational isomorphism $f: {\cal Y} \rightarrow {\cal Z}$ of positive spaces is a {\em positive birational isomorphism} if it is a positive morphism, and its inverse is also a positive morphism. \end{definition} \begin{theorem} \label{8.23.7.32hh} Let $f:{\rm T}\rightarrow{\rm S}$ be a positive birational isomorphism of split tori. Then $$ f({\rm T}_{l}^{\circ})={\rm S}_{f^t(l)}^{\circ}. $$ \end{theorem} We prove Theorem \ref{8.23.7.32hh} in Section \ref{sec4}. It is crucial that the inverse of $f$ is also a positive morphism. As a counterexample, the map $ f: {\Bbb G}_m\rightarrow {\Bbb G}_m, ~x\mapsto x+1 $ is a positive morphism, but its inverse $x\mapsto x-1$ is not. Let $l \in {\Bbb G}_m({\mathbb Z}^t) ={\mathbb Z}$. If $l>0$, then Theorem \ref{8.23.7.32hh} fails: the points in $f({\rm T}_l^\circ)$ are not transcendental since ${\rm in}(f({\rm T}_l^\circ))\equiv 1.$ \begin{definition} Let $\alpha_{\bf c}:{\rm T}\rightarrow {\cal Y}$ be a coordinate system from a positive atlas on ${\cal Y}$. The set of transcendental ${\cal K}$-points of ${\cal Y}$ is $$ {\cal Y}^{\circ}({\cal K}):=\alpha_{\bf c}({\rm T}^{\circ}({\cal K})). $$ For each $l\in {\cal Y}({\mathbb Z}^t)$, the transcendental $l$-cell \footnote{By abuse of notation, such a cell will always be denoted by ${\cal C}_l^\circ$. 
The tropical point $l$ tells in which space it lives.} of ${\cal Y}$ is $$ {\cal C}_{l}^{\circ}:=\alpha_{\bf c}({\rm T}_{\beta^t(l)}^{\circ}),\quad \mbox{where $\beta=\alpha_{\bf c}^{-1}$}. $$ \end{definition} By Theorem \ref{8.23.7.32hh}, this definition is independent of the coordinate system $\alpha_{\bf c}$ chosen. Similarly one can upgrade the valuation map to positive spaces: given a positive space ${\cal Y}$, there is a unique map \begin{equation} \label{val12} {\rm val}: {\cal Y}^{\circ}({\cal K})\longrightarrow {\cal Y}({\mathbb Z}^t) \end{equation} such that $$ {\cal C}_l^{\circ}=\{y\in {\cal Y}^{\circ}({\cal K})~|~{\rm val}(y)=l\}. $$ The valuation map (\ref{val12}) is functorial under positive birational isomorphisms of positive spaces. Therefore the transcendental cells are also functorial under positive birational isomorphisms. Thus there is a canonical decomposition parametrized by the set ${\cal Y}({\mathbb Z}^t)$: $$ {\cal Y}^{\circ}({\cal K})=\bigsqcup_{l\in {\cal Y}({\mathbb Z}^t)} {\cal C}_{l}^{\circ}. $$ Thanks to the following Lemma, one can identify each tropical point $l$ with ${\cal C}_l^{\circ}$. \begin{lemma} \label{thm10.1.1.2} Let $F$ be a positive rational function on ${\cal Y}$. For any $C\in {\cal Y}^{\circ}({\cal K})$, we have $$ {\rm val} \big (F(C)\big)=F^t\big( {\rm val}(C)\big). $$ \end{lemma} \begin{proof} It follows immediately from Lemma \ref{9.21.14.8h} and Theorem \ref{8.23.7.32hh}. \end{proof} \subsubsection{ ${\cal O}$-integral configurations of decorated flags and the affine Grassmannian} \label{sec2.2.2} Recall the affine Grassmannian ${\rm Gr}$. Recall the moduli space ${\cal F}_{{\rm G}}$ of frames from Definition \ref{torsorF}. 
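The valuation and initial-term maps of Section \ref{sec2.2.1} are easy to prototype. A minimal sketch, representing elements of ${\cal K}$ as dictionaries $\{$exponent: non-zero coefficient$\}$; the encoding and names are ours:

```python
# Sketch (illustrative only): val and in on K = C((t)), with Laurent
# polynomials encoded as dicts {exponent: non-zero coefficient}.

def val(C):
    return min(C)                 # lowest exponent with non-zero coefficient

def initial(C):                   # the map "in": the leading coefficient
    return C[val(C)]

def mul(C, D):
    out = {}
    for i, x in C.items():
        for j, y in D.items():
            out[i + j] = out.get(i + j, 0) + x * y
    return {k: v for k, v in out.items() if v != 0}

def add(C, D):
    out = dict(C)
    for j, y in D.items():
        out[j] = out.get(j, 0) + y
    return {k: v for k, v in out.items() if v != 0}

C = {1: 2, 3: -1}                 # 2t - t^3
D = {-2: 1, 0: 4}                 # t^{-2} + 4
assert val(mul(C, D)) == val(C) + val(D)           # val is additive
assert initial(mul(C, D)) == initial(C) * initial(D)
assert val(add(C, D)) == min(val(C), val(D))       # no leading cancellation here
```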
\begin{lemma-construction} \label{LCAG} There is a canonical onto map \begin{equation} \label{10.13.12.1} {\rm L}: {\cal F}_{{\rm G}}({\cal K})\longrightarrow {\rm Gr}, ~~~~\{{\rm A}_1,{\rm B}_2\}\longmapsto {\rm L}({\rm A}_1,{\rm B}_2) \end{equation} \end{lemma-construction} \begin{proof} Let $\{{\rm U}, {\rm B}^-\}\in {\cal F}_{{\rm G}}({\mathbb Q})$ be a standard frame. There is a unique $g_{\{{\rm A}_1, {\rm B}_2\}} \in {\rm G}({\cal K})$ such that $$ \{{\rm A}_1, {\rm B}_2\}= g_{\{{\rm A}_1, {\rm B}_2\}}\cdot \{{\rm U}, {\rm B}^-\}. $$ It provides an isomorphism ${\cal F}_{\rm G}({\cal K})\stackrel{\sim}{\rightarrow}{\rm G}({\cal K})$. Composing it with the projection $[\cdot]: {\rm G}({\cal K})\rightarrow {\rm Gr}$, \begin{equation} \label{8.11.12.100} {\rm L}({\rm A}_1, {\rm B}_2):= [g_{\{{\rm A}_1, {\rm B}_2\}}]\in {\rm Gr}. \end{equation} Note that ${\cal F}_{{\rm G}}({\mathbb Q})$ is a ${\rm G}({\mathbb Q})$-torsor. So choosing a different frame in ${\cal F}_{{\rm G}}({\mathbb Q})$ we get another representative of the coset $g_{\{{\rm A}_1, {\rm B}_2\}} \cdot {\rm G}({\mathbb Q})$. Since ${\rm G}({\mathbb Q})\subset {\rm G}({\cal O})$, the resulting lattice (\ref{8.11.12.100}) will be the same. Therefore the map ${\rm L}$ is canonical. \end{proof} \paragraph{Symmetric space and affine Grassmannian.} The affine Grassmannian is the non-archimedean version of the symmetric space ${\rm G}({{\mathbb R}})/{\rm K}$, where ${\rm K}$ is a maximal compact subgroup in ${\rm G}({\mathbb R})$. A generic pair of flags $\{{\rm B}_1, {\rm B}_2\}$ over ${\mathbb R}$ gives rise to an ${\rm H}({{\mathbb R}}_{>0})$-torsor in the symmetric space -- the projection of ${\rm B}_1\cap {\rm B}_2$. 
\footnote{Here is a non-archimedean analog: A generic pair of flags $\{{\rm B}_1, {\rm B}_2\}$ over ${\cal K}$ gives rise to an $H({\cal K})/H({\cal O})$-torsor in the affine Grassmannian -- the projection of ${\rm B}_1({\cal K})\cap {\rm B}_2({\cal K})$ to ${\rm G}({\cal K})/{\rm G}({\cal O})$.} Notice that ${\rm H}({{\mathbb R}}_{>0}) = {\rm H}({\mathbb R})/({\rm H}({\mathbb R})\cap {\rm K}) $. A generic pair $\{{\rm A}_1, {\rm B}_2\}$ determines a point\footnote{In the archimedean case, a maximal compact subgroup ${\rm K}$ is defined by using the Cartan involution. A generic pair $\{{\rm A}, {\rm B}\}$ determines a pinning, and hence a Cartan involution.} $ Q({\rm A}_1, {\rm B}_2)\in {\rm G}({\mathbb R})/{\rm K}. $ So we get the archimedean analog of the map (\ref{10.13.12.1}): \begin{equation} \label{10.13.12.2} Q: {\cal F}_{{\rm G}}({{\mathbb R}})\longrightarrow {\rm G}({\mathbb R})/{\rm K}, ~~~~\{{\rm A}_1,{\rm B}_2\}\longmapsto Q({\rm A}_1, {\rm B}_2). \end{equation} \paragraph{Decorated flags and horospheres.} For the adjoint group ${\rm G}'$, the principal affine space ${\cal A}$ can be interpreted as the moduli space of horospheres in the symmetric space ${\rm G}'({\mathbb R})/K$ in the archimedean case, or in the affine Grassmannian ${\rm Gr}$. The horosphere ${\cal H}_{\rm A}$ assigned to a decorated flag ${\rm A}$ is an orbit of the maximal unipotent subgroup ${\rm U}_{\rm A}$. Let ${\cal B}^*_{\rm A}$ be the open Schubert cell of flags in generic position to a given decorated flag ${\rm A}$. Then there is an isomorphism $$ {\cal B}^*_{{\rm A}} \longrightarrow {\cal H}_{\rm A}, ~~~~ {\rm B}\longmapsto {\rm L}({\rm A}, {\rm B})~~ \mbox{or}~~ {\rm B}\longmapsto Q({\rm A},{\rm B}). $$ \begin{figure}[ht] \epsfxsize80pt \centerline{\epsfbox{tpm0.eps}} \caption{The metric $q(h,y)$ determined by a horocycle $h$ and a boundary point $y$.} \label{tpm0} \end{figure} {\bf Examples}. 1. Let ${\rm G}({\mathbb R})={\rm SL}_2({\mathbb R})$. 
Its maximal compact subgroup is ${\rm K}={\rm SO}_2({\mathbb R})$. The symmetric space ${\rm SL}_2({\mathbb R})/{\rm SO}_2({\mathbb R})$ is the hyperbolic plane ${\cal H}^2$. A decorated flag ${{\rm A}}_1\in {\cal A}_{{\rm PGL}_2}({\mathbb R})$ corresponds to a horocycle $h$ based at a point $x$ at the boundary. A flag ${{\rm B}}_2$ corresponds to another point $y$ at the boundary. Let $g(x,y)$ be the geodesic connecting $x$ and $y$. The point $Q({\rm A}_1,{\rm B}_2)$ is the intersection of $h$ and $g(x, y)$, see Fig \ref{tpm0}: $$ q(h, y):= h\cap g(x, y) \in {\cal H}^2. $$ 2. Let ${\rm G}={\rm GL}_n$. Recall that a flag $F_\bullet$ in an $n$-dimensional vector space $V_n$ over a field is a chain of subspaces $F_1 \subset \ldots \subset F_n$ with ${\rm dim}\,F_i=i$. A generic pair of flags $(F_\bullet, G_\bullet)$ in $V_n$ is the same thing as a decomposition of $V_n$ into a direct sum of one dimensional subspaces \begin{equation} \label{L} V_n= L_1 \oplus \ldots \oplus L_n, \end{equation} where $ L_i = F_{i} \cap G_{n+1-i}.$ Conversely, $F_{a}= L_1 \oplus \ldots \oplus L_a$ and $G_{b}= L_{n-b+1} \oplus \ldots \oplus L_n.$ Over the field ${\mathbb R}$, this decomposition gives rise to a $({\mathbb R}^*_{>0})^n$-torsor in the symmetric space, given by a family of positive definite metrics on $V_n$ with the principal axes $(L_1, \ldots , L_n)$: \begin{equation} \label{M} a_1 x_1^2 + \ldots + a_n x_n^2, \quad a_i > 0. \end{equation} Here $(x_1, ..., x_n)$ is a coordinate system for which the lines $L_i$ are the coordinate lines. A decorated flag ${\rm A}$ in $V_n$ is a flag $F_\bullet$ plus a collection of non-zero vectors $l_i \in F_{i}/F_{i-1}$. A frame in ${\cal F}_{{\rm GL}_n}$ is equivalent to a generic pair of flags $(F_\bullet, G_\bullet)$ and a decorated flag ${\rm A}$ over the flag $F_\bullet$. It determines a basis $(e_1, ..., e_n)$ in $V_n$ and vice versa. Here $e_i\in L_i$ and $e_i\mapsto l_i$ under the projection $L_i\longrightarrow F_i/F_{i-1}$. 
This basis determines a metric -- the positive definite metric with the principal axes $L_i$ such that the vectors $e_i$ are unit vectors. 3. Over the field ${\cal K}$, decomposition (\ref{L}) gives rise to an ${\rm H}({\cal K})/{\rm H}({\cal O}) = {\mathbb Z}^n$-torsor in ${\rm Gr}$, given by the following collection of lattices in $V_n$: $$ {\cal O}t^{k_1}e_1 \oplus \ldots \oplus {\cal O}t^{k_n}e_n, ~~k_i \in {\mathbb Z}. $$ These lattices are the non-archimedean version of the unit balls of the metrics (\ref{M}). \paragraph{${\cal O}$-integral configurations of decorated flags.} \begin{definition} \label{13.3.12.502h} A collection of decorated flags $\{{\rm A}_1,\ldots, {\rm A}_n\}$ over ${\cal K}$ is ${\cal O}$-integral if it is generic and for any $i\in [1,n]$ the lattice ${\rm L}({\rm A}_i, {\rm B}_j)$ does not depend on the choice of $j$ different from $i$. \end{definition} Let $g\in {\rm G}({\cal K})$. Note that ${\rm L}(g{\rm A}_i, g{\rm B}_j)=g\cdot {\rm L}({\rm A}_i, {\rm B}_j)$. Therefore if $\{{\rm A}_1,\ldots, {\rm A}_n\}$ is ${\cal O}$-integral, so is $g\cdot \{{\rm A}_1,\ldots, {\rm A}_n\}$. Thus we define \begin{definition} \label{8.8.12.1} A configuration in ${\rm Conf}_n({\cal A})({\cal K})$ is ${\cal O}$-integral if it is a ${\rm G}({\cal K})$-orbit of an ${\cal O}$-integral collection of decorated flags. Denote by ${\rm Conf}_n^{\cal O}({\cal A})$ the space of such configurations. \end{definition} The archimedean version of Definition \ref{8.8.12.1} is trivial. For example, let ${\rm G}= {\rm SL}_2({\mathbb R})$. Then there are no horocycles $(h_1, h_2, h_3)$ such that their boundary points $(x_1, x_2, x_3)$ are distinct, and the intersection of the horocycle $h_i$ with the geodesic $g(x_i,x_j)$ does not depend on $j \not = i$. In contrast with this, we demonstrate below that the non-archimedean version is very rich. 
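For ${\rm G}={\rm SL}_2$ the non-archimedean richness can already be seen on the standard lattice ${\cal O}^2$: an element $g\in {\rm SL}_2({\cal K})$ stabilizes ${\cal O}^2$ exactly when all its entries lie in ${\cal O}$ (since $\det g=1$, the inverse is then automatically integral), so a unipotent $x(a)$ stabilizes it if and only if $a\in{\cal O}$. A sketch, with our encoding of Laurent polynomials as dictionaries:

```python
# Sketch (illustrative only): for SL_2(K), x(a) = [[1, a], [0, 1]]
# stabilizes the standard lattice O^2 iff val(a) >= 0, illustrating
# U(K) intersected with SL_2(O) = U(O).  Laurent polynomials are dicts
# {exponent: non-zero coefficient}; the empty dict encodes 0.

def val(C):
    return min(C) if C else None      # None stands for val(0) = +infinity

def in_O(C):
    v = val(C)
    return v is None or v >= 0

def stabilizes_standard_lattice(g):
    """g in SL_2(K) satisfies g * O^2 = O^2 iff g lies in SL_2(O):
    since det g = 1, it suffices that every entry lies in O."""
    return all(in_O(entry) for row in g for entry in row)

def x(a):                              # the unipotent x(a) of the pinning
    return [[{0: 1}, a], [{}, {0: 1}]]

assert stabilizes_standard_lattice(x({0: 1}))       # a = 1: in U(O)
assert stabilizes_standard_lattice(x({2: 5}))       # a = 5 t^2: in U(O)
assert not stabilizes_standard_lattice(x({-1: 1}))  # a = t^{-1}: moves O^2
```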
The difference stems from the fact that in the archimedean case the intersection ${\rm K} \cap {\rm U}=e$ is trivial, while in the non-archimedean case ${\rm G}({\cal O}) \cap {\rm U}({\cal K}) = {\rm U}({\cal O})$. \paragraph{Transcendental cells and ${\cal O}$-integral configurations.} The following fact is crucial. \begin{theorem} \label{8.27.17.08hh} If $l \in {\rm Conf}_n^{+}({\cal A})({\mathbb Z}^t)$, then there is an inclusion ${\cal C}_l^{\circ}\subset {\rm Conf}^{\cal O}_n({\cal A})$. Otherwise ${\cal C}_l^{\circ}\cap {\rm Conf}^{\cal O}_n({\cal A})$ is empty. \end{theorem} Theorem \ref{8.27.17.08hh} gives an alternative conceptual definition of the set of positive integral tropical points of the space ${\rm Conf}_n({\cal A})$, which refers neither to the potential ${\cal W}$, nor to a specific positive coordinate system. However to show that the set ${\rm Conf}_n^{+}({\cal A})({\mathbb Z}^t)$ is ``big'', or even non-empty, we use the potential ${\cal W}$ and its properties, which imply, for example, that the set ${\rm Conf}_n^{+}({\cal A})({\mathbb Z}^t)$ is obtained by amalgamation of similar sets assigned to triangles of a triangulation of the polygon. We prove Theorem \ref{8.27.17.08hh} in Section \ref{sec6.4}. \subsubsection{The canonical map $\kappa$ and cycles on ${\rm Conf}_n({\rm Gr})$} \label{sec2.2.3} \paragraph{The canonical map $\kappa$.} Recall the configuration space $$ {\rm Conf}_n({\rm Gr}):= {\rm G}({\cal K})\backslash ({{\rm Gr}\times \ldots \times {\rm Gr}}). $$ Given an ${\cal O}$-integral collection $\{{\rm A}_1, ..., {\rm A}_n\}$ of decorated flags, we get a collection of lattices $\{{\rm L}_1, ..., {\rm L}_n\}$ by setting ${\rm L}_i:= {\rm L}({\rm A}_i, {\rm B}_j)$ for some $j \not = i$. By definition, the lattice ${\rm L}_i$ is independent of the choice of $j$. 
This construction descends to configurations, providing a canonical map \begin{equation} \label{4.18.12.41qx} \kappa: {\rm Conf}^{\cal O}_n({\cal A}) \longrightarrow {\rm Conf}_n({\rm Gr}), \quad ({\rm A}_1, \ldots, {\rm A}_n) \longmapsto ({\rm L}_1, ..., {\rm L}_n). \end{equation} The map is evidently cyclic invariant, and commutes with the restriction to subconfigurations: $$ \kappa({\rm A}_{i_1}, ..., {\rm A}_{i_k}) = ({\rm L}_{i_1}, ..., {\rm L}_{i_k})\quad \mbox{for any $1\leq i_1 < ...< i_k\leq n$}. $$ \paragraph{The cycles ${\cal M}_l$ in ${\rm Conf}_n({\rm Gr})$.} Let $l\in {\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$. Thanks to Theorem \ref{8.27.17.08hh}, we can combine the inclusion there with the canonical map (\ref{4.18.12.41qx}): \begin{equation} \label{4.18.12.444} {\cal C}_l^{\circ}\subset {\rm Conf}^{\cal O}_n({\cal A}) \stackrel{\kappa}{\longrightarrow} {\rm Conf}_n({\rm Gr}). \end{equation} \begin{definition} \label{9.22.12.13h} The cycle ${\cal M}_l \subset {\rm Conf}_n({\rm Gr})$ is a substack given by the closure of $\kappa({\cal C}^{\circ}_l)$: \begin{equation} \label{4.18.12.444} {\cal M}_l:= \overline{{\cal M}_l^\circ}, ~~~~{\cal M}_l^\circ:= {\kappa({\cal C}^{\circ}_{l})} \subset {\rm Conf}_n({\rm Gr}),\qquad l \in {\rm Conf}_n^{+}({\cal A})({\mathbb Z}^t). \end{equation} \end{definition} \begin{lemma} The cycle ${\cal M}_l$ is irreducible. \end{lemma} \begin{proof} For a split torus ${\rm T}$, the cycle ${\rm T}_{l}$ is irreducible. So the cycles ${\cal C}^{\circ}_{l}$ and ${\cal M}_l$ are irreducible. \end{proof} In other words, ${\cal M}_l$ is a ${\rm G}(\cal K)$-invariant closed subspace in ${\rm Gr}^n$. There is a bijection \begin{equation} \label{MTH11} \{\mbox{${\rm G}(\cal K)$-orbits in ${\rm Gr}^n$}\} \stackrel{1:1}{\longleftrightarrow} \{\mbox{${\rm G}(\cal O)$-orbits in $[1] \times{\rm Gr}^{n-1}$}\}. 
\end{equation} Therefore one can also view the cycles ${\cal M}_l$ as ${\rm G}(\cal O)$-invariant closed subspaces in $[1]\times{\rm Gr}^{n-1}$. Let us describe them using this point of view. \subsubsection{Top components of the fibers of the convolution morphism} \label{sec2.2.4} Given $\underline{\lambda}=(\lambda_1,\ldots, \lambda_n)\in ({\rm P}^+)^n$, recall the cyclic convolution variety $$ {\rm Gr}_{c(\underline{\lambda})}:=\{({\rm L}_1, \ldots, {\rm L}_n)\in {\rm Gr}^n ~|~ {\rm L}_1 \stackrel{\lambda_1}{\longrightarrow}{\rm L}_2\stackrel{\lambda_2}{\longrightarrow}\ldots\stackrel{\lambda_n}{\longrightarrow}{\rm L}_{n+1}, ~{\rm L}_1={\rm L}_{n+1} = [1]\}. $$ It is a finite dimensional reducible variety of top dimension $$ {\rm ht}(\underline{\lambda}):= \langle \rho, \lambda_1+ \ldots +\lambda_n\rangle. $$ It is the fiber of the convolution morphism, and therefore, thanks to the geometric Satake correspondence \cite{L4} \cite{G} \cite{MV}, there is a canonical isomorphism \begin{equation} \label{5.31.12.2} {\rm IH}^{{\rm ht}(\underline{\lambda})}({\rm Gr}_{c(\underline{\lambda}) }) = \Bigl(V_{\lambda_1} \otimes \ldots \otimes V_{\lambda_n}\Bigr)^{{\rm G}^L}. \end{equation} Each top dimensional component of ${\rm Gr}_{c(\underline{\lambda}) }$ provides an element in the space (\ref{5.31.12.2}). These elements form a canonical basis in (\ref{5.31.12.2}). Let ${\bf T}_{\underline{\lambda}}$ be the set of top dimensional components of ${\rm Gr}_{c(\underline{\lambda})}$. Recall the set ${\bf C}_{\underline{\lambda}}$ of positive tropical points (\ref{11.20.11.1}), and the cycle ${\cal M}_l$ from Definition \ref{9.22.12.13h}. \begin{theorem} \label{5.8.10.45b} Let $l\in {\bf C}_{\underline{\lambda}}$. Then the cycle ${\cal M}_{l}$ is the closure of a top dimensional component of ${{\rm Gr}_{c(\underline{\lambda})}}$. The map $l \longmapsto {\cal M}_l$ provides a canonical bijection from ${\bf C}_{\underline{\lambda}}$ to ${\bf T}_{\underline{\lambda}}$. 
\end{theorem} Theorem \ref{5.8.10.45b} is proved in Section \ref{sec9.4}. It implies Theorem \ref{11.18.11.1}. \subsubsection{Constructible equations for the top dimensional components} \label{sec2.2.5} We have defined the cycles ${\cal M}_{l}$ as the closures of the images of the cells ${\cal C}_{l}^\circ$. Now let us define the cycles ${\cal M}_{l}$ by equations, given by certain constructible functions on the space ${\rm Conf}_n({\rm Gr})$. These functions generalize Kamnitzer's functions $H_{i_1,\ldots, i_n}$ for ${\rm G=GL}_m$ (\cite{K1}). \paragraph{Constructible function $D_F$.} Let $R$ be a reductive algebraic group over ${\mathbb C}$. We assume that there is a rational left algebraic action of $R$ on ${\mathbb C}^n$. Let ${\mathbb C}(x_1,\ldots, x_n)$ be the field of rational functions on ${\mathbb C}^n$. We get a right algebraic action of $R$ on ${\mathbb C}(x_1,\ldots, x_n)$ denoted by $\circ$. Let ${\cal K}(x_1,\ldots, x_n)$ be the field of rational functions with ${\cal K}$-coefficients. The valuation of ${\cal K}^\times$ induces a natural valuation map $$ {\rm val}:~~ {\cal K}(x_1,\ldots, x_n)^\times \longrightarrow {\mathbb Z}. $$ Let $F, G\in {\cal K}(x_1,\ldots, x_n)^\times$. The valuation map has two basic properties \begin{align} {\rm val}(FG)&={\rm val}(F)+{\rm val}(G),\label{9.7.14.1h}\\ {\rm val}(F+G)&={\rm val}(F), \quad \mbox{if }{\rm val}(F)<{\rm val}(G).\label{9.7.14.2h} \end{align} The group $R({\cal K})$ acts on ${\cal K}(x_1,\ldots, x_n)$ on the right. We have the following \begin{lemma}\label{lem1.9.7.14} Let $F\in {\cal K}(x_1,\ldots, x_n)^\times$. If $h\in R({\cal O})$, then ${\rm val}(F\circ h)={\rm val}(F)$. \end{lemma} \begin{proof} For any $k\in {\cal K}^\times$, we have $(kF)\circ h=k(F\circ h).$ Therefore it suffices to prove the case when ${\rm val}(F)=0$. Note that the group $R$ is reductive. 
It is generated by $$ x_i(a)\in {\rm U},\quad y_i(b)\in {\rm U}^{-},\quad \alpha(c)\in {\rm H}, \quad \quad \mbox{where $i\in I$ and $\alpha\in {\rm Hom}({\Bbb G}_m, {\rm H})$.} $$ Since the action of $R$ is algebraic, for any $f\in {\mathbb C}(x_1,\ldots, x_n)^\times$, we have $f\circ x_i(a)\in {\mathbb C}(x_1,\ldots, x_n, a)^\times$. Note that $f\circ x_i(0)=f$. Therefore we get \begin{equation}\label{9.7.14.4h} f\circ x_i(a)= \frac{f+af_1+\ldots+a^l f_l}{1+ag_1+\ldots+a^mg_m}, \quad \mbox{where $f_j, g_j \in {\mathbb C}(x_1,\ldots, x_n)$.} \end{equation} If $a\in {\mathbb C}$, then $f\circ x_i(a)\in {\mathbb C}(x_1,\ldots,x_n)$. Moreover $f\circ x_i(a)$ is nonzero: otherwise $ f=(f\circ x_i(a))\circ x_i(-a)=0. $ If $a\in t{\cal O}$, then by the basic property \eqref{9.7.14.2h}, we get ${\rm val}(f\circ x_i(a))={\rm val}(f)=0.$ Let $a=a_0+b=a_0+a_1t+a_2t^2+\ldots \in {\cal O}$. Then $ f\circ x_i(a)=\big(f\circ x_i(a_0)\big)\circ x_i(b). $ Note that $f\circ x_i(a_0)\in {\mathbb C}(x_1,\ldots, x_n)^\times$ and $b\in t{\cal O}$. Combining the above arguments, we get \begin{equation} \label{9.7.14.40h} {\rm val}(f\circ x_i(a))={\rm val}(f\circ x_i(a_0))=0={\rm val}(f), ~~~\forall a\in {\cal O}. \end{equation} Now let $F\in {\cal K}(x_1, ..., x_n)^\times$ be such that ${\rm val}(F)=0$. Then $F$ can be expressed as $$ F=\frac{f_0+b_1f_1+\ldots+b_lf_l}{1+c_1g_1+\ldots+c_mg_m}. $$ Here $f_0, f_p, g_q\in {\mathbb C}(x_1,\ldots, x_n)^\times$, $b_p, c_q\in {\cal K}^\times$, ${\rm val}(b_p)>0$, ${\rm val}(c_q)>0$. By definition, we have $$ F\circ x_i(a)=\frac{f_0\circ x_i(a)+b_1f_1\circ x_i(a)+ \ldots+b_lf_l\circ x_i(a)}{1+c_1g_1\circ x_i(a)+\ldots+c_mg_m\circ x_i(a)}. $$ Let $a\in {\cal O}$. {By \eqref{9.7.14.40h}}, we get \begin{align} {\rm val}(f_0\circ x_i(a))&=0,\nonumber\\ {\rm val}(b_pf_p\circ x_i(a))&={\rm val}(b_p)+{\rm val}(f_p\circ x_i(a))={\rm val}(b_p)>0,\nonumber\\ {\rm val}(c_qg_q\circ x_i(a))&={\rm val}(c_q)+{\rm val}(g_q\circ x_i(a))={\rm val}(c_q)>0.
\nonumber \end{align} By the basic property \eqref{9.7.14.2h}, we get ${\rm val}(F\circ x_i(a))={\rm val}(f_0\circ x_i(a))=0$. Hence we prove that $$ {\rm val}(F\circ x_i(a))={\rm val}(F),\quad \forall a\in {\cal O}. $$ By the same argument, we show that $$ {\rm val}(F\circ y_i(b))={\rm val}(F),\quad \forall b\in {\cal O}, ~~~~ {\rm val}(F\circ \alpha(c))={\rm val}(F),\quad \forall c\in {\cal O}^\times. $$ Note that $R({\cal O})$ is generated by the elements $ x_i(a), y_i(b), \alpha(c), \quad a,b\in {\cal O}, c\in {\cal O}^\times. $ The Lemma is proved. \end{proof} Let $\mathfrak{X}$ be a rational space over ${\mathbb C}$, i.e., ${\mathbb C}(\mathfrak{X})\cong{\mathbb C}(x_1,\ldots, x_n).$ Similarly, there is a valuation map ${\rm val}: {\cal K}(\mathfrak{X})^\times \rightarrow {\mathbb Z}$. We assume that there is a left algebraic action of $R$ on $\mathfrak{X}$. Lemma \ref{lem1.9.7.14} implies \begin{lemma} \label{13.1.26.11.18h} Let $F\in {\cal K}(\mathfrak{X})^\times.$ If $h\in R({\cal O})$, then ${\rm val}(F\circ h)={\rm val}(F)$. \end{lemma} \paragraph{Constructible equations for top components.} Let $\mathfrak{X} := {\cal A}^n$ and let $R:={\rm G}^n$. Let $F\in {\mathbb C}({\cal A}^n)$ and let $(g_1,\ldots, g_n)\in {\rm G}^n$. Then ${\rm G}^n$ acts on ${\mathbb C}({\cal A}^n)$ on the right: \begin{equation} \label{rightaction}(F\circ(g_1, ..., g_n))({\rm A}_1, ..., {\rm A}_n):= F(g_1\cdot{\rm A}_1, ..., g_n\cdot {\rm A}_n),~~~\forall ({\rm A}_1,\ldots, {\rm A}_n)\in {\cal A}^n.\end{equation} By definition, a nonzero rational function $F\in {\mathbb C}({\rm Conf}_n({\cal A}))$ is the same as a ${\rm G}$-diagonally invariant function on ${\cal A}^n$: $$ F(g{\rm A}_1, ..., g{\rm A}_n) = F({\rm A}_1, ..., {\rm A}_n). $$ There is a ${\mathbb Z}$-valued function \begin{equation} \label{6.4.12.1} D_F: {\rm G}({\cal K})^n\longrightarrow {\mathbb Z}, ~~~~ D_F(g_1(t), ..., g_n(t)):= {\rm val}\Bigl(F\circ (g_1(t), ..., g_n(t))\Bigr).
\end{equation} \begin{lemma-construction} The function $D_F$ is invariant under the left diagonal action of the group ${\rm G}({\cal K})$ on ${\rm G}({\cal K})^n$, and the right action of the subgroup ${\rm G}({\cal O})^n \subset {\rm G}({\cal K})^n$. Therefore $D_F$ descends to a function ${\rm Conf}_n({\rm Gr})\rightarrow {\mathbb Z}$, which we also denote by $D_F$. \end{lemma-construction} \begin{proof} The first property is clear since $F \in {\mathbb C}({\cal A}^n)^{\rm G}$. The second property follows from Lemma \ref{13.1.26.11.18h}. \end{proof} Let ${\mathbb Q}_{+}({\rm Conf}_n({\cal A}))$ be the semifield of positive rational functions on ${\rm Conf}_n({\cal A})$. Take a non-zero function $F\in {\mathbb Q}_{+}({\rm Conf}_n({\cal A}))$. It gives rise to a function $D_F$ on ${\rm Conf}_n({\rm Gr})$. Meanwhile, its tropicalization $F^t$ is a function on ${\rm Conf}_n({\cal A})({\mathbb Z}^t)$. \begin{theorem} \label{5.8.10.45a} Let $l\in{\rm Conf}_n^+({\cal A})({\mathbb Z}^t)$ and $F\in {{\mathbb Q}}_{+}({\rm Conf}_n({\cal A}))$. Then $ D_F\big(\kappa({\cal C}_l^{\circ})\big)\equiv F^t(l). $ \end{theorem} Theorem \ref{5.8.10.45a} is proved in Section \ref{sec7}. It implies that the map in Theorem \ref{5.8.10.45b} is injective. It can be reformulated as follows: \begin{equation} \label{MHV1111} \mbox{For any $l$ and $F$ as above, the generic value of $D_F$ on the cycle ${\cal M}_l$ is $F^t(l)$}. \end{equation} When ${\rm G=GL}_m$, one can describe the set ${\bf C}_{\underline{\lambda}}$ by using the special collection of functions on the space ${\rm Conf}_n({\cal A})$ defined in Section 9 of \cite{FG1}. The resulting description coincides with Kamnitzer's generalization of hives \cite{K1}. He conjectured in \cite{K1} that the latter set parametrizes the components of the convolution variety for ${\rm GL}_m$. Therefore Theorems \ref{5.8.10.45b} and \ref{5.8.10.45a} imply Conjecture 4.3 in \cite{K1}.
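As a sanity check of Theorem \ref{5.8.10.45a} in the smallest case, here is an illustrative rank-one computation (our sketch, not taken from the general argument: we specialize to ${\rm G}={\rm SL}_2$, where a decorated flag is a nonzero vector in a two-dimensional space and the basic positive invariant is the determinant pairing).

```latex
% Illustrative sketch only: G = SL_2 is an assumption of this example.
% A decorated flag is a nonzero vector in C^2, and
%   F(v_1, v_2) := det(v_1 | v_2)
% is the basic positive coordinate on Conf_2(A).  A tropical point l with
% F^t(l) = k is represented by configurations whose determinant has
% valuation exactly k, e.g.
\[
v_1=\begin{pmatrix}1\\ 0\end{pmatrix},\qquad
v_2=\begin{pmatrix}a\\ c\,t^{k}\end{pmatrix},\qquad c\in{\cal O}^{\times},
\qquad\Longrightarrow\qquad F(v_1,v_2)=c\,t^{k},
\]
% so that
\[
D_F\big(\kappa({\cal C}^{\circ}_l)\big)={\rm val}\big(c\,t^{k}\big)=k=F^{t}(l).
\]
```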
\section{Positive structures on the unipotent subgroups ${\rm U}$ and ${\rm U}^{-}$} \label{sec4} \subsection{Lusztig's data and MV cycles} \paragraph{Lusztig's data.} Fix a reduced word ${\bf i}=(i_m,\ldots, i_1)$ for ${w}_0$. There are positive functions \begin{equation} \label{13.3.27.1827h} F_{{\bf i}, j}: {\rm U}\longrightarrow {\Bbb A}^1,~~~~x_{i_m}(a_m)\ldots x_{i_1}(a_1)\longmapsto a_j. \end{equation} Their tropicalizations induce an isomorphism $f_{\bf i}: {\rm U}({\mathbb Z}^t)\stackrel{=}{\rightarrow} {\mathbb Z}^m,~p\mapsto \{F_{{\bf i}, j}^t(p)\}.$ Let ${\Bbb N}=\{0,1,2,\ldots\}$. Lusztig proved \cite{L1} that the subset \begin{equation} \label{8.20.1.20h} f_{\bf i}^{-1}({\Bbb N}^m)\subset {\rm U}({\mathbb Z}^t) \end{equation} does not depend on ${\bf i}$, and parametrizes the canonical basis in the quantum enveloping algebra of the Lie algebra of a maximal unipotent subgroup of the Langlands dual group ${\rm G}^L$. \vskip 2mm \begin{lemma} \label{8.20.3.42h} The subset ${\rm U}_{\chi}^{+}({\mathbb Z}^t):=\{l\in {\rm U}({\mathbb Z}^t)~|~ \chi^t(l)\geq 0\}$ is identified with the set \eqref{8.20.1.20h}. \end{lemma} \begin{proof} Note that $\chi=\sum_{j=1}^mF_{{\bf i},j}$. Its tropicalization is $\min_{1\leq j\leq m}\{F_{{\bf i},j}^t\}$. Let $l\in {\rm U}({\mathbb Z}^t)$. Then $$ \chi^t(l)\geq 0 \Longleftrightarrow F_{{\bf i},j}^t(l)\geq 0,~ \forall j\in[1,m] \Longleftrightarrow f_{\bf i}(l)\in {\Bbb N}^m. $$ \end{proof} Let $l\in {\rm U}({\mathbb Z}^t)$. Recall the transcendental cell ${\cal C}_{l}^{\circ}\subset {\rm U}({\cal K})$. \begin{lemma} \label{Lema10.1.1} Let $u\in {\cal C}_{l}^{\circ}$. Then $u\in {{\rm U}}({\cal O})$ if and only if $l\in {{\rm U}}_{\chi}^+({\mathbb Z}^t)$. \end{lemma} \begin{proof} Set $u=x_{i_m}(a_m)\ldots x_{i_1}(a_1)\in {\cal C}_l^{\circ}$. {Note that $u$ is transcendental.} Using Lemma \ref{thm10.1.1.2}, we get $$ \chi^t(l)={\rm val}(\chi(u));~~~~ {F}_{{\bf i},j}^t(l)={\rm val}(a_j), ~\forall j\in [1,m].
$$ If $l\in {{\rm U}}_\chi^+({\mathbb Z}^t)$, then ${\rm val}(a_j)={F}_{{\bf i},j}^t(l)\geq 0$. Therefore $a_j\in {\cal O}$. Hence $u\in {\rm U}({\cal O})$. Conversely, note that $\chi$ is a regular function on ${\rm U}$, so if $u\in {\rm U}({\cal O})$, then $\chi(u)\in {\cal O}$. Therefore $\chi^t(l)={\rm val}(\chi(u))\geq 0$. Hence $l\in {\rm U}_\chi^+({\mathbb Z}^t)$. \end{proof} \paragraph{The positive morphism $\beta$.} Let $[g]_0:=h$ if $g=u_{+}hu_{-}$, where $u_{\pm}\in {\rm U}^{\pm}$, $h\in {\rm H}$. Define \begin{equation} \label{8.16.10.40h} \beta: {\rm U}\longrightarrow {\rm H},~~u\longmapsto [\overline{w}_0u]_0. \end{equation} Let ${\bf i}=(i_m,\ldots, i_1)$ be as above. Let $w_{k}^{\bf i}:=s_{i_1}\ldots s_{i_k}\in W$. Let $\beta_{k}^{\bf i}:=w_{k-1}^{\bf i}(\alpha_{i_k}^{\vee})\in {\rm P}$. {The following Lemma shows that $\beta$ is a positive map. \begin{lemma}[{\cite[Lemma 6.4]{BZ}}] \label{8.13.1.20h} For each $u=x_{i_m}(a_m)\ldots x_{i_1}(a_1)\in {\rm U}$, we have $[\overline{w}_0u]_0=\prod_{k=1}^m \beta_{k}^{\bf i}(a_k^{-1}).$ \end{lemma} } Let $l\in {\rm U}({\mathbb Z}^t)$. The tropicalization $\beta^t$ becomes $\beta^t(l)=-\sum_{k=1}^mF_{{\bf i},k}^t(l)\beta_{k}^{\bf i}.$ Note that $\beta_k^{\bf i}\in {\rm P}$ are positive coroots. If $l\in {\rm U}_{\chi}^{+}({\mathbb Z}^t)$, then $-\beta^t(l) \in {\rm R}^+$. Hence \begin{equation} \label{8.23.10.22h} {\rm U}_{\chi}^{+}({\mathbb Z}^t)=\bigsqcup_{\lambda\in {\rm R}^+} {\bf A}_{\lambda}, ~~~~ {\bf A}_{\lambda}:=\{l\in {\rm U}_{\chi}^{+}({\mathbb Z}^t)~|~ -\beta^t(l)=\lambda\}. \end{equation} The set ${\bf A}_\lambda$ is identified with Lusztig's set parametrizing the canonical basis of weight $\lambda$ \cite{L1}. \paragraph{Kamnitzer's parametrization of MV cycles.} Kamnitzer \cite{K} constructs a canonical bijection between Lusztig's data (i.e. ${\rm U}_{\chi}^{+}({\mathbb Z}^t)$ in our set-up) and the set of stable MV cycles. Let us briefly recall Kamnitzer's result for future use.
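Before doing so, we record a rank-one sanity check of Lemma \ref{8.13.1.20h} (our illustration; for ${\rm SL}_2$ we take the standard representative $\overline{w}_0=\left(\begin{smallmatrix}0&-1\\1&0\end{smallmatrix}\right)$, which is an assumption of this example).

```latex
% Illustrative check for SL_2 (m = 1, \beta_1^{\bf i} = \alpha^{\vee}):
\[
\overline{w}_0\,x(a)
=\begin{pmatrix}0&-1\\ 1&0\end{pmatrix}
 \begin{pmatrix}1&a\\ 0&1\end{pmatrix}
=\begin{pmatrix}0&-1\\ 1&a\end{pmatrix}
=\underbrace{\begin{pmatrix}1&-1/a\\ 0&1\end{pmatrix}}_{u_{+}}
 \underbrace{\begin{pmatrix}1/a&0\\ 0&a\end{pmatrix}}_{h}
 \underbrace{\begin{pmatrix}1&0\\ 1/a&1\end{pmatrix}}_{u_{-}},
\]
% hence
\[
\beta(x(a))=[\overline{w}_0\,x(a)]_0=\alpha^{\vee}(a^{-1})
=\beta_1^{\bf i}(a_1^{-1}),
\]
% in agreement with the product formula of the Lemma.
```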
Let ${\rm U}_{*}:={\rm U}\cap {\rm B}^{-}w_0{\rm B}^{-}$ and let ${\rm U}^{-}_{*}:={\rm U}^{-}\cap {\rm B}w_0{\rm B}$. There is a well-defined isomorphism \begin{equation} \label{13.map.eta} \eta: {\rm U}_{*}\rightarrow {\rm U}^{-}_{*},~~~u\longmapsto \eta(u), \end{equation} such that $\eta(u)$ is the unique element in ${\rm U}^{-}\cap {\rm B}{w}_0u$. The map $\eta$ was used in \cite{FZ1}. Set \begin{equation} \label{kappa.kam} \kappa_{\rm Kam}: {\rm U}_{*}({\cal K})\longrightarrow {\rm Gr},~~~u\longmapsto [\eta(u)]. \end{equation} Let $l\in {\rm U}({\mathbb Z}^t)$. Then ${\cal C}_l^{\circ}\subset {\rm U}_{*}({\cal K})$. Define \begin{equation} \label{thm.kam.a.l} {\rm MV}_l:=\overline{\kappa_{\rm Kam}({\cal C}^\circ_l)}\subset {\rm Gr}. \end{equation} The following Theorem is a reformulation of Kamnitzer's result. \begin{theorem} [{\cite[Theorem 4.5]{K}}] \label{thm.kam} Let $l\in {\bf A}_{\lambda}$. Then ${\rm MV}_l$ is an MV cycle of coweight $(\lambda,0)$. It gives a bijection between ${\bf A}_{\lambda}$ and the set of such MV cycles. \end{theorem} A stable MV cycle of coweight $\lambda$ has a unique representative of coweight $(\lambda,0)$. Therefore Theorem \ref{thm.kam} says that the set ${\bf A}_{\lambda}$ parametrizes the set of stable MV cycles of coweight $\lambda$. \subsection{Positive functions $\chi_i, {\cal L}_i, {\cal R}_i$ on ${{\rm U}}$.} Let $i\in I$. We introduce positive rational functions $\chi_i$, ${\cal L}_{i}$, ${\cal R}_{i}$ on ${\rm U}$, and $\chi_i^{-}$, ${\cal L}_i^{-}$, ${\cal R}_i^{-}$ on ${\rm U}^{-}$. Let ${\bf i}=(i_1, \ldots, i_m)$ be a reduced word for $w_0$. Let $$ x=x_{i_1}(a_1)\ldots x_{i_m}(a_m)\in {\rm U},~~~y=y_{i_1}(b_1)\ldots y_{i_m}(b_m) \in {\rm U}^{-}.
$$ Using the above decompositions of $x$ and $y$, we set $$ \chi_{i}(x):=\sum_{p~|~ i_p=i} a_p, \qquad \chi_i^{-}(y):=\sum_{p~|~i_p=i}b_p. $$ By definition the characters $\chi$ and $\chi^{-}$ have decompositions $\chi=\sum_{i\in I}\chi_i$ and $\chi^{-}=\sum_{i\in I}\chi_i^{-}$. We take ${\bf i}$ which starts with $i_1=i$. Define the ``left" functions: $$ {\cal L}_{i}(x):=a_1, \qquad {\cal L}_{i}^{-}(y):=b_1. $$ We take ${\bf i}$ which ends with $i_m=i$. Define the ``right" functions: $$ {\cal R}_{i}(x):=a_m, \qquad {\cal R}_{i}^{-}(y):=b_m. $$ It is easy to see that the above functions are well-defined and independent of the choice of ${\bf i}$. \vskip 2mm For each simple reflection $s_i\in W$, set $s_{i^*}$ such that $w_0s_{i^*}=s_iw_0$. Set ${\rm Ad}_v(g):=vgv^{-1}$. For any $u\in {\rm U}$, set $\widetilde{u}:={\rm Ad}_{\overline{w}_0}(u^{-1})\in {\rm U}^{-}$. \begin{lemma} \label{13.1.11.31h.1} The map $u\mapsto \widetilde{u}$ is a positive birational isomorphism from ${\rm U}$ to ${\rm U}^{-}$. Moreover, \begin{equation} \label{13.1.11.31h} \chi_i(u)=\chi_{i^*}^{-}(\widetilde{u}),~~~{\cal L}_i(u)={\cal R}_{i^*}(\widetilde{u}),~~~{\cal R}_i(u)={\cal L}_{i^*}(\widetilde{u})~~~~ \forall i\in I. \end{equation} \end{lemma} \begin{proof} Note that $ {\rm Ad}_{\overline{w}_0}(x_i(-a))=y_{i^*}(a). $ Let $u=x_{i_1}(a_1)\ldots x_{i_m}(a_m)\in {\rm U}$. Then $$ \widetilde{u}={\rm Ad}_{\overline{w}_0}(u^{-1})=y_{i_m^*}(a_m)\ldots y_{i_1^*}(a_1). $$ Clearly it is a positive birational isomorphism. The identities in \eqref{13.1.11.31h} follow by definition. \end{proof} \begin{lemma} \label{13.1.11.31h.2} Let $h\in {\rm H}$, $x\in {\rm U}$ and $y\in {\rm U}^{-}$. For any $i\in I$, we have \begin{equation} \label{13.1.11.33h} \chi_i\big({\rm Ad}_h(x)\big)=\chi_i(x)\cdot\alpha_i(h),~~~{\cal L}_i\big({\rm Ad}_h(x)\big)={\cal L}_i(x)\cdot\alpha_i(h),~~~ {{\cal R}_i\big({\rm Ad}_h(x)\big)={\cal R}_i(x)\cdot\alpha_i(h)}.
\end{equation} \begin{equation} \label{13.1.11.34h} {\chi_i^{-}\big({\rm Ad}_h(y)\big)=\chi_i^{-}(y)/\alpha_i(h),~~~{\cal L}_i^{-}\big({\rm Ad}_h(y)\big)={\cal L}_i^{-}(y)/\alpha_i(h),~~~ {\cal R}_i^{-}\big({\rm Ad}_h(y)\big)={\cal R}_i^{-}(y)/\alpha_i(h).} \end{equation} \end{lemma} \begin{proof} Follows from the identities $ {\rm Ad}_h\big(x_i(a)\big)=x_i(a\alpha_i(h))$ and ${\rm Ad}_h\big(y_i(a)\big)=y_i(a/\alpha_i(h)). $ \end{proof} \subsection{The positive morphisms $\Phi$ and $\eta$}\label{sec6.2.2} We show that each $\chi_i$ is closely related to ${\cal L}_i^{-}$ by the following morphism. \begin{definition} There exists a unique morphism $\Phi: {\rm U}^{-}\longrightarrow {\rm U}$ such that \begin{equation} \label{13.map.Phi} u_{-}{\rm B}= \Phi(u_{-})w_0{\rm B}. \end{equation} \end{definition} \begin{lemma} \label{lem2} For each $i \in I$, one has \begin{equation} 1/{\cal L}_{i}^{-}=\chi_{i} \circ \Phi,~~~~1/\chi^{-}_i={\cal L}_i\circ \Phi. \end{equation} \end{lemma} ${\bf Example.}$ Let ${\rm G=SL}_3$. We have $$ y=y_1(b_1)y_2(b_2)y_1(b_3)=y_2(\frac{b_2b_3}{b_1+b_3})y_1(b_1+b_3)y_2(\frac{b_1b_2}{b_1+b_3}). $$ $$ \Phi(y)=x_1(\frac{1}{b_1+b_3})x_2(\frac{b_1+b_3}{b_2b_3})x_1(\frac{b_3}{b_1(b_1+b_3)})=x_2(\frac{1}{b_2})x_1(\frac{1}{b_1})x_2(\frac{b_1}{b_2b_3}). $$ $$ 1/{\cal L}_{1}^{-}(y)=\chi_{1}(\Phi(y))=\frac{1}{b_1}, ~~~ 1/{\cal L}_{2}^{-}(y)=\chi_{2}(\Phi(y))=\frac{b_1+b_3}{b_2b_3}. $$ $$ 1/{\chi}_{1}^{-}(y)={\cal L}_{1}(\Phi(y))=b_1+b_3, ~~~ 1/{\chi}_{2}^{-}(y)={\cal L}_{2}(\Phi(y))=b_2. $$ \vskip 2mm The proof below was suggested by the proof of Proposition 3.2 of \cite{L2}. \begin{proof} We prove the first formula. The second follows similarly by considering the inverse morphism $\Phi^{-1}:{\rm U}\rightarrow {\rm U}^{-}$ such that $ u{\rm B}^{-}=\Phi^{-1}(u)w_0{\rm B}^{-}. $ Let $i\in I$. Let $w\in W$ be such that $l(w)<l(s_iw)$.
We use two basic identities: \begin{equation} \label{10.1.bac1} y_i(b)x_i(a)=x_i\big(a/(1+ab)\big)y_i\big(b(1+ab)\big)\alpha_i^\vee\big(1/(1+ab)\big). \end{equation} \begin{equation} \label{10.1.bac2} y_i(b)w {\rm B}=x_i(1/b){s}_i{w} {\rm B}. \end{equation} By \eqref{10.1.bac2}, one can replace the rightmost $y_i(b)$ by $x_i(1/b)$. By \eqref{10.1.bac1}, one can ``move" $y_i(b)$ from left to right. After finitely many steps, we get \begin{equation} \label{phi} y_{i_1}(b_1)y_{i_2}(b_2)...y_{i_m}(b_m){\rm B}= y_{i_1}(b_1)x_{i_m}(a_m)x_{i_{m-1}}(a_{m-1})\ldots x_{i_{2}}(a_2){s}_{i_2}\ldots {s}_{i_m}{\rm B}. \end{equation} The last step is to move the very left term $y_{i_1}(b_1)$ to the right. Let $$ f_s(c_1,c_2,\ldots, c_m)=x_{i_{m}}(c_m)x_{i_{m-1}}(c_{m-1})...x_{i_{s+1}}(c_{s+1})y_{i_1}(c_1)x_{i_s}(c_s)\ldots x_{i_2}(c_2){s}_{i_2}\ldots {s}_{i_m}{\rm B}. $$ We will need the relations between $\{c_i\}$ and $\{c_i'\}$ such that $$ f_s(c_1,c_2,\ldots,c_m)=f_{s-1}(c_1',c_2',\ldots,c_m'). $$ By \eqref{10.1.bac1}-\eqref{10.1.bac2}, if $i_1\neq i_s$, then $c_p=c_p'$ for all $p$. If $i_1=i_s$, then \begin{align} &c_p'=c_p\quad \text{for } p=s+1,\ldots, m;\nonumber\\ & c_s'=c_s/(1+c_1c_s), \quad c_1'=c_1(1+c_1c_s);\nonumber\\ &c_p'=c_p(1+c_1c_s)^{-\langle\alpha_{i_1}^{\vee}, \alpha_{i_p}\rangle} \quad \text{for }p=2,\ldots, s-1.\nonumber \end{align} For each $q=f_s(c_1,c_2,\ldots,c_m)$, we set \begin{equation} \label{6.21.1.1} h(q):=\frac{1}{c_1}+\sum_{p~|~i_p=i_1, ~p>s}c_p. \end{equation} If $i_s=i_1$, then $$ \frac{1}{c_1'}+\sum_{p~|~i_p=i_1,~p>s-1}c_p'=\frac{1}{c_1(1+c_1c_s)}+\frac{c_s}{1+c_1c_s}+\sum_{p~|~i_p=i_1, ~p>s}c_p=\frac{1}{c_1}+\sum_{p~|~i_p=i_1,~p>s}c_p. $$ The same holds when $i_s\neq i_1$. Therefore the function (\ref{6.21.1.1}) does not depend on $s$.
Back to (\ref{phi}), we have \begin{align} u{\rm B}&=y_{i_1}(b_1)y_{i_2}(b_2)...y_{i_m}(b_m){\rm B} \nonumber\\ &=y_{i_1}(b_1)x_{i_m}(a_m)...x_{i_{2}}(a_2){s}_{i_2}...{s}_{i_m}{{\rm B}}\nonumber\\ &=x_{i_m}(c_m)...x_{i_2}(c_2)y_{i_1}(c_1){s}_{i_2}...{s}_{i_m}{{\rm B}}\nonumber\\ &=x_{i_m}(c_m)...x_{i_2}(c_2)x_{i_1}(1/c_1){s}_{i_1}...{s}_{i_m}{{\rm B}}\nonumber\\ &=\Phi(u){w}_0{\rm B}.\nonumber \end{align} Hence $\Phi(u)=x_{i_m}(c_m)...x_{i_2}(c_2)x_{i_1}(1/c_1)$. Then $$ \chi_{i_1}(\Phi(u))=\frac{1}{c_1}+\sum_{p~|~i_p=i_1, ~p>1}c_p=h(u{\rm B})=\frac{1}{b_1}=\frac{1}{{\cal L}_{i_1}^{-}(u)}. $$ \end{proof} \begin{lemma} \label{10.1.phi} The morphism $\Phi: {\rm U}^{-}\rightarrow {\rm U}$ is a positive birational isomorphism with respect to Lusztig's positive atlases on ${\rm U}^-$ and ${\rm U}$. \end{lemma} \begin{proof} According to the algorithm in the proof of Lemma \ref{lem2}, clearly $\Phi$ is a positive morphism. By the same argument, one can show that $\Phi^{-1}$ is a positive morphism. The Lemma is proved. \end{proof} The morphism $\eta$ in \eqref{13.map.eta} is the right-hand side version of $\Phi$, i.e. $ {\rm B}^{-} u={\rm B}^{-} w_0 \eta(u). $ Similarly, \begin{lemma} \label{10.1.eta} The morphism $\eta: {\rm U}\rightarrow {\rm U}^{-}$ is a positive birational isomorphism. Moreover, \begin{equation} \label{lem2.eta} \forall i\in I,~~~~~~ 1/{\cal R}_{i}=\chi_{i}^{-} \circ \eta,~~~~1/\chi_i={\cal R}_i^{-}\circ \eta. \end{equation} \end{lemma} \subsection{Birational isomorphisms $\phi_i$ of ${\rm U}$} Let $i\in I$. Define $$ z_i(a):=\alpha_i^{\vee}(a)y_i(-a),~~~z_i^*(a):=\alpha_i^{\vee}(1/a)y_i(1/a). $$ Clearly $z_i(a)z_i^{*}(a)=1$. \begin{lemma-construction} \label{13.1.4.26h} There is a birational isomorphism \begin{equation} \label{map.phi_i} \phi_i: {\rm U}\stackrel{\sim}{\longrightarrow}{\rm U},~~~ u\longmapsto \overline{s}_i\cdot u \cdot z_i\big(\chi_i(u)\big).
\end{equation} \end{lemma-construction} {\bf Remark.} The map $\phi_i$ is not a positive birational isomorphism. \begin{proof} We need the following identities: \begin{equation} \label{13.1.4.21h} \overline{s}_ix_i(a)z_i(a)=x_i(-1/a). \end{equation} \begin{equation} z_i^{*}(a) x_i(b-a) z_i(b)=x_i(1/a-1/b). \end{equation} If $j\neq i$, then \begin{equation} \label{13.1.4.22h} z_i^*(a)x_j(b)z_i(a)=x_j(ba^{-\langle \alpha_i^{\vee}, \alpha_j\rangle}) \end{equation} Let ${\bf i}=(i_1,i_2,\ldots, i_m)$ be a reduced word for $w_0$ such that $i_1=i$. For each $s\in [1,m]$, define $$ I_s^{{\bf i},i}:=\{p\in [1,s]~|~i_p=i\}. $$ Let $u=x_{i_1}(a_1)\ldots x_{i_m}(a_m)\in {\rm U}$. Set $d_s:=\sum_{k\in I_s^{{\bf i},i}} a_k.$ In particular, $d_1=a_1$, $d_m=\chi_i(u)$. Let us assume that $u\in {\rm U}$ is generic, so that $d_s\neq 0$ for all $s\in [1,m]$. By \eqref{13.1.4.21h}-\eqref{13.1.4.22h}, we get \begin{align} \phi_i(u)=&\overline{s}_i\cdot x_{i_1}(a_1)x_{i_2}(a_2)\ldots x_{i_m}(a_m) \cdot z_i\big(\chi_i(u)\big)\nonumber\\ =&\big(\overline{s}_ix_{i_1}(a_1)z_i(d_1)\big)\cdot \big(z_i^*(d_1)x_{i_2}(a_2)z_i(d_2)\big)\cdot \ldots \cdot \big(z_i^{*}(d_{m-1})x_{i_m}(a_m)z_i(d_m)\big)\nonumber\\ =&x_{i_1}(a_1')x_{i_2}(a_2')\ldots x_{i_m}(a_m'). \end{align} Here $a_1'=-1/d_1$. For $s>1$, \begin{equation} \label{13.1.4.23h} a_s'=\left\{\begin{array} {cc}1/d_{s-1}-1/d_s,&\text{if }i_s=i,\\a_sd_s^{-\langle \alpha_i^{\vee}, \alpha_{i_s}\rangle}, &\text{if }i_s\neq i .\\ \end{array}\right. \end{equation} Thus $\phi_i(u)\in {\rm U}$. The map $\phi_i$ is well-defined. By \eqref{13.1.4.23h}, we have $\chi_i(\phi_i(u))=-1/\chi_i(u)$. Therefore $$ \phi_i\circ \phi_i(u)=\overline{s}_i\cdot \overline{s}_i\cdot u \cdot z_i(\chi_i(u))\cdot z_i(-1/\chi_i(u))=\overline{s}_i^2\cdot u \cdot \overline{s}_i^2. $$ Since $\overline{s}_i^4=1$, we get $\phi_i^4={\rm id}$. Therefore $\phi_i$ is birational. \end{proof} Let $\lambda\in {\rm P}^+$. Recall $t^{\lambda}\in {\rm Gr}$. 
Recall the ${{\rm G}}({\cal O})$-orbit ${\rm Gr}_{\lambda}$ of $t^{\lambda}$ in ${\rm Gr}$. \begin{lemma} \label{13.1.8.53h} Let $l\in {\rm U}({\mathbb Z}^t)$. For any $u\in {\cal C}_l^{\circ}$, the element $u\cdot t^{\lambda} \in \overline{{\rm Gr}_{\lambda}}$ if and only if $l\in {\rm U}_{\chi}^+({\mathbb Z}^t).$ \end{lemma} \begin{proof} If $l\in {\rm U}_{\chi}^+({\mathbb Z}^t)$, by Lemma \ref{Lema10.1.1}, we see that $u\in {\rm U}({\cal O})$. Hence $u\cdot t^{\lambda} \in \overline{{\rm Gr}_{\lambda}}$. If $\chi^t(l)=\min_{i\in I}\{\chi_i^t(l)\}<0,$ then pick $i$ such that $\chi_i^{t}(l)<0$. Set $\mu:=\lambda-\chi_i^t(l)\cdot\alpha_i^{\vee}$. Since $y_i(t^{\langle \alpha_i, \lambda\rangle}/\chi_i(u))\in {\rm G}({\cal O})$, we get \begin{equation} \label{13.1.4.25h} z_i^*(\chi_i(u))\cdot t^{\lambda}=\alpha_i^{\vee}(1/\chi_i(u))\cdot t^{\lambda}\cdot y_i(t^{\langle \alpha_i, \lambda\rangle}/\chi_i(u))=\alpha_i^{\vee}(1/\chi_i(u))\cdot t^{\lambda}=t^{\mu}. \end{equation} Recall the ${\rm U}_{w}({\cal K})$-orbit ${\rm S}_w^{\nu}$ of $t^\nu$ in ${\rm Gr}$. We have \begin{equation} u\cdot t^{\lambda}=u z_i(\chi_i(u))\cdot z_i^*(\chi_i(u)) t^{\lambda}\stackrel{\eqref{13.1.4.25h}}{=} u z_i(\chi_i(u)) \cdot t^{\mu}\stackrel{\eqref{map.phi_i}}{=}\overline{s}_i^{-1}\phi_i(u) \overline{s}_i\cdot t^{s_i(\mu)}\in {\rm S}_{s_i}^{s_i(\mu)}. \end{equation} It is well-known that the intersection ${\rm S}_{w}^{\nu}\cap \overline{{\rm Gr}_{\lambda}}$ is nonempty if and only if $t^{\nu}\in \overline{{\rm Gr}_{\lambda}}.$ In this case $t^{s_i(\mu)}\notin \overline{{\rm Gr}_{\lambda}}$. Therefore ${\rm S}_{s_i}^{s_i(\mu)}\cap \overline{{\rm Gr}_{\lambda}}$ is empty. Hence $u\cdot t^{\lambda}\notin \overline{{\rm Gr}_{\lambda}}$. 
\end{proof} \section{The potential ${\cal W}$ in special coordinates for ${\rm GL}_m$}\label{KT} \subsection{Potential for ${\rm Conf}_3({\cal A})$ and Knutson-Tao's rhombus inequalities} Recall that a flag $F_\bullet$ for ${\rm GL}_m$ is a collection of subspaces in an $m$-dimensional vector space $V_m$: $$ F_\bullet = F_0 \subset F_1 \subset \ldots \subset F_{m-1} \subset F_m, ~~~~ {\rm dim}F_i=i. $$ A decorated flag for ${\rm GL}_m$ is a flag $F_\bullet$ with a choice of non-zero vectors $f_i \in F_{i}/F_{i-1}$ for each $i=1, \ldots, m$, called {\it decorations}. It determines a collection of decomposable $k$-vectors $$ f_{(1)}:= f_1, ~~f_{(2)}:= f_1\wedge f_2,~~ \ldots , ~~f_{(m)}:= f_1 \wedge ... \wedge f_m. $$ A decorated flag is determined by a collection of decomposable $k$-vectors such that each divides the next one. A linear basis $(f_1, ..., f_m)$ in the space $V_m$ determines a decorated flag by setting $F_k:= \langle f_1, ..., f_k\rangle$, and taking the projections of $f_k$ to $F_k/F_{k-1}$ to be the decorations. Recall the notion of an $m$-triangulation of a triangle \cite[Section 9]{FG1}. It is a graph whose vertices are parametrized by the set \begin{equation} \label{Gamma} \Gamma_m:=\{(a,b,c)~|~ a+b+c=m, \quad a,b,c\in {\mathbb Z}_{\geq 0}\}. \end{equation} Let $({\rm F, G, H})\in {\rm Conf}_3({\cal A})$ be a generic configuration of three decorated flags, described by a triple of linear bases in the space $V_m$: $$ {\rm F}=(f_1,\ldots, f_m), ~{\rm G}=(g_1,\ldots,g_m),~{\rm H}=(h_1,\ldots,h_m). $$ Let $\omega\in \det V_m^*$ be a volume form. Then each vertex $(a,b,c)\in \Gamma_m$ gives rise to a function $$ \Delta_{a,b,c}({\rm F, G, H})=\langle f_{(a)}\wedge g_{(b)}\wedge h_{(c)}, \omega\rangle. $$ There is a one dimensional space ${\rm L}_a^{b,c}:={\rm F}_{a+1}\cap ({\rm G}_b\oplus {\rm H}_c)$. Let $e_{a}^{b,c}\in {\rm L}_a^{b,c}$ be such that $e_{a}^{b,c}-f_{a+1}\in {\rm F}_a$.
It is easy to see that $ e_{a}^{b+1,c-1}-e_{a}^{b,c}\in {\rm L}_{a-1}^{b+1, c}. $ Therefore there exists a unique scalar $\alpha_{a}^{b,c}$ such that $ e_{a}^{b+1,c-1}-e_{a}^{b,c}=\alpha_{a}^{b,c} e_{a-1}^{b+1,c}. $ \begin{figure}[ht] \epsfxsize=400pt \centerline{\epsfbox{san.eps}} \caption{Zig-zag paths and bases for the decorated flag ${\rm F}$.} \label{tri} \end{figure} \begin{lemma} \label{lem36} One has \begin{equation} \alpha_{a}^{b,c}=\frac{\Delta_{a-1,b+1,c}\Delta_{a+1,b,c-1}}{\Delta_{a,b,c}\Delta_{a,b+1,c-1}}. \end{equation} \end{lemma} \begin{proof} Set \begin{equation} \alpha:=\alpha_{a}^{b,c}, \quad \beta:=\frac{\Delta_{a,b,c}}{\Delta_{a+1,b,c-1}},\quad \gamma:=\frac{\Delta_{a,b+1,c-1}}{\Delta_{a+1,b,c-1}}. \label{eq34} \end{equation} By definition, \begin{align} f_{(a)}&=f_{(a-1)}\wedge e_{a-1}^{b,c+1},\nonumber\\ f_{(a+1)}&=f_{(a)}\wedge e_{a}^{b,c}=f_{(a)}\wedge e_{a}^{b-1,c+1},\nonumber\\ g_{(b)}\wedge h_{(c)}&=\beta e_{a}^{b,c}\wedge g_{(b)}\wedge h_{(c-1)},\nonumber\\ g_{(b+1)}\wedge h_{(c-1)}&= \gamma e_a^{b+1,c-1} \wedge g_{(b)}\wedge h_{(c-1)}. \nonumber \end{align} Therefore, \begin{align} g_{(b+1)}\wedge h_{(c)}&=\gamma e_a^{b+1,c-1} \wedge g_{(b)}\wedge h_{(c)}\nonumber\\ &=\beta \gamma e_{a}^{b+1,c-1}\wedge e_{a}^{b,c}\wedge g_{(b)}\wedge h_{(c-1)}\nonumber\\ &=\beta\gamma(e_{a}^{b+1,c-1}-e_{a}^{b,c})\wedge e_{a}^{b,c}\wedge g_{(b)}\wedge h_{(c-1)}\nonumber\\ &=\beta\gamma\alpha e_{a-1}^{b+1,c}\wedge e_{a}^{b,c}\wedge g_{(b)}\wedge {h_{(c-1)}}. \nonumber \end{align} So $$ f_{(a-1)}\wedge g_{(b+1)}\wedge h_{(c)}=\alpha \beta\gamma f_{(a+1)}\wedge g_{(b)}\wedge h_{(c-1)}. $$ Therefore, $$ \alpha \beta\gamma =\frac{\Delta_{a-1,b+1,c}}{\Delta_{a+1,b,c-1}}. $$ Going back to (\ref{eq34}), the Lemma is proved. \end{proof} As shown on Fig \ref{tri}, each zig-zag path $p$ provides a basis ${\rm E}_{p}$ for ${\rm F}$.
For example, $$ {\rm E}_{l}:=\{e_{0}^{0, m}, e_{1}^{0,m-1},\ldots, e_{m-1}^{0,1}\},~~~ {\rm E}_{r}:=\{e_{0}^{m,0}, e_{1}^{m-1, 1},\ldots,e_{m-1}^{1,0}\} $$ are the bases provided by the very left and very right paths. Given two zig-zag paths, say $p$ and $q$, there is a unique unipotent element $u_{pq}$ stabilizing ${\rm F}$, transforming ${\rm E}_{p}$ to ${\rm E}_{q}$. Recall the character $\chi_{\rm F}$ in Section 1. For each triple $(p, q, r)$ of zig-zag paths, we have \begin{align} \chi_{\rm F}(u_{pq})&=-\chi_{\rm F}(u_{qp}),\nonumber\\ \chi_{\rm F}(u_{pr})&=\chi_{\rm F}(u_{pq})+\chi_{\rm F}(u_{qr}).\nonumber \end{align} If $p$, $q$ are adjacent paths, see the right of Fig \ref{tri}, then by Lemma \ref{lem36}, $$ \chi_{\rm F}(u_{pq})=\alpha_{a}^{b,c}=\frac{\Delta_{a-1,b+1,c}\Delta_{a+1,b,c-1}}{\Delta_{a,b,c}\Delta_{a,b+1,c-1}}. $$ One can transform the very left path to the very right one by a sequence of adjacent paths. Let $u\in {\rm U}_{\rm F}$ be the element transforming ${\rm E}_l$ to ${\rm E}_r$. Then $$ \chi_{\rm F}(u)=\sum_{(a,b,c)\in \Gamma_m, a\neq 0, c\neq 0}\alpha_{a}^{b,c}=\sum_{(a,b,c)\in \Gamma_m, a\neq 0, c\neq 0}\frac{\Delta_{a-1,b+1,c}\Delta_{a+1,b,c-1}}{\Delta_{a,b,c}\Delta_{a,b+1,c-1}}. $$ Its tropicalization $$ \chi_{\rm F}^t=\min_{(a,b,c)\in \Gamma_m, a\neq 0, c\neq 0}\{ \Delta_{a-1,b+1,c}^t+\Delta_{a+1,b,c-1}^t-\Delta_{a,b,c}^t-\Delta_{a,b+1,c-1}^t\} $$ delivers one third of the Knutson-Tao rhombus inequalities. Clearly, the same holds for the other two directions. By definition, $$ {\cal W}({\rm F,G,H})=\chi_{\rm F}+\chi_{\rm G}+\chi_{\rm H}. $$ Our set ${\rm Conf}_3^{+}({\cal A})({\mathbb Z}^t)$ coincides with the set of hives in \cite{KT}.
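To see the shape of these inequalities in the smallest nontrivial case, here is an illustrative computation (ours, not from the general discussion) for $m=2$: requiring $\chi_{\rm F}^t\geq 0$ gives a single rhombus inequality.

```latex
% Illustrative example for m = 2.  The only vertex (a,b,c) of \Gamma_2
% with a \neq 0 and c \neq 0 is (1,0,1), so the F-direction contributes
% the single rhombus inequality
\[
\chi_{\rm F}^{t}
=\Delta^{t}_{0,1,1}+\Delta^{t}_{2,0,0}
 -\Delta^{t}_{1,0,1}-\Delta^{t}_{1,1,0}\;\geq\;0.
\]
% The directions G and H contribute the analogous inequalities; the
% three together are the rhombus inequalities of a GL_2-hive.
```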
\begin{figure}[ht] \epsfxsize=3in \centerline{\epsfbox{phase.eps}} \caption{Calculating the potential ${\cal W}$ on ${\rm Conf}({\cal A}, {\cal A}, {\cal B})$ in the special coordinates for ${\rm GL}_m$.} \label{phase} \end{figure} In Sections \ref{sec4.2Gi}-\ref{sec4.2GZ} we show that the potential on the space ${\rm Conf}({\cal A}, {\cal A}, {\cal B})$ for ${\rm GL}_m$, written in the special coordinates there, recovers Givental's potential and, after tropicalization, Gelfand-Tsetlin's patterns. \subsection{The potential for ${\rm Conf}({\cal A}, {\cal A}, {\cal B})$ and Givental's potential for ${\rm GL}_m$} \label{sec4.2Gi} Let ${\rm G}={\rm GL}_m$. Recall the set $\Gamma_m$, see (\ref{Gamma}). For each triple $(a,b,c)\in \Gamma_m$, there is a canonical function $\Delta_{a,b,c}: {\rm Conf}_3({\cal A})\rightarrow {\Bbb A}^1$. Consider the functions $\Delta_{a,b,c}$ with $(a,b,c)\in \Gamma_m- (0,0,m)$, illustrated by the $\bullet$-vertices on Fig \ref{giv}. For each triple $(a,b,c)\in \Gamma_{m-1}$, let us set \begin{equation} \label{Rratio} {\rm R}_{a,b,c}:=\frac{\Delta_{a, b+1, c}}{\Delta_{a+1, b,c}}. \end{equation} The functions ${\rm R}_{a,b,c}$ are assigned naturally to the $\circ$-vertices on Fig \ref{giv}. Each of them is the ratio of the $\Delta$-functions at the ends of the slant edge centered at a $\circ$-vertex. They are functions on ${\rm Conf}({\cal A},{\cal A},{\cal B})$ since ${\rm R}_{a,b,c}({\rm A}_1,{\rm A}_2,{\rm A}_3\cdot h)={\rm R}_{a,b,c}({\rm A}_1,{\rm A}_2,{\rm A}_3)$ for any $h\in {\rm H}$. The functions ${\rm R}_{a,b,c}$ form a coordinate system on ${\rm Conf}({\cal A},{\cal A},{\cal B})$, referred to as the {\it special coordinate system}. The functions $\{{\rm R}_{a,b,0}\}$ provide the canonical map \begin{equation} \label{q-map} {\rm Conf}({\cal A},{\cal A},{\cal B}) \longrightarrow {\rm Conf}({\cal A},{\cal A}) = ({\Bbb G}_m)^{m-1}. 
\end{equation} \begin{figure}[ht] \epsfxsize=1.5in \centerline{\epsfbox{giv.eps}} \caption{The Givental quiver and special coordinates on ${\rm Conf}({\cal A}, {\cal A}, {\cal B})$ for ${\rm GL}_4$.} \label{giv} \end{figure} Consider now the {\it Givental quiver $\Gamma_{m-1}$}, whose vertices are the $\circ$-vertices, parametrised by the set $\Gamma_{m-1}$, with the arrows going down and to the right, as shown on Fig \ref{giv}. For each arrow connecting two vertices, take the head/tail ratio of the corresponding functions. For example, see Fig \ref{phase}, the vertical arrow $\alpha$ connecting $(a+1,b-1,c)$ and $(a,b-1,c+1)$ provides \begin{equation} \label{13.12.28} Q_{\alpha}=\frac{{\rm R}_{a,b-1,c+1}}{{\rm R}_{a+1, b-1,c}}=\frac{\Delta_{a,b,c+1}\Delta_{a+2,b-1,c}}{\Delta_{a+1,b-1,c+1}\Delta_{a+1,b,c}}. \end{equation} \vskip 2mm Recall the functions $\chi_{{\rm A}_1}$, $\chi_{{\rm A}_2}$ on ${\rm Conf}({\cal A},{\cal A},{\cal B})$. Taking the sum of $Q_{\alpha}$ over the vertical arrows $\alpha$, and a similar sum over the horizontal arrows $\beta$, and using \eqref{13.12.28}, we get $$ \chi_{{\rm A}_1}=\sum_{\alpha \mbox{~vertical}} Q_{\alpha}, ~~~~ \chi_{{\rm A}_2}=\sum_{\beta \mbox{~horizontal}} Q_{\beta}. $$ \paragraph{Relating to Givental's work.} Givental \cite{Gi2}, pages 3-4, introduced parameters $T_{i,j}$, $0 \leq i\leq j \leq m$, matching the vertices of the Givental quiver: $$ \begin{matrix} T_{0,0}&&&\\ T_{0,1}&T_{1,1}&&\\ T_{0,2}&T_{1,2}&T_{2,2}&\\ T_{0,3}&T_{1,3}&T_{2,3}&T_{3,3}\\ \end{matrix} $$ He treats the entries on the main diagonal ${\rm a}= ({T}_{0,0}, {T}_{1,1}, \ldots, {T}_{m,m})$ as parameters, and defines the potential as a sum over the oriented edges of the quiver: $$ {\cal W}_{\rm a} = \sum_{0 \leq i< j \leq m} \Bigl({\rm exp}(T_{i,j} - T_{i,j-1}) + {\rm exp}(T_{i,j} - T_{i+1,j})\Bigr). $$ Let $Y_{\rm a}$ be the subvariety with a given value of ${\rm a}$.
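For orientation, here is the case $m=2$ of Givental's potential, obtained by listing the six edge terms of the sum above, with the diagonal ${\rm a}=(T_{0,0},T_{1,1},T_{2,2})$ held fixed:

```latex
% m = 2: the pairs (i,j) with 0 <= i < j <= 2 are (0,1), (0,2), (1,2),
% each contributing two exponential terms.
{\cal W}_{\rm a}
= {\rm exp}(T_{0,1}-T_{0,0}) + {\rm exp}(T_{0,1}-T_{1,1})
+ {\rm exp}(T_{0,2}-T_{0,1}) + {\rm exp}(T_{0,2}-T_{1,2})
+ {\rm exp}(T_{1,2}-T_{1,1}) + {\rm exp}(T_{1,2}-T_{2,2}).
```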
Then Givental's integral is $$ {\cal F}({\rm a}, \hbar)= \int_{Y_{\rm a}}{\rm exp}(-{\cal W}_{\rm a}/\hbar)\bigwedge_{i=1}^n\bigwedge_{j=0}^{i-1}d T_{i,j}. $$ Givental's variables $T_{i,j}$ match our coordinates ${\rm R}_{a,b,c}$ where $a+b+c=m-1$: $$ {\rm R}_{m-i-1, j, i-j} = {\rm exp}(T_{i,j}). $$ Observe that $Y_{\rm a}$ is the fiber of the map (\ref{q-map}) over a point ${\rm a}= ({\rm R}_{m-1, 0}, {\rm R}_{m-2, 1}, \ldots, {\rm R}_{0, m-1})$. Givental's potential coincides with $\chi_{{\rm A}_1}+\chi_{{\rm A}_2}$. Givental's volume form on $Y_{\rm a}$ coincides, up to a sign, with ours since $$ \bigwedge_{i=1}^n\bigwedge_{j=0}^{i-1}d T_{i,j} = \pm \bigwedge_{a+b+c=m-1, c>0}d\log {\rm R}_{a,b,c}. $$ \subsection{The potential for ${\rm Conf}({\cal A}, {\cal A}, {\cal B})$ and Gelfand-Tsetlin's patterns for ${\rm GL}_m$} \label{sec4.2GZ} Gelfand-Tsetlin's patterns for ${\rm GL}_m$ \cite{GT1} are arrays of integers $\{p_{i,j}\}$, $1 \leq i \leq j \leq m$, such that \begin{equation} \label{ineq} {p_{i,j+1}\leq p_{i,j}\leq p_{i+1,j+1}.} \end{equation} \begin{figure}[ht] \centerline{\epsfbox{tpm50.eps}} \caption{Gelfand-Tsetlin patterns for ${\rm GL}_4$ and the special coordinates for ${\rm Conf}({\cal A}, {\cal A}, {\cal B})$.} \label{tpm50} \end{figure} \begin{theorem} The special coordinate system on ${\rm Conf}({\cal A}_{{\rm GL}_m}, {\cal A}_{{\rm GL}_m}, {\cal B}_{{\rm GL}_m})$ together with the potential ${\cal W} = \chi_{{\rm A}_1}+\chi_{{\rm A}_2}$ provide a canonical isomorphism $$ \{\mbox{\rm Gelfand-Tsetlin's patterns for ${\rm GL}_m$}\} ~~= ~~ {\rm Conf}^+({\cal A}_{{\rm GL}_m}, {\cal A}_{{\rm GL}_m}, {\cal B}_{{\rm GL}_m})({\mathbb Z}^t). $$ \end{theorem} \begin{proof} The space ${\rm Conf}({\cal A}_{{\rm GL}_m}^3, \omega_m)$ of ${\rm GL}_m$-orbits on ${\cal A}_{{\rm GL}_m}^3 \times {\rm det}V_m^*$ has dimension $\frac{(m+1)(m+2)}{2}$.
It has a coordinate system given by the functions $\Delta_{a,b,c}$, {$a+b+c=m$}, parametrized by the vertices of the graph $\Gamma_{m}$, shown on the left of Fig \ref{tpm50}. The coordinates on ${\rm Conf}({\cal A}_{{\rm GL}_m}, {\cal A}_{{\rm GL}_m}, {\cal B}_{{\rm GL}_m})$ are parametrized by the edges $E$ of the graph parallel to the edge ${\rm A}_1{\rm A}_2$ of the triangle. They are the little red segments on the right of Fig \ref{tpm50}. They are given by the ratios of the coordinates at the ends of the edge $E$, recovering formula (\ref{Rratio}). Notice that the edges $E$ are oriented by the orientation of the side ${\rm A}_1{\rm A}_2$. The monomials of the potential $\chi_{{\rm A}_1}+\chi_{{\rm A}_2}$ are parametrized by the blue edges, that is by the edges of the graph inside of the triangle parallel to either side ${\rm B}_3{\rm A}_1$ or ${\rm B}_3{\rm A}_2$. We claim that the monomials of the potential $\chi_{{\rm A}_1}+\chi_{{\rm A}_2}$ are in bijection with Gelfand-Tsetlin's inequalities. Indeed, a typical pair of inequalities (\ref{ineq}) is encoded by a part of the graph shown on Fig \ref{tpm50a}. The three coordinates $(P_1, P_2, Q)$ on ${\rm Conf}({\cal A}_{{\rm GL}_m}, {\cal A}_{{\rm GL}_m}, {\cal B}_{{\rm GL}_m})$ assigned to the red edges are expressed via the coordinates $(A, B, C, D, E)$ at the vertices: $$ P_1 = \frac{B}{A}, ~~~~P_2 = \frac{C}{B}, ~~~~Q = \frac{E}{D}. $$ \begin{figure}[ht] \epsfxsize=80pt \centerline{\epsfbox{tpm50a.eps}} \caption{Gelfand-Tsetlin patterns from the potential.} \label{tpm50a} \end{figure} The monomials of the potential at the two blue edges are $\frac{EA}{DB}$ and $\frac{DC}{EB}$. Their tropicalizations deliver the inequalities $p_1 \leq q$ and $q\leq p_2$, where $p_1, p_2, q$ denote the tropicalizations of $P_1, P_2, Q$. \end{proof} \section{Proof of Theorem \ref{8.23.7.32hh}} \label{9.21.12.17h} Let ${\rm T}$ be a split torus. Let $g:=\sum_{\alpha\in X^*({\rm T})} g_{\alpha} X^{\alpha}$ be a nonzero positive polynomial on ${\rm T}$, i.e.
its coefficients $g_{\alpha}$ are non-negative. The integral tropical points $l\in {\rm T}({\mathbb Z}^t)=X_{*}({\rm T})$ are cocharacters of ${\rm T}$. The tropicalization of $g$ is a piecewise linear function on ${\rm T}({\mathbb Z}^t)$: $$ g^t(l)=\min_{\alpha~|~g_{\alpha}>0}\{\langle l, \alpha\rangle\}. $$ Fix an $l\in {\rm T}({\mathbb Z}^t)$. Set $$ \Lambda_{g,l}:=\{\alpha\in X^*({\rm T})~|~ g_{\alpha}> 0, ~\langle l,\alpha\rangle =g^t(l)\}, \qquad g_l:=\sum_{\alpha \in \Lambda_{g, l}}g_{\alpha}X^{\alpha}. $$ The set $\Lambda_{g,l}$ is non-empty. Therefore $g_l$ is a nonzero positive polynomial. If $f$ and $g$ are two such polynomials, so is the product $f\cdot g$. We have $(f\cdot g)_l=f_{l}\cdot g_{l}$ for all $l\in {\rm T}({\mathbb Z}^t).$ Let $h$ be a nonzero positive rational function on ${\rm T}$. It can be expressed as a ratio $f/g$ of two nonzero positive polynomials. Set $h_l:=f_l/g_l$. Let $h=f'/g'$ be another expression. Then $$ f/g=f'/g' ~\Longrightarrow ~ f \cdot g'=f'\cdot g ~{\Longrightarrow} ~f_l\cdot g'_l =f'_l\cdot g_l~\Longrightarrow ~f_l/g_l =f'_l/g_l'. $$ Hence $h_l$ is well defined. \begin{lemma} \label{8.21.14.46h} Let $h$, $l$ be as above. For each $C\in {\rm T}_l$ such that \footnote{Every transcendental point $C\in {\rm T}^{\circ}_l$ automatically satisfies such conditions.} $h_l({\rm in}(C))\in {\mathbb C}^*$, we have \begin{equation} \label{9.12.12.1} {{\rm val}}(h(C))=h^t(l),~~~~~{\rm in}(h(C))=h_{l}({\rm in}(C)). \end{equation} \end{lemma} \begin{proof} Assume that $h$ is a nonzero positive polynomial. By definition $$ \forall C\in {\rm T}_l,~~~~h(C)=h_l({\rm in}(C))t^{h^t(l)}+\text{ terms with higher valuation}. $$ If $h_l({\rm in}(C))\in {\mathbb C}^*$, then \eqref{9.12.12.1} follows. The argument for a positive rational function is similar. \end{proof} Let $f=(f_1, \ldots, f_k):{\rm T}\rightarrow {\rm S}$ be a positive birational isomorphism of split tori. Let $l\in {\rm T}({\mathbb Z}^t)$.
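Before generalizing to maps of tori, here is a toy example illustrating the definitions and Lemma \ref{8.21.14.46h} (a sketch; ${\rm val}$ and ${\rm in}$ denote the valuation and the leading coefficient of a Laurent series in $t$, as in the setup of Theorem \ref{8.23.7.32hh}). Take ${\rm T}={\mathbb G}_m$, $h=(1+X)/X$ and $l=1$:

```latex
% f = 1 + X, g = X:  f^t(l) = \min(0, l) = 0,  \Lambda_{f,l} = \{0\},  f_l = 1,
% g_l = X, so h_l = 1/X and h^t(l) = f^t(l) - g^t(l) = -1.
% For C = 2t + t^2, so that val(C) = 1 = l and in(C) = 2:
h(C) = \frac{1 + 2t + t^2}{2t + t^2} = \tfrac{1}{2}\, t^{-1} + \cdots,
\qquad
{\rm val}(h(C)) = -1 = h^t(l),
\qquad
{\rm in}(h(C)) = \tfrac{1}{2} = h_l({\rm in}(C)),
```

in agreement with \eqref{9.12.12.1}.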
We generalize the above construction by setting $f_l:=(f_{1,l},\ldots,f_{k,l}): {\rm T}\longrightarrow {\rm S}.$ \begin{lemma} \label{9.20.13.54h} Let $f$, $l$ be as above. Let $C\in {\rm T}_l^{\circ}$. Then \begin{equation} \label{9.20.13.54h1} {\rm val}(f(C))=f^t(l),~~~~~ {\rm in}(f(C))=f_l({\rm in}(C)). \end{equation} Let $h$ be a nonzero positive rational function on ${\rm S}$. Then \begin{equation} \label{9.20.13.54h2} {\rm in}\big(h\circ f (C)\big)=h_{f^t(l)}\big({\rm in}(f(C))\big). \end{equation} \end{lemma} \begin{proof} Here \eqref{9.20.13.54h1} follows directly from Lemma \ref{8.21.14.46h}. Note that $h_{f^t(l)}\circ f_l$ is a nonzero positive rational function on ${\rm T}$. Since $C$ is transcendental, we get $$ h_{f^t(l)}\big( {\rm in}(f(C))\big)=h_{f^t(l)}\circ f_l({\rm in}(C))\in {\mathbb C}^*. $$ Thus \eqref{9.20.13.54h2} follows from Lemma \ref{8.21.14.46h}. \end{proof} \paragraph{Proof of Theorem \ref{8.23.7.32hh}.} It suffices to prove $f({\rm T}_{l}^{\circ})\subseteq{\rm S}_{f^t(l)}^{\circ}$. The other direction is the same. Let $C=(C_1,\ldots,C_k)\in {\rm T}_l^{\circ}$. Let $f(C):=(C_1',\ldots, C_k')$. By \eqref{9.20.13.54h1}, we get $ f(C)\in {\rm S}_{f^t(l)}$ and the field extension ${\mathbb Q}({\rm in}(C_1'),\ldots, {\rm in}(C_k'))\subseteq {\mathbb Q}({\rm in}(C_1),\ldots, {\rm in}(C_k))$. Let $g=(g_1,\ldots, g_k): {\rm S}\rightarrow {\rm T}$ be the inverse morphism of $f$. Then $C_j=g_j\circ f(C)$ for $j\in [1,k]$. The functions $g_j$ are nonzero positive rational functions on ${\rm S}$. Therefore $$ {\rm in}(C_j)={\rm in}(g_j\circ f(C))\stackrel{\eqref{9.20.13.54h2}}{=}g_{j,f^t(l)}({\rm in}( f(C)))\in {\mathbb Q}({\rm in}(C_1'),\ldots, {\rm in}(C_k')). $$ Therefore ${\mathbb Q}({\rm in}(C_1),\ldots, {\rm in}(C_k))\subseteq {\mathbb Q}({\rm in}(C_1'),\ldots, {\rm in}(C_k'))$. Summarizing, we get \begin{equation} \label{9.20.2.12} {\mathbb Q}({\rm in}(C_1),\ldots, {\rm in}(C_k))={\mathbb Q}({\rm in}(C_1'),\ldots, {\rm in}(C_k')).
\end{equation} Therefore $f(C)$ is transcendental. Theorem \ref{8.23.7.32hh} is proved. \section{Main examples of configuration spaces} \label{sec7sec} As discussed in Section \ref{sec1}, the pairs of configuration spaces especially important in representation theory are: $$ \big\{{\rm Conf}_n({\cal A}), ~~{\rm Conf}_n({\rm Gr})\big\}, ~~~ \big\{{\rm Conf}({\cal A}^n, {\cal B}), ~~{\rm Conf}({\rm Gr}^n, {\cal B})\big\}, ~~~ \big\{{\rm Conf}({\cal B}, {\cal A}^n, {\cal B}), ~~{\rm Conf}({\cal B}, {\rm Gr}^n, {\cal B})\big\}. $$ In Section \ref{sec7sec} we express the potential ${\cal W}$ and the map $\kappa$ in these cases in explicit coordinates. \subsection{The configuration spaces ${\rm Conf}_n({\cal A})$ and ${\rm Conf}_n({\rm Gr})$} Recall $h_{ij}, u_{ij}^k$ in \eqref{3.23.12.1}. Recall the positive birational isomorphism \begin{equation} \label{7.22.1.1s.h} \alpha_1: {\rm Conf}_n({\cal A})\stackrel{\sim}{\longrightarrow} {\rm H}^{n-1}\times{\rm U}^{n-2}, ~~~({\rm A}_1, \ldots, {\rm A}_n)\longmapsto(h_{12},\ldots, h_{1n}, u_{3,2}^1,\ldots, u_{n,n-1}^1). \end{equation} The potential ${\cal W}$ on ${\rm Conf}_n({\cal A})$ induces a positive function ${\cal W}_{\alpha_1}:={\cal W}\circ {\alpha_1}^{-1}$ on ${\rm H}^{n-1}\times {\rm U}^{n-2}$. \begin{theorem} \label{13.2.1.6.05h} The function \begin{equation} \label{12.12.ptWn} {\cal W}_{\alpha_1}(h_2,\ldots, h_n, u_2,\ldots, u_{n-1})=\sum_{j=2}^{n-1}\big(\chi (u_j)+\sum_{i\in I}\frac{\alpha_i(h_{j})}{{\cal R}_i(u_j)}+\sum_{i\in I}\frac{\alpha_{i}(h_{j+1})}{{\cal L}_{i}(u_{j})}\big). \end{equation} \end{theorem} \begin{proof} By the scissor congruence invariance \eqref{11.18.11.5}, we get ${\cal W}({\rm A}_1,\ldots, {\rm A}_n)=\sum_{j=2}^{n-1}{\cal W}({\rm A}_1, {\rm A}_{j}, {\rm A}_{j+1}).$ The rest follows from \eqref{pontential.p.j} and Lemma \ref{lem1}. \end{proof} Let us choose a map without fixed points, which is not necessarily a bijection: $$ \alpha:[1,n]\longrightarrow [1,n],~~~\alpha(k)\neq k.
$$ Let $x=({\rm A}_1,\ldots, {\rm A}_n)\in {\rm Conf}^{\cal O}_n({\cal A})$. Define \begin{equation} \label{13.1.24.41h} \omega_{k}(x):=[g_{\{{\rm U}, {\rm B}^{-}\}}(\{{\rm A}_1, {\rm B}_n\}, \{{\rm A}_k, {\rm B}_{\alpha(k)}\})]\in {\rm Gr}. \end{equation} By the definition of ${\rm Conf}^{\cal O}_n({\cal A})$, the map $\omega_k$ is independent of the map $\alpha$ chosen. Define \begin{equation} \label{13.1.23.23hh} \omega:=(\omega_2,\ldots, \omega_n): {\rm Conf}_n^{\cal O}({\cal A})\longrightarrow {\rm Gr}^{n-1}, ~~~~x\longmapsto (\omega_2(x),\ldots, \omega_n(x)). \end{equation} Consider the projection $$ i_1: {\rm Gr}^{n-1}\longrightarrow {\rm Conf}_n({\rm Gr}),~~~~\{{\rm L}_2,\ldots, {\rm L}_n\}\longmapsto ([1],{\rm L}_2,\ldots, {\rm L}_n) $$ \begin{lemma} \label{13.1.24.1.hhh} The map $\kappa$ in \eqref{4.18.12.41qx} is $i_1\circ \omega$. \end{lemma} \begin{proof} Here $\omega_k(x)=g_{\{{\rm U}, {\rm B}^{-}\}, \{{\rm A}_1,{\rm B}_n\}} {\rm L}({\rm A}_k, {\rm B}_{\alpha(k)}).$ In particular $\omega_1(x)=[1]$. The Lemma follows. \end{proof} \begin{figure}[ht] \epsfxsize=5in \centerline{\epsfbox{cal6.eps}} \caption{The map $\omega$ expressed by two different choices of frames $\{{\rm A}_{i}, {\rm B}_{\alpha(i)}\}$} \label{cal6} \end{figure} Below we give two explicit expressions of $\omega$ based on different choices of the map $\alpha$. We emphasize that although the expressions look entirely different from each other, they are the same map. As before, set $x=({\rm A}_1,\ldots, {\rm A}_n)\in {\rm Conf}^{\cal O}_n({\cal A})$. \vskip 2mm 1. Let $\alpha(k)=k-1$. It provides frames $\{{\rm A}_i, {\rm B}_{i-1}\}$, see the first graph of Fig \ref{cal6}. Set \begin{equation} \label{8.27.10.33h} g_k: = g_{\{{\rm U}, {\rm B}^{-}\}}(\{{\rm A}_{k}, {\rm B}_{k-1}\}, \{{\rm A}_{k+1}, {\rm B}_{k}\})\stackrel{*}{=} {u^{{\rm A}_k}_{{\rm B}_{k-1}, {\rm B}_{k+1}}}h_{{\rm A}_k, {\rm A}_{k+1}}\overline w_0. \end{equation} See Fig \ref{inv2} for proof of $*$. 
By \eqref{5.25.12.1}, we get \begin{equation} \omega_k(x)=[g_{\{{\rm U},{\rm B}^{-}\}}(\{{\rm A}_1,{\rm B}_n\}, \{{\rm A}_{k}, {\rm B}_{k-1}\} )]=[g_1\ldots g_{k-1}],~~~ k\in [2,n]. \end{equation} Therefore \begin{equation} \label{3.23.12.3} \omega(x)= ([g_1],\ldots, [g_1\ldots g_{n-1}])\in {\rm Gr}^{n-1}. \end{equation} \vskip 2mm 2. Let $\alpha(k)=n$ when $k\neq n$. Let $\alpha(n)=1$. See the second graph of Fig \ref{cal6}. Set $$ b_k:=b_{{\rm B}_n}^{{\rm A}_k,{\rm A}_{k+1}}, ~k\in [1,n-2];~~~~~ h_{n}:=h_{{\rm A}_1,{\rm A}_n}. $$ Then \begin{equation} \label{3.23.12.3.lemh.i} \omega_{k}(x)=[g_{\{{\rm U},{\rm B}^-\}}(\{{\rm A}_1,{\rm B}_n\},\{{\rm A}_{k},{\rm B}_n\})]=[b_1\ldots b_{k-1}], ~ k\in [2,n-1];~~~~\omega_n(x)=[h_n]. \end{equation} Therefore \begin{equation} \label{3.23.12.3.lemh} \omega(x)=([b_1],\ldots, [b_1\ldots b_{n-2}], [h_n])\in {\rm Gr}^{n-1}. \end{equation} \subsection{The configuration spaces ${\rm Conf}({\cal A}^n, {\cal B})$ and ${\rm Conf}({\rm Gr}^n, {\cal B})$} \label{sec7.1h} Consider the scissoring morphism \begin{align} \label{8.18h.cut.map} s: {\rm Conf}({\cal A}^{m+n+1},{\cal B})&\longrightarrow {\rm Conf}({\cal A}^{m+1},{\cal B})\times {\rm Conf}({\cal A}^{n+1},{\cal B}),\nonumber\\ ({{\rm A}_1,\ldots, {\rm A}_{m+n+1}, {\rm B}_{0}})&\longmapsto ({{\rm A}_{1},\ldots, {\rm A}_{m+1}, {\rm B}_{0}})\times ({\rm A}_{m+1},\ldots, {\rm A}_{m+n+1},{\rm B}_{0}). \end{align} By Lemmas \ref{7.22.1.1s}, \ref{13.1.10.3h}, the morphism $s$ is a positive birational isomorphism. In fact, the inverse map of $s$ can be defined by ``gluing'' two configurations: \begin{equation} \label{5.7.12.1} *: {\rm Conf}^*({\cal A}^{m+1}, {\cal B}) \times {\rm Conf}^*({\cal A}^{n+1}, {\cal B}) \longrightarrow {\rm Conf}({\cal A}^{m+n+1}, {\cal B}),~~~(a,b)\longmapsto a*b.
\end{equation} By Lemma \ref{8.28.10.39hhh}, $a$ has a unique representative $\{{\rm A}_1, \ldots, {\rm A}_{m}, {\rm U}, {\rm B}^{-}\}$, $b$ has a unique representative $\{{\rm U}, {\rm A}_1',\ldots, {{\rm A}}_{n}', {{\rm B}}^{-}\}$. We define the {\it convolution product} $a *b := ({\rm A}_1,\ldots,{\rm A}_m, {\rm U}, {\rm A}_1',\ldots, {\rm A}_n' , {\rm B}^-).$ The associativity of the convolution product is clear. \begin{figure}[ht] \epsfxsize=250pt \centerline{\epsfbox{cut.eps}} \caption{A map given by scissoring a convex pentagon.} \label{cut} \end{figure} \vskip 2mm Recall $b_{k}^{ij}$ in \eqref{3.23.12.1}. Recall the morphisms $\pi_r, \pi_l$ in \eqref{13.1.12.51h}. \begin{theorem} \label{12.17.thm7h} The following morphism is a positive birational isomorphism \begin{equation} c: {\rm Conf}({\cal A}^n, {\cal B})\longrightarrow ({\rm B}^{-})^{n-1}, ~~~({\rm A}_1,\ldots, {\rm A}_n,{\rm B}_{n+1})\longmapsto (b_{n+1}^{1,2},\ldots, b_{n+1}^{i,i+1},\ldots, b_{n+1}^{n-1,n}). \end{equation} \end{theorem} \begin{proof} Scissoring the convex $(n+1)$-gon along the diagonals emanating from the vertex $n+1$, see Fig \ref{cut}, we get a positive birational isomorphism ${\rm Conf}({\cal A}^n, {\cal B})\stackrel{\sim}{\rightarrow} \big({\rm Conf}({\cal A}^2, {\cal B})\big)^{n-1}.$ The Theorem is therefore reduced to $n=2$. Recall $\alpha_2$ in Lemma \ref{7.22.1.1s}. By Lemma \ref{12.12.11.h}, it is equivalent to prove that ${\rm H}\times {\rm U} {\rightarrow}{\rm H}\times {\rm U}^{-},~(h, u)\mapsto (\beta(u)h, \eta(u))$ is a positive birational isomorphism. Since $\eta$ is a positive birational isomorphism and $\beta$ is a positive map, the Theorem follows. \end{proof} The potential ${\cal W}$ on ${\rm Conf}({\cal A}^n,{\cal B})$ induces a positive function ${\cal W}_{c}={\cal W}\circ {c}^{-1}$ on $({{\rm B}}^{-})^{n-1}$.
\begin{lemma} The function \begin{equation} \label{12.12.11.2h} {\cal W}_{c}(b_1,\ldots, b_{n-1})=\sum_{j=1}^{n-1}\sum_{i\in I}\big(\frac{1}{{\cal L}_i^{-}\circ\pi_l(b_j)}+\frac{1}{{\cal R}_i^{-}\circ\pi_r(b_j)}\big). \end{equation} \end{lemma} \begin{proof} Note that $$ {\cal W}({\rm A}_1,\ldots, {\rm A}_n,{\rm B}_{n+1})=\sum_{j=1}^{n-1}{\cal W}({\rm A}_{j}, {\rm A}_{j+1},{\rm B}_{n+1})=\sum_{j=1}^{n-1}\big(\chi(u_{n+1,j+1}^j)+\chi(u_{j, n+1}^{j+1})\big). $$ The Lemma follows directly from Lemma \ref{lem2}, \eqref{lem2.eta} and Lemma \ref{12.12.11.h}. \end{proof} \begin{figure}[ht] \epsfxsize=4in \centerline{\epsfbox{AAAB.eps}} \caption{Frames assigned to $({\rm A}_1,\ldots, {\rm A}_n, {\rm B}_{n+1})$.} \label{AAB} \end{figure} \vskip 2mm Define \begin{equation} \label{13.1.26.337h} \tau: {\rm Conf}^{\cal O}({\cal A}^n, {\cal B})\longrightarrow {\rm Gr}^{n-1},~~~({\rm A}_1,\ldots, {\rm A}_n, {\rm B}_{n+1})\longmapsto \{[b_{n+1}^{1,2}],\ldots, [b_{n+1}^{1,n}]\}. \end{equation} Consider the projection $$ i_b: {\rm Gr}^{n-1}\longrightarrow{\rm Conf}({\rm Gr}^{n}, {\cal B}),~~~\{{\rm L}_2,\ldots, {\rm L}_{n}\}\longmapsto ([1], {\rm L}_2,\ldots, {\rm L}_{n}, {\rm B}^{-}). $$ Recall the map ${\kappa}$ in \eqref{5.12.12.2}. As illustrated by Fig \ref{AAB}, we get \begin{lemma} \label{12.15.12h35m} When ${\rm J}={\rm I}=[1,n]\subset[1,n+1]$, we have $\kappa=i_b\circ \tau$. \end{lemma} \subsection{The configuration spaces ${\rm Conf}({\cal B}, {\cal A}^n, {\cal B})$ and ${\rm Conf}({\cal B}, {\rm Gr}^n, {\cal B})$} Recall $r^{ij}_k$ in \eqref{13.3.16.1101}. Similarly, there is a positive birational isomorphism \begin{equation} p: {\rm Conf}({\cal B}, {\cal A}^{n}, {\cal B})\longrightarrow {\rm U}^{-}\times({{\rm B}}^{-})^{n-1}, ~~~({\rm B}_1, {\rm A}_2, \ldots,{\rm A}_{n+1},{\rm B}_{n+2})\longmapsto (r_{n+2}^{1, 2}, b_{n+2}^{2, 3},\ldots, b_{n+2}^{n,n+1}).
\end{equation} The potential ${\cal W}$ on ${\rm Conf}({\cal B},{\cal A}^{n}, {\cal B})$ induces a positive function ${\cal W}_p:={\cal W}\circ p^{-1}$ on ${{\rm U}}^{-}\times ({{\rm B}}^{-})^{n-1}$. We have \begin{equation} \label{12.12.1.2ll} {\cal W}_p(r_1, b_2,\ldots, b_{n})=\sum_{i\in I}\frac{1}{{\cal R}_i^{-}(r_1)}+ \sum_{2\leq j\leq n}\sum_{i\in I}\big(\frac{1}{{\cal L}_i^{-}\circ \pi_l(b_j)}+\frac{1}{{\cal R}_i^{-}\circ \pi_r(b_j)}\big). \end{equation} \begin{figure}[ht] \epsfxsize=4in \centerline{\epsfbox{ABB.eps}} \caption{Frames assigned to $({\rm B}_1,{\rm A}_2,\ldots, {\rm A}_{n+1},{\rm B}_{n+2})$. Here $\pi({\rm A}_{1}^*)={\rm B}_{1}$.} \label{ABB} \end{figure} \vskip 2mm Recall the map ${\kappa}$ in \eqref{5.12.12.2}. Define \begin{equation} \label{13.1.26.352h} \tau_s: {\rm Conf}_{w_0}^{\cal O}({\cal B}, {\cal A}^n,{\cal B})\longrightarrow {\rm Gr}^{n}, ~~~~({\rm B}_1,{\rm A}_2,\ldots, {\rm A}_{n+1}, {\rm B}_{n+2})\longmapsto ([r_{n+2}^{1,2}],[r_{n+2}^{1,2}b_{n+2}^{2,3}],\ldots, [r_{n+2}^{1,2}b_{n+2}^{2,n+1}]). \end{equation} Consider the projection $$ i_s: {\rm Gr}^{n}\longrightarrow {\rm Conf}_{w_0}({\cal B}, {\rm Gr}^n, {\cal B}),~~~\{{\rm L}_2,\ldots, {\rm L}_{n+1}\}\longmapsto ({\rm B},{\rm L}_2,\ldots, {\rm L}_{n+1}, {\rm B}^{-}). $$ Let $x=({\rm B}_1,{\rm A}_2,\ldots, {\rm A}_{n+1}, {\rm B}_{n+2})\in {\rm Conf}_{w_0}^{\cal O}({\cal B}, {\cal A}^n,{\cal B})$. Let ${\rm A}_{1}^*\in {\cal A}$ be the preimage of ${\rm B}_1$ such that $b_{{\rm B}_{n+2}}^{{\rm A}_{1}^*, {\rm A}_2}=r_{n+2}^{1,2}.$ As illustrated by Fig \ref{ABB}, we get \begin{lemma} \label{12.15.kappa.j} When ${\rm J}={\rm I}=[2,n+1]\subset[1,n+2]$, we have $\kappa=i_s\circ \tau_s$. \end{lemma} \section{A positive structure on the configuration space ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$} \label{sec5} \subsection{Left ${\rm G}$-torsors} \label{sec5.1} Let ${\rm G}$ be a group. Let ${X}$ be a left principal homogeneous ${\rm G}$-space, also known as a left ${\rm G}$-torsor.
Then for any $x, y \in {X}$ there exists a unique $g_{x,y}\in {\rm G}$ such that $ x= g_{x,y}y. $ Clearly, \begin{equation} \label{5.251210} g_{x,y}g_{y,z} = g_{x,z}, \qquad g_{gx,y}=gg_{x,y},~g_{x,gy}=g_{x,y}g^{-1}, ~~g\in {\rm G}. \end{equation} Given a reference point $p\in {X}$, one defines a ``$p$-distance from $x$ to $y$'': \begin{equation} \label{5.19.12.1as} g_p(x,y):= g_{p, x}g_{y, p} \in {\rm G}. \end{equation} If $i_p: X \to {\rm G}$ is the unique isomorphism of ${\rm G}$-sets such that $i_p(p)=e$, then $g_p(x,y)= i_p(x)^{-1}i_p(y)$. \begin{lemma} \label{LEMMA.4.2} One has: \begin{align} g_p(x,y)g_p(y, z) &= g_p(x,z). \label{5.25.12.1}\\ g_p(gx,gy) &= g_p(x,y),~~~~~~g\in {\rm G}. \label{5.25.12.1.ll}\\ y&=g_{p}(p,y)\cdot p. \label{ll.5.25.12.1} \end{align} \end{lemma} \begin{proof} Indeed, $$ g_p(x,y)g_p(y, z) = g_{p,x}g_{y,p}g_{p,y}g_{z,p}=g_{p,x}g_{z,p}=g_{p}(x,z),$$ $$ g_p(gx,gy) = g_{p, gx}g_{gy, p} \stackrel{(\ref{5.251210})}{=} g_{p, x}g^{-1} gg_{y, p} = g_{p, x}g_{y, p} = g_p(x,y), $$ $$ y=g_{y,p}\cdot p=g_{p,p}g_{y,p}\cdot p=g_{p}(p,y)\cdot p. $$ \end{proof} Recall ${\cal F}_{{\rm G}}$ in Definition \ref{torsorF}. From now on, we apply the above construction in the set-up $$X={\cal F}_{\rm G},\quad p=\{{\rm U},{\rm B}^{-}\}.$$ Pick a collection $\{{\rm A}_1, \ldots, {\rm A}_n\}$ representing a configuration in ${\rm Conf}_n({\cal A})$. We assign ${\rm A}_i$ to the vertices of a convex $n$-gon, so that they go clockwise around the polygon. Each oriented pair $\{{\rm A}_i, {\rm A}_j\}$ provides a frame $\{{\rm A}_i, {\rm B}_j\}$, shown on Fig \ref{frame} by an arrow with a white dot. \begin{figure}[ht] \epsfxsize=2in \centerline{\epsfbox{frame.eps}} \caption{A frame $\{{\rm A}_i,{\rm B}_j\}$.} \label{frame} \end{figure} \subsection{Basic invariants associated to a generic configuration} \label{sec5.2} We introduce several invariants that will be useful in the rest of this paper. We employ $\cdot$ to denote the action of ${\rm G}$ on (decorated) flags.
\paragraph{The invariant $u^{{\rm A}_2}_{{\rm B}_1, {\rm B}_3} \in {\rm U}$.} Let $({\rm B}_1, {\rm A}_2, {\rm B}_3)\in {\rm Conf}({\cal B,A,B})$ be a generic configuration. Set \begin{equation} \label{8.28.10.30h} u_{{{\rm B}}_1,{\rm B}_3}^{{\rm A}_2}:=g_{\{{\rm U}, {\rm B}^-\}}(\{{\rm A}_2, {\rm B}_1\},\{{\rm A}_2,{\rm B}_3\}). \end{equation} By \eqref{5.25.12.1.ll}, the invariant $u_{{\rm B}_1, {\rm B}_3}^{{\rm A}_2}$ is independent of the representative chosen. Clearly, $u_{{\rm B}_1, {\rm B}_3}^{{\rm A}_2}\in {\rm U}$. \paragraph{The invariant $h_{{\rm A}_1, {\rm A}_2}\in {\rm H}$.} Let $({\rm A}_1, {\rm A}_2)$ be a generic configuration. There is a unique element $h_{{\rm A}_1, {\rm A}_2}\in {\rm H}$ such that \begin{equation} \label{4.25.12.2} ({\rm A}_1, {\rm A}_2) = ({\rm U}, ~h_{{\rm A}_1, {\rm A}_2}\overline w_0\cdot {\rm U}). \end{equation} Using the notation (\ref{5.19.12.1as}), we have \begin{equation} \label{8.28.10.28h} h_{{\rm A}_1,{\rm A}_2}\overline{w}_0=g_{\{{\rm U},{\rm B}^-\}}(\{{\rm A}_1,{\rm B}_2\},\{{\rm A}_2,{\rm B}_1\}). \end{equation} \paragraph{The invariant $b_{\rm B_3}^{\rm A_1, A_2}\in {\rm B^{-}}$.} Let $({\rm A_1, A_2, B_3})$ be a generic configuration. Define $$ b_{\rm B_3}^{\rm A_1, A_2}:=g_{\{{\rm U}, {\rm B}^{-}\}}(\{{\rm A}_1,{\rm B}_3\},\{{\rm A}_2,{\rm B}_3\})\in {\rm B}^{-}. $$ \paragraph{Relations between basic invariants.} Let $({\rm A}_1, \ldots, {\rm A}_n) \in {\rm Conf}^*_n({\cal A})$. Set \begin{equation} \label{3.23.12.1} h_{ij}:= h_{{\rm A}_i, {\rm A}_j}\in {\rm H}, \quad u_{ik}^j := u^{{\rm A}_j}_{{\rm B}_i, {\rm B}_k}\in {\rm U}, \quad b_{k}^{ij}:=b_{{\rm B}_k}^{{\rm A}_i, {\rm A}_j}\in {\rm B}^{-}. \end{equation} We denote these invariants by dashed arrows, see Fig \ref{cal8}.
\begin{figure}[ht] \centerline{\epsfbox{cal09.eps}} \caption{Invariants of a configuration.} \label{cal8} \end{figure} { \begin{lemma} \label{3.23.12.1aa} The data (\ref{3.23.12.1}) satisfy the following relations: \begin{enumerate} \item $h_{12}\overline{w}_0h_{21}\overline{w}_0=1$. \item $u_{23}^1u_{34}^1=u_{24}^1$, in particular $u_{23}^1u_{32}^1=1$. \item $b_{4}^{12}b_{4}^{23}=b_{4}^{13}$. \item $b_{3}^{12}=u^{1}_{32}h_{12}\overline{w}_0u^{2}_{13}=h_{13}\overline{w}_0u_{12}^3\overline{w}_0^{-1}h_{23}^{-1}.$ \item $u_{32}^1h_{12}\overline{w}_0u_{13}^2h_{23}\overline{w}_0u_{21}^3h_{31}\overline{w}_0=1.$ \end{enumerate} \end{lemma} \begin{proof} We prove the first identity of 4. The others follow similarly. Let $p=\{{\rm U},{\rm B}^{-}\}$. Let $$x_1=\{{\rm A}_1, {\rm B}_3\},~x_2=\{{\rm A}_1, {\rm B}_2\}, ~x_3=\{{\rm A}_2, {\rm B}_1\}, ~x_4=\{{\rm A}_2, {\rm B}_3\}.$$ As illustrated by Figure \ref{cal8}, $$ b_{3}^{12}=g_{p}(x_1,x_4),~u^{1}_{32}=g_p(x_1,x_2),~h_{12}\overline{w}_0=g_p(x_2, x_3),~u_{13}^2=g_p(x_3, x_4). $$ By \eqref{5.25.12.1}, we get $g_{p}(x_1,x_4)=g_p(x_1,x_2)g_p(x_2,x_3)g_p(x_3,x_4)$. \end{proof} \begin{lemma} \label{8.28.10.39hhh} Let $x\in {\rm Conf}({\cal A}, {\cal A}, {\cal B})$ be a generic configuration. Then it has a unique representative $\{{\rm A}_1,{\rm A}_2,{\rm B}_3\}$ with $\{{\rm A}_1, {\rm B}_3\}=\{{\rm U}, {\rm B}^{-}\}$. Such a representative is \begin{equation} \label{8.28.10.39hh} \{{\rm U}, u_{32}^1h_{12}\overline{w}_0\cdot {\rm U}, {\rm B}^{-}\}. \end{equation} \end{lemma} \begin{figure}[ht] \centerline{\epsfbox{inv2.eps}} \caption{Invariants of a configuration $({{\rm A}}_1, {{\rm A}}_2, {{\rm B}}_3)$.} \label{inv2} \end{figure} \begin{proof} The existence and uniqueness are clear. It remains to show that it is $\eqref{8.28.10.39hh}$. By Fig \ref{inv2}, \begin{equation} \label{8.28.10.39h} g_{\{{\rm U}, {\rm B}^{-}\}}(\{{\rm A}_1,{\rm B}_3\},\{{\rm A}_2,{\rm B}_1\})=u_{32}^1h_{12}\overline{w}_0. 
\end{equation} If $\{{\rm A}_1,{\rm B}_3\}=\{{\rm U},{\rm B}^{-}\}$, then by \eqref{ll.5.25.12.1}, we get $$ \{{\rm A}_2,{\rm B}_1\}=g_{\{{\rm U}, {\rm B}^{-}\}}(\{{\rm A}_1,{\rm B}_3\},\{{\rm A}_2,{\rm B}_1\})\cdot \{{\rm U}, {\rm B}^{-}\}=\{u_{32}^1h_{12}\overline{w}_0\cdot{\rm U}, {\rm B}\}. $$ \end{proof} } Each $b\in {\rm B}^{-}$ can be decomposed as $b=y_l\cdot h=h\cdot y_r$ where $h\in {\rm H}$, $y_l,y_r\in {{\rm U}^{-}}$. Thus ${\rm B}^{-}$ has a positive structure induced by positive structures on ${\rm U}^{-}$ and ${\rm H}$. There are three positive maps \begin{equation} \label{13.1.12.51h} \pi_l, \pi_r: {\rm B}^{-}\longrightarrow {\rm U}^{-},~\pi_h : {\rm B}^{-} \longrightarrow {\rm H}, ~~\pi_l(b)=y_l, ~\pi_r(b)=y_r,~ \pi_h(b)=h. \end{equation} These maps give rise to three more invariants. \paragraph{The invariant $\mu_{{\rm B}_3}^{{\rm A}_1,{\rm A}_2}\in {\rm H}$.} For each generic $({\rm A}_1,{\rm A}_2,{\rm B}_3)$, we define \begin{equation} \mu_{{\rm B}_3}^{{\rm A}_1,{\rm A}_2}:= \pi_{h}(b_{{\rm B}_3}^{{\rm A}_1,{\rm A}_2}). \end{equation} \paragraph{The invariant $r_{{\rm B}_3}^{{\rm B}_1,{\rm A}_2}\in {\rm U}^{-}$.} For any $h\in {\rm H}$, we have \begin{equation} \label{12.12.15.1h} b_{{\rm B}_3}^{{\rm A}_1\cdot h^{-1}, {\rm A}_2}= h\cdot b_{{\rm B}_3}^{{\rm A}_1, {\rm A}_2}. \end{equation} Thus we can define \begin{equation} \label{inv.r} r_{{\rm B}_3}^{{\rm B}_1,{\rm A}_2}:=\pi_r(b_{{\rm B}_3}^{{\rm A}_1\cdot h^{-1}, {\rm A}_2})=\pi_r(b_{{\rm B}_3}^{{\rm A}_1, {\rm A}_2})\in {\rm U}^{-}. \end{equation} \paragraph{The invariant $l_{{\rm B}_3}^{{\rm A}_1, {\rm B}_2}\in {\rm U}^{-}$.} For any $h\in {\rm H}$, we have \begin{equation} \label{12.12.15.2h} b_{{\rm B}_3}^{{\rm A}_1, {\rm A}_2\cdot h}= b_{{\rm B}_3}^{{\rm A}_1, {\rm A}_2}\cdot h. \end{equation} Define \begin{equation} l_{{\rm B}_3}^{{\rm A}_1,{\rm B}_2}:=\pi_l(b_{{\rm B}_3}^{{\rm A}_1, {\rm A}_2\cdot h})=\pi_l(b_{{\rm B}_3}^{{\rm A}_1, {\rm A}_2})\in {\rm U}^{-}. 
\end{equation} { For simplicity, we set \begin{equation} \label{13.3.16.1101} \mu_{k}^{ij}:=\mu_{{\rm B}_k}^{{\rm A}_i, {\rm A}_j}\in {\rm H}, ~~~r_{k}^{ij}:=r_{{\rm B}_k}^{{\rm B}_i, {\rm A}_j}\in {\rm U}^{-},~~~l_{k}^{ij}:=l_{{\rm B}_k}^{{\rm A}_i, {\rm B}_j}\in{\rm U}^{-}. \end{equation} Recall that $\widetilde{u}=\overline{w}_0u^{-1}\overline{w}_0^{-1}$. By Relations 3, 4 of Lemma \ref{3.23.12.1aa}, we get \begin{equation} \mu_{4}^{12}\mu_{4}^{23}=\mu_{4}^{13}. \end{equation} \begin{equation} \label{12.12.hh} b_{3}^{12}=l_3^{12}\mu_{3}^{12}=\mu_{3}^{12} r_{3}^{12}=u_{32}^1 h_{12} \overline{w}_0 u_{13}^2=h_{13}\widetilde{u_{21}^3}h_{23}^{-1}. \end{equation} Recall the morphisms $\Phi$, $\eta$ and $\beta$ in Section \ref{sec4}. By the definition of these morphisms, we get \begin{lemma} \label{12.12.11.h} We have \begin{enumerate} \item $u_{32}^1=\Phi\big(l_3^{12}\big)$. \item $r_{3}^{12}=\eta\big(u_{13}^2\big)$. \item $\widetilde{u_{21}^{3}}={\rm Ad}_{h_{13}^{-1}}\big(l_3^{12}\big)={\rm Ad}_{h_{23}^{-1}}\big(r_3^{12}\big)$. \item $\mu_{3}^{12}=h_{12}\beta(u_{13}^2)=h_{13}h_{23}^{-1},~~~\beta(u_{13}^2)=h_{13}h_{23}^{-1}h_{12}^{-1}$. \end{enumerate} \end{lemma} \begin{proof} By \eqref{12.12.hh}, we have $$ l_3^{12}\mu_3^{12}=u_{32}^1(h_{13}\overline{w}_0u_{13}^2). $$ The first identity follows. Similarly, the second identity follows from $$ \mu_{3}^{12} r_{3}^{12}=(u_{32}^1h_{13}\overline{w}_0)u_{13}^2 $$ The third identity follows from $$ l_{3}^{12}\mu_{3}^{12}=h_{13}\widetilde{u_{21}^3}h_{23}^{-1}={\rm Ad}_{h_{13}}(\widetilde{u_{21}^3}) h_{13}h_{23}^{-1},\quad \quad \mu_{3}^{12}r_{3}^{12}=h_{13}h_{23}^{-1}{\rm Ad}_{h_{23}}(r_{3}^{12}). $$ The identity $\mu_{3}^{12}=h_{12}\beta(u_{13}^2)$ follows from $$ \mu_{3}^{12}r_{3}^{12}=u_{32}^1h_{12}\cdot (\overline{w}_0u_{13}^2). $$ The identity $\mu_3^{12}=h_{13}h_{23}^{-1}$ follows from $$ l_{3}^{12}\mu_{3}^{12}={\rm Ad}_{h_{13}}(\widetilde{u_{21}^3}) h_{13}h_{23}^{-1}. 
$$ \end{proof} \begin{lemma} \label{lem1} We have \begin{equation} \label{8.13.4a} \chi(u_{21}^3)=\sum_{i\in I}\frac{\alpha_i(h_{13})}{{\cal L}_i(u_{32}^1)}=\sum_{i\in I}\frac{\alpha_i(h_{23})}{{\cal R}_i(u_{13}^2)}. \end{equation} \begin{equation} \label{13.1.12.1h} \alpha_i(h_{12})=\alpha_{i^*}(h_{21}),~~~\forall i\in I. \end{equation} \end{lemma} \begin{proof} Using Lemmas \ref{13.1.11.31h.1}, \ref{12.12.11.h}, \ref{13.1.11.31h.2} and \ref{lem2}, we get $$ \chi(u_{21}^3)=\chi^{-}(\widetilde{u_{21}^3})=\chi^{-}\big({\rm Ad}_{h_{13}^{-1}}(l_3^{12})\big)=\sum_{i\in I}\alpha_i(h_{13})\chi_i^{-}(l_{3}^{12})=\sum_{i\in I}\frac{\alpha_i(h_{13})}{{\cal L}_i(u_{32}^1)}. $$ By the same argument, we get the other identity in \eqref{8.13.4a}. By Relation 1 of Lemma \ref{lem2}, we get $$ h_{12}=\overline{w}_0h_{21}^{-1}\overline{w}_0^{-1}\cdot s_{\rm G}. $$ Then \eqref{13.1.12.1h} follows. \end{proof} } \subsection{A positive structure on ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$} \label{proofmth1} Let ${\rm I}\subset [1,n]$ be a nonempty subset of cardinality $m$. Following \cite[Section 8]{FG1}, there is a positive structure on the configuration space ${\rm Conf}_{\rm I}({\cal A};{\cal B})$. We briefly recall it below. \vskip 3mm Let $x=(x_1,\ldots, x_n)\in {\rm Conf}_{\rm I}({\cal A}; {\cal B})$ be a generic configuration such that \begin{equation} \label{13.1.12.10h} x_i={\rm A}_i\in {\cal A}\mbox{ when } i\in{\rm I}, \text{ otherwise } x_i={\rm B}_i\in{\cal B}. \end{equation} Set ${\rm B}_j:=\pi({\rm A}_j)$ when $j\in {\rm I}$. Let $i\in {\rm I}$. For each $k\in [2,n]$, set \begin{equation} u_{k}^i(x):=u_{{\rm B}_{i+k}, {\rm B}_{i+k-1}}^{{\rm A}_i},~~~~~\mbox{ where the subscripts are modulo $n$}.\end{equation} For each pair $i, j\in {\rm I}$, recall \begin{equation} \label{13.3.15.321h} \pi_{ij}(x):=\left\{\begin{array}{lc}h_{{\rm A}_i, {\rm A}_j}, ~~&\text{ if } i<j,\\ h_{s_{{\rm G}}\cdot {\rm A}_i, {\rm A}_j},& \text { if }i>j.\end{array}\right.
\end{equation} \begin{lemma} \label{7.22.1.1s} Fix $i\in {\rm I}$. The following morphism is birational: $$ \alpha_{i}: ~{\rm Conf}_{\rm I}({\cal A}; {\cal B})\longrightarrow {\rm H}^{m-1}\times {\rm U}^{n-2},~~~x\longmapsto (\{\pi_{ij}(x)\}, \{u_k^i(x)\}),~~j\in {\rm I}-\{i\},~k\in[2,n-1]. $$ \end{lemma} {\bf Example.} Fig \ref{alpha} illustrates the map $\alpha_1$ for ${\rm I}=\{1,3,5\}\subset[1,6]$. \begin{figure}[ht] \epsfxsize=2in \centerline{\epsfbox{alpha.eps}} \caption{The map $\alpha_1$ for ${\rm I}=\{1,3,5\}\subset [1,6]$.} \label{alpha} \end{figure} \begin{proof} Assume that $i=1\in {\rm I}$. Clearly $\alpha_1$ is well defined on the subspace $$ \widetilde{\rm Conf}_{\rm I}({\cal A}; {\cal B}):=\{(x_1,\ldots, x_{n})~|~ (x_1, x_k) \text{ is generic for all } k\in[2,n]\}. $$ Note that $\widetilde{\rm Conf}_{\rm I}({\cal A}; {\cal B})$ is dense in ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$. We prove the Lemma by showing that $\alpha_1$ is a bijection from $\widetilde{\rm Conf}_{\rm I}({\cal A}; {\cal B})$ to ${\rm H}^{m-1}\times {\rm U}^{n-2}$. \vskip 2mm Let $y=(\{h_j\}, \{u_k\})\in {\rm H}^{m-1}\times {\rm U}^{n-2}$. Set $u_n':=1$. Set $u_k':=u_{n-1}\ldots u_k$ for $k\in [2,n-1].$ Let $x=(x_1,\ldots, x_n)\in \widetilde{\rm Conf}_{\rm I}({\cal A}; {\cal B})$ be such that \begin{equation} \label{9.3.2014.1.41h} x_1:={\rm U}; ~~~~ x_j:=u_j'h_j\overline{w}_0\cdot {\rm U}\in {\cal A},~ j\in {\rm I}-\{1\};~~~~x_k:=u_k'\cdot {\rm B}^{-}\in {\cal B},~ k\notin {\rm I}. \end{equation} Clearly $\alpha_1(x)=y$. Hence $\alpha_1$ is a surjection. {Let $x\in \widetilde{\rm Conf}_{\rm I}({\cal A}; {\cal B})$ be such that $\alpha_1(x)=y$. Note that $x$ has a unique representative $\{x_1,\ldots, x_n\}$ such that $\{x_1, x_n\}=\{{\rm U}, {\rm B}^-\}$ if $n\notin {\rm I}$, and $\{x_1, \pi(x_n)\}=\{{\rm U}, {\rm B}^-\}$ if $n\in {\rm I}$. By Lemma \ref{8.28.10.39hhh}, each $x_i$ is uniquely expressed by \eqref{9.3.2014.1.41h}.
The injectivity of $\alpha_1$ follows.} \end{proof} The product ${\rm H}^{m-1}\times {{\rm U}}^{n-2}$ has a positive structure induced by the ones on ${\rm H}$ and ${\rm U}$. When ${\rm I}=[1,n]$, we first introduce a positive structure on ${\rm Conf}_n({\cal A})$ such that the map $\alpha_1$ is a positive birational isomorphism. Such a positive structure is twisted cyclic invariant: \begin{theorem} [{\cite[Section 8]{FG1}}] \label{12.9.FG1} The following map is a positive birational isomorphism $$t: {\rm Conf}_n({\cal A})\stackrel{\sim}{\longrightarrow} {\rm Conf}_n({\cal A}), ~~~ ({\rm A}_1,\ldots, {\rm A}_n)\longmapsto ({\rm A}_2, \ldots, {\rm A}_{n}, {\rm A}_1\cdot s_{\rm G}).$$ \end{theorem} Each $\alpha_i$ determines a positive structure on ${\rm Conf}_n({\cal A})$. Theorem \ref{12.9.FG1} tells us that these positive structures coincide. We prove the same result for ${\rm Conf}_{\rm I}({\cal A};{\cal B})$, using the following Lemmas. \begin{lemma} \label{13.1.10.1h} Let ${\cal Y}$ be a space equipped with two positive structures denoted by ${\cal Y}^1$ and ${\cal Y}^2$. If for every rational function $f$ on ${\cal Y}$, we have $$ f \text{ is positive on } {\cal Y}^1 \Longleftrightarrow f \text{ is positive on } {\cal Y}^2, $$ then ${\cal Y}^1$ and ${\cal Y}^2$ share the same positive structure. \end{lemma} \begin{proof} It is clear. \end{proof} \begin{lemma} \label{13.1.10.2h} Let ${\cal Y}, {\cal Z}$ be a pair of positive spaces. If there are two positive maps $\gamma: {\cal Y}\rightarrow {\cal Z}$ and $\beta:{\cal Z}\rightarrow{\cal Y}$ such that $\beta\circ \gamma={\rm id}_{\cal Y}$, then for every rational function $f$ on ${\cal Y}$ we have $$ f \text{ is positive on } {\cal Y} \Longleftrightarrow \beta^*(f) \text{ is positive on } {\cal Z}. $$ \end{lemma} \begin{proof} If $f$ is positive on ${\cal Y}$, since $\beta$ is a positive morphism, then $\beta^*(f)$ is positive on ${\cal Z}$. 
If $\beta^*(f)$ is positive on ${\cal Z}$, since $\gamma$ is a positive morphism, then $\gamma^*(\beta^*(f))=f$ is positive. \end{proof} \begin{lemma} \label{13.1.10.3h} Every $\alpha_i$ $(i\in {\rm I})$ determines the same positive structure on ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$. \end{lemma} {\bf Remark.} Lemma \ref{13.1.10.3h} is equivalent to saying that for any pair $i,j\in{\rm I}$, the map $\phi_{i,j}:=\alpha_{i}\circ\alpha_{j}^{-1}$ is a positive birational isomorphism of ${\rm H}^{m-1}\times {{\rm U}}^{n-2}$. \begin{proof} Let us temporarily denote by ${\rm Conf}_{\rm I}^i({\cal A}; {\cal B})$ the space ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$ equipped with the positive structure for which $\alpha_i$ is a positive birational isomorphism. There is a projection $\beta: {\rm Conf}_n({\cal A})\rightarrow {\rm Conf}_{\rm I}({\cal A}; {\cal B})$ which maps ${\rm A}_k$ to ${\rm A}_k$ if $k\in {\rm I}$ and maps ${\rm A}_k$ to $\pi({\rm A}_k)$ otherwise. By Theorem \ref{12.9.FG1}, $\beta$ is a positive morphism for all ${\rm Conf}_{\rm I}^i({\cal A};{\cal B})$. Fix $i\in {\rm I}$. Each generic $x=(x_1,\ldots, x_n)\in {\rm Conf}_{\rm I}({\cal A}; {\cal B})$ has a unique preimage $\gamma^i(x):=({\rm A}_1,\ldots, {\rm A}_n)\in {\rm Conf}_n({\cal A})$ such that $$ \mbox{${\rm A}_j=x_j$ when $j\in {\rm I}$, otherwise ${\rm A}_j$ is the preimage of $x_j$ such that $\pi_{ij}(\gamma^i(x))=1$.} $$ \noindent Clearly $\gamma^i$ is a positive morphism from ${\rm Conf}_{\rm I}^i({\cal A}; {\cal B})$ to ${\rm Conf}_n({\cal A}).$ By definition $\beta\circ \gamma^i={\rm id}$. Let $f$ be a rational function on ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$. Let $i, j\in {\rm I}$. By Lemma \ref{13.1.10.2h}, $$ f \text{ is positive on } {\rm Conf}_{\rm I}^i({\cal A}; {\cal B}) \Longleftrightarrow \beta^*(f) \text{ is positive on } {\rm Conf}_{n}({\cal A}) \Longleftrightarrow f \text{ is positive on } {\rm Conf}_{\rm I}^j({\cal A}; {\cal B}). $$ The Lemma then follows from Lemma \ref{13.1.10.1h}.
\end{proof} Thanks to Lemma \ref{13.1.10.3h}, we introduce a canonical positive structure on ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$. From now on, we view ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$ as a positive space. \vskip 3mm Given $k\in {\mathbb Z}/n$, we define the $k$-shift of the subset ${\rm I}$ by setting $ {\rm I}(k):=\{i \in [1,n]~|~ i+k\in {\rm I}\}. $ The following Lemma is clear now. \begin{lemma} \label{12.9.FG1.g} The following map is a positive birational isomorphism $$t: {\rm Conf}_{\rm I}({\cal A};{\cal B})\stackrel{\sim}{\longrightarrow} {\rm Conf}_{{\rm I}(1)}({\cal A};{\cal B}), ~~~ (x_1,\ldots, x_n)\longmapsto (x_2, \ldots, x_{n}, x_1\cdot s_{\rm G}).$$ \end{lemma} \paragraph{An invariant definition of positive structures.} We have defined above positive structures on the configuration spaces using a pinning in ${\rm G}$, which allows one to make calculations. Let us now explain how to define positive structures on the configuration spaces without choosing a pinning. When ${\rm G}$ is of type $A_m$, such a definition is given in \cite[Section 9]{FG1}. In general, given a reduced decomposition of the longest Weyl group element $w_0=s_{i_1}\ldots s_{i_n}$, for each generic pair $\{{\rm B}, {\rm B}'\}$ of flags, there exists a unique chain $${\rm B}={\rm B}_0\stackrel{i_1}{\longrightarrow}{\rm B}_1\stackrel{i_2}{\longrightarrow}\ldots\stackrel{i_{n-1}}{\longrightarrow}{\rm B}_{n-1}\stackrel{i_n}{\longrightarrow}{\rm B}_n={\rm B}'.$$ Here ${\rm B}_{k-1}\stackrel{i_k}{\rightarrow}{\rm B}_k$ indicates that $\{{\rm B}_{k-1}, {\rm B}_k\}$ is in the position $s_{i_k}$. The positive structure of ${\rm Conf}({\cal B}, {\cal A}, {\cal B})$ can be defined via the birational map $$ {\rm Conf}({\cal B}, {\cal A}, {\cal B})\longrightarrow ({\Bbb G_m})^n, \quad ({\rm B}, {\rm A}, {\rm B}')\longmapsto (\chi^o({\rm B}_0, {\rm A}, {\rm B}_1), \chi^o({\rm B}_1,{\rm A}, {\rm B}_2),\ldots, \chi^o({\rm B}_{n-1},{\rm A}, {\rm B}_n)).
$$} Each generic pair $\{{\rm A}, {\rm A}'\}\in {\cal A}^2$ uniquely determines a pinning for ${\rm G}$ such that $$x_i(a)\in {\rm U}_{{\rm A}}, ~~~\chi_{\rm A}(x_i(a))=a, ~~~ y_i(a)\in {\rm U}_{{\rm A}'}, ~~~i\in I.$$ The pinning gives rise to a representative $\overline{w}_0\in {\rm G}$ of $w_0$. There is a unique element $h\in \pi({\rm A})\cap \pi({\rm A}')$ such that $$ {\rm A}'=h\overline{w}_0\cdot {\rm A}. $$ Such an element $h$ gives rise to a birational map from ${\rm Conf}_2({\cal A})$ to the Cartan group of ${\rm G}$, determining a positive structure on ${\rm Conf}_2({\cal A})$. The positive structures of general configuration spaces are defined via the positive structures of ${\rm Conf}_2({\cal A})$ and ${\rm Conf}({\cal B}, {\cal A},{\cal B})$.} \subsection{Positivity of the potential ${\cal W}_{\rm J}$ and proof of Theorem \ref{13.2.22.2226h}} \label{sec6.4} Let ${\rm J}\subset {\rm I}\subset[1,n]$. Consider the ordered triples $\{i,j,k\}\subset[1,n]$ such that \begin{equation} \label{13.1.11.12h} j\in {\rm J}, \text{ and } i,j,k \text{ are seated clockwise}. \end{equation} Let $x\in {\rm Conf}_{\rm I}({\cal A}; {\cal B})$ be presented by \eqref{13.1.12.10h}. Define $ p_{j;i,k}(x):=u_{{\rm B}_i,{\rm B}_k}^{{\rm A}_j}.$ In particular, we are interested in the triples $\{j-1,j, j+1\}$. Set \begin{equation} \label{fun.u.13.1.11.h} p_j(x):=p_{j;j-1,j+1}(x)=u_{{\rm B}_{j-1}, {\rm B}_{j+1}}^{{\rm A}_j}, ~~\forall j\in {\rm J}. \end{equation} \begin{lemma} \label{12.9.pos} The following morphisms are positive: \begin{itemize} \item [1.] $\pi_{ij}: {\rm Conf}_{\rm I}({\cal A};{\cal B})\longrightarrow {\rm H},~\forall ~i, j\in {\rm I}$. \item [2.] $p_{j;i,k}: {\rm Conf}_{\rm I}({\cal A}; {\cal B})\longrightarrow {\rm U}, ~\forall ~ \{i,j,k\}\in \eqref{13.1.11.12h}$. \end{itemize} \end{lemma} \begin{proof} The positivity of $\pi_{ij}$ is clear.
By Relation 2 of Lemma \ref{3.23.12.1aa}, we get $$ u_{{\rm B}_i, {\rm B}_k}^{{\rm A}_j}=u_{{\rm B}_{i}, {\rm B}_{i-1}}^{{\rm A}_j}u_{{\rm B}_{i-1},{\rm B}_{i-2}}^{{\rm A}_j}\ldots u_{{\rm B}_{k+1},{\rm B}_{k}}^{{\rm A}_j}. $$ The product map $ {\rm U}\times {\rm U}\rightarrow {\rm U}, ~(u_1, u_2)\mapsto u_1u_2 $ is positive. The positivity of $p_{j;i,k}$ follows. \end{proof} \paragraph{Positivity of the potential ${\cal W}_{\rm J}$.} Recall the positive function $\chi$ on ${\rm U}$. Let $x\in {\rm Conf}_{\rm I}({\cal A}; {\cal B})$ be a generic configuration presented by (\ref{13.1.12.10h}). By Lemma \ref{8.28.10.39hhh}, each generic triple $({\rm B}_{j-1}, {\rm A}_j, {\rm B}_{j+1})$ has a unique representative $\{{\rm B}^-,{\rm U}, u_{{\rm B}_{j-1},{\rm B}_{j+1}}^{{\rm A}_j}\cdot {\rm B}^-\}$. In this case $u_j$ in \eqref{7.20.9.8} becomes $p_j(x)$. Therefore $\chi_{{\rm A}_j}(u_j)=\chi\circ p_j(x).$ The potential ${\cal W}_{\rm J}$ of ${\rm Conf}_{\rm I}({\cal A}; {\cal B})$ becomes \begin{equation} \label{pontential.p.j} {\cal W}_{\rm J}=\sum_{j\in {\rm J}} \chi\circ p_j. \end{equation} Since the $p_j$ are positive morphisms, the positivity of ${\cal W}_{\rm J}$ follows. \vskip 2mm By Relation 2 of Lemma \ref{3.23.12.1aa}, we get \begin{equation} \label{13.1.11.11h} \chi\circ p_j=\chi\circ p_{j; j-1,i}+\chi\circ p_{j; i,k}+\chi\circ p_{j; k,j+1}. \end{equation} All summands on the right-hand side are positive functions. By \eqref{pontential.p.j}, the set ${\rm Conf}_{{\rm J}\subset{\rm I}}^{+}({\cal A};{\cal B})({\mathbb Z}^t)$ of tropical points such that ${\cal W}_{\rm J}^t\geq 0$ is the set \begin{equation} \label{13.1.11.21h} \{l\in {\rm Conf}_{\rm I}({\cal A};{\cal B})({\mathbb Z}^t)~|~ p_{j;i,k}^t(l)\in {\rm U}_{\chi}^+({\mathbb Z}^t) \mbox{ for all } \{i,j,k\} \in \eqref{13.1.11.12h}\}.
\end{equation} \paragraph{Proof of Theorem \ref{13.2.22.2226h}.} Recall the moduli space ${\rm Conf}_{{\rm J}\subset{\rm I}}^{\cal O}({\cal A}; {\cal B})$ in Definition \ref{aointegral}. \begin{lemma} \label{Lema1} A generic configuration in ${\rm Conf}_{\rm I}({\cal A}; {\cal B})({\cal K})$ is ${\cal O}$-integral relative to ${\rm J}$ if and only if $u^{{\rm A}_j}_{{\rm B}_i,{\rm B}_k} \in {\rm U}({\cal O})$ for all $\{i,j,k\}\in\eqref{13.1.11.12h}$. \end{lemma} \begin{proof} By definition ${\rm L}({\rm A}_j,{\rm B}_k)=[g_{\{{\rm A}_j, {\rm B}_k\}, \{{\rm U},{\rm B}^{-}\}}]\in {\rm Gr}$. Let $\{i,j,k\}\in$\eqref{13.1.11.12h}. Then $$ {\rm L}({\rm A}_j,{\rm B}_k)={\rm L}({\rm A}_j,{\rm B}_i) \Longleftrightarrow g_{\{{\rm A}_j, {\rm B}_i\}, \{{\rm U},{\rm B}^{-}\}}^{-1}g_{\{{\rm A}_j, {\rm B}_k\}, \{{\rm U},{\rm B}^{-}\}}=u^{{\rm A}_j}_{{\rm B}_i,{\rm B}_k}\in {\rm G}({\cal O}). $$ The Lemma is proved. \end{proof} Let $l\in{\rm Conf}_{\rm I}({\cal A}; {\cal B})({\mathbb Z}^t)$. Let $x\in {\cal C}_{l}^{\circ}$ be presented by \eqref{13.1.12.10h}. By Lemma \ref{Lema10.1.1}, $u^{{\rm A}_j}_{{\rm B}_i,{\rm B}_k} \in {\rm U}({\cal O})$ if and only if $p_{j;i,k}^t(l)\in {\rm U}_{\chi}^{+}({\mathbb Z}^t).$ Theorem \ref{13.2.22.2226h} follows from Lemma \ref{Lema1} and \eqref{13.1.11.21h}. \vskip 2mm Tropicalizing the morphism \eqref{13.3.15.321h}, we get $\pi_{ij}^t: {\rm Conf}_{\rm I}({\cal A}; {\cal B})({\mathbb Z}^t)\to {\rm H}({\mathbb Z}^t)={\rm P}.$ \begin{lemma} \label{9.21.17.56h} Let $i,j \in {\rm J}$. If $l\in {\rm Conf}_{{\rm J}\subset{\rm I}}^{+}({\cal A}; {\cal B})({\mathbb Z}^t)$, then $\pi_{ij}^t(l) \in {\rm P}^+$. \end{lemma} \begin{proof} Since $\pi_{ij}^t(l)=-w_0(\pi_{ji}^t(l)),$ we can assume that there exists $k$ such that $\{i,j, k\}\in \eqref{13.1.11.12h}$. Otherwise we switch $i$ and $j$. Set $\lambda:=\pi_{ij}^t(l)$, $u_1:=p_{i;k,j}^t(l)$, $u_2:=p_{j;i,k}^{t}(l)$.
We tropicalize \eqref{8.13.4a}: \begin{equation} \label{8.21.10.56h} \chi^t(u_2)=\min_{r\in I}\{\langle \lambda, \alpha_r\rangle-{\cal R}_{r}^t(u_1)\}. \end{equation} If $l\in \eqref{13.1.11.21h}$, then $ \chi^t(u_1)\geq 0$, $\chi^t(u_2)\geq 0.$ {By the definition of ${\cal R}_r$ and $\chi$, we get ${\cal R}_r^t(u_1)\geq \chi^t(u_1)$. Therefore ${\cal R}_r^t(u_1)\geq 0$.} Hence $$ \forall r\in I,~~~ \langle \lambda, \alpha_r\rangle\geq \langle \lambda, \alpha_r\rangle-{\cal R}_{r}^t(u_1) \geq \chi^t(u_2)\geq 0~\Longrightarrow ~\lambda\in {\rm P}^+. $$ \end{proof} \section{Proof of Theorems \ref{5.8.10.45a} and \ref{13.1.30.742h}}\label{sec7} \subsection{Lemmas} Let ${\cal Y}={\cal Y}_1\times\ldots\times{\cal Y}_k$ be a product of positive spaces. The positive structure on ${\cal Y}$ is induced by the positive structures on the ${\cal Y}_i$. Let $y_i\in {\cal Y}_i^{\circ}({\cal K})$. Let $(y_{i,1}, \ldots, y_{i,n_i})$ be the coordinates of $y_i$ in a positive coordinate system ${\bf c}_{i}$. Define the field extension \begin{equation} \label{13.2.12.3.38hh} {\mathbb Q}(y_1,\ldots, y_k):={\mathbb Q}\big({\rm in}(y_{1,1}), \ldots, {\rm in}(y_{1,n_1}),\ldots, {\rm in}(y_{k,n_k})\big). \end{equation} Thanks to \eqref{9.20.2.12}, such an extension is independent of the positive coordinate systems chosen. \vskip 2mm Recall the morphisms $\pi_l$, $\pi_r$ in \eqref{13.1.12.51h}. \begin{lemma} \label{5.7.1} Fix $i\in I$. Let $(b, c)\in ({\rm B}^{-}\times {\Bbb G}_m)^{\circ}({\cal K})$. {Recall $y_i(c)\in {\rm U}^-({\cal K})$}. Then $b':=b\cdot y_i(c)\in ({\rm B}^-)^{\circ}({\cal K})$. Moreover, if ${\rm val}({\cal R}_i^{-}\circ \pi_r(b))\leq {\rm val}(c)$, then ${\rm val}(b')={\rm val}(b)$ and ${\mathbb Q}(b',c)={\mathbb Q}(b,c).$ \end{lemma} \begin{proof} Let $b=h\cdot y$. Fix a reduced word for $w_0$ which ends with $i_m=i$. It provides a decomposition $y=y_{i_1}(c_1)\ldots y_{i_m}(c_m)$. Then $b'=h\cdot y_{i_1}(c_1)\ldots y_{i_m}(c_m+c).$ The rest is clear.
\end{proof} \begin{lemma} \label{5.7.8} Let $(b,h)\in ({\rm B}^-\times {\rm H})^{\circ}({\cal K})$. Then $b':=b\cdot h\in ({\rm B}^{-})^{\circ}({\cal K})$. Moreover, if $h\in {\rm H}({\mathbb C})$, then ${\rm val}(b')={\rm val}(b)$ and ${\mathbb Q}(b', h)={\mathbb Q}(b,h).$ \end{lemma} \begin{proof} Let $b=y\cdot h_b$. The rest is clear. \end{proof} \begin{lemma} \label{12.12.1.1h} Let $(b,p)\in ({{\rm B}^{-}}\times {\rm B}^{-})^{\circ}({\cal K})$. Assume $p\in {{\rm B}^-}({\mathbb C})$. \begin{itemize} \item[1.] If $ {\rm val}({\cal R}_i^-\circ\pi_r(b))\leq 0$ for all $i\in I$, then $b\cdot p$ is a transcendental point. Moreover $${\rm val} (b\cdot p)={\rm val}(b),~~~~~~{\mathbb Q}(b\cdot p, p)={\mathbb Q}(b,p).$$ \item[2.] If ${\rm val}({\cal L}_i^{-}\circ\pi_l(b))\leq 0$ for all $i\in I$, then $p^{-1}\cdot b$ is a transcendental point. Moreover $${\rm val}(p^{-1}\cdot b)={\rm val} (b),~~~~~~{\mathbb Q}(p^{-1}\cdot b, p) = {\mathbb Q}(b,p). $$ \end{itemize} \end{lemma} \begin{proof} Combining Lemmas \ref{5.7.1} and \ref{5.7.8}, we obtain Part 1. Part 2 follows analogously. \end{proof} \subsection{Proof of Theorem \ref{13.1.30.742h}.} Our first task is to prove Theorem \ref{13.1.30.742h} in the case when ${\rm I}=[1,n]\subset [1,n+1]$. Let ${\rm J}=\{j_1,\ldots, j_m\}\subset {\rm I}$. Recall ${\cal W}_{\rm J}$ in \eqref{13.3.1.527h}. Let $l\in {\rm Conf}({\cal A}^n,{\cal B})({\mathbb Z}^t)$ be such that ${\cal W}_{\rm J}^t(l)\geq 0$. Let ${\bf x}\in {\cal C}_l^{\circ}$. Recall the map $c$ in Theorem \ref{12.17.thm7h}. Set $c({\bf x}):=(b_1,\ldots, b_{n-1})\in ({\rm B}^{-})^{n-1}({\cal K}).$ \begin{lemma} \label{13.1.26.10.3111h} For every $i\in I$, we have \begin{itemize} \item[1.] ${\rm val}({\cal L}_i^{-}\circ \pi_l(b_j))\leq 0$ if $j\in[1,n-1]\cap {\rm J}$, \item[2.] ${\rm val}({\cal R}_i^-\circ \pi_r(b_{k-1}))\leq 0$ if $k\in[2,n]\cap {\rm J}$. \end{itemize} \end{lemma} \begin{proof} Let $j\in[1,n-1]\cap {\rm J}$.
By definition $b_j=b_{{\rm B}_{n+1}}^{{\rm A}_j,{\rm A}_{j+1}}$. By Lemmas \ref{lem2}, \ref{12.12.11.h}, we get $${\rm val}({\cal L}_i^-\circ\pi_l(b_j))=-{\rm val}(\chi_i(u_{{\rm B}_{n+1}, {\rm B}_{j+1}}^{{\rm A}_j}))\leq -\chi_{{\rm A}_j}^t(l)\leq 0.$$ The second part follows similarly. \end{proof} As illustrated by Fig \ref{AAB}, we see that $$ {\bf x}=(g_1\cdot{\rm U}, g_2\cdot {\rm U},\ldots, g_n\cdot {\rm U}, {\rm B}^-),~~~~g_1:=1,~g_j:=b_1\ldots b_{j-1},~j\in [2,n]. $$ If $j\in {\rm J}$, then ${\rm L}_j:={\rm L}(g_j\cdot {\rm U}, {\rm B}^-)=[g_j]\in {\rm Gr}$. Therefore $$ \kappa({\bf x})=(x_1,\ldots, x_n, {\rm B}^-),~~~~x_j=\left\{\begin{array}{ll}[g_j] &\text{ if }j \in {\rm J},~~\\ g_j\cdot {\rm U} &\text{ otherwise}.\end{array}\right. $$ Let $\{{\rm A}_{j_1},\ldots, {\rm A}_{j_m}\}\in {\cal A}^m({\mathbb C})$ be a generic point in the sense of algebraic geometry. Define $$ {\bf y}:=({\rm A}_1', {\rm A}_2',\ldots, {\rm A}_n', {\rm B}^-) \in {\rm Conf}({\cal A}^n; {\cal B}),~~~~{\rm A}_j'=\left\{\begin{array}{ll}g_j\cdot {\rm A}_j &\text{ if }j \in {\rm J},~~\\ g_j\cdot {\rm U} &\text{ otherwise}.\end{array}\right. $$ Let $F\in {\mathbb Q}_+\big({\rm Conf}({\cal A}^n;{\cal B})\big)$. By the very definition of $D_F$, we have $D_F\big({\kappa({\bf x})}\big)={\rm val}\big(F({\bf y})\big)$. \vskip 2mm Since $\{{\rm A}_{j_1},\ldots, {\rm A}_{j_m}\}$ is generic, it can be presented by \begin{equation} \label{13.1.26.10.4111h} \{{\rm A}_{j_1},\ldots, {\rm A}_{j_m}\}:=\{p_{j_1}\cdot {\rm U},\ldots, p_{j_m}\cdot {\rm U}\},~~~~{\bf p}=\{p_{j_1},\ldots, p_{j_m}\}\in ({\rm B}^-)^m({\mathbb C}). \end{equation} We can also assume that $({\bf x},{\bf p})$ is a transcendental point, so that \begin{equation} (c({\bf x}), {\bf p})\in \big(({\rm B}^{-})^{m+n-1}\big)^\circ ({\cal K}). \end{equation} Set $p_j=1$ for $j\notin{\rm J}$. Keep the same $p_j$ for $j\in {\rm J}$. 
Then $${\bf y}=(g_1p_1\cdot {\rm U}, \ldots, g_{n}p_n\cdot {\rm U}, {\rm B}^{-});~~~~~~ c({\bf y})=(\tilde{b}_1,\ldots, \tilde{b}_{n-1}),~\tilde{b}_j:=p_j^{-1}b_jp_{j+1}\in {{\rm B}}^{-}({\cal K}). $$ By Lemmas \ref{12.12.1.1h} and \ref{13.1.26.10.3111h}, we get \begin{align} {\mathbb Q}(c({\bf x}), {\bf p})&={\mathbb Q}(b_1,\ldots, b_{n-1}, p_{j_1},\ldots, p_{j_m})={\mathbb Q}(\tilde{b}_1,\ldots, b_{n-1}, p_{j_1},\ldots, p_{j_m})=\ldots \nonumber\\ &={\mathbb Q}(\tilde{b}_1,\ldots, \tilde{b}_{n-1}, p_{j_1},\ldots, p_{j_m})={\mathbb Q}(c({\bf y}), {\bf p}).\\ {\rm val}({b}_j)&={\rm val}(\tilde{b}_j),~~~~\forall j\in [1,n-1]. \label{13.1.30.9.44h} \end{align} Therefore $(c({\bf y}), {\bf p})\in \big(({\rm B}^{-})^{m+n-1}\big)^\circ ({\cal K}).$ Thus $c({\bf y})$ is a transcendental point. Since ${\rm val}(c({\bf y}))={\rm val}(c({\bf x}))=c^t(l)$, we get ${\bf y}\in {\cal C}_{l}^{\circ}$. By Lemma \ref{thm10.1.1.2}, ${\rm val}\big(F({\bf y})\big)=F^t(l)$. Theorem \ref{13.1.30.742h} is proved. \vskip 2mm Now consider the general case when ${\rm J}\subset{\rm I}\subset[1,n]$. Consider the positive projection $$ d_{\rm I}=p_{\rm I}\circ d: {\rm Conf}({\cal A}^n;{\cal B})\stackrel{d}{\longrightarrow} {\rm Conf}_n({\cal A})\stackrel{p_{\rm I}}{\longrightarrow} {\rm Conf}_{\rm I}({\cal A}; {\cal B}). $$ Here the map $d$ kills the last flag ${\rm B}_{n+1}$. The map ${p}_{\rm I}$ keeps ${\rm A}_i$ intact when $i\in {\rm I}$, and takes ${\rm A}_i$ to $\pi({\rm A}_i)$ otherwise. \begin{lemma} Let $l\in {\rm Conf}_{{\rm J}\subset{\rm I}}^+({\cal A};{\cal B})({\mathbb Z}^t)$. There exists $l'\in {\rm Conf}({\cal A}^n; {\cal B})({\mathbb Z}^t)$ such that ${\cal W}_{\rm J}^t(l')\geq 0$ and $d_{\rm I}^t(l')=l$. \end{lemma} \begin{proof} We prove the case when ${\rm J}$ contains $\{1, n\}$. In fact, the other cases are easier. Let $x=({\rm A}_1,\ldots, {\rm A}_n, {\rm B}_{n+1})$.
Consider a map $u: {\rm Conf}({\cal A}^n;{\cal B})\rightarrow {\rm U}$ given by $x\mapsto u_{{\rm B}_{n+1},{\rm B}_n}^{{\rm A}_1}.$ Then \begin{align} \label{13.1.14.1h04} {\cal W}_{\rm J}(x)&={\cal W}_{\rm J}({\rm A}_1,\ldots, {\rm A}_n)+{\cal W}({\rm A}_1,{\rm A}_n, {\rm B}_{n+1}) ={\cal W}_{\rm J}(d_{\rm I}(x))+\chi(u_{{\rm B}_{n+1},{\rm B}_n}^{{\rm A}_1})+\chi (u_{{\rm B}_1,{\rm B}_{n+1}}^{{\rm A}_n}) \nonumber\\ &={\cal W}_{\rm J}(d_{\rm I}(x))+\chi\big(u(x)\big)+\sum_{i\in I}\frac{\alpha_i\big(\pi_{1,n}(d_{\rm I}(x))\big)}{{\cal R}_i(u(x))}. \end{align} By Lemma \ref{9.21.17.56h}, we have $\lambda:=\pi_{1,n}^t(l)\in {\rm P}^+$. Clearly there exists $l'\in {\rm Conf}({\cal A}^n; {\cal B})({\mathbb Z}^t)$ such that $d_{\rm I}^t(l')=l$ and $u^t(l')=0\in {\rm U}({\mathbb Z}^t). $ We tropicalize \eqref{13.1.14.1h04}: $$ {\cal W}_{\rm J}^t(l')=\min\{{\cal W}_{\rm J}^t(l), ~\chi^t(0),~\min_{i\in I}\{\langle \lambda, \alpha_i\rangle-{\cal R}_i^t(0)\}\} =\min\{{\cal W}_{\rm J}^t(l), ~0,~\min_{i\in I}\{\langle \lambda, \alpha_i\rangle\}\}=0. $$ \end{proof} Let $l, l'$ be as above. Let ${\bf x}\in {\cal C}_l^\circ$. Clearly there exists ${\bf z}\in {\cal C}_{l'}^\circ$ such that $d_{\rm I}({\bf z})={\bf x}$. For any $F\in {{\mathbb Q}}_+({\rm Conf}_{\rm I}({\cal A};{\cal B}))$, we have $$ D_F(\kappa({\bf x}))=D_{F\circ d_{\rm I}}(\kappa({\bf z}))=(F\circ d_{\rm I})^t(l')=F^t\circ d_{\rm I}^t(l')=F^t(l). $$ The second identity is due to the special cases discussed before. The rest hold by definition.
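The valuation computations in this section repeatedly use two elementary facts: for generic (transcendental) elements of ${\cal K}$, the valuation is additive on products, and the valuation of a sum is the minimum of the valuations. This is precisely why tropicalization replaces $(+,\times)$ by $(\min,+)$. The following toy illustration (not part of the proof; a Python sketch with our own helper names, modelling Laurent series by finite exponent-coefficient dictionaries) makes this concrete:

```python
# Toy model of the field K: a Laurent polynomial is a dict {exponent: coefficient}.
# Illustrates the two facts underlying tropicalization: generically,
# val(f*g) = val(f) + val(g)   and   val(f+g) = min(val(f), val(g)).

def val(f):
    """Valuation: the smallest exponent carrying a nonzero coefficient."""
    return min(e for e, c in f.items() if c != 0)

def add(f, g):
    h = dict(f)
    for e, c in g.items():
        h[e] = h.get(e, 0) + c
    return {e: c for e, c in h.items() if c != 0}

def mul(f, g):
    h = {}
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] = h.get(e1 + e2, 0) + c1 * c2
    return h

f = {-1: 3, 1: 1}   # f = 3 t^(-1) + t, so val(f) = -1
g = {2: 2}          # g = 2 t^2,        so val(g) =  2

assert val(mul(f, g)) == val(f) + val(g)      # -1 + 2 = 1
assert val(add(f, g)) == min(val(f), val(g))  # min(-1, 2) = -1
```

The second identity can fail when leading terms cancel, which is exactly why the arguments above restrict to transcendental points, where no such cancellation occurs.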
\section{Cluster varieties, frozen variables and potentials} \label{seccluster} \subsection{Basics of cluster varieties} \begin{definition} A quiver ${\bf q}$ is described by the data $ (\Lambda, \Lambda_0, \{e_i\}, (\ast, \ast)), $ where \begin{enumerate} \item $\Lambda$ is a lattice, $\Lambda_0$ is a sublattice of $\Lambda$, and $\{e_i\}$ is a basis of $\Lambda$ such that $\Lambda_0$ is generated by a subset of {\it frozen basis vectors}; \item $(\ast, \ast)$ is a skew-symmetric $\frac{1}{2}{\mathbb Z}$-valued bilinear form on $\Lambda$ with $(e_i, e_j) \in {\mathbb Z}$ unless $e_i,e_j \in {\Lambda}_0$. \end{enumerate} \end{definition} Any non-frozen basis element $e_k$ provides a new quiver ${\bf q}'$, the {\it mutation of ${\bf q}$ in the direction $e_k$}. The quiver ${\bf q}'$ is defined by changing the basis $\{e_i\}$ only. The new basis $\{e'_i\}$ is defined via half-reflection of the $\{e_i\}$ along the hyperplane $(e_k, \cdot)=0$: \begin{equation} \label{12.12.04.2a} e'_i := \left\{ \begin{array}{lll} e_i + [\varepsilon_{ik}]_+e_k & \mbox{ if } & i\not = k\\ -e_k& \mbox{ if } & i = k.\end{array}\right. \end{equation} Here $[\alpha]_+:= \alpha$ if $\alpha\geq 0$ and $[\alpha]_+:=0$ otherwise. The frozen/non-frozen basis vectors of the mutated quiver are the images of the ones of the original quiver. The composition of two mutations in the same direction $k$ is an isomorphism of quivers. Set ${\varepsilon}_{ij} := (e_i, e_j)$. A {quiver} can thus be described by the data ${\bf q}=({\rm I}, {\rm I}_0,\varepsilon)$, where ${\rm I}$ (respectively ${\rm I}_0$) is the set parametrising the basis vectors (respectively frozen vectors). Formula (\ref{12.12.04.2a}) then amounts to the Fomin-Zelevinsky formula telling how the $\varepsilon$-matrix changes under mutations.
\begin{equation} \label{5.11.03.6} \varepsilon'_{ij} := \left\{ \begin{array}{lll} - \varepsilon_{ij} & \mbox{ if $k \in \{i,j\}$} \\ \varepsilon_{ij} & \mbox{ if $\varepsilon_{ik} \varepsilon_{kj} \leq 0, \quad k \not \in \{i,j\}$} \\ \varepsilon_{ij} + |\varepsilon_{ik}| \cdot \varepsilon_{ kj}& \mbox{ if $\varepsilon_{ik} \varepsilon_{kj} > 0, \quad k \not \in \{i,j\}.$}\end{array}\right. \end{equation} We assign to every quiver ${\bf q}$ two sets of coordinates, each parametrised by the set ${\rm I}$: the ${\cal X}$-coordinates $\{X_i\}$, and the ${\cal A}$-coordinates $\{A_i\}$. Given a mutation of quivers $\mu_k: {\bf q} \longmapsto {\bf q}'$, the cluster coordinates assigned to these quivers are related as follows. Denote the cluster coordinates related to the quiver ${\bf q'}$ by $\{X'_i\}$ and $\{A'_i\}$. Then \begin{equation} \label{5.11.03.1a} A_{k}A'_{k} := \quad \prod_{j| \varepsilon_{kj} >0} A_{j}^{\varepsilon_{kj}} + \prod_{j| \varepsilon_{kj} <0} A_{j}^{-\varepsilon_{kj}}; \qquad A'_{i} = A_{i}, \quad i \not = k. \end{equation} If any of the sets $\{j| \varepsilon_{kj} >0\}$ or $\{j| \varepsilon_{kj} < 0\}$ is empty, the corresponding monomial is $1$. \begin{equation} \label{5.11.03.1x} X'_{i} := \left\{\begin{array}{ll} X_k^{-1}& \mbox{ if } i=k \\ X_i(1+X_k^{-{\rm sgn} (\varepsilon_{ik})})^{-\varepsilon_{ik}} & \mbox{ if } i\neq k, \end{array} \right. \end{equation} The tropicalizations of these transformations are \begin{equation} \label{5.11.03.1atr} a'_{k} := - a_{k}+ {\rm min}\left\{\sum_{j| \varepsilon_{kj} >0} {\varepsilon_{kj}}a_{j}, \sum_{j| \varepsilon_{kj} <0} -\varepsilon_{kj}a_{j}\right\}; \qquad a'_{i} = a_{i}, \quad i \not = k. \end{equation} \begin{equation} \label{5.11.03.1xtr} x'_{i} := \left\{\begin{array}{ll} -x_k& \mbox{ if } i=k \\ x_i-\varepsilon_{ik}{\rm min}\{0, -{\rm sgn} (\varepsilon_{ik})x_k\} & \mbox{ if } i\neq k, \end{array} \right. 
\end{equation} Cluster transformations are transformations of cluster coordinates obtained by composing mutations. Cluster ${\cal A}$-coordinates and mutation formulas (\ref{12.12.04.2a}) and (\ref{5.11.03.1a}) are the main ingredients of the definition of cluster algebras \cite{FZI}. Cluster ${\cal X}$-coordinates and mutation formulas (\ref{5.11.03.1x}) describe a dual object, introduced in \cite{FG2} under the name {\it cluster ${\cal X}$-variety}. \paragraph{The cluster volume forms \cite{FG5}.} Given a quiver ${\bf q}$, consider the volume forms $$ {\rm Vol}^{\bf q}_{\cal A}:= d\log A_1 \wedge \ldots \wedge d\log A_n, ~~~~ {\rm Vol}^{\bf q}_{\cal X}:= d\log X_1 \wedge \ldots \wedge d\log X_n. $$ Cluster transformations preserve them up to a sign: given a mutation ${\bf q} \longmapsto {\bf q}'$, we have $$ {\rm Vol}^{\bf q'}_{\cal A} = - {\rm Vol}^{\bf q}_{\cal A}, \qquad {\rm Vol}^{\bf q'}_{\cal X} = - {\rm Vol}^{\bf q}_{\cal X}. $$ Denote by ${\rm Or}_\Lambda$ the two-element set of orientations of a rank $n$ lattice $\Lambda$, given by expressions $l_1\wedge ...\wedge l_n$ where $\{l_i\}$ form a basis of $\Lambda$. {\it An orientation ${\rm or}_\Lambda$ of $\Lambda$} is a choice of one of its elements. Given a basis $\{e_i\}$ of $\Lambda$, we define its sign ${\rm sign}(e_1, ..., e_n)$ by $ e_1\wedge ...\wedge e_n = {\rm sign}(e_1, ..., e_n){\rm or}_\Lambda. $ A quiver mutation changes the sign of the basis, and the sign of each of the cluster volume forms. So there is a definition of the cluster volume forms which is invariant under cluster transformations. \begin{definition} Choose an orientation ${\rm or}_\Lambda$ for a quiver ${\bf q}$. Then in any quiver obtained from ${\bf q}$ by mutations, the cluster volume forms are given by $$ {\rm Vol}_{\mathcal A}= {\rm sign}(e_1, ..., e_n) d\log A_1 \wedge \ldots \wedge d\log A_n, ~~~~{\rm Vol}_{\mathcal X}= {\rm sign}(e_1, ..., e_n) d\log X_1 \wedge \ldots \wedge d\log X_n.
$$ \end{definition} \paragraph{Residues of the cluster volume form ${\rm Vol}_{\mathcal A}$ and frozen variables.} Take a space $M$ equipped with a cluster ${\cal A}$-coordinate system $\{A_i\}$. \begin{lemma} \label{nonfr} Let us assume that $k\in {\rm I}-{\rm I}_0$ is nonfrozen, and $\varepsilon_{kj}\not =0$ for some $j$. Then \begin{equation} \label{reszero} {\rm Res}_{A_k=0}({\rm Vol}_{\cal A})=0. \end{equation} \end{lemma} \begin{proof} We have $ {\rm Res}_{A_k=0}({\rm Vol}_{\cal A}) = \pm \bigwedge_{i\not = k}d\log A_i. $ Since $k$ is nonfrozen, there is an exchange relation (\ref{5.11.03.1a}). It implies a monomial relation on the locus $A_k=0$: $\prod_{j} A_{j}^{\varepsilon_{kj}}= -1.$ Since $\varepsilon_{kj}$ is not identically zero, this monomial is nontrivial. Thus $\bigwedge_{i\not = k}d\log A_i =0$ at the $A_k=0$ locus. \end{proof} \begin{corollary} \label{9.12.14.1} A coordinate $A_k$, with $\varepsilon_{kj}\not =0$ for some $j$, can be nonfrozen only if we have (\ref{reszero}), i.e. the functions $A_1, ..., \widehat A_k, ... , A_n$ become dependent on every component of the $A_k=0$ locus. \end{corollary} If we define a cluster algebra axiomatically, without referring to a particular space on which it is realised, then any subset of an initial quiver can be declared to be the frozen subset. However if a cluster algebra is realised geometrically, we do not have much freedom in the definition of frozen variables, as Corollary \ref{9.12.14.1} shows. This leads to the following geometric definition of the frozen coordinates. \begin{definition} \label{9.12.14.2} Let $M$ be a space equipped with a cluster ${\cal A}$-coordinate system. Then a cluster variable ${\rm A}$ is a frozen variable if and only if the residue form ${\rm Res}_{{\rm A}}({\rm Vol}_{\cal A})$ is not zero. \end{definition} \paragraph{Non-negative real points for a cluster algebra.} The space of positive real points of any positive space is well defined. 
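As an illustration, the mutation rules recalled above are purely combinatorial and easy to experiment with. The following Python sketch (illustrative only; the helper names are ours) implements the $\varepsilon$-matrix mutation (\ref{5.11.03.6}) and the tropical ${\cal A}$-mutation (\ref{5.11.03.1atr}), and checks on an $A_3$-type quiver that mutating twice in the same direction recovers the original data:

```python
# Illustrative sketch (helper names are ours): epsilon-matrix mutation and
# tropical A-coordinate mutation, for a skew-symmetric integer epsilon-matrix.

def mutate_eps(eps, k):
    """Fomin-Zelevinsky mutation of the epsilon-matrix in direction k (0-indexed)."""
    n = len(eps)
    new = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if k in (i, j):
                new[i][j] = -eps[i][j]
            elif eps[i][k] * eps[k][j] > 0:
                new[i][j] = eps[i][j] + abs(eps[i][k]) * eps[k][j]
            else:
                new[i][j] = eps[i][j]
    return new

def mutate_a_trop(a, eps, k):
    """Tropical A-mutation in direction k: a'_k = -a_k + min(pos, neg)."""
    pos = sum(eps[k][j] * a[j] for j in range(len(a)) if eps[k][j] > 0)
    neg = sum(-eps[k][j] * a[j] for j in range(len(a)) if eps[k][j] < 0)
    return [(-a[j] + min(pos, neg)) if j == k else a[j]
            for j in range(len(a))]

# A_3-type quiver 1 -> 2 -> 3; mutate at the middle vertex (index 1).
eps = [[0, 1, 0], [-1, 0, 1], [0, -1, 0]]
assert mutate_eps(mutate_eps(eps, 1), 1) == eps  # isomorphism of quivers

a = [2, 5, 1]
a2 = mutate_a_trop(a, eps, 1)                    # gives [2, -4, 1]
assert mutate_a_trop(a2, mutate_eps(eps, 1), 1) == a
```

Here the frozen/non-frozen distinction enters only through which directions $k$ one is allowed to mutate; the transformation rules themselves are exactly (\ref{5.11.03.6}) and (\ref{5.11.03.1atr}).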
Let us define the space of non-negative real points for a cluster algebra. Let $\{A_i^{\bf q}\}$, $i \in {\rm I}$, be the set of all cluster coordinates in a given quiver ${\bf q}$. The cluster algebra ${\cal O}_{\rm aff}({\cal A})$ is the algebra generated by the formal variables $\{A_i^{\bf q}\}$, for all quivers ${\bf q}$ related by mutations to a given one, modulo the ideal generated by exchange relations (\ref{5.11.03.1a}): \begin{equation} \label{affcv} {\cal O}_{\rm aff}({\cal A}):= \frac{{\mathbb Z}[A_i^{\bf q}]}{(\mbox{\rm exchange relations})}. \end{equation} This ring is not necessarily finitely generated. Let ${\cal A}_{\rm aff}$ be its spectrum. Then the points of ${\cal A}_{\rm aff}({\mathbb R}_{\geq 0})$ are just the collections of non-negative real numbers $\{a_i^{\bf q}\in {\mathbb R}_{\geq 0}\}$ satisfying the exchange relations. The {\it positive boundary} is defined as the complement to the set of positive real points: $$ \partial{\cal A}_{\rm aff}({\mathbb R}_{\geq 0}):= {\cal A}_{\rm aff}({\mathbb R}_{\geq 0}) - {\cal A}_{\rm aff}({\mathbb R}_{> 0}). $$ Let $A_{f}$ be a frozen variable. Then $\{A_{f} =0\} \cap \partial{\cal A}_{\rm aff}({\mathbb R}_{\geq 0})$ is of real codimension one in ${\cal A}_{\rm aff}({\mathbb R}_{\geq 0})$. Indeed, the frozen ${\cal A}$-cluster coordinates do not mutate, and so the codimension one domain given by the points with the coordinates $A_{f} =0$ and $A^{\bf q}_{j} >0$ for all $j$ different from $f$ is a part of the intersection. Let $A_k^{\bf q}$ be a non-frozen variable. It is likely, although we did not prove this, that in many cases \begin{equation} \label{claimposi} \{A_k^{\bf q} =0\} \cap \partial{\cal A}_{\rm aff}({\mathbb R}_{\geq 0}) ~~ \mbox{is of real codimension $\geq 2$ in} ~~{\cal A}_{\rm aff}({\mathbb R}_{\geq 0}). 
\end{equation} Indeed, the exchange relation for the $A_k^{\bf q}$, restricted to the $A_k^{\bf q} =0$ hyperplane, reads $$ 0 \cdot A_k^{\bf q'} = \prod_{j| \varepsilon_{kj} >0} A_{j}^{\varepsilon_{kj}} + \prod_{j| \varepsilon_{kj} <0} A_{j}^{-\varepsilon_{kj}}. $$ So both monomials on the right, being non-negative, are zero; note that each product is over a non-empty index set, since an empty product would contribute $1$, contradicting the $0$ on the left. So we get at least two different cluster coordinates equal to zero. It is easy to see that then in any cluster coordinate system at least two of the cluster coordinates are zero. \subsection{Frozen variables, partial compactification $\widehat {\cal A}$, and potential on the ${\cal X}$-space} \label{potfrv} \paragraph{Potential on the ${\cal X}$-space.} \begin{lemma} Any frozen $f\in {\rm I}_0$ gives rise to a tropical point $l_f\in {\cal A}({\mathbb Z}^t)$ such that in any cluster ${\cal A}$-coordinate system all tropical ${\cal A}$-coordinates except $a_f$ are zero, and $a_f=1$. \end{lemma} \begin{proof} Pick a cluster ${\cal A}$-coordinate system $\alpha=\{{\rm A}_f, \ldots\}$ starting from the coordinate $A_f$. Consider a tropical point in ${\cal A}({\mathbb Z}^t)$ with the coordinates $(1,0,\ldots, 0)$. It is clear from (\ref{5.11.03.1atr}) that the coordinates of this point are invariant under mutations at non-frozen vertices. Indeed, at least one of the two quantities we minimize in (\ref{5.11.03.1atr}) is zero, and the other must be non-negative. \end{proof} \paragraph{The potential.} Let us assume that there are canonical maps, implied by the cluster Duality Conjectures for the dual pair $({\cal A}, {\cal X}^\vee)$ of cluster varieties: $$ \mathbb{I}_{\cal A}: ~~~{\cal A}({\mathbb Z}^t)\stackrel{}{\longrightarrow}{\Bbb L}_+({\cal X}^\vee), ~~~~ \mathbb{I}_{\cal X}: ~~~{\cal X}^\vee({\mathbb Z}^t)\stackrel{}{\longrightarrow}{\Bbb L}_+({\cal A}). 
$$ Here ${\Bbb L}_+({\cal X}^\vee)$ and ${\Bbb L}_+({\cal A})$ are the sets of universally Laurent functions. \begin{definition} Let us assume that for each frozen $f\in {\rm I}_0$ there is a function $$ {\cal W}_{{\cal X}^\vee,f}:=\mathbb{I}_{{\cal A}}(l_f)\in {\Bbb L}_+({\cal X}^\vee) $$ predicted by the Duality Conjectures. Then the potential on the space ${\cal X}$ is given by the sum $$ {\cal W}_{{\cal X}^\vee}:=\sum_{f\in {\rm I}_0}{\cal W}_{{\cal X}^\vee,f}. $$ \end{definition} \paragraph{Partial compactifications of the ${\cal A}$-space.} Given any subset ${\rm I}_0'\subset {\rm I}_0$, we can define a partial completion ${\cal A}\bigsqcup_{f\in {\rm I}_0'} {\rm D}_f$ of ${\cal A}$ by attaching to ${\cal A}$ the divisor ${\rm D}_f$ corresponding to the equation ${\rm A}_f=0$ for each $f \in {\rm I}_0'$. The duality should look like $$ ({\cal A}\bigsqcup_{f\in {\rm I}_0'} {\rm D}_f) \longleftrightarrow ({\cal X}^\vee, \sum_{f\in {\rm I}_0'}{\cal W}_f). $$ The order of pole of $\mathbb{I}_{\cal X}(l)$ at the divisor ${\rm D}_f$ should be equal to ${\cal W}_f^t(l)$. In particular, $\mathbb{I}_{\cal X}(l)$ extends to ${\cal A}\bigsqcup {\rm D}_f$ if and only if it is in the subset $\{l\in {\cal X}^\vee({\mathbb Z}^t)~|~ {\cal W}_f^t(l)\geq 0\} \subset {\cal X}^\vee({\mathbb Z}^t)$. \paragraph{Canonical tropical points of the ${\cal X}$-space.} Let $i \in {\rm I}$. Given a cluster ${\cal X}$-coordinate system, consider a point $t_i \in {\cal X}({\mathbb Z}^t)$ with the coordinates $\varepsilon_{ji}$, $j\in {\rm I}$. \begin{lemma} The point $t_i$ is invariant under mutations of cluster ${\cal X}$-coordinate systems. So there is a point $t_i \in {\cal X}({\mathbb Z}^t)$ which in any cluster ${\cal X}$-coordinate system has coordinates $\varepsilon_{ji}$, $j\in {\rm I}$. 
\end{lemma} \begin{proof} Given a mutation in the direction of $k$, let us compare, using (\ref{5.11.03.1xtr}), the rule how the ${\cal X}$-coordinates $\{\varepsilon_{ji}\}$, $j\in {\rm I}$ change with the mutation formulas (\ref{5.11.03.6}) for the matrix $\varepsilon_{ij}$. Let us assume that $k \not \in \{i,j\}$. Then, due to formula (\ref{5.11.03.1xtr}) for mutation of tropical ${\cal X}$-points, we have to prove that \begin{equation} \label{ef} \varepsilon'_{ji} \stackrel{?}{=}\varepsilon_{ji} - \varepsilon_{jk}{\rm min}\{0, -{\rm sgn} (\varepsilon_{jk}) \varepsilon_{ki}\}. \end{equation} Let us assume first that $\varepsilon_{jk}\varepsilon_{ki}<0$. Then ${\rm sgn} (-\varepsilon_{jk}) \varepsilon_{ki}>0$. So ${\rm min}\{0, {\rm sgn} (-\varepsilon_{jk}) \varepsilon_{ki}\}=0$, and the right hand side is $\varepsilon_{ji}$. This agrees with $\varepsilon'_{ji}= \varepsilon_{ji}$, see (\ref{5.11.03.6}), in this case. If $\varepsilon_{jk}\varepsilon_{ki}>0$, then ${\rm sgn} (-\varepsilon_{jk}) \varepsilon_{ki}<0$. So the right hand side is $$ \varepsilon_{ji} - \varepsilon_{jk}{\rm min}\{0, {\rm sgn} (-\varepsilon_{jk}) \varepsilon_{ki}\} = \varepsilon_{ji} - \varepsilon_{jk}{\rm sgn} (-\varepsilon_{jk}) \varepsilon_{ki} = \varepsilon_{ji} + |\varepsilon_{jk}| \varepsilon_{ki}. $$ Comparing with (\ref{5.11.03.6}), we see that in both cases we get the expected formula (\ref{ef}). Finally, if $k \in \{i,j\}$, then $\varepsilon'_{ji} = -\varepsilon_{ji}$, and by formula (\ref{5.11.03.1xtr}), we also get $-\varepsilon_{ji}$. \end{proof} Let us assume that, for each frozen $f\in {\rm I}_0$, there is a function $\mathbb{I}_{{\cal X}}(t_f)\in {\Bbb L}_+({\cal A}^\vee)$ predicted by the duality conjectures. Then we conjecture that in many situations there exist monomials $M_f$ of frozen ${\cal A}$-coordinates such that the potential on the space ${\cal A}$ is given by $$ {\cal W}_{{\cal A}^\vee}:= \sum_{f\in {\rm I}_0}M_f\cdot \mathbb{I}_{{\cal X}}(t_f). 
$$ \section{Introduction} \label{sec1} \subsection{Geometry of canonical bases in representation theory} \label{sec1.1} \subsubsection{Configurations of flags and parametrization of canonical bases} \label{sec1.1.1} Let ${\rm G}$ be a split semisimple simply-connected algebraic group over ${\mathbb Q}$. There are several basic vector spaces studied in the representation theory of the Langlands dual group ${\rm G}^L$: \begin{enumerate} \item The weight $\lambda$ component $U({\cal N}^L)^{(\lambda)}$ in the universal enveloping algebra $U({\cal N}^L)$ of the maximal nilpotent Lie subalgebra in the Lie algebra of ${\rm G}^L$. \item The weight $\mu$ subspace $V^{(\mu)}_\lambda$ in the highest weight $\lambda$ representation $V_\lambda$ of ${\rm G}^L$. \item The tensor product invariants $ (V_{\lambda_1}\otimes ... \otimes V_{\lambda_n})^{{\rm G}^L}.$ \item The weight $\mu$ subspaces in the tensor products $V_{\lambda_1}\otimes ... \otimes V_{\lambda_n}$. \end{enumerate} Calculation of the dimensions of these spaces, in the cases 1)-3), is a fascinating classical problem, which led to Weyl's character formula and Kostant's partition function. \vskip 3mm The first examples of special bases in finite dimensional representations are Gelfand-Tsetlin's bases \cite{GT1}, \cite{GT2}. Other examples of special bases were given by De Concini-Kazhdan \cite{DCK}. The {\it canonical bases} in the spaces above were constructed by Lusztig \cite{L1}, \cite{L3}. Independently, canonical bases were defined by Kashiwara \cite{Ka}. Canonical bases in representations of ${\rm GL}_3, {\rm Sp}_4$ were defined by Gelfand-Zelevinsky-Retakh \cite{GZ}, \cite{RZ}. Closely related, but in general different bases were considered by Nakajima \cite{N1}, \cite{N2}, Malkin \cite{Ma}, Mirkovi\'{c}-Vilonen \cite{MV}, and extensively studied afterwards. Abusing terminology, we also call them canonical bases. 
It was discovered by Lusztig \cite{L} that, in the cases 1)-2), the sets parametrising canonical bases in representations of the group ${\rm G}$ are intimately related to the Langlands dual group ${\rm G}^L$. Kashiwara discovered in the cases 1)-2) an additional {\it crystal structure} on these sets, and Joseph proved a rigidity theorem \cite{J} asserting that, equipped with the crystal structure, the sets of parameters are uniquely determined. \vskip 3mm One of the results of this paper is a uniform geometric construction of the sets parametrizing all of these canonical bases, which leads to a natural uniform construction of canonical bases parametrized by these sets in the cases 2)-4). In particular, we get a new canonical basis in the case 4), generalizing the Mirkovi\'{c}-Vilonen (MV) basis in $V_\lambda$. To explain our set-up, let us recall some basic notions. \vskip 3mm A {\it positive space} ${\cal Y}$ is a space, which could be a stack whose generic part is a variety, equipped with a {\it positive atlas}. The latter is a collection of rational coordinate systems with subtraction-free transition functions between any pair of the coordinate systems. Therefore the set ${\cal Y}({\mathbb Z}^t)$ of the {\it integral tropical points} of ${\cal Y}$ is well defined. We review all this in Section \ref{sec2.1.2}. Let $({\cal Y}, {\cal W})$ be a {\it positive pair} given by a positive space ${\cal Y}$ equipped with a positive rational function ${\cal W}$. Then one can tropicalize the function ${\cal W}$, getting a ${\mathbb Z}$-valued function $$ {\cal W}^t: {\cal Y}({\mathbb Z}^t)\longrightarrow {\mathbb Z}. $$ Therefore a positive pair $({\cal Y}, {\cal W})$ determines a set of {\it positive integral tropical points}: \begin{equation} \label{postropintp} {\cal Y}_{\cal W}^{+}({\mathbb Z}^t):=\{l\in {\cal Y}({\mathbb Z}^t)~|~{\cal W}^t(l)\geq 0\}. \end{equation} We usually omit ${\cal W}$ in the notation and denote the set by ${\cal Y}^{+}({\mathbb Z}^t). 
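\vskip 2mm To illustrate the tropicalization procedure on a toy example (a sketch, not taken from the main text; we use the convention in which addition tropicalizes to $\min$ and multiplication to addition, so the opposite $\max$-plus convention would reverse the inequalities below), take ${\cal Y}={\mathbb G}_m^2$ with coordinates $(x,y)$ and the positive rational function $$ {\cal W}(x, y) = x + \frac{y}{x}, \qquad \mbox{so that} \qquad {\cal W}^t(a, b) = {\rm min}\{a, \ b-a\}, \qquad (a,b)\in {\cal Y}({\mathbb Z}^t)={\mathbb Z}^2. $$ The set of positive integral tropical points is then the cone ${\cal Y}_{\cal W}^{+}({\mathbb Z}^t)=\{(a,b)\in {\mathbb Z}^2~|~ a\geq 0, \ b \geq a\}$.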
To introduce the positive pairs $({\cal Y}, {\cal W})$ which play the basic role in this paper, we need to review some basic facts about flags and decorated flags in ${\rm G}$. \paragraph{Decorated flags and associated characters.} Below ${\rm G}$ is a split reductive group over ${\mathbb Q}$. Recall that the {\it flag variety} ${\cal B}$ parametrizes Borel subgroups in {\rm G}. Given a Borel subgroup {\rm B}, one has an isomorphism ${\cal B} = {\rm G/B}$. Let ${\rm G'}$ be the adjoint group of ${\rm G}$. The group ${\rm G}'$ acts by conjugation on pairs $({\rm U}, \chi)$, where $\chi: {\rm U} \to {\Bbb A}^1$ is an additive character of a maximal unipotent subgroup ${\rm U}$ in ${\rm G}'$. The subgroup ${\rm U}$ stabilizes each pair $({\rm U}, \chi)$. A character $\chi$ is {\it non-degenerate} if ${\rm U}$ is the stabilizer of $({\rm U}, \chi)$. The {\it principal affine space}\footnote{In spite of the name, it is not an affine variety.} ${\cal A}_{\rm G'}$ parametrizes pairs $({\rm U}, \chi)$ where $\chi$ is a non-degenerate additive character of a maximal unipotent group ${\rm U}$. Therefore there is an isomorphism $$ i_{\chi}: {\cal A}_{\rm G'}\stackrel{\sim}{\longrightarrow} {\rm G'/U}. $$ This isomorphism is not canonical: the coset $[{\rm U}] \in {\rm G'/U}$ does not determine a point of ${\cal A}_{\rm G'}$. To specify a point one needs to choose a non-degenerate character $\chi$. One can uniquely determine the character by using a {\it pinning}, see Sections \ref{sec2.1.1}-\ref{sec4.1}. So writing ${\cal A}_{\rm G'} = {\rm G'}/{\rm U}$ we abuse notation, keeping in mind a choice of the character ${\chi}$, or a pinning. Having said this, one defines the principal affine space ${\cal A}_{\rm G}$ for the group ${\rm G}$ by $ {\cal A}_{\rm G}:= {\rm G/U}. $ We often write ${\cal A}$ instead of ${\cal A}_{{\rm G}}$. The points of ${\cal A}$ are called {\it decorated flags} in {\rm G}. The group ${\rm G}$ acts on ${\cal A}$ from the left. 
For each ${\rm A}\in {\cal A}$, let ${\rm U}_{\rm A}$ be its stabilizer. It is a maximal unipotent subgroup of ${\rm G}$. There is a canonical projection \begin{equation} \label{5.29.12.111} \pi: {\cal A} \longrightarrow {\cal B}, ~~~~\mbox{\rm $\pi ({\rm A}):=$ the normalizer of ${\rm U}_{{\rm A}}$}. \end{equation} The projection ${\rm G}\to {\rm G'}$ gives rise to a map $p: {\cal A}_{\rm G} \longrightarrow {\cal A}_{\rm G'}$ whose fibers are torsors over the center of ${\rm G}$. Let $p({\rm A}) = ({\rm U}_{\rm A}, \chi_{\rm A})$. Here ${\rm U}_{\rm A}$ is a maximal unipotent subgroup of ${\rm G}'$. It is identified with a similar subgroup of ${\rm G}$, also denoted by ${\rm U}_{\rm A}$. So a decorated flag ${\rm A}$ in ${\rm G}$ provides a non-degenerate character of the maximal unipotent subgroup ${\rm U}_{\rm A}$ in ${\rm G}$: \begin{equation} \label{11.20.11.10a} \chi_{\rm A}: {\rm U_{\rm A}} \longrightarrow {\Bbb A}^1. \end{equation} Clearly, if $u\in {\rm U}_{{\rm A}}$, then $gug^{-1}\in {\rm U}_{g\cdot {\rm A}}$, and \begin{equation} \label{obvious} \chi_{{\rm A}}(u)=\chi_{g\cdot {\rm A}}(gug^{-1}). \end{equation} \paragraph{Example.} A flag for ${\rm SL}_m$ is a nested collection of subspaces in an $m$-dimensional vector space $V_m$ equipped with a volume form $\omega \in {\rm det}V_m^*$: $$ F_\bullet = F_0 \subset F_1 \subset \ldots \subset F_{m-1} \subset F_m, ~~~~ {\rm dim}F_i=i. $$ A decorated flag for ${\rm SL}_m$ is a flag $F_\bullet$ with a choice of non-zero vectors $f_i \in F_{i}/F_{i-1}$ for each $i=1, \ldots, m-1$, called {\it decorations}. For example, ${\cal A}_{SL_2}$ parametrises non-zero vectors in a symplectic space $(V_2, \omega)$. The subgroup preserving a vector $f \in V_2 -\{0\}$ is given by transformations $u_{f}(a): v \longmapsto v+a\,\omega(v, f) f$. Its character $\chi_f$ is given by $\chi_f(u_{f}(a))=a$. 
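\vskip 2mm As a quick consistency check (a sketch; we use the formula $u_{f}(a): v \longmapsto v+a\,\omega(v, f) f$ for the unipotent transformations fixing $f$), note that $\omega(f,f)=0$ gives $$ u_{f}(a)u_{f}(b)(v) = v + (a+b)\,\omega(v, f)\, f = u_{f}(a+b)(v), $$ so $a \longmapsto u_{f}(a)$ is a one-parameter unipotent subgroup, and $\chi_f(u_{f}(a)u_{f}(b)) = a+b$, i.e. $\chi_f$ is indeed an additive character.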
\vskip 2mm Our basic geometric objects are the following three types of configuration spaces: \begin{equation} \label{mixedconf} {\rm Conf}_n({\cal A})= {\rm G} \backslash {\cal A}^n, ~~~~ {\rm Conf}({\cal A}^n, {\cal B}):= {\rm G} \backslash ({\cal A}^n\times {\cal B}), ~~~~ {\rm Conf}({\cal B}, {\cal A}^n, {\cal B}):= {\rm G} \backslash ({\cal B}\times {\cal A}^n\times {\cal B}). \end{equation} \paragraph{The potential ${\cal W}$.} A key observation is that there is a natural rational function $$ \chi^o: {\rm Conf}({\cal B}, {\cal A}, {\cal B}) = {\rm G} \backslash ({\cal B} \times {\cal A}\times {\cal B}) \longrightarrow {\Bbb A}^1. $$ Let us explain its definition. A pair of Borel subgroups $\{{\rm B}_1, {\rm B}_2\}$ is {\it generic} if ${\rm B}_1 \cap {\rm B}_2$ is a Cartan subgroup in ${\rm G}$. A pair $\{{\rm A}_1, {\rm B}_2\}\in {\cal A} \times {\cal B}$ is generic if the pair $(\pi({\rm A}_1), {\rm B}_2)$ is generic. Generic pairs $\{{\rm A}_1, {\rm B}_2\}$ form a principal homogeneous ${\rm G}$-space. Thus, given a triple $\{ {\rm B}_{1}, {\rm A}_2,{\rm B}_{3}\} \in {\cal B} \times {\cal A} \times {\cal B}$ such that $\{{\rm A}_2, {\rm B}_3\}$ and $\{{\rm A}_2, {\rm B}_1\}$ are generic, there is a unique $u\in {\rm U}_{{\rm A}_2}$ such that \begin{equation} \label{7.20.9.8} \{ {\rm A}_2,{\rm B}_{3}\} = u \cdot \{ {\rm A}_2, {\rm B}_{1}\}. \end{equation} So we define $ \chi^o({\rm B}_{1}, {\rm A}_2,{\rm B}_{3}):= \chi_{{\rm A}_2}(u). $ Using it as a building block, we define a positive rational function ${\cal W}$ on each of the spaces (\ref{mixedconf}). 
For example, to define the ${\cal W}$ on the space ${\rm Conf}_n({\cal A})$ we start with a generic collection $\{{\rm A}_1, ..., {\rm A}_n\} \in {\cal A}^n$, set ${\rm B}_i:= \pi({\rm A}_i)$, and define ${\cal W}$ as a sum, with the indices modulo $n$: \begin{equation} \label{theWp} {\cal W}: {\rm Conf}_n({\cal A}) \longrightarrow {\Bbb A}^1, ~~~~ {\cal W}({\rm A}_1, ..., {\rm A}_n):= \sum_{i=1}^n\chi^o({\rm B}_{i-1}, {\rm A}_i, {\rm B}_{i+1}). \end{equation} Note that the potential ${\cal W}$ is well-defined when each adjacent pair $\{{\rm A}_i, {\rm A}_{i+1}\}$ is generic, meaning that $\{\pi({\rm A}_i), \pi({\rm A}_{i+1})\}$ is generic. Assigning the (decorated) flags to the vertices of a polygon, we picture the potential ${\cal W}$ as a sum of the contributions $\chi_{\rm A}$ at the ${\rm A}$-vertices (shown boldface) of the polygon, see Fig \ref{polygon}. By construction, the potential ${\cal W}_{{\rm G}}$ on the space ${\rm Conf}_n({\cal A}_{{\rm G}})$ is the pull back of the potential ${\cal W}_{{\rm G}'}$ for the adjoint group ${\rm G}'$ via the natural projection $p_{{\rm G}\to {\rm G'}}: {\rm Conf}_n({\cal A}_{{\rm G}}) \to {\rm Conf}_n({\cal A}_{{\rm G}'})$: \begin{equation} \label{potentialadj} {\cal W}_{{\rm G}} = p_{{\rm G}\to {\rm G'}}^*{\cal W}_{{\rm G}'}. \end{equation} \begin{figure}[ht] \centerline{\epsfbox{polygon.eps}} \caption{The potential ${\cal W}$ is a sum of the contributions $\chi_{\rm A}$ at the ${\rm A}$-vertices (boldface). \label{polygon}} \end{figure} Potentials for the other two spaces in (\ref{mixedconf}) are defined similarly, as the sums of the characters assigned to the decorated flags of a configuration. A formula similar to (\ref{potentialadj}) evidently holds. \paragraph{Parametrisations of canonical bases.} It was shown in \cite{FG1} that all of the spaces (\ref{mixedconf}) have natural positive structures. We show that the potential ${\cal W}$ is a positive rational function. 
We prove that the sets parametrizing canonical bases admit a uniform description as the sets ${\cal Y}_{\cal W}^{+}({\mathbb Z}^t)$ of positive integral tropical points assigned to the following positive pairs $({\cal Y}, {\cal W})$. To write the potential ${\cal W}$ we use an abbreviation $\chi_{{\rm A}_i}:= \chi^o({\rm B}_{i-1},{\rm A}_i, {\rm B}_{i+1})$, with indices mod $n$: \begin{enumerate} \item The canonical basis in $U({\cal N}^L)$: $$ {\cal Y}={\rm Conf}({\cal B}, {\cal A},{\cal B}), ~~~{\cal W}({{\rm B}}_1, {{\rm A}}_2, {\rm B}_3):=\chi_{{\rm A}_2}. $$ \item The canonical basis in $V_\lambda$: $$ {\cal Y}={\rm Conf}({\cal A}, {\cal A},{\cal B}), ~~~{\cal W}({\rm A}_1, {\rm A}_2, {\rm B}_3):=\chi_{{\rm A}_1} + \chi_{{\rm A}_2}. $$ \item The canonical basis in invariants of tensor product of $n$ irreducible ${\rm G}^L$-modules: \begin{equation} \label{ppoottee} {\cal Y}={\rm Conf}_n({\cal A}), ~~~{\cal W}({\rm A}_1,\ldots,{\rm A}_n) :=\sum_{i=1}^n\chi_{{\rm A}_i}. \end{equation} \item The canonical basis in tensor products of $n$ irreducible ${\rm G}^L$-modules: \begin{equation} \label{ppoott} {\cal Y}={\rm Conf}({\cal A}^{n+1}, {\cal B}), ~~~{\cal W}({\rm A}_1,\ldots,{\rm A}_{n+1}, {\rm B}) :=\sum_{i=1}^{n+1}\chi_{{\rm A}_i}. \end{equation} \end{enumerate} Natural decompositions of these sets, like decompositions into weight subspaces in 1) and 2), are easily described in terms of the corresponding configuration space, see Section \ref{sec2.3.2}. Let us emphasize that the canonical bases in tensor products are not the tensor products of canonical bases in irreducible representations. Similarly, in spite of the natural decomposition $$ V_{\lambda_1}\otimes ... \otimes V_{\lambda_n} = \oplus_{\lambda} V_{\lambda} \otimes (V^*_{\lambda} \otimes V_{\lambda_1}\otimes ... \otimes V_{\lambda_n})^{{\rm G}^L}, $$ the canonical basis on the left is not a product of the canonical bases on the right. 
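\vskip 2mm For orientation, here is case 3) made explicit for ${\rm G}={\rm SL}_2$ (a sketch; the same formula appears in Section \ref{sec1.2.2}, and the normalization of $\chi^o$ depends on sign conventions). Identify ${\cal A}_{{\rm SL}_2}$ with $V_2-\{0\}$, where $(V_2, \omega)$ is a two dimensional symplectic vector space, and set $\Delta_{i,j}:= \langle \omega, l_i \wedge l_j\rangle$ for a configuration of non-zero vectors $(l_1, ..., l_n)$. Then, with indices modulo $n$, $$ {\cal W}(l_1, ..., l_n) = \sum_{i=1}^n \chi^o({\rm B}_{i-1}, {\rm A}_i, {\rm B}_{i+1}) = \sum_{i=1}^n\frac{\Delta_{i-1, i+1}}{\Delta_{i-1, i}\Delta_{i, i+1}}. $$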
\vskip 2mm Descriptions of the sets parametrizing the canonical bases were known in different but equivalent formulations in the following cases: In the cases 1)-2) there is the original parametrization of Lusztig \cite{L}. In the case 3) for $n=3$, there is Berenstein-Zelevinsky's parametrization \cite{BZ}, referred to as the BZ data. We produce in the Appendix an isomorphism between our parametrization and the BZ data. The cyclic symmetry, evident in our approach, is obscure for the BZ data. The description in the $n>3$ case in 3) seems to be new. The cases 1), 2) and 4) were investigated by Berenstein and Kazhdan \cite{BK1},\cite{BK2}, who introduced and studied {\it geometric crystals} as algebraic-geometric avatars of Kashiwara's {\it crystals}. In particular, they describe the sets parametrizing canonical bases in the form (\ref{postropintp}), without using, however, configuration spaces. We show in Appendix \ref{sec9} that the space of generic configurations ${\rm Conf}^*({\cal A}^n,{\cal B})$ with the potential ${\cal W}$ is a {\it positive decorated geometric crystal} in the sense of \cite{BK3}. Interpretation of geometric crystals relevant to representation theory as moduli spaces of mixed configurations of flags makes, in our opinion, the story more transparent. \vskip 2mm To define canonical bases in representations, one needs to choose a maximal torus in ${\rm G}^L$ and a positive Weyl chamber. Usual descriptions of the sets parametrizing canonical bases require the same choice. In contrast, working with configurations we do not require such choices.\footnote{{ We would like to stress that the positive structures and potentials on configuration spaces which we employ for parametrization of canonical bases do not depend on any extra choices, like pinning etc., in the group. 
See Section \ref{proofmth1}.}} \vskip 2mm Most importantly, our parametrization of the canonical basis in tensor products invariants leads immediately to a similar set which parametrizes a linear basis in the space of functions on the moduli space ${\rm Loc}_{{\rm G^L}, S}$ of ${\rm G}^L$-local systems on a decorated surface $S$. Here the approach via configurations of decorated flags, and in particular its transparent cyclic invariance, are essential. See the example when $G=SL_2$ in Section \ref{LAMCB}. \vskip 2mm Summarizing, we understood the sets parametrizing canonical bases as the sets of positive integral tropical points of various configuration spaces. Let us show now how this, combined with the geometric Satake correspondence \cite{L4}, \cite{G}, \cite{MV}, \cite{BD}, leads to a natural uniform construction of canonical bases in the cases 2)-4). We explain in Section \ref{sec1.1.2} the construction in the case of tensor products invariants. A canonical basis in this case was defined by Lusztig \cite{L3}. However Lusztig's construction does not provide a description of the set parametrizing the basis. Our basis in tensor products is new -- it generalizes the MV basis in $V_\lambda$. We explain this in Section \ref{tensor}. \subsubsection{Constructing canonical bases in tensor products invariants} \label{sec1.1.2} We start with a simple general construction. Let ${\cal Y}$ be a positive space, understood just as a collection of split tori glued by positive birational maps \cite{FG1}. Since it is a birational notion, there is no set of $F$-points of ${\cal Y}$, where $F$ is a field. Let ${\cal K}:={\mathbb C}((t))$. In Section \ref{sec2.2.1} we introduce a set $ {\cal Y}^\circ({\cal K}). $ We call it the set of {\it transcendental ${\cal K}$-points of ${\cal Y}$}. It is a set making sense of ``generic ${\cal K}$-points of ${\cal Y}$''. 
In particular, if ${\cal Y}$ is given by a variety $Y$ with a positive rational atlas, then ${\cal Y}^\circ({\cal K})\subset Y({\cal K})$. The set ${\cal Y}^\circ({\cal K})$ comes with a natural {\it valuation map}: $$ {\rm val}: {\cal Y}^\circ({\cal K})\longrightarrow {\cal Y}({\mathbb Z}^t). $$ For any $l\in {\cal Y}({\mathbb Z}^t)$, we define the {\it transcendental cell} ${\cal C}^\circ_l$ assigned to $l$: $$ {\cal C}^\circ_l:= {\rm val}^{-1}(l)\subset {\cal Y}^\circ({\cal K}), ~~~~ {\cal Y}^\circ({\cal K}) = \coprod_{l\in {\cal Y}({\mathbb Z}^t)}{\cal C}^\circ_l. $$ Let us now go to canonical bases in invariants of tensor products of ${\rm G}^L$-modules (\ref{tmima}). The relevant configuration space is ${\rm Conf}_n({\cal A})$. The tropicalized potential ${\cal W}^t: {\rm Conf}_n({\cal A})({\mathbb Z}^t) \to {\mathbb Z}$ determines the subset of {\it positive integral tropical points}: \begin{equation} \label{121212as} {\rm Conf}^+_n({\cal A})({\mathbb Z}^t):= \{l \in {\rm Conf}_n({\cal A})({\mathbb Z}^t)~|~ {\cal W}^t(l)\geq 0\}. \end{equation} We construct a canonical basis in (\ref{tmima}) parametrized by the set (\ref{121212as}). Let ${\cal O}:={\mathbb C}[[t]]$. In Section \ref{sec2.2.2} we introduce a moduli subspace \begin{equation} \label{O-int} {\rm Conf}^{\cal O}_n({\cal A})\subset {\rm Conf}_n({\cal A})({\cal K}). \end{equation} We call it the space of {\it ${\cal O}$-integral configurations of decorated flags}. Here are its crucial properties: \begin{enumerate} \item A transcendental cell ${\cal C}^\circ_l$ of ${\rm Conf}_n({\cal A})$ is contained in ${\rm Conf}_n^{\cal O}({\cal A})$ if and only if it corresponds to a positive tropical point. 
Moreover, given a point $l\in {\rm Conf}_n({\cal A})({\mathbb Z}^t)$, one has \begin{equation} \label{features} l\in {\rm Conf}^+_n({\cal A})({\mathbb Z}^t) ~{\Longleftrightarrow} ~{\cal C}^\circ_l \subset {\rm Conf}^{\cal O}_n({\cal A}) ~{\Longleftrightarrow}~ {\cal C}^\circ_l \cap {\rm Conf}^{\cal O}_n({\cal A})\not =\emptyset. \end{equation} \item Let ${\rm Gr}:= {\rm G}({\cal K})/{\rm G}({\cal O})$ be the affine Grassmannian. It follows immediately from the very definition of the subspace (\ref{O-int}) that there is a canonical map $$ \kappa: {\rm Conf}^{\cal O}_n({\cal A})\longrightarrow {\rm Conf}_n({\rm Gr}):= {\rm G}({\cal K})\backslash ({\rm Gr})^n. $$ \end{enumerate} These two properties of ${\rm Conf}_n^{\cal O}({\cal A})$ allow us to transport points $l\in {\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$ into the top components of the stack ${\rm Conf}_n({\rm Gr})$. Namely, given a point $l\in {\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$, we define a cycle $$ {\cal M}_l := \mbox{closure of ${\cal M}_l ^\circ\subset {\rm Conf}_n({\rm Gr}),~~~~\mbox{where } {\cal M}_l^\circ:= \kappa({\cal C}^\circ_l)$}. $$ The cell ${\cal C}^\circ_l$ is defined for any $l\in {\rm Conf}_n({\cal A})({\mathbb Z}^t)$. However, as is clear from (\ref{features}), the map $\kappa$ can be applied to it if and only if $l$ is positive: otherwise ${\cal C}^\circ_l$ is not in the domain of the map $\kappa$. We prove that the map $l\longmapsto {\cal M}_l$ provides a bijection \begin{equation} \label{ltoml} {\rm Conf}^+_n({\cal A})({\mathbb Z}^t) \stackrel{\sim}{\longrightarrow} \{\mbox{\rm closures of the top dimensional components of the stack ${\rm Conf}_n({\rm Gr})$}\}. \end{equation} Here the very notion of a ``top dimensional'' component of a stack requires clarification. We will bypass this question in a moment by passing to more traditional varieties. We use a very general argument to show the injectivity of the map $l \longmapsto {\cal M}_l$. 
Namely, given a positive rational function $F$ on ${\rm Conf}_n({\cal A})$, we define a ${\mathbb Z}$-valued function $D_F$ on ${\rm Conf}_n({\rm Gr})$. It generalizes the function on the affine Grassmannian for ${\rm G}={\rm GL}_m$ and its products defined by Kamnitzer \cite{K}, \cite{K1}. We prove that the restriction of $D_F$ to ${\cal M}^\circ_l$ is equal to the value $F^t(l)$ of the tropicalization $F^t$ of $F$ at the point $l\in {\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$. Thus the map (\ref{ltoml}) is injective. \vskip 2mm Let us reformulate our result in a more traditional language. The orbits of ${{\rm G}}({\cal O})$ acting on ${\rm Gr}\times {\rm Gr}$ are labelled by dominant weights of ${{\rm G}}^L$. We write ${{\rm L}_1}\stackrel{\lambda}{\longrightarrow}{\rm L}_2$ if $({\rm L}_1, {\rm L}_2)$ is in the orbit labelled by $\lambda$. Let $[1]$ be the identity coset in ${\rm Gr}$. A set $\underline{\lambda}=(\lambda_1,\ldots, \lambda_{n})$ of dominant weights of ${\rm G}^L$ determines a {\it cyclic convolution variety}, better known as a {\it fiber of the convolution map}: \begin{equation} \label{convvarvar} {\rm Gr}_{c(\underline{\lambda})}:=\{({\rm L}_1,\ldots, {\rm L}_{n})~|~ {\rm L}_1\stackrel{\lambda_1}{\longrightarrow }{\rm L}_2 \stackrel{\lambda_2}{\longrightarrow }\ldots \stackrel{\lambda_{n}}{\longrightarrow} {\rm L}_{n+1},~{\rm L}_1={\rm L}_{n+1}=[1]\} \subset [1]\times {\rm Gr}^{n-1}. \end{equation} These varieties provide a ${\rm G}({\cal O})$-equivariant decomposition \begin{equation} \label{cvcvcv} [1] \times {\rm Gr}^{n-1} = \coprod_{\underline{\lambda}=(\lambda_1, ..., \lambda_{n})}{\rm Gr}_{c(\underline{\lambda})}. \end{equation} Since ${\rm G}({\cal O})$ is connected, it preserves each component of ${\rm Gr}_{c(\underline{\lambda})}$. Thus the components of ${\rm Gr}_{c(\underline{\lambda})}$ live naturally on the stack $$ {\rm Conf}_{n}({\rm Gr}) = {\rm G}({\cal O})\backslash ([1]\times {\rm Gr}^{n-1}). 
$$ We prove that the cycles ${\cal M}_l$ assigned to the points $l \in {\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$ are closures of the top dimensional components of the cyclic convolution varieties. The latter, due to the geometric Satake correspondence, give rise to a canonical basis in (\ref{tmima}). We already know that the map (\ref{ltoml}) is injective. We show that the $\underline{\lambda}$-components of the sets related by the map (\ref{ltoml}) are finite sets of the same cardinality, respected by the map. Therefore the map (\ref{ltoml}) is a bijection. \vskip 3mm Our result generalizes a theorem of Kamnitzer \cite{K1}, who used hives \cite{KT} to parametrize top components of convolution varieties for ${\rm G=GL}_m$, $n=3$. Our construction generalizes Kamnitzer's construction of parametrizations of Mirkovi\'{c}-Vilonen cycles \cite{K}. At the same time, it gives a coordinate-free description of Kamnitzer's construction. When ${\rm G=GL}_m$, there is a special coordinate system on the space ${\rm Conf}_3({\cal A})$, introduced in Section 9 of \cite{FG1}. We show in Section \ref{KT} that it provides an isomorphism of sets $$ {\rm Conf}^+_3({\cal A})({\mathbb Z}^t) ~\stackrel{\sim}{\longrightarrow} ~\{\mbox{Knutson-Tao's hives \cite{KT}}\}. $$ Using this, we get a one-line proof of Knutson-Tao-Woodward's theorem \cite{KTW} in Section \ref{sec2.1.6}. For ${\rm G=GL}_m$, $n>3$, we prove Kamnitzer's conjecture \cite{K1}, describing the top components of convolution varieties via a generalization of hives -- we identify the latter with the set ${\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$ via the special positive coordinate systems on ${\rm Conf}_n({\cal A})$ from \cite{FG1}. 
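\vskip 2mm For the reader's convenience (a sketch; conventions in the literature differ in details), recall that a Knutson-Tao hive of size $m$ is a triangular array of integers $h_{ijk}$, with $i+j+k=m$ and $i, j, k \geq 0$, subject to the rhombus inequalities: for every unit rhombus formed by two adjacent small triangles of the array, $$ h_{\rm obtuse} + h_{\rm obtuse'} \ \geq \ h_{\rm acute} + h_{\rm acute'}, $$ i.e. the sum of the two entries at the obtuse vertices dominates the sum of the two entries at the acute vertices. The differences of consecutive entries along the three boundary sides record the triple of dominant weights $(\lambda_1, \lambda_2, \lambda_3)$.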
\subsection{Positive tropical points and top components} \label{sec1.2} \subsubsection{Our main example}\label{sec1.2.2} Denote by ${\rm Conf}^\times_n({\cal A})$ the subvariety of ${\rm Conf}_n({\cal A})$ parametrizing configurations of decorated flags $({\rm A}_1, ..., {\rm A}_n)$ such that the flags $(\pi({\rm A}_i), \pi({\rm A}_{i+1}))$ are in generic position for each $i=1, ..., n$ modulo $n$. The potential ${\cal W}$ was defined in (\ref{theWp}). It is evidently a regular function on ${\rm Conf}^\times_n({\cal A})$. Let ${\rm P}^+$ be the cone of dominant coweights. There are canonical isomorphisms \begin{equation} \label{agr} {\alpha}: {\rm Conf}^\times_2({\cal A}) \stackrel{\sim}{\longrightarrow} {\rm H}, ~~~~ {\rm Conf}_2({\rm Gr}) = {\rm P}^+. \end{equation} Configurations $({\rm A}_1, ..., {\rm A}_n)$ sit at the vertices of a polygon, as on Fig \ref{polygon1}. Let $\pi_{E}: {\rm Conf}_n({\cal A}) \to {\rm Conf}_2({\cal A})$ be the projection corresponding to a side $E$ of the polygon. Denote by $\pi^\times_{E}$ its restriction to ${\rm Conf}^\times_n({\cal A})$. The collection of the maps $\{\pi^\times_{E}\}$, followed by the first isomorphism in (\ref{agr}) provides a map $$ \pi: {\rm Conf}^\times_n({\cal A}) \longrightarrow {\rm Conf}^\times_2({\cal A})^n \stackrel{{\alpha}}{=} {\rm H}^n. $$ Using similarly the second isomorphism in (\ref{agr}), we get a map \begin{figure}[ht] \centerline{\epsfbox{polygon1.eps}} \caption{Going from an ${\cal O}$-integral configuration of decorated flags to a configuration of lattices.} \label{polygon1} \end{figure} $$ \pi_{\rm Gr}: {\rm Conf}_n({\rm Gr}) \longrightarrow {\rm Conf}_2({\rm Gr})^n {=} ({\rm P}^+)^n. $$ Let $\{\omega_i\}$ be a basis of the cone of positive dominant weights of ${\rm H}$. 
The functions $\pi_{E}^*\omega_i$ are equations of the irreducible components of the divisor $D:= {\rm Conf}_n({\cal A}) - {\rm Conf}^\times_n({\cal A})$: $$ D:= {\rm Conf}_n({\cal A}) - {\rm Conf}^\times_n({\cal A}) = \cup_{E, i}D^E_{i}. $$ Equivalently, the component $D^E_{i}$ is determined by the condition that the pair of flags at the endpoints of the edge $E$ belongs to the codimension one ${\rm G}$-orbit corresponding to the simple reflection $s_{i}\in W$. \footnote{Indeed, $\omega_i({\alpha}({\rm A}_1, {\rm A}_2))=0$ if and only if the corresponding pair of flags belongs to the codimension one ${\rm G}$-orbit corresponding to a simple reflection $s_i$.} The space ${\rm Conf}_n({\cal A})$ has a cluster ${\cal A}$-variety structure, described for ${\rm G}=SL_m$ in \cite{FG1}, Section 10. An important fact \cite{FG5} is that any cluster ${\cal A}$-variety ${\cal A}$ has a canonical cluster volume form $\Omega_{\cal A}$, which in any cluster ${\cal A}$-coordinate system $({\rm A}_1, \ldots , {\rm A}_n)$ is given by $$ \Omega_{\cal A} = \pm d\log {\rm A}_1 \wedge \ldots \wedge d\log {\rm A}_n. $$ The functions $\pi_{E}^*\omega_i$ are the {\it frozen ${\cal A}$-cluster coordinates} in the sense of Definition \ref{9.12.14.2}. This is equivalent to the claim that the canonical volume form $\Omega_{\cal A}$ on ${\rm Conf}_n({\cal A})$ has non-zero residues precisely at the irreducible components of the divisor $D$.\footnote{Indeed, it follows from Lemma \ref{nonfr} and an explicit description of the cluster structure on ${\rm Conf}_n({\cal A})$ that the form $\Omega_{\cal A}$ can not have non-zero residues anywhere other than at the divisors $D^E_i$. One can show that the residues at these divisors are non-zero.} All this data is defined for any split semi-simple group ${\rm G}$ over ${\mathbb Q}$. 
Indeed, the form $\Omega$ on ${\rm Conf}_n({\cal A})$ for the simply-connected group is invariant under the action of the center of the group, and thus its integral multiple descends to a form on ${\rm Conf}_n({\cal A}_{\rm G})$. The potential ${\cal W}_{\rm G}$ is defined by pulling back the potential ${\cal W}_{\rm G'}$ for the adjoint group ${\rm G'}$. We continue the discussion of this example in Section \ref{sec1.4}, where it is cast as an example of mirror symmetry. \paragraph{The simplest example.} Let $(V_2, \omega)$ be a two dimensional vector space with a symplectic form. Then ${\rm SL_2} = {\rm Aut}(V_2, \omega)$, and ${\cal A}_{SL_2}=V_2 -\{0\}$. Next, ${\rm Conf}_n({\cal A}_{\rm SL_2}) = {\rm Conf}_n(V_2)$ is the space of configurations $(l_1, ..., l_n)$ of $n$ non-zero vectors in $V_2$. Set $\Delta_{i,j}:= \langle \omega, l_i \wedge l_j\rangle$. Then the potential is given by the following formula, where the indices are mod $n$: \begin{equation} \label{potsl2} {\cal W}:= \sum_{i=1}^n\frac{\Delta_{i, i+2}}{\Delta_{i, i+1}\Delta_{i+1, i+2}}. \end{equation} The boundary divisors are given by equations $\Delta_{i, i+1}=0$. To write the volume form, pick a triangulation $T$ of the polygon whose vertices are labeled by the vectors. Then, up to a sign, $$ \Omega:= \bigwedge_{E} d\log \Delta_E, $$ where $E$ runs over the diagonals of the triangulation $T$ and the sides of the $n$-gon, and $\Delta_E:= \Delta_{i, j}$ if $E=(i,j)$. The function (\ref{potsl2}) is invariant under $l_i\to -l_i$, and thus descends to ${\rm Conf}_n({\cal A}_{\rm PGL_2}) = {\rm Conf}_n(V_2/\pm 1)$. \subsubsection{The general framework}\label{sec1.2.1} Let us explain the main features of the geometric picture underlying our construction in the most general terms; later on we elaborate them in detail in each particular situation. 
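Before proceeding, the $SL_2$ formula above can be sanity-checked numerically. The sketch below (our own illustration, with arbitrary generic integer vectors and exact rational arithmetic) evaluates the potential (\ref{potsl2}) for $n=4$ and verifies the claimed invariance under $l_i \to -l_i$:

```python
from fractions import Fraction

def omega(u, v):
    # the standard symplectic form on V_2: <omega, u ^ v> = det(u | v)
    return u[0] * v[1] - u[1] * v[0]

def potential(vectors):
    # W = sum_i Delta_{i,i+2} / (Delta_{i,i+1} Delta_{i+1,i+2}), indices mod n
    n = len(vectors)
    D = lambda i, j: Fraction(omega(vectors[i % n], vectors[j % n]))
    return sum(D(i, i + 2) / (D(i, i + 1) * D(i + 1, i + 2)) for i in range(n))

ls = [(1, 0), (1, 1), (0, 1), (-1, 2)]  # four pairwise generic nonzero vectors
w = potential(ls)
for i in range(len(ls)):  # invariance under flipping the sign of any one vector
    flipped = [(-x, -y) if j == i else (x, y) for j, (x, y) in enumerate(ls)]
    assert potential(flipped) == w
```

The invariance is clear term by term: flipping $l_i$ changes the sign of every $\Delta_{j,k}$ with $i \in \{j, k\}$, and each summand of (\ref{potsl2}) contains an even number of such factors.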
First, there are three main ingredients: \begin{enumerate} \item A positive space ${\cal Y}$ with a positive rational function ${\cal W}$ called the {\it potential}, and a volume form $\Omega_{\cal Y}$ with logarithmic singularities. This determines the set ${\cal Y}^+_{\cal W}({\mathbb Z}^t)$ of positive integral tropical points -- the set parametrizing a canonical basis.\footnote{The set ${\cal Y}({\mathbb Z}^t)$, the tropicalization ${\cal W}^t$, and thus the subset ${\cal Y}^+_{\cal W}({\mathbb Z}^t)$ can also be determined by the volume form $\Omega_{\cal Y}$, without using the positive structure on ${\cal Y}$.} \item A subset of ${\cal O}$-integral points ${\cal Y}^{\cal O} \subset {\cal Y}^\circ({\cal K})$. Its key feature is that, given an $ l\in {\cal Y}({\mathbb Z}^t)$, \begin{equation} \label{kf1} l \in {\cal Y}^+_{\cal W}({\mathbb Z}^t) ~{\Longleftrightarrow} ~ {\cal C}^\circ_l \subset {\cal Y}^{\cal O}~{ \Longleftrightarrow} ~ {\cal C}^\circ_l \cap {\cal Y}^{\cal O}\not = \emptyset. \end{equation} \item A moduli space ${\rm Gr}_{{\cal Y}, {\cal W}}$, together with a canonical map \begin{equation} \label{kf2} \kappa: {\cal Y}^{\cal O} \longrightarrow {\rm Gr}_{{\cal Y}, {\cal W}}. \end{equation} \end{enumerate} These ingredients are related as follows: \begin{itemize} \item Any positive rational function $F$ on ${\cal Y}$ gives rise to a ${\mathbb Z}$-valued function $D_F$ on ${\rm Gr}_{{\cal Y}, {\cal W}}$, such that for any $l\in {\cal Y}^+_{\cal W}({\mathbb Z}^t)$, the restriction of $D_F$ to $\kappa({\cal C}^\circ_l)$ equals $F^t(l)$. \end{itemize} So we arrive at a collection of irreducible cycles $$ {\cal M}^\circ _l := \kappa({\cal C}^\circ_l)\subset {\rm Gr}_{{\cal Y}, {\cal W}}, ~~~~ {\cal M}_l := \mbox{closure of ${\cal M}^\circ_l$},~~~l \in {\cal Y}^+_{\cal W}({\mathbb Z}^t). $$ Thanks to the property $\bullet$ above, the assignment $l \longmapsto {\cal M}_l$ is injective. 
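To make the tropicalization concrete: for a subtraction-free expression $F$, the tropicalization $F^t$ is obtained by replacing $(+, \times, \div)$ with $(\max, +, -)$ (in the max-plus convention; the min-plus convention differs by a sign), and can be recovered as the limit $F^t(x) = \lim_{C \to \infty} C^{-1}\log F(e^{C x_1}, \ldots, e^{C x_n})$. A numerical sketch of this limit (our own illustration):

```python
import math

def trop_limit(monomials, point, C=1e6):
    """Approximate F^t(point) as log F(e^{C x}) / C for a Laurent polynomial
    F = sum_k c_k x^{a_k}; `monomials` is a list of (coefficient, exponents).
    Computed stably via the log-sum-exp trick."""
    exps = [math.log(c) + C * sum(a * x for a, x in zip(avec, point))
            for c, avec in monomials]
    m = max(exps)
    return (m + math.log(sum(math.exp(e - m) for e in exps))) / C

# F(x, y) = x + y + x/y tropicalizes to F^t(X, Y) = max(X, Y, X - Y)
F = [(1.0, (1, 0)), (1.0, (0, 1)), (1.0, (1, -1))]
for X, Y in [(2, 3), (-1, 4), (5, -2)]:
    assert abs(trop_limit(F, (X, Y)) - max(X, Y, X - Y)) < 1e-4
```

The condition ${\cal W}^t(l) \geq 0$ cutting out ${\cal Y}^+_{\cal W}({\mathbb Z}^t)$ is thus a piecewise-linear condition on $l$.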
\vskip 3mm Consider the set $\{D_c\}$ of all irreducible divisors in ${\cal Y}$ such that the residue of the form $\Omega_{\cal Y}$ at $D_c$ is non-zero. We call them the {\it boundary divisors} of ${\cal Y}$. We define \begin{equation} \label{Yspace} {\cal Y}^\times:= {\cal Y} - \cup D_c. \end{equation} By definition, the form $\Omega_{\cal Y}$ is regular on ${\cal Y}^\times$. In all examples the potential ${\cal W}$ is regular on ${\cal Y}^\times$. There is a split torus ${\Bbb H}$, and a positive regular surjective projection $$ \pi: {\cal Y}^\times\longrightarrow {\Bbb H}. $$ The map $\pi$ is determined by the form $\Omega_{\cal Y}$. For example, assume that each boundary divisor $D_c$ is defined by a global equation $\Delta_c=0$. Then the regular functions $\{\Delta_c\}$ define the map $\pi$, i.e. $\pi(y) = \{\Delta_c(y)\}$. \vskip 2mm Next, there is a semigroup ${\Bbb H}^{\cal O}\subset {\Bbb H}({\cal K})$ containing ${\Bbb H}({\cal O})$, defining a cone $$ {\Bbb P}:= {\Bbb H}^{\cal O}/{\Bbb H}({\cal O}) \subset {\Bbb H}({\mathbb Z}^t):= {\Bbb H}({\cal K})/{\Bbb H}({\cal O}) = X_*({\Bbb H}), $$ such that the tropicalization of the map $\pi$ provides a map $\pi^t: {\cal Y}^+_{\cal W}({\mathbb Z}^t) \to {\Bbb P}$, and there is a surjective map $\pi_{\rm Gr}: {\rm Gr}_{{\cal Y}, {\cal W}} \to {\Bbb P}$. Denote by $\pi^{\cal O}$ the restriction of $\pi\otimes {\cal K}$ to ${\cal Y}^{\cal O}$. 
These maps fit into a commutative diagram \begin{equation} \label{mcdia} \begin{array}{ccccc} {\cal Y}^+_{\cal W}({\mathbb Z}^t)&\stackrel{\rm val}{\longleftarrow} &{\cal Y}^{\cal O} & \stackrel{\kappa}{\longrightarrow}&{\rm Gr}_{{\cal Y}, {\cal W}} \\ &&&&\\ \pi^t \downarrow &&\pi^{\cal O}\downarrow &&\downarrow \pi_{\rm Gr}\\ &&&&\\ {\Bbb P}&\stackrel{\rm val}{\longleftarrow} &{\Bbb H}^{\cal O} &\stackrel{\rm val}{\longrightarrow} &{\Bbb P} \end{array} \end{equation} We define ${\rm Gr}_{{\cal Y}, {\cal W}}^{(\lambda)}$ and ${\cal Y}^+_{\cal W}({\mathbb Z}^t)_\lambda$ as the fibers of the maps $\pi_{\rm Gr}$ and $\pi^t$ over a $\lambda \in {\Bbb P}$. So we have \begin{equation} \label{stratilam} {\rm Gr}_{{\cal Y}, {\cal W}} = \coprod_{\lambda \in {\Bbb P}}{\rm Gr}_{{\cal Y}, {\cal W}}^{(\lambda)}, ~~~~ {\cal Y}^+_{\cal W}({\mathbb Z}^t) = \coprod_{\lambda \in {\Bbb P}}{\cal Y}^+_{\cal W}({\mathbb Z}^t)_\lambda. \end{equation} The following is a key property of our picture: \begin{itemize} \item $\bullet$ The map $l \longmapsto {\cal M}_l$ provides a bijection $$ {\cal Y}^+_{\cal W}({\mathbb Z}^t)_\lambda~~{ \longleftrightarrow} ~~ \{\mbox{\rm Closures of top dimensional components of ${\rm Gr}_{{\cal Y}, {\cal W}}^{(\lambda)}$}\}. $$ \end{itemize} Although the space ${\rm Gr}_{{\cal Y}, {\cal W}}$ is usually infinite dimensional, it is nice. The map $\pi_{\rm Gr}: {\rm Gr}_{{\cal Y}, {\cal W}} \to {\Bbb P}$ slices it into highly singular and reducible pieces. However the slicing makes the perverse sheaf geometry clean and beautiful. It allows us to relate the positive integral tropical points to the top components of the slices. 
\paragraph{Example.} In our main example, discussed in Section \ref{sec1.1}, we have $$ {\cal Y} = {\rm Conf}_n({\cal A}), ~~~~{\cal Y}^\times = {\rm Conf}^\times_n({\cal A}), ~~~~ {\cal Y}^{\cal O} = {\rm Conf}^{\cal O}_n({\cal A}), ~~~~ {\rm Gr}_{{\cal Y}, {\cal W}} = {\rm Conf}_n({\rm Gr}), ~~~~ {\Bbb H}= {\rm H}^n, ~~~~ {\Bbb P}= ({\rm P}^+)^n. $$ The potential ${\cal W}$ is defined in (\ref{theWp}), and decomposition (\ref{stratilam}) is described by cyclic convolution varieties (\ref{cvcvcv}). \subsubsection{Mixed configurations and a generalization of Mirkovi\'{c}-Vilonen cycles} \label{sec1.2a} Let us briefly discuss other examples relevant to representation theory. All of them follow the set-up of Section \ref{sec1.2}. The obtained cycles ${\cal M}_l$ can be viewed as generalisations of Mirkovi\'{c}-Vilonen cycles. Let us first list the spaces ${\cal Y}$ and ${\rm Gr}_{{\cal Y}, {\cal W}}$. The notation ${\rm Conf}_{w_0}$ indicates that the pair of the first and the last flags in a configuration is in generic position. i) {\it Generalized Mirkovi\'{c}-Vilonen cycles}: $$ {\cal Y}:= {\rm Conf}_{w_0}({\cal A}, {\cal A}^n,{\cal B}), ~~~~ {\rm Gr}_{{\cal Y}, {\cal W}}:= {\rm Conf}_{w_0}({\cal A}, {\rm Gr}^n,{\cal B}) = {\rm Gr}^n. $$ If $n=1$, we recover the Mirkovi\'{c}-Vilonen cycles in the affine Grassmannian \cite{MV}. ii) {\it Generalized stable Mirkovi\'{c}-Vilonen cycles}: $$ {\cal Y}:= {\rm Conf}_{w_0}({\cal B}, {\cal A}^n,{\cal B}), ~~~~ {\rm Gr}_{{\cal Y}, {\cal W}}:= {\rm Conf}_{w_0}({\cal B}, {\rm Gr}^n,{\cal B}) = {\rm H}({\cal K}) \backslash {\rm Gr}^n. $$ If $n=1$, we recover the stable Mirkovi\'{c}-Vilonen cycles in the affine Grassmannian. In our interpretation they are top components of the stack $$ {\rm Conf}_{w_0}({\cal B}, {\rm Gr}, {\cal B}) = {\rm H}\backslash {\rm Gr}. 
$$ iii) {\it The cycles providing canonical bases in tensor products} $$ {\cal Y}:= {\rm Conf}({\cal A}^{n+1}, {\cal B}), ~~~~ {\rm Gr}_{{\cal Y}, {\cal W}}:= {\rm Conf}({\rm Gr}^{n+1}, {\cal B}) = {\rm B}^-({\cal O}) \backslash {\rm Gr}^n. $$ The spaces ${\cal Y}$ in examples i) and iii) are essentially the same. However the potentials are different: in the case iii) it is the sum of contributions of all decorated flags, while in the case i) we skip the first one. Passing from ${\cal Y}$ to ${\rm Gr}_{{\cal Y}, {\cal W}}$ we replace those ${\cal A}$'s which contribute to the potential by ${\rm Gr}$'s, but keep the ${\cal B}$'s and the ${\cal A}$'s which do not contribute to the potential intact. We picture configurations at the vertices of a convex polygon, as on Fig \ref{polygon}. Some of the ${\cal A}$-vertices are shown boldface. The potential ${\cal W}$ is a sum of the characters assigned to the boldface ${\cal A}$-vertices, generalizing (\ref{theWp}). The decorated polygons in the cases ii) and iii) are depicted on the right of Fig \ref{tpm20.5} and on Fig \ref{tpm21.5}. We discuss these examples in detail in Sections \ref{sec2.3} - \ref{tensor}. \subsection{Examples related to decorated surfaces} \subsubsection{Laminations on decorated surfaces and canonical basis for ${\rm G}=SL_2$} \label{LAMCB} \paragraph{1. Canonical basis in the tensor products invariants.} This example can be traced back to the XIX century. We relate it to laminations on a polygon. \begin{definition} An integral lamination $l$ on {an} $n$-gon $P_n$ is a collection $\{\beta_j\}$ of simple nonselfintersecting intervals ending on the boundary of $P_n- \{\mbox{vertices}\}$, modulo isotopy. \end{definition} \begin{figure}[ht] \centerline{\epsfbox{polygon2.eps}} \caption{An integral lamination on a pentagon of type $(4,4,1,6,3)$.} \label{polygon2} \end{figure} Pick a vertex of $P_n$, and number the sides clockwise. 
Given a collection of positive integers $a_1, ..., a_n$, consider the set ${\cal L}_n(a_1, ..., a_n)$ of all integral laminations $l$ on the polygon $P_n$ such that the number of endpoints of $l$ on the $k$-th side is $a_k$. Let $(V_2, \omega)$ be a two dimensional ${\mathbb Q}$-vector space with a symplectic form. Let us assign to an $l\in {\cal L}_n(a_1, ..., a_n)$ an $SL_2$-invariant map $$ {\Bbb I}_l: (\otimes^{a_1}V_2) \otimes \ldots \otimes (\otimes^{a_n}V_2) \longrightarrow {\mathbb Q}. $$ We assign the factors in the tensor product to the endpoints of $l$, so that the order of the factors matches the clockwise order of the endpoints. Then for each interval $\beta$ in $l$ we evaluate the form $\omega$ on the pair of vectors in the two factors of the tensor product labelled by the endpoints of $\beta$, and take the product over all intervals $\beta$ in $l$. Recall that the $SL_2$-modules $S^{a}V_2$, $a> 0$, provide all non-trivial irreducible finite dimensional $SL_2$-modules up to isomorphism. \begin{theorem} \label{9.19.13.19} Projections of the maps ${\Bbb I}_l$, $l\in {\cal L}_n(a_1, ..., a_n)$, to $S^{a_1}V_2 \otimes \ldots \otimes S^{a_n}V_2$ form a basis in ${\rm Hom}_{SL_2}(S^{a_1}V_2 \otimes \ldots \otimes S^{a_n}V_2, {\mathbb Q})$. \end{theorem} \paragraph{2. Canonical basis in the space of functions on the moduli space of $SL_2$-local systems.} \begin{definition} \label{9.19.13.1} Let $S$ be a surface with boundary. An integral lamination $l$ on $S$ is a collection of simple, mutually non-intersecting, non-isotopic loops $\alpha_i$ with positive integral multiplicities $$ l = \sum_i n_i[\alpha_i] ~~~~ n_i \in {\mathbb Z}_{>0}, $$ considered modulo isotopy. The set of all integral laminations on $S$ is denoted by ${\cal L}_{\mathbb Z}({S})$.\footnote{Laminations on decorated surfaces were investigated in \cite{FG1}, Section 12, and \cite{FG3}. 
However the two types of laminations considered there, the ${\cal A}$- and ${\cal X}$-laminations, are different from the ones in Definition \ref{9.19.13.1}. Indeed, they parametrise canonical bases in ${\cal O}({\cal X}_{PGL_2, S})$ and, respectively, ${\cal O}({\cal A}_{SL_2, S})$, while the latter parametrise a canonical basis in ${\cal O}({\rm Loc}_{SL_2, S})$. Notice that a lamination in Definition \ref{9.19.13.1} can not end on a boundary circle.} \end{definition} \begin{figure}[ht] \epsfxsize130pt \centerline{\epsfbox{fish.eps}} \caption{An integral lamination on a surface with two holes, and no special points.} \label{fish} \end{figure} In the case when $S$ is a surface without boundary we get Thurston's integral laminations. Given an integral lamination $l$ on $S$, let us define a regular function $M_l$ on the moduli space ${\rm Loc}_{SL_2, S}$ of $SL_2$-local systems on $S$. Denote by ${\rm Mon}_{\alpha}({\cal L})$ the monodromy of an $SL_2$-local system ${\cal L}$ over a loop $\alpha$ on $S$. The value of the function $M_l$ on ${\cal L}$ is given by $$ M_l({\cal L}):= \prod_i {\rm Tr} ({\rm Mon}^{n_i}_{\alpha_i}({\cal L})). $$ \begin{theorem} \label{9.19.13.20} (\cite{FG1}, Proposition 12.2). The functions $M_l$, $l\in {\cal L}_{\mathbb Z}({S})$, form a linear basis in the space ${\cal O}({\rm Loc}_{SL_2, S})$. \end{theorem} Recall that a {\it decorated surface} $S$ is an oriented surface with boundary, and a finite, possibly empty, collection $\{s_1, ..., s_n\}$ of {\it special} points on the boundary, considered modulo isotopy. We define a moduli space ${\rm Loc}_{SL_2, S}$ for any decorated surface $S$, so that laminations on $S$ provide a canonical basis in ${\cal O}({\rm Loc}_{SL_2, S})$, generalising both Theorem \ref{9.19.13.19} (when $S$ is a polygon) and Theorem \ref{9.19.13.20}, see Section \ref{sec10.3n}. Let us now discuss how to generalize the constructions of Section \ref{sec1.1.2} to decorated surfaces. 
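To see how the functions $M_l$ are evaluated in practice: for $A \in SL_2$ the Cayley-Hamilton identity $A^2 = {\rm Tr}(A)\,A - {\rm Id}$ yields the recursion ${\rm Tr}(A^n) = {\rm Tr}(A)\,{\rm Tr}(A^{n-1}) - {\rm Tr}(A^{n-2})$, so each factor ${\rm Tr}({\rm Mon}^{n_i}_{\alpha_i}({\cal L}))$ is an integer polynomial in the trace of the monodromy of the underlying loop. A minimal sketch (the test matrix is an arbitrary element of $SL_2({\mathbb Z})$, chosen only for illustration):

```python
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace_power(A, n):
    """Tr(A^n), computed by direct matrix multiplication."""
    P = [[1, 0], [0, 1]]
    for _ in range(n):
        P = mat_mul(P, A)
    return P[0][0] + P[1][1]

def trace_power_rec(t, n):
    """Tr(A^n) for A in SL_2, from t = Tr(A) alone, via Cayley-Hamilton:
    Tr(A^n) = t*Tr(A^{n-1}) - Tr(A^{n-2}), with Tr(A^0) = 2, Tr(A^1) = t."""
    if n == 0:
        return 2
    p_prev, p = 2, t
    for _ in range(n - 1):
        p_prev, p = p, t * p - p_prev
    return p

A = [[2, 1], [1, 1]]  # det = 1, so A lies in SL_2(Z)
for n in range(8):
    assert trace_power(A, n) == trace_power_rec(A[0][0] + A[1][1], n)
```

The value $M_l({\cal L})$ is then a product of such traces, one factor per loop $\alpha_i$ with multiplicity $n_i$.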
\subsubsection{Positive ${\rm G}$-laminations and top components of surface affine Grassmannians} \label{sec1.3} A pair $({\rm G}, S)$ gives rise to a moduli space ${\cal A}_{{\rm G}, S}$ \cite{FG1}. Here are two basic examples. \begin{itemize} \item When $S$ is a disc with $n$ special points on the boundary, we recover the space ${\rm Conf}_n({\cal A})$. \item When $S$ is just a surface, without special points, the moduli space ${\cal A}_{{\rm G}, S}$ is a twisted version of the moduli space of {{\rm G}-local systems with unipotent monodromy around boundary components} on $S$ equipped with a covariantly constant decorated flag near every boundary component of $S$. \end{itemize} The space ${\cal A}_{{\rm G}, S}$ has a positive structure \cite{FG1}. We define in Section \ref{sec11} a {\it potential} ${\cal W}$ on the space ${\cal A}_{{\rm G}, S}$. It is a rational positive function, with the tropicalization $ {\cal W}^t: {\cal A}_{{\rm G}, S}({\mathbb Z}^t) \longrightarrow {\mathbb Z}. $ The condition ${\cal W}^t\geq 0$ determines a subset of {\it positive integral {\rm G}-laminations on $ S$}: \begin{equation} \label{121212a} {\cal A}^+_{{\rm G}, S}({\mathbb Z}^t):= \{l \in {\cal A}_{{\rm G}, S}({\mathbb Z}^t)~|~ {\cal W}^t(l)\geq 0\}. \end{equation} For any decorated surface $S$, the set ${\cal A}^+_{SL_2, S}({\mathbb Z}^t)$ is canonically isomorphic to the set of integral laminations on $S$, see Section \ref{sec10.3n}. An interesting approach to a geometric definition of laminations for ${\rm G}=SL_m$, which employs the affine Grassmannian, was suggested by Ian Le \cite{Le}. There is a canonical volume form $\Omega$ on the space ${\cal A}_{{\rm G}, S}$, which can be defined by using an ideal triangulation of $S$ and the volume forms on ${\rm Conf}_n({\cal A})$. When $G$ is simply-connected, it is also the cluster volume form $\Omega_{\cal A}$. 
We also assign to a pair $({\rm G}, S)$ a stack ${\rm Gr}_{{\rm G}, S}$, which we call the {\it surface affine Grassmannian}. When $S$ is a disc with $n$ special points on the boundary, we recover the stack ${\rm Conf}_n({\rm Gr})$. In general it is an infinite dimensional stack. The components of the punctured boundary $\partial S - \{s_1, ..., s_n\}$ isomorphic to intervals are called boundary intervals. We define the torus ${\Bbb H}$ and the lattice ${\Bbb P}$ by $$ {\Bbb H}:= {\rm H}^{\{\mbox{boundary intervals on $S$}\}}, ~~~~{\Bbb P}:= ({\rm P}^+)^{\{\mbox{boundary intervals on $S$}\}}. $$ The map $\pi$ is defined by assigning to a boundary interval ${\rm I}$ the element $i({\rm A}_+, {\rm A}_-)\in {\rm H}$, see (\ref{agr}), where $({\rm A}_-,{\rm A}_+)$ are the decorated flags at the ends of the interval ${\rm I}$, ordered by the orientation of $S$, provided by the very definition of the space ${\cal A}_{{\rm G}, S}$. Given a point $l\in {\cal A}^+_{{\rm G}, S}({\mathbb Z}^t)$, we define a cycle $ {\cal M}^\circ_l \subset {\rm Gr}_{{\rm G}, S}. $ Given an element $\lambda \in {\Bbb P}$, we prove that the map $l\longmapsto {\cal M}^\circ_l$ gives rise to a bijection of sets \begin{equation} \label{map} {{\cal A}}^+_{{\rm G}, S}({\mathbb Z}^t)_\lambda \stackrel{\sim}{\longrightarrow} \{\mbox{\rm closures of top dimensional components of ${\rm Gr}^{(\lambda)}_{{\rm G}, S}$}\}. \end{equation} However in this case we can no longer bypass the question of what the ``top components'' of an infinite dimensional stack are, as we did in Section \ref{sec1.1.2}. So we define in Section \ref{sec11.3.1} ``dimensions'' of certain relevant stacks with values in certain {\it dimension ${\mathbb Z}$-torsors}. As a result, although the ``dimension'' is no longer an integer, the difference of two ``dimensions'' from the same dimension ${\mathbb Z}$-torsor is an integer, and so the notion of ``top dimensional components'' does make sense. 
\vskip 2mm To define the analog of the space of tensor product invariants for a decorated surface $S$, we introduce in Section \ref{sec11} a moduli space ${\rm Loc}_{{\rm G}^L, S}$. If $S$ has no special points, it is the moduli space of ${\rm G}^L$-local systems on $S$. If $S$ is a disc with $n$ points on the boundary, it is the space ${\rm Conf}_n({\cal A}_{{\rm G}^L})$. We prove there that the set ${\cal A}^+_{{\rm G}, S}({\mathbb Z}^t)$ parametrizes a linear basis in ${\cal O}({\rm Loc}_{{\rm G}^L, S})$. \subsection{Canonical bases, canonical pairings, and homological mirror symmetry} \label{sec1.4} Below we write ${\cal A}$ for ${\cal A}_{{\rm G}}$ etc., and use the notation ${\cal A}_L$ for ${\cal A}_{{\rm G^L}}$ etc. For any split reductive group ${\rm G}$, the space ${\cal O}({\cal A}_{L})$ of regular functions on the principal affine space ${\cal A}_{L}$ of ${\rm G}^L$ is a model of representations of ${\rm G}^L$: every irreducible ${\rm G}^L$-module appears there once. This allows us to organize the direct sum of all the vector spaces of a given kind in which the canonical bases live into the space of regular functions on a single space. For example: \begin{equation} \label{model} \bigoplus_{ (\lambda_1, \ldots , \lambda_n) \in ({\rm P}^+)^n} V_{\lambda_1} \otimes \ldots \otimes V_{\lambda_n} = {\cal O}({\cal A}_{L}^n). \end{equation} \begin{equation} \label{model1} \bigoplus_{ (\lambda_1, \ldots , \lambda_n)\in ({\rm P}^+)^n} \Bigl(V_{\lambda_1} \otimes \ldots \otimes V_{\lambda_n}\Bigr)^{{\rm G}^L} = {\cal O}({\cal A}_{L}^n)^{{\rm G}^L} = {\cal O}({\rm Conf}_n({\cal A}_{L})). \end{equation} Using this, let us interpret the statement that a canonical basis of a given kind is parametrized by positive integral tropical points of a certain space as the existence of a {\it canonical pairing}. 
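For ${\rm G}=SL_2$ and $n=3$, the summands in decomposition (\ref{model1}) can be checked by hand: a lamination on a triangle whose intervals join distinct sides is determined by the crossing numbers $x_{12}, x_{13}, x_{23} \geq 0$ with $x_{12}+x_{13}=a$, $x_{12}+x_{23}=b$, $x_{13}+x_{23}=c$, so a lamination of type $(a,b,c)$ exists, and is then unique, exactly when these equations have a nonnegative integral solution; by Theorem \ref{9.19.13.19} this must agree with the classical Clebsch-Gordan rule. A brute-force sketch (the function names are ours):

```python
def lamination_count(a, b, c):
    """Number of triangle laminations of type (a, b, c) with intervals
    joining distinct sides: solve x12+x13=a, x12+x23=b, x13+x23=c over
    the nonnegative integers."""
    if (a + b + c) % 2:
        return 0
    x12 = (a + b - c) // 2
    x13 = (a + c - b) // 2
    x23 = (b + c - a) // 2
    return 1 if min(x12, x13, x23) >= 0 else 0

def clebsch_gordan(a, b, c):
    """dim Hom_{SL_2}(S^a V_2 ⊗ S^b V_2 ⊗ S^c V_2, Q): equal to 1 iff
    the triangle inequalities hold and a+b+c is even, else 0."""
    return 1 if (a + b + c) % 2 == 0 and abs(a - b) <= c <= a + b else 0

assert all(lamination_count(a, b, c) == clebsch_gordan(a, b, c)
           for a in range(8) for b in range(8) for c in range(8))
```

Both counts are $0$ or $1$, reflecting the multiplicity-freeness of $SL_2$ triple tensor product invariants.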
\subsubsection{Tensor product invariants and mirror symmetry} \label{s1.4.1} For any split reductive group ${\rm G}$, the set ${\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$ parametrizes a canonical basis in the space (\ref{model1}). So there are canonical pairings \begin{equation} \label{canpar} {\bf I}_{\rm G}: {\rm Conf}^+_n({\cal A})({\mathbb Z}^t) \times {\rm Conf}_n({\cal A}_{L})\longrightarrow {\Bbb A}^1. \end{equation} \begin{equation} \label{canparL} {\bf I}_{\rm G^L}: {\rm Conf}_n({\cal A})\times {\rm Conf}^+_n({\cal A}_{L})({\mathbb Z}^t) \longrightarrow {\Bbb A}^1. \end{equation} So the story becomes completely symmetric. The idea that the set parametrizing canonical bases in tensor product invariants is a subset of ${\rm Conf}_n({\cal A})({\mathbb Z}^t)$ goes back to Duality Conjectures from \cite{FG1}. It is quite surprising that taking into account the potential we get a canonical basis in the space of regular functions on the {\it same kind of space}, ${\rm Conf}_n({\cal A}_{L})$, for the Langlands dual group. \begin{figure}[ht] \centerline{\epsfbox{tpm5.5.eps}} \caption{Duality between configurations spaces of decorated flags for ${\rm G}$ and ${\rm G}^L$. The potential is a sum of contributions at the boldface vertices. Pairs of decorated flags at the dashed sides are in generic position. No condition on the pairs of decorated flags at the solid sides.} \label{tpm5.5} \end{figure} To picture this symmetry, consider a convex $n$-gon $P_n$ on the left of Fig \ref{tpm5.5}, and assign a configuration $({\rm A}_1, ..., {\rm A}_n)\in {\rm Conf}^\times_n({\cal A})$ to its vertices. The potential ${\cal W}$ is a sum of the vertex contributions; so the vertices are shown boldface. The pair of decorated flags at each side is generic; so all sides are dashed. Tropicalizing the data at the vertices, and using the isomorphism ${\rm Conf}^+_2({\cal A})({\mathbb Z}^t) ={\rm P}^+$, we assign a dominant weight $\lambda_k$ of ${\rm G}^L$ to each side of the left polygon. 
Consider now the dual $n$-gon $\ast P_n$ on the right, and a configuration of decorated flags $({\rm A}'_1, ..., {\rm A}'_n)$ in ${\rm G}^L$ at its vertices. The dominant weight $\lambda_k$ on the left corresponds to the irreducible representation $V_{\lambda_k}$, realised in the model ${\cal O}({\cal A}_{L})$ assigned to the dual vertex of $\ast P_n$. \vskip 3mm Tropical points live naturally at the boundary of a positive space, compactifying the set of its real positive points \cite{FG4}. An example is given by Thurston's boundary of Teichm\"uller spaces, realized as the space of projective measured laminations. It is tempting to think that canonical pairings (\ref{canpar}) and (\ref{canparL}) are manifestations of a symmetry involving both spaces simultaneously, rather than relating the tropical points of one space to the regular functions on the other space. We conjecture that this elusive symmetry is the mirror symmetry, and the function ${\cal W}$ is the potential for the Landau-Ginzburg model. To formulate precise conjectures, let us start with a general set-up. \paragraph{The A-model.} Let ${\cal M}$ be a complex affine variety. So it has an affine embedding $i: {\cal M}\hookrightarrow {\mathbb C}^N$. The K\"ahler form $\sum_i dz_id\bar z_i$ on ${\mathbb C}^N$ induces a K\"ahler form on ${\cal M}({\mathbb C})$ with an exact symplectic form $\omega$. The wrapped Fukaya category ${\cal F}_{\rm wr}({\cal M}, \omega)$ \cite{AS} does not depend on the embedding $i$. We denote it by ${\cal F}_{\rm wr}({\cal M})$. A potential ${\cal W}$ on $({\cal M}, \omega)$ allows one to define the wrapped Fukaya-Seidel category $ {\cal F}{\cal S}_{\rm wr}({\cal M}) = {\cal F}{\cal S}_{\rm wr}({\cal M}, \omega, {\cal W}). $ The case of a potential with only Morse singularities is treated in \cite{S08}. It also does not depend on the choice of affine embedding. A volume form $\Omega$ provides a ${\mathbb Z}$-grading on ${\cal F}{\cal S}_{\rm wr}({\cal M})$ \cite{S}. 
\paragraph{The positive A-brane.} In our examples ${\cal M}$ is a {positive space} over ${\mathbb Q}$. So it has a submanifold ${\cal M}({\mathbb R}_{>0})$ of real positive points. It is a Lagrangian submanifold for the symplectic form $\omega$ induced by any affine embedding. The form $\Omega$ is defined over ${\mathbb Q}$ and restricts to a real volume form on ${\cal M}({\mathbb R}_{>0})$; so ${\cal M}({\mathbb R}_{>0})$ is a special Lagrangian submanifold. The potential ${\cal W}$ is a positive function on ${\cal M}$. So the special Lagrangian submanifold ${\cal M}({\mathbb R}_{>0})$ should give rise to an object of the wrapped Fukaya-Seidel category of ${\cal M}$, which we call the {\it positive $A$-brane}, denoted by ${\cal L}_+$. \paragraph{The projection / action data.} In all our examples we have a mirror dual pair ${\cal M} \leftrightarrow {\cal M}_L$ equipped with the following data: a projection $\pi: {\cal M} \longrightarrow {\Bbb H}$ onto a split torus ${\Bbb H}$, an action of the split torus ${\Bbb T}$ on ${\cal M}$ preserving the volume form and the potential, and a similar pair of tori ${\Bbb H}_L, {\Bbb T}_L$ for ${\cal M}_L$. These tori are in duality: $$ X_*({\Bbb T}_L) = X^*({\Bbb H}), ~~~~X_*({\Bbb H}_L) = X^*({\Bbb T}). $$ This projection / action data gives rise to the following additional structures on the categories. \vskip 2mm i) The group ${\rm Hom}(X_*({\Bbb H}), {\mathbb C}^*) = \widehat {\Bbb H}({\mathbb C})$ of ${\mathbb C}^*$-local systems on the complex torus ${\Bbb H}({\mathbb C})$ acts on the category ${\cal F}{\cal S}_{\rm wr}({\cal M})$. Namely, we assume that the objects of the category are given by Lagrangian submanifolds in ${\cal M}({\mathbb C})$ with ${\rm U}(1)$-local systems. Then a ${\rm U}(1)$-local system ${\cal L}$ on ${\Bbb H}({\mathbb C})$ acts by the tensor product with $\pi^*({\cal L})$, providing an action of the subgroup ${\rm Hom}(X_*({\Bbb H}), {\rm U}(1))$ on the category. 
We assume that the action extends to an algebraic action of the complex torus $$ {\rm Hom}(X_*({\Bbb H}), {\mathbb C}^*) = X^*({\Bbb H})\otimes {\mathbb C}^* = X_*(\widehat {\Bbb H})\otimes {\mathbb C}^* = \widehat {\Bbb H}({\mathbb C}). $$ \vskip 2mm ii) Let ${\Bbb T}_K$ be the maximal compact subgroup of the torus ${\Bbb T}({\mathbb C})$. We assume that the action of the group ${\Bbb T}_K$ on the symplectic manifold $({\cal M}, \omega)$ is Hamiltonian.\footnote{In our main examples the symplectic structure is exact, $\omega = d\alpha$. So, averaging the form $\alpha$ over the action of the compact group ${\Bbb T}_K$, we can assume that it is ${\Bbb T}_K$-invariant. Therefore the action is Hamiltonian: the Hamiltonian at $x$ for a one-parameter subgroup $g^t$ is given by the formula $\alpha(\frac{d}{dt}g^t(x))$.} Then any subgroup $S^1\subset {\Bbb T}_K$ provides a family of symplectic maps $r_t$, $t \in {\mathbb R}/{\mathbb Z} = S^1$. The map $r_1$ provides an invertible functorial automorphism of Hom's of the category ${\cal F}{\cal S}_{\rm wr}({\cal M})$, and thus an invertible element of the center of the category. So the group algebra ${\mathbb Z}[X_*({\Bbb T})]= {\cal O}(\widehat {\Bbb T})$ is mapped into the center: $$ {\cal O}(\widehat {\Bbb T})\longrightarrow {\rm Center}({\cal F}{\cal S}_{\rm wr}({\cal M})). $$ \vskip 2mm iii) Clearly, there is a map ${\cal O}({\Bbb H})\longrightarrow {\rm Center}(D^b{\rm Coh}({\cal M}))$, and the group ${\Bbb T}$ acts on $D^b{\rm Coh}({\cal M})$. \paragraph{The potential / boundary divisors.} It was anticipated by Hori-Vafa \cite{HV} and Auroux \cite{Au1} that adding a potential on a space ${\cal M}$ amounts to a partial compactification of its mirror ${\cal M}_L$ by a divisor. More precisely, denote by ${\cal M}^\times$ and ${\cal M}_L^\times$ the regular loci of the forms $\Omega$ and $\Omega_{L}$. The potential is a sum ${\cal W} = \sum_c{\cal W}_c$. 
Its components ${\cal W}_c$ are expected to match the irreducible divisors $D_c$ of ${\cal M}_L - {\cal M}_L^\times$. The divisors $D_c$ are defined as the divisors on ${\cal M}_L$ where ${\rm Res}_{D_c}(\Omega_{L})$ is non-zero. So we should have \begin{equation} \label{12.1.14.10} {\cal W} = \sum_c{\cal W}_c, ~~~ {\cal M}_L - {\cal M}_L^\times= \cup_cD_c, ~~~{\cal W}_c \stackrel{?}{\leftrightarrow} D_c. \end{equation} There are several ways to explain how this correspondence should work. \vskip 2mm i) The potential ${\cal W}_c$ determines an element $ [{\cal W}_c] \in {\rm H}{\rm H}^0({\cal M}), $ which defines a deformation of the category $D^b{\rm Coh}({\cal M})$ as a ${\mathbb Z}/2{\mathbb Z}$-category. On the dual side it corresponds to a deformation of the Fukaya category obtained by adding to the symplectic form on ${\cal M}_L$ a multiple of the 2-form $\omega_c$, whose cohomology class is the cycle class $[D_c]\in H^2({\cal M}_L, {\mathbb Z}(1))$ of the divisor $D_c$. \vskip 2mm ii) The Landau-Ginzburg potential ${\cal W}_c$ should be obtained by counting the holomorphic discs touching the divisor $D_c$, as was demonstrated by Auroux \cite{Au1} in examples. \vskip 2mm iii) In the cluster variety set-up the correspondence is much more precise, see Section \ref{seccluster}. \vskip 3mm {\bf Example}. To illustrate the set-up, let us specify the data on the moduli space ${\rm Conf}_n({\cal A})$. \begin{itemize} \item A regular positive function, the potential ${\cal W}: {\rm Conf}^\times_n({\cal A}) \longrightarrow {\Bbb A}^1$. \item A regular volume form $\Omega$ on ${\rm Conf}^\times_n({\cal A})$, with logarithmic singularities at infinity. \item A regular projection $\pi: {\rm Conf}^\times_n({\cal A}) \longrightarrow {\Bbb H}$ onto a torus $ {\Bbb H}:= {\rm H}^{\{\mbox{sides of the $n$-gon $P_n$}\}}. $ \item An action $r$ of the torus ${\Bbb T}:= {\rm H}^{\{\mbox{vertices of $P_n$}\}}$ on ${\rm Conf}_n({\cal A})$ by rescaling decorated flags. 
\end{itemize} Changing ${\rm G}$ to ${\rm G}^L$ interchanges the action with the projection: \begin{itemize} \item The torus ${\Bbb T}_L$ is dual to the torus ${\Bbb H}$, i.e. there is a canonical isomorphism $X_*({\Bbb T}_L) = X^*({\Bbb H})$. \end{itemize} By construction, the potential is a sum \begin{equation} \label{12.1.14.11} {\cal W} = \sum_v\sum_{i\in {\rm I}}{\cal W}^v_{i} \end{equation} over the vertices $v$ of the polygon $P_n$, parametrising configurations $({\rm A}_1, ..., {\rm A}_n)$, and the set ${\rm I}$ of simple positive roots for ${\rm G}$. Indeed, a non-degenerate character $\chi$ of ${\rm U}$ is naturally a sum $\chi =\sum_i\chi_i$. On the other hand, the set of irreducible components of the divisor ${\rm Conf}_n({\cal A}_L) - {\rm Conf}^\times_n({\cal A}_L)$ is parametrised by the pairs $(E, i)$ where $E$ are the edges of the dual polygon $\ast P_n$, see Section \ref{sec1.2.2}: \begin{equation} \label{12.1.14.12} {\rm Conf}_n({\cal A}_L) - {\rm Conf}^\times_n({\cal A}_L) = \cup_{E}\cup_{i\in {\rm I}} D^E_{i}. \end{equation} Since vertices of the polygon $P_n$ match the sides of the dual polygon $\ast P_n$, the components of the potential (\ref{12.1.14.11}) match the irreducible components of the divisor at infinity (\ref{12.1.14.12}) on the dual space. \vskip 3mm We start with the most basic form of our mirror conjectures, which does not involve the potential. \begin{conjecture} \label{MIRRORDUAL} For any split semisimple group ${\rm G}$ over ${\mathbb Q}$, there is a mirror duality \begin{equation} \label{8.6.13.2} ({\rm Conf}^\times_n({\cal A}), \Omega) ~~\mbox{is mirror dual to}~~ ({\rm Conf}^\times_n({\cal A}_L), \Omega_{L}). \end{equation} This means in particular that one has an equivalence of ${\rm A}_\infty$-categories \begin{equation} \label{8.6.13.3} {\cal F}_{\rm wr}({\rm Conf}^\times_n({\cal A})({\mathbb C})) \stackrel{\sim}{\longrightarrow} D^b{\rm Coh}({\rm Conf}^\times_n({\cal A}_L)). 
\end{equation} This equivalence maps the positive $A$-brane ${\cal L}_+$ to the structure sheaf ${\cal O}$. It identifies the action of the group $\widehat {\Bbb H}({\mathbb C})$ on the category ${\cal F}_{\rm wr}({\rm Conf}^\times_n({\cal A})({\mathbb C}))$ with the action of the group ${\Bbb T}_L({\mathbb C})$ on $D^b{\rm Coh}({\rm Conf}^\times_n({\cal A}_L))$, and identifies the subalgebras $$ {\cal O}(\widehat {\Bbb T})\subset {\rm Center}({\cal F}_{\rm wr}({\rm Conf}^\times_n({\cal A})({\mathbb C}))) ~~\mbox{\rm and}~~ {\cal O}({\Bbb H}_L)\subset {\rm Center}(D^b{\rm Coh}({\rm Conf}^\times_n({\cal A}_L))). $$ \end{conjecture} The projection / action data for the pair (\ref{8.6.13.2}) is given by $$ {\Bbb H} = {\rm H}^{n}, ~~~~{\Bbb H}_L = {\rm H}^{n}_L, ~~~~ {\Bbb T} = {\rm H}^{n}, ~~~~{\Bbb T}_L = {\rm H}_L^{n}. $$ The pair (\ref{8.6.13.2}) is symmetric: interchanging the group ${\rm G}$ with the Langlands dual group ${\rm G}^L$ amounts to exchanging the ${\rm A}$-model with the ${\rm B}$-model. \vskip 2mm Using the mirror pair (\ref{8.6.13.2}) as a starting point, we can now turn on the potentials at all vertices of the left polygon $P_n$. This amounts to a partial compactification of the dual space. Namely, we take the space ${\rm Conf}_n({\cal A}_L)$, and consider its affine closure ${\rm Conf}_n({\cal A}_L)_{\bf a}:= {\rm Spec}\Bigl({\cal O}({\cal A}^n_L)^{{\rm G}^L}\Bigr)$. Since the action of the group ${\rm H}^{n}$ on ${\rm Conf}^\times_n({\cal A})$ alters the potential ${\cal W}$, and the projection $\pi_L$ onto ${\rm H}_L^n$ does not extend to ${\rm Conf}_n({\cal A}_L)_{\bf a}$, the projection / action data for the pair (\ref{8.6.13.2a}) is $$ {\Bbb H} = {\rm H}^{n}, ~~~~{\Bbb H}_L = \{e\}, ~~~~ {\Bbb T} = \{e\}, ~~~~{\Bbb T}_L = {\rm H}_L^{n}.
$$ Therefore by turning on the potentials we arrive at the following Mirror Conjecture: \begin{conjecture} \label{MIRRORDUALa} For any split semisimple group ${\rm G}$ over ${\mathbb Q}$, there is a mirror duality \begin{equation} \label{8.6.13.2a} ({\rm Conf}^\times_n({\cal A}), {\cal W}, \Omega) ~~\mbox{is mirror dual to}~~ {\rm Conf}_n({\cal A}_L)_{\bf a}. \end{equation} This means in particular that there is an equivalence of ${\rm A}_\infty$-categories \begin{equation} \label{8.6.13.1b} {\cal F}{\cal S}_{\rm wr}({\rm Conf}^\times_n({\cal A})({\mathbb C}), {\cal W}, \Omega) ~~\stackrel{\sim}{\longrightarrow}~~ D^b{\rm Coh}({\rm Conf}_n({\cal A}_L)_{\bf a}). \end{equation} It maps the positive $A$-brane ${\cal L}_+$ to the structure sheaf ${\cal O}$, and identifies the action of the group $\widehat {\Bbb H}({\mathbb C})$ on the category ${\cal F}{\cal S}_{\rm wr}({\rm Conf}^\times_n({\cal A})({\mathbb C}))$ with the action of ${\Bbb T}_L({\mathbb C})$ on $D^b{\rm Coh}({\rm Conf}_n({\cal A}_L)_{\bf a})$. \end{conjecture} \vskip 2mm The geometry of mirror dual objects in Conjectures \ref{MIRRORDUAL} and \ref{MIRRORDUALa} is {\it essentially} dictated by representation theory. Indeed, the tropical points are determined by birational types of the spaces, and canonical bases describe the algebras of functions on the dual affine varieties: \begin{equation} \label{dcI} \mbox{ The set ${\rm Conf}^+_n({\cal A})({\mathbb Z}^t)$ parametrises a canonical basis in ${\cal O}({\rm Conf}_n({\cal A}_{L}))$}. \end{equation} \begin{equation} \label{dcII} \mbox{ The set ${\rm Conf}_n({\cal A}_L)({\mathbb Z}^t)$ should parametrise a canonical basis in ${\cal O}({\rm Conf}^\times_n({\cal A}))$}.\footnote{Although the claim (\ref{dcII}) is not addressed in the paper, it can be deduced from (\ref{dcI}).} \end{equation} The potential ${\cal W}$ and the projection $\pi$ define a regular map $ (\pi, {\cal W}): {\rm Conf}^\times_n({\cal A}) \longrightarrow {\Bbb H}\times {\Bbb A}^1.
$ The form $\Omega$ on ${\rm Conf}^\times_n({\cal A})$ and the canonical volume forms on ${\Bbb H}$ and ${\Bbb A}^1$ provide a volume form $\Omega^{(a,c)}$ at the fiber $F_{a, c}$ of this map over a generic point $(a,c) \in {\Bbb H}\times {\Bbb A}^1$. \vskip 3mm More generally, we can turn on only partial potentials at the vertices of the polygon $P_n$, which amounts on the dual side to taking partial compactifications, and then considering their affine closures. This way we get an array of conjecturally dual pairs, described as follows. For each vertex $v$ of the polygon $P_n$ parametrising configurations $({\rm A}_1, ..., {\rm A}_n)$ choose an arbitrary subset ${\rm I}_v \subset {\rm I}$ of the set parametrising the simple positive roots of ${\rm G}$. It determines a partial potential \begin{equation} \label{WPART} {\cal W}_{\{{\rm I}_v\}} = \sum_v{\cal W}_{{\rm I}_v}, ~~~~{\cal W}_{{\rm I}_v}:= \sum_{i\in {\rm I}_v}{\cal W}^v_{i}. \end{equation} On the dual side, subsets $\{{\rm I}_v\}$ determine a partial compactification of the space ${\rm Conf}^\times_n({\cal A}_L)$, obtained by adding the divisors $D^{E_v}_{i}$ where $i \in {\rm I}_v$. Here $E_v$ is the side of the polygon $\ast P_n$ dual to the vertex $v$ of $P_n$: \begin{equation} \label{SCOMPACTI} {\rm Conf}_n({\cal A}_L)_{\{{\rm I}_v\}}:= {\rm Conf}^\times_n({\cal A}_L) \bigcup \cup_v\cup_{i\in {\rm I}_v} D^{E_v}_{i}. \end{equation} For each vertex $v$ of $P_n$ there is a subgroup ${\rm H}_{{\rm I}_v} \subset {\rm H}$ preserving the partial potential ${\cal W}_{{\rm I}_v}$ at $v$. On the dual side, let ${\rm H}^{{\rm I}_v}_L$ be the dual quotient of the Cartan group ${\rm H}_L$. So we arrive at the projection / action data \begin{equation} \label{actproj} {\Bbb H} = {\rm H}^{n}, ~~~~{\Bbb H}_L = \prod_v{\rm H}^{{\rm I}_v}_L, ~~~~ {\Bbb T} = \prod_v{\rm H}_{{\rm I}_v}, ~~~~{\Bbb T}_L = {\rm H}_L^{n}. 
\end{equation} So turning on partial potentials we arrive at Conjecture \ref{MIRRORDUALab}, interpolating Conjectures \ref{MIRRORDUAL} and \ref{MIRRORDUALa}: \begin{conjecture} \label{MIRRORDUALab} For any split semisimple group ${\rm G}$ over ${\mathbb Q}$, there is a mirror duality \begin{equation} \label{8.6.13.2ab} ({\rm Conf}^\times_n({\cal A}), {\cal W}_{\{{\rm I}_v\}}, \Omega) ~~\mbox{is mirror dual to the affine closure of}~~ {\rm Conf}_n({\cal A}_L)_{\{{\rm I}_v\}}. \end{equation} Its action / projection data is given by (\ref{actproj}). \end{conjecture} Needless to say, the positive integral tropical points of the left space parametrise a basis in the space of functions on the right space. \vskip 3mm Here is another general principle to generate new mirror dual pairs. We start with a mirror dual pair $({\cal M}, \Omega, {\cal W}) \leftrightarrow {\cal M}_L$, equipped with the projection / action data which involves a dual pair $({\Bbb T}, {\Bbb H}_L)$. So ${\Bbb T}$ acts by automorphisms of the triple $({\cal M}, \Omega, {\cal W})$, and there is a dual projection $\pi_L: {\cal M}_L \to {\Bbb H}_L$. Choose any subgroup ${\Bbb T}' \subset {\Bbb T}$, and consider the corresponding ${\Bbb T}'$-equivariant category. If the group ${\Bbb T}'$ acts freely, this amounts to taking the quotient of the space with potential $({\cal M}, {\cal W})$ by the action of ${\Bbb T}'$. A volume form on ${\Bbb T}'$ gives rise to a volume form on the quotient, obtained by contracting the volume form $\Omega$ with the dual polyvector field on ${\Bbb T}'$. The subgroup ${\Bbb T}'\subset {\Bbb T}$ determines by the duality a quotient group ${\Bbb H}_L\longrightarrow {\Bbb H}_L'$, and therefore a projection $\pi_L': {\cal M}_L \to {\Bbb H}_L'$. \begin{itemize} \item {\it The quotient stack $({\cal M}/{\Bbb T}', {\cal W})$ is mirror dual to the family $\pi_L': {\cal M}_L \to {\Bbb H}_L'$}.
In the examples below $({\cal M}/{\Bbb T}', {\cal W})$ is just dual to a fiber ${\pi_L'}^{-1}(a) \subset {\cal M}_L$, $a\in {\Bbb H}_L'$. \end{itemize} In particular, starting from a mirror dual pair (\ref{8.6.13.2ab}), we can choose any subgroup ${\Bbb T}' \subset {\Bbb T} = \prod_v{\rm H}_{{\rm I}_v}$ acting on the space with potential on the left. All examples below are obtained this way. \vskip 3mm {\bf Example.} We start with the space ${\rm Conf}^\times({\cal A}^{n+1})$ with the potential ${\cal W}_{1, ..., n}$ given by the sum of the full potentials at all vertices but one, the vertex ${\rm A}_{n+1}$. The action of the group ${\rm H}$ on the decorated flag ${\rm A}_{n+1}$ preserves the potential ${\cal W}_{1, ..., n}$. Applying the above principle, we get a dual pair illustrated on Fig \ref{tpm21.5}. The fiber over $a$, illustrated by the middle picture on Fig \ref{tpm21.5}, is canonically isomorphic to the less symmetrically defined space illustrated on the right. \begin{figure}[ht] \epsfxsize300pt \centerline{\epsfbox{fig6.eps}} \caption{Dual pairs $({\rm Conf}^\times( {\cal A}^3, {\cal B}), {\cal W}_{1,2,3})$ and ${\rm Conf}_{w_0}( {\cal A}_L^3, {\cal B}_L) = {\cal A}_L^2$. The ${\rm H}$-components of the projection $\lambda$ sit at the ${\rm A}$-decorated blue dashed edges on the left. The projection $\mu$ to ${\rm H}$ is assigned to the red ${\rm A}_3{\rm B}_4{\rm A}_1$. } \label{tpm21.5} \end{figure} In the next Section we consider this example from a different point of view, starting from the representation-theoretic picture, just as we did with our basic example, and arrive at the same dual pairs. \subsubsection{Tensor products of representations and mirror symmetry} \label{s1.4.1a} The set ${\rm Conf}^+({\cal A}^{n+1}, {\cal B})({\mathbb Z}^t)$ defined using the potential ${\cal W} $ from (\ref{ppoott}) parametrises canonical bases in $n$-fold tensor products of simple ${\rm G}^L$-modules.
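For ${\rm G}^L = SL_2$ this parametrisation is classical, and provides a useful sanity check. Labelling the simple $SL_2$-modules $V_a$ by their highest weights $a \in {\mathbb Z}_{\geq 0}$, the Clebsch-Gordan decomposition reads $$ V_a \otimes V_b = \bigoplus_{|a-b| \,\leq\, c \,\leq\, a+b, ~~ c \,\equiv\, a+b ~{\rm mod}~ 2} V_c, ~~~~ \dim {\rm Inv}\bigl(V_a\otimes V_b \otimes V_c\bigr) \in \{0, 1\}. $$ So in this case a basis in a tensor product is indexed by the integral points of a polytope cut out by triangle-type inequalities and a parity condition, which is precisely the shape of a set of positive integral tropical points.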
So using (\ref{model}) we arrive at a canonical pairing \begin{equation} \label{canpar1} {\bf I}: {\rm Conf}^+({\cal A}^{n+1}, {\cal B})({\mathbb Z}^t) \times {\cal A}_{L}^n\longrightarrow {\Bbb A}^1. \end{equation} Let us present ${\cal A}_{L}^n$ as a configuration space. Recall that ${\rm Conf}_{w_0}({\cal A}^{n+1}_{L}, {\cal B}_{L})$ parametrises configurations $({\rm A}_{1}, \ldots , {\rm A}_{n+1}, {\rm B}_{n+2})$ such that the pair $({\rm A}_{n+1}, {\rm B}_{n+2})$ is generic. Generic pairs $\{{\rm A}, {\rm B}\}$ form a ${\rm G}^L$-torsor. Let $\{{\rm A}^+, {\rm B}^-\}$ be a standard generic pair. Then there is an isomorphism \begin{equation} \label{mduala1a2} {\cal A}_{L}^n \stackrel{=}{\longrightarrow} {\rm Conf}_{w_0}({\cal A}^{n+1}_{L}, {\cal B}_{L}), ~~~~\{{\rm A}_{1}, \ldots , {\rm A}_{n}\} \longmapsto ({\rm A}_{1}, \ldots , {\rm A}_{n}, {\rm A}^+, {\rm B}^-). \end{equation} The subspace ${\rm Conf}^\times({\cal A}^{n+1}_{L}, {\cal B}_{L})$ parametrises configurations $({\rm A}_{1}, \ldots , {\rm A}_{n+1}, {\rm B}_{n+2})$ such that the consecutive pairs of flags are generic. It is the quotient of ${\rm Conf}_{n+2}^\times({\cal A})$ by the action of the group ${\rm H}$ on the last decorated flag. The projection ${\rm Conf}_{n+2}^\times({\cal A}) \to {\rm H}^{n+2}$ induces a map, see (\ref{agr}), \begin{equation} \label{project} \pi= (\lambda, \mu): {\rm Conf}^\times({\cal A}^{n+1}, {\cal B}) \longrightarrow {\rm H}^{n}\times {\rm H}. \end{equation} $$ ({\rm A}_{1}, \ldots , {\rm A}_{n+1}, {\rm B}_{n+2})\longmapsto \Bigl(\alpha({\rm A}_1, {\rm A}_2), ..., \alpha({\rm A}_n, {\rm A}_{n+1})\Bigr) \times \alpha({\rm A}_{n+1}, {\rm B}_{n+2}) \alpha({\rm A}_{1}, {\rm B}_{n+2})^{-1}. 
$$ Then the symmetry is restored, and we can view (\ref{canpar1}) as a manifestation of a mirror duality: \begin{equation} \label{mduala2} ({\rm Conf}^\times({\cal A}_{}^{n+1}, {\cal B}_{}), {\cal W}_{}, \Omega_{}, \pi) ~~\mbox{is mirror dual to}~~ ({\rm Conf}_{w_0}({\cal A}^{n+1}_{L}, {\cal B}_{L}), \Omega_{L}, r_L). \end{equation} Here $r_L$ is the action of ${\rm H}^{n+1}_{L}$ by rescaling of the decorated flags. The projection/action data is $$ {\Bbb H} = {\rm H}^{n+1}, ~~~~{\Bbb H}_L = \{e\}, ~~~~ {\Bbb T} = \{e\}, ~~~~{\Bbb T}_L = {\rm H}_L^{n+1}, ~~~~ $$ The analog of mirror dual pair (\ref{8.6.13.2}) and its projection/action data are given by, see Fig \ref{tpm21.75}, \begin{equation} \label{8.6.13.2b} ({\rm Conf}^\times({\cal A}^{n+1}, {\cal B}), \Omega) ~~\mbox{is mirror dual to}~~ ({\rm Conf}^\times({\cal A}^{n+1}_L, {\cal B}_L), \Omega_{L}). \end{equation} $$ {\Bbb H} = {\rm H}^{n+1}, ~~~~{\Bbb H}_L = {\rm H}_L^{n+1}, ~~~~ {\Bbb T} = {\rm H}^{n+1}, ~~~~{\Bbb T}_L = {\rm H}_L^{n+1}, ~~~~ $$ So we arrived at the two dual pairs (\ref{mduala2}) and (\ref{8.6.13.2b}) using canonical pairings as a guideline. As discussed in the Example in Section \ref{sec1.4}, we can get them from the basic dual pairs (\ref{8.6.13.2a}) and (\ref{8.6.13.2}) using the action / projection duality $\bullet$, which in this case tells us that the quotient by the action of the group ${\rm H}$ on one side is dual to a fiber of the family of spaces over the dual group ${\rm H}_L$ over a point $a \in {\rm H}_L$. In particular, the dual pair (\ref{8.6.13.2}) leads to the dual pair illustrated on Fig \ref{tpm21.75}. Notice that configurations $({\rm A}_1, ..., {\rm A}_{n+2})$ with $\alpha({\rm A}_{n+1}, {\rm A}_{n+2}) =a \in {\rm H}$ are in bijection with configurations $({\rm A}_1, ..., {\rm A}_{n+1}, {\rm B}_{n+2})$ where the pair $({\rm A}_{n+1}, {\rm B}_{n+2})$ is generic.
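This bijection is transparent for ${\rm G}=SL_2$, assuming the standard identifications in this case: a decorated flag amounts to a nonzero vector $v$ in a two-dimensional symplectic space $(V, \omega)$, a flag to a line, and the invariant $\alpha$ of a pair of decorated flags to $\omega(v_i, v_j) \in {\mathbb C}^* = {\rm H}$. Indeed, given a generic pair $({\rm A}_{n+1}, {\rm B}_{n+2})$ with ${\rm A}_{n+1} = v_{n+1}$ and ${\rm B}_{n+2} = \langle w \rangle$, so that $\omega(v_{n+1}, w) \not = 0$, and given $a \in {\mathbb C}^*$, the vector $$ v_{n+2} := \frac{a}{\omega(v_{n+1}, w)}\, w $$ is the unique vector on the line ${\rm B}_{n+2}$ with $\omega(v_{n+1}, v_{n+2}) = a$. Conversely, $v_{n+2}$ recovers both the line ${\rm B}_{n+2} = \langle v_{n+2}\rangle$ and the value $a = \omega(v_{n+1}, v_{n+2})$.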
So the two diagrams on the right of Fig \ref{tpm21.75} represent isomorphic configuration spaces, and we get the dual pair (\ref{8.6.13.2b}) from (\ref{8.6.13.2}). The dual pair (\ref{mduala2}) is obtained from (\ref{8.6.13.2b}) by adding potentials at the ${\rm A}$-vertices, thus allowing arbitrary pairs of flags on the dual sides. We conjecture that the analogs of Conjectures \ref{MIRRORDUAL} and \ref{MIRRORDUALa} hold for the pairs (\ref{8.6.13.2b}) and (\ref{mduala2}). \begin{figure}[ht] \epsfxsize300pt \centerline{\epsfbox{fig7.eps}} \caption{Dual spaces ${\rm Conf}^\times( {\cal A}^3, {\cal B})$ (left) and ${\rm Conf}^\times( {\cal A}_L^3, {\cal B}_L) = {\cal A}_L^2$ (right). } \label{tpm21.75} \end{figure} \subsubsection{Landau-Ginzburg mirror of a maximal unipotent group ${\rm U}$ and its generalisations} \label{s1.4.1ab} We view Lusztig's dual canonical basis in ${\cal O}({\rm U}^L)$ as a canonical pairing, and hence as a mirror duality: \begin{equation} \label{mduala3a} {\bf I}: {\rm U}^+_\chi({\mathbb Z}^t) \times {\rm U}^L\longrightarrow {\Bbb A}^1, ~~~~({\rm U}^*, \chi) ~~\mbox{is mirror dual to}~~ {\rm U}^L. \end{equation} To define ${\rm U}^*$, we realise a maximal unipotent subgroup ${\rm U}$ as a big Bruhat cell in the flag variety, and intersect it with the opposite big Bruhat cell. Here $\chi$ is a non-degenerate additive character of ${\rm U}$, restricted to ${\rm U}^*$. This example is explained and generalised using configurations as follows. \vskip 3mm Let ${\rm Conf}_{w_0}({\cal B}, {\cal A}^n, {\cal B})$ be the space parametrising configurations $({\rm B}_{1}, {\rm A}_{2}, \ldots , {\rm A}_{n+1}, {\rm B}_{n+2})$ such that the pairs $({\rm B}_{1}, {\rm B}_{n+2})$ and $({\rm A}_{n+1}, {\rm B}_{n+2})$ are generic, see the right picture on Fig \ref{tpm20.5}.
There is an isomorphism \begin{equation} \label{mduala1a23} {\rm U}_L \times{\cal A}_{L}^{n-1}= {\rm Conf}_{w_0}({\cal B}_{L}, {\cal A}^n_{L}, {\cal B}_{L}), ~~~~ \{{\rm B}_1, {\rm A}_2, ..., {\rm A}_n\} \longmapsto ({\rm B}_1, {\rm A}_2, ..., {\rm A}_n, {\rm A}^+, {\rm B}^-). \end{equation} The group ${\rm H}^{n}_{L}$ acts on ${\rm Conf}_{w_0}( {\cal B}_{L}, {\cal A}^n_{L}, {\cal B}_{L})$ by rescaling decorated flags. The subspace ${\rm Conf}^\times({\cal B}, {\cal A}^n, {\cal B})$ parametrises configurations where each consecutive pair of flags is generic. It is depicted on the left of Fig \ref{tpm20.5}. It is the quotient of ${\rm Conf}_{n+2}^\times({\cal A})$ by the action of ${\rm H}\times {\rm H}$ on the first and last decorated flags. Thus there is a map $\pi$, defined similarly to (\ref{project}): \begin{equation} \label{projectabb} \pi = (\lambda, \mu): {\rm Conf}^\times( {\cal B}, {\cal A}^{n}, {\cal B}) \to {\rm H}^{n-1}\times {\rm H}. \end{equation} So the projection / action data in this case is $$ {\Bbb H} = {\rm H}^{n}, ~~~~{\Bbb H}_L = \{e\}, ~~~~ {\Bbb T} = \{e\}, ~~~~{\Bbb T}_L = {\rm H}_L^{n}, ~~~~ $$ For example, ${\rm Conf}^\times({\cal B}, {\cal A}, {\cal B}) = {\rm U}^*$, in agreement with ${\rm U}^*$ in (\ref{mduala3a}). \begin{conjecture} \label{9.22.13.1} The set ${\rm Conf}^+({\cal B}, {\cal A}^{n}, {\cal B})({\mathbb Z}^t)$ parametrises a canonical basis in ${\cal O}({\rm U}_L \times {\cal A}_{L}^{n-1})$. The subset $(\lambda^t, \mu^t)^{-1}(\lambda_1, ..., \lambda_{n-1}; \nu)$ parametrises a canonical basis in the weight $\nu$ subspace of $$ {\rm U}({\cal N}^L) \otimes V_{\lambda_1}\otimes ... \otimes V_{\lambda_{n-1}}. 
$$ The analogs of Conjectures \ref{MIRRORDUAL} and \ref{MIRRORDUALa} hold for the following mirror dual pairs: $$ ({\rm Conf}^\times({\cal B}, {\cal A}^{n}, {\cal B}), \Omega)~~ \mbox{is mirror dual to}~~ ({\rm Conf}^\times({\cal B}_{L}, {\cal A}^n_{L}, {\cal B}_{L}), \Omega_L), $$ $$ ({\rm Conf}^\times({\cal B}, {\cal A}^{n}, {\cal B}), {\cal W}, \Omega, \pi)~~ \mbox{is mirror dual to}~~ ({\rm Conf}_{w_0}({\cal B}_{L}, {\cal A}^n_{L}, {\cal B}_{L}), r_L) $$ \end{conjecture} These mirror pairs can be obtained from the basic mirror pairs (\ref{8.6.13.2}) and (\ref{8.6.13.2a}) by trading, using the action / projection principle $\bullet$, the quotient by ${\rm H}^2_L$ for the fiber over $(a,b) \in {\rm H}^2$ on the dual side, see Fig \ref{tpm20.5}. \begin{figure} \epsfxsize300pt \centerline{\epsfbox{tpm20.75.eps}} \caption{Duality ${\rm Conf}^\times({\cal B}^2, {\cal A}^3) \leftrightarrow {\rm Conf}_{w_0}({\cal B}_{L}^2, {\cal A}_{L}^3) = {\rm U}_{L} \times {\cal A}_{L}^2$. In the middle: the ${\rm H}$-components of the map $\lambda$ sit at the dashed blue sides. The map $\mu$ is assigned to ${\rm A}_2{\rm B}_1{\rm B}_5{\rm A}_4$. } \label{tpm20.5} \end{figure} \input{Gtpm.tex} \subsection{Representation theory and examples of homological mirror symmetry for stacks} \label{sec4.2stack} As soon as our space ${\cal M}$ is fibered over a split torus ${\Bbb H}$, the mirror dual space ${\cal M}_L$ acquires an action of the dual torus ${\Bbb T}_L$. Thus we want to find the mirror of the stack ${\cal M}_L/{\Bbb T}_L$. Let us discuss two examples corresponding to the examples in Sections \ref{s1.4.1a} and \ref{s1.4.1ab}. Let us look first at the dual pair (\ref{mduala2}). The subgroup $1\times {\rm H}_L^n$ acts freely on the last $n$ decorated flags in ${\rm Conf}_{w_0}({\cal B}_L, {\cal A}^{n+1}_L)$, and the quotient is ${\cal B}^{n}_L$.
So one has \begin{equation} \label{8.11.13.110} {\rm Conf}_{w_0}({\cal B}_L, {\cal A}^{n+1}_L)/({\rm H}_L\times {\rm H}^n_L)= {\rm H}_L\backslash {\cal B}^{n}_L. \end{equation} We start with the problem reflecting the A-model side of this stack. \paragraph{1. Equivariant quantum cohomology of products of flag varieties.} There is a way to understand mirror symmetry as an isomorphism of two modules over the algebra of $\hbar$-differential operators ${\cal D}_\hbar$: one provided by the quantum cohomology connection, and the other by the integral for the mirror dual Landau-Ginzburg model: $$ \mbox{The quantum cohomology ${\cal D}_\hbar$-module of a projective (Fano) variety ${\cal M}$} = $$ $$ \mbox{The ${\cal D}_\hbar$-module for the Landau-Ginzburg mirror $(\pi: {\cal M}^\vee\to {\Bbb H}, {\cal W}, \Omega)$, defined by $\int e^{-{\cal W}/\hbar}\Omega$}. $$ Here the space ${\cal M}^\vee$ is fibered over a torus ${\Bbb H}$, $\Omega$ is a volume form on ${\cal M}^\vee$, and ${\cal W}$ is a function on ${\cal M}^\vee$, called the Landau-Ginzburg potential. The form $\Omega$ and the canonical volume form on the torus ${\Bbb H}$ define a volume form $\Omega^{(a)}$ on the fiber of the map $\pi$ over an $a\in {\Bbb H}$. The integrals $\int e^{-{\cal W}/\hbar}\Omega^{(a)}$ over cycles in the fibers are solutions of the ${\cal D}_\hbar$-module $\pi_*( e^{-{\cal W}/\hbar}\Omega)$ on ${\Bbb H}$. This approach to mirror symmetry originated with Givental \cite{Gi}, see also Witten \cite{W} and \cite{EHX}, and was developed further in \cite{HV} and many other works. See \cite{Au1}, \cite{Au2} for a discussion of examples of mirrors for the complements to anticanonical divisors on Fano varieties. In our situation ${\cal M}$ is a positive space and ${\cal W}$ is a positive function, so there is an integral \begin{equation} \label{keyfunct} {\cal F}_{{\cal M}}(a; \hbar):= \int_{\gamma^+(a)} e^{-{\cal W}/\hbar}\Omega^{(a)}, ~~~~\gamma^+(a):= \pi^{-1}(a) \cap {\cal M}({\mathbb R}_{>0}).
\end{equation} If it converges, it defines a function on ${\Bbb H}({\mathbb R}_{>0})$. This function, as well as its partial Mellin transforms, is a very important object to study. It plays a key role in the story. Below we elaborate on some examples related to representation theory. Let $\psi_s$ be the character of ${\rm H}({\mathbb R}_{>0})$ corresponding to an element $s \in {\rm H}_{L}({\mathbb R}_{>0})$. Recall the projection $\mu: {\rm Conf}^\times({\cal A}^{n+1}, {\cal B})\to {\rm H}$ from (\ref{project}). Consider the integral \begin{equation} \label{integralfoq} {\cal F}_{{\rm Conf}^\times ({\cal A}^{n+1}, {\cal B})}(a, s; \hbar):= \int_{\gamma^+(a)}\mu^*(\psi_s) e^{-{\cal W}/\hbar}\Omega^{(a)}, ~~~~ (a, s) \in ({\rm H}^{n}\times {\rm H}_L)({\mathbb R}_{>0}). \end{equation} It is the Mellin transform of the function (\ref{keyfunct}) along the torus $1 \times {\rm H}\subset {\rm H}^{n+1}$. If $n=1$, one can identify integral (\ref{integralfoq}) with an integral presentation for the Whittaker-Bessel function of the principal series representation of ${\rm G}({\mathbb R})$ corresponding to the character $\psi_s$. The latter solves the quantum Toda lattice integrable system \cite{Ko}. Therefore it provides, generalising Givental's work \cite{Gi2} for ${\rm G}=GL_m$ in the non-equivariant setting, the integral presentation of the special solution of the equivariant quantum cohomology ${\cal D}_\hbar$-module for the flag variety ${\cal B}_L$ studied in \cite{GiK}, \cite{Gi3}, \cite{GKLO}, \cite{GLO1}-\cite{GLO3}, \cite{L}, \cite{R1}, \cite{R2}. Recall the special cluster coordinate system on ${\rm Conf}_{3}({\cal A})$ for ${\rm G}={\rm GL}_m$ from \cite{FG1}. It has a slight modification providing a rational coordinate system on ${\rm Conf}_{w_0}({\cal B}, {\cal A}, {\cal A})$, see Section \ref{KT}. \begin{theorem} \label{4.23.13.999} i) Let ${\rm G}=GL_m$.
Then the potential ${\cal W}$, expressed in the special cluster coordinate system on ${\rm Conf}_{w_0}({\cal B}, {\cal A}, {\cal A})$, is precisely Givental's potential from \cite{Gi2}. The value of the integral ${\cal F}_{{\rm Conf}^\times({\cal B}, {\cal A}, {\cal A})}(a; s, \hbar)$ at $s=e$ coincides with Givental's integral for a solution of the quantum cohomology ${\cal D}_\hbar$-module ${\rm QH}^*({\cal B}_L)$ \cite{Gi2}. ii) For any group ${\rm G}$, the integral ${\cal F}_{{\rm Conf}^\times({\cal B}, {\cal A}, {\cal A})}(a; s)$ is a solution of the ${\cal D}_\hbar$-module ${\rm QH}^*_{\rm H_L}({\cal B}_L)$. \end{theorem} \begin{proof} i) It is proved in Section \ref{sec4.2Gi}. ii) Since integral (\ref{integralfoq}) provides an integral presentation for the Whittaker function, it is equivalent to the results of \cite{GLO1}, \cite{R1}. Observe that the parameter $a \in {\rm H}({\mathbb C})$ is interpreted as the parameter on $H^2({\cal B}_L, {\mathbb C}^*)$, which is the base of the small quantum cohomology connection, while the parameter $s\in {\rm H}_L({\mathbb R}_{>0})$ is the parameter of the ${\rm H_L}$-equivariant cohomology. \end{proof} For arbitrary $n$, integral (\ref{integralfoq}) determines the equivariant quantum cohomology ${\cal D}_\hbar$-module of ${\cal B}_L^n$. The latter lives on ${\rm H}^{n}\times {\rm H}_L$: it is a ${\cal D}_\hbar$-module on ${\rm H}^{n}$, but only an ${\cal O}$-module along ${\rm H}_L$. Integral (\ref{integralfoq}) is a solution of this ${\cal D}_\hbar$-module. \paragraph{2. Mirror of equivariant B-model on ${\cal B}_L^n$.} The integral (\ref{integralfoq}) admits an analytic continuation in $s$ provided by the analytic continuation of the character $\psi_s$ in the integrand. The complex integrand lives on an analytic space defined as follows. Let $\widetilde {\rm H}({\mathbb C})$ be the universal cover of ${\rm H}({\mathbb C})$.
Denote by $({\cal B}\times \ldots\times {\cal B})_n^{\ast, a}$ the fiber of the map $\lambda$ in (\ref{project}) over an $a \in {\rm H}^n$. It is a Zariski open subset of ${\cal B}^n$. Consider the fibered product $$ \begin{array}{ccc} \widetilde {({\cal B}\times \ldots\times {\cal B})_n^{\ast, a}}({\mathbb C})&\stackrel{\widetilde {\rm exp}}{\longrightarrow} &({\cal B}\times \ldots\times {\cal B})_n^{\ast, a}({\mathbb C})\\ \widetilde\mu\downarrow &&\mu\downarrow\\ \widetilde {\rm H}({\mathbb C})& \stackrel{{\rm exp}}{\longrightarrow}&{\rm H}({\mathbb C}) \end{array} $$ Let $\widetilde {\cal W}$ and $\widetilde \Omega$ be the lifts of ${\cal W}$ and $\Omega$ by the map $\widetilde {\rm exp}$. We get a locally constant family of categories ${\cal F}{\cal S}_{\rm wr}(\widetilde {({\cal B}\times \ldots\times {\cal B})_n^{\ast, a}}({\mathbb C}), \widetilde {\cal W}, \widetilde \Omega)$ over ${\rm H}^n({\mathbb C})$. So the fundamental group $\pi_1({\rm H}^n({\mathbb C}))$ acts on the category for any given $a$. The group $\pi_1({\rm H}({\mathbb C}))$ also acts on it by the deck transformations induced from the universal cover $\widetilde {\rm H}({\mathbb C})\longrightarrow{\rm H}({\mathbb C})$. On the other hand, the Picard group of the stack ${\rm H}_L\backslash {\cal B}_L^{n}$, $$ {\rm Pic}({\rm H}_L\backslash {\cal B}_L^{n}) = X^*({\rm H}_L) \times {\rm Pic}({\cal B}_L^n) = X^*({\rm H}_L)\times X^*({\rm H}^n_L) $$ acts by autoequivalences of the category $D^b{\rm Coh}_{{\rm H}_L}({\cal B}_L^{n})$. \begin{conjecture} \label{4.27.13.1aas} There is an equivalence of ${\rm A}_\infty$-categories \begin{equation} \label{4.27.13.1as} {\cal F}{\cal S}_{\rm wr}( \widetilde {({\cal B}\times \ldots\times {\cal B})_n^{\ast, a}}({\mathbb C}), \widetilde {\cal W}, \widetilde \Omega)~~ \sim ~~ D^b{\rm Coh}_{{\rm H}_L}({\cal B}_L^{n}). 
\end{equation} It intertwines the deck transformation action of $\pi_1({\rm H}({\mathbb C}))$ $\times$ the monodromy action of $\pi_1({\rm H}^n({\mathbb C}))$ on the Fukaya-Seidel category with the action of $X^*({\rm H}_L) \times {\rm Pic}({\cal B}_L^n)$ on the category $D^b{\rm Coh}_{{\rm H}_L}({\cal B}_L^{n})$. The integral (\ref{integralfoq}) over Lagrangian submanifolds supporting objects of the Fukaya-Seidel category is a central charge for a stability condition on the category. \end{conjecture} Kontsevich argued \cite{K13} that there is a smaller class of stability conditions, which he called ``physical stability conditions''. The stability conditions above should be from that class. \vskip 3mm {\bf Examples}. 1. Let $n=1$. Then ${\cal B}_1^{\ast, a}$ is the intersection ${\cal B}^{\ast}$ of two big Bruhat cells in the flag variety ${\cal B}$. It parametrises flags in generic position to two given generic flags, say $(B^+, B^-)$. 2. Let ${\rm G}=SL_2$, $n=1$. Then ${\cal B}_1^{\ast, a} = {\mathbb C}^*$ with the coordinate $u$, $\widetilde {\cal B}_1^{\ast, a} = {\mathbb C}$ with the coordinate $t$, $u=e^t$, and $\widetilde {\cal W} = a^{-1}(e^{t} + e^{-t})$ where $a\in {\mathbb C}^*$ is a parameter. Next, ${\cal B}_L= {\mathbb C}{\Bbb P}^1$, with the natural ${\mathbb C}^*$-action preserving $0, \infty$. Conjecture \ref{4.27.13.1aas} predicts an equivalence $$ {\cal F}{\cal S}_{\rm wr}({\mathbb C}; a^{-1}(e^{t}+ e^{-t}), dt)~~\stackrel{}{\sim}~~ D^b{\rm Coh}_{{{\mathbb C}^*}}({\mathbb C}{\Bbb P}^1), ~~~~ a\in {\mathbb C}^*. $$ The equivalence is a trivial exercise for the experts. It can be checked by using the Kontsevich combinatorial model \cite{K09}, \cite{A09}, \cite{STZ}, \cite{DK} for the Fukaya-Seidel category as a category of locally constant sheaves on the Lagrangian skeleton for a surface with potential in the case of $({\mathbb C}, e^{t}+ e^{-t})$, shown on Fig \ref{gcb2}.
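Here is a quick consistency check on the level of quantum cohomology, using the standard presentation ${\rm QH}^*({\mathbb C}{\Bbb P}^1) = {\mathbb C}[p,q]/(p^2 - q)$: the critical values of the potential should match the eigenvalues of quantum multiplication by $c_1({\mathbb C}{\Bbb P}^1)$. Indeed, $$ \frac{d}{dt}\, a^{-1}(e^{t}+e^{-t}) = a^{-1}(e^{t}-e^{-t}) = 0 ~~\Longleftrightarrow~~ t \in \pi i \,{\mathbb Z}, ~~~~ \widetilde {\cal W}\Bigl|_{\rm crit} = \pm\, 2a^{-1}. $$ Since $c_1({\mathbb C}{\Bbb P}^1) = 2p$ and $p^2=q$, quantum multiplication by $c_1$ has eigenvalues $\pm 2\sqrt q$, matching the critical values under $q = a^{-2}$.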
Varying the parameter $a\in {\mathbb C}^*$ in the potential, we get a locally constant family of Fukaya-Seidel categories. Its monodromy is an autoequivalence corresponding to the action of a generator of the group ${\rm Pic}({\Bbb P}^1)$. The translation $t \longmapsto t+2\pi i$ is another autoequivalence corresponding to the action of a generator of the character group $X^*({\mathbb C}^*) ={\mathbb Z}$ on $D^b{\rm Coh}_{{{\mathbb C}^*}}({\mathbb C}{\Bbb P}^1)$. \begin{figure}[ht] \epsfxsize120pt \centerline{\epsfbox{gcb2.eps}} \caption{ Horizontal rays are the rays of fast decay of the potential. Together with the vertical line, they form the Lagrangian skeleton of the Kontsevich model of ${\cal F}{\cal S}_{\rm wr}({\mathbb C}; a^{-1}(e^{t}+ e^{-t}), dt)$. } \label{gcb2} \end{figure} Let us now consider the oscillatory integral $$ \int_{L} {\rm exp}(\frac{1}{\hbar}(-a^{-1}(e^{t} + e^{-t})+st))dt = \int_{{\rm exp}(L)} e^{-a^{-1}(u+u^{-1})/\hbar}u^{s/\hbar}\frac{du}{u}. $$ Here $L$ is a path which goes to infinity along the line of fast decay of the integrand. This is an integral for the Bessel function. It defines a family of stability conditions on the Fukaya-Seidel category depending on $s\in {\mathbb C}$ -- it is the value of the central charge on the $K_0$-class of the object supported on $L$. The parameter $s$ reflects the equivariant parameter for the ${\mathbb C}^*$-action. \paragraph{3. Mirror of equivariant B-model on ${\cal B}_L^n\times {\rm U}_L$.} There is an integral very similar to (\ref{integralfoq}): \begin{equation} \label{integralfo} {\cal F}_{{\rm Conf}^\times( {\cal B}, {\cal A}^{n}, {\cal B})}(a, s):= \int_{\gamma^+(a)}\mu^*(\psi_s) e^{-{\cal W}/\hbar}\Omega^{(a)}, ~~~~ (a, s) \in ({\rm H}^{n-1}\times {\rm H}_L)({\mathbb R}_{>0}). \end{equation} Denote by $\lambda_{(\ref{projectabb})}$ the map $\lambda$ onto ${\rm H}_L^{n-1}$ from (\ref{projectabb}).
The integrand has an analytic continuation in $s$ which lives on the fibered product $$ \begin{array}{ccc} \widetilde {\lambda^{-1}_{(\ref{projectabb})}}(a)({\mathbb C})&\stackrel{\widetilde {\rm exp}}{\longrightarrow} &\lambda^{-1}_{(\ref{projectabb})}(a)({\mathbb C})\\ \widetilde\mu\downarrow &&\mu\downarrow\\ \widetilde {\rm H}({\mathbb C})& \stackrel{{\rm exp}}{\longrightarrow}&{\rm H}({\mathbb C}) \end{array} $$ There is a conjecture similar to Conjecture \ref{4.27.13.1aas} describing the category $D^b{\rm Coh}_{{\rm H}_L}({\cal B}_L^{n-1}\times {\rm U}_L)$. For example, when $n=1$ it reads as follows. \begin{conjecture} \label{9.10.13.1} There is an equivalence of ${\rm A}_\infty$-categories \begin{equation} \label{9.10.13.2} {\cal F}{\cal S}_{\rm wr}( \widetilde {{\rm U}^{\ast}}({\mathbb C}), \widetilde {\cal W}, \widetilde \Omega)~~ \sim ~~ D^b{\rm Coh}_{{\rm H}_L}({\rm U}_L). \end{equation} It intertwines the deck transformation action of $\pi_1({\rm H}({\mathbb C}))$ on the Fukaya-Seidel category with the action of $X^*({\rm H}_L)$ on the category $D^b{\rm Coh}_{{\rm H}_L}({\rm U}_L)$. \end{conjecture} {\bf Example}. If ${\rm G}=SL_2$ and $n=1$, then ${\rm Conf}_{w_0}( {\cal B}, {\cal A}, {\cal B})={\mathbb C}$ with the ${\mathbb C}^*$-action. On the dual side, ${\rm Conf}^\times( {\cal B}, {\cal A}, {\cal B})={\mathbb C}^*$, $\pi=\mu$ is the identity map, ${\cal W}=u$, $\Omega = du/u$. The universal cover of ${\mathbb C}^*$ is ${\mathbb C}$ with the coordinate $t$ such that $u=e^t$. The integral is $$ {\cal F}(s) = \int^\infty_{0}e^{-u}u^s du/u = \Gamma(s). $$ The equivalence of categories predicted by Conjecture \ref{9.10.13.1} is \begin{equation} \label{9.10.13.3} {\cal F}{\cal S}_{\rm wr}( {\mathbb C}, e^t, dt)~~ \sim ~~ D^b{\rm Coh}_{{\mathbb C}^*}({\mathbb C}). \end{equation} It can be checked by using the Kontsevich combinatorial model for the Fukaya-Seidel category. \subsection{Concluding remarks} \paragraph{1. 
Mirror dual of the moduli spaces of ${\rm G}^L$-local systems on $S$.} The true analog of the moduli space of ${\rm G}^L$-local systems for a decorated surface $S$ is the moduli space ${\rm Loc}_{{\rm G^L}, S}$. We view the function ${\cal W}$ on the space ${\cal A}_{{\rm G}, S}$ as the Landau-Ginzburg potential on ${\cal A}_{{\rm G}, S}$, and suggest \begin{conjecture} \begin{equation} \label{12.1.14.1} \mbox{\rm $({\cal A}^\times_{{\rm G}, S}, {\cal W}_{}, \Omega_{}, \pi)$ is mirror dual to $({\rm Loc}_{{\rm G^L}, S}, \Omega_{L}, r_L)$}. \end{equation} \end{conjecture} It would be interesting to compare this mirror duality conjecture with the mirror duality conjectures of Kapustin-Witten \cite{KW} and Gukov-Witten \cite{GW1}, which do not involve a potential, and refer to families of moduli spaces, which are somewhat different from the moduli spaces we consider. \vskip 3mm Notice also that if each boundary component of $S$ has at least one special point, then ${\rm Loc}_{{\rm G^L}, S} = {\cal A}_{{\rm G^L}, S}$, and so in this case we have a more symmetric picture: \begin{equation} \label{12.1.14.2} \mbox{\rm $({\cal A}^\times_{{\rm G}, S}, {\cal W}_{}, \Omega_{}, \pi)$ is mirror dual to $({\cal A}_{{\rm G^L}, S}, \Omega_{{L}}, r_L)$}. \end{equation} \begin{equation} \label{12.1.14.2a} \mbox{\rm $({\cal A}^\times_{{\rm G}, S}, \Omega_{}, \pi)$ is mirror dual to $({\cal A}^\times_{{\rm G^L}, S}, \Omega_{{L}}, r_L)$}. \end{equation} \paragraph{2. Oscillatory integrals.} The analog of integral (\ref{keyfunct}) in the surface case is an integral \begin{equation} \label{stapiS} {\cal F}_{{\rm G}, S}(a):= \int_{\gamma^+(a)/\Gamma_S} e^{-{\cal W}/\hbar}\Omega^{(a)}. \end{equation} Since the integrand is $\Gamma_S$-invariant, the integration cycles are defined by intersecting the fibers with ${\cal A}_{{\rm G}, S}({\mathbb R}_{>0})/\Gamma_S$. Notice that ${\cal A}_{{\rm G}, S}({\mathbb R}_{>0})$ is the decorated higher Teichm\"uller space \cite{FG1}.
If ${\rm G}=SL_2$, the integral converges. For other groups convergence is a problem. Notice also that the three convergent oscillatory integrals $$ {\cal F}_{{\rm Conf}^\times_n({\cal A}, {\cal B}, {\cal B})}(s),~~~ {\cal F}_{{\rm Conf}^\times({\cal A}, {\cal A}, {\cal B})}(a;s), ~~~ {\cal F}_{{\rm Conf}^\times_3({\cal A})}(a_1, a_2, a_3), ~~~~a_i \in {\rm H}({\mathbb R}_{>0}),~ s\in {\rm H}_L({\mathbb R}_{>0}) $$ are continuous analogs of the Kostant partition function, weight multiplicities and dimensions of triple tensor product invariants for the Langlands dual group ${\rm G}^L({\mathbb R})$. \paragraph{3. Relating our dualities to cluster Duality Conjectures \cite{FG2}.} The latter study dual pairs $({\cal A}, {\cal X}^\vee)$, where ${\cal A}$ is a cluster ${\cal A}$-variety, and ${\cal X}^\vee$ is the Langlands dual cluster ${\cal X}$-variety: $$ {\cal A} ~~\mbox{is dual to}~~ {{\cal X}^\vee}. $$ There is a discrete group $\Gamma$ acting by automorphisms of each of the spaces ${\cal A}$ and ${\cal X}^\vee$, called the {\it cluster modular group}. So it acts on the sets of tropical points ${\cal A}({\mathbb Z}^t)$ and ${\cal X}^\vee({\mathbb Z}^t)$. Cluster Duality Conjectures predict canonical $\Gamma$-equivariant pairings \begin{equation} \label{duaIIcon} {\bf I}_{\cal A}: {\cal A}({\mathbb Z}^t) \times {\cal X}^\vee \longrightarrow {\Bbb A}^1, ~~~~ {\bf I}_{\cal X}: {\cal A} \times {\cal X}^\vee({\mathbb Z}^t) \longrightarrow {\Bbb A}^1. \end{equation} As the work \cite{GHK13} shows, in general the functions assigned to the tropical points may exist only as formal universally Laurent power series rather than universally Laurent polynomials. There are cluster volume forms ${\Omega}_{\cal A}$ and ${\Omega}_{\cal X}$ on the ${\cal A}$ and ${\cal X}$ spaces \cite{FG5}, see Section \ref{seccluster}.
{We suggest that, in a { rather general situation}, there is a natural $\Gamma$-invariant positive potential ${\cal W}_{\cal A}$ on the space ${\cal A}$, a similar potential ${\cal W}_{\cal X}$ on the space ${\cal X}$, and certain ``alterations'' $\widehat {{\cal X}^\vee}$ and $\widehat {{\cal A}^\vee}$ of the spaces ${\cal X}^\vee$ and ${\cal A}^\vee$ providing mirror dualities underlying canonical pairings (\ref{duaIIcon}): \begin{equation} \label{duality321} ({\cal A}, {\cal W}_{\cal A}, {\Omega}_{\cal A}, \pi_{\cal A}) ~~\mbox{is mirror dual to}~~ (\widehat {{\cal X}^\vee}, {\Omega}_{{\cal X}^\vee}, r_{{\cal X}^\vee}).\end{equation} \begin{equation} ({\cal X}, {\cal W}_{{\cal X}}, {\Omega}_{{\cal X}}, \pi_{{\cal X}}) ~~\mbox{is mirror dual to}~~ (\widehat {{\cal A}^\vee}, {\Omega}_{{\cal A}^\vee}, r_{{\cal A}^\vee}). \end{equation} Canonical pairings (\ref{duaIIcon}) should induce canonical pairings related to the potentials and alterations: $$ {\bf I}_{({\cal A}, {\cal W}_{\cal A})}: {\cal A}_{{\cal W}_{\cal A}}^+({\mathbb Z}^t) \times \widehat {{\cal X}^\vee} \longrightarrow {\Bbb A}^1, ~~~~ {\bf I}_{({\cal X}, {\cal W}_{\cal X})}: {\cal X}_{{\cal W}_{\cal X}}^+({\mathbb Z}^t) \times \widehat {{\cal A}^\vee} \longrightarrow {\Bbb A}^1. $$ \vskip 2mm This should provide a cluster generalisation of our examples. For instance, there is a split torus ${\Bbb H}_{{\cal A}}$ associated to a cluster variety ${\cal A}$, coming with a canonical basis of characters, given by the {\it frozen ${\cal A}$-coordinates}. They describe the projection $\pi_{\cal A}: {\cal A} \to {\Bbb H}_{{\cal A}}$, see Section \ref{seccluster}. An alteration $\widehat {\cal A}^\vee$, given by a partial compactification of the space ${\cal A}^\vee$, and a conjectural definition of the potential ${\cal W}_{\cal X}$ are given in Section \ref{potfrv}. \paragraph{4.
Conclusion.} {\it A parametrisation of a canonical basis, cast as a canonical pairing ${\bf I}$, should be understood as a manifestation of a mirror duality between a space with a Landau-Ginzburg potential and a similar space for the Langlands dual group}. \vskip2mm Our main evidence is that canonical pairing (\ref{canpar1}) describing a parametrisation of the canonical basis in tensor products of $n$ irreducible ${\rm G}^L$-modules is related via an integral presentation to the ${\cal D}_\hbar$-module describing the equivariant quantum cohomology of $({\cal B}_L)^n$. \vskip 3mm There is a remarkable mirror conjecture of Gross-Hacking-Keel \cite{GHK11}, who start with a maximally degenerate log Calabi-Yau $Y$ and conjecture that the Gromov-Witten theory of $Y$ gives rise to a commutative ring $R(Y)$, with a basis. Its spectrum is an affine variety which is conjectured to be the mirror of $Y$. Notice that in our conjectures we give an {\it a priori} description of the mirror dual pair of spaces, while in \cite{GHK11} the mirror space is encoded in the conjecture. For example, mirror conjecture (\ref{8.6.13.2}) is expected to be an example of the Gross-Hacking-Keel conjecture, but we do not know how to deduce the former from the latter starting from the pair $({\rm Conf}^\times_n({\cal A}), \Omega)$, and in particular why the Langlands dual group appears in the description of the mirror. We want to stress that in our mirror conjectures we usually deal with mirror dual pairs where at least one side is a Landau-Ginzburg model, i.e. is represented by a space with a potential. In particular canonical bases in representation theory and their generalisations related to moduli spaces of ${\rm G}$-local systems on decorated surfaces $S$ always require the dual space to be a space with a non-trivial potential, unless $S$ is a closed surface without boundary.
Finally, in applications to representation theory we are forced to deal with stacks rather than varieties, as discussed in Section \ref{sec4.2stack}. This is a less explored chapter of homological mirror symmetry. See also a recent paper of C. Teleman \cite{Te14} in this direction. \vskip 2mm The space ${\cal M}({\cal K})$ of ${\cal K}$-points of a space ${\cal M}$ is a cousin of the loop space $\Omega{\cal M}({\mathbb C})$. Heuristically, the quantum cohomology ${\cal D}_\hbar$-module is best seen in the (ill-defined) $S^1$-equivariant Floer cohomology of the loop space $\Omega{\cal M}({\mathbb C})$ \cite{Gi}, which is a sort of ``semi-infinite cohomology'' of the loop space. It would be interesting to relate this to the infinite dimensional cycles ${\cal C}^\circ_l \subset {\cal M}^\circ({\cal K})$. \vskip 2mm It would be very interesting to relate our approach to the construction of canonical bases via cycles ${\cal M}^\circ_l$ to the work in progress of Gross-Hacking-Keel-Kontsevich on the construction of canonical bases on cluster varieties via scattering diagrams. \paragraph{Organization of the paper.} In Section \ref{sec2} we present the main definitions and results relevant to representation theory. We start from a detailed discussion of the geometry of the tensor product invariants in Sections \ref{sec2.1}-\ref{sec2.2}. We discuss more general examples in Section \ref{sec2.3}. In Section \ref{tensor} we construct a canonical basis in tensor products of finite dimensional ${\rm G}^L$-modules, and its parametrization. In Section \ref{sec2} we give all definitions and complete descriptions of the results, but include a proof only if it is very simple. The only exception is a proof of Theorem \ref{mmvvth} in Section \ref{tensor}. The rest of the proofs occupy the subsequent Sections. In Section \ref{sec11} we discuss the general case related to a decorated surface. In Section \ref{seccluster} we discuss the volume form and the potential in the cluster set-up.
\paragraph{Acknowledgments.} This work was supported by the NSF grants DMS-1059129 and DMS-1301776. A.G. is grateful to IHES and the Max Planck Institute for Mathematics (Bonn) for the support. We are grateful to Mohammed Abouzaid, Joseph Bernstein, Alexander Braverman, Vladimir Fock, Alexander Givental, David Kazhdan, Joel Kamnitzer, Sean Keel, Ivan Mirkovic, and Sergey Oblezin for many useful discussions. We are especially grateful to Maxim Kontsevich for fruitful conversations on mirror symmetry during the Summer of 2013 in IHES. We are very grateful to the referee for many fruitful comments, remarks and suggestions. \section{The potential ${\cal W}$ and Weyl group actions on the space ${\cal A}_{{\rm G}, S}$} \label{sec11} \section{The Weyl group actions on the space ${\cal X}_{{\rm G}, S}$ are positive} \label{sec10n} \subsection {The moduli space ${\cal X}_{{\rm G}, S}$} \label{xmodsp} In Section \ref{sec10n}, ${\rm G}$ is a split semisimple algebraic group over ${\mathbb Q}$ with trivial center. Given a right ${\rm G}$-local system ${\cal L}$ on $S$, there is the associated flag bundle ${\cal L}_{\cal B}:={\cal L}\times_{{\rm G}}{\cal B}$. \begin{definition}[\cite{FG1}] The moduli space ${\cal X}_{{\rm G}, S}$ parametrizes pairs $({\cal L}, \beta)$ where ${\cal L}$ is a ${{\rm G}}$-local system on $S$, and $\beta$ a flat section of the restriction of ${\cal L}_{\cal B}$ to the punctured boundary $\widehat \partial (\ast S)$. \end{definition} If $S$ is a disk $D_n$ with $n$ special points on its boundary, then ${\cal X}_{{\rm G},D_n}={\rm Conf}_n({\cal B})$. We briefly recall the positive structure on ${\cal X}_{{\rm G}, S}$ introduced in {\it loc.cit.} Let $T$ be an ideal triangulation of $S$, ${\cal T}$ the set of the triangles of $T$, and ${\cal E}_{\rm int}$ the set of the internal edges of $T$.
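For example, if $S=D_n$ is a disk with $n$ special points on the boundary, then an ideal triangulation $T$ of the $n$-gon has $n-2$ triangles and $n-3$ internal (diagonal) edges, so in this case the target of the map $\pi_T$ below is $$ {\rm Conf}_3({\cal B})^{n-2}\times {\rm H}^{n-3}. $$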
There is a birational map \begin{equation} \displaystyle{ \pi_{T}: {\cal X}_{{\rm G}, S}\longrightarrow \Pi_{t\in {\cal T}}{\rm Conf}_3({\cal B})\times {\Pi}_{e\in {\cal E}_{\rm int}}{\rm H}}. \end{equation} The map to the first factor is defined by restricting ${\cal X}_{{\rm G}, S}$ to the triangles $t\in {\cal T}$. Recall the basic invariants in Section \ref{sec2.3.2}. If $S=D_4$, then the map to the second factor is $$ p_{13}: {\rm Conf}_4({\cal B})\longrightarrow {\rm H},~~({\rm B}_1, {\rm B}_2,{\rm B}_3, {\rm B}_4)\longmapsto \frac{\alpha({\rm A}_1, {\rm A}_2)}{\alpha({\rm A}_3, {\rm A}_2)}\frac{\alpha({\rm A}_3,{\rm A}_4)}{\alpha({\rm A}_1, {\rm A}_4)}, ~~~\mbox{where ${\rm A}_i\in {\cal A}$ and $\pi({\rm A}_i)={\rm B}_i$}. $$ In general, one can go to a finite cover of $S$ if needed, and construct a similar map from ${\cal X}_{{\rm G}, S}$ to ${\Pi}_{e \in {\cal E}_{\rm int}}{\rm H}$. Note that both ${\rm Conf}_3({\cal B})$ and ${\Pi}_{e \in {\cal E}_{\rm int}}{\rm H}$ admit natural positive structures. The positive structure on ${\cal X}_{{\rm G}, S}$ is defined so that $\pi_{T}$ is a positive birational isomorphism. It is proved in {\it loc.cit.} that the positive structure on ${\cal X}_{{\rm G}, S}$ is independent of the triangulation $T$ chosen. \vskip 2mm Let $g\in {\rm G}$ be a regular semisimple element. It is well known that the set ${\bf B}_{g}$ of Borel subgroups containing $g$ is a $W$-torsor. Let ${\rm C}$ be a boundary circle of $S$. Given a generic pair $({\cal L}, {\beta})\in{\cal X}_{{\rm G}, S}$, the monodromy around ${\rm C}$ preserves the flat section ${\beta}|_{{\rm C}}$. In particular, if we trivialize ${\cal L}$ at the fiber ${\cal L}_x$ over a point $x\in {\rm C}$, then the flat section ${\beta}|_{{\rm C}}$ becomes a Borel subgroup containing the monodromy $g$. This defines a Weyl group action on ${\cal X}_{{\rm G}, S}$.
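For example, let ${\rm G}={\rm PGL}_2$, so that ${\cal B}=\P^1$, a Borel subgroup being the stabiliser of a line in ${\mathbb C}^2$. A regular semisimple $g$ has two distinct eigenlines $L_+, L_-$ for any of its lifts to ${\rm GL}_2$, so $$ {\bf B}_g = \{{\rm Stab}(L_+),\ {\rm Stab}(L_-)\}, $$ and the Weyl group $W={\mathbb Z}/2{\mathbb Z}$ acts by exchanging the two eigenlines. In this case altering the flat section over ${\rm C}$ amounts to replacing one eigenline of the monodromy by the other.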
\begin{definition} [${\it loc.cit}$] For each boundary circle ${\rm C}$ of $S$, there is a rational Weyl group action on ${\cal X}_{{\rm G},S}$ by altering the flat section over ${\rm C}$. \end{definition} In the rest of this Section, we prove \begin{theorem} \label{13.5.4.11115h} The Weyl group acts on ${\cal X}_{{\rm G}, S}$ by positive birational isomorphisms. \end{theorem} \subsection{A positive Weyl group action on ${\rm Conf}({\cal A},{\cal A},{\cal B})$} \label{sec7.01h} The ${\rm G}$-orbits of ${\cal B}\times{\cal B}$ are parametrized by the Weyl group. Two flags ${\rm B}_1,{\rm B}_2$ are said to be of distance $w$, denoted by ${\rm B}_1\stackrel{w}{\rightarrow}{\rm B}_2$, if $\{{\rm B}_1,{\rm B}_2\}$ belongs to the orbit parametrized by $w$. Let us fix a flag ${\rm B}^-$. Let ${\cal B}_w$ be the set of flags ${\rm B}'$ such that ${\rm B}^-\stackrel{w}{\rightarrow}{\rm B}'$. Then we have $ {\cal B}=\bigsqcup_{w\in W}{\cal B}_w. $ In particular, the flags $x_i(c)\cdot {\rm B}^{-}$ ($c\in {\Bbb G}_m$) are in ${\cal B}_{s_i}$. Let $g\in {\rm G}$ be a regular semisimple element. As stated in Section \ref{xmodsp}, the set ${\bf B}_g$ of Borel subgroups containing $g$ is a $W$-torsor. For example, if $b\in {\rm B}^{-}$ is a regular semisimple element, then in each cell ${\cal B}_w$ there exists a unique Borel subgroup containing $b$. \vskip 2mm Let ${\bf x}=\{{\rm A}_1, {\rm A}_2, {\rm B}_3\}\in {\cal A\times A\times B}$ be a generic point. There is a unique $b_{\bf x}\in {\rm B}_3$ taking ${\rm A}_1$ to ${\rm A}_2$. Since the subset of regular semisimple elements in ${\rm G}$ is Zariski open, we can assume that $b_{\bf x}$ is regular semisimple. Let ${\rm B}_{{\bf x}}^w$ be the unique Borel subgroup containing $b_{\bf x}$ such that ${\rm B}_3\stackrel{w}{\rightarrow}{\rm B}_{\bf x}^{w}$. Set $$w({\bf x}):=\{{\rm A}_1, {\rm A}_2, {\rm B}_{\bf x}^{w}\}.$$ It defines a rational Weyl group action on ${\cal A\times A\times B}$.
Such an action commutes with the ${\rm G}$-diagonal action. Thus it descends to an action on ${\rm Conf}({\cal A},{\cal A}, {\cal B})$. \begin{theorem} \label{13.5.4.1211h} The Weyl group action on ${\rm Conf}({\cal A, A, B})$ is a positive action. \end{theorem} First we recall the following basic facts. \vskip 1mm 1) The set of conjugacy classes of parabolic subgroups of ${\rm G}$ is in bijection with the subsets of $I$. Let $i\in I$. Denote by ${\cal P}_i$ the space of parabolic subgroups corresponding to $\{i\}$. For any Borel subgroup ${\rm B}$, there is a unique ${\rm P}\in {\cal P}_i$ containing ${\rm B}$. Denote by ${\cal B}_{\rm P}$ the space of Borel subgroups contained in ${\rm P}$. Then ${\cal B}_{\rm P}\stackrel{\sim}{=}\P^1$. It consists of ${\rm B}$ and the Borel subgroups which are of distance $s_i$ from ${\rm B}$. \vskip 1mm 2) Let ${\P}_*:=\P^1-\{y_1, y_2\}$ be the projective line with two points removed. Consider the cross ratio $$ r(z_1,z_2; y_1, y_2)=\frac{(z_1-y_1)(z_2-y_2)}{(z_1-y_2)(z_2-y_1)}, ~~~~\mbox{where } z_1, z_2\in {\P}_*. $$ Since $r(z_1,z_2;y_1, y_2)r(z_2, z_3;y_1, y_2)=r(z_1, z_3; y_1, y_2)$, it gives rise to a ${\Bbb G}_m$-action on $\P_*$ such that $$ \forall c\in {\Bbb G}_m, \forall z\in {\P}_*,~~~~ r(z, c\cdot z; y_1, y_2)=c. $$ Let ${\rm P}\in {\cal P}_i$. Since ${\cal B}_{\rm P}\stackrel{\sim}{=}{\P}^1$, each pair of distinct Borel subgroups contained in ${\rm P}$ gives rise to a rational ${\Bbb G}_m$-action on ${\cal B}_{\rm P}$. \vskip 1mm 3) Let $w=w_1w_2$ be such that $l(w)=l(w_1)+l(w_2)$. For any pair ${\rm B}\stackrel{w}{\rightarrow}{\rm B}'$, there is a unique Borel subgroup ${\rm B}_1$ such that ${\rm B}\stackrel{w_1}{\rightarrow}{\rm B}_1\stackrel{w_2}{\rightarrow}{\rm B}'$.
In particular, if $\{{\rm B}, {\rm B}'\}$ is of distance $w_0$, then each reduced word ${\bf i}=(i_1,\ldots, i_m)$ for $w_0$ gives rise to a unique chain $$ {\rm B}={\rm B}_0\stackrel{s_{i_1}}{\longrightarrow}{\rm B}_1\stackrel{s_{i_2}}{\longrightarrow}\ldots \stackrel{s_{i_m}}{\longrightarrow}{\rm B}_m={\rm B}'. $$ Let ${\rm A}\in {\cal A}$ be a generic decorated flag. Set $a_k:=\chi(u_{{\rm B}_{{k-1}}, {\rm B}_{k}}^{\rm A})$. It is easy to see that $u_{{\rm B}_{{k-1}}, {\rm B}_{k}}^{\rm A}=x_{i_k}(a_k)$. It recovers Lusztig's coordinate: $u_{{\rm B}, {\rm B}'}^{{\rm A}}=x_{i_1}(a_1)\ldots x_{i_m}(a_m)$. \vskip 2mm Let ${\bf x}=\{{\rm A}_1, {\rm A}_2, {\rm B}_3\}\in {\cal A\times A\times B}$ be a generic point. Let $i\in I$. Set ${\rm B}_1'$, ${\rm B}_2'$ such that \begin{equation} \label{13.4.22.955h} {\rm B}_3\stackrel{s_i}{\longrightarrow}{\rm B}_k'\stackrel{s_iw_0}{\longrightarrow}\pi({\rm A}_k),~~~~ k=1,2. \end{equation} Let ${\rm P}\in {\cal P}_i$ contain ${\rm B}_3$. Note that ${\rm B}_1', {\rm B}_2'\in{\cal B}_{\rm P}$. They give rise to a ${\Bbb G}_m$-action on ${\cal B}_{\rm P}$. Denote by $c\cdot {\rm B}_3$ the image of ${\rm B}_3$ under the action of $c\in {\Bbb G}_m$. Set \begin{equation} \label{13.4.22.2108h} e_i^c({\bf x}):=\{{\rm A}_1, {\rm A}_2, c\cdot {\rm B}_3\}. \end{equation} It defines a rational ${\Bbb G}_m$-action on ${\cal A\times A\times B}$. There is a unique $u_1'$ in the stabilizer of ${\rm A}_1$ transporting ${\rm B}_3$ to ${\rm B}_2'$. There is a unique $u_2'$ in the stabilizer of ${\rm A}_2$ transporting ${\rm B}_1'$ to ${\rm B}_3$. Set \begin{equation} \label{13.6.8.12.41h} l_i({\bf x})=\chi_{{\rm A}_1}(u_1'),~~~~~ r_i({\bf x})=\chi_{{\rm A}_2}(u_2'). \end{equation} Since $e_i^\ast$, $l_i$, $r_i$ commute with the ${\rm G}$-diagonal action, we can descend them to ${\rm Conf}({\cal A},{\cal A}, {\cal B}).$ \vskip 3mm Theorem \ref{13.5.4.1211h} is a direct consequence of the following Lemmas. 
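As an aside, the two elementary properties of the cross ratio used in 2) above -- the multiplicativity $r(z_1,z_2;y_1,y_2)\,r(z_2,z_3;y_1,y_2)=r(z_1,z_3;y_1,y_2)$ and the resulting ${\Bbb G}_m$-action -- can be checked by a direct computation. The following snippet (an illustration only, not part of the construction; the function names are ours, and $c\cdot z$ is obtained by solving the defining equation $r(z, c\cdot z; y_1,y_2)=c$) verifies them in exact rational arithmetic.

```python
from fractions import Fraction as F

def r(z1, z2, y1, y2):
    # cross ratio r(z1, z2; y1, y2) = (z1-y1)(z2-y2) / ((z1-y2)(z2-y1))
    return F(z1 - y1) * (z2 - y2) / ((z1 - y2) * (z2 - y1))

def act(c, z, y1, y2):
    # the point c.z on P_* = P^1 - {y1, y2}, determined by r(z, c.z; y1, y2) = c;
    # solving the defining equation gives this explicit Moebius formula
    return (F(y2) * (z - y1) - c * y1 * (z - y2)) / ((z - y1) - c * (z - y2))

y1, y2 = F(0), F(1)
z1, z2, z3 = F(1, 3), F(5, 2), F(-7, 4)
c1, c2 = F(3, 5), F(7, 2)

# multiplicativity of the cross ratio
assert r(z1, z2, y1, y2) * r(z2, z3, y1, y2) == r(z1, z3, y1, y2)
# the defining property of the G_m-action: r(z, c.z; y1, y2) = c
assert r(z1, act(c1, z1, y1, y2), y1, y2) == c1
# it is indeed an action: c2.(c1.z) = (c2*c1).z
assert act(c2, act(c1, z1, y1, y2), y1, y2) == act(c2 * c1, z1, y1, y2)
```
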
\begin{lemma} \label{13.6.9.1548h} The functions $l_i, r_i$ are positive functions. The action $e_i^{\ast}$ is a positive action. \end{lemma} \begin{proof} Let ${\bf i}=(i_1,\ldots, i_m)$ be a reduced word for $w_0$ which starts from $i_1=i$. Let ${x}=({\rm A}_1, {\rm A}_2, {\rm B}_3)$ be a generic configuration. We associate to $x$ two chains of Borel subgroups: \begin{equation} \label{13.4.22.1354h} {\rm B}_3={\rm B}_0'\stackrel{s_{i_1}}{\longrightarrow}{\rm B}_1'\stackrel{s_{i_2}}{\longrightarrow}\ldots \stackrel{s_{i_m}}{\longrightarrow}{\rm B}_{m}'=\pi({\rm A}_2),~~~~~\pi({\rm A}_1)={\rm B}_m''\stackrel{s_{i_m}}{\longrightarrow}\ldots\stackrel{s_{i_{2}}}{\longrightarrow}{\rm B}_1'' \stackrel{s_{i_1}}{\longrightarrow}{\rm B}_0''={\rm B}_3. \end{equation} Set \begin{equation} \label{13.5.5.1122h} h:=h_{{\rm A}_1, {\rm A}_2}\in {\rm H},~~~~ a_k:=\chi(u_{{\rm B}_{k-1}',{\rm B}_{k}'}^{{\rm A}_1}),~b_k:=\chi(u_{{\rm B}_k'', {\rm B}_{k-1}''}^{{\rm A}_2}). \end{equation} Recall the left and right functions ${\cal L}_i$ and ${\cal R}_i$. By definition, \begin{equation} \label{8.13.4.4h} l_i(x)={\cal L}_i(u_{\rm B_3, B_2}^{\rm A_1})=a_1,~~~~~ r_i(x)={\cal R}_i(u_{\rm B_1, B_3}^{\rm A_2})=b_1. \end{equation} The positivity of $l_i$, $r_i$ follows. By definition, we get \begin{equation} \label{13.4.22.1452h} l_i(e_i^c({x}))=cl_i ({x})=ca_1,~~~~ r_i(e_i^c({ x}))=r_i({ x})/c=b_1/c. \end{equation} Recall the birational isomorphism $p:{\rm Conf}({\cal A},{\cal A},{\cal B})\rightarrow{{\rm B}}^{-}$ mapping $({\rm A}_1,{\rm A}_2,{\rm B}_3)$ to $b_{{\rm B}_3}^{{\rm A}_1,{\rm A}_2}.$ Using \eqref{13.4.22.1354h} and \eqref{13.5.5.1122h}, we get a decomposition \begin{equation} \label{8.15.3.0h} p({x})=x_{i_1}(a_1)\ldots x_{i_m}(a_m)h\overline{w}_0x_{i_m}(b_m)\ldots x_{i_1}(b_1). \end{equation} Note that the action $e_i^c $ only changes ${\rm B}_0'$, ${\rm B}_0''$ to $c\cdot {\rm B}_3$ in \eqref{13.4.22.1354h}.
Combining with \eqref{13.4.22.1452h}, we get \begin{equation} \label{13.4.22.1550h} p(e_i^c( x))=x_{i_1}(ca_1)\ldots x_{i_m}(a_m)h\overline{w}_0x_{i_m}(b_m)\ldots x_{i_1}(b_1/c). \end{equation} It multiplies the first term by $c$, divides the last term by $c$, and keeps the rest intact. Thus it defines a positive rational ${\Bbb G}_m$-action on ${\rm B}^-$. Since $p$ is a positive birational isomorphism, the positivity of the action $e_i^\ast $ follows. \end{proof} \begin{lemma} Let $i\in I$. For each generic $ x\in {\rm Conf}({\cal A},{\cal A},{\cal B})$, we have \begin{equation} s_i(x)=e_i^{c_i}(x),~~~~~~\mbox{where } c_i=r_{i}(x)/l_{i}(x). \end{equation} \end{lemma} \begin{proof} Note that $x$ has a representative ${\bf x}=\{{\rm U}, p(x)\cdot {\rm U}, {\rm B}^{-}\}\in {\cal A}\times {\cal A}\times{\cal B}$, where $p(x)$ has a decomposition \eqref{8.15.3.0h}. So $c_{i}=b_1/a_1$. Set $u:=x_{i}(a_1-b_1)$. By \eqref{13.4.22.1550h}, $$ p\big(e_{i}^{c_{i}}(x)\big) ={\rm Ad}_{u^{-1}}(p(x))= x_{i_1}(b_1)\ldots x_{i_m}(a_m)h\overline{w}_0x_{i_m}(b_m)\ldots x_{i_1}(a_1) \in {\rm B}^{-}. $$ Therefore $p(x)\in u\cdot {\rm B}^{-}$. Note that ${\rm B}^{-}\stackrel{s_{i}}{\rightarrow} u\cdot{\rm B}^{-}$. Then $s_{i}({\bf x})=\{{\rm U}, p(x)\cdot {\rm U}, u\cdot {\rm B}^-\}$. Therefore $$ s_{i}(x)=({\rm U}, p(x)\cdot {\rm U}, u\cdot {\rm B}^-)=({\rm U}, {\rm Ad}_{u^{-1}}(p(x))\cdot {\rm U}, {\rm B}^{-})=e_{i}^{c_{i}}(x). $$ \end{proof} There is a natural projection ${\rm Conf}({\cal A}^{n+1}, {\cal B})\rightarrow {\rm Conf}({\cal A, A, B})$ sending $({\rm A}_1,\ldots, {\rm A}_{n+1}, {\rm B})$ to $({\rm A}_1, {\rm A}_{n+1}, {\rm B})$. The Weyl group action on ${\rm Conf}({\cal A, A, B})$ induces an action on ${\rm Conf}({\cal A}^{n+1}, {\cal B})$. The following Corollary is clear. \begin{corollary} \label{13.5.6.953h} The induced Weyl group action on ${\rm Conf}({\cal A}^{n+1}, {\cal B})$ is a positive action. 
\end{corollary} \subsection{Proof of Theorem \ref{13.5.4.11115h}} \label{sec11.1.2} Let us shrink all holes without special points on $S$ into punctures, getting a homotopy equivalent surface denoted again by $S$. Let $D_1^*$ be a punctured disk with one special point on its boundary. Let ${\cal L}_{{\rm G}, D^*_1}$ be the moduli space parametrizing the triples $({\cal L}, {\alpha}, \beta)$ where ${\cal L}$ is a ${\rm G}$-local system on $D^*_1$, $\alpha$ is a flat section of ${\cal L}_{\cal A}$ restricted to a neighborhood of the special point, and $\beta$ is a flat section of ${\cal L}_{\cal B}$ restricted to the loop around the puncture. Taking a triangle with two ${\rm A}$-vertices and one ${\rm B}$-vertex, see Fig \ref{tpm16}, and gluing it as shown, we obtain a birational isomorphism: \begin{equation} \label{biriso} {\rm Conf}({\cal A},{\cal A}, {\cal B}) \stackrel{\sim}{=} {\cal L}_{{\rm G}, D^*_1}. \end{equation} \begin{figure}[ht] \centerline{\epsfbox{tpm16.eps}} \caption{Birational isomorphism of moduli spaces ${\rm Conf}({\cal A},{\cal A}, {\cal B}) \sim {\cal L}_{{\rm G}, D^*_1}$.} \label{tpm16} \end{figure} Let us elaborate on \eqref{biriso}. Let $\{{\rm A}_1, {\rm A}_2, {\rm B}\}$ be a generic triple. Let $b\in {\rm G}$ be the unique element such that $b\cdot\{{\rm B}, {\rm A}_1\} = \{{\rm B}, {\rm A}_2\}$. Then $b \in {\rm B}$. We glue the sides $\{{\rm B}, {\rm A}_1\}$ and $\{{\rm B}, {\rm A}_2\}$, matching the flags. We get a disc with a special point $s$ and a puncture $p$. There is a ${\rm G}$-local system on the punctured disc, trivialized over the segment connecting the points $s$ and $p$ (the dashed segment in the disc on Fig \ref{tpm16}), with the clockwise monodromy $b$. It has an invariant flag at the puncture -- the flag ${\rm B}$. It also has a decorated flag at the special point $s$. Another configuration $\{g{\rm A}_1, g{\rm A}_2, g{\rm B}\}$ provides an isomorphic object.
Thus it provides a rational map ${\rm Conf}({\cal A}, {\cal A}, {\cal B})\to {\cal L}_{{\rm G}, D_1^*}$. Its inverse map is obtained by cutting $D_1^*$ along this dashed segment. Note that there is a natural projection ${\cal L}_{{\rm G}, D_1^*}\rightarrow {\cal X}_{{\rm G}, D_1^*}$. Composing with the isomorphism \eqref{biriso}, we obtain a positive rational dominant map $p: {\rm Conf}({\cal A}, {\cal A}, {\cal B})\rightarrow {\cal X}_{{\rm G}, D_1^*}$. By definition, it commutes with the Weyl group actions on both spaces. It is easy to show that a rational function $f$ of ${\cal X}_{{\rm G}, D_1^*}$ is positive if and only if the function $p^*(f)$ of ${\rm Conf}({\cal A}, {\cal A}, {\cal B})$ is positive. By Theorem \ref{13.5.4.1211h}, the $W$-action on ${\cal X}_{{\rm G}, D^*_1}$ is positive. Let $D_n^*$ be a punctured disk with $n$ special points on its boundary. Similarly the $W$-action on ${\cal X}_{{\rm G}, D^*_n}$ is positive. For arbitrary $S$, we take a triangulation $T$ of $S$ and consider the triangles with the vertex at $p$. Go, if needed, to a finite cover of $S$ to make sure that the triangles near $p$ form a polygon, providing an ideal triangulation of a punctured disc $D^*_n$. The action of the Weyl group affects only this polygon, and so it is positive. Theorem \ref{13.5.4.11115h} is proved. \subsection{Canonical bases in tensor products and ${\rm Conf}({\cal A}^{n}, {\cal B})$} \label{tensor} Recall that a collection of dominant coweights $\underline {\lambda}= (\lambda_1, ..., \lambda_n)$ gives rise to a convolution variety ${\rm Gr}_{\underline {\lambda}}\subset {\rm Gr}^n$. It is open and smooth. Its dimension is calculated inductively: \begin{equation} \label{dimgr} {\rm dim}~{\rm Gr}_{\underline {\lambda}} = 2{\rm ht}(\underline {\lambda}):= 2\langle \rho, \lambda_1 + \ldots + \lambda_n\rangle. \end{equation} The subvarieties ${\rm Gr}_{\underline {\lambda}}$ form a stratification ${\cal S}$ of ${\rm Gr}^n$. 
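For example, for ${\rm G}$ of type $A_1$, with $\alpha^\vee$ the simple coroot, one has $\langle \rho, \alpha^\vee\rangle =1$, so for $\underline{\lambda}=(\alpha^\vee, \ldots, \alpha^\vee)$ ($n$ copies) formula (\ref{dimgr}) gives $$ {\rm dim}~{\rm Gr}_{\underline {\lambda}} = 2\langle \rho, n\alpha^\vee\rangle = 2n. $$ In particular, for $n=1$ the stratum ${\rm Gr}_{\alpha^\vee}$ is two-dimensional, and its closure is obtained by adding the single point $t^0$.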
Let ${\rm IC}_{\underline {\lambda}}$ be the ${\rm IC}$-sheaf of $\overline {{\rm Gr}_{\underline {\lambda}}}$. By the geometric Satake correspondence, \begin{equation} \label{gSC} {\rm H}^*({\rm IC}_{\underline {\lambda}}) = V_{\underline{\lambda}}:= V_{\lambda_1} \otimes \ldots \otimes V_{\lambda_n}. \end{equation} Let ${\rm pr}_n:{\rm Gr}^n \to {\rm Gr}$ be the projection onto the last factor. Recall the point $t^\mu \in{\rm Gr}$. Set $$ {\rm S}_\mu:= {\rm pr}_n^{-1}({\rm U}({\cal K})t^\mu) \subset {\rm Gr}^n, ~~~~ {\rm T}_\mu:= {\rm pr}_n^{-1}({\rm U}^-({\cal K})t^\mu) \subset {\rm Gr}^n. $$ The sum of positive coroots is a cocharacter $2\rho^{\vee}: {\Bbb G}_m \to {\rm H}$. It provides an action of the group ${\Bbb G}_m$ on ${\rm Gr}^n$ given by the action on the last factor. The subvarieties ${\rm S}_\mu$ and ${\rm T}_\mu$ are the attracting and repelling subvarieties for this action. Set $$ {\rm Gr}^\mu_{\underline {\lambda}}:= {\rm Gr}_{\underline {\lambda}} \cap {\rm S}_\mu. $$ \begin{lemma} If ${\rm Gr}^\mu_{\underline {\lambda}}$ is non-empty, then it is a subvariety of pure dimension \begin{equation} \label{dimgrs} {\rm dim }~{\rm Gr}^\mu_{\underline {\lambda}} = {\rm ht}(\underline {\lambda};\mu):= \langle \rho, \lambda_1 + \ldots + \lambda_n +\mu\rangle. \end{equation} \end{lemma} Denote by ${\rm Irr}(X)$ the set of top-dimensional components of a variety $X$, and by ${\mathbb Q}[{\rm Irr}(X)]$ the vector space with the basis parametrised by the set ${\rm Irr}(X)$. \begin{theorem} \label{mmvvth} There are canonical isomorphisms $$ {\rm H}^*({\rm Gr}^n, {\rm IC}_{\underline {\lambda}}) =\oplus_{\mu}{\rm H}^{2{\rm ht}(\mu)}_c({\rm S}_\mu, {\rm IC}_{\underline {\lambda}}) = \oplus_{\mu}{\mathbb Q}[{\rm Irr}(\overline {{\rm Gr}^\mu_{\underline {\lambda}}})]. $$ \end{theorem} \begin{proof} Theorem \ref{mmvvth} for $n=1$ is proved in \cite[Section 3]{MV}. The proof for arbitrary $n$ follows the same line.
For convenience of the reader we provide a complete proof. Let $m: {\mathbb C}^* \times X \to X$ be a map defining an action of the group ${\mathbb C}^*$ on $X$. Let ${\cal D}(X)$ be the bounded derived category of constructible sheaves on $X$. An object ${\cal F} \in {\cal D}(X)$ is {\it weakly ${\mathbb C}^*$-equivariant} if $m^*{\cal F} = L\boxtimes {\cal F}$ for some locally constant sheaf $L$ on ${\mathbb C}^*$. Recall the action of ${\Bbb G}_m$ on ${\rm Gr}^n$ defined above. Denote by ${\rm P}_{\cal S}({\rm Gr}^n)$ the category of weakly ${\mathbb C}^*$-equivariant perverse sheaves on ${\rm Gr}^n$ which are constructible with respect to the stratification ${\cal S}$. \begin{lemma} The sheaf ${\rm IC}_{\underline{\lambda}}$ is locally constant along the stratification ${\cal S}$. It belongs to the category ${\rm P}_{\cal S}({\rm Gr}^n)$. \end{lemma} \begin{proof} Given a subgroup ${\rm G}'\subset {\rm G}$, denote by ${\rm G}'_{[k,n]}\subset {\rm G}^n$ the subgroup of elements $(e, ..., e, g, ..., g)$, with the last $(n-k+1)$ entries equal to $g \in {\rm G}'$. Denote by ${\rm G}(L)$ the subgroup stabilising a point $L\in {\rm Gr}$. The group ${\rm G}(L)_{[k,n]}$ preserves the category ${\rm P}_{\cal S}({\rm Gr}^n)$. Take two collections $(L_1, ..., L_n), (M_1, ..., M_n)\in {\rm Gr}^n$, with $L_1=M_1=[1]$ and in the same stratum. We can move $(L_1, ..., L_n)$ by an element of ${\rm G}(L_1)_{[1,n]}$, getting $(M_1, M_2, L_3', ..., L'_n)$. Then we move it by an element of ${\rm G}(M_2)_{[2,n]}$, getting $(M_1, M_2, M_3, ..., L''_n)$, and so on, using the subgroups ${\rm G}(M_k)_{[k,n]}$ for $k=3, 4, \ldots, n-1$. In the last step we get $(M_1, ..., M_n)$. The ${\mathbb C}^*$-equivariance is evident. \end{proof} \begin{proposition} For all ${\cal P} \in {\rm P}_{\cal S}({\rm Gr}^n)$ we have a canonical isomorphism \begin{equation} \label{hyploc} {\rm H}^k_c({\rm S}_\mu, {\cal P}) \stackrel{\sim}{\longrightarrow} {\rm H}^k_{{\rm T}_\mu}({\rm Gr}^n, {\cal P}).
\end{equation} Both sides vanish if $k \not = 2{\rm ht}(\mu)$. The functors $F_\mu:= {\rm H}_c^{2{\rm ht}(\mu)}(S_\mu, -) : {\rm P}_{\cal S}({\rm Gr}^n) \longrightarrow {\rm Vect}$ are exact. \end{proposition} \begin{proof} Isomorphism (\ref{hyploc}) follows from the hyperbolic localisation theorem of Braden \cite{Br}. Let us briefly recall how it works. Let $X$ be a normal complex variety on which the group ${\mathbb C}^*$ acts. Let $F$ be the fixed point variety. It is a union of components $F_1, ..., F_k$. Consider the attracting and repelling subvarieties $$ X_k^+ = \{x\in X~|~ {\rm lim}_{t\to 0}t\cdot x \in F_k\}, ~~~~ X_k^- = \{x\in X~|~ {\rm lim}_{t\to \infty}t\cdot x \in F_k\}. $$ Let $X^+$ (resp. $X^-$) be the disjoint union of all the $X_k^+$ (resp. $X_k^-$). There are projections $$ \pi^{\pm}: X^{\pm} \to F, ~~~~ \pi^+(x)={\rm lim}_{t\to 0}t\cdot x , ~~~\pi^-(x)={\rm lim}_{t\to \infty}t\cdot x. $$ Let $g^{\pm}: X^{\pm} \hookrightarrow X$ be the natural inclusions. Given an object ${\cal F} \in {\cal D}(X)$, define the hyperbolic localisation functors $$ {\cal F}^{!*}:= (\pi^+)_!(g^+)^*{\cal F}, ~~~~{\cal F}^{*!}:= (\pi^-)_*(g^-)^!{\cal F}. $$ Combining Theorem 1 and Section 3 of \cite{Br}, we have the following result, which implies (\ref{hyploc}). \begin{proposition} If ${\cal F}$ is weakly ${\mathbb C}^*$-equivariant, the natural map ${\cal F}^{!*}\to {\cal F}^{*!}$ is an isomorphism. \end{proposition} Let us prove the vanishing. One has ${\rm H}^k_c({\rm Gr}^\mu_{\underline {\lambda}}, {\mathbb Q}) =0$ for $k> 2{\rm dim}{\rm Gr}^\mu_{\underline {\lambda}} = 2{\rm ht}(\underline {\lambda}; \mu)$. Due to perversity, the restriction of any $ {\cal P} \in {\rm P}_{\cal S}({\rm Gr}^n)$ to ${\rm Gr}_{\underline {\lambda}}$ lies in degrees $\leq - {\rm dim}{\rm Gr}_{\underline {\lambda}} = -2{\rm ht}(\underline {\lambda})$. So \begin{equation} \label{esti} {\rm H}^k_c({\rm Gr}^\mu_{\underline {\lambda}}, {\cal P}) =0 ~~~\mbox{if $k>2{\rm ht}(\mu)$}.
\end{equation} Although ${\rm S}_\mu$ is infinite-dimensional, we can slice it by its intersections with the strata ${\rm Gr}_{\underline {\lambda}}$. Since the estimate (\ref{esti}) on each stratum does not depend on $\underline {\lambda}$, a devissage using exact triangles $j_!j^* {\cal A} \to{\cal A} \to i_!i^* {\cal A}$ shows that $$ {\rm H}^k_c({\rm S}_\mu, {\cal P}) =0 ~~~\mbox{if $k>2{\rm ht}(\mu)$}. $$ Applying the duality and using the fact that $ \ast{\cal P} = {\cal P}$, we get the dual estimate $$ {\rm H}^k_{{\rm T}_{\mu}}({\rm Gr}^n, {\cal P}) =0 ~~~\mbox{if $k<2{\rm ht}(\mu)$}. $$ Combining with the isomorphism (\ref{hyploc}), we get the proof. The last claim is then obvious. \end{proof} \begin{proposition} \label{mainprop} We have a natural equivalence of functors $$ {\rm H}^* \stackrel{\sim}{=} \oplus_{\mu \in {\rm P}}{\rm H}_c^{2{\rm ht}(\mu)}(S_\mu, -) : {\rm P}_{\cal S}({\rm Gr}^n) \longrightarrow {\rm Vect}. $$ \end{proposition} \begin{proof} The proof of Theorem 3.6 in \cite{MV} works in our case. Namely, the two filtrations of ${\rm Gr}^n$ by the closures of ${\rm S}_\mu$ and ${\rm T}_\mu$ give rise to two filtrations of ${\rm H}^*$, given by the kernels of ${\rm H}^* \to {\rm H}^*_c(\overline {\rm S}_\mu, -)$ and the images of ${\rm H}^*_{\overline {\rm T}_\mu}({\rm Gr}^n, -) \to {\rm H}^*$. The vanishing implies ${\rm H}^{2{\rm ht}(\mu)}_c(\overline {\rm S}_\mu, -) = {\rm H}^{2{\rm ht}(\mu)}_c({\rm S}_\mu, -)$ and ${\rm H}^{2{\rm ht}(\mu)}_{\overline {\rm T}_\mu}({\rm Gr}^n, -) = {\rm H}^{2{\rm ht}(\mu)}_{{\rm T}_\mu}({\rm Gr}^n, -)$, and the composition ${\rm H}^{2{\rm ht}(\mu)}_{{\rm T}_\mu}({\rm Gr}^n, -) \to {\rm H}^{2{\rm ht}(\mu)} \to {\rm H}^{2{\rm ht}(\mu)}_c({\rm S}_\mu, -)$ is an isomorphism. So the two filtrations split each other. \end{proof} \begin{corollary} The global cohomology functor ${\rm H}^*: {\rm P}_{\cal S}({\rm Gr}^n) \longrightarrow {\rm Vect}$ is faithful and exact.
\end{corollary} Denote by ${\rm H}^p_{\rm per}{\cal F}$ the cohomology of an ${\cal F} \in D^b_{\cal S}({\rm Gr}^n)$ for the perverse $t$-structure. Let $j: {\rm Gr}_{\underline{\lambda}}\hookrightarrow \overline {{\rm Gr}_{\underline{\lambda}}}$ be the natural embedding, ${\cal J}_!(\underline {\lambda}, {\mathbb Q}):= {\rm H}^0_{\rm per}(j_!{\mathbb Q}[{\rm dim}{\rm Gr}_{\underline{\lambda}}])$, and ${\cal J}_*(\underline {\lambda}, {\mathbb Q}):= {\rm H}^0_{\rm per}(j_*{\mathbb Q}[{\rm dim}{\rm Gr}_{\underline{\lambda}}])$. The following Lemma is a generalisation of Lemma 7.1 of \cite{MV}. \begin{lemma} The category ${\rm P}_{\cal S}({\rm Gr}^n)$ is semi-simple. The sheaves ${\cal J}_!(\underline {\lambda}, {\mathbb Q})$, ${\cal J}_*(\underline {\lambda}, {\mathbb Q})$, and ${\cal J}_{!*}(\underline {\lambda}, {\mathbb Q})$ are isomorphic. \end{lemma} \begin{proof} Let us first prove the parity vanishing for the stalks of the sheaf ${\cal J}_{!*}(\underline {\lambda}, {\mathbb Q})$: the stalks can have non-zero cohomology only in even degrees. For $n=1$ it is proved in \cite{L4}. It can also be proved by using the Bott-Samelson resolution of the Schubert cells in the affine (i.e. Kac-Moody) case, as was explained to us by A. Braverman. Let ${\cal F}$ be a Kac-Moody flag variety. Take an element $w=w_{1} ... w_{n}$ of the affine Weyl group such that $l(w) = l(w_{1}) + ... + l(w_{n})$. Denote by ${\cal F}_{w_1, ..., w_n}$ the variety parametrising flags $(F_1=[1], F_2, ..., F_n)$ such that the pair $(F_i, F_{i+1})$ is in the incidence relation $w_i$. Choose reduced decompositions $[w_{1}], ... , [w_{n}]$ of the elements $w_{1}, ... , w_{n}$. Their product is a reduced decomposition $[w]$ of $w$. It gives rise to the Bott-Samelson variety $X_{[w]}$. By its very definition, it is a tower of fibrations $$ X([w_{1}], ... , [w_{n}]) \longrightarrow X([w_{1}], ... , [w_{n-1}]) \longrightarrow \ldots \longrightarrow X([w_{1}]).
$$ The Bott-Samelson resolution of the affine Schubert cell ${\rm Gr}_\lambda$ is a smooth projective variety $X_\lambda$ with a map $\beta_\lambda: X_\lambda \to {\rm Gr}_\lambda$ which is $1:1$ over the open stratum, and which, according to \cite{Gau1}, \cite{Gau2}, has the following property. For each of the strata ${\rm Gr}_\mu\subset {\rm Gr}_\lambda$, there exists a point $p_\mu\in {\rm Gr}_\mu$ such that the fiber $\beta_\lambda^{-1}(p_\mu)$ of the Bott-Samelson resolution has a cellular decomposition with the cells being complex vector spaces. Therefore the stalk of the push forward $\beta_{\lambda\ast}{\mathbb Q}_{X_\lambda}$ of the constant sheaf on $X_\lambda$ at the point $p_\mu$ satisfies the parity vanishing. By the decomposition theorem \cite{BBD}, the sheaf ${\rm IC}_\lambda$ is a direct summand of the push forward $\beta_{\lambda\ast}{\mathbb Q}_{X_\lambda}$ of the constant sheaf on $X_\lambda$. Indeed, the latter is a direct sum of shifts of perverse sheaves, and it is the constant sheaf over the open stratum. Therefore the stalk of the sheaf ${\rm IC}_\lambda$ at the point $p_\mu$ satisfies the parity vanishing. Since the cohomology of ${\rm IC}_\lambda$ is locally constant over each stratum ${\rm Gr}_\mu$, we get the parity vanishing. The general case of ${\rm Gr}_{\underline{\lambda}}$ is treated very similarly to the case of ${\rm Gr}_{{\lambda}}$. The rest is pretty standard, and goes as follows. The strata ${\rm Gr}_{\underline{\lambda}}$ are simply connected: this is well known for $n=1$, and the stratum ${\rm Gr}_{\underline{\lambda}}$ is fibered over ${\rm Gr}_{\underline{\lambda'}}$ with the fiber ${\rm Gr}_{{\lambda_n}}$, where $\underline{\lambda} = (\underline{\lambda'},\lambda_n)$. Since the strata are even dimensional over ${\mathbb R}$, this plus the parity vanishing implies that there are no extensions between the simple objects in ${\rm P}_{\cal S}({\rm Gr}^n)$.
Indeed, by devissage this claim reduces to the calculation of extensions between constant sheaves concentrated on two open strata. Thus there are no extensions in the category ${\rm P}_{\cal S}({\rm Gr}^n)$, i.e. it is semi-simple. Let us show now that ${\cal J}_!(\underline {\lambda}, {\mathbb Q}) = {\cal J}_{!*}(\underline {\lambda}, {\mathbb Q})$. Since ${\rm H}^p_{\rm per}(j_!{\mathbb Q}_{{\rm Gr}_{\underline{\lambda}}})=0$ for $p>0$, there is a map $j_!{\mathbb Q}_{{\rm Gr}_{\underline{\lambda}}}\to {\rm H}^0_{\rm per}(j_!{\mathbb Q}_{{\rm Gr}_{\underline{\lambda}}}) = {\cal J}_{!}(\underline {\lambda}, {\mathbb Q})$. If ${\cal J}_!(\underline {\lambda}, {\mathbb Q}) \not = {\cal J}_{!*}(\underline {\lambda}, {\mathbb Q})$, since the category ${\rm P}_{\cal S}({\rm Gr}^n)$ is semi-simple, there is a non-zero direct summand ${\cal B}$ of ${\cal J}_!(\underline {\lambda}, {\mathbb Q})$ supported on a lower stratum. Composing these two maps, we get a non-zero map $j_!{\mathbb Q}_{{\rm Gr}_{\underline{\lambda}}}\to {\cal B}$. On the other hand, given a space $X$ and complexes of sheaves ${\cal A}$ and ${\cal B}$ supported on disjoint subsets $A$ and $B$ respectively, one has ${\rm Hom}(j_!{\cal A}, {\cal B})=0$, where $j:A \hookrightarrow X$. This is a contradiction. The statement about ${\cal J}_\ast$ follows by duality. \end{proof} \begin{lemma} \label{gen3.5} There are canonical isomorphisms $$ F_{\mu}[{\cal J}_!(\underline {\lambda}, {\mathbb Q})] = {\mathbb Q}[{\rm Irr}(\overline {{\rm Gr}^\mu_{\underline {\lambda}}})] = F_{\mu}[{\cal J}_*(\underline {\lambda}, {\mathbb Q})]. $$ \end{lemma} \begin{proof} We prove the first claim. The second is similar. We follow closely the proof of Proposition 3.10 in \cite{MV}. Set ${\cal F}:= {\cal J}_!(\underline {\lambda}, {\mathbb Q})$. Let ${\rm Gr}_{\underline{\eta}}$ be a stratum in the closure of ${\rm Gr}_{\underline{\lambda}}$.
Let $i_{\underline{\eta}}: {\rm Gr}_{\underline{\eta}} \hookrightarrow \overline{{\rm Gr}_{\underline{\lambda}}}$ be the natural embedding. Then $i_{\underline{\eta}}^*{\cal F} \in D^{\leq -{\rm dim}{\rm Gr}_{\underline{\eta}} -2}({\rm Gr}_{\underline{\eta}})$. Indeed, we use $i_{\underline{\eta}}^*j_!{\mathbb Q}=0$, and ${\rm H}^p_{\rm per}j_!{\mathbb Q}[{\rm dim}{\rm Gr}_{\underline{\lambda}}]=0$ for $p>0$ and apply $i_{\underline{\eta}}^*$ to the exact triangle $$ \longrightarrow {\tau}_{\rm per}^{\leq -1}(j_!{\mathbb Q}[{\rm dim}{\rm Gr}_{\underline{\lambda}}]) \longrightarrow j_!{\mathbb Q}[{\rm dim}{\rm Gr}_{\underline{\lambda}}] \longrightarrow {\rm H}_{\rm per}^{0}(j_!{\mathbb Q}[{\rm dim}{\rm Gr}_{\underline{\lambda}}]) \longrightarrow \ldots . $$ Due to the dimension counts (\ref{dimgr}) and (\ref{dimgrs}), we have ${\rm H}^k_c({\rm Gr}_{\underline{\eta}} \cap {\rm S}_\mu, {\cal F})=0$ if $k> 2{\rm ht}(\mu)-2$. Thus the devissage associated to the filtration of ${\rm Gr}^n$ by the ${\rm Gr}_{\underline{\eta}}$ shows that there is no contribution from the lower strata ${\rm Gr}_{\underline{\eta}}$ to ${\rm H}^{2{\rm ht}(\mu)}_c$, i.e. ${\rm H}^{2{\rm ht}(\mu)}_c({\rm S}_\mu, {\cal F}) = {\rm H}^{2{\rm ht}(\mu)}_c({\rm Gr}_{\underline{\lambda}} \cap {\rm S_\mu}, {\cal F})$. Now we can conclude: $$ {\rm H}^{2{\rm ht}(\mu)}_c({\rm Gr}_{\underline{\lambda}}^\mu, {\cal F}) = {\rm H}^{2{\rm ht}(\mu) +2{\rm ht}(\underline{\lambda})}_c({\rm Gr}_{\underline{\lambda}}^\mu, {\mathbb Q}) = {\rm H}_c^{2{\rm dim}({\rm Gr}^\mu_{\underline{\lambda}})}({\rm Gr}^\mu_{\underline{\lambda}}, {\mathbb Q}). $$ The last cohomology group has a basis given by the top dimensional components of ${\rm Gr}^\mu_{\underline{\lambda}}$. \end{proof} Lemma \ref{gen3.5} implies that there is a canonical isomorphism $ {\rm H}^{2{\rm ht}(\mu)}_c({\rm S}_\mu, {\rm IC}_{\underline {\lambda}}) = {\mathbb Q}[{\rm Irr}(\overline {{\rm Gr}^\mu_{\underline {\lambda}}})].
$ Combined with Proposition \ref{mainprop}, we arrive at Theorem \ref{mmvvth}. \end{proof} \paragraph{Parametrisation of a canonical basis.} Since the group ${{\rm B}}({\cal O})$ is connected, the projection $$ p: {\rm Gr}_{\underline{\lambda}}^{\mu} \longrightarrow {\rm B}({\cal O})\backslash {\rm Gr}_{\underline{\lambda}}^{\mu} = {\rm Conf}({\rm Gr}^{n+1}, {\cal B})^\mu_{\underline \lambda} $$ identifies the top components. So Theorem \ref{MVn+1bbb} shows that the cycles $p^{-1}({\cal M}^\circ_l)$, $l \in {\bf B}^\mu_{\underline {\lambda}}$, see (\ref{MVn+1bb}), are the top components of ${\rm Gr}_{\underline{\lambda}}^{\mu}$. Theorem \ref{mmvvth} plus (\ref{gSC}) implies that they give rise to classes $[p^{-1}({\cal M}^\circ_l)] \in V_{\underline{\lambda}}$. Moreover, $\mu$ is the weight of the class in $V_{\underline{\lambda}}$. So we get the following result. \begin{theorem} \label{tensorproductbasis} The set ${\bf B}^\mu_{\underline {\lambda}}$ parametrises a canonical basis in the weight $\mu$ part $V^{(\mu)}_{\underline{\lambda}}$ of the representation $V_{\lambda_1} \otimes \ldots \otimes V_{\lambda_n}$ of ${\rm G^L}$. This basis is given by the classes $[p^{-1}({\cal M}_l)]$, $l \in {\bf B}^\mu_{\underline {\lambda}}$. \end{theorem} \section{Configurations and generalized Mirkovi\'{c}-Vilonen cycles}\label{sec11} \subsection{Proof of Theorem \ref{kth}} \label{sec12.1.1} In this section we use extensively the notation from Section \ref{sec5.2}, such as $u_{{\rm B}_1,{\rm B}_3}^{{\rm A}_2}$, $r_{{\rm B}_3}^{{\rm B}_1,{\rm A}_2}\in {\rm U}^-$. We identify the subset ${\bf A}_{\nu}$ in Theorem \ref{kth} with the subset ${\bf A}_{\nu}\subset {\rm U}_{\chi}^+({\mathbb Z}^t)$ in \eqref{8.23.10.22h} by tropicalizing \begin{equation} \label{12.12.30.1hh} \alpha: {\rm Conf}({\cal B}, {\cal A},{\cal B})\stackrel{\sim}{\longrightarrow} {{\rm U}}, ~~({\rm B}_1, {\rm A}_2, {\rm B}_3)\longmapsto u_{{\rm B}_1,{\rm B}_3}^{{\rm A}_2}.
\end{equation} Thanks to identity 4 of Lemma \ref{12.12.11.h}, the index $\nu$ in both definitions matches. \begin{proof}[Proof of Theorem \ref{kth}] 2). Let $l\in {\bf A}_\nu$. Let $x=({\rm B}_1, {\rm A}_2, {\rm B}_3)\in {\cal C}_{l}^{\circ}$. By Lemma \ref{12.12.11.h}, $r_{{\rm B}_3}^{{\rm B}_1, {\rm A}_2}=\eta(u_{{\rm B}_1, {\rm B}_3}^{{\rm A}_2}).$ Recall $\kappa_{\rm Kam}$ in \eqref{kappa.kam}. Recall $i_s$ in \eqref{10.9.12.2}. By Lemma \ref{12.15.kappa.j}, we get \begin{equation} \label{13.1.8.1h} \kappa (x)=({\rm B}, [r_{{\rm B}_3}^{{\rm B}_1,{\rm A}_2}], {\rm B}^{-})=({\rm B}, \kappa_{\rm Kam}(u_{{\rm B}_1, {\rm B}_3}^{{\rm A}_2}), {\rm B}^{-})=i_s(\kappa_{\rm Kam}(\alpha(x))). \end{equation} Recall ${\rm MV}_l$ in \eqref{thm.kam.a.l}. Then ${\cal M}_l=i_s({\rm MV}_l).$ Thus 2) is a reformulation of Theorem \ref{thm.kam}. \vskip 2mm 1). Recall the maps \begin{equation} p_i: {\rm Conf}({\cal A, A, B})\longrightarrow {\rm U}, ~~~({\rm A}_1,{\rm A}_2,{\rm B}_3)\longmapsto u_{{\rm B}_{i+2},{\rm B}_{i+1}}^{{\rm A}_{i}}, ~~~i=1,2. \end{equation} Recall the map $\tau$ defined by \eqref{13.1.26.337h}: \begin{equation} \label{h13.1.16.12h} \tau: {\rm Conf}({\cal A},{\cal A}, {\cal B})({\cal K})\longrightarrow {\rm Gr},~~~({\rm A}_1, {\rm A}_2, {\rm B}_3)\longmapsto [b_{{\rm B}_3}^{{\rm A}_1,{\rm A}_2}]. \end{equation} Note that $p_2^{t}$ induces a bijection from ${\bf P}_{\lambda}^{\mu}$ to ${\bf A}_{\lambda-\mu}$. The MV cycles of coweight $(\lambda-\mu,0)$ are $$ \overline{\kappa_{\rm Kam} \circ p_2({\cal C}_{l}^{\circ})}=\overline{\kappa_{\rm Kam}({\cal C}_{p_2^t(l)}^{\circ})}={\rm MV}_{p_2^t(l)},~~~~l\in {\bf P}_{\lambda}^{\mu}. $$ Let $x=({\rm A}_1,{\rm A}_2, {\rm B}_3)\in {\cal C}_{l}^{\circ}$. Note that $$ \tau(x)=[b_{{\rm B}_3}^{{\rm A}_1,{\rm A}_2}]=[\mu_{{\rm B}_3}^{{\rm A}_1,{\rm A}_2}r_{{\rm B}_3}^{{\rm B}_2, {\rm A}_1}]=\mu(x)\cdot \kappa_{\rm Kam}(p_2(x)),~~~\mbox{where } [\mu(x)]=[\mu_{{\rm B}_3}^{{\rm A}_1,{\rm A}_2}]=t^{\mu}.
$$ We get $\overline{\tau({\cal C}_{l}^{\circ})}=t^{\mu}\cdot {\rm MV}_{p_2^t(l)}$. These are precisely the MV cycles of coweight $(\lambda,\mu)$. Recall the isomorphism $i$ in \eqref{10.9.12.2a}. Clearly ${\cal M}_l=i\big(\overline{\tau({\cal C}_l^{\circ})}\big)$. Thus 1) is proved. \vskip 2mm 3). The set ${\bf B}_{\lambda}^{\mu}$ is a subset of ${\bf P}_{\lambda}^{\mu}$ such that $p_1^t({\bf B}_{\lambda}^{\mu})\subset {\rm U}_{\chi}^+({\mathbb Z}^t)$. By Lemma \ref{9.21.17.56h}, ${\bf B}_{\lambda}^{\mu}$ is empty unless $\lambda\in {\rm P}^+$. So we assume $\lambda\in {\rm P}^+$. Let $l\in {\bf P}_{\lambda}^{\mu}$. Let $x=({\rm A}_1,{\rm A}_2,{\rm B}_3)\in {\cal C}_{l}^\circ$. By Lemma \ref{3.23.12.1aa}, \begin{equation} \tau(x)=[b_{{\rm B}_3}^{{\rm A}_1,{\rm A}_2}]=[u_{{\rm B}_3,{\rm B}_2}^{{\rm A}_1}h_{{\rm A}_1,{\rm A}_2}\overline{w}_0u_{{\rm B}_1,{\rm B}_3}^{{\rm A}_2}]=p_1(x)\cdot t^{\lambda}. \end{equation} The last identity is due to $p_2^t(l)\in {\rm U}_{\chi}^+({\mathbb Z}^t)$ \big(hence $u_{{\rm B}_1,{\rm B}_3}^{{\rm A}_2} \in {\rm U}({\cal O})$\big). By Lemma \ref{13.1.8.53h}, $ \tau(x)\in \overline{{\rm Gr}_{\lambda}}$ if and only if $p_1^t(l)\in {\rm U}^{+}_{\chi}({\mathbb Z}^t)$. Therefore \begin{equation} \label{13.1.22.168h} \overline{\tau({\cal C}_{l}^\circ)} \subset \overline{{\rm Gr}_{\lambda}} \Longleftrightarrow p_1^t(l)\in {\rm U}_{\chi}^+({\mathbb Z}^t) \Longleftrightarrow l\in {\bf B}_{\lambda}^{\mu}. \end{equation} The rest follows from Lemma \ref{12.15.12h35m}. \end{proof} \subsection{Proof of Theorems \ref{MVlm1a}, \ref{MVlm1sa}, \ref{MVn+1bbb}} By Theorem \ref{kth}, we have \begin{equation} \label{linlang} {\rm S}_{w_0}^{\mu}\cap {\rm S}_e^{\lambda}=\bigcup_{l\in {\bf P}_{\lambda}^{\mu}}{\rm N}_l,~~~~~ {\rm S}_{w_0}^{\mu}\cap{\rm Gr}_{\lambda}=\bigcup_{l\in{\bf B}_{\lambda}^{\mu}} {\rm M}_{l}. \end{equation} Here ${\rm N}_l$ (resp. ${\rm M}_l$) are the components containing $\tau({\cal C}_l^{\circ})$ as dense subsets.
They are all of dimension $\langle \rho, \lambda-\mu\rangle$. The closures $\overline{{\rm N}_l}=\overline{\tau({\cal C}_l^{\circ})}$ are MV cycles. \paragraph{Proof of Theorem \ref{MVlm1a}.} Scissoring the convex $(n+2)$-gon along diagonals emanating from the vertex labelled by $n+2$, see Fig \ref{cut}, we get a positive birational isomorphism between ${\rm Conf}({\cal A}^{n+1}, {\cal B})$ and $\big({\rm Conf}({\cal A}^2, {\cal B})\big)^n$. Its tropicalization provides a decomposition \begin{equation} \label{13.2.1.7.01h} {\bf P}_{\lambda; \underline{\lambda}}^{\mu}=\bigsqcup_{\mu_1+\ldots+\mu_n=\mu} {\bf P}_{\lambda}^{\mu_1}\times{\bf B}_{\lambda_2}^{\mu_2} \times \ldots \times {\bf B}_{\lambda_n}^{\mu_n},~~~~\underline{\lambda}=(\lambda_2,\ldots, \lambda_n)\in ({\rm P}^{+})^{n-1}. \end{equation} Let $l=(l_1,\ldots, l_n)\in {\bf P}_{\lambda; \underline{\lambda}}^{\mu}$. We construct an irreducible subset $$ {\rm C}_l:=\{([b_1], [b_1b_2],\ldots, [b_1b_2\ldots b_n])\in {\rm Gr}^n~|~b_i\in {\rm B}^{-}({\cal K}), ~[b_1]\in {\rm N}_{l_1},~[b_i]\in {\rm M}_{{l}_i},~i\in [2,n]\}. $$ By induction, ${\rm C}_l$ is of dimension $\langle \rho, \lambda+\lambda_2+\ldots+\lambda_n-\mu\rangle.$ \begin{lemma} \label{13.3.16.114h} Recall the subvariety ${\rm Gr}_{\lambda, \underline{\lambda}}^{\mu}$ in \eqref{MVlm}. We have ${\rm Gr}_{\lambda, \underline{\lambda}}^{\mu}=\bigcup_{l\in {\bf P}_{\lambda; \underline{\lambda}}^{\mu}} {\rm C}_{l}$. \end{lemma} \begin{proof} Thanks to the isomorphism ${\rm B}^{-}({\cal K})/{{\rm B}}^{-}({\cal O})\stackrel{\sim}{\rightarrow}{\rm Gr}$, each $x\in {\rm Gr}_{\lambda, \underline{\lambda}}^{\mu}$ can be presented as $([b_1], [b_1b_2],\ldots, [b_1\ldots b_n])$, where $b_i\in {\rm B}^{-}({\cal K})$ for all $i\in[1,n].$ By the definition of ${\rm Gr}_{\lambda, \underline{\lambda}}^{\mu}$, we have $$ [b_i]\in {\rm Gr}_{\lambda_i}, ~\forall i\in [2,n];~~~[b_1]\in {\rm S}_e^{\lambda},~[b_1\ldots b_n]\in {\rm S}_{w_0}^{\mu}.
$$ Let ${\rm pr}: {\rm B}^{-}({\cal K})\rightarrow{\rm H}({\cal K})\rightarrow{\rm H}({\cal K})/{\rm H}({\cal O})={\rm P}$ be the composite of the standard projections. Set ${\rm pr}(b_i):=\mu_i$. Then $[b_i]\in {\rm S}_{w_0}^{\mu_i}$. When $i=1$, $[b_1]\in {\rm S}_{w_0}^{\mu_1}\cap {\rm S}_e^{\lambda}$. Thus $[b_1]\in {\rm N}_{l_1}$ for some $l_1\in {\bf P}_{\lambda}^{\mu_1}$. When $i>1$, $[b_i]\in {\rm S}_{w_0}^{\mu_i}\cap {\rm Gr}_{\lambda_i}$. Thus $[b_i]\in {\rm M}_{l_i}$ for some $l_i\in {\bf B}_{\lambda_i}^{\mu_i}$. Note that $\mu_1+\ldots+\mu_n={\rm pr}(b_1)+\ldots+{\rm pr}(b_n)={\rm pr}(b_1\ldots b_n)=\mu.$ Thus $l:=(l_1,\ldots, l_n)\in {\bf P}_{\lambda; \underline{\lambda}}^{\mu}$. By definition $x\in {\rm C}_l$. Therefore ${\rm Gr}_{\lambda, \underline{\lambda}}^{\mu}\subseteq\cup_{l\in {\bf P}_{\lambda; \underline{\lambda}}^{\mu}} {\rm C}_{l}$. The other direction follows similarly. \end{proof} Let $l\in {\bf P}_{\lambda; \underline{\lambda}}^{\mu}$. Recall the map $$ \tau: {\rm Conf}({\cal A}^{n+1}, {\cal B})\longrightarrow {\rm Gr}^n, ~~~~({\rm A}_1,\ldots, {\rm A}_{n+1}, {\rm B}_{n+2})\longmapsto ([b_{{\rm B}_{n+2}}^{{\rm A}_1,{\rm A}_2}],\ldots, [b_{{\rm B}_{n+2}}^{{\rm A}_1,{\rm A}_{n+1}}]). $$ Clearly $\tau({\cal C}_l^{\circ})$ is a dense subset of ${\rm C}_l$. Recall the isomorphism $i$ in \eqref{10.9.12.2a}. Following Lemma \ref{12.15.12h35m}, the isomorphism $i$ identifies $\tau({\cal C}_l^{\circ})$ with ${\cal M}_l^{\circ}$. By Theorem \ref{13.1.30.742h}, the cells ${\cal M}_l^{\circ}$ are disjoint. Theorem \ref{MVlm1a} follows from Lemma \ref{13.3.16.114h}. \paragraph{Proof of Theorem \ref{MVlm1sa}.} The group ${\rm H}({\cal K})$ acts diagonally on ${\rm Gr}^n$. Let $h\in {\rm H}({\cal K})$ be such that $[h]=t^{\nu}$. Then $ h\cdot {\rm Gr}_{\lambda; \underline{\lambda}}^{\mu}={\rm Gr}_{\lambda+\nu; \underline{\lambda}}^{\mu+\nu}. $ One can choose $h$ such that $[h]=t^{-\mu}$.
The rest follows by the same argument as in the proof of Theorem \ref{MVlm1a}. \paragraph{Proof of Theorem \ref{MVn+1bbb}.} By definition ${\bf B}_{\lambda_1,\lambda_2,\ldots, \lambda_n}^{\mu}\subset {\bf P}_{{\lambda_1}; \lambda_2, ..., \lambda_n}^{\mu}$. The Theorem follows by the same argument as in the proof of Theorem \ref{MVlm1a}. \subsection{Components of the fibers of convolution morphisms}\label{sec9.3new} Let $\underline{\lambda}=(\lambda_1,\ldots, \lambda_n)\in ({\rm P}^+)^n$. Recall the convolution variety ${\rm Gr}_{\underline{\lambda}}$ in \eqref{con.var.247}. By the geometric Satake correspondence, ${\rm IH}(\overline{{\rm Gr}_{\underline{\lambda}}}) =V_{\underline{\lambda}}:=V_{\lambda_1}\otimes \ldots \otimes V_{\lambda_n}$. Set $|\underline{\lambda}|:=\lambda_1+\ldots+\lambda_n$. Set ${\rm ht}(\underline{\lambda};\mu):=\langle \rho, |\underline{\lambda}|-\mu\rangle.$ The {\it convolution morphism} $m_{\underline{\lambda}}: \overline{{\rm Gr} _{\underline{\lambda}}}\to \overline{{\rm Gr}_{|\underline{\lambda}|}}$ projects $({\rm L}_1,\ldots, {\rm L}_n)$ to ${\rm L}_n$. It is semismall, i.e. for any $\mu\in {\rm P}^+$ such that $t^{\mu}\in \overline{{\rm Gr}_{|\underline{\lambda}|}}$, the fiber $m_{\underline{\lambda}}^{-1}(t^{\mu})$ over $t^{\mu}$ is of top dimension ${\rm ht}(\underline{\lambda};\mu)$. See \cite{MV} for a proof. By the decomposition theorem \cite{BBD}, we have $$ {\rm IH}(\overline{ {\rm Gr}_{\underline{\lambda}}})=\bigoplus_\mu F_{\mu}\otimes {\rm IH}(\overline{{\rm Gr}_{\mu}}). $$ Here the sum is over $\mu\in {\rm P}^+$ such that $t^{\mu}\in \overline{{\rm Gr}_{|\underline{\lambda}|}}$, and $F_{\mu}$ is the vector space spanned by the fundamental classes of the top dimensional components of $m_{\underline{\lambda}}^{-1}(t^{\mu})$. As a consequence, the number of top components of $m_{\underline{\lambda}}^{-1}(t^{\mu})$ equals the tensor product multiplicity $c_{\underline{\lambda}}^{\mu}$ of $V_\mu$ in $V_{\underline{\lambda}}$.
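To make the multiplicity count concrete, here is a small worked example of ours, not taken from the text: assume ${\rm G^L}={\rm SL}_2$, identify dominant weights with non-negative integers, and write $V_m$ for the $(m+1)$-dimensional irreducible representation. For $\underline{\lambda}=(1,1,2)$, the Clebsch-Gordan rule gives

```latex
V_1 \otimes V_1 \otimes V_2
 \;=\; (V_0 \oplus V_2)\otimes V_2
 \;=\; V_2 \;\oplus\; \big(V_0 \oplus V_2 \oplus V_4\big)
 \;=\; V_0 \,\oplus\, V_2^{\oplus 2} \,\oplus\, V_4 .
```

Hence $c_{\underline{\lambda}}^{0}=1$, $c_{\underline{\lambda}}^{2}=2$ and $c_{\underline{\lambda}}^{4}=1$, so the fiber $m_{\underline{\lambda}}^{-1}(t^{2})$ has exactly two top dimensional components.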
Recall the subsets ${\bf C}_{\underline{\lambda}}^{\mu}$ in \eqref{11.20.11.2}. By Lemma \ref{9.21.17.56h}, the set ${\bf C}_{\underline{\lambda}}^{\mu}$ is empty unless $(\mu,\underline{\lambda})\in ({\rm P}^+)^{n+1}$. Recall the map $\omega$ in \eqref{13.1.23.23hh}. In this subsection we prove \begin{theorem} \label{8.24.8.22h} Let ${\bf T}_{\underline{\lambda}}^{\mu}$ be the set of top components of $m_{\underline{\lambda}}^{-1}(t^{\mu})$. For each $l\in {\bf C}_{\underline{\lambda}}^{\mu}$, the closure $\overline{{\omega({\cal C}_{l}^{\circ})}}$ belongs to ${\bf T}_{\underline{\lambda}}^{\mu}$. This assignment gives a bijection between ${\bf C}_{\underline{\lambda}}^{\mu}$ and ${\bf T}_{\underline{\lambda}}^\mu$. \end{theorem} First we prove the case $n=2$. In this case, the fiber $m_{\lambda_1,\lambda_2}^{-1}(t^{\mu})$ is isomorphic to $$ \{{\rm L}\in {\rm Gr}~|~({\rm L}, t^{\mu})\in \overline{ {\rm Gr}_{\lambda_1,\lambda_2}}\}={\overline{{\rm Gr}_{\lambda_1}}}\cap t^{\mu}{\overline{{\rm Gr}_{\lambda_2^{\vee}}}}. $$ Here $\lambda_2^{\vee}:=-w_0(\lambda_2)\in {\rm P}^+$. The following Theorem is due to Anderson. \begin{theorem} [\cite{A}] \label{Thm.A.} The top components of ${\overline{{\rm Gr}_{\lambda_1}}}\cap t^{\mu}{\overline{{\rm Gr}_{\lambda_2^{\vee}}}}$ are precisely the MV cycles of coweight $(\lambda_1,\mu-\lambda_2)$ contained in ${\overline{{\rm Gr}_{\lambda_1}}}\cap t^{\mu}{\overline{{\rm Gr}_{\lambda_2^{\vee}}}}$.
\end{theorem} \begin{figure}[ht] \epsfxsize=3.5in \centerline{\epsfbox{htpm.eps}} \caption{The projection $\pi_3$ induces a bijection $\pi_3^t: \widetilde{\bf B}_{\lambda_1,\lambda_2}^{\mu}\rightarrow {\bf B}_{\lambda_1}^{\mu-\lambda_2}$.} \label{htpm} \end{figure} Recall the positive morphisms $$ p_i: {\rm Conf}_3({\cal A})\longrightarrow {\rm U},~~~({\rm A}_1, {\rm A}_2, {\rm A}_3)\longmapsto u_{{\rm B}_{i-1}, {\rm B}_{i+1}}^{{\rm A}_i},~i\in {\mathbb Z}/3. $$ Imposing the potential condition at two vertices, see the left of Fig \ref{htpm}, we get $$ \widetilde{\bf B}_{\lambda_1, \lambda_2}^{\mu}:=\{l\in {\rm Conf}_3({\cal A})({\mathbb Z}^t)~|~(\pi_{12}, \pi_{23}, \pi_{13})^t(l)=(\lambda_1,\lambda_2,\mu), {~p_1^t(l)\in {\rm U}^+_\chi({\mathbb Z}^t),~p_2^t(l)\in {\rm U}^+_\chi({\mathbb Z}^t)}\}. $$ Consider the projection $ \pi_3: {\rm Conf}_3({\cal A})\rightarrow {\rm Conf}({\cal A}^2, {\cal B})$ which maps $({\rm A}_1,{\rm A}_2,{\rm A}_3)$ to $({\rm A}_1,{\rm A}_2,{\rm B}_3)$. Its tropicalization $\pi_3^t$ induces a bijection \footnote{There is a positive Cartan group action on ${\rm Conf}_3({\cal A})({\mathbb Z}^t)$ defined via $$ {\rm H}\times {\rm Conf}_3({\cal A})\longrightarrow {\rm Conf}_3({\cal A}), \quad \quad h\times ({\rm A}_1, {\rm A}_2, {\rm A}_3)\longmapsto ({\rm A}_1, {\rm A}_2, {\rm A}_3\cdot h). $$ Its tropicalization determines a free ${\rm H}({\mathbb Z}^t)$-action on ${\rm Conf}_3({\cal A})({\mathbb Z}^t)$. By definition, one can thus identify the ${\rm H}({\mathbb Z}^t)$-orbits of ${\rm Conf}_3({\cal A})({\mathbb Z}^t)$ with points of ${\rm Conf}({\cal A},{\cal A}, {\cal B})({\mathbb Z}^t)$. Note that each element in ${\bf B}_{\lambda_1}^{\mu-\lambda_2}$ has a unique representative in $\widetilde{\bf B}_{\lambda_1,\lambda_2}^{\mu}$. Hence the map $\pi_3^t$ is a bijection.} from $\widetilde{\bf B}_{\lambda_1,\lambda_2}^{\mu}$ to ${\bf B}_{\lambda_1}^{\mu-\lambda_2}$. Recall $\omega_2$ in \eqref{3.23.12.3.lemh.i}.
By \eqref{13.1.22.168h}, the cycles $$ \overline{\omega_2({\cal C}_l^{\circ})}= \overline{\tau({\cal C}_{\pi_3^t(l)}^{\circ})},~~~~~l\in\widetilde{\bf B}_{\lambda_1,\lambda_2}^{\mu} $$ are precisely the MV cycles of coweight $(\lambda_1,\mu-\lambda_2)$ contained in $\overline{{\rm Gr}_{\lambda_1}}$. Let $l\in\widetilde{\bf B}_{\lambda_1, \lambda_2}^{\mu}$. Let $x=({\rm A}_1,{\rm A}_2,{\rm A}_3)\in {\cal C}_{l}^{\circ}$. By identity 2 of Lemma \ref{3.23.12.1aa}, $$ \omega_2(x)=[\pi_{13}(x)\overline{w}_0 \cdot \big(p_3(x)\big)^{-1}{\pi_{32}(x)}],~~~ \mbox{where } [\pi_{13}(x)]=t^{\mu},~~[\pi_{32}(x)]=t^{\lambda_2^{\vee}}.$$ Therefore $$ \omega_2(x)\in t^{\mu}\overline{{\rm Gr}_{\lambda_2^{\vee}}} \Longleftrightarrow t^{-\mu}\omega_2(x)\in \overline{{\rm Gr}_{\lambda_2^{\vee}}}\Longleftrightarrow t^{-\mu}\pi_{13}(x)\overline{w}_0\cdot [\big(p_3(x)\big)^{-1}{\pi_{32}(x)}] \in \overline{{\rm Gr}_{\lambda_2^{\vee}}} \Longleftrightarrow \big(p_3(x)\big)^{-1}\cdot t^{\lambda_2^{\vee}}\in \overline{{\rm Gr}_{\lambda_2^{\vee}}}. $$ Here the last equivalence is due to the fact that $t^{-\mu}\pi_{13}(x)\overline{w}_0\in {\rm G}({\cal O})$. Therefore for any $l\in\widetilde{\bf B}_{\lambda_1,\lambda_2}^\mu$, $$ \omega_2({\cal C}_l^\circ)\subset t^\mu \overline{{\rm Gr}_{\lambda_2^{\vee}}}\Longleftrightarrow \big(p_3({\cal C}_l^\circ)\big)^{-1}\cdot t^{\lambda_2^{\vee}}\subset \overline{{\rm Gr}_{\lambda_2^{\vee}}}. $$ By Lemma \ref{Lema10.1.1}, Lemma \ref{13.1.8.53h}, and the definition of ${\bf C}_{\lambda_1,\lambda_2}^\mu$, we get $$\big(p_3({\cal C}_l^\circ)\big)^{-1}\cdot t^{\lambda_2^{\vee}}\in \overline{{\rm Gr}_{\lambda_2^{\vee}}} \Longleftrightarrow \big(p_3({\cal C}_l^\circ)\big)^{-1}\in {\rm U}({\cal O}) \Longleftrightarrow p_3({\cal C}_l^\circ) \in {\rm U}({\cal O})\Longleftrightarrow p_3^t(l) \in {\rm U}_{\chi }^{+}({\mathbb Z}^t)\Longleftrightarrow l\in {\bf C}_{\lambda_1,\lambda_2}^{\mu}. $$ Let $l\in {\bf C}_{\lambda_1,\lambda_2}^{\mu}$.
Let $x=({\rm A}_1,{\rm A}_2,{\rm A}_3)\in {\cal C}_l^{\circ}$. Note that $\omega_3(x)=[h_{{\rm A}_1,{\rm A}_3}]=t^{\mu}.$ Therefore $\omega(x)=(\omega_2(x), \omega_3(x))\in m_{\lambda_1,\lambda_2}^{-1}(t^{\mu})$. The rest is due to Theorem \ref{Thm.A.}. \vskip 2mm Now let us prove the general case. Consider the scissoring morphism \begin{align} \label{13.1.18.1h} c=(c_1,c_2): {\rm Conf}_{n+1}({\cal A})&\longrightarrow {\rm Conf}_{n}({\cal A})\times {\rm Conf}_3({\cal A}), \nonumber\\ ({\rm A}_1,\ldots, {\rm A}_{n+1})&\longmapsto ({\rm A}_1,\ldots, {\rm A}_{n-1}, {\rm A}_{n+1})\times ({\rm A}_{n-1},{\rm A}_n,{\rm A}_{n+1}). \end{align} Due to the scissoring congruence invariance, the map $c^t$ induces a decomposition \begin{equation} \label{13.1.17.101hh} {\bf C}_{\lambda_1,\ldots, \lambda_{n}}^{\mu}=\bigsqcup_{\nu\in {\rm P}^+} {\bf C}_{\lambda_1,\ldots, \lambda_{n-2}, \nu}^{\mu}\times {\bf C}_{\lambda_{n-1},\lambda_n}^{\nu}. \end{equation} \begin{proposition} \label{13.2.1.5.29h} The cardinality of ${\bf C}_{\underline{\lambda}}^{\mu}$ is the tensor product multiplicity $c_{\underline{\lambda}}^{\mu}$ of $V_{\mu}$ in $V_{\underline{\lambda}}$. \end{proposition} \begin{proof} Decomposing the last tensor product in $V_{\lambda_1}\otimes \ldots \otimes (V_{\lambda_{n-1}}\otimes V_{\lambda_n})$ into a sum of irreducibles, and then tensoring each of them with $V_{\lambda_1}\otimes\ldots\otimes V_{\lambda_{n-2}}$, we get $$ c_{\lambda_1,\ldots, \lambda_n}^{\mu}=\sum_{\nu \in{\rm P}^+} c_{\lambda_1,\ldots, \lambda_{n-2},\nu}^{\mu}c_{\lambda_{n-1},\lambda_n}^{\nu}. $$ As a consequence of the $n=2$ case, $|{\bf C}_{\lambda, \mu}^{\nu}|=c_{\lambda, \mu}^{\nu}$. The Proposition follows by induction and \eqref{13.1.17.101hh}. \end{proof} \begin{lemma} \label{13.1.18.24h} For $l\in {\bf C}_{\underline{\lambda}}^{\mu}$, the cycles $\omega({\cal C}_{l}^{\circ})$ are disjoint.
\end{lemma} \begin{proof} By Lemma \ref{13.1.24.1.hhh}, $\kappa({\cal C}_l^\circ)=i_1\circ \omega({\cal C}_l^\circ).$ The Lemma follows from Theorem \ref{5.8.10.45a}. \end{proof} \begin{lemma} \label{13.1.18.23h} For any $l \in {\bf C}_{\underline{\lambda}}^{\mu}$, we have $\omega({\cal C}_{l}^{\circ})\subset m_{\underline{\lambda}}^{-1}(t^{\mu}).$ \end{lemma} \begin{proof} Let $x=({\rm A}_1,\ldots, {\rm A}_{n+1})\in {\cal C}_{l}^{\circ}$. Recall the expression \eqref{3.23.12.3}. We have $$ [g_i]:=[u_{{\rm B}_{i-1},{\rm B}_{i+1}}^{{\rm A}_i}h_{{\rm A}_i, {\rm A}_{i+1}}\overline{w}_0]=u_{{\rm B}_{i-1}, {\rm B}_{i+1}}^{{\rm A}_i}\cdot t^{\lambda_i}\in {\rm Gr}_{\lambda_i}, ~~~i\in [1,n]. $$ Thus $\omega(x)\in {{\rm Gr}_{\underline{\lambda}}}$. Meanwhile $m_{\underline{\lambda}}\circ \omega(x)=[h_{{\rm A}_1,{\rm A}_{n+1}}]=t^{\mu}.$ The Lemma is proved. \end{proof} \begin{lemma} \label{13.1.18.25h} Let $l\in {\bf C}_{\underline{\lambda}}^{\mu}$. The closure $\overline{\omega({\cal C}_l^{\circ})}$ is an irreducible variety of dimension ${\rm ht}(\underline{\lambda};\mu)$. \end{lemma} \begin{proof} By construction, $\overline{\omega({\cal C}_{l}^{\circ})}$ is irreducible. Note that $m_{\underline{\lambda}}^{-1}(t^{\mu})$ is of top dimension ${\rm ht}(\underline{\lambda};\mu)$. By Lemma \ref{13.1.18.23h}, ${\rm dim}~\overline{\omega({\cal C}_{l}^{\circ})}\leq {\rm ht}(\underline{\lambda};\mu).$ To show that ${\rm dim}~\overline{\omega({\cal C}_{l}^{\circ})}\geq {\rm ht}(\underline{\lambda};\mu)$, we use induction. \vskip 2mm Set $\pi_{n-1,n+1}^t(l):=\nu.$ Recall $c=(c_1,c_2)$ in \eqref{13.1.18.1h}. 
Then $c_1^t(l)\in {\bf C}_{\lambda_1,\ldots, \lambda_{n-2},\nu}^{\mu}$, $c_2^t(l)\in {\bf C}_{\lambda_{n-1},\lambda_n}^{\nu}.$ Consider the projection $$ {\rm pr}: \omega({\cal C}_{l}^{\circ})\longrightarrow {\rm Gr}^{n-1},~~~({\rm L}_1,\ldots,{\rm L}_{n-1}, {\rm L}_n)\longmapsto ({\rm L}_1,\ldots,{\rm L}_{n-2}, {\rm L}_n). $$ Its image is ${\rm pr}(\omega({\cal C}_{l}^{\circ}))=\omega({\cal C}_{c_1^t({l})}^\circ).$ Let ${\bf b}=({\rm L}_1,\ldots, {\rm L}_{n-2},{\rm L}_n)\in \omega({\cal C}_{c_1^t(l)}^\circ)$. The fiber over ${\bf b}$ is $$ {\rm pr}^{-1}({\bf b}):=\{{\rm L} \in {\rm Gr}~|~({\rm L}_1,\ldots, {\rm L}_{n-2},{\rm L},{\rm L}_{n})\in \omega({\cal C}_{l}^{\circ})\}. $$ Let $y=({\rm A}_1,\ldots,{\rm A}_{n-1}, {\rm A}_{n+1})\in {\cal C}_{c_1^t(l)}^\circ$ be such that $\omega(y)={\bf b}$. Set $b_y:=b_{{\rm B}_{n+1}}^{{\rm A}_1,{\rm A}_{n-1}}.$ For any $x\in {\cal C}_{l}^{\circ}$ such that $c_1(x)=y$, we have ${\rm pr}(\omega(x))=\omega(y)={\bf b}$. By \eqref{3.23.12.3.lemh.i}, we have $$ \omega_{n-1}(x)=[b_{{\rm B}_{n+1}}^{{\rm A}_1,{\rm A}_n}]=b_y\cdot \omega_2(c_2(x)) \in {\rm pr}^{-1}({\bf b}). $$ Then it is easy to see that $ b_y\cdot \overline{\omega_2({\cal C}_{c_2^t(l)}^{\circ})}\subset \overline{{\rm pr}^{-1}({\bf b})}. $ Therefore $${\rm dim}~\overline{\omega({\cal C}_{l}^{\circ})}\geq {\rm dim}~\overline{\omega({\cal C}_{c_1^t(l)}^{\circ})}+{\rm dim}~\overline{\omega({\cal C}_{c_2^t(l)}^{\circ})}.$$ The case when $n=2$ is proved above. The Lemma follows by induction. \end{proof} \begin{proof} [Proof of Theorem \ref{8.24.8.22h}] By Lemmas \ref{13.1.18.23h}, \ref{13.1.18.25h}, the map ${\bf C}_{\underline{\lambda}}^{\mu} \longrightarrow {\bf T}_{\underline{\lambda}}^{\mu}$, $l\longmapsto\overline{\omega({\cal C}_{l}^\circ)}$ is well-defined. By Lemma \ref{13.1.18.24h} and the very construction of the cell ${\cal C}_{l}^{\circ}$, it is injective.
Since $|{\bf C}_{\underline{\lambda}}^{\mu}|=|{\bf T}_{\underline{\lambda}}^{\mu}|=c_{\underline{\lambda}}^{\mu}$, the map is a bijection. \end{proof} \subsection{Proof of Theorem \ref{5.8.10.45b}} \label{sec9.4} We focus on the case $\mu=0$ of ${\bf C}_{\underline{\lambda}}^{\mu}$. Consider the scissoring morphism \begin{align} c=(c_1,c_2): {\rm Conf}_{n+1}({\cal A})&\longrightarrow {\rm Conf}_n({\cal A})\times {\rm Conf}_3({\cal A}),\nonumber\\ ({\rm A}_1,\ldots, {\rm A}_n, {\rm A}_{n+1})&\longmapsto ({\rm A}_1,\ldots, {\rm A}_n)\times ({\rm A}_1,{\rm A}_n, {\rm A}_{n+1}).\nonumber \end{align} Due to the scissoring congruence invariance, the morphism $(c_1^t, c_2^t)$ induces a decomposition $$ {\bf C}_{\underline{\lambda}}^{0}=\bigsqcup _{\nu} {\bf C}_{\lambda_1,\ldots, \lambda_{n-1},\nu}\times {\bf C}_{\nu^{\vee}, \lambda_n}^{0}. $$ Note that ${\bf C}_{\nu^{\vee}, \lambda_n}^{0}$ is empty if $\nu\neq\lambda_n$. Moreover $|{\bf C}_{\lambda_n^{\vee}, \lambda_n}^{0}|=1$. Thus $c_1^t: {\bf C}_{\underline{\lambda}}^{0}\rightarrow {\bf C}_{\underline{\lambda}}$ is a bijection. \vskip 2mm Consider the shifted projection $$ p_s : {\rm Gr}^n\longrightarrow {\rm Conf}_n({\rm Gr}),~~~~~~({\rm L}_1,\ldots, {\rm L}_n)\longmapsto ({\rm L}_n, {\rm L}_1,\ldots, {\rm L}_{n-1}). $$ \begin{lemma} \label{13.1.24.47hhh} Let $l\in {\bf C}_{\underline{\lambda}}^{0}$. Then $p_s\circ \omega({\cal C}_{l}^\circ)=\kappa({\cal C}_{c_1^t(l)}^{\circ}).$ \end{lemma} \begin{proof} Let $x=({\rm A}_1,\ldots, {\rm A}_{n+1})\in {\cal C}_{l}^{\circ}$. Then $u:=u_{{\rm B}_{n+1},{\rm B}_n}^{{\rm A}_1}\in {\rm U}({\cal O})$. Let $y:=c_1(x)\in {\cal C}_{c_1^t(l)}^\circ$. Recall $\omega_i$ in \eqref{13.1.24.41h}. Then $\omega_{n+1}(x)=[1]$. For $i\in [2,n]$, we have $$ \omega_i(x)=[g_{\{{\rm U},{\rm B}^{-}\}}(\{{\rm A}_1,{\rm B}_{n+1}\}, \{{\rm A}_i,{\rm B}_1\})]=u\cdot [g_{\{{\rm U},{\rm B}^{-}\}}(\{{\rm A}_1,{\rm B}_{n}\}, \{{\rm A}_i,{\rm B}_1\})]= u\cdot \omega_i(y).
$$ Therefore $$ p_s\circ \omega(x)=(\omega_{n+1}(x), u\cdot \omega_2(y),\ldots, u\cdot \omega_n(y))=([1], \omega_2(y),\ldots, \omega_n(y))=\kappa(y). $$ Here the last step is due to Lemma \ref{13.1.24.1.hhh}. Since $c_1({\cal C}_{l}^{\circ})={\cal C}_{c_1^t(l)}^{\circ}$, the Lemma is proved. \end{proof} Recall ${\rm Gr}_{c(\underline{\lambda})}$ and the set ${\bf T}_{\underline{\lambda}}$ of its top components in Theorem \ref{5.8.10.45b}. The connected group ${\rm G}({\cal O})$ acts on ${\rm Gr}_{c(\underline{\lambda})}$. It preserves each component of ${\rm Gr}_{c(\underline{\lambda})}$. So these components live naturally on the stack ${\rm Conf}_n({\rm Gr})={\rm G}({\cal O})\backslash \big([1]\times {\rm Gr}^{n-1}\big)$. Recall the fiber $m_{\underline{\lambda}}^{-1}([1])$ and the set ${\bf T}_{\underline{\lambda}}^0$ in Theorem \ref{8.24.8.22h}. Note that $p_s\big(m_{\underline{\lambda}}^{-1}([1])\big)={{\rm G}}({\cal O})\backslash\overline{{\rm Gr}_{c(\underline{\lambda})}}\subset {\rm Conf}_{n}({\rm Gr}).$ It induces a bijection ${\bf T}_{\underline{\lambda}}^0\stackrel{\sim}{\longrightarrow} {\bf T}_{\underline{\lambda}}$. \paragraph{Proof of Theorem \ref{5.8.10.45b}.} By Theorem \ref{8.24.8.22h} and the discussion above, there is a chain of bijections: $ {\bf C}_{\underline{\lambda}}\stackrel{\sim}{\longrightarrow} {\bf C}_{\underline{\lambda}}^0\stackrel{\sim}{\longrightarrow} {\bf T}_{\underline{\lambda}}^0\stackrel{\sim}{\longrightarrow} {\bf T}_{\underline{\lambda}}. $ By Lemma \ref{13.1.24.47hhh}, this chain is achieved by the map $\kappa$. The Theorem is proved. \section{Positive ${\rm G}$-laminations and surface affine Grassmannians} \label{sec11} A decorated surface $S$ comes with an unordered collection $\{s_1, ..., s_n\}$ of special points, defined up to isotopy. Denote by $\partial S$ the boundary of $S$. We assume that $\partial S$ is not empty.
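For instance (a minimal illustrative example; the disc notation $D_n$ reappears later in this section), consider the disc with $n$ special points:

```latex
% A minimal example of a decorated surface: the disc D_n with n special
% points on its boundary circle. Removing the special points cuts the
% boundary into n open intervals:
\begin{equation*}
S = D_n, \qquad \partial S \cong S^1, \qquad
\partial S - \{s_1, \ldots, s_n\} = \bigsqcup_{i=1}^{n} {\rm I}_i .
\end{equation*}
```

By contrast, an annulus with special points on one boundary circle only has a hole without special points; such holes are shrunk to punctures in the procedure described next.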
We define the {\it punctured boundary} \begin{equation} \label{punctb} \widehat \partial S:= \partial S - \{s_1, ..., s_n\}. \end{equation} Its components are called {\it boundary circles} and {\it boundary intervals}. Let us shrink all holes without special points on $S$ into {\it punctures}, getting a homotopy equivalent surface. Abusing notation, we denote it again by $S$. We say that the punctures and special points on $S$ form the set of {\it marked points} on $S$: $$ \{\mbox{marked points}\}:= \{\mbox{special points $s_1, ..., s_n$}\} \cup \{\mbox{punctures}\}. $$ Pick a point $\ast s_i$ in each of the boundary intervals. The dual decorated surface $\ast S$ is given by the same surface $S$ with the set of special points $\{\ast s_1, ..., \ast s_n\}$. We have a duality: $\ast \ast S =S$. Observe that the marked points are in bijection with the components of the punctured boundary $\widehat \partial (\ast S)$. \subsection{The space ${\cal A}_{{\rm G}, S}$ with the potential ${\cal W}$} \paragraph{Twisted local systems and decorations.} Let ${\rm T}'S $ be the complement to the zero section of the tangent bundle on a surface $S$. Its fiber ${\rm T}'_y$ at $y \in S$ is homotopy equivalent to a circle. Let $x\in {\rm T}'_yS$. The fundamental group $\pi_1({\rm T}'S, x)$ is a central extension: \begin{equation} \label{cen.ext} 0\longrightarrow \pi_1({\rm T}_y'S,x)\longrightarrow \pi_1({\rm T}'S, x)\longrightarrow \pi_1(S,y)\longrightarrow 0, ~~~~\pi_1({\rm T}_y'S,x)={\mathbb Z}. \end{equation} Let ${\cal L}$ be a ${\rm G}$-local system on ${\rm T}'S$ with the monodromy $s_{{\rm G}}$ around a generator of $\pi_1({\rm T}'_yS,x)$. Let us assume that ${\rm G}$ acts on ${\cal L}$ on the right. We call ${\cal L}$ a {\it twisted} ${\rm G}$-local system on $S$. It gives rise to the {\it associated decorated flag bundle} ${\cal L}_{\cal A}:={\cal L}\times_{{\rm G}}{\cal A}$. Let ${\rm C}$ be a component of $\widehat\partial (\ast S)$.
There is a section $\sigma: {\rm C}\rightarrow {\rm T}'{\rm C}$, canonical up to isotopy, given by the tangent vectors to ${\rm C}$ directed according to the orientation of ${\rm C}$. A {\it decoration on ${\cal L}$ over ${\rm C}$} is a flat section of the restriction of ${\cal L}_{\cal A}$ to $\sigma({\rm C})$. \begin{definition}[\cite{FG1}] \label{moduliMGS} A twisted decorated ${\rm G}$-local system on $S$ is a pair $({\cal L,\alpha})$, where ${\cal L}$ is a twisted ${\rm G}$-local system on $S$, and $\alpha$ is given by a decoration on ${\cal L}$ over each component of $\widehat\partial (\ast S)$. The moduli space ${\cal A}_{{\rm G}, S}$ parametrizes twisted decorated ${\rm G}$-local systems on $S$. \end{definition} Abusing terminology, a decoration is given by decorated flags at the marked points. \paragraph{Remark.} Since the boundary $\partial S$ of $S$ is not empty, the extension \eqref{cen.ext} splits: $$ \pi_1({\rm T}'S, x)\stackrel{\sim}{=}\pi_1({\rm T}_y'S,x)\times \pi_1(S,y). $$ However the splitting is not unique. As a space, ${\cal A}_{{\rm G}, S}$ is isomorphic, although non-canonically if $s_{{\rm G}}\neq 1$, to its counterpart of usual unipotent ${\rm G}$-local systems on $S$ with decorations. The mapping class group $\Gamma_S$ acts differently on the two spaces. For example, when $S$ is a disk $D_n$ with $n$ special points on the boundary, then $\Gamma_{D_n}={\mathbb Z}/n{\mathbb Z}$. Both moduli spaces are isomorphic to the configuration space ${\rm Conf}_n({\cal A})$. The mapping class group ${\mathbb Z}/n{\mathbb Z}$ acts on the untwisted moduli space by the cyclic rotation $({\rm A}_1,\ldots, {\rm A}_n)\mapsto ({\rm A}_n, {\rm A}_1,\ldots, {\rm A}_{n-1})$, while its action on ${\cal A}_{{\rm G}, D_n}$ is given by the ``twisted" rotation $$ ({\rm A}_1,{\rm A}_2, \ldots, {\rm A}_n)\longmapsto ({\rm A}_n\cdot s_{\rm G}, {\rm A}_1,\ldots, {\rm A}_{n-1}).
$$ \begin{theorem} [${\it loc.cit.}$] The space ${\cal A}_{{\rm G}, S}$ admits a natural positive structure such that the mapping class group $\Gamma_S$ acts on ${\cal A}_{{\rm G}, S}$ by positive birational isomorphisms. \end{theorem} Below we give two equivalent definitions of the potential ${\cal W}$ on ${\cal A}_{{\rm G}, S}$. \paragraph{Potential via generalized monodromy.} A decorated flag ${\rm A}$ provides an isomorphism \begin{equation} \label{isoia} i_{{\rm A}}: {\rm U_{{\rm A}}/[U_{{\rm A}},U_{{\rm A}}]} \stackrel{\sim}{\longrightarrow} \oplus_{\alpha \in \Pi}{\Bbb A}^1. \end{equation} Let $\Sigma: \oplus_{\alpha \in \Pi}{\Bbb A}^1\to {\Bbb A}^1$ be the sum map. Then $\chi_{\rm A} = \Sigma\circ i_{\rm A}$. This characterizes the map $i_{{\rm A}}$. \vskip 2mm Let us assign to each component ${\rm C}$ of $\widehat \partial (\ast S)$ a canonical rational map, called {\it generalized monodromy at ${\rm C}$}: ${\mu}_{\rm C}: {\cal A}_{{\rm G}, S} \longrightarrow \oplus_{\alpha \in \Pi}{\Bbb A}^1.$ There are two possible cases. \vskip 2mm (i) The component ${\rm C}$ is a boundary circle. The decoration over ${\rm C}$ is a decorated flag ${\rm A}_{\rm C}$ in the fiber of ${\cal L}_{\cal A}$ on ${\rm C}$, invariant under the monodromy around ${\rm C}$. It defines a conjugacy class in the unipotent subgroup ${\rm U}_{{\rm A}_{\rm C}}$ preserving ${\rm A}_{\rm C}$. So we get a regular map $$ {\mu}_{\rm C}: {\cal A}_{{{\rm G}}, S} \longrightarrow {\rm U_{A_C}/[U_{{\rm A}_C},U_{{\rm A}_C}]}\stackrel{i_{{\rm A}_{\rm C}}}{=}\oplus_{\alpha \in \Pi}{\Bbb A}^1. $$ (ii) The component ${\rm C}$ is a boundary interval on a hole $h$. The universal cover of $h$ is a line. We get an infinite sequence of intervals on this line projecting to the boundary interval(s) on $h$. There are decorated flags assigned to these intervals. Take an interval ${\rm C}'$ on the cover projecting to ${\rm C}$. Let ${\rm C}_{-}'$ and ${\rm C}_{+}'$ be the intervals just before and after ${\rm C}'$.
We get a triple of decorated flags $({\rm A}_{-}, {\rm A}, {\rm A}_{+})$ sitting over these intervals. There is a unique $u\in {\rm U}_{{\rm A}}$ such that $ {\rm B}_{+} = u \cdot {\rm B}_{-}, $ where ${\rm B}_{\pm}=\pi({\rm A}_{\pm})\in {\cal B}$. Projecting $u$ to ${\rm U}_{{\rm A}}/[{\rm U}_{{\rm A}}, {\rm U}_{{\rm A}}]$, we get a map $\mu_{\rm C}: {\cal A}_{{\rm G}, S} \rightarrow \oplus_{\alpha \in \Pi}{\Bbb A}^1.$ It is clear that $\mu_{\rm C}$ does not depend on the choice of ${\rm C}'$. \vskip 2mm Composing the generalized monodromy $\mu_{\rm C}$ with the sum map $\oplus_{\alpha \in \Pi}{\Bbb A}^1\to {\Bbb A}^1$, we get \begin{equation} \label{6} {\cal W}_{\rm C}:= \Sigma \circ \mu_{\rm C}: {\cal A}_{{\rm G}, S} \longrightarrow {\Bbb A}^1, \end{equation} called {\it the potential associated with ${\rm C}$}. \begin{definition} The potential ${\cal W}$ on the space ${\cal A}_{{\rm G}, S}$ is defined as \begin{equation} \label{defpot1} {\cal W}:= \sum_{\mbox{\rm components C of $\widehat\partial (\ast S)$}} {\cal W}_{\rm C}. \end{equation} \end{definition} \paragraph{Potential via ideal triangulations.} \begin{definition} An ideal triangulation of a decorated surface $S$ is a triangulation of the surface whose vertices are the marked points of $S$. \end{definition} Let $T$ be an ideal triangulation of $S$. Pick a triangle $t$ of $T$. The restriction to $t$ provides a projection\footnote{If the vertices of $t$ coincide, one can first pull back to a sufficiently big cover $\widetilde{S}$ of $S$, and then consider the restriction to a triangle $\widetilde{t}\subset\widetilde{S}$ which projects onto $t$. Clearly the result is independent of the pair $\widetilde{t}\subset \widetilde{S}$ chosen.} $\pi_t$ from ${\cal A}_{{\rm G}, S}$ to ${\rm Conf}_3({\cal A})$. Recall the potential ${\cal W}_3$ on the latter space.
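Before passing to the definition via triangulations, here is what the isomorphism $i_{\rm A}$ and the character $\chi_{\rm A}$ above look like in the simplest cases. This is a standard illustration, assuming the realization of ${\rm U}$ by upper triangular unipotent matrices and the standard decorated flag ${\rm A}$:

```latex
% For G = SL_2 the group U is abelian, so U/[U,U] = U, identified with A^1:
\begin{equation*}
\chi_{\rm A}\begin{pmatrix} 1 & a \\ 0 & 1 \end{pmatrix} = a.
\end{equation*}
% For G = SL_3 there are two simple roots; i_A(u) = (u_{12}, u_{23}) and
\begin{equation*}
\chi_{\rm A}\begin{pmatrix} 1 & u_{12} & u_{13} \\ 0 & 1 & u_{23} \\ 0 & 0 & 1 \end{pmatrix}
= u_{12} + u_{23}.
\end{equation*}
```

In these coordinates each potential ${\cal W}_{\rm C}$ is the sum of the entries just above the diagonal of the corresponding generalized monodromy.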
\begin{definition} The potential on the space ${\cal A}_{{\rm G}, S}$ is defined as \begin{equation} \label{defpot2} {\cal W}:= \sum_{\mbox{\rm triangles $t$ of $T$}} {\cal W}_3\circ \pi_t. \end{equation} \end{definition} Changing $T$ by a flip does not change the sum (\ref{defpot2}), since the potential on a quadrilateral is invariant under a flip (Section \ref{sec2}). Since any two ideal triangulations are related by a sequence of flips, the potential (\ref{defpot2}) is independent of the ideal triangulation $T$ chosen. \paragraph{The above definitions are equivalent.} There is a natural bijection between the marked points, that is the vertices of $T$, and the components of $\widehat \partial (\ast S)$. Working with definition (\ref{defpot2}), the sum of the terms coming from all angles of the triangles at a given puncture is the potential ${\cal W}_{\rm C}$ assigned to the corresponding boundary circle. A similar sum over all angles at a given special point is the potential ${\cal W}_{\rm C}$ assigned to the corresponding boundary interval. Thus the potentials (\ref{defpot1}) and (\ref{defpot2}) coincide. \paragraph{Positivity of the potential ${\cal W}$.} In the positive structure of ${\cal A}_{{\rm G},S}$ introduced in \cite{FG1}, the projection $\pi_t: {\cal A}_{{\rm G}, S}\rightarrow {\rm Conf}_3({\cal A})$ is a positive morphism. By Theorem \ref{mth1} and \eqref{defpot2}, we get \begin{theorem} The potential ${\cal W}$ is a positive function on the space ${\cal A}_{{\rm G}, S}$. \end{theorem} \paragraph{Positive integral ${\rm G}$-laminations.} We define the set of {\it positive integral ${\rm G}$-laminations on $S$}: \begin{equation} {\cal A}^{+}_{{\rm G}, S}({\mathbb Z}^t)=\{l\in {\cal A}_{{\rm G}, S}({\mathbb Z}^t)~|~ {\cal W}^t(l)\geq 0\}. \end{equation} By tropicalization, the mapping class group $\Gamma_S$ acts on ${\cal A}_{{\rm G}, S}({\mathbb Z}^t)$. The potential ${\cal W}$ is $\Gamma_S$-invariant.
Thus $\Gamma_S$ acts on the subset ${\cal A}_{{\rm G}, S}^+({\mathbb Z}^t)$. \paragraph{Partial potentials.} Given any simple positive root $\alpha$, there is a component $\chi_{A, \alpha}$ of the character $\chi_{A}$ so that $\chi_{A} = \sum_{\alpha \in \Pi}\chi_{A, \alpha}$. Let $S$ be a decorated surface. Then to each component $C$ of $\widehat\partial (\ast S)$ one associates a function $W_{C, \alpha}$. It is evidently invariant under the action of the mapping class group $\Gamma_S$ of $S$. \begin{theorem} Let $S$ be a surface with $n$ holes and no special points. Then the algebra of regular $\Gamma_S$-invariant functions on the space ${\cal A}_{G,S}$ is a polynomial algebra in $n {\rm rk}(G)$ variables freely generated by the partial potentials $W_{C, \alpha}$, where $C$ runs through all boundary circles on $S$, and $\alpha$ runs through the simple positive roots. \end{theorem} \begin{proof} It is well known that the action of the mapping class group $\Gamma_S$ on the moduli space ${\rm Loc}^{\rm un}_{G, S}$ of unipotent $G$-local systems on a surface $S$ with holes is ergodic. So there are no non-constant $\Gamma_S$-invariant regular functions on this space. On the other hand, there is a canonical $\Gamma_S$-invariant projection given by the generalised monodromy around the holes: $$ {\cal A}_{G, S} \longrightarrow \prod_{\mbox{\rm holes of $S$}}({\Bbb A}^1)^{{\rm rk}(G)}. $$ Its fiber over zero is the space ${\rm Loc}^{\rm un}_{G, S}$. \end{proof} \subsection{Duality Conjectures for decorated surfaces} \label{secdualitycon} \begin{definition} \label{loc.gl.s} The moduli space ${\rm Loc}_{{\rm G} , S}$ parametrizes pairs $({\cal L},\gamma)$, where ${\cal L}$ is a twisted ${\rm G} $-local system on $S$, and $\gamma$ assigns a decoration on ${\cal L}$ to each boundary interval of $\widehat\partial (\ast S)$. \end{definition} {It is important to consider several different types of twisted ${\rm G}$-local system on $S$ which differ by the data assigned to the boundary.
Recall that components of the punctured boundary $\widehat\partial (\ast S)$ are in bijection with the marked points of $S$. There are three options for the data at a given marked point $m$, which could be either a special point, or a puncture: 1) No data. 2) A decoration, that is a flat section of the associated decorated flag bundle ${\cal L}_{\cal A}$ near $m$. 3) A framing, that is a flat section of the associated flag bundle ${\cal L}_{\cal B}$ near $m$. In accordance with this, there are five different moduli spaces: \begin{itemize} \item ${\cal A}_{{\rm G} , S}$: decorations at both special points and punctures. \item ${\cal L}oc_{{\rm G} , S}$: no extra data. \item ${\rm Loc}_{{\rm G} , S}$: decorations at the special points only. No extra data at the punctures. \item ${\cal P}_{{\rm G} , S}$: decorations at the special points, framings at the punctures. \item ${\cal X}_{{\rm G} , S}$: framings at the special points and punctures. \end{itemize} If $S$ does have special points, it is silly to consider ${\cal L}oc_{{\rm G} , S}$ since it ignores them. If $S$ has no punctures, then (besides ${\cal L}oc_{{\rm G} , S}$) there are three different moduli spaces: $$ {\cal A}_{{\rm G}, S} = {\rm Loc}_{{\rm G}, S}, ~~~~{\cal P}_{{\rm G} , S}, ~~~~{\cal X}_{{\rm G} , S}. $$ If $S$ has no special points, i.e. it is a punctured surface, there are three different moduli spaces: $$ {\cal A}_{{\rm G} , S}, ~~~~{\cal L}oc_{{\rm G} , S} = {\rm Loc}_{{\rm G} , S}, ~~~~{\cal P}_{{\rm G} , S} = {\cal X}_{{\rm G} , S}. $$ Duality Conjectures interchange a group ${\rm G}$ with the Langlands dual group ${\rm G}^L$, and a decorated surface $S$ with the dual decorated surface $\ast S$.\footnote{Although the decorated surface $\ast S$ is isomorphic to $S$, the isomorphism is not quite canonical.} Here are some examples.
If $S$ has no special points, the dual pairs look as follows: $$ {\cal A}_{{\rm G}, S}~~~\mbox{\rm is dual to} ~~~{\cal P}_{{\rm G}^L, \ast S} = {\cal X}_{{\rm G}^L, \ast S}, ~~~~~~~~ ({\cal A}_{{\rm G}, S}, {\cal W}) ~~~\mbox{\rm is dual to} ~~~{\cal L}oc_{{\rm G}^L, \ast S} = {\rm Loc}_{{\rm G}^L, \ast S}. $$ If $S$ does have special points, the moduli space ${\cal X}_{{\rm G} , S}$ plays a secondary role. The key dual pair is this: $$ ({\cal A}_{{\rm G}, S}, {\cal W}) ~~~\mbox{\rm is dual to} ~~~{\rm Loc}_{{\rm G}^L, \ast S}. $$ There are plenty of other dual pairs, obtained from this one by degenerating the potential, and simultaneously altering the dual space. Let us discuss some of them. \paragraph{Generalisations.} Let us assign to each marked point $m$ of $S$ a subset ${I_m}\subset I$, possibly empty. First, let us define a new potential on the space ${\cal A}_{{\rm G}, S}$. Observe that any non-degenerate additive character $\chi$ of ${\rm U}$ is naturally decomposed into a sum of characters parametrised by the set of positive simple roots: $\chi = \sum_{i\in I}\chi_i$. Then, replacing in the definition of the potential at a given marked point $m$ the nondegenerate character $\chi$ by the character $\sum_{i\in I_m}\chi_i$, we get a new function ${\cal W}_{m, {I_m}}$ at $m$, and set \begin{equation} \label{modpot} {\cal W}_{\{{I_m}\}}:= \sum_{\mbox{\rm marked points $m$ on $S$}} {\cal W}_{m, {I_m}}. \end{equation} Next, let us define a modified moduli space ${\cal P}^{\{{\rm I_m}\}}_{{\rm G}^L, \ast S}$. Recall that for each simple positive root $\alpha_i$ there is a ${\rm G}$-invariant divisor in ${\cal B} \times {\cal B}$. Let $D_i$ be its preimage in ${\cal A} \times {\cal A}$. We say that a pair $(A_1, A_2) \in {\cal A} \times {\cal A}$ is in position $I-I_m$ if $(A_1, A_2) \in {\cal A} \times {\cal A} - \cup_{i\in I-I_m}D_i$. Recall that $C_m$ is the boundary component of $\ast S$ matching a marked point $m$ on $S$.
\begin{definition} \label{bulletdef} The moduli space ${\cal P}^{\{{I_m}\}}_{{\rm G}^L, \ast S}$ parametrizes twisted ${\rm G}^L$-local systems on $S$ plus a) A reduction of the structure group ${\rm G}^L$ near each puncture $m$ to the parabolic subgroup of type $I-I_m$. b) A decoration at every boundary interval $C_m$ of $\ast S$ such that \begin{itemize} \item The decorated flags at the ends of the boundary interval $C_m$ are in the position $I-I_m$. \end{itemize} \end{definition} So if $I = I_m$, the data a) is empty, and the condition b) is vacuous. \vskip 2mm Finally, we consider the largest subspace $$ {\cal A}^{\{{I_m}\}}_{{\rm G}, S} \subset {\cal A}_{{\rm G}, S} $$ on which the potential ${\cal W}_{\{{I_m}\}}$ is regular. This condition is vacuous at punctures, and boils down to the $\bullet$-condition from Definition \ref{bulletdef} at boundary intervals of $\ast S$. So if $I_m = \emptyset$ at every special point $m$, then ${\cal A}^{\{{I_m}\}}_{{\rm G}, S} = {\cal A}_{{\rm G}, S}$. \begin{conjecture} $({\cal A}^{\{{I_m}\}}_{{\rm G}, S}, {\cal W}_{\{{I_m}\}})$ is dual to ${\cal P}^{\{{I_m}\}}_{{\rm G}^L, \ast S}$. \end{conjecture} Let us now formulate what the Duality Conjecture tells us about canonical bases for the most interesting moduli space ${\rm Loc}_{{\rm G}^L, S}$, leaving similar formulations in other cases as a straightforward exercise. } \paragraph{Duality Conjecture for the space ${\rm Loc}_{{\rm G}^L, \ast S}$.} The group $\Gamma_S$ acts on the set ${\cal A}_{{\rm G}, S}^+({\mathbb Z}^t)$, and on the space ${\cal O}({\rm Loc}_{{\rm G^L} , \ast S})$ of regular functions on ${\rm Loc}_{{\rm G}^L , \ast S}$. \begin{conjecture} \label{cbcon} There is a canonical basis in the space ${\cal O}({\rm Loc}_{{\rm G}^L, \ast S})$ parametrized by the set ${\cal A}_{{\rm G}, S}^+({\mathbb Z}^t)$. This parametrization is $\Gamma_S$-equivariant.
\end{conjecture} \vskip 2mm {\bf Example.} If $S$ is a disc $D_n$ with $n$ special points on the boundary, then $\Gamma_{D_n}={\mathbb Z}/n{\mathbb Z}$. Theorem \ref{11.18.11.1} provides a $\Gamma_{D_n}$-equivariant canonical basis. Thus Conjecture \ref{cbcon} holds in this case. \vskip 2mm If ${\rm G}={\rm SL}_2$ (or ${\rm G}={\rm PGL}_2$), then \cite{FG1} provides a concrete construction of the $\Gamma_S$-equivariant parametrization, using laminations. The following Theorem shows that the set ${\cal A}_{{\rm G}, S}^+({\mathbb Z}^t)$ is of the right size. \begin{theorem} \label{cbcont} Given an ideal triangulation $T$ of a decorated surface $S$, there is a linear basis in ${\cal O}({\rm Loc}_{{\rm G}^L, \ast S})$ parametrized by the set ${\cal A}_{{\rm G}, S}^+({\mathbb Z}^t)$. \end{theorem} {\bf Remark.} The parametrization depends on the choice of the ideal triangulation. In particular, it is not $\Gamma_{S}$-equivariant. \begin{proof} The graph $\Gamma$ dual to the triangulation $T$ is a ribbon trivalent graph homotopy equivalent to $S$. An {\it end vertex} of $\Gamma$ is a univalent vertex of the graph. It corresponds to a boundary interval of $\widehat\partial S$. Let ${\rm Loc}_{{\rm G}^L, \Gamma}$ be the moduli space of pairs $({\cal L}, \gamma)$, where ${\cal L}$ is a ${\rm G}^L$-local system on $\Gamma$, and $\gamma$ is a flat section of the restriction of the local system ${\cal L}_{\cal A}$ to the end vertices of $\Gamma$. Choose an orientation of the edges of $\Gamma$. Let $V(\Gamma)$ and $E(\Gamma)$ be the sets of vertices and edges of $\Gamma$. Pick an edge $E = (v_1, v_2)$ of $\Gamma$, oriented from $v_1$ to $v_2$. Given a function $\lambda: E(\Gamma)\longrightarrow {\rm P}^+, $ we assign irreducible ${\rm G}^L$-modules to the two flags of $E$, denoted $V_{(v, E)}$: $$ V_{(v_1, E)}:= V_{\lambda(E)}, ~~~V_{(v_2, E)}:= V_{-w_0(\lambda(E))}.
$$ According to \cite[Section 12.5, (12.30)]{FG1}, there is a canonical isomorphism \begin{equation} \label{21} {\cal O}({\rm Loc}_{{\rm G}^L, \Gamma}) = \bigoplus_{\{\lambda: E(\Gamma)\longrightarrow P^+\}} \bigotimes_{v\in V(\Gamma)}\Bigl(\bigotimes_{(v, E)}V_{(v, E)}\Bigr)^{{\rm G}^L}. \end{equation} The second tensor product is over all flags incident to a given vertex $v$ of $\Gamma$. Applying Theorem \ref{11.18.11.1}, which parametrizes a basis in the ${\rm G}^L$-invariants of the tensor product at each vertex of $\Gamma$, it follows that ${\cal O}({\rm Loc}_{{\rm G}^L, \Gamma}) $ admits a linear basis parametrized by ${\cal A}_{{\rm G}, S}^+({\mathbb Z}^t)$. Note that the central extension \eqref{cen.ext} is split. Following the remark after Definition \ref{moduliMGS}, the moduli space ${\rm Loc}_{{\rm G}^L, S}$ is isomorphic to ${\rm Loc}_{{\rm G}^L, \Gamma}.$ The Theorem is proved. \end{proof} \subsection{Canonical basis in the space of functions on ${\rm Loc}_{{\rm SL}_2, S}$}\label{sec10.3n} Given any decorated surface $S$, there is a generalisation of integral laminations on $S$. \begin{definition} \label{9.19.13.3} Let $S$ be a decorated surface. An integral lamination $l$ on $S$ is a formal sum \begin{equation} \label{laml} l = \sum_i n_i[\alpha_i] + \sum_j m_j [\beta_j], ~~~~n_i, m_j \in {\mathbb Z}_{>0}, \end{equation} where $\{\alpha_i\}$ is a collection of simple nonisotopic loops, $\{\beta_j\}$ is a collection of simple nonisotopic intervals ending inside of boundary intervals on $\partial S - \{s_1, ..., s_n\}$, such that the curves do not intersect, considered modulo isotopy. The set of integral laminations on $S$ is denoted by ${\cal L}_{\mathbb Z}({S})$.
\end{definition} \begin{figure}[ht] \epsfxsize130pt \centerline{\epsfbox{fish1.eps}} \caption{An integral lamination on a surface with two holes, with $2+3$ special points.} \label{fish1} \end{figure} Let ${\rm Mon}_{\alpha}({\cal L}, \alpha)$ be the monodromy of a twisted ${\rm SL}_2$-local system $({\cal L}, \alpha)$ over a loop $\alpha$ on $S$. Let us show that a simple path $\beta$ on $S$ connecting two points $x$ and $y$ on $\widehat \partial S$ gives rise to a regular function $\Delta_\beta$ on ${\rm Loc}_{{\rm SL}_2, S}$. Let $({\cal L}, \alpha)$ be a decorated ${\rm SL}_2$-local system on $S$. The associated flat bundle ${\cal L}_{\cal A}$ is a two dimensional flat vector bundle without zero section. Let $v_x$ and $v_y$ be the tangent vectors to $\partial S$ at the points $x,y$. The decoration $\alpha$ at $x$ and $y$ provides vectors $l_x$ and $l_y$ in the fibers of ${\cal L}_{\cal A}$ over $v_x$ and $v_y$. The set $S_\beta$ of non-zero tangent vectors to $\beta$ is homotopy equivalent to a circle. Let us connect $v_x$ and $v_y$ by a path $p$ in $S_\beta$, and transform the vector $l_x$ at $v_x$ to the fiber of ${\cal L}_{\cal A}$ over $v_y$, getting there a vector $l_x'$. We claim that $\Delta(l_x', l_y)$ is independent of the choice of $p$. This crucially uses the fact that ${\cal L}$ is a twisted local system. So we arrive at a well defined number $\Delta(l_x', l_y)$ assigned to $({\cal L}, \alpha)$. We denote by $\Delta_\beta$ the obtained function on ${\rm Loc}_{{\rm SL}_2, S}$. Given an integral lamination $l$ on $S$ as in (\ref{laml}), we define a regular function $M_l$ on ${\rm Loc}_{{\rm SL}_2, S}$ by $$ M_l({\cal L}, \alpha):= \prod_i {\rm Tr} ({\rm Mon}^{n_i}_{\alpha_i}({\cal L}, \alpha)) \prod_j\Delta_{\beta_j}^{m_j}({\cal L}, \alpha). $$ \begin{theorem} \label{9.19.13.20} The functions $M_l$, $l\in {\cal L}_{\mathbb Z}({S})$, form a linear basis in the space ${\cal O}({\rm Loc}_{{\rm SL}_2, S})$.
\end{theorem} \begin{theorem} \label{9.19.13.21} For any decorated surface $S$, there is a canonical isomorphism $$ {\cal A}^+_{{\rm PGL}_2, S}({\mathbb Z}^t) = {\cal L}_{\mathbb Z}(S). $$ \end{theorem} Theorem \ref{9.19.13.21} is proved similarly to Theorem 12.1 in \cite{FG1}. Notice that ${\cal A}_{{\rm PGL}_2, S}$ is a positive space for the adjoint group ${\rm PGL}_2$; the potential ${\cal W}$ lives on this space and is a positive function there. Theorem \ref{9.19.13.20} is proved by using arguments similar to the proof of Theorem \ref{cbcont} and \cite[Proposition 12.2]{FG1}. Combining Theorem \ref{9.19.13.20} and Theorem \ref{9.19.13.21}, we arrive at a construction of the canonical basis predicted by Conjecture \ref{cbcon} for ${\rm G}={\rm PGL}_2$. \subsection{Surface affine Grassmannian and amalgamation} \label{sec11.2} \paragraph{The surface affine Grassmannian ${\rm Gr}_{{\rm G}, S}$.} Given a twisted right ${\rm G}({\cal K})$-local system ${\cal L}$ on $S$, there is the associated flat affine Grassmannian bundle $ {\cal L}_{\rm Gr}:= {\cal L}\times_{{\rm G}({\cal K})}{\rm Gr}. $ Similarly to Definition \ref{moduliMGS}, we define \begin{definition} \label{srfag} Let $S$ be a decorated surface. The moduli space ${\rm Gr}_{{\rm G}, S}$ parametrizes pairs $({\cal L}, \nu)$, where ${\cal L}$ is a twisted right ${\rm G}({\cal K})$-local system on $S$, and $\nu$ is a flat section of the restriction of ${\cal L}_{\rm Gr}$ to the punctured boundary $\widehat \partial (\ast S)$. \end{definition} Abusing terminology, the data $\nu$ is given by the lattices ${\rm L}_m$ at the marked points $m$ on $S$. The moduli space $\widetilde {\rm Gr}_{{\rm G}, S}$ parametrizes similar data $(\widetilde {\cal L}, \nu)$, where $\widetilde {\cal L}$ is a twisted ${\rm G}({\cal K})$-local system on $S$ trivialized at a given point of $S$. So one has $ {\rm Gr}_{{\rm G}, S} = {\rm G}\backslash \widetilde {\rm Gr}_{{\rm G}, S}. $ \vskip 2mm {\bf Example}.
Let $D_n$ be a disc with $n$ special points on the boundary. Then a choice of a special point provides isomorphisms $$ {\rm Gr}_{{\rm G}, D_n} = {\rm Conf}_n({\rm Gr}), ~~~ \widetilde {\rm Gr}_{{\rm G}, D_n} = {\rm Gr}^n. $$ \paragraph{Cutting and amalgamating decorated surfaces.} Let ${\rm I}$ be an ideal edge on a decorated surface $S$, i.e. a path connecting two marked points. Cutting $S$ along the edge ${\rm I}$ we get a decorated surface $S^*$. Denote by ${\rm I}'$ and ${\rm I}''$ the boundary intervals on $S^*$ obtained by cutting along ${\rm I}$. Conversely, gluing boundary intervals ${\rm I}'$ and ${\rm I}''$ on a decorated surface $S^*$, we get a new decorated surface $S$. We assume that the intervals ${\rm I}'$ and ${\rm I}''$ on $S^*$ are oriented by the orientation of the surface, and the gluing preserves the orientations. More generally, let $S$ be a decorated surface obtained from decorated surfaces $S_1, ..., S_n$ by gluing pairs $\{{\rm I}'_1, {\rm I}''_1\}$, ..., $\{{\rm I}'_m, {\rm I}''_m\}$ of oriented boundary intervals. We say that $S$ is the {\it amalgamation} of decorated surfaces $S_1, ..., S_n$, and use the notation $S = S_1 \ast \ldots \ast S_n$. Abusing notation, we do not specify the pairs $\{{\rm I}'_1, {\rm I}''_1\}, ..., \{{\rm I}'_m, {\rm I}''_m\}$. \paragraph{Amalgamating surface affine Grassmannians.} There is a moduli space ${\rm Gr}_{{\rm G}, {\rm I}}$ related to an oriented closed interval ${\rm I}$, so that there is a canonical isomorphism of stacks $$ {\rm Gr}_{{\rm G}, {\rm I}} = {\rm Conf}_2({\rm Gr}). $$ \begin{definition} \label{6.9.12.10} Let ${\rm I}'$, ${\rm I}''$ be boundary intervals on a decorated surface $S^*$, perhaps disconnected.
The {\rm amalgamation stack} ${\rm Gr}_{{\rm G}, S^*}({\rm I}'\ast {\rm I}'')$ parametrises triples $({\cal L}, \gamma, g)$, where $({\cal L}, \gamma)$ is the data parametrised by ${\rm Gr}_{{\rm G}, S^*}$, and $g$ is a {\rm gluing data}, given by an equivalence of stacks \begin{equation} \label{6.9.12.101a} g: {\rm Gr}_{{\rm G}, {\rm I}'} \stackrel{\sim}{\longrightarrow} {\rm Gr}_{{\rm G}, {\rm I}''}. \end{equation} \end{definition} This immediately implies that there is a canonical equivalence of stacks: \begin{equation} \label{isostacks} {\rm Gr}_{{\rm G}, S} \stackrel{\sim}{\longrightarrow} {\rm Gr}_{{\rm G}, S^*}({\rm I}'\ast {\rm I}''). \end{equation} Given decorated surfaces $S_1, ..., S_n$ and a collection $\{{\rm I}'_1, {\rm I}''_1\}$, ..., $\{{\rm I}'_m, {\rm I}''_m\}$ of pairs of boundary intervals, generalising the construction from Definition \ref{6.9.12.10}, we get the amalgamation stack $$ {\rm Gr}_{{\rm G}, S_1\ast ...\ast S_n} = {\rm Gr}_{{\rm G}, S_1\ast ...\ast S_n}({\rm I}'_1\ast {\rm I}''_1, \ldots , {\rm I}'_m\ast {\rm I}''_m). $$ Applying equivalences (\ref{isostacks}), we get \begin{lemma} \label{6.9.12.100} There is a canonical equivalence of stacks: \begin{equation} \label{isostacks2} {\rm Gr}_{{\rm G}, S} \stackrel{\sim}{\longrightarrow} {\rm Gr}_{{\rm G}, S_1\ast ...\ast S_n}({\rm I}'_1\ast {\rm I}''_1, \ldots , {\rm I}'_m\ast {\rm I}''_m). \end{equation} \end{lemma} Let $T$ be an ideal triangulation of a decorated surface $S$. Let $t_1, ..., t_n$ be the triangles of the triangulation. Abusing notation, denote by $t_i$ the decorated surface given by the triangle $t_i$, with the special points given by the vertices. Denote by ${\rm I}'_i$ and ${\rm I}''_i$ the pair of edges obtained by cutting an edge ${\rm I}_i$ of the triangulation $T$, $i=1, ..., m$.
Then one has an isomorphism of stacks \begin{equation} \label{isostacks3} {\rm Gr}_{{\rm G}, S} = {\rm Gr}_{{\rm G}, t_1\ast ...\ast t_n}({\rm I}'_1\ast {\rm I}''_1, \ldots , {\rm I}'_m\ast {\rm I}''_m). \end{equation} \subsection{Top components of the surface affine Grassmannian} \label{sec11.3} \subsubsection{Regularised dimensions} \label{sec11.3.1} Recall that if a finite dimensional group ${\rm A}$ acts on a finite dimensional variety $X$, we define the dimension of the stack $X/{\rm A}$ by $$ {\rm dim}~X/{\rm A}:= {\rm dim}~X - {\rm dim}~{\rm A}. $$ Our goal is to generalise this definition to the case when $X$ and ${\rm A}$ could be infinite dimensional. \paragraph{Dimension torsors ${\bf t}^n$.} Let us first define a rank one ${\mathbb Z}$-torsor ${\bf t}$. The kernel ${\Bbb N}$ of the evaluation map ${\rm G}({\cal O}) \to {\rm G}({\mathbb C})$ is a prounipotent algebraic group over ${\mathbb C}$. Let $N$ be a finite codimension normal subgroup of ${\Bbb N}$. We assign to each such $N$ a copy ${\mathbb Z}_{(N)}$ of ${\mathbb Z}$, and for each pair $N_1 \subset N_2$ such that $N_2/N_1$ is finite dimensional, an isomorphism of ${\mathbb Z}$-torsors \begin{equation} \label{in} i_{N_1, N_2}: {\mathbb Z}_{(N_1)} \longrightarrow {\mathbb Z}_{(N_2)}, ~~~ x \longmapsto x + {\rm dim}~N_2/N_1. \end{equation} \begin{definition} A ${\mathbb Z}$-torsor ${\bf t}$ is given by the collection of ${\mathbb Z}$-torsors ${\mathbb Z}_{(N)}$ and isomorphisms $i_{N_1, N_2}$. We set ${\bf t}^n:= {\bf t}^{\otimes n}$ for any $n \in {\mathbb Z}$. \end{definition} In particular, ${\bf t}^0={\mathbb Z}$. To define an element of ${\bf t}^n$ means to exhibit a collection of integers $d_N$ assigned to the finite codimension subgroups $N$ of ${\Bbb N}$ related by isomorphisms (\ref{in}). \vskip 2mm {\bf Example}.
There is an element ${\bf dim}~{\rm G}({\cal O}) \in {\bf t}$, given by an assignment $$ {\bf dim}~{\rm G}({\cal O}):= \{N \longmapsto {\rm dim}~{\rm G}({\cal O})/N\in {\mathbb Z}_{(N)}\}\in {\bf t}. $$ More generally, there is an element $$ n~{\bf dim}~{\rm G}({\cal O}) := \{N \longmapsto {\rm dim}~({\rm G}({\cal O})/N)^n\in {\mathbb Z}_{(N)}\} \in {\bf t}^n. $$ For example, the stack $\ast/{\rm G}({\cal O})^n$, where $\ast={\rm Spec}({\mathbb C})$ is the point, has dimension $$ {\bf dim}~\ast/{\rm G}({\cal O})^n = - n~{\bf dim}~{\rm G}({\cal O}) \in {\bf t}^{-n}. $$ \vskip 2mm If $X$ and $Y$ have dimensions ${\bf dim}~X \in {\bf t}^n$ and ${\bf dim}~Y \in {\bf t}^m$, then ${\bf dim}~X\times Y \in {\bf t}^{n+m}$. \paragraph{Dimension torsors ${\bf t}_{\rm A}^n$.} We generalise this construction by replacing the group ${\rm G}({\cal O})$ by a pro-algebraic group ${\rm A}$, which has a finite codimension prounipotent normal subgroup.\footnote{Taking the quotient by a unipotent group does not affect the category of equivariant sheaves. This is why we require the prounipotence condition here.} Then there are the dimension torsor ${\bf t}_{\rm A}$, its tensor powers ${\bf t}_{\rm A}^n$, $n\in {\mathbb Z}$, and an element ${\bf dim}~{\rm A}\in {\bf t}_{\rm A}$. One has ${\bf t}_{{\rm A}^n}= {\bf t}^n_{{\rm A}}$. Moreover, $$ n~{\bf dim}~{\rm A} \in {\bf t}_{\rm A}^n, ~~~~{\bf t}_{\rm A}^n= \{m+ n~{\bf dim}~{\rm A}\}, m\in {\mathbb Z}. $$ \paragraph{Regularised dimension.} Given such a group ${\rm A}$, we can define the dimension of a stack ${\cal X}$ under the following assumptions. \begin{enumerate} \item There is a finite codimension prounipotent subgroup ${\rm N} \subset {\rm A}$ such that $$ {\rm N}^n ~~\mbox{acts freely on ${\cal X}$.} $$ \item There is a finite dimensional stack ${\cal Y}$ and an action of the group ${\rm A}^m$ on ${\cal Y}$ such that \begin{equation} \label{canYX} {\cal Y}/{\rm A}^m = {\cal X}/{\rm N}^n. 
\end{equation} \item There exists a finite codimension normal prounipotent subgroup ${\rm M} \subset {\rm A}$ such that the action of ${\rm A}^m$ on ${\cal Y}$ restricts to the trivial action of the subgroup ${\rm M}^m$ on ${\cal Y}$. \end{enumerate} The last condition implies that we have a finite dimensional stack ${\cal Y}/({\rm A}/{\rm M}^m)$. The stack ${\cal Y}/{\rm A}^m$ is the quotient of the stack ${\cal Y}/({\rm A}/{\rm M}^m)$ by the trivial action of the group ${\rm M}^m$. In this case we define an element of the torsor ${\bf t}_{\rm A}^{n-m}$ by the assignment \begin{equation} \label{9.30.12.1000} ({\rm N}, {\rm M})\longmapsto {\rm dim}({\cal Y}/{\rm A}^m) + {\bf dim}~({\rm N}^n):= (n- m)~ {\bf dim}~{\rm A} +{\rm dim}~{\cal Y}- n ~ {\rm dim}({\rm A}/{\rm N})\in {\bf t}_{\rm A}^{n-m}. \end{equation} \begin{definition} \label{regddef} Assuming 1) -- 3), the assignment (\ref{9.30.12.1000}) defines the regularised dimension $$ {\bf dim}~{\cal X} \in {\bf t}_{\rm A}^{n-m}. $$ \end{definition} {\bf Remark}. Often an infinite dimensional stack ${\cal X}$ does not have a canonical presentation (\ref{canYX}), but rather a collection of such presentations. For instance, such a presentation of the stack ${\cal M}_l^\circ$ defined below depends on a choice of an ideal triangulation $T$ of $S$. Then we need to prove that the regularised dimension is independent of the choices. \subsubsection{Top components of the stack ${\rm Gr}_{{\rm G}, S}$} \label{sec11.3.2} Suppose that a decorated surface $ S$ is an amalgamation of decorated surfaces: \begin{equation} \label{6.9.12.1} S = S_1 \ast \ldots \ast S_n.
\end{equation} \begin{definition} Given an amalgamation pattern (\ref{6.9.12.1}), define the amalgamation $$ {\cal A}_{{\rm G}, S_1}({\mathbb Z}^t)\ast \ldots \ast {\cal A}_{{\rm G}, S_n}({\mathbb Z}^t):= \{(l_1, ..., l_n) \in {\cal A}_{{\rm G}, S_1}({\mathbb Z}^t)\times \ldots \times {\cal A}_{{\rm G}, S_n}({\mathbb Z}^t) ~~|~~ \mbox{(\ref{7.30.12.1}) holds} \}: $$ \begin{equation} \label{7.30.12.1} \mbox{$\pi^t_{{\rm I}'_k}(l_i) = \pi^t_{{\rm I}''_k}(l_j)$ for any boundary intervals ${\rm I}'_k\subset S_i$ and ${\rm I}''_k\subset S_j$ glued in $ S$}. \end{equation} \end{definition} \begin{lemma} \label{9.18.13.1} Given an amalgamation pattern (\ref{6.9.12.1}), there are canonical isomorphisms of sets $$ {\cal A}_{{\rm G}, S}({\mathbb Z}^t) = {\cal A}_{{\rm G}, S_1}({\mathbb Z}^t)\ast \ldots \ast {\cal A}_{{\rm G}, S_n}({\mathbb Z}^t). $$ $$ {\cal A}^+_{{\rm G}, S}({\mathbb Z}^t) = {\cal A}^+_{{\rm G}, S_1}({\mathbb Z}^t)\ast \ldots \ast {\cal A}^+_{{\rm G}, S_n}({\mathbb Z}^t). $$ \end{lemma} In this case we say that $l$ is presented as an amalgamation, and write $l=l_1 \ast \ldots \ast l_n$. Let us pick an ideal triangulation $T$ of $S$, and present $S$ as an amalgamation of the triangles: \begin{equation} \label{6.9.12.1a} S = t_1 \ast \ldots \ast t_n. \end{equation} By Lemma \ref{9.18.13.1}, any $l\in {\cal A}_{{\rm G}, S}^+({\mathbb Z}^t)$ is uniquely presented as an amalgamation \begin{equation} \label{6.9.12.1b} l = l_1 \ast \ldots \ast l_n, ~~~~l_i\in {\cal A}_{{\rm G}, t_i}^+({\mathbb Z}^t). \end{equation} Recall that given a polygon $D_n$, there are cycles $$ {\cal M}^\circ_l:= \kappa({\cal C}_l^\circ)\subset {\rm Gr}_{G, D_n}, ~~~~ l\in {\cal A}^+_{{\rm G}, D_n}({\mathbb Z}^t).
$$ \begin{definition} Given an ideal triangulation $T$ of $S$ and an $l\in {\cal A}_{{\rm G}, S}^+({\mathbb Z}^t)$ we set, using amalgamations (\ref{6.9.12.1a}) and (\ref{6.9.12.1b}), $$ {\cal M}^\circ_{T, l} = {\cal M}^\circ_{t_1, l_1}\ast \ldots \ast {\cal M}^\circ_{t_n, l_n}, ~~~~ {\cal M}_{T, l}:= ~~\mbox{Zariski closure of} ~~{\cal M}^\circ_{T, l}. $$ \end{definition} Thanks to Lemma \ref{9.21.17.56h}, the restriction to the boundary intervals of $S$ leads to a map of sets $$ {\cal A}_{{\rm G}, S}^+({\mathbb Z}^t) \longrightarrow {{\rm P}^+}^{\{\mbox{boundary intervals of $S$}\}}. $$ It assigns to a point $l\in {\cal A}_{{\rm G}, S}^+({\mathbb Z}^t)$ a collection of dominant coweights $\lambda_{{\rm I}_1}, ..., \lambda_{{\rm I}_n}\in {\rm P}^+$ at the boundary intervals ${\rm I}_1, ..., {\rm I}_n$ of $ S$. For any decorated subsurface $i: S' \subset S$ there is a projection given by the restriction map for the surface affine Grassmannian: $ r_{\rm Gr}: {\rm Gr}_{{\rm G}, S} \longrightarrow {\rm Gr}_{{\rm G}, S'} . $ There are two canonical projections: \begin{equation} \label{9.17.13.3} \begin{array}{ccccccc} {\cal A}^+_{{\rm G}, S}({\mathbb Z}^t)& &&& & {\rm Gr}_{{\rm G}, S} \\ &&&&&&\\ r^t_{\cal A}\downarrow &&&& &\downarrow r_{\rm Gr}\\ &&&&&&\\ {\rm Conf}^+_{{\rm G}, S'}({\cal A})({\mathbb Z}^t)&&&&& {\rm Gr}_{{\rm G}, S'} \end{array} \end{equation} \begin{theorem} \label{6.8.12.1} Let $ S$ be a decorated surface. i) The stack ${\cal M}_{T, l}$ does not depend on the triangulation $T$. We denote it by ${\cal M}_l$. ii) Let $l\in {\cal A}_{{\rm G}, S}^+({\mathbb Z}^t)$. Let $\{{\rm I}_1, ..., {\rm I}_n\}$ be the set of boundary intervals of $ S$, and let $\lambda_{{\rm I}_1}, ..., \lambda_{{\rm I}_n}$ be the dominant coweights assigned to them by $l$. Then \begin{equation} \label{6.8.12.3} {\bf dim}~{\cal M}_l = \langle \rho, \lambda_{{\rm I}_1} + \ldots + \lambda_{{\rm I}_n}\rangle -\chi(S) ~{\bf dim}~{\rm G}({\cal O})\in {\bf t}^{-\chi(S)}.
\end{equation} iii) The stacks ${\cal M}_l$, $l\in {\cal A}^+_{{\rm G}, S}({\mathbb Z}^t)$, are top dimensional components of ${\rm Gr}_{{\rm G}, S}$. iv) The map $l \longmapsto {\cal M}_l$ provides a bijection $$ {\cal A}^+_{{\rm G}, S}({\mathbb Z}^t) \stackrel{\sim}{\longrightarrow} \{\mbox{top dimensional components of the stack ${\rm Gr}_{{\rm G}, S}$}\}. $$ This isomorphism commutes with the restriction to decorated subsurfaces of $S$. \end{theorem} \begin{proof} Let us first calculate the dimensions of the stacks ${\cal M}^\circ_{T, l}$, and show that they are given by formula (\ref{6.8.12.3}). We present first a heuristic dimension count, and then fill in the necessary details. \paragraph{Heuristic dimension count.} Let us present a decorated surface $ S$ as an amalgamation of a (possibly disconnected) decorated surface along a pair of boundary intervals ${\rm I}', {\rm I}''$, as in Definition \ref{6.9.12.10}. The space of isomorphisms $g$ from (\ref{6.9.12.101a}) is a disjoint union of ${\rm G}({\cal K})$-torsors parametrised by dominant coweights $\lambda$, since the latter parametrise ${\rm G}({\cal K})$-orbits on ${\rm Gr}\times {\rm Gr}$. Pick one of them. Let ${\rm L}'_0 \stackrel{{\lambda}}{\longrightarrow} {\rm L}'_1$ (respectively ${\rm L}_0'' \stackrel{\lambda}{\longrightarrow} {\rm L}_1''$) be a pair of lattices assigned to the vertices of the interval ${\rm I}'$ (respectively ${\rm I}''$). Then the gluing data is a map $ g: ({\rm L}'_0, {\rm L}'_1) \longrightarrow ({\rm L}''_0, {\rm L}''_1). $ Let ${\rm G}_{\lambda}$ be the subgroup stabilising the pair ${\rm L}'_0 \stackrel{\lambda}{\longrightarrow} {\rm L}'_1$. The space of gluings is a ${\rm G}_{\lambda}$-torsor. The group ${\rm G}_{\lambda}$ is a subgroup of codimension $2 \langle \rho, \lambda\rangle$ in ${\rm Aut}~{\rm L}_0 \stackrel{\sim}{=} {\rm G}({\cal O})$.
So $$ {\bf dim}~{\rm G}_{\lambda} = {\bf dim}~{\rm G}({\cal O}) - 2 \langle \rho, \lambda\rangle = {\bf dim}~{\rm G}({\cal O}) - {\bf dim}~{\rm Gr}_{{\lambda, \lambda^{\vee}}}. $$ Take the stack ${\cal M}^\circ_{t, l}$ assigned to a triangle $t$ and a point $l\in {\rm Conf}_3^+({\cal A})({\mathbb Z}^t)$. Let $\lambda_1, \lambda_2, \lambda_3$ be the dominant coweights assigned to the sides of the triangle by $l$. Then ${\cal M}^\circ_{t, l}$ is an open part of a component of the stack ${\rm Gr}_{\lambda_1, \lambda_2, \lambda_3}/{\rm G}({\cal O})$. Thus \begin{equation} \label{9.30.12.105} {\bf dim}~{\cal M}^\circ_{t, l} = \langle \rho, \lambda_1+ \lambda_2+ \lambda_3 \rangle - {\bf dim}~{\rm G}({\cal O}) \in {\bf t}^{-1}. \end{equation} Let us now calculate the dimension of the stack ${\cal M}^\circ_{T, l}$. Let $|{\cal T}|$ be the number of triangles, and ${\cal E}_{\rm int}$ (respectively ${\cal E}_{\rm ext}$) the set of the internal (respectively external) edges of the triangulation $T$. Then the dimension of the product of stacks assigned to the triangles is $$ \sum_{E\in {\cal E}_{\rm ext}} \langle \rho, \lambda_{E} \rangle + 2 \sum_{E\in {\cal E}_{\rm int}} \langle \rho, \lambda_{E} \rangle - |{\cal T}| ~{\bf dim}~{\rm G}({\cal O})\in {\bf t}^{-|{\cal T}|}. $$ Gluing two boundary intervals into an internal edge $E$, with the dominant coweight $\lambda_E$ associated to it, we have to add the dimension of the corresponding gluing data torsor, that is $$ {\bf dim}~{\rm G}({\cal O})- 2\langle \rho, \lambda_E\rangle\in {\bf t}. $$ So, gluing all the intervals, we get $$ \sum_{E\in {\cal E}_{\rm ext}} \langle \rho, \lambda_{E} \rangle + (|{\cal E}_{\rm int}|- |{\cal T}|) ~{\bf dim}~{\rm G}({\cal O}) = \sum_{E\in {\cal E}_{\rm ext}} \langle \rho, \lambda_{E} \rangle -\chi(S)~{\bf dim}~{\rm G}({\cal O}) = (\ref{6.8.12.3}). $$ Notice that $|{\cal E}_{\rm int}|- |{\cal T}| =-\chi(S)$.
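This identity can be checked in the simplest case (an illustration added here, not part of the original argument): an ideal triangulation of an $n$-gon, i.e., a disc with $n$ special points, has $|{\cal T}|=n-2$ triangles and $|{\cal E}_{\rm int}|=n-3$ internal edges, so

```latex
|{\cal E}_{\rm int}| - |{\cal T}| = (n-3) - (n-2) = -1 = -\chi(\mbox{disc}).
```

More generally, since all vertices of an ideal triangulation lie on the boundary, the number of vertices equals $|{\cal E}_{\rm ext}|$, and $\chi(S) = V - |{\cal E}_{\rm ext}| - |{\cal E}_{\rm int}| + |{\cal T}| = |{\cal T}| - |{\cal E}_{\rm int}|$.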
Indeed, the triangles $t$ with external sides removed cover the surface $S$ minus the boundary, which has the same Euler characteristic as $S$. \paragraph{Rigorous dimension count.} For each of the triangles $t$ of the triangulation $T$ there are three dominant coweights $\underline {\lambda}(t):= \lambda_1(t), \lambda_2(t), \lambda_3(t)$ assigned by $l$ to the sides of $t$. Pick a vertex $v(t)$ of the triangle $t$. We present the stack ${\rm Gr}_{{\rm G}, t}$ as a quotient of the convolution variety \begin{equation} \label{9.30.12.1a} {\rm Gr}_{{\rm G}, t}= {\rm Gr}_{\underline {\lambda}(t)}/{\rm G}({\cal O}). \end{equation} Namely, choose the lattice ${\rm L}_{v(t)}$ at the vertex $v(t)$ to be the standard lattice ${\rm L}_{v(t)} = {\rm G}({\cal O})$. There exists a finite codimension normal prounipotent subgroup $ N_{t, l} \subset {\rm G}({\cal O}) $ acting trivially on ${\rm Gr}_{\underline{\lambda}(t)}$. It depends on the choice of coweights $\underline{\lambda}(t)$, and, via them, on the choice of $t$ and $l$. We assign to each finite codimension normal subgroup $N'_{t, l} \subset N_{t, l}$ a finite dimensional stack $$ \frac{{\rm Gr}_{\underline {\lambda}(t)}}{{\rm G}({\cal O})/N'_{t, l}}. $$ Its dimension is $ \langle \rho, \lambda_1+ \lambda_2+ \lambda_3 \rangle - {\rm dim}~{\rm G}({\cal O})/N'_{t, l}. $ This just means that we have formula (\ref{9.30.12.105}). \vskip 3mm There is a canonical surjective map of stacks \begin{equation} \label{9.30.12.2000} {\rm Gr}_{G, S} \longrightarrow \prod_{t\in T}{\rm Gr}_{G, t} = \prod_{t\in T}{\rm Gr}_{\underline {\lambda}(t)}/{\rm G}({\cal O}). \end{equation} Its fibers are torsors over the product over the set ${\cal E}_{\rm int}$ of internal edges $E$ of $T$ of certain groups $G_{\lambda(E)}$ defined as follows. Let $\lambda(E)$ be the dominant coweight assigned to $E$ by $l$. Consider the pair $E', E''$ of edges of triangles glued into the edge $E$.
For each of them, there is a pair of lattices assigned to its vertices. We get two pairs of lattices: $$ ({\rm L}_{E'}^- \stackrel{\lambda(E)}{\longrightarrow} {\rm L}_{E'}^+) ~~~\mbox{and}~~~ ({\rm L}_{E''}^-\stackrel{\lambda(E)}{\longrightarrow} {\rm L}_{E''}^+). $$ Choose one of the edges, say $E'$. Set $ {\rm G}_{\lambda(E)}:= {\rm Aut}~ ({\rm L}_{E'}^- \stackrel{\lambda(E)}{\longrightarrow} {\rm L}_{E'}^+). $ Therefore we conclude that $$ \mbox{The fibers of the map (\ref{9.30.12.2000}) are torsors over the group} ~~ \prod_{E \in {\cal E}_{\rm int}}{\rm G}_{\lambda(E)}. $$ For each $E$, choose a finite codimension subgroup $ N_{\lambda(E)} \subset {\rm G}_{\lambda(E)}. $ Then we are in the situation discussed right before Definition \ref{regddef}, where $$ {\cal X} = {\cal M}^\circ_l, ~~~~{\rm A} = {{\rm G}}({\cal O}), ~~~~ N:= \cap_{E \in {\cal E}_{\rm int}}N_{\lambda(E)}, ~~~~M=\cap_{t}N'_{t, l}, ~~~~n=|{\cal E}_{\rm int}|, ~~~~ m=|{\cal T}|. $$ So we get the expected formula for the regularised dimension of ${\cal M}_{T, l}^\circ$. The resulting regularised dimension does not depend on the choice of ideal triangulation $T$ -- the triangulation does not enter the answer. Alternatively, one can see this as follows. Any two ideal triangulations of $S$ are related by a sequence of flips. Let $T \longrightarrow T'$ be a flip at an edge $E$. Let $R_E$ be the unique rectangle of the triangulation $T$ with the diagonal $E$. Consider the restriction map $\pi: {\rm Gr}_{G, S} \longrightarrow {\rm Gr}_{{\rm G}, R_E}$. So one can fiber ${\cal M}^\circ_{l}$ over the component ${\cal M}^\circ_{\pi^t(l)}$. The dimension of the latter does not depend on the choice of the triangulation of the rectangle. A similar argument with a flip of triangulation proves i). Combining with the formula for the regularised dimension of ${\cal M}_{T, l}^\circ$ we get ii). iii), iv). Present $ S$ as an amalgamation of the triangles of an ideal triangulation.
It is known that the cycles ${\cal M}_l$ are the top dimensional components of the convolution variety, and thus of the stack ${\rm Gr}_{G, t}$ assigned to the triangle. It remains to use Lemma \ref{6.9.12.100}. \end{proof}
\section{Introduction} Diagrammatic Monte Carlo methods are one of the most successful approaches to the sign problem of quantum field theories at nonzero chemical potential, e.g., see \cite{Chandrasekharan:2008gp, Gattringer:2014nxa,Bruckmann:2015sua}. In lattice QCD at strong coupling this idea is rather old \cite{Rossi:1984cv,Karsch:1988zx}: expanding the quark weight and integrating out the gauge links\footnote{Integrating out the quarks instead gives the determinant of the non-hermitian Dirac operator and, therefore, a complex weight.} leads to certain building blocks along the lattice bonds. They are of mesonic and baryonic nature and (together with fermion saturation at every site) facilitate an intuitive diagrammatic representation of the QCD path integral. Nonetheless, positivity of the so-obtained weight is not guaranteed and terms of opposite sign indeed appear, even at zero chemical potential. This has hampered further use of this approach in simulations of realistic QCD even though worm algorithms have proven capable of simulating such constrained systems (for recent attempts see \cite{Fromm:2010lga,deForcrand:2014tha}). One may view the problem of this approach as a fermionic sign problem. For staggered fermions there are clearly four sources of negative signs: (i) the relative sign between the hopping terms in the Dirac operator which is of first order in derivatives, (ii) antiperiodic boundary conditions in Euclidean time, (iii) the Grassmann nature of the quark fields in the path integral, (iv) the staggered signs. All of them would be absent if quarks were Lorentz scalars, and one may expect scalar QCD (sQCD) to be free of the sign problem. Note, however, that the gauge links are complex and SU(3) group integrals are not necessarily positive \cite{Creutz:1984mg}. To analyse the sign problem in sQCD is the main motivation of this paper.
Gauge theories with scalar matter might be relevant beyond the Standard Model; here we compare sQCD to QCD at nonzero chemical potential. One of the main differences is the flavor-antisymmetric nature of the baryons of sQCD; as a consequence at least three scalar quark flavors are necessary to generate a dependence on the chemical potential (see Sec.~\ref{sec_sign_flavor} below). For the first interesting case of three flavors we are able to prove that the path integral weight is positive, i.e., this representation solves the sign problem at nonzero chemical potential. Scalar quarks are not subject to the Pauli exclusion principle and thus the building blocks of sQCD diagrams come with less constrained occupation numbers, e.g., baryon worldlines may intersect. This shall be of advantage for numerical simulations as well as for treating higher flavor numbers, which we conjecture to be free of the sign problem as well. \section{Original action and derivation of building blocks} We treat sQCD with $N_f$ massive flavors coupled to the same chemical potential $\mu$ in the strong coupling limit, i.e., without gauge (plaquette) action. The corresponding Euclidean lattice action is the (negative) discretized SU(3) gauge-covariant Laplacian, \begin{align} S = \sum_{x,f}\Big( &- \sum_\nu\big( e^{\mu\delta_{\nu,0}}\phi^f(x)^{\dagger} U_\nu(x) \phi^f(x+\hat{\nu})\notag\\ &\qquad +e^{-\mu\delta_{\nu,0}}\phi^f(x+\hat{\nu})^{\dagger} U_\nu^\dagger(x) \phi^f(x) \big)\notag\\ &+(2d+m^2) |\phi^{f}(x)|^2 \Big)\,, \end{align} where $x$ denotes the lattice sites, $\nu=0,\ldots,d-1$ is the direction index ($\nu=0$ is the temporal direction in which $\mu$ acts), $\hat{\nu}$ its unit vector and $f=1,\ldots,N_f$ is the flavor index; the lattice spacing has been set to unity. At real $\mu$ the action is not real, since the second line is not the complex conjugate of the first line, which is the case at $\mu=0$ (or imaginary $\mu$).
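To make the structure of this action concrete, the following is a minimal numerical sketch (our illustration only, not the simulation code of this work) of the one-flavor version on a purely temporal chain with periodic boundary conditions; it checks that $S$ is real at $\mu=0$, where the two hopping terms are complex conjugates of each other, and acquires an imaginary part at real $\mu\neq 0$:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_su3():
    # approximately Haar-distributed SU(3) matrix via QR of a complex Gaussian
    z = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    q, r = np.linalg.qr(z)
    q = q @ np.diag(np.diag(r) / np.abs(np.diag(r)))   # fix the phase convention
    return q / np.linalg.det(q) ** (1.0 / 3.0)          # project U(3) -> SU(3)

def action(phi, U, mu, m2, d=1):
    """One-flavor strong-coupling sQCD action on a temporal chain of N sites.

    phi: (N, 3) complex scalars, U: (N, 3, 3) links on the bonds (x, x+1)."""
    N = phi.shape[0]
    S = 0.0 + 0.0j
    for x in range(N):
        xp = (x + 1) % N  # periodic boundary
        hop = np.exp(mu) * (phi[x].conj() @ U[x] @ phi[xp])
        hop += np.exp(-mu) * (phi[xp].conj() @ U[x].conj().T @ phi[x])
        S += -hop + (2 * d + m2) * np.vdot(phi[x], phi[x]).real
    return S

N = 4
phi = rng.normal(size=(N, 3)) + 1j * rng.normal(size=(N, 3))
U = np.array([random_su3() for _ in range(N)])

S_mu0 = action(phi, U, mu=0.0, m2=1.0)   # real: the two hopping terms are conjugate
S_mu = action(phi, U, mu=0.5, m2=1.0)    # generically complex at real mu > 0
```

The pseudo-random links and fields merely stand in for a configuration; only the reality properties of $S$ are probed here.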
In \cite{Bruckmann:2016fuj} we have presented numerical evidence that reweighting in the conventional approach of integrating out the quarks to an inverse determinant suffers from a sign problem in the sense of an oscillating phase. In particular, this gives rise to a reweighting factor, $r\sim e^{-V\Delta f(\mu)}$, that decays with the volume. The diagrammatic representation of this system emerges after integrating out all gauge links. To do so for a particular link, we collect the two terms in which $U_\nu(x)$ or $U_\nu^\dagger(x)$ appear and write the matter bilinears to which they couple as matrices \begin{align} \begin{split} \sum_f \phi^f(x+\hat{\nu})\phi^f(x)^\dagger& =:J_\nu(x)\,,\\ \sum_f \phi^f(x)\phi^f(x+\hat{\nu})^\dagger& =J^\dagger_\nu(x)\,, \end{split} \label{eq_def_J} \end{align} involving an outer product in color space. Now the gauge dependent terms in the action read \begin{align} -S[U_\nu(x)]= e^{\mu\delta_{\nu,0}}\,\mathrm{tr}\, J_\nu(x) U_\nu(x) +e^{-\mu\delta_{\nu,0}}\,\mathrm{tr}\, J^\dagger_\nu(x) U_\nu^\dagger(x). \end{align} Note that under local gauge transformations, under which the link becomes $\Omega(x)U_\nu(x)\Omega^\dagger(x+\hat{\nu})$, the matter matrix transforms in a complementary way: it becomes $\Omega(x+\hat{\nu})J_\nu(x)\Omega^\dagger(x)$, such that the traces in the action are gauge invariant\footnote{For fermionic quarks $J_\nu(x)$ and $J_\nu^\dagger(x)$ are commuting Grassmann bilinears, such that the presented analysis can be used for them as well, with the main difference being that powers of $J_\nu(x)$ and $J_\nu^\dagger(x)$ higher than $3N_f$ vanish due to the Grassmann nature.}. The integration over SU(3) group elements (with Haar measure) can be turned into a five-fold sum \cite{Eriksson:1980rq}\footnote{We have further expanded the two terms $\det m+\det m^\dagger$ in \cite{Eriksson:1980rq} separately, with powers $n$ and $\bar{n}$.
Note a typo in the definition of $Y$ in that reference.} \begin{align} &\int\limits_{SU(3)}\!\!\!\!dU\, e^{\,\alpha\, \mathrm{tr}\, JU+\alpha^{-1}\, \mathrm{tr}\, J^\dagger U^\dagger}\label{eq:int_group_one}\\ &=2\!\!\!\sum_{j,k,l,n,\bar n = 0}^{\infty}\, \frac{\alpha^{3(n-\bar{n})}}{g_{(1)}!g_{(2)}!}\, \frac{X^jY^kZ^l\Delta^{n}(\Delta^*)^{\bar n}} {j!k!l!n!\bar{n}!}\notag \end{align} over gauge invariants \begin{align}\begin{split} X&= \mathrm{tr}\,(J^\dagger J)\,,\: Y=\frac{1}{2} \big( X^2 - \mathrm{tr}\,[(J^\dagger J)^2] \big)\,,\: Z=\det(J^\dagger J)\,,\\ \Delta &= \det J\,,\:\: \Delta^*= \det J^\dagger=(\det J)^* \,, \end{split}\end{align} where \begin{align}\begin{split} g_{(1)}&= k+2l+n+\bar n + 1\,,\\ g_{(2)}&= j+2k+3l+n+\bar n +2 \end{split}\end{align} are positive integers. When using these formulas in sQCD, one has to reinsert indices ${}_\nu$ and arguments $(x)$ on both the fields $J^{(\dagger)}$ and consequently $\{X,\ldots,\Delta^*\}$ and on the integers (`dual variables') $\{j,\ldots,\bar{n}\}$. The fugacity factor $\alpha=e^{\,\mu\delta_{\nu,0}}$ is present when the bond is in the 0-direction; its exponent is the difference of powers of $\Delta$ and $\Delta^*$. Thus the partition function $\mathcal{Z}=\int \mathcal{D}\phi \mathcal{D}U\, e^{-S}$ can in the diagrammatic formulation be written as \begin{align} \label{eq:partition_function_dual} \mathcal{Z} =& \sum_{\{j,k,l,n,\bar n\}} \int \mathcal{D}\phi\, e^{-(m^2+2d)\sum_{x,f} |\phi^f(x)|^2}\\ &\prod_{x,\nu} \left( 2\frac{\alpha^{3(n-\bar{n})}}{g_{(1)}!g_{(2)}!}\, \frac{X^jY^kZ^l\Delta^{n}(\Delta^*)^{\bar n}} {j!k!l!n!\bar{n}!} \right)_{\nu}(x),\notag \end{align} where the sum goes over all admissible configurations of the variables $\{j,\ldots,\bar n\}$, to be specified more concretely in Sec.~\protect\ref{sec_sign_flavor}. When interpreting $J$ as the hopping of (all flavors of) quarks along a bond in the direction $\nu$, then $J^\dagger$ is the hopping of (all) antiquarks on the same bond.
Thus $X$ represents the hopping of `mesons' (with any pair of quark/antiquark flavors). Consistently, $X$ does not contribute a $\mu$-factor (the exponent of $\alpha$ does not contain $j$). $Y$ and $Z$ are of similar nature with two/three quarks and antiquarks hopping on a bond and no $\mu$-contribution. Therefore, we will call $X$, $Y$ and $Z$ \textit{mesonic building blocks}. \begin{figure}[b!] \center \includegraphics{figure1.pdf} \caption{Example of a diagram on a $6\times 6$ lattice (with periodic boundary conditions). Unoriented single and double lines denote bonds with unit occupation of the mesonic building blocks $X$ and $Y$, i.e., $j_\nu(x)=1$ and $k_\nu(x)=1$. Oriented bonds (due to the current conservation discussed in Sec.~\protect\ref{sec_sign_flavor}) stand for baryon building blocks $\Delta$ and $\Delta^*$: arrows upwards and to the right denote $n_\nu(x)=1$ (on one bond $n_\nu(x)=2$) and arrows downwards and to the left denote $\bar{n}_\nu(x)=1$. The baryon loop winds once (actually in both directions) and thus obtains a fugacity factor $e^{3\mu/T}$ in the weight.} \label{fig_big_example} \end{figure} In $\Delta$ and $\Delta^{*}$ three quarks or three antiquarks are hopping on a bond, respectively, which is why we will call them \textit{baryonic building blocks}. As expected, they contribute positive and negative multiples of the baryon chemical potential, $3\mu$, in the exponent. As is typical for bosonic systems, occupation numbers are unbounded from above and do not exclude each other. A configuration in this new representation can easily be determined by a list of all integers $\{j,\ldots,\bar{n}\}_\nu(x)$ on all bonds plus the values of the matter fields on all sites (since we have not integrated out the latter). 
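The building blocks on a single bond can be evaluated directly from the scalar fields; the following sketch (an illustration with random field values, not taken from any production code) computes $X$, $Y$, $Z$ and $\Delta$ for $N_f=3$:

```python
import numpy as np

rng = np.random.default_rng(1)
Nf = 3  # three flavors, three colors

# scalar fields at the two ends of a bond (x, x+nu): shape (flavor, color)
phi_x = rng.normal(size=(Nf, 3)) + 1j * rng.normal(size=(Nf, 3))
phi_xnu = rng.normal(size=(Nf, 3)) + 1j * rng.normal(size=(Nf, 3))

# J = sum_f phi(x+nu) phi(x)^dagger  (outer product in color space)
J = sum(np.outer(phi_xnu[f], phi_x[f].conj()) for f in range(Nf))

M = J.conj().T @ J                        # J^dagger J, positive semi-definite
X = np.trace(M).real                      # mesonic one-bond invariant
Y = 0.5 * (X**2 - np.trace(M @ M).real)   # two quark/antiquark pairs on the bond
Z = np.linalg.det(M).real                 # three pairs on the bond
Delta = np.linalg.det(J)                  # baryonic building block
```

Since $X$, $Y$ and $Z$ are the elementary symmetric functions of the non-negative eigenvalues of $J^\dagger J$, they are non-negative, and $Z=\det(J^\dagger J)=|\det J|^2=\Delta\Delta^*$.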
In a visualization of these numbers one uses building blocks very similar to those from the fermionic case, see Fig.~\ref{fig_big_example}: unoriented one-, two- and three-bonds for the occupation numbers of the mesons and directed bonds for the (anti)baryons (arrows are connected to current conservation to be derived in the next section). Since multiple occupation numbers per bond are harder to visualize, the example diagram shown in that figure mostly contains single occupation numbers. Note that sites without any occupied bond are admissible, too. As numbers, the mesons $X$, $Y$ and $Z$ are positive functions of the positive matrix $JJ^\dagger$. The factorials in Eq.~\eqref{eq:int_group_one} and the remaining Gaussian factors in Eq.~\eqref{eq:partition_function_dual} for $\phi$ are positive as well. Thus, a potential sign problem can only come from the (anti)baryons $\Delta^{(*)}$, as in fermionic QCD. One of the main features of the diagrammatic representation is that the chemical potential appearing through the fugacity $\alpha$ does not introduce signs\footnote{Interestingly, imaginary $\mu$'s, which do not induce a sign problem in the original formulation, do so in the diagrammatic representation.}, i.e., if the system has no sign problem at vanishing $\mu$ it does not develop a sign problem at nonzero $\mu$. This is in very close analogy to the defining energy representation of the grand canonical partition function. \section{Positivity of the weight depending on the number of flavors} \label{sec_sign_flavor} The objects potentially inducing a sign problem in the diagrammatic representation of sQCD are $\Delta^{(*)}=\det J^{(*)}$ and powers thereof. Importantly, the complex matrices $J$ are built out of outer products, see Eq.~\eqref{eq_def_J}, which will be analysed now. For $N_f=1$ obviously any row (or column) of $J$ is linearly dependent on any other row. 
Consequently, the determinant of $J$ vanishes and so do all $\Delta^{(*)}$'s, such that no dependence on $\mu$ can emerge (only $n\equiv \bar{n}\equiv 0$ contributes). Similarly, for $N_f=2$ at most two rows of $J$ can be linearly independent and its determinant vanishes again. We conclude that at strong coupling \textbf{sQCD develops a dependence on $\mu$ only for $\mathbf{N_f\geq 3}$}. The latter is the matrix size of $J$ and thus generalizes to the number of colors in gauge theories with larger gauge groups. For arbitrary $N_f$ the following formula is useful \begin{align} &\det_3 \Big(\sum_{f=1}^{N_f} \phi^f(x+\hat{\nu}) \phi^f(x)^\dagger\Big)\notag\\ &=\frac{1}{3!}\sum_{f_1,f_2,f_3=1}^{N_f} d_{f_1f_2f_3}(x+\hat{\nu})\,d^*_{f_1f_2f_3}(x)\notag\\ &=\frac{1}{3!}\:\sum_{\sigma} d_{\sigma_1\sigma_2\sigma_3}(x+\hat{\nu})\,d^*_{\sigma_1\sigma_2\sigma_3}(x)\, \label{eq_the_formula} \end{align} with determinants \begin{align} d_{f_1f_2f_3}(x) &:=\det_3\big(\phi^{f_1}(x)|\phi^{f_2}(x)|\phi^{f_3}(x)\big)\,, \end{align} where $|$ is used to separate three columns in a three-by-three matrix. $\sigma$ denotes choices of three flavors \begin{align} \sigma &:\{1,2,3\}\to\{1,\ldots, N_f\}\,. \end{align} This formula\footnote{Note that this formula is of the type `determinant of a sum is a sum of determinants' (!) which holds for the outer product structure in which we are interested here.} (and its obvious generalization to $N_c\neq 3$) can easily be shown through writing the determinant with Levi-Civita symbols. It makes manifest the antisymmetry of the flavor indices $\{f_1,f_2,f_3\}$ (or $\{\sigma_1,\sigma_2,\sigma_3\}$) in the determinant. In the language of sQCD this means antisymmetry of quark flavors which hop together in $\det J^{(\dagger)}=\Delta^{(*)}$ representing an (anti)baryon. It also confirms the discussion above, that this determinant vanishes for fewer than 3 flavors.
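Both the vanishing of $\det J$ for $N_f<3$ and the flavor expansion Eq.~\eqref{eq_the_formula} are easy to confirm numerically; a short sketch with random (hypothetical) field values follows. Since repeated flavors give $d_{f_1f_2f_3}=0$, the ordered sum with its $1/3!$ reduces to a sum over unordered flavor triples:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

def det_J(phi_xnu, phi_x):
    # det of J = sum_f phi(x+nu) phi(x)^dagger for Nf flavors, 3 colors
    J = sum(np.outer(a, b.conj()) for a, b in zip(phi_xnu, phi_x))
    return np.linalg.det(J)

def flavor_sum(phi_xnu, phi_x):
    # sum over unordered flavor triples of d_{f1f2f3}(x+nu) d*_{f1f2f3}(x)
    total = 0j
    for f1, f2, f3 in combinations(range(len(phi_x)), 3):
        d_xnu = np.linalg.det(np.column_stack((phi_xnu[f1], phi_xnu[f2], phi_xnu[f3])))
        d_x = np.linalg.det(np.column_stack((phi_x[f1], phi_x[f2], phi_x[f3])))
        total += d_xnu * d_x.conjugate()
    return total

dets = {}
for Nf in (1, 2, 3, 4):
    p = rng.normal(size=(Nf, 3)) + 1j * rng.normal(size=(Nf, 3))
    q = rng.normal(size=(Nf, 3)) + 1j * rng.normal(size=(Nf, 3))
    dets[Nf] = (det_J(p, q), flavor_sum(p, q))
```

For $N_f=1,2$ both sides vanish (up to rounding), while for $N_f\geq 3$ they agree and are generically nonzero.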
For $N_f=3$ Eq.~\eqref{eq_the_formula} has just one summand \begin{align} \label{eq:baryonic_weight_nf3} \Delta_\nu(x) =\det J_\nu(x) &=\det\big(\phi^1(x+\hat{\nu})\big|\phi^2(x+\hat{\nu})\big|\phi^3(x+\hat{\nu})\big)\notag\\ &\times \det\big(\phi^1(x)\big|\phi^2(x)\big|\phi^3(x)\big)^*\,, \end{align} where all the three quark flavors enter together just once. So far we have not performed the matter field integrations. The $\phi$-dependent path integral weight consists of a Gaussian term, which suppresses large absolute values of $\phi$, multiplied by the product in the second line of Eq.~\eqref{eq:partition_function_dual}. The latter part is a complicated function of $\phi$. One could leave the $\phi$-integrations to numerics, provided the admissible configurations have non-negative weights. However, there is one important feature of the $\phi$-integration which can easily be utilized; schematically \begin{align} \int_{\mathbb{C}} \!d\phi\,e^{-\#|\phi|^2} (\phi)^A (\phi^*)^B\sim \delta_{AB}\,, \label{eq_phi_int} \end{align} which comes from the integration over the phase\footnote{Phase integrations typically cause U(1) current conservation in diagrammatic approaches to bosonic systems, for fermionic systems this role is played by the saturation of Grassmann integrals by equal numbers of $\psi$'s and $\bar{\psi}$'s.} of $\phi$. For sQCD this formula means that only those terms contribute, for which the power of $\phi^f_a(x)$ matches the power of its complex conjugate $\phi^f_a(x)^*$. This has to hold for every flavor $f$, color $a$ and site $x$ separately, $A^f_a(x)\stackrel{!}{=}B^f_a(x)$. To derive an immediate consequence for the diagrams, consider the \textbf{coarser constraints} that occur after summing these constraints over all indices but the site, $\sum_{f,a}A^f_a(x)\stackrel{!}{=}\sum_{f,a}B^f_a(x)$ for all $x$. Mesonic contributions are functions of $J^\dagger J\sim \phi_a^f(x)\phi_b^g(x)^*$ and thus contribute equal integers to both sums. 
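The phase-integration rule Eq.~\eqref{eq_phi_int} itself can be illustrated by Monte Carlo (a standalone numerical aside, not part of the derivation): with the normalized weight $e^{-|\phi|^2}/\pi$ one has $\langle\phi^A(\phi^*)^B\rangle=\delta_{AB}\,A!$, so mismatched powers average to zero:

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples = 1_000_000

# complex Gaussian with density e^{-|phi|^2}/pi  (Re and Im ~ N(0, 1/2))
phi = (rng.normal(size=n_samples) + 1j * rng.normal(size=n_samples)) / np.sqrt(2)

def moment(A, B):
    # Monte Carlo estimate of <phi^A (phi*)^B>; exact value is delta_{AB} * A!
    return np.mean(phi**A * np.conj(phi)**B)

m_11 = moment(1, 1)   # exact value: 1
m_21 = moment(2, 1)   # exact value: 0 -- killed by the phase integration
m_22 = moment(2, 2)   # exact value: 2
```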
The baryons $\Delta^{(*)}$, on the other hand, are of third order in $\phi^f_a(x)^*$ and $\phi^f_a(x+\hat{\nu})$, where the two factors live on neighboring sites. A single (anti)baryon thus vanishes under the $\phi$-integration and needs to be accompanied by other (anti)baryons connecting to $x$ and $x+\hat{\nu}$. One easily obtains the constraints \begin{align} \sum_\nu \big[m_\nu(x)-m_\nu(x-\hat{\nu})\big]=0\,, \end{align} where \begin{align} m_\nu(x)=n_\nu(x)-\bar{n}_\nu(x)\,. \end{align} This is nothing but the discrete version of a \textbf{manifest current conservation} $\sum_\nu\partial_\nu m_\nu(x)=0$, namely for the net baryon current $(n-\bar{n})_\nu$ (as in fermionic QCD) from all flavors. Diagrammatically, baryon building blocks must come in \textbf{closed loops}. In Fig.~\ref{fig_big_example}, for instance, the baryon content can be viewed as one long and winding loop plus one plaquette loop touching each other at one bond. According to Eq.~\eqref{eq:int_group_one}, $3\mu$ couples to all $n-\bar{n}$'s in the $0$-direction, i.e., to $\sum_x m_0(x)$, which is just the conserved charge\footnote{Since $m_\nu$ is conserved one can replace $\sum_x m_0(x)$ by $N_0\sum_{\vec{x}} m_0(x_0,\vec{x})$ (for any $x_0$) which turns $3a\mu$ with $a$ the lattice spacing into the expected factor $3\mu a N_0=3\mu/T$ with $T$ the temperature.} of this current. Equivalently, $3\mu/T$ couples to the net winding number of baryon loops in the $0$-direction. 
Coming back to the sign problem at $N_f=3$, any baryonic factor $\det(\phi^1(x)\big|\phi^2(x)\big|\phi^3(x))$ has to be accompanied by just its complex conjugate from a neighboring baryonic hopping, cf.\ Eq.\ \eqref{eq:baryonic_weight_nf3}, such that one obtains a product of positive terms\footnote{One might argue that a negative sign occurs upon permuting the flavors under one of the determinants, but according to Eq.~\eqref{eq_the_formula} this would also permute the flavors at a neighboring site and thus keep a positive sign of the total weight.} $|\det(\phi^1(x)\big|\phi^2(x)\big|\phi^3(x))|^2$ (and powers thereof) for all sites $x$ on baryon loops. This \textbf{solves the sign problem at $\mathbf{N_f=3}$}. In the diagrammatic representation this system can therefore be simulated, presumably with a hybrid approach for the updates: for unconstrained variables such as $j_\nu(x),\,k_\nu(x),\,l_\nu(x), n_\nu(x)+\bar{n}_\nu(x)$ and $\phi^f(x)$ local updates can be used, while for the constrained variables $n_\nu(x)-\bar{n}_\nu(x)$ worm algorithms are promising. We close by discussing the technicalities faced at more than three flavors, say at $N_f=4$. The baryonic matching described above for $N_f=3$ does not work here: according to Eq.~\eqref{eq_the_formula} $d_{123}$ from one baryon factor multiplies not only $d_{123}^*$ from another baryon factor, but the sum $\# d_{123}+\# d_{124}+\# d_{134}+\# d_{234}$ multiplies the sum $\# d_{123}^*+\# d_{124}^*+\# d_{134}^*+\# d_{234}^*$ with the factors $\#$ all different (determined by the fields at two neighboring sites). Obviously there are mixed terms, for which no positivity argument applies. Indeed, the configuration which just contains one closed baryon loop has a complex weight generically. It seems necessary to explore \textbf{finer constraints} than above. For instance, the summand $d_{123}(x)d_{124}^*(x)$ has a `mismatch' in that it contains $\phi^3(x)$ and $\phi^4(x)^*$ once, but not their complex conjugates.
This summand thus vanishes under the $\phi^3(x)$- or $\phi^4(x)$-integration, cf.\ Eq.~\eqref{eq_phi_int}. In the absence of other occupied bonds connecting to it, each baryon loop thus contains only terms $|d_{f_1,f_2,f_3}|^2$ and, therefore, is positive. At first sight, mesons may change this positivity argument. A mesonic building block, say $X$, connecting to the baryon loop at site $x$ does contain the `missing' factor $\phi^3(x)^*\phi^4(x)$ (with arbitrary gauge indices) to make the summand $d_{123}d_{124}^*$ survive the $\phi$-integration. However, this summand in $X$ also carries $\phi^3(y)\phi^4(y)^*$ at a neighboring site $y$. The $\phi$-integration at $y$ thus gives zero. As a consequence, the baryon loop reduces to terms $|d_{f_1,f_2,f_3}|^2$ multiplying the positive mesonic weight. \begin{figure} \center \includegraphics{figure2.pdf}\caption{Two simple examples, where a closed baryon loop is connected to a single meson line at two sites $x$ and $y$. As discussed in the text, the weights of both diagrams are positive (in a nontrivial way).} \label{fig_small_examples} \end{figure} Building up slightly more complicated configurations, consider a baryon loop connected to a single line of mesonic building blocks $X$ at two sites $x$ and $y$. Fig.~\ref{fig_small_examples} shows two simple examples of this kind. On all sites of the baryon loop except $x$ and $y$ the $\phi$-integrations discussed above force the flavor combinations $(1,2,3)$, $(1,2,4)$, $(1,3,4)$ and $(2,3,4)$ to traverse these parts of the loop separately. At $x$ and $y$, besides the positive $|d_{f_1,f_2,f_3}|^2$, the mixed terms already discussed come into play.
The typical nonvanishing contribution reads \begin{align} &\det(\phi^1|\phi^2|\phi^3)(x) \det(\phi^1|\phi^2|\phi^3)^*(y)\notag\\ &\times \det(\phi^1|\phi^2|\phi^4)(y) \det(\phi^1|\phi^2|\phi^4)^*(x)\notag\\ &\times \mathrm{tr}\,\big(\phi^4(x)\phi^4(y)^\dagger\phi^3(y)\phi^3(x)^\dagger\big)\notag\\ &=\epsilon_{abc}\phi^1_a\phi^2_b\phi^3_c\phi^4_d(x) \,\epsilon_{ABC}\phi^{1*}_A\phi^{2*}_B\phi^{4*}_C\phi^{3*}_d(x)\notag\\ &\times (x\rightarrow y)^*\,. \end{align} Now the $\phi^{1,2,3,4}(x)$-integrations are nonvanishing provided $A=a\,,B=b\,,c=d\,,d=C$, such that the field factor from site $x$ becomes positive, $|\phi_1|^2|\phi_2|^2|\phi_3|^2|\phi_4|^2$ (times Gaussian), with a positive prefactor $\epsilon_{abd}\epsilon_{abd}=6$. Such a factor appears from site $y$, too, and the total weight of these configurations is again positive. In our opinion, these examples point to the positivity of all diagrammatic weights even for $N_f > 3$, when flavor dependent constraints, i.e., the conservation of each flavor current, are used at all sites. The full $\phi$-integration could then still be performed with Monte Carlo sampling. Using these finer constraints means to break down the flavor-summed exponents $\{j,\ldots,\bar{n}\}$ into flavor-dependent exponents through multinomials. The book-keeping of the nonvanishing terms becomes rather intricate, especially if higher occupations appear (which is determined by the dynamics of the system, its phases etc.). We leave this to future work. An alternative approach to projecting onto the relevant contributions is the use of subsets \cite{Bloch:2011jx,Bloch:2013ara,Bloch:2015iha}. \section{Summary and outlook} We have shown that, in the strong coupling limit, the sign problem in sQCD can, indeed, be solved for $N_f=1,2,3$ flavors. It is of further note that these lattice flavors correspond to the same number of flavors in the continuum theory.
In the staggered fermion case this is a serious problem, as one staggered flavor generally corresponds to more than one flavor in the continuum. Furthermore, the remaining doublers cannot be removed by the rooting trick in the diagrammatic formulation. Using more than one staggered flavor also seems to give rise to a serious sign problem even in the mesonic case in $U(3)$, cf.\ \cite{Fromm:2010lga}. In this respect, the scalar theory is much more feasible. For more than three flavors, $N_f>3$, further research is needed to decide whether the approach outlined at the end of section \ref{sec_sign_flavor} is viable and removes the sign problem. The discussed examples point to this conjecture. The case of $N_f>3$ is particularly interesting when one wants to go beyond strong coupling. Recent approaches which lend themselves easily to the formulation outlined in this paper are detailed in \cite{Budczies:2003za, Brandt:2016duy,Vairinhos:2014uxa}. The main goal in these references is to rewrite the gauge plaquette action in such a way as to make only single links appear in the new action, at the expense of introducing auxiliary bosonic field variables. The auxiliary fields are either scalar fields \cite{Brandt:2016duy} or matrix valued fields \cite{Vairinhos:2014uxa}. In particular, the additional scalar fields naturally increase the number of flavors to $N_f>3$. Thus, if the sign problem is, indeed, absent in that case, a diagrammatic simulation of full sQCD seems to be within reach. \vskip5mm \noindent {\bf Acknowledgments:} The authors are supported by the DFG (BR 2872/6-2 and BR 2872/7-1) and thank Jacques Bloch and Christof Gattringer for helpful discussions. FB is grateful to the Mainz Institute for Theoretical Physics (MITP) for its hospitality and its partial support during the completion of this work. \input{scalar.bbl} \end{document}
\section{Introduction} \par Flows with moving boundaries are of high significance for a variety of applications in a wide range of flow regimes -- low Reynolds number flows such as swimming fish \citep{kern2006simulations,borazjani2009numerical}, moderate speed flows such as wind turbines \citep{arrigan2011control}, and high Mach number flows such as store separation from a tactical aircraft. \par The term immersed boundary (IB) method encompasses all such methods that simulate viscous flows with immersed (or embedded) boundaries on grids that do not conform to the shape of these boundaries. A variety of numerical approaches have been used for computing the flow around moving bodies, and they can be broadly classified into body-fitted mesh methods and embedded boundary (EB) methods. Body-fitted mesh approaches such as the overset/Chimera approach \citep{steger1983chimera} and the Arbitrary Lagrangian-Eulerian (ALE) approach \citep{donea1982arbitrary,hughes1981lagrangian} require expensive regeneration of the mesh as the body moves, which becomes cumbersome with complex body motion \citep{liu1998numerical,sahin2009arbitrary}. The embedded boundary method places the body within a Cartesian mesh which does not conform with the geometry, and hence does not require complex grid (re)generation. But such an approach requires sophisticated numerical schemes for computing the terms in the governing equations on and close to the moving body, and for the imposition of boundary conditions at the embedded boundary. \par The earliest work in immersed boundary methods is the diffuse interface approach by \citet{peskin1972flow}, in which a two dimensional simulation of flow in a heart valve was performed. The boundary was replaced by a force field defined on the mesh points of the rectangular domain which was calculated from the configuration of the boundary.
In order to link the representations of the boundary and fluid, since boundary points and mesh points need not coincide, a semi-discrete analog of the delta function was introduced. Later, \citet{goldstein1993modeling} used this idea to impose a force field along a surface, varying in space and time, with a magnitude and direction opposing the local flow so as to bring the flow to rest. Two dimensional flow around cylinders and three dimensional turbulent channel flow over a riblet-covered surface were simulated. Later, a direct forcing approach was developed for rigid body problems, in which the forces at immersed boundaries were calculated based on the temporally discretized momentum equation \citep{mohd1997combined,fadlun2000combined, uhlmann2005immersed,su2007immersed}. Another class of immersed boundary methods are the sharp-interface methods that have no smearing, and thus have an accurate representation of the geometry. The Ghost Fluid Method (GFM) belongs to the class of sharp interface methods. \citet{fedkiw1999non} and \citet{fedkiw2002coupling} developed the ghost fluid method for multiphase flows, in which the interface is tracked with a level set function, which gives the exact sub-cell interface location, and at the interface, an approximate Riemann problem is solved. Further improvements were made in the method by \citet{liu2003ghost} and \citet{terashima2009front}. \citet{tseng2003ghost} developed a ghost cell IB method for flows with complex geometries using a reconstruction procedure to determine the values in the ghost cells for enforcing the boundary conditions. \citet{mittal2008versatile} used the ghost-cell approach to develop a sharp interface method for incompressible flows with moving and deforming bodies.
More recently, \citet{brahmachary2018sharp} developed a sharp interface method for high-speed, compressible, inviscid flows, which imposed the boundary conditions on the body geometry, and used a novel reconstruction procedure to compute the solution in the vicinity of the solid-fluid interface. \citet{al2017versatile} developed a multidimensional partial differential extrapolation approach to reconstruct the solution in the ghost fluid regions and impose boundary conditions on the fluid-solid interface, coupled with a multi-dimensional algebraic interpolation for freshly cleared cells. \par In contrast to the diffuse interface methods, cut-cell methods are piecewise linear ``accurate'' and are strictly conservative. One of the early studies of the cut-cell approach \citep{clarke1986euler} computed the flow over single- and multi-element airfoils on two-dimensional Cartesian grids using a finite volume approach. Later, \citet{pember1995adaptive} developed a Godunov method which used a volume-of-fluid approach for the fluid-body interface. \citet{yang1997cartesian1} developed a cut-cell method for static boundaries and later extended it to moving boundaries \citep{yang1997cartesian2,yang2000calculation}, in which the upwind fluxes on the interfaces of static cells were updated using an HLLC approximate Riemann solver, and an exact Riemann solution for a moving piston was used to update moving solid boundaries. One of the earliest studies that employed adaptive mesh refinement in a cut-cell, Cartesian framework was by \citet{dezeeuw1993adaptively}. They performed compressible flow simulations on single- and multi-element airfoils using Roe's approximate Riemann solver and a solution-adaptive refinement to resolve the high gradient regions. The cut-cell approach was used by \citet{hu2006conservative} to solve multi-fluid and complex moving geometry problems for compressible flows in two dimensions.
Other early studies that used the cut-cell approach involve simulations of viscous incompressible flows with complex boundaries \citep{ye1999accurate}, forced and natural convection problems in an incompressible framework \citep{udaykumar1996elafint} and computation of solid-liquid phase fronts \citep{udaykumar1999computation}. There are a number of studies that used the approach for simulations of incompressible, two- and three-dimensional flows over complex geometries \citep{almgren1997cartesian,kirkpatrick2003representation,popinet2003gerris,meyer2010conservative}. \citet{hartmann2009general} developed a cut-cell approach using ghost cells which can be freely positioned in space, hence making the approach flexible in terms of the shape and size of the embedded boundaries. A linear least-squares method is used to reconstruct the cell center gradients in irregular regions of the mesh to compute the flux at the surface. \citet{cheny2010ls} developed a Cartesian grid/immersed boundary method for incompressible viscous flows in two dimensions for moving complex geometries. \par The use of explicit time discretization schemes in a cut-cell approach leads to the classical small-cell issue. The Courant-Friedrichs-Lewy (CFL) restriction is based on the fluid volume in a cell, and hence will cause the admissible time step to be extremely small for cells with low volume fractions. \citet{noh1963cel} did some of the earliest work on this using a cell-merging and redistribution technique. The cell-merging technique identifies a cluster of cells around a cut-cell, merges them to form a larger control volume, and computes the flux update for this newly formed, merged control volume. The cell-merging technique was used by \citet{bayyuk1993simulation} for moving and deforming bodies, and by \citet{quirk1994cartesian} in a block-structured adaptive framework.
This technique has been used for compressible flows with moving complex geometries \citep{yang2000calculation}, and compressible, multi-solid/fluid systems \citep{barton2011conservative}. Though widely used, \citet{schneiders2013accurate} showed that the technique can lead to unphysical oscillations for moving boundary problems, and developed a method that used a smooth discrete formulation when cells are freshly cleared or covered by the moving boundary. \citet{kirkpatrick2003representation} developed a novel cell-linking algorithm, which avoids the complexities involved with the cell-merging approach. Another technique to address the small-cell issue is the $h$-box method \citep{berger1989adaptive,leveque1988cartesian,berger2003h}, which approximates the numerical fluxes at the interfaces of a small cell based on initial values specified over regions of length $h$, the size of a regular grid cell, which allows the time step to be based on the regular grid size rather than the small cell. In the current work, we use the flux redistribution technique \citep{pember1995adaptive,colella2006cartesian}. This involves updating the cut-cells using a hybrid divergence -- a linear combination of the flux divergence of the cut-cell and the non-conservative divergence computed using the neighboring cell divergences, and then redistributing the ``excess'' quantity of the conserved variables to a neighborhood region of the cut-cell to ensure global conservation. \citet{ji2010numerical} used a cut-cell approach for detonation simulations with a cell-merging technique to avoid the small-cell issue. \citet{muralidharan2016high} developed a novel cell clustering approach to treat the small-cell issue and maintain stability. The central idea was to employ a $k$-order, polynomial piecewise approximation of the flow solution to a cluster of cells, and they also extended it for moving boundary problems \citep{muralidharan2018simulation}.
More recently, \citet{sharan2020stable} have developed a method for deriving higher-order, provably stable schemes that avoids the small cell issue in a finite difference, cut-cell framework. \par Immersed boundary methods offer a significant advantage in the simulation of flows with moving boundaries, and there have been a number of studies in this context \citep{gilmanov2005hybrid,yang2006embedded,khalili2018immersed,mittal2008versatile}. But the cut-cell approach presents additional challenges when applied to moving boundary problems. In particular, a cut-cell based approach to high-speed, moving body problems requires special care, and hence the number of studies is limited. One of the earliest studies was by \citet{yang1997cartesian2,yang2000calculation}, in which the finite volume, unsplit MUSCL--Hancock method of the Godunov type was modified for moving boundaries, in conjunction with a cell-merging technique to maintain numerical stability in the presence of arbitrarily small cut cells and to ensure strict conservation at the moving boundaries. \citet{schneiders2013accurate} showed that the widely used cell-merging technique creates unphysical oscillations for moving boundary problems, and developed an accurate moving boundary formulation based on varying discretization operators which avoids the oscillations. \citet{muralidharan2018simulation} developed a second-order cut-cell approach for flows with moving boundaries enforcing strict conservation using the small-cell clustering algorithm \citep{muralidharan2016high}. The cell clustering algorithm also preserves the smoothness of the solution near moving surfaces. \citet{bennett2018moving} employed a directional operator splitting method by extending the cut-cell approach for static walls from \citet{klein2009well}.
The scheme calculates the fluxes needed for a conservative update of the near-wall cut-cells as linear combinations of fluxes, which were obtained without regard to the small sub-cell problem, from a one-dimensional extended stencil. \citet{tan2011high} developed a high order numerical boundary condition for compressible inviscid flows involving complex moving geometries. Their methodology was based on finite difference methods and was an extension of the inverse Lax--Wendroff procedure \citep{tan2010inverse} for conservation laws in static geometries. \par Fluid flow simulations with predictive capability require high resolution and superior parallel performance of the flow solvers, and this inevitably leads to the need for adaptive mesh refinement (AMR) strategies. In particular, for moving body problems, where the flow features that are of interest constantly change with time, solution-adaptive refinement can result in a significant reduction in compute time and memory. However, issues of deciding when and how to adapt, and keeping track of the evolving mesh, have to be addressed carefully for scalable performance. The cell-based tree data structure, which is very widely used for AMR, is very flexible, and provides a systematic way to keep track of the mesh. However, since each node of a cell-based tree is a single cell, computations suffer from significant overhead due to indirect addressing, and lower FLOP rates are achieved \citep{stout1997adaptive}. Similar performance issues arise with general unstructured grids as well. Block-structured adaptive mesh refinement (SAMR) is more advantageous in many respects compared to the cell-based tree and unstructured grids \citep{stout1997adaptive}. Loop and cache optimizations can be performed over the arrays of cells when using adaptive blocks. The cost of neighbor pointers is amortized over entire arrays, and their ghost cell to computational cell ratio is superior to other data structures.
Since the blocks permit refinement of larger multi-cell regions at a time, mesh adaptation is required less frequently than with other data structures, which reduces computational cost. A number of frameworks exist for SAMR -- BoxLib, Cactus, Chombo, Enzo, FLASH, and Uintah are some of the publicly available frameworks. A survey of these frameworks can be found in \citet{dubey2014survey}. \par In the current work, we use AMReX \citep{amrex,AMReX_JOSS} -- a publicly available software framework for building massively parallel SAMR applications with C\texttt{++} and Fortran interfaces. The features include parallelization via flat MPI, OpenMP, hybrid MPI/OpenMP, or MPI/MPI, GPUs, logical tiling of grids, support for multilevel mesh operations such as coarsening/interpolation between different levels and ghost cell filling, multigrid solvers for Poisson and Helmholtz equations, a sub-cycling time-stepping algorithm, and support for particles and particle-mesh operations. The block-structured adaptive refinement strategy is based on the work of \citet{berger1982adaptive}, and has been subsequently employed in various studies \citep{berger1984adaptive,berger1989local,bell1994three,berger1991algorithm,brown2000adaptive}. \par In this paper we devise a strategy for moving boundary problems for the compressible Navier-Stokes equations within the finite-volume, block-structured adaptive mesh refinement framework of AMReX. The paper is organized as follows. Section~\ref{sec:flow_solver} describes the algorithm for the cut-cell approach with static boundaries and its extension to moving boundaries. The numerical results for one-, two-, and three-dimensional compressible flow problems and comparisons with experiments and other cases in the literature are demonstrated in Section~\ref{sec:testcases}. Conclusions are given in Section~\ref{sec:conclusions}.
\section{The flow solver}\label{sec:flow_solver} The flow solver employs a finite volume, second order method to solve the compressible Navier-Stokes equations, given by \begin{eqnarray*} \frac{\partial \bm{Q}}{\partial t} + \frac{1}{V}\int\limits_A\bm{F}\cdot\bm{n}\,dA = S, \end{eqnarray*} where $\bm{Q}\equiv(\rho,\rho \bm{u},\rho E)$ is the vector of intensive conserved quantities -- density, momentum and total energy, averaged over the cell with volume $V$, $\bm{F}$ is the flux vector (including both inviscid and viscous fluxes), $A$ denotes the surface of the finite volume region, $\bm{n}$ is the outward normal to the surface and $S$ denotes the source terms. A Godunov approach is used to discretize the advection terms, and a Riemann solver evaluates the single-valued conservative fluxes on each cell face based on Van Leer-limited solution gradients of the transformed characteristic variables. The temporal discretization uses the second-order Runge-Kutta method. The flow solver \citep{amrexcns} is implemented within the block-structured adaptive mesh refinement (AMR) framework of AMReX \citep{AMReX_JOSS}. An embedded boundary (EB) approach is used to modify the finite volume discretization near complex geometries \citep{colella2006cartesian,modiano2000higher}. The EB approach uses the volume fraction of the cut-cells, the area fraction of the cut-cell faces, the face normals, and the fluid volumetric centroid for the flux computation. In a naive formulation of the embedded boundary approach, the update in a cut-cell is given by (the source terms are omitted) \begin{eqnarray*} \bm{Q}^{n+1} = \bm{Q}^{n} - \frac{\Delta t}{\alpha V}\int\limits_A\bm{F}\cdot\bm{n}\,dA, \end{eqnarray*} where $\alpha$ is the volume fraction of the cut-cell. If we use an explicit time advancement scheme, then this leads to the classical small cut-cell issue. As $\alpha\rightarrow0$, the CFL restriction results in the admissible time step $\Delta t\rightarrow0$.
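The severity of this restriction is easy to quantify. The one-line estimate below (an illustrative 1D acoustic CFL model with hypothetical parameters, not the solver's actual time-step routine) shows the admissible step shrinking linearly with the volume fraction:

```python
def naive_cut_cell_dt(alpha, dx=1.0, wave_speed=1.0, cfl=0.5):
    """Admissible explicit time step for the naive cut-cell update:
    the fluid volume scales with alpha while the face fluxes do not,
    so dt is proportional to alpha (1D estimate, wave_speed = |u| + c)."""
    return cfl * alpha * dx / wave_speed

# dt for a regular cell, a mildly cut cell, and a sliver cell
dts = [naive_cut_cell_dt(a) for a in (1.0, 1e-2, 1e-6)]
```

A sliver cell with $\alpha=10^{-6}$ would force a global step a million times smaller than that of a regular cell.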
The technique of flux redistribution is utilized to treat the issue \citep{colella2006cartesian,pember1995adaptive}, and is described here for completeness. This involves a two-step procedure -- a hybrid divergence update of the cut-cells, and a redistribution of the ``excess'' in the conserved quantity to the neighboring cells. The hybrid divergence is a volume fraction weighted average of the conservative ($c$) and non-conservative ($nc$) divergences, and the update with the hybrid divergence is given by \begin{eqnarray*} \bm{Q}^{n+1} = \bm{Q}^{n} - \Delta t(\alpha (\nabla\cdot\bm{F})_c + (1-\alpha)(\nabla\cdot\bm{F})_{nc}). \end{eqnarray*} The conservative divergence is given by the standard finite-volume expression \begin{eqnarray*} (\nabla\cdot\bm{F})_c=\frac{1}{\alpha V}\int\limits_A\bm{F}\cdot\bm{n}\,dA, \end{eqnarray*} and hence the update can be written as \begin{eqnarray*} \bm{Q}^{n+1} = \bm{Q}^{n} - \Delta t\Bigg(\frac{1}{V}\int\limits_A\bm{F}\cdot\bm{n}\,dA + (1-\alpha)(\nabla\cdot\bm{F})_{nc}\Bigg), \end{eqnarray*} thereby preventing the volume fraction $\alpha$ from appearing explicitly in the denominator. Note that this approach circumvents the CFL restriction that leads to vanishingly small time steps for small $\alpha$; however, it is not strictly conservative. The non-conservative divergence contribution is computed as a weighted average of the conservative divergences of the neighboring cells as \begin{eqnarray*} (\nabla\cdot\bm{F})_{nc} = \cfrac{\sum\limits_{i\epsilon\text{N}}\alpha_iV_i(\nabla\cdot\bm{F})_c}{\sum\limits_{i\epsilon\text{N}}\alpha_iV_i}, \end{eqnarray*} where $N$ is the set of all reachable cells containing fluid in a 3$\times$3$\times$3 cell neighborhood. A flux redistribution technique following that described in \citet{colella2006cartesian} is used to modify this update and ensure conservation.
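The two-step procedure can be condensed into a short 1D sketch (our simplification: unit cell volumes, a one-cell neighborhood, and the volume weighting $w_j=\alpha_j/\sum_k\alpha_k^2$ discussed below); it illustrates both that the hybrid update stays bounded as $\alpha\to0$ and that the redistribution restores exact conservation up to the domain-boundary fluxes:

```python
import numpy as np

def hybrid_redistribution_step(Q, alpha, F, dx, dt, nbr=1):
    """One explicit update combining the hybrid divergence with flux
    redistribution on a 1D grid. Q: cell averages, alpha: volume
    fractions, F: face fluxes (len(Q) + 1 of them). Illustrative only;
    the neighborhood and weight choices are simplified assumptions."""
    n = len(Q)
    V = dx  # unit cross-section
    Dc = (F[1:] - F[:-1]) / (alpha * V)          # conservative divergence
    Dnc = np.empty(n)                            # non-conservative divergence
    for i in range(n):
        j = slice(max(0, i - nbr), min(n, i + nbr + 1))
        Dnc[i] = np.sum(alpha[j] * V * Dc[j]) / np.sum(alpha[j] * V)
    Qn = Q - dt * (alpha * Dc + (1.0 - alpha) * Dnc)   # hybrid update
    # excess left in each cut-cell by the hybrid update, then redistributed
    dM_exc = -dt * (1.0 - alpha) * (Dnc - Dc)
    for i in range(n):
        if alpha[i] == 1.0:
            continue  # full cells carry no excess
        j = slice(max(0, i - nbr), min(n, i + nbr + 1))
        w = alpha[j] / np.sum(alpha[j] ** 2)     # volume weighting
        Qn[j] += w * (-alpha[i] * dM_exc[i])
    return Qn
```

For a single sliver cell embedded among regular cells, the total mass $\sum_p \alpha_p V_p Q_p$ changes only by the fluxes through the domain boundaries, while the naive update would have required a time step proportional to $\alpha$.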
Had we only used the conservative divergence for the cut-cell $i$, a conservative update would be \begin{eqnarray*} \bm{Q}_i^{n+1} = \bm{Q}_i^n - \Delta t (\nabla\cdot\bm{F})_c. \end{eqnarray*} The hybrid update instead is \begin{eqnarray*} \bm{Q}_i^{n+1} = \bm{Q}_i^n - \Delta t (\alpha_i(\nabla\cdot\bm{F})_c + (1-\alpha_i)(\nabla\cdot\bm{F})_{nc}). \end{eqnarray*} The latter expression leads to excess ``mass'' (mass refers to any of the conserved variables) in the EB cell given by \begin{eqnarray*} \delta \bm{M}_\mathrm{excess} = -\Delta t(1-\alpha_i)((\nabla\cdot\bm{F})_{nc}-(\nabla\cdot\bm{F})_c). \end{eqnarray*} This excess mass is subtracted from the neighbors of the cut-cell. Let $\delta \bm{M}_i=-\alpha_i\delta \bm{M}_\mathrm{excess}$; the total mass to be redistributed is then \begin{eqnarray}\label{eqn:redistribution} \delta \bm{M}_i V_i = \sum\limits_{j\epsilon N} \alpha_j\delta \bm{M}_{ij} V_j, \end{eqnarray} where $N$ is the set of neighboring cells and $\delta \bm{M}_{ij}$ is the portion of redistributed mass that is to be added to the $j^\mathrm{th}$ neighbor. Now, let us assign weights $w_j$ to the cells, which determine the amount of redistributed quantity each cell receives. A cell with weight $w_j$ gets a volume averaged redistributed quantity $\delta\bm{M}_{ij}=w_j\delta \bm{M}_i$, and Eq.~\ref{eqn:redistribution} implies $\sum\limits_{j\epsilon N} \alpha_jw_j=1$. There are a number of choices for how to partition the redistribution. If, for example, the redistribution is volume-weighted (assuming equal volumes for all cells), with \begin{eqnarray*} w_j = \frac{\alpha_j}{\sum\limits_{k\epsilon N} \alpha_k^2}, \end{eqnarray*} then indeed $\sum\limits_{j\epsilon N} \alpha_jw_j=1$. Other strategies include upwind and/or mass weighting the distribution. For simplicity here, we select the volume-weighting scheme, \begin{eqnarray*} \delta \bm{M}_{ij} = \frac{\alpha_j}{\sum\limits_{k\epsilon N} \alpha_k^2}\delta \bm{M}_i.
\end{eqnarray*} Hence, the final update for every cell $p$ in the domain (fluid cells and cut-cells) is given by \begin{eqnarray*} {\bm{Q}_p^{n+1}}_\mathrm{final} = \bm{Q}_p^{n+1} + \sum\limits_{i\epsilon \mathrm{N}_\mathrm{cut-cells}}\delta\bm{M}_{ip}, \end{eqnarray*} where $\bm{Q}_p^{n+1}$ is the update obtained using the divergence (conservative divergence for regular fluid cells and hybrid divergence for cut-cells), and N$_\mathrm{cut-cells}$ is the set of all neighboring cut-cells of cell $p$, which contributes a redistributed mass of $\delta\bm{M}_{ip}$ to cell $p$. \subsection{Moving boundary method}\label{sec:movingEB} In this section, we develop the moving boundary formulation for the compressible Navier-Stokes equations. From the Reynolds transport theorem for the general case of moving/deformable control volumes, we have \begin{eqnarray} \frac{d}{dt}\Bigg(\int\limits_{\Omega(t)}f\,dV\Bigg) = \int\limits_{\Omega(t)}\frac{\partial f}{\partial t}\,dV + \int\limits_{\partial\Omega(t)}f\bm{u}\cdot\bm{n}\,dA = \text{RHS}, \end{eqnarray} where $f$ is a conserved variable, $\Omega(t)$ and $\partial\Omega(t)$ are the temporally varying volume and surface of the control volume respectively, $\bm{u}$ is the velocity of the fluid on the control surface, $\bm{n}$ is the outward unit normal to the control surface, and RHS is the contribution of the inviscid (pressure part) and viscous fluxes including source terms. For a control volume with volume fraction $\alpha$ (the fraction of cell volume occupied by the fluid), we have \begin{eqnarray} \overline{\frac{\partial f}{\partial t}}\alpha\Delta V + \int\limits_{\partial\Omega(t)}f\bm{u}\cdot\bm{n}\,dA = \text{RHS}. \end{eqnarray} where $\Delta V=\Delta x\Delta y\Delta z$ is the cell volume, and $\overline{(\cdot)}$ denotes the cell averaged value. 
First-order discretization in time (for simplicity) gives \begin{eqnarray}\label{eqn:Update} \frac{\overline{f}^{n+1}-\overline{f}^n}{\Delta t}\alpha\Delta V &=& - \int\limits_{\partial\Omega(t)}f\bm{u}\cdot\bm{n}\,dA + \text{RHS}\nonumber, \\ \overline{f}^{n+1} &=& \overline{f}^{n} + \frac{\Delta t}{\alpha^{n}\Delta V}\Bigg(-\int\limits_{\partial\Omega(t)}f\bm{u}\cdot\bm{n}\,dA + \text{RHS}\Bigg), \end{eqnarray} where we have chosen $\alpha=\alpha^{n}$, though other approximations are possible. In our simulations a second-order Runge-Kutta scheme is employed, and the above time discretization simplifies the presentation. The compressible flow equations (viscous terms omitted for simplicity) in the finite volume formulation are \begin{eqnarray} \frac{\partial\rho}{\partial t} + \frac{1}{\alpha\Delta V}\int\limits_{\partial\Omega(t)}\rho\bm{u}\cdot\bm{n} \,dA&=& 0\nonumber\\ \frac{\partial\rho\bm{u}}{\partial t} + \frac{1}{\alpha\Delta V}\int\limits_{\partial\Omega(t)}(p\bm{n}+\rho\bm{u}\bm{u}\cdot\bm{n})\,dA &=& 0\nonumber\\ \frac{\partial\rho E}{\partial t} + \frac{1}{\alpha\Delta V}\int\limits_{\partial\Omega(t)}(p+\rho E)\bm{u}\cdot\bm{n}\, dA &=& 0, \end{eqnarray} where the conservative variables $(\rho,\rho\bm{u},\rho E)$ are cell-averaged. The finite volume solver computes the flux at every face of the control volume. Special care is needed for treating the EB faces, especially when the surface is moving. In the present implementation, a Riemann solver takes in the left and right states for a face, and computes the upwind flux on that face. For a stationary EB, a Riemann-like problem is constructed consistent with the no-slip boundary condition. ``Left'' and ``right'' states are generated using the pressure and density from the cut fluid cell, and the normal velocity at the EB surface is set to satisfy the inviscid no-penetration condition ($u_n$ (body) = $-u_n$ (cut-cell)).
For an EB that moves at a velocity $\bm{u}_w$, the velocity on the EB surface $\bm{u}^s$ should satisfy the inviscid no-penetration condition given by \begin{eqnarray}\label{eqn:NoPenetration} \bm{u}^s\cdot\bm{n}=\bm{u}_w\cdot\bm{n}. \end{eqnarray} The velocity for the ghost point inside the EB is given by a reflective or mirroring condition as \citep{bennett2018moving,toro2013riemann,forrer1999flow} \begin{eqnarray*} \bm{u}_g = \bm{u}-2(\bm{u}\cdot\bm{n})\bm{n} + 2(\bm{u}_w\cdot\bm{n})\bm{n}, \end{eqnarray*} and hence the velocity at the EB surface which moves at a velocity $\bm{u}_w$ is given by \begin{eqnarray}\label{eqn:us} \bm{u}^s = \frac{\bm{u} + \bm{u_g}}{2} = \bm{u}-(\bm{u}\cdot\bm{n})\bm{n} + (\bm{u}_w\cdot\bm{n})\bm{n}, \end{eqnarray} which satisfies the no-penetration condition at the EB boundary given by Eqn.~\ref{eqn:NoPenetration}. With $\bm{u}^s$ defined by Eqn.~\ref{eqn:us}, the flux on the moving EB face is \begin{eqnarray}\label{eqn:MovingEBFlux} F_\rho &=&\rho \bm{u}^s\cdot\bm{n}\nonumber\\ F_{\rho\bm{u}} &= &p_{gd}\bm{n} + (\rho u)_{gd}\bm{u}^s\cdot\bm{n}\nonumber\\ F_{\rho E} &=& (p_{gd}+\rho E)\bm{u}^s\cdot\bm{n} = \Big(p_{gd}+\rho e+\frac{1}{2}\rho|\bm{u}^s|^2\Big)\bm{u}^s\cdot\bm{n}, \end{eqnarray} where $\rho e$ is the internal energy, and $p_{gd}$ and $(\rho u)_{gd}$ are the pressure and momentum flux evaluated by the Riemann solver using the left and right hand states at the EB surface (Fig.~\ref{fig:EB1}). \par For flows with a moving EB, we have to deal with the issue of freshly-cleared cells (FC), i.e., a cell covered by the EB at $t^n$ becomes a cut-cell at $t^{n+1}$. Such cells do not have a history of fluid data, and hence special care needs to be taken to compute the data on these cells. Fig.~\ref{fig:EB1} and Fig.~\ref{fig:EB2} show the position of the boundary at $t^n$ and $t^{n+1}$ respectively.
The conserved state of a FC cell at $t^{n+1}$ (shown in blue in Fig.~\ref{fig:EB2}) is initialized using a volume-weighted average of the neighboring valid cells at $t^n$ as \begin{eqnarray*}\label{eqn:fcc_interp} f_{\text{FC}} = \cfrac{\sum\limits_{i\epsilon\text{N}}\alpha_if_iV_i}{\sum\limits_{i\epsilon\text{N}}\alpha_iV_i}, \end{eqnarray*} where $N$ is the set of all valid cells (cut-cells and fluid cells) at $t^n$. To justify this approach, consider a system with fluid moving at a uniform velocity of $U\bm{s}$, where $\bm{s}$ is the unit vector in the flow direction, and a body moving at the same velocity as the fluid. In this case, the flow field should remain unchanged with time. Fluid cell updates are trivial for this case, and hence we consider the fluxes on a cut-cell. Using $\bm{u}=U\bm{s}$, and since the flow quantities are constant everywhere in the domain (since density and pressure on the EB face are the same as in the adjoining cut-cell fluid), the contributions of the density, momentum and energy fluxes are \begin{eqnarray*} \text{F}_\rho &=& \int\limits_{\partial\Omega(t)} \rho U\bm{s}\cdot\bm{n}\, dA = \rho U\int\limits_{\Omega(t)}\nabla\cdot\bm{s} \,dV\\ \text{F}_{\rho \bm{u}} &=& \int\limits_{\partial\Omega(t)}p\bm{n} + \rho U^2\bm{s}\bm{s}\cdot\bm{n} \, dA = p\int\limits_{\partial\Omega(t)}\bm{n}\,dA + \rho U^2\int\limits_{\Omega(t)}\nabla\cdot(\bm{s}\bm{s}) \, dV\\ \text{F}_{\rho E} &=& \int\limits_{\partial\Omega(t)}(p+\rho E)U\bm{s}\cdot\bm{n} \, dA = (p+\rho E)U\int\limits_{\Omega(t)}\nabla\cdot \bm{s}\,dV. \end{eqnarray*} All the above integrals evaluate to exactly zero discretely, since $\bm{s}$ is a constant vector and $\int\limits_{\partial\Omega(t)}\bm{n}\,dA=\bm{0}$ for a closed control volume $\Omega(t)$. This ensures that the contribution of the fluxes to the field update is exactly zero, and hence the field remains unchanged.
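The two ingredients of this subsection, the mirrored EB-surface velocity and the volume-weighted initialization of freshly cleared cells, can be checked directly. The sketch below (our illustration, with arbitrary test vectors) verifies that $\bm{u}^s$ satisfies the no-penetration condition while leaving the tangential velocity untouched, and that the FC average reproduces a uniform field exactly:

```python
import numpy as np

def ghost_velocity(u, n, u_w):
    """Reflective ghost state for a wall with unit normal n moving at u_w:
    u_g = u - 2(u.n)n + 2(u_w.n)n."""
    u, n, u_w = map(np.asarray, (u, n, u_w))
    return u - 2.0 * np.dot(u, n) * n + 2.0 * np.dot(u_w, n) * n

def eb_surface_velocity(u, n, u_w):
    """Velocity on the EB face: average of interior and ghost states."""
    return 0.5 * (np.asarray(u, dtype=float) + ghost_velocity(u, n, u_w))

def init_fresh_cell(alphas, volumes, values):
    """Volume-weighted average over valid neighbors, used to seed a
    freshly cleared cell."""
    a, V, f = map(np.asarray, (alphas, volumes, values))
    return np.sum(a * V * f) / np.sum(a * V)
```

Because the mirroring flips only the normal component, a uniformly translating fluid-body system sees zero net flux contribution, as argued above.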
{\color{black} \subsection{Treatment of the viscous terms at the embedded boundary} A number of different approaches have been used in the literature for the gradient evaluation at the EB \citep{johansen1998cartesian,schwartz2006cartesian,muralidharan2017embedded}. For the time varying momentum equation \begin{eqnarray*} \frac{\partial\rho\bm{u}}{\partial t} + \frac{1}{\alpha\Delta V}\int\limits_{\partial\Omega(t)}(p\bm{n}+\rho\bm{u}\bm{u}\cdot\bm{n})\,dA = \frac{1}{\alpha\Delta V}\int\limits_{\partial\Omega(t)} \bm{\tau}\cdot\bm{n}\,dA, \end{eqnarray*} we use a 3$^\mathrm{rd}$ order least squares formulation to approximate the stress tensor, $\bm{\tau}$, at the EB surface. The details of the formulation are given in the Appendix. To determine the gradients on the EB face (green square in Fig.~\ref{fig:EBGradient_LS}), the corresponding least squares formulation results in a linear system of equations at each EB face, given by \begin{eqnarray*} A_\mathrm{LS} X = b_\mathrm{LS}(\phi_\mathrm{nei},\phi_0). \end{eqnarray*} For a 3$^\mathrm{rd}$ order formulation, $A_\mathrm{LS}$ is a 9$\times$9 matrix that depends only on the mesh in the neighborhood region, $X$ is the solution vector containing the first and second derivatives (including mixed derivatives) of the variable $\phi$, and $b_\mathrm{LS}$ is a 9$\times$1 vector which is a function of the values $\phi_\mathrm{nei}$ in the neighborhood region (blue circles in Fig.~\ref{fig:EBGradient_LS}) and the value $\phi_0$ at the EB face (the green square in Fig.~\ref{fig:EBGradient_LS}). When evaluating velocity gradients, $\phi_0$ takes the value of the velocity of the corresponding point on the moving body. Note that the values of $\phi_\mathrm{nei}$ are assumed to be positioned at the volumetric centroids of the fluid region of the cell (blue circles in Fig.~\ref{fig:EBGradient_LS}). The 9$\times$9 system of equations is solved using LAPACK \citep{anderson1990lapack}.
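A two-dimensional analogue of this least-squares construction can be sketched as follows (illustrative Python using a dense least-squares solve; the actual formulation is the 9$\times$9 three-dimensional system solved with LAPACK):

```python
import numpy as np

def ls_gradient_2d(x0, phi0, pts, phis):
    """Least-squares gradient at an EB point x0 from neighbor values.

    Unknowns X = [phi_x, phi_y, phi_xx, phi_xy, phi_yy] of a local
    quadratic Taylor expansion about x0; each neighbor contributes
    one row phi_i - phi0 = dx*phi_x + dy*phi_y + 0.5*dx^2*phi_xx
                           + dx*dy*phi_xy + 0.5*dy^2*phi_yy.
    """
    d = np.asarray(pts, float) - np.asarray(x0, float)
    dx, dy = d[:, 0], d[:, 1]
    A = np.column_stack([dx, dy, 0.5 * dx**2, dx * dy, 0.5 * dy**2])
    b = np.asarray(phis, float) - phi0
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X[:2]   # first derivatives at the EB point
```

For a smooth quadratic field, this stencil recovers the gradient at the EB point exactly, which is the sense in which the formulation is 3$^\mathrm{rd}$ order.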
In the present work, we use a cluster size of 3, which means that for evaluating the gradients at the EB face on cell $(i,j,k)$, the neighborhood region is given by the reachable fluid-containing cells in the index region $(i-3:i+3,j-3:j+3,k-3:k+3)$. } \begin{figure} \subfigure[] { \includegraphics[trim=1.0cm 6.0cm 18cm 1.0cm,clip=true,scale=0.5]{EB1} \label{fig:EB1} } \subfigure[] { \includegraphics[trim=1.0cm 6.0cm 15cm 1.0cm,clip=true,scale=0.5]{EB2} \label{fig:EB2} } \caption{(a) The EB surface at $t^n$ and (b) $t^{n+1}$ showing the EB cells (red), freshly cleared cells (blue) and the neighbors of the freshly cleared cell ($\times$).} \label{fig:EB} \end{figure} \begin{figure} \centering \includegraphics[trim=15.0cm 3.0cm 1.0cm 9.0cm,clip=true,scale=0.5]{EBGradient_LS} \caption{ The neighborhood stencil for a least squares method that computes the gradients at the EB surface (shown by the green square) by minimizing an error norm (see Appendix) over a neighborhood (shown by blue circles) \citep{schwartz2006cartesian}. This stencil has a cluster size of 2, which means that the neighborhood region is given by the reachable fluid-containing cells in the index region $(i-2:i+2,j-2:j+2,k-2:k+2)$.} \label{fig:EBGradient_LS} \end{figure} \section{Test cases}\label{sec:testcases} Several inviscid and viscous test cases of increasing complexity are performed to validate the numerical method for moving embedded boundaries. \subsection{Harmonically pulsating sphere} This test case is that of a sphere of mean radius $R_s=0.01$ m, whose surface pulsates harmonically with radius $r(t) = R_s + A\cos(2\pi ft)$, with $A = 10^{-6}$ m and $f = 1000$ Hz. The pulsation of the sphere creates traveling pressure waves in the surrounding fluid. In the limit of the mean radius of the sphere being small compared to the wavelength corresponding to the acoustic wave with frequency $f$, i.e.
$R_s \ll \lambda = \frac{c_0}{f}$, where $c_0$ is the ambient speed of sound, the pressure perturbation in the domain is given by \citep{tsangaris2000analytical} \begin{eqnarray} p'(r,t) \approx -\frac{4\pi^2\rho_0 A R_s^2 f^2}{r}\cos\Bigg(2\pi f\Bigg(t-\frac{r}{c_0}\Bigg)\Bigg). \end{eqnarray} The quiescent ambient condition of the surrounding air ($\gamma=1.4$, $R=287.0$ J/kg~K) is given by $\rho_0=1.226$ kg/m$^3$, $p_0=101325.0$ N/m$^2$. The domain size is $1$ m $\times$ $1$ m $\times$ $1$ m, with a base mesh size $64\times64\times64$, and three levels of refinement, which gives a resolution of $n_d=0.02/(1.0/(64\times2^3))=10$ points across the diameter of the sphere. The velocity of the sphere surface is given by \begin{eqnarray*} \bm{u}_s = -2\pi Af\sin(2\pi ft)\bm{n}, \end{eqnarray*} where $\bm{n}$ is the outward normal to the spherical surface. Fig.~\ref{fig:PulsatingSphere_Levels} shows the sphere and the three levels of refinement (note that in this and all subsequent test cases presented here, the refinement criterion is set to refine all cut cells in order to avoid intersecting the EB with coarse-fine boundaries; such intersections are manageable, but would unnecessarily complicate the presentation here). Fig.~\ref{fig:PulsatingSphere_pprime_contours} shows the instantaneous contours of pressure perturbation on three orthogonal planes through the center of the sphere. Figs.~\ref{fig:PulsatingSphere0037} and \ref{fig:PulsatingSphere0047} show the comparison of the numerical and exact pressure perturbation along a radial line that originates at the surface. In this test case, the amplitude of oscillation is small ($A\ll\Delta x, \Delta y, \Delta z$), and hence the sphere surface does not cut across cells. This test case provides a verification of the correctness of the moving EB flux computation in three dimensions. \begin{figure}[htpb!]
\centering \subfigure[] { \includegraphics[trim=0.0cm 2.0cm 0.0cm 2.0cm, clip=true,scale=0.2]{PulsatingSphere_Levels} \label{fig:PulsatingSphere_Levels} } \subfigure[] { \includegraphics[trim=0.0cm 2.0cm 0.0cm 2.0cm, clip=true,scale=0.2]{PulsatingSphere_pprime_contours} \label{fig:PulsatingSphere_pprime_contours} } \subfigure[] { \includegraphics[trim=0.0cm 0.0cm 0.0cm 0.0cm, clip=true,scale=0.35]{PulsatingSphere0037} \label{fig:PulsatingSphere0037} } \subfigure[] { \includegraphics[trim=0.0cm 0.0cm 0.0cm 0.0cm, clip=true,scale=0.35]{PulsatingSphere0047} \label{fig:PulsatingSphere0047} } \caption{(a) The 0.5 isocontour of the volume fraction and the 4-level refinement mesh boxes, (b) instantaneous contours of pressure perturbation $p'$, and comparison of the numerical and exact pressure perturbation along a radial line originating from the surface at (c) $t=1.85$ ms and (d) $t=2.35$ ms.} \end{figure} \subsection{An accelerating piston} This test case demonstrates the classical introduction to shock waves and expansion fans -- an accelerating, advancing piston compressing the fluid creates a series of compression waves, the coalescence of which eventually creates a shock wave, while an accelerating, receding piston creates an expansion fan. \subsubsection{Advancing piston} The initial condition consists of quiescent air ($\gamma=1.4$, $R=287.0$ J/kg~K) with $\rho_0=1.226$ kg/m$^3$, and $p_0=101325.0$ N/m$^2$, with the piston located at $x=0.0$ m. The piston accelerates at a constant rate $a=1.25e5$ m/s$^2$ starting at $t=0$, and hence the velocity of the piston is given by \begin{eqnarray*} u_p = at, \end{eqnarray*} which gives the piston motion as \begin{eqnarray*} x(t) = \frac{1}{2}at^2.
\end{eqnarray*} The velocity of the fluid is given as \citep{tsangaris2000analytical} \begin{eqnarray}\label{eqn:PistonEqn} u(x,t) = \begin{cases} -\frac{1}{\gamma}\Big(c_0-\frac{\gamma+1}{2}at\Big)+\sqrt{\frac{1}{\gamma^2}\Big(c_0-\frac{\gamma+1}{2}at\Big)^2-\frac{2}{\gamma}a(x-c_0t)} & \quad \text{if} \quad x>\frac{1}{2}at^2 \quad \text{and} \quad x-c_0t < 0\\ 0.0 & \quad \text{otherwise.} \end{cases} \end{eqnarray} The location and time of coalescence of compression waves to a shock wave are given by \begin{eqnarray*} x_c = \frac{2c_0^2}{(\gamma+1)a}\quad\mathrm{and}\quad t_c = \frac{2c_0}{(\gamma+1)a}. \end{eqnarray*} The domain size is 1 m $\times$ 0.125 m and the base mesh size is 64$\times$8 with three levels of refinement. Fig.~\ref{fig:PistonShock} (a) shows the instantaneous Schlieren ($\vert\nabla\rho\vert$) images and the 4-level adaptive mesh. The coalescence of the compression waves to form the shock wave can be seen at $t=t_c=2.26$ ms. The location of the shock $x\approx0.77$ m shows good quantitative agreement with the theory. The comparison of the numerical velocity profiles with the exact solution at different time instants is shown in Fig.~\ref{fig:PistonShock} (b). \begin{figure}[htpb!] 
\begin{tabular}{cc} & \multirow{2}*{\includegraphics[trim=0.0cm 0.0cm 1.0cm 0.0cm, clip=true,scale=0.42]{ShockComparison}}\\ \includegraphics[trim=2.0cm 14.0cm 2.0cm 14.0cm, clip=true,scale=0.25]{PistonShock_Schlieren_1}\\ \includegraphics[trim=2.0cm 14.0cm 2.0cm 14.0cm, clip=true,scale=0.25]{PistonShock_Schlieren_2}\\ \includegraphics[trim=2.0cm 14.0cm 2.0cm 14.0cm, clip=true,scale=0.25]{PistonShock_Schlieren_3}\\ \includegraphics[trim=2.0cm 14.0cm 2.0cm 14.0cm, clip=true,scale=0.25]{PistonShock_Schlieren_Mesh}\\ &\\ (a)&(b) \end{tabular} \caption{(a) Numerical Schlieren ($\vert\nabla\rho\vert$) at $t=$ 1 ms, 1.75 ms, and at the shock formation time $t_c=$ 2.3 ms, and the 4-level mesh, and (b) the comparison of the velocity profiles with the exact solution at $t=$ 0.75 ms, 1.25 ms, 1.75 ms and $t_c=$ 2.3 ms.} \label{fig:PistonShock} \end{figure} \subsubsection{Receding piston} The initial condition consists of quiescent air ($\gamma=1.4$, $R=287.0$ J/kg~K), with $\rho_0=1.226$ kg/m$^3$, and $p_0=101325.0$ N/m$^2$, with the piston located at $x=0.0$ m. The piston recedes with a constant acceleration $a=-5e5$ m/s$^2$ starting at $t=0$. The velocity of the fluid is given by Eqn.~\ref{eqn:PistonEqn}. The domain size is 1 m $\times$ 0.125 m and the base mesh size is 64$\times$8 with two levels of refinement. Fig.~\ref{fig:Expansion}(a) shows the instantaneous velocity contours and the 3-level mesh at various time instants. The comparison of the numerical velocity profiles with the exact solution at different time instants is shown in Fig.~\ref{fig:Expansion} (b). This test case creates freshly cleared cells as the piston recedes, and demonstrates the efficacy of our moving boundary formulation. \par An expansion fan is a smooth, isentropic flow, and hence the entropy should remain exactly zero at all times.
We compute the order of accuracy of the numerical scheme for moving boundary problems using the $L^2$ norm of the entropy computed as $s=\ln\Bigg[\Bigg(\cfrac{p}{p_0}\Bigg)\Bigg(\cfrac{\rho_0}{\rho}\Bigg)^\gamma\Bigg]$. Fig.~\ref{fig:Expansion}(c) shows that the order of accuracy is 1. Although the numerical scheme has an accuracy of 2 for smooth problems, the order is found to be 1 for moving boundary problems; this is attributed to the interpolation procedure for the freshly cleared cells given by Eqn.~\ref{eqn:fcc_interp}, as was also observed by \citet{muralidharan2018simulation}. \begin{figure} \begin{tabular}{cc} & \multirow{2}*{\includegraphics[trim=0.0cm 0.0cm 1.0cm 1.0cm, clip=true,scale=0.4]{ExpansionComparison}}\\ \includegraphics[trim=1.0cm 14.0cm 1.0cm 11.0cm, clip=true,scale=0.25]{Expansion_Mesh_1}&\\ \includegraphics[trim=1.0cm 14.0cm 1.0cm 14.0cm, clip=true,scale=0.25]{Expansion_Mesh_2}&\\ \includegraphics[trim=1.0cm 14.0cm 1.0cm 14.0cm, clip=true,scale=0.25]{Expansion_Mesh_3}&\\ &\\&\\ (a)&(b)\\ \multicolumn{2}{c}{\multirow{2}{*}{\includegraphics[trim=0.0cm 0.0cm 1.0cm 1.0cm, clip=true,scale=0.4]{AccuracyMovingEB}}}\\ &\\&\\&\\&\\&\\&\\&\\&\\&\\&\\&\\&\\&\\&\\ \multicolumn{2}{c}{\multirow{2}{*}{(c)}}\\ &\\ \end{tabular} \caption{The contours of velocity and the 3-level mesh at (a) $t=$ 0.4, 0.8, 1.2 ms, (b) comparison of numerical and exact velocity profiles at $t=$ 0.4, 0.8, 1.2 ms and (c) $L^2$ norm of entropy showing that the scheme is first order accurate for moving boundary problems.} \label{fig:Expansion} \end{figure} \subsection{Shock-cylinder interaction}\label{sec:shock-cylinder} To further demonstrate the capability of the moving boundary algorithm we consider the shock-cylinder interaction problem, which has been studied experimentally by \citet{bryson1961diffraction}, and computationally by \citet{bennett2018moving}.
This test case consists of a rigid circular cylinder of diameter $D=0.04$ m interacting with a stationary Mach 1.34 shock wave. The initial condition corresponds to a stationary shock wave located at $x=0.05$ m, characterized by the left and right-hand states given by $\rho_L=0.595$ kg/m$^3$, $u_L=459.54$ m/s, $p_L=5e4$ Pa, and $\rho_R=0.944$ kg/m$^3$, $u_R=289.86$ m/s, $p_R=9.6e4$ Pa. The cylinder has a constant horizontal velocity of $u=u_L=459.54$ m/s. The center of the cylinder is initially located at $x=0.028$ m. The computational domain is 0.18 m$\times$ 0.18 m with a base mesh size of 32 $\times$ 32, with four levels of refinement, which gives a resolution of 114 points across the diameter of the cylinder. The refinement criterion tags all cut-cells and has an additional gradient based detector for resolving high-gradient regions. The detector at a point $(i,j)$ is given by \citet{wong2016multiresolution} as \begin{eqnarray}\label{eqn:ShockRefine} \widetilde{w}=\frac{\sqrt{w_x^2+w_y^2}}{w_\mathrm{mean}+\epsilon}, \end{eqnarray} where \begin{eqnarray*} w_x&=&\vert\rho_{i+1,j}-2\rho_{i,j}+\rho_{i-1,j}\vert\\\nonumber w_y&=&\vert\rho_{i,j+1}-2\rho_{i,j}+\rho_{i,j-1}\vert\\\nonumber w_\mathrm{mean}&=&\sqrt{(\rho_{i+1,j}+2\rho_{i,j}+\rho_{i-1,j})^2+(\rho_{i,j+1}+2\rho_{i,j}+\rho_{i,j-1})^2},\\\nonumber \end{eqnarray*} and $\epsilon$ is a small positive number that prevents division by zero. If $\widetilde{w}>0.005$, cells are tagged for refinement. Fig.~\ref{fig:ShockCylinder} (a)-(d) show the evolution of the numerical Schlieren ($\vert\nabla\rho\vert$) as the cylinder interacts with the shock wave. Since the cylinder moves at the same speed as the surrounding fluid before encountering the shock wave, the flow-field should not change with time during this period. The absence of any waves in the domain during this time shows that this consistency check is satisfied by our moving-boundary method similar to \citet{bennett2018moving}.
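The gradient-based detector above can be sketched as follows (illustrative Python for a 2D density array; the function and argument names are hypothetical):

```python
import numpy as np

def tag_cells(rho, eps=1.0e-12, threshold=0.005):
    """Second-difference density detector of the form used for
    refinement tagging; returns a boolean tag array for the
    interior cells of a 2D density field rho[i, j]."""
    wx = np.abs(rho[2:, 1:-1] - 2.0 * rho[1:-1, 1:-1] + rho[:-2, 1:-1])
    wy = np.abs(rho[1:-1, 2:] - 2.0 * rho[1:-1, 1:-1] + rho[1:-1, :-2])
    mx = rho[2:, 1:-1] + 2.0 * rho[1:-1, 1:-1] + rho[:-2, 1:-1]
    my = rho[1:-1, 2:] + 2.0 * rho[1:-1, 1:-1] + rho[1:-1, :-2]
    w_mean = np.sqrt(mx**2 + my**2)
    w = np.sqrt(wx**2 + wy**2) / (w_mean + eps)
    return w > threshold
```

A uniform field produces no tags, while any density jump tags the cells straddling it, which is the behavior expected of the detector.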
Fig.~\ref{fig:ShockCylinder_Schlieren_4} shows the mesh at $t=197.86~\mu$s, and the efficiency of the refinement criterion in resolving the high gradient regions is evident.\\ \par The interaction of the cylinder with a stationary Mach 2.82 shock wave is also computed for comparison with the experiments of \citet{bryson1961diffraction}. The density and pressure ratios are $\rho_R/\rho_L=3.68$, and $p_R/p_L=9.11$. The initial condition is a shock wave located at $x=0.05$ m characterized by the left and right states given by $\rho_L=0.595$ kg/m$^3$, $u_L=967.25$ m/s, $p_L=5e4$ Pa, and $\rho_R=0.944$ kg/m$^3$, $u_R=262.59$ m/s, $p_R=4.56e5$ Pa. The cylinder moves with velocity $u_L$. Fig.~\ref{fig:ShockCylinder_Comparison} shows the comparison of the numerical Schlieren with the experiment of \citet{bryson1961diffraction}. When the incident shock (IS) first impinges on the cylinder, a regular reflected shock (RS) is formed, and later, as the cylinder moves past the incident shock wave, a Mach stem (MS) and a slip surface are formed, leading to a Mach triple point (TP). The flow features are well resolved with adaptive refinement, and show good qualitative agreement with the experiments. \begin{figure}[htpb!]
\subfigure[] { \includegraphics[trim=0.0cm 3.0cm 0.0cm 1.5cm, clip=true,scale=0.22]{ShockCylinder_Schlieren_1} } \subfigure[] { \includegraphics[trim=0.0cm 3.0cm 0.0cm 1.5cm, clip=true,scale=0.22]{ShockCylinder_Schlieren_2} }\\ \subfigure[] { \includegraphics[trim=0.0cm 3.0cm 0.0cm 1.5cm, clip=true,scale=0.22]{ShockCylinder_Schlieren_3} } \subfigure[] { \includegraphics[trim=0.0cm 3.0cm 0.0cm 1.5cm, clip=true,scale=0.22]{ShockCylinder_Schlieren_4} \label{fig:ShockCylinder_Schlieren_4} } \begin{center} \subfigure[] { \includegraphics[trim=6.0cm 3.5cm 6.0cm 3.0cm, clip=true,scale=0.5]{ShockCylinder_Comparison.pdf} \label{fig:ShockCylinder_Comparison} } \end{center} \caption{The numerical Schlieren ($\vert\nabla\rho\vert$) at (a) $t=66$ $\mu$s, (b) $t=131$ $\mu$s, (c) $t=198$ $\mu$s and (d) the adaptive 5-level mesh (the coarsest level is completely refined and hence not seen at this instant) and (e) comparison of the numerical Schlieren with the experiment of \citet{bryson1961diffraction} showing the features. IS: incident shock, RS: reflected shock, MS: Mach stem, CD: contact discontinuity, TP: Mach triple point.} \label{fig:ShockCylinder} \end{figure} \subsection{Shock-wedge interaction}\label{sec:shock-wedge} In this section, we consider an experimental test case studied by \citet{chang2000shock}, known as Schardin's problem \citep{schardin1957high} -- a Mach 1.34 shock interaction with a triangular wedge. This test case demonstrates the capability of the algorithm to handle high-speed flows around sharp corners. It consists of a rigid, equilateral triangular wedge with side length $L=0.02$ m interacting with a stationary Mach 1.34 shock wave. The initial condition corresponds to a stationary shock wave located at $x=0.03$ m, characterized by the left and right-hand states given by $\rho_L=0.595$ kg/m$^3$, $u_L=459.54$ m/s, $p_L=5e4$ Pa, and $\rho_R=0.944$ kg/m$^3$, $u_R=289.86$ m/s, $p_R=9.6e4$ Pa.
The wedge has a constant horizontal velocity of $u=u_L=459.54$ m/s. The center of the vertical side of the wedge is initially located at $(x,y)=(0.012,0.0)$ m. The computational domain is 0.09 m$\times$ 0.09 m with a base mesh size of 32 $\times$ 32, with four levels of refinement. The refinement criterion tags all cut-cells and has an additional gradient based detector for resolving high-gradient regions as described in Section~\ref{sec:shock-cylinder}. Fig.~\ref{fig:ShockWedge_Schlieren_1}-\ref{fig:ShockWedge_Schlieren_4} show the temporal evolution of the numerical Schlieren ($\vert\nabla\rho\vert$). Fig.~\ref{fig:ShockWedge_Schlieren_5} shows the 5-level mesh (the coarsest level is fully refined and hence not seen), demonstrating the effectiveness of the refinement criterion in resolving the high-gradient regions. \par Fig.~\ref{fig:ShockWedgeExpComparison} shows the comparison of the experimental \citep{chang2000shock} and numerical Schlieren images. It can be seen that the various features of the flow are well-resolved and qualitatively match the experimental results. A more detailed Schlieren image of the various features at a later time is shown in Fig.~\ref{fig:ShockWedgeExpComparison_Features}. As the wedge interacts with the incident shock wave, a regular reflected shock wave (R) and an expansion fan (E) are formed initially. As the wedge passes the incident shock, the sharp corners lead to the formation of strong vortices (V) and the symmetric decelerated shock wave pattern (D). At a later time, Mach stems (M$_1$) form on the top and bottom, leading to a triple Mach point T$_1$. As the wedge moves forward another Mach stem (M$_2$) originates, leading to another triple point (T$_2$). \begin{figure}[htpb!]
\centering \subfigure[] { \includegraphics[trim=4.5cm 3.0cm 0.0cm 1.0cm, clip=true,scale=0.22]{ShockWedge_Schlieren_1} \label{fig:ShockWedge_Schlieren_1} } \subfigure[] { \includegraphics[trim=4.5cm 3.0cm 0.0cm 1.0cm, clip=true,scale=0.22]{ShockWedge_Schlieren_2} \label{fig:ShockWedge_Schlieren_2} }\\ \subfigure[] { \includegraphics[trim=4.5cm 3.0cm 0.0cm 1.0cm, clip=true,scale=0.22]{ShockWedge_Schlieren_3} \label{fig:ShockWedge_Schlieren_3} } \subfigure[] { \includegraphics[trim=4.5cm 3.0cm 0.0cm 1.0cm, clip=true,scale=0.22]{ShockWedge_Schlieren_4} \label{fig:ShockWedge_Schlieren_4} }\\ \subfigure[] { \includegraphics[trim=4.5cm 3.0cm 0.0cm 1.0cm, clip=true,scale=0.22]{ShockWedge_Schlieren_Mesh} \label{fig:ShockWedge_Schlieren_5} } \subfigure[] { \includegraphics[trim=10.5cm 3.5cm 10.5cm 3.5cm, clip=true,scale=0.52]{ShockWedgeExpComparison.pdf} \label{fig:ShockWedgeExpComparison} } \caption{The numerical Schlieren ($\vert\nabla\rho\vert$) at (a) $t=24$ $\mu$s, (b) $t=66$ $\mu$s, (c) $t=80$ $\mu$s, (d) $t=107$ $\mu$s, (e) the adaptive 5-level mesh (the coarsest level is completely refined and hence not seen at this instant) and (f) comparison of the numerical Schlieren with the experiments of \citet{chang2000shock}.} \end{figure} \begin{figure} \centering \includegraphics[trim=6.0cm 4.0cm 5.0cm 3.0cm, clip=true,scale=0.55]{ShockWedgeExpComparison_Features} \caption{Schlieren image from the experiment \citep{chang2000shock} (left), and simulation showing the comparison of the features.
A: accelerated shock, D: decelerated shock, E: expansion fan, R: reflected shock, I: incident shock, T$_1$, T$_2$: Mach triple points, M$_1$, M$_2$: Mach stems, S$_1$, S$_2$: slip lines (right).} \label{fig:ShockWedgeExpComparison_Features} \end{figure} \subsection{Pitching NACA 0012 airfoil} The transonic buffet phenomenon over a NACA 0012 airfoil is widely studied both experimentally \citep{landon1982naca} and computationally \citep{venkatakrishnan1996implicit,mumtaz2017computational,kirshman2006flutter,schneiders2013accurate}. This test case consists of flow of air ($\gamma=1.4$, $R=287.0$ J/kg~K) at Mach 0.755 over a pitching NACA 0012 airfoil with free-stream conditions of $\rho=1.226$ kg/m$^3$, $u=256.9$ m/s and $p=101325.0$ Pa. The pitching motion of the airfoil is about the quarter-chord point $x/c=0.25$, and is defined by the temporally varying angle of attack $\phi(t)=0.016+2.51\sin(\omega t)$ degrees, with $\omega=41.45$ rad/s. A polynomial representation is used to generate the NACA 0012 airfoil \citep{tools2015naca}. The domain size is 20 m $\times$ 10 m, and the base mesh size is 256 $\times$ 128 with three levels of refinement. The refinement criterion is defined to tag cut-cells and high gradient regions as defined by Eqn.~\ref{eqn:ShockRefine}. \\ \par The flow over the airfoil is transonic: the fluid accelerates over the surface, becomes locally supersonic, and then passes through a shock wave that returns it to subsonic conditions. Due to the pitching of the airfoil, the shock location is unsteady, and an oscillating shock pattern known as transonic buffet can be observed over the airfoil. This causes the pressure over the top and bottom surfaces of the airfoil to fluctuate with time and leads to a cyclic variation of the lift coefficient; as a result, the lift coefficient takes different values as the airfoil passes through the same angle of attack during the upward and downward motion.
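For reference, the polynomial surface representation of a symmetric four-digit airfoil is typically the standard NACA thickness distribution, sketched below (illustrative Python; the open-trailing-edge coefficients are assumed, and the cited generator may use a slightly different closed-trailing-edge variant):

```python
import numpy as np

def naca00xx_half_thickness(x, t=0.12):
    """Half-thickness distribution of a symmetric NACA 00xx airfoil
    on a unit chord, 0 <= x <= 1. t is the thickness ratio
    (0.12 for the NACA 0012); open-trailing-edge coefficients."""
    x = np.asarray(x, float)
    return 5.0 * t * (0.2969 * np.sqrt(x) - 0.1260 * x
                      - 0.3516 * x**2 + 0.2843 * x**3 - 0.1015 * x**4)
```

The maximum half-thickness of about $0.06c$ occurs near $x/c=0.3$, i.e. a 12\% thick section, as the designation indicates.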
The transonic buffet phenomenon can be seen in Fig.~\ref{fig:PitchingAirfoil_Mach_1} and \ref{fig:PitchingAirfoil_Mach_2}, which show the contours of Mach number at an angle of attack $\phi=2.34^\circ$ and $\phi=-0.54^\circ$ respectively. Fig.~\ref{fig:PitchingAirfoil_Schlieren_1} and \ref{fig:PitchingAirfoil_Schlieren_2} show the numerical Schlieren ($\vert\nabla\rho\vert$) and the 4-level mesh at $\phi=2.34^\circ$ and $\phi=-0.54^\circ$. Fig.~\ref{fig:CompareWithExp} shows the comparison of the numerical and experimental \citep{landon1982naca} pressure coefficient $C_p=\cfrac{p-p_\infty}{\frac{1}{2}\rho_\infty U_\infty^2}$ on the top and bottom surfaces of the airfoil at an angle of attack of $\phi=2.34^\circ$. Fig.~\ref{fig:cldeg} shows the variation of the lift coefficient as a function of the angle of attack. Good quantitative agreement is observed with the experiments of \citet{landon1982naca}, the numerical simulation of \citet{venkatakrishnan1996implicit}, and the result from a commercial solver (ANSYS) \citep{mumtaz2017computational}. \begin{figure}[htpb!]
\subfigure[] { \includegraphics[trim=0.0cm 3.0cm 3.0cm 3.0cm, clip=true,scale=0.22]{PitchingAirfoil_Mach_1} \label{fig:PitchingAirfoil_Mach_1} } \subfigure[] { \includegraphics[trim=0.0cm 3.0cm 3.0cm 3.0cm, clip=true,scale=0.22]{PitchingAirfoil_Mach_2} \label{fig:PitchingAirfoil_Mach_2} }\\ \subfigure[] { \includegraphics[trim=0.0cm 0.0cm 0.0cm 0.0cm, clip=true,scale=0.2]{PitchingAirfoil_Schlieren_1} \label{fig:PitchingAirfoil_Schlieren_1} } \subfigure[] { \includegraphics[trim=0.0cm 0.0cm 0.0cm 0.0cm, clip=true,scale=0.2]{PitchingAirfoil_Schlieren_2} \label{fig:PitchingAirfoil_Schlieren_2} }\\ \subfigure[] { \includegraphics[trim=0.0cm 0.0cm 1.0cm 1.0cm, clip=true,scale=0.4]{CompareWithExp} \label{fig:CompareWithExp} } \subfigure[] { \includegraphics[trim=0.0cm 0.0cm 1.0cm 1.0cm, clip=true,scale=0.4]{cldeg} \label{fig:cldeg} } \caption{The contours of Mach number at (a) $\phi=2.34^\circ$ and (b) $\phi=-0.54^\circ$, the 4-level mesh and the numerical Schlieren at (c) $\phi=2.34^\circ$ and (d) $\phi=-0.54^\circ$, (e) comparison of the numerical and experimental \citep{landon1982naca} pressure coefficient for the top and bottom surfaces of the airfoil at $\phi=2.34^\circ$, and (f) comparison of the numerical variation of the lift coefficient as a function of the angle of attack, with the experiment \citep{landon1982naca} and other results in literature \citep{venkatakrishnan1996implicit,mumtaz2017computational}.} \label{fig:NACA 0012} \end{figure} \subsection{Reciprocating piston in an engine-like geometry} To demonstrate the capability of the algorithm to simulate flows with three dimensional complex, moving geometries, we compute the flow inside the geometry shown in Fig.~\ref{fig:ICEGeometry_1}, which mimics an internal combustion engine (without valves). The cross section with the dimensions is shown in Fig.~\ref{fig:ICEGeometry_2}.
This is a closed system, and hence the mass within the geometry should remain constant with time, which makes this a good case for testing the conservative nature of the scheme. The piston P is initially located at $x(0)=0.0575$ m, and has a prescribed oscillatory motion given by $x(t) = 0.0375+0.02\cos(375t)$ m, which creates a flow within the geometry. The computational domain has dimensions 0.1 m $\times$ 0.1 m $\times$ 0.1 m, with a base mesh size of 32 $\times$ 32 $\times$ 32, with two levels of refinement, and a constant time step $\Delta t=1.5\times10^{-6}$ s is used. The refinement criterion tags all cut-cells for refinement. \par Fig.~\ref{fig:ICEVelocityMesh_1}-\ref{fig:ICEVelocityMesh_3} show two perpendicular slices with the contours of axial velocity, and the 3-level mesh for different time instants. It can be seen that the geometry is always enclosed within the finest level of refinement. Fig.~\ref{fig:ICEVelocityMesh_4} shows the slices of axial velocity and the 0.5 isocontour of the volume fraction at $t=0.024$ s. As the piston oscillates, the fluid in the geometry is compressed and expanded, and hence the density changes continuously. Since mass remains constant, the average exact density in the geometry at any time can be computed as $\rho_\mathrm{exact}(t)=m(0)/V(t)$, where $m(0)$ is the initial mass in the geometry, and $V(t)$ is the volume enclosed by the geometry at time $t$. Fig.~\ref{fig:DensityVsTime} shows a comparison of the computed and exact average density within the geometry as a function of time over a time period of three cycles of oscillation, showing good quantitative agreement. To test the conservative nature of the scheme, the percentage error in mass defined as $\Delta m (\%) = \frac{m(t)-m(0)}{m(0)}\times100$, where $m(t) = \int\limits_{V(t)}\rho(t)\,dV$, is computed as a function of time for two mesh sizes -- $32^3$ and $64^3$, and shown in Fig.~\ref{fig:DeltaMVsTime}.
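The closed-system density check above can be sketched as follows (illustrative Python; the piston-face area and clearance volume below are placeholder values rather than the actual geometry, since only the volume ratio $V(0)/V(t)$ enters the check):

```python
import numpy as np

def piston_position(t):
    """Prescribed piston motion, x(t) = 0.0375 + 0.02 cos(375 t) m."""
    return 0.0375 + 0.02 * np.cos(375.0 * t)

def exact_mean_density(t, rho0=1.226,
                       area=np.pi * 0.02**2,   # assumed piston-face area
                       v_fixed=2.0e-5):        # assumed clearance volume
    """Exact mean density rho_exact(t) = m(0)/V(t) in a closed,
    piston-driven volume, with V(t) = v_fixed + area * x(t)."""
    v0 = v_fixed + area * piston_position(0.0)
    vt = v_fixed + area * piston_position(t)
    return rho0 * v0 / vt
```

At $t=0$ the ratio is unity; half a cycle later the piston has swept toward the head, the enclosed volume is smaller, and the mean density rises accordingly.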
The maximum percentage error for the $32^3$ mesh is $0.92\%$, and reduces to $0.31\%$ for the $64^3$ mesh. Since the error in mass reduces with refinement, the algorithm maintains mass conservation to good accuracy. \begin{figure}[htpb!] \subfigure[] { \includegraphics[trim=8.0cm 6.0cm 9.0cm 5.0cm, clip=true,scale=0.35]{ICEGeometry_1} \label{fig:ICEGeometry_1} } \hspace{3cm} \subfigure[] { \includegraphics[trim=10.0cm 1.0cm 8.5cm 0.0cm, clip=true,scale=0.40]{ICEGeometry_2} \label{fig:ICEGeometry_2} } \caption{(a) The 0.5 isocontour of the volume fraction and the cross section of the geometry at the bottom dead center and (b) the cross-section of the geometry at the bottom dead center showing the dimensions in centimeters (not to scale).} \label{fig:ICEGeometry} \end{figure} \begin{figure}[htpb!] \subfigure[] { \includegraphics[trim=2.0cm 1.0cm 8.0cm 1.0cm, clip=true,scale=0.22]{ICEVelocityMesh_1} \label{fig:ICEVelocityMesh_1} } \hspace{3cm} \subfigure[] { \includegraphics[trim=2.0cm 1.0cm 8.0cm 1.0cm, clip=true,scale=0.22]{ICEVelocityMesh_2} \label{fig:ICEVelocityMesh_2} }\\ \subfigure[] { \includegraphics[trim=2.0cm 1.0cm 8.0cm 1.0cm, clip=true,scale=0.22]{ICEVelocityMesh_3} \label{fig:ICEVelocityMesh_3} } \hspace{3cm} \subfigure[] { \includegraphics[trim=2.0cm 1.0cm 8.0cm 1.0cm, clip=true,scale=0.22]{ICEVelocityMesh_4} \label{fig:ICEVelocityMesh_4} }\\ \subfigure[] { \includegraphics[trim=0.0cm 0.0cm 1.0cm 1.0cm, clip=true,scale=0.4]{DensityVsTime} \label{fig:DensityVsTime} } \subfigure[] { \includegraphics[trim=0.0cm 0.0cm 1.0cm 1.0cm, clip=true,scale=0.4]{DeltaMVsTime} \label{fig:DeltaMVsTime} } \caption{Contours of axial velocity (m/s) and the 3-level mesh at (a) $t=0.015$ s, (b) $t=0.021$ s and (c) $t=0.024$ s, (d) contours of axial velocity and 0.5 isocontour of volume fraction showing the surface of the geometry, (e) average density as a function of time and (f) the percentage error in mass as a function of time for mesh sizes $32^3$ and $64^3$.} \label{fig:ICE}
\end{figure} \subsection{Shock-cone interaction} To demonstrate the capability of the numerical algorithm to simulate high-speed flows with complex geometries and sharp corners in three dimensions, we extend Schardin's problem described in Section~\ref{sec:shock-wedge} to three dimensions, similar to \citet{bennett2018moving}. This test case consists of a rigid right circular cone of radius $R=0.02$ m and height $h=0.02$ m, interacting with a stationary Mach 1.34 shock wave. The domain size is 0.1 m $\times$ 0.1 m $\times$ 0.05 m with a base mesh size 32 $\times$ 64 $\times$ 64, with three levels of refinement. The nose of the cone is located at $(x,y,z)=(0.0,0.0,-0.02)$ m at $t=0$. The initial condition corresponds to a stationary shock wave located at $z=-0.02$ m, characterized by the left and right-hand states given by $\rho_L=0.595$ kg/m$^3$, $u_L=459.54$ m/s, $p_L=5e4$ Pa, and $\rho_R=0.944$ kg/m$^3$, $u_R=289.86$ m/s, $p_R=9.6e4$ Pa. The cone has a constant horizontal velocity of $u=u_L=459.54$ m/s. The refinement criterion tags all cut-cells, and has an additional gradient based detector for resolving high-gradient regions. \par Fig.~\ref{fig:MovingCone_Schlieren_1}-\ref{fig:MovingCone_Schlieren_6} show the temporal evolution of the numerical Schlieren ($\vert\nabla\rho\vert$) on two perpendicular slices. As the cone interacts with the shock, the flow features are noticeably different compared to the shock-wedge interaction case. The high-gradient features evolve spherically, and the interaction of these regions with the strong vortices at the rear of the cone lead to the creation of multiple weak shocks as shown in Fig.~\ref{fig:MovingCone_Schlieren_6}. Fig.~\ref{fig:MovingCone_Schlieren_Mesh} shows the 4-level mesh on the vertical slice showing the effectiveness of the refinement criterion in resolving the high-gradient regions.
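The left and right states used in the Mach 1.34 cases can be cross-checked against the normal-shock (Rankine-Hugoniot) relations, sketched here (illustrative Python):

```python
def normal_shock_ratios(mach, gamma=1.4):
    """Rankine-Hugoniot pressure and density ratios across a normal
    shock with upstream Mach number `mach`."""
    m2 = mach * mach
    p_ratio = 1.0 + 2.0 * gamma / (gamma + 1.0) * (m2 - 1.0)
    rho_ratio = (gamma + 1.0) * m2 / ((gamma - 1.0) * m2 + 2.0)
    return p_ratio, rho_ratio
```

For Mach 1.34 these relations give $p_R/p_L\approx1.93$ and $\rho_R/\rho_L\approx1.59$, consistent with the initial states listed above.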
Fig.~\ref{fig:MovingCone_Schlieren_FineLevel} shows an instantaneous image of the finest level of refinement, showing the three-dimensional spherical-like nature of the high-gradient regions. \begin{figure}[htpb!] \subfigure[] { \includegraphics[trim=3.2cm 5.0cm 2.0cm 7.0cm, clip=true,scale=0.28]{MovingCone_Schlieren_1} \label{fig:MovingCone_Schlieren_1} } \subfigure[] { \includegraphics[trim=3.2cm 5.0cm 2.0cm 7.0cm, clip=true,scale=0.28]{MovingCone_Schlieren_3} \label{fig:MovingCone_Schlieren_3} } \subfigure[] { \includegraphics[trim=3.2cm 5.0cm 2.0cm 7.0cm, clip=true,scale=0.28]{MovingCone_Schlieren_4} \label{fig:MovingCone_Schlieren_4} } \subfigure[] { \includegraphics[trim=3.2cm 5.0cm 2.0cm 7.0cm, clip=true,scale=0.28]{MovingCone_Schlieren_6} \label{fig:MovingCone_Schlieren_6} } \subfigure[] { \includegraphics[trim=3.2cm 5.0cm 2.0cm 7.0cm, clip=true,scale=0.28]{MovingCone_Schlieren_Mesh} \label{fig:MovingCone_Schlieren_Mesh} } \subfigure[] { \includegraphics[trim=3.5cm 5.0cm 2.0cm 7.0cm, clip=true,scale=0.28]{MovingCone_Schlieren_FineLevel} \label{fig:MovingCone_Schlieren_FineLevel} } \caption{The 0.5 isocontour of volume fraction and the numerical Schlieren ($\vert\nabla\rho\vert$) images on two perpendicular planes at (a) $t=28.7$ $\mu$s, (b) $t=63.7$ $\mu$s, (c) $t=79.5$ $\mu$s, (d) $t=118$ $\mu$s, (e) the 4-level mesh shown on the vertical plane, and (f) the finest level of refinement showing the high-gradient regions of the flow.} \label{fig:MovingCone} \end{figure} {\color{black} \subsection{Horizontally moving cylinder in initially quiescent flow} This test case consists of a horizontally moving cylinder in initially quiescent ambient fluid with $\rho_\infty=1.226$ kg/m$^3$, $\mu_\infty=0.613$ kg/ms, at a Reynolds number $Re=\rho_\infty U_c D/\mu_\infty=40$, based on the cylinder diameter $D=0.4$ m and cylinder velocity $U_c=50$ m/s. 
To evaluate the performance of the moving EB numerical scheme, the pressure and skin friction coefficients over the surface of the cylinder are computed and compared with results from the literature. The pressure coefficient over the surface of the cylinder is given by $C_p = (p-p_\infty)/(1/2\rho_\infty U_c^2)$, and the skin friction coefficient is given by $C_f = \tau_f/(1/2\rho_\infty U_c^2)$, where $\tau_f$ is the shear stress tangential to the surface given by \begin{eqnarray}\label{eqn:tauf} \tau_f = (\tau_{yx}n_x+\tau_{yy}n_y)n_x-(\tau_{xx} n_x + \tau_{xy} n_y)n_y, \end{eqnarray} where $n_x$ and $n_y$ are the components of the surface normal on the body (pointing towards the wall). The domain size is 20 m $\times$ 10 m, with a base mesh size of 512 $\times$ 256 and 3 levels of refinement, which gives a resolution of $\sim$82 points across the cylinder diameter. At $t=0$, the center of the cylinder is located at $(x,y)=(2.0,0.0)$. A geometric refinement criterion is used to track the cylinder and its vicinity, which tags all cells in the domain that satisfy $x_c(t)-3.0 < x < x_c(t)+3.0$ and $y_c(t)-0.25<y<y_c(t)+0.25$, where $(x_c(t),y_c(t))$ are the coordinates of the center of the cylinder at any instant of time. The comparison is performed at a non-dimensional time of $t^*= tU_c/D = 31.25$. Fig.~\ref{fig:MovingCylinder_Re40}(a)-(c) show the instantaneous contours of velocity magnitude at $t=0.075$ s, 0.15 s, and 0.3 s. Figs.~\ref{fig:MovingCylinder_Pressure} and \ref{fig:MovingCylinder_SkinFriction} show the comparison of the pressure coefficient and the skin friction coefficient over the surface of the cylinder, respectively. The angle $\theta$ is measured from the stagnation point of the cylinder. Good quantitative comparison is observed, although minor oscillations can be seen in the surface data, as has been observed by \citet{al2017versatile}. \begin{figure}[htpb!]
\centering \subfigure[] { \includegraphics[trim=1.0cm 15.0cm 1.0cm 11.0cm, clip=true,scale=0.4]{MovingCylinder_p075} \label{fig:MovingCylinder_p075} }\\ \subfigure[] { \includegraphics[trim=1.0cm 15.0cm 1.0cm 11.0cm, clip=true,scale=0.4]{MovingCylinder_p15} \label{fig:MovingCylinder_p15} }\\ \subfigure[] { \includegraphics[trim=1.0cm 15.0cm 1.0cm 11.0cm, clip=true,scale=0.4]{MovingCylinder_p3} \label{fig:MovingCylinder_p3} } \caption{Instantaneous contours of velocity magnitude (m/s) for the horizontally moving cylinder at $Re=40$. The boxes show the three levels of mesh refinement above the base level.} \label{fig:MovingCylinder_Re40} \end{figure} \begin{figure}[htpb!] \subfigure[] { \includegraphics[trim=0.5cm 4.5cm 14.0cm 7.0cm, clip=true,scale=0.6]{MovingCylinder_SurfaceData} \label{fig:MovingCylinder_Pressure} } \subfigure[] { \includegraphics[trim=14.0cm 4.5cm 0.5cm 7.0cm, clip=true,scale=0.6]{MovingCylinder_SurfaceData} \label{fig:MovingCylinder_SkinFriction} } \caption{Comparison of (a) pressure coefficient and (b) skin friction coefficient on the surface of the cylinder with the results of \citet{tseng2003ghost} and \citet{muralidharan2018simulation}.} \end{figure} \subsection{Inline oscillating cylinder in initially quiescent flow} A horizontally oscillating cylinder in initially quiescent flow has been widely studied in the literature both experimentally and numerically \citep{dutsch1998low,iliadis1998viscous,al2017versatile}. The test case consists of a cylinder of diameter $D$ in a quiescent fluid with an imposed oscillatory motion given by \begin{eqnarray*} x(t) = -A_e\sin(2\pi f_e t), \end{eqnarray*} where $A_e$ is the amplitude of oscillation with frequency $f_e$.
The relevant non-dimensional parameters are the Reynolds number $Re = \rho_\infty U_\mathrm{max}D/\mu_\infty$ and the Keulegan--Carpenter number $KC = U_\mathrm{max}/(f_e D)$, where $U_\mathrm{max}=2\pi f_e A_e$ is the velocity amplitude attained by the cylinder during the oscillatory motion. Consistent with the test case of \citet{dutsch1998low}, we use $KC=5$, which gives $A_e = 5D/2\pi$ m, and $Re=100$, which gives $f_e = 100\mu_\infty/(5D^2\rho_\infty)$ cycles per second. In the current case, $D=0.4$ m, $\rho_\infty = 1.226$ kg/m$^3$, and $\mu_\infty = 0.3$ kg/ms, which give $A_e = 0.3183$ m and $f_e = 30.58727$ cycles per second. The domain size is 20 m $\times$ 10 m, with a base mesh size of 512 $\times$ 256 and 2 levels of refinement, which gives a resolution of $\sim$41 points across the cylinder diameter. Since the amplitude of oscillation is small compared to the size of the domain, a static refinement criterion is used, which tags all cells in the domain that satisfy $9.0 < x < 11.0$ and $-0.5<y<0.5$. Fig.~\ref{fig:InlineOscillatingCylinder_VortAndPres} shows the instantaneous contours of vorticity and pressure at the phase positions $\theta=2\pi f_e t = 0^\circ, 96^\circ, 192^\circ, 288^\circ$. As the cylinder oscillates, symmetric vortices develop, and when the direction of oscillation is reversed, the vortex pair separates and a new pair of vortices is formed, resulting in a wake reversal, as observed by \citet{dutsch1998low}. Fig.~\ref{fig:InlineOscillatingCylinder_Velocities} shows the comparison of the normalized velocities in the horizontal and vertical directions at four streamwise locations, $x/D = -0.6, 0.0, 0.6, 1.2$, for the phase positions $\theta=2\pi f_e t = 180^\circ, 210^\circ, 330^\circ$.
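The quoted amplitude and frequency follow directly from the definitions of $KC$ and $Re$; a quick sketch using the values given above:

```python
import math

# Recover the oscillation amplitude and frequency from the non-dimensional
# parameters quoted in the text: KC = U_max/(f_e*D) with U_max = 2*pi*f_e*A_e,
# and Re = rho*U_max*D/mu.
D, rho, mu = 0.4, 1.226, 0.3
KC, Re = 5.0, 100.0

A_e = KC * D / (2 * math.pi)         # amplitude: KC*D/(2*pi) ~ 0.3183 m
f_e = Re * mu / (KC * rho * D**2)    # frequency: ~30.587 cycles per second
U_max = 2 * math.pi * f_e * A_e      # equals KC*f_e*D by construction
```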
The total drag force on the cylinder in the streamwise direction is given by the streamwise component of the force vector \begin{eqnarray*} F_D = x~\mathrm{component~of}\int\limits_S (p\bm{n}- \bm{\tau}\cdot\bm{n})\,dA = \int\limits_S (pn_x - (\tau_{xx}n_x+\tau_{xy}n_y))\,dA, \end{eqnarray*} where $S$ denotes the surface of the cylinder. Fig.~\ref{fig:DragHistory_InlineOscillatingCylinder} shows the comparison of the drag force over the cylinder as a function of time. The drag force has been normalized in the same manner as the results of \citet{dutsch1998low}. Good quantitative comparison is observed for all quantities. \begin{figure}[htpb!] \centering \subfigure[]{ \includegraphics[trim=4.0cm 9.0cm 2.0cm 5.0cm, clip=true,scale=0.25]{InlineOscillatingCylinder_Vort1} } \subfigure[]{ \includegraphics[trim=4.0cm 9.0cm 2.0cm 5.0cm, clip=true,scale=0.25]{InlineOscillatingCylinder_Pres1} }\\ \subfigure[]{ \includegraphics[trim=4.0cm 9.0cm 2.0cm 7.0cm, clip=true,scale=0.25]{InlineOscillatingCylinder_Vort2} } \subfigure[]{ \includegraphics[trim=4.0cm 9.0cm 2.0cm 7.0cm, clip=true,scale=0.25]{InlineOscillatingCylinder_Pres2} }\\ \subfigure[]{ \includegraphics[trim=4.0cm 9.0cm 2.0cm 7.0cm, clip=true,scale=0.25]{InlineOscillatingCylinder_Vort3} } \subfigure[]{ \includegraphics[trim=4.0cm 9.0cm 2.0cm 7.0cm, clip=true,scale=0.25]{InlineOscillatingCylinder_Pres3} }\\ \subfigure[]{ \includegraphics[trim=4.0cm 9.0cm 2.0cm 7.0cm, clip=true,scale=0.25]{InlineOscillatingCylinder_Vort4} } \subfigure[]{ \includegraphics[trim=4.0cm 9.0cm 2.0cm 7.0cm, clip=true,scale=0.25]{InlineOscillatingCylinder_Pres4} } \caption{Instantaneous contours of (a,c,e,g) vorticity ($s^{-1}$) and (b,d,f,h) pressure (N/m$^2$) for the inline oscillating cylinder at $Re=100$ and $KC=5$ for different phase positions $\theta = 2\pi f_e t = 0^\circ, 96^\circ, 192^\circ, 288^\circ$ (top to bottom). The boxes show the two levels of mesh refinement above the base level.
} \label{fig:InlineOscillatingCylinder_VortAndPres} \end{figure} \begin{figure}[htpb!] \centering \subfigure[]{ \includegraphics[trim=0.0cm 0.0cm 0.5cm 0.5cm, clip=true,scale=0.35]{Zvel_180_IOC.png} } \subfigure[]{ \includegraphics[trim=0.0cm 0.0cm 0.5cm 0.5cm, clip=true,scale=0.35]{Yvel_180_IOC.png} }\\ \subfigure[]{ \includegraphics[trim=0.0cm 0.0cm 0.5cm 0.5cm, clip=true,scale=0.35]{Zvel_210_IOC.png} } \subfigure[]{ \includegraphics[trim=0.0cm 0.0cm 0.5cm 0.5cm, clip=true,scale=0.35]{Yvel_210_IOC.png} }\\ \subfigure[]{ \includegraphics[trim=0.0cm 0.0cm 0.5cm 0.5cm, clip=true,scale=0.35]{Zvel_330_IOC.png} } \subfigure[]{ \includegraphics[trim=0.0cm 0.0cm 0.5cm 0.5cm, clip=true,scale=0.35]{Yvel_330_IOC.png} } \caption{Comparison of the horizontal (left) and vertical (right) components of normalized velocity at different streamwise locations as a function of the normalized vertical coordinate at different phase positions $\theta = 2\pi f_e t = 180^\circ, 210^\circ, 330^\circ$ (top to bottom), with the symbols showing the experimental results of \citet{dutsch1998low}.} \label{fig:InlineOscillatingCylinder_Velocities} \end{figure} \begin{figure}[htpb!] \centering \includegraphics[trim=0.0cm 0.0cm 1.0cm 1.0cm, clip=true,scale=0.45]{DragHistory_InlineOscillatingCylinder} \caption{The normalized drag force on the inline oscillating cylinder for one full cycle of oscillation.} \label{fig:DragHistory_InlineOscillatingCylinder} \end{figure} \subsection{Transversely oscillating cylinder in a free stream at \emph{\textbf{Re}} = 185} A transversely oscillating cylinder in a free stream of initially uniform velocity at various frequencies is a test case that has been widely studied \citep{guilmineau2002numerical}. The vertical position of the cylinder as a function of time is given by \begin{eqnarray*} y(t) = A_e\cos(2\pi f_e t), \end{eqnarray*} where $A_e=0.2D$ is the amplitude of oscillation with frequency $f_e$.
The cylinder diameter is $D=0.4$ m, and the free stream has pressure $p=101325.0$ N/m$^2$, density $\rho_\infty=1.226$ kg/m$^3$, flow velocity $U_\infty=50$ m/s and viscosity $\mu_\infty=0.13254$ kg/ms, which gives the Reynolds number $Re=\rho_\infty U_\infty D/\mu_\infty=185$. The test case is performed for two different oscillation frequencies given by $f_e/f_0=0.8$ and 1.2, where $f_0$ is the natural frequency of vortex shedding from the cylinder. For $Re=185$, the natural frequency of oscillation, $f_0$, corresponds to a Strouhal number $St=f_0D/U_\infty=0.195$ \citep{williamson1998series}, which gives $f_0=24.375$ cycles per second. The domain size is 20 m $\times$ 10 m, with a base mesh size of 512 $\times$ 256 and 3 levels of refinement, which gives a resolution of $\sim$82 points across the cylinder diameter. A geometric refinement criterion is used to capture the flow features in the vicinity of the cylinder, which tags all cells in the domain that satisfy $7.0 < x < 12.0$ and $-0.25<y<0.25$. Fig.~\ref{fig:Trans_Oscillating_Cylinder_Vort}(a)-(d) show the contours of instantaneous vorticity as the cylinder oscillates. The oscillation leads to a cyclic variation of the surface quantities over the cylinder. Fig.~\ref{fig:OscillatingCylinder_Pressure_HOLS} (a) and (b) show the comparison of the pressure coefficient $C_p = (p-p_\infty)/(1/2\rho_\infty U_\infty^2)$ over the surface of the cylinder when the cylinder is at the extreme upper position for oscillation frequencies of $f_e/f_0 =$ 0.8 and 1.2, respectively, with the body-fitted results of \citet{guilmineau2002numerical}. Fig.~\ref{fig:OscillatingCylinder_SkinFriction_HOLS} (a) and (b) show the comparison of the skin friction coefficient $C_f = \tau_f/(1/2\rho_\infty U_\infty^2)$, where $\tau_f$ is the shear stress tangential to the surface given by Eqn.~\ref{eqn:tauf}. The angle $\theta$ is measured from the stagnation point of the cylinder.
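The surface diagnostics defined above can be sketched in a few lines; the stress components and surface normals passed to these helper functions are illustrative placeholders, not values from the paper:

```python
# Surface diagnostics as defined in the text. Inputs are illustrative.
def pressure_coefficient(p, p_inf, rho_inf, U):
    return (p - p_inf) / (0.5 * rho_inf * U**2)

def tangential_shear(tau_xx, tau_xy, tau_yy, n_x, n_y):
    # Eqn. (tauf), with tau_yx = tau_xy for a symmetric stress tensor:
    # projects the viscous traction onto the surface tangent (-n_y, n_x).
    return (tau_xy * n_x + tau_yy * n_y) * n_x - (tau_xx * n_x + tau_xy * n_y) * n_y

# Natural shedding frequency from the quoted Strouhal number St = f_0*D/U:
f_0 = 0.195 * 50.0 / 0.4    # ~24.375 cycles per second
```

At a stagnation point, where $p - p_\infty = \frac{1}{2}\rho_\infty U^2$, the pressure coefficient evaluates to 1, which provides a simple consistency check.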
Good quantitative comparison is observed, although minor oscillations can be seen. \begin{figure}[htpb!] \subfigure[] { \includegraphics[trim=1.0cm 12.0cm 0.0cm 11.0cm, clip=true,scale=0.2]{Trans_Oscillating_Cylinder_feqf00033.png} }\quad\quad\quad\quad \subfigure[] { \includegraphics[trim=1.0cm 12.0cm 0.0cm 11.0cm, clip=true,scale=0.2]{Trans_Oscillating_Cylinder_feqf00034.png} }\\ \subfigure[] { \includegraphics[trim=1.0cm 12.0cm 0.0cm 11.0cm, clip=true,scale=0.2]{Trans_Oscillating_Cylinder_feqf00035.png} }\quad\quad\quad\quad \subfigure[] { \includegraphics[trim=1.0cm 12.0cm 0.0cm 11.0cm, clip=true,scale=0.2]{Trans_Oscillating_Cylinder_feqf00036.png} } \caption{Instantaneous vorticity contours for the transversely oscillating cylinder at $Re=185$. The boxes show the three levels of mesh refinement above the base level.} \label{fig:Trans_Oscillating_Cylinder_Vort} \end{figure} \begin{figure}[htpb!] \subfigure[] { \includegraphics[trim=5.5cm 5.8cm 6.5cm 5.5cm, clip=true,scale=0.54]{OscillatingCylinder_Pressure_feqp8f0_HOLS.pdf} \label{fig:OscillatingCylinder_Pressure_feqp8f0_HOLS} } \subfigure[] { \includegraphics[trim=0.1cm 0.1cm 1.0cm 1.41cm, clip=true,scale=0.5]{OscillatingCylinder_Pressure_feq1p2f0_HOLS.png} \label{fig:OscillatingCylinder_Pressure_feq1p2f0_HOLS} } \caption{Comparison of the pressure coefficient on the surface of the cylinder when the cylinder is at the extreme upper position for (a) $f_e/f_0=0.8$ and (b) $f_e/f_0=1.2$. Comparison is made with the results of \citet{guilmineau2002numerical} shown in symbols.} \label{fig:OscillatingCylinder_Pressure_HOLS} \end{figure} \begin{figure}[htpb!] 
\subfigure[] { \includegraphics[trim=6.5cm 3.5cm 8.0cm 3.5cm, clip=true,scale=0.45]{OscillatingCylinder_SkinFriction_feqp8f0_HOLS.pdf} \label{fig:OscillatingCylinder_SkinFriction_feqp8f0_HOLS} } \subfigure[] { \includegraphics[trim=0.1cm 0.1cm 1.0cm 1.41cm, clip=true,scale=0.5]{OscillatingCylinder_SkinFriction_feq1p2f0_HOLS.png} \label{fig:OscillatingCylinder_SkinFriction_feq1p2f0_HOLS} } \caption{Comparison of the skin friction coefficient on the surface of the cylinder when the cylinder is at the extreme upper position for (a) $f_e/f_0=0.8$ and (b) $f_e/f_0=1.2$. Comparison is made with the results of \citet{guilmineau2002numerical} shown in symbols.} \label{fig:OscillatingCylinder_SkinFriction_HOLS} \end{figure} \section{Conclusions}\label{sec:conclusions} A numerical framework has been developed and validated for the compressible Navier--Stokes equations with moving boundaries, using an embedded boundary approach within the block-structured adaptive mesh refinement framework of AMReX. The flow solver is developed using a finite volume formulation with a conservative, unsplit, cut-cell approach, and a ghost-cell approach has been developed for computing the inviscid fluxes on the moving embedded boundary faces. A 3$^\mathrm{rd}$ order least squares method is used to approximate the gradients of velocities at the EB faces in the computation of the viscous fluxes. The algorithm is validated against analytical and experimental results, and good quantitative comparison is observed. Simulations of shock-cylinder interaction and shock-wedge interaction with adaptive mesh refinement showed the capability of the algorithm to handle high-speed flows with high-gradient regions such as shock waves, as well as flows with sharp corners. The transonic buffet phenomenon of an oscillating NACA 0012 airfoil was simulated, and the variation of the coefficient of lift with the angle of attack showed good quantitative comparison with previous results in the literature.
As a test of the conservative nature of the scheme, a closed system was simulated -- an oscillating piston in a cylinder. The percentage error in mass inside the cylinder was found to decrease with refinement, demonstrating that the scheme is conservative. Viscous test cases of a horizontally moving cylinder, an inline oscillating cylinder and a transversely oscillating cylinder were performed, and the surface quantities (pressure and skin friction coefficients) were computed and show good quantitative agreement with results in the literature. \section*{Acknowledgments} This research was supported by the Exascale Computing Project (17-SC-20-SC), a collaborative effort of the U.S. Department of Energy Office of Science and the National Nuclear Security Administration. We also gratefully acknowledge the staff at NREL-HPC for the compute time on the Eagle supercomputer, and their continued support. \clearpage \section*{Appendix} The idea behind computing gradients at a point (green square in Fig.~\ref{fig:EBGradient_LS}) using the least squares technique is to minimize the cumulative error in the fit to a function $\phi(x,y,z)$ over a chosen neighborhood region, with the minimization being done with respect to the gradient quantities at the point in consideration. The value of the function at any point in the neighborhood can be written using a Taylor series expansion about the point at which the gradient needs to be computed. This estimated value will differ from the actual value at the point, and the difference between them is the error estimate. The order of the terms retained in the expansion determines the order of the least squares approximation.
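As a concrete illustration of this procedure, the gradient-only (2nd-order) fit reduces to solving a 3$\times$3 system of normal equations; a minimal sketch with illustrative neighborhood data follows. For an exactly linear field the fit recovers the gradient exactly:

```python
import numpy as np

# Minimal sketch of the gradient-only (2nd-order) least-squares fit described
# above: minimize sum_i (phi_i - phi_0 - g . dx_i)^2 over the gradient g.
# The neighborhood offsets and field values are illustrative.
rng = np.random.default_rng(0)
g_true = np.array([1.5, -2.0, 0.75])     # gradient of an exactly linear field
dx = rng.standard_normal((10, 3))        # offsets of 10 neighbor centroids
phi0 = 3.0
phi = phi0 + dx @ g_true                 # sampled values of the linear field

A = dx.T @ dx                            # 3x3 normal-equation matrix
b = dx.T @ (phi - phi0)                  # right-hand side
g = np.linalg.solve(A, b)                # recovered gradient (= g_true here)
```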
For a 3$^\mathrm{rd}$ order least squares method, the cumulative error that needs to be minimized is given by \begin{eqnarray*} E = \sum\limits_{i=1}^N \Big(\phi_i - \Big(\phi_0 &+& \phi_x \Delta x_i + \phi_y\Delta y_i + \phi_z\Delta z_i \\ &+& \phi_{xx}\Delta x_i^2 + \phi_{yy}\Delta y_i^2 + \phi_{zz}\Delta z_i^2\\ &+& \phi_{xy}\Delta x_i\Delta y_i + \phi_{yz}\Delta y_i\Delta z_i + \phi_{zx}\Delta z_i\Delta x_i\Big)\Big)^2, \end{eqnarray*} where $i$ loops over the neighborhood region. In Fig.~\ref{fig:EBGradient_LS}, the neighborhood region consists of the blue circles, which are the centroids of the fluid volumes (the cell containing the face on which the gradient is computed is excluded from the neighborhood region \citep{schwartz2006cartesian}). This error is now minimized with respect to each of the gradient quantities. For example, $\cfrac{\partial E}{\partial \phi_x} = 0$ gives \begin{eqnarray*} \sum\limits_{i=1}^N (\phi_i -\phi_0)\Delta x_i - \Delta x_i&\times&\Big(\phi_x \Delta x_i + \phi_y\Delta y_i + \phi_z\Delta z_i \\ &+& \phi_{xx}\Delta x_i^2 + \phi_{yy}\Delta y_i^2 + \phi_{zz}\Delta z_i^2\\ &+& \phi_{xy}\Delta x_i\Delta y_i + \phi_{yz}\Delta y_i\Delta z_i + \phi_{zx}\Delta z_i\Delta x_i\Big) = 0, \end{eqnarray*} which can be rearranged to give \begin{eqnarray*} \sum\limits_{i=1}^N \Delta x_i&\times&\Big(\phi_x \Delta x_i + \phi_y\Delta y_i + \phi_z\Delta z_i \\ &+& \phi_{xx}\Delta x_i^2 + \phi_{yy}\Delta y_i^2 + \phi_{zz}\Delta z_i^2\\ &+& \phi_{xy}\Delta x_i\Delta y_i + \phi_{yz}\Delta y_i\Delta z_i + \phi_{zx}\Delta z_i\Delta x_i\Big) = \sum\limits_{i=1}^N (\phi_i -\phi_0)\Delta x_i.
\end{eqnarray*} Repeating the above procedure for each of the 9 gradient quantities -- $\phi_x$, $\phi_y$, $\phi_z$, $\phi_{xx}$, $\phi_{yy}$, $\phi_{zz}$, $\phi_{xy}$, $\phi_{yz}$ and $\phi_{zx}$, gives a 9$\times$ 9 system of equations as \begin{equation*} \begin{pmatrix} \sum\Delta x_i\times\Big[\Delta x_i\quad\Delta y_i\quad\Delta z_i\quad\Delta x_i^2\quad\Delta y_i^2\quad\Delta z_i^2\quad\Delta x_i\Delta y_i\quad\Delta y_i\Delta z_i\quad\Delta z_i\Delta x_i \Big]\\ \\ \sum\Delta y_i\times\Big[\Delta x_i\quad\Delta y_i\quad\Delta z_i\quad\Delta x_i^2\quad\Delta y_i^2\quad\Delta z_i^2\quad\Delta x_i\Delta y_i\quad\Delta y_i\Delta z_i\quad\Delta z_i\Delta x_i \Big]\\ \\ \sum\Delta z_i\times\Big[\Delta x_i\quad\Delta y_i\quad\Delta z_i\quad\Delta x_i^2\quad\Delta y_i^2\quad\Delta z_i^2\quad\Delta x_i\Delta y_i\quad\Delta y_i\Delta z_i\quad\Delta z_i\Delta x_i \Big]\\ \\ \sum\Delta x_i^2\times\Big[\Delta x_i\quad\Delta y_i\quad\Delta z_i\quad\Delta x_i^2\quad\Delta y_i^2\quad\Delta z_i^2\quad\Delta x_i\Delta y_i\quad\Delta y_i\Delta z_i\quad\Delta z_i\Delta x_i \Big]\\ \\ \sum\Delta y_i^2\times\Big[\Delta x_i\quad\Delta y_i\quad\Delta z_i\quad\Delta x_i^2\quad\Delta y_i^2\quad\Delta z_i^2\quad\Delta x_i\Delta y_i\quad\Delta y_i\Delta z_i\quad\Delta z_i\Delta x_i \Big]\\ \\ \sum\Delta z_i^2\times\Big[\Delta x_i\quad\Delta y_i\quad\Delta z_i\quad\Delta x_i^2\quad\Delta y_i^2\quad\Delta z_i^2\quad\Delta x_i\Delta y_i\quad\Delta y_i\Delta z_i\quad\Delta z_i\Delta x_i \Big]\\ \\ \sum\Delta x_i\Delta y_i\times\Big[\Delta x_i\quad\Delta y_i\quad\Delta z_i\quad\Delta x_i^2\quad\Delta y_i^2\quad\Delta z_i^2\quad\Delta x_i\Delta y_i\quad\Delta y_i\Delta z_i\quad\Delta z_i\Delta x_i \Big]\\ \\ \sum\Delta y_i\Delta z_i\times\Big[\Delta x_i\quad\Delta y_i\quad\Delta z_i\quad\Delta x_i^2\quad\Delta y_i^2\quad\Delta z_i^2\quad\Delta x_i\Delta y_i\quad\Delta y_i\Delta z_i\quad\Delta z_i\Delta x_i \Big]\\ \\ \sum\Delta z_i\Delta x_i\times\Big[\Delta x_i\quad\Delta y_i\quad\Delta 
z_i\quad\Delta x_i^2\quad\Delta y_i^2\quad\Delta z_i^2\quad\Delta x_i\Delta y_i\quad\Delta y_i\Delta z_i\quad\Delta z_i\Delta x_i \Big]\\ \end{pmatrix} \begin{pmatrix} \cfrac{\partial \phi}{\partial x}\\ \\ \cfrac{\partial \phi}{\partial y}\\ \\ \cfrac{\partial \phi}{\partial z}\\ \\ \cfrac{\partial^2\phi}{\partial x^2}\\ \\ \cfrac{\partial^2\phi}{\partial y^2}\\ \\ \cfrac{\partial^2\phi}{\partial z^2}\\ \\ \cfrac{\partial^2\phi}{\partial x\partial y}\\ \\ \cfrac{\partial^2\phi}{\partial y\partial z}\\ \\ \cfrac{\partial^2\phi}{\partial z\partial x}\\ \end{pmatrix} = \begin{pmatrix} \sum\limits_{N} (\phi_i - \phi_0)\Delta x_i\\ \\ \sum\limits_{N} (\phi_i - \phi_0)\Delta y_i\\ \\ \sum\limits_{N} (\phi_i - \phi_0)\Delta z_i\\ \\ \sum\limits_{N} (\phi_i - \phi_0)\Delta x_i^2\\ \\ \sum\limits_{N} (\phi_i - \phi_0)\Delta y_i^2\\ \\ \sum\limits_{N} (\phi_i - \phi_0)\Delta z_i^2\\ \\ \sum\limits_{N} (\phi_i - \phi_0)\Delta x_i\Delta y_i\\ \\ \sum\limits_{N} (\phi_i - \phi_0)\Delta y_i\Delta z_i\\ \\ \sum\limits_{N} (\phi_i - \phi_0)\Delta z_i\Delta x_i\\ \end{pmatrix}, \end{equation*} which is solved using LAPACK \citep{anderson1990lapack} to obtain the gradients at the point. For a 2$^\mathrm{nd}$ order least squares method, the system of equations is given by \begin{equation*} \begin{pmatrix} \sum\limits_{N} \Delta x_i^2 & \sum\limits_{N} \Delta x_i\Delta y_i & \sum\limits_{N} \Delta x_i\Delta z_i \\ & & \\ \sum\limits_{N} \Delta x_i\Delta y_i & \sum\limits_{N} \Delta y_i^2 & \sum\limits_{N} \Delta y_i\Delta z_i \\ & & \\ \sum\limits_{N} \Delta x_i\Delta z_i & \sum\limits_{N} \Delta y_i\Delta z_i & \sum\limits_{N} \Delta z_i^2 \end{pmatrix} \begin{pmatrix} \cfrac{\partial \phi}{\partial x}\\ \\ \cfrac{\partial \phi}{\partial y}\\ \\ \cfrac{\partial \phi}{\partial z} \end{pmatrix} = \begin{pmatrix} \sum\limits_{N} (\phi_i - \phi_0)\Delta x_i\\ \\ \sum\limits_{N} (\phi_i - \phi_0)\Delta y_i\\ \\ \sum\limits_{N} (\phi_i - \phi_0)\Delta z_i \end{pmatrix}. 
\end{equation*} } \clearpage \bibliographystyle{plainnat}
\section{Introduction} It is well known that sparse estimation problems can be formulated as convex optimization problems using the $\ell_1$-norm. The $\ell_1$-norm can be generalized to continuous parameter spaces through the so-called atomic norm \cite{Chandrasekaran:2012}. Convex modelling of sparsity constraints has two highly attractive traits: convex optimization problems can easily be solved both in theory \cite{Ne:94} and in practice \cite{Wri:97,Nocedal99}, and a number of recovery guarantees can be obtained within this framework. Such recovery guarantees are studied in signal processing under the name compressed sensing \cite{Candes:06, Donaho:06, candes:2008} and they generalize nicely to the atomic norm minimization approach \cite{Tang:2013,Bhaskar:2013,Tang:2015,Candes:2014,Candes:2013}. The most prominent example of estimation with the atomic norm is the application to line spectral estimation \cite{Bhaskar:2013,Candes:2014,Candes:2013}, in which case it is known as atomic norm soft thresholding (AST). The popularity of AST is partly due to the fact that it can be cast as a semidefinite programming (SDP) problem (we refer to Sec.~\ref{sec:review} for a review of AST), \begin{equation} \begin{array}{ll} \text{minimize}_{v,x,u} & \norm{x-y}_2^2 + \tau (v + w^{\mathrm{T}} u) \\[1mm] \text{subject to} & \left(\begin{matrix} T(u) & x \\ x^\mathrm{H} & v \end{matrix}\right) \succeq 0, \end{array} \label{problem} \end{equation} where $v\in\mathbb{R}$, $x\in\mathbb{C}^N$, $u\in\mathbb{R}^{2N-1}$ are the variables of the problem and $y\in\mathbb{C}^N$, $\tau\in\mathbb{R}$, $w\in\mathbb{R}^{2N-1}$ are fixed (known) parameters. The function $T(u):\mathbb{R}^{2N-1}\rightarrow\mathbb{C}^{N\times N}$ outputs a complex Hermitian Toeplitz matrix constructed from $u$, such that the first row is $(2u_0,\ldots,u_{N-1}) + j(0,u_N,\ldots,u_{2N-2})$.
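The map $T(u)$ just described can be sketched directly from its definition; the loop-based construction below favors clarity over efficiency:

```python
import numpy as np

# Sketch of the map T(u) described above: the Hermitian Toeplitz matrix whose
# first row is (2*u_0, u_1, ..., u_{N-1}) + 1j*(0, u_N, ..., u_{2N-2}).
def T(u, N):
    first_row = np.empty(N, dtype=complex)
    first_row[0] = 2 * u[0]
    first_row[1:] = u[1:N] + 1j * u[N:2 * N - 1]
    M = np.empty((N, N), dtype=complex)
    for i in range(N):
        for k in range(N):
            M[i, k] = first_row[k - i] if k >= i else np.conj(first_row[i - k])
    return M

u = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # N = 3 requires 2N - 1 = 5 entries
M = T(u, 3)                               # first row (2, 2+4j, 3+5j)
```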
To be precise, AST is obtained by selecting $w=2e_0$ in \eqref{problem}, where $e_0$ is a vector with $1$ in the first entry and zeros elsewhere. The state-of-the-art method for solving \eqref{problem} is via the alternating direction method of multipliers (ADMM), used in \cite{Bhaskar:2013, Cho:2015, li-multiple, Yang:15}. While this method is reasonably fast, it has some drawbacks. It requires the calculation of an eigenvalue decomposition in each iteration at a cost of $\ensuremath{\mathcal{O}}\xspace(N^3)$ floating-point operations (flops). This means that for large $N$ it is exceedingly slow. As is often seen with proximal methods, it also has slow convergence if a solution of high accuracy is requested. Da Costa \textit{et al.} \cite{Costa:2017} apply a low-dimensional projection of the observation vector to reduce the problem size and therefore the computational complexity of AST. In the noise-free case and under certain regularity conditions, it is shown that the estimation accuracy is not affected by doing so. However, it is clear that this approach discards observed data and the estimation accuracy will be degraded in the noisy case. Another attempt at a fast solver for AST is \cite{Rao:2015}, but due to its non-SDP implementation utilizing a frequency grid and real, positive coefficients, this approach only allows for a fixed known phase for all components and hence cannot solve the line spectral problem as described in this paper. There have been other attempts to solve atomic norm problems efficiently, e.g.~\cite{Boyd:2017, Vinyes:2017}, but without covering the AST problem. The formulation of AST in \eqref{problem} is cast as an SDP problem. SDP problems have been subject to intensive research since the 1990s and their solution using primal-dual interior-point methods (IPMs) is now well understood \cite{Ne:94, Wri:97, BoVa:04, Sturm:99, Toh:1999}.
The Lagrangian dual of \eqref{problem} has $\ensuremath{\mathcal{O}}\xspace(N^2)$ dual variables due to the semidefinite matrix constraint. The direct application of a standard primal-dual IPM thus requires $\ensuremath{\mathcal{O}}\xspace(N^6)$ flops per iteration at best (using direct methods for solving linear systems of equations), but this can be reduced by eliminating the dual variables from the linear system (in general, the exact number of flops per iteration will depend on the implementation). Compared to this approach, proximal methods (such as ADMM) which require $\ensuremath{\mathcal{O}}\xspace(N^3)$ flops per iteration are preferable, even if they converge much more slowly than primal-dual IPMs. That explains why primal-dual IPMs have not gained traction for the solution of \eqref{problem}. In this work we reformulate the constraint in \eqref{problem} as a non-symmetric conic constraint on the vector $(v,x^{\mathrm{T}},u^{\mathrm{T}})^{\mathrm{T}}$. This formulation immediately reduces the number of dual variables to $\ensuremath{\mathcal{O}}\xspace(N)$ and sets the scene for a reintroduction of primal-dual IPMs as a very competitive class of algorithms for the solution of AST. Primal-dual IPMs for conic programming typically rely on a symmetry between the primal and dual problems. The formulation of such symmetric primal-dual IPMs requires the existence of a self-scaled barrier function for the cone involved in the constraint \cite{NeTo-selfscaled, NeTo-primaldual}. Güler \cite{guler-barrier} showed that such barrier functions exist only for the class of homogeneous and self-dual cones. The cone in our formulation is not self-dual, and so a symmetric primal-dual IPM cannot be formulated. Non-symmetric conic optimization has received some attention \cite{nesterov-towards, skajaa-implementing, skajaa-homogeneous, tuncel-generalization}.
These methods generally rely on the availability of a barrier function for the dual cone and possibly evaluation of its gradient and Hessian. An easy-to-calculate dual barrier is not available for the cone associated with the constraint of our formulation; only an oracle which can determine membership in the dual cone is available (part of our contribution is to show how such an oracle can be constructed). To derive a non-symmetric primal-dual IPM which does not rely on evaluating the dual barrier or its derivatives, we formulate the augmented Karush-Kuhn-Tucker conditions and devise a dedicated approach to solving these. This approach is shown to converge to a primal-dual feasible point. A lower bound on the objective function is calculated in those iterations where a dual feasible point (as determined by the oracle) is available. From the lower bound a duality gap can be evaluated, thus providing a method for dynamically updating the barrier parameter. We show that the proposed method enjoys global convergence. Our focus is on obtaining an algorithm which has fast runtime \textit{in practice}, i.e., it has both low per-iteration computational complexity and it exhibits reasonably fast convergence. Theoretical statements regarding, for example, convergence speed are left for future work. At the core of obtaining a practically fast algorithm lies the already mentioned conic formulation (which brings the number of dual variables down to $\ensuremath{\mathcal{O}}\xspace(N)$), along with techniques for fast evaluation of the linear algebra in each step of the algorithm. These evaluations are based on fast algorithms \cite{ammar-generalized, ammar-numerical, levinson-wiener, durbin-fitting} for inversion of Toeplitz matrices. Related techniques are employed in \cite{HansenFR17, musicus-fastmlm, genin-optimization, alkire-autocorrelation}. We dub the algorithm FastAST.
Both Newton's method and a quasi-Newton method are considered for evaluation of the search direction in FastAST. When using Newton's method the algorithm requires $\ensuremath{\mathcal{O}}\xspace(N^3)$ flops per iteration, while the quasi-Newton variant only requires $\ensuremath{\mathcal{O}}\xspace(N^2)$ flops per iteration. The numerical experiments in Sec. \ref{sec:numerical} show that the quasi-Newton variant is faster in practice. Due to numerical inaccuracies in the calculation of the search direction, the quasi-Newton variant is not able to obtain a solution of very high accuracy. Solving \eqref{problem} to high accuracy makes a difference at very large signal-to-noise ratios, and in these cases the variant of FastAST using Newton's method should be used. Both the Newton and quasi-Newton variants of FastAST are significantly faster than the ADMM-based solvers for \eqref{problem}. Along with the primal-dual IPM presented here, we have also experimented with a primal-only version which is simpler to derive. The primal-only approach does not provide a good way to select the barrier parameter (which we denote $t$, see Sec. \ref{sec:barriers}). This in turn forces the primal-only approach to use either overly conservative short-step \cite{nemirovski-lecturenotes} updates of the barrier parameter or it requires the barrier problem to be solved to high accuracy for each fixed $t$. Both scenarios lead to a significant increase in the number of iterations required by the primal-only algorithm, resulting in significantly increased runtime. On the contrary, the primal-dual version presented in this paper allows for evaluation of a duality gap in each iteration. The duality gap gives a natural way to select the barrier parameter and also provides a very precise stopping criterion. The paper is outlined as follows. In Sec.~\ref{sec:review} we begin with a brief review of atomic norm minimization and its application to line spectral estimation.
Sec.~\ref{sec:non-symmetric} details the formulation of \eqref{problem} as a non-symmetric conic optimization program along with the theory of its solution. In Sec.~\ref{sec:method} we present our numerical algorithm along with implementation details. The exploitation of Toeplitz structure for fast evaluation of each step in the algorithm is discussed in Sec.~\ref{sec:complexity}. Numerical experiments which validate the practical efficacy of the proposed algorithm are presented in Sec.~\ref{sec:numerical}. \section{A Review of Atomic Norm Soft Thresholding} \label{sec:review} \subsection{Line Spectral Estimation} Consider an observation vector $y\in\mathbb{C}^N$, \begin{align} y = x + \zeta, \qquad x = \sum_{k=0}^{K-1} c_k a(\omega_k), \label{model} \end{align} where $\zeta\in\mathbb{C}^N$ is a noise vector and $x\in\mathbb{C}^N$ is a signal of interest composed of $K$ sinusoids, each with angular frequency $\omega_k\in[0,2\pi)$ and complex coefficient $c_k\in\mathbb{C}$. The steering vector $a(\omega)$ has entries $\left(a(\omega)\right)_n = \exp(jn\omega)$ for $n=0,\ldots, N-1$ and $j=\sqrt{-1}$. In line spectral estimation the task is to estimate the values $(K, c_0, \ldots, c_{K-1}, \omega_0, \ldots, \omega_{K-1})$. The crux of line spectral estimation lies in obtaining estimates of the model order $K$ and the frequencies $\{\omega_k\}$. When these are available the coefficients $\{c_k\}$ can easily be estimated using a least-squares approach. The problem is ubiquitous in signal processing; examples include direction of arrival estimation using sensor arrays \cite{malioutov-sparse, ottersten-analysis}, bearing and range estimation in synthetic aperture radar \cite{carriere-high}, channel estimation in wireless communications \cite{bajwa-sparsechannel} and simulation of atomic systems in molecular dynamics \cite{andrade-application}. 
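The signal model \eqref{model} is easy to instantiate numerically; in the sketch below the frequencies and coefficients are illustrative choices, not values from the paper:

```python
import numpy as np

# Sketch of the signal model: K sinusoids built from the steering vectors
# (a(omega))_n = exp(j*n*omega), n = 0, ..., N-1.
def steering(omega, N):
    return np.exp(1j * np.arange(N) * omega)

N = 64
omegas = [0.5, 1.3]                  # angular frequencies in [0, 2*pi)
coeffs = [1.0 + 0.5j, -0.8j]         # complex amplitudes c_k
x = sum(c * steering(w, N) for c, w in zip(coeffs, omegas))
```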
\subsection{Modelling Sparsity With the Atomic Norm} The atomic norm \cite{Chandrasekaran:2012, Tang:2013, Bhaskar:2013, Tang:2015} provides a tool for describing notions of sparsity in a general setting. It is defined in terms of the \textit{atomic set} $\ensuremath{\mathcal{A}}\xspace$. Each member of $\ensuremath{\mathcal{A}}\xspace$ is referred to as an \textit{atom}. The atoms are the basic building blocks of the signal and define the dictionary in which the signal has a sparse representation. The atomic norm induced by $\ensuremath{\mathcal{A}}\xspace$ is defined as \begin{align} \lVert x \rVert_\ensuremath{\mathcal{A}}\xspace = \inf \{ \alpha>0 : x \in \alpha \mathop{\bf conv} \ensuremath{\mathcal{A}}\xspace \}, \end{align} where $\mathop{\bf conv}\ensuremath{\mathcal{A}}\xspace$ is the convex hull of $\ensuremath{\mathcal{A}}\xspace$. For line spectral estimation the atomic set is selected as the set of complex rotated Fourier atoms \cite{ Tang:2013, Bhaskar:2013, Tang:2015} \begin{align} \ensuremath{\mathcal{A}}\xspace = \{ a(\omega) \exp(j\phi) : \omega\in[0,2\pi), \phi\in[0,2\pi) \} \end{align} and the corresponding atomic norm can be described as \begin{align} \lVert x \rVert_\ensuremath{\mathcal{A}}\xspace = \inf_{K, \{c_k, \omega_k\} } \left\{ \sum_{k=0}^{K-1} |c_k| : x = \sum_{k=0}^{K-1} c_k a(\omega_k) \right\}. \label{atomic_norm} \end{align} It is clear that the atomic norm provides a generalization of the $\ell_1$-norm to the continuous parameter space $\omega_k\in[0,2\pi)$. Through the use of a dual polynomial characterization, the atomic norm can be expressed as the solution of an SDP, \begin{equation} \begin{array}{rll} \lVert x \rVert_\ensuremath{\mathcal{A}}\xspace = \hspace{-\medskipamount} & \text{minimize}_{v,u} & \frac{1}{2} \left(v + \frac{1}{N}\tr T(u)\right) \\[1mm] & \text{subject to} & \left(\begin{matrix} T(u) & x \\ x^\mathrm{H} & v \end{matrix}\right) \succeq 0.
\end{array} \label{atomic_sdp} \end{equation} \subsection{Atomic Norm Soft Thresholding} AST \cite{Bhaskar:2013} is inspired by the least absolute shrinkage and selection operator (LASSO) \cite{Tibshirani:94} and solves \begin{equation} \label{ast} \begin{array}{rll} \text{minimize}_{x} & \lVert x-y \rVert_2^2 + 2\tau\lVert x \rVert_\ensuremath{\mathcal{A}}\xspace, \end{array} \end{equation} where $\tau>0$ is a regularization parameter to be chosen. It is clear that AST is recovered in \eqref{problem} by selecting $w=2e_0$. Once a solution $(v^\star, x^\star, u^\star)$ of \eqref{ast} has been found, estimates of the model order $K$ and the frequencies $\{\omega_k\}$ can be obtained by examining a certain dual polynomial constructed from $x^\star$. This process determines the solution in \eqref{atomic_norm} for the recovered signal $x^\star$. Under a somewhat restrictive assumption concerning the separation of the frequencies $\{\omega_k\}$, a number of theoretical statements can be given regarding signal and frequency recovery using AST. We refer to \cite{Tang:2013, Bhaskar:2013, Tang:2015, Candes:2014, Candes:2013} for details. We now consider the selection of the regularization parameter $\tau$. Clearly, the choice of $\tau$ crucially influences the estimation accuracy of AST. It is this parameter that determines the trade-off between fidelity and sparsity inherent in any estimator involving the model order $K$. With all else being equal, selecting a larger $\tau$ gives estimates with smaller values of $K$. Let $\lVert \cdot \rVert_\ensuremath{\mathcal{A}}\xspace^*$ denote the dual norm of the atomic norm $\lVert \cdot\rVert_\ensuremath{\mathcal{A}}\xspace$. Then the theoretical analysis in \cite{Bhaskar:2013} requires $\tau \ge \E{\lVert \zeta \rVert_\ensuremath{\mathcal{A}}\xspace^*}$.
For a white, zero-mean circularly symmetric complex Gaussian noise vector $\zeta$ with entry-wise variance $\sigma^2$ such an upper bound is given by \cite{Bhaskar:2013}, \begin{align} \tau = \sigma \frac{\log(N)+1}{\log(N)} \sqrt{ N\log(N) + N\log(4\pi\log(N)) }, \label{tau} \end{align} where $\log(\cdot)$ denotes the natural logarithm. This choice has been shown to perform well in practice and we also use it in our simulation study. \begin{comment} \subsection{Reweighted Atomic Norm Minimization} The matrix $T(u)$ can be interpreted as a covariance matrix of the vector $x$. The trace term in \eqref{atomic_sdp} expresses a convex relaxation of the rank of $T(u)$: The rank is given by the $\ell_0$ pseudo-norm (number of non-zero entries) of the vector of eigenvalues of $T(u)$. If the $\ell_0$ pseudo-norm is replaced by its $\ell_1$-norm convex relaxation, the trace of $T(u)$ is obtained (the trace is the sum of the eigenvalues). Assuming $T(u)$ positive-definite, the constraint in \eqref{atomic_sdp} together with the $v$ term expresses how plausible it is to observe $x$ in a zero-mean circularly symmetric complex Gaussian model with covariance matrix $T(u)$. These observations lead us to intuitively view the minimization in \eqref{atomic_sdp} as an approach to find a low-rank covariance matrix $T(u)$ under which it is plausible to observe $x$. The value of the atomic norm then gives an indication of how well this trade-off can be achieved for a given $x$. The log determinant provides a better, but non-convex, relaxation of the rank. Yang \emph{et al.} \cite{Yang:2016b} propose to replace the trace in \eqref{atomic_sdp} with a log determinant term. It is reported that this results in an increased ability to resolve closely spaced frequencies. The approach is known as reweighted atomic norm minimization (RAM), because in many ways it generalizes reweighted $\ell_1$-norm minimization \cite{candes-reweighted} to continuous parameter spaces. We now review that idea in our setting.
We take the liberty of reformulating the details of the formulation in \cite{Yang:2016b}, but the main ideas are unchanged. RAM is obtained by replacing $\lVert x\rVert_\ensuremath{\mathcal{A}}\xspace$ in \eqref{ast} with the sparsity metric \begin{equation} \begin{array}{rll} \ensuremath{\mathcal{M}}\xspace_\kappa(x) = \hspace{-\medskipamount} & \text{minimize}_{v,u} & \frac{1}{2} \left(v + \frac{1}{N}\log|T(u)+\kappa I|\right) \\[1mm] & \text{subject to} & \left(\begin{matrix} T(u) & x \\ x^\mathrm{H} & v \end{matrix}\right) \succeq 0, \end{array} \end{equation} where $\kappa>0$ is a parameter to be chosen. The log-determinant term is concave and so the resulting problem is not convex. A majorization-minimization approach can find a locally optimal point by repeatedly solving \eqref{problem}. In the $l$th iteration the weighting vector $w$ in \eqref{problem} must be selected as \begin{align} w_l = \frac{1}{N} T^*\!\left( \left(T(u_{l-1}^\star)+\kappa I\right)^{-1} \right), \end{align} where $u_{l-1}^\star$ is the solution obtained in the $(l-1)$th iteration. The function $T^*$ is the adjoint% \footnote{ $T^*$ is easy to calculate: Let $B$ be Hermitian and let $\beta_n$ denote the sum over the $n$th upper diagonal of $B$, i.e., \begin{align*} \beta_n = \sum_{m=0}^{N-1-n} B_{m,m+n}, \end{align*} for $n=0,\ldots,N-1$. Then\\ $T^*\!(B)=(2\beta_0, 2\re(\beta_1), \dots, 2\re(\beta_{N-1}), 2\im(\beta_1), \dots, 2\im(\beta_{N-1}) )^{\mathrm{T}}$. } of the linear map $T$, i.e., $T^* : \mathbb{C}^{N\times N} \rightarrow \mathbb{R}^{2N-1}$ is such that for every Hermitian $B\in\mathbb{C}^{N\times N}$ we have $\tr(T(u)^\mathrm{H} B) = T^*(B)^{\mathrm{T}} u$. In conclusion, the formulation in \eqref{problem} directly allows for the solution of both AST and RAM by appropriately selecting $w$. Note that a fast method to compute $w_l$ which does not require explicit matrix inversion can be obtained using the methods described in Sec. \ref{sec:complexity}.
\end{comment} \section{Non-symmetric Conic Optimization} \label{sec:non-symmetric} We now return to our main focus: That of numerically solving the conic program \eqref{problem}. It can be written in the form \begin{equation} \label{conicproblem} \begin{array}{ll} \text{minimize} & f(\mu) \\[1mm] \text{subject to} & \mu\in\ensuremath{\mathcal{K}}\xspace, \end{array} \end{equation} where $f(\mu) = \norm{x-y}_2^2+\tau(v+w^{\mathrm{T}} u)$ and $\ensuremath{\mathcal{K}}\xspace$ is the cone defined by \begin{align} \ensuremath{\mathcal{K}}\xspace \triangleq \left\{ \mu = \left(\begin{matrix} v \\ x \\ u \end{matrix}\right) : \left(\begin{matrix} T(u) & x \\ x^\mathrm{H} & v \end{matrix}\right) \succeq 0 \right\}. \end{align} As a precursor to deriving a primal-dual IPM, we explore the properties of $\ensuremath{\mathcal{K}}\xspace$ and its dual. It is easy to show that $\ensuremath{\mathcal{K}}\xspace$ is a proper cone (convex, closed, solid and pointed; see \cite{BoVa:04}). The dual cone $\ensuremath{\mathcal{K}}\xspace^*$ of $\ensuremath{\mathcal{K}}\xspace$ is defined as \begin{align} \ensuremath{\mathcal{K}}\xspace^* = \left\{ \lambda : \left< \lambda, \mu \right> \ge 0 \;\; \forall \; \mu\in\ensuremath{\mathcal{K}}\xspace \right\}. \label{Kstar_def} \end{align} Since $\ensuremath{\mathcal{K}}\xspace$ is proper, so is $\ensuremath{\mathcal{K}}\xspace^*$ \cite{BoVa:04}. In this paper (primal) variables in the cone $\ensuremath{\mathcal{K}}\xspace$ are denoted by $\mu=(v,x^{\mathrm{T}},u^{\mathrm{T}})^{\mathrm{T}}$. The (dual) variables in the cone $\ensuremath{\mathcal{K}}\xspace^*$ are denoted by $\lambda=(\rho,s^{\mathrm{T}},z^{\mathrm{T}})^{\mathrm{T}}$, with $\rho\in\mathbb{R}$, $s\in\mathbb{C}^{N}$ and $z\in\mathbb{R}^{2N-1}$. The inner product between them is defined as $\left<\lambda,\mu\right> = \rho v + \re(s^\mathrm{H} x) + z^{\mathrm{T}} u$. 
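To make the cone $\ensuremath{\mathcal{K}}\xspace$ concrete, membership can be tested by forming the block matrix and checking positive semidefiniteness. The sketch below is ours: in particular, the Toeplitz convention (first row $u_\mathbb{C}$ with the diagonal carrying $2u_0$) is our assumption, chosen to be consistent with the scaling of the adjoint $T^*$ used below, and the dense eigendecomposition is for illustration only:

```python
import numpy as np

def T(u, N):
    """Hermitian Toeplitz matrix T(u) built from u in R^{2N-1}.

    Assumed convention: first row u_C = (2*u[0], u[1]+1j*u[N], ...), i.e. the
    diagonal carries 2*u[0]; this matches the scaling of the adjoint T^*."""
    uc = np.concatenate(([2.0 * u[0] + 0j], u[1:N] + 1j * u[N:]))
    vals = np.concatenate((np.conj(uc[:0:-1]), uc))   # entries indexed by n - m
    D = np.arange(N)[None, :] - np.arange(N)[:, None]
    return vals[D + N - 1]

def in_cone_K(v, x, u, tol=1e-9):
    """mu = (v, x, u) lies in K iff the (N+1) x (N+1) block matrix is PSD."""
    N = x.shape[0]
    M = np.zeros((N + 1, N + 1), dtype=complex)
    M[:N, :N] = T(u, N)
    M[:N, N] = x
    M[N, :N] = np.conj(x)
    M[N, N] = v
    return np.linalg.eigvalsh(M).min() >= -tol
```

For instance, the point $v=1$, $x=0$, $u=e_0$ (used in the solvability discussion below) lies in $\ensuremath{\mathcal{K}}\xspace$.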
\subsection{Checking for Dual Cone Membership} \label{dualmembership} In our proposed method, we need to check for $\lambda\in\ensuremath{\mathcal{K}}\xspace^*$. In order to characterize the dual cone $\ensuremath{\mathcal{K}}\xspace^*$, the cone of positive semidefinite Hermitian Toeplitz matrices is needed: \begin{align} \ensuremath{\mathcal{C}}\xspace \triangleq \{ u \in\mathbb{R}^{2N-1} : T(u) \succeq 0 \}. \end{align} This cone is also proper. The corresponding dual cone $\ensuremath{\mathcal{C}}\xspace^*$ is defined analogously to \eqref{Kstar_def}. Let the function $T^*$ be the adjoint% \footnote{ $T^*$ is easy to calculate: Let $B$ be Hermitian and let $\beta_n$ denote the sum over the $n$th upper diagonal of $B$, \begin{align*} \beta_n = \sum_{m=0}^{N-1-n} B_{m,m+n}, \end{align*} for $n=0,\ldots,N-1$. Then\\ $T^*\!(B)=(2\beta_0, 2\re(\beta_1), \dots, 2\re(\beta_{N-1}), 2\im(\beta_1), \dots, 2\im(\beta_{N-1}) )^{\mathrm{T}}$. } of the linear map $T$, i.e., $T^* : \mathbb{C}^{N\times N} \rightarrow \mathbb{R}^{2N-1}$ is such that for every Hermitian $B\in\mathbb{C}^{N\times N}$ we have $\tr(T(u)^\mathrm{H} B) = T^*(B)^{\mathrm{T}} u$. We then have the following lemma. \begin{lemma} \label{Kstar_lemma} The dual cone of $\ensuremath{\mathcal{K}}\xspace$ can be characterized as \begin{align} \ensuremath{\mathcal{K}}\xspace^* = \left\{ \lambda = \left(\begin{matrix} \rho \\ s \\ z \end{matrix}\right) : \rho>0, \left( z - \frac{1}{4\rho} T^*(s s^\mathrm{H}) \right) \in \ensuremath{\mathcal{C}}\xspace^* \right\} \cup \left\{ \lambda : \rho=0, s=0, z\in\ensuremath{\mathcal{C}}\xspace^* \right\}. \end{align} \end{lemma} \begin{IEEEproof} See the appendix. \end{IEEEproof} It is clear that $\ensuremath{\mathcal{K}}\xspace$ is not self-dual ($\ensuremath{\mathcal{K}}\xspace\ne\ensuremath{\mathcal{K}}\xspace^*$) and so \eqref{conicproblem} is a non-symmetric conic program.
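The defining identity of the adjoint, $\tr(T(u)^\mathrm{H} B) = T^*(B)^{\mathrm{T}} u$, is easy to verify numerically. The sketch below implements $T^*$ exactly as in the footnote; the construction of $T(u)$ (first row $u_\mathbb{C}$, diagonal $2u_0$) is our assumed convention, consistent with the factor of $2$ appearing in $T^*$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 6

def T(u):
    # Assumed convention: Hermitian Toeplitz with first row u_C and diagonal 2*u[0].
    uc = np.concatenate(([2.0 * u[0] + 0j], u[1:N] + 1j * u[N:]))
    vals = np.concatenate((np.conj(uc[:0:-1]), uc))
    D = np.arange(N)[None, :] - np.arange(N)[:, None]
    return vals[D + N - 1]

def Tstar(B):
    # T^*(B) exactly as in the footnote; beta[n] sums the n-th upper diagonal of B.
    beta = np.array([np.trace(B, offset=n) for n in range(N)])
    return np.concatenate(([2.0 * beta[0].real], 2.0 * beta[1:].real, 2.0 * beta[1:].imag))

# Random Hermitian B and real u.
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
B = (A + A.conj().T) / 2
u = rng.standard_normal(2 * N - 1)

lhs = np.trace(T(u).conj().T @ B)   # tr(T(u)^H B); real because both are Hermitian
rhs = Tstar(B) @ u                  # T^*(B)^T u
```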
The cone $\ensuremath{\mathcal{C}}\xspace$ and its dual are defined in terms of real-valued vectors because this description simplifies the derivation of the method in Sec. \ref{sec:method}. These sets are, however, more naturally understood from their corresponding complex-valued forms. We therefore define the vector $u_\mathbb{C} = (u_0, u_1+ju_{N}, u_2+ju_{N+1}, \ldots, u_{N-1}+ju_{2N-2})^{\mathrm{T}}$ and use a similar definition of $z_\mathbb{C}$. The dual cone $\ensuremath{\mathcal{C}}\xspace^*$ turns out to be the set of finite autocorrelation sequences. An excellent introduction to this set and a number of characterizations of it are given in \cite{alkire-autocorrelation} for the case of real-valued sequences. Here we extend the definition to the complex-valued case. \begin{definition} A vector $z$ is a finite autocorrelation sequence if there exists a vector $q\in\mathbb{C}^N$ such that% \footnote{The complex conjugate of $q$ is denoted $\bar{q}$.} \begin{align} (z_\mathbb{C})_k = \sum_{n=0}^{N-1-k} \bar{q}_n q_{n+k}, \qquad k=0,\ldots,N-1. \label{acs_def} \end{align} \end{definition} In other words, $z$ is a finite autocorrelation sequence if \begin{align} \ldots, 0, 0, (\bar{z}_\mathbb{C})_{N-1}, (\bar{z}_\mathbb{C})_{N-2}, \ldots, (\bar{z}_\mathbb{C})_{1}, (z_\mathbb{C})_{0}, (z_\mathbb{C})_{1}, \ldots, (z_\mathbb{C})_{N-1}, 0, 0, \ldots \label{acs} \end{align} is the autocorrelation sequence of some moving average process of order $N-1$ with filter coefficients $q_1, \ldots, q_{N-1}$ and input variance $|q_0|^2$. It is well known from the theory of linear time-invariant systems that if \eqref{acs} is a valid autocorrelation sequence, then it can be represented by a moving average process (i.e., there exists a coefficient vector $q$ such that \eqref{acs_def} holds). A sequence is a valid autocorrelation sequence if and only if its Fourier transform is non-negative \cite{KN:77}.
The Fourier transform of \eqref{acs} is \begin{align} Z(\omega) = (z_\mathbb{C})_0 + 2 \sum_{k=1}^{N-1} \re\!\left((z_\mathbb{C})_k \exp(-j\omega k) \right), \end{align} for $\omega\in[0,2\pi)$. Then $z\in\ensuremath{\mathcal{C}}\xspace^*$ if and only if $Z(\omega)\ge0$ for all $\omega\in[0,2\pi)$. The fast Fourier transform allows $Z(\omega)$ to be evaluated at a large number of points on $[0,2\pi)$ in an efficient way. Using Lemma \ref{Kstar_lemma}, we therefore have a low-complexity method of determining whether $\lambda\in\ensuremath{\mathcal{K}}\xspace^*$. This approach is approximate in the sense that $Z(\omega)$ is sampled at a finite number of points on $[0,2\pi)$. The approximation can be made arbitrarily accurate by increasing the number of evaluated points. Our simulation study in Sec.~\ref{sec:numerical} demonstrates that the approximation is sufficiently accurate for the algorithm to be used in practice. We have not yet shown that the dual of the cone $\ensuremath{\mathcal{C}}\xspace$ is indeed the set of finite autocorrelation sequences. To that end, let $\tilde\ensuremath{\mathcal{C}}\xspace$ be the set of finite autocorrelation sequences and identify $u$ with $u_\mathbb{C}$. Extending the approach of \cite{alkire-autocorrelation} to the complex-valued case, a vector $u$ is in the dual of $\tilde\ensuremath{\mathcal{C}}\xspace$ if and only if $z^{\mathrm{T}} u\ge0$ for every $z\in\tilde\ensuremath{\mathcal{C}}\xspace$, or, in other words, if and only if \begin{align} z^{\mathrm{T}} u &= \re(z_\mathbb{C}^\mathrm{H} u_\mathbb{C}) = \re\!\left( \sum_{k=0}^{N-1}\sum_{n=0}^{N-1-k} (u_\mathbb{C})_k q_n \bar{q}_{n+k} \right) = \frac{1}{2} q^{\mathrm{T}} T(u) \bar{q} \ge 0 \end{align} for every $q\in\mathbb{C}^N$. We can therefore identify $\ensuremath{\mathcal{C}}\xspace$ with $\tilde\ensuremath{\mathcal{C}}\xspace^*$.
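The sampled nonnegativity check for $z\in\ensuremath{\mathcal{C}}\xspace^*$ described above can be sketched as follows; the grid size and tolerance are illustrative choices of ours:

```python
import numpy as np

def in_Cstar(z, N, grid=4096, tol=1e-9):
    """Approximate test for z in C^*: sample Z(omega) on a uniform grid via the FFT
    and check nonnegativity."""
    zc = np.concatenate(([z[0] + 0j], z[1:N] + 1j * z[N:]))   # complex form z_C
    padded = np.zeros(grid, dtype=complex)
    padded[:N] = zc
    # The FFT evaluates sum_k (z_C)_k exp(-1j*omega_m*k) at omega_m = 2*pi*m/grid,
    # so Z(omega_m) = (z_C)_0 + 2*sum_{k>=1} Re(...) = 2*Re(FFT) - (z_C)_0.
    Z = 2.0 * np.fft.fft(padded).real - zc[0].real
    return Z.min() >= -tol
```

Any finite autocorrelation sequence built from a coefficient vector $q$ via \eqref{acs_def} passes the test, since then $Z(\omega)=\big|\sum_n q_n \exp(-j\omega n)\big|^2\ge0$.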
Since $\tilde\ensuremath{\mathcal{C}}\xspace$ is a proper cone, we have $\ensuremath{\mathcal{C}}\xspace^* = \tilde\ensuremath{\mathcal{C}}\xspace^{**}=\tilde\ensuremath{\mathcal{C}}\xspace$. \subsection{Barrier Functions} \label{sec:barriers} IPMs are built on the idea of a barrier function $F:\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace\rightarrow\mathbb{R}$ associated with the cone $\ensuremath{\mathcal{K}}\xspace$ ($\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace$ denotes the interior of $\ensuremath{\mathcal{K}}\xspace$). The barrier function must be a smooth and strictly convex% \footnote{Hessian positive definite everywhere.} function with $F(\mu_k)\rightarrow\infty$ for every sequence of points $\mu_k\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace$ with limit point $\tilde\mu\in\mathop{\bf bd}\ensuremath{\mathcal{K}}\xspace$, where $\mathop{\bf bd}\ensuremath{\mathcal{K}}\xspace$ denotes the boundary of $\ensuremath{\mathcal{K}}\xspace$. The typical approach to IPMs also assumes that the barrier function is logarithmically homogeneous (LH). $F$ is an LH barrier function for the cone $\ensuremath{\mathcal{K}}\xspace$ if there exists a $\theta_F>0$ such that $F(\alpha\mu) = F(\mu) - \theta_F\log(\alpha)$ for all $\alpha>0$, $\mu\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace$. The value $\theta_F$ is called the degree of the barrier. We will use the following well-known properties of an LH barrier function $F$ for $\ensuremath{\mathcal{K}}\xspace$ \cite{BoVa:04, NeTo-primaldual, NeTo-selfscaled}: If $\mu\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace$, then \begin{align} \left<-\nabla_\mu F(\mu), \mu \right> &= \theta_F, \label{graddotprod} \\ -\nabla_\mu F(\mu) &\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace^*, \label{gradindual} \end{align} where the gradient operator is defined as $\nabla_\mu F = (\nabla_v F, \nabla_x F^{\mathrm{T}}, \nabla_u F^{\mathrm{T}})^{\mathrm{T}}$.
The gradient with respect to the complex vector $x=a+jb$ is to be understood as% \footnote{This is actually twice the Wirtinger derivative of $F$ with respect to $\bar{x}$.} $\nabla_x F = \nabla_a F + j\nabla_b F$. The usefulness of barrier functions is clear when considering their role in path-following methods. A primal-only path-following method finds a solution to \eqref{conicproblem} by iteratively solving \begin{equation} \label{barrierproblem} \begin{array}{ll} \text{minimize} & f(\mu) + t^{-1} F(\mu) \\[1mm] \text{subject to} & \mu\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace \end{array} \end{equation} for an increasing sequence of values $t>0$. In each step $\mu$ is initialized with the solution of the previous step. This approach is desirable because each step can be solved by an algorithm for unconstrained optimization such as Newton's method. In this paper we use the standard log-determinant barrier function for $\ensuremath{\mathcal{K}}\xspace$: \begin{align*} F(\mu) &= - \log \left| \left(\begin{matrix} T(u) & x \\ x^\mathrm{H} & v \end{matrix}\right) \right| \addtocounter{equation}{1}\tag{\theequation}\label{F} \\ &= - \log|T(u)| - \log(v-x^\mathrm{H} T^{-1}(u) x), \;\;\mathrm{for\ }\mu\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace. \end{align*} It is easy to show that it is LH with degree $\theta_F=N+1$. \subsection{Solvability} We now consider conditions for the problem \eqref{conicproblem} to be solvable. An optimization problem is solvable when a feasible point exists and its objective is bounded below on the feasible set. \begin{lemma} \label{bounded_lemma} The function $f(\mu)$ is bounded below on $\mu\in\ensuremath{\mathcal{K}}\xspace$ if and only if $\tau=0$, or $\tau>0$ and $w\in\ensuremath{\mathcal{C}}\xspace^*$. \end{lemma} \begin{IEEEproof} The case $\tau=0$ is trivial. Assume $\tau\ne0$ in the following.
If $\tau<0$ or $w\notin\ensuremath{\mathcal{C}}\xspace^*$ there exists $\mu\in\ensuremath{\mathcal{K}}\xspace$ with $x=0$ such that $\tau v + \tau w^{\mathrm{T}} u<0$. Then $\alpha\mu\in\ensuremath{\mathcal{K}}\xspace$ for any $\alpha\ge0$ and $\lim_{\alpha\rightarrow\infty}f(\alpha\mu)=-\infty$, so $f(\mu)$ is unbounded below on $\mu\in\ensuremath{\mathcal{K}}\xspace$. Conversely, if $\tau>0$ and $w\in\ensuremath{\mathcal{C}}\xspace^*$, we have $\tau v\ge0$ and $\tau w^{\mathrm{T}} u\ge0$ for every $\mu\in\ensuremath{\mathcal{K}}\xspace$. So $f(\mu)\ge0$ for $\mu\in\ensuremath{\mathcal{K}}\xspace$. \end{IEEEproof} Since a primal feasible point always exists (take for example $v=1,x=0,u=e_0$), the problem \eqref{conicproblem} is solvable if and only if $\tau=0$ or the conditions in Lemma \ref{bounded_lemma} are fulfilled. These conditions can easily be checked prior to executing the algorithm and we assume that the problem is solvable in the following. \subsection{Optimality Conditions} With the conic modelling machinery in place we can begin to analyze the solution of \eqref{problem} by considering the non-symmetric conic formulation \eqref{conicproblem}. The Lagrangian of \eqref{conicproblem} is \begin{align} L(\mu,\lambda) = \norm{x-y}_2^2 + \tau(v+w^{\mathrm{T}} u) - \left<\lambda,\mu\right> \end{align} and the dual is \begin{equation} \label{dualproblem} \begin{array}{ll} \text{maximize} & -\frac{1}{4}\norm{s}_2^2 - \re(y^\mathrm{H} s) \\[1mm] \text{subject to} & \lambda\in\ensuremath{\mathcal{K}}\xspace^*,\; \rho=\tau,\; z=\tau w. \end{array} \end{equation} Notice that by taking the dual of \eqref{conicproblem} instead of \eqref{problem}, the number of dual variables is reduced from $\ensuremath{\mathcal{O}}\xspace(N^2)$ to $\ensuremath{\mathcal{O}}\xspace(N)$ (see \cite{Bhaskar:2013} for an explicit formulation of the dual of \eqref{problem}). 
This is the reason why, from a computational point of view, it is beneficial to work with the form \eqref{conicproblem} instead of \eqref{problem}. Since $f$ is convex, the Karush-Kuhn-Tucker (KKT) conditions are necessary and sufficient \cite[Sec. 5.9]{BoVa:04} for variables $(\mu^\star, \lambda^\star)$ to be solutions of the primal and dual problems \eqref{conicproblem} and \eqref{dualproblem}. The KKT conditions are \begin{align} \left\{ \begin{array}{l} \nabla_\mu L\!\left(\mu^\star,\lambda^\star\right) = 0 \\ \mu^\star \in\ensuremath{\mathcal{K}}\xspace \\ \lambda^\star \in\ensuremath{\mathcal{K}}\xspace^* \\ \left<\lambda^\star,\mu^\star\right> = 0 \end{array} \right\}. \label{kkt} \end{align} Instead of directly solving the KKT conditions, our primal-dual IPM finds solutions $(\mu^{(t)}, \lambda^{(t)})$ of the augmented KKT conditions \cite{BoVa:04, NeTo-ipm} \begin{align} \left\{ \begin{array}{l} \nabla_{\mu}L\!\left(\mu^{(t)},\lambda^{(t)}\right) = 0 \\ \mu^{(t)} \in \mathop{\bf int}\ensuremath{\mathcal{K}}\xspace \\ \lambda^{(t)} \in \mathop{\bf int}\ensuremath{\mathcal{K}}\xspace^* \\ \lambda^{(t)} = - t^{-1} \nabla_{\mu} F\!\left(\mu^{(t)}\right) \end{array} \right\} \label{akkt} \end{align} for an increasing sequence of values $t>0$. It is easy to realize that $\left(\mu^{(t)},\lambda^{(t)}\right)$ solves \eqref{akkt} only if $\mu^{(t)}$ is a solution of the barrier problem \eqref{barrierproblem}. This observation provides the link between primal-only barrier methods and primal-dual IPMs. The set of values $\left\{\left( \mu^{(t)},\lambda^{(t)} \right) : t>0\right\}$ is known as the primal-dual central path. The primal-dual central path converges to the desired solution in the sense that $\lim_{t\rightarrow\infty}\left( \mu^{(t)},\lambda^{(t)} \right) = \left( \mu^\star, \lambda^\star \right)$ \cite{BoVa:04, NeTo-ipm}. The last condition in \eqref{akkt} is known as the augmented complementary slackness condition.
It follows from \eqref{gradindual} that the second and fourth conditions in \eqref{akkt} together imply $\lambda^{(t)}\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace^*$, so the third condition can be dropped. From \eqref{graddotprod} it follows that the duality gap for the primal-dual problems \eqref{conicproblem} and \eqref{dualproblem} at a point on the primal-dual central path is $\left<\lambda^{(t)},\mu^{(t)}\right>=\theta_F/t$ \cite{BoVa:04,NeTo-selfscaled}. So solving the augmented KKT conditions gives a primal feasible solution $\mu^{(t)}$ which is no more than $(N+1)/t$ suboptimal. Consequently, an arbitrarily accurate solution can be obtained by solving \eqref{akkt} for sufficiently large $t$. \subsection{Obtaining a Solution of the Augmented KKT Conditions} We now define $v^{(t)}, x^{(t)}, u^{(t)}, \rho^{(t)}, s^{(t)}$ and $z^{(t)}$ as the entries of $\mu^{(t)}$ and $\lambda^{(t)}$. By solving the first equation in \eqref{akkt} (the stationarity condition) we get \begin{align} \rho^{(t)}=\tau,\quad z^{(t)}=\tau w,\quad s^{(t)}=2(x^{(t)}-y). \label{lambda_t} \end{align} We continue by writing out the last condition of \eqref{akkt}. Solving for $v^{(t)}$ and $x^{(t)}$ and inserting the relations above gives \begin{align} v^{(t)} &= (\tau t)^{-1} + \left(x^{(t)}\right)^\mathrm{H} T^{-1}\!\left(u^{(t)}\right) x^{(t)} \label{v_t} \\ x^{(t)} &= T\!\left(u^{(t)}\right) T^{-1}\!\left(u^{(t)}+2^{-1}\tau e_0\right) y. \label{x_t} \end{align} Finally, solving $z^{(t)}=-t^{-1}\nabla_uF\!\left(\mu^{(t)}\right)$ for $u^{(t)}$ and inserting the above yields \begin{align} \label{compslack_u} \tau w - \tau T^*\!\left( \phi\phi^\mathrm{H} \right) - t^{-1} T^*\!\left( T^{-1}(u^{(t)}) \right) = 0, \end{align} where $\phi = T^{-1}\!\left(u^{(t)}+2^{-1}\tau e_0\right) y$.
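Given a solution $u^{(t)}$ of \eqref{compslack_u}, the relations \eqref{lambda_t}--\eqref{x_t} determine the remaining central-path quantities mechanically. The following dense-linear-algebra sketch illustrates this; an actual implementation would use the fast Toeplitz solvers of Sec.~\ref{sec:complexity}, and the $2u_0$ diagonal convention for $T(u)$ is our assumption:

```python
import numpy as np

def Tmat(u, N):
    # Assumed convention: Hermitian Toeplitz with first row u_C and diagonal 2*u[0].
    uc = np.concatenate(([2.0 * u[0] + 0j], u[1:N] + 1j * u[N:]))
    vals = np.concatenate((np.conj(uc[:0:-1]), uc))
    D = np.arange(N)[None, :] - np.arange(N)[:, None]
    return vals[D + N - 1]

def central_path_point(u, y, w, tau, t):
    """Map u^(t) to (v^(t), x^(t), lambda^(t)) via the stationarity relations."""
    N = y.shape[0]
    e0 = np.zeros(2 * N - 1); e0[0] = 1.0
    phi = np.linalg.solve(Tmat(u + 0.5 * tau * e0, N), y)  # T^{-1}(u + tau/2 e0) y
    x = Tmat(u, N) @ phi                                   # x^(t)
    v = 1.0 / (tau * t) + np.real(x.conj() @ np.linalg.solve(Tmat(u, N), x))  # v^(t)
    rho, s, z = tau, 2.0 * (x - y), tau * w                # lambda^(t)
    return v, x, (rho, s, z)
```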
For a given $t>0$ the corresponding point on the primal-dual central path can be obtained as follows: First a solution $u^{(t)}$ of \eqref{compslack_u} that fulfills $u^{(t)}\in\mathop{\bf int}\ensuremath{\mathcal{C}}\xspace$ is found (existence of such a solution is shown below). Then the point $\left(\mu^{(t)},\lambda^{(t)}\right)$ is obtained by inserting into \eqref{lambda_t}, \eqref{v_t} and \eqref{x_t}. It is easy to show from $u^{(t)}\in\mathop{\bf int}\ensuremath{\mathcal{C}}\xspace$ that $\mu^{(t)}\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace$, so $\left(\mu^{(t)},\lambda^{(t)}\right)$ solves \eqref{akkt} and is a primal-dual central point. How can we obtain a solution $u^{(t)}\in\mathop{\bf int}\ensuremath{\mathcal{C}}\xspace$ of \eqref{compslack_u}? The left-hand side of \eqref{compslack_u} is recognized as the gradient of $h_t(u) = g(u) + t^{-1} G(u)$, with \begin{align} g(u) &= \tau w^{\mathrm{T}} u + \tau y^\mathrm{H} T^{-1}\!\left(u+2^{-1}\tau e_0\right) y \label{g} \\ G(u) &= - \log|T(u)|. \label{G} \end{align} Now consider the barrier problem \begin{equation} \label{min_g} \begin{array}{ll} \text{minimize} & h_t(u) \\[1mm] \text{subject to} & u\in\mathop{\bf int}\ensuremath{\mathcal{C}}\xspace. \end{array} \end{equation} The gradient of $h_t$ vanishes at the solution of \eqref{min_g} because $G$ is an LH barrier function for $\ensuremath{\mathcal{C}}\xspace$. So solving \eqref{compslack_u} with $u^{(t)}\in\mathop{\bf int}\ensuremath{\mathcal{C}}\xspace$ is equivalent to solving \eqref{min_g}. Since we have assumed that the problem \eqref{conicproblem} is solvable, so is \eqref{min_g} (thus proving that there exists a $u^{(t)}\in\mathop{\bf int}\ensuremath{\mathcal{C}}\xspace$ that solves \eqref{compslack_u}). The idea of the primal-dual IPM presented in the following section is to use an iterative algorithm for unconstrained optimization (either Newton's method or a quasi-Newton method) to solve \eqref{min_g}.
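For reference, the objective of \eqref{min_g} can be evaluated directly from \eqref{g} and \eqref{G} with standard dense factorizations; Sec.~\ref{sec:complexity} describes faster Toeplitz-based evaluations, and the $2u_0$ diagonal convention for $T(u)$ is again our assumption:

```python
import numpy as np

def Tmat(u, N):
    # Assumed convention: Hermitian Toeplitz with first row u_C and diagonal 2*u[0].
    uc = np.concatenate(([2.0 * u[0] + 0j], u[1:N] + 1j * u[N:]))
    vals = np.concatenate((np.conj(uc[:0:-1]), uc))
    D = np.arange(N)[None, :] - np.arange(N)[:, None]
    return vals[D + N - 1]

def h(u, t, y, w, tau):
    """Barrier objective h_t(u) = g(u) + G(u)/t, with g and G as in eqs. (g), (G)."""
    N = y.shape[0]
    e0 = np.zeros(2 * N - 1); e0[0] = 1.0
    g = tau * (w @ u) + tau * np.real(
        y.conj() @ np.linalg.solve(Tmat(u + 0.5 * tau * e0, N), y))
    G = -np.linalg.slogdet(Tmat(u, N))[1]  # -log|T(u)|; needs T(u) positive definite
    return g + G / t
```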
However, we do not need to solve \eqref{min_g} exactly for a sequence of values $t>0$. In each iteration of the solver the value of $t$ can be updated dynamically based on the duality gap. \section{The Primal-Dual Interior-Point Method} \label{sec:method} We now outline FastAST, a primal-dual IPM for the solution of \eqref{conicproblem}. Let $(\mu_{i},u_{i},\lambda_{i},t_{i})$ denote $(\mu,u,\lambda,t)$ in iteration $i$. The proposed method is given in Algorithm \ref{alg}. In the remainder of this section, each step of the algorithm is discussed in detail. \begin{figure}[t] \let\@latex@error\@gobble \begin{algorithm}[H] \DontPrintSemicolon \KwParam{$\gamma>1$.} \KwIn{ Initial values $u_0\in\ensuremath{\mathcal{C}}\xspace$ and $t_1>0$. } Set objective lower bound $f_{\mathrm{LB}}=-\infty$. \; \For{$i=1,2,\ldots$}{ Determine the search direction $\Delta u$. \; Perform a line search along $\Delta u$ to obtain the step size $\alpha$. \; Update estimate $u_{i}=u_{i-1} + \alpha\Delta u$. \; Form primal-dual variables $(\mu_{i}, \lambda_{i})$ using \eqref{lambda_t}, \eqref{v_t} and \eqref{x_t}. \; \If{$\lambda_{i}\in\ensuremath{\mathcal{K}}\xspace^*$}{ Update lower bound on objective $f_{\mathrm{LB}} = \max\!\left(f_{\mathrm{LB}}, -\frac{1}{4}\norm{s_{i}}_2^2 - \re(y^\mathrm{H} s_{i})\right)$. \; } Evaluate duality gap $\eta_{i} = f(\mu_{i}) - f_{\mathrm{LB}}$. \; Terminate if the stopping criterion is satisfied. \; Update barrier parameter $t_{i+1} = \max\!\left( t_{i}, \gamma \frac{N+1}{\eta_{i}} \right)$. \; } \KwOut{ Primal-dual solution $(\mu_{i}, \lambda_{i})$. } \caption{Primal-dual IPM for fast atomic norm soft thresholding (FastAST).} \label{alg} \end{algorithm} \end{figure} Low-complexity evaluation of the steps in FastAST is presented in Sec.~\ref{sec:complexity}. With these approaches, the computational complexity is dominated by the evaluation of the search direction.
For this step we propose to use either Newton's method or a quasi-Newton method. The quasi-Newton method has much lower computational complexity per iteration and is also faster in practice. It is, however, not able to obtain a solution of high accuracy. If a highly accurate solution is required, Newton's method is preferred. We refer to the numerical evaluation in Sec.~\ref{sec:numerical} for a detailed discussion thereof. \subsection{Determining the Search Direction Using Newton's Method} Applying Newton's method to solve \eqref{min_g}, we get the search direction \begin{align} \Delta u = - \left( \nabla^2_u h_{t_{i}}(u_{i-1}) \right)^{-1} \nabla_u h_{t_{i}}(u_{i-1}), \label{du_newton} \end{align} where $\nabla^2_u h_{t_{i}}(u_{i-1})$ denotes the Hessian of $h_{t_{i}}$ evaluated at $u_{i-1}$. As discussed in Sec.~\ref{sec:complexity}, the Hessian can be evaluated in $\ensuremath{\mathcal{O}}\xspace(N^3)$ flops and the same cost is required for solution of the system \eqref{du_newton}. \subsection{Determining the Search Direction Using L-BFGS} In scenarios with large $N$ the computation time for evaluating the Newton search direction can become prohibitive. In these cases we propose to use the limited-memory Broyden–Fletcher–Goldfarb–Shanno (L-BFGS) algorithm \cite{nocedal-lbfgs} for the solution of \eqref{min_g}. L-BFGS possesses two key properties that are instrumental in obtaining an algorithm with low per-iteration computational complexity: \textit{1)} it uses only gradient information, and the gradient of $h_{t_{i}}$ can be evaluated with low computational complexity;% \footnote{To speed up convergence, our implementation also uses an approximation of the diagonal of the Hessian of $h_{t_{i}}$.} and \textit{2)} by appropriately modifying the L-BFGS two-loop recursion, it can be used for the solution of \eqref{min_g} in a computationally efficient manner even though $t$ is increased in every iteration.
It is this property, and not the limited memory requirements, that makes L-BFGS preferable over other quasi-Newton methods (such as vanilla BFGS) for our purposes. In relation to the second property, note that since $t_{i}\neq t_{i-1} \neq \dots$, the normal formulation of L-BFGS does not apply. A simple modification of the L-BFGS two-loop recursion \cite{nocedal-lbfgs} overcomes this limitation. At the end of the $i$th iteration, the following difference vectors are calculated and saved for later use: \begin{align} r_{i} &= u_{i} - u_{i-1} \label{r_i} \\ q_{i} &= \nabla_u g(u_{i}) - \nabla_u g(u_{i-1}) \\ Q_{i} &= \nabla_u G(u_{i}) - \nabla_u G(u_{i-1}) \label{qG_i}. \end{align} This set of vectors is retained for $M$ iterations. The modified two-loop recursion in Algorithm \ref{lbfgs} can then be used to calculate the search direction $\Delta u$. This algorithm calculates the normal L-BFGS search direction for minimization of $h_{t_{i}}$, as if $t_{i} = t_{i-1} = \ldots$. That can be achieved because L-BFGS only depends on $t_{i}$ through $\nabla_uh_{t_{i}}(u_k)=\nabla_ug(u_k) + t_{i}^{-1} \nabla_uG(u_k)$, for $k=i-1,\ldots,i-M-1$. The gradients $\nabla_ug(u_k)$ and $\nabla_uG(u_k)$ need only be calculated once to allow $\nabla_uh_{t_{i}}(u_k)$ to be calculated for any value of $t_{i}$. \begin{figure}[t] \let\@latex@error\@gobble \begin{algorithm}[H] \DontPrintSemicolon \KwParam{Number of saved difference vectors $M$.} \KwIn{Current iteration number $i$ and parameter $t_{i}$. Saved difference vectors $r_k, q_k, Q_k$ for $k=i-1,i-2,\ldots,\max(i-M,1)$. Current gradient vector $\nabla_u h_{t_{i}}(u_{i-1})$ and initial Hessian approximation $\hat H_{i}$. 
} $d \gets - \nabla_u h_{t_{i}}(u_{i-1})$ \; \For{$k=i-1, i-2, \ldots, \max(i-M,1)$}{ $\psi_{k} \gets q_{k} + t_{i}^{-1} Q_{k} $ \; $\sigma_{k} \gets \frac{r_{k}^{\mathrm{T}} d}{r_{k}^{\mathrm{T}} \psi_{k}}$ \; $d \gets d - \sigma_{k} \psi_{k}$ } $d \gets \hat H_{i}^{-1} d$ \; \For{$k=\max(i-M,1), \max(i-M,1)+1, \ldots, i-1$}{ $\beta_{k} \gets \frac{\psi_{k}^{\mathrm{T}} d}{\psi_{k}^{\mathrm{T}} r_{k}}$ \; $d \gets d + r_{k} \left( \sigma_{k} - \beta_{k} \right)$ } \KwOut{ Search direction $\Delta u = d$. \; } \caption{Modified L-BFGS two-loop recursion for calculation of the search direction.} \label{lbfgs} \end{algorithm} \end{figure} In each iteration the initial Hessian $\hat H_{i}$ should be chosen as an approximation of the Hessian of $h_{t_{i}}$ evaluated at $u_{i-1}$. It is the matrix upon which L-BFGS successively applies rank-2 updates to form the Hessian approximation that is used for calculating the search direction. An easy, and popular, choice for the initial Hessian is the identity matrix $\hat H_{i}=I$. Through numerical experiments we have seen that this choice leads to slow convergence. It turns out that the slow convergence is caused by the scaling of the Hessian, leading to non-acceptance of a full Newton step (i.e., $\alpha$ is selected much smaller than 1). Using a diagonal approximation of the true Hessian remedies this, but, unfortunately, it cannot be calculated with low computational complexity. (Our best attempt at devising a fast evaluation of the Hessian diagonal yielded cubic complexity $\ensuremath{\mathcal{O}}\xspace(N^3)$, the same as evaluation of the full Hessian.) 
Instead, our algorithm uses the following heuristic approximation of the diagonal Hessian \begin{align} \label{Hi} \hat H_{i} = \diag\!\left( 1, \frac{N-1}{2N}, \ldots, \frac{1}{2N}, \frac{N-1}{2N}, \ldots, \frac{1}{2N} \right) \left(\nabla^2_u h_{t_{i}}(u_{i-1})\right)_{0,0}, \end{align} where $\left(\nabla^2_u h_{t_{i}}(u_{i-1})\right)_{0,0}$ is the $(0,0)$th entry of the true Hessian evaluated at $u_{i-1}$. This approximation can be calculated with low computational complexity as demonstrated in Sec.~\ref{sec:complexity}. The approximation is motivated as follows: The diagonal entries are scaled according to the number of times the corresponding entry of $u$ appears in $T(u)$. This scaling resembles that in the biased autocorrelation estimate (except for a factor of $2$ caused by the scaling of the diagonal in the definition of $T(u)$). In our numerical experiments, we have observed the above approximation to be fairly accurate; each entry typically takes a value within $\pm50\,\%$ of the true value. We note, however, that only a crude approximation is needed, since the role of $\hat H_{i}$ is merely to account for the scaling of the problem. Our numerical investigation suggests that using the approximation \eqref{Hi} leads to only marginally slower convergence, compared to using the diagonal of the true Hessian. A final note on our adaptation of L-BFGS is that the usual observations regarding positive definiteness of the approximated Hessian remain valid. First note that the objective upon which L-BFGS is applied ($h_{t_{i}}$) is a strictly convex function for $u\in\mathop{\bf int}\ensuremath{\mathcal{C}}\xspace$. It follows that the initial Hessian approximation $\hat H_{i}$ is positive definite. Also, the curvature condition $r_k^{\mathrm{T}}\psi_k>0$ is valid for all $k$. Consequently, the approximated Hessian is positive definite and the calculated search direction $\Delta u$ is a descent direction \cite{nocedal-lbfgs, Nocedal99}.
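To make the above concrete, the modified two-loop recursion of Algorithm \ref{lbfgs} together with the scaling heuristic \eqref{Hi} might be sketched in Python as follows. This is a sketch under our own naming conventions (\texttt{history} holds the difference vectors $(r_k, q_k, Q_k)$, oldest first); it is not the FastAST implementation itself.

```python
import numpy as np

def lbfgs_direction(grad_h, history, t, H0_diag):
    """Modified L-BFGS two-loop recursion (sketch of Algorithm "lbfgs").

    history : list of (r_k, q_k, Q_k) difference vectors, oldest first
    t       : current barrier parameter t_i
    H0_diag : diagonal of the initial Hessian approximation hat-H_i
    """
    d = -grad_h
    stack = []
    for r, q, Q in reversed(history):         # first loop: newest to oldest
        psi = q + Q / t                       # psi_k = q_k + t^{-1} Q_k
        sigma = (r @ d) / (r @ psi)
        d = d - sigma * psi
        stack.append((sigma, psi, r))
    d = d / H0_diag                           # apply hat-H_i^{-1}
    for sigma, psi, r in reversed(stack):     # second loop: oldest to newest
        beta = (psi @ d) / (psi @ r)
        d = d + r * (sigma - beta)
    return d                                  # search direction Delta-u

def diag_hessian_scaling(N, h00):
    """Heuristic initial Hessian diagonal of eq. (Hi), given the (0,0)th
    entry h00 of the true Hessian."""
    ramp = np.arange(N - 1, 0, -1) / (2.0 * N)   # (N-1)/(2N), ..., 1/(2N)
    return np.concatenate(([1.0], ramp, ramp)) * h00
```

With an empty history the recursion reduces to the scaled steepest-descent step $-\hat H_i^{-1}\nabla_u h_{t_i}(u_{i-1})$; with stored pairs it satisfies the secant condition, i.e., the implied inverse Hessian approximation maps $\psi_k$ to $r_k$.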
\subsection{Line Search} The line search along the search direction $\Delta u$ is a simple backtracking line search starting at $\alpha=1$. A step size is accepted if the new point is strictly feasible, i.e., if $u_{i-1} + \alpha\Delta u\in\mathop{\bf int}\ensuremath{\mathcal{C}}\xspace$. It is then easy to show that the primal solution $\mu_{i}$ obtained by inserting $u_{i}$ into \eqref{v_t} and \eqref{x_t} is strictly primal feasible ($\mu_{i}\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace$). To guarantee that the objective is sufficiently decreased, the Armijo rule is also required for acceptance of a step size $\alpha$: \begin{align} h_{t_{i}}(u_{i-1}+\alpha\Delta u) &\le h_{t_{i}}(u_{i-1}) + c \alpha \Delta u^{\mathrm{T}} \nabla_u h_{t_{i}}(u_{i-1}), \end{align} where $0<c<1$ is a suitably chosen constant. \subsection{The Duality Gap and Update of \texorpdfstring{$t$}{t}} The line search guarantees that the primal solution is strictly feasible in all iterations, i.e., that $\mu_{i}\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace$. Dual feasibility of a solution $\lambda_{i}$ obtained from \eqref{lambda_t} is not guaranteed. The algorithm therefore checks for $\lambda_{i}\in\ensuremath{\mathcal{K}}\xspace^*$ using the approximate approach described in Sec. \ref{dualmembership}. Let $f^\star$ denote the optimal value of the problem \eqref{conicproblem}. If $\lambda_{i}$ is dual feasible, the objective of the dual \eqref{dualproblem} provides a lower bound on the optimal value, i.e., \begin{align} f^\star \ge -\frac{1}{4}\norm{s_{i}}_2^2 - \re(y^\mathrm{H} s_{i}). \end{align} The algorithm always retains the largest lower bound it has encountered in $f_|LB|$. From the lower bound, a duality gap $\eta_{i}$ can be evaluated in each iteration: \begin{align} \eta_{i} = f(\mu_{i}) - f_|LB|. \label{eta} \end{align} This value gives an upper bound on the suboptimality of the solution $\mu_{i}$, i.e., $f(\mu_{i})-f^\star\le\eta_{i}$.
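As an illustration of the line search described above, a minimal Python sketch could read as follows. Here \texttt{h}, \texttt{grad} and the strict-feasibility test are assumed to be supplied by the surrounding solver, and the shrink factor and backtrack cap are our own choices, not values prescribed by FastAST.

```python
import numpy as np

def backtracking_line_search(h, feasible, u, du, grad, c=0.05,
                             shrink=0.5, max_backtracks=60):
    """Backtracking line search: accept alpha if u + alpha*du is strictly
    feasible and the Armijo sufficient-decrease condition holds."""
    h0 = h(u)
    slope = grad @ du          # directional derivative, negative for descent
    alpha = 1.0
    for _ in range(max_backtracks):
        u_new = u + alpha * du
        if feasible(u_new) and h(u_new) <= h0 + c * alpha * slope:
            return alpha, u_new
        alpha *= shrink
    raise RuntimeError("line search did not find an acceptable step")
```

For a strongly curved objective the full step $\alpha=1$ is rejected by the Armijo test and the step is halved until both conditions hold.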
Recall that the algorithm is ``aiming'' for a solution of the augmented KKT conditions \eqref{akkt}. At this solution, the duality gap is $\theta_F/t_{i+1}$. The next value of $t$ can then be determined so that the algorithm is aiming for a suitable (not too large, not too small) decrease in the duality gap, i.e., we select $t_{i+1}$ such that $\eta_i/\gamma = \theta_F/t_{i+1}$ for some preselected $\gamma>1$. To guarantee convergence it is also imposed that $t_{i}$ is a non-decreasing sequence. \subsection{Termination} The duality gap provides a natural stopping criterion. The proposed algorithm terminates based on either the duality gap ($\eta_{i}<\varepsilon_|abs|$) or the relative duality gap ($\eta_{i} / f(\mu_{i}) < \varepsilon_|rel|$). The relative duality gap is a sensible stopping criterion because $f(\mu)\ge0$ as is seen in the proof of Lemma \ref{bounded_lemma}. Algorithm \ref{alg} is guaranteed to terminate at a point that fulfills either of the two stopping criteria listed above. To see why that is the case, consider a scenario where $t_{i}$ converges to some finite constant $\tilde t$ as $i\rightarrow\infty$. Then, as $i\rightarrow\infty$, the algorithm implements L-BFGS with a backtracking line search to minimize $h_{\tilde t}$. Thus $u_{i}$ converges to the minimizer $u^{(\tilde t)}$ of $h_{\tilde t}$. Let $(\mu^{(\tilde t)}, \lambda^{(\tilde t)})$ denote the corresponding primal and dual variables calculated from \eqref{v_t}, \eqref{x_t} and \eqref{lambda_t}. Now, $(\mu^{(\tilde t)}, \lambda^{(\tilde t)})$ constitute a solution to \eqref{akkt} with $t=\tilde t$. Then $\lambda^{(\tilde t)}\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace^*$ follows from \eqref{gradindual}. Further, we have from \eqref{eta}, \eqref{lambda_t} and \eqref{graddotprod} that the duality gap $\eta_{i}$ converges to $\left< \mu^{(\tilde t)}, \lambda^{(\tilde t)} \right> = \theta_F / \tilde t$ as $i\rightarrow\infty$. 
However, that implies $t_{i+1}=\gamma\theta_F/\eta_i=\gamma\tilde t>\tilde t$ in the limit, contradicting the assumption that $t_{i}$ converges to $\tilde t$. This means that $t_{i}$ does not converge to a finite value and, as it is non-decreasing, it must diverge to $+\infty$. It is also evident that the duality gap $\eta_{i}\rightarrow0$ as $t_{i}\rightarrow\infty$, and so one of the stopping criteria is eventually fulfilled. \subsection{Initialization} FastAST must be initialized with a primal variable $u_0\in\ensuremath{\mathcal{C}}\xspace$ and a barrier parameter $t_1>0$. To determine a suitable value of the initial barrier parameter $t_1$ we first identify a primal-dual feasible point from which the duality gap can be evaluated. A primal-dual feasible point can be obtained by assuming% \footnote{The problem \eqref{problem} is solvable if and only if $w\in\ensuremath{\mathcal{C}}\xspace^*$. The restriction to the interior has no practical effect.} $w\in\mathop{\bf int}\ensuremath{\mathcal{C}}\xspace^*$ and iterating these steps: \begin{enumerate} \item Set $u=(10\lVert y\rVert_2^2/N, 0, \ldots, 0)^{\mathrm{T}}$. \item Calculate $(\mu,\lambda)$ from $u$ based on \eqref{lambda_t}, \eqref{v_t} and \eqref{x_t}. \item If $\lambda\in\ensuremath{\mathcal{K}}\xspace^*$, terminate; otherwise double the first entry of $u$ and go to step 2. \end{enumerate} The value of $u$ in Step 1 has been chosen heuristically. It is easy to see that $u$ stays primal feasible throughout. It is guaranteed that a dual feasible point is reached because $u\rightarrow(\infty,0,\ldots,0)^{\mathrm{T}}$. Then, following \eqref{x_t}, we have $x\rightarrow y$ and so $s\rightarrow 0$. Considering the result in Lemma \ref{Kstar_lemma} and the assumption $w\in\mathop{\bf int}\ensuremath{\mathcal{C}}\xspace^*$ we get that $\lambda$ converges to a point $\tilde\lambda\in\mathop{\bf int}\ensuremath{\mathcal{K}}\xspace^*$.
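The three initialization steps above might be sketched as follows. This is a sketch only: the map from $u$ to $\lambda$ via \eqref{lambda_t}, \eqref{v_t} and \eqref{x_t}, as well as the dual membership test, are assumed to be passed in as callables, and the doubling cap is our own safeguard.

```python
import numpy as np

def find_initial_point(y, N, dual_from_u, is_dual_feasible, max_doublings=100):
    """Doubling initialization: start from the heuristic u of Step 1 and
    double its first entry until the corresponding dual point lies in K*."""
    u = np.zeros(2 * N - 1)
    u[0] = 10.0 * np.linalg.norm(y) ** 2 / N   # Step 1 (heuristic choice)
    for _ in range(max_doublings):
        lam = dual_from_u(u)                   # Step 2
        if is_dual_feasible(lam):              # Step 3
            return u
        u[0] *= 2.0
    raise RuntimeError("no dual feasible point found")
```

Since only the first entry of $u$ grows, $T(u)$ stays positive definite and the iterate remains primal feasible throughout, matching the argument in the text.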
When a primal-dual feasible point $(\mu,\lambda)$ has been found, the corresponding duality gap is $\eta_0=\left< \mu, \lambda \right>$. The initial value of the barrier parameter is selected as $t_1 = \gamma\theta_F / \eta_0$. The corresponding value of $u$ is used as the initial value of the primal variable $u_0$. \section{Fast Computations} \label{sec:complexity} For brevity, iteration indices are dropped in the following. The computationally demanding steps of FastAST (Alg.~\ref{alg}) all involve the determinant or the inverse of the Toeplitz matrices $T(u)$ and $T(u+2^{-1}\tau e_0)$. In this section we demonstrate how the Toeplitz structure can be exploited to significantly reduce the computational complexity of these evaluations. The use of such structure for the fast solution of optimization problems has previously been demonstrated \cite{HansenFR17, musicus-fastmlm}, including for evaluation of the gradient and Hessian of the barrier function $G$ \cite{genin-optimization, alkire-autocorrelation}. \subsection{Fast Algorithms for Factorizing a Toeplitz Inverse} Our computational approach is based on the following factorizations of Toeplitz inverses. The Gohberg-Semencul formula \cite{ammar-generalized, gohberg-convolution} gives a factorization of the inverse of a Toeplitz matrix $T(u)$, \begin{align} T^{-1}(u) = \delta_{N-1}^{-1} (U^\mathrm{H} U - VV^\mathrm{H}), \label{gohberg-semencul} \end{align} where the entries of the Toeplitz matrices $U$ and $V$ are \begin{align} U_{n,m} &= \rho_{N-1+n-m}, \\ V_{n,m} &= \rho_{n-m-1}, \end{align} for $n,m=0,\ldots,N-1$. Note that $\rho_n=0$ for $n<0$ and $n>N-1$; thus $U$ is unit upper triangular ($\rho_{N-1}=1$) and $V$ is strictly lower triangular. The values $\delta_n$ and $\rho_n$ for $n=0,\ldots,N-1$ can be computed with a generalized Schur algorithm in $\ensuremath{\mathcal{O}}\xspace(N\log^2N)$ flops \cite{ammar-generalized}.
Alternatively, the Levinson-Durbin algorithm can be used to obtain the decomposition in $\ensuremath{\mathcal{O}}\xspace(N^2)$ flops. The latter algorithm is significantly simpler to implement and is faster for small $N$. In \cite{ammar-numerical} it is concluded that the Levinson-Durbin algorithm requires fewer total operations than the generalized Schur algorithm for $N\le256$. We will also use a Cholesky factorization of $T^{-1}(u)$, \begin{align} T^{-1}(u)=PDP^\mathrm{H}\, \label{PDP} \end{align} where $P$ is unit upper triangular and $D$ is diagonal. The matrix $D=\diag(\delta_0^{-1},\ldots,\delta_{N-1}^{-1})$ is inherently computed when the generalized Schur algorithm is executed \cite{ammar-generalized}. The generalized Schur algorithm does not compute the matrix $P$. The Levinson-Durbin algorithm inherently computes both $P$ and $D$, a property which we exploit for evaluation of the Hessian of the barrier function $G$. In the following we let $\rho_0, \ldots, \rho_{N-1}$ and $\delta_0, \ldots, \delta_{N-1}$ be the entries obtained by executing the generalized Schur or Levinson-Durbin algorithm with either $T^{-1}(u)$ or $T^{-1}(u+2^{-1}\tau e_0)$; which one will be clear from the context. \subsection{Evaluating the Objective and the Primal Variables} We first discuss evaluation of the objective $h_{t}(u) = g(u) + t^{-1} G(u)$. Since $P$ in \eqref{PDP} has unit diagonals, it is easy to obtain \begin{align} G(u) = - \log|T(u)| = - \sum_{n=0}^{N-1} \log\delta_n. \end{align} To evaluate $g(u)$ insert \eqref{gohberg-semencul} into \eqref{g} and realize that all matrix-vector products involve Toeplitz matrices. Vector multiplication onto a Toeplitz matrix can be performed using the fast Fourier transform (FFT) in $\ensuremath{\mathcal{O}}\xspace(N\log N)$ flops (such products are convolutions, see e.g.~\cite{alkire-autocorrelation} for details). 
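For illustration, a plain $\ensuremath{\mathcal{O}}\xspace(N^2)$ Levinson-Durbin recursion producing the factors in \eqref{PDP} could be sketched as follows. This is a textbook sketch under our own conventions (Hermitian positive definite Toeplitz matrix with first column $r$); it is not the optimized implementation used in FastAST.

```python
import numpy as np

def levinson_pdp(r):
    """Levinson-Durbin recursion for a Hermitian positive definite Toeplitz
    matrix T with first column r = (r_0, ..., r_{N-1}).

    Returns (P, delta) with P unit upper triangular (columns hold the
    backward prediction filters) and delta the prediction-error variances,
    such that inv(T) = P diag(1/delta) P^H. Cost: O(N^2) flops.
    """
    N = len(r)
    a = np.zeros(N, dtype=complex)        # forward predictor coefficients
    delta = np.zeros(N)
    delta[0] = r[0].real
    P = np.eye(N, dtype=complex)
    for k in range(1, N):
        acc = r[k] + np.dot(a[1:k], r[k-1:0:-1])
        kappa = -acc / delta[k-1]         # reflection coefficient
        a[1:k] = a[1:k] + kappa * np.conj(a[k-1:0:-1])
        a[k] = kappa
        delta[k] = delta[k-1] * (1.0 - abs(kappa)**2)
        P[:k, k] = np.conj(a[k:0:-1])     # backward predictor -> column k
    return P, delta
```

Positive definiteness of $T$ can be tested along the way exactly as stated in the text: all $\delta_n$ must stay positive.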
In conclusion, the dominant cost of evaluating $h_t(u)$ is the execution of the generalized Schur (or Levinson-Durbin) algorithm. Evaluating the primal variables $v^{(t)}$ and $x^{(t)}$ in \eqref{v_t} -- \eqref{x_t} similarly amounts to vector products onto Toeplitz matrices. The line search in Algorithm \ref{alg} must check for $u\in\ensuremath{\mathcal{C}}\xspace$, i.e., if $T(u)\succ0$. The generalized Schur (or Levinson-Durbin) algorithm can again be used here, as $T(u)\succ0$ if and only if $\delta_n>0$ for $n=0,\ldots,N-1$. \subsection{Evaluating the Gradients} The following gradients must be evaluated in each iteration of Algorithm \ref{alg}: \begin{align} \nabla_u g(u) &= \tau w - \tau T^*\!\left( \phi\phi^\mathrm{H} \right) \\ \nabla_u G(u) &= - T^*\!\left( T^{-1}(u) \right). \end{align} We first consider the term $T^*(\phi\phi^\mathrm{H})$. The vector $\phi$ can be evaluated with low complexity (confer the evaluation of primal variables, above). Let $\beta_n\in\mathbb{C}$ denote the sum over the $n$th upper diagonal of $\phi\phi^\mathrm{H}$ for $n=0,\ldots,N-1$, i.e., \begin{align} \beta_n &= \sum_{m=0}^{N-1-n} (\phi\phi^\mathrm{H})_{m,m+n} = \sum_{m=0}^{N-1-n} \phi_m\bar\phi_{m+n}. \label{sum_T_diag} \end{align} It is recognized that the values $\beta_0,\ldots,\beta_{N-1}$ can be calculated as a correlation, which can be implemented using FFTs in $\ensuremath{\mathcal{O}}\xspace(N\log N)$ flops. Then $T^*(\phi\phi^\mathrm{H})$ can be obtained by concatenating and scaling the real and imaginary parts of $\beta$, \begin{align} T^*(\phi\phi^\mathrm{H}) = (2\beta_0, 2\re(\beta_1), \dots, 2\re(\beta_{N-1}), 2\im(\beta_1), \dots, 2\im(\beta_{N-1}))^{\mathrm{T}}\,. \label{a-to-T} \end{align} Now consider evaluation of the term $T^*\!\left( T^{-1}(u) \right)$. 
The sum over the $n$th upper diagonal of $T^{-1}(u)$ is denoted as $\tilde\beta_n$ and can be rewritten as \begin{align} \tilde\beta_n &= \sum_{m=0}^{N-1-n} (T^{-1}(u))_{m,m+n} =\delta_{N-1}^{-1} \sum_{k=0}^{N-1} (n-N + 2(k+1)) \rho_{k} \bar\rho_{k + n}\,, \end{align} see \cite{HansenFR17,musicus-fastmlm} for details. The above is recognized as two correlations, thus allowing a low-complexity evaluation. The vector $T^*\!\left( T^{-1}(u) \right)$ is found by concatenating and scaling the real and imaginary parts of $\tilde\beta$, analogously to \eqref{a-to-T}. \subsection{Evaluating the Full Hessian} When Newton's method is used to determine the search direction, the Hessian of $h_t$ must be evaluated. We now derive an approach to calculate the Hessians of $g$ and $G$, from which the required Hessian is easily found. The $(n,m)$th entry of the Hessian of $g$ is \begin{align} \label{hess_g} \left( \nabla^2_ug(u) \right)_{n,m} = 2\tau \phi^\mathrm{H} (E_n+E_n^\mathrm{H}) T^{-1}(u+2^{-1}\tau e_0) (E_m+E_m^\mathrm{H}) \phi, \end{align} where \begin{align} E_n = \begin{cases} I & n = 0 \\ \tilde E^n & 1 \le n \le N-1 \\ -jE_{n-N+1} & N \le n \le 2N-1. \end{cases} \end{align} The matrix $\tilde E$ is the lower shift matrix, i.e., it has ones on the lower subdiagonal and zeros elsewhere. Note that $T(e_n)=E_n+E_n^\mathrm{H}$. The $m$th column of the Hessian is then \begin{align} \left( \nabla^2_ug(u) \right)_{m} &= \tau T^*\! \left( d_m \phi^\mathrm{H} + \phi d_m^\mathrm{H} \right), \end{align} where we let $d_m$ denote a vector $d_m = T^{-1}(u+2^{-1}\tau e_0) (E_m+E_m^\mathrm{H}) \phi$. Assuming the decomposition \eqref{gohberg-semencul} is available, a column of the Hessian can be calculated in $\ensuremath{\mathcal{O}}\xspace(N\log N)$ flops by explicitly forming $d_m$ and performing sums over diagonals (as in \eqref{sum_T_diag}). The full Hessian of $g$ is then obtained in $\ensuremath{\mathcal{O}}\xspace(N^2\log N)$ flops. 
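The FFT-based diagonal-sum operation used in both gradient terms can be made concrete with a small Python sketch. Function names are ours, and the padding length is simply any $N_{\mathrm{FFT}}\ge 2N-1$, as required to make the circular correlation linear.

```python
import numpy as np

def diag_sums(phi):
    """beta_n = sum_m phi_m conj(phi_{m+n}), n = 0..N-1, as in eq.
    (sum_T_diag), computed as a correlation via FFTs in O(N log N)."""
    N = len(phi)
    nfft = 1 << (2 * N - 1).bit_length()       # any nfft >= 2N-1 works
    F = np.fft.fft(phi, nfft)
    acf = np.fft.ifft(F * np.conj(F))[:N]      # sum_m phi_{m+n} conj(phi_m)
    return np.conj(acf)                        # beta_n is its conjugate

def T_adjoint_rank1(phi):
    """T^*(phi phi^H): stacked and scaled real/imaginary parts of beta,
    as in eq. (a-to-T)."""
    beta = diag_sums(phi)
    return 2.0 * np.concatenate(([beta[0].real], beta[1:].real, beta[1:].imag))
```

The same pattern (zero-padded FFTs implementing linear correlations) applies to the sums $\tilde\beta_n$ over the diagonals of $T^{-1}(u)$.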
To evaluate the Hessian of the barrier function $G$ we generalize the approach of \cite{alkire-autocorrelation} to the complex-valued case. The $(n,m)$th entry of the Hessian is \begin{align} \label{hess_G} \left( \nabla_u^2 G(u) \right)_{n,m} &= \tr\!\left( T^{-1}(u) (E_n+E_n^\mathrm{H}) T^{-1}(u) (E_m+E_m^\mathrm{H}) \right). \end{align} Define the $N\times N$ matrices $A$ and $B$ with entries \begin{align} A_{n,m} &= 2 \tr\!\left( T^{-1}(u) E_n T^{-1}(u) E_m \right)\\ B_{n,m} &= 2 \tr\!\left( T^{-1}(u) E_n T^{-1}(u) E_m^{\mathrm{T}} \right). \end{align} Then the Hessian can be written in the form \begin{align} \label{hess_G_block} \nabla^2_uG(u) = \left(\begin{matrix} \re(A+B) & \re(-jAJ^{\mathrm{T}}) \\ \re(-jJA-jJB) & \re(-JAJ^{\mathrm{T}} + JBJ^{\mathrm{T}}) \end{matrix}\right), \end{align} where $J$ is a matrix that removes the first row, i.e., $J=(0,I)$, where $0$ is a column of zeros and $I$ is the $(N-1)\times(N-1)$ identity matrix. At this point, we need a fast way of evaluating matrices $A$ and $B$. Define the discrete Fourier transform matrix $W\in\mathbb{C}^{N_|FFT|\times N}$ with entries \begin{align} W_{n,m} = \exp(-j2\pi nm / N_|FFT|), \end{align} where $N_|FFT|$ is chosen such that $N_|FFT|\ge2N-1$. Recall that the Levinson-Durbin algorithm gives the decomposition $T^{-1}(u) = PDP^\mathrm{H}$, from which $T^{-1}(u) = RR^\mathrm{H}$ is obtained by calculating $R = PD^{\frac{1}{2}}$. Let $S_n$ denote the discrete Fourier transform of the $n$th column of $R$ (denote this column $R_n$), i.e., $S_n = WR_n$. 
Then by straight-forward generalization of the derivation in \cite{alkire-autocorrelation} to the complex-valued case, we get that $A$ and $B$ can be written in the forms \begin{align} A &= \frac{2}{N_|FFT|^2} W^{\mathrm{T}} \left( \left( \sum_{l=0}^{N-1} S_lS_l^\mathrm{H} \right) \odot \left( \sum_{l=0}^{N-1} S_lS_l^\mathrm{H} \right) \right) W \label{A} \\ B &= \frac{2}{N_|FFT|^2} W^{\mathrm{T}} \left( \left( \sum_{l=0}^{N-1} S_lS_l^\mathrm{H} \right) \odot \left( \sum_{l=0}^{N-1} S_lS_l^\mathrm{H} \right) \right) \bar W, \label{B} \end{align} with $\odot$ denoting the Hadamard (entrywise) product. Using \eqref{A} -- \eqref{B} the Hessian of $G$ can be evaluated in $\ensuremath{\mathcal{O}}\xspace(N^3)$ flops. \subsection{Evaluating the Diagonal Hessian Approximation} The L-BFGS variant of FastAST uses the approximation of the Hessian diagonal \eqref{Hi} which requires calculation of the first entry of the Hessian \begin{align} \left(\nabla^2_u h_{t}(u)\right)_{0,0} = \left(\nabla^2_u g(u)\right)_{0,0} + \frac{1}{t} \left(\nabla^2_u G(u)\right)_{0,0}. \end{align} An $\ensuremath{\mathcal{O}}\xspace(N\log N)$ evaluation of the first term is easily obtained from \eqref{hess_g}. The second term can be evaluated based on \eqref{hess_G_block}, but a more efficient way is as follows: From \eqref{hess_G} we have \begin{align} \left( \nabla_u^2 G(u) \right)_{0,0} &= 4 \tr\!\left( T^{-1}(u) T^{-1}(u) \right). \label{hess_G_first} \end{align} The matrix $T^{-1}(u)$ can be formed explicitly in $\ensuremath{\mathcal{O}}\xspace(N^2)$ flops using the Trench algorithm \cite{trench-algorithm, golub-matrix}. 
However, since the decomposition \eqref{gohberg-semencul} is already available in our setting, it is much easier to form $T^{-1}(u)$ from it by writing for $n=0,\ldots,N-1$ and $m=0,\ldots,N-1-n$ {\small \begin{align*} T^{-1}(u)_{m,m+n} = \delta_{N-1}^{-1} \left( \sum_{k=0}^m \bar\rho_{N-1-k} \rho_{N-1-(k+n)} - \rho_{k-1} \bar\rho_{k+n-1} \right), \end{align*} } i.e., $T^{-1}(u)$ is ``formed along the diagonals''. By implementing the above sum as a cumulative sum, the complete matrix $T^{-1}(u)$ is formed in $\ensuremath{\mathcal{O}}\xspace(N^2)$ flops. Note that since $T(u)$ is both Hermitian and persymmetric, then so is $T^{-1}(u)$. This means that only one ``wedge'' of the matrix, about $N^2/4$ entries, must be calculated explicitly \cite{golub-matrix}. The trace in \eqref{hess_G_first} is evaluated by taking the squared magnitude of all entries in $T^{-1}(u)$ and summing them. \subsection{Analysis of Computational Complexity} To summarize the computational complexity of an implementation of Alg. \ref{alg} based on the low-complexity evaluations above, consider each of the two variants for determining the search direction. \begin{itemize} \item FastAST Newton: The computation time is asymptotically dominated by evaluation and inversion of the Hessian, i.e., $\ensuremath{\mathcal{O}}\xspace(N^3)$ flops. \item FastAST L-BFGS: The computation time is asymptotically dominated by the $\ensuremath{\mathcal{O}}\xspace(MN)$ modified L-BFGS two-loop recursion in Alg. \ref{lbfgs} or by the $\ensuremath{\mathcal{O}}\xspace(N^2)$ evaluation of the diagonal Hessian approximation. \end{itemize} When using the Newton search direction, the decomposition \eqref{PDP} is required and the Levinson-Durbin algorithm must therefore be used to evaluate the factorization of the Toeplitz inverse. When using the L-BFGS search direction either the generalized Schur or the Levinson-Durbin algorithm can be used.
The choice does not affect the asymptotic computational complexity, but one may be faster than the other in practice. \section{Numerical Experiments} \label{sec:numerical} \subsection{Setup \& Algorithms} \begin{table}[tbp] \centering \small \begin{tabular}{lcc} \toprule Variant & L-BFGS & Newton \\ \midrule Number of saved difference vectors $M$ & $2N-1$ & - \\ Armijo parameter $c$ & $0.05$ & $0.05$ \\ Barrier parameter multiplier $\gamma$ & $2$ & $10$ \\ Absolute tolerance $\varepsilon_|abs|$ & $10^{-4}$ & $10^{-7}$ \\ Relative tolerance $\varepsilon_|rel|$ & $10^{-4}$ & $10^{-7}$ \\ \bottomrule \end{tabular} \vspace{1mm} \caption{Algorithm parameters.} \label{tab:parameters} \end{table} In our experiments we use the signal model \eqref{model}. The frequencies $\omega_0, \ldots, \omega_{K-1}$ are drawn randomly on $[0,2\pi)$, such that the minimum separation% \footnote{The wrap-around distance on $[0,2\pi)$ is used for all frequency differences.} between any two frequencies is $4\pi/N$. The coefficients $c_0,\ldots,c_{K-1}$ are generated independently according to a circularly symmetric standard complex Gaussian distribution. After generating the set of $K$ frequencies and coefficients, the variance of the noise vector $\zeta$ is selected such that the desired signal-to-noise ratio (SNR) is obtained. The regularization parameter $\tau$ is selected from \eqref{tau} based on the true noise variance. We assess the algorithms based on their ability to solve AST, which is obtained by selecting $w=2e_0$ in \eqref{problem}. We show results for both the L-BFGS and Newton variants of FastAST% \footnote{Our code is publicly available at github.com/thomaslundgaard/fast-ast.}. For $N\le512$ our implementation uses the Levinson-Durbin algorithm for Toeplitz inversion, while for $N>512$ it uses the generalized Schur algorithm where applicable. The parameters of the algorithm are listed in Table~\ref{tab:parameters}.
It is worth saying a few words about the number of saved difference vectors $M$ in L-BFGS. On the one hand, selecting larger values of $M$ can decrease the total number of iterations required (by improving the Hessian approximation), but on the other hand doing so increases the number of flops required per iteration. In our numerical experiments we have found that setting it equal to the size of $u$ ($M=2N-1$) provides a good trade-off. Loosely speaking, this choice allows L-BFGS to perform a full-rank update of the Hessian approximation, while it does not increase the asymptotic per-iteration computational complexity. With this choice the algorithm asymptotically requires $\ensuremath{\mathcal{O}}\xspace(N^2)$ flops per iteration. Performance of the ADMM algorithm% \footnote{We use the implementation from github.com/badrinarayan/astlinespec.} \cite{Bhaskar:2013} is also shown along with that of CVX \cite{cvx} applied with both the SeDuMi\cite{Sturm:99} and Mosek\footnote{mosek.com} solvers. \subsection{Solution Accuracy Per Iteration} For this investigation a ground-truth solution of \eqref{problem} is obtained using CVX+SeDuMi with the precision setting set to ``best''. We denote this value as $\mu^\star$. Fig.~\ref{fig:iters} shows the normalized squared error between $\mu^\star$ and the solution in each iteration of the algorithms. The algorithms ignore the stopping criteria and run until no further progress can be made towards the solution. FastAST Newton converges very fast and a solution of very high accuracy is obtained within 25 iterations. This is due to the well-known quadratic convergence of Newton's method. While FastAST L-BFGS converges significantly more slowly, it requires only $\ensuremath{\mathcal{O}}\xspace(N^2)$ flops per iteration versus the $\ensuremath{\mathcal{O}}\xspace(N^3)$ flops per iteration of FastAST Newton. We therefore cannot, at this point, conclude which version of FastAST is faster in practice.
Note that ADMM, on the other hand, requires $\ensuremath{\mathcal{O}}\xspace(N^3)$ flops per iteration, the same as FastAST Newton, but requires significantly more iterations. FastAST L-BFGS is seen to make no further progress after approximately $300$ iterations. This happens due to numerical challenges in evaluating the L-BFGS search direction. It is well-known that Woodbury's matrix identity, upon which L-BFGS is based, has limited numerical stability. For this reason FastAST \mbox{L-BFGS} is unable to obtain a solution of the same accuracy as the SeDuMi and Mosek solvers. Despite this, as seen in the following sections, the solution accuracy of FastAST L-BFGS is sufficiently high in all cases but those with very high SNR. The tolerance values of FastAST L-BFGS are selected to be larger than those of FastAST Newton (Table~\ref{tab:parameters}) because of the mentioned numerical issues with obtaining a high-accuracy solution. FastAST Newton does not suffer from this problem and can obtain a solution of about the same accuracy as SeDuMi and Mosek. ADMM can also obtain a solution of high accuracy but, as can be seen in Fig.~\ref{fig:iters}, it has slow convergence starting around iteration number $175$. It therefore takes a large number of iterations to obtain a solution of the same accuracy as SeDuMi/Mosek or FastAST Newton.
\pgfplotsset{global_axis_style/.style={ title style={font=\footnotesize}, legend columns=1, legend style={font=\scriptsize}, legend style={inner xsep=2pt, inner ysep=1pt, nodes={inner sep=0.8pt}}, legend style={/tikz/every even column/.append style={column sep=2pt}}, label style={font=\footnotesize}, xlabel shift=-3pt, ylabel shift=-3pt, xticklabel style={font=\footnotesize}, yticklabel style={font=\footnotesize}, every axis plot/.append style={line width=1pt} }} \setlength\figureheight{32mm} \setlength\figurewidth{60mm} \pgfplotsset{local_axis_style/.style={ mark phase={10}, mark repeat={50}, minor x tick num = 1, xminorgrids=true }} \begin{figure}[t] \centering \input{make_outputs/iters.tikz} \vspace{-3mm} \caption{Solution accuracy versus iteration. The signal length is $N=64$, the number of sinusoids is $K=6$ and the SNR is $20\,\ensuremath{\,\textrm{dB}}\xspace$.} \label{fig:iters} \end{figure} \pgfplotsset{local_axis_style/.style={ transpose legend, legend columns=3, xticklabel={ \pgfkeys{/pgf/fpu=true} \pgfmathparse{exp(\tick)}% \pgfmathprintnumber[fixed, precision=0]{\pgfmathresult} \pgfkeys{/pgf/fpu=false} } }} \begin{figure*}[t] \begin{minipage}[t]{0.5\linewidth} \centering \pgfplotsset{local_axis_style/.append style={ ymin = 7e-4, ymax = 1e-2 }} \input{make_outputs/N_nmse.tikz} \end{minipage}% \begin{minipage}[t]{0.5\linewidth} \centering \pgfplotsset{local_axis_style/.append style={ ymin = 1e-10, ymax = 1e-5 }} \input{make_outputs/N_taumse.tikz} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \pgfplotsset{local_axis_style/.append style={ ymin =, ymax = }} \input{make_outputs/N_iters.tikz} \end{minipage}% \begin{minipage}[t]{0.5\linewidth} \centering \pgfplotsset{local_axis_style/.append style={ ymin = 4e-3, ymax = 1e2 }} \input{make_outputs/N_time.tikz} \end{minipage}% \vspace{-3mm} \caption{Simulation results for varying problem size $N$. 
The SNR is $20\,\ensuremath{\,\textrm{dB}}\xspace$ and the number of sinusoids $K$ is selected as $N/10$ rounded to the nearest integer. Results are averaged over $100$ Monte Carlo trials. The legend applies to all plots; only the NMSE of Oracle is shown. In the figure with runtime the asymptotic per-iteration computational complexity is also plotted.} \label{fig:N} \end{figure*} \subsection{Metrics} In the following we perform a Monte Carlo simulation study. Four metrics of algorithm performance and behaviour are considered: normalized mean-square error (NMSE) of the reconstructed signal $x$; mean-square error (MSE) of the frequencies $\{\omega_k\}$ conditioned on successful recovery; number of iterations; and algorithm runtime. The NMSE of the reconstructed signal is obtained by estimating the frequencies from the dual polynomial as described in \cite{Bhaskar:2013} and using these to obtain the least-squares solution for the coefficients. An estimate of $x$ is then obtained by inserting into \eqref{model}. This estimate is also known as the \textit{debiased} solution and it is known to have smaller NMSE than the estimate of $x$ directly obtained as the solution of \eqref{problem} \cite{Bhaskar:2013}. In the evaluation of the signal reconstruction the performance of an Oracle estimator is also shown. The Oracle estimator knows the true frequencies and estimates the coefficients using least-squares. To directly assess the accuracy with which the frequencies are estimated we present the MSE of the frequency estimates obtained from the dual polynomial. The MSE of the frequency estimates is only calculated based on those Monte Carlo trials in which the set of frequencies is successfully recovered. Successful recovery means that the model order $K$ is correctly estimated and that all frequency estimates are within a distance of $\pi/N$ from their true value.
The association of the estimated to the true frequencies is obtained by minimizing the frequency MSE using the Hungarian method \cite{kuhn-hungarian}. The simulations are performed on a Lenovo T470p laptop with an Intel(R) Core(TM) i7-7820HQ CPU @ 2.90GHz, using MATLAB R2018b. MATLAB is restricted to only use a single CPU core, such that the runtime of the algorithms can be compared without differences in the parallelism achieved in the implementations. The computationally heavy steps of FastAST and ADMM are implemented in native code using the automatic code generation (``codegen'') feature of MATLAB. \subsection{Performance Versus Problem Size} The performance versus problem size $N$ is depicted in Fig.~\ref{fig:N}. First note that all algorithms give the same estimation accuracy at all problem sizes, providing strong evidence that they correctly solve \eqref{problem}. The number of iterations of FastAST L-BFGS increases with $N$. It is then expected that the total runtime asymptotically scales at a rate above the per-iteration cost of $\ensuremath{\mathcal{O}}\xspace(N^2)$ flops. Even so, the runtime for $N$ up to $2,048$ scales at a rate of about $\ensuremath{\mathcal{O}}\xspace(N^2)$. The number of iterations of FastAST Newton is practically independent of $N$. We then expect the total runtime to scale asymptotically as $\ensuremath{\mathcal{O}}\xspace(N^3)$. In practice it scales a little better for the values of $N$ considered here. The number of iterations of ADMM increases significantly with $N$ (doubling $N$ roughly doubles the number of iterations). This in turn means that the runtime scales faster than the asymptotic per-iteration cost of $\ensuremath{\mathcal{O}}\xspace(N^3)$ flops. In conclusion, both variants of FastAST are faster than ADMM already at $N=128$ and their runtime scales at a much slower rate than that of ADMM. This means that they are significantly faster than ADMM for large values of $N$.
For large $N$ it is also clear that the L-BFGS variant of FastAST is significantly faster than the Newton variant. \subsection{Performance Versus Signal-to-Noise Ratio} Fig.~\ref{fig:snr} shows performance versus the SNR level. Note that the conditional MSE of the frequency estimates is not shown for $0\,\ensuremath{\,\textrm{dB}}\xspace$ SNR because there are no Monte Carlo trials with successful recovery of the frequencies at this SNR. At SNRs up to $30\,\ensuremath{\,\textrm{dB}}\xspace$ all the algorithms perform the same in terms of NMSE of $x$ and conditional MSE of the frequency estimates. This means that all algorithms have found a sufficiently accurate solution of \eqref{problem} (relative to the SNR). At SNRs larger than $30\,\ensuremath{\,\textrm{dB}}\xspace$ FastAST L-BFGS shows a degraded solution accuracy compared to the remaining algorithms. This is due to the mentioned numerical issues and the consequently larger tolerances selected (cf. Table~\ref{tab:parameters}). In terms of the number of iterations and runtime, note that both variants of FastAST show roughly unchanged behaviour across different SNRs. ADMM, on the other hand, requires more iterations and has a larger runtime at large SNR. At large SNR, FastAST Newton is thus preferred due to its lower runtime compared to ADMM and its higher estimation accuracy compared to FastAST L-BFGS.
\pgfplotsset{local_axis_style/.style={ transpose legend, legend columns=3, }} \begin{figure*}[t] \begin{minipage}[t]{0.5\linewidth} \centering \pgfplotsset{local_axis_style/.append style={ ymin = 1e-6, ymax = 1e1 }} \input{make_outputs/snr_nmse.tikz} \end{minipage}% \begin{minipage}[t]{0.5\linewidth} \centering \pgfplotsset{local_axis_style/.append style={ ymin =, ymax = }} \input{make_outputs/snr_taumse.tikz} \end{minipage}\vspace{-2mm} \begin{minipage}[t]{0.5\linewidth} \centering \input{make_outputs/snr_iters.tikz} \end{minipage}% \begin{minipage}[t]{0.5\linewidth} \centering \pgfplotsset{local_axis_style/.append style={ ymin = 1e-2, ymax = 3e0 }} \input{make_outputs/snr_time.tikz} \end{minipage}% \vspace{-3mm} \caption{Simulation results for varying SNR. The signal length is $N=64$ and the number of sinusoids is $K=6$. Results are averaged over $100$ Monte Carlo trials. The legend applies to all plots; only the NMSE of Oracle is shown.} \label{fig:snr} \end{figure*} \section{Conclusions} The FastAST algorithm presented in this paper provides a fast approach to solving the atomic norm soft thresholding problem \eqref{problem}. The L-BFGS variant provides a reasonably accurate solution and is much faster than any other algorithm for large problem size $N$. If a solution of high accuracy is requested, which may be desirable in very high SNR, a variant of FastAST based on Newton's method is also provided. This variant can find a solution of high accuracy in a small number of iterations. While it is slower than FastAST L-BFGS, it is significantly faster than the state-of-the-art method based on ADMM. The FastAST algorithm is obtained by reformulating the semidefinite program \eqref{problem} as a non-symmetric conic program \eqref{conicproblem}. This reformulation is of key importance in obtaining a fast algorithm. 
This work has provided an example of an optimization problem where it is beneficial to formulate it as a non-symmetric conic program instead of the standard, and much better understood, formulation as a symmetric conic program. We have also provided an implementation of a non-symmetric conic solver, thereby demonstrating the practical feasibility of this class of methods. We have demonstrated how the L-BFGS two-loop recursion can be modified to allow a quasi-Newton solution of the barrier problem \eqref{min_g} even when the barrier parameter $t$ is updated in every iteration. This approach can directly be applied in other algorithms based on the barrier method, including primal-only methods. Finally, note that there are many examples of optimization problems of practical interest which involve a constraint in either the cone of finite autocorrelation sequences $\ensuremath{\mathcal{C}}\xspace^*$ or the cone $\ensuremath{\mathcal{K}}\xspace$. Examples include the gridless SPICE method \cite{Yang:15} for line spectral estimation, and frequency-domain system identification and filter design as summarized in \cite{alkire-autocorrelation}. We expect that equally fast primal-dual IPMs can be derived for all of these problems using the techniques of this paper. We also expect that it is fairly straightforward to extend FastAST to atomic norm minimization with partial observations \cite{Tang:2013} or multiple measurement vectors \cite{li-multiple}. An interesting, but less obvious, extension is to the multi-dimensional harmonic retrieval problem \cite{chi-twodim}; for that purpose the work \cite{Yang:2016a} may contain some useful insights. \section*{Acknowledgements} We would like to thank Lieven Vandenberghe and Martin Skovgaard Andersen for providing valuable input to the work and pointing us to some important references.
\section{Introduction} \label{Intro} Exceptional points (EPs) were originally introduced~\cite{Heiss1990} in quantum mechanics and are defined as the complex branch point singularities where eigenvectors associated with repeated eigenvalues of a parametric non-Hermitian operator coalesce. This distinguishes an EP from a degeneracy branch point where two or more linearly independent eigenvectors exist with the same eigenvalue. The mathematical aspects of EPs have been discussed in the literature \cite{Mailybaev2003, Amore2021}. EPs can widely be found in non-Hermitian systems and have been reported in different physical problems including optics~\cite{Ruter2010,Othman2017} and acoustics~\cite{Lu2018,Maznev2018a,Ding2016}. This work aims to use discrete models of mechanical metamaterials (MMs) to analyze the EPs of two different operators (the dynamic matrix and the scattering matrix) and the associated scattering behaviors. The EPs of the dynamic matrix are shown to lead to bi-directional transparency, which features zero reflection and unitary transmission with zero phase difference. On the other hand, the EPs of the scattering matrix are associated with spontaneously broken parity-time~($\mathcal{PT}$) symmetry and one-way reflection. These two distinct occurrences of EPs have been reviewed and discussed in the literature for optical and photonic systems~\cite{Miri2019}. However, there has been little discussion of EPs in the mechanical context. The introduced discrete systems may be considered as reduced order analogs of continuum micro-structured media and help the conceptual design of these systems by removing all but essential dynamic features. The EPs of a dynamic matrix can be found in the eigenfrequency study, where the equations of motion are established for a repeating unit cell (RUC). The Bloch-Floquet condition is embedded in the wavenumber-dependent matrices.
Such a setup enables the computation of the eigenfrequencies and mode shapes of an infinite periodic array, for any prescribed wavenumber. The eigenfrequency band structure is of prime importance in the studies of MMs and phononic crystals (PCs)~\cite{Liu2000} as it signifies the overall dispersion of the micro-structured medium. Due to the coupling effects between degrees of freedom in a locally resonant structure, the band structure exhibits mode mixing and frequency band gaps. It has been shown~\cite{Amirkhizi2018d} that an internal resonator does not necessarily lead to a stop band. The existence and the width of a stop band are strongly related to the coupling strength between multiple degrees of freedom. To study this effect with a quantified coupling strength, a tunable discrete model is developed and presented here, in which the coupling tunability is achieved using a skewed resonator. In~\cref{DMEP}, it is shown that a wide band gap is associated with a large coupling constant, while a decoupled resonator leads to independent dispersion branches without a band gap. In cases where coupling is weak, the band gap becomes extremely narrow and the dispersion curves appear to repel each other to form an avoided crossing. Such a gap could lead to incorrect sorting of branches, due to its extremely small width and the sharp changes in mode shapes. This phenomenon is referred to as level repulsion (LR) and has been studied in the literature~\cite{Lu2018, Yeh2016, Wang2016, Amirkhizi2018d}. It is hypothesized that the sharp mode changes may be utilized for accurate and robust identification of a perturbative parameter in the operator under study, in this case the wavenumber. While the frequency dispersion curves (levels) are repelled in the real wavenumber domain, the two dispersion surfaces intersect each other at an EP in the complex domain.
Lu and Srivastava~\cite{Lu2018} introduced a method based on the mode shape continuity around such points to distinguish the real vs. avoided crossing points in the band structure. They showed that the instances of frequency level repulsion in the real wavenumber domain have their associated Riemann surfaces crossing at an EP in the complex wavenumber domain. The exotic topology of the eigenvalue surfaces in the vicinity of EPs has attracted extensive research interest in recent years~\cite{Ryu2015, Doppler2016, Xu2016a, Maznev2018a, Shen2018a, Miri2019}. However, the EPs discussed in the literature usually possess complex parameters (e.g., frequency, wavevector components) and are studied only in the eigenfrequency analysis. Similarly, in the first part of the present work, the complex wavenumber is used as the parameter that leads to non-Hermiticity of the dynamic matrix and controls the location of EPs. The question thus arises: how would an EP of the eigenfrequency band structure affect the scattering of such systems? To answer this question, one may seek to tune the parameters that can break the Hermiticity of an elastic system, and then modulate the system so that the complex singularity point is re-positioned onto the real frequency axis. It is shown that wavenumber-parameterized systems can be tuned by adding loss and gain to various spring elements. This could enable moving the location of the EP into the real wavenumber domain as well as making the associated frequency real (in contrast, the EPs in an earlier work~\cite{Maznev2018a} had complex frequency). Such a system may be studied in a simple harmonic scattering (real frequency) numerical experiment. In practice, loss and gain elements may be realized via viscous or other coupled multi-physics (e.g., piezoelectric) components. We derive the conditions for the EP to lie on the real frequency axis and show that the stiffness parameters must have certain compatible loss and gain factors.
With an EP re-positioned to the real frequency domain, it is then feasible to analyze the scattering behavior when operating near such an EP. Using the transfer matrix method (TMM), which relates the mechanical states on the left and right boundaries of a finite medium, the scattering coefficients, which describe the relation of incoming and outgoing waves through a sample, can then be derived to study the response when operating near EPs. The transfer matrix method is discussed in depth~\cite{Nemat-Nasser2015, Amirkhizi2017, Nanda2018, Amirkhizi2018c,Psiachos2018a,Psiachos2019} for wave propagation problems in 1D systems. It is widely used to determine the band structure and can also be used to compute the reflectance spectrum~\cite{Ardakani2017}. A similar approach to determine the band gap behavior of permuted PCs is the transfer function method~\cite{AlBabaa2017a,AlBabaa2019}. In~\cref{SecTSM}, the transfer and scattering matrices are constructed for the presented discrete MM array of finite length, with adjustable parameters that allow the unit cell to convert into a monatomic, diatomic, or locally resonant cell. The physical behavior of a MM crystal near an EP (e.g., scattering of steady state waves off a finite specimen) can lead to various interesting phenomena. It is illuminating to summarize the restrictions and simplifications that reciprocity (applicable to all 1D linear systems) and symmetry considerations (applicable to specific structures that admit them) provide, particularly when applied to mechanical systems with loss and gain. An example of parity symmetric scattering is shown in~\cref{sec:scatteringexamples}. In this example, the EP of the dynamic matrix is tuned by the loss and gain factors in the viscous or multi-physical springs to have real frequency and wavenumber, so that such a singularity point can be accessed in a scattering experiment.
With a certain number of unit cells, the MM sample becomes completely invisible in both directions at the EP frequency of the dynamic matrix, and the energy is dynamically balanced, i.e., the loss and gain mechanisms perfectly cancel each other and the total mechanical energy is conserved. The discrete modeling approach helps understand the scattering properties analytically, and can be easily adapted for various tuning possibilities. On the other hand, if the sample possesses only the combined parity-time ($\mathcal{PT}$) symmetry, then the \emph{scattering matrix} can exhibit EPs as well. To demonstrate this, we show the scattering response of a~$\mathcal{PT}$ symmetric system near the EPs of its scattering matrix spectrum in~\cref{SMEP}. Non-Hermitian Hamiltonians with~$\mathcal{PT}$ symmetry were first discussed by Bender and Boettcher~\cite{Bender1998a}. A more general category of pseudo-Hermitian systems in elastodynamics is investigated by Psiachos and Sigalas~\cite{Psiachos2018a,Psiachos2019}. The asymmetric scattering responses of~$\mathcal{PT}$ symmetric media have been investigated in electronics~\cite{Sakhdari2018}, photonics~\cite{Ge2012}, and acoustics~\cite{Zhu2014, Shi2016, Fleury2016, Achilleos2017, Fleury2014}. These studies have shown that the EPs of the scattering matrix correspond to unidirectional zero reflection and unitary transmission. Moreover, the EPs of the scattering matrix are associated with spontaneous symmetry breaking and mark the spectral boundaries between~$\mathcal{PT}$ broken and unbroken phases. The majority of these studies are performed experimentally or numerically using simulations, which can be time-consuming or computationally expensive. In contrast, using the analytical formulas derived in~\cref{SecTSM} and the appendices, designing these novel artificial media and tuning towards desired target frequencies can be achieved relatively easily.
The feasibility of implementing gain units (represented by complex-valued springs in this work), a necessary ingredient of this study, is a major challenge to the experimental realization of such EP-based designs. To this end, a number of studies have demonstrated implementation of gain units, realized by electronic devices~\cite{Popa2014,Fleury2016} or piezoelectric semiconductors~\cite{Christensen2016}. In a recent study, Mokhtari \textit{et~al.}~\cite{Mokhtari2020a} show the possibility of accessing EPs with fully elastic PCs in the real frequency and wave vector domain. With such a 2D scattering setup, it is then possible to take advantage of the spectral properties of EPs to design novel sensing devices~\cite{wang2021}. The structure of this paper is as follows. In \cref{DMEP}, we first study the dynamic matrix of a discrete resonator system and show the relationship between coupling strength and level repulsion. This is followed by an analytical representation and detailed discussion of the eigenfrequency and eigenvector behaviors in the vicinity of EPs. Then it is shown that by adjusting the stiffness parameters, EPs can be moved to the real frequency and wavenumber domain. In \cref{SecTSM}, the transfer and scattering matrices for discrete MM arrays are presented, along with a discussion of the restrictions that reciprocity and fundamental symmetries enforce on these matrices. In \cref{sec:scatteringexamples}, we examine the scattering behaviors of a parity symmetric system and a parity-time symmetric system, which feature bi-directional transparency and one-way reflection near EPs of the dynamic and scattering matrices, respectively. The paper is concluded with a summary of important results and potential applications of EPs resulting from their influence on the physical response of mechanical metamaterials. The use of discrete mass-spring systems provides fundamental insights into MMs and can be easily adapted for various tuning possibilities.
\section{Exceptional points in the dynamic matrix eigenspectrum}\label{DMEP} \subsection{Level repulsion and coupling of DOFs} The 1D discrete periodic structure studied in this section is represented in \cref{fig:chain}. Each cell consists of the main ``crystal chain mass''~$M^c$ and the ``internal resonator mass''~$M^i$. In each RUC, a linear spring element with stiffness coefficient~$\beta^i$ connects the resonator to the crystal. There is also a spring with stiffness~$\beta^c$ between every two neighboring crystal masses. To show the level repulsion and EPs with a simple setup, we consider a longitudinal wave propagating along the chain. In this analysis, the crystal masses are constrained to have a single horizontal degree of freedom (DOF). One can assume that the structure is confined in a tube parallel to the~$x$ axis (with frictionless surfaces). It is also assumed that the rotational inertias of the masses are high enough to allow one to ignore the rotational DOFs. The resonator is also constrained to have only one independent DOF,~$u^i_n$, which makes an angle~$\theta$ with the horizontal direction and the main chain mass DOF,~$u^c_n$. The coupling constant~$\kappa=\cos\theta$ is defined where the angle~$\theta$ is in the range~$[-\pi/2, \pi/2]$. For other values of $\theta$, a simple change of sign in either of the two DOFs will render the following mathematical description identically applicable. When~$\kappa=1$, this model is identical to the 1D lattice-with-resonator model commonly found in the literature \cite{Amirkhizi2018d, Hussein2014a}. \begin{figure}[!ht] \begin{subfigure}[b]{0.5\linewidth} \centering\includegraphics[height=120pt]{{figures/chain1}} \caption{\label{fig:chain}} \end{subfigure}% \begin{subfigure}[b]{0.5\linewidth} \centering\includegraphics[height=150pt]{figures/coup} \caption{\label{fig:cbandplots}} \end{subfigure} \caption{(\subref{fig:chain}) Schematic drawing of the studied 1D infinitely periodic resonator array.
(\subref{fig:cbandplots})~Longitudinal wave band structure (real domain) for different values of coupling constant~$\kappa=\cos\theta$. In the example here, all parameters (mass, stiffness) are normalized to one. \label{fig:chainlr}} \end{figure} For the~$n$-th RUC the DOFs that satisfy Bloch-Floquet periodicity can be written as: \begin{align}\label{eq:uc}u^c_{n}&=u^c\ \exp\left[\mathrm{i}(\omega t-nQ)\right],\\ \label{eq:ui}u^i_{n}&=u^i\ \exp\left[\mathrm{i}(\omega t-nQ)\right], \end{align} where~$\omega$ is angular frequency, and~$n$ is an integer representing cell location along the chain. The dimensionless wavenumber~$Q$ represents the phase advance between neighbor cells, and it can be calculated as the product of wavevector component and cell length. In the eigenfrequency study,~$Q$ is usually a prescribed parameter sweeping the Brillouin zone. The complex amplitudes of displacements in harmonic motion~$u^c$ and~$u^i$ are to be determined. To do this, the equations of motion can be written for the~$n$-th cell and resonator (see Appendix~\cref{sec:appEOM}): \begin{align}\label{eq:1dc} M^c\frac{\partial^2u^c_n}{\partial t^2}&=\beta^c(u^c_{n+1}-2u^c_n+u^c_{n-1})+\kappa\beta^i(u^i_n-\kappa u^c_n)-(1-\kappa^2) M^i\frac{\partial^2u^c_n}{\partial t^2}, \\ \label{eq:1di} M^i\frac{\partial^2u^i_n}{\partial t^2}&=\beta^i(\kappa u^c_n-u^i_n), \end{align} rendering, for each value of $Q$, an eigenvalue problem: \begin{align}\label{eq:ruc} [\vect{D}-\lambda \vect{I}]\vect{U}^R&=\vect{0},\\ \vect{U}^{L\dagger}[\vect{D}-\lambda \vect{I}]&=\vect{0}\label{eq:rucleft}, \end{align} where~$\lambda=\omega^2$ is the eigenvalue of the dynamic matrix~$\vect{D}=\vect{M}^{-1}\vect{K}$, $\vect{I}$ is the~$2\times2$ identity matrix,~$\vect{U}^R=[u^c, \ u^i]^\top$ is the right eigenvector,~$\vect{U}^L$ is the left eigenvector, and~$\dagger$ denotes complex conjugate transpose.~$\vect{K}$ and~$\vect{M}$ are the stiffness and mass matrices of the cell, respectively: 
\begin{equation}\label{eq:Kmatrix} \vect{K}=\begin{pmatrix} 4\beta^c \sin^2\dfrac{Q}{2}+\kappa^2\beta^i &-\kappa\beta^i\\ -\kappa\beta^i&\beta^i \end{pmatrix}, \end{equation} \begin{equation}\label{eq:Mmatrix} \vect{M}=\begin{pmatrix} M^{ci} &0\\ 0&M^i \end{pmatrix}. \end{equation} The coupling constant~$\kappa=\cos\theta$ quantifies the interaction strength between the internal resonator and the main crystal chain, and \begin{equation} M^{ci}=M^c+(1-\kappa^2)M^i \end{equation} is defined as the effective mass associated with~$u^c_n$ DOF dynamics. Solving the characteristic equation~$|\vect{D}-\omega^2\vect{I}|=0$ yields the frequency band structure, a representation of which is shown in \cref{fig:cbandplots}. In the shown example, all the stiffness and mass values are taken as 1, but the coordinates of a number of important points and other geometrical features can be calculated explicitly in terms of the model parameters. As the coupling constant~$\kappa$ approaches 0, level repulsion (LR) becomes more evident and the dispersion curves appear to approach a crossing point. Only when~$\kappa=0$ do the resonator and the cell become fully decoupled, leading to an actual crossing of the branches. Then a topological transition occurs in the band structure as the frequency gap disappears. In such a case, the right eigenvectors on the two crossing dispersion curves will stay linearly independent. \newline \subsection{Exceptional points in the complex band structure} For a 2-DOF system like the one shown in \cref{fig:chain}, there are normally two eigenfrequencies for each value of the wavenumber. There also exist branch points (BP), potentially in the complex domain, where two frequency solutions match (degeneracy or frequency coalescence).
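As a numerical illustration of the dispersion calculation above, the band structure can be obtained by sweeping $Q$ and solving the eigenvalue problem \cref{eq:ruc} with the matrices \cref{eq:Kmatrix} and \cref{eq:Mmatrix}. The following is a sketch, not the authors' code; unit masses and stiffnesses are assumed, as in the normalized example of \cref{fig:cbandplots}:

```python
import numpy as np

def eigenfrequencies(Q, kappa, Mc=1.0, Mi=1.0, bc=1.0, bi=1.0):
    """Solve |D - omega^2 I| = 0 for one phase advance Q (real or
    complex), with D = M^{-1} K built from Eqs. (K)/(M) above."""
    Mci = Mc + (1.0 - kappa**2) * Mi               # effective mass
    K = np.array([[4*bc*np.sin(Q/2.0)**2 + kappa**2*bi, -kappa*bi],
                  [-kappa*bi, bi]], dtype=complex)
    M = np.diag([Mci, Mi]).astype(complex)
    lam = np.linalg.eigvals(np.linalg.solve(M, K))  # lambda = omega^2
    return np.sqrt(lam)

# band structure over half the Brillouin zone for a given coupling constant
Qs = np.linspace(0.0, np.pi, 201)
bands = np.array([np.sort_complex(eigenfrequencies(Q, kappa=0.5))
                  for Q in Qs])
```

For $\kappa=0$ the two returned branches are fully decoupled, while any non-zero $\kappa$ opens the gap discussed above.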
After solving for the frequency as a function of~$Q$ analytically, the location of the branch points can be obtained: \begin{align}\label{eq:qep} Q_\mathrm{BP}&=2\arcsin\left(\dfrac{\omega^i}{2\omega^c}\left(1+ \mathrm{i} \kappa\sqrt{\dfrac{M^i}{M^{ci}}}\right)\right),\\ \label{eq:wep}\omega_\mathrm{BP}&=\omega^i\sqrt{1+ \mathrm{i} \kappa\sqrt{\dfrac{M^i}{M^{ci}}}}. \end{align} The shown solution is in the region where $\Re Q\geq0$ and~$\Im Q \geq 0$. Here the symbols~$\Re,\Im$ represent the real and imaginary parts of a complex quantity. A non-zero coupling~$\kappa$ forces the branch point to be complex-valued for real parameters in the cell, and there are in general eight possible solutions, namely~$( \pm Q_{\mathrm{BP}}, \pm \omega_{\mathrm{BP}})$ and $( \pm Q_{\mathrm{BP}}^*, \pm \omega_{\mathrm{BP}}^*)$. At the branch point \cref{eq:ruc} becomes: \begin{equation} \label{eq:dep} \frac{\kappa \beta^i}{M^{ci}M^i} \begin{pmatrix} \mathrm{i}\sqrt{M^{ci} M^i} & -M^{i} \\ -M^{ci} &-\mathrm{i}\sqrt{M^{ci}M^i} \end{pmatrix} \begin{pmatrix} u^c\\u^i\end{pmatrix}=\vect{0}. \end{equation} When~$\kappa=0$ the~$2\times2$ matrix in \cref{eq:dep} becomes a zero matrix (which means any arbitrary vector in~$\mathbb{C}^2$ is an eigenvector) and the branch point is exactly the crossing point shown in \cref{fig:cbandplots}, residing in the real domain. The two frequency solutions overlap each other while two linearly independent eigenvectors exist. Such a case is referred to as a degeneracy. For non-zero~$\kappa$ values, the branch point is referred to as an exceptional point~\cite{Heiss1990} (EP), which is usually in the complex parameter domain. As the~$\kappa$ value gets closer to zero, avoided crossing/level repulsion becomes more apparent in the real domain, and the EP location has smaller imaginary parts.
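The closed forms \cref{eq:qep} and \cref{eq:wep} can be checked numerically. The sketch below assumes unit masses and stiffnesses with $\omega^i=\sqrt{\beta^i/M^i}$ and $\omega^c=\sqrt{\beta^c/M^{ci}}$ (an assumption about symbols defined elsewhere in the paper, consistent with the EP location quoted later for $\kappa=0.5$), and confirms that the two eigenvalues of $\vect{D}$ coalesce at $Q_\mathrm{BP}$:

```python
import numpy as np

# Numerical check of the branch-point formulas (unit parameters, kappa = 0.5).
kappa, Mc, Mi, bc, bi = 0.5, 1.0, 1.0, 1.0, 1.0
Mci = Mc + (1.0 - kappa**2) * Mi
wi, wc = np.sqrt(bi / Mi), np.sqrt(bc / Mci)    # assumed definitions

z = 1.0 + 1j * kappa * np.sqrt(Mi / Mci)
Q_bp = 2.0 * np.arcsin(wi / (2.0 * wc) * z)     # Eq. (Q_BP), complex arcsin
w_bp = wi * np.sqrt(z)                          # Eq. (omega_BP)

# Build D(Q_bp) and confirm the eigenvalue coalescence at omega_BP^2
K = np.array([[4*bc*np.sin(Q_bp/2)**2 + kappa**2*bi, -kappa*bi],
              [-kappa*bi, bi]])
D = np.linalg.solve(np.diag([Mci, Mi]).astype(complex), K)
lam = np.linalg.eigvals(D)                      # two nearly equal values
```

At the computed $Q_\mathrm{BP}$, the matrix $\vect{D}$ is defective: both numerical eigenvalues agree with $\omega_\mathrm{BP}^2$ to machine-level precision.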
At the EP there exists only one non-trivial right eigenvector: \begin{equation}\label{eq:uep} \vect{U}^R_\mathrm{EP}=\begin{pmatrix}-\mathrm{i} \\ \sqrt{\dfrac{M^{ci}}{M^i}}\end{pmatrix}.\end{equation} The corresponding left eigenvector of matrix~$\vect{D}$ at the EP is \begin{equation}\label{eq:ulep} \vect{U}^{L\dagger}_\mathrm{EP}=\begin{pmatrix} -\mathrm{i}, & \sqrt{\dfrac{M^{i}}{M^{ci}}}\end{pmatrix}. \end{equation} Here we show an example of the complex band structure for~$\kappa=0.5$, allowing the wavenumber~$Q$ to be complex. All cell parameters (mass, stiffness) are set to one for the sake of demonstration. The calculated complex frequencies and right eigenvectors are shown only in the region where~$\Re Q\geq0, \Im Q \geq 0$, and $\Re \omega \geq 0$. \Cref{fig:rew,fig:imw} show the real and imaginary parts of the band structure, respectively. The components of the right eigenvector are shown in~\cref{fig:absuc,fig:absui}. For this configuration the EP is located at~$Q_\mathrm{EP}=1.36218+\mathrm{i}0.63297$ and~$\omega_\mathrm{EP}=1.01711+\mathrm{i}0.18580$, as represented by the solid black dot. The two modes are separated based on the continuity of branches in any complex $Q$ disk around the origin that does not include $Q_{\mathrm{EP}}$ and are shown in different colors. The corresponding right eigenvector components are shown in \cref{fig:absuc,fig:absui}. The complex eigenvectors associated with each mode are normalized by a complex factor in such a way that $\norm{\vect{U}^R}=\sqrt{u^{c*} u^c+u^{i*} u^i}=1$ and~$u^i\in \mathbb{R}$. It is important to normalize both the amplitude and the complex phase of the eigenvector in such a way to ensure consistency throughout the analysis. To keep~$u^i$ on the real axis, both components of the eigenvector are rotated together in the complex plane, keeping their ratio unchanged.
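The amplitude and phase normalization just described can be sketched as follows (an illustrative helper, not from the paper):

```python
import numpy as np

def normalize_mode(U):
    """Normalize a right eigenvector U = (u^c, u^i) so that ||U|| = 1 and
    u^i is real and non-negative.  Both components are rotated by the same
    complex phase, so the ratio u^c/u^i is unchanged."""
    U = np.asarray(U, dtype=complex)
    U = U / np.linalg.norm(U)                    # amplitude normalization
    if U[1] != 0:
        U = U * np.exp(-1j * np.angle(U[1]))     # common phase rotation
    return U
```

Applied to a branch of eigenvectors computed over a grid of $Q$ values, this convention makes the components vary continuously away from $Q_\mathrm{EP}$, which is what allows the surfaces in the figures to be plotted without phase jumps.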
The major benefit of phase normalization is that the eigenvector components can be shown in a continuous manner (see \crefrange{fig:absuc}{fig:absui}) even in the vicinity of the EPs, thus making it easier to understand the mode shape behavior. It is clear that both the frequencies and the eigenvectors form Riemann sheet structures in the vicinity of the EP, and all the complex quantities can be made to behave continuously (when properly normalized) with respect to~$\Re Q$ and~$\Im Q$ in any simply connected neighborhood that does not include $Q_\mathrm{EP}$. For the sake of the presentation quality of the 3D figures, we show only four representative cuts of these Riemann sheets at~$\Im Q=0,\ 0.3,\ \Im Q_\mathrm{EP},$ and 1. When~$\Im Q=0$ the frequencies are real. The two branches are clearly distinguished by considering $\Re \omega$ and $\angle u^c$. As~$\Re Q$ increases, the amplitudes of the eigenvector components increase or decrease monotonically due to local resonance. There is an inverse correlation between the coupling strength (quantified as~$\kappa$ in this case) and the abruptness of such changes in amplitude. Weaker coupling (smaller but non-zero~$\kappa$) results in more evident frequency level repulsion and sharper changes in displacement amplitudes. The exact~$\pi$ difference between the two lines in \cref{fig:arguc} indicates a sign difference between the~$u^c$ of the acoustic and optical branches, with~$u^i$ normalized to be real and positive. A detailed discussion of the eigenfrequency and mode shape behaviors in the LR region in the real~$Q$ domain can be found in our previous work~\cite{Amirkhizi2018d}. As~$\Im Q$ increases from 0, all the presented quantities show similar trends in terms of continuity. The two branches remain continuous as long as~$\Im Q<\Im Q_\mathrm{EP}$. When~$\Im Q=\Im Q_\mathrm{EP}$, all these quantities coalesce at the EP.
Continuity with respect to~$\Re Q$ cannot be used as the basis of branch selection at this point due to the coalescence. In other words, branch sorting becomes ambiguous at~$\Im Q=\Im Q_\mathrm{EP}$. If one seeks to extend the continuous branches (for $\Im Q < \Im Q_\mathrm{EP}$) beyond $\Im Q_\mathrm{EP}$ by maintaining continuity along $\Im Q$, the resulting choices will be discontinuous along $\Re Q$ when $\Im Q > \Im Q_\mathrm{EP}$. If one wishes to maintain the continuity along $\Re Q$, the branches will be discontinuous along $\Im Q$ when $\Re Q > \Re Q_\mathrm{EP}$; see for example the slice at~$\Im Q=1$ in \crefrange{fig:rew}{fig:absui}. In general, it can be seen that both the eigenvalues and the eigenvectors maintain analyticity, except at the EP where the Taylor series expansion fails. \begin{figure} \begin{subfigure}[b]{0.5\linewidth} \centering\includegraphics[height=159pt]{figures/rew} \caption{\label{fig:rew}} \end{subfigure}% \begin{subfigure}[b]{0.5\linewidth} \centering\includegraphics[height=159pt]{figures/imw} \caption{\label{fig:imw}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering\includegraphics[height=147pt]{figures/absuc} \caption{\label{fig:absuc}} \end{subfigure}% \begin{subfigure}[b]{0.5\linewidth} \centering\includegraphics[height=147pt]{figures/arguc} \caption{\label{fig:arguc}} \end{subfigure} \begin{subfigure}[b]{0.5\linewidth} \centering\includegraphics[height=147pt]{figures/absui} \caption{\label{fig:absui}} \end{subfigure}% \begin{subfigure}[b]{0.5\linewidth} \centering\includegraphics[height=147pt]{figures/dp} \caption{\label{fig:dp}} \end{subfigure} \caption{Four representative cross sectional cuts (at four different~$\Im Q$ values) of the Riemann sheets showing (\subref{fig:rew})~real and (\subref{fig:imw})~imaginary parts of the frequency, (\subref{fig:absuc})~amplitude and (\subref{fig:arguc})~complex argument of main crystal chain DOF~$u^c$, and (\subref{fig:absui})~resonator DOF~$u^i$ (when eigenvectors are
normalized for it to be real). The two modes are sorted based on branch continuity along $\Re Q$. (\subref{fig:dp}) Inner product of left and right eigenvectors which belong to same ($\alpha=\beta$) and different ($\alpha\neq \beta$) modes. \label{fig:cband}} \end{figure} The calculated inner products of the normalized left and right eigenvectors are shown in \cref{fig:dp}, where it can be seen that the left and right eigenvectors corresponding to the different eigenfrequencies are orthogonal to each other, except at the EP. The bi-orthogonality relation of the non-Hermitian system reads: \begin{equation} \langle\vect{U}^L_\alpha,\vect{U}^R_\beta\rangle=\begin{cases} 0 & \text{at EP;}\\ \delta_{\alpha\beta} & \text{elsewhere;} \end{cases} \end{equation} where~$\delta_{\alpha\beta}$ is the Kronecker delta, and subscripts~$\alpha,\beta=1,2$ denote the first or second mode. The self-orthogonality~\cite{NimrodMo} at the EP implies a defect of the Hilbert space~\cite{Kato,Rotter2003}. \subsection{Tuning an EP into real frequency and wavenumber domain} All previous results are based on an EP with complex~$Q$ and~$\omega$ values. It is possible to look for EPs with real~$Q$ using complex stiffness~$\beta$. For physical realization of such systems, see \cref{Intro}. Associated Riemann sheets for the band structure and eigenvectors may be found following the procedure discussed earlier. A similar analysis of bifurcation in the vicinity of EPs in such systems with damping (complex stiffness) can be found in Ref.~\cite{Maznev2018a}, where the EP has complex frequency and real wavenumber. If one wishes to study the scattering properties around an EP, one may instead design a system including an EP that has both real frequency and real $Q$. Such a system may be interrogated experimentally through scattering of harmonic waves off a finite specimen.
Given the EP location in \cref{eq:qep} and \cref{eq:wep}, it is possible to make both~$Q_\mathrm{EP}$ and~$\omega_\mathrm{EP}$ real if the complex arguments of the stiffness parameters are such that: \begin{equation} \angle \beta^c=-\angle \beta^i=\angle \left[1+ \mathrm{i}\kappa \sqrt{\frac{M^i}{M^{ci}}}\right].\label{eq:epbc} \end{equation} It is inevitable that such a process will make one stiffness parameter lossy while the other requires gain (i.e., has a negative imaginary part), which is feasible as discussed in the literature~\cite{Popa2014,Fleury2016,Christensen2016}. With these adjustments in stiffness values, \crefrange{eq:qep}{eq:ulep} are still valid, and the eigenfrequency and eigenvector behaviors are still qualitatively the same as those shown in \cref{fig:cband}, except that the location of the EP is now adjustable. The behavior of such structures will be further studied in \cref{sec:scatteringexamples}. The measurable response of the micro-structured media is, in fact, not just based on their dispersion surfaces, but rather more thoroughly understood from the scattering off finite specimens. In the following sections, the influence of the EPs of the band structure, and independently, those of the scattering matrix, on the overall response of finite specimens is studied. \newline \section{Transfer and scattering matrices}\label{SecTSM} The analysis of steady state waves (real frequency) traveling in an infinite homogeneous domain and interacting with a finite sized specimen of a 1D MM array (with a finite number of unit cells) can be solved easily using the transfer matrix (TM) of such structures. This is different from the eigenfrequency analysis, which analyzes an infinite periodic array of unit cells, though the eigenfrequency band structure is also essentially associated with the eigenvalues of the transfer matrix.
In either case, the TM is the matrix form of the linear relationship between the physical states on the left and right boundaries of a control volume or the unit cell. To utilize the transfer matrix method, the unit cell is selected as the part in the dashed rectangle shown in \cref{fig:TMRUC}. It is selected in such a way that the springs connecting crystals are cut in the middle. To be more general, the springs at the left and the right sides of a crystal chain atom are allowed to be different, as denoted by stiffnesses~$\beta^p$ and~$\beta^q$ in \cref{fig:TMRUC}. \begin{figure}[!h] \centering\includegraphics[width=240pt]{figures/tmruc} \caption{\label{fig:TMRUC} Control volume selection for transfer matrix analysis. Note that the cell is no longer inherently symmetric. } \end{figure} Therefore, the left and right halves of the main crystal chain springs are $2\beta^p$ and $2\beta^q$. This setup allows the analysis of non-periodic/permuted MM samples. The derivation of the TM can be found in Appendix~\cref{appTM}. Applying Bloch-Floquet periodicity in an infinitely periodic array, the governing equation reads: \begin{equation} \label{eq:cellTMeig} [\mathsf{T}^{cell}-\mathrm{e}^{-\mathrm{i}Q}\vect{I}] \begin{pmatrix} v^l\\N^l\end{pmatrix}=\vect{0}, \end{equation} where~$\mathsf{T}^{cell}$ is the transfer matrix of a unit cell,~$Q$ is the phase advance, and~$v^l,N^l$ are the velocity and internal axial force at the left boundary. For any desired frequency, solving the characteristic equation of \cref{eq:cellTMeig} for the normalized wavenumber $Q$ yields the band structure. This is complementary to eigenfrequency calculation, where one would solve for frequencies given a prescribed normalized wavenumber (phase advance), $Q$. \begin{figure}[!h] \centering\includegraphics[width=300pt]{figures/barr} \caption{\label{fig:bar} Scattering set up of~$J$ cells between two semi-infinite domains. 
} \end{figure} For finite structures, once the TM of one arbitrary cell or a number of them is obtained, one can retrieve the scattering coefficients of the model as shown in \cref{fig:bar}. In the scattering experiment, the sample is set between two homogeneous semi-infinite domains and contains~$J$ cells. The cells of the sample are in general allowed to be different. Here the semi-infinite domains, without loss of generality, are modeled as circular bars. The Young's modulus, mass density, and cross sectional radius of the two identical cylindrical bars are denoted by~$E_0$,~$\rho_0$, and~$r_0$, respectively. We generally equate the measurement/de-embedding locations with the boundaries of the sample, i.e., $x^a = x^l$ and $x^b = x^r$. Note that the superscripts~$l$ and~$r$ here represent locations with respect to the entire sample and their distinction from the cell faces earlier should be clear from the context. The displacements at these locations are assumed to have the form~$A^{(a,b)(+,-)} e^{\mathrm{i}\omega t}$ (harmonic waves), where the superscripts~$+$ and~$-$ represent the waves propagating in positive and negative~$x$ directions in the bars (in the sense of phase advance or flux direction). The scattering matrix \begin{equation}\label{eq:sm} \Tilde{\SM}= \begin{pmatrix} \mathsf{S}_{ba} & \mathsf{S}_{bb} \\ \mathsf{S}_{aa} & \mathsf{S}_{ab} \end{pmatrix} \end{equation} relates the complex displacement or velocity amplitudes of outgoing ($A^{b+}$ and~$A^{a-}$) and incoming ($A^{a+}$ and~$A^{b-}$) waves at measurement locations~$x^l$ and~$x^r$: \begin{equation}\label{eq:asa} \vect{A}_{out}=\begin{pmatrix} A^{b+}\\A^{a-}\end{pmatrix}= \Tilde{\SM} \begin{pmatrix} A^{a+}\\A^{b-} \end{pmatrix}=\Tilde{\SM}\vect{A}_{in}. \end{equation} The derivation of the scattering matrix can be found in Appendix~\cref{appSM}. 
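As a numerical sanity check of this construction, the scattering coefficients can be computed from a transfer matrix using the closed-form expressions of Appendix C, and the eigenvalues of the resulting $\Tilde{\SM}$ compared against the closed form quoted below. The transfer-matrix entries and impedances in this sketch are arbitrary illustrative values (not taken from the paper's examples), with $\abs{\mathsf{T}}=1$ and $Z^a=Z^b$ enforced:

```python
import cmath

# Illustrative (not from the paper) unimodular transfer matrix and impedances.
T11, T12, T21 = 0.8 + 0.1j, 0.05j, 2.0j
T22 = (1 + T12 * T21) / T11        # enforce |T| = T11*T22 - T12*T21 = 1
Za = Zb = 1.0                      # identical bars on both sides

# Scattering coefficients (closed forms of Appendix C).
detT = T11 * T22 - T12 * T21
Delta = T21 + T11 * Za + T22 * Zb + T12 * Za * Zb
Saa = (-T21 + T22 * Zb - T11 * Za + T12 * Za * Zb) / Delta
Sbb = (-T21 + T11 * Za - T22 * Zb + T12 * Za * Zb) / Delta
Sba = 2 * Zb * detT / Delta
Sab = 2 * Za / Delta

# With |T| = 1 and Za = Zb, the two transmissions coincide (reciprocity).
assert abs(Sab - Sba) < 1e-12

# Eigenvalues of S = [[Sba, Sbb], [Saa, Sab]] from the 2x2 characteristic
# polynomial versus the closed form sigma = Sab +/- sqrt(Saa * Sbb).
tr, dS = Sba + Sab, Sba * Sab - Sbb * Saa
lam = [(tr + s * cmath.sqrt(tr * tr - 4 * dS)) / 2 for s in (+1, -1)]
sig = [Sab + s * cmath.sqrt(Saa * Sbb) for s in (+1, -1)]
```

Because the transmissions coincide, the direct $2\times2$ diagonalization reproduces $\sigma_{1,2}=\mathsf{S}_{ab}\pm\sqrt{\mathsf{S}_{aa}\mathsf{S}_{bb}}$.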
If one considers~$\abs{\mathsf{T}}=1$ and~$Z^a=Z^b$, then~\crefrange{eq:saa}{eq:dels} will be simplified and identical to the scattering coefficients derived in the literature~\cite{Amirkhizi2017}, and the scattering matrix $\Tilde{\SM}$ given by \cref{eq:sm} has the eigenvalues: \begin{equation}\label{eq:seiga} \sigma_{1,2}=\mathsf{S}_{ab}\pm\sqrt{\mathsf{S}_{aa}\mathsf{S}_{bb}}, \end{equation} since~$\mathsf{S}_{ab}=\mathsf{S}_{ba}$. The eigenvectors are \begin{equation}\label{eq:seige} \vect{\mu}_{1,2}=\begin{pmatrix}\pm\sqrt{\mathsf{S}_{bb}}\\\sqrt{\mathsf{S}_{aa}}\end{pmatrix}. \end{equation} Coalescence of the $\Tilde{\SM}$ eigenspectrum occurs if and only if one of the reflection coefficients becomes zero. An EP of the scattering matrix is thus related to the one-way reflection phenomenon. \subsection{Effect of reciprocity}\label{sec:recipeffects} Reciprocity is commonly considered~\cite{Cebrecos2019a, Horsley2014, Fleury2015a, Deak2012} as a property observed in scattering measurements, i.e., the equivalence of transmission coefficients when a source and a detector exchange their positions. It is shown in Ref.~\cite{Alizadeh2021} that elastodynamic reciprocity imposes certain restrictions on the constitutive tensors of layered media. Here, we consider reciprocity as a fundamental property of linear elastic structures (and a broad class of linear viscoelastic systems, e.g., those that can be represented by simple networks of two-force spring or dashpot elements, or continuum elements with isotropic viscoelasticity), described by Betti's reciprocity theorem. It can be seen from~\cref{eq:TMRUC} that this transfer matrix has a determinant of 1, i.e.,~$\abs{\mathsf{T}^{cell}} = 1$. The unimodularity of the TM is equivalent to the reciprocity of the 1D medium. It is worthwhile to study the proof and understand the assumptions that are sometimes implicitly included. Consider an arbitrary control volume in a 1D linear time-invariant system. 
Assuming absence of body forces, Betti's reciprocity theorem in frequency domain states that: \begin{equation}\label{eq:betti} F^l_\alpha u^l_\beta + F^r_\alpha u^r_\beta = F^l_\beta u^l_\alpha + F^r_\beta u^r_\alpha, \end{equation} where superscripts $l$ and $r$ represent the left and right boundaries of the control volume, subscripts $\alpha$ and $\beta$ represent two different states, and $u$ and $F$ represent displacement and applied force on the boundaries. The axial traction forces (positive in the positive $x$ direction) are $F^l=-N^l$ and $F^r=N^r$, and $N^{l,r}$ are assumed to be tensile positive. The theorem is generally proved in elastic systems based on the existence of the strain energy density function and the subsequent major symmetry of the elasticity tensor~\cite{Achenbach2006}. In linear viscoelastic systems, isotropic material response will also allow for a proof in frequency domain. Analytical considerations have been used to address this in viscoelastic materials~\cite{Rogers1963,Day1971,Matarazzo2001}. For the systems studied here (among a large class of discrete structures), the reciprocity can be proven explicitly. The general proof is omitted, but we show that the reciprocity of the systems considered here is equivalent to $\abs{\mathsf{T}} = 1$. Assuming harmonic velocities $v^{l,r}=\mathrm{i} \omega u^{l,r}$,~\cref{eq:betti} becomes: \begin{equation}\label{eq:reci} v^r_\alpha N^r_\beta - v^r_\beta N^r_\alpha = v^l_\alpha N^l_\beta - v^l_\beta N^l_\alpha~. \end{equation} Define matrix $\vect{J}$ as \begin{equation}\vect{J}=\begin{pmatrix} 0 &1\\ -1&0 \end{pmatrix}~, \end{equation} and \begin{equation}\vect{\psi}=\begin{pmatrix} v\\ N \end{pmatrix}~. \end{equation} Then the reciprocity condition \cref{eq:reci} can be written as \begin{equation}\label{eq:reci2} (\mathsf{T} \vect{\psi}^l_\alpha)^\top \vect{J} (\mathsf{T} \vect{\psi}^l_\beta)=(\vect{\psi}^l_\alpha)^\top \vect{J} \vect{\psi}^l_\beta~. \end{equation} Since the L.H.S. 
of \cref{eq:reci2} is identical to $(\vect{\psi}^l_\alpha)^\top (\mathsf{T}^\top \vect{J}\mathsf{T}) \vect{\psi}^l_\beta$, the structure is reciprocal if and only if the TM is symplectic, i.e., \begin{equation}\label{eq:symp} \mathsf{T}^\top\vect{J}\mathsf{T}=\vect{J}, \end{equation} which is equivalent to $\abs{\mathsf{T}}=1$ for $2\times2$ matrices. The transfer matrix of any array of cells may be constructed by multiplying their individual transfer matrices in reverse order, since $\vect{\psi}^r$ of one cell is the same as $\vect{\psi}^l$ of the cell to its right. Therefore, the transfer matrix of any array of such cells will also be reciprocal and have a determinant of 1. This is true regardless of whether the parity symmetry (i.e., $\beta^p=\beta^q$) or time-reversal symmetry (i.e., $\{\beta^p,\ \beta^q\}\subset\mathbb{R}$) of any of the cells is broken. Since $\abs{\mathsf{T}} = 1$, the two eigenvalues of~$\mathsf{T}$ are inverses of each other, which yields~$Q^-(\omega) = -Q^+(\omega)$, where the superscripts~$+$ and~$-$ represent up to two possible solutions, presumably forward and backward traveling waves in the sense of phase advance or power flux. Note that at least for 1D media, either $\mathcal{P}$ or $\mathcal{T}$ symmetry will inherently lead to reciprocity, but a system that only admits combined $\mathcal{PT}$ symmetry can potentially be non-reciprocal. \subsection{Symmetry considerations}\label{sec:symmetryeffects} The effect of symmetries of the sample is best represented in the transfer matrix formulation and, similar to the reciprocity consideration, it is independent of the environment. However, the effect on the scattering response will include the environment properties. The summary of symmetry restrictions is listed in~\cref{table:1}. Here, $$ \vect{F}=\begin{pmatrix} -1 & 0\\ 0&1 \end{pmatrix},~ \vect{P}=\begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix}. 
$$ The detailed proof is omitted, but can be derived following similar processes presented in the literature~\cite{Jin2016, Ge2012, Mostafazadeh2014}. Coalescence of the $\Tilde{\SM}$ eigenvectors and its inherent one-way reflection phenomenon cannot be achieved by a parity symmetric system (including the identical bars on either side) because~$\mathcal{P}$ symmetry simply imposes a strong condition that enforces equal reflections, i.e.,~$\mathsf{S}_{aa}=\mathsf{S}_{bb}$. A system that has time reversal~$\mathcal{T}$ symmetry does not support the coalescence of $\Tilde{\SM}$ eigenvectors either, because it leads to $|\mathsf{S}_{aa}|=|\mathsf{S}_{bb}|$, which again precludes one-way reflection. However, a~$\mathcal{PT}$ symmetric system that lacks individual $\mathcal{T}$ and $\mathcal{P}$ symmetries may demonstrate non-trivial coalescence of~$\Tilde{\SM}$ eigenvectors and the one-way reflection phenomenon. While~$\mathcal{PT}$ symmetry is usually studied with electromagnetic/optical setups~\cite{Hu2017b,Suneera2018}, the conclusions here apply as well. \begin{table}[h!] \centering \caption{Effects of symmetries on the transfer and scattering matrices. 
Notice that the~$\mathsf{T}$ restrictions of the~$\mathcal{P}$ symmetry implicitly lead to reciprocity through~$|\mathsf{T}|=1$ as well, which is why it is not shown in the table, but in the case of~$\mathcal{T}$ it is an additional conclusion.} \label{table:1} \begin{tabular}{lll} % \hline\noalign{\smallskip} % Symmetry & $\mathsf{T}$ restrictions & $\Tilde{\SM}$ restrictions \\ \noalign{\smallskip}\hline\noalign{\smallskip} $\mathcal{P}$ & $\mathsf{T}=\vect{F}\mathsf{T}^{-1}\vect{F}$ & $\Tilde{\SM}=\vect{P} \Tilde{\SM} \vect{P} $ \\ $\mathcal{T}$ & $ \mathsf{T}=\vect{F}\mathsf{T}^{*}\vect{F}$, $|\mathsf{T}|=1$ & $\Tilde{\SM}=\vect{P} (\Tilde{\SM}^{*})^{-1} \vect{P}$ \\ $\mathcal{PT}$ & $ \mathsf{T}=(\mathsf{T}^*)^{-1} $ & $\Tilde{\SM}=(\Tilde{\SM}^*)^{-1}$ \\ \noalign{\smallskip}\hline \end{tabular} \end{table} The summary here is applicable for 1D linear media. The excitation frequencies are prescribed to be real. Although this paper mainly uses discrete systems as examples, the conclusions here are independent of the discrete setup and are applicable for 1D continuum systems as well. One can use these relations to simplify the process and reduce the number of measurements needed in real scattering experiments~\cite{Aghighi2019,Abedi2020}. In the following, we choose~$Z^a=Z^b=Z_0=\pi r_0^2 \sqrt{E_0 \rho_0}$ so that we can focus on the symmetry properties of the MM samples only. \newline \section{Examples of scattering responses}\label{sec:scatteringexamples} In the following illustrative examples, the same setup as in \cref{SecTSM} is used, where the bars have Young's modulus~$E_0=\SI{69}{GPa}$, density~$\rho_0=\SI{2710}{kg/m^3}$, and radius~$r_0=\SI{1}{mm}$. The unit cell length is~$d=\SI{0.1}{m}$. Different cases are demonstrated by varying the number and properties of the unit cells. 
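Both the symplecticity condition of \cref{eq:symp} and the $\mathcal{P}$ row of \cref{table:1} can be verified numerically for a parity-symmetric cell, using the unit-cell transfer matrix of Appendix B with $\beta^p=\beta^q$ real and no internal resonator (so $K_T=M\omega^2$). The parameter values in this sketch are illustrative only, not taken from the examples that follow:

```python
# Checks: (i) T^T J T = J (symplecticity, equivalent to det T = 1) and
# (ii) the parity restriction T = F T^{-1} F from Table 1, for a
# parity-symmetric cell.  Parameter values are illustrative only.
beta = 10e3              # N/m, beta^p = beta^q (real)
M, w = 0.1, 300.0        # kg, rad/s
KT = M * w * w           # K_T reduces to M w^2 without a resonator

T = [[1 - KT / (2 * beta), 1j * w * (4 * beta - KT) / (4 * beta**2)],
     [1j * KT / w,          1 - KT / (2 * beta)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

J = [[0, 1], [-1, 0]]
F = [[-1, 0], [0, 1]]

detT = T[0][0] * T[1][1] - T[0][1] * T[1][0]      # equals 1 (reciprocity)
sympl = matmul(transpose(T), matmul(J, T))         # should reproduce J
Tinv = [[ T[1][1] / detT, -T[0][1] / detT],
        [-T[1][0] / detT,  T[0][0] / detT]]
FTF = matmul(F, matmul(Tinv, F))                   # should equal T
```

For a general non-symmetric cell, $\mathsf{T}^\top\vect{J}\mathsf{T}=\abs{\mathsf{T}}\,\vect{J}$, so the first check isolates the $\abs{\mathsf{T}}=1$ condition; the second holds only because $\mathsf{T}_{11}=\mathsf{T}_{22}$ here.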
\subsection{Scattering at the EP of dynamic matrix spectrum ($\mathcal{P}$ symmetric system) } \label{EGEP} \begin{figure}[!h] \begin{subfigure}[b]{0.4385\linewidth} \centering\includegraphics[height=140pt]{figures/EP_D_4_Am} \caption{\label{fig:d4am}} \end{subfigure}% \begin{subfigure}[b]{0.4385\linewidth} \centering\includegraphics[height=140pt]{figures/EP_D_4_Pha} \caption{\label{fig:d4ph}} \end{subfigure} \begin{subfigure}[b]{0.4385\linewidth} \centering\includegraphics[height=140pt]{figures/EP_D_4_En} \caption{\label{fig:d4en}} \end{subfigure}% \begin{subfigure}[b]{0.4385\linewidth} \centering\includegraphics[height=140pt]{figures/EP_D_4_sigma} \caption{\label{fig:d4sig}} \end{subfigure} \begin{subfigure}[b]{0.89\linewidth} \centering\includegraphics[height=140pt]{figures/EP_D_5_bs} \caption{\label{fig:d4bs}}% \end{subfigure} \caption{Scattering response of the sample with 4 cells. The exceptional point belongs to the eigenfrequency band structure and is moved to real frequency domain ($\omega_\mathrm{EP}\approx\SI{338.1}{rad/s}$) by using suitable loss and gain springs. The band structure shown in~(\subref{fig:d4bs}) is calculated using TMM. The scattering amplitudes, phases, and power fluxes (as well as net power loss) are shown in ~(\subref{fig:d4am}),~(\subref{fig:d4ph}) and~(\subref{fig:d4en}), respectively. The eigenvalues of~$\Tilde{\SM}$, denoted by $\sigma$, are shown in~(\subref{fig:d4sig}). \label{fig:D4EP}} \end{figure} In this section, we examine the scattering properties at the EP of the dynamic matrix eigenspectrum derived in~\cref{DMEP}. To study the scattering of steady state harmonic waves, the EP is located in the real domain. In this section the unit cell is set to be symmetric (i.e.,~$\beta^p=\beta^q=\beta^c$, see \cref{fig:TMRUC} and \cref{eq:TMRUC}). The main chain crystal and resonator masses are chosen to be equal~$M^c=M^i=\SI{0.1}{kg}$ and the coupling constant is selected as~$\kappa=0.5$. 
The stiffness values are~$\beta^p=\beta^q=\beta^{i*}\approx\SI{10+i3.78}{kN/m}$, based on \cref{eq:epbc}. The band structure of such a unit cell can be obtained using \cref{eq:cellTMeig}, and is shown in \cref{fig:d4bs}. The two solutions are denoted by~$Q^\pm$. The EP is calculated based on \crefrange{eq:qep}{eq:wep} and is located at~$\omega_\mathrm{EP}\approx \SI{338.1}{rad/s}$, and~$Q_\mathrm{EP}^{\pm}=\pm\pi/2$, which are both real due to the choice of $\beta$ values. As pointed out by Maznev~\cite{Maznev2018a}, no branch bifurcation can be observed due to the fact that the wavenumber~$Q$ is solved as a complex function of real frequency in the scattering analysis. This band structure is associated with an infinitely periodic array and is therefore agnostic to the number of cells in the scattering analysis. However, the actual scattering responses of such structures are affected by the number of unit cells. To demonstrate this, a sample consisting of~$J=4$ unit cells is analyzed. The amplitude, phase, and associated power flux scattering coefficients are shown in \crefrange{fig:d4am}{fig:d4en}. The eigenvalues~$\sigma$ of the scattering matrix are shown in \cref{fig:d4sig}. At the frequency of the EP, unitary transmission and zero reflection can be observed. The energy is dissipated at most frequencies due to the lossy nature of cell springs~$\beta^p$ and~$\beta^q$. The net absorption (loss) of energy per unit time is shown in \cref{fig:d4en}. At the EP frequency, the power generated by the gain in resonator springs compensates exactly for the loss from main chain springs. This fact can also be seen in the eigenvalue plot \cref{fig:d4sig}: only at the EP frequency do the two eigenvalues of~$\Tilde{\SM}$ reach the unit circle (shown as the dashed circle), indicating that the amplitudes of incoming and outgoing waves are equal. 
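The quoted stiffness value can be recovered directly from the phase condition \cref{eq:epbc}. The sketch below assumes $M^{ci}=M^c+(1-\kappa^2)M^i$, as follows from the equations of motion in Appendix A:

```python
import math

# Phase condition on beta^c for a real-valued EP (eq. epbc), using the
# example parameters kappa = 0.5, M^c = M^i = 0.1 kg.  The relation
# M^ci = M^c + (1 - kappa^2) M^i is taken from the Appendix A EOMs.
kappa, Mi, Mc = 0.5, 0.1, 0.1
Mci = Mc + (1 - kappa**2) * Mi
ang = math.atan(kappa * math.sqrt(Mi / Mci))   # required arg(beta^c)

beta_c = 10e3 * (1 + 1j * math.tan(ang))       # real part fixed at 10 kN/m
# The imaginary part lands near 3.78 kN/m, matching beta ~ (10 + i3.78) kN/m.
```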
In the cases studied here, the phases of the stiffness parameters follow the relation in \cref{eq:epbc}, while~$\Re \beta^i=\Re \beta^c$ and~$M^i=M^c$. Then based on \cref{eq:qep}, \cref{eq:wep} and \cref{eq:TMRUC} it can be found that at the EP the sample transfer matrix becomes \begin{equation} \mathsf{T}(\omega_\mathrm{EP})=(-\vect{I})^{J/2},\label{eq:TSEN} \end{equation} when~$J$ is even. Under such circumstances, the transmission coefficients~$\mathsf{S}_{ab}$ and~$\mathsf{S}_{ba}$ will have unitary amplitudes, and the reflection coefficients~$\mathsf{S}_{aa}$ and~$\mathsf{S}_{bb}$ will vanish. This bi-directional reflectionlessness at~$\omega_\mathrm{EP}$ is not affected by the outside material (assuming the two bars are identical), as all the impedance terms in the S-parameters will be canceled out. The frequency-dependent reflectionless scattering property can lead to the design of wave filtering devices that only allow waves with certain frequencies to transmit. According to \cref{eq:TSEN}, the sample becomes fully invisible at~$\omega_\mathrm{EP}$ if the cell number is~$J = 4, 8, 12, \cdots$, as if the two boundaries~$x^l$ and~$x^r$ are directly connected, irrespective of the outside bar material. However, for~$J=2,6,10,\cdots$ cells, an extra phase of~$\pi$ will be added to each scattering coefficient, with the amplitudes being the same as in the~$J = 4, 8, 12, \cdots$ cases. At the frequency of the dynamic matrix EP, the sample with even $J$ happens to satisfy an apparent overall~$\mathcal{T}$ symmetry condition for $\Tilde{\SM}$, with balanced energy gain and loss in the system. Multiple application scenarios could arise with such exotic properties of EPs. For example, the bi-directional reflectionless features of EPs can be used for acoustic camouflage. Since the reflected waves will be suppressed at the EPs' frequencies, a target object covered by a properly designed micro-structured medium will be undetectable by a sonar-based sensor. 
The scattering matrix has repeated eigenvalues at this frequency, but the two eigenvectors remain linearly independent and, not surprisingly, there is no reason for the scattering matrix to have an EP related to that of the dynamic matrix. The EP and coalescence of the scattering matrix eigenspectrum shall be analyzed next. \subsection{Scattering at the EP of scattering matrix spectrum ($\mathcal{PT}$ symmetric system)}\label{SMEP} The scattering matrix may have an EP only when the system exhibits $\mathcal{PT}$ symmetry but not individual $\mathcal{P}$ or $\mathcal{T}$ symmetries. Since the EP of the scattering matrix is generally unrelated to the eigenfrequency band structure of locally resonant systems, and to simplify further exposition, we remove the internal resonator so that~$M^i = 0$. Consider two spring constants~$\beta^q=\beta^{p*}$ with $\beta^p$ in the first quadrant (lossy). A cell with~$\beta^q$ on the right and~$\beta^p$ on the left and mass $M^g$ is referred to as~$g$. The transfer matrix of the~$g$ cell is: \begin{equation}\label{eq:TMRUCba} \mathsf{T}^{(g)}= \begin{pmatrix} 1-\dfrac{M^{g}\omega^2}{2\beta^q} & \mathrm{i}\omega\dfrac{2\beta^p+2\beta^q-M^{g}\omega^2}{4\beta^p\beta^q} \\ \mathrm{i}M^{g}\omega & 1-\dfrac{M^{g}\omega^2}{2\beta^p} \end{pmatrix}~. \end{equation} A cell with~$\beta^p$ on the right and~$\beta^q$ on the left and mass $M^l$ is referred to as~$l$. The transfer matrix of the~$l$ cell is denoted as~$\mathsf{T}^{(l)}$, which can be derived by changing the superscript~$p$ into~$q$ and vice versa in \cref{eq:TMRUCba}, as well as changing $M^g$ to $M^l$. We construct a sample consisting of five cells, whose total transfer matrix is~$\mathsf{T}=\mathsf{T}^{(g)}\mathsf{T}^{(l)}\mathsf{T}^{(g)}\mathsf{T}^{(l)}\mathsf{T}^{(g)}$. A numerical example based on~$\beta^p = (10+1\mathrm{i}) \ \mathrm{kN/m}$ and~$\beta^q = (10-1\mathrm{i}) \ \mathrm{kN/m}$ with masses~$M^{g} = \SI{0.12}{kg}$ and~$M^{l} = \SI{0.10}{kg}$ is studied here. 
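Reciprocity of this five-cell sample can be confirmed numerically: each cell matrix built from the pattern of \cref{eq:TMRUCba} is unimodular, so their product must be as well. A minimal sketch with the stated stiffness and mass values at an arbitrarily chosen test frequency:

```python
# det T = 1 for T = T_g T_l T_g T_l T_g, built from the eq. (TMRUCba) pattern.
w = 350.0                          # rad/s, arbitrary test frequency
bp, bq = 10e3 + 1e3j, 10e3 - 1e3j  # N/m, from the numerical example
Mg, Ml = 0.12, 0.10                # kg

def cell(bL, bR, M):
    """Transfer matrix of one cell with spring bL on the left, bR on the right."""
    a = M * w * w
    return [[1 - a / (2 * bR), 1j * w * (2 * bL + 2 * bR - a) / (4 * bL * bR)],
            [1j * M * w,        1 - a / (2 * bL)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Tg = cell(bp, bq, Mg)              # "g" cell: beta^p left, beta^q right
Tl = cell(bq, bp, Ml)              # "l" cell: mirrored springs
T = Tg
for step in (Tl, Tg, Tl, Tg):      # accumulate T_g T_l T_g T_l T_g
    T = matmul(step, T)
detT = T[0][0] * T[1][1] - T[0][1] * T[1][0]
```

The determinant stays at 1 even though each individual cell breaks both $\mathcal{P}$ and $\mathcal{T}$ symmetry.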
\begin{figure}[!h] \begin{subfigure}[b]{0.4385\linewidth} \centering\includegraphics[height=140pt]{figures/EP_S_Am} \caption{\label{fig:slam}} \end{subfigure}% \begin{subfigure}[b]{0.4385\linewidth} \centering\includegraphics[height=140pt]{figures/EP_S_TrAm} \caption{\label{fig:stram}} \end{subfigure} \begin{subfigure}[b]{0.4385\linewidth} \centering\includegraphics[height=140pt]{figures/EP_S_ph} \caption{\label{fig:saphs}} \end{subfigure}% \begin{subfigure}[b]{0.4385\linewidth} \centering\includegraphics[height=140pt]{figures/EP_S_Sig} \caption{\label{fig:ssig}} \end{subfigure} \begin{subfigure}[b]{0.89\linewidth} \centering\includegraphics[height=140pt]{figures/EP_S_eigvec} \caption{\label{fig:SEigVec}} \end{subfigure}% \caption{Scattering response near two EPs of the scattering matrix for a $\mathcal{PT}$ symmetric system. (\subref{fig:slam}) Amplitudes of the scattering coefficients in logarithmic scale.~(\subref{fig:stram}) Amplitudes of the transmission coefficients in linear scale for clarification.~(\subref{fig:saphs}) the scattering coefficient phases, ~(\subref{fig:ssig}) the eigenvalues of~$\Tilde{\SM}$ matrix, and~(\subref{fig:SEigVec}) the second components of the eigenvectors~\cref{eq:mu} of~$\Tilde{\SM}$ matrix. \label{fig:SMEP}} \end{figure} The scattering responses are calculated and shown in \cref{fig:SMEP}, where the frequency range is~$[320,\ 420]\ \mathrm{rad/s}$. \Cref{fig:slam} shows the amplitudes of scattering coefficients in the logarithmic scale, where two poles of~$\mathsf{S}_{aa}$ can be found at~$\omega_\mathrm{EP1}\approx\SI{348.7}{rad/s}$ and~$\omega_\mathrm{EP2}\approx\SI{405.4}{rad/s}$. At these two frequencies, the left reflection coefficient~$\mathsf{S}_{aa}$ becomes essentially zero, indicating one-way reflection. Based on \cref{eq:seiga,eq:seige}, the~$\Tilde{\SM}$ matrix exhibits coalescing eigenvalues and eigenvectors at~$\omega_\mathrm{EP(1,2)}$, as shown in \cref{fig:ssig,fig:SEigVec}. 
The eigenvectors are re-normalized as~ \begin{equation}\label{eq:mu} \vect{\mu}=\begin{pmatrix}1 \\ \mu\end{pmatrix}, \end{equation} and only the second component is shown in \cref{fig:SEigVec}. The two EPs are labelled as EP1 and EP2, and they correspond to the phase transition thresholds where the~$\mathcal{PT}$ symmetry of the eigenvectors is spontaneously broken. At these EPs, the transmission amplitude becomes one, as shown in \cref{fig:stram}. Both~$\sigma$ and~$\mu$ bifurcate at the EPs. For frequencies smaller than~$\omega_\mathrm{EP1}$ or larger than~$\omega_\mathrm{EP2}$, the system is in the~$\mathcal{PT}-$unbroken phase, where the two reflection coefficients have the same phase (see \cref{fig:saphs}) and the transmission coefficient has amplitude smaller than one. The non-degenerate eigenvalues are both unimodular but different in phase, and the eigenvectors are real. The frequency range~$(\omega_\mathrm{EP1},\omega_\mathrm{EP2})$ represents the~$\mathcal{PT}-$broken phase. In the~$\mathcal{PT}-$broken phase, the eigenvalues have the same phase but inverse amplitudes, i.e.,~$|\sigma_1\sigma_2|=1$. The second component of the eigenvector becomes purely imaginary. The reflection coefficients have exactly~$\pi$ difference in their phases. The transmission amplitude exceeds one, and all single-sided incident waves will be amplified. This single-sided reflection is most easily observed in the transmission and reflection amplitudes shown in \cref{fig:slam}. Due to the defectiveness of the scattering matrix, prescribed or measured states at~$\omega_\mathrm{EP(1,2)}$ cannot be decomposed into the eigenvectors of~$\Tilde{\SM}$. On the other hand, a scattering state can always be decomposed into the two eigen-modes when operating in the~$\mathcal{PT}-$broken or unbroken phases. In the symmetry-unbroken phase, the two basis modes have purely real components and are invariant under~$\mathcal{PT}$ reversal. 
In the symmetry-broken phase, the two basis vectors no longer maintain the symmetry due to the imaginary components. Nevertheless, the~$\mathcal{PT}$ symmetry conditions in~\cref{table:1} are always satisfied. \section{Conclusions} Using a simple tunable discrete model setup, the exceptional points (EP) of the dynamic and scattering matrices of monatomic, diatomic, or locally resonant mechanical systems are analyzed. It is shown that the eigenfrequency band structure of a micro-structured medium can possess EPs as complex singularities or defects of the associated linear operators. Various phenomena associated with wave propagation in mechanical materials can be categorized and analyzed with this tool set. To summarize, the highlights of this work are: \begin{itemize} \item Elucidation of EP-related phenomena such as level repulsion, mode coalescence, mode switching and self-orthogonality in a simple yet physical setup, \item Summary of the transfer and scattering matrix properties for general 1D (discrete and continuous) systems, \item Demonstration of unique scattering behavior at the EPs of the dynamic and scattering matrices (bi-directional reflectionless and single-sided reflection, respectively). \end{itemize} This study of discrete metamaterial systems contributes to the fundamental understanding of EPs in mechanical micro-structured media, and will be of interest for novel applications such as robust sensing and filtering. The complex-valued springs (especially the ones with gain) used in this paper represent some practical challenges. However, they are helpful in the theoretical investigation of the topological and spectral properties of EPs, and ideas for their realization are already presented in the literature. Furthermore, the discrete modeling approach can be utilized for the conceptual design of novel devices as well as for transferring knowledge from the EM and photonics domains into mechanical counterpart devices. 
See, for example, the potential lasing mechanism~\cite{Zhang2019d} and prototype EP-enabled lasing devices studied in optics~\cite{Peng2016a}. \newline \noindent\textbf{ACKNOWLEDGEMENTS} \newline The authors acknowledge NSF grant \#1825969 to the University of Massachusetts, Lowell. \newline \noindent\textbf{DATA AVAILABILITY} \newline The data that support the findings of this study are available from the corresponding author upon reasonable request. \newline The Version of Record of this article is published in The European Physical Journal Plus, and is available online at https://doi.org/10.1140/epjp/s13360-022-02626-6 \newline \begin{appendices} \section{Equations of motion derivation}\label{sec:appEOM} To derive the equations of motion (EOMs) of a unit cell in~\cref{DMEP}, an illustration is shown in \cref{fig:appEOM}. The cell springs~$\beta^c$ and neighboring cells are not shown here. \begin{figure}[!ht] \centering\includegraphics[width=180pt]{figures/EOMforce} \caption{\label{fig:appEOM} Illustration of the forces and displacement of the local resonator. The subscript~$n$ is omitted here.} \end{figure} In one unit cell, the forces acting on the internal resonator include two components. The spring force~$F^{is}$ is parallel to the~$u^i$ direction, and is defined positive if the spring is in tension. The rigid crystal mass wall applies a force~$F^{in}$ normal to~$u^i$ (parallel to component~$u^c\sin\theta$) since all contacting surfaces are frictionless. The crystal mass is kept from vertical motion by frictionless walls above and below it, and therefore that DOF does not enter the kinematic or dynamic equations. The net length increase of the resonator spring is~$\Delta l^i=u^i-u^c\cos\theta=u^i-\kappa u^c$. The tensile spring force is \begin{equation}\label{eq:fis} F^{is}=\beta^i\Delta l^i=\beta^i(u^i_n-\kappa u^c_n). 
\end{equation} The EOM of the~$u^i$ DOF is \begin{equation} M^i\frac{\partial^2u^i_n}{\partial t^2}=-F^{is}=\beta^i(\kappa u^c_n-u^i_n). \end{equation} The resonator also has a dependent DOF normal to~$u^i$, which is simply~$u^c \sin\theta$. The acceleration in this direction is caused by the wall force~$F^{in}$. Therefore, we have \begin{equation}\label{eq:fin} M^i\sin\theta\frac{\partial^2u^c_n}{\partial t^2}=F^{in}. \end{equation} For the cell mass~$M^c$, its motion is allowed only in the horizontal direction. Therefore, its EOM is \begin{equation}\label{eq:cellEOM1} M^c\frac{\partial^2u^c_n}{\partial t^2}=\beta^c(u^c_{n+1}-2u^c_n+u^c_{n-1})+F^{is}\cos\theta-F^{in}\sin\theta. \end{equation} The first term on the R.H.S. of~\cref{eq:cellEOM1} is the force applied by neighboring cells (not shown in~\cref{fig:appEOM}). The second and third terms are the forces supplied by the resonator, projected onto the horizontal direction. Substituting~\cref{eq:fis} and~\cref{eq:fin} into~\cref{eq:cellEOM1} yields \begin{equation} M^c\frac{\partial^2u^c_n}{\partial t^2}=\beta^c(u^c_{n+1}-2u^c_n+u^c_{n-1})+\kappa\beta^i(u^i_n-\kappa u^c_n)-(1-\kappa^2) M^i\frac{\partial^2u^c_n}{\partial t^2}, \end{equation} where~$\kappa=\cos\theta$ and~$1-\kappa^2=\sin^2\theta$. \section{Unit cell transfer matrix}\label{appTM} At the boundaries of the cell, the state vectors are: \begin{equation} \vect{\psi}^{l,r}=\begin{pmatrix}v^{l,r} \\ N^{l,r}\end{pmatrix}, \end{equation} where~$l$ or $r$ denotes left or right,~$v^{l,r}=\mathrm{i}\omega u^{l,r}$ is the particle velocity in the~$x$ direction, and~$N^{l,r}$ is the internal normal traction force in the springs applied at the boundary (tensile positive, relating to the normal stress component in a continuum system). The spring constitutive equations are: \begin{align} N^l&=2\beta^p(u^c-u^l),\label{TL}\\ N^r&=2\beta^q(u^r-u^c)\label{TR}. 
\end{align} The equation of motion for the main crystal chain mass DOF is: \begin{equation} N^r-N^l+\kappa\beta^i(u^i-\kappa u^c)+\omega^2M^{ci}u^c=0,\label{TC} \end{equation} where~$\kappa$ and~$M^{ci}$ are quantities defined in the main text. The equation of motion for the internal resonator is: \begin{equation} \beta^i(u^i-\kappa u^c)=\omega^2 M^{i}u^i.\label{TI} \end{equation} Combining \crefrange{TC}{TI}, the crystal displacement can be written as: \begin{equation} u^c=\frac{N^l-N^r}{K_T},\label{eq:uckt} \end{equation} where \begin{equation} K_T=\dfrac{\kappa^2 M^i \omega^2}{1-(\omega/\omega^i)^2}+M^{ci}\omega^2, \end{equation} which would also simplify to $K_T = M^c \omega^2$ in the limit when $M^i = 0$. Substituting \cref{eq:uckt} into \crefrange{TL}{TR}, the state vectors at the boundaries of a unit cell can be written as: \begin{equation} \label{eq:cellTM} \begin{pmatrix} v^r\\N^r\end{pmatrix}= \mathsf{T}^{cell} \begin{pmatrix} v^l\\N^l\end{pmatrix}, \end{equation} with \begin{equation}\label{eq:TMRUC} \mathsf{T}^{cell}= \begin{pmatrix} 1-\dfrac{K_T}{2\beta^q} & \mathrm{i}\omega\dfrac{2\beta^p+2\beta^q-K_T}{4\beta^p\beta^q} \\ \dfrac{\mathrm{i}K_T}{\omega} & 1-\dfrac{K_T}{2\beta^p} \end{pmatrix}~. \end{equation} The eigenvectors of a non-defective transfer matrix span~$\mathbb{C}^2$ and therefore form a basis for any possible state. Thus any state vector $\vect{\psi}$ observed can be decomposed into a superposition of two eigenmodes, i.e., one can identify and separate (any wave or any linear combination of) the forward and backward wave components in this 1D case into eigenmodes of the transfer matrix of the finite specimen. 
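The appendix result can be exercised numerically. The sketch below assembles $\mathsf{T}^{cell}$ from \cref{eq:TMRUC} with illustrative parameter values, assuming $\omega^i=\sqrt{\beta^i/M^i}$ for the resonator frequency and $M^{ci}=M^c+(1-\kappa^2)M^i$ from Appendix A (both labeled as assumptions, since those definitions appear in the main text outside this section), and checks that the matrix is unimodular, so its eigenvalues $\mathrm{e}^{-\mathrm{i}Q}$ come in reciprocal pairs ($Q^-=-Q^+$):

```python
import cmath

# Unit-cell transfer matrix of eq. (TMRUC); parameter values are illustrative.
bp = bq = 10e3                       # N/m
bi, Mi, Mc, kappa = 8e3, 0.1, 0.1, 0.5
Mci = Mc + (1 - kappa**2) * Mi       # assumed, from the Appendix A EOMs
wi = (bi / Mi) ** 0.5                # assumed resonator frequency sqrt(bi/Mi)
w = 250.0                            # rad/s, below resonance

KT = kappa**2 * Mi * w**2 / (1 - (w / wi)**2) + Mci * w**2
T = [[1 - KT / (2 * bq), 1j * w * (2 * bp + 2 * bq - KT) / (4 * bp * bq)],
     [1j * KT / w,        1 - KT / (2 * bp)]]

detT = T[0][0] * T[1][1] - T[0][1] * T[1][0]       # should be exactly 1
tr = T[0][0] + T[1][1]
lam1 = (tr + cmath.sqrt(tr * tr - 4 * detT)) / 2   # e^{-iQ^+}
lam2 = (tr - cmath.sqrt(tr * tr - 4 * detT)) / 2   # e^{-iQ^-} = 1/lam1
```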
\section{Scattering matrix}\label{appSM} The state vectors at locations~$x^a$ and~$x^b$ are simply derived using superposition: \begin{equation}\label{eq:yab} \vect{\psi}^{a,b}=\mathrm{i} \omega e^{\mathrm{i} \omega t}\begin{pmatrix} 1&1\\-Z^{a,b} &Z^{a,b} \end{pmatrix}\begin{pmatrix} A^{(a,b)+}\\A^{(a,b)-} \end{pmatrix}, \end{equation} where the impedance for bar $a$ or $b$ is~$Z^{a,b}=-N^{(a,b)+}/v^{(a,b)+}=\pi r_0^2\sqrt{E_0\rho_0}$. For a continuum~\cite{Amirkhizi2017}, the impedance is simply~$Z=-\sigma/v=\sqrt{E_0 \rho_0}$. For the continuum-discrete interfaces here, one needs to include the bar cross-sectional area in the calculation. Since it was chosen that $x^{a,b} = x^{l,r}$ (associated with the left and right boundaries of the full system), then \begin{equation} \label{eq:sampleTM} \vect{\psi}^b= \mathsf{T} \vect{\psi}^a=\begin{pmatrix} \mathsf{T}_{11} &\mathsf{T}_{12} \\ \mathsf{T}_{21} &\mathsf{T}_{22} \end{pmatrix} \vect{\psi}^a, \end{equation} due to the construction of~$\mathsf{T}=\mathsf{T}^{(J)} \cdots\mathsf{T}^{(2)}\mathsf{T}^{(1)}$ as the transfer matrix of the entire sample (in total $J$ cells) from~$x^l$ to~$x^r$, where~$\mathsf{T}^{(j)}$ is the TM of the~$j$-th cell counting from the left interface~$x^l$, and $\vect{\psi}^{a,b} = \vect{\psi}^{l,r}$. Substituting \cref{eq:yab} into \cref{eq:sampleTM}, the scattering coefficients in \cref{eq:asa} can be obtained analytically: \begin{align} \mathsf{S}_{aa}&=\frac{-\mathsf{T}_{21}+\mathsf{T}_{22}Z^b-\mathsf{T}_{11} Z^a+\mathsf{T}_{12}Z^a Z^b}{\Delta},\label{eq:saa}\\ \mathsf{S}_{bb}&=\frac{-\mathsf{T}_{21}+\mathsf{T}_{11}Z^a-\mathsf{T}_{22}Z^b+\mathsf{T}_{12}Z^a Z^b}{\Delta},\label{eq:sbb}\\ \mathsf{S}_{ba}&=\frac{2Z^b}{\Delta}\abs{\mathsf{T}},\label{eq:sba}\\ \mathsf{S}_{ab}&=\frac{2Z^a}{\Delta},\label{eq:sab}\\ \Delta&=\mathsf{T}_{21}+\mathsf{T}_{11}Z^a+\mathsf{T}_{22}Z^b+\mathsf{T}_{12}Z^a Z^b,\label{eq:dels} \end{align} \end{appendices} \normalem \bibliographystyle{aipnum4-1}
\section{Introduction} At extremely high energies, nucleus-nucleus collisions may be described by parton interactions in the framework of perturbative QCD (pQCD)-inspired models \cite{HIJING,PCM,DTUNUC}. In this framework, hard or semihard scatterings among partons dominate the reaction dynamics. They can liberate partons from the individual confining nucleons, thus producing a large amount of transverse energy in the central region \cite{JBAM,KLL}, and drive the initially produced parton system toward equilibrium \cite{PCM,SHUR,BDMTW,XS93}. In principle, the same kind of hard processes, such as open charm production \cite{BMXW92,KG93,LG94} and direct photon and dilepton production \cite{SX93,STLD}, can also be used as direct probes of the early parton dynamics and the evolution of the quark-gluon plasma. Unlike strange quarks, charm quarks cannot be easily produced during the mixed and hadronic phases of the dense matter since the charm mass is much larger than the corresponding temperature scale. The only period when charm quarks can be easily produced is during the early stage of the parton evolution when the effective temperature is still high. At this stage, the parton gas is not yet fully equilibrated, so the temperature is only an effective parameter describing the average momentum scale. By measuring this pre-equilibrium charm production, one can thus probe the initial parton density in phase space and shed light on the equilibration time \cite{BMXW92}. Roughly speaking, ultrarelativistic heavy-ion collisions in a partonic picture can be divided into three stages: (1) During the early stage, hard or semihard parton scatterings, which happen on a time scale of about 0.2 fm/$c$, produce a hot and dilute parton gas. This parton gas is dominated by gluons and its quark content is far below the chemical equilibrium value.
Multiple hard scatterings suffered by a single parton during this short period of time when the beam partons pass through each other are suppressed due to the interference embedded in the Glauber formula for multiple scatterings \cite{WANG95}. This leads to the observed disappearance of the Cronin effect at high energy and at large transverse momentum \cite{CRON2}. Interference and parton fusion also lead to the depletion of small-$x$ partons in the effective parton distributions inside a nucleus \cite{BRLU,EQW94}. This nuclear shadowing of parton distributions reduces the initial parton production \cite{WG92}. (2) After two beams of partons pass through each other, the produced parton gas in the central rapidity region starts its evolution toward (kinetic) thermalization mainly through elastic scatterings and expansion. The kinematic separation of partons through free-streaming gives an estimate of the time scale $\tau_{\rm iso}\sim 0.5 - 0.7$ fm/$c$ \cite{BDMTW,KEXW94a}, when local isotropy in momentum distributions is reached. (3) Further evolution of the parton gas toward a fully (chemically) equilibrated parton plasma is dictated by the parton proliferation through induced radiation and gluon fusion. Due to the consumption of energy by the additional parton production, the effective temperature of the parton plasma cools down considerably faster than the ideal Bjorken's scaling solution. Therefore, the life time of the plasma is reduced before the temperature drops below the QCD phase transition temperature \cite{BDMTW}. Similarly, charm production can also be divided into three different contributions in the history of the evolution of the parton system: (1) initial production during the overlapping period; (2) pre-thermal production from secondary parton scatterings during the thermalization, $\tau<\tau_{\rm iso}$; and (3) thermal production during the parton equilibration, $\tau>\tau_{\rm iso}$, in the expanding system.
In this paper, we will first review the equilibration of the initially produced parton gas in Sec. II, incorporating the result of an improved perturbative QCD analysis of the Landau-Pomeranchuk-Migdal (LPM) effect \cite{GWLPM1,GWLPM2}. Then we will discuss the three stages of open charm production in reverse order, starting with the charm production during the final stage of parton equilibration in Sec. III. For pre-thermal charm production, we will consider the space-momentum correlation in the initial parton phase-space distributions, which will suppress open charm production during this period as compared to previous estimates \cite{BMXW92}. In Sec. IV, we will compare the results to the charm production during the initial hard or semi-hard scatterings and also to the results of Geiger's calculation \cite{KG93}, which are about 40-50 times higher than our estimates here. We will also discuss the change in charm production due to the uncertainties in the initial parton density and effective temperature. Finally we give our conclusions in Sec. V. \section{Parton Production and Equilibration} At collider energies ($\sqrt{s}> 100$ GeV), hard or semihard parton scatterings are believed to be the dominant mechanism for transverse energy production in the central region \cite{JBAM,KLL}. These hard processes happen on a short time scale and they generally break color coherence inside individual nucleons \cite{BMW92}. After the fast partons pass through each other and leave the central region, a partonic gas will be left behind which is not immediately in thermal and chemical equilibrium. The partons inside such a system will then undergo further interactions and free-streaming. Neglecting parton scatterings in this period of time, the kinematic separation of partons with different rapidities in a cell establishes local momentum isotropy at a time of the order of $\tau_{\rm iso}=0.7$ fm/$c$ \cite{BDMTW,KEXW94a}.
If we assume this is the actual kinetic equilibration (or thermalization) time for the partonic system, the subsequent chemical equilibration can then be described by a set of rate equations. In this section we will review parton equilibration following Ref.~\cite{BDMTW} with an improved estimate of the gluon equilibration rate. \subsection{Initial conditions: a hot and dilute gluonic gas} Currently there are many models for incorporating hard and semi-hard processes in hadronic and nuclear collisions \cite{HIJING,PCM,DTUNUC}. We will use the results of the HIJING Monte Carlo model \cite{HIJING} to estimate the initial parton production. In this model, multiple hard or semi-hard parton scatterings with initial and final state radiation are combined together with Lund string phenomenology \cite{LUND} for the accompanying soft nonperturbative interactions. Let us first estimate the initial conditions at time $\tau_{\rm iso}$ from the HIJING results. Since we are here primarily interested in the chemical equilibration of the parton gas which has already reached local isotropy in momentum space, we shall assume that the parton distributions can be approximated by thermal phase space distributions with non-equilibrium fugacities $\lambda_i$: \begin{equation} f(k;T,\lambda_i) = \lambda_i\left( e^{u\cdot k /T} \pm \lambda_i\right)^{-1}, \label{eq:eq1} \end{equation} where $u^{\mu}$ is the four-velocity of the local comoving reference frame. When the parton fugacities $\lambda_i$ are much less than unity, as may happen during the early evolution of the parton system, we can neglect the quantum corrections in Eq.~(\ref{eq:eq1}) and write the momentum distributions in the factorized form, \begin{equation} \label{19} f(k;T,\lambda _i)=\lambda _i\left (e^{ u\cdot k /T}\pm 1\right)^{-1}.
\end{equation} Using this form of the distributions, one has the parton and energy densities, \begin{equation} n = (\lambda_g a_1 +\lambda_q b_1)T^3, \quad \varepsilon = (\lambda_g a_2 +\lambda_q b_2) T^4, \label{eq:eq2} \end{equation} where $a_1=16\zeta (3)/\pi^2\approx 1.95$ and $a_2=8\pi^2/15\approx 5.26$ for a Bose distribution, and $b_1=9\zeta (3)N_f/\pi^2\approx 2.20$ and $b_2=7\pi^2N_f/20 \approx 6.9$ for a Fermi-Dirac distribution. For a baryon symmetric system, $\lambda_q=\lambda_{\bar q}$. Since boost invariance has been demonstrated to be a good approximation for the initially produced partons \cite{KEXW94a}, we can then estimate the initial parton fugacities, $\lambda_{g,q}^0$, and temperature $T_0$ from \begin{equation} n_0 = \frac{1}{\pi R^2_{A} \tau_{\rm iso}} \frac{dN}{dy}\; , \quad \varepsilon_0 = n_0 \frac{4}{\pi}\langle k_T\rangle, \label{eq:eq3} \end{equation} where $\langle k_T\rangle$ is the average transverse momentum. The quark fugacity is taken as $\lambda_q^0 = 0.16 \lambda_g^0$, corresponding to a ratio of 0.14 of the initial quark (antiquark) number to the total number of partons. Table \ref{table1} shows these relevant quantities at the moment $\tau_{\rm iso}$ for Au + Au collisions at Brookhaven National Laboratory Relativistic Heavy Ion Collider (RHIC) and CERN Large Hadron Collider (LHC) energies. One can observe that the initial parton gas is rather hot, reflecting the large average transverse momentum. However, the parton gas is very dilute as compared to an ideal gas at the same temperature. The gas is also dominated by gluons, with its quark content far below the chemical equilibrium value. We should emphasize that the initial conditions listed here result from the HIJING calculation of parton production through semihard scatterings. Soft partons, {\em e.g.}, due to parton production from the color field \cite{KEMG}, are not included.
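The numerical coefficients quoted below Eq.~(\ref{eq:eq2}) follow directly from the massless Bose and Fermi phase-space integrals of the factorized distributions; a short numerical sketch (using scipy; note that the quoted decimal values $b_1\approx 2.20$, $b_2\approx 6.9$ correspond to $N_f=2$):

```python
import math
from scipy.integrate import quad

def stat_integral(power, sign):
    """Int_0^inf k^power / (e^k + sign) dk; sign = -1 Bose, +1 Fermi."""
    return quad(lambda k: k**power / (math.exp(k) + sign), 0, 60)[0]

Nf = 2.0  # the quoted numbers b1 ~ 2.20, b2 ~ 6.9 correspond to Nf = 2

# gluons: degeneracy 16; quarks + antiquarks: degeneracy 12 Nf
a1 = 16 / (2 * math.pi**2) * stat_integral(2, -1)       # gluon number
a2 = 16 / (2 * math.pi**2) * stat_integral(3, -1)       # gluon energy
b1 = 12 * Nf / (2 * math.pi**2) * stat_integral(2, +1)  # quark number
b2 = 12 * Nf / (2 * math.pi**2) * stat_integral(3, +1)  # quark energy

print(a1, a2, b1, b2)  # ~ 1.95, 5.26, 2.20, 6.91
```

The same integrals give the closed forms $a_1=16\zeta(3)/\pi^2$, $a_2=8\pi^2/15$, $b_1=9\zeta(3)N_f/\pi^2$, and $b_2=7\pi^2 N_f/20$.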
\subsection{Master rate equations} In general, chemical reactions among partons can be quite complicated because of the possibility of initial and final-state gluon radiations. Interference effects due to multiple scatterings inside a dense medium, {\em i.e.}, the LPM suppression of soft gluon radiation, have to be taken into account. One lesson learned from the LPM effect \cite{GWLPM1,GWLPM2} is that the radiation between two successive scatterings is the sum, {\em on the amplitude level}, of both the initial state radiation from the first scattering and the final state radiation from the second one. Since the off-shell parton is space-like in the first amplitude and time-like in the second, the picture of a time-like parton propagating inside a medium in the parton cascade simulations \cite{PCM} should break down. Instead, we shall here consider the initial and final state radiations together, associated with a single scattering (to leading order, a single additional gluon is radiated, as in $gg\to ggg$), in which we can include the LPM effect by a radiation suppression factor. The analysis of the QCD LPM effect in Refs.~\cite{GWLPM1,GWLPM2} has been done for a fast parton traveling inside a parton plasma. We will use the results for radiations off thermal partons whose average energy is about $T$, since we expect the same physics to happen. In order to permit the approach to chemical equilibrium, the reverse process, {\em i.e.}, gluon absorption, has to be included as well, which is easily achieved by making use of detailed balance. We consider only the dominant process $gg\to ggg$. Radiative processes involving quarks have substantially smaller cross sections in pQCD, and quarks are considerably less abundant than gluons in the initial phase of the chemical evolution of the parton gas. Here we are interested in understanding the basic mechanisms underlying the formation of a chemically equilibrated quark-gluon plasma, and the essential time-scales.
We hence restrict our considerations to the dominant reaction mechanisms for the equilibration of each parton flavor. These are just four processes \cite{MSM86}: \begin{equation} gg \leftrightarrow ggg, \quad gg\leftrightarrow q\overline{q}.\label{eq:eq4} \end{equation} Other scattering processes ensure the maintenance of thermal equilibrium $(gg\leftrightarrow gg, \; gq \leftrightarrow gq$, etc.) or yield corrections to the dominant reaction rates $(gq\leftrightarrow qgg$, etc.). Restricting to the reactions in Eq.~(\ref{eq:eq4}) and assuming that elastic parton scatterings are sufficiently rapid to maintain local thermal equilibrium, the evolution of the parton densities is governed by the master equations \cite{BDMTW}: \begin{eqnarray} \partial_{\mu}(n_gu^{\mu}) &= & \frac{1}{2}\sigma_3 n_g^2 \left( 1-\frac{n_g}{\tilde n_g}\right) -\frac{1}{2}\sigma_2 n_g^2 \left( 1 - \frac{n_q^2 \tilde n_g^2} {\tilde n_q^2 n_g^2}\right), \label{eq:eq5}\\ \partial_{\mu} (n_qu^{\mu}) &= & \frac{1}{2}\sigma_2 n_g^2 \left( 1 - \frac{n_q^2 \tilde n_g^2} {\tilde n_q^2 n_g^2}\right), \label{eq:eq6} \end{eqnarray} where ${\tilde n_i}\equiv n_i/\lambda_i$ denote the densities with unit fugacities, $\lambda_i=1$, and $\sigma_3$ and $\sigma_2$ are thermally averaged, velocity weighted cross sections, \begin{equation} \sigma_3 = \langle\sigma(gg\to ggg)v\rangle, \quad \sigma_2 = \langle \sigma (gg\to q\bar q)v\rangle. \label{eq:eq7} \end{equation} We have also assumed detailed balance and a baryon symmetric matter, $n_q=n_{\bar q}$. If we neglect the effects of viscosity due to elastic scattering \cite{KEMG,VISC}, we then have the hydrodynamic equation \begin{equation} \partial_{\mu} (\varepsilon u^{\mu}) + P\;\partial_{\mu} u^{\mu} = 0, \label{eq:eq8} \end{equation} which determines the evolution of the energy density.
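A quick consistency check on Eqs.~(\ref{eq:eq5})--(\ref{eq:eq6}): written in terms of the fugacities, both source terms vanish identically at $\lambda_g=\lambda_q=1$, as detailed balance requires. A minimal sketch (the numerical values of $\sigma_3$, $\sigma_2$, and $T$ below are arbitrary placeholders):

```python
def source_terms(lam_g, lam_q, T, sigma3=1.0, sigma2=0.1):
    """Right-hand sides of the master equations (eq:eq5)-(eq:eq6) in terms
    of fugacities; sigma3 and sigma2 are placeholder thermally averaged
    cross sections in arbitrary units."""
    a1 = 1.95                        # gluon density coefficient, Eq. (eq:eq2)
    n_g = lam_g * a1 * T**3          # gluon density
    S_gain = 0.5 * sigma3 * n_g**2 * (1.0 - lam_g)              # gg <-> ggg
    S_conv = 0.5 * sigma2 * n_g**2 * (1.0 - (lam_q / lam_g)**2)  # gg <-> qqbar
    return S_gain - S_conv, S_conv   # (gluon source, quark source)

# out of equilibrium the sources drive lam_g and lam_q toward unity ...
print(source_terms(0.1, 0.02, 0.5))
# ... and both vanish identically in full chemical equilibrium
print(source_terms(1.0, 1.0, 0.5))  # (0.0, 0.0)
```

This fixed-point structure is what guarantees that the rate equations below relax toward $\lambda_i=1$ rather than overshooting it.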
For a time scale $\tau\ll R_A$, we can neglect the transverse expansion and consider the expansion of the parton plasma as purely longitudinal, which leads to Bjorken's scaling solution \cite{BJOR} of the hydrodynamic equation: \begin{equation} {d\varepsilon\over d\tau} + {\varepsilon+P\over\tau} = 0. \label{eq:eq9} \end{equation} We further assume the ultrarelativistic equation of state, $\varepsilon=3 P$, with $n_i$ and $\varepsilon$ given by Eq.~(\ref{eq:eq2}). We can then solve the hydrodynamic equation, \begin{equation} [\lambda_g + \frac{b_2}{a_2}\lambda_q]^{3/4} T^3\tau = \hbox{const.} \;\; , \label{eq:eq10} \end{equation} and rewrite the rate equations in terms of the time evolution of the parameters $T(\tau)$, $\lambda_g(\tau)$ and $\lambda_q(\tau)$, \begin{eqnarray} \frac{\dot\lambda_g}{\lambda_g} + 3\frac{\dot T}{T} + \frac{1}{\tau} &= &R_3 (1-\lambda_g)-2R_2 \left(1- \frac{\lambda_q^2}{\lambda_g^2} \right) \label{eq:eq11} \\ \frac{\dot\lambda_q}{\lambda_q} + 3\frac{\dot T}{T} + \frac{1}{\tau} &= &R_2 {a_1\over b_1} \left( \frac{\lambda_g}{\lambda_q} - \frac{\lambda_q}{\lambda_g}\right), \label{eq:eq12} \end{eqnarray} where the density weighted reaction rates $R_3$ and $R_2$ are defined as \begin{equation} R_3 = \textstyle{{1\over 2}} \sigma_3 n_g, \quad R_2 = \textstyle{{1\over 2}} \sigma_2 n_g. \label{eq:eq13} \end{equation} Notice that for a fully equilibrated system ($\lambda_g=\lambda_q=1$), Eq. (\ref{eq:eq10}) corresponds to the Bjorken solution, $T(\tau)/T_0=(\tau_0/\tau)^{1/3}$. \subsection{Parton equilibration rates} To take into account the LPM effect in the calculation of the reaction rate $R_3$ for $gg\rightarrow ggg$, we simply impose, for each $gg\rightarrow ggg$ process, the LPM suppression of the gluon radiation whose effective formation time $\tau_{\rm QCD}$ is much longer than the mean-free-path $\lambda_f$ of multiple scatterings. At the same time, the LPM effect also regularizes the infrared divergence associated with QCD radiation.
However, $\sigma_3$ still contains infrared singularities in the gluon propagators. For an equilibrium system one can in principle apply the resummation technique developed by Braaten and Pisarski \cite{BP90} to regularize the electric part of the propagators, though the magnetic sector still has to be determined by an unknown magnetic screening mass which up to now can only be calculated nonperturbatively \cite{TBBM93}. Since we are dealing with a nonequilibrium system, Braaten and Pisarski's resummation may not be well defined. As an approximation, we will use the Debye screening mass \cite{BMW92}, \begin{equation} \mu_D^2 = {6g^2\over \pi^2} \int_0^{\infty} kf(k) dk =4\pi\alpha_s T^2\lambda_g, \label{eq:eq14} \end{equation} to regularize all singularities in both the scattering cross sections and the radiation amplitude. To further simplify the calculation we approximate the LPM suppression factor in Refs.~\cite{GWLPM1,GWLPM2} by a $\theta$-function, $\theta(\lambda_f-\tau_{\rm QCD})$, where \begin{equation} \tau_{\rm QCD}=\frac{C_A}{2C_2}\frac{2\cosh y}{k_{\perp}}, \end{equation} is the effective formation time of the gluon radiation in QCD, which depends on the second Casimir $C_2$ of the beam parton representation in $SU(3)$, {\em e.g.}, $C_2=C_A=3$ for a gluon. In the previous calculation of the interaction rate \cite{BDMTW}, this color factor was not taken into account. The modified differential cross section for $gg\rightarrow ggg$ is then, \begin{equation} \frac{d\sigma_3}{dq_{\perp}^2 dy d^2k_{\perp}} =\frac{d\sigma_{\rm el}^{gg}}{dq_{\perp}^2}\frac{dn_g}{dy d^2k_{\perp}} \theta(\lambda_f-\tau_{QCD})\theta(\sqrt{s}-k_{\perp}\cosh y), \end{equation} where the second step-function accounts for energy conservation, and $s=18T^2$ is the average squared center-of-mass energy of two gluons in the thermal gas.
The regularized gluon density distribution induced by a single scattering is \cite{GUNION}, \begin{equation} \frac{dn_g}{dy d^2k_{\perp}} =\frac{C_A\alpha_s}{\pi^2} \frac{q_{\perp}^2}{k_{\perp}^2[({\bf k}_{\perp} -{\bf q}_{\perp})^2 +\mu_D^2]}. \label{eq:dng} \end{equation} Similarly, the regularized small angle $gg$ scattering cross section is, \begin{equation} \frac{d\sigma_{\rm el}^{gg}}{dq_{\perp}^2} =\frac{9}{4}\frac{2\pi\alpha_s^2}{(q_{\perp}^2+\mu_D^2)^2}. \end{equation} The mean-free-path for elastic scatterings is then, \begin{equation} \lambda_f^{-1}\equiv n_g\int_0^{s/4}dq_{\perp}^2 \frac{d\sigma_{\rm el}^{gg}}{dq_{\perp}^2} =\frac{9}{8}a_1\alpha_s T\frac{1}{1+8\pi\alpha_s\lambda_g/9}, \end{equation} which depends only weakly on the gluon fugacity $\lambda_g$, in contrast to the one used in a previous study \cite{BDMTW}. Using, \begin{equation} \int_0^{2\pi}d\phi \frac{1}{({\bf k}_{\perp}-{\bf q}_{\perp})^2+\mu_D^2} =\frac{2\pi}{\sqrt{(k_{\perp}^2+q_{\perp}^2+\mu_D^2)^2 -4q_{\perp}^2k_{\perp}^2}}, \end{equation} we can complete part of the integrations and obtain, \begin{equation} R_3/T=\frac{32}{3a_1}\alpha_s\lambda_g(1+8\pi\alpha_s\lambda_g/9)^2 {\cal I}(\lambda_g), \end{equation} where ${\cal I}(\lambda_g)$ is a function of $\lambda_g$, \begin{eqnarray} {\cal I}(\lambda_g)=\int_1^{\sqrt{s}\lambda_f}dx \int_0^{s/4\mu_D^2}&dz& \frac{z}{(1+z)^2} \left\{ {\cosh^{-1}(\sqrt{x}) \over x\sqrt{[x+(1+z)x_D]^2-4x\;z\;x_D}}\right. \nonumber \\ &+& \left. \frac{1}{s\lambda_f^2}{\cosh^{-1}(\sqrt{x}) \over \sqrt{[1+x(1+z)y_D]^2-4x\;z\;y_D}}\right\}, \end{eqnarray} where $x_D=\mu_D^2\lambda_f^2$ and $y_D=\mu_D^2/s$. We can evaluate the integral numerically and determine the dependence of $R_3/T$ on the gluon fugacity $\lambda_g$. In Fig.~\ref{fig1}, $R_3/T$ is plotted versus $\lambda_g$ for a coupling constant $\alpha_s=0.3$. The gluon production rate increases with $\lambda_g$ and then saturates when the system is in equilibrium.
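The closed forms for $\mu_D^2$ and $\lambda_f$ are easy to cross-check numerically. The sketch below verifies Eq.~(\ref{eq:eq14}) against direct integration of the factorized Bose distribution and illustrates the mild $\lambda_g$ dependence of the mean-free-path (the values of $T$ and $\lambda_g$ are illustrative):

```python
import math
from scipy.integrate import quad

alpha_s = 0.3

def debye_mass_sq(T, lam_g):
    """mu_D^2 from the closed form of Eq. (eq:eq14)."""
    return 4.0 * math.pi * alpha_s * T**2 * lam_g

def debye_mass_sq_numeric(T, lam_g):
    """mu_D^2 = (6 g^2 / pi^2) Int_0^inf k f(k) dk, with the factorized
    Bose distribution f = lam_g / (e^{k/T} - 1) and g^2 = 4 pi alpha_s."""
    g2 = 4.0 * math.pi * alpha_s
    integral = quad(lambda k: k * lam_g / (math.exp(k / T) - 1.0),
                    0, 60 * T)[0]
    return 6.0 * g2 / math.pi**2 * integral

def mean_free_path(T, lam_g):
    """lambda_f from the regularized gg elastic cross section (GeV^-1)."""
    a1 = 1.95
    inv = 9.0 / 8.0 * a1 * alpha_s * T \
          / (1.0 + 8.0 * math.pi * alpha_s * lam_g / 9.0)
    return 1.0 / inv

# the closed form and the numerical integral agree
print(debye_mass_sq(0.5, 0.1), debye_mass_sq_numeric(0.5, 0.1))
# lambda_f changes only mildly between a dilute and a saturated gluon gas
print(mean_free_path(0.5, 0.05), mean_free_path(0.5, 1.0))
```

Note that the $\lambda_g$ from the density $n_g$ cancels against the screening of the cross section, which is why $\lambda_f$ varies so little with the fugacity.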
Note that in principle one should multiply the phase-space integral by $1/3!$ to take into account the symmetrization of identical particles in $gg \rightarrow ggg$ as in Ref.~\cite{SX93}. However, for the dominant soft radiation we consider here, the radiated soft gluon does not overlap with the two incident gluons in phase-space. Thus we only multiply the cross section by $1/2!$ to obtain the equilibration rate. The calculation of the quark equilibration rate $R_2$ for $gg\rightarrow q\bar{q}$ is more straightforward. The estimate in Ref.~\cite{BDMTW} gives, \begin{equation} R_2 = {1\over 2}\sigma_2 n_g \approx 0.24 N_f \;\alpha_s^2 \lambda_g T \ln (5.5/\lambda_g). \label{56} \end{equation} The dashed line in Fig.~\ref{fig1} shows the normalized rate $R_2/T$ for $N_f=2.5$, taking into account the reduced phase space of strange quarks at moderate temperatures, as a function of the gluon fugacity. \subsection{Evolution of the parton plasma} With the parton equilibration rates, which in turn depend on the parton fugacities, we can solve the master equations self-consistently and obtain the time evolution of the temperature and the fugacities. Shown in Figs.~\ref{fig2} and \ref{fig3} is the time dependence of $T$, $\lambda_g$, and $\lambda_q$ for the initial conditions listed in Table~\ref{table1} at RHIC and LHC energies. We find that the parton gas cools considerably faster than predicted by Bjorken's scaling solution $(T^3\tau$ = const.), shown as dotted lines, because the production of additional partons approaching the chemical equilibrium state consumes an appreciable amount of energy. The accelerated cooling, in turn, slows down the chemical equilibration process, which is more apparent at RHIC than at LHC energies. Therefore, the parton system can hardly reach its equilibrium state before the effective temperature drops below $T_c \approx 200$ MeV within a short period of time of 1-2 fm/$c$ at RHIC energy.
At LHC energy, however, the parton gas comes very close to equilibrium and the plasma may exist in a deconfined phase for as long as 4-5 fm/$c$. Another important observation is that quarks never approach chemical equilibrium at either energy. This is partially due to the small initial quark fugacity and partially due to the small quark equilibration rate. We note that the initial conditions used here result from the HIJING model calculation in which only initial direct parton scatterings are taken into account. Since HIJING is a pQCD motivated phenomenological model, there are some uncertainties related to the initial parton production, as listed in Ref.~\cite{BDMTW}. We can estimate the effect of the uncertainties in the initial conditions on the parton gas evolution by multiplying the initial energy and parton number densities at RHIC energy by a factor of 4. This results in the initial fugacities $\lambda_g^0=0.2$ and $\lambda_q^0=0.024$. With these high initial densities, the parton plasma can evolve into a nearly equilibrated gluon gas as shown in Fig.~\ref{fig4}. The deconfined phase will also last longer, about 4 fm/$c$. However, the system is still dominated by gluons, with few quarks and antiquarks as compared to a fully chemically equilibrated system. If the uncertainties in the initial conditions are caused by soft parton production from the color mean fields, the initial effective temperature will decrease. Therefore, we can alternatively increase the initial parton density by a factor of 4 and decrease $T_0$ to 0.4 GeV at the same time. This leads to higher initial fugacities, $\lambda_g^0=0.52$ and $\lambda_q^0=0.083$. As shown in Fig.~\ref{fig4} by the curves with stars, this system evolves faster toward equilibrium, however, with a shorter life-time in the deconfined phase due to the reduced initial temperature.
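The coupled system of Eqs.~(\ref{eq:eq10})--(\ref{eq:eq12}) can be integrated numerically once $R_3$ and $R_2$ are specified. The sketch below uses the $R_2$ parametrization of Eq.~(\ref{56}), but only a schematic stand-in for $R_3$ (the full rate requires the integral ${\cal I}(\lambda_g)$), and illustrative RHIC-like initial values rather than the Table~\ref{table1} entries:

```python
import math
from scipy.integrate import solve_ivp

alpha_s, Nf = 0.3, 2.5
a1, b1, a2, b2 = 1.95, 2.20, 5.26, 6.9
hbarc = 0.1973                     # GeV fm, to convert fm/c to GeV^-1

def R2(lam_g, T):
    # quark production rate, Eq. (56)
    return 0.24 * Nf * alpha_s**2 * lam_g * T * math.log(5.5 / lam_g)

def R3(lam_g, T):
    # schematic stand-in for the gluon rate: any smooth saturating form
    # suffices for this illustration (NOT the paper's computed R_3)
    return 2.0 * alpha_s**2 * T * math.sqrt(lam_g)

def rhs(tau, y):
    """tau in GeV^-1; y = (lambda_g, lambda_q, T in GeV)."""
    lam_g, lam_q, T = y
    r = b2 / a2
    Fg = R3(lam_g, T) * (1 - lam_g) \
         - 2 * R2(lam_g, T) * (1 - (lam_q / lam_g)**2)
    Fq = R2(lam_g, T) * (a1 / b1) * (lam_g / lam_q - lam_q / lam_g)
    L = lam_g + r * lam_q
    # X = 3 Tdot/T + 1/tau, from differentiating the constraint (eq:eq10)
    X = -3.0 * (lam_g * Fg + r * lam_q * Fq) / L
    return [lam_g * (Fg - X), lam_q * (Fq - X), T * (X - 1.0 / tau) / 3.0]

tau0, tau1 = 0.7 / hbarc, 4.0 / hbarc      # 0.7 fm/c -> 4 fm/c
sol = solve_ivp(rhs, (tau0, tau1), [0.05, 0.008, 0.55], rtol=1e-8)
lam_g, lam_q, T = sol.y[:, -1]
# fugacities grow while T falls faster than the Bjorken tau^{-1/3} law
print(lam_g, lam_q, T)
```

Because $\lambda_g+\frac{b_2}{a_2}\lambda_q$ grows, Eq.~(\ref{eq:eq10}) forces $T$ below the ideal Bjorken curve, reproducing the accelerated cooling discussed above.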
We thus can conclude that perturbative parton production and scatterings are very likely to produce a quark-gluon plasma (or, more specifically, a gluon plasma) in ultrarelativistic heavy ion collisions at LHC energy. However, the fate of the quark-gluon plasma at RHIC energy has to be determined by a more careful examination of the uncertainties in the initial conditions. These uncertainties will surely affect the open charm production during the equilibration, as we shall discuss. \section{Thermal charm production during equilibration} With the given evolution of the parton gas, we can now calculate open charm production during the parton equilibration. Similar to light quarks, charm quarks are produced through gluon fusion $gg\rightarrow c\bar{c}$ and quark-antiquark annihilation $q\bar{q}\rightarrow c\bar{c}$ during the evolution of the parton plasma. However, since the number of charm quarks is very small as compared to gluons and light quarks, we can neglect the back reactions, $c\bar{c}\rightarrow gg,\ q\bar{q}$, and their effect on the parton evolution.
Given the phase-space density of the equilibrating partons, $f_i(k)$, the differential production rate is \cite{BMXW92}, \begin{eqnarray} E\frac{d^3A}{d^3p}&=&\frac{1}{16(2\pi)^8}\int \frac{d^3k_1}{\omega_1} \frac{d^3k_2}{\omega_2}\frac{d^3p_2}{E_2}\delta^{(4)}(k_1+k_2-p-p_2) \nonumber \\ & & \left[\frac{1}{2}g_G^2f_g(k_1)f_g(k_2) |\overline{\cal M}_{gg\rightarrow c\bar{c}}|^2+g_q^2f_q(k_1)f_{\bar{q}}(k_2) |\overline{\cal M}_{q\bar{q}\rightarrow c\bar{c}}|^2\right] , \label{eq:therm1} \end{eqnarray} where $g_G$=16, $g_q=6N_f$, are the degeneracy factors for gluons and quarks (antiquarks) respectively, $|\overline{\cal M}_{gg\rightarrow c\bar{c}}|^2$, $|\overline{\cal M}_{q\bar{q}\rightarrow c\bar{c}}|^2$ are the {\em averaged} matrix elements for $gg\rightarrow c\bar{c}$ and $q\bar{q}\rightarrow c\bar{c}$ processes, respectively, \begin{eqnarray} \frac{|\overline{\cal M}_{gg\rightarrow c\bar{c}}|^2}{\pi^2\alpha_s^2} &=& \frac{12}{\hat{s}^2}(M^2-\hat{t})(M^2-\hat{u})+\frac{8}{3} \left(\frac{M^2-\hat{u}}{M^2-\hat{t}} +\frac{M^2-\hat{t}}{M^2-\hat{u}}\right) \nonumber \\ &-&\frac{16M^2}{3} \left[ \frac{M^2+\hat{t}}{(M^2-\hat{t})^2} +\frac{M^2+\hat{u}}{(M^2-\hat{u})^2} \right] -\frac{6}{\hat{s}}(2M^2-\hat{t}-\hat{u}) \nonumber \\ &+&\frac{6}{\hat{s}}\frac{M^2(\hat{t}-\hat{u})^2} {(M^2-\hat{t})(M^2-\hat{u})} -\frac{2}{3}\frac{M^2(\hat{s}-4M^2)}{(M^2-\hat{t})(M^2-\hat{u})},\\ \frac{|\overline{\cal M}_{q\bar{q}\rightarrow c\bar{c}}|^2}{\pi^2\alpha_s^2} &=&\frac{64}{9\hat{s}^2} \left[(M^2-\hat{t})^2+(M^2-\hat{u})^2+2M^2\hat{s}\right], \end{eqnarray} Due to the small charm density, we can neglect the Pauli blocking of the final charm quarks. For large charm quark mass, $M$, we can approximate the phase-space density $f_i(k)$ by a Boltzmann distribution. 
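As a check on the transcription of the matrix elements above: both averaged squared matrix elements are symmetric under $\hat{t}\leftrightarrow\hat{u}$, as can be verified term by term. A short sketch with arbitrary on-shell test values ($\hat{s}+\hat{t}+\hat{u}=2M^2$ for massless initial partons):

```python
def msq_gg(s, t, u, M):
    """Averaged |M(gg -> c cbar)|^2 in units of pi^2 alpha_s^2,
    transcribed from the expression above."""
    mt, mu = M**2 - t, M**2 - u
    return (12.0 / s**2 * mt * mu
            + 8.0 / 3.0 * (mu / mt + mt / mu)
            - 16.0 * M**2 / 3.0 * ((M**2 + t) / mt**2 + (M**2 + u) / mu**2)
            - 6.0 / s * (2.0 * M**2 - t - u)
            + 6.0 / s * M**2 * (t - u)**2 / (mt * mu)
            - 2.0 / 3.0 * M**2 * (s - 4.0 * M**2) / (mt * mu))

def msq_qq(s, t, u, M):
    """Averaged |M(q qbar -> c cbar)|^2 in units of pi^2 alpha_s^2."""
    return 64.0 / (9.0 * s**2) * ((M**2 - t)**2 + (M**2 - u)**2
                                  + 2.0 * M**2 * s)

# arbitrary on-shell test point (GeV^2), M = 1.5 GeV as in the text
M, s, t = 1.5, 16.0, -3.0
u = 2.0 * M**2 - s - t
# both matrix elements are symmetric under t <-> u
print(msq_gg(s, t, u, M), msq_gg(s, u, t, M))
print(msq_qq(s, t, u, M), msq_qq(s, u, t, M))
```

Such symmetry and positivity checks are cheap safeguards before these expressions enter the multidimensional rate integrals.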
We further assume that the distributions are boost invariant, {\em i.e.}, \begin{equation} f_i(k)\approx \lambda_i e^{-k_{\perp}\cosh(y-\eta)/T}, \end{equation} where $\eta=\frac{1}{2}\ln[(t+z)/(t-z)]$ is the spatial rapidity of a space-time cell at $(t,z)$. Neglecting the transverse expansion, the above assumption implies that the space-time cell at $(t,z)$ has a flow velocity, $u=(\cosh\eta,\sinh\eta)$. We can now complete the integral over $\eta$ in $\int d^4x=\pi R_A^2\int d\eta d\tau$ and obtain, \begin{eqnarray} \frac{dN_{\rm th}}{dyd^2p_{\perp}}&=&\frac{\pi R_A^2}{16(2\pi)^8} \int_{\tau_{\rm iso}}^{\tau_c}\tau d\tau \int p_{\perp 2}dp_{\perp 2} d\phi_2 dy_2 d\phi_{k1} dy_{k1} \frac{2k_{\perp 1}^2}{\hat{s}} 2 K_0(Q_{\perp}/T) \nonumber \\ & &\left[\frac{1}{2}g_G^2\lambda_g^2 |\overline{\cal M}_{gg\rightarrow c\bar{c}}|^2 +g_q^2\lambda_q^2|\overline{\cal M}_{q\bar{q}\rightarrow c\bar{c}}|^2\right], \label{eq:therm2} \end{eqnarray} where $K_0$ is the modified Bessel function and $\tau_c$ is the time when the temperature, $T$, drops below 200 MeV. The kinematic variables are chosen such that, \begin{eqnarray} p_2&=&(M_{\perp 2}\cosh y_2,p_{\perp 2}\cos\phi_2,p_{\perp 2}\sin\phi_2, M_{\perp 2}\sinh y_2), \;\; M_{\perp 2}=\sqrt{M^2+p_{\perp 2}^2}; \nonumber \\ k_i&=&k_{\perp i}(\cosh y_{ki},\cos\phi_{ki},\sin\phi_{ki},\sinh y_{ki}), \ \ i=1,2 \ . \end{eqnarray} The center-of-mass momentum, $Q=(Q_{\perp}\cosh y_Q,{\bf q_{\perp}},Q_{\perp}\sinh y_Q)$, is defined as $Q=p+p_2=k_1+k_2$, and \begin{eqnarray} Q^2&=&\hat{s}=2[M^2+M_{T}M_{\perp 2}\cosh(y-y_2) -p_{\perp}p_{\perp 2}\cos\phi_2],\nonumber\\ q_{\perp}^2&=&p_{\perp}^2+p_{\perp 2}^2 +2p_{\perp}p_{\perp 2}\cos\phi_2,\nonumber \\ Q_{\perp}^2&=&Q^2+q_{\perp}^2=M_{T}^2+M_{\perp 2}^2 +2M_{T}M_{\perp 2}\cosh(y-y_2).
\end{eqnarray} Using these variables and the energy-momentum conservation, we have, \begin{eqnarray} k_{\perp 1}&=&\frac{Q^2/2}{M_{\perp}\cosh(y-y_{k1}) +M_{\perp 2}\cosh(y_2-y_{k1}) -q_{\perp}\cos\phi_{1q}}, \nonumber \\ \cos\phi_{1q}&=&[p_{\perp}\cos\phi_{k1} +p_{\perp 2}\cos(\phi_2-\phi_{k1})]/q_{\perp}. \end{eqnarray} In the integral over $\tau$, we shall use the time evolution of the temperature, $T(\tau)$, and fugacities, $\lambda_i(\tau)$, as given in the previous section. \section{Pre-thermal charm production} Before the parton distributions reach local isotropy in momentum space so that the rate equations can be applied to describe the equilibration of the parton system, scatterings among free-streaming partons can also lead to charm production. Since the system during this period consists dominantly of gluons, we shall only consider gluon fusions. To model the phase-space distribution, we take into account the distribution of the initial production points which spread over a region of width, \begin{equation} \Delta_k\approx \frac{2}{k_{\perp}\cosh y}, \end{equation} in $z$ coordinate. Following Ref.~\cite{LG94}, we assume free-streaming until $\tau_{\rm iso}$ and neglect the expansion in the transverse direction. The correlated phase-space distribution function is given by \begin{equation} f(k,x)=\frac{1}{g_G\pi R_A^2}g(k_{\perp},y) \frac{e^{-(z-t\tanh y)^2/2\Delta_k^2}}{\sqrt{2\pi}\Delta_k} \theta(R_A-r)\theta(\tau_{\rm iso}\cosh y-t), \label{eq:phase} \end{equation} where $g(k_{\perp},y)$ is the parametrization of the parton spectrum given by HIJING simulations, \begin{equation} g(k_{\perp},y)=\frac{(2\pi)^3}{k}\frac{dN_g}{dyd^2k_{\perp}} =\frac{(2\pi)^2}{k}h(k_{\perp})\frac{1}{2Y}\theta(Y^2-y^2). \end{equation} The phase-space distribution is normalized such that $\lim_{t \rightarrow \infty} g_G \int d^3 x f(k,x)/(2\pi)^3 =d^3N_g/d^3k$. 
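The closed-form kinematic relations used above for the thermal rate (and reused for the pre-thermal rate), notably $\hat{s}$, $q_\perp^2$, and $Q_\perp^2=\hat{s}+q_\perp^2$, can be verified against explicit four-vectors; a small sketch with arbitrary test values:

```python
import math

def check_pair_kinematics(M, pT, pT2, y, y2, phi2):
    """Compare the quoted closed forms for s-hat, q_perp^2 and Q_perp^2
    with the explicit four-vector sum Q = p + p2 (p at azimuth 0)."""
    MT  = math.sqrt(M**2 + pT**2)
    MT2 = math.sqrt(M**2 + pT2**2)
    p  = (MT * math.cosh(y),  pT, 0.0, MT * math.sinh(y))
    p2 = (MT2 * math.cosh(y2), pT2 * math.cos(phi2),
          pT2 * math.sin(phi2), MT2 * math.sinh(y2))
    Q = tuple(a + b for a, b in zip(p, p2))
    shat_4vec = Q[0]**2 - Q[1]**2 - Q[2]**2 - Q[3]**2
    # closed forms quoted in the text
    shat = 2.0 * (M**2 + MT * MT2 * math.cosh(y - y2)
                  - pT * pT2 * math.cos(phi2))
    qperp2 = pT**2 + pT2**2 + 2.0 * pT * pT2 * math.cos(phi2)
    Qperp2 = MT**2 + MT2**2 + 2.0 * MT * MT2 * math.cosh(y - y2)
    return shat_4vec, shat, qperp2, Qperp2

sh4, sh, q2, Qp2 = check_pair_kinematics(M=1.5, pT=2.0, pT2=1.2,
                                         y=0.6, y2=-0.4, phi2=1.1)
print(abs(sh4 - sh))         # closed form matches the invariant mass
print(abs(Qp2 - (sh + q2)))  # Q_perp^2 = s-hat + q_perp^2
```

Such identities are worth verifying once, since they implicitly fix the sign conventions of the azimuthal angles in the multidimensional integrations.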
The function $h(k_{\perp})$ and the rapidity width $Y$ are given in Table~\ref{table2} for central $Au+Au$ collisions at RHIC and LHC energies, which also give the initial conditions listed in Table~\ref{table1}. Substituting the phase-space distribution into Eq.~(\ref{eq:therm1}) and integrating over space and time, we obtain the charm production distribution in the pre-thermal period, \begin{eqnarray} \frac{dN_{\rm pre}}{dyd^2p_{\perp}}&=&\frac{1}{16(2\pi)^8\pi R_A^2} \int p_{\perp 2}dp_{\perp 2} d\phi_2 dy_2 d\phi_{k1} dy_{k1} \frac{2k_{\perp 1}^2}{\hat{s}} g(k_{\perp 1},y_{k1})g(k_{\perp 2},y_{k2})\nonumber \\ & &\frac{1}{2}|\overline{\cal M}_{gg\rightarrow c\bar{c}}|^2 \frac{1}{\sqrt{2\pi}\Delta_{\rm tot}} \int_0^{t_f}dt e^{-t^2(\tanh y_1-\tanh y_2)^2/2\Delta^2_{\rm tot}}, \label{eq:preth} \end{eqnarray} \begin{equation} \Delta_{\rm tot}=\sqrt{\Delta_{k1}^2 + \Delta_{k2}^2}, \;\; t_{\rm f}=\tau_{\rm iso}\min(\cosh y_{k1},\cosh y_{k2}), \end{equation} where the kinematic variables are defined similarly to those in Eq.~(\ref{eq:therm2}), and in addition, \begin{eqnarray} k_{\perp 2}^2&=&Q_{\perp}^2+k_{\perp 1}^2 -2k_{\perp 1}[M_{\perp}\cosh(y-y_{k1}) +M_{\perp 2}\cosh(y_2-y_{k1})], \nonumber \\ \sinh y_{k2}&=&[M_{\perp}\sinh y +M_{\perp 2}\sinh y_2-k_{\perp 1}\sinh y_{k1}]/k_{\perp 2}. \end{eqnarray} Note that the correlation between momentum and space-time in the phase-space distribution was not considered in a previous calculation \cite{BMXW92}. As we will show, this correlation is very important and will reduce the pre-thermal charm production as compared to uncorrelated distributions. A similar effect was recently discussed by Lin and Gyulassy in Ref.~\cite{LG94}, where the formation time effect is also included, which is expected to further suppress pre-thermal charm production. \section{Initial fusion} During the initial interaction period, charm quarks are produced together with minijets through gluon fusion and quark anti-quark annihilation.
Like gluon and light quark production, charm production through the initial fusion is very sensitive to the parton distributions inside nuclei. In addition, the cross section is also very sensitive to the value of the charm quark mass, $M$. If higher order corrections are taken into account, the production cross section depends also on the choices of the renormalization and factorization scales. Detailed studies of the next-to-leading-order calculations \cite{NDE,SVN,ISPV,VOGT} show, however, that higher order corrections to the total charm production cross section can be accounted for by a constant $K$-factor of about 2. This is what we will use next. For consistency we use $M=1.5$ GeV for all calculations. Shown in Figs.~\ref{fig5} and \ref{fig6} as solid lines is the initial charm production given by HIJING calculations at RHIC and LHC energies, with MRSD$-'$ \cite{MRS} parton distributions. The corresponding total integrated cross sections are $\sigma_{c\bar{c}}=0.16$ (5.75) mb at RHIC (LHC) energy, where nuclear shadowing of the gluon distribution function is also taken into account. In HIJING calculations, higher order corrections are included via parton cascade in both initial and final state radiations. The resultant distributions in $c\bar{c}$-pair momentum are very close to the explicit higher order calculations \cite{VOGT}. Plotted in Figs.~\ref{fig5} and \ref{fig6} as dot-dashed and dashed lines are the pre-thermal and thermal production. In the calculation, the lowest order matrix elements of charm production are also multiplied by a factor of 2. Both contributions are much smaller than the initial charm production at both energies. The pre-thermal contributions shown are also much smaller than what was found in Ref.~\cite{BMXW92}. This is because the momentum and space-time correlation, which suppresses the pre-thermal charm production, was not taken into account in Ref.~\cite{BMXW92}. Similar results are also found in a study by Lin and Gyulassy \cite{LG94}.
As we have already discussed, the initial conditions in Tables~\ref{table1} and \ref{table2} given by HIJING calculations have many uncertainties. If one increases the initial parton number density at RHIC energy by a factor of 4 with the same initial temperature, charm production from both pre-thermal and thermal sources will increase by about a factor of 12 as shown in Fig.~\ref{fig7}, leading to a total secondary contribution comparable to the initial charm production. In the extreme limit, a fully equilibrated parton plasma ($\lambda_g=\lambda_q=1$) at the same initial temperature would give an enhancement of charm production about 4 times higher than the initial production, shown as dotted lines in Figs.~\ref{fig5} and \ref{fig6}. In this case, the enhancement not only comes from higher parton densities, but also from the much longer life time of the parton plasma (cf. Figs.~\ref{fig2} and \ref{fig3}). Much higher enhancements predicted in Ref.~\cite{KG93} are due to the overestimate of the intrinsic charm production, as pointed out by Gyulassy and Lin in a recent paper \cite{LG94}. Though the intrinsic charm production is important in the forward direction at large $x_f$ \cite{VBH}, it is strongly suppressed in the mid-rapidity region due to the interference among pQCD amplitudes to the same order \cite{CSS86}. To test the sensitivity of open charm production to uncertainties in initial fugacities and temperature separately, we consider an alternative scenario as we have discussed in the parton evolution. We assume the initial parton densities to be 4 times higher than given in Table~\ref{table1} at RHIC energy but with a lower initial temperature, $T_0=0.4$ GeV. Accordingly, the initial phase-space distribution is also modified from the one in Table~\ref{table2} to $h(k_{\perp})=9649.2 e^{-k_{\perp}/0.65}/(k_{\perp}+0.3)$, which gives 4 times the initial parton density but a smaller average transverse momentum, $\langle k_{\perp}\rangle = 0.85$ GeV.
The reduced average transverse momentum corresponds to a lower initial effective temperature. This system with higher initial fugacities evolves faster toward equilibrium, but the life-time of the deconfined phase is shorter due to the reduced temperature, as we have discussed. The corresponding open charm production is shown in Fig.~\ref{fig7} by the lines with stars. We observe that open charm production from both pre-thermal and thermal contributions is reduced due to the reduction in initial temperature and life-time of the parton plasma, even though the initial fugacities are much higher and the evolution toward equilibrium is faster. Thus, open charm production is much more sensitive to the change in the initial temperature than to the parton fugacities. We also note from Eqs.~(\ref{eq:therm2}) and (\ref{eq:preth}) that the pre-thermal and thermal charm production depends on the thermalization time $\tau_{\rm iso}$ and the life time of the parton plasma. Therefore, by measuring the charm enhancement, we can probe the initial parton phase-space distribution, initial temperature and the thermalization and equilibration time. \section{Conclusions} In this paper, we have calculated open charm production in an equilibrating parton plasma, taking into account the evolution of the effective temperature and parton fugacities according to the solution of a set of rate equations. In the evaluation of the interaction rate $R_3$ for induced gluon radiation, a color dependent effective formation time was used which reduces the gluon equilibration rate through LPM suppression of soft gluons. In the calculation of the pre-thermal contribution to open charm production, the correlation between momentum and space-time was also included. This correlation reduces the pre-thermal charm production as compared to the uncorrelated one used in a previous estimate \cite{BMXW92}.
We found that both the thermal contribution during the parton equilibration and the pre-thermal contribution with the current estimate of the initial parton density from HIJING Monte Carlo simulation are much smaller than the initial direct charm production. However, the final total charm production is very sensitive to the initial condition of the parton evolution. If uncertainties in the initial parton production can increase the initial parton density, {\em e.g.}, by a factor of 4, the total secondary charm production will become comparable to or larger than the initial production, due to both the increased production rate and the longer life time of the parton plasma. We also found that open charm production is more sensitive to the initial temperature of the parton system than to the initial parton fugacities. Therefore, open charm production is a good probe of the initial parton distribution in phase-space and the thermalization and equilibration time of the parton plasma. \section*{Acknowledgments} P. L. and X.-N. W. would like to thank M. Asakawa and M. Gyulassy for helpful discussions. B. M. and X.-N. W. thank T. S. Bir\`o, E. van Doorn, and M. H. Thoma for their early collaboration in the study of parton equilibration. This work was supported by the Director, Office of Energy Research, Division of Nuclear Physics of the Office of High Energy and Nuclear Physics of the U.S. Department of Energy under Contract No. DE-AC03-76SF00098 and DE-FG05-90ER40592. P. L. and X.-N. W. were also supported by the U.S. - Hungary Science and Technology Joint Fund J. F. No. 378.
\section{Introduction}\label{sec:introduction} Manipulating quantum systems with high efficiency \cite{nielsen2010quantum,wiseman2009quantum,dong2010quantum} is a major challenge in developing quantum technology and provides recipes for many applications such as quantum computation \cite{nielsen2010quantum}, quantum communication, and quantum sensing \cite{giovannetti2011advances}. To achieve quantum operations with high efficiency, control methods, such as optimal control theory \cite{dong2010quantum}, closed-loop learning control algorithms \cite{rabitz2000whither} and Lyapunov control approaches \cite{kuang2017rapid}, have been developed for manipulating quantum systems. Among them, gradient algorithms have been used for numerically finding an optimal field \cite{khaneja2005optimal}. Evolutionary computing methods such as genetic algorithm (GA) and differential evolution (DE) have been utilized in optimizing molecular systems \cite{ma2015differential, ma2017quantum,dong2019differential,dong2015sampling}. However, in many practical applications, the gradient information may not be easy to obtain, and evolutionary algorithms usually involve a process of evolving a population and tend to be time-consuming when solving complex problems. Machine learning has attracted increasing attention in recent years owing to its powerful computing capability and has gradually been applied to various quantum tasks \cite{biamonte2017quantum}. Particularly, reinforcement learning (RL) \cite{sutton2018reinforcement} offers a considerable advantage in controlling realistic systems without constructing a reliable effective model. For example, a fidelity-based probabilistic Q-learning approach that incorporates fidelity into updating Q-values and action selection probabilities achieves a better balance between exploitation and exploration when dealing with quantum systems \cite{chen2013fidelity}.
It has also been found that RL-aided approaches succeed in identifying variational protocols with nearly optimal fidelity, even in the glassy phase, where optimal state manipulation is exponentially hard \cite{bukov2018reinforcement}. Recently, the combination of RL and deep learning \cite{lecun2015deep,goodfellow2016deep}, i.e., deep reinforcement learning (DRL), exhibits effective representations for learning agents \cite{mnih2015human,li2019deep,zhao2018special,wang2020coordinated} and therefore achieves efficient control of different quantum systems \cite{chen2013fidelity,bukov2018reinforcement, zhang2019does,mackeprang2019reinforcement,an2019deep,niu2019universal,fosel2018reinforcement,xu2019generalizable,bharti2019teach,chen2019extreme,porotti2019coherent,haug2020classifying}. For example, a network-based ``agent'' is designed for discovering complete quantum-error-correction strategies to protect a collection of qubits against noise \cite{fosel2018reinforcement}. With the help of DRL, nearly extreme spin squeezing with a one-axis twisting interaction is achieved using merely a handful of rotation pulses \cite{chen2019extreme}. In addition, DRL realizes efficient and precise gate control \cite{an2019deep} and exhibits strong robustness when dealing with parameter fluctuations \cite{niu2019universal}. Besides, DRL methods are also employed for discovering quantum configurations of measurement settings and quantum states for violating various Bell inequalities \cite{bharti2019teach}, and for designing generalizable control for quantum parameter estimation \cite{xu2019generalizable,schuff2020improving}. When optimizing the control fields using RL, the learning process is usually quite slow due to complex dynamics or sparse reward signals. In that case, the early transitions are likely to terminate on states that are easy to reach, and those states that are difficult to reach are usually found later in the training process \cite{narvekar2020curriculum}.
However, in practical settings, these easy-to-reach states may not provide a reward signal, which might hinder the training process of the DRL agent. In addition, it usually takes millions of episodes for an RL agent to learn a good policy for a difficult problem, with many suboptimal actions taken during the learning process. When optimizing complex quantum systems, such as multi-level quantum systems, the complexity of the system dynamics increases sharply with the size of the quantum system \cite{niu2019universal}. Also, the dissipation part in a quantum system may irreversibly lead it away from an equilibrium state \cite{altafini2004coherent}, greatly increasing the difficulty of manipulating its dynamics. It is highly desirable to design an efficient DRL approach to achieve fast and reliable control of complex quantum systems. Owing to the observation that students usually learn easy courses before they start to learn complex courses, curriculum learning \cite{bengio2009curriculum,narvekar2020curriculum} has emerged as a general and powerful tool for solving difficult problems and has also been applied in optimizing RL agents \cite{ren2018self,shao2018starcraft}. For example, the introduction of curriculum learning allows the RL agents to make the best use of transitions and finally achieves higher scores when playing complex games \cite{ren2018self}. Actually, curriculums can be defined at different levels, including the ordering of tasks or the ordering of individual samples \cite{ren2018self}. However, creating a curriculum at the sample level can be computationally difficult for large sets of tasks since the entire set of samples from a task (or multiple tasks) is typically not available ahead of time. In addition, the samples experienced in a task depend on the agent's behavior policy, which can be influenced by previous tasks.
Therefore, a simplified representation of a curriculum is often adopted in practical applications to eliminate the need for the knowledge of all samples. In this paper, we introduce a task-level curriculum into deep reinforcement learning control for quantum systems and propose a novel curriculum-based deep reinforcement learning (CDRL) method with the purpose of achieving reliable and fast manipulation of quantum systems. In particular, a task is defined by a threshold of fidelity (defined as a target fidelity), which also represents the difficulty of the task. Specifically, two methods of task generation for a curriculum are introduced, including presetting static fixed tasks using empirical knowledge or dynamically generating tasks based on the performance of the agent. By sequencing a set of tasks with increasing difficulties and reusing knowledge between different tasks, the RL agent is able to focus on easy tasks at the early stage, gradually improve its goal after grasping basic skills and therefore achieve the final task. In CDRL, a target fidelity is closely related to each task, which allows for an early termination of each episode once a satisfactory effect is achieved. Such a mechanism makes it possible to manipulate quantum systems using shorter control pulses, thus protecting systems against decoherence. This is particularly significant for open quantum systems, where dissipation typically drives the system to a mixed state and an optimal control might keep the system from the maximum entropy state for some specific dissipation channels \cite{lin2020time}. From this perspective, the proposed method is an attempt to utilize machine learning techniques to realize efficient manipulation of quantum systems, which not only saves operation time but also keeps systems away from unwanted decoherence \cite{berry2009transitionless}.
In this paper, we focus on a basic and crucial issue of quantum state preparation, which aims at steering a quantum system from an initial state towards a target state. To test the effectiveness of the proposed method, numerical simulations on closed quantum systems and open quantum systems are implemented. The main contributions of this paper are summarized as follows. \begin{itemize} \item A curriculum-based deep reinforcement learning method (CDRL) is proposed for quantum systems where tasks among a curriculum are defined by fidelities and are linearly sequenced with increasing fidelities. \item The CDRL method is applied to closed quantum systems and open quantum systems to achieve enhanced performance where open quantum systems suffer from undesirable dissipation effects. \item The CDRL method has the advantage of searching for shorter control pulses, thus providing insights for utilizing machine learning techniques to achieve fast control of complex quantum systems. \end{itemize} The rest of this paper is organized as follows. Section \ref{Sec:problem} introduces several basic concepts about RL, curriculum learning and quantum systems. In Section \ref{Sec:method}, the CDRL method is presented in detail with curriculum design and algorithm implementation for quantum systems. Numerical results for both closed and open quantum systems are presented in Section \ref{Sec:simulation}. Concluding remarks are drawn in Section \ref{Sec:conclusion}. \section{Preliminaries}\label{Sec:problem} \subsection{Reinforcement Learning}\label{Subsec:rl} \emph{1) Markov Decision Process:} RL is commonly studied based on the framework of \emph{Markov Decision Process} (MDP). 
An MDP can be described by a tuple of $\langle S,A,P,R,\gamma \rangle$ \cite{sutton2018reinforcement}, where $S$ is the state space, $A$ is the action space, $P$: $S\times A\times S \to [0,1]$ is the state transition probability, $R$: $S\times A\to \mathbb{R}$ is the reward function, and $\gamma\in[0,1]$ is the discount factor. A policy is a mapping from the state space $S$ to the action space $A$. At each time step $t\in[0,T]$, where $T$ is the terminal time, the agent forms the state $s_{t}\in S$, takes an action $a_{t}\in A$ according to a certain policy $\pi$: $a_{t} = \pi(s_{t})$, transits to the next state $s_{t+1}$ and gets a scalar reward signal $r_{t}$ from the environment. RL aims at determining an optimal action $a_t^{*}$ at each state $s_{t}$ so as to maximize the return, i.e., the cumulative discounted future reward $R_t = \sum_{k=0}^{T-t}\gamma^{k}r_{t+k}$. \emph{2) Deep Q Network:} Similar to Q-learning \cite{watkins1992q}, deep Q-network (DQN) is a value-based DRL method, and it employs a function with parameters $\xi$ to approximate the Q-value for each state-action pair $Q(s,a)$, i.e., $Q(s,a;\xi) \approx Q(s,a)$. Such a network can be trained by minimising the loss function $Loss(\xi) = [(y - Q(s,a;\xi))^{2}]$ with $y = r + \gamma\max\limits_{a^{\prime}}Q(s^{\prime},a^{\prime};\xi^{-})$ \cite{baird1995residual}, where $s^{\prime}$ is the next state after taking action $a$ at state $s$. $\xi^{-}$ denotes the parameters of the target network, which is fixed during the computation of $y$ and is usually updated after some training iterations. Differentiating the loss function with respect to $\xi$, the gradient is formulated as \begin{equation}\label{eq:gradient} \resizebox{.9\hsize}{!}{$\nabla_{\xi}Loss =[r + \gamma\max\limits_{a'}Q(s',a';\xi^{-})-Q(s,a;\xi)]\nabla_{\xi}Q(s,a;\xi)$}. \end{equation} After training the network, an optimal control strategy is generated by selecting the action with the largest Q-value, i.e., $a^{*}=\arg\max\limits_{a}Q(s,a)$.
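As a minimal numerical sketch of the DQN update described above (not the paper's actual implementation), the following toy example uses a linear Q-function $Q(s,a;\xi)$ and a frozen target network $\xi^-$; the target $y$ and the semi-gradient step follow Eq.~(\ref{eq:gradient}). All sizes and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear Q-function: Q(s, a; xi) = xi[a] @ s (for illustration only).
n_states, n_actions, gamma, lr = 4, 2, 0.9, 0.1
xi = rng.normal(size=(n_actions, n_states))   # online parameters xi
xi_target = xi.copy()                         # frozen target parameters xi^-

def q(params, s):
    """Vector of Q-values over all actions for state s."""
    return params @ s

def dqn_update(s, a, r, s_next):
    """One semi-gradient DQN step: y = r + gamma * max_a' Q(s', a'; xi^-),
    then move xi along [y - Q(s, a; xi)] * grad Q, as in Eq. (gradient)."""
    y = r + gamma * np.max(q(xi_target, s_next))
    td_error = y - q(xi, s)[a]
    xi[a] += lr * td_error * s                # for a linear Q, grad Q = s
    return td_error

s0, s1 = np.eye(n_states)[0], np.eye(n_states)[1]
before = abs(dqn_update(s0, 0, 1.0, s1))
for _ in range(50):
    dqn_update(s0, 0, 1.0, s1)
after = abs(dqn_update(s0, 0, 1.0, s1))
print(before, after)   # TD error shrinks toward the fixed target
a_star = int(np.argmax(q(xi, s0)))            # greedy action a* = argmax_a Q(s, a)
```

With the target network frozen, repeated updates drive $Q(s,a;\xi)$ toward the fixed target $y$, which is why the TD error shrinks; in full DQN the target parameters are periodically refreshed.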
\subsection{Curriculum learning} Curriculum learning is a methodology to optimize the order in which experiences are accumulated by the agent \cite{narvekar2020curriculum}. During this process, knowledge is transferred from easy tasks to difficult ones, which helps achieve enhanced performance on a hard problem or reduce the time it takes to converge to an optimal policy. A curriculum is commonly defined as an ordering of tasks; an ordering of individual experience samples can also be regarded as a curriculum at a more fundamental level. In addition, a curriculum is not always a simple linear sequence. In fact, one task building upon knowledge gained from multiple source tasks is also acceptable and useful, which is similar to the case in human education, where courses can build off multiple prerequisites. There has been growing interest in exploring how to leverage curriculum learning to speed up RL agents. To achieve this, there are three key elements to be considered: (i) Task generation: a set of intermediate tasks can be pre-specified ahead of time or dynamically generated based on the previous learning performance during the curriculum construction. (ii) Sequencing: similar to task generation, it can be predefined or automatically realized. (iii) Knowledge transfer: before moving to the next task, reusable knowledge acquired from one task is required to be extracted and passed to the next one \cite{lazaric2008transfer,taylor2009transfer}. From this perspective, designing a good curriculum depends on generating appropriate and useful tasks and transferring or reusing knowledge between different tasks.
\subsection{Quantum dynamics} The state of a finite-dimensional closed quantum system can be represented by a unit complex vector $|{\psi}\rangle$ and its dynamics can be described by the Schr\"{o}dinger equation: \begin{equation} \frac{d}{dt}|{\psi}(t)\rangle=-\frac{\rm{i}}{\hbar}(H_0+\sum_{m=1}^{M}u_m(t)H_m)|{\psi(t)}\rangle, \label{eq:schron} \end{equation} where $\hbar$ is the reduced Planck constant (hereafter, we set $\hbar=1$), $H_0$ denotes the time-independent free Hamiltonian of the system, and the control Hamiltonian operators $H_m$ represent the interaction of the system with the control fields. To drive the quantum system from an initial state $|\psi(0)\rangle=|\psi_0\rangle$ to a target state $|\psi_f\rangle$ within a given time period $T$, we adopt the fidelity between the actual state $|\psi(T)\rangle$ and the target state $|\psi_f\rangle$, i.e., $J(u)= |\langle \psi(T) | \psi_f\rangle |^2$, to evaluate the control performance \cite{wiseman2009quantum}. In practical applications, a quantum system usually suffers from the interaction with its environment and is then regarded as an open quantum system \cite{altafini2004coherent}. In such a case, the state of the quantum system is described by a Hermitian, positive semidefinite matrix $\rho$ satisfying $\textup{Tr}(\rho)=1$ and $\textup{Tr}(\rho^2)\leq 1$. Its dynamics under the Markovian approximation can be described by the Lindblad master equation \cite{dong2010quantum}: \begin{equation} {\rm{i}}\dot{\rho}(t)=[H_0+\sum_{m=1}^{M}u_m(t)H_m,\rho(t)]+\sum_k \gamma_k \mathcal{D}[L_k](\rho(t)), \label{eq:lindblad} \end{equation} with $\mathcal{D}[L_k](\rho)=L_k \rho L_k^{\dagger}-\frac{1}{2} L_k^{\dagger} L_k \rho-\frac{1}{2}\rho L_k^{\dagger} L_k$, where $L_k$ represents the Lindblad operators and the coefficients $\gamma_k\geq 0$ characterize the relaxation rates.
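The closed-system dynamics of Eq.~(\ref{eq:schron}) and the fidelity $J(u)$ can be illustrated with a toy qubit simulation (a sketch, not the paper's system): a single constant control pulse $u H_1 = (\pi/2)\sigma_x$ applied for unit time flips $|0\rangle$ to $|1\rangle$ up to a global phase.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)   # Pauli-x control Hamiltonian

def evolve(psi, H, dt):
    """Propagate |psi> by U = exp(-i H dt), built from the eigendecomposition
    of the (Hermitian) Hamiltonian H."""
    w, V = np.linalg.eigh(H)
    U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
    return U @ psi

def fidelity(psi, phi):
    """J(u) = |<psi(T)|psi_f>|^2."""
    return abs(np.vdot(phi, psi)) ** 2

psi0 = np.array([1, 0], dtype=complex)   # initial state |0>
psif = np.array([0, 1], dtype=complex)   # target state |1>
# One piecewise-constant pulse: H = (pi/2) * sigma_x over unit time.
psiT = evolve(psi0, (np.pi / 2) * sx, 1.0)
print(fidelity(psiT, psif))              # ~1.0: pi-pulse reaches |1> up to phase
```

A piecewise-constant control sequence, as used later in the paper, simply chains such `evolve` calls with a different $u_m$ on each segment.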
For an $n$-level quantum system, denote a set of orthogonal generators as $\{{\rm{i}}Y_l\}_{l=1}^{N}$ ($N=n^2-1$) of $\mathbf{SU}(\textit{n})$, where each generator satisfies (i) $\textup{tr}(Y_l)=0$, (ii) $Y_l^\dagger=Y_l$, (iii) $\textup{tr}(Y_l Y_m)=2\delta_{lm}$. Then, a coherence Bloch vector can be defined as $\textbf{y}(t):=(y_1(t),...,y_N(t))^\top$, $y_l(t)=\textup{tr}(Y_l \rho(t))$, which also means that a density operator can be rewritten as $\rho(t)=I/n+\frac{1}{2}\sum_{l=1}^{N}y_l(t) Y_l$. In that case, the evolution of $\textbf{y}(t)$ is deduced as \begin{equation} \dot{\textbf{y}}(t)=(\mathcal{L}_{H_0}+\mathcal{L}_D)\textbf{y}(t)+\sum_{m=1}^{M}u_m(t)\mathcal{L}_{H_m}\textbf{y}(t)+s_0,\quad\textbf{y}(0)=\textbf{y}_0, \end{equation} where $\mathcal{L}_{H_0}$, $\mathcal{L}_D$ and $\mathcal{L}_{H_m}$ are $N \times N$ superoperators and the inhomogeneous source term $s_0$ is an $N\times 1$ column vector \cite{yang2013exploring}. The goal is to steer the system from an initial state $ \textbf{y}_0 $ to a final state $ \textbf{y}(T)$ as close to a target state $ \textbf{y}_f$ as possible, and we can take the following cost function \cite{yang2013exploring}: \begin{equation} J(u)=1-\frac{n}{8(n-1)}\parallel\textbf{y}_f -\textbf{y}(T)\parallel^2. \label{eq:open loss} \end{equation} \section{Curriculum-based deep reinforcement learning for quantum control}\label{Sec:method} In this section, the framework of CDRL for quantum control is presented, the key elements for designing a curriculum are elaborated in detail, and the ingredients of applying DRL to quantum systems are provided. Finally, the implementation of CDRL for quantum control is summarized. \subsection{Framework of CDRL} Curriculum learning aims at generating appropriate and useful tasks and reusing knowledge between different tasks. This allows the agent to focus on easy tasks at the early stage and gradually move onto difficult tasks.
For each task, it is essential to train a DRL agent to achieve the task. The framework of CDRL is presented in Fig. \ref{fig:framework}, which can be summarized in two aspects: (i) curriculum construction and management; (ii) training the DRL agent for one task. In Fig. \ref{fig:framework}(a), a set of tasks is constructed to compose a curriculum. During the learning process, once a task is generated, it is sent to the RL agent for learning. Meanwhile, the training performance collected from the RL agent is sent back to the curriculum agent for determining whether a stop criterion is met. Before moving to the next task, knowledge acquired from the previous task is collected and passed to the next one. In addition, some measures are taken to excite the RL agent to maintain a certain exploration for a new task. In Fig. \ref{fig:framework}(b), the RL agent trains its network by trial-and-error for each task. After receiving a task, the RL agent observes its current state $s_t$ and determines an action $a_t$ by inferring from the deep neural networks. The control fields $u(t)=\{u_m(t), m=1,2,...,M\}$ based on $a_t$ are performed on the quantum system, resulting in the next state $s_{t+1}$, a fidelity $F_t$ and a reward signal $r_t$. The transition $(s_t,a_t,r_t,s_{t+1},F_t)$ is collected and stored into a memory pool. Meanwhile, a batch of samples is selected from the memory and fed into the neural network to update its parameters. It is worth noting that the experiences from the past task are collected as reusable knowledge for the next task. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{overall_framework.png} \caption{Framework of the CDRL approach for quantum control. (a) Curriculum construction and management. The curriculum agent generates a task for training the RL agent and utilizes the performance of the RL agent to determine whether a stop criterion is satisfied.
Once the previous task has been achieved, knowledge is transferred between different tasks by reusing past samples. Meanwhile, measures are taken to excite the RL agent before the subsequent task is scheduled. (b) Train the RL agent for one task. At each time step $t$, the RL agent observes its current state $s_t$ (step 1) and suggests a control action $a_t$ (step 2), which is mapped to the control fields $\{u_m(t)\}$ (step 3). Then, the quantum system takes the proposed control strategy and obtains the next state $s_{t+1}$, with fidelity $F_t$ and reward signal $r_t$ (step 4). The transition $e_t=(s_t,a_t,r_t,s_{t+1},F_t)$ is put into a large memory buffer (step 5). Finally, a batch of transitions is selected from the buffer and then fed into the networks to update their weights or parameters (step 6).} \label{fig:framework} \end{figure*} \subsection{Curriculum design for quantum systems} \subsubsection{Task and curriculum} Most quantum control problems can be generalized as a goal task, which means that the agent starts from an initial state and attempts to approach a target state as close as possible within a given time period. In quantum information theory, the ``closeness'' between two quantum states is usually measured by fidelity. However, the ideal value of $F(|\psi(T)\rangle,|\psi_f\rangle)=1$ does not necessarily mean $|\psi(T)\rangle=|\psi_f\rangle$. In fact, there are multiple states $|\psi^{\prime}\rangle=\exp{(\rm{i}\theta)}|\psi(T)\rangle, \theta \in R$ that achieve the same fidelity, i.e., $F(|\psi(T)\rangle,|\psi_f\rangle)=F(|\psi^{\prime}\rangle,|\psi_f\rangle)$. In that case, $|\psi^{\prime}\rangle$ is equivalent to $|\psi(T)\rangle$ up to a global phase. For the ideal fidelity $1$, the final states are not unique, and, tracing one step backward, there can be multiple states at the penultimate step. From this perspective, there exist multiple branches or trajectories that result in final states with the same fidelity.
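The global-phase equivalence noted above is easy to verify numerically; this short check (an illustration, with an arbitrary random state and phase) shows that $F(e^{{\rm i}\theta}|\psi\rangle,|\psi_f\rangle)=F(|\psi\rangle,|\psi_f\rangle)$.

```python
import numpy as np

def fidelity(psi, phi):
    """F(psi, phi) = |<phi|psi>|^2, insensitive to a global phase of psi."""
    return abs(np.vdot(phi, psi)) ** 2

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)               # random normalized qubit state
phi = np.array([1, 0], dtype=complex)    # target state |0>

theta = 0.73                             # arbitrary global phase
f_plain = fidelity(psi, phi)
f_phased = fidelity(np.exp(1j * theta) * psi, phi)
print(f_plain, f_phased)                 # identical up to rounding
```

This is why the curriculum tasks below are defined by a fidelity threshold rather than by specific intermediate physical states.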
Hence, when designing a curriculum for quantum systems, we do not take the actual physical states as the intermediate tasks. Instead, a target fidelity is utilized to distinguish between different tasks. Based on that, we have the following definition. \begin{definition}[Task] For a quantum system with a target state $|\psi_f\rangle $, one task $v$ is defined as the process of driving the system from an initial state $|\psi_0\rangle$ to an actual state $|\psi(T)\rangle$ with $F(|\psi(T)\rangle,|\psi_f\rangle) \geq D(v)$. $D(v)$ represents the difficulty of the task $v$. \end{definition} Recall that a curriculum is composed of a set of tasks. The ordering of tasks among a curriculum is similar to the way that vertices and edges form a graph. Hence, we adopt the concept of a graph to define a curriculum. In particular, a task is defined as a vertex of a graph, and the relationship between two successive tasks is used to define an edge of the graph. To achieve a final goal, it is natural to sequence the tasks according to their difficulty. Denote the $i$-th task as a vertex $v_i$. Then a directed edge $<v_i,v_j>$ can be utilized to denote the relationship between two tasks, which means that samples associated with $v_i$ should be trained before samples associated with $v_j$. By sequencing a set of different tasks in a similar fashion, a task-level curriculum can be defined in the following way: \begin{definition}[Curriculum] Let the task set be $V$ and samples relating to $V$ be $\mathbb{M}^{V}$. A curriculum can be defined as a directed acyclic graph $C = (V,\varepsilon,g)$, where $V$ is the set of vertices, $\varepsilon\subseteq \{(v_i,v_j) | (v_i,v_j)\in V \times V \bigwedge D(v_i) < D(v_j)\}$ is the set of directed edges, and $g: V \rightarrow \mathbb{P}(\mathbb{M}^{V})$ is a function that associates vertices to subsets of samples in $\mathbb{M}^{V}$, where $\mathbb{P}(\mathbb{M}^{V}) $ is the power set of $\mathbb{M}^{V}$.
\end{definition} Based on the definition of tasks, a curriculum for quantum problems can be simplified by sequencing tasks in a linear way. In that case, the graph is reduced to a chain, where the indegree and outdegree in the graph $C$ are at most 1. Following the linear chain of a curriculum, the whole learning process can be described as in Fig. \ref{fig:multi-stage DRL}, where the learning process is in line with a flow of tasks with increasing difficulties. For each task $v_i$, the agent tries to achieve a better performance than the target fidelity $D(v_i)$. After learning for some episodes, the agent moves on to the next task $v_{i+1}$. Although there are oscillations during each task, the average performance reveals that the agent has climbed to a higher point. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fixedtasks.png} \caption{Illustration of the curriculum learning process with a flow of tasks. } \label{fig:multi-stage DRL} \end{figure} \subsubsection{Task generation for a curriculum} During the construction of a curriculum, generating useful and appropriate tasks is a crucial procedure \cite{narvekar2016source,narvekar2017autonomous} since the quantity of the created tasks has a strong impact on the search space and efficiency of curriculum sequencing algorithms. Therefore, all the generated tasks should be relevant to the final task(s). A common choice is to manually design a set of intermediate tasks such that transferring knowledge through them is beneficial. This is usually realized based on empirical knowledge about the problem. Recently, there have been efforts to automatically create tasks online using a set of rules or a generative process \cite{narvekar2017autonomous}, even though these approaches may still rely on tuning hyper-parameters, such as the number of tasks generated. Recall that a task for quantum systems is determined by a target fidelity.
In principle, the fidelity can achieve any value between 0 and 1 when arbitrary control laws are available. Hence, any value in $[0,1]$ can be utilized as an indicator for one task. Moreover, tasks are usually ordered with increasing difficulty. Based on this knowledge, the preset tasks should be arranged in such a way that the front vertices are assigned low target fidelities, while those vertices at the rear are attached difficulties approaching $1$. Moreover, the difficulty of improving the fidelity increases tremendously as the fidelity approaches 1. When determining a set of tasks with increasing difficulties, the gap between two successive tasks $\delta_i=D(v_{i+1})-D(v_{i})$ should therefore be reduced as $i$ increases. Another way is to dynamically generate tasks during the construction of a curriculum. To start a curriculum, an initial task should be set first. A practical way is to preset an appropriate initial value which is easy to achieve without learning. Such a value can be given based on prior knowledge, or by observing some random trials of the RL agents. For the remaining tasks, it is important to collect statistics of the performance on the previous task to suggest a candidate fidelity for the next task. For example, it is useful to measure the ``mean/median/max'' of the fidelities during the past episodes. Likewise, the determination of the next task should follow the principle that the new task is assigned a higher difficulty compared with the previous task. \subsection{CDRL for quantum control} \subsubsection{DRL for quantum control} When applying RL to quantum systems, a maximum number of control pieces is usually defined as $N_{\max}$, which limits the maximum steps in one episode. This is consistent with traditional learning methods such as the gradient method (GD) and the genetic algorithm (GA), where the performance of one trial is evaluated after all pieces of control fields have been performed.
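A possible dynamic sequencing rule, consistent with the principles above, can be sketched as follows. The function name, the use of the median, and the `shrink` fraction are illustrative assumptions, not the paper's prescribed rule.

```python
import numpy as np

def next_target(recent_fidelities, current_target, final_target=0.999, shrink=0.5):
    """Hypothetical task-generation rule: place the next target fidelity a
    fraction `shrink` of the way from the achieved median fidelity toward the
    final goal, so the gaps delta_i = D(v_{i+1}) - D(v_i) shrink as i grows,
    and never move the target backward or past the final goal."""
    achieved = float(np.median(recent_fidelities))
    candidate = achieved + shrink * (final_target - achieved)
    return min(max(candidate, current_target), final_target)

# Illustrative run: episode fidelities improve across three training phases.
targets = [0.5]                         # easy initial task D(v_1)
for stats in ([0.55, 0.60, 0.58], [0.80, 0.82, 0.79], [0.93, 0.95, 0.94]):
    targets.append(next_target(stats, targets[-1]))
print(targets)                          # monotonically increasing targets
gaps = [b - a for a, b in zip(targets, targets[1:])]
print(gaps)                             # successive gaps shrink
```

The same scaffold also covers the static case: a preset list such as $[0.5, 0.8, 0.95, 0.99, 0.999]$ simply replaces the dynamically generated `targets`.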
In RL, the control performance is evaluated step by step, and the reward signal at the current step is sent to the agent to suggest the control field for the next step. It is possible that in one episode the performance does not always increase with steps and may decrease to a lower value after performing additional control pulses, especially when the previous step has achieved excellent performance. Hence, we adopt a practical solution in this approach: terminating the current episode once certain requirements are satisfied \cite{mackeprang2019reinforcement,zhang2019does}. Recall that the RL agent usually takes some episodes to realize each subtask. We associate the two factors together and introduce a new definition for one episode. \begin{definition}[Smart Episode] For an RL agent aiming at achieving task $v$, a smart episode is defined as a state-action chain $s_0 \xrightarrow {a_0} s_1 \xrightarrow { a_1} ... s_j \xrightarrow {a_j} s_{j+1}... $ with termination conditions as (i) the number of steps taken reaches the maximum steps, i.e., $j=N_{\max}$; or (ii) the current performance surpasses the target fidelity, i.e., $F(|\psi_{j+1}\rangle,|\psi_f\rangle)\geq D(v)$. \end{definition} From this perspective, the number of control pieces in the DRL approach can be less than $N_{\max}$. This measure enables the RL agent to search for shorter control pulses. For convenience, we assume an equal time duration for each piece, denoted as $dt$, and a non-negative integer $j$ is used to indicate the time $jdt$. For example, we have $|\psi_j\rangle = |\psi(jdt)\rangle$ and $F_j=F(|\psi_{j+1}\rangle,|\psi_f\rangle)$. Another significant procedure for RL methods is to determine an appropriate reward signal \cite{sutton2018reinforcement}. Recalling that the performance in one episode is not necessarily increasing, the reward signal is only given at the end of the episode.
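The smart-episode rollout above can be sketched as a simple loop; `env` and `agent` here are hypothetical stand-ins (a toy environment whose fidelity grows by a fixed increment per step), not the paper's components, but the two termination conditions match the definition.

```python
def smart_episode(env, agent, target_fidelity, n_max):
    """Roll out one smart episode: stop as soon as the fidelity reaches the
    current task's target D(v) (condition ii), or after N_max control pieces
    (condition i). The returned trajectory may be shorter than N_max."""
    s = env.reset()
    trajectory = []
    for j in range(n_max):
        a = agent(s)
        s_next, fid = env.step(a)        # assumed interface: (next_state, fidelity)
        trajectory.append((s, a, s_next, fid))
        s = s_next
        if fid >= target_fidelity:       # early termination: shorter pulse sequence
            break
    return trajectory

class ToyEnv:
    """Toy stand-in: each control piece raises the fidelity by 0.2."""
    def reset(self):
        self.f = 0.0
        return self.f
    def step(self, a):
        self.f = min(1.0, self.f + 0.2)
        return self.f, self.f

traj = smart_episode(ToyEnv(), lambda s: 0, target_fidelity=0.55, n_max=10)
print(len(traj))   # 3: the episode ends early once F >= D(v), well before N_max
```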
In this work, the reward is calculated from the infidelity via the function $\log_{10}(\cdot)$, so that a higher fidelity receives a higher reward signal. The reward signal can be formulated as \begin{equation} \resizebox{.95\hsize}{!}{$ r_j = \left\{\begin{array}{ll} k_1+k_2\log_{10}(1-F_j) & \textup{if} \quad F_j>D(v) \quad \textup{or} \quad j=N_{\max} \\ 0 & \textup{else} \end{array}\right. $} \label{eq:rewardpolicy} \end{equation} where $k_2$ represents the slope of the reward function and $k_1$ acts as a bias. In practice, $k_1$ and $k_2$ can be adjusted for different fidelity intervals \cite{zhang2019does}: a large value of $k_2$ should be set for a high-fidelity interval such as $[0.99,0.999]$, while the value of $k_1$ guarantees that the reward increases with fidelity across intervals. When training DRL agents, experience replay \cite{lin1993reinforcement} stores the agents' past experiences in a large memory pool for replaying \cite{an2019deep}. Since transitions may be more or less surprising, redundant, or task-relevant, we employ a smart storing mechanism to make better use of significant samples when attempting to achieve a given task. For each step, the fidelity between $s_{j+1}$ and the target state is informative, and we define a transition as $e_j=(s_j,a_j,r_j,s_{j+1},F_j)$. If the older sample at the current pointer has a higher fidelity than the new one, the new sample is moved to the next pointer for replacement and storing. This helps retain significant samples while still storing the latest experiences in the memory. It is worth noting that this practice is based on the hypothesis that good transitions are usually not adjacent in time: owing to the smart episode, the sample immediately after a good one is rarely good itself. Besides, this measure only takes effect with a certain probability.
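A sketch of the end-of-episode reward in Eq. (\ref{eq:rewardpolicy}) together with the smart storing mechanism; the circular-buffer layout and variable names are our assumptions.

```python
import math
import random

def episode_reward(fidelity, target_difficulty, step, n_max, k1, k2):
    """Nonzero reward only when the episode terminates (task hit or N_max).
    With k1 = 60, k2 = -10 (one interval used later in the parameter
    settings), hitting F = 0.99 yields roughly 60 - 10*log10(0.01) ~ 80."""
    if fidelity > target_difficulty or step == n_max:
        return k1 + k2 * math.log10(1.0 - fidelity)
    return 0.0

def smart_store(memory, k, transition, mu1=0.8):
    """Store transition e_j = (s, a, r, s', F) at pointer k, but (with
    probability mu1) skip over an older sample whose fidelity is higher."""
    old = memory[k % len(memory)]
    if old is not None and old[-1] > transition[-1] and random.random() < mu1:
        k += 1                      # keep the significant old sample
    memory[k % len(memory)] = transition
    return k + 1                    # next write pointer
```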
\subsubsection{Knowledge transfer between different tasks} After training the DRL agent for each task, it is important to transfer and reuse knowledge among different tasks. Given two tasks, the process of transferring knowledge can be summarized in four procedures, illustrated in Fig. \ref{fig:knowledgetransfer}. (i) Exploration: On the previous task $v_A$, the agent attempts to achieve a fidelity higher than $D(v_A)$. After many trials, the agent acquires knowledge by updating its weights using the useful transitions collected during task $v_A$. (ii) Storing transitions: Meanwhile, those useful transitions are collected and stored in the memory pool. (iii) Transferring knowledge: The transitions from task $v_A$ provide useful resources and can be replayed to train the agent when striving for the subsequent tasks. (iv) Exploitation: When attempting to achieve task $v_B$, the agent already has the capacity to achieve a fidelity $F(|\psi\rangle,|\psi_f\rangle) >D(v_A)$ within $n_a$ steps by repeating the past transitions. Continuing that trajectory, the agent searches another $\delta_n$ steps to reach a final state $|\psi^{\prime}\rangle$, in the hope of achieving a better fidelity $F(|\psi^{\prime}\rangle,|\psi_f\rangle) > D(v_B)$. This guarantees that the agent reviews the knowledge accumulated from task $v_A$ and explores new transitions under task $v_B$. From this perspective, transferring knowledge between different tasks combines exploitation with exploration. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{transferknowledge.png} \caption{Illustration of transferring knowledge between two tasks.} \label{fig:knowledgetransfer} \end{figure} \subsubsection{Stop criterion and excitation} To achieve a better balance between exploration and exploitation for curriculum learning, it is important to set a suitable stopping criterion for each task.
Typically, training is terminated when performance on a task or a set of samples has converged. However, training to convergence on an intermediate task is not always necessary, since convergence of DRL is not guaranteed and is difficult to evaluate \cite{narvekar2020curriculum}. Another option is to train on each task for a fixed number of episodes. In practical applications, the performance tends to vary across episodes, since the randomness or uncertainty in actions and the stochastic nature of batch sampling for updating parameters cause fluctuations in the DRL agent. When dealing with quantum systems, the subsequent state varies greatly when one step is taken following different actions, thus leading to great differences in the final performance. In that case, a fixed number of episodes may not be enough to guarantee good performance on one task. In addition, winning a harder task may take more episodes. Based on the above observations, we propose a novel stop criterion. For task $v_i$, a non-negative integer $w$ is introduced to count the number of times the task is hit; that is, $w \leftarrow w+1$ is updated once the current episode achieves $F_t \geq D(v_i)$ with $t\leq N_{\max}$. Besides, an integer $SC$ is defined to terminate the current task when $w \geq SC$. During each task, the randomness factor of the DRL agent is usually decreased using a decay factor $\lambda \in(0,1)$ to guarantee a certain degree of convergence. For example, the greedy factor $\epsilon$ for discrete control cases tends to reach 0, or the disturbance amplitude for continuous control cases approaches nearly 0. To achieve a smooth transition between two tasks, an excitation operation is required to reset the randomness degree of the DRL agent and guarantee a certain level of exploration for the new task.
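The hit-count stop criterion, the $(1-\epsilon) \leftarrow (1-\epsilon)/\lambda$ decay, and the excitation reset can be sketched as follows; the clamp on $\epsilon$ and the class layout are our additions.

```python
class StopCriterion:
    """Terminate a task once it has been hit SC times within N_max steps."""
    def __init__(self, sc, n_max):
        self.sc, self.n_max, self.hits = sc, n_max, 0

    def record(self, final_fidelity, steps, difficulty):
        # An episode "hits" the task if it reaches D(v_i) within N_max steps.
        if final_fidelity >= difficulty and steps <= self.n_max:
            self.hits += 1
        return self.hits >= self.sc

def decay_epsilon(eps, lam):
    """Decrease randomness as (1 - eps) <- (1 - eps) / lam, clamped at 0."""
    return max(0.0, 1.0 - (1.0 - eps) / lam)

def excite(eps_reset=0.2):
    """Excitation: restore exploration before starting the next task."""
    return eps_reset
```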
\subsection{Integrated algorithm} \begin{algorithm*}[ht] \caption{Algorithm description for CDRL}\label{al:CDRL} \KwIn{Initial state $|\psi_0\rangle$, target state $|\psi_f\rangle$, maximum steps $N_{\max}$, decay period $E$, $\epsilon$-decay factor $\lambda$, \newline batch size $B$, memory size $K$, count threshold $SC$, probability threshold $\mu_1$} \KwOut{Optimal set of control pulses $a_j=\arg\max_{a} Q(s_j,a)$} Build a DQN agent $Q(s,a|\xi)$ and initialize $i=0$, $k=0$ and a memory pool $\mathcal{M}$\; Initialize \emph{task} $v_0$ with an initial target fidelity $D(v_0)$\; \While{ curriculum not finished} { Initialize the task hit count $w=0$ and an empty list $L$ to store the past performance\; \While{ stop criterion for \emph{task} $v_i$ not met} { Initialize step $j=0$ and transform initial quantum state $|\psi_0\rangle$ into $s_0$\; \While{$s_j$ is not termination state} { Generate action $a_j$ based on $Q(s_j,a_j|\xi)$ following the $\epsilon$-greedy action selection policy\; Perform the control pulses obtained from $a_j$, obtain next state $s_{j+1}$ and fidelity $F_j=F(|\psi_{j+1}\rangle,|\psi_f\rangle)$\; Determine $r_{j}$ based on $F_j$ and $D(v_i)$\; \If{$F_j \geq D({v_i})$ or $j\geq N_{\max}$ } { $s_j$ is termination state;} \Else{ $j \leftarrow j+1$ } \If{Memory pool is full} { Sample a batch of samples and update parameters $\xi$ according to Eq.
(\ref{eq:gradient})\; Observe the fidelity of the $k$-th sample in $\mathcal{M}$, denoted as $h(k)$\; \If{ $ h(k)> F_j$ \textup{and} $\textup{rand}(1) < \mu_1$ } { $k \leftarrow k+1 $\; } } Store transition $e_j=(s_j,a_j,r_j,s_{j+1},F_j)$ as the $k$-th sample into $\mathcal{M}$ and set $k \leftarrow k+1 $\; } Decrease $\epsilon$ every $E$ episodes as $(1-\epsilon) \leftarrow \frac{(1-\epsilon)} {\lambda}$\; Label the performance of the current episode as $F^*=F_j$ and append $F^*$ to $L$\; \If {$F^* \geq D(v_i)$} {Update the hit count for the current task $w \leftarrow w+1$\;} \If{$w > SC $} { Stop criterion for the current task is met\; } } Determine the candidate difficulty based on $L$ and assign it to the next task \emph{task} $v_{i+1}$\; Reactivate the DQN agent and reset $\epsilon$ to guarantee exploration\; $i \leftarrow i+1$ } \end{algorithm*} For the implementation of CDRL for quantum control, recalling that task generation lies at the heart of curriculum construction, we take the approach of dynamically generating tasks as an example. Besides, an effective DRL method is required to generate an optimal policy for the agent. Considering the wide application of DQN \cite{mnih2015human}, we adopt DQN as the baseline algorithm to train the agent towards a subtask. When searching for the optimal control fields, the set of $2^{M}$ possible choices for the action vector is formulated as \cite{an2019deep} \begin{equation} \mathcal{A}(u)=\{(u_1,...,u_M), u_m \in \{-G_m,G_m\},m=1,...,M\}, \label{eq: Action} \end{equation} where $\{G_m>0\}$ are preset bounds for each control. During the training process, the Q-values for all possible actions, i.e., $Q(s,a_1), Q(s,a_2), ...,$ are predicted and the actual action is obtained in an $\epsilon$-greedy way. That is, a random action $a_j$ is selected with probability $\epsilon$, or the greedy action $a_j = \arg\max_a Q(s_j,a)$ with probability $(1-\epsilon)$.
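The discrete action set of Eq. (\ref{eq: Action}) and the $\epsilon$-greedy selection can be sketched as follows; the bounds in the usage line anticipate the 2-qubit example used later and are otherwise illustrative.

```python
import itertools
import random

def action_set(bounds):
    """Enumerate the 2^M bang-bang action vectors: each control u_m
    takes one of the two values -G_m or +G_m."""
    return [tuple(u) for u in itertools.product(*[(-g, g) for g in bounds])]

def epsilon_greedy(q_values, actions, epsilon):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.choice(actions)
    return actions[max(range(len(actions)), key=lambda i: q_values[i])]

acts = action_set([4, 4, 4, 4])   # e.g. M = 4 controls with G_m = 4
print(len(acts))                  # 16 = 2^4 candidate action vectors
```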
The randomness factor $\epsilon$ controls the degree of randomness in action selection: a high value favors exploration, while a low value encourages exploitation. Finally, an integrated CDRL algorithm using dynamically generated tasks and the DQN baseline is summarized in Algorithm \ref{al:CDRL}. The curriculum agent launches a task $v_i$, with an appropriate difficulty $D(v_i)$, to activate the DRL agent to learn a good policy through trial-and-error. During each step, the agent encounters one transition $(s_j,a_j,r_j,s_{j+1},F_j)$. By evaluating $F_j$, the transition $e_j$ is stored in the memory pool in the right order. Meanwhile, the DQN agent updates its weights by sampling a batch of those transitions. During this time, the performance of each episode is collected and transmitted to the agent for the determination of the stop criterion. Once it is met, the curriculum agent schedules a new task based on the past performance. This procedure is carried out iteratively until the final task is achieved. \begin{remark} In practical implementation, the gap of difficulty between successive tasks, i.e., $\delta_{i}=D(v_{i+1})-D(v_{i})$, should be evaluated to determine whether or not to terminate the learning process. For example, if $\delta_{i}$ is below a small threshold (such as 0.00005), there is no need to introduce additional tasks, since the actual performance of one episode can in principle be larger than $D(v_i)$. As for the case of manually generated tasks, the agent may fail to achieve a better performance on a difficult task. To overcome this problem, a careful check is required to judge whether or not the current task is too hard for the agent. For example, the current task can be regarded as the final one if the agent fails to hit it in the early episodes.
\end{remark} \section{Numerical simulations}\label{Sec:simulation} To test the performance of the proposed CDRL algorithm for manipulating quantum systems, several groups of numerical simulations are carried out for closed and open quantum systems, and the results are demonstrated and analyzed with detailed parameter settings. \subsection{Parameter settings }\label{Subsec:parameter} Recall that there are two ways to generate tasks for a curriculum. For the static one, several benchmark values are usually selected based on empirical knowledge. For example, it is appropriate to initialize the first task with difficulty $0.9$, while $0.99$ represents a higher requirement and can be assigned to another task. Generally, the ability to achieve $0.999$ usually indicates an ideal case, and therefore a final task with difficulty $0.999$ is obtained. Since a direct increase from $0.9$ to $0.99$ may not be achievable in the learning process, finer-grained transitions between the two values are required to generate more tasks. In the simulations, the incremental value between $0.9$ and $0.99$ is set as 0.02 and 0.01 for 2-qubit and 3-qubit systems, respectively, while an incremental value of 0.001 is adopted between 0.99 and 0.999. As for dynamic task generation, we initialize the first task with a fixed target fidelity of $0.9$. For the remaining tasks, we take the median of the fidelities collected during the previous task, since the median reflects the average performance and is robust to extreme values in the data. For the reward signal, we design a piecewise function with four segments to indicate different rewarding schemes. The parameters $k_1$ and $k_2$ in (\ref{eq:rewardpolicy}) are given as follows: \begin{equation} (k_1,k_2)=\left\{\begin{array}{ll} (0,-10)& F_j \in (0,0.9) \\ (60,-10)& F_j \in [0.9,0.99) \\ (-10,-100) & F_j \in [0.99,0.999) \\ (-800,-400) & F_j \in [0.999,1.000) \end{array}\right.
\label{eq:reward} \nonumber \end{equation} The discount factor is set as $\gamma=0.95$, and the $\epsilon$-greedy factor is initialized as $\epsilon=0.2$. The trained policy of DRL is approximated by a deep neural network with two 256-unit hidden layers. Parameters are optimized using Adam with a learning rate of $\alpha=0.0001$. The decay period is set as $E=10$ and the batch size is set as $B=128$. The threshold for the stop criterion of each task is set as $SC=2000$. The above parameters are used in all simulations. The other parameters in different simulations are summarized in Table \ref{tab:paraDRL}. It is worth noting that a prioritized experience replay method \cite{schaul2015prioritized} is applied and the probability threshold for the smart storing mechanism is set as $\mu_1=0.8$. \begin{table}[!htbp] \renewcommand\arraystretch{2} \centering \caption{Hyperparameters} \label{tab:paraDRL} \scalebox{0.95}{ \begin{tabular}{c|c|c|c|c} \hline Parameters & 2-qubit & 3-qubit & 3-level open & 4-level open\\ \hline Memory size & 20000 & 200000 & 20000 & 100000\\ \hline Replace iteration & 100 & 500 & 100 & 300\\ \hline $\epsilon$-decay factor & 0.999 & 0.9995 & 0.999 & 0.999\\ \hline \end{tabular}} \label{tab:parameters} \end{table} To verify the effectiveness of CDRL, we also compare CDRL with two traditional DRL versions. (i) DRL-1 searches exactly $N_{\max}$ control pulses in each episode. (ii) DRL-2 takes an empirical target fidelity to terminate an episode. For a fair comparison, DRL-1 and DRL-2 employ DQN with the same experience replay method as the baseline and adopt the same parameter settings as CDRL. For DRL-2, the target fidelity is set as 0.999 and 0.99 for 2-qubit and 3-qubit systems, respectively; for open quantum systems, it is set as 0.99 and 0.97 for the three-level and four-level systems, respectively. In addition, we also include the simulation of GD and GA for comparison.
Their algorithm description and parameter settings are given in Appendix \ref{app:GDsetting}. In this paper, the simulations for each task are run multiple times, and a seed is utilized to control the randomness of each run. The simulations are implemented in two scenarios according to the initial states: (i) a benchmark case means 10 runs with an identical initial state and different seeds; (ii) a random case means 10 runs with an identical seed and different initial states. \subsection{Numerical results for closed quantum systems}\label{Sec:closed} To apply DRL methods to quantum systems, we first build a map between their concepts. Considering that $|\psi_j\rangle$ is usually represented as a complex vector, a map is utilized to transform a quantum state into a real vector, which can be formulated as \begin{equation} s_j=[\textup{Re}(\langle 0 |\psi_j\rangle),\textup{Im}(\langle 0 |\psi_j\rangle),...,\textup{Re}(\langle n-1 |\psi_j\rangle),\textup{Im}(\langle n-1 |\psi_j\rangle)], \end{equation} where $\{|k\rangle\}_{k=0}^{n-1}$ is a set of basis states for an $n$-level quantum system, and $\textup{Re}(\cdot)$ and $\textup{Im}(\cdot)$ take the real and imaginary parts of a complex number, respectively. \subsubsection{2-qubit quantum systems}\label{Sec:system} \begin{figure*}[ht] \centering \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_bench_EAsBF.png}} \end{minipage} \hspace{1.2cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_random_EAsBF.png}} \end{minipage} \hspace{1.2cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_trac.png}} \end{minipage} \caption{Comparison between GD, GA, CDRL on 2-qubit quantum systems.
(a) Benchmark case, (b) Random case, (c) An example of trajectories.} \label{fig:level4EAs} \end{figure*} \begin{figure*}[ht] \centering \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_bench_DRL_fidelity.png}} \end{minipage} \hspace{1.2cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_bench_DRL_steps.png}} \end{minipage} \hspace{1.2cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_bench_DRL_avg.png}} \end{minipage} \caption{Comparison between different DRL approaches on 2-qubit systems for benchmark case. (a) Fidelity at the end of one episode, (b) Steps in one episode, (c) Average reward for one episode.} \label{fig:close2qubit-bench-DRL} \end{figure*} \begin{figure*}[ht] \centering \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_random_DRL_fidelity.png}} \end{minipage} \hspace{1.2cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_random_DRL_steps.png}} \end{minipage} \hspace{1.2cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_random_DRL_avg.png}} \end{minipage} \caption{Comparison between different DRL approaches on 2-qubit systems for random case. 
(a) Fidelity at the end of one episode, (b) Steps in one episode, (c) Average reward for one episode.} \label{fig:close2-qubit-random-DRL} \end{figure*} \begin{figure*}[ht] \centering \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_count_DRL_fidelity.png}} \end{minipage} \hspace{1.2cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_count_DRL_steps.png}} \end{minipage} \hspace{1.2cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2in]{1Statetransfer_level4_count_DRL_BFfidelity.png}} \end{minipage} \caption{Comparison of two CDRL versions with different parameter settings. (a) Fidelity at the end of one episode, (b) Steps in one episode, (c) The best fidelities for 10 runs.} \label{fig:CDRLtwoversion} \end{figure*} Consider a 2-qubit system with Hamiltonian $ H(t)=H_0+\sum_{m=1}^{4}u_m(t)H_m$. Denote $I_n$ as the identity operator for $n$-level systems. Denote the Pauli operators as \begin{equation}\label{eq:Pauli operators} \sigma_{x}=\begin{pmatrix} 0 & 1 \\ 1 & 0 \\ \end{pmatrix} , \ \ \ \ \sigma_{y}=\begin{pmatrix} 0 & -i \\ i & 0 \\ \end{pmatrix} , \ \ \ \ \sigma_{z}=\begin{pmatrix} 1 & 0 \\ 0 & -1 \\ \end{pmatrix}. \end{equation} We assume the free Hamiltonian is $H_0=\sigma_{z}\otimes \sigma_{z}$ and the control Hamiltonian operators are $H_1=\sigma_{x}\otimes I_2$, $H_2= I_2 \otimes \sigma_{x}$, $H_3=\sigma_{y}\otimes I_2$, $H_4= I_2 \otimes \sigma_{y}$, respectively. The maximum control step is defined as $N_{\max}=40$ and the control magnitudes are constrained as $G_1=G_2=G_3=G_4=4$. Here, we take a static method to generate tasks for curriculum construction. The numerical comparison of CDRL, GD, and GA under $dt=0.0275$ is shown in Fig. \ref{fig:level4EAs}. From Fig. \ref{fig:level4EAs}(a) and Fig. \ref{fig:level4EAs}(b), it is clear that GD achieves the best fidelities, closely followed by CDRL in both benchmark and random scenarios.
In addition, CDRL achieves better fidelities than GA in searching for discrete control fields. The trajectories of the optimal control pulses learned by the three methods are depicted in Fig. \ref{fig:level4EAs}(c), where CDRL finds an optimal control strategy with 13 control pulses that achieves a final fidelity of 0.9999, while the optimal control pulses found by GA and GD take the maximum number of steps. In particular, the fidelity of CDRL increases with each step, a benefit the other two methods do not share. In this sense, the CDRL method has an advantage in searching for shorter control pulses without sacrificing fidelity. The comparison between CDRL, DRL-1 and DRL-2 for the benchmark case is summarized in Fig. \ref{fig:close2qubit-bench-DRL}. As we can see, CDRL converges to a fidelity similar to DRL-1 and DRL-2, although they display small differences in the early stage in Fig. \ref{fig:close2qubit-bench-DRL}(a). From Fig. \ref{fig:close2qubit-bench-DRL}(b), the average steps learnt by CDRL are much lower than those of the other two methods. In Fig. \ref{fig:close2qubit-bench-DRL}(c), CDRL ranks first regarding the average reward per episode (defined as $\frac{1}{N}\sum_{j=0}^{N} r_j$). For the random case in Fig. \ref{fig:close2-qubit-random-DRL}, CDRL tends to achieve a slightly better fidelity, as the violet line lies above the other two. Similar to the benchmark case, CDRL also converges to a small number of steps compared with the other two DRL methods. It is worth noting that the number of steps in an episode equals the number of control pulses in a control strategy. Based on the above analysis, the convergence to fewer steps per episode reveals that CDRL has the potential to search for shorter control pulses compared to DRL-1 and DRL-2.
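For reference, the 2-qubit environment described above can be simulated directly. The eigendecomposition propagator is our choice (the paper does not state its integrator), and the state map follows the real-vector encoding introduced earlier.

```python
import numpy as np

# Pauli matrices and the 2x2 identity.
I2 = np.eye(2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

H0 = np.kron(sz, sz)                                   # free Hamiltonian
Hc = [np.kron(sx, I2), np.kron(I2, sx),
      np.kron(sy, I2), np.kron(I2, sy)]                # control Hamiltonians

def step(psi, u, dt=0.0275):
    """Propagate |psi> through one piecewise-constant control interval."""
    H = H0 + sum(um * Hm for um, Hm in zip(u, Hc))
    w, V = np.linalg.eigh(H)                           # H is Hermitian
    U = V @ np.diag(np.exp(-1j * w * dt)) @ V.conj().T
    return U @ psi

def to_state_vector(psi):
    """Map the complex amplitude vector to the real RL state s_j."""
    return np.concatenate([np.array([z.real, z.imag]) for z in psi])

def fidelity(psi, psi_f):
    return abs(np.vdot(psi_f, psi)) ** 2

psi0 = np.array([1, 0, 0, 0], dtype=complex)
psi1 = step(psi0, u=(4, 4, 4, 4))                      # bang-bang action
print(round(float(np.linalg.norm(psi1)), 6))           # unitary: norm stays 1.0
```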
Recall that the success count $SC$ and the decay factor $\lambda$ are two important factors for CDRL. We therefore compare two versions of CDRL with different parameter settings: (i) CDRL-v1 with $SC=500$ and $\lambda=0.99$ and (ii) CDRL-v2 with $SC=2000$ and $\lambda=0.999$. Their comparison results are summarized in Fig. \ref{fig:CDRLtwoversion}, where they converge to similar fidelities in Fig. \ref{fig:CDRLtwoversion}(a), but exhibit different performance in steps in Fig. \ref{fig:CDRLtwoversion}(b). Regarding the best fidelity, CDRL-v1 is shown to achieve superior performance over CDRL-v2 in Fig. \ref{fig:CDRLtwoversion}(c). The above results reveal that CDRL with a longer learning period for each task tends to converge to smaller steps, while a shorter learning period helps explore a better fidelity. \subsubsection{3-qubit quantum systems} For a 3-qubit system, its Hamiltonian is assumed to be $H(t)=H_0+\sum_{m=1}^{6}u_m(t)H_m$. Denote $\sigma_x^{(12)}=\sigma_x\otimes \sigma_x \otimes I_2 $, $\sigma_x^{(23)}= I_2\otimes \sigma_x \otimes \sigma_x$, $\sigma_x^{(13)}= \sigma_x \otimes I_2 \otimes \sigma_x $. The free Hamiltonian is $ H_0=0.1\sigma_x^{(12)}+0.1\sigma_x^{(23)}+0.1\sigma_x^{(13)}$. The control Hamiltonian operators are $H_1=\sigma_x\otimes I_2 \otimes I_2 $, $H_2= I_2 \otimes \sigma_x \otimes I_2$, $H_3= I_2 \otimes I_2 \otimes \sigma_x$, and $H_4=\sigma_z\otimes I_2 \otimes I_2 $, $H_5= I_2 \otimes \sigma_z \otimes I_2$, $H_6= I_2 \otimes I_2 \otimes \sigma_z$. The maximum control step is set as $N_{\max}=100$ and the control bounds are set as $G_1=G_2=G_3=G_4=G_5=G_6=1$. \begin{figure}[ht] \centering \includegraphics[width=3.5in]{1Statetransfer_level8_bench_BF.png} \caption{Comparison of 3-qubit systems using different methods for benchmark case.
} \label{fig:level8bench} \end{figure} \begin{figure} \centering \begin{minipage}{0.45\linewidth} \centerline{\includegraphics[width=3.5in]{1Statetransfer_level8_random_DRL_fidelity.png}} \end{minipage} \vfill \begin{minipage}{0.45\linewidth} \centerline{\includegraphics[width=3.5in]{1Statetransfer_level8_time.png}} \end{minipage} \caption{Comparison between CDRL and the other two DRL methods, where DRL-1 and DRL-2 show the same performance in this case. (a) Fidelity of CDRL and DRL, (b) Best fidelity under different time durations.} \label{fig:level8random} \end{figure} For the benchmark scenario, we take a static task-generation scheme. The numerical results under $dt=0.14$ are shown in Fig. \ref{fig:level8bench}, revealing that CDRL achieves comparable performance to GD and a superior result to GA. Among the three DRL methods, CDRL achieves the best performance, with DRL-1 falling far behind. For the random scenario, a dynamic task-generation scheme is adopted, since a static one might be inappropriate for different initial states. The performance comparison between CDRL, DRL-1, and DRL-2 is summarized in Fig. \ref{fig:level8random}. As we can see, DRL-2 with fixed target fidelity fails to achieve fidelity larger than $0.99$ and is reduced to DRL-1, and hence they show the same performance. From Fig. \ref{fig:level8random}(a), there is a wide gap between the fidelity of CDRL and that of the other two DRL methods. A further comparison regarding different time durations is summarized in Fig. \ref{fig:level8random}(b), demonstrating that CDRL achieves the best results under different time durations. \subsection{Numerical results for open quantum systems}\label{Sec:open} For open quantum systems with a density operator $\rho(t)$, a real vector can be used to represent the current state in DRL and is defined as \begin{equation} s_j=(y_1(t),...,y_N(t))^\top, \quad y_i(t)={\rm{tr}}(Y_i \rho(t)).
\end{equation} Considering the complex dynamics of open systems, a dynamical task-generation mechanism is adopted in this section. \begin{figure}[htp] \centering \begin{minipage}{0.4\linewidth} \centerline{\includegraphics[width=3.5in]{4Opencontrol_level3_trac.png}} \end{minipage} \vfill \begin{minipage}{0.4\linewidth} \centerline{\includegraphics[width=3.5in]{4Opencontrol_level3_EAstime.png}} \end{minipage} \caption{Comparison of CDRL, GA and GD on three-level open quantum systems. (a) Trajectories with steps by the optimal control pulses, (b) Best fidelity under different time durations.} \label{fig:open3EAs} \end{figure} \begin{figure*}[ht] \centering \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2.5in]{4Opencontrol_level3_bench_DRL_fidelity.png}} \end{minipage} \hspace{1.5cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2.5in]{4Opencontrol_level3_bench_DRL_steps.png}} \end{minipage} \hspace{1.5cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2.5in]{4Opencontrol_level3_bench_DRL_avg.png}} \end{minipage} \caption{Training performance of three-level open quantum systems. (a) Fidelity at the end of one episode, (b) Steps in one episode, (c) Average reward for one episode.} \label{fig:open3DRL} \end{figure*} \begin{figure*}[ht] \centering \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2.5in]{4Opencontrol_level4_bench_DRL_fidelity.png}} \end{minipage} \hspace{1.5cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2.5in]{4Opencontrol_level4_bench_DRL_steps.png}} \end{minipage} \hspace{1.5cm} \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=2.5in]{4Opencontrol_level4_bench_DRL_avg.png}} \end{minipage} \caption{Training performance of four-level open systems. 
(a) Fidelity at the end of one episode, (b) Steps in one episode, (c) Average reward for one episode.} \label{fig:open4DRL} \end{figure*} \begin{figure}[ht] \centering \includegraphics[width=3.5in]{4Opencontrol_level4_time.png} \caption{Best fidelity under different time durations.} \label{fig:open4time} \end{figure} \subsubsection{Three-level open quantum systems} For a three-level open quantum system, let the three energy levels be $|1\rangle=(1,0,0)^\top$, $|2\rangle=(0,1,0)^\top$, $|3\rangle=(0,0,1)^\top$. The free Hamiltonian is assumed to be $H_0=\sum_{j=1}^{3}0.2 j(j+1)|j\rangle\langle j|$ and the control Hamiltonian operators are chosen as $H_1=\sum_{i=1}^2(|i \rangle \langle i+1|+|i+1 \rangle \langle i|)$ and $H_2=|1 \rangle \langle 3|+|3 \rangle \langle 1|$. The Lindblad operators are given as \cite{Jirari2005Optimal} \begin{equation} L_1=\tau_{12} |1\rangle \langle 2|,\quad L_2=\tau_{13} |1\rangle \langle 3|,\quad L_3=\tau_{23} |2\rangle \langle 3|, \end{equation} with $\tau_{12}=0.4$, $\tau_{23}=0.3$, $\tau_{13}=0.2$. For a control task from the initial state $|\psi_0 \rangle =|1\rangle$ to the target state $|\psi_f \rangle =|3\rangle$, we define the maximum step as $N_{\max}=100$. Fig. \ref{fig:open3EAs}(a) shows the trajectories under the optimal control pulses of the three methods for $dt=0.05$. A closer look at the inset of Fig. \ref{fig:open3EAs}(a) shows that CDRL achieves the best fidelity, and GA achieves a slightly better result than GD. The curves in Fig. \ref{fig:open3EAs}(b) demonstrate that CDRL obtains better results than GA and GD under different time durations. In addition, the curve of CDRL is close to a horizontal straight line, while GA and GD exhibit similar performance, with fidelity decreasing with time duration. The comparison of training performance using the three DRL variants is summarized in Fig. \ref{fig:open3DRL}, where CDRL exhibits a better performance than DRL-2 regarding fidelity.
Since CDRL might explore a few extra steps to achieve a tiny increment of fidelity, the average steps of CDRL are slightly higher than those of DRL-2, while DRL-1 falls far behind CDRL and DRL-2 regarding fidelity, steps, and average reward. \subsubsection{Four-level open quantum systems} For a four-level open quantum system, we assume that the four energy levels are $|1\rangle=(1,0,0,0)^\top$, $|2\rangle=(0,1,0,0)^\top$, $|3\rangle=(0,0,1,0)^\top$, $|4\rangle=(0,0,0,1)^\top$. Let the free Hamiltonian be $H_0=0.25\sum_{j=1}^{4}j(j+1)|j\rangle\langle j|$ and the control Hamiltonian operators be $H_1=\sum_{i=1}^3(|i \rangle \langle i+1|+|i+1 \rangle \langle i|)$, $H_2=\sum_{i=1}^2(|i \rangle \langle i+2|+|i+2 \rangle \langle i|)$ and $H_3=|1 \rangle \langle 4|+|4 \rangle \langle 1|$. The Lindblad operators are given as \cite{Jirari2005Optimal} \begin{align} &L_1=\tau_{12} |1\rangle \langle 2|,\quad L_2=\tau_{13} |1\rangle \langle 3|,\quad L_3=\tau_{14} |1\rangle \langle 4|,\quad \\& \nonumber L_4=\tau_{12} |2\rangle \langle 3|,\quad L_5=\tau_{13} |2\rangle \langle 4|,\quad L_6=\tau_{12} |3\rangle \langle 4|, \end{align} with $\tau_{12}=0.4$, $\tau_{13}=0.3$, $\tau_{14}=0.2$. For a control task from the initial state $|\psi_0 \rangle =|1\rangle$ to the target state $|\psi_f \rangle =|4\rangle$, we set the maximum step as $N_{\max}=100$. The comparison result of the three DRL methods under $dt=0.06$ is summarized in Fig. \ref{fig:open4DRL}, where CDRL converges to a better fidelity than the other two DRL variants in Fig. \ref{fig:open4DRL}(a). In addition, CDRL achieves the best performance regarding both steps in Fig. \ref{fig:open4DRL}(b) and average reward in Fig. \ref{fig:open4DRL}(c). Again, the number of steps in an episode equals the number of control pulses of a control strategy, suggesting that CDRL has the potential to find control fields that achieve higher fidelity with shorter control pulses for four-level open quantum systems compared to DRL-1 and DRL-2.
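For completeness, the open-system dynamics above can be sketched with a simple Euler integration of the Lindblad master equation. This is an illustrative integrator, not necessarily the one used in the simulations; the three-level operators and rates follow the text.

```python
import numpy as np

def lindblad_euler_step(rho, H, Ls, dt):
    """One explicit Euler step of drho/dt = -i[H,rho] + sum_k D[L_k]rho."""
    drho = -1j * (H @ rho - rho @ H)
    for L in Ls:
        LdL = L.conj().T @ L
        drho += L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL)
    return rho + dt * drho

def ket(i, n=3):
    v = np.zeros((n, 1), dtype=complex)
    v[i, 0] = 1.0
    return v

# Three-level example: L_k = tau_ij |i><j| with the rates from the text.
L1 = 0.4 * ket(0) @ ket(1).conj().T     # tau_12 |1><2|
L2 = 0.2 * ket(0) @ ket(2).conj().T     # tau_13 |1><3|
L3 = 0.3 * ket(1) @ ket(2).conj().T     # tau_23 |2><3|
H0 = sum(0.2 * j * (j + 1) * ket(j - 1) @ ket(j - 1).conj().T for j in (1, 2, 3))
rho = ket(2) @ ket(2).conj().T          # start in |3>
rho = lindblad_euler_step(rho, H0, [L1, L2, L3], dt=0.05)
print(round(float(np.trace(rho).real), 6))   # trace is preserved: 1.0
```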
We also implement simulations with different time durations. The comparison results are summarized in Fig. \ref{fig:open4time}, where the performance of CDRL is much higher than that of DRL-1. In addition, the curve of CDRL is close to a horizontal straight line, while the curves of DRL-1 and DRL-2 decrease greatly as the time duration increases. \section{Conclusions}\label{Sec:conclusion} In this paper, a curriculum-based deep reinforcement learning (CDRL) approach is presented for the learning control design of quantum systems. In CDRL, a curriculum is constructed using a set of intermediate tasks. In particular, fidelity is utilized to indicate the difficulty of one task. By sequencing a set of tasks, the agent first grasps easy knowledge, and the difficulty gradually increases until the final task is achieved. The numerical results show that CDRL achieves comparable performance to GD and better performance than GA. Moreover, CDRL exhibits superior performance over two traditional DRL methods regarding fidelity and tends to find shorter control pulses. For our future work, we will extend this work to a sample-level curriculum to achieve more accurate manipulation of quantum systems. In addition, we will focus on the investigation of curriculum-based learning approaches for continuous control fields and integrate curriculum learning into other DRL methods such as DDPG \cite{lillicrap2016continuous} and TRPO \cite{schulman2015trust}.
\section{Introduction} Fix a prime $p \in {\bf{Z}}$. Given a profinite group $G$, let $\Lambda(G)$ denote the ${\bf{Z}}_p$-Iwasawa algebra of $G$, which is the completed group ring \begin{align*}\Lambda(G) = {\bf{Z}}_p[[G]] = \varprojlim_{U} {\bf{Z}}_p[G/U].\end{align*} Here, the projective limit runs over all open normal subgroups $U$ of $G$. Note that the elements of $\Lambda(G)$ can be viewed in a natural way as ${\bf{Z}}_p$-valued measures on $G$. Let $E$ be an elliptic curve defined over ${\bf{Q}}$ of conductor $N$. Hence $E$ is modular by fundamental work of Wiles \cite{Wi}, Taylor-Wiles \cite{TW}, and Breuil-Conrad-Diamond-Taylor \cite{BCDT}, with Hasse-Weil $L$-function $L(E,s)$ given by that of a cuspidal newform $f \in S_2(\Gamma_0(N))$. Let $k$ be an imaginary quadratic field. The Hasse-Weil $L$-function $L(E/k, s)$ of $E$ over $k$ is given by the Rankin-Selberg $L$-function $L(f \times \Theta_k, s)$, where $\Theta_k$ is the theta series associated to $k$ by a classical construction (as described for instance in \cite{GZ}). Let $k_{\infty}$ denote the compositum of all ${\bf{Z}}_p$-extensions of $k$, which by class field theory is a ${\bf{Z}}_p^2$-extension. Let $G$ denote the Galois group ${\operatorname{Gal}}(k_{\infty}/k)$. The complex conjugation automorphism of ${\operatorname{Gal}}(k/{\bf{Q}})$ acts on $G$ with eigenvalues $\pm 1$. Let $k^{{\operatorname{cyc}}}$ denote the ${\bf{Z}}_p$-extension associated to the $+1$-eigenspace, which is the cyclotomic ${\bf{Z}}_p$-extension of $k$. Let $D_{\infty}$ denote the ${\bf{Z}}_p$-extension associated to the $-1$-eigenspace, which is the dihedral or anticyclotomic ${\bf{Z}}_p$-extension of $k$. Let $\Gamma$ denote the cyclotomic Galois group ${\operatorname{Gal}}(k^{{\operatorname{cyc}}}/k)$, and let $\Omega$ denote the dihedral or anticyclotomic Galois group ${\operatorname{Gal}}(D_{\infty}/k)$.
Let $H$ denote the Galois group ${\operatorname{Gal}}(k_{\infty}/k^{{\operatorname{cyc}}})$, which is naturally isomorphic to $\Omega \cong {\bf{Z}}_p$. Let $X(E/k_{\infty})$ denote the Pontryagin dual of the $p^{\infty}$-Selmer group of $E$ over $k_{\infty}$, which has the natural structure of a compact $\Lambda(G)$-module. The subject of this note is the following conjecture, made in the spirit of Iwasawa (but often attributed to Greenberg and Mazur), known as the {\it{two-variable main conjecture of Iwasawa theory for elliptic curves}}: \begin{conjecture}\label{2vmc} Let $E$ be an elliptic curve defined over ${\bf{Q}}$, and $p$ a prime where $E$ has either good ordinary or multiplicative reduction. \begin{itemize} \item[(i)] There is a unique element $L_p(E/k_{\infty}) \in \Lambda(G)$ that interpolates $p$-adically the central values $L(E/k, \mathcal{W}, 1)/ \Omega_f$. Here, $L(E/k, \mathcal{W}, s)$ is the Hasse-Weil $L$-function of $E$ over $k$ twisted by a finite order character $\mathcal{W}$ of $G$, and $\Omega_f$ is a suitable complex period for which the quotient $L(E/k, \mathcal{W}, 1)/ \Omega_f$ lies in $\overline{{\bf{Q}}}$ (and hence in $\overline{{\bf{Q}}}_p$ via any fixed embedding $\overline{{\bf{Q}}} \rightarrow \overline{{\bf{Q}}}_p$). \item[(ii)] The dual Selmer group $X(E/k_{\infty})$ is $\Lambda(G)$-torsion, hence has a $\Lambda(G)$-characteristic power series $\operatorname{char}_{\Lambda(G)}X(E/k_{\infty})$. \item[(iii)] The equality of ideals $\left( L_p(E/k_{\infty}) \right) = \left( \operatorname{char}_{\Lambda(G)}X(E/k_{\infty}) \right)$ holds in $\Lambda(G).$ \end{itemize} \end{conjecture} In the setting where $E$ has complex multiplication by $k$, much is known about this conjecture thanks to work of Rubin \cite{Ru} (see also \cite{Ru2}), building on previous work of Coates-Wiles \cite{CW} and Yager \cite{Ya}. 
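Part (i) of the conjecture concerns an element of $\Lambda(G)$, that is, a ${\bf{Z}}_p$-valued measure on $G$, pinned down by its integrals against finite order characters. As a toy illustration only (the quotient ${\bf{Z}}/4$, the measure, and the characters below are hypothetical stand-ins, with ${\bf{Z}}/4$ playing the role of a finite quotient $G/U$), the specialization pairing is just a finite weighted sum:

```python
# Toy model: a measure on a finite quotient of G is a dict {element: mass};
# specializing at a character W means summing W(g) * mass(g).
def specialize(mu, W):
    """Integrate the character W against the measure mu."""
    return sum(W(g) * mass for g, mass in mu.items())

# Hypothetical measure on the quotient Z/4 of G.
mu = {0: 3, 1: 1, 2: 4, 3: 1}

trivial = lambda g: 1                   # trivial character
i_powers = [1, 1j, -1, -1j]             # exact powers of i, avoiding
quartic = lambda g: i_powers[g % 4]     # floating-point complex powers

augmentation = specialize(mu, trivial)  # total mass: the image under the
                                        # augmentation map to Z_p
```

Specializing at the trivial character recovers the total mass (the image under the augmentation map), while specializing at nontrivial characters recovers the twisted sums that an interpolation formula such as the one in Conjecture \ref{2vmc} (i) prescribes.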
Here, we consider the somewhat more mysterious setting where $E$ does {\it{not}} have complex multiplication, and in particular what can be deduced from known Iwasawa theoretic results for the one-variable cases corresponding to the Galois groups $\Gamma$ and $\Omega$. We start with the construction of $p$-adic $L$-functions, (i). Given a finite order character $\mathcal{W}$ of $G$, let $\mathcal{W}\left(\lambda\right)$ denote the specialization to $\mathcal{W}$ of an element $ \lambda \in \Lambda(G)$. That is, writing $d\lambda$ to denote the measure associated to $\lambda$, let $$\mathcal{W}\left(\lambda\right) = \int_G \mathcal{W}(g)\cdot d\lambda(g).$$ Fix a cuspidal Hecke eigenform $f \in S_2(\Gamma_0(N))$ of weight $2$, level $N$, and trivial Nebentypus. Such an eigenform $f \in S_2(\Gamma_0(N))$ is said to be {\it{$p$-ordinary}} if its $T_p$-eigenvalue is a $p$-adic unit with respect to any embedding $\overline{\bf{Q}} \rightarrow \overline{{\bf{Q}}}_p.$ Let \begin{align*} \langle f, f \rangle_N = \int_{\Gamma_0(N) \backslash \mathfrak{H}} \vert f \vert^2 dx dy \end{align*} denote the Petersson inner product of $f $ with itself. Let $L(f \times \Theta(\mathcal{W}), s)$ denote the Rankin-Selberg $L$-function of $f$ times the theta series $ \Theta(\mathcal{W})$ associated to $\mathcal{W}$, normalized to have central value at $s=1$. The ratio \begin{align*} \frac{L(f \times \Theta(\mathcal{W}), 1)}{8 \pi^2 \langle f ,f \rangle_N} \end{align*} lies in $\overline{\bf{Q}}$ by an important theorem of Shimura \cite{Sh1}. Using this fact, along with the constructions of Hida \cite{Hi} and Perrin-Riou \cite{PR0}, we obtain the following result. \begin{theorem}[Theorem \ref{2vinterpolation}]\label{2vplfn} Fix an embedding $\overline{\bf{Q}} \rightarrow \overline{\bf{Q}}_p.$ Let $f \in S_2(\Gamma_0(N))$ be a $p$-ordinary eigenform of weight $2$, level $N$, and trivial Nebentypus. Assume that $N$ is prime to the discriminant of $k$, and that $p \geq 5$. 
There exists an element $\mu_f \in \Lambda(G)$ whose specialization to any finite order character $\mathcal{W}$ of $G$ satisfies the interpolation formula \begin{align*}\mathcal{W}\left( \mu_f \right) = \eta\cdot \frac{L(f \times \Theta(\overline{\mathcal{W}}), 1)}{8 \pi^2 \langle f ,f \rangle_N} \in \overline{{\bf{Q}}}_p, \end{align*} where $\eta = \eta(f, \mathcal{W})$ is a certain explicit (nonvanishing) algebraic number. \end{theorem} Hence, we obtain a $p$-adic $L$-function $L_p(E/k_{\infty}) = L_p(f/k_{\infty}) \in \Lambda(G)$ associated to this measure $\mu_f$. \begin{remark}The two-variable $p$-adic $L$-function $L_p(f/k_{\infty})$ corresponding to $d\mu_f$ also satisfies a functional equation, as described in Corollary \ref{FE} below. \end{remark} We now consider the Iwasawa module structure theory of (ii), using standard techniques. Recall that we let $H$ denote the Galois group ${\operatorname{Gal}}(k_{\infty}/k^{{\operatorname{cyc}}})$, which is naturally isomorphic to the dihedral or anticyclotomic Galois group $\Omega \cong {\bf{Z}}_p$. If $E$ has good ordinary reduction at $p$, then an important theorem of Kato \cite{KK} with a nonvanishing theorem of Rohrlich \cite{Ro} implies that the dual Selmer group $X(E/k^{{\operatorname{cyc}}})$ is $\Lambda(\Gamma)$-torsion. To be more precise, the construction of Kato \cite{KK} with the nonvanishing theorem of Rohrlich \cite{Ro} shows that the dual Selmer group $X(E/{\bf{Q}}^{{\operatorname{cyc}}})$ is $\Lambda({\operatorname{Gal}}({\bf{Q}}^{{\operatorname{cyc}}}/{\bf{Q}}))$-torsion, where ${\bf{Q}}^{{\operatorname{cyc}}}$ denotes the cyclotomic ${\bf{Z}}_p$-extension of ${\bf{Q}}$. It then follows from a simple restriction argument, using Artin formalism for abelian $L$-functions, that the analogous structure theorem holds for $E$ in the cyclotomic ${\bf{Z}}_p$-extension of any abelian number field.
In particular, $X(E/k^{{\operatorname{cyc}}})$ is $\Lambda(\Gamma)$-torsion, and hence has a $\Lambda(\Gamma)$-characteristic power series with associated cyclotomic Iwasawa invariants $\mu_E(k) = \mu_{\Lambda(\Gamma)}\left( X(E/k^{{\operatorname{cyc}}}) \right)$ and $\lambda_E(k) =\lambda_{\Lambda(\Gamma)} \left( X(E/k^{{\operatorname{cyc}}}) \right)$. Using this result, we then deduce the following structure theorem for the dual Selmer group $X(E/k_{\infty})$. \begin{theorem} Let $E/{\bf{Q}}$ be an elliptic curve with good ordinary reduction at each prime above $p$ in $k$. \begin{itemize} \item[(i)] (Theorem \ref{torsion}) The dual Selmer group $X(E/ k_{\infty})$ is $\Lambda(G)$-torsion, hence has a $\Lambda(G)$-characteristic power series $\operatorname{char}_{\Lambda(G)}X(E/k_{\infty})$. \item[(ii)] (Theorem \ref{2mu}) If the cyclotomic invariant $\mu_E(k)$ vanishes, then the two-variable invariant $\mu_{\Lambda(G)}\left( X(E/k_{\infty}) \right)$ also vanishes. \item[(iii)] (Theorem \ref{Euler}) Let $\operatorname{char}_{\Lambda(G)} X(E/k_{\infty})(0)$ denote the image of the characteristic power series $\operatorname{char}_{\Lambda(G)}X(E/k_{\infty})$ under the augmentation map $\Lambda(G) \longrightarrow {\bf{Z}}_p$. If $p \geq 5$ and the $p^{\infty}$-Selmer group ${\operatorname{Sel}}(E/k)$ is finite, then \begin{align*} \vert \operatorname{char}_{\Lambda(G)}X(E/k_{\infty})(0)\vert_p &= \frac{\vert E(k)_{p^{\infty}} \vert^2 }{\vert {\mbox{\textcyr{Sh}}}(E/k)(p)\vert} \cdot \frac{\prod_v \vert c_v \vert_p}{\prod_{v \mid p}\vert \widetilde{E}_v(\kappa_v)(p)\vert^2 }.
\end{align*} Here, ${\mbox{\textcyr{Sh}}}(E/k)(p)$ denotes the $p$-primary part of the Tate-Shafarevich group ${\mbox{\textcyr{Sh}}}(E/k)$ of $E$ over $k$, $E(k)_{p^{\infty}}$ the $p$-primary part of the Mordell-Weil group $E(k)$, $\kappa_v$ the residue field at $v$, $\widetilde{E}_v$ the reduction of $E$ over $\kappa_v$, and $c_v=[E(k_v):E_0(k_v)]$ the local Tamagawa factor at a prime $v \subset \mathcal{O}_k$. \item[(iv)] (Theorem \ref{mhg}) If $\mu_E(k)=0$, then there is an isomorphism of $\Lambda(H)$-modules $X(E/k_{\infty}) \cong \Lambda(H)^{\lambda_E(k)}$. \end{itemize} \end{theorem} We also obtain from this the following application to Tate-Shafarevich ranks. Consider the short exact descent sequence of discrete $\Lambda(H)$-modules \begin{align*} 0 \longrightarrow E(k_{\infty})\otimes {\bf{Q}}_p/{\bf{Z}}_p \longrightarrow {\operatorname{Sel}}(E/k_{\infty}) \longrightarrow {\mbox{\textcyr{Sh}}}(E/k_{\infty})(p) \longrightarrow 0. \end{align*} Here, $E(k_{\infty})$ denotes the Mordell-Weil group of $E$ over $k_{\infty}$, and ${\mbox{\textcyr{Sh}}}(E/k_{\infty})(p)$ denotes the $p$-primary part of the Tate-Shafarevich group of $E$ over $k_{\infty}$. \begin{proposition}[Proposition \ref{tsrank}] Assume that $p$ is odd, and moreover that $p$ does not divide the class number of $k$ if the root number $\epsilon(E/k, 1)$ equals $-1$. If $E$ has good ordinary reduction at $p$ with $\mu_E(k)=0$, then \begin{align*}\operatorname{corank}_{\Lambda(H)}{\mbox{\textcyr{Sh}}}(E/k_{\infty})(p) = \begin{cases}\lambda_E(k) &\text{if $\epsilon(E/k, 1)=+1$}\\ \lambda_E(k) -1&\text{if $\epsilon(E/k, 1)=-1.$}\end{cases} \end{align*} \end{proposition} \begin{example} Consider the elliptic curve $E=53a: y^2 +xy +y = x^3 -x^2$ at $p=5$ over $k={\bf{Q}}(\sqrt{-31})$. The discriminant of $k$ is $-31$, which is prime to both $5$ and the conductor $53$ of $E$. A simple calculation shows that the root number $\epsilon(E/k, 1)$ is $+1$. 
Moreover, the mod $5$ Galois representation associated to $E$ is surjective, as shown by the calculations in Serre \cite[$\S$ 5.4]{Se}. Computations of Pollack \cite{Po} show that $\mu_E(k) =0$ with $\lambda_E(k)=9$ (and moreover that the Mordell-Weil rank of $E(k)$ is $1$), from which we deduce that ${\mbox{\textcyr{Sh}}}(E/k_{\infty})(5)$ has $\Lambda(H)$-corank $9$. In particular, ${\mbox{\textcyr{Sh}}}(E/k_{\infty})(5)$ contains infinitely many copies of ${\bf{Q}}_5 / {\bf{Z}}_5$.\end{example} Finally, we establish the following criterion for one divisibility of (iii) in terms of specializations to cyclotomic characters, following a suggestion of Ralph Greenberg. To be more precise, let $\Psi$ denote the set of finite order characters of the Galois group $\Gamma = {\operatorname{Gal}}(k^{{\operatorname{cyc}}}/k)$. Given a character $\psi \in \Psi$, let us write $\mathcal{O}_{\psi}$ to denote the ring obtained by adjoining to ${\bf{Z}}_p$ the values of $\psi$. Let $L_p(E/k_{\infty})\vert_{\Omega}$ denote the image of the two-variable $p$-adic $L$-function $L_p(E/k_{\infty})$ in the Iwasawa algebra $\Lambda(\Omega)$. \begin{theorem}[Corollary \ref{gpw}] Assume that $p$ does not divide $L_p(E/k_{\infty})\vert_{\Omega}$, and that for each character $\psi \in \Psi$, we have the inclusion of ideals \begin{align}\label{HDiv} \left( \psi \left( L_p(E/k_{\infty}) \right) \right) \subseteq \left( \psi \left( \operatorname{char}_{\Lambda(G)}X(E/k_{\infty}) \right) \right) \text{ in } \mathcal{O}_{\psi}[[G]]. \end{align} Then, we have the inclusion of ideals \begin{align}\label{2VDiv} \left( L_p(E/k_{\infty}) \right) \subseteq \left( \operatorname{char}_{\Lambda(G)}X(E/k_{\infty}) \right) \text{ in } \Lambda(G). \end{align} \end{theorem} We deduce from this the following result. Let $K$ be any finite extension of $k$ contained in the cyclotomic ${\bf{Z}}_p$-extension $k^{{\operatorname{cyc}}}$.
Let $\Omega_K$ denote the Galois group ${\operatorname{Gal}}(KD_{\infty}/k)$, which is topologically isomorphic to ${\bf{Z}}_p$. Let $L_p(E/k_{\infty})\vert_{\Omega_K}$ denote the image of the two-variable $p$-adic $L$-function $L_p(E/k_{\infty})$ in the Iwasawa algebra $\Lambda(\Omega_K)$. Let $\Psi_K$ denote the set of characters of order $[K:k]$ of the Galois group ${\operatorname{Gal}}(K/k)$. Let us consider as well the following condition(s), so that we can invoke the recent work of Pollack-Weston \cite{PW}. \begin{hypothesis}\label{XXX} Let $\epsilon(E/k, 1) \in \lbrace \pm 1 \rbrace$ denote the root number of the complex $L$-function $L(E/k, s) = L(f \times \theta_k, s)$. We assume that: \begin{itemize} \item[(i)] The mod $p$ Galois representation $\overline{\rho}_E$ associated to $E$ is surjective. \item[(ii)] If $\epsilon(E/k, 1) = +1$, then $p \geq 5$ and the conductor $N$ is prime to the discriminant of $k$. This latter condition determines an integer factorization $N = N^+ N^{-}$ of $N$, where $N^+$ is divisible only by primes that split in $k$, and $N^{-}$ is divisible only by primes that remain inert in $k$; we then assume that $N^{-}$ is the squarefree product of an odd number of primes. \end{itemize}\end{hypothesis} We obtain the following main result. \begin{proposition}[Proposition \ref{bccrit}] Assume that the root number $ \epsilon(E/k, 1)$ of $L(E/k, 1)$ is $+1$. Assume additionally that for a finite extension $K$ of $k$ contained in the cyclotomic ${\bf{Z}}_p$-extension $k^{{\operatorname{cyc}}}$, we have the inclusion of ideals \begin{align}\label{BCDiv} \left( L_p(E/k_{\infty})\vert_{\Omega_K} \right) \subseteq \left( \operatorname{char}_{\Lambda(\Omega_K)}X(E/KD_{\infty}) \right) \text{ in } \Lambda(\Omega_K), \end{align} with equality for $K=k$. Then, there exists a nontrivial character $\psi \in \Psi_K$ such that the specialization divisibility $(\ref{HDiv})$ holds.
In particular, if Hypothesis \ref{XXX} (i) and (ii) hold, then we obtain the inclusion of ideals \begin{align*} \left( L_p(E/k_{\infty}) \right) \subseteq \left( \operatorname{char}_{\Lambda(G)}X(E/k_{\infty}) \right) \text{ in } \Lambda(G). \end{align*}\end{proposition} Though we do not discuss the issue here, the equality condition for $K=k$ would follow from the nonvanishing criterion of Howard \cite[Theorem 3.2.3 (c)]{Ho} for dihedral/anticyclotomic $p$-adic $L$-functions, as explained in \cite[$\S 5$]{VO}. Hence, by Proposition \ref{bccrit}, this criterion would also imply one divisibility of the two-variable main conjecture in the setting where the root number $\epsilon(E/k,1)$ is $+1$. \begin{remark}[Acknowledgements.] It is a pleasure to thank John Coates, Ralph Greenberg, David Loeffler, Robert Pollack, Christopher Skinner and Christian Wuthrich for various helpful discussions. In particular, it is a pleasure to thank Christopher Skinner for informing me of the three-variable main conjecture proved in \cite{SU}, which I had not been aware of before writing this work. It is also a pleasure to thank the anonymous referee for various helpful comments that have done much to improve the exposition, as well as the correctness of some of the writing. \end{remark} \tableofcontents \section{Two-variable $p$-adic $L$-functions} We start with the proof of Theorem \ref{2vinterpolation}, following closely the constructions of Hida \cite{Hi} and Perrin-Riou \cite{PR0}. Both of these constructions depend in an essential way on the bounded linear form defined in \cite{Hi}, which we review below. \begin{remark} The results described below hold more generally for $f$ any $p$-ordinary eigenform of weight $l \geq 2$ and nontrivial Nebentypus, following the same methods described below with \cite[Th\'eor\`eme B]{PR0}.
We have restricted to the setting of eigenforms associated to modular elliptic curves for simplicity of exposition.\end{remark} \begin{remark}[Hida's bounded linear form.] We follow Hida \cite[$\S 4$]{Hi}, using the same notations for spaces of modular forms and Hecke algebras used there. Suppose we have a modular form \begin{align*} f(z) = \sum_{n\geq0} a_n(f) e^{2\pi i nz} \in M_l(\Gamma_{*}(M), \xi; L_0), \end{align*} with $l$ and $M$ positive integers, $* = 0$ or $1$, $\xi$ a Dirichlet character mod $M$, and $L_0 = {\bf{Q}}(a_n(f))_{n\geq0}$ the extension of ${\bf{Q}}$ generated by the Fourier coefficients of $f$. We define a norm $\vert \cdot \vert_p$ on $M_l(\Gamma_{*}(M), \xi; L_0)$ by letting \begin{align*} \vert f \vert_p = \sup_n \left| a_n(f) \right|_p. \end{align*} Let $L$ denote the closure of $L_0$ in $\overline{\bf{Q}}_p$ with respect to a fixed embedding $\overline{\bf{Q}} \rightarrow \overline{\bf{Q}}_p$. Let $M_l(\Gamma_{*}(M), \xi; L)$ denote the completion of the space $M_l(\Gamma_{*}(M), \xi; L_0)$ with respect to $\vert \cdot \vert_p$. Let $\mathcal{O} = \mathcal{O}_L$. Define a subspace of {\it{integral forms}} \begin{align*} M_l(\Gamma_{*}(M), \xi; \mathcal{O}) = \lbrace f \in M_l(\Gamma_{*}(M), \xi; L): \vert f \vert_p \leq 1\rbrace. \end{align*} Let us write ${\bf{T}}(M, \xi; L)$ to denote the algebra of Hecke operators acting on $M_l(\Gamma_{*}(M), \xi; L),$ as defined in \cite[p. 171]{Hi}. Hence, ${\bf{T}}(M, \xi; L)$ denotes the $L$-subalgebra of the ring of all $L$-linear endomorphisms of $M_l(\Gamma_{*}(M), \xi; L)$ generated by Hecke operators. Given integers $n \geq m \geq 0$, the restriction ${\bf{T}}(Mp^n, \xi; \mathcal{O})$ of ${\bf{T}}(Mp^n, \xi; L)$ to the subspace $M_l(\Gamma_{*}(Mp^m), \xi; \mathcal{O})$ defines an $\mathcal{O}$-algebra homomorphism \begin{align*} {\bf{T}}(Mp^n, \xi; \mathcal{O}) \longrightarrow {\bf{T}}(Mp^m, \xi; \mathcal{O}).
\end{align*} We define the {\it{extended Hecke algebra}} by passage to the inverse limit with respect to these homomorphisms, \begin{align*} {\bf{T}}(M, \xi; \mathcal{O}) = \ilim n {\bf{T}}(Mp^n, \xi; \mathcal{O}). \end{align*} Let us now fix a $p$-ordinary eigenform \begin{align*}f(z) = \sum_{n\geq1} a_n(f) e^{2\pi i nz} \in S_2(\Gamma_0(N)) \end{align*} of weight $2$, level $N$, and trivial Nebentypus. Let $\psi$ denote the principal or trivial character modulo $N$ (hence $\psi(p) =1$ if $p$ does not divide $N$, and $\psi(p)=0$ otherwise). Let $\alpha_{p}(f)$ denote the $p$-adic unit root of the polynomial \begin{align*} x^2 - a_p(f)x + p\psi(p), \end{align*} and $\beta_p(f)$ the non-unit root. Let $f_0$ denote the $p$-stabilization of $f$, which is the unique ordinary form associated to $f$ by Hida \cite[Lemma 3.3]{Hi}. That is, let \begin{align*} f_0(z) = \begin{cases} f(z) &\text{if $p \mid N$}\\ f(z) - \beta_p(f) f(pz) &\text{if $p \nmid N.$} \end{cases}\end{align*} This eigenform $f_0$ has level $N_0$, where \begin{align*} N_0 = \begin{cases} Np &\text{if $p \nmid N$}\\ N &\text{if $p \mid N.$}\end{cases}\end{align*} Its Fourier coefficients $a_n(f_0)$ satisfy the relations \begin{align*}a_n(f_0) = \begin{cases} a_n(f) &\text{if $(n,p)=1$}\\ \alpha_p(f) &\text{if $n=p.$} \end{cases} \end{align*} We now recall briefly the definition of idempotent operators in extended Hecke algebras, following \cite[pp. 171 - 172]{Hi}. That is, let ${\bf{T}}(Np^m) = {\bf{T}}(\Gamma_0(Np^m); \mathcal{O})$ denote the $\mathcal{O}$-algebra generated by Hecke operators acting on the space of cusp forms $S_2(\Gamma_0(Np^m);\mathcal{O})$, with $T_p = T_p(Np^m)$ denoting the Hecke operator at $p$. Let $\overline{T}_{p}$ denote the image of $T_p$ in the quotient ${\bf{T}}(Np^m)/p$. This reduction $\overline{T}_p$ can be decomposed uniquely into semisimple and nilpotent parts.
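As a toy scalar analogue (illustration only, not the Hecke-algebra construction itself): in ${\bf{Z}}/p^k$, raising to a sufficiently high multiple of $\varphi(p^k)$ kills the nilpotent part of an element and fixes its unit part, producing an idempotent image; and the unit root $\alpha_p(f)$ of $x^2 - a_p(f)x + p\psi(p)$ can be computed by a $p$-adically contracting iteration. A sketch, with hypothetical numerical inputs:

```python
# Scalar analogue of the ordinary idempotent, and the unit root alpha_p,
# computed in Z/p^k.  (Toy illustration; the values of p, k, ap are
# hypothetical and not taken from the text.)

def scalar_idempotent(a, p, k):
    """Image of a under a -> a^E in Z/p^k, where E is a multiple of
    phi(p^k) with E >= k: returns 1 on units and 0 on non-units."""
    E = (p - 1) * p ** (k - 1) * k
    return pow(a, E, p ** k)

def unit_root(ap, p, k):
    """Unit root alpha of x^2 - ap*x + p in Z/p^k, for ap a unit mod p
    (the case psi(p) = 1), via the contracting iteration
    alpha <- ap - p / alpha, which gains one power of p per step."""
    m = p ** k
    alpha = ap % m
    for _ in range(k + 1):
        alpha = (ap - p * pow(alpha, -1, m)) % m  # needs Python >= 3.8
    return alpha
```

For instance, with the hypothetical values $p = 5$, $k = 6$, $a_p = 2$, the returned $\alpha$ is a unit satisfying $\alpha^2 - 2\alpha + 5 \equiv 0 \pmod{5^6}$, while the scalar idempotent sends units to $1$ and multiples of $5$ to $0$.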
Since ${\bf{T}}(Np^m)/p$ is a finitely-generated, commutative ${\bf{F}}_p$-algebra, it follows that $\overline{T}_{p}^{~p^r}$ is semisimple for $r$ sufficiently large. Hence, $\overline{T}_{p}^{~ up^r}$ is idempotent for some integer $u$. Let $e_m$ denote the unique lift to ${\bf{T}}(Np^m)$ of $\overline{T}_{p}^{~ up^r}$. Note that this lift does not depend on the choice of integer $u$. \begin{definition}The {\it{idempotent}} ${\bf{e}}$ in the extended Hecke algebra ${\bf{T}}(N) = \ilim m {\bf{T}}(Np^m)$ is defined to be the projective limit $ {\bf{e}} = \ilim m e_m. $\end{definition} \begin{proposition}\emph{(Hida)} Let $f \in S_2(\Gamma_0(N))$ be a $p$-ordinary eigenform, with $f_0$ its associated ordinary form. There is a decomposition ${\bf{T}}(N;L) \cong A \oplus L$ induced by the split exact sequence \begin{align}\label{hida4.4}\begin{CD}0 @>>> A @>>> {\bf{T}}(N;L) @>{\phi(f_0)}>> {\bf{T}}^{(0)}(N;L) @>>>0. \end{CD} \end{align} Here, $\phi(f_0)$ is the map that sends $T_n \longmapsto a_n(f_0)$, with ${\bf{T}}^{(0)}(N;L) \cong L$ the direct summand of ${\bf{T}}(N;L)$ through which this map factors, and $A$ the complementary direct summand. \end{proposition} \begin{proof} See \cite[Proposition 4.4 and (4.5)]{Hi}. $\Box$ \end{proof} We now use this result to define the following operator. \begin{definition} Let $f \in S_2(\Gamma_0(N))$ be a $p$-ordinary eigenform with associated ordinary form $f_0.$ We let ${\bf{1}}_{f_0}$ denote the component of the idempotent ${\bf{e}}$ corresponding to the summand ${\bf{T}}^{(0)}(N)$ in the split exact sequence $(\ref{hida4.4})$ above. \end{definition} \begin{definition} Let $f \in S_2(\Gamma_0(N))$ be a $p$-ordinary eigenform with associated ordinary form $f_0.$ Let $m \geq 0$ be an integer.
{\it{Hida's bounded linear form $l_{f_0}$}} of level $Np^m$ is then given by the map \begin{align*}l_{f_0}: M_2(\Gamma_{*}(Np^m), \xi; L) \longrightarrow L, ~~~ g \longmapsto a_1\left( g\vert_{ {\bf{e}}\circ{\bf{1}}_{f_0}} \right),\end{align*} in other words by the map that sends a modular form $g \in M_2(\Gamma_{*}(Np^m), \xi; L)$ to the first Fourier coefficient of its image under the operation ${\bf{e}}\circ {\bf{1}}_{f_0}$. \end{definition} \begin{proposition}\emph{(Hida)} The linear form $l_{f_0}: M_2(\Gamma_{*}(Np^m), \xi; L) \longrightarrow L$ is given explicitly on any $g \in M_2(\Gamma_{*}(Np^m), \xi; L)$ by the map \begin{align*} g \longmapsto \alpha_p(f_0)^{-m} \cdot p \cdot \frac{ \langle h_m, g \rangle_{Np^m}}{ \langle h ,f_0 \rangle_{N_0}}. \end{align*} Here, $h = \overline{f}_0(z) |_2 \left(\begin{array} {cc} 0& -1\\ N_0 & 0 \end{array}\right)$ with $\overline{f}_0(z) = \overline{f_0(- \overline{z})},$ and $h_m(z) = h(p^mz)$. \end{proposition} \begin{proof} See \cite[Proposition 4.5]{Hi}. $\Box$ \end{proof} \begin{lemma}\label{hidaintegrality} The linear form $l_{f_0}$ sends $M_2(\Gamma_{*}(Np^m), \xi; \mathcal{O})$ to $ \mathcal{O}.$ \end{lemma} \begin{proof} Fix $g \in M_2(\Gamma_{*}(Np^m), \xi; \mathcal{O})$. We know that $\vert \alpha_p(f) \vert_p = \vert a_p(f_0)\vert_p = 1.$ On the other hand, the operator $\phi(f_0)$ in the split exact sequence $(\ref{hida4.4})$ sends $T_p(Np^m) \longmapsto a_p(f_0)$ for each $m \geq 0$. It follows that $\phi(f_0)$ sends the idempotent ${\bf{e}} = \ilim m e_m$ to the unit defined by $\lim_r a_p(f_0)^{p^r}= \lim_r \alpha_p(f_0)^{p^r}.$ Now, the action of ${\bf{T}}(N)$ maps the space $M_2(\Gamma_{*}(Np^m); \mathcal{O})$ to itself for any $m\geq 0,$ as explained for instance in \cite[$\S 4$]{Hi}.
Thus if $\vert g \vert_p \leq 1$, then $g \vert_{ {\bf{e}}\circ {\bf{1}}_{f_0}} = \left( g\vert_{{\bf{e}}}\right)\vert_{{\bf{1}}_{f_0}}$ has the property that $\left| a_1\left( g \vert_{ {\bf{e}}\circ {\bf{1}}_{f_0}} \right) \right|_p \leq 1$. The result follows. $\Box$ \end{proof} \end{remark} \begin{remark}[Some $p$-adic convolution measures.] We now give a sketch of Perrin-Riou's construction of the measure $d\mu_f$, \cite{PR0}, starting with the setup described above. This construction is made up of several constituent measures that a priori take values in the spaces $M_l(\Gamma_{*}(M), \xi; L)$, but can be seen to take values in the integral subspaces $M_l(\Gamma_{*}(M), \xi; \mathcal{O}),$ as we show in Proposition \ref{integrality}. Let us fix throughout a finite order character $\mathcal{W}$ of $G$. We commit an abuse of notation in viewing $\mathcal{W}$ as a character on the ideals of $k$ via class field theory. Observe that we can always write such a character $\mathcal{W}$ as the product of characters $\rho \chi \circ {\bf{N}}$, where $\rho$ is a character of $G$ that factors through the dihedral ${\bf{Z}}_p$-extension $D_{\infty}$ of $k$, and $\chi \circ {\bf{N}}$ a character of $G$ that factors through the cyclotomic ${\bf{Z}}_p$-extension $k^{{\operatorname{cyc}}}$ of $k$. Here, the cyclotomic character $\chi \circ {\bf{N}}$ is given by composing the norm homomorphism ${\bf{N}}$ on ideals of $k$ with some Dirichlet character $\chi$ that factors through the cyclotomic ${\bf{Z}}_p$-extension of ${\bf{Q}}$. Hence, we fix a finite order character $\mathcal{W}$ of $G$ with dihedral/cyclotomic factorization \begin{align} \label{decomposition}\mathcal{W} = \rho \chi \circ {\bf{N}}. \end{align} Let $c(\mathcal{W})$ denote the conductor of $\mathcal{W}$, with $c(\rho)$ the conductor of the dihedral or anticyclotomic part $\rho$. Let $D = D_{k/{\bf{Q}}}$ denote the discriminant of $k$.
Let $\omega = \omega_{k/{\bf{Q}}}$ denote the quadratic character associated to $k$. A classical construction associates to $\mathcal{W}$ a theta series of weight $1$, level $\Delta = \Delta(\mathcal{W})= \vert D\vert {\bf{N}}c(\mathcal{W})^2,$ and Nebentypus $\omega \chi^2.$ To be more precise, let $\mathcal{O}_{c(\rho)} = {\bf{Z}} + c(\rho)\mathcal{O}_k$ denote the ${\bf{Z}}$-order of conductor $c(\rho)$ in $\mathcal{O}_k$. Fix an element of the class group $ A \in{\operatorname{Pic}}\mathcal{O}_{c(\rho)}$. Fix a representative $\mathfrak{a} \in A$. We then define a $\chi$-twisted theta series associated to $A$, \begin{align*}\Theta_A(\chi)(z) &= \frac{1}{u} \sum_{x \in \mathfrak{a}} \chi \left( \frac{{N_{k/ {\bf{Q}}}}(x) }{ {\bf{N}}\mathfrak{a} } \right) e^{\frac{2\pi iN_{k/ {\bf{Q}} }(x) z}{ {\bf{N}}\mathfrak{a} }} = \frac{1}{u} + \sum_{n\geq 1} \chi(n) r_A(n) e^{2\pi i nz}. \end{align*} Here, $x$ runs over points in the lattice defined by $\mathfrak{a}$, $u = \vert \mathcal{O}_{k}^{\times}\vert$ is the number of units of $k$, and $r_A(n)$ is the number of ideals of norm $n$ in $A$. This series does not depend on the choice of representative $\mathfrak{a} \in A$. It is seen to lie in $M_1(\Gamma_0(\Delta), \omega\chi^2)$ by a standard application of Poisson summation. Taking the $\rho$-twisted sum over classes $A \in {\operatorname{Pic}}\mathcal{O}_{c(\rho)},$ it gives rise to a modular form \begin{align*}\Theta(\mathcal{W})(z) = \sum_{A}\rho(A) \Theta_A(\chi)(z) \in M_1(\Gamma_0(\Delta), \omega\chi^2)\end{align*} associated to $\mathcal{W}$. We refer the reader to $\cite{GZ}$, \cite{Hi} or $\cite{He}$ for proofs of these facts. In what follows, we fix a finite order character $\mathcal{W}$ of $G$ having the decomposition $(\ref{decomposition})$ above. We fix a ring class $A \in {\operatorname{Pic}}\mathcal{O}_{c(\rho)}$.
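The ideal counts $r_A(n)$ entering $\Theta_A(\chi)$ can be made concrete in the simplest hypothetical case $k = {\bf{Q}}(\sqrt{-1})$ with $c(\rho) = 1$ (so the class group is trivial and $A$ is the principal class): one counts lattice points of norm $n$ in ${\bf{Z}}[i]$ and divides by the number of units, $4$; the classical identity $r_A(n) = \sum_{d \mid n} \chi_{-4}(d)$ then relates these counts to a divisor sum. A sketch (illustration only, not the paper's running example):

```python
from math import isqrt

def r_ideals(n):
    """Number of ideals of norm n in Z[i]: lattice points (a, b) with
    a^2 + b^2 = n, divided by the 4 units of Z[i]."""
    points = 0
    for a in range(-isqrt(n), isqrt(n) + 1):
        b2 = n - a * a
        b = isqrt(b2)
        if b * b == b2:
            points += 1 if b == 0 else 2  # count both +b and -b
    return points // 4

def chi_m4_divisor_sum(n):
    """sum_{d | n} chi_{-4}(d), where chi_{-4}(d) = +1 for d = 1 mod 4,
    -1 for d = 3 mod 4, and 0 for even d."""
    return sum(1 if d % 4 == 1 else -1 if d % 4 == 3 else 0
               for d in range(1, n + 1) if n % d == 0)
```

For instance $r_A(5) = 2$ (the ideals $(2+i)$ and $(2-i)$), and the lattice-point count agrees with the divisor sum for every $n$.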
We then construct a measure associated to the underlying Dirichlet character $\chi$ in the decomposition $(\ref{decomposition}).$ In fact, to follow \cite{PR0}, we shall suppose more generally that $\chi$ is any finite order character of ${\bf{Z}}_p^{\times}.$ Taking the $\rho$-twisted sum over classes $A \in {\operatorname{Pic}}\mathcal{O}_{c(\rho)}$ then gives the appropriate measure in $\mathcal{O}[[G]]$ whose specialization to $\mathcal{W}$ interpolates the value \begin{align*} \frac{L(f \times \Theta(\overline{\mathcal{W}}), 1)}{8 \pi^2 \langle f ,f \rangle_N} \in \overline{\bf{Q}}_p \end{align*} up to some algebraic factor (which can be made explicit). We give only a sketch of this construction, referring the reader to \cite{PR0} for proofs and calculations. We start with the following constituent constructions. \begin{definition}[Theta series measures.] Fix an integer $m \geq 1$. Consider the series defined by \begin{align*} \Theta_A(\chi)(a, p^m)(z) = \sum_{x \in \mathfrak{a} \atop \frac{ N_{k/{\bf{Q}}}(x)}{ {\bf{N}}(\mathfrak{a})} \equiv a \operatorname{mod} p^m} \chi \left( \frac{ N_{k/{\bf{Q}}}(x) }{ {\bf{N}}\mathfrak{a} } \right) e^{\frac{2\pi i N_{k/{\bf{Q}}}(x) z }{ {\bf{N}}\mathfrak{a} } } .\end{align*} Let $d\Theta_A(\chi)$ denote the measure on ${\bf{Z}}_{p}^{\times}$ given by the rule \begin{align*}\int_{a + p^m{\bf{Z}}_p^{\times}} d\Theta_A(\chi) = \Theta_A(\chi)(a, p^m)(z).\end{align*} \end{definition} \begin{lemma}\label{thetaintegrality} The measure $d\Theta_A(\chi)$ takes values in the space $M_1(\Gamma_0(\Delta), \omega\chi^2; \mathcal{O})$ if $p \geq 5$. \end{lemma} \begin{proof} The result follows plainly from the $q$-expansion of $\Theta_A(\chi)(z)$. $\Box$ \end{proof} \begin{remark} We impose the condition $p \geq 5$ to deal with the $\frac{1}{u}$ term in the $q$-expansion of $\Theta_A(\chi)(z)$, since we could in exceptional cases have $u=4$ or $u=6$. \end{remark} \begin{definition}[Eisenstein series measures.]
Let $\xi$ be an odd Dirichlet character modulo an integer $M >2.$ Let $E_M(\xi)$ denote the Eisenstein series of weight $1$ given by \begin{align*} E_M(\xi)(z) = \frac{L(\xi, 0)}{2} + \sum_{n \geq 1}\left( \sum_{d >0 \atop d \mid n} \xi(d)\right) e^{2\pi i n z}. \end{align*} Here, \begin{align*}L(\xi, s) = \sum_{n\geq 1} \xi(n) n^{-s}\end{align*} is the standard Dirichlet $L$-series associated to $\xi$. The series $E_{M}(\xi)(z)$ lies in $M_1(\Gamma_0(M), \xi)$, as shown for instance in \cite{H2}. Fix an integer $m \geq 1.$ Let $M = Np^m$. Consider the series defined by \begin{align*}E(\xi)(a, M)(z) = \frac{L(\xi, 0)}{2} + \sum_{n \geq 1}\left( \sum_{d >0 , d \mid n \atop d \equiv a \operatorname{mod} M} \xi(d)\right) e^{2\pi i n z}.\end{align*} Fix an integer $C>1$ prime to $M$. Let $C^{-1}$ denote the inverse class of $C$ modulo $M$. Consider the difference defined by \begin{align*}E^{C}(\xi)(a, M)(z) = E(\xi)(a, M)(z) - C E(\xi)(C^{-1}a, M)(z).\end{align*} It is well known that the assignment $(a, M) \mapsto E^{C}(\xi)(a, M)(z)$ defines a bounded distribution on the product ${\bf{Z}}_{p}^{\times} \times \left({\bf{Z}}/N\right)^{\times}$ (see \cite{Hi}, \cite{Ka} or \cite{Ka2}). Let $dE^{C}(\xi)$ denote the measure on ${\bf{Z}}_{p}^{\times} \times \left( {\bf{Z}}/N\right)^{\times}$ given by the rule \begin{align*} \int_{a + Np^m{\bf{Z}}_p^{\times}} dE^C(\xi) = E^C(\xi)(a, Np^m)(z).\end{align*} Note that this measure takes values in certain spaces of Eisenstein series. To be more precise, we have the following result. \begin{lemma}\label{eisensteinintegrality} The measure $dE^{C}(\xi)$ takes values in the space $M_1(\Gamma_0(M),\xi; \mathcal{O})$. \end{lemma} \begin{proof}The result follows from the Key Lemma of Katz \cite[1.2.1, Key Lemma for $\Gamma(N)$]{Ka}, which shows that the Eisenstein measure takes $p$-integral values at an elliptic curve with level structure defined over a $p$-integral ring.
Note also that $dE^{C}(\xi)$ arises from a one-dimensional part of the Eisenstein pseudo-distribution $2H^{(a,b)}$ given in \cite[$\S 3.4$]{Ka} (i.e. with $a=C$). This pseudo-distribution can be shown to take integral values by \cite[Key Lemma 1.2.1]{Ka}, e.g. by the proof given in \cite[Theorem 3.3.3]{Ka} (cf. also \cite[$\S 3.5,(3.5.5)$]{Ka}). $\Box$ \end{proof} \end{definition} \begin{definition}[Convolution measures.] Fix a class $A \in {\operatorname{Pic}} \mathcal{O}_{c(\rho)}.$ Fix integers $a, m \geq 1.$ Fix an integer $C>1$ prime to $pND.$ Consider the series defined by \begin{align*} \Phi^C_A(\chi)(a, p^m)(z) = \sum_{\alpha \in \left({\bf{Z}}/N\Delta\right)^{\times}} \Theta_A(\chi)(\alpha^2 a, p^m)(Nz) E^C(\omega\chi^2)(\alpha, N\Delta)(z).\end{align*} The function $\Phi_{A}^{C}(\chi)(a, p^m)(z)$ can be seen to define a bounded distribution on ${\bf{Z}}_{p}^{\times}$ (see \cite[Lemme 4]{PR0}). Let $d\Phi_{A}^{C}(\chi)$ denote the measure on ${\bf{Z}}_{p}^{\times}$ given by this function. \begin{lemma}\label{convolutionintegrality} The measure $d\Phi_{A}^{C}(\chi)$ takes values in the space $M_2(\Gamma_0(N\Delta), \omega\chi^2; \mathcal{O})$, at least if $p \geq 5$. \end{lemma} \begin{proof} The function $\Phi_A^C(\chi)(a, p^m)(z)$ lies in $M_2(\Gamma_0(N\Delta), \omega\chi^2)$ (see \cite[Lemme 5]{PR0}). We then deduce from Lemmas \ref{thetaintegrality} and \ref{eisensteinintegrality} that it lies in $M_2(\Gamma_0(N\Delta), \omega\chi^2; \mathcal{O})$. $\Box$ \end{proof} \end{definition} \begin{definition}[Trace operators.] Keep the setup used to define the convolution measure $d\Phi_A^C(\chi)$ above.
Fix a set of coset representatives $\mathcal{R}$ for the coset space $\Gamma_0(N\Delta)\backslash \Gamma_0(N).$ We define the trace operator ${\operatorname{Tr}}_{N}^{N\Delta}: M_2(\Gamma_0(N\Delta), \xi)\longrightarrow M_2(\Gamma_0(N), \xi)$ by the rule \begin{align*} h(z) \longmapsto \sum_{\gamma \in \mathcal{R}} \xi(a_{\gamma}) \cdot h\vert_2 \gamma, ~~~ \gamma = \left(\begin{array} {cc} a_{\gamma}& b_{\gamma} \\ c_{\gamma} & d_{\gamma} \end{array}\right). \end{align*} \begin{lemma}\label{traceintegrality} The composition $ {\operatorname{Tr}}_{N}^{N\Delta} \circ \Phi_{A}^{C}(\chi)(a, p^m)(z)$ takes values in the space $M_2(\Gamma_0(N), \omega\chi^2; \mathcal{O})$, at least if $p \geq 5$. \end{lemma} \begin{proof} Given the result of Lemma \ref{convolutionintegrality}, the assertion can be deduced from explicit computations of the Fourier series expansion of the trace form. If $N$ and $D$ are both prime, then the result follows from the computation given in Gross \cite[Proposition 9.3, 2)]{G}. In the more general case with $(N, D)=1$, it follows from the computation of the coefficients given in Gross-Zagier \cite[IV$\S2$ Proposition (2.4) and $\S3,$ Proposition (3.2)]{GZ}. $\Box$ \end{proof} \end{definition} \begin{definition}[Fundamental measures.] Keep the setup from above. Recall that we let $f_0$ denote the $p$-stabilization of $f$, which is the unique ordinary form associated to $f$ by Hida \cite[Lemma 3.3]{Hi}. Let $L$ denote the closure of the field of values $L_0= {\bf{Q}}(\omega\chi^2(n))_{n\geq0}$ in $\overline{\bf{Q}}_p$. Let $l_{f_0}: M_2(\Gamma_0(N), \omega\chi^2; L) \longrightarrow L$ denote Hida's bounded linear form, as defined above. Let $d\phi_{A}^{C}(\chi)$ denote the measure on ${\bf{Z}}_{p}^{\times}$ given by the rule \begin{align*} \int_{a + p^m{\bf{Z}}_p} d\phi_{A}^{C}(\chi) = l_{f_0} \circ {\operatorname{Tr}}_{N}^{N\Delta} \circ \Phi_{A}^{C}(\chi)(a, p^m)(z).
\end{align*} \begin{proposition} \label{integrality} The measure $d\phi_{A}^{C}(\chi)$ takes values in the ring $\mathcal{O} = \mathcal{O}_L$, at least if $p \geq 5$. \end{proposition} \begin{proof} The result follows from Lemmas \ref{thetaintegrality}, \ref{eisensteinintegrality}, \ref{convolutionintegrality}, \ref{traceintegrality} and \ref{hidaintegrality}. $\Box$ \end{proof} \end{definition} We can now at last define the two-variable measure that gives rise to $d\mu_f$. \begin{definition} Keep the notations above, with $C>1$ an integer prime to $pND$. Let $L(\rho)$ denote the closure of the field of values $L_0(\rho(A))_{A \in {\operatorname{Pic}} \mathcal{O}_{c(\rho)}}$ in $\overline{\bf{Q}}_p$. Let $\mathcal{O} = \mathcal{O}_{L(\rho)}$. Let $d\mu_f^C$ denote the $\mathcal{O}$-valued function on $G$ defined by the rule \begin{align*} \int_G \mathcal{W} \, d\mu_f^C = \sum_{A \in {\operatorname{Pic}}\mathcal{O}_{c(\rho)} } \rho(A) d\phi_A^C(\chi).\end{align*} This function is easily seen to be a well-defined distribution on $G$ (see \cite[$\S$ 5]{PR0}), and hence (by Proposition \ref{integrality}) a measure on $G$. That is, the distribution is easily seen to be bounded for any choice of $p$, and integral for any choice of $p \geq 5$. It is also seen to be integral for any choice of $p$ if $\rho \neq {\bf{1}}$ (in which case the twisted sum of theta series $\sum_A \rho(A)\Theta_A(\chi)(z)$ is cuspidal). \end{definition} \end{remark} \begin{remark}[Interpolation properties and functional equation.] Let us keep all of the notations above, with $C>1$ an integer prime to $pND$. The two-variable measure $d\mu_f^C$ satisfies the following interpolation property. Let $\tau(\mathcal{W})$ denote the Artin root number of $L(\mathcal{W}, s)$. Recall that $\Delta = \Delta(\mathcal{W})$ denotes the level of the theta series $\Theta(\mathcal{W})(z)$. Let $\psi$ denote the principal character modulo $N$ as above (hence, $\psi(p) =1$ if $p$ does not divide $N$, and $\psi(p) = 0$ otherwise).
Recall as well that we let $\alpha_p$ denote the unique $p$-adic unit root of the Hecke polynomial $X^2-a_p(f)X + p\psi(p)$. Given an integer $r \geq 1$, let us write $\alpha_{p^r}$ to denote $\alpha_{p}^r$. Let us also write $N'$ to denote the prime-to-$p$ part of $N$. Let $p^{\beta}$ denote the $p$-primary component of the level $\Delta$ of $\Theta(\mathcal{W})$. Finally, let us abuse notation by using the same symbols for the measures defined on ${\bf{Z}}_p^{\times}$ above and for the induced measures defined on ${\bf{Z}}_p$. \begin{theorem}\label{2vinterpolation} There exists for each integer $C>1$ prime to $pND$ an $\mathcal{O}$-valued measure $d\mu_{f}^{C}$ on $G$ such that for any finite order character $\mathcal{W}$ of $G$, \begin{align*} \int_G \mathcal{W} d\mu_{f}^{C} &= \left( 1 - \frac{\psi(p)}{\alpha_{p^2}}\right)^{-1} \left( 1 + \frac{p\psi(p)}{\alpha_{p^2}}\right)^{-1} \prod_{\mathfrak{p} \mid p } \left( 1 - \frac{\mathcal{W}(\mathfrak{p})}{\alpha_{{\bf{N}}\mathfrak{p}}} \right) \left( 1 - \frac{\overline{\mathcal{W}}\psi({\bf{N}}\mathfrak{p})(\mathfrak{p})}{\alpha_{{\bf{N}}\mathfrak{p}}} \right) \\ &\times \omega(-N') \mathcal{W}(N') \left( 1 - C \omega(C) \overline{\mathcal{W}}(C)\right) \frac{{\Delta}^{\frac{1}{2}}}{\alpha_{p^{\beta}}}\tau(\mathcal{W})\\ & \times \frac{L(f \times \Theta(\overline{\mathcal{W}}), 1)}{8 \pi^2 \langle f ,f \rangle_{N}}. \end{align*} Here, the product runs over all primes $\mathfrak{p}$ of $\mathcal{O}_k$ that divide $p$. \end{theorem} \begin{proof} See Perrin-Riou \cite[Th\'eor\`eme A]{PR0}, along with Proposition \ref{integrality} above. That is, fix a finite order character $\mathcal{W}$ of $G$ having the decomposition $(\ref{decomposition})$. Fix an integer $C >1$ prime to $pND$. A simple argument shows that $d\mu_f^C$ is a well-defined distribution on $G$ (see \cite[$\S$ 5]{PR0}).
On the other hand, we know that $d\mu_f^C$ takes values in $\mathcal{O} = \mathcal{O}_{L(\rho)}$ (by Proposition \ref{integrality}). Hence, $d\mu_f^C$ is an $\mathcal{O}$-valued measure on $G$, corresponding to an element of the completed group ring $\mathcal{O}[[G]]$. The calculation of the interpolation value is given in \cite[$\S$ 4]{PR0}. $\Box$ \end{proof} We may now define the two-variable $p$-adic $L$-function associated to a $p$-ordinary eigenform $f \in S_2(\Gamma_0(N))$ in the tower $k_{\infty}/k,$ following Perrin-Riou \cite{PR0}. Observe that this definition does not depend on the choice of auxiliary integer $C >1$ prime to $pND$, thanks to Theorem \ref{2vinterpolation}. \begin{definition} Let $\eta:G \rightarrow {\bf{Z}}_{p}^{\times}$ be a continuous character. Let $\mathfrak{D}$ denote the different of $k$. Let $C >1$ be any integer prime to $pND$. The {\it{two-variable $p$-adic $L$-function}} $L_p(f, k)(\eta)$ of $f$ in $k_{\infty}/k$ is then defined to be \begin{align*} L_p(f, k)(\eta) &= \left( 1 - \frac{\psi(p)}{\alpha_{p^2}}\right) \left( 1 + \frac{p\psi(p)}{\alpha_{p^2}}\right) \\ &\times \eta^{-1}(\mathfrak{D}'N') \left( 1 - C \omega(C) \eta^{-1}(C) \right)^{-1}\\ &\times \int_G \eta(g) d\mu_{f}^C(g). \end{align*} Here, $\mathfrak{D}'$ and $N'$ denote the prime-to-$p$ parts of $\mathfrak{D}$ and $N$ respectively. \end{definition} \begin{corollary} \label{FE}The function $L_p(f, k)$ is an Iwasawa function on $G$ with coefficients in ${\bf{Z}}_p$. Moreover, the Iwasawa function defined by \begin{align*} \Lambda_p(f, k)(\eta) = \eta^{\frac{1}{2}}(N')\eta(\mathfrak{D}')L_p(f,k)(\eta) \end{align*} satisfies the functional equation \begin{align*} \Lambda_p(f, k)(\eta^{-1}) = - \omega(N') \Lambda_p(f,k)(\eta).\end{align*} \end{corollary} \begin{proof} See \cite[Corollaire, Th\'eor\`eme A]{PR0} or \cite[Corollaire, Th\'eor\`eme B]{PR0}.
$\Box$ \end{proof} \end{remark} \section{Iwasawa module structure theory} We now describe the Iwasawa module structure theory of the dual Selmer group of $E$ over $k_{\infty}$, along with that of the $p$-primary component of the associated Tate-Shafarevich group. We follow closely many of the arguments of Coates-Schneider-Sujatha \cite{CSS}, as well as the refinements of those arguments given by Hachimori-Venjakob \cite{HV} for the somewhat analogous setting of the false Tate curve extension. \begin{remark}[Some definitions.] Fix a finite set $S$ of primes of $k$ containing both the primes above $p$ and the primes where $E$ has bad reduction. Let $k^S$ denote the maximal Galois extension of $k$ that is unramified outside of $S$ and the archimedean primes of $k$. Note that since $k_{\infty}$ is unramified outside of primes above $p$, we have the inclusion $k_{\infty} \subset k^S$. Given $L$ any finite extension of $k$ contained in $k_{\infty}$, let $G_S(L)$ denote the Galois group ${\operatorname{Gal}}(k^S/L)$. The $p^{\infty}$-Selmer group ${\operatorname{Sel}}(E/L)$ of $E$ over $L$ is defined classically as the kernel of the localization map, \begin{align*} {\operatorname{Sel}}(E/L) &= \ker \left( \lambda_E(L): H^1(G_S(L), E_{p^{\infty}}) \longrightarrow \bigoplus_{v \in S}J_v(L)\right). \end{align*} Here, $E_{p^{\infty}} = E(k^S)_{p^{\infty}}$ denotes the $p$-power torsion: $E_{p^{\infty}} = \bigcup_{n \geq 0} E_{p^n}$ where $E_{p^n} = \ker ([p^n]:E \rightarrow E)$. We also write \begin{align*} J_v(L) &= \bigoplus_{w \mid v} H^1(L_w, E(\overline{L}_w))(p), \end{align*} where the sum runs over all primes $w$ above $v$ in $L$.
Note that this group fits into the classical short exact descent sequence \begin{align*} 0 \longrightarrow E(L) \otimes {\bf{Q}}_p/{\bf{Z}}_p \longrightarrow {\operatorname{Sel}}(E/L) \longrightarrow {\mbox{\textcyr{Sh}}}(E/L)(p) \longrightarrow 0, \end{align*} where ${\mbox{\textcyr{Sh}}}(E/L)(p)$ denotes the $p$-primary component of the Tate-Shafarevich group ${\mbox{\textcyr{Sh}}}(E/L)$ of $E$ over $L$. Let $L_{\infty}$ be any infinite extension of $k$ contained in $k_{\infty}$. We then define the Selmer group of $E$ over $L_{\infty}$ to be the inductive limit \begin{align*}{\operatorname{Sel}}(E/L_{\infty}) &= \varinjlim_L {\operatorname{Sel}}(E/L). \end{align*} Here, the limit is taken over all finite extensions $L$ of $k$ contained in $L_{\infty}$ with respect to the natural restriction maps on cohomology. We write \begin{align*} X(E/L_{\infty}) &= {\operatorname{Hom}}({\operatorname{Sel}}(E/L_{\infty}), {\bf{Q}}_p/{\bf{Z}}_p) \end{align*} to denote the Pontryagin dual of ${\operatorname{Sel}}(E/L_{\infty})$. \end{remark} \begin{remark}[$\Lambda(\Gamma)$-module structure.] Let us first review the cyclotomic structure theory implied by work of Kato and Rohrlich. \begin{theorem}\emph{(Kato-Rohrlich)}\label{kato-rohrlich} If $E/{\bf{Q}}$ has good ordinary reduction at each prime above $p$ in $k$, then the dual Selmer group $X(E/k^{\text{cyc}})$ is $\Lambda(\Gamma)$-torsion. \end{theorem} \begin{proof} The result follows from the Euler system method of Kato \cite[Theorems 14.2 and 17.4]{KK}, which requires for nontriviality the nonvanishing theorem of Rohrlich \cite{Ro}.
$\Box$ \end{proof} We may then invoke the structure theory of finitely generated torsion $\Lambda(\Gamma)$-modules (\cite[Chapter VII, $\S 4.5$]{BB}) to obtain a $\Lambda(\Gamma)$-module pseudoisomorphism \begin{align}\label{1vs} X(E/k^{{\operatorname{cyc}}}) \longrightarrow \bigoplus_{i=1}^r \Lambda(\Gamma)/p^{m_i} \oplus \bigoplus_{j=1}^s\Lambda(\Gamma)/\gamma_{j}^{n_j}.\end{align} Here, the indices $r$, $s$, $m_i$ and $n_j$ are all positive integers, and each $\gamma_j$ can be viewed as an irreducible monic distinguished polynomial $\gamma_j(T)$ (with respect to a fixed isomorphism $\Lambda(\Gamma) \cong {\bf{Z}}_p[[T]]$). The $\Lambda(\Gamma)$-characteristic power series \begin{align*} \operatorname{char}_{\Lambda(\Gamma)} X(E/k^{{\operatorname{cyc}}}) &= \prod_{i=1}^r p^{m_i} \cdot \prod_{j=1}^s \gamma_{j}^{n_j} \end{align*} is defined uniquely up to unit in $\Lambda(\Gamma)$. One defines from it the $\Lambda(\Gamma)$-module invariants \begin{align*} \mu_{\Lambda(\Gamma)}\left( X(E/k^{{\operatorname{cyc}}})\right) = \sum_{i=1}^r m_i ~~~\text{and}~~~ \lambda_{\Lambda(\Gamma)}\left( X(E/k^{{\operatorname{cyc}}})\right) = \sum_{j=1}^s n_j \cdot \deg(\gamma_j).\end{align*} We shall often for simplicity denote these by \begin{align*}\mu_E(k) = \mu_{\Lambda(\Gamma)}\left(X(E/k^{{\operatorname{cyc}}})\right) ~~~\text{and}~~~ \lambda_E(k) = \lambda_{\Lambda(\Gamma)}\left( X(E/k^{{\operatorname{cyc}}}) \right), \end{align*} respectively.
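For instance (a purely hypothetical illustration, not asserted for any particular curve $E$): if $X(E/k^{{\operatorname{cyc}}})$ were pseudoisomorphic to the elementary module \begin{align*} \Lambda(\Gamma)/p \, \oplus \, \Lambda(\Gamma)/(T^2 - p), \end{align*} then $\operatorname{char}_{\Lambda(\Gamma)} X(E/k^{{\operatorname{cyc}}}) = p\,(T^2 - p)$ up to unit, so that \begin{align*} \mu_E(k) = 1 ~~~\text{and}~~~ \lambda_E(k) = \deg(T^2 - p) = 2. \end{align*} Note that $T^2 - p$ is indeed an irreducible monic distinguished polynomial (by the Eisenstein criterion), so this is a legitimate elementary module in the sense of $(\ref{1vs})$.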
We refer the reader to the monograph of Coates-Sujatha \cite{CS2} for further discussion, for instance on how to compute the (finite) $\Gamma$-Euler characteristic of ${\operatorname{Sel}}(E/k^{{\operatorname{cyc}}})$, or equivalently how to compute $\vert \operatorname{char}_{\Lambda(\Gamma)}X(E/k^{{\operatorname{cyc}}})(0) \vert_p^{-1}$, where $\operatorname{char}_{\Lambda(\Gamma)}X(E/k^{{\operatorname{cyc}}})(0)$ denotes the image of the characteristic power series $\operatorname{char}_{\Lambda(\Gamma)}X(E/k^{{\operatorname{cyc}}})$ under the natural augmentation map $\Lambda(\Gamma) \longrightarrow {\bf{Z}}_p$. \end{remark} \begin{remark}[$\Lambda(G)$-module structure.] We now use the $\Lambda(\Gamma)$-module structure of $X(E/k^{{\operatorname{cyc}}})$ to study the $\Lambda(G)$-module structure of $X(E/k_{\infty})$, following the main ideas of \cite{CSS} and \cite{HV}. Let us first consider the following standard result. Let $\mathfrak{S}(E/L)$ denote the compactified Selmer group of $E$ over any finite extension $L$ of $k$ contained in $k_{\infty}$, which is defined as the projective limit \begin{align*}\mathfrak{S}(E/L) &=\varprojlim_n \ker \left(H^1(G_S(L), E_{p^n}) \longrightarrow \bigoplus_{v \in S} J_v(L) \right) \end{align*} taken with respect to the natural maps $E_{p^{n +1}} \rightarrow E_{p^n}$ induced by multiplication by $p$. Given any infinite extension $L_{\infty}$ of $k$ contained in $k_{\infty}$, we then define \begin{align*} \mathfrak{S}(E/L_{\infty}) &= \varprojlim_L \mathfrak{S}(E/L)\end{align*} to be the projective limit over all finite extensions $L$ of $k$ contained in $L_{\infty}$, taken with respect to the natural corestriction maps. \begin{proposition}\label{injection} Let $\Omega = \text{Gal}(L_{\infty}/k)$ be any infinite pro-$p$ group. If $E(L_{\infty})_{p^{\infty}}$ is finite, then there is a $\Lambda(\Omega)$-module injection \begin{align*} \mathfrak{S}(E/L_{\infty}) \longrightarrow \text{Hom}_{\Lambda(\Omega)}(X(E/L_{\infty}), \Lambda(\Omega)).
\end{align*} \end{proposition} \begin{proof} See for instance \cite[Theorem 7.1]{HV}. $\Box$ \end{proof} We use this to deduce the following result. \begin{theorem}\label{cycloc} If $E$ has good ordinary reduction at each prime above $p$ in $k$, then the cohomology group $H^2(G_S(k^{\text{cyc}}), E_{p^{\infty}})$ vanishes. In particular, the localization map \begin{align*} \lambda_S(k^{\text{cyc}}): H^1(G_S(k^{\text{cyc}}), E_{p^{\infty}})\longrightarrow \bigoplus_{v \in S} J_v(k^{\text{cyc}})\end{align*} is surjective, and hence we have a short exact sequence of $\Lambda(\Gamma)$-modules \begin{align}\label{locsurj} 0 \longrightarrow {\operatorname{Sel}}(E/k^{\text{cyc}}) \longrightarrow H^1(G_S(k^{\text{cyc}}),E_{p^{\infty}}) \longrightarrow \bigoplus_{v \in S} J_v(k^{\text{cyc}}) \longrightarrow 0.\end{align}\end{theorem} \begin{proof} Consider the Cassels-Poitou-Tate exact sequence \begin{align*}0 \longrightarrow {\operatorname{Sel}}(E/k^{\text{cyc}}) &\longrightarrow H^1(G_S(k^{\text{cyc}}), E_{p^{\infty}}) \longrightarrow \bigoplus_{v \in S}J_v(k^{\text{cyc}}) \\ &\longrightarrow \mathfrak{S}(E/k^{\text{cyc}})^{\vee}\longrightarrow H^2(G_S(k^{\text{cyc}}), E_{p^{\infty}}) \longrightarrow 0.\end{align*} Here, $\mathfrak{S}(E/k^{{\operatorname{cyc}}})^{\vee}$ is the Pontryagin dual of $\mathfrak{S}(E/k^{{\operatorname{cyc}}})$. Now, the $p$-power torsion subgroup $E(k^{{\operatorname{cyc}}})_{p^{\infty}}$ is finite by Imai's theorem \cite{Im}. Hence, we can invoke Proposition \ref{injection} to obtain an injection $\mathfrak{S}(E/k^{{\operatorname{cyc}}}) \rightarrow {\operatorname{Hom}}_{\Lambda(\Gamma)}(X(E/k^{{\operatorname{cyc}}}), \Lambda(\Gamma))$. Now, by the main result of Kato \cite{KK}, the dual Selmer group $X(E/k^{{\operatorname{cyc}}})$ is $\Lambda(\Gamma)$-torsion. 
Hence, we have a $\Lambda(\Gamma)$-module injection \begin{align*}\mathfrak{S}(E/k^{{\operatorname{cyc}}}) &\hookrightarrow {\operatorname{Hom}}_{\Lambda(\Gamma)}(X(E/k^{{\operatorname{cyc}}}), \Lambda(\Gamma)) =0. \end{align*} It follows that $\mathfrak{S}(E/k^{{\operatorname{cyc}}})^{\vee}=0$, and hence that $H^2(G_S(k^{{\operatorname{cyc}}}), E_{p^{\infty}})=0$. See also the argument of Kato \cite[$\S\S 13, 14$]{KK} for this latter vanishing. $\Box$\end{proof} Let us now consider invariants under the Galois group $H = {\operatorname{Gal}}(k_{\infty}/k^{{\operatorname{cyc}}}).$ Note that by Serre's refinement \cite{Se3} of Lazard's theorem \cite{Laz}, a $p$-adic Lie group with no element of order $p$ has $p$-cohomological dimension ${\operatorname{cd}}_p$ equal to its dimension as a $p$-adic Lie group. Since $G$ has no element of order $p$, we can and will invoke this characterization throughout. Hence (for instance), ${\operatorname{cd}}_p(G)=2$ with ${\operatorname{cd}}_p(H) = {\operatorname{cd}}_p(\Gamma) =1$. To show the main result of this paragraph, we first establish the following standard lemmas. \begin{lemma}\label{Hloc} If $E$ has good ordinary reduction at each prime above $p$ in $k$, then there is a short exact sequence \begin{align*}\begin{CD} 0 @>>>{\operatorname{Sel}}(E/k_{\infty})^H @>>> H^1(G_S(k_{\infty}), E_{p^{\infty}})^H \\@. @>{\eta_S(k_{\infty})}>> \bigoplus_{v \in S}J_v(k_{\infty})^H @>>> 0.\end{CD}\end{align*} Here, $\eta_S(k_{\infty})$ is the map induced by localization map \begin{align*}\lambda_S(k_{\infty}): H^1(G_S(k_{\infty}), E_{p^{\infty}}) &\longrightarrow \bigoplus_S J_v(k_{\infty}).\end{align*}\end{lemma} \begin{proof} See \cite[Lemma 2.3]{CSS}. That is, consider the fundamental diagram $$\begin{CD} 0 @>>> {\operatorname{Sel}}(E/k_{\infty})^{H} @>>> H^1(G_S(k_{\infty}), E_{p^{\infty}})^{H} @>{\eta_S(k_{\infty})}>> \bigoplus_{v \in S}J_v(k_{\infty})^{H} \\ @. @AAA @AAA @AA{\gamma_S(k^{\text{cyc}})}A @. 
\\ 0 @>>> {\operatorname{Sel}}(E/k^{\text{cyc}}) @>>> H^1(G_S(k^{\text{cyc}}), E_{p^{\infty}}) @>{\lambda_S(k^{\text{cyc}})}>> \bigoplus_{v \in S}J_v(k^{\text{cyc}}).\\ \end{CD}$$ Here, the horizontal rows are exact, and the vertical arrows are induced by restriction on cohomology. We have that $${\operatorname{coker}}(\gamma_S(k^{\text{cyc}})) = \bigoplus_{w |v\in S} {\operatorname{coker}}(\gamma_w(k^{\text{cyc}})),$$ with $w$ ranging over places in $k^{{\operatorname{cyc}}}$ above $ v \in S$. Note that only finitely many such primes exist, as no finite prime splits completely in $k^{{\operatorname{cyc}}}$ (see for instance \cite[Theorem 2.13]{Wash}). Given a prime $w$ above $ v$ in $k_{\infty}$, let $\Omega_w$ denote the decomposition subgroup of $H$ at $w$. Note that ${\operatorname{cd}}_p \left( \Omega_w \right) \leq 1$, and so $H^2(\Omega_w, E_{p^{\infty}})=0.$ If $w \nmid p$, then standard arguments (see for instance \cite[Lemma 3.7]{C}) show that $${\operatorname{coker}}(\gamma_w(k^{\text{cyc}})) = H^2(\Omega_w, E_{p^{\infty}}) =0.$$ If $w|p$, then the main result of Coates-Greenberg \cite{CG} shows that $${\operatorname{coker}}(\gamma_w(k^{\text{cyc}})) = H^2(\Omega_w, \widetilde{E}_{w, p^{\infty}}) =0.$$ Here, $\widetilde{E}_{w, p^{\infty}}$ denotes the image under reduction modulo $w$ of $E_{p^{\infty}}.$ Hence, we find that ${\operatorname{coker}}(\gamma_w(k^{\text{cyc}})) =0$ for each prime $w$ above $v$ in $k_{\infty}$. It follows that $\gamma_S(k^{\text{cyc}})$ is surjective. Since the map $\lambda_S(k^{\text{cyc}})$ is also surjective by $(\ref{locsurj})$, it follows that $\eta_S(k_{\infty})$ is surjective as required. $\Box$ \end{proof} \begin{lemma}\label{hsss} If $E$ has good ordinary reduction at each prime above $p$ in $k$, then for all $i\geq 1$, $H^i(H, H^1(G_S(k_{\infty}), E_{p^{\infty}})) = 0.$ \end{lemma} \begin{proof} See \cite[Lemma 2.4]{CSS}. 
The same proof works here, using Theorem \ref{cycloc} with the fact that ${\operatorname{cd}}_p \left( H \right) =1$. $\Box$ \end{proof} \begin{lemma}\label{H1vanish} If $E$ has good ordinary reduction at each prime above $p$ in $k$, then $H^1(H,{\operatorname{Sel}}(E/k_{\infty}))=0.$ \end{lemma} \begin{proof} See \cite[Lemma 2.5]{CSS}. Let $A_{\infty} = \text{Im}(\lambda_S(k_{\infty})).$ Lemma $\ref{hsss}$ with $i=1$ gives the exact sequence \begin{align}\label{Ainfty} 0 \longrightarrow {\operatorname{Sel}}(E/k_{\infty})^H \longrightarrow H^1(G_S(k_{\infty}), E_{p^{\infty}})^H \longrightarrow A_{\infty}^H \longrightarrow H^1(H, {\operatorname{Sel}}(E/k_{\infty})) \longrightarrow 0. \end{align} Recall that the map $\eta_S(k_{\infty}): H^1(G_S(k_{\infty}), E_{p^{\infty}})^H \longrightarrow A_{\infty}^H$ is surjective by Lemma $\ref{Hloc}.$ Now, \begin{align*} A_{\infty}^{H} = \bigoplus_{v \in S}J_v(k_{\infty})^H, \end{align*} and so it follows from the exactness of $(\ref{Ainfty})$ that $H^1(H, {\operatorname{Sel}}(E/k_{\infty}))=0$. $\Box$ \end{proof} \begin{lemma}\label{Jvanish}If $E$ has good ordinary reduction at each prime above $p$ in $k$, then \newline $H^1(H, \bigoplus_{v \in S} J_v(k_{\infty}))=0.$ \end{lemma} \begin{proof} See \cite[Lemma 2.8]{CSS}. The same proof works here, using the fact that ${\operatorname{cd}}_p \left( H \right) =1$. $\Box$ \end{proof} We may now deduce the following result. \begin{theorem} \label{torsion} If $E$ has good ordinary reduction at each prime above $p$ in $k$, then $X(E/k_{\infty})$ is $\Lambda(G)$-torsion.\end{theorem} \begin{proof} See the arguments of \cite[Theorem 2.8 and Corollary 2.9]{HV}, following \cite[Proposition 2.9]{CSS}. A standard deduction, as given for instance in \cite[$\S 2$, Remark 2.5]{HV}, reduces the claim to showing the surjectivity of the localization map \begin{align*} \lambda_S(k_{\infty}): H^1(G_S(k_{\infty}), E_{p^{\infty}}) &\longrightarrow \bigoplus_{v \in S} J_v(k_{\infty}).
\end{align*} So, let $A_{\infty} = {\operatorname{im}}(\lambda_S(k_{\infty})).$ Taking the $H$-cohomology of the exact sequence \begin{align*} 0 \longrightarrow {\operatorname{Sel}}(E/k_{\infty})\longrightarrow H^1(G_S(k_{\infty}), E_{p^{\infty}}) \longrightarrow A_{\infty} \longrightarrow 0, \end{align*} we obtain from Lemma \ref{hsss} the identification \begin{align*} H^1(H, A_{\infty}) = H^2(H, {\operatorname{Sel}}(E/k_{\infty})).\end{align*} Note that since ${\operatorname{cd}}_p(H)=1$, we have that $H^2(H,{\operatorname{Sel}}(E/k_{\infty}))=0$, and hence that $H^1(H, A_{\infty}) =0$. Let $B_{\infty} = {\operatorname{coker}}(\lambda_S(k_{\infty})).$ By Lemma \ref{Jvanish}, \begin{align*} H^1(H, \bigoplus_{v \in S}J_v(k_{\infty}))=0. \end{align*} Taking $H$-cohomology of the exact sequence \begin{align*}0 \longrightarrow A_{\infty} \longrightarrow \bigoplus_{v \in S}J_v(k_{\infty}) \longrightarrow B_{\infty} \longrightarrow 0, \end{align*} we deduce from Lemma \ref{Hloc} that \begin{align*} B_{\infty}^{H} = H^1(H, A_{\infty}) =0. \end{align*} Since $H$ is pro-$p$, and $B_{\infty}$ a discrete $p$-primary $H$-module, it follows that $B_{\infty}$ itself must vanish. Hence $\lambda_S(k_{\infty})$ is surjective. $\Box$ \end{proof} When $X(E/k_{\infty})$ is $\Lambda(G)$-torsion, the structure theory of torsion $\Lambda(G)$-modules (\cite[Chapter VII, $\S 4.5$]{BB}) gives a pseudoisomorphism \begin{align}\label{2vs} X(E/k_{\infty}) \longrightarrow \bigoplus_{i=1}^t \Lambda(G)/p^{a_i} \oplus \bigoplus_{j=1}^u \Lambda(G)/g_j^{b_j}.\end{align} Here, the indices $t$, $u$, $a_i$ and $b_j$ are all positive integers, and each $g_j$ can be viewed as an irreducible monic distinguished polynomial $g_j(T_1, T_2)$ (with respect to a fixed isomorphism $\Lambda(G) \cong {\bf{Z}}_p[[T_1, T_2]]$).
The characteristic power series \begin{align*} \operatorname{char}_{\Lambda(G)} X(E/k_{\infty}) &= \prod_{i=1}^t p^{a_i} \cdot \prod_{j=1}^u g_j^{b_j} \end{align*} is again well defined up to unit in $\Lambda(G)$. As in the cyclotomic setting, one uses it to define the $\Lambda(G)$-module invariants \begin{align*} \mu_{\Lambda(G)}(X(E/k_{\infty})) = \sum_{i=1}^t a_i ~~~\text{and}~~~ \lambda_{\Lambda(G)}\left(X(E/k_{\infty})\right) = \sum_{j=1}^u b_j \cdot \deg(g_j).\end{align*} \end{remark} \begin{remark}[The invariant $ \mu_{\Lambda(G)}(X(E/k_{\infty})) $.] Let us now review what is known about the invariant $\mu_{\Lambda(G)}(X(E/k_{\infty}))$. Suppose more generally that $G$ is any pro-$p$ group, and $Y$ any finitely-generated torsion $\Lambda(G)$-module. The structure theory of $\Lambda(G)$-modules shown in \cite[Chapter VII, $\S 4.5$]{BB} again gives a pseudoisomorphism analogous to $(\ref{2vs})$, and so we may define the associated invariant $\mu_{\Lambda(G)}(Y).$ Let $Y(p)$ denote the submodule of elements of $Y$ annihilated by some power of $p$. It is well known (see for instance \cite{H}) that the homology groups $H_i(G, Y)$ are finitely-generated ${\bf{Z}}_p$-modules for all $i \geq 0$, and hence that the homology groups $H_i(G, Y(p))$ are finite for all $i \geq 0.$ The invariant $\mu_{\Lambda(G)}(Y)$ is then seen to be given by the formula \begin{align}\label{muformula}p^{\mu_{\Lambda(G)}(Y)} = \prod_{i \geq 0} \lvert H_i(G, Y(p))\rvert^{(-1)^i} = \chi(G, Y(p)),\end{align} where $\chi(G, Y(p))$ is by definition the finite $G$-Euler characteristic of $Y(p)$. Given $L$ any extension of $k$ contained in $k_{\infty}$, let us write $$\mathfrak{X}(E/L) = X(E/L)/X(E/L)(p).$$ \begin{proposition} \label{generalmu} If $E$ has good ordinary reduction at $p$, and $\mathfrak{X}(E/k_{\infty})$ is finitely-generated over $\Lambda(H)$, then $\mu_{\Lambda(G)}(X(E/k_{\infty})) = \mu_E(k)$.
\end{proposition} \begin{proof} See \cite[Proposition 2.9]{CSS}; we give a sketch of the proof. Note that we have $X(E/k_{\infty})_H =H_0(H, X(E/k_{\infty})).$ Note as well that $H_1(H, X(E/k_{\infty})) =0$ by Lemma \ref{H1vanish}. Taking $H$-homology of the short exact sequence $$0 \longrightarrow X(E/k_{\infty})(p) \longrightarrow X(E/k_{\infty}) \longrightarrow \mathfrak{X}(E/k_{\infty}) \longrightarrow 0,$$ we obtain an exact sequence of $\Lambda(\Gamma)$-modules \begin{align*} 0 \longrightarrow H_1(H, \mathfrak{X}(E/k_{\infty})) &\longrightarrow H_0(H, X(E/k_{\infty})(p))\\ &\longrightarrow H_0(H, X(E/k_{\infty})) \longrightarrow H_0(H, \mathfrak{X}(E/k_{\infty})) \longrightarrow 0.\end{align*} Following \cite[Proposition 1.9]{H}, we then show that the alternating sum of $\mu_{\Lambda(\Gamma)}$-invariants along this sequence vanishes. Moreover, the $\mu_{\Lambda(\Gamma)}$-invariants of the two central terms can be computed as follows. For $H_0(H, X(E/k_{\infty})) = X(E/k_{\infty})_H,$ it is well known (see the proof of Theorem \ref{mhg} below for instance) that restriction on cohomology induces a $\Lambda(\Gamma)$-homomorphism $$\alpha: X(E/k_{\infty})_H \longrightarrow X(E/k^{\text{cyc}})$$ with $\ker(\alpha)$ finitely-generated over ${\bf{Z}}_p$ and ${\operatorname{coker}}(\alpha)$ finite. We deduce that $$\mu_{\Lambda(\Gamma)}(X(E/k_{\infty})_H) = \mu_{\Lambda(\Gamma)}(X(E/k^{\text{cyc}})) = \mu_E(k).$$ For $H_0(H, X(E/k_{\infty})(p)),$ consider the exact sequences of low-degree terms arising from the Hochschild-Serre spectral sequence, \begin{align*} 0 \longrightarrow H_0(\Gamma, H_i(H, X(E/k_{\infty})(p))) &\longrightarrow H_i(G, X(E/k_{\infty})(p))\\ &\longrightarrow H_1(\Gamma, H_{i-1}(H, X(E/k_{\infty})(p))) \longrightarrow 0.
\end{align*} We deduce that $$\chi(G, X(E/k_{\infty})(p)) = \prod_{i=0}^{1} \chi(\Gamma, H_i(H, X(E/k_{\infty})(p)))^{(-1)^i},$$ and so $$\mu_{\Lambda(G)}(X(E/k_{\infty})) = \sum_{i=0}^{1}(-1)^i \mu_{\Lambda(\Gamma)}(H_i(H, X(E/k_{\infty})(p))).$$ Putting terms together from the first (alternating sum) sequence above, we find that $\mu_{\Lambda(G)}(X(E/k_{\infty})) =$ $$\mu_E(k) + \sum_{i =0}^{1} (-1)^{i+1}\mu_{\Lambda(\Gamma)}(H_i(H, \mathfrak{X}(E/k_{\infty}))) + \sum_{i =0}^{1} (-1)^{i}\mu_{\Lambda(\Gamma)}(H_i(H, X(E/k_{\infty})(p))).$$ Recall that $H_i(H, X(E/k_{\infty}))=0$ for all $i \geq 1$ by Lemma \ref{H1vanish} and the fact that ${\operatorname{cd}}_p(H)=1$. Taking $H$-homology of the short exact sequence $$0 \longrightarrow X(E/k_{\infty})(p) \longrightarrow X(E/k_{\infty}) \longrightarrow \mathfrak{X}(E/k_{\infty}) \longrightarrow 0,$$ we obtain that $H_1(H, X(E/k_{\infty})(p)) = H_2(H, \mathfrak{X}(E/k_{\infty}))=0.$ We deduce that $$\mu_{\Lambda(G)}(X(E/k_{\infty})) =\mu_E(k) + \sum_{i =0}^{1} (-1)^{i+1}\mu_{\Lambda(\Gamma)}(H_i(H, \mathfrak{X}(E/k_{\infty}))).$$ Since we assume that $\mathfrak{X}(E/k_{\infty})$ is finitely-generated over $\Lambda(H),$ it follows that $\mathfrak{X}(E/k_{\infty})_H$ is finitely-generated over ${\bf{Z}}_p$. Thus, $$\mu_{\Lambda(\Gamma)}(H_i(H, \mathfrak{X}(E/k_{\infty})))=0.$$ In particular, $\mu_{\Lambda(G)}(X(E/k_{\infty})) = \mu_E(k)$ as claimed. $\Box$ \end{proof} \end{remark} \begin{remark}[The $G$-Euler characteristic of ${\operatorname{Sel}}(E/k_{\infty})$.] We now give a formula for the $G$-Euler characteristic of ${\operatorname{Sel}}(E/k_{\infty})$, \begin{align*} \chi(G, {\operatorname{Sel}}(E/k_{\infty})) &= \prod_{i \geq 0} \vert H^i(G, {\operatorname{Sel}}(E/k_{\infty})) \vert^{(-1)^i}, \end{align*} which in the setup described above is well defined (i.e. finite).
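As a toy numerical illustration of this alternating product (the orders below are purely hypothetical, chosen only to show how the computation goes, and are not asserted for any particular curve $E$): since ${\operatorname{cd}}_p(G) = 2$, only the terms with $i = 0, 1, 2$ can contribute, and if for instance \begin{align*} \vert H^0(G, {\operatorname{Sel}}(E/k_{\infty}))\vert = p^3, ~~~ \vert H^1(G, {\operatorname{Sel}}(E/k_{\infty}))\vert = p, ~~~ \vert H^2(G, {\operatorname{Sel}}(E/k_{\infty}))\vert = 1, \end{align*} then \begin{align*} \chi(G, {\operatorname{Sel}}(E/k_{\infty})) = p^3 \cdot p^{-1} \cdot 1 = p^2. \end{align*}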
Note that this invariant is related to the characteristic power series $\operatorname{char}_{\Lambda(G)} X(E/k_{\infty}) $ by the formula \begin{align*}\chi(G, {\operatorname{Sel}}(E/k_{\infty})) &= \vert \operatorname{char}_{\Lambda(G)} X(E/k_{\infty})(0) \vert_p^{-1}, \end{align*} where $\operatorname{char}_{\Lambda(G)} X(E/k_{\infty})(0)$ denotes the image of $\operatorname{char}_{\Lambda(G)} X(E/k_{\infty})$ under the natural augmentation map $\Lambda(G) \longrightarrow {\bf{Z}}_p$. We must first establish the following result. \begin{lemma}\label{finite} If $E$ has good ordinary reduction at each prime above $p$ in $k$, then the $p$-primary torsion subgroup $E(k_{\infty})_{p^{\infty}}$ is finite. \end{lemma} \begin{proof} See the argument of \cite[Lemma 3.12]{HV}. We present the following alternative proof. Fix a rational prime $v$ that remains inert in $k$ and does not equal $p$. Write $k_v$ to denote the completion of $k$ at the prime above $v$. Write $k_v^{{\operatorname{cyc}}}$ to denote the cyclotomic ${\bf{Z}}_p$-extension of $k_v$. By Imai's theorem \cite{Im} (cf. \cite[A.2.7]{CS2}), the $p$-primary subgroup of $E(k_v^{{\operatorname{cyc}}})$ is finite. On the other hand, the prime above $v$ in $k$ splits completely in $D_{\infty}$ by class field theory. Hence, writing $D_{\infty, w}$ to denote the union of all completions of $D_{\infty}$ at primes above $v$, we have an isomorphism of local fields $D_{\infty, w} \cong k_v$. This induces an isomorphism of Mordell-Weil groups $E(D_{\infty, w}) \cong E(k_v)$. Thus, writing $k_{\infty, \mathfrak{w}}$ to denote the union of all completions of $k_{\infty}$ at primes above $v$, we have the identifications \begin{align*} E(k_{\infty, \mathfrak{w}}) \cong E(D_{\infty, w} \cdot k_v^{{\operatorname{cyc}}}) \cong E(k_v^{{\operatorname{cyc}}}). \end{align*} Hence, the $p$-primary part of $E(k_{\infty, \mathfrak{w}})$ is seen to be finite by Imai's theorem.
Since $E(k_{\infty})_{p^{\infty}}$ injects into the $p$-primary part of $E(k_{\infty, \mathfrak{w}})$, the result follows. $\Box$ \end{proof} \begin{theorem}\label{Euler} Assume that $E$ has good ordinary reduction at all primes above $p$ in $k$, that $p \geq 5$, and that ${\operatorname{Sel}}(E/k)$ is finite. Then, the $G$-Euler characteristic of ${\operatorname{Sel}}(E/k_{\infty})$ is well defined, and given by the formula \begin{align*} \chi(G, {\operatorname{Sel}}(E/k_{\infty})) &= \frac{\vert {\mbox{\textcyr{Sh}}}(E/k)(p) \vert}{\vert E(k)_{p^{\infty}} \vert^2} \cdot \prod_{v \mid p} \vert \widetilde{E}_v(\kappa_v)(p)\vert^2 \cdot \prod_{v} \vert c_v \vert_p^{-1}. \end{align*} Here, ${\mbox{\textcyr{Sh}}}(E/k)(p)$ denotes the $p$-primary part of ${\mbox{\textcyr{Sh}}}(E/k)$, $E(k)_{p^{\infty}}$ the $p$-primary torsion subgroup of $E(k)$, $\kappa_v$ the residue field at $v$, $\widetilde{E}_v$ the reduction of $E$ over $\kappa_v$, and $c_v=[E(k_v):E_0(k_v)]$ the local Tamagawa factor at a prime $v \subset \mathcal{O}_k$. \end{theorem} \begin{proof} See for instance \cite[Theorem 4.1]{HV}. The proof is a standard computation, using the facts that (i) $X(E/k_{\infty})$ is $\Lambda(G)$-torsion (by Theorem \ref{torsion} above), (ii) $E(k_{\infty})_{p^{\infty}}$ is finite (by Lemma \ref{finite} above), and (iii) $p$ is totally ramified in $k_{\infty}$. $\Box$ \end{proof}\end{remark} \begin{remark}[$\Lambda(H)$-module structure.] Let us assume now that $\mu_E(k)=0$. We obtain the following $\Lambda(H)$-module structure theory for $X(E/k_{\infty})$. \begin{theorem} \label{mhg} Suppose that $E$ has good ordinary reduction at $p$, with $\mu_E(k)=0$.
Then, there is a $\Lambda(H)$-module isomorphism $X(E/k_{\infty})\cong \Lambda(H)^{\lambda_E(k)}.$ \end{theorem} \begin{proof} By Nakayama's lemma, $X(E/k_{\infty})$ is finitely generated over $\Lambda(H)$ if and only if $X(E/k_{\infty})_H$ is finitely generated over ${\bf{Z}}_p$, hence by duality if and only if $S(E/k_{\infty})^H$ is co-finitely generated over ${\bf{Z}}_p$. Given an integer $n \geq 0$, let $D_n$ denote the degree-$p^n$ extension of $k$ contained in $D_{\infty}$, with $D_{n}^{{\operatorname{cyc}}}$ its cyclotomic ${\bf{Z}}_p$-extension. Let $H_n = {\operatorname{Gal}}(k_{\infty}/D_{n}^{{\operatorname{cyc}}})$. Note that ${\operatorname{cd}}_p(H_n) \leq 1$. Consider the diagram $$\begin{CD} 0 @>>> S(E/k_{\infty})^{H_n} @>>> H^1(G_S(k_{\infty}), E_{p^{\infty}})^{H_n} @>>> \bigoplus_{v \in S}J_v(k_{\infty})^{H_n} \\ @. @AA{\alpha_n}A @AA{\beta_n}A @AA{\gamma_n}A @. \\ 0 @>>> S(E/D_{n}^{\text{cyc}}) @>>> H^1(G_S(D_{n}^{\text{cyc}}), E_{p^{\infty}}) @>>> \bigoplus_{v \in S}J_v(D_{n}^{\text{cyc}}).\\ \end{CD}$$ Here, the horizontal rows are exact, and the vertical maps are induced by restriction on cohomology. We have by inflation-restriction that ${\operatorname{coker}}(\beta_n) \cong H^2(H_n, E_{p^{\infty}})=0$ and that $\ker(\beta_n) \cong H^1(H_n, E_{p^{\infty}}).$ Note that $H^1(H_n, E_{p^{\infty}})$ has cardinality equal to that of $E(D_{n}^{\text{cyc}})_{p^{\infty}},$ which is finite by Imai's theorem \cite{Im}. Given $v \in S$, fix a place $w$ above $v$ in $k_{\infty}$. We can then write the local restriction map as $$ \gamma_n = \bigoplus_w \gamma_{n, w},$$ where the direct sum ranges over the primes above each $v \in S$ in $D_n$. Let $\Omega_{n, w}$ denote the decomposition group of $H_n$ at $w$.
We argue as in the proof of Lemma \ref{cycloc} that ${\operatorname{coker}}(\gamma_n) =0.$ Following \cite[Lemma 3.7]{C}, we also find that $${\operatorname{coker}}(\gamma_{n,w}) \cong H^2(\Omega_{n,w}, E_{p^{\infty}}) = 0 ~\text{~and } \ker(\gamma_{n,w}) \cong H^1(\Omega_{n,w}, E_{p^{\infty}}).$$ In particular, since the latter group is known to be finite, it follows that $\ker(\gamma_n)= \bigoplus_w \ker(\gamma_{n,w})$ is finite. It then follows from the snake lemma that $\ker(\alpha_n)$ and ${\operatorname{coker}}(\alpha_n)$ must be finite. Now, recall that $X(E/k^{{\operatorname{cyc}}})$ is $\Lambda(\Gamma)$-torsion by Theorem \ref{kato-rohrlich}. Matsuno's theorem \cite{M} then implies that $X(E/k^{{\operatorname{cyc}}})$ has no nontrivial finite $\Lambda(\Gamma)$-submodule. On the other hand, since $\mu_E(k)=0$, Hachimori and Matsuno's analogue of Kida's formula \cite{HM} implies that $X(E/D_{n}^{{\operatorname{cyc}}})$ is $\Lambda(\Gamma_n)$-torsion with $\Gamma_n = {\operatorname{Gal}}(D_{n}^{{\operatorname{cyc}}}/D_n)$ and cyclotomic Iwasawa invariants $$\lambda_E(D_n) = [D_n:k]\cdot\lambda_E(k) ~\text{ and }~ \mu_E(D_n)=\mu_E(k)=0.$$ Since $D_n$ is not totally real, it follows from Proposition 7.5 of Matsuno \cite{M} that $X(E/D_{n}^{{\operatorname{cyc}}})$ has no nontrivial finite $\Lambda(\Gamma_n)$-submodule. In particular, since $\mu_E(D_n)=0$ for each $n \geq 0$, Matsuno's theorem implies that $X(E/D_{n}^{{\operatorname{cyc}}})$ has no nontrivial finite ${\bf{Z}}_p$-submodule for any $n \geq 0$. This makes the inverse limit $X(E/k_{\infty}) = \ilim n X(E/D_{n}^{{\operatorname{cyc}}})$ ${\bf{Z}}_p$-torsionfree, from which it follows that $\ker(\alpha_n) = {\operatorname{coker}}(\alpha_n) = 0$ for any $n \geq 0$.
Thus, we find an isomorphism of ${\bf{Z}}_p$-modules $\alpha_0: X(E/k_{\infty})_{H_0} \cong X(E/k^{{\operatorname{cyc}}}).$ Let us now put $r = \lambda_E(k).$ Let $x_1, \ldots, x_r$ denote a lift to $X(E/k_{\infty})$ of a fixed ${\bf{Z}}_p$-basis of $X(E/k_{\infty})_H$. Let $I(H)$ denote the augmentation ideal of $H$ in $\Lambda(H)$. Note that $X(E/k_{\infty})_H = X(E/k_{\infty})/I(H)X(E/k_{\infty}).$ Let $Y$ denote the $\Lambda(H)$-submodule of $X(E/k_{\infty})$ generated by $x_1, \ldots, x_r$. Observe that $$I(H)(X(E/k_{\infty})/Y) = (I(H)X(E/k_{\infty})+Y)/Y = X(E/k_{\infty})/Y,$$ and so $X(E/k_{\infty})=Y$ by Nakayama's lemma. In particular, this gives an isomorphism of $\Lambda(H)$-modules $$X(E/k_{\infty})\cong \Lambda(H)^r, ~~~~~ \sum_i a_ix_i \longmapsto \sum_i a_i e_i,$$ where $e_1, \ldots, e_r$ is a standard $\Lambda(H)$-basis of $\Lambda(H)^r.$ Observe now that $X(E/k_{\infty})$ has no nontrivial finite $\Lambda(H)$-submodule, thus making it $\Lambda(H)$-torsionfree. $\Box$ \end{proof} \begin{corollary}\label{2mu} Suppose that $E$ has good ordinary reduction at each prime above $p$ in $k$, with $\mu_E(k)=0$. Then, $\mu_{\Lambda(G)}\left( X(E/k_{\infty})\right) = \mu_E(k) = 0$.\end{corollary} \begin{proof} The result follows from the argument of Theorem \ref{mhg} above, namely by using Matsuno's theorem \cite{M} and the main result of Hachimori-Matsuno \cite{HM} to deduce that $X(E/k_{\infty})$ is $\Lambda(H)$-torsionfree. $\Box$ \end{proof} We also deduce from Theorem \ref{mhg} the following consequence for the $\Lambda(H)$-corank of the $p$-primary part of the Tate-Shafarevich group ${\mbox{\textcyr{Sh}}}(E/k_{\infty})$.
Recall that we consider the short exact descent sequence of $\Lambda(H)$-modules \begin{align*} 0 \longrightarrow E(k_{\infty})\otimes {\bf{Q}}_p/{\bf{Z}}_p \longrightarrow {\operatorname{Sel}}(E/k_{\infty}) \longrightarrow {\mbox{\textcyr{Sh}}}(E/k_{\infty})(p) \longrightarrow 0, \end{align*} as well as the dual exact sequence \begin{align}\label{dualSES} 0 \longrightarrow {\mbox{\textcyr{Zh}}}(E/k_{\infty}) \longrightarrow X(E/k_{\infty}) \longrightarrow \mathcal{E}(E/k_{\infty}) \longrightarrow 0. \end{align} Here, ${\mbox{\textcyr{Zh}}}(E/k_{\infty})$ is the Pontryagin dual of ${\mbox{\textcyr{Sh}}}(E/k_{\infty})(p)$, and $\mathcal{E}(E/k_{\infty})$ is that of $E(k_{\infty}) \otimes {\bf{Q}}_p /{\bf{Z}}_p$. Recall that we let $\epsilon(E/k, 1) = \epsilon(f/k, 1)$ denote the root number of the complex $L$-function $L(E/k, s) = L(f \times \Theta_k, s)$. \begin{proposition}\label{tsrank} Assume that $p$ is odd, and moreover that $p$ does not divide the class number of $k$ if the root number $\epsilon(E/k, 1)$ equals $-1$. If $E$ has good ordinary reduction at each prime above $p$ in $k$ with $\mu_E(k)=0$, then \begin{align*}{\operatorname{rk}}_{\Lambda(H)}{\mbox{\textcyr{Zh}}}(E/k_{\infty}) = \begin{cases}\lambda_E(k) &\text{if $\epsilon(E/k, 1)=+1$} \\ \lambda_E(k) -1&\text{if $\epsilon(E/k, 1)=-1.$}\end{cases} \end{align*} \end{proposition} \begin{proof} Observe that $(\ref{dualSES})$ is a short exact sequence of finitely generated $\Lambda(H)$-modules. We know by Theorem \ref{mhg} that the $\Lambda(H)$-rank of $X(E/k_{\infty})$ is $\lambda_E(k)$. On the other hand, we claim that \begin{align}\label{MWrank}{\operatorname{rk}}_{\Lambda(H)}\mathcal{E}(E/k_{\infty}) = \begin{cases}0 &\text{if $\epsilon(E/k, 1)=+1$}\\ 1&\text{if $\epsilon(E/k, 1)=-1.$}\end{cases} \end{align} To see why this is so, let $K$ be any finite extension of $k$ contained in $k^{{\operatorname{cyc}}}$.
A simple exercise shows that $K$ is a totally imaginary quadratic extension of its maximal totally real subfield $F$. Let $D_{\infty}^K$ denote the compositum extension $KD_{\infty}$, with Galois group $\Omega_K = {\operatorname{Gal}}(D_{\infty}^K/K)$. We claim that for any such $K$, we have the rank formula \begin{align*} {\operatorname{rk}}_{\Lambda(\Omega_K)}\mathcal{E}(E/D^K_{\infty}) = \begin{cases}0 &\text{if $\epsilon(E/k, 1)=+1$} \\1&\text{if $\epsilon(E/k, 1)=-1.$}\end{cases} \end{align*} Indeed, in the first case with $\epsilon(E/k, 1)=+1$, the formula follows from the relevant nonvanishing theorem of Cornut-Vatsal \cite[Theorem 1.4]{CV} over $F$ plus the relevant rank theorem(s) of Nekovar \cite[Theorem B, Theorem B', and Corollary]{Nek}. In the second case with $\epsilon(E/k, 1)=-1$, the formula follows from the relevant nonvanishing theorem of Cornut-Vatsal \cite[Theorem 1.5]{CV} over $F$ plus the relevant rank theorem of Howard \cite[Theorem B (a)]{Ho1}. Note that to invoke the result of Howard \cite{Ho1} in the latter setting, we have used the classical result due to Iwasawa \cite{Iw} that if $p$ does not divide the class number of $k$, then $p$ does not divide the class number of any finite extension $K$ of $k$ contained in $k^{{\operatorname{cyc}}}$. Taking the inductive limit over all finite extensions $K$ of $k$ contained in $k^{{\operatorname{cyc}}}$, we obtain the stated formula $(\ref{MWrank})$. The result then follows immediately from the exactness of $(\ref{dualSES})$. $\Box$ \end{proof} \end{remark} \section{Divisibility criteria} We now discuss various divisibility criteria for the two-variable main conjecture (Conjecture \ref{2vmc} (iii) above). In particular, granted suitable hypotheses, we prove one divisibility of the two-variable main conjecture. \begin{remark}[Greenberg's criterion.] The following criterion was suggested to the author by Ralph Greenberg.
It reduces one divisibility of the two-variable main conjecture (Conjecture \ref{2vmc} (iii)) to a certain specialization criterion for finite order characters of the Galois group $\Gamma = {\operatorname{Gal}}(k^{{\operatorname{cyc}}}/k)$. Let us first fix an isomorphism \begin{align}\label{fixedisom} \Lambda(G) \cong {\bf{Z}}_p[[T_1, T_2]], ~ \left( \gamma_1, \gamma_2 \right) &\longmapsto \left( T_1 +1, T_2 +1 \right).\end{align} Here, we have fixed a topological generator $\gamma_1$ of $\Omega$, as well as a topological generator $\gamma_2$ of $\Gamma$. Fix $f \in S_2(\Gamma_0(N))$ a $p$-ordinary eigenform, as required for the construction of the $p$-adic $L$-function of Theorem \ref{2vinterpolation}. Recall that we write $X(f/k_{\infty})$ to denote the Pontryagin dual of the $p^{\infty}$-Selmer group associated to $f$ in $k_{\infty}/k$. If $f$ is the eigenform associated to an elliptic curve $E$ defined over ${\bf{Q}}$, then a standard argument allows us to make the identification $X(f/k_{\infty}) = X(E/k_{\infty})$. In what follows, we shall fix an elliptic curve $E$ over ${\bf{Q}}$ of conductor $N$ as described in the introduction, with $f$ the eigenform associated to $E$ by modularity. We shall then make the identification $X(f/k_{\infty}) = X(E/k_{\infty})$ implicitly in what follows. Let $g(T_1, T_2)$ denote the $\Lambda(G)$-characteristic power series of $X(f/k_{\infty})$, or rather its image under the fixed isomorphism $(\ref{fixedisom})$. (We take this to be zero if $X(f/k_{\infty})$ is not $\Lambda(G)$-torsion). Let $L(T_1, T_2) = L_p(f, k)(T_1, T_2)$ denote the image under $(\ref{fixedisom})$ of the two-variable $p$-adic $L$-function $L_p(f, k) \in \Lambda(G)$ associated to $f$ by Theorem \ref{2vinterpolation}. Recall that we write $\Psi$ to denote the set of finite order characters of $\Gamma = {\operatorname{Gal}}(k^{{\operatorname{cyc}}}/k)$.
Given an element $\lambda \in \Lambda(G)$ with associated power series $\lambda(T_1, T_2) \in {\bf{Z}}_p[[T_1, T_2]]$, we can and will invoke the usual Weierstrass preparation theorem for $\lambda(T_1, T_2)$ as an element of the one-variable power series ring $R[[T_1]]$ with $R= {\bf{Z}}_p[[T_2]]$. We refer the reader to the discussion in Venjakob \cite[Example 2.4, Theorem 3.1, and Corollary 3.2]{Ven} for a more general account of the situation. \begin{theorem}\label{greenberg} Suppose that $p$ does not divide the specialization $g(T_1,0)$. Assume that for each character $\psi \in \Psi$, we have the inclusion of ideals \begin{align} \label{harddiv} \left( L(T_1, \psi(T_2)) \right) \subseteq \left( g(T_1,\psi(T_2)) \right) \text{ in $\mathcal{O}_{\psi}[[T_1, T_2]]$.} \end{align} Then, we have the inclusion of ideals \begin{align*} \left( L(T_1, T_2) \right) \subseteq \left( g(T_1,T_2) \right) \text{ in ${\bf{Z}}_p[[T_1, T_2]]$.} \end{align*} \end{theorem} \begin{proof} Observe that we may write $$g(T_1,T_2) = \sum_{i=0}^{\infty}a_i(T_2) \cdot T_1^i,$$ with $a_i(T_2) \in {\bf{Z}}_p[[T_2]]$. Since we assume that $p \nmid g(T_1,0)$, there is a minimal integer $m \geq 0$ such that $a_m(0) \in {\bf{Z}}_p^{\times}$. We claim it then follows that $$L(T_1,T_2) = h(T_1,T_2) \cdot g(T_1,T_2) + r(T_1,T_2),$$ with $h(T_1,T_2)$ a power series in ${\bf{Z}}_p[[T_1, T_2]]$, and $r(T_1,T_2)$ a remainder polynomial in $T_1$ of degree less than $m$, with coefficients in ${\bf{Z}}_p[[T_2]]$. Now, the remainder term is given by $$r(T_1,T_2) = \sum_{j=0}^{m-1} c_j(T_2) \cdot T_1^j,$$ with $c_j(T_2) \in {\bf{Z}}_p[[T_2]].$ Granted the inclusion $(\ref{harddiv})$ for each $\psi \in \Psi$, we have that $$r(T_1, \psi(T_2)) = 0$$ for each $\psi \in \Psi$. It then follows from the Weierstrass preparation theorem that $$c_j(\psi(T_2)) = 0$$ for each $\psi \in \Psi$ and $j \in \lbrace 0, \ldots, m-1 \rbrace.$ Hence, we conclude that $r(T_1,T_2)=0$.
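For completeness, we sketch the division step invoked above; this is the standard division algorithm attached to the Weierstrass preparation theorem over $R[[T_1]]$ with $R = {\bf{Z}}_p[[T_2]]$ (cf. Venjakob \cite{Ven}; the auxiliary notation $w$, $P$, $b_i$ below is introduced only for this sketch). Since $a_m(0) \in {\bf{Z}}_p^{\times}$, we may factor \begin{align*} g(T_1,T_2) = w(T_1,T_2) \cdot P(T_1,T_2), \qquad P(T_1,T_2) = T_1^{m} + b_{m-1}(T_2)\cdot T_1^{m-1} + \ldots + b_{0}(T_2), \end{align*} where $w(T_1,T_2)$ is a unit in $R[[T_1]]$, and each coefficient $b_i(T_2)$ lies in the maximal ideal $(p, T_2)$ of $R$. Division of $L(T_1,T_2)$ by the distinguished polynomial $P(T_1,T_2)$, with remainder of degree less than $m$ in $T_1$, then yields the claimed decomposition.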
$\Box$ \end{proof} We obtain the following immediate consequence. \begin{corollary}\label{gpw} Assume Hypothesis \ref{XXX} (i) and (ii). Suppose that for each character $\psi \in \Psi$, we have the inclusion of ideals \begin{align} \left( L(T_1, \psi(T_2)) \right) \subseteq \left( g(T_1,\psi(T_2)) \right) \text{ in $\mathcal{O}_{\psi}[[T_1, T_2]]$.} \end{align} Then, we have the inclusion of ideals \begin{align*} \left( L(T_1, T_2) \right) \subseteq \left( g(T_1,T_2) \right) \text{ in ${\bf{Z}}_p[[T_1, T_2]]$.} \end{align*} \end{corollary} \begin{proof} Theorem \ref{greenberg} requires that $p$ does not divide the specialization $g(T_1, 0)$ of the characteristic power series, equivalently that the dihedral or anticyclotomic $\mu$-invariant associated to $f$ in the tower $D_{\infty}/k$ vanishes. Assuming Hypothesis \ref{XXX} (i) and (ii), the main result of Pollack-Weston \cite{PW} shows that this is always the case if the underlying eigenform $f$ is $p$-ordinary. $\Box$ \end{proof} \end{remark} \begin{remark}[A basechange criterion.] Let $K$ be any finite extension of $k$ contained in the cyclotomic extension $k^{{\operatorname{cyc}}}$. Let $D^K_{\infty}$ denote the compositum extension $K D_{\infty},$ with $\Omega_K = {\operatorname{Gal}}(D^K_{\infty}/K)$ the corresponding Galois group. Note that $\Omega_K$ is topologically isomorphic to ${\bf{Z}}_p.$ Let $\Psi_K$ denote the set of (primitive) characters of order $[K:k]$ of the Galois group ${\operatorname{Gal}}(K/k)$. Hence, we have the decomposition \begin{align*} \Psi = \bigcup_{k \subset K \subset k^{{\operatorname{cyc}}}} \Psi_K. \end{align*} Recall that given a character $\psi \in \Psi,$ we write $\mathcal{O}_{\psi}$ to denote the ring of integers obtained from ${\bf{Z}}_p$ by adjoining the values of $\psi.$ Let us also write $\mathcal{O}_{\Psi_K}$ to denote the ring of integers obtained by adjoining to ${\bf{Z}}_p$ the values of each of the characters in the set $\Psi_K$.
Given a power series $\lambda(T_1, T_2) \in {\bf{Z}}_p[[T_1, T_2]]$, let us write \begin{align}\label{prodspec} \lambda(T_1, T_2^K) = \prod_{\psi \in \Psi_K} \lambda(T_1, \psi(T_2))\end{align} to denote the product of specializations of $\lambda(T_1, T_2)$ to the characters of the set $\Psi_K.$ Note that this specialization product $\lambda(T_1, T_2^K)$ lies in the power series ring ${\bf{Z}}_p[[T_1, T_2^K]] = \mathcal{O}_{\Psi_K}[[T_1]]$. Note as well that we have the identifications \begin{align*} \lambda(T_1, T_2^k) = \lambda(T_1, {\bf{1}}(T_2)) = \lambda(T_1, 0) \in {\bf{Z}}_p[[T_1]]. \end{align*} \begin{proposition}\label{bccrit} Assume that for any finite extension $K$ of $k$ contained in $k^{{\operatorname{cyc}}}$, we have the inclusion of ideals \begin{align}\label{divK} \left( L(T_1, T_2^K) \right) \subseteq \left( g (T_1, T_2^K) \right) \text{ in } \mathcal{O}_{\Psi_K}[[T_1]]. \end{align} Assume additionally that the root number of the central value $L(f/ k, 1)$ is $+1$, and moreover that we have a nontrivial equality of ideals for $K=k,$ \begin{align}\label{eqk}\left( L(T_1, 0) \right) = \left( g(T_1,0) \right) \text{ in } {\bf{Z}}_p[[T_1]]. \end{align} Then, for each character $\psi \in \Psi$, we have the inclusion of ideals \begin{align*} \left( L(T_1, \psi(T_2)) \right) \subseteq \left( g(T_1, \psi(T_2))\right) \text{ in } \mathcal{O}_{\psi}[[T_1]]. \end{align*}\end{proposition} \begin{proof} Since we assume that the root number $\epsilon(f/k,1)$ is equal to $+1$, we know for instance by the nonvanishing theorems of Vatsal \cite{Va} and more generally Cornut-Vatsal \cite{CV} that the $p$-adic $L$-function $L(T_1, 0)$ does not vanish identically. Let $K$ be any finite extension of $k$ contained in $k^{{\operatorname{cyc}}}$.
Using the equality $(\ref{eqk})$, we may then divide each side of $(\ref{divK})$ by the corresponding ideals in $(\ref{eqk})$ to obtain for each extension $K$ the inclusion of ideals \begin{align}\label{quotient} \left( \frac{L(T_1, T_2^K)}{L(T_1, 0)} \right) \subseteq \left( \frac{g(T_1, T_2^K)}{g(T_1, 0)} \right) \text{ in } \mathcal{O}_{\Psi_K}[[T_1]]. \end{align} Now, the divisibility $(\ref{divK})$ implies that we have for each extension $K$ the relation $$g(T_1, T_2^K) = q(T_1, T_2^K) \cdot L(T_1, T_2^K) + r(T_1, T_2^K).$$ Here, $q(T_1, T_2^K)$ denotes some element of ${\bf{Z}}_p[[T_1, T_2^K]] = \mathcal{O}_{\Psi_K}[[T_1]]$, and $r(T_1, T_2^K)$ the corresponding remainder term. It then follows from $(\ref{quotient})$ that \begin{align*} \prod_{\psi \in \Psi_K \atop \psi \neq 1} r(T_1, \psi(T_2)) =0.\end{align*} Hence, we deduce that for each finite extension $K$ of $k$ contained in $k^{{\operatorname{cyc}}},$ there exists a nontrivial character $\psi \in \Psi_K$ such that \begin{align}\label{twisted} \left( L(T_1, \psi(T_2)) \right) \subseteq \left( g(T_1, \psi(T_2))\right) \text{ in } \mathcal{O}_{\psi}[[T_1]]. \end{align} We now argue that if the divisibility $(\ref{twisted})$ holds for one (nontrivial) character in $\Psi_K$, then it holds for all (nontrivial) characters in $\Psi_K$. To see why this is so, let $\mathcal{L}(E/k, \mathcal{W}, 1) = \mathcal{L}(f \times \Theta(\mathcal{W}), 1)$ denote the value \begin{align}\label{algL} \frac{L(f \times \Theta(\mathcal{W}), 1)}{8\pi \langle f, f \rangle}, \end{align} where $\mathcal{W}$ is any finite order character of the Galois group $G$. Recall that the value $(\ref{algL})$ is algebraic by Shimura's theorem \cite{Sh1}. In particular, for any finite order character $\rho$ of $\Omega$, the values $\mathcal{L}(f \times \Theta(\rho\psi), 1)$ with $\psi \in \Psi_K$ are Galois conjugate by Shimura's theorem.
Hence, by uniqueness of interpolation series, we deduce that the specializations $L(T_1, \psi(T_2))$ with $\psi \in \Psi_K$ are Galois conjugate. We can then deduce that if the divisibility $(\ref{twisted})$ holds for one character $\psi \in \Psi_K$, then it holds for all characters $\psi \in \Psi_K$. Taking the union of all finite extensions $K$ of $k$ contained in $k^{{\operatorname{cyc}}}$, the result follows. $\Box$ \end{proof} \begin{corollary} Keep the hypotheses of Proposition \ref{bccrit} above. If $p$ does not divide the specialization $g(T_1, 0)$, then there is an inclusion of ideals \begin{align} \label{harddiv} \left( L(T_1, \psi(T_2)) \right) \subseteq \left( g(T_1,\psi(T_2)) \right) \text{ in $\mathcal{O}_{\psi}[[T_1, T_2]]$.} \end{align} \end{corollary} \begin{proof} Apply Theorem \ref{greenberg} to Proposition \ref{bccrit} above. $\Box$ \end{proof} \begin{remark}[Some remarks on further reductions.] A simple argument shows that each finite extension $K$ of $k$ contained in $k^{{\operatorname{cyc}}}$ is a totally imaginary quadratic extension of its maximal totally real subfield $F$. Each such totally real field $F$ is abelian. Hence, we can associate to $f$ a Hilbert modular eigenform ${\bf{f}}$ over $F$ via the theory of cyclic basechange. It is then simple to see (via Artin formalism for instance) that the root number of the complex Rankin-Selberg $L$-function $L({\bf{f}} \times \Theta_K, s)$ is equal to that of $L(E/k, s) = L(f \times \Theta_k, s)$. In particular, the divisibilities $(\ref{divK})$ of Proposition \ref{bccrit} would follow from the dihedral/anticyclotomic main conjectures for ${\bf{f}}$ in the dihedral/anticyclotomic ${\bf{Z}}_p^d$-extension of $K$, where $d = [F:{\bf{Q}}]$. For results in this direction, see for instance the generalizations to totally real fields of work of Bertolini-Darmon \cite{BD} (as well as Pollack-Weston \cite{PW}) by Longo \cite{Lo3} and the author \cite{VO2}. 
For the equality condition $(\ref{eqk})$ of Proposition \ref{bccrit}, see the result of Howard \cite[Theorem 3.2.3]{Ho} with the main result of Pollack-Weston \cite{PW}. These works combined show that the inclusion $(L(T_1, 0)) \subseteq (g(T_1, 0))$ often holds, in which case the reverse inclusion $(g(T_1, 0)) \subseteq (L(T_1, 0))$ can be reduced by Howard \cite[Theorem 3.2.3(c)]{Ho} to a certain nonvanishing criterion for the associated $p$-adic $L$-functions $L(T_1, 0) \in \Lambda(\Omega)$. \end{remark} \begin{remark}[Some remarks on the setting of root number minus one.] In the setting where the root number $\epsilon(f/k, 1)$ of $L(f/k, 1)$ is equal to $-1$, we know that $L(T_1, T_2) =0$ by the functional equation for $L(T_1, T_2)$ given in Corollary \ref{FE} (derived from the fact that the complex central value $L(f/k, 1)$ vanishes). It follows that $L(T_1, T_2^K) =0$ for all finite extensions $K$ of $k$ contained in $k^{{\operatorname{cyc}}}$. Hence in this setting, the hypotheses of Proposition \ref{bccrit} do not hold. Indeed, consider the basechange setup described in the remark above, where ${\bf{f}}$ is the basechange Hilbert modular eigenform defined over the maximal totally real subfield $F$ of $K$.
The formulation of the analogous dihedral/anticyclotomic main conjecture in this setting asserts that each dual Selmer group $X({\bf{f}}/D_{\infty}^K)$ has $\Lambda(\Omega_K)$-rank one, and moreover that there is an equality of ideals \begin{align*} \left( \operatorname{char}_{\Lambda(\Omega_K)} (X({\bf{f}}/D_{\infty}^K)_{{\operatorname{tors}}} ) \right) &= \left( \operatorname{char}_{\Lambda(\Omega_K)} (\mathfrak{X}({\bf{f}}/D_{\infty}^K)) \right) \text{ in $\Lambda(\Omega_K)$.} \end{align*} Here, $X({\bf{f}}/D_{\infty}^K)_{{\operatorname{tors}}} $ denotes the $\Lambda(\Omega_K)$-torsion submodule of $X({\bf{f}}/D_{\infty}^K)$, and $\mathfrak{X}({\bf{f}}/D_{\infty}^K)$ is the $\Lambda(\Omega_K)$-torsion module defined by $\mathfrak{S}({\bf{f}}/D_{\infty}^K)/ H({\bf{f}}/D_{\infty}^K)$, where $\mathfrak{S}({\bf{f}}/D_{\infty}^K)$ is the compactified Selmer group of ${\bf{f}}$ over $D_{\infty}^K$, and $H({\bf{f}}/D_{\infty}^K)$ is the so-called Heegner submodule generated by CM points (defined on an associated quaternionic Shimura curve). We refer the reader to Howard \cite[Theorem B]{Ho1} or Perrin-Riou \cite{PR} for more details on this formulation. In any case, the dual Selmer group $X({\bf{f}}/D^K_{\infty})$ does not have a $\Lambda(\Omega_K)$-characteristic power series in this setting. If we adopt the standard convention of taking the characteristic power series to be $0$ in this case, then we obtain for each extension $K$ the trivial equality of ideals $\left( L(T_1, T_2^K) \right) = \left( g (T_1, T_2^K) \right) \text{ in } \mathcal{O}_{\Psi_K}[[T_1]]$. It therefore seems unlikely that we can do any better than Corollary \ref{gpw} for determining a two-variable divisibility criterion by considering main conjecture divisibilities via basechange. This is especially apparent after noting the shape of the two-variable main conjecture in this case, as described for instance in Howard \cite{Ho0}.
To be somewhat more precise, recall that we fixed a topological generator $\gamma_2$ of $\Gamma$ for our fixed isomorphism $(\ref{fixedisom})$. The two-variable $p$-adic $L$-function $L_p(f, k_{\infty})$ can then be written as a power series \begin{align*} \mathcal{L}_f = \mathcal{L}_{f, 0} + \mathcal{L}_{f, 1} \cdot (\gamma_2 -1) + \ldots \in \Lambda(G), \end{align*} with coefficients $\mathcal{L}_{f, n} \in {\bf{Z}}_p[[\Omega]]$. In the case where the root number $\epsilon(f/k, 1)$ is $-1$, we know by the associated functional equation(s) that $\mathcal{L}_{f,0} =0$. Another result of Howard (proving one divisibility of a conjecture made by Perrin-Riou in \cite{PR}) shows that the second term $\mathcal{L}_{f,1}$ can be expressed as a certain twisted sum of images under an appropriate $p$-adic height pairing of some associated regularized Heegner points (see \cite[Theorem A]{Ho0}). If $p$ does not divide the level $N$ of $f$, then we know by Theorem \ref{torsion} that $\operatorname{char}_{\Lambda(G)}X(f/k_{\infty})$ exists, equivalently that $g(T_1, T_2) \neq 0.$ Now, the two-variable characteristic power series $\operatorname{char}_{\Lambda(G)}X(f/k_{\infty})$ can be written as a power series \begin{align*} \mathcal{G}_f = \mathcal{G}_{f, 0} + \mathcal{G}_{f, 1} \cdot (\gamma_2 -1) + \ldots \in \Lambda(G), \end{align*} with coefficients $\mathcal{G}_{f, n} \in {\bf{Z}}_p[[\Omega]]$. Hence, if we know that $g(T_1, 0) =0$, then we find that $\mathcal{G}_{f,0}=0.$ This would reduce our task to showing $\mathcal{G}_f \mid \mathcal{L}_f$ in $\Lambda(G)$, where both $\mathcal{G}_f$ and $\mathcal{L}_f$ correspond under the fixed isomorphism $(\ref{fixedisom})$ to power series that vanish at $T_2 =0.$ It is then apparent from this fact that comparing the products of specializations to characters $\psi \in \Psi_K$ of these power series alone will not give much more information, as $\Psi_K$ contains the trivial character. \end{remark} \end{remark}
\section{INTRODUCTION} \par The feedback classification and equivalence problem has drawn the attention of researchers for several decades. A variety of approaches for its treatment has been proposed, developed and documented in the extensive existing literature. They are the following. The first one is Cartan's equivalence method, originally proposed by Élie Cartan in \cite{Cartan} and developed by his numerous successors (\cite{G,Ku}). Another one is based on the Hamiltonian formalism for optimal control systems (see \cite{B,J2}). One more is the geometric approach that uses the equivalence of distributions and affine distributions. It was first formulated by Jakubczyk in \cite{J1} and followed by \cite{Br}--\cite{ZR}. An alternative approach is the formal one proposed by Kang in \cite{KK} and further studied in \cite{K}--\cite{TR}. In this paper we employ the approach based on differential invariants introduced in \cite{VKL1986} and developed in \cite{PO}. \par Our paper contributes to the existing literature in several ways. We provide a complete description of the field of Petrov invariants of the {\it quasi-harmonic oscillation equation} (QHOE) with respect to the Lie pseudo-group of local feedback transformations. We also provide a straightforward algorithm for calculating the Petrov invariants up to the $k$-th order. Furthermore, we specify some simple coordinate-free forms of QHOE based on the derived differential invariants. \par Feedback transformations of control-parameter-dependent systems are analogous to Lie transformations of differential equations. They are often used for the reduction of such systems to canonical or normal forms and for the simplification of equations. \par The structure of this paper is as follows. In Section 2 we derive a set of basal vector fields and local translation groups (local feedback transformations, local diffeomorphisms) along their trajectories. These transformations preserve the class of QHOE.
Finally, the action of local feedback transformations on QHOE is given. In Section 3 we provide an algorithm for the calculation of the differential invariants up to the $k$-th order. Using this algorithm we find second- and third-order differential invariants of QHOE. Finally, we specify some simple coordinate-free forms of QHOE. In Section 4 we define $G$-invariant derivations and find their explicit form for QHOE. We also obtain syzygies for the third-order differential invariants. In Section 5 we show that the differential invariants obtained in Section 3 are Petrov invariants. We then calculate the dimension of the algebra of Petrov invariants. Finally, we formulate a theorem that provides a complete description of the field of Petrov invariants. In Section 6 we summarize the main results of the paper and discuss possible further studies. \par To perform the required calculations, MAPLE 15 software has been employed, especially the DifferentialGeometry and JetCalculus packages developed by I. Anderson. \section{ADMISSIBLE FEEDBACK TRANSFORMATIONS} \par It is generally known that the classification problem for ordinary differential equations (ODEs) with respect to Lie transformations is among the key ones in the theory of differential equations. The construction of the algebra of differential invariants is the first step toward the classification of equations. \par In this paper we construct the field of rational differential invariants with respect to the local feedback transformations \begin{equation} \label{transformation} \varphi: (x,y,u)\longmapsto (X(x,y),Y(x,y),U(u)) \end{equation} for equations of the following form: \begin{equation}\label{mainEq} \frac{d^{2}y}{d x^{2}}+f(y,u)=0. \end{equation} \par Here $y=y(x)$ and $u=u(x)$ are scalar functions of a scalar argument $x$. The function $f(y,u)$ is smooth, i.e. of class $C^\infty$. The function $u=u(x)$ is a control parameter. Equations (\ref{mainEq}) describe, for example, one-dimensional conservative mechanical systems.
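For instance, a pendulum whose oscillation frequency is tuned by the control belongs to this class (the particular choice of $f$ below is made purely for illustration): \begin{equation*} \frac{d^{2}y}{d x^{2}}+u^{2}\sin y=0, \end{equation*} i.e. equation (\ref{mainEq}) with $f(y,u)=u^{2}\sin y$, where $y$ is the deflection angle and the control parameter $u$ adjusts the frequency of small oscillations.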
We call equations (\ref{mainEq}) {\it{control-parameter-dependent quasi-harmonic oscillation equations}} (QHOE). \par Following the approach proposed by Sophus Lie \cite{Lie2011}, we consider the one-parameter local group of transformations \begin{equation}\label{IZtransformation} \varphi_t: (x,y,u)\longmapsto (X_t(x,y),Y_t(x,y),U_t(u)), \end{equation} instead of generalized point transformations (\ref{transformation}). Here $t\in I$ is a scalar parameter, where $I\subset \mathbb{R}$ is an open interval, $0\in I$, and $X_t(x,y),Y_t(x,y),U_t(u)$ are one-parameter families of smooth functions which depend smoothly on the parameter $t$ as well. \par We suppose that at $t=0$ transformation (\ref{IZtransformation}) is the identity, i.e. $ X_0(x,y)=x, \ \ Y_0(x,y)=y, \ \ U_0(u)=u$. Transformations (\ref{IZtransformation}) are generated by vector fields of the form \begin{equation}\label{VectorFieldX} X=A(x,y)\frac{\partial}{\partial x}+B(x,y)\frac{\partial}{\partial y}+C(u)\frac{\partial}{\partial u}, \end{equation} i.e. the transformation $\varphi_t$ is a translation along trajectories of the vector field $X$ from $t=0$ to $t$. Transformations (\ref{IZtransformation}) form a Lie pseudo-group. Let us find the feedback transformations (\ref{transformation}) preserving the class of QHOE. Denote the space of 2-jets of smooth vector-functions $$\mathbb{R}\rightarrow \mathbb{R}^2, \quad x \mapsto (y(x),u(x))$$ by $J^2(1,2)$. Let $x, y, u, y_x, u_x, y_{xx}, u_{xx}$ be canonical coordinates on $J^2(1,2)$ (see \cite{VKL1986}). This means that the Cartan forms have the following coordinate representations: $$\omega_1=dy-y_xdx,\, \omega_2=du-u_xdx,\, \omega_3=dy_x-y_{xx}dx,\, \omega_4=du_x-u_{xx}dx.$$ These 1-forms define the Cartan distribution $\mathcal{C}$ on $J^2(1,2)$: $$ \mathcal{C}: J^2(1,2)\ni \theta \mapsto \mathcal{C}(\theta)=\bigcap_{i=1}^4\ker\omega_i\subset T_\theta J^2(1,2). $$ Here $T_\theta J^2(1,2)$ is the tangent space of $J^2(1,2)$ at the point $\theta$.
\par Equation (\ref{mainEq}) defines a hypersurface \begin{equation*} \mathcal{E}_f=\{y_{xx}+f(y,u)=0\} \subset J^2(1,2). \end{equation*} \par Now let us find the vector fields (\ref{VectorFieldX}), whose flows are of the form (\ref{IZtransformation}) and preserve the class of equations (\ref{mainEq}). Hereafter, the prolongations of vector fields and infinitesimal transformations in the space of 2-jets are denoted by $X^{(2)}$ and $\varphi_t^{(2)}$, respectively (see \cite{VKL1986}). Transformations (\ref{IZtransformation}) preserve the class of equations (\ref{mainEq}) if and only if \begin{equation}\label{equality} \left( \varphi_t^{(2)}\right)^*(y_{xx}+f(y,u))=\lambda_t(y_{xx}+g_t(y,u)), \end{equation} where $\lambda_t$ is a local one-parameter family of smooth functions on the space $J^2(1,2)$, and $g_t(y,u)$ is a local one-parameter family of smooth functions of variables $y$ and $u$, such that $\lambda_0=1$ and $g_0(y,u)=f(y,u)$. Taking the derivative of (\ref{equality}) with respect to $t$ at $t=0$, we obtain: \begin{align}\label{ld} & \left. \frac{d}{dt}\right|_{t=0} \left( \varphi^{(2)}_t \right)^* (y_{xx}+f(y, u)) = \lambda_0 \left. \frac{d}{dt}\right|_{t=0} (y_{xx}+g_t(y, u)) + \\ \nonumber & +(y_{xx}+g_0(y, u))\left. \frac{d\lambda_t}{dt}\right|_{t=0}=\left. \frac{d}{dt}\right|_{t=0} g_t(y, u) + (y_{xx}+f(y, u))\left. \frac{d\lambda_t}{dt}\right|_{t=0}. \end{align} \par The left side of (\ref{ld}) is the Lie derivative along the vector field $X^{(2)}$ of the function $y_{xx}+f(y,u)$. The restriction of the last equality in (\ref{ld}) to $\mathcal{E}_f$ is: \begin{equation*} \left. L_{X^{(2)}}(y_{xx}+f(y,u))\right|_{\mathcal{E}_f} = G(y,u), \end{equation*} or \begin{equation}\label{LieDerivative} \left.X^{(2)}(y_{xx}+f(y, u))\right|_{\mathcal{E}_f} = G(y, u), \end{equation} where \begin{equation*} G(y,u)=\left.\frac{d}{dt}\right|_{t=0} g_t(y, u).
\end{equation*} Vector equation (\ref{LieDerivative}) can be rewritten as a system of scalar linear differential equations for the functions $A$ and $B$: \[ \left\{ \begin{array}{l} A_{{yy}}=0,\\ B_{{yy}}-2\,A_{{xy}}=0,\\ 2\,B_{{xy}}-A_{{xx}}-3\,fA_{{y}}=0,\\ B_{{xx}}-Cf_{{u}}-Bf_{{y}}-G \left( y,u \right) +fB_{{y}}-2\,fA_{{x}}=0. \end{array} \right. \] These equations must be satisfied identically for all functions $f$ and some function $G$. The general solution of this system is: $$ A(x, y)=\alpha x+\beta, \quad B(x, y)=\gamma+\delta y. $$ Here $\alpha,\beta,\gamma,\delta$ are arbitrary constants. Hence, any vector field preserving the class of equations (\ref{mainEq}) can be presented as a linear combination with constant coefficients of the following basis vector fields: \begin{equation*} X_1=\frac{\partial}{\partial x},\quad X_2=\frac{\partial}{\partial y}, \quad X_3=y\frac{\partial}{\partial y}, \quad X_4=x\frac{\partial}{\partial x},\quad X_5=C(u)\frac{\partial}{\partial u}. \end{equation*} The local one-parameter groups generated by these fields can be written as follows: \begin{align*} \varphi_{1, t}& :(x, y, u)\longmapsto( x+t, y, u), \quad &\varphi_{2, t}& :(x, y, u)\longmapsto( x, y+t, u),\\ \varphi_{3, t}& :(x, y, u)\longmapsto( x, e^t y, u),\quad &\varphi_{4,t}& :(x, y, u)\longmapsto( e^t x, y, u),\\ \varphi_{5, t}& :(x, y, u)\longmapsto( x, y, U_t(u)). \end{align*} Let us calculate how the transformations $\varphi_{i,t}$ act on equation (\ref{mainEq}) and on the function $f$. Transformation $\varphi_{1,t}$ does not change the form of equation (\ref{mainEq}). 
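As a sanity check, the determining system and its general solution can be verified symbolically. The sketch below (SymPy, illustration only) substitutes $A=\alpha x+\beta$, $B=\gamma+\delta y$ with arbitrary $f(y,u)$ and $C(u)$, confirms that the first three equations vanish identically, and checks that the fourth equation then defines a function $G$ depending on $(y,u)$ alone:

```python
# SymPy check of the determining system for the vector fields preserving
# the class of equations y_xx + f(y, u) = 0.
import sympy as sp

x, y, u, alpha, beta, gamma, delta = sp.symbols('x y u alpha beta gamma delta')
f = sp.Function('f')(y, u)      # arbitrary right-hand side f(y, u)
C = sp.Function('C')(u)         # arbitrary coefficient C(u)

A = alpha*x + beta
B = gamma + delta*y

eq1 = sp.diff(A, y, 2)
eq2 = sp.diff(B, y, 2) - 2*sp.diff(A, x, y)
eq3 = 2*sp.diff(B, x, y) - sp.diff(A, x, 2) - 3*f*sp.diff(A, y)
# Solving the fourth equation for G(y, u):
G = sp.diff(B, x, 2) - C*sp.diff(f, u) - B*sp.diff(f, y) \
    + f*sp.diff(B, y) - 2*f*sp.diff(A, x)

print(sp.simplify(eq1), sp.simplify(eq2), sp.simplify(eq3))  # all zero
print(sp.simplify(sp.diff(G, x)))                            # G is x-independent
```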
Applying the transformations $\varphi_{2,t}^{(2)},\dots, \varphi_{5,t}^{(2)}$ to the left-hand side of (\ref{mainEq}) results in: \begin{align*} \left( \varphi_{2, t}^{(2)}\right)^{*} \left( y_{xx}+f(y, u)\right) & =y_{xx}+f(y+t, u), \\ \left( \varphi_{3, t}^{(2)}\right)^{*} \left( y_{xx}+f(y, u)\right) & =e^{t}\left(y_{xx}+e^{-t}f(e^{t}y, u)\right),\\ \left( \varphi_{4, t}^{(2)}\right)^{*} \left( y_{xx}+f(y, u)\right) & =e^{-2t}\left(y_{xx}+e^{2t}f(y, u)\right),\\ \left( \varphi_{5, t}^{(2)}\right)^{*} \left( y_{xx}+f(y, u)\right) & =y_{xx}+f(y, U_t(u)). \end{align*} \section{DIFFERENTIAL INVARIANTS} Construct a trivial vector bundle \begin{equation*} \pi: \mathbb{R}^3\rightarrow \mathcal{B}=\mathbb{R}^{2}, \qquad \pi: (y,u,z)\mapsto (y,u). \end{equation*} Sections \begin{equation*} s_f: (y, u) \mapsto (y, u, f(y,u)) \end{equation*} of this bundle correspond to smooth functions $z=f(y,u)$ on the base $\mathbb{R}^2$. The transformations $\varphi_{2,t},\dots, \varphi_{5,t}$ induce a Lie pseudo-group acting on the total space of $\pi$; we denote this pseudo-group by $G$ and the Lie algebra generated by the vector fields below by $\mathcal{G}$. The corresponding vector fields are: \begin{equation*} Y_1=\frac{\partial}{\partial y},\quad Y_2=y\frac{\partial}{\partial y}, \quad Y_3=z\frac{\partial}{\partial z},\quad Y_4=H(u)\frac{\partial}{\partial u}. \end{equation*} \par Let $J^k(\pi)$ be the space of $k$-jets of sections of the bundle $\pi$ and $y, u, z, z_{y}, z_{u}, z_{yy}, z_{yu}, z_{uu},\ldots, z_{u\dots u}$ be the canonical coordinates on this space. Let $\varphi^{(k)}$ and $Y^{(k)}$ be the prolongations to the space $J^k(\pi)$ of a transformation $\varphi\in G$ and a vector field $Y\in \mathcal{G}$, respectively. A set $$ \mathcal{O}^k(\theta)=\bigcup_{\varphi\in G}\varphi^{(k)}(\theta) $$ is called an {\it orbit} of the point $\theta\in J^k(\pi)$. For a point $\theta\in J^k(\pi)$ define a tangent subspace $$ \mathcal{H}^k(\theta)=\mathrm{span}\bigcup_{H\in C^\infty(\mathbb{R})}\left(H(u)\frac{\partial}{\partial u}\right)^{(k)} $$ of $T_\theta J^k(\pi)$. 
A point $\theta\in J^k(\pi)$ is called {\it regular} if the dimension of the tangent subspace \begin{equation}\label{Zk} Z^k(\theta)=\mathrm{span}\left(\mathbb{R}Y_{1,\theta}^{(k)}, \mathbb{R}Y_{2,\theta}^{(k)},\mathbb{R}Y_{3,\theta}^{(k)}, \mathcal{H}^k(\theta)\right)\subset T_\theta J^k(\pi) \end{equation} is maximal and {\it singular} otherwise. An orbit is called {\it{regular}} if all its points are regular. Consider, for instance, the space of 0-jets $J^0(\pi)$. If a point $\theta$ belongs to the hyperplane $\{z=0\}\subset J^0(\pi)$ then $\dim Z^0(\theta)=2$, while $\dim Z^0(\theta)=3$ at all other points. We see that any point of the hyperplane $\{z=0\}$ is singular. Moreover, since the Lie pseudo-group $G$ acts transitively on $\{z=0\}$, this hyperplane is a singular orbit. The hyperplane $\{z=0\}$ divides the space $J^0(\pi)$ into two connected components, and the Lie pseudo-group $G$ acts transitively on each connected component. Points of the hyperplanes $\{z=0\}$, $\{z_y=0\}$ and $\{z_u=0\}$ are singular as well. A function $J$ on the space $J^{k}(\pi)$, smooth in its domain of definition and rational with respect to the variables $z_{y}, z_{u}, z_{yy}, z_{yu}, \ldots$, is called a {\it{differential invariant of order $\leq k$}} of the Lie pseudo-group $G$ if it is constant on the orbits of the prolonged Lie pseudo-group $G^{(k)}$, or, equivalently, $$ \left(\varphi^{(k)}\right)^\ast(J)=J $$ (see \cite{Al1988}). Rational differential invariants form a field (in the algebraic sense) \cite{KL2013}. Following \cite{KL2013}, we call rational differential invariants of {\it QHOE} {\it{Petrov invariants}}. Solving the system of differential equations \begin{equation} Y^{(k)}(J) = 0 \label{invariants} \end{equation} for all $Y \in\mathcal{G}$, we find the differential invariants of order $\leq k$ of $G$. The first non-trivial differential invariants appear on the space of 2-jets. We find them by solving system (\ref{invariants}) for $k=2$. 
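The absence of invariants below order two can be illustrated by a short symbolic rank computation: already on $J^1(\pi)$ the prolonged generators span the whole tangent space at a generic point, so a regular orbit has codimension zero there. A sketch (SymPy; the rows are $Y_1$, $Y_2^{(1)}$, $Y_3^{(1)}$ and the two independent components of $Y_4^{(1)}$ in the coordinates $(y,u,z,z_y,z_u)$):

```python
# Generic rank of the prolonged generators on the space of 1-jets.
import sympy as sp

y, u, z, zy, zu = sp.symbols('y u z z_y z_u')

M = sp.Matrix([
    [1, 0, 0,   0,   0],   # Y1 = d/dy
    [y, 0, 0, -zy,   0],   # Y2^{(1)} = y d/dy - z_y d/dz_y
    [0, 0, z,  zy,  zu],   # Y3^{(1)} = z d/dz + z_y d/dz_y + z_u d/dz_u
    [0, 1, 0,   0,   0],   # Y4^{(1)}: coefficient of the free value H(u)
    [0, 0, 0,   0, -zu],   # Y4^{(1)}: coefficient of the free value H_u(u)
])
print(M.rank())  # 5 at a generic point: no nonconstant invariant of order <= 1
```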
The prolongations of the vector fields $Y_i$ to the space of 2-jets can be written as: \begin{align*} Y_1^{(2)} & = Y_1,\\ Y_2^{(2)} & = Y_2 - z_{y} \frac{\partial}{\partial z_{y}} - 2 z_{yy} \frac{\partial}{\partial z_{yy}} - z_{yu} \frac{\partial}{\partial z_{yu}}, \\ Y_3^{(2)} & = Y_3 + z_{y} \frac{\partial}{\partial z_{y}} + z_{u} \frac{\partial}{\partial z_{u}} + z_{yy} \frac{\partial}{\partial z_{yy}} + z_{yu} \frac{\partial}{\partial z_{yu}} + z_{uu} \frac{\partial}{\partial z_{uu}}, \\ Y_4^{(2)} & = Y_4 - H_{u}(u) z_{u} \frac{\partial}{\partial z_{u}} - H_{u}(u) z_{yu} \frac{\partial}{\partial z_{yu}} - \Bigl( H_{uu}(u) z_{u} +2 H_{u}(u) z_{uu} \Bigr) \frac{\partial}{\partial z_{uu}}. \end{align*} Solving system (\ref{invariants}) for $k=2$ gives two second-order differential invariants: \begin{equation} \label{inv2} J_{21} =\frac{z_{yy}z}{z_y^2},\quad J_{22} =\frac{z_{yu}z}{z_{y}z_u}. \end{equation} \par The prolongations of the vector fields $Y_i$ to the space of 3-jets can be written as: \begin{align*} Y_1^{(3)} & = Y_1, \\ Y_2^{(3)} & = Y_2^{(2)} - 3 z_{yyy} \frac{\partial}{\partial z_{yyy}} - 2 z_{yyu}\frac{\partial}{\partial z_{yyu}} - z_{yuu}\frac{\partial}{\partial z_{yuu}}, \\ Y_3^{(3)} & = Y_3^{(2)} + z_{yyy}\frac{\partial}{\partial z_{yyy}} + z_{yyu}\frac{\partial}{\partial z_{yyu}} + z_{yuu}\frac{\partial}{\partial z_{yuu}} + z_{uuu}\frac{\partial}{\partial z_{uuu}}, \\ Y_4^{(3)} & = Y_4^{(2)} - H_{u}(u) z_{yyu} \frac{\partial}{\partial z_{yyu}} - \Bigl(H_{uu}(u) z_{yu} + 2 H_{u}(u) z_{yuu} \Bigr)\frac{\partial}{\partial z_{yuu}} - \\ & - \Bigl(H_{uuu}(u) z_{u} + 3 H_{uu}(u) z_{uu} + 3 H_{u}(u) z_{uuu} \Bigr) \frac{\partial}{\partial z_{uuu}} . \end{align*} Solving system (\ref{invariants}) for $k=3$, we obtain three third-order differential invariants: \begin{equation*}\label{inv3} J_{31} =\frac{z_{yyy}z^2}{z_y^3}, \quad J_{32} =\frac{z_{yyu}z^2}{z_y^2z_u}, \quad J_{33} =\frac{z_{yuu}z^2}{z_u^2z_y}-J_{22}\frac{z_{uu}z}{z_{u}^{2}}. 
\end{equation*} Expressions for higher-order differential invariants are given in Appendix A. The constructed invariants allow us to describe some QHOE in coordinate-free form. \begin{thm} 1. Equation (\ref{mainEq}) is locally equivalent to the equation \begin{equation*} \frac{d^2 y}{d x^2}+ (\alpha(u)y)^n = 0 \end{equation*} with respect to feedback transformations if and only if \begin{equation}\label{nf1} \left\{ \begin{array}{ll} J_{21}(f) &= \frac{n-1}{n}, \quad J_{22}(f) = 1, \quad J_{31}(f) = \frac{n^2 -3n +2}{n^2},\\ J_{32}(f) &= \frac{n-1}{n}, \quad J_{33}(f) = 0 \end{array} \right. \end{equation} for some natural number $n$. 2. Equation (\ref{mainEq}) is locally equivalent to the equation \[ \frac{d^2 y}{d x^2}+\alpha(u) y+\beta(u)=0\] if and only if \begin{equation}\label{nf2} J_{21}(f)=0. \end{equation} 3. Equation (\ref{mainEq}) is locally equivalent to the equation \[ \frac{d^2 y}{d x^2}+\alpha(u) y^2+\beta(u) y +\gamma(u)=0\] if and only if \begin{equation}\label{nf3} J_{31}(f)=0. \end{equation} Here $\alpha, \beta, \gamma$ are of class $C^\infty$ and $J(f)$ is the restriction of the invariant $J$ to the function $f$. \end{thm} \begin{pf} To prove the theorem it suffices to solve the differential equations (\ref{nf1}), (\ref{nf2}), (\ref{nf3}). \end{pf} \section{INVARIANT DERIVATIONS} The differential operator \begin{equation}\label{operator} \nabla = M\frac{d}{dy} + N\frac{d}{du}, \end{equation} where $M$ and $N$ are functions on $J^\infty(\pi)$, is called a $G$-{\it invariant derivation} if it commutes with the prolongation of every vector field in the Lie algebra $\mathcal{G}$. It means that the following diagram $$ \xymatrix{ {C^\infty(J^\infty{{(\pi)}})} \ar[rrr]^{\nabla} \ar[dd]_{X^{(\infty)}} &&& {C^\infty(J^\infty{{(\pi)}})} \ar[dd]^{X^{(\infty)}} \\ &&&\\ {C^\infty(J^\infty{{(\pi)}})} \ar[rrr]_{\nabla} &&& {C^\infty(J^\infty{{(\pi)}})} } $$ is commutative for every vector field $X^{(\infty)}$ with $X \in \mathcal{G}$. 
Here \begin{align*} \frac{d}{dy} = \frac{\partial}{\partial y} + z_{y} \frac{\partial}{\partial z} + z_{yy} \frac{\partial}{\partial z_{y}} + z_{yyy} \frac{\partial}{\partial z_{yy}} + \ldots,\\ \frac{d}{du} = \frac{\partial}{\partial u} + z_{u} \frac{\partial}{\partial z} + z_{uu} \frac{\partial}{\partial z_{u}} + z_{uuu} \frac{\partial}{\partial z_{uu}} + \ldots \end{align*} are the operators of total differentiation with respect to the variables $y$ and $u$, respectively (see \cite{KLR2007}). Invariant derivations let us construct new differential invariants from known ones. Indeed, let $J$ be a differential invariant and $\nabla$ be an invariant derivation. Then \begin{equation*} Y^\ast(\nabla(J))=\nabla(Y^\ast(J))=0 \end{equation*} for any vector field $Y^\ast \in \mathcal{G}^{(\infty)}$. Therefore the function $\nabla (J)$ is also a differential invariant. \begin{lem} The differential operators \begin{equation}\label{invdiff} \nabla_1 = \frac{z}{z_{y}} \frac{d}{dy}\qquad\mbox{and}\qquad \nabla_2 = \frac{z}{z_{u}} \frac{d}{du} \end{equation} are $G$-invariant derivations. \end{lem} \begin{pf} According to \cite{Lych2009}, if the functions $M$ and $N$ satisfy the system of differential equations \[ \left\{ \begin{array}{ll} X^{(\infty)}(M) + \frac{d}{dy}\left(\frac{\partial h}{\partial z_y} \right)M + \frac{d}{du}\left(\frac{\partial h}{\partial z_y} \right)N& = 0,\\ X^{(\infty)}(N) + \frac{d}{dy}\left(\frac{\partial h}{\partial z_u} \right)M + \frac{d}{du}\left(\frac{\partial h}{\partial z_u} \right)N & = 0, \end{array} \right. \] for every vector field $X \in \mathcal{G}$, then operator (\ref{operator}) is an invariant derivation. Here $h$ is the generating function of the vector field $X^{(1)}$, i.e. $h=\omega(X^{(1)})$, where $$\omega= dz-z_ydy-z_udu$$ is the Cartan form on $J^1(\pi)$. 
Resolving this system, restricted to the space of 2-jets, for the vector fields $Y_1,\ldots,Y_4$, we obtain: $$ M = \frac{C_1 z}{z_{y}},\qquad N = \frac{C_2 z}{z_{u}}, $$ where $C_1, C_2$ are arbitrary constants. Setting $C_1 = 1$, $C_2 = 1$, we obtain the invariant derivations (\ref{invdiff}). \end{pf} \par An invariant derivation is determined up to multiplication by a differential invariant. Note that \begin{equation*} [\nabla_1,\nabla_2] = (-1 + J_{22}) \nabla_1 + (1 - J_{22}) \nabla_2, \end{equation*} where $J_{22}$ is the second-order differential invariant (see (\ref{inv2})). Applying the constructed invariant derivations $\nabla_{1,2}$ to the invariants (\ref{inv2}), we obtain: \begin{equation}\label{NablaOnInvariants} \left\{ \begin{array}{ll} \nabla_1( J_{21}) & = J_{21} - 2 J_{21}^2 + J_{31}, \\ \nabla_2( J_{21}) & = J_{21} - 2 J_{21} J_{22} + J_{32}, \\ \nabla_1 (J_{22}) & = J_{22} - J_{21} J_{22} - J_{22}^2 + J_{32}, \\ \nabla_2 (J_{22}) & = J_{22} - J_{22}^2 + J_{33}. \end{array} \right. \end{equation} Syzygies for higher-order differential invariants are given in Appendix B. \section{STRUCTURE OF THE FIELDS OF INVARIANTS} We say that a set of differential invariants $J_1,\dots, J_\nu$ of order $\leq k$ forms a {\it{local basis}} of the field of Petrov invariants in an open domain $\mathcal{D}\subset J^k(\pi)$ if the following conditions hold: \begin{enumerate} \item the invariants $J_1,\dots, J_\nu$ are functionally independent in the domain $\mathcal{D}$, i.e. $dJ_1\wedge\dots\wedge dJ_\nu\neq 0$ in $\mathcal{D}$; \item in the domain $\mathcal{D}$ any differential invariant of order $\leq k$ is a function of $J_1,\dots, J_\nu$. \end{enumerate} In this case we say that the Petrov invariants $J_1,\dots, J_\nu$ are {\it{basic}} invariants of order $\leq k$ in the domain $\mathcal{D}$. \begin{lem}\label{LemmaNu} The number of basic Petrov invariants of order $\leq k$ $(k\geq 1)$ is \begin{equation}\label{nu} \nu(k)=\frac{k^2+k-2}{2}. 
\end{equation} \end{lem} \begin{pf} Construct the projections $$ \pi_{k,0}: J^k(\pi)\rightarrow \mathcal{B},\quad \pi_{k,0}: [s]^{k}_a\mapsto a, $$ where $[s]^{k}_a$ is the $k$-jet of a section $s$ of the bundle $\pi$ at the point $a\in \mathcal{B}$. The Lie pseudo-group $G$ acts transitively on the base $\mathcal{B}$ of the bundle $\pi$. Therefore, without loss of generality, we can choose the point $0\in\mathcal{B}$ with $y=0$, $u=0$. Let $N^k$ be the fibre over this point: $N^k=\pi_{k,0}^{-1}(0)$. The fibre $N^k$ is a smooth manifold with coordinates $z, z_y, z_u, z_{yy},z_{yu},\dots, z_{u\dots u}$, \begin{equation}\label{binom} \dim N^k={k+2 \choose 2}. \end{equation} The stationary subalgebra $\mathcal{G}_0$ of the point $0\in \mathcal{B}$ is generated by the vector fields $$ Z_1=y\frac{\partial}{\partial y}, \quad Z_2=z\frac{\partial}{\partial z},\quad Z_3=u l(u)\frac{\partial}{\partial u}, $$ where $l=l(u)$ is a smooth function. Evolutionary parts (see \cite{VKL1986}) of the vector fields $Z_1^{(k)}, Z_2^{(k)}, Z_3^{(k)}$ are tangent to $N^k$. Let $\overline{Z_1}^{(k)}, \overline{Z_2}^{(k)}, \overline{Z_3}^{(k)}$ be the restrictions of these fields to $N^k$. Simple calculations show that at a regular point $\theta\in N^k$ the rank of the system of tangent vectors $\overline{Z_{1,\theta}}^{(k)}, \overline{Z_{2,\theta}}^{(k)}, \overline{Z_{3,\theta}}^{(k)}$ is equal to $k+2$. Let $G_0$ be the Lie pseudo-group generated by the subalgebra $\mathcal{G}_0$ and $G_0^{(k)}$ be its prolongation to the space of $k$-jets $J^k(\pi)$. The codimension of the $G_0^{(k)}$-orbit of the point $\theta$ is equal to: $$ {k+2 \choose 2}-(k+2)=\frac{k^2+k-2}{2}. $$ But the number of basic differential invariants of order $\leq k$ equals the codimension of a regular orbit. So, we obtain formula (\ref{nu}). \end{pf} The following theorem gives a complete description of the structure of the Petrov differential invariants of QHOE. 
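The count in the lemma is easy to check numerically; the snippet below (illustration only) compares $\nu(k)$ with the codimension ${k+2\choose 2}-(k+2)$ from the proof and verifies that the increments $\mu(k)=\nu(k)-\nu(k-1)$ equal $k$:

```python
# Consistency check of the dimension count nu(k) = (k^2 + k - 2)/2.
from math import comb

def nu(k):
    # number of basic Petrov invariants of order <= k
    return (k*k + k - 2) // 2

for k in range(1, 11):
    assert nu(k) == comb(k + 2, 2) - (k + 2)   # codimension count from the proof
for k in range(2, 11):
    assert nu(k) - nu(k - 1) == k              # mu(k) = k new invariants per order
print([nu(k) for k in range(1, 6)])            # [0, 2, 5, 9, 14]
```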
\begin{thm}\label{thm7} The field of Petrov invariants of QHOE is generated by the two second-order differential invariants $J_{21}$, $J_{22}$ and the invariant derivations $\nabla_1$ and $\nabla_2$. This field separates regular orbits. \end{thm} \begin{pf} By Lemma \ref{LemmaNu}, the number of basic Petrov invariants of pure order $k$ ($k\geq 2$) is equal to $$ \mu(k)=\nu(k)-\nu(k-1)=k. $$ So, for $k=2$ we have two invariants: $J_{21}$ and $J_{22}$. Applying the invariant derivations $\nabla_1$ and $\nabla_2$ to them, we obtain four invariants of third order. Since $\mu(3)=3$, there exists one syzygy between them. Indeed, we obtain this syzygy from (\ref{NablaOnInvariants}): \begin{equation}\label{syzygy} \nabla_2( J_{21})-J_{21} + J_{21} J_{22} = \nabla_1 (J_{22})- J_{22} + J_{22}^2. \end{equation} For $k=4$ we have $\mu(4)=4$, but applying the invariant derivations to the third-order invariants we can obtain six invariants of fourth order. Therefore there exist two syzygies, which we obtain by applying the invariant derivations to (\ref{syzygy}). So, in order to construct all invariants of order $k+1$ it is sufficient to apply the operators $\nabla_1$ and $\nabla_2$ to the constructed invariants of order $k$. The fact that Petrov invariants separate regular orbits follows from \cite{Kruglikov}. \end{pf} \section{RESULTS AND DISCUSSION} \par We have completely described the field of rational differential invariants of one class of second-order ordinary differential equations with a scalar control parameter with respect to the Lie pseudo-group of feedback transformations. In particular, explicit expressions for differential invariants of different orders and for invariant derivations have been obtained. A general formula for the number of basic differential invariants has been presented, together with explicit expressions up to fifth order. Normal forms of QHOE have been specified. All the results obtained are local. \par However, we understand that we have described a limited class of conservative systems. 
Non-conservative systems are of great interest. In particular, introducing dissipation into the system can make it considerably more interesting for engineering applications, but at the same time it may lead to a significant complication of its complete description. Furthermore, different types of stochastic terms can be included in the equation considered. In this case, the powerful apparatus of mathematical statistics can be applied alongside the jet-space theory. Finally, the developed approach is applicable to classical systems. However, we believe that a possible generalization of the proposed methods may contribute to the field of quantum mechanics and quantum field theory.
\section{Introduction} Cavity optomechanics, exploring the interaction between light fields and mechanical motion, has attracted a lot of attention in the past few years for its potential application in the ultrasensitive detection of tiny mass, force, and displacement \cite{A1,A2,A3,A4}. One standard and simplest optomechanical setup is a Fabry-Perot cavity with one end mirror being a micro- or nano-mechanical vibrating object \cite{B1,B2,B3}. Various other optomechanical experimental systems have been designed and investigated, such as silica toroidal optical microresonators \cite{C1,C2,C3}, photonic crystal cavities \cite{D1,D2}, micromechanical membranes \cite{E1,E2}, optomechanical cavities confining cold atoms \cite{f1,f2}, superconducting circuits \cite{g1,g2}, and so on. Typically, when driving an optomechanical cavity by a red-detuned laser, the mechanical oscillator can be cooled to its quantum ground state \cite{h1,h2,h3}. Moreover, in this red-detuned regime, some well-known phenomena in atomic ensembles find their analogy in optomechanical systems. Specifically, under a strong driving, normal mode splitting \cite{j1,j2} (called the Autler-Townes effect in atomic physics) can be observed. On the contrary, for a relatively weak driving (much less than the cavity dissipation rate), an electromagnetically-induced-transparency-like phenomenon, called optomechanically induced transparency \cite{L1,L2,L3}, has been theoretically predicted and experimentally verified. This phenomenon can be used to slow down and even stop light signals \cite{M1,M2} in the long-lived mechanical vibrations. On the other hand, when a driving laser is applied on the blue mechanical sideband, the mechanical element of an optomechanical system can be heated, leading to phonon lasing \cite{i1,i2,i3} and probe amplification \cite{p1,p2,N1,N2}. 
In our previous work, we investigated coherent perfect transmission and absorption in a double-cavity optomechanical system driven by two pump fields on the red mechanical sideband \cite{q1}. In this paper, we study optomechanically induced amplification and perfect transparency in the same system under a different driving scheme. We find that if the double-cavity optomechanical system is driven by a red-sideband laser from one side and a blue-sideband one from the other side, and the amplitudes of the two lasers are appropriately manipulated, an optomechanically induced amplification phenomenon can occur for a nearly resonant weak signal field (probe field). In addition, by adjusting the control fields, an interesting perfect optomechanically induced transparency (with transmission coefficient rigorously equal to 1) can be realized under the same type of drive. When this perfect transmission occurs, the quantum coherence process due to the double driving totally suppresses the decoherence due to the dissipation of the mechanical resonator. This double-driving device could be used to realize signal quantum amplifiers, quantum switches, quantum memories, and so on. The rest of this paper is organized as follows. In Section II, we introduce the double-cavity optomechanical model, obtain the equations of motion for the mechanical resonator and the two cavity modes, and solve them to obtain the output fields. In Section III, we show how to realize perfect optomechanically induced transparency even with a big mechanical decay rate $\gamma_{m}$. In Section IV, we show how to realize optomechanically induced amplification of the weak signal field (probe field) while the system stays below the phonon lasing threshold. Conclusions are given in Section V. 
\section{Model and Equations} \begin{figure}[ptbh] \includegraphics[width=0.45\textwidth]{Fig1.eps}\caption{(Color online) A double-cavity optomechanical system with a mechanical resonator (MR) inserted between two fixed mirrors. The two cavities have identical cavity lengths $L$ and mode frequencies $\omega_{0}$ in the absence of radiation pressure. The coupling field and driving field, with frequencies $\omega_{c}, \omega_{d}$ and amplitudes $\varepsilon_{c}, \varepsilon_{d}$ respectively, act upon opposite sides of the double-cavity system. The probe field with frequency $\omega_{p}$ and amplitude $\varepsilon_{p}$ is injected into the left optical cavity. \label{Fig1}} \end{figure} We consider a double-cavity hybrid system with one mechanical resonator (MR) of perfect reflection inserted between two fixed mirrors of partial transmission (see Fig. 1). The MR has an eigenfrequency $\omega_{m}$ and a decay rate $\gamma_{m}$ and thus exhibits a mechanical quality factor $Q=\omega_{m}/\gamma_{m}$. Two identical optical cavities of length $L$ and frequency $\omega_{0}$ are obtained when the MR is at its equilibrium position in the absence of external excitation. We describe the two optical modes, respectively, by the annihilation (creation) operators $c_{1}$ ($c_{1}^{\dagger}$) and $c_{2}$ ($c_{2}^{\dagger}$), and the only mechanical mode by $b$ ($b^{\dagger}$). These annihilation and creation operators obey the commutation relations $[c_{i},c_{i}^{\dagger}]=1$ ($i=1,2$), $[c_{1},c_{2}]=0$, and $[b,b^{\dagger}]=1$. Two coupling fields are used to drive the double-cavity system from the left and right fixed mirrors, with their amplitudes denoted by $\varepsilon_{c}=\sqrt{2\kappa\wp_{c}/(\hbar\omega_{c})}$ and $\varepsilon_{d}=\sqrt{2\kappa\wp_{d}/(\hbar\omega_{d})}$, and one probe field is injected into the left optical cavity with amplitude denoted by $\varepsilon_{p}=\sqrt{2\kappa\wp_{p}/(\hbar\omega_{p})}$. 
Here $\wp_{c}$, $\wp_{d}$ and $\wp_{p}$ are the relevant field powers, $\kappa$ is the common decay rate of both cavity modes, and $\omega_{c}$, $\omega_{d}$, and $\omega_{p}$ are the relevant field frequencies. Then the total Hamiltonian in the rotating-wave frame of frequency $\omega_{c}+\omega_{d}$ can be written as \begin{align} H & =\hbar\Delta_{c}c_{1}^{\dagger}c_{1}+\hbar\Delta_{d}c_{2}^{\dagger}c_{2}+\hbar g_{0}(c_{2}^{\dagger}c_{2}-c_{1}^{\dagger}c_{1})(b^{\dagger}+b)\label{Eq1}\\ & +\hbar\omega_{m}b^{\dagger}b+i\hbar\varepsilon_{c}(c_{1}^{\dagger}-c_{1})+i\hbar\varepsilon_{d}(c_{2}^{\dagger}-c_{2})\nonumber\\ & +i\hbar(c_{1}^{\dagger}\varepsilon_{p}e^{-i\delta t}-c_{1}\varepsilon_{p}^{\ast}e^{i\delta t})\nonumber \end{align} with $\Delta_{c}=\omega_{0}-\omega_{c}$ ($\Delta_{d}=\omega_{0}-\omega_{d}$) being the detuning between the cavity modes and the coupling field (driving field), $\delta=\omega_{p}-\omega_{c}$ being the detuning between the probe field and the coupling field, and $g_{0}=\frac{\omega_{0}}{L}\sqrt{\frac{\hbar}{2m\omega_{m}}}$ being the hybrid coupling constant between the mechanical and optical modes. The dynamics of the system is described by the quantum Langevin equations for the relevant annihilation operators of the mechanical and optical modes \begin{align} \dot{b} & =-i\omega_{m}b-ig_{0}(c_{2}^{\dagger}c_{2}-c_{1}^{\dagger}c_{1})-\frac{\gamma_{m}}{2}b+\sqrt{\gamma_{m}}b_{in},\label{Eq2}\\ \dot{c}_{1} & =-[\kappa+i\Delta_{c}-ig_{0}(b^{\dagger}+b)]c_{1}+\varepsilon_{c}+\varepsilon_{p}e^{-i\delta t}+\sqrt{2\kappa}c_{1}^{in},\nonumber\\ \dot{c}_{2} & =-[\kappa+i\Delta_{d}+ig_{0}(b^{\dagger}+b)]c_{2}+\varepsilon_{d}+\sqrt{2\kappa}c_{2}^{in}\nonumber \end{align} with $b_{in}$ being the thermal noise on the MR with zero mean value, and $c_{1}^{in}$ ($c_{2}^{in}$) the input quantum vacuum noise from the left (right) cavity with zero mean value. Because we deal with the mean response of the system, we do not include these noise terms in the discussion that follows. 
In the absence of the probe field $\varepsilon_{p}$, Eqs. (2) can be solved with the factorization assumption $\left\langle bc_{i}\right\rangle =\left\langle b\right\rangle \left\langle c_{i}\right\rangle $ to generate the steady-state mean values \begin{align} \langle b\rangle & =b_{s}=\frac{-ig_{0}(\left\vert c_{2s}\right\vert ^{2}-\left\vert c_{1s}\right\vert ^{2})}{\frac{\gamma_{m}}{2}+i\omega_{m}},\label{Eq3}\\ \langle c_{1}\rangle & =c_{1s}=\frac{\varepsilon_{c}}{\kappa+i\Delta_{1}},\nonumber\\ \langle c_{2}\rangle & =c_{2s}=\frac{\varepsilon_{d}}{\kappa+i\Delta_{2}}\nonumber \end{align} with $\Delta_{1,2}=\Delta_{c,d}\mp g_{0}(b_{s}+b_{s}^{\ast})$ denoting the effective detunings between the cavity modes and the coupling field (driving field) when the membrane oscillator deviates from its equilibrium position. Note, in particular, that $g_{0}\left\vert b_{s}\right\vert $ is typically very small compared to $\omega_{m}$ and becomes even exactly zero in the case of $\left\vert c_{1s}\right\vert =\left\vert c_{2s}\right\vert $ ($\left\vert \varepsilon_{c}\right\vert =\left\vert \varepsilon_{d}\right\vert $). In the presence of the probe field, however, we can write each operator as the sum of its mean value and its small fluctuation ($b=b_{s}+\delta b, c_{1}=c_{1s}+\delta c_{1}, c_{2}=c_{2s}+\delta c_{2}$) to solve Eqs. (2) when the coupling field and the driving field are sufficiently strong. 
Then, keeping only the linear terms in the fluctuation operators and moving into an interaction picture by introducing $\delta b\rightarrow\delta be^{-i\omega_{m}t}$, $\delta c_{1}\rightarrow\delta c_{1}e^{-i\Delta_{1}t}$, $\delta c_{2}\rightarrow\delta c_{2}e^{-i\Delta_{2}t}$, we obtain the linearized quantum Langevin equations \begin{align} \delta\dot{b} & =-ig_{0}(c_{2s}^{\ast}\delta c_{2}e^{-i(\Delta_{2}-\omega_{m})t}-c_{1s}^{\ast}\delta c_{1}e^{-i(\Delta_{1}-\omega_{m})t})\label{Eq4}\\ & -ig_{0}(c_{2s}\delta c_{2}^{\dagger}e^{i(\Delta_{2}+\omega_{m})t}-c_{1s}\delta c_{1}^{\dagger}e^{i(\Delta_{1}+\omega_{m})t})-\frac{\gamma_{m}}{2}\delta b,\nonumber\\ \delta\dot{c}_{1} & =-\kappa\delta c_{1}+ig_{0}c_{1s}(\delta be^{-i(\omega_{m}-\Delta_{1})t}+\delta b^{\dagger}e^{i(\omega_{m}+\Delta_{1})t})\nonumber\\ & +\varepsilon_{p}e^{-i(\delta-\Delta_{1})t},\nonumber\\ \delta\dot{c}_{2} & =-\kappa\delta c_{2}-ig_{0}c_{2s}(\delta be^{-i(\omega_{m}-\Delta_{2})t}+\delta b^{\dagger}e^{i(\omega_{m}+\Delta_{2})t}).\nonumber \end{align} If the coupling field drives at the red mechanical sideband while the driving field drives at the blue sideband ($\Delta_{1}\approx\omega_{m}$, $\Delta_{2}\approx-\omega_{m}$), the hybrid system operates in the resolved-sideband regime ($\omega_{m}\gg\kappa$), the membrane oscillator has a high mechanical quality factor ($\omega_{m}\gg\gamma_{m}$), and the mechanical frequency $\omega_{m}$ is much larger than $g_{0}\left\vert c_{1s}\right\vert $ and $g_{0}\left\vert c_{2s}\right\vert $, then Eqs. (4) simplify to \begin{align} \delta\dot{b} & =-ig_{0}(c_{2s}\delta c_{2}^{\dagger}-c_{1s}^{\ast}\delta c_{1})-\frac{\gamma_{m}}{2}\delta b,\label{Eq5}\\ \delta\dot{c}_{1} & =-\kappa\delta c_{1}+ig_{0}c_{1s}\delta b+\varepsilon_{p}e^{-ixt},\nonumber\\ \delta\dot{c}_{2} & =-\kappa\delta c_{2}-ig_{0}c_{2s}\delta b^{\dagger}\nonumber \end{align} with $x=\delta-\omega_{m}$. 
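For the mean values, substituting the harmonic ansatz $\langle\delta s\rangle=\delta s_{+}e^{-ixt}+\delta s_{-}e^{ixt}$ turns Eqs.~(5) into a linear algebraic system for the Fourier amplitudes, which can be solved symbolically. A SymPy sketch (illustration only; it assumes real $c_{1s},c_{2s}$ and uses the shorthand $G=g_{0}c_{1s}$, $nG=g_{0}c_{2s}$, with `c2mc` standing for $(\delta c_{2-})^{\ast}$, the $e^{-ixt}$ component of $\langle\delta c_{2}^{\dagger}\rangle$):

```python
# Solve the e^{-ixt} components of the linearized mean-value equations.
import sympy as sp

x, kap, gm, G, n, ep = sp.symbols('x kappa gamma_m G n epsilon_p', positive=True)
bp, c1p, c2mc = sp.symbols('b_p c_1p c_2mc')

# The third equation is the Hermitian conjugate of the delta c_2 equation.
eqs = [
    sp.Eq(-sp.I*x*bp, -sp.I*(n*G*c2mc - G*c1p) - gm/2*bp),
    sp.Eq(-sp.I*x*c1p, -kap*c1p + sp.I*G*bp + ep),
    sp.Eq(-sp.I*x*c2mc, -kap*c2mc + sp.I*n*G*bp),
]
sol = sp.solve(eqs, [bp, c1p, c2mc], dict=True)[0]

# Compare with the closed-form amplitudes quoted in the text.
den = (kap - sp.I*x)*(gm/2 - sp.I*x) + G**2*(1 - n**2)
b_plus = sp.simplify(sol[bp] - sp.I*G*ep/den)
c1_plus = sp.simplify(sol[c1p]
          - ep*(-n**2*G**2 + (kap - sp.I*x)*(gm/2 - sp.I*x))/((kap - sp.I*x)*den))
c2_minus = sp.simplify(sp.conjugate(sol[c2mc])
          + n*G**2*ep/((kap + sp.I*x)*((kap + sp.I*x)*(gm/2 + sp.I*x)
                                       + G**2*(1 - n**2))))
print(b_plus, c1_plus, c2_minus)   # 0 0 0 -> agrees with the quoted solution
```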
We can examine the expectation values of the small fluctuations through the following three coupled dynamical equations \begin{align} \left\langle \delta\dot{b}\right\rangle & =-ig_{0}(c_{2s}\left\langle \delta c_{2}^{\dagger}\right\rangle -c_{1s}^{\ast}\left\langle \delta c_{1}\right\rangle )-\frac{\gamma_{m}}{2}\left\langle \delta b\right\rangle ,\label{Eq6}\\ \left\langle \delta\dot{c}_{1}\right\rangle & =-\kappa\left\langle \delta c_{1}\right\rangle +ig_{0}c_{1s}\left\langle \delta b\right\rangle +\varepsilon_{p}e^{-ixt},\nonumber\\ \left\langle \delta\dot{c}_{2}\right\rangle & =-\kappa\left\langle \delta c_{2}\right\rangle -ig_{0}c_{2s}\left\langle \delta b^{\dagger}\right\rangle. \nonumber \end{align} We assume that the steady-state solutions of the above equations have the form $\langle\delta s\rangle=\delta s_{+}e^{-ixt}+\delta s_{-}e^{ixt}$ with $s=b,c_{1},c_{2}$. Then it is straightforward to obtain the following results \begin{align} \delta b_{+} & =\frac{iG\varepsilon_{p}}{(\kappa-ix)(\frac{\gamma_{m}}{2}-ix)+G^{2}(1-n^{2})},\\ \delta c_{1+} & =\frac{\varepsilon_{p}[-n^{2}G^{2}+(\kappa-ix)(\frac{\gamma_{m}}{2}-ix)]}{(\kappa-ix)^{2}(\frac{\gamma_{m}}{2}-ix)+G^{2}(1-n^{2})(\kappa-ix)},\nonumber\\ \delta c_{2-} & =\frac{-nG^{2}\varepsilon_{p}}{(\kappa+ix)^{2}(\frac{\gamma_{m}}{2}+ix)+G^{2}(1-n^{2})(\kappa+ix)},\nonumber \end{align} where $G=g_{0}c_{1s}$ is the effective optomechanical coupling rate and $\left\vert c_{2s}/c_{1s}\right\vert ^{2}=n^{2}$ is the photon-number ratio of the two cavity modes. In deriving Eqs. (7), we have also assumed that $c_{1s}$ and $c_{2s}$ are real-valued, without loss of generality. Based on Eqs. 
(7), we can further determine the left-hand output field $\varepsilon_{outL}$ and the right-hand output field $\varepsilon_{outR}$ through the following input-output relations \cite{Walls} \begin{align} \varepsilon_{outL} & =2\kappa\langle\delta c_{1}\rangle-\varepsilon_{p}e^{-ixt},\label{Eq8}\\ \varepsilon_{outR} & =2\kappa\langle\delta c_{2}\rangle,\nonumber \end{align} where the oscillating terms can be separated if we set $\varepsilon_{outL}=\varepsilon_{outL+}e^{-ixt}+\varepsilon_{outL-}e^{ixt}$ and $\varepsilon_{outR}=\varepsilon_{outR+}e^{-ixt}+\varepsilon_{outR-}e^{ixt}$. Note that the output components $\varepsilon_{outL+}$ and $\varepsilon_{outR-}$ have the same frequency $\omega_{p}$ as the input probe field $\varepsilon_{p}$, while the output components $\varepsilon_{outL-}$ and $\varepsilon_{outR+}$ are generated at frequencies $2\omega_{c}-\omega_{p}$ and $2\omega_{d}-\omega_{p}$, respectively, in a nonlinear wave-mixing process of the optomechanical interaction. Then with Eqs. (8) we obtain \begin{align} \varepsilon_{outL+} & =2\kappa\delta c_{1+}-\varepsilon_{p},\label{Eq9}\\ \varepsilon_{outR-} & =2\kappa\delta c_{2-}\nonumber \end{align} oscillating at the frequency $\omega_{p}$ of our special interest. In this paper, we discuss perfect optomechanically induced amplification and transparency under realistic parameters of an optomechanical experiment \cite{j2}. That is, $L=25$ mm, $m=145$ ng, $\kappa=2\pi\times215$ kHz, $\omega_{m}=2\pi\times947$ kHz, and $\gamma_{m}=2\pi\times141$ Hz. In addition, the laser wavelength is $\lambda=2\pi c/\omega_{c}=1064$ nm and the mechanical quality factor is $Q=\omega_{m}/\gamma_{m}=6700$. \section{Perfect optomechanically induced transparency} Now we study the perfect optomechanically induced transparency for the probe field. 
The quadrature of the optical components at frequency $\omega_{p}$ in the output field can be defined as $\varepsilon_{T}=2\kappa\delta c_{1+}/\varepsilon_{p}$ \cite{L2}. Specifically, it can be written as \begin{align} \varepsilon_{T}=\frac{2\kappa[-n^{2}G^{2}+(\kappa-ix)(\frac{\gamma_{m}}{2}-ix)]}{(\kappa-ix)^{2}(\frac{\gamma_{m}}{2}-ix)+G^{2}(1-n^{2})(\kappa-ix)}, \end{align} whose real and imaginary parts ($Re[\varepsilon_{T}]$ and $Im[\varepsilon_{T}]$) represent the absorptive and dispersive behavior of the optomechanical system, respectively. It is well known that in a standard optomechanical system with a single optical cavity, the optomechanically induced transparency dip is not perfect because the decay rate $\gamma_{m}$ of the mechanical resonator is not zero. However, we can see from Eq. (10) that, in the double-cavity optomechanical system studied here, if we set the ratio $n=\sqrt{\gamma_{m}\kappa/2G^{2}}$, the optomechanically induced transparency dip becomes perfect even in the presence of a remarkable mechanical decay rate $\gamma_{m}$. \begin{figure}[ptbh] \centering \includegraphics[width=0.45\textwidth]{Fig2.eps}\caption{The real part of $\varepsilon_{T}$ vs. the normalized frequency detuning $x/\kappa$: $n=0$ (red-dashed) and $n=0.7$ (black-solid) with $\gamma_{m}=2\pi\times14.1$ kHz and $\wp_{c}=1$ mW. In the inset: $n=0.7$, $\gamma_{m}=2\pi\times141$ Hz and $\wp_{c}=1$ mW.} \label{Fig2} \end{figure} \begin{figure}[ptbh] \centering \includegraphics[width=0.45\textwidth]{Fig3.eps}\caption{The imaginary part of $\varepsilon_{T}$ vs. the normalized frequency detuning $x/\kappa$: $n=0$ (red-dashed) and $n=0.7$ (black-solid) with $\gamma_{m}=2\pi\times14.1$ kHz and $\wp_{c}=1$ mW.} \label{Fig3} \end{figure} To see this clearly, in Fig. 2 we plot $Re[\varepsilon_{T}]$ versus the normalized frequency $x/\kappa$ with $\gamma_{m}=2\pi\times14.1$ kHz and $\wp_{c}=1$ mW for different $n$. We can see from Fig. 2 that when $n=0$ (i.e.
the usual optomechanically induced transparency case), the transparency dip becomes shallow for a large mechanical decay rate $\gamma_{m}$ (red-dashed). However, when an additional blue-sideband driving field satisfying the condition $n=\sqrt{\gamma_{m}\kappa/2G^{2}}\approx0.7$ is applied, the transparency dip becomes perfect, exhibiting total transmission of the probe laser (black-solid). Physically, this means that the energy dissipated through the decay $\gamma_{m}$ of the mechanical resonator can be compensated by applying the right-hand driving field with amplitude $\varepsilon_{d}=\varepsilon_{c}\sqrt{\gamma_{m}\kappa/2G^{2}}$ at the blue mechanical sideband frequency. When $\omega_{p}\approx\omega_{0}$, $n=\sqrt{\gamma_{m}\kappa/2G^{2}}$, and the beat frequency $\omega_{p}-\omega_{c}=\omega_{m}$ ($x=0$), the MR is driven by a force oscillating at its eigenfrequency $\omega_{m}$ and the resonator starts to oscillate coherently. This motion generates photons at frequency $\omega_{p}$ that interfere destructively with the probe beam, leading to an optomechanically induced transparency dip. In Fig. 3, we plot the dispersion curve $Im[\varepsilon_{T}]$ versus the normalized frequency $x/\kappa$ with $\gamma_{m}=2\pi\times14.1$ kHz and $\wp_{c}=1$ mW for different $n$. Clearly, the curve with $n=0.7$ (black-solid) is much steeper than the one with $n=0$ (red-dashed) in the vicinity of $x=0$. This means that we can easily control the dispersive behavior of the optomechanical system by applying the blue-detuned driving field with amplitude $\varepsilon_{d}=n\varepsilon_{c}$, which can possibly be used to control slow light in optomechanical systems. \section{Optomechanically induced amplification} In this section, we study optomechanically induced amplification in the double-cavity optomechanical system.
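Both regimes discussed here (the perfect transparency dip at $n=\sqrt{\gamma_{m}\kappa/2G^{2}}$ and the onset of gain above it) can be checked directly from Eq. (10). The minimal numerical sketch below uses an assumed illustrative coupling $G$, since the value of $G$ corresponding to $\wp_{c}=1$ mW is not quoted explicitly:

```python
import math

# Illustrative parameters in units of 2*pi*Hz; G is an assumed effective
# optomechanical coupling, not a value quoted in the paper.
kappa   = 2 * math.pi * 215e3
gamma_m = 2 * math.pi * 14.1e3
G       = 2 * math.pi * 50e3

def eps_T(x, n):
    """Output quadrature eps_T of Eq. (10) at detuning x for amplitude ratio n."""
    num = 2 * kappa * (-n**2 * G**2 + (kappa - 1j*x) * (gamma_m/2 - 1j*x))
    den = (kappa - 1j*x)**2 * (gamma_m/2 - 1j*x) + G**2 * (1 - n**2) * (kappa - 1j*x)
    return num / den

n_crit = math.sqrt(gamma_m * kappa / (2 * G**2))  # perfect-transparency ratio

print(eps_T(0.0, 0.0).real)      # > 0: residual absorption for n = 0
print(eps_T(0.0, n_crit).real)   # ~ 0: perfect transparency dip
print(eps_T(0.0, 0.9).real)      # < 0: gain once n exceeds n_crit
```

The algebraic origin of the perfect dip is visible in the code: at $x=0$ the numerator of Eq. (10) is proportional to $\kappa\gamma_{m}/2-n^{2}G^{2}$, which vanishes exactly at $n=n_{crit}$.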
If the ratio $n>\sqrt{\gamma_{m}\kappa/2G^{2}}$, we find that $Re[\varepsilon_{T}]$ becomes negative in the vicinity of $x=0$ (see the inset in Fig. 2). This means that optomechanically induced gain (amplification) can be realized in this double-cavity system by applying a blue-detuned driving field to the right-side cavity with amplitude $\varepsilon_{d}=n\varepsilon_{c}$. Note that when the system works under the conditions $x=0$ and $n=\sqrt{1+\frac{\gamma_{m}\kappa}{2G^{2}}}$, $Re[\varepsilon_{T}]$ diverges. In addition, the system enters the parametric instability regime as $n\gtrsim1$ for an input power $\wp_{c}=1$ mW, and so we limit ourselves to the case $n\leq1$. \begin{figure}[ptbh] \centering \includegraphics[width=0.45\textwidth]{Fig4.eps}\caption{The normalized mechanical oscillation $|\kappa\delta b_{+}/\varepsilon_{p}|^{2}$ vs. the normalized frequency detuning $x/\kappa$: $n=0$ (black-dotted), $n=0.7$ (green-dotted-dashed), $n=0.8$ (blue-dashed), and $n=0.9$ (red-solid) with $\wp_{cL}=1$ mW. In the inset, we plot the normalized mechanical oscillation $|\kappa\delta b_{+}/\varepsilon_{p}|^{2}$ vs. the ratio $n$.} \label{Fig4} \end{figure} In Fig. 4, we plot the mechanical oscillation $|\kappa\delta b_{+}/\varepsilon_{p}|^{2}$ (normalized to the probe field $\varepsilon_{p}$) versus the normalized frequency $x/\kappa$ for different $n$. In the inset, we plot $|\kappa\delta b_{+}/\varepsilon_{p}|^{2}$ as a function of $n$ for $x=0$. It can be seen clearly from Fig. 4 that the mechanical oscillation peak is located at $x=0$ and increases with $n$ [$n=0$ (black-dotted), $n=0.7$ (green-dotted-dashed), $n=0.8$ (blue-dashed), $n=0.9$ (red-solid)]. When $n$ increases up to 1, the mechanical oscillation peak value increases to approximately $6.1\times10^{5}$ (see the inset in Fig. 4).
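The $n$ dependence of the mechanical response follows directly from the expression for $\delta b_{+}$ in Eqs. (7). A minimal numerical sketch, again with an assumed illustrative coupling $G$ rather than the value corresponding to $\wp_{cL}=1$ mW:

```python
import math

kappa   = 2 * math.pi * 215e3   # cavity decay rate
gamma_m = 2 * math.pi * 141.0   # mechanical decay rate
G       = 2 * math.pi * 50e3    # assumed effective coupling (illustrative)

def b_plus(x, n):
    """delta b_+ of Eqs. (7) per unit probe amplitude eps_p."""
    return 1j * G / ((kappa - 1j*x) * (gamma_m/2 - 1j*x) + G**2 * (1 - n**2))

# The peak of |kappa * delta b_+ / eps_p|^2 sits at x = 0 and grows
# monotonically with n; as n -> 1 it approaches (2G/gamma_m)^2.
peaks = [abs(kappa * b_plus(0.0, n))**2 for n in (0.0, 0.7, 0.8, 0.9, 1.0)]
print(peaks)
```

At $n=1$ the peak equals $(2G/\gamma_{m})^{2}$, which reaches the $10^{5}$ scale quoted in the text for couplings of order $2\pi\times50$ kHz.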
This means that the optomechanical effect becomes stronger for larger $n$ (up to 1) when $\omega_{p}-\omega_{c}=\omega_{m}$ ($x=0$) and $\omega_{d}-\omega_{0}=\omega_{m}$. The reason is that the blue mechanical sideband (heating sideband) of the right-hand cavity generates many phonons, which are then absorbed by the anti-Stokes processes of the red mechanical sideband (cooling sideband) in the left-hand cavity. As a result, the optomechanical effect of the double-cavity system is resonantly enhanced. \begin{figure}[ptbh] \centering \includegraphics[width=0.45\textwidth]{Fig5.eps}\caption{The normalized left-hand output energy $|\varepsilon_{outL+}/\varepsilon_{p}|^{2}$ vs. the normalized frequency detuning $x/\kappa$: $n=0$ (black-dotted), $n=0.7$ (green-dotted-dashed), $n=0.8$ (blue-dashed), and $n=0.9$ (red-solid) with $\wp_{cL}=1$ mW. In the inset, we plot the normalized output energy $|\varepsilon_{outL+}/\varepsilon_{p}|^{2}$ vs. the ratio $n$.} \label{Fig5} \end{figure} \begin{figure}[ptbh] \centering \includegraphics[width=0.45\textwidth]{Fig6.eps}\caption{The normalized right-hand output energy $|\varepsilon_{outR-}/\varepsilon_{p}|^{2}$ vs. the normalized frequency detuning $x/\kappa$: the parameters are the same as in Fig. 4. In the inset, we plot the normalized output energy $|\varepsilon_{outR-}/\varepsilon_{p}|^{2}+1$ vs. the ratio $n$.} \label{Fig6} \end{figure} In Figs. 5-6, we plot the output powers $|\varepsilon_{outL+}/\varepsilon_{p}|^{2}$ and $|\varepsilon_{outR-}/\varepsilon_{p}|^{2}$, normalized to the input probe field $\varepsilon_{p}$, versus the normalized frequency $x/\kappa$ for different $n$. It can be seen clearly from Figs. 5-6 that the output energies $|\varepsilon_{outL+}/\varepsilon_{p}|^{2}$ and $|\varepsilon_{outR-}/\varepsilon_{p}|^{2}$ reach their maximum values at $x=0$ for a given value of $n$.
When $x=0$, the normalized output energies $|\varepsilon_{outL+}/\varepsilon_{p}|^{2}$ and $|\varepsilon_{outR-}/\varepsilon_{p}|^{2}$ increase with $n$, similar to the mechanical oscillation $|\kappa\delta b_{+}/\varepsilon_{p}|^{2}$. This is because, when $x=0$, the optomechanical effect is strongest for a given value of $n$, as discussed above. The curves of the normalized output energies $|\varepsilon_{outL+}/\varepsilon_{p}|^{2}$ and $|\varepsilon_{outR-}/\varepsilon_{p}|^{2}$ have almost the same line shape, except that, as $n$ increases from zero at $x=0$, $|\varepsilon_{outL+}/\varepsilon_{p}|^{2}$ starts from 1 while $|\varepsilon_{outR-}/\varepsilon_{p}|^{2}$ starts from 0 (see the insets in Figs. 5-6). This shows that the double-cavity optomechanical system reduces to the standard one-cavity optomechanical model ($|\varepsilon_{outR-}/\varepsilon_{p}|^{2}=0$) when $n=0$. When $n$ increases up to 1, the normalized output energies $|\varepsilon_{outL+}/\varepsilon_{p}|^{2}$ and $|\varepsilon_{outR-}/\varepsilon_{p}|^{2}$ increase to approximately $1.6\times10^{5}$ (see the insets in Figs. 5-6). The reason is that the blue-detuned driving field with $\omega_{d}-\omega_{0}=\omega_{m}$ coherently enhances the oscillation of the MR (see Fig. 4), leading to optomechanically induced amplification. Thus, we can realize optomechanically induced amplification for a resonantly injected probe in the double-cavity optomechanical system by appropriately adjusting the ratio $n$ of the two strong-field amplitudes $\varepsilon_{c,d}$. \section{Conclusions} In summary, we have theoretically studied a double-cavity optomechanical system driven by a red-sideband laser from one side and a blue-sideband laser from the other side.
Our analytical and numerical results show that if the amplitude ratio of the two driving fields is adjusted such that $n>\sqrt{\gamma_{m}\kappa/2G^{2}}$, optomechanically induced amplification of a resonantly incident probe (i.e., $\omega_{p}-\omega_{c}-\omega_{m}=0$) can be realized in this system. Typically, remarkable amplification is obtained when $n\sim1$. The reason is that the Stokes processes in the blue-sideband driven cavity generate phonons in the mechanical element, and these phonons are further absorbed by the anti-Stokes processes in the red-sideband driven cavity. As a result, the optomechanical effect of the double-cavity system is resonantly enhanced. In addition, perfect optomechanically induced transparency can be realized if we set the ratio $n=\sqrt{\gamma_{m}\kappa/2G^{2}}$. Different from usual optomechanically induced transparency, this phenomenon is robust against mechanical dissipation; namely, the perfect transparency window is preserved even if the mechanical resonator has a relatively large decay rate $\gamma_{m}$. We expect that our study can be used to realize quantum signal amplifiers and light storage in quantum information processing. \begin{acknowledgments} This work is supported by the National Natural Science Foundation of China (61378094). W. Z. Jia is supported by the National Natural Science Foundation of China under Grant No. 11347001 and the Fundamental Research Funds for the Central Universities under Grant No. 2682014RC21. \end{acknowledgments} \bigskip
\section{Introduction} \label{sec:intro} Measurements of the production cross sections for both $\ensuremath{W}$ and $\ensuremath{Z}$ bosons in high energy $p\overline{p}$ collisions are important tests of the Standard Model (SM) of particle physics. At hadron colliders the $\ensuremath{W}$ and $\ensuremath{Z}$ bosons can most easily be detected through their leptonic decay modes. This paper presents measurements of $\sigma_{W} \cdot Br(\wlnu)$, $\sigma_{Z} \cdot Br(\zll)$, and their ratio, \begin{equation} R = \frac{\sigma_{W} \cdot Br(\wlnu)}{\sigma_{Z} \cdot Br(\zll)} \label{eq:rdef} \end{equation} for $\ell$ = $e$ and $\mu$ based on 72.0 $\ensuremath{\mathrm{pb}^{-1}}$ of $p\overline{p}$ collision data collected in 2002-2003 by the upgraded Collider Detector at Fermilab (CDF) at a center-of-mass energy of 1.96 $\ensuremath{\mathrm{Te\kern -0.1em V}}$. These measurements are also described in ~\cite{int:ourprl}. These measurements provide a test of SM predictions for the $\ensuremath{W}$ and $\ensuremath{Z}$ boson production cross sections, $\sigma_{W}$ and $\sigma_{Z}$, as well as a precise indirect measurement of the total decay width of the $\ensuremath{W}$ boson, $\Gamma(W)$, within the framework of the SM. This analysis is sensitive to deviations in $\Gamma(W)$ from the SM predictions at the level of about 2~$\!\%$. We also use our results to extract the leptonic branching fraction, $Br(\wlnu)$, and the Cabibbo-Kobayashi-Maskawa (CKM) matrix element, $V_{cs}$. Finally, we test the lepton universality hypothesis for the couplings of the $\ensuremath{W}$ boson to $e$ and $\mu$ leptons. 
\subsection{$\ensuremath{W}$/$\ensuremath{Z}$ production and decay} \label{sec:I.A} \begin{figure}[t] \includegraphics*[width=8.5cm]{figures/Vprod.eps} \caption{Diagrams for production and leptonic decay of a vector boson $V$ = $\ensuremath{W}$, $\ensuremath{Z}$ at leading (upper left) and next-to-leading order (others).} \label{fig:vprod} \end{figure} The $\ensuremath{W}$ and $\ensuremath{Z}$ bosons, together with the massless photon ($\gamma$), compose the bosonic fields of the unified electroweak theory proposed by Weinberg~\cite{int:weinberg}, Salam~\cite{int:salam}, and Glashow~\cite{int:glashow}. The $\ensuremath{W}$ and $\ensuremath{Z}$ bosons were discovered in 1983 using the UA1 and UA2 detectors~\cite{int:wdisc,int:wdisc2,int:zdisc,int:zdisc2} which were designed and built for this very purpose. The transverse momentum (\ensuremath{p_{T}}) distribution of the reconstructed leptons in $\wlnu$ events was used to determine the $\ensuremath{W}$ mass, while the $\ensuremath{Z}$ mass was determined by directly reconstructing the invariant mass of dilepton pairs in $Z \rightarrow \ell \ell$ events. Present experimental measurements of electroweak parameters including vector boson masses and decay widths are precise enough to provide tests of Quantum Chromodynamics (QCD) and the electroweak part of the Standard Model beyond leading order. These precise measurements not only test the electroweak theory but also provide possible windows to sectors of the theory at mass scales higher than those directly observable at current accelerator energies. These sectors enter into the electroweak observables through radiative corrections. While the parameters of the $\ensuremath{Z}$ boson have been well studied \cite{res:lepcomb}, the properties of the charged current carriers, the $\ensuremath{W}$ bosons, are known with less precision. 
In hadron-antihadron collisions the $\ensuremath{W}$ and $\ensuremath{Z}$ are predominantly produced via the processes illustrated in Fig.~\ref{fig:vprod}. The production of $p\overline{p} \rightarrow \gamma^{*}/Z$ where a quark in one hadron annihilates with an antiquark in the other hadron to produce the resulting vector boson is often referred to as the Drell-Yan~\cite{int:drellyan} production process. Calculations of the total production cross sections for $\ensuremath{W}$ and $\ensuremath{Z}$ bosons incorporate parton cross sections, parton distribution functions, higher-order QCD effects, and factors for the couplings of the different quarks and antiquarks to the $\ensuremath{W}$ and $\ensuremath{Z}$ bosons. Beyond the leading order Born processes, a vector boson $V$ can also be produced by $q(\bar{q})g$ interactions, so the Parton Distribution Functions (PDFs) of the proton and antiproton play an important role at higher orders. Theoretical calculations of the $\ensuremath{W}$ and $\ensuremath{Z}$ production cross sections have been carried out in next-to-leading order (NLO)~\cite{int:nnlo00,int:nnlo0} and next-to-next-to-leading order (NNLO)~\cite{int:nnlo4,int:nnlo1,int:nnlo2,int:nnlo3,acc:slac}. The NLO and NNLO computations used in this article are in the modified minimal-subtraction ($\overline{MS}$)~\cite{msbar,msbar2} renormalization prescription framework. The full order $\alpha_{\mathrm{s}}^{2}$ calculation has been made and includes final states containing the vector boson $V$ and up to two additional partons. The two-loop splitting function is used and the running of $\alpha_{\mathrm{s}}$ includes thresholds for heavy flavors. The NLO cross section is $\sim$ 25$\!\%$ larger than the Born-level cross section, and the NNLO cross section is an additional $\sim$ 3$\!\%$ higher. The main contribution to the calculated cross section is from $q\overline{q}$ interactions. 
The contribution of $q(\bar{q})g$ interactions to the calculated cross section is negative at the Tevatron collision energy. The decay modes of the $\ensuremath{W}$ boson are $\wlnu$ ($\ell =$ $e$, $\mu$, and $\tau$) and $q\bar{q}^{\prime}$, where the main modes $u\bar{d}$, $u\bar{s}$, $c\bar{s}$ and $c\bar{d}$ have branching ratios proportional to their corresponding CKM matrix elements. The measured value for the branching fraction of the three combined leptonic modes is 32.0 $\pm$ 0.4~$\!\%$~\cite{int:pdg}, where the remaining fraction is assigned to the hadronic decay modes. The partial width into fermion pairs is calculated at lowest order to be~\cite{int:pdg} \begin{equation} \Gamma_0(W \rightarrow {\rm{f}\bar{f}^{\prime}}) = |V_{{\rm{ff^{\prime}}}}|^2 N_{\mathrm{C}} G_{\mathrm{F}} M_{\ensuremath{W}}^{3} / (6 \sqrt{2} \pi), \label{eq:vckm} \end{equation} \noindent where $V_{{\rm{ff^{\prime}}}}$ is the corresponding CKM matrix element for quark pairs or one for leptons. $M_{\ensuremath{W}}$ is the $\ensuremath{W}$ boson mass and $G_{\mathrm{F}}$ is the Fermi coupling constant. $N_{\mathrm{C}}$ is the corresponding color factor which is three for quarks and one for leptons. The expression for the partial decay widths into quark pairs also has an additional QCD correction due to vertex graphs involving gluon exchange and electroweak corrections due to next-to-leading order graphs which alter the effective coupling at the $\ensuremath{W}$-fermion vertex for all fermions. Within the context of the Standard Model, there are also vertex and bremsstrahlung corrections~\cite{int:wdecays} that depend on the top quark and Higgs boson masses. 
The corrections can be summarized in the equation \begin{eqnarray} \Gamma(W \rightarrow {\rm{f \bar{f}^{\prime}}})_{\mathrm{SM}} = && \Gamma_0(W \rightarrow {\rm{f \bar{f}^{\prime}}}) \nonumber\\ &&\times [1+\delta_{\mathrm{V}} + \delta_{\mathrm{W(0)}} +\delta_\mu], \end{eqnarray} where $\delta_{\mathrm{W(0)}}$ is the correction to the width from loops at the $\ensuremath{W}$-fermion vertex involving the $\ensuremath{Z}$ boson or a SM Higgs boson, $\delta_{\mathrm{V}}$ arises from the boson self-energies, and $\delta_\mu$ is a correction required when the couplings are parametrized using the $\ensuremath{W}$ mass and the value of $G_{\mathrm{F}}$ from muon decay measurements~\cite{int:corrmu,int:corrmu2}. Since all of these corrections are small ($\sim$~0.35~$\!\%$), the measurement of $\Gamma(W)$ is not very sensitive to these higher order effects. Higher order QCD corrections originating from quark mass effects are also small. \subsection{Measurement of $\Gamma(W)$ from the $\ensuremath{W}$ and $\ensuremath{Z}$ cross sections} \label{sec:I.B} The width of the $\ensuremath{W}$ boson can be extracted from the measurement of the ratio $R$, which is defined in Eq.~\ref{eq:rdef}. This method was first proposed by Cabibbo in 1983 as a method to determine the number of light neutrino species~\cite{int:cabibbo} and has been adopted as a method to indirectly measure the branching ratio for the $\wlnu$ decay mode. The ratio $R$ can be expressed as \begin{equation} R = \frac{\sigma_{W}}{\sigma_{Z}} \frac{\Gamma(\wlnu)}{\Gamma(Z \rightarrow \ell \ell)} \frac{\Gamma(Z)}{\Gamma(W)}. \label{eq:ratio} \end{equation} On the right hand side of Eq.~\ref{eq:ratio}, the ratio of the $\ensuremath{W}$ and $\ensuremath{Z}$ production cross sections can be calculated from the boson couplings and knowledge of the proton structure. 
The $\ensuremath{Z}$ boson total width, $\Gamma(Z)$, and leptonic partial width, $\Gamma(Z \rightarrow \ell \ell)$, have been measured very precisely by the LEP experiments~\cite{res:lepcomb}. With the measured value of $R$ the branching ratio $Br(\wlnu) = \Gamma(\wlnu)/\Gamma(W)$ can be extracted directly from Eq.~\ref{eq:ratio}. The total width of the $\ensuremath{W}$ boson, $\Gamma(W)$, can also be determined indirectly using the SM prediction for the partial width, $\Gamma(\wlnu)$. As shown in Eq.~\ref{eq:vckm}, $\Gamma(W)$ depends on electroweak parameters and certain CKM matrix elements. We also use our measurement of the total $\ensuremath{W}$ width to constrain the associated sum over CKM matrix elements in the formula for $\Gamma(W)$ and derive an indirect value for $V_{cs}$ which is the least experimentally constrained element in the sum. Finally, the ratios of the muon and electron $\wlnu$ cross section measurements are used to determine the ratios of the coupling constants of the $\ensuremath{W}$ boson to the different lepton species, providing a test of the lepton universality hypothesis. For reference, Table~\ref{tab:history} provides a summary of previous experimental results for $\sigma_{W} \cdot Br(\wlnu)$ and $\sigma_{Z} \cdot Br(\zll)$ along with the measured values for $R$ and the extracted values of $\Gamma(W)$. The most recent direct measurement of $\Gamma(W)$ obtained by LEP is 2.150 $\pm$ 0.091 \ensuremath{\mathrm{Ge\kern -0.1em V}}~\cite{res:lepcomb}. 
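The extraction chain just described can be illustrated numerically. In the sketch below, the $\sigma_{W}/\sigma_{Z}$ ratio and the SM partial width are assumed round values chosen for illustration only, not this paper's fitted inputs; the $Z$ branching fraction is the published LEP value:

```python
# Sketch of extracting Br(W -> l nu) and Gamma(W) from the ratio R, Eq. (4).
# Inverting Eq. (4): Br(W -> l nu) = R * (sigma_Z / sigma_W) * Br(Z -> ll),
# then Gamma(W) = Gamma(W -> l nu)_SM / Br(W -> l nu).
R              = 10.90      # measured cross-section ratio (illustrative)
sigW_over_sigZ = 3.36       # assumed theoretical sigma_W / sigma_Z
Br_Z_ll        = 0.033658   # LEP value of Gamma(Z -> ll) / Gamma(Z)
Gamma_W_lnu_SM = 0.2265     # assumed SM partial width, GeV

Br_W_lnu = (R / sigW_over_sigZ) * Br_Z_ll   # leptonic branching fraction
Gamma_W  = Gamma_W_lnu_SM / Br_W_lnu        # indirect total width, GeV
print(round(Br_W_lnu, 4), round(Gamma_W, 3), "GeV")
```

With these inputs the result lands near the $\Gamma(W)\approx2.1$ GeV values of Table I, showing how a per-mille measurement of $R$ translates into a per-cent-level constraint on $\Gamma(W)$.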
\begin{table*}[t] \caption{Previous measurements of the $\ensuremath{W}$ and $\ensuremath{Z}$ production cross sections times branching ratios along with the measured values of $R$ and the extracted values of $\Gamma(W)$.} \centering{ \begin{tabular}{l c c c c c r} \hline \hline Experiment & $\sqrt{s}$ & Mode & $\sigma_{W} \cdot Br(\wlnu)$ & $\sigma_{Z} \cdot Br(\zll)$ & $R$ & $\Gamma(W)$ \\ & ($\ensuremath{\mathrm{Te\kern -0.1em V}}$) & & ($\ensuremath{\mathrm{nb}}$) & ($\ensuremath{\mathrm{pb}}$) & & ($\ensuremath{\mathrm{Ge\kern -0.1em V}}$) \\ \hline CDF(Run I)~\cite{int:cdf_sigmas,int:cdf_ratio1,int:cdf_ratio2,int:cdf_zmm,acc:cdfzy} & 1.80 & $e$ & 2.49 $\pm$ 0.12 & 231 $\pm$ 12 & 10.90 $\pm$ 0.43 & 2.064 $\pm$ 0.084 \\ D\O(Run IA)~\cite{int:d0_A} & 1.80 & $e$ & 2.36 $\pm$ 0.15 & 218 $\pm$ 16 & & \\ D\O(Run IA)~\cite{int:d0_A} & 1.80 & $\mu$ & 2.09 $\pm$ 0.25 & 178 $\pm$ 31 & & \\ D\O(Run IA)~\cite{int:d0_A,int:d0_A2} & 1.80 & $e+\mu$ & & & 10.90 $\pm$ 0.49 & 2.044 $\pm$ 0.093 \\ D\O(Run IB)~\cite{int:d0_B} & 1.80 & $e$ & 2.31 $\pm$ 0.11 & 221 $\pm$ 11 & 10.43 $\pm$ 0.27 & 2.17 $\pm$ 0.07 \\ \hline \hline \end{tabular} } \label{tab:history} \end{table*} \subsection{Overview of this measurement}\label{sec:I.C} The signature of high transverse momentum leptons from $\ensuremath{W}$ and $\ensuremath{Z}$ decay is very distinctive in the environment of hadron collisions. As such, the decay of $\ensuremath{W}$ and $\ensuremath{Z}$ bosons into leptons provides a clean experimental measurement of their production rate. 
Experimentally, the cross sections times branching ratios are calculated from \label{sigmas_formulae} \begin{equation} \sigma_{W} \cdot Br(\wlnu) =\frac{N_W^{\mathrm{obs}}-N_W^{\mathrm{bck}}}{A_W \cdot \epsilon_W \cdot \int {\cal L} dt} \label{eq:wsigma} \end{equation} \begin{equation} \sigma_{Z} \cdot Br(\zll) =\frac{N_Z^{\mathrm{obs}}-N_Z^{\mathrm{bck}}}{A_Z \cdot \epsilon_Z \cdot \int {\cal L} dt}, \label{eq:zsigma} \end{equation} where $N_W^{\mathrm{obs}}$ and $N_Z^{\mathrm{obs}}$ are the numbers of $\wlnu$ and $Z \rightarrow \ell \ell$ candidates observed in the data; $N_W^{\mathrm{bck}}$ and $N_Z^{\mathrm{bck}}$ are the numbers of expected background events in the $\ensuremath{W}$ and $\ensuremath{Z}$ boson candidate samples; $A_W$ and $A_Z$ are the acceptances of the $\ensuremath{W}$ and $\ensuremath{Z}$ decays, defined as the fraction of these decays satisfying the geometric constraints of our detector and the kinematic constraints of our selection criteria; $\epsilon_W$ and $\epsilon_Z$ are the combined efficiencies for identifying $\ensuremath{W}$ and $\ensuremath{Z}$ decays falling within our acceptances; and $\int {\cal L} dt$ is the integrated luminosity of our data samples. In measuring the ratio of the cross sections some of the inputs and their experimental uncertainties cancel. The strategy of this measurement is to select $\ensuremath{W}$ and $\ensuremath{Z}$ boson decays with one or both leptons ($e$ or $\mu$) falling within the central region of the CDF detector. This region is well instrumented and understood and has good detection efficiencies for both lepton species. Using common lepton selection criteria (contributing to the factors $\epsilon_W$ and $\epsilon_Z$) for the $\ensuremath{W}$ and $\ensuremath{Z}$ channels has the great advantage of decreasing the systematic uncertainty in the measurement of $R$. 
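Equations (5) and (6) amount to a simple counting-experiment calculation. The sketch below uses hypothetical placeholder numbers, not the yields measured in this analysis:

```python
# Counting-experiment form of Eqs. (5)-(6); every number below is a
# hypothetical placeholder chosen only to illustrate the arithmetic.
def xsec_times_br_pb(n_obs, n_bck, acceptance, efficiency, lumi_pb):
    """sigma * Br in pb from observed counts, expected background,
    acceptance A, efficiency eps, and integrated luminosity in pb^-1."""
    return (n_obs - n_bck) / (acceptance * efficiency * lumi_pb)

# A W-like channel: 38000 candidates, 4000 expected background,
# A_W = 0.24, eps_W = 0.80, and 72 pb^-1 of data.
sigma_br = xsec_times_br_pb(38000, 4000, 0.24, 0.80, 72.0)
print(round(sigma_br), "pb")
```

In the ratio $R$ the luminosity cancels exactly, and shared lepton-selection efficiencies largely cancel as well, which is the motivation for the common-selection strategy described above.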
The resulting smaller systematic uncertainty offsets the expected increase in statistical uncertainty originating from the requirement of a common central lepton. For each lepton species, the selection criteria are optimized to obtain the least overall experimental uncertainty. The measurement of the ratio $R$ is sensitive to new physics processes which change the $\ensuremath{W}$ or $\ensuremath{Z}$ production cross sections or the $\wlnu$ branching ratio. The $\wlnu$ branching ratio could be directly affected by new decay modes of the $\ensuremath{W}$ boson, such as supersymmetric decays that do not similarly couple to the $\ensuremath{Z}$ boson. A new resonance at a higher mass scale that decays to $\ensuremath{W}$ or $\ensuremath{Z}$ bosons may change the production cross sections. One example of a particle with a larger mass is the top quark at $m_{\mathrm{t}} =$ 174.3 $\pm$ 5.1 $\ensuremath{\GeV\!/c^2}$, which decays to a $\ensuremath{W}$ boson and a bottom quark~\cite{int:pdg}. In $p\overline{p}$ collisions at $\sqrt{s}$ = 1.8~$\ensuremath{\mathrm{Te\kern -0.1em V}}$ the production cross section for $\mathrm{t} \bar{\mathrm{t}}$ pairs is 6.5$^{+1.7}_{-1.4}$ \ensuremath{\mathrm{pb}}~\cite{int:cdf_top}, about 3000 times smaller than direct $\ensuremath{W}$ boson production~\cite{int:cdf_sigmas}. The decays of $t \bar{t}$ pairs which result in the production of two $\ensuremath{W}$ bosons should change the measured value of $R$ by about 7 $\times$ 10$^{-4}$, which is well below our sensitivity. The total width of the $\ensuremath{W}$ boson can also get contributions from processes beyond the SM. For example, in supersymmetry, the decay W$^{+}\rightarrow\chi^{+}\chi^{0}$ may be possible if the charginos and neutralinos are light~\cite{int:susy} and so a precise measurement of $\Gamma(W)$ can constrain the properties of these particles. 
\subsection{Outline of the paper} \label{sec:I.D} This paper is organized as follows: in Sec.~\ref{sec:exp} the CDF detector is described, with particular attention given to the subdetectors essential in the identification of charged leptons and the inference of neutrinos. Section~\ref{sec:data} describes the data samples used in this analysis, and the selection of the $\ensuremath{W}$ and $\ensuremath{Z}$ candidate events is described in Sec.~\ref{sec:evsel}. Section~\ref{sec:acc} describes the calculation of the geometric and kinematic acceptances of our candidate samples, and the methods used to determine the efficiencies for identifying events within our acceptances are presented in Sec.~\ref{sec:eff}. The estimation of the contributions to our candidate samples from background processes are discussed in Sec.~\ref{sec:backg}, and finally the calculation of the cross sections along with the resulting value of $R$ and other extracted quantities are summarized in Sec.~\ref{sec:results}. \section{The Experimental Apparatus} \label{sec:exp} The data used for the measurements reported in this note were collected with the upgraded Collider Detector (CDF)~\cite{det:tdr} at the Fermilab Tevatron $p\overline{p}$ collider. Detector upgrades were made to accommodate the higher luminosities and new beam conditions resulting from concurrent upgrades to the Tevatron accelerator complex. In addition to the increases in luminosity, the $p\overline{p}$ center of mass energy was also increased from $\sqrt{s} =$ 1.80 $\ensuremath{\mathrm{Te\kern -0.1em V}}$ to $\sqrt{s} =$ 1.96 $\ensuremath{\mathrm{Te\kern -0.1em V}}$. The relatively small change in beam energies leads to a substantial increase in the production cross sections for high-mass objects such as $\ensuremath{W}$/$\ensuremath{Z}$ bosons ($\sim$~9~$\!\%$) and top quark pairs ($\sim$~30~$\!\%$). We highlight the upgrades to the Run I detectors and electronics in the following sections. 
\subsection{The CDF II Detector} \label{subsec:det} CDF is a general-purpose detector~\cite{det:tdr,det:det,det:dettop} designed to detect particles produced in $p\overline{p}$ collisions. As illustrated in Fig.~\ref{fig:det_long}, the detector has a cylindrical layout centered on the accelerator beamline. Tracking detectors are installed in the region directly around the interaction point to reconstruct charged-particle trajectories inside a 1.4 $\ensuremath{\mathrm{T}}$ uniform magnetic field (along the proton beam direction). The field is produced by a 5~$\mathrm{m}$ long superconducting solenoid located at the outer radius of the tracking region (1.5~$\mathrm{m}$). Calorimeter modules are arranged in a projective tower geometry around the outside of the solenoid to provide energy measurements for both charged and neutral particles. The outermost part of the detector consists of a series of drift chambers used to detect muons which are minimum-ionizing particles that typically pass through the calorimeter. \begin{figure}[t] \includegraphics*[width=8.5cm]{figures/mydet_long.eps} \caption{Elevation view of half of the CDF Run II detector.} \label{fig:det_long} \end{figure} The $z$-axis of the CDF coordinate system is defined to be along the direction of the incoming protons. A particle trajectory is then described by $\theta$, the polar angle relative to the incoming proton beam; $\phi$, the azimuthal angle about this beam axis; and $z_{0}$, the intersection point of the particle trajectory with the beam axis. The pseudorapidity of a particle trajectory is defined as $\eta = -\ln(\tan(\theta/2))$. The transverse momentum, $\ensuremath{p_{T}}$, is the component of the momentum projected on a plane perpendicular to the beam axis. Similarly, the transverse energy, $\ensuremath{E_{T}}$, of a shower or an individual calorimeter tower is given by $E \cdot \sin\theta$. 
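These kinematic definitions translate directly into code; a minimal sketch of $\eta$ and $\ensuremath{E_{T}}$ as defined above:

```python
import math

# Kinematic variables as defined in the text:
#   eta = -ln(tan(theta/2)),  E_T = E * sin(theta)
def pseudorapidity(theta):
    """Pseudorapidity for a polar angle theta in radians."""
    return -math.log(math.tan(theta / 2))

def transverse_energy(E, theta):
    """Transverse energy of a shower or tower with energy E at polar angle theta."""
    return E * math.sin(theta)

# A particle at 90 degrees to the beam has eta = 0 and E_T = E;
# smaller polar angles (more forward) give larger |eta|.
print(round(pseudorapidity(math.pi / 2), 12))
print(transverse_energy(50.0, math.pi / 2))
```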
The total transverse energy in an event is given by a sum over all calorimeter towers $\sum_iE_{T}^i\hat{n}_i$ where $E_{T}^i$ is the transverse energy measured in the $i$th tower and $\hat{n}_i$ is the projection of the vector pointing from the event vertex to the $i$th calorimeter tower onto the plane perpendicular to the beam axis (unit normalized). The vector sum of transverse energies measured in the calorimeter is corrected to account for muons, which deposit only a fraction of their energy in the calorimeter. The missing transverse energy in an event is the vector of equal magnitude opposite to this vector sum of transverse energies. Fixed points on the detector are described using polar coordinates ($r$,$\phi$,$z$), where $r$ is the radial distance from the beam axis, $\phi$ is the azimuthal direction about the beam axis, and $z$ is the distance from the detector center in the direction along the beam axis. In some cases we also use a detector pseudorapidity variable, $\eta_{{\mathrm{det}}}$, to refer to fixed locations within the detector. This variable is based on the standard definition of pseudorapidity given above, where the angle $\theta$ is redefined in the context of a fixed location as $\theta = \arctan(r/z)$. \subsection{Tracking System} \label{subsec:track} All of the detectors in the inner tracking region have been replaced for Run II. The new silicon tracking system consists of three concentric detectors located just outside the beam interaction region. In combination, these detectors provide high resolution tracking coverage out to $\absdeteta <$~2. For the measurements presented here, silicon tracking information is incorporated solely to aid in the rejection of cosmic ray events from our muon samples. The relevant hit information comes from the Silicon Vertex Detector (SVX-II) \cite{det:svx} which contains five layers of double-sided micro-strip detectors at radii of 2.4 to 10.7~$\mathrm{cm}$ from the center of the detector.
The SVX-II detector consists of three barrels divided into 12 wedges in $\phi$. In total, the three barrels cover roughly 45~$\mathrm{cm}$ along the $z$-axis on each side of the detector interaction point. The new open-cell drift chamber referred to as the Central Outer Tracker (COT) \cite{det:cot,det:cot2} sits directly outside of the silicon tracking detectors in the radial direction. The measured momenta and directions of the high $\ensuremath{p_{T}}$ lepton candidates in our event samples are obtained from track reconstruction based solely on COT hit information. The chamber consists of eight superlayers of 310~$\mathrm{cm}$ length cells at radii between 40 and 132~$\mathrm{cm}$ from the beam axis. Each superlayer contains 12 layers of sense wires strung between alternating layers of potential wires. The wires in four of the superlayers (axial layers) are strung to be parallel to the beam axis, providing particle track reconstruction in the transverse plane. In the other four superlayers (stereo layers), the wires are strung at $\pm$~2 degree angles with respect to the beam axis to allow also for particle tracking in the $z$-direction. The two superlayer types are alternated in the chamber within the eight radial layers starting with the innermost stereo layer. The COT chamber has over 30,000 readout channels, roughly five times the number in the Run I tracking chamber \cite{det:cotrun1}. Particles traversing the central region of the detector ($\absdeteta <$1) are expected to be measured by all eight superlayers. The COT is filled with a gas mixture of 50$\!\%$ argon and 50$\!\%$ ethane. This mixture was chosen to ensure a fast drift velocity ($\sim$ 50 $\mu \mathrm{m}/\mathrm{ns}$) compatible with the short interval between beam bunch crossings and the expected rise in instantaneous luminosity. The maximum drift distance in the chamber is 0.88~$\mathrm{cm}$ corresponding to a drift time on the order of 200~$\mathrm{ns}$. 
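As a quick consistency check of the COT numbers quoted above, the maximum drift time follows directly from the maximum drift distance and the drift velocity:

```python
# Values quoted in the text for the COT with the 50/50 argon-ethane mixture.
max_drift_cm = 0.88          # maximum drift distance in the chamber
drift_velocity_um_ns = 50.0  # ~50 micron/ns drift velocity

# Convert cm to microns and divide by the drift velocity.
drift_time_ns = max_drift_cm * 1.0e4 / drift_velocity_um_ns
# 176 ns -- consistent with the "on the order of 200 ns" quoted above.
```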
The single-hit resolution in the chamber has been studied using the high $\ensuremath{p_{T}}$ muon tracks in $Z \rightarrow \mu \mu$ candidate events. The measured offset between the individual hits associated with these muons and the reconstructed path of the muon track is shown in Fig.~\ref{fig:hitres}. Based on this distribution, we measure a COT single-hit resolution of 180~$\mu \mathrm{m}$. \begin{figure} \includegraphics*[width=8.5cm]{figures/residuals_muons.eps} \caption{COT single-hit residual distribution obtained from $Z \rightarrow \mu \mu$ events.} \label{fig:hitres} \end{figure} The solenoid produces a 1.4~$\ensuremath{\mathrm{T}}$ magnetic field inside the tracking volume that is uniform to 0.1~$\!\%$ in the region $|z|<$~150~$\mathrm{cm}$ and $r<$~150~$\mathrm{cm}$. The transverse momentum of a reconstructed track, $\ensuremath{p_{T}}$ (in $\ensuremath{\mathrm{Ge\kern -0.1em V}}/c$), is determined from $\ensuremath{p_{T}} = 0.3qBr_{c}$, where $B$ (in $\ensuremath{\mathrm{T}}$) is the magnetic field strength, the total particle charge is $qe$ ($e$ is the magnitude of the electron charge), and $r_{c}$ (in $\mathrm{m}$) is the measured radius of curvature of the track. The resolution of the COT track momentum measurement decreases for high $\ensuremath{p_{T}}$ tracks, which bend less in the magnetic field. The curvature resolution has been studied by comparing the incoming and outgoing track legs of reconstructed cosmic ray events. The difference in the measured curvature for the two track legs in these events is shown on the top of Fig.~\ref{fig:curvres}. We determine a COT curvature resolution of 3.6~$\times$~$10^{-6}~\mathrm{cm}^{-1}$, estimated from the $\sigma$ of this distribution divided by $\sqrt{2}$. This corresponds to a momentum resolution of $\sigma_{\ensuremath{p_{T}}}/\ensuremath{p_{T}}^2 \simeq$~1.7~$\times$~$10^{-3}~[\ensuremath{\GeV\!/c}]^{-1}$. 
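The momentum formula and the quoted resolution figures can be connected numerically. A sketch, assuming the measured track curvature $C$ is the half-curvature $C = 1/(2r_c)$ (an assumption about the sign convention made here for illustration; it reproduces the quoted numbers):

```python
def pt_from_radius(r_c_m, B_tesla=1.4, q=1):
    """pT in GeV/c from pT = 0.3 q B r_c (B in tesla, r_c in meters)."""
    return 0.3 * q * B_tesla * r_c_m

# Propagate the curvature resolution to sigma_pT / pT^2.
# With C = 1/(2 r_c) in cm^-1, pT = (0.3 * B * 0.01) / (2 C), so
# sigma_pT / pT^2 = sigma_C / (0.3 * B * 0.01 / 2).
sigma_C = 3.6e-6            # cm^-1, from the cosmic ray study
k = 0.3 * 1.4 * 0.01 / 2.0  # GeV/c per unit of half-curvature (cm^-1)
resolution = sigma_C / k    # ~1.7e-3 (GeV/c)^-1, matching the quoted value
```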
The COT track momentum resolution is also studied using the $E/p$ distribution (see Sec.~\ref{sec:evsel}) of electron candidates in $\wenu$ events. This distribution is shown on the bottom of Fig.~\ref{fig:curvres}. Since the COT track momentum resolution measurement is less precise at high $\ensuremath{p_{T}}$ than the corresponding calorimeter energy measurement, the Gaussian width of this distribution for $0.8 < E/p < 1.08$ provides an additional measure of the curvature resolution. The resulting value is in good agreement with that obtained from studying cosmic ray events. \begin{figure} \includegraphics*[width=8.5cm]{figures/dcurv1.eps} \includegraphics*[width=8.5cm]{figures/eopres.eps} \caption{Distribution of the difference in curvature for the two tracks associated with a cosmic ray event as reconstructed by the COT, using cosmic ray data (top). Distribution of the $E/p$ variable defined in Sec.~\ref{sec:evsel} for $\wenu$ events (bottom). The mean and $\sigma$ are obtained from the Gaussian fit in the range $0.8 < E/p < 1.08$.} \label{fig:curvres} \end{figure} \subsection{Calorimeters} Calorimeter modules used to measure the energy of both charged and neutral particles produced in $p\overline{p}$ collisions are arranged around the outer edges of the central tracking volume. These modules are sampling scintillator calorimeters with a tower based projective geometry. The inner electromagnetic sections of each tower consist of lead sheets interspersed with scintillator, and the outer hadronic sections are composed of scintillator sandwiched between sheets of steel. The CDF calorimeter consists of two sections: a central barrel calorimeter ($\absdeteta<$1) and forward end plug calorimeters (1.1$<\absdeteta<$3.64). The scintillator planes in the central barrel lie parallel to the beam line, while those in the forward end plugs are arranged in the transverse direction. 
The central barrel consists of projective readout towers, each subtending 0.1 in $\eta_{\mathrm{det}}$ and 15$^{\circ}$ in $\phi$. Each end plug also has projective readout towers, the sizes of which vary as a function of $\eta_{\mathrm{det}}$ (0.1 in $\eta_{\mathrm{det}}$ and 7.5$^{\circ}$ in $\phi$ at $\absdeteta =$~1.1 to 0.5 in $\eta_{\mathrm{det}}$ and 15$^{\circ}$ in $\phi$ at $\absdeteta =$~3.64). The central barrel section of the CDF calorimeter is unchanged from Run I. It consists of an inner electromagnetic (CEM) calorimeter and an outer hadronic (CHA) calorimeter~\cite{det:cem}. The end-wall hadronic (WHA) calorimeter completes the coverage of the central barrel calorimeter in the region 0.6~$< \absdeteta <$~1.0 and provides additional forward coverage out to $\absdeteta =$~1.3~\cite{det:had}. As part of the CDF Run II upgrade, the original gas calorimetry of the end plug region ($\absdeteta >$~1.1) was replaced with scintillator plate calorimetry using scintillator tiles read out by wavelength shifting fibers embedded in the scintillator~\cite{det:plug,det:plug2}. The new design has an improved sampling fraction and reduces forward gaps that existed in the old gas calorimeter system. The new plug electromagnetic (PEM) calorimeter provides coverage in the 1.1~$< \absdeteta <$~3.6 region and the new plug hadronic (PHA) calorimeter provides coverage in the 1.3~$< \absdeteta <$~3.6 region~\cite{det:pha}. Both the PEM and PHA incorporate the same polystyrene based scintillator and similar photomultiplier tubes used in the CEM. Calorimeter energy resolutions are measured using test-beam data. The measured energy resolutions for electrons in the electromagnetic calorimeters are 14~$\!\%/\sqrt{\ensuremath{E_{T}}}$ (CEM) and 16~$\!\%/\sqrt{E} ~\oplus$ 1~$\!\%$ (PEM) \cite{det:tdr} where the units of $\ensuremath{E_{T}}$ and $E$ are $\ensuremath{\mathrm{Ge\kern -0.1em V}}$. 
We also measure the single-particle (pion) energy resolution in the hadronic calorimeters to be 75~$\!\%$/$\sqrt{E}$ (CHA), 80~$\!\%$/$\sqrt{E}$ (WHA), and 80~$\!\%/\sqrt{E} ~\oplus$ 5~$\!\%$ (PHA) \cite{det:tdr}. The energy resolution in the electromagnetic calorimeters is also determined using $Z \rightarrow e e$ candidate events. The calorimeter energy scale is set so that the mean of the Gaussian fit to the dielectron invariant mass peak is 91.1 $\ensuremath{\mathrm{Ge\kern -0.1em V}}/c^{2}$. This procedure results in a CEM energy resolution of 13.5~$\!\%/\sqrt{\ensuremath{E_{T}}} ~\oplus$ 1.5~$\!\%$, in good agreement with the test-beam result~\cite{det:calresp}. Jet energy resolution in the hadronic calorimeter sections~\cite{det:jetres} is determined using photon-jet balancing. In events in which a photon recoils against a jet and no other activity is observed, the transverse energies associated with the two objects must be equal and opposite. The photon energy measured in the electromagnetic section of the calorimeter provides a reference point against which the energy deposition associated with the jet can be compared. The resolution of the large component of jet energy deposition in the hadronic calorimeters can be determined based on this comparison. The vast majority of hadronic particle showers are completely contained within the calorimeter. The combined longitudinal depth of the central calorimeter module in interaction lengths is roughly 5.5 $\lambda$ and the equivalent depth in the plug modules is roughly 8.0 $\lambda$. However, some small fraction of hadronic particle showers does leak out from the back end of the calorimeter, complicating muon identification. Proportional chambers (CES) are embedded in the electromagnetic section of the central barrel at a radiation length depth of roughly 6~$X_0$ corresponding to the region of maximum shower intensity for electrons. 
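The stochastic and constant terms quoted above combine in quadrature (the meaning of the $\oplus$ symbol). A small illustrative helper, with the CEM numbers from the $Z \rightarrow e e$ calibration:

```python
import math

def frac_resolution(E, stochastic, constant=0.0):
    """Fractional resolution sigma/E = stochastic/sqrt(E) (+) constant,
    where (+) denotes addition in quadrature."""
    return math.hypot(stochastic / math.sqrt(E), constant)

# CEM: 13.5%/sqrt(E_T) (+) 1.5%. For a 40 GeV electron this gives
# roughly a 2.6% fractional energy resolution.
res_40 = frac_resolution(40.0, 0.135, 0.015)
```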
These chambers are used to measure the profile of a shower and extract the location of the incident particle within a given tower. The increased shower position resolution provides additional selection criteria for electron candidates based on track-shower matching. The chambers, two per calorimeter wedge, utilize wires in the $r$-$\phi$ view and cathode strips in the $z$ view to determine the three-dimensional position of each shower. The resolution of the CES position measurement in $r$-$\phi$ is roughly 0.2~$\mathrm{cm}$. Each calorimeter module also has a second set of chambers (CPR) situated on the front of the corresponding electromagnetic section which presamples each shower to provide additional information useful in electron identification and pion-photon separation. The first layer of the plug electromagnetic calorimeter is used as a preshower detector (PPR). Its scintillator is polyvinyltoluene-based, and it is twice as thick as the other sampling layers in the PEM. It has the same transverse segmentation as the PEM, but each scintillator tile in the PPR is read out individually. The PEM also has a shower maximum detector (PES) embedded in it at a depth of $\sim$~6~$X_0$ \cite{det:pes}. The PES consists of two layers of 5~$\mathrm{mm}$ wide polyvinyltoluene-based scintillator strips, with each layer having a 45$^{\circ}$ crossing angle relative to the other. The PES provides coverage in the 1.1 $< \absdeteta <$ 3.5 region. \subsection{Muon detectors} In order for a muon to pass through the calorimeter and into the most central portion of the CDF muon detector ($\absdeteta \leq$ 0.6), it must have a minimum $\ensuremath{p_{T}}$ on the order of 1.4 $\ensuremath{\mathrm{Ge\kern -0.1em V}}/c$. In order to reach the outer portion of the central detector or the more forward detectors (0.6 $< \absdeteta <$ 1.0), the muon is required to pass through an additional layer of steel absorber. 
Muons with momentum above 3.0 $\ensuremath{\mathrm{Ge\kern -0.1em V}}/c$ traverse the steel absorber with essentially 100~$\!\%$ efficiency over the entire solid angle of the combined muon detector coverage. The amount of energy deposited in the calorimeter by high $\ensuremath{p_{T}}$ muons produced in $\ensuremath{W}$ and $\ensuremath{Z}$ boson decays is observed to be Landau distributed, with means of 0.3~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ for deposits in the electromagnetic section and 2.0~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ for those in the hadronic section. Reconstructed particle tracks in the COT matched to minimum-ionizing-like energy deposits in the calorimeter are treated as ``stubless'' muon candidates even in cases where the tracks are not matched with any hits in the muon detectors. The muon offline reconstruction forms stubs based on hit information in the muon detector and matches found stubs with the reconstructed COT tracks to determine our highest quality muon candidates. This final set of muon candidates includes only a small percentage of non-muon fakes originating from other hadronic particles that are not fully contained within the calorimeter (hadronic punchthrough). Although a non-negligible number of hadrons (on the order of 1 in 220) pass through the entire calorimeter, the majority of those that enter the muon detector are absorbed in the filtering steel and do not produce associated hits in the outer sections of the detectors. Conversely, ``stubless'' muon candidates include a substantially larger fraction of non-muon fakes, and the presence of additional physics objects (such as a second higher quality muon) associated with these candidates is typically required to increase the purity of the sample. The CDF muon detector is made up of four independent detector systems outside the calorimeter modules. 
The Central Muon Detector (CMU) \cite{det:muon} is mounted directly around the outer edge of the central calorimeter module. The CMU is an original Run I detector component containing 2,304 single-wire drift chambers arranged in four concentric radial layers. The drift chamber wires are strung parallel to the direction of the incoming beams, and wire pairs on layers 1 and 3 and layers 2 and 4 project radially back to the nominal beam interaction point, allowing for a coarse $\ensuremath{p_{T}}$ measurement based on the difference in signal arrival times on the two wires within a pair. The CMU system provides symmetrical coverage in $\phi$ in the central part of the detector ($\absdeteta \leq$~0.6). The drift chambers have been upgraded to operate in proportional mode (in Run~I these chambers were run in streamer mode). Operating in this mode reduces the high voltage settings for the chambers and helps to prevent voltage sagging which is an issue due to the higher hit rates at Run II luminosities. Precision position measurements in the $\phi$ direction are made by converting signal arrival times into drift distances in the plane orthogonal to the wire direction. The wires of cells in neighboring stacks are connected via resistive wires at the non-readout end of cells to also provide a coarse measurement of each hit position along the direction of the wire ($z$ coordinate). The measurement is made by comparing time-over-threshold for the signals observed at the readout end of the two neighboring stacks. The maximum drift time within a CMU cell is 800~$\mathrm{ns}$ which is longer than the 396~$\mathrm{ns}$ spacing between bunch crossings in the accelerator. The ambiguity as to which beam crossing a particular CMU hit originates from is resolved in both the trigger and the offline reconstruction using timing information associated with a matched COT track and/or matching energy in the calorimeter. 
The Central Muon Upgrade Detector (CMP) and Central Muon Extension Detector (CMX) were also part of the CDF Run I configuration \cite{det:newmuon}. The individual wire drift chambers of these detectors are identical except for their lengths along the direction of the wire which is larger for CMP chambers. These drift cells are roughly a factor of two wider than those in the CMU detector resulting in a longer maximum drift time of 1.8~$\mu \mathrm{s}$. Matching scintillator detectors (CSP,CSX) installed on the outer edges of these systems can in principle provide timing information to resolve the three beam-crossing ambiguity arising from the long drift time. In practice, however, occupancies in these chambers are small enough at current luminosities to uniquely determine the appropriate beam-crossing from COT track matching. CSX timing information is used in the trigger to eliminate out-of-time hits from the beam halo associated with particle losses in the accelerator tunnel, but information from the scintillator systems is not currently utilized in muon reconstruction in this analysis. The CMP/CMX drift chambers are also run in proportional mode. The CMP chambers are arranged in a box-like structure around the outside of the CMU detector and an additional 3$\lambda$ of steel absorber which is sandwiched between the two detectors. The additional steel greatly reduces hadronic punchthrough into the CMP chambers and allows for cleaner muon identification. A total of 1,076 drift cells arranged in four staggered layers form the four-sided CMP structure which provides additional coverage for the central part of the detector ($\absdeteta \leq$~0.6) with variable coverage in $\phi$. Drift cells in the CMX detector are arranged in conical arrays of eight staggered layers to extend muon coverage up to $\absdeteta \leq$~1.0. 
The partial overlap between drift tubes in the CMX conical arrangement allows for a rough hit position measurement in the $z$ coordinate utilizing the different stereo angles of each cell with respect to the beam axis. The Run~I configuration consisted of 1,536 drift cells arranged in four 120$^{\circ}$ sections providing coverage between $-45^{\circ}$ to 75$^{\circ}$ and 105$^{\circ}$ to 225$^{\circ}$ in $\phi$ on both ends of the detector. An additional 60$^{\circ}$ of CMX coverage on the bottom of the detector at both ends has been added for Run~II, but these new components were still being commissioned in early running and are not utilized in the measurements reported here. The Barrel Muon Upgrade Detector (BMU) is another new addition for Run~II which provides additional muon coverage in the regions 1.0~$<\absdeteta<$~1.5. This new detector system was also being commissioned in the initial part of Run~II and is not used in these measurements. \subsection{Cherenkov Luminosity Counters} The small-angle Cherenkov Luminosity Counters (CLC) detector is used to measure the instantaneous and integrated luminosity of our data samples. This detector system is an additional Run~II upgrade~\cite{det:CLC_NIM} that allows for high-precision luminosity measurements up to the highest expected instantaneous luminosities. The CLC consists of two modules installed around the beampipe at each end of the detector, which provide coverage in the regions 3.6~$< \absdeteta <$~4.6. Each module consists of 48 long, conical gas Cherenkov counters pointing to the collision region. The counters are arranged in three concentric layers of 16 counters each, around the beam-pipe. The counters in the two outer layers are about 1.8~$\mathrm{m}$ and those in the inner layer are 1.1~$\mathrm{m}$ long. 
Each counter is made of highly reflective aluminized Mylar with a light collector that gathers the Cherenkov light into fast, radiation hard photomultiplier tubes with good ultraviolet quantum efficiency. The modules are filled with isobutane gas at about 22~$\mathrm{psi}$, which is an excellent radiator while having good ultraviolet transparency. The Cherenkov light cone half-angle, $\theta_{\mathrm{c}}$, is $3.1^{\circ}$, corresponding to a momentum threshold for light emission of 9.3~$\ensuremath{\MeV\!/c}$ for electrons and 2.6~$\ensuremath{\GeV\!/c}$ for pions. The expected number of photoelectrons, $N_{\mathrm{pe}}$, for a single counter is given by $N_{\mathrm{pe}} = N_{\mathrm{o}} \cdot L \cdot \sin^{2}\theta_{\mathrm{c}}$, where $L$ is the distance traversed by the particle in the medium and $N_{\mathrm{o}} = 370~\mathrm{cm}^{-1} \ensuremath{\mathrm{e\kern -0.1em V}}^{-1} \int \epsilon_{\mathrm{col}} (E) \epsilon_{\mathrm{det}} (E) dE$. The $\epsilon_{\mathrm{det}}$ and $\epsilon_{\mathrm{col}}$ terms are defined as the light detection and collection efficiencies, respectively, and are functions of the energy $E$ of the Cherenkov photon (in $\ensuremath{\mathrm{e\kern -0.1em V}}$). Our design results in $N_{\mathrm{o}} \sim 200~\mathrm{cm}^{-1}$, corresponding to $N_{\mathrm{pe}} \sim 0.6/\mathrm{cm}$~\cite{det:CLC_NIM1}. The details of the luminosity measurement are described in Sec.~\ref{sec:data}. \subsection{Trigger systems} \label{subsec:trig} The CDF trigger system~\cite{det:trig,det:l3} was redesigned for Run II because of the changes in accelerator operating conditions. The upgraded trigger system reduces the raw event rate in the detector (the nominal 2.5~$\mathrm{MHz}$ beam crossing rate) to 75~$\mathrm{Hz}$, the typical rate at which events can be recorded. 
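The threshold momenta and photoelectron yield quoted for the CLC follow from the standard Cherenkov relations. A sketch (the particle masses in $\ensuremath{\mathrm{Ge\kern -0.1em V}}/c^2$ are inserted here for illustration; they are not stated in the text):

```python
import math

def cherenkov_threshold(mass_gev, theta_c_deg=3.1):
    """Threshold momentum p = m / sqrt(n^2 - 1), with the refractive
    index inferred from the light cone half-angle: n = 1/cos(theta_c)."""
    n = 1.0 / math.cos(math.radians(theta_c_deg))
    return mass_gev / math.sqrt(n * n - 1.0)

def photoelectrons_per_cm(N0=200.0, theta_c_deg=3.1):
    """Yield per unit path length: N_pe / L = N0 * sin^2(theta_c)."""
    return N0 * math.sin(math.radians(theta_c_deg)) ** 2

p_e = cherenkov_threshold(0.000511)  # ~0.0093 GeV/c = 9.3 MeV/c (electron)
p_pi = cherenkov_threshold(0.1396)   # ~2.6 GeV/c (charged pion)
yield_cm = photoelectrons_per_cm()   # ~0.6 photoelectrons per cm
```

All three numbers reproduce the values quoted in the text above.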
The corresponding event rejection factor of roughly $3 \times 10^{4}$ is obtained using a three-level system where each level is designed to provide sufficient rejection to allow for processing with minimal deadtime at the subsequent level. The first level of the trigger system (Level~1) utilizes custom hardware to select events based on information in the calorimeters, tracking chambers, and muon detectors. Three parallel, synchronous hardware processing streams are used to create the trigger primitive data required to make the Level~1 decision. All detector data are fed into 6~$\mu \mathrm{s}$ pipelines to allow for processing time required at Level~1. The global Level~1 decision must be made and returned to the front-end detector hardware before the corresponding collision data reach the end of the pipeline. Trigger decisions are made at the 2.5~$\mathrm{MHz}$ crossing rate, providing dead-time free operation. One set of Level~1 hardware is used to find calorimeter objects (electrons and jets) and calculate the missing transverse energy and total transverse energy seen by the calorimeter in each event. At Level~1, electron and jet candidates are defined as single-tower energy deposits above some threshold in the electromagnetic or electromagnetic plus hadronic sections of trigger towers, respectively. Calorimeter energy quantities are calculated by summing the transverse components of all single tower deposits assuming a collision vertex of $z =$~0. A second set of hardware is utilized to select muon candidates from observed hits in the muon detector wire chamber and scintillator systems. A loose $\ensuremath{p_{T}}$ threshold is applied based on differences in signal arrival times on pairs of projective wires in the CMU and CMX chambers. 
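The rate reduction and pipeline depth quoted in this subsection can be checked with simple arithmetic, using only numbers stated in the text (the 396~ns bunch spacing appears in the muon detector discussion above):

```python
crossing_rate_hz = 2.5e6    # nominal beam crossing rate
record_rate_hz = 75.0       # typical rate at which events can be recorded
rejection = crossing_rate_hz / record_rate_hz  # ~3.3e4, "roughly 3 x 10^4"

pipeline_s = 6.0e-6         # Level-1 front-end pipeline depth
bunch_spacing_s = 396.0e-9  # spacing between bunch crossings
crossings_buffered = pipeline_s / bunch_spacing_s  # ~15 crossings in flight
```

The last figure shows why the pipeline permits dead-time free Level~1 operation: roughly 15 crossings of data are held while each trigger decision is formed.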
CMP primitives obtained from a simple pattern finding algorithm using observed hits on the four drift cell layers are matched to high $\ensuremath{p_{T}}$ CMU candidates, and CSX hits within a certain time window consistent with collision-produced particles are matched to CMX candidates. An important element of the Run II CDF trigger upgrade is the third set of hardware which identifies COT track candidates within the tight Level~1 timing constraints. The eXtremely Fast Tracker (XFT)~\cite{det:xft} hardware examines hits on each axial superlayer of the COT and combines them into track segments. The found segments on the different layers are then linked to form tracks. The triggers used to collect the data samples utilized in these measurements are based on XFT tracks with reconstructed segments on all four COT axial superlayers. As discussed in more detail in Sec.~\ref{sec:evsel}, this requirement has a small effect on the geometrical acceptance for lepton track candidates in our samples. The hit requirement for XFT track segments was changed from hits on 10/12 layers to hits on 11/12 layers during the data collection period for the samples used in these measurements. This change led to a few percent drop in the trigger efficiency for high $\ensuremath{p_{T}}$ tracks but provided a substantial increase in overall Level~1 event rejection. The XFT hardware reports tracks in 1.25$^{\circ}$ bins in $\phi$. If more than one track is reconstructed within a given $\phi$ bin, the track with the highest $\ensuremath{p_{T}}$ is used. The XFT feeds its lists of found tracks to another piece of hardware known as the track extrapolation unit (XTRP). The XTRP determines the number of tracks above certain $\ensuremath{p_{T}}$ thresholds and makes this information available for the global Level~1 decision. 
It also extrapolates each track based on its reconstructed $\ensuremath{p_{T}}$ into the calorimeter and muon detectors to determine into which $\phi$ slices of each system the track points based on the potential effects of multiple scattering. This information is passed to the calorimeter and muon parts of the Level~1 trigger hardware in two sets of 2.5$^{\circ}$ $\phi$ bins corresponding to groups of tracks above two programmable $\ensuremath{p_{T}}$ thresholds. Using this information, tracks are then matched to electron and muon primitives identified in those pieces of the Level~1 hardware to produce the final lists of electron and muon objects. The final Level~1 trigger decision is made based on the number of physics objects (electrons, muons, jets, and tracks) found by the hardware and the calculated global calorimeter energy quantities. The maximum Level~1 event accept rate is roughly 20~$\mathrm{kHz}$ corresponding to an available Level~2 processing time of 50 $\mu \mathrm{s}$ per event. Events accepted at Level~1 are stored in one of four buffers in the front-end readout hardware. Multiple event buffers allow for additional Level~1 triggers to be accepted during the Level~2 processing of a previously accepted event. The Level~2 trigger system utilizes a combination of dedicated hardware and modified commercial processors to select events. There are two main pieces of dedicated Level~2 hardware. The first is the cluster finder hardware which merges the observed energy deposits in neighboring calorimeter towers to form clusters, and the second is the silicon vertex tracking hardware (SVT) \cite{det:svt} which uses silicon detector hit information to search for tracks with displaced vertices. These systems are asynchronous in that processing time is dependent on the amount of input data associated with a given event. 
The output of these systems is passed to the global Level~2 processor along with the input data utilized in the Level~1 decision and additional hit information from the CES to aid in low $\ensuremath{E_{T}}$ electron selection. The data are fed into the Level~2 processor board and simple selection algorithms, optimized for speed, are run to determine which events are passed to Level~3. The processor board has been designed to simultaneously read in one event while processing another which streamlines operation and helps to keep data acquisition deadtime at a minimum ~\cite{det:l3}. Events selected by the Level~2 trigger hardware are read out of the front-end detector buffers into the Level~3 processor farm. The current maximum Level~2 accept rate for events into Level~3 is roughly 300 $\mathrm{Hz}$. Level~3 processors run a speed-optimized version of the offline reconstruction code and impose loose sets of selection cuts on the reconstructed objects to select the final 75 $\mathrm{Hz}$ of events which are recorded for further processing. The Level~3 processor farm is made up of roughly 300 commercial dual processor computers running Linux to allow for one second of processing time for each event. The software algorithms run at Level~3 take advantage of the full detector information and improved resolution unavailable at the lower trigger levels. The Level~3 algorithms are based on full three-dimensional track reconstruction (including silicon hit information) which allows for tighter track matching with electromagnetic calorimeter clusters and reconstructed stubs in the muon detector for improved lepton identification. \section{Data Samples and Luminosity} \label{sec:data} The $\wlnu$ and $Z \rightarrow \ell \ell$ candidate event samples used to make the measurements reported here are selected from datasets collected using high $\ensuremath{E_{T}}$ lepton trigger requirements. 
Additional data samples used in the evaluation of efficiencies and backgrounds are discussed in further detail in the corresponding subsequent sections. Here, we present the trigger requirements for events contained within the datasets from which our candidate samples are selected. We also briefly describe data processing, the event quality criteria applied to our data samples, and the measurement of the integrated luminosities corresponding to our datasets. \subsection{Trigger requirements} The datasets used to select our candidate events are composed of events collected with well-defined trigger requirements at each of the three levels within the CDF trigger architecture (see Sec.~\ref{sec:exp}). The specific trigger requirements associated with the datasets used to make our measurements are summarized here. The measured efficiencies of these trigger requirements are presented in Sec.~\ref{sec:eff}. \subsubsection{Central electron trigger} The trigger requirements for the dataset used to select $\wenu$ and $Z \rightarrow e e$ candidate events are described here. Both candidate samples are selected from central, high $\ensuremath{E_{T}}$ electron triggered events, corresponding to the region $\absdeteta <$~1.0. At Level~1, energies in physical calorimeter towers of 0.1$\times$15$^{\circ}$ in $\eta_{\mathrm{det}}$-$\phi$ space are first summed into 0.2$\times$15$^{\circ}$ trigger towers. At least one trigger tower is required to have $\ensuremath{E_{T}} >$~8~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and the ratio of the hadronic to electromagnetic energies in that tower, $E_{\mathrm{had}}/E_{\mathrm{em}}$, must be less than 0.125 (for measured $\ensuremath{E_{T}} <$~14~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$). In addition, at least one COT track with $\ensuremath{p_{T}} >$~8~$\ensuremath{\GeV\!/c}$ pointing in the direction of the tower must be found by the XFT hardware. 
A clustering algorithm is run at Level~2 to combine associated energy deposits in neighboring calorimeter towers. Adjacent ``shoulder'' towers with $\ensuremath{E_{T}} >$~7.5~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ are added to the central ``seed'' tower found at Level~1. The total $\ensuremath{E_{T}}$ of the cluster is required to be above 16~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and the $E_{\mathrm{had}}/E_{\mathrm{em}}$ ratio of the cluster is required to be less than 0.125. The presence of an XFT track with $\ensuremath{p_{T}} >$~8~$\ensuremath{\GeV\!/c}$ matched to the seed tower of the central cluster is also reconfirmed. Finally, at Level~3, an electromagnetic cluster with $\ensuremath{E_{T}} >$~18~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and $E_{\mathrm{had}}/E_{\mathrm{em}} <$~0.125 must be found by the offline reconstruction algorithm. A track pointing at the cluster with $\ensuremath{p_{T}} >$~9~$\ensuremath{\GeV\!/c}$ must also be found by the full three-dimensional COT track reconstruction algorithm run in the Level~3 processors. At each level of the trigger, the rate of accepted events is significantly reduced. At typical luminosities ($\sim$~2.5~$\times$~10$^{31} \mathrm{cm}^{-2} \mathrm{s}^{-1}$), the accepted rates of events for the above trigger requirements are 25~$\mathrm{Hz}$, 3~$\mathrm{Hz}$, and 1~$\mathrm{Hz}$ for Levels~1, 2, and~3, respectively. \subsubsection{Central muon triggers} The dataset used to select our $\wmnu$ and $Z \rightarrow \mu \mu$ candidate samples is made of events collected using two analogous sets of trigger requirements. In the most central region of the detector ($\absdeteta <$~0.6), trigger requirements are designed to select high $\ensuremath{p_{T}}$ muon candidates which deposit hits in both the CMU and CMP wire chambers. 
An independent but similar set of requirements is used to collect high $\ensuremath{p_{T}}$ candidates in the extended central region (0.6~$<\absdeteta <$~1.0) which produce hits in CMX wire chambers. The specific trigger requirements for the central region at Level~1 are matched hits in one or more CMU projective wire pairs with arrival times within 124~$\mathrm{ns}$ of each other, a pattern of CMP hits on three of four layers consistent in $\phi$ with the observed CMU hits, and a matching COT track found by the XFT with $\ensuremath{p_{T}} >$~4~$\ensuremath{\GeV\!/c}$. For the early part of the run period corresponding to our datasets we make no additional requirements at Level~2, but for the later portion we require at least one COT track with $\ensuremath{p_{T}} >$~8~$\ensuremath{\GeV\!/c}$ in the list of Level~1 XFT tracks passed to the Level~2 processor boards. Because no muon trigger information was available at Level~2 during this run period, the higher $\ensuremath{p_{T}}$ track was not required to match the CMU or CMP hits associated with the Level~1 trigger. Finally for Level~3, a reconstructed three-dimensional COT track with $\ensuremath{p_{T}} >$~18~$\ensuremath{\GeV\!/c}$ matched to reconstructed stubs in both the CMU and CMP chambers is required based on the offline reconstruction algorithms for muons. The analogous trigger requirements for the extended central region at Level~1 are matched hits in one or more CMX projective wire pairs with arrival times within 124~$\mathrm{ns}$ of each other and a matching COT track found by the XFT with $\ensuremath{p_{T}} >$~8~$\ensuremath{\GeV\!/c}$. For the latter part of our data collection period, a matching hit in the CSX scintillator counters consistent in time with a beam-produced particle is also required to help reduce the trigger rate from non-collision backgrounds. No additional requirements are made at Level~2. 
In Level~3, a reconstructed three-dimensional COT track with $\ensuremath{p_{T}} >$~18~$\ensuremath{\GeV\!/c}$ matched to a reconstructed stub in the CMX chambers is required based on the offline reconstruction algorithms for muons. At typical luminosities ($\sim$~2.5~$\times$~10$^{31} \mathrm{cm}^{-2} \mathrm{s}^{-1}$), the accepted event rates for the central trigger requirements are 30~$\mathrm{Hz}$, 4~$\mathrm{Hz}$, and 0.15~$\mathrm{Hz}$ for Levels~1, 2, and~3, respectively. For the extended central muon trigger requirements, the corresponding rates are 2~$\mathrm{Hz}$, 2~$\mathrm{Hz}$, and 0.1~$\mathrm{Hz}$. \subsection{Luminosity Measurement} \label{subsec:lummeas} The total integrated luminosity ($L$) is derived from the rate of inelastic $p\overline{p}$ events measured with the CLC, $R_{p\overline{p}}$, the CLC acceptance, $\epsilon_{\mathrm{CLC}}$, and the inelastic $p\overline{p}$ cross section at 1.96~$\ensuremath{\mathrm{Te\kern -0.1em V}}$, $\sigma_{\mathrm{in}}$, according to the expression \begin{eqnarray} L = {R_{p\overline{p}} \over \epsilon_{\mathrm{CLC}} \cdot \sigma_{\mathrm{in}}}. \end{eqnarray} The CLC acceptance is measured from tuned simulation and compared against the value obtained from a second method that relies on both data and simulation through the formula \begin{equation} \epsilon_{\mathrm{CLC}} = \frac{N_{\mathrm{EW}}}{N_{\mathrm{CLC+Plug}}} \cdot \frac{N_{\mathrm{CLC+Plug}}}{N_{\mathrm{inelastic}}}, \end{equation} where $N_{\mathrm{CLC+Plug}}$ is the number of inelastic events tagged by the CLC and plug calorimeter, $N_{\mathrm{EW}}$ is a subset of those which contain an east-west hit coincidence and pass the online selection criteria, and $N_{\mathrm{inelastic}}$ is the total number of inelastic collisions. The fraction $N_{\mathrm{CLC+Plug}}/N_{\mathrm{inelastic}}$ is extracted from simulation, while the ratio $N_{\mathrm{EW}}/N_{\mathrm{CLC+Plug}}$ is measured from data.
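As a numerical illustration, the luminosity expression can be evaluated directly. The acceptance and cross-section values below anticipate the measured numbers quoted in the text; the inelastic event rate is a purely hypothetical input, chosen here to reproduce a typical instantaneous luminosity.

```python
# Sketch of the luminosity formula L = R_ppbar / (eps_CLC * sigma_in).
# eps_clc and sigma_in_mb are the measured values quoted in the text;
# r_ppbar_hz is a hypothetical example rate, not a measured number.

eps_clc = 0.602               # CLC acceptance
sigma_in_mb = 60.7            # inelastic ppbar cross section [mb]
r_ppbar_hz = 9.1e5            # assumed inelastic event rate [Hz]

sigma_in_cm2 = sigma_in_mb * 1e-27   # 1 mb = 1e-27 cm^2

lumi = r_ppbar_hz / (eps_clc * sigma_in_cm2)  # [cm^-2 s^-1]
print(f"instantaneous luminosity ~ {lumi:.2e} cm^-2 s^-1")
```

With these inputs the result is close to the typical instantaneous luminosity of $\sim$~2.5~$\times$~10$^{31}$~$\mathrm{cm}^{-2}\mathrm{s}^{-1}$ mentioned above.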
The acceptance calculated using this procedure is $\epsilon_{\mathrm{CLC}} =$~60.2~$\pm$~2.6~$\!\%$ which is in good agreement with the value obtained directly from simulation. The value $\sigma_{\mathrm{in}} =$~60.7~$\pm$~2.4~$\ensuremath{\mathrm{mb}}$ is obtained by extrapolating the combined result for the inelastic $p\overline{p}$ cross section at $\sqrt{s} =$~1.8~$\ensuremath{\mathrm{Te\kern -0.1em V}}$ based on CDF and E811 measurements (59.3~$\pm$~2.3~$\ensuremath{\mathrm{mb}}$)~\cite{data:lumi} to 1.96~$\ensuremath{\mathrm{Te\kern -0.1em V}}$. Using these numbers, and restricting ourselves to runs with a good detector status, the total luminosity of our datasets is estimated to be 72.0~$\pm$~4.3~$\ensuremath{\mathrm{pb}^{-1}}$. The 6~$\!\%$ quoted uncertainty is dominated by the uncertainty in the absolute normalization of the CLC acceptance for a single $p\overline{p}$ inelastic collision~\cite{data:lumi}. The complete list of systematic uncertainties, including uncertainties from the inelastic cross section and luminosity detector, is given in Table~\ref{tab:lum}. \begin{table}[t] \caption{ Systematic uncertainties in the luminosity calculation based on the CLC measurement and the combined value of the CDF and E811 inelastic cross section measurements at $\sqrt{s} =$~1.80~$\ensuremath{\mathrm{Te\kern -0.1em V}}$ extrapolated to $\sqrt{s} =$~1.96~$\ensuremath{\mathrm{Te\kern -0.1em V}}$. The total uncertainty in the CLC measurement is dominated by the uncertainty in the CLC acceptance. 
The detector instability and calibration uncertainties are components of the overall CLC measurement uncertainty and therefore not included in the calculation of the total uncertainty.} \centering{ \begin{tabular}{l r} \hline \hline Effect & Uncertainty Estimate \\ \hline Inelastic Cross Section & 4.0~$\!\%$ \\ CLC Measurement & 4.4~$\!\%$ \\ Detector Instability & $<$ 2.0~$\!\%$ \\ Detector Calibration & $<$ 1.5~$\!\%$ \\ \hline Total Uncertainty & $\sim$~6.0~$\!\%$ \\ \hline \hline \end{tabular} } \label{tab:lum} \end{table} \section{Event selection} \label{sec:evsel} We search for $\ensuremath{W}$ bosons decaying into a highly energetic charged lepton ($\ell = e, \mu$) and a neutrino, which is identified via large $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ in the detector. The $Z \rightarrow \ell \ell$ ($\ell = e, \mu$) events are selected based on the two energetic, isolated leptons originating from a single event vertex. The two leptons produced in the decays are oppositely charged, and the charge information for leptons is included as part of the selection criteria when available. The reconstructed dilepton invariant mass is also required to lie within a mass window consistent with the measured $\ensuremath{Z}$ boson mass. The complete set of selection criteria used to identify $\wlnu$ and $Z \rightarrow \ell \ell$ events is described here. As the selection of $\ensuremath{W}$ and $\ensuremath{Z}$ bosons proceeds from lepton identification, we also describe in some detail the variables used to select good lepton candidates. \subsection{Track Selection} \label{sec:tracksel} The track quality requirements are common to electron and muon selection. As the silicon tracking information is not vital to our measurements, we remove all silicon hits from the tracks and refit them, including the position of the beamline in the transverse direction as an additional constraint in the fit.
The beamline position is measured independently for each run period contributing to our datasets using the reconstructed COT track data contained within events from that period. The removal of silicon hits from tracks makes our measurements insensitive to the time-dependent efficiencies of the individual pieces of the silicon detector and allows us to include data from run periods when the silicon detector was not operational. The resulting beam-constrained COT tracks are used in the subsequent analysis work presented here. All of the kinematic track parameters used in these analyses, with the one exception of the $r-\phi$ track impact parameter variable, $d_0$, used in muon selection, are based on these beam-constrained COT tracks. The reconstructed tracks obtained using the method described above have small residual curvature biases primarily due to COT misalignments that are not currently corrected for in our offline tracking algorithms. We correct our track $\ensuremath{p_{T}}$ measurement for misalignment effects based on the observed $\phi$-dependence of the electron candidate $E/p$ distribution (see Sec.~\ref{sec:elesel}). The form of the correction is \begin{equation} \frac{Q}{\ensuremath{p_{T}}^{\prime}} = \frac{Q}{\ensuremath{p_{T}}} - 0.00037 - 0.00110 \cdot \sin(\phi+0.28), \end{equation} where $\ensuremath{p_{T}}^{\prime}$ and $\ensuremath{p_{T}}$ are the transverse momenta in $\ensuremath{\GeV\!/c}$ of the corrected and uncorrected track, respectively, $Q$ is the charge of the track, and $\phi$ is given in radians. We apply additional selection criteria on our reconstructed tracks to ensure that only high-quality tracks are assigned to lepton candidates. Each track is required to pass a set of minimum hit criteria. The reconstructed tracks are required to have a minimum of seven out of twelve possible hits on at least three of four axial and stereo superlayers within the COT.
The minimum hit criteria for reconstructed tracks are less restrictive than those used to select Level~1 trigger track candidates (see Sec.~\ref{subsec:trig}) to ensure high selection efficiencies for both triggerable and non-triggerable track candidates in our events. In addition, to restrict ourselves to a region of high track reconstruction efficiency, we require the $z$ coordinate of the lepton track intersection with the beam axis in the $r-z$ plane, $z_0$, to be within 60~$\mathrm{cm}$ of the center of the detector. To help reduce real muon backgrounds from cosmic rays and $\pi/K$ decays, we impose additional quality requirements on muon track candidates. For muon track candidates only, we incorporate silicon hit information in track reconstruction when available to calculate a more precise value for the $r-\phi$ impact parameter of the track, $d_0$. Cosmic ray muons and muons produced in $\pi/K$ decays are less likely to point back to the event vertex and therefore will typically have larger measured impact parameters. We apply different cuts on the $d_0$ of muon track candidates depending on whether or not the tracks contain any silicon hit information: $|d_{0}| <$ 0.2 $\mathrm{cm}$ for tracks with no silicon hits and $|d_{0}| <$ 0.02 $\mathrm{cm}$ for tracks with silicon hits. We also make a requirement on the quality of the final COT beam-constrained track fit for muon candidates. The track fit for muon backgrounds not originating from the event vertex will typically be worse when the additional constraint of the beamline position is included. For muon track candidates we require that $\chi^2/n_{\mathrm{df}} <$ 2.0, where $n_{\mathrm{df}}$ is the number of degrees of freedom in the track fit (the number of hits on the fitted track minus the five free parameters of the fit).
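The curvature correction and the muon track quality cuts described above can be sketched as two small helpers; the function names and interfaces are illustrative only, not the analysis code.

```python
import math

def correct_track_pt(pt, charge, phi):
    """Phi-dependent curvature correction for residual COT misalignments.

    pt: uncorrected transverse momentum [GeV/c]; charge: +1 or -1;
    phi: azimuthal angle of the track [rad].  Returns the corrected pT.
    """
    # Q/pT' = Q/pT - 0.00037 - 0.00110 * sin(phi + 0.28)
    q_over_pt = charge / pt - 0.00037 - 0.00110 * math.sin(phi + 0.28)
    return charge / q_over_pt

def passes_muon_track_quality(z0_cm, d0_cm, has_silicon_hits, chi2, ndf):
    """Muon track quality cuts: z0, impact parameter, and fit quality."""
    if abs(z0_cm) >= 60.0:                       # vertex position cut
        return False
    d0_cut = 0.02 if has_silicon_hits else 0.2   # tighter cut with silicon hits
    if abs(d0_cm) >= d0_cut:                     # impact parameter cut
        return False
    return chi2 / ndf < 2.0                      # beam-constrained fit quality
```

Because the correction acts on the signed curvature $Q/\ensuremath{p_{T}}$, oppositely charged tracks of equal momentum shift in opposite directions.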
We additionally restrict muon track candidates in $\theta$ to ensure that the tracks lie in a fiducial region of high trigger and reconstruction efficiency well-modeled by our detector simulation. We require that each muon track passes through all eight COT superlayers by making a minimum requirement on the exit radius of the track at the endplates of the COT tracking chamber. The exit radius is defined as \begin{equation} \rho_{\mathrm{COT}} = (z_{\mathrm{COT}} -z_0) \cdot \tan\theta, \end{equation} where $z_{\mathrm{COT}}$ is the distance of the COT endplates from the center of the detector ($155$~cm for tracks with $\eta >$~0 and $-155$~cm for those with $\eta <$~0). Here, $\eta$ and $\theta$ are the previously defined pseudorapidity and polar angle of the track with respect to the directions of the colliding beams. A comparison of the $\rho_{\mathrm{COT}}$ distribution for CMX muons from $Z \rightarrow \mu \mu$ candidate events in data and Monte Carlo (MC) simulation (see Sec.~\ref{sec:acc}) is shown in Fig.~\ref{fig:cotexitradius}. The distributions do not match in the region $\rho_{\mathrm{COT}} <$~140~$\mathrm{cm}$ due to a loss of data events in this region originating from the XFT track trigger requirement of at least ten (or eleven) hits out of a possible twelve in each of the four axial COT superlayers, which is not accounted for in the simulation. Based on this comparison, we require $\rho_{\mathrm{COT}} >$~140~$\mathrm{cm}$ for muon track candidates. \begin{figure} \includegraphics*[width=8.5cm]{figures/cotexitradius.eps} \caption{The COT exit radius for CMX muons in $Z \rightarrow \mu \mu$ candidate events. The points are the data and the histogram is simulation. The selected CMX muons from data events are required to satisfy the high $\ensuremath{p_{T}}$ muon trigger criteria, but no trigger requirement is made on the muons selected from simulation.
The two histograms are normalized to have the same number of events over the region 150~$\mathrm{cm} < \rho_{\mathrm{COT}} <$~280~$\mathrm{cm}$. The arrow indicates the location of the muon track selection cut made on the $\rho_{\mathrm{COT}}$ variable.} \label{fig:cotexitradius} \end{figure} Track selection requirements are summarized in Table~\ref{tab:trackqual}. Distributions of the track quality variables used in the selection of all lepton tracks are shown in Fig.~\ref{fig:trkqual}, and those used solely in the selection of muon tracks are shown in Fig.~\ref{fig:trkqualmu}. The distributions are constructed from second, unbiased lepton legs in $Z \rightarrow \ell \ell$ candidate data events. Based on these distributions, we expect the measured inefficiencies of our track selection criteria (see Sec.~\ref{sec:eff}) to be on the order of a few percent. \begin{figure*} \includegraphics*[width=8.cm]{figures/TrkZ0.eps} \includegraphics*[width=8.cm]{figures/TrkAxSeg.eps} \includegraphics*[width=8.cm]{figures/TrkStSeg.eps} \caption{Distributions of the $z_0$ and number of axial and stereo COT superlayers contributing seven or more hits. These track quality variables are from unbiased, second lepton legs of $Z \rightarrow \ell \ell$ candidate events in data. The arrows indicate the locations of selection cuts applied on these variables.} \label{fig:trkqual} \end{figure*} \begin{figure*} \includegraphics*[width=8.cm]{figures/TrkSiD0.eps} \includegraphics*[width=8.cm]{figures/TrkNoSiD0.eps} \includegraphics*[width=8.cm]{figures/TrkChi2.eps} \includegraphics*[width=8.cm]{figures/TrkrhoCOT.eps} \caption{Distributions of the $d_0$ (with and without attached silicon hits), $\chi^{2}/n_{\mathrm{df}}$, and $\rho_{\mathrm{COT}}$. These track quality variables are for muons from unbiased, second muon legs of $Z \rightarrow \mu \mu$ candidate events in data.
The arrows indicate the locations of selection cuts applied on these variables.} \label{fig:trkqualmu} \end{figure*} \begin{table} \caption{Summary of track selection requirements.} \begin{tabular}{l r} \hline \hline Variable & Cut \\ \hline All Tracks: & \\ $\#$ Axial COT Superlayers & $\ge$~3 with $\ge$~7 hits \\ $\#$ Stereo COT Superlayers & $\ge$~3 with $\ge$~7 hits \\ $|z_0|$ & $<$ 60~$\mathrm{cm}$ \\ \hline Muon Tracks: & \\ $|d_{0}|$ & $<$ 0.2~$\mathrm{cm}$ (no silicon hits) \\ $|d_{0}|$ & $<$ 0.02~$\mathrm{cm}$ (silicon hits) \\ $\chi^{2}/n_{\mathrm{df}}$ & $<$ 2.0 \\ $\rho_{\mathrm{COT}}$ & $>$ 140~$\mathrm{cm}$ \\ \hline \hline \end{tabular} \label{tab:trackqual} \end{table} \subsection{Electron Selection} \label{sec:elesel} Electron candidates are reconstructed in either the central barrel or forward plug calorimeters. The clustering algorithms and selection criteria used to identify electrons in the two sections are different, as we do not make use of tracking information in the forward detector region ($\absdeteta >$~1) where standalone track reconstruction is less reliable due to the smaller number of available tracking layers. Here, we discuss the specific identification criteria for both central and plug electrons. \subsubsection{Central Electron Identification} Electron objects are formed from energy clusters in neighboring towers of the calorimeter. An electron cluster is made from an electromagnetic seed tower and at most one additional tower that is adjacent to the seed tower in $\eta_{\mathrm{det}}$ and within the same $\phi$ wedge. The seed tower must have $\ensuremath{E_{T}} >$~2~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and a reconstructed COT track which extrapolates to that tower. The hadronic energy in the corresponding towers is required to be less than 0.125 times the electromagnetic energy of the cluster. Electron candidates for these measurements must lie within the well-instrumented regions of the calorimeter. 
The cluster position within the calorimeter is determined by the location of the associated CES shower. The CES shower must lie within 21~$\mathrm{cm}$ of the tower center in the $r-\phi$ view for the shower to be fully contained within the active region. We also exclude electrons reconstructed in the region where the two halves of the central calorimeter meet ($|z| <$ 9~$\mathrm{cm}$) and the outer half of the most forward CEM towers ($|z| >$~230~$\mathrm{cm}$) where there is substantial electron shower leakage into the hadronic part of the calorimeter. Finally, we exclude events in which the electron is reconstructed near the uninstrumented region surrounding the cryogenic connections to the solenoidal magnet (0.77~$< \eta_{\mathrm{det}} <$~1.0, 75$^{\circ}<\phi<$~90$^{\circ}$, and $|z| >$~193~$\mathrm{cm}$). The selection requirements listed in Table~\ref{tab:eidcuts} are applied to electron candidates in the well-instrumented regions of the central calorimeter. We cut on the ratio of the hadronic to electromagnetic energies, $E_{\mathrm{had}}/E_{\mathrm{em}}$, for the candidate clusters. Electron showers are typically contained within the electromagnetic calorimeter, while hadron showers spread across both the hadronic and electromagnetic sections of the calorimeter. We require $E_{\mathrm{had}} /E_{\mathrm{em}} <$ 0.055 $+$ 0.00045 $\cdot E$ where $E$ is the total energy of the cluster in $\ensuremath{\mathrm{Ge\kern -0.1em V}}$. The linear term in our selection criteria accounts for the increased shower leakage of higher-energy electrons into the hadronic calorimeter sections. We also cut on the ratio of the electromagnetic cluster transverse energy to the COT track transverse momentum, $E/p$. This ratio is nominally expected to be unity, but in cases where the electron radiates a photon in the material of the inner tracking volume, the measured momentum of the COT track can be less than the measured energy of the corresponding cluster in the calorimeter. 
In cases where the electron is highly energetic, the photon and electron will be nearly collinear and are likely to end up in the same calorimeter tower. The measured COT track momentum will, however, correspond to the momentum of the electron after emitting the photon and thus be smaller than the original electron momentum. We require $E/p <$ 2.0 which is efficient for the majority of electrons which emit a bremsstrahlung photon. Since this cut becomes unreliable for very large values of track $\ensuremath{p_{T}}$, we do not apply it to electron clusters with $\ensuremath{E_{T}} >$ 100~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$. The lateral shower profile variable, $L_{\mathrm{shr}}$ \cite{det:dettop}, is used to compare the distribution of adjacent CEM tower energies in the cluster as a function of seed tower energy to shapes derived from electron test-beam data. We also perform a $\chi^{2}$ comparison of the CES lateral shower profile in the $r-z$ view to the profile extracted from the electron test-beam data. For central electrons, we require $L_{\mathrm{shr}} <$ 0.2 and $\chi^{2}_{\mathrm{strips}} <$ 10.0. Since central electron candidates include a COT track, we can further reduce electron misidentification by cutting on track-shower matching variables. We define $Q \cdot \Delta x$ as the distance in the $r-\phi$ plane between the extrapolated beam-constrained COT track and the CES cluster multiplied by the charge of the track to account for asymmetric tails originating from bremsstrahlung radiation. The variable $\Delta z$ is the corresponding distance in the $r-z$ plane. We require $-$~3.0~$\mathrm{cm}$ $< Q \cdot \Delta x <$ 1.5~$\mathrm{cm}$ and $|\Delta z| <$ 3.0~$\mathrm{cm}$. Distributions of central electron identification variables are shown in Figs.~\ref{fig:idcalvar} and~\ref{fig:idcaltrkvar}. 
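The energy-dependent $E_{\mathrm{had}}/E_{\mathrm{em}}$ cut and the conditional $E/p$ cut described above can be sketched as follows; the function names are illustrative assumptions, not the analysis code.

```python
def passes_had_em_cut(e_had, e_em, e_total):
    """Sliding cut E_had/E_em < 0.055 + 0.00045 * E, with E in GeV.

    The linear term allows for increased hadronic leakage
    from higher-energy electron showers.
    """
    return e_had / e_em < 0.055 + 0.00045 * e_total

def passes_e_over_p_cut(cluster_et, track_pt):
    """E/p < 2.0, applied only for cluster ET below 100 GeV."""
    if cluster_et >= 100.0:        # cut not applied to very energetic clusters
        return True
    return cluster_et / track_pt < 2.0
```

An electron that radiates a hard bremsstrahlung photon has a reduced track $\ensuremath{p_{T}}$ but an unchanged cluster $\ensuremath{E_{T}}$, so the $E/p$ cut is deliberately loose on the high side.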
The plotted electron candidates are the unbiased, second electron legs in $Z \rightarrow e e$ events in which both electrons are reconstructed within the central calorimeter and the first electron is found to satisfy the full set of identification criteria. Based on these distributions, we expect a high efficiency for our central electron selection criteria (see Sec.~\ref{sec:eff}). \begin{table}[t] \caption{Calorimeter variables and electron identification requirements.} \begin{center} \begin{tabular}{ l r } \hline \hline Variable & Cut \\ \hline Central & $\absdeteta <$ 1.0 \\ \hline $E_{\mathrm{had}}/E_{\mathrm{em}}$ & $<$ 0.055 $+$ 0.00045 $\cdot E [$\ensuremath{\mathrm{Ge\kern -0.1em V}}$]$ \\ $E/p$ (for $\ensuremath{E_{T}} <$ 100 $\ensuremath{\mathrm{Ge\kern -0.1em V}}$) & $<$ 2.0 \\ $L_{\mathrm{shr}}$ & $<$ 0.2 \\ $Q \cdot \Delta x$ & $>$ -3.0~$\mathrm{cm}$, $<$ 1.5~$\mathrm{cm}$ \\ $|\Delta z|$ & $<$ 3.0~$\mathrm{cm}$ \\ $\chi^{2}_{\mathrm{strips}}$ & $<$ 10.0 \\ \hline Plug & 1.2 $< \absdeteta <$ 2.8 \\ \hline $E_{\mathrm{had}}/E_{\mathrm{em}}$ & $<$ 0.05 \\ $\chi^2_{\mathrm{PEM}}$ & $<$ 10.0 \\ \hline \hline \end{tabular} \label{tab:eidcuts} \end{center} \end{table} \begin{figure*}[t] \includegraphics*[width=14.cm]{figures/Z_ID_CalVar.eps} \caption{Distributions of $L_{\mathrm{shr}}$, $E/p$, $E_{\mathrm{had}}/E_{\mathrm{em}}$, and $\ensuremath{E_{T}}^{\mathrm{iso}}/\ensuremath{E_{T}}$ (see Sec.~\ref{sec:wsel}) central calorimeter electron selection variables from unbiased, second electron legs of $Z \rightarrow e e$ candidate events in data. The arrows indicate the locations of selection cuts applied on these variables. 
No arrow is shown on the $E_{\mathrm{had}}/E_{\mathrm{em}}$ distribution since the cut on this variable is dependent on the electron energy.} \label{fig:idcalvar} \end{figure*} \begin{figure*}[t] \includegraphics*[width=14.cm]{figures/Z_ID_CalTrkVar.eps} \caption{Distributions of the $z_0$, $\chi^{2}_{\mathrm{strips}}$, $Q \cdot \Delta x$, and $\Delta z$ central calorimeter electron selection variables from unbiased, second electron legs of $Z \rightarrow e e$ candidate events in data. The arrows indicate the locations of selection cuts applied on these variables.} \label{fig:idcaltrkvar} \end{figure*} \subsubsection{Plug Electron Identification} Electron candidate clusters in the plug calorimeter are made from a seed tower and neighboring towers within two towers in $\eta_{\mathrm{det}}$ and $\phi$ from the seed tower. As for central electrons, the hadronic energy of the cluster is required to be less than 0.125 times the electromagnetic energy. We also require plug electrons to be reconstructed in a well-instrumented region of the detector, defined as 1.2~$< \absdeteta <$~2.8. The additional selection criteria applied to plug electron candidates are summarized in Table~\ref{tab:eidcuts}. Fewer variables are available for selecting plug electrons due to the lack of matching track information for candidates in the forward region of the detector. As in the case of central electrons, we cut on the ratio of hadronic to electromagnetic energies in the cluster, $E_{\mathrm{had}}/E_{\mathrm{em}}$, which is required to be less than 0.05. We also compare the distribution of tower energies in a 3$\times$3 array around the seed tower to distributions from electron test-beam data, forming the variable $\chi_{\mathrm{PEM}}^{2}$ which we require to be less than 10.0. Distributions of the plug electron selection variables are shown in Fig.~\ref{fig:plugidvar}. 
The plotted electron candidates are the unbiased, second plug electron legs in $Z \rightarrow e e$ events in which the first electron is reconstructed within the central calorimeter and found to satisfy a set of more restrictive cuts on the previously described central electron identification variables. \begin{figure*}[t] \includegraphics*[width=17.cm]{figures/z_prd_plugdisbns.eps} \caption{Distributions of the $\chi^2_{\mathrm{PEM}}$, $\ensuremath{E_{T}}^{\mathrm{iso}}$ (see Sec.~\ref{sec:wsel}), and $E_{\mathrm{had}}/E_{\mathrm{em}}$ plug calorimeter electron selection variables from unbiased, second electron legs of $Z \rightarrow e e$ candidate events in data. The arrows indicate the locations of selection cuts applied on these variables.} \label{fig:plugidvar} \end{figure*} \subsection{Muon Selection} \label{sec:muonsel} Muon candidates used in these measurements must have reconstructed stubs in both the CMU and CMP chambers (CMUP muons) or a reconstructed stub in the CMX chambers. CMX chambers were offline for the first 16.5~$\ensuremath{\mathrm{pb}^{-1}}$ of integrated luminosity corresponding to our datasets, and the reduced muon detector coverage during this period is taken into account in our measured acceptances for events in the muon candidate samples (see Sec.~\ref{sec:sigaccpythia}). The muon candidate tracks are required to extrapolate to regions of the muon chambers with high single wire hit efficiencies to ensure that chamber-edge effects do not contribute to inefficiencies in muon stub-reconstruction and stub-track matching (see Sec.~\ref{sec:eff}). We measure the location of an extrapolated muon track candidate with respect to the drift direction (local $x$) and wire axis (local $z$) of a given chamber. The extrapolation assumes that no multiple scattering takes place, and in some cases muons that leave hits in the muon detectors extrapolate to locations outside of the chambers. 
In the CMP and CMX chambers, we require that the extrapolation is within the chamber volume in local $x$, and at least 3~$\mathrm{cm}$ away from the edges of the chamber volume in local $z$. Studies of unbiased muons in $Z \rightarrow \mu \mu$ events show that these regions of chambers are maximally efficient for hit-finding. No such requirement is needed for the CMU chambers. Some sections of the upgraded muon detectors were not yet fully commissioned for the period of data-taking corresponding to our datasets, and we exclude all muon candidates with stubs in those sections. The selection criteria applied to muon candidates are summarized in Table~\ref{tab:muoncuts}. We require that the measured energy depositions in the electromagnetic and hadronic sections of the calorimeters along the muon candidate trajectory, $E_{\mathrm{em}}$ and $E_{\mathrm{had}}$, are consistent with those expected from a minimum-ionizing particle. The positions of the reconstructed chamber stubs are required to be near the locations of the extrapolated tracks. The track-stub matching variable $|\Delta X|$ is the distance in the $r-\phi$ plane between the extrapolated COT track and the CMU, CMP, or CMX stub. Fig.~\ref{fig:dxmuon} shows the $\Delta X$ distributions for unbiased, CMU, CMP and CMX second muons in $Z \rightarrow \mu \mu$ events. Energetic cosmic ray muons traverse the detector at a significant rate, depositing hits in both muon chambers and the COT, and can in a small fraction of cases satisfy the requirements of the high $\ensuremath{p_{T}}$ muon trigger paths and the offline selection criteria. We remove cosmic ray events from our sample using the previously discussed track quality cuts for muon candidates and a cosmic ray tagging algorithm (see Sec.~\ref{sec:cosmicwbkg}) based on COT hit timing information. \begin{table}[t] \caption{Calorimeter and muon chamber variables used in muon identification. 
The fiducial distance variables are defined as the extrapolated position of the muon track candidate with respect to the edges of a given muon chamber. The fiducial distance is negative if this position lies within the chamber and positive otherwise. CMUP muon candidates are those with reconstructed stubs in both the CMU and CMP detectors. CMX muon candidates have reconstructed stubs in the CMX detector.} \begin{tabular}{ l r } \hline \hline Variable & Cut \\ \hline Minimum Ionizing Cuts: & ($\ensuremath{\mathrm{Ge\kern -0.1em V}}$) \\ \hline $E_{\mathrm{em}}$ ($p \leq$ 100~$\ensuremath{\GeV\!/c}$) & $<$ 2 \\ $E_{\mathrm{em}}$ ($p >$ 100~$\ensuremath{\GeV\!/c}$) & $<$ 2 + ($p$$-$100) $\cdot$ 0.0115 \\ $E_{\mathrm{had}}$ ($p \leq$ 100~$\ensuremath{\GeV\!/c}$) & $<$ 6 \\ $E_{\mathrm{had}}$ ($p >$ 100~$\ensuremath{\GeV\!/c}$) & $<$ 6 + ($p$$-$100) $\cdot$ 0.0280 \\ \hline Muon Stub Cuts: & ($\mathrm{cm}$) \\ \hline $|\Delta X_{\mathrm{CMU}}|$ (CMUP) & $<$ 3.0 \\ $|\Delta X_{\mathrm{CMP}}|$ (CMUP) & $<$ 5.0 \\ $|\Delta X_{\mathrm{CMX}}|$ (CMX) & $<$ 6.0 \\ CMP $x$-fiducial distance (CMUP) & $<$ 0.0 \\ CMP $z$-fiducial distance (CMUP) & $<$ $-3.0$ \\ CMX $x$-fiducial distance (CMX) & $<$ 0.0 \\ CMX $z$-fiducial distance (CMX) & $<$ $-3.0$ \\ \hline \hline \end{tabular} \label{tab:muoncuts} \end{table} \begin{figure*}[t] \includegraphics[width=8.cm]{figures/TrkCMUdX.eps} \includegraphics[width=8.cm]{figures/TrkCMPdX.eps} \includegraphics[width=8.cm]{figures/TrkCMXdX.eps} \caption{Distributions of the CMU, CMP, and CMX $\Delta X$ muon selection variables from unbiased, second muon legs of $Z \rightarrow \mu \mu$ candidate events in data. The arrows indicate the locations of selection cuts applied on these variables.} \label{fig:dxmuon} \end{figure*} \subsection{$\wlnu$ Selection} \label{sec:wsel} $\wlnu$ events are selected by first requiring a high-$\ensuremath{p_{T}}$ charged lepton in the central detectors, as described above. 
Electrons must have electromagnetic-cluster $\ensuremath{E_{T}}$ greater than 25 $\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and COT track $\ensuremath{p_{T}}$ greater than 10 $\ensuremath{\GeV\!/c}$. Muons must have COT track $\ensuremath{p_{T}}$ greater than 20 $\ensuremath{\GeV\!/c}$. The leptons from decays of $\ensuremath{W}$ and $\ensuremath{Z}$ bosons are often isolated from hadronic jets, in contrast to leptons originating from decays of heavy-flavor hadrons. We therefore require that the calorimeter energy in a cone of radius $\Delta{R} = \sqrt{\Delta{\eta}^{2} + \Delta{\phi}^{2}} \leq$~0.4 around the lepton, excluding the energy associated with the lepton, $\ensuremath{E_{T}}^{\mathrm{iso}}$, be less than 10~$\!\%$ of the energy of the lepton ($\ensuremath{E_{T}}$ for electrons and $\ensuremath{p_{T}}$ for muons). Fig.~\ref{fig:idcalvar} shows the isolation distribution for unbiased central electrons from $Z \rightarrow e e$ decays. We also require evidence for a neutrino in $\ensuremath{W}$ candidate events in the form of an imbalance of the measured momentum of the event, since neutrinos do not interact with our detector. The initial state of the colliding partons has $\ensuremath{p_{T}} \simeq 0$, but unknown $p_z$ due to the unknown values of the initial parton momenta. Therefore, we identify the missing transverse energy, $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$, in the event with the $\ensuremath{p_{T}}$ of the neutrino. In muon events, we correct the $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ measured in the calorimeter to account for the energy carried away by the muon, a minimum-ionizing particle. The muon momentum is used in place of the calorimeter energy deposits observed along the path of the muon.
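The muon correction to the missing transverse energy can be sketched as a simple two-vector update; the interface and names below are illustrative assumptions, not the analysis code.

```python
import math

def muon_corrected_met(met_x, met_y, muons):
    """Replace calorimeter deposits along each muon with the track momentum.

    met_x, met_y: raw calorimeter missing-ET components [GeV]
    muons: list of (pt, phi, em_et, had_et) tuples, where em_et and had_et
           are the calorimeter deposits along the muon trajectory [GeV]
    Returns the corrected missing transverse energy magnitude [GeV].
    """
    for pt, phi, em_et, had_et in muons:
        calo_et = em_et + had_et
        # Add back the calorimeter deposit already counted in the raw MET,
        # then subtract the full muon track pT from the momentum balance.
        met_x += (calo_et - pt) * math.cos(phi)
        met_y += (calo_et - pt) * math.sin(phi)
    return math.hypot(met_x, met_y)
```

For a $\wmnu$ candidate, the corrected value is the quantity compared against the 20~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ threshold.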
For $\wmnu$ candidate events we require $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} >$~20~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and tighten the requirement for $\wenu$ events, $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} >$~25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$, to further reduce backgrounds from hadron jets. A background to $\wlnu$ is the $Z \rightarrow \ell \ell$ channel, in which one of the leptons falls into an uninstrumented region of the detector, creating false $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$. This is a larger problem in the muon channel, since the coverage of the muon detectors is in general less uniform than that of the calorimeter. Therefore, in the muon channel we reject events containing additional minimum-ionizing tracks with $\ensuremath{p_{T}} >$~10~$\ensuremath{\GeV\!/c}$, $E_{\mathrm{em}} <$~3~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ ($E_{\mathrm{em}} <$~3 + 0.0140~$\cdot$~($p$$-$100)~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ if $p >$~100~$\ensuremath{\GeV\!/c}$) and $E_{\mathrm{had}} <$~6~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ ($E_{\mathrm{had}} <$~6 + 0.0420~$\cdot$~($p$$-$100)~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ if $p >$~100~$\ensuremath{\GeV\!/c}$). Studies of simulated $\wmnu$ and $Z \rightarrow \mu \mu$ event samples (see Sec.~\ref{sec:acc}) show that this additional rejection criterion removes 54.7~$\!\%$ of the $Z \rightarrow \mu \mu$ background while retaining 99.6~$\!\%$ of the $\wmnu$ signal. Further discussion of backgrounds to the $\wlnu$ channels is found in Section~\ref{sec:backg}. \subsection{$Z \rightarrow \ell \ell$ Selection} \label{sec:zsel} We select events which contain an electron or muon that passes the same identification requirements as the lepton in $\wlnu$ candidate events. As described in Sec.~\ref{sec:intro}, systematic uncertainties are reduced by using a common lepton selection in the $\ensuremath{W}$ and $\ensuremath{Z}$ analyses.
We identify a second lepton in these events using less restrictive (``loose'') selection criteria to increase our $Z \rightarrow \ell \ell$ detection efficiency. The invariant mass of the two leptons is required to be between 66 and 116~$\ensuremath{\GeV\!/c^2}$. After selecting the first electron, $Z \rightarrow e e$ events are identified by the presence of another isolated electron in the central calorimeter with $\ensuremath{E_{T}} >$~25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ passing less restrictive selection criteria or an isolated electron in the plug calorimeter with $\ensuremath{E_{T}} >$~20~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$. The selection criteria for each type of electron are summarized in Table~\ref{tab:zeecuts}. In the calculation of $\ensuremath{E_{T}}$ for the plug electron, the $z$-vertex is taken from the COT track of the central electron in the event. In the case of two central electrons, we also require they be oppositely charged, with both electron tracks passing the track quality criteria listed in Table~\ref{tab:trackqual}. After selecting the first muon, $Z \rightarrow \mu \mu$ events are identified by the presence of another oppositely charged, isolated muon track with $\ensuremath{p_{T}} >$~20~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ originating from a common vertex. The muon-stub criteria are dropped for the second leg to gain signal acceptance with very little increase in background; this second muon is merely a minimum-ionizing track. Table~\ref{tab:zmmcuts} shows the complete set of selection criteria used to identify $Z \rightarrow \mu \mu$ events. Again, we require both tracks to pass the track requirements of Table~\ref{tab:trackqual}. 
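The dilepton invariant-mass requirement can be illustrated with a short computation in the massless-lepton approximation; the function names are illustrative only.

```python
import math

def dilepton_mass(pt1, eta1, phi1, pt2, eta2, phi2):
    """Invariant mass of two leptons, neglecting lepton masses [GeV/c^2]."""
    px1, py1, pz1 = pt1 * math.cos(phi1), pt1 * math.sin(phi1), pt1 * math.sinh(eta1)
    px2, py2, pz2 = pt2 * math.cos(phi2), pt2 * math.sin(phi2), pt2 * math.sinh(eta2)
    e1, e2 = pt1 * math.cosh(eta1), pt2 * math.cosh(eta2)
    m2 = (e1 + e2)**2 - (px1 + px2)**2 - (py1 + py2)**2 - (pz1 + pz2)**2
    return math.sqrt(max(m2, 0.0))

def in_z_mass_window(mass):
    """Mass window around the Z boson used in the selection [GeV/c^2]."""
    return 66.0 <= mass <= 116.0
```

For example, two back-to-back central leptons with $\ensuremath{p_{T}} =$~45.6~$\ensuremath{\GeV\!/c}$ each reconstruct to a mass of 91.2~$\ensuremath{\GeV\!/c^2}$, well inside the window.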
\begin{table*}[t] \caption{$Z \rightarrow e e$ selection criteria.} \begin{tabular}{ l c c r } \hline \hline Variable & ``Tight'' Central $e$ & ``Loose'' Central $e$ & Plug $e$ \\ \hline $\ensuremath{E_{T}}$ & $>$ 25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ & $>$ 25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ & $>$ 20~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ \\ $\ensuremath{p_{T}}$ & $>$ 10~$\ensuremath{\GeV\!/c}$ & $>$ 10~$\ensuremath{\GeV\!/c}$ & \\ $\ensuremath{E_{T}}^{\mathrm{iso}}$ & $<$ 0.1 $\cdot \ensuremath{E_{T}}^{\mathrm{cluster}}$ & $<$ 0.1 $\cdot \ensuremath{E_{T}}^{\mathrm{cluster}}$ & $<$ 4~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ \\ $E_{\mathrm{had}}/E_{\mathrm{em}}$ & $<$ 0.055 + 0.00045 $\cdot$ E & $<$ 0.055 + 0.00045 $\cdot$ E & $<$ 0.05 \\ $E/p$ & $<$ 2.0 (or $\ensuremath{p_{T}} >$ 50~$\ensuremath{\GeV\!/c}$) & & \\ $L_{\mathrm{shr}}$ & $<$ 0.2 & & \\ $Q \cdot \Delta x$ & $>$ $-3.0$ $\mathrm{cm}$, $<$ 1.5 $\mathrm{cm}$ & & \\ $|\Delta z|$ & $<$ 3.0 $\mathrm{cm}$ & & \\ $\chi^{2}_{\mathrm{strips}}$ & $<$ 10.0 & & \\ $\chi^2_{\mathrm{PEM}}$ & & & $<$ 10.0 \\ \hline \hline \end{tabular} \label{tab:zeecuts} \end{table*} \begin{table} \caption{$Z \rightarrow \mu \mu$ selection criteria.} \begin{tabular}{ l r } \hline \hline Variable & Cut \\ \hline Fiducial and Kinematic: & \\ $|\eta_{\mu}^{(1)}|$ & $<$ 1.0 (CMUP+CMX) \\ $|\eta_{\mu}^{(2)}|$ & $<$ 1.2 (Track) \\ $\ensuremath{p_{T}}^{\mu(1)}$ & $>$ 20~$\ensuremath{\GeV\!/c}$ \\ $\ensuremath{p_{T}}^{\mu(2)}$ & $>$ 20~$\ensuremath{\GeV\!/c}$ \\ \hline Both Muon Legs: & \\ $E_{\mathrm{em}}$ ($p \leq$ 100~$\ensuremath{\GeV\!/c}$) & $<$ 2~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ \\ $E_{\mathrm{em}}$ ($p >$ 100~$\ensuremath{\GeV\!/c}$) & $<$ 2 + ($p$$-$100) $\cdot$ 0.0115~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ \\ $E_{\mathrm{had}}$ ($p \leq$ 100~$\ensuremath{\GeV\!/c}$) & $<$ 6~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ \\ $E_{\mathrm{had}}$ ($p >$ 100~$\ensuremath{\GeV\!/c}$) & $<$ 6 + ($p$$-$100) $\cdot$ 
0.0280~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ \\ $\ensuremath{E_{T}}^{\mathrm{iso}}$ & $<$ 0.1 $\cdot \ensuremath{p_{T}}$ \\ \hline First Muon Leg: & \\ $|\Delta X_{\mathrm{CMU}}|$ & $<$ 3.0 $\mathrm{cm}$ (CMUP) \\ $|\Delta X_{\mathrm{CMP}}|$ & $<$ 5.0 $\mathrm{cm}$ (CMUP) \\ $|\Delta X_{\mathrm{CMX}}|$ & $<$ 6.0 $\mathrm{cm}$ (CMX) \\ CMP $x$-fiducial distance & $<$ 0.0 $\mathrm{cm}$ (CMUP) \\ CMP $z$-fiducial distance & $<$ $-3.0$ $\mathrm{cm}$ (CMUP) \\ CMX $x$-fiducial distance & $<$ 0.0 $\mathrm{cm}$ (CMX) \\ CMX $z$-fiducial distance & $<$ $-3.0$ $\mathrm{cm}$ (CMX) \\ \hline \hline \end{tabular} \label{tab:zmmcuts} \end{table} \subsection{Event Selection Summary} \label{sec:evselsummary} Using the selection criteria described here, we find a total of 37,584 $\wenu$ candidate events. In the muon channel, we find 21,983 $\ensuremath{W}$ boson candidates with CMUP muons and 9,739 with CMX muons for a grand total of 31,722 $\wmnu$ candidates. In the $\ensuremath{Z}$ boson decay channel, we find 1,730 events with two reconstructed electrons in the central calorimeter and 2,512 events in which the second electron is reconstructed in the plug calorimeter giving a total of 4,242 $Z \rightarrow e e$ candidates. From our high $\ensuremath{p_{T}}$ muon dataset, we find 1,371 CMUP + track and 677 CMX + track $Z \rightarrow \mu \mu$ candidates. There is an overlap of 263 events between these two samples in which one candidate track is matched to stubs in the CMU and CMP muon chambers and the other is matched to a stub in the CMX chamber. Taking this overlap into account, we obtain a total of 1,785 $Z \rightarrow \mu \mu$ candidate events. 
\section{Signal Acceptance} \label{sec:acc} \subsection{Introduction} \label{sec:accintro} The acceptance terms in Eqs.~\ref{eq:wsigma} and~\ref{eq:zsigma} are defined as the fraction of $\wlnu$ or $Z \rightarrow \ell \ell$ events produced in $p\overline{p}$ collisions at $\sqrt{s} =$ 1.96 $\ensuremath{\mathrm{Te\kern -0.1em V}}$ that satisfy the geometrical and kinematic requirements of our samples. Lepton reconstruction in our detector is limited by the finite fiducial coverage of the tracking, calorimeter, and muon systems. Several kinematic requirements are also made on candidate events to help reduce backgrounds from non-$\ensuremath{W}/\ensuremath{Z}$ processes. The reconstructed leptons in these events are required to pass minimum calorimeter cluster $\ensuremath{E_{T}}$ and/or track $\ensuremath{p_{T}}$ criteria. In addition, a minimum requirement on the total measured missing $\ensuremath{E_{T}}$ is made on events in our $\wlnu$ candidate samples, and the invariant mass of $Z \rightarrow \ell \ell$ candidate events is restricted to a finite range around the measured $\ensuremath{Z}$ boson mass. The fraction of signal events that satisfy the geometrical and kinematic criteria outlined above for each of our samples is determined using simulation. One geometrical cut on candidate events for which we measure the acceptance directly from data is the requirement that the primary event vertex for each event lies within 60.0~$\mathrm{cm}$ of the detector origin along the $z$-axis (parallel to the direction of the beams). Our simulation does include a realistic model of the beam interaction region, but we obtain a more accurate estimation of the selection efficiency for the event vertex requirement from studies of minimum bias events in the data as described in Sec.~\ref{sec:eff}. 
Since the geometrical and kinematic acceptance for candidate events with a primary vertex outside our allowed region is significantly smaller, we remove the subset of simulated events with vertices outside this region from our acceptance calculations to avoid double-counting correlated inefficiencies. There is one additional complication involved in determining the kinematic and geometrical acceptances for our $Z \rightarrow \ell \ell$ candidate samples. Because we make our $\gamma^{*}/Z \rightarrow \ell \ell$ production cross section measurements in a specific invariant mass range, 66~$\ensuremath{\GeV\!/c^2}$ to 116~$\ensuremath{\GeV\!/c^2}$, we need to account for events generated outside this mass range whose reconstructed mass falls within it due to the effects of detector resolution and final-state radiation. To include events of this type in our acceptance calculations, we use simulated $\gamma^{*}/Z \rightarrow \ell \ell$ event samples generated over a wider invariant mass range ($M_{\ell\ell} >$~30~$\ensuremath{\GeV\!/c^2}$). In order for an event to contribute to the denominator of our acceptance calculations, we require that the invariant mass of the lepton pair at the generator level prior to application of final-state radiative effects lies within the range of our measurement (66~$\ensuremath{\GeV\!/c^2} < M_{\ell\ell} <$~116~$\ensuremath{\GeV\!/c^2}$). The generator-level invariant mass requirement is not made on events contributing to the numerator of our acceptance calculations, however, so that $\gamma^{*}/Z \rightarrow \ell \ell$ events generated outside the invariant mass range of our measurement which have reconstructed masses within this range are properly accounted for in the acceptance calculations. \subsection{Event and Detector Simulation} \label{sec:eventsim} The simulated events used to estimate the acceptance of our samples were generated with {\sc pythia} 6.203~\cite{acc:pythia}. 
The default set of Parton Distribution Functions (PDFs) used in the generation of these samples is CTEQ5L~\cite{acc:cteq5}. {\sc pythia} generates the hard, leading-order (LO) QCD interaction, $q+\bar{q} \rightarrow \gamma^{*}/Z$ (or $q+\bar{q^\prime} \rightarrow W$), simulates initial-state QCD radiation via its parton-shower algorithms, and generates the decay, $\gamma^{*}/Z \rightarrow \ell \ell$ (or $\wlnu$). No restrictions were placed at the generator level on the $\ensuremath{p_{T}}$ of the final-state leptons or on their pseudorapidity. Both initial- and final-state radiation were turned on in the event simulation. In order to model the data accurately, the beam energy was set to 980~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$, and the beam parameters were adjusted to provide the best possible match with data. The profile of the beam interaction region in $z$ was matched to data by setting the mean of the vertex distribution to 3.0~$\mathrm{cm}$ in the direction along the beams and the corresponding Gaussian spread to 25.0~$\mathrm{cm}$. The offset of the beam from the nominal center of the detector in the $r-\phi$ plane is also taken into account. In the simulation, the position of the beams at $z=0$ is offset by $-0.064$~$\mathrm{cm}$ in $x$ and $+0.310$~$\mathrm{cm}$ in $y$ to provide a rough match with the measured offsets in data. The location of a given vertex within the $r-\phi$ plane is also observed to depend on its location along the $z$-axis due to the non-zero angle between the beams and the central axis of the detector. Slopes of $-0.00021$ and $0.00031$ are assigned in the simulation to the direction of the beams relative to the $y-z$ and $x-z$ detector planes to model this effect. 
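The beamline model described above amounts to a linear parameterization of the transverse beam position versus $z$; a sketch using the offsets and slopes quoted in the text (the function itself is illustrative, not simulation code):

```python
def beam_position(z):
    """Transverse beam position (cm) as a function of z (cm), as modeled
    in the simulation: offsets at z = 0 plus small slopes of the
    beamline relative to the central axis of the detector."""
    x0, y0 = -0.064, 0.310                  # offsets at z = 0 (cm)
    slope_xz, slope_yz = 0.00031, -0.00021  # beam slopes in x-z and y-z planes
    return x0 + slope_xz * z, y0 + slope_yz * z

# Beam position at the edge of the allowed vertex region, z = 60 cm.
x, y = beam_position(60.0)
```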
The intermediate vector boson $\ensuremath{p_{T}}$ distribution in the simulation is tuned to match the CDF Run~I measurement of the $d \sigma / d \ensuremath{p_{T}}$ spectrum for electron pairs in the invariant mass range between 66~$\ensuremath{\GeV\!/c^2}$ and 116~$\ensuremath{\GeV\!/c^2}$~\cite{acc:cdfzpt}. The tuning is done using {\sc pythia}'s nonperturbative ``$K_{T}$ smearing'' parameters, {\sc parp(91)} and {\sc parp(93)}, and shower evolution $Q^2$ parameters, {\sc parp(62)} and {\sc parp(64)}. The {\sc parp(91)} parameter affects the location of the peak in the $d \sigma / d \ensuremath{p_{T}}$ distribution in the vicinity of 3~$\ensuremath{\GeV\!/c}$, and the {\sc parp(62)} and {\sc parp(64)} parameters affect the shape of the distribution in the region between 7~$\ensuremath{\GeV\!/c}$ and 25~$\ensuremath{\GeV\!/c}$. A comparison between the ``tuned'' $\gamma^* / Z$ $\ensuremath{p_{T}}$ distribution from simulation and the measured Run~I spectrum is shown in Fig.~\ref{fig:bosonpt}. We assume that the optimized {\sc pythia} tuning parameters obtained from data collected at the Run~I center of mass energy ($\sqrt{s} =$ 1.80 $\ensuremath{\mathrm{Te\kern -0.1em V}}$) remain valid at the increased Run~II center of mass energy ($\sqrt{s} =$ 1.96 $\ensuremath{\mathrm{Te\kern -0.1em V}}$). The underlying event model in {\sc pythia} is also tuned based on observed particle distributions in minimum bias events \cite{acc:ricktunea}. \begin{figure} \includegraphics[width=3.5in]{figures/zptp621a.eps} \caption{\label{fig:bosonpt} Tuned {\sc pythia} 6.21 $d \sigma / d \ensuremath{p_{T}}$ in $\ensuremath{\mathrm{pb}}$ per $\ensuremath{\GeV\!/c}$ (on average) of $\gamma^{*}/Z \rightarrow e e$ pairs in the mass region 66~$\ensuremath{\GeV\!/c^2} < M_{ee} <$~116$\ensuremath{\GeV\!/c^2}$ (histogram) versus the measurement made by CDF in Run I (points).} \end{figure} The shape of the boson rapidity distribution is strongly dependent on the choice of PDFs. 
The shape of the $d \sigma /dy$ distribution for $\gamma^{*}/Z \rightarrow e e$ pairs in the mass region, 66~$\ensuremath{\GeV\!/c^2} < M_{ee} <$~116~$\ensuremath{\GeV\!/c^2}$, was measured by CDF in Run~I\cite{acc:cdfzy}. The good agreement observed between the measured shape of $d \sigma / dy$ with that obtained from simulation using CTEQ5L PDFs motivates the selection of this PDF set for our event generation. A comparison between the shape of the Run~I measured $d \sigma / dy$ distribution and the shape of the same distribution from {\sc pythia} 6.21 simulated event samples generated with CTEQ5L is shown in Fig.~\ref{fig:bosony}. \begin{figure} \includegraphics[width=3.5in]{figures/zyp621a.eps} \caption{\label{fig:bosony} Tuned {\sc pythia} 6.21 $d \sigma / dy$ in $\ensuremath{\mathrm{pb}}$ per 0.1 of $\gamma^{*}/Z \rightarrow e e$ pairs in the mass region 66~$\ensuremath{\GeV\!/c^2} < M_{ee} <$~116~$\ensuremath{\GeV\!/c^2}$ (histogram) versus the measurement made by CDF in Run~I (points).} \end{figure} A detector simulation based on {\sc geant3}~\cite{acc:geant,acc:geant2} is used to model the behavior of the CDF detector. The {\sc gflash}~\cite{acc:gflash} package is used to decrease the simulation time of particle showers within the calorimeter. \subsection{Signal Acceptances from {\sc pythia}} \label{sec:sigaccpythia} Additional tuning is performed after detector simulation to improve modeling of the data further. A detailed description of the techniques used to determine the post-simulation tunings described here and the associated acceptance uncertainties is provided in sections~\ref{subsec:remod} and~\ref{subsec:epres}. The tuned, simulated event samples are used to determine the acceptances of each $\ensuremath{W}$ and $\ensuremath{Z}$ event sample. 
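The net acceptances quoted in the tables below are pass fractions whose quoted statistical uncertainties are consistent with binomial errors; as a check, the final $\wenu$ row of Table~\ref{tab:welacc} can be reproduced as follows (a minimal sketch, not analysis code):

```python
from math import sqrt

def acceptance(n_pass, n_total):
    """Pass fraction and its binomial statistical uncertainty."""
    a = n_pass / n_total
    return a, sqrt(a * (1.0 - a) / n_total)

# Final W -> e nu cut-flow row: 447836 events pass all cuts out of
# 1870156 events with |z_vtx| < 60 cm (Table tab:welacc).
a, da = acceptance(447836, 1870156)
print(round(a, 4), round(da, 4))  # 0.2395 0.0003
```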
As discussed in Sec.~\ref{sec:accintro}, events with a primary event vertex outside our allowed region ($|z_{\mathrm{vtx}}| <$ 60~$\mathrm{cm}$) are removed from both the numerator and denominator of our acceptances. The $\wlnu$ acceptance calculations are outlined in Tables~\ref{tab:welacc} and~\ref{tab:wmuacc} for the electron and muon candidate samples. The geometric and kinematic requirements listed in each table define the acceptances for the corresponding samples. The number of simulated events which satisfy each of the successive, cumulative criteria is shown in the tables along with the resulting net acceptances based on the total number of events with primary vertices inside our allowed region. The $\wmnu$ events which contain reconstructed muons with stubs in the CMX region of the muon detector are assigned a weight of 55.5/72.0 in the numerator of the acceptance calculation to account for the fact that the CMX detector was offline during the first 16.5~$\ensuremath{\mathrm{pb}^{-1}}$ of integrated luminosity that define our samples. The largest uncertainties attached to the individual luminosity measurements (see Sec.~\ref{subsec:lummeas}) cancel in our weighting ratio for CMX events and the residual uncertainty on this ratio has a negligible effect on the overall acceptance uncertainty. \begin{table*} \caption{$\wenu$ selection acceptance from {\sc pythia} Monte Carlo simulation. 
Statistical uncertainties are shown.} \begin{tabular}{l c c} \hline \hline Selection Criteria & Number of Events & Net Acceptance \\ \hline Total Events & 1933957 & - \\ $|z_{\mathrm{vtx}}| <$ 60~$\mathrm{cm}$ & 1870156 & - \\ Central EM Cluster & 927231 & 0.4958 $\pm$ 0.0004 \\ Calorimeter Fiducial Cuts & 731049 & 0.3909 $\pm$ 0.0004 \\ Electron Track $\ensuremath{p_{T}} >$ 10~$\ensuremath{\GeV\!/c}$ & 647691 & 0.3463 $\pm$ 0.0003 \\ EM Cluster $\ensuremath{E_{T}} >$ 25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ & 488532 & 0.2612 $\pm$ 0.0003 \\ Event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} >$ 25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ & 447836 & 0.2395 $\pm$ 0.0003 \\ \hline \hline \end{tabular} \label{tab:welacc} \end{table*} \begin{table*} \caption{$\wmnu$ selection acceptance from {\sc pythia} Monte Carlo simulation. Statistical uncertainties are shown.} \begin{tabular}{l c c} \hline \hline Selection Criteria & Number of Events & Net Acceptance \\ \hline Total Events & 2017347 & - \\ $|z_{\mathrm{vtx}}| <$ 60~$\mathrm{cm}$ & 1951450 & - \\ CMUP or CMX Muon & 545221 & 0.2794 $\pm$ 0.0003 \\ Muon Chamber Fiducial Cuts & 523566 & 0.2683 $\pm$ 0.0003 \\ Muon Track $\ensuremath{p_{T}} >$ 20~$\ensuremath{\GeV\!/c}$ & 435373 & 0.2231 $\pm$ 0.0003 \\ Muon Track Fiducial Cuts & 411390 & 0.2108 $\pm$ 0.0003 \\ Event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} >$ 20~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ & 383787 & 0.1967 $\pm$ 0.0003 \\ \hline \hline \end{tabular} \label{tab:wmuacc} \end{table*} The $Z \rightarrow \ell \ell$ acceptance calculations are outlined in Tables~\ref{tab:zelacc} and~\ref{tab:zmuacc} for the corresponding electron and muon candidate samples. As previously stated in Sec.~\ref{sec:accintro}, the acceptances that we define for these samples are for $\gamma^{*}/Z \rightarrow \ell \ell$ in the invariant mass range 66~$\ensuremath{\GeV\!/c^2} < M_{\ell\ell} <$~116~$\ensuremath{\GeV\!/c^2}$. 
The simulated event samples used to estimate the $\ensuremath{Z}$ boson acceptances are generated with a looser invariant mass requirement, $M_{\ell\ell} >$ 30~$\ensuremath{\GeV\!/c^2}$. Generated events with an invariant mass outside our allowed range do not contribute to the denominator of these acceptance calculations but can contribute to the numerator if the final reconstructed invariant mass turns out to lie within our allowed region due to radiative and/or detector resolution effects. In order for an event to contribute to the denominator of the $\ensuremath{Z}$ boson acceptance calculations, we require that the invariant mass of the dilepton pair at the generator level before application of any final state radiative effects lies within the correct invariant mass range, 66~$\ensuremath{\GeV\!/c^2} < M_{\ell\ell}$(Gen)~$<$~116~$\ensuremath{\GeV\!/c^2}$. As in the case of the $\wmnu$ acceptance calculation, events in the numerator of the $Z \rightarrow \mu \mu$ acceptance calculation must be weighted to account for the fact that the CMX portion of the muon detector was offline for the first subset of integrated luminosity that defines our samples. In order to account for this effect, a weight of (55.5/72.0) is applied to events contributing to the numerator of the $Z \rightarrow \mu \mu$ acceptance calculation which contain a CMX muon candidate satisfying the three muon geometric and kinematic requirements listed in Table~\ref{tab:zmuacc} and no CMUP muon candidates satisfying these same three requirements. \begin{table*}[t] \caption{$Z \rightarrow e e$ selection acceptance from {\sc pythia} Monte Carlo simulation. 
Statistical uncertainties are shown.} \begin{tabular}{l c c} \hline \hline Selection Criteria & Number of Events & Net Acceptance \\ \hline Total Events & 507500 & - \\ $|z_{\mathrm{vtx}}| <$ 60~$\mathrm{cm}$ & 490756 & - \\ 66~$\ensuremath{\GeV\!/c^2} < M_{ee}$(Gen) $<$ 116~$\ensuremath{\GeV\!/c^2}$ & 376523 & - \\ \hline Central EM Cluster & 363994 & 0.9667 $\pm$ 0.0003 \\ Calorimeter Fiducial Cuts & 299530 & 0.7955 $\pm$ 0.0007 \\ Electron Track $\ensuremath{p_{T}} >$ 10~$\ensuremath{\GeV\!/c}$ & 252881 & 0.6716 $\pm$ 0.0008 \\ EM Cluster $\ensuremath{E_{T}} >$ 25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ & 186318 & 0.4948 $\pm$ 0.0008 \\ Second EM cluster (Central or Plug) & 176417 & 0.4685 $\pm$ 0.0008 \\ Second Cluster Calorimeter Fiducial Cuts & 146150 & 0.3882 $\pm$ 0.0008 \\ Second Electron Track $\ensuremath{p_{T}} >$ 10~$\ensuremath{\GeV\!/c}$ (Central) & 138830 & 0.3687 $\pm$ 0.0008 \\ Second EM Cluster $\ensuremath{E_{T}} >$ 25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ (Central), 20~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ (Plug) & 125074 & 0.3322 $\pm$ 0.0008 \\ Second EM Cluster $E_{\mathrm{had}}/E_{\mathrm{em}}$ $<$ 0.125 (Plug) & 124881 & 0.3317 $\pm$ 0.0008 \\ 66~$\ensuremath{\GeV\!/c^2} < M_{ee}$(Rec) $<$ 116~$\ensuremath{\GeV\!/c^2}$ & 120575 & 0.3202 $\pm$ 0.0008 \\ Opposite Charge (Central-Central) & 119925 & 0.3185 $\pm$ 0.0008 \\ \hline \hline \end{tabular} \label{tab:zelacc} \end{table*} \begin{table*}[t] \caption{$Z \rightarrow \mu \mu$ selection acceptance from {\sc pythia} Monte Carlo simulation. 
Statistical uncertainties are shown.} \begin{tabular}{l c c} \hline \hline Selection Criteria & Number of Events & Net Acceptance \\ \hline Total Events & 507500 & - \\ $|z_{\mathrm{vtx}}| <$ 60~$\mathrm{cm}$ & 490755 & - \\ 66~$\ensuremath{\GeV\!/c^2} < M_{\mu\mu}$(Gen) $<$ 116~$\ensuremath{\GeV\!/c^2}$ & 375981 & - \\ \hline CMUP or CMX Muon & 217041 & 0.5773 $\pm$ 0.0008 \\ Muon Chamber Fiducial Cuts & 209693 & 0.5577 $\pm$ 0.0008 \\ Muon Track Fiducial Cuts & 199940 & 0.5318 $\pm$ 0.0008 \\ Muon Track $\ensuremath{p_{T}} >$ 20~$\ensuremath{\GeV\!/c}$ & 157244 & 0.4182 $\pm$ 0.0008 \\ Second Track with $\ensuremath{p_{T}} >$ 10~$\ensuremath{\GeV\!/c}$ & 91048 & 0.2422 $\pm$ 0.0007 \\ Second Track Fiducial Cuts & 62663 & 0.1667 $\pm$ 0.0006 \\ Second Track $\ensuremath{p_{T}} >$ 20~$\ensuremath{\GeV\!/c}$ & 56459 & 0.1502 $\pm$ 0.0006 \\ 66~$\ensuremath{\GeV\!/c^2} < M_{\mu\mu}$(Rec) $<$ 116~$\ensuremath{\GeV\!/c^2}$ & 52160 & 0.1387 $\pm$ 0.0006 \\ \hline \hline \end{tabular} \label{tab:zmuacc} \end{table*} \subsection{Improved Acceptance Calculations} The tuned {\sc pythia} simulated event samples are designed to provide the best possible model for our $\ensuremath{W}$ and $\ensuremath{Z}$ boson candidate samples. However, the actual boson production cross section calculation made by {\sc pythia} is done only at leading order (LO); see Fig.~\ref{fig:vprod}. The complex topologies of higher-order contributions are modeled using a backward shower evolution algorithm which includes initial-state radiative effects and a separate, post-generation algorithm for including final-state radiation. A better description of boson production can be obtained from recently developed NNLO theoretical calculations of the double-differential production cross sections, $d^{2}\sigma/dydM$, as a function of boson rapidity ($y$) and mass ($M$), for both $W^{\pm}$ and Drell-Yan production~\cite{acc:slac}. 
The calculations are based on the MRST 2001 NNLL PDF set~\cite{int:mrst1} and electroweak parameters taken from~\cite{int:pdg}. The single differential cross sections, $d\sigma/dy$, are obtained by integrating over the mass range, 66~$\ensuremath{\GeV\!/c^2} < M_{\ell\ell} <$ 116~$\ensuremath{\GeV\!/c^2}$ for Drell-Yan production and 40~$\ensuremath{\GeV\!/c^2} < M_{\ell\nu} <$ 240~$\ensuremath{\GeV\!/c^2}$ for $W^{\pm}$ production. \begin{figure} \includegraphics[width=3.3in]{figures/ay.eps} \caption{\label{fig:ay} Acceptance as a function of boson rapidity, $A(y)$, for our four candidate samples: $\wenu$ (squares), $\wmnu$ (points), $Z \rightarrow e e$ (stars), and $Z \rightarrow \mu \mu$ (triangles).} \end{figure} We use these NNLO theoretical calculations of $d\sigma/dy$ for Drell-Yan and $W^{\pm}$ production to obtain improved acceptance estimates for our candidate samples. First, the tuned {\sc pythia} event simulation is used to create acceptance functions for each candidate sample as a function of boson rapidity, $A(y)$. These functions provide the acceptance in each boson rapidity bin based on our modeling of the CDF detector contained in the event simulation. Fig.~\ref{fig:ay} shows the $A(y)$ acceptance functions for each of our four candidate samples. The $Z \rightarrow e e$ sample has larger acceptance at higher boson rapidity due to the plug calorimeter modules which provide additional coverage in the forward part of the detector for the second electron in these events. Based on these distributions, the acceptance of each sample, $\overline{A}$, is then calculated as \begin{eqnarray} \overline{A} = \frac{\int \frac{d\sigma}{dy} \cdot A(y) \cdot dy}{\int \frac{d\sigma}{dy} \cdot dy}. \label{eq:accclc} \end{eqnarray} The acceptance values obtained with this approach are shown in Table~\ref{tab:accres} and compared with values obtained directly from the {\sc pythia} simulated event samples. 
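Equation~\ref{eq:accclc} amounts to a cross-section-weighted average of $A(y)$ over rapidity bins; a schematic numerical evaluation is given below (the bin contents are purely illustrative, not our measured inputs):

```python
def rapidity_averaged_acceptance(dsigma_dy, a_of_y, dy):
    """Discrete form of the acceptance integral: a weighted average of
    A(y) over rapidity bins of width dy, weighted by d(sigma)/dy."""
    num = sum(s * a for s, a in zip(dsigma_dy, a_of_y)) * dy
    den = sum(dsigma_dy) * dy
    return num / den

# Illustrative inputs only: a d(sigma)/dy falling with |y| and an A(y)
# that drops toward large rapidity, qualitatively as in Fig. (ay).
dsigma = [60.0, 55.0, 40.0, 20.0, 5.0]  # pb per unit rapidity (hypothetical)
accept = [0.30, 0.28, 0.20, 0.08, 0.0]  # acceptance per bin (hypothetical)
abar = rapidity_averaged_acceptance(dsigma, accept, 0.5)
```

Note that the bin width cancels in the ratio; only the shape of $d\sigma/dy$ matters, which is why differences between the NNLO and {\sc pythia} shapes translate directly into acceptance differences.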
The results all agree within 0.4~$\!\%$, indicating that the shapes of the NNLO $d\sigma/dy$ distributions are very similar to those computed with the {\sc pythia} simulation. The acceptance values obtained using the NNLO theoretical differential cross section calculations are used for our measurements. \begin{table} \caption{Central acceptance values for our candidate samples based on $d\sigma/dy$ distributions obtained from both NNLO and {\sc pythia} simulation.} \begin{tabular}{l c c r} \hline \hline Acceptance & NNLO Calc. & {\sc pythia} & Difference ($\!\%$) \\ \hline $A_{\wmnu}$ & 0.1970 & 0.1967 & $+0.15$ \\ $A_{\wenu}$ & 0.2397 & 0.2395 & $+0.08$ \\ $A_{Z \rightarrow \mu \mu}$ & 0.1392 & 0.1387 & $+0.36$ \\ $A_{Z \rightarrow e e}$ & 0.3182 & 0.3185 & $-0.09$ \\ $A_{Z \rightarrow \mu \mu}/A_{\wmnu}$ & 0.7066 & 0.7054 & $+0.17$ \\ $A_{Z \rightarrow e e}/A_{\wenu}$ & 1.3272 & 1.3299 & $-0.20$ \\ \hline \hline \end{tabular} \label{tab:accres} \end{table} \subsection{Uncertainties in NNLO Calculation} Uncertainties in the NNLO calculations of the differential boson production cross sections lead to uncertainty on our calculated acceptance values. The theoretical calculations require a large number of input parameters taken from world average experimental results that have their own associated uncertainties. The renormalization scale used in the calculations is another source of uncertainty. The default renormalization scales used in the calculations are $M_{Z}$ for Drell-Yan production and $M_{W}$ for $W^{\pm}$ production. To study the effect of this scale on our central acceptance values, we recalculate the $d\sigma/dy$ production cross sections using renormalization scales twice and one-half of the default values. For both cases, we find the net change in our calculated acceptances to be less than 0.1~$\!\%$, which has a negligible effect on our overall acceptance uncertainty. 
We perform a computational consistency check on the NLO component of the NNLO $d\sigma/dy$ calculation~\cite{acc:slac} with a different $\overline{MS}$ NLO computation of $d\sigma/dy$~\cite{int:nnlo00,int:nnlo0,int:nnlo4,int:nnlo2}. We find that the resulting acceptance values differ by no more than 0.1~$\!\%$. Based on this agreement between the two calculations, we assign no additional uncertainty to our acceptance values based on the calculation itself. However, our default calculation is still susceptible to uncertainties from higher order effects beyond NNLO. To place a conservative limit on the magnitude of higher-order uncertainties, we compare acceptance values based on the NLO and NNLO versions of our default $d\sigma/dy$ production cross section calculations and assign an uncertainty based on the differences. The results are shown in Table~\ref{tab:therr}. The largest difference is seen in the acceptance for the $Z \rightarrow \mu \mu$ candidate sample, which has the narrowest acceptance window in boson rapidity. \begin{table}[t] \caption{Comparison of acceptances for our candidate samples based on $d\sigma/dy$ distributions from the NNLO and NLO versions of our default theoretical calculation. 
The difference is taken as an uncertainty on higher-order contributions.} \begin{tabular}{l c c r} \hline \hline Acceptance & NNLO & NLO & Difference ($\!\%$) \\ \hline $A_{\wmnu}$ & 0.1970 & 0.1975 & 0.25 \\ $A_{\wenu}$ & 0.2397 & 0.2404 & 0.29 \\ $A_{Z \rightarrow \mu \mu}$ & 0.1392 & 0.1402 & 0.72 \\ $A_{Z \rightarrow e e}$ & 0.3182 & 0.3184 & 0.06 \\ $A_{Z \rightarrow \mu \mu}/A_{\wmnu}$ & 0.7066 & 0.7101 & 0.50 \\ $A_{Z \rightarrow e e}/A_{\wenu}$ & 1.3272 & 1.3246 & 0.20 \\ \hline \hline \end{tabular} \label{tab:therr} \end{table} \subsection{Uncertainties from PDF Model} The largest uncertainties on our acceptance values arise from uncertainties on the momentum distributions of quarks and gluons inside the proton modeled in the PDF sets used as inputs to our theoretical calculations. The choice of PDF input has a significant effect on the shape of the $d\sigma/dy$ distributions, and consequently a significant effect on the calculated acceptances for our candidate samples. As noted earlier, our theoretical calculations use the best-fit MRST 2001 NNLL PDF set \cite{int:mrst1}. The input PDF sets are created by fitting relevant experimental results to constrain the parameters which describe the quark/gluon momentum distributions in the proton. Currently, the NNLL PDF set provided by the MRST group is the only one available to us. NLL PDF sets are available from both groups (MRST01E~\cite{int:mrst1,int:mrst2} and CTEQ6.1~\cite{int:cteq}), however. To investigate differences between the CTEQ and MRST PDF sets, we calculate $d\sigma/dy$ at NLO using each group's NLL PDF set and check for differences in the acceptance values for our candidate samples based on each calculation. The results are shown in Table~\ref{tab:pdferr1}. The differences are significant, especially for the $Z \rightarrow \mu \mu$ candidate sample. 
\begin{table}[t] \caption{Comparison of acceptances for our candidate samples based on $d\sigma/dy$ distributions from NLO theoretical calculations using NLL MRST and CTEQ PDF sets.} \begin{tabular}{l c c r} \hline \hline Acceptance & MRST & CTEQ & Difference ($\!\%$) \\ \hline $A_{\wmnu}$ & 0.1976 & 0.1960 & 0.82 \\ $A_{\wenu}$ & 0.2405 & 0.2385 & 0.84 \\ $A_{Z \rightarrow \mu \mu}$ & 0.1401 & 0.1376 & 1.82 \\ $A_{Z \rightarrow e e}$ & 0.3183 & 0.3164 & 0.60 \\ $A_{Z \rightarrow \mu \mu}/A_{\wmnu}$ & 0.7088 & 0.7021 & 0.95 \\ $A_{Z \rightarrow e e}/A_{\wenu}$ & 1.3235 & 1.3264 & 0.22 \\ \hline \hline \end{tabular} \label{tab:pdferr1} \end{table} Another recent development from the CTEQ and MRST groups is the release of ``error'' PDF sets at NLL which map out the space of potential PDF parameter values based on the uncertainties of the experimental results used to constrain them. The CTEQ PDF parameterization is based on 20 parameters, $P_{\mathrm{i}}$, which are tuned to their most likely values based on a minimization of the $\chi^2$ of a global fit to the experimental data. The equivalent MRST parameterization uses only 15 parameters. As the covariance matrix of the $P_{\mathrm{i}}$ is not diagonal at the minimum, it is difficult to propagate fit errors on the $P_{\mathrm{i}}$ directly into uncertainties on experimentally measured quantities such as acceptances. However, both groups construct different sets of eigenvectors, $Q_{\mathrm{i}}$, which do diagonalize the covariance matrix of the fit in the vicinity of the minimum. The $Q_{\mathrm{i}}$ are linearly independent by design, which allows experimental uncertainties based on deviations in each parameter to be added in quadrature. The MRST and CTEQ groups transform individual $\pm$~1~$\sigma$ variations of each $Q_{\mathrm{i}}$ back into the $P_{\mathrm{i}}$ parameter space and generate sets of ``up'' and ``down'' error PDFs. 
This procedure outputs two PDF sets per parameter for a total of 40 CTEQ (30 MRST) error PDF sets. The $\pm$~1~$\sigma$ variations of each eigenvector for the MRST01E and CTEQ6.1 error PDFs are different. These variations are based on the following values for the global fit $\chi^{2}$ from its minimum: $\Delta \chi^{2} =$ 50 for MRST01E and $\Delta \chi^{2} =$ 100 for CTEQ6.1. To determine the uncertainty on the acceptance values for our candidate samples based on the CTEQ and MRST error PDF sets, we perform the NLO $d\sigma/dy$ production cross section calculations for each error PDF set and check how much the acceptance values based on each calculation deviate from the values obtained using the best-fit NLL PDF set. The uncertainty associated with each $Q_{\mathrm{i}}$ is determined from the changes in acceptance between the best-fit PDF set and both the ``up'' and ``down'' error PDF sets associated with the given parameter, $\Delta A^{\mathrm{i}}_{\uparrow}$ and $\Delta A^{\mathrm{i}}_{\downarrow}$. In most cases the two acceptance differences lie in opposite directions and can be treated independently, but in a small number of cases both differences lie in the same direction and a different procedure needs to be followed. Table~\ref{tab:errpro} defines both the positive and negative uncertainties assigned to the acceptance uncertainty for each $Q_{\mathrm{i}}$ based on the relative signs of $\Delta A^{\mathrm{i}}_{\uparrow}$ and $\Delta A^{\mathrm{i}}_{\downarrow}$. 
\begin{table*}[t] \caption{Contributions to the positive and negative acceptance uncertainties based on acceptance differences between the ``up'' and ``down'' error PDF sets associated with a given $Q_{\mathrm{i}}$ and the best-fit PDF set.} \begin{tabular}{l c c } \hline \hline Direction of Acceptance Shifts & $+$ Uncertainty & $-$ Uncertainty \\ \hline \vspace{-0.4cm} & \\ \vspace{0.1cm} $\Delta A^{\mathrm{i}}_{\uparrow} >$~0 and $\Delta A^{\mathrm{i}}_{\downarrow} >$~0 & $\sqrt{({\Delta A^{\mathrm{i}}_{\uparrow}}^{2}+{\Delta A^{\mathrm{i}}_{\downarrow}}^{2})/2}$ & 0 \\ \vspace{0.1cm} $\Delta A^{\mathrm{i}}_{\uparrow} >$~0 and $\Delta A^{\mathrm{i}}_{\downarrow} <$~0 & $\Delta A^{\mathrm{i}}_{\uparrow}$ & $\Delta A^{\mathrm{i}}_{\downarrow}$ \\ \vspace{0.1cm} $\Delta A^{\mathrm{i}}_{\uparrow} <$~0 and $\Delta A^{\mathrm{i}}_{\downarrow} >$~0 & $\Delta A^{\mathrm{i}}_{\downarrow}$ & $\Delta A^{\mathrm{i}}_{\uparrow}$ \\ \vspace{0.1cm} $\Delta A^{\mathrm{i}}_{\uparrow} <$~0 and $\Delta A^{\mathrm{i}}_{\downarrow} <$~0 & 0 & $\sqrt{({\Delta A^{\mathrm{i}}_{\uparrow}}^{2}+{\Delta A^{\mathrm{i}}_{\downarrow}}^{2})/2}$ \\ \hline \hline \end{tabular} \label{tab:errpro} \end{table*} The positive and negative uncertainties associated with each of the individual $Q_{\mathrm{i}}$ (20 CTEQ and 15 MRST) are summed in quadrature to determine the overall PDF model uncertainty on our acceptance values. The results of these calculations using both the CTEQ and MRST error PDF sets are shown in Table~\ref{tab:pdferr2}. We note that the MRST uncertainties are a factor of 2--3 lower than the CTEQ uncertainties, which is most likely related to the different choices for the $\Delta \chi^{2}$ values used by the two groups to choose the $\pm$~1~$\sigma$ points associated with each of the $Q_{\mathrm{i}}$. 
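The sign-dependent combination defined in Table~\ref{tab:errpro} can be sketched as follows (the function name and the example shifts are illustrative; both uncertainties are returned as positive magnitudes):

```python
from math import sqrt

def pdf_uncertainty(shifts):
    """Combine per-eigenvector acceptance shifts (d_up, d_down) into
    overall (+, -) uncertainties following the sign rules of Table
    (errpro), summing contributions in quadrature."""
    up2 = down2 = 0.0
    for d_up, d_down in shifts:
        if d_up > 0 and d_down > 0:      # both shifts increase A
            up2 += (d_up ** 2 + d_down ** 2) / 2.0
        elif d_up < 0 and d_down < 0:    # both shifts decrease A
            down2 += (d_up ** 2 + d_down ** 2) / 2.0
        else:                            # opposite directions: treat independently
            up2 += max(d_up, d_down, 0.0) ** 2
            down2 += min(d_up, d_down, 0.0) ** 2
    return sqrt(up2), sqrt(down2)

# Hypothetical shifts for three eigenvectors.
up, down = pdf_uncertainty([(0.002, -0.001), (-0.003, 0.004), (0.001, 0.002)])
```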
We choose to use the larger CTEQ uncertainties based on the fact that the magnitude of those uncertainties is more consistent with the differences observed between the acceptance values for our samples calculated with the best-fit NLL CTEQ and MRST PDF sets (see Table~\ref{tab:pdferr1}). Based on the technique outlined above, we also determine the PDF model uncertainties associated with three additional quantities useful in the calculation of $\Gamma(W)$ and $g_{\mu}/g_{e}$ detailed in Sec.~\ref{sec:results}. These values are given in Table~\ref{tab:pdferr3}. \begin{table*}[t] \caption{PDF model acceptance uncertainties based on the CTEQ and MRST error PDF sets.} \begin{tabular}{l c c c c} \hline \hline & CTEQ & CTEQ & MRST & MRST \\ Acceptance & $+$ Uncertainty & $-$ Uncertainty & $+$ Uncertainty & $-$ Uncertainty \\ & ($\!\%$) & ($\!\%$) & ($\!\%$) & ($\!\%$) \\ \hline $A_{\wmnu}$ & 1.13 & 1.47 & 0.46 & 0.57 \\ $A_{\wenu}$ & 1.16 & 1.50 & 0.48 & 0.58 \\ $A_{Z \rightarrow \mu \mu}$ & 1.72 & 2.26 & 0.67 & 0.87 \\ $A_{Z \rightarrow e e}$ & 0.69 & 0.84 & 0.27 & 0.33 \\ $A_{Z \rightarrow \mu \mu}/A_{\wmnu}$ & 0.67 & 0.86 & 0.26 & 0.31 \\ $A_{Z \rightarrow e e}/A_{\wenu}$ & 0.74 & 0.56 & 0.29 & 0.23 \\ \hline \hline \end{tabular} \label{tab:pdferr2} \end{table*} \begin{table*}[t] \caption{Additional PDF model acceptance uncertainties based on the CTEQ and MRST error PDF sets.} \begin{tabular}{l c c c c } \hline \hline & CTEQ & CTEQ & MRST & MRST \\ Acceptance & $+$ Uncertainty & $-$ Uncertainty & $+$ Uncertainty & $-$ Uncertainty \\ & ($\!\%$) & ($\!\%$) & ($\!\%$) & ($\!\%$) \\ \hline ${\sigma_{W} \cdot A_{\wmnu}}/{\sigma_{Z} \cdot A_{Z \rightarrow \mu \mu}}$ & 1.03 & 1.06 & 0.52 & 0.42 \\ ${\sigma_{W} \cdot A_{\wenu}}/{\sigma_{Z} \cdot A_{Z \rightarrow e e}}$ & 0.70 & 1.06 & 0.42 & 0.62 \\ $A_{\wenu}/A_{\wmnu}$ & 0.03 & 0.04 & 0.01 & 0.01 \\ \hline \hline \end{tabular} \label{tab:pdferr3} \end{table*} \subsection{Uncertainties from Boson $\ensuremath{p_{T}}$ Model} 
As discussed in Sec.~\ref{sec:eventsim}, the boson $\ensuremath{p_{T}}$ distributions in our {\sc pythia} simulated event samples are tuned based on the CDF Run I measurement of the $d \sigma / d \ensuremath{p_{T}}$ spectrum of electron pairs in the mass region between 66~$\ensuremath{\GeV\!/c^2}$ and 116~$\ensuremath{\GeV\!/c^2}$. The simulated $\gamma^{*}/Z$ $\ensuremath{p_{T}}$ distribution at $\sqrt{s} =$ 1.8~$\ensuremath{\mathrm{Te\kern -0.1em V}}$ after tuning is shown in Fig.~\ref{fig:bosonpt} along with the measured distribution from Run~I. The values for the four parameters we use in {\sc pythia} for this tuning are chosen using a $\chi^{2}$ comparison of the $\ensuremath{Z}$ boson $\ensuremath{p_{T}}$ spectrum measured in Run~I and the {\sc pythia} generated spectra obtained while varying the values of our tuning parameters. The acceptance uncertainties related to our boson $\ensuremath{p_{T}}$ model come primarily from the Run~I measurement uncertainties. We quantify the effect of these uncertainties on our measured acceptances using the uncertainties returned from the $\chi^{2}$ fits used to obtain the four {\sc pythia} tuning parameters. We choose to use conservative $\pm$~3~$\sigma$ fit errors since the fit values for each of the tuning parameters, {\sc parp(64)} in particular, are somewhat inconsistent with expectation. We study the effects of changes in the boson $\ensuremath{p_{T}}$ distributions on our measured acceptances by re-weighting events in the default simulated event samples based on differences between the default boson $\ensuremath{p_{T}}$ distribution and those obtained from $\pm$~3~$\sigma$ changes in our individual tuning parameters. Table~\ref{tab:parp} summarizes the best fit values and $\pm$~3~$\sigma$ variations obtained for each tuning parameter and the corresponding acceptance uncertainties for each candidate sample. 
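The re-weighting step described above amounts to assigning each simulated event a weight equal to the ratio of the varied to the default boson $\ensuremath{p_{T}}$ spectrum in the bin containing its generated $\ensuremath{p_{T}}$, then recomputing the acceptance with these weights. A minimal sketch, with illustrative histogram arrays and function names (not the actual analysis code):

```python
import numpy as np

def reweight_acceptance(pt, passed, edges, h_default, h_varied):
    """Re-weight simulated events by the ratio of a varied boson-pT
    spectrum to the default one and return the re-weighted acceptance.
    pt: generated boson pT per event; passed: boolean selection flags;
    h_default / h_varied: pT histograms on the bin boundaries `edges`."""
    # per-bin weight = varied / default, guarding against empty bins
    ratio = np.divide(h_varied, h_default,
                      out=np.ones_like(h_varied), where=h_default > 0)
    # locate each event's pT bin and pick up its weight
    idx = np.clip(np.digitize(pt, edges) - 1, 0, len(ratio) - 1)
    w = ratio[idx]
    return np.sum(w * passed) / np.sum(w)
```

With identical default and varied spectra every weight is unity and the acceptance reduces to the simple pass fraction.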
Changes to the {\sc parp(93)} tuning parameter were found to have a negligible effect on the boson $\ensuremath{p_{T}}$ spectrum and the measured acceptances. Uncertainties associated with the other three tuning parameters are taken in quadrature to determine an overall acceptance uncertainty associated with the boson $\ensuremath{p_{T}}$ model. \begin{table*}[t] \caption{Fit results for {\sc pythia} boson $\ensuremath{p_{T}}$ tuning parameters and corresponding uncertainties on the measured acceptances of our candidate samples.} \begin{tabular}{l c c c c c c } \hline \hline & & $\pm$~3~$\sigma$ & & & & \\ Parameter & Best Fit & Variation & $\Delta A_{\wmnu}$ & $\Delta A_{\wenu}$ & $\Delta A_{Z \rightarrow \mu \mu}$ & $\Delta A_{Z \rightarrow e e}$ \\ & & & ($\!\%$) & ($\!\%$) & ($\!\%$) & ($\!\%$) \\ \hline {\sc parp(62)} & 1.26 & 0.30 & 0.01 & 0.00 & 0.01 & 0.01 \\ {\sc parp(64)} & 0.2 & 0.03 & 0.03 & 0.04 & 0.08 & 0.06 \\ {\sc parp(91)} & 2.0 & 0.3 & 0.02 & 0.02 & 0.00 & 0.02 \\ {\sc parp(93)} & 14 & 3 & - & - & - & - \\ Combined & & & 0.04 & 0.04 & 0.08 & 0.06 \\ \hline \hline \end{tabular} \label{tab:parp} \end{table*} \subsection{Uncertainties from Recoil Energy Model} \label{subsec:remod} An accurate model of the event recoil energy in the simulation is important for estimating the acceptance of the event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ criteria applied to our $\wlnu$ candidate events. Simulated recoil energy distributions are dependent on the models for hadronic showering, the boson recoil energy, and the underlying event energy. In addition, the simulation used in these measurements does not model other mechanisms that also contribute to the residual recoil energy in data events such as multiple interactions and accelerator backgrounds. To account for the effects of these differences, the simulated recoil energy distributions are tuned to match those in the data. 
As discussed in Sec.~\ref{sec:sigaccpythia}, the event recoil energy is defined as the total energy observed in the calorimeter after removing the energy deposits associated with the high $\ensuremath{p_{T}}$ leptons in our $\wlnu$ and $Z \rightarrow \ell \ell$ candidates. To tune the simulated recoil energy distributions, we separate the observed recoil $\ensuremath{E_{T}}$ in each event into components that are parallel and perpendicular to the transverse direction of the highest $\ensuremath{p_{T}}$ lepton in the event. The two components, $U^{\mathrm{recl}}_{\parallel}$ and $U^{\mathrm{recl}}_{\perp}$, are each assigned energy shift~($C$) and scale~($K$) corrections in the form \begin{eqnarray} (U^{\mathrm{recl}}_{\parallel})^{\prime} & = & ( K_{\parallel} \times U^{\mathrm{recl}}_{\parallel}) + C_{\parallel}, \label{eq:fixpar} \\ (U^{\mathrm{recl}}_{\perp})^{\prime} & = & ( K_{\perp} \times U^{\mathrm{recl}}_{\perp}) + C_{\perp}. \label{eq:fixperp} \end{eqnarray} The scale corrections are used to account for problems in the calorimeter response model and the effects of multiple interactions, the underlying event model, and accelerator backgrounds which are in principle independent of the lepton direction. The shift corrections are designed to account for simulation deficiencies that have a lepton-direction dependence such as the $\ensuremath{W}$ boson recoil model and the model for lepton energy deposition in the calorimeter. Based on the nature of these effects, we expect that the scaling corrections in both directions, $K_{\parallel}$ and $K_{\perp}$, should be equivalent and that the shift correction in the perpendicular direction, $C_{\perp}$, should be zero. We check these assumptions, however, by keeping each parameter independent in the fitting procedure used to determine the best values for tuning the recoil energy in simulated events. 
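The corrections of Eqs.~(\ref{eq:fixpar}) and~(\ref{eq:fixperp}) amount to projecting the recoil $\ensuremath{E_{T}}$ vector onto axes parallel and perpendicular to the lead-lepton direction and applying a linear transformation to each component. A sketch, with our own function and argument names and correction values of the magnitude quoted later in Table~\ref{tab:recerr}:

```python
import math

def corrected_recoil(ux, uy, lep_phi, K_par, C_par, K_perp, C_perp):
    """Project the recoil ET vector (ux, uy) onto axes parallel and
    perpendicular to the lead-lepton transverse direction and apply
    the scale (K) and shift (C) corrections of Eqs. fixpar/fixperp."""
    cos_p, sin_p = math.cos(lep_phi), math.sin(lep_phi)
    u_par = ux * cos_p + uy * sin_p       # component along the lepton
    u_perp = -ux * sin_p + uy * cos_p     # component transverse to it
    return K_par * u_par + C_par, K_perp * u_perp + C_perp
```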
To determine the best values for these scaling and shifting constants, we perform $\chi^{2}$ fits between the data recoil energy distributions and corrected distributions from the simulation based on a range of scaling and shifting constants. An iterative process is used in which we first determine the best possible shifting constants and then fit for scaling constants based on those values. We repeat this process until the $\chi^{2}$ fits for both the scaling and shifting constants stabilize at set values. No effects are expected which can give rise to shifts in the energy perpendicular to the lepton momentum, and the fitted shifts of these distributions are consistent with zero. We set $C_{\perp}$ to zero. We also find that the fitted scale factors for both recoil energy components agree well with each other in both the electron and muon candidate samples. Based on this agreement, we also make a combined fit to both components for a single correction scale factor. We use this single scaling factor to correct both recoil energy components. Comparisons of the $U^{\mathrm{recl}}_{\parallel}$ and $U^{\mathrm{recl}}_{\perp}$ distributions for $\wenu$ candidate events in tuned simulation and data are shown in Figs.~\ref{fig:upar_el} and~\ref{fig:uper_el}. \begin{figure} \includegraphics[width=3.5in]{figures/upar_ele.eps} \caption{Comparison of $U^{\mathrm{recl}}_{\parallel}$ recoil energy distributions for $\wenu$ candidate events in tuned simulation and data.} \label{fig:upar_el} \end{figure} \begin{figure} \includegraphics[width=3.5in]{figures/uperp_ele.eps} \caption{Comparison of $U^{\mathrm{recl}}_{\perp}$ recoil energy distributions for $\wenu$ candidate events in tuned simulation and data.} \label{fig:uper_el} \end{figure} The uncertainties on our measured acceptances related to the recoil energy model in the simulation are estimated using the $\pm$~3~$\sigma$ values of the scale and shift correction factors returned from our fit procedure. 
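The iterative shift-then-scale fitting loop described above can be expressed schematically. Here `chi2` stands in for the comparison of corrected simulation with data, and the grids, starting values, and function name are illustrative:

```python
def alternating_fit(chi2, c0, k0, c_grid, k_grid, max_iter=20):
    """Iteratively fit shift (C) and scale (K) constants: fix K and
    scan C over a grid for the minimum chi2, then fix C and scan K;
    repeat until both stabilize. chi2(c, k) is supplied by the caller
    (an illustrative sketch of the procedure in the text)."""
    c, k = c0, k0
    for _ in range(max_iter):
        c_new = min(c_grid, key=lambda c_try: chi2(c_try, k))
        k_new = min(k_grid, key=lambda k_try: chi2(c_new, k_try))
        if c_new == c and k_new == k:     # fits have stabilized
            break
        c, k = c_new, k_new
    return c, k
```

For a well-behaved $\chi^{2}$ surface the loop converges in a few iterations to the joint minimum over the two grids.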
As in the case of boson $\ensuremath{p_{T}}$ model uncertainties, we choose to use the $\pm$~3~$\sigma$ values rather than the $\pm$~1~$\sigma$ values as we are using these parameters to cover a wide range of effects that are potentially incorrectly modeled in our simulated event samples. Since the tuning parameters are not directly related to the underlying mechanisms that affect the recoil energy distributions, we choose to be conservative in how we estimate the associated acceptance uncertainties via this procedure. We recalculate the acceptance of our candidate samples with each of the individual tuning parameters changed to its $\pm$~3~$\sigma$ values and assign an uncertainty based on the differences between these results and our default acceptance values. The changes in acceptance found from modifying the overall scale correction $K$ and the shift corrections for both directions, $C_{\parallel}$ and $C_{\perp}$, are added in quadrature to estimate the total uncertainties on our measured acceptances due to the recoil energy model in the simulation. To be conservative we choose to include an uncertainty based on fit results for $C_{\perp}$ even though this parameter is set to zero for tuning the simulated recoil energy distributions. Table~\ref{tab:recerr} summarizes the best fit values and $\pm$~3~$\sigma$ variations with respect to the best fit values obtained for each of the scaling and shifting parameters used to tune the recoil energy model in simulation and the corresponding acceptance uncertainties for the $\wlnu$ candidate samples. 
\begin{table*}[t] \caption{Summary of simulation recoil energy tuning parameter values and uncertainties obtained from our fit procedure and the corresponding uncertainties on our measured acceptance values.} \begin{tabular}{l c c c c c c } \hline \hline Tuning & $\wenu$ & $\wenu$ & $\wmnu$ & $\wmnu$ & & \\ Parameter & Fit Value & $\pm$~3~$\sigma$ variation & Fit Value & $\pm$~3~$\sigma$ variation & $\Delta A_{\wenu}$ & $\Delta A_{\wmnu}$ \\ & & & & & ($\!\%$) & ($\!\%$) \\ \hline $K_{\parallel}$ & 1.06 & 0.02 & 1.06 & 0.03 & - & - \\ $K_{\perp}$ & 1.04 & 0.02 & 1.05 & 0.02 & - & - \\ $K$ & 1.05 & 0.02 & 1.05 & 0.02 & 0.17 & 0.20 \\ $C_{\parallel}$ & -0.4 & 0.1 & -0.1 & 0.1 & 0.18 & 0.29 \\ $C_{\perp}$ & 0.0 & 0.1 & 0.0 & 0.1 & 0.00 & 0.00 \\ Combined & & & & & 0.25 & 0.35 \\ \hline \hline \end{tabular} \label{tab:recerr} \end{table*} \subsection{Uncertainties from Energy and Momentum Scale/Resolution} \label{subsec:epres} The modeling of COT track $\ensuremath{p_{T}}$ scale and resolution in the simulation affects our acceptance estimates for the minimum track $\ensuremath{p_{T}}$ requirements made on muon and electron candidates in our samples. Similarly, the model of cluster $\ensuremath{E_{T}}$ scale and resolution for the electromagnetic sections of the calorimeter can change the acceptance estimates for the minimum cluster $\ensuremath{E_{T}}$ requirements on electrons. Lepton energy and momentum measurements can also alter the event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ calculation, and incorrect modeling of these quantities can therefore also affect our acceptance estimates for the minimum $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ criteria applied to our $\wlnu$ samples. \begin{figure} \includegraphics[width=3.5in]{figures/z_mumu.eps} \caption{$\gamma^{*}/Z \rightarrow \mu \mu$ invariant mass distribution in data and tuned simulation normalized to the data. 
The arrows indicate the invariant mass range of our $\gamma^{*}/Z$ cross section measurement.} \label{fig:ptfit} \end{figure} We check the scale and resolution of the track $\ensuremath{p_{T}}$ and cluster $\ensuremath{E_{T}}$ measurements in the simulation using the invariant mass distributions of $\gamma^{*}/Z \rightarrow \mu \mu$ and $\gamma^{*}/Z \rightarrow e e$ candidate events. A direct comparison of these distributions in data and simulation is possible due to the small level of background contamination in these samples. We first define scale factors for COT track $\ensuremath{p_{T}}$ ($K_{\ensuremath{p_{T}}}$) and cluster $\ensuremath{E_{T}}$ ($K_{\ensuremath{E_{T}}}$) in the simulation via the expressions \begin{eqnarray} p^{\prime}_{T} & = & K_{\ensuremath{p_{T}}} \times \ensuremath{p_{T}}, \label{eq:fixpt} \\ E^{\prime}_{T} & = & K_{\ensuremath{E_{T}}} \times \ensuremath{E_{T}}. \label{eq:fixet} \end{eqnarray} The best values for these scale factors are determined by making a series of $\chi^{2}$ fits between the $\gamma^{*}/Z \rightarrow \ell \ell$ invariant mass distributions in data and tuned simulation based on a range of values for the scale factors. The best $\chi^{2}$ fit for the track $\ensuremath{p_{T}}$ scale factor is $K_{\ensuremath{p_{T}}} =$ 0.997. Since the mean of the $\gamma^{*}/Z \rightarrow \mu \mu$ invariant mass peak in the simulation is centered on the measured $\ensuremath{Z}$ boson mass, the best-fit value for $K_{\ensuremath{p_{T}}}$ indicates that the $\ensuremath{p_{T}}$ scale of reconstructed tracks in the data is slightly low. This result is consistent with track $\ensuremath{p_{T}}$ scaling factors for simulation obtained from similar fits to the $J/\psi \rightarrow \mu \mu$ and $\Upsilon \rightarrow \mu \mu$ invariant mass peaks, indicating that the resulting scale factor is not $\ensuremath{p_{T}}$ dependent. 
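The scale-factor fits of Eqs.~(\ref{eq:fixpt}) and~(\ref{eq:fixet}) are grid scans of a $\chi^{2}$ between mass histograms. A toy version is sketched below, using the fact that a dilepton invariant mass scales linearly with the lepton $\ensuremath{p_{T}}$ or $\ensuremath{E_{T}}$ scale; the simple histogramming and Poisson errors here stand in for the full simulation-based fit:

```python
import numpy as np

def best_scale_factor(data_hist, sim_masses, sim_weights, edges, scan):
    """Scan a scale factor K, rebuild the simulated Z->ll mass
    histogram with masses scaled by K, and return the K minimizing a
    simple chi2 against the data histogram (illustrative sketch)."""
    def chi2(K):
        h, _ = np.histogram(K * sim_masses, bins=edges, weights=sim_weights)
        err2 = np.maximum(data_hist, 1.0)     # Poisson errors on data
        return np.sum((h - data_hist) ** 2 / err2)
    return min(scan, key=chi2)
```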
The best fits for the cluster $\ensuremath{E_{T}}$ scale factors in the central and plug calorimeter modules are $K_{\ensuremath{E_{T}}}$(central)~$=$~1.000 and $K_{\ensuremath{E_{T}}}$(plug)~$=$~1.025 indicative of a model that underestimates energy deposition in the plug modules but is accurate for the central module. \begin{figure} \includegraphics[width=3.5in]{figures/z_ee.eps} \caption{$\gamma^{*}/Z \rightarrow e e$ invariant mass distribution in data and tuned simulation normalized to the data. The arrows indicate the invariant mass range of our $\gamma^{*}/Z$ cross section measurement.} \label{fig:etfit} \end{figure} Comparisons of the $\gamma^{*}/Z \rightarrow \ell \ell$ invariant mass distributions in the data and the simulation are used to tune the track $\ensuremath{p_{T}}$ and cluster $\ensuremath{E_{T}}$ resolution in the simulation. We smear these values in simulated events by generating a random number from a Gaussian distribution with mean equal to one and width equal to a chosen $\sigma$ for each lepton candidate in our samples. The resolution smearing is obtained by multiplying the track $\ensuremath{p_{T}}$ and/or cluster $\ensuremath{E_{T}}$ by the different random numbers obtained from our distribution. Setting $\sigma$ equal to zero adds no smearing since each generated random number equals one by definition. The best values for $\sigma$ are obtained from $\chi^{2}$ fits between the $\gamma^{*}/Z \rightarrow \ell \ell$ invariant mass distributions in data and tuned simulation corresponding to a range of values for $\sigma$. The best $\chi^{2}$ fits for track $\ensuremath{p_{T}}$ and central calorimeter $\ensuremath{E_{T}}$ resolution are found to be for the case of $\sigma$ equal to zero indicating that these resolutions are well-modeled in the simulation. 
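The multiplicative Gaussian smearing described above is simple to write down; the function below is an illustrative sketch (the seed and interface are ours):

```python
import numpy as np

def smear(values, sigma, rng=None):
    """Multiply each track pT (or cluster ET) value by a random number
    drawn from a Gaussian of mean 1 and width sigma; sigma = 0 leaves
    the values unchanged, as noted in the text."""
    if rng is None:
        rng = np.random.default_rng(0)   # fixed seed for reproducibility
    return values * rng.normal(loc=1.0, scale=sigma, size=len(values))
```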
The best $\chi^{2}$ fit for plug calorimeter $\ensuremath{E_{T}}$ resolution is for a value of $\sigma$ above zero indicating that the simulation model for $\ensuremath{E_{T}}$ resolution in the plug modules needs to be degraded to match the data better. Fig.~\ref{fig:ptfit} and Fig.~\ref{fig:etfit} show comparisons between the $\gamma^{*}/Z \rightarrow \mu \mu$ and $\gamma^{*}/Z \rightarrow e e$ invariant mass distributions in data and tuned simulation. \begin{table*}[t] \caption{Summary of simulation track $\ensuremath{p_{T}}$ scale and resolution tuning parameters and corresponding uncertainties on our measured acceptance values.} \begin{tabular}{l c c c c c c c c} \hline \hline Tuning & $Z \rightarrow \mu \mu$ & $Z \rightarrow \mu \mu$ & $Z \rightarrow e e$ & $Z \rightarrow e e$ & & & & \\ Parameter & Fit Value & $\pm$~3~$\sigma$ variation & Fit Value & $\pm$~3~$\sigma$ variation & $\Delta A_{\wenu}$ & $\Delta A_{\wmnu}$ & $\Delta A_{Z \rightarrow e e}$ & $\Delta A_{Z \rightarrow \mu \mu}$ \\ & & & & & ($\!\%$) & ($\!\%$) & ($\!\%$) & ($\!\%$) \\ \hline $K_{\ensuremath{p_{T}}}$ & 0.997 & 0.003 & - & - & 0.03 & 0.21 & 0.04 & 0.05 \\ $\sigma_{\ensuremath{p_{T}}}$ & 1.000 & 0.003 & - & - & 0.00 & 0.00 & 0.00 & 0.00 \\ $K_{\ensuremath{E_{T}}}$ (Central) & - & - & 1.000 & 0.003 & 0.34 & 0.00 & 0.23 & 0.00 \\ $\sigma_{\ensuremath{E_{T}}}$ (Central) & - & - & 1.000 & 0.015 & 0.03 & 0.00 & 0.05 & 0.00 \\ $K_{\ensuremath{E_{T}}}$ (Plug) & - & - & 1.025 & 0.006 & 0.00 & 0.00 & 0.11 & 0.00 \\ $\sigma_{\ensuremath{E_{T}}}$ (Plug) & - & - & 1.027 & 0.011 & 0.00 & 0.00 & 0.05 & 0.00 \\ \hline \hline \end{tabular} \label{tab:perr} \end{table*} The effects of uncertainties in the simulation model for the scale and resolution of track $\ensuremath{p_{T}}$ and cluster $\ensuremath{E_{T}}$ on our measured acceptances are estimated based on the $\pm$~3~$\sigma$ values of the corresponding tuning parameters obtained from our fit procedure. 
Our choice of using the $\pm$~3~$\sigma$ values to estimate acceptance uncertainties is conservatively based on the idea that these tuning parameters are not directly related to the underlying mechanisms that set the scale and resolution of track $\ensuremath{p_{T}}$ and cluster $\ensuremath{E_{T}}$ in the detector. The acceptance uncertainties are estimated by observing the changes in measured acceptance for each candidate sample that occur when each individual tuning parameter is changed between its default and $\pm$~3~$\sigma$ values. A summary of the fitted values and uncertainties of the scale and resolution tuning parameters for track $\ensuremath{p_{T}}$ and cluster $\ensuremath{E_{T}}$ is given in Table~\ref{tab:perr} along with the estimated uncertainties on the measured acceptances of our candidate samples associated with each parameter. \subsection{Uncertainties from Detector Material Model} The acceptances of the kinematic selection criteria applied to electron candidates are dependent on the amount of material in the detector tracking volume since electrons can lose a significant fraction of their energy prior to entering the calorimeter via bremsstrahlung radiation originating from interactions with detector material. The electron $E/p$ distribution, because of its sensitivity to radiation, is used to compare the material description in the detector simulation in the central region with that of the real detector as observed in data. One measure of the amount of material that electrons pass through in the tracking region is the ratio of the number of events in the peak of the $E/p$ distribution (0.9~$< E/p <$~1.1) to the number of events in the tail of the distribution (1.5~$< E/p <$~2.0). We study the uncertainty in the amount of material in the simulation by varying the thickness of a cylindrical layer of material in the detector simulation geometry description in the region between the silicon and COT tracking volumes. 
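The peak-to-tail ratio used above as a material gauge is straightforward to compute from a list of per-electron $E/p$ values; this is an illustrative implementation with the window boundaries quoted in the text:

```python
import numpy as np

def peak_to_tail(eop):
    """Ratio of E/p entries in the peak (0.9 < E/p < 1.1) to those in
    the bremsstrahlung tail (1.5 < E/p < 2.0): more material means a
    larger tail and a smaller ratio (illustrative sketch)."""
    eop = np.asarray(eop)
    peak = np.count_nonzero((eop > 0.9) & (eop < 1.1))
    tail = np.count_nonzero((eop > 1.5) & (eop < 2.0))
    return peak / tail if tail else float('inf')
```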
We choose to use copper as the material for this cylindrical layer as it best describes the silicon tracker copper readout cables and is also supported by independent studies of muon energy loss in the calorimeter. Based on electron candidates produced in decays of both $\ensuremath{W}$ and $\ensuremath{Z}$ bosons, we determine that the matching of the $E/p$ distribution between data and simulation has an uncertainty corresponding to $\pm$~1.5~$\!\%$ of a radiation length ($X_0$) of copper. This variation in the thickness of the cylindrical layer is used to model the acceptance uncertainties originating from the model of the detector material in the simulation. This result is cross-checked by counting the fraction of electrons in $\wenu$ candidate events which form ``tridents'' (see Sec.~\ref{sec:backg}). The probability of finding a trident, created when an electron radiates a photon which immediately converts into an electron-positron pair, is strongly dependent on the amount of material traversed by the electron inside the tracking volume. We also compare the resolution of the $Z \rightarrow e e$ invariant mass peak in data and simulation which is sensitive to the rate of radiative interactions within the tracking volume. The results of these studies are consistent with the $E/p$ results. Fig.~\ref{fig:epmatch} shows the $E/p$ distributions for electron candidates in our $Z \rightarrow e e$ data and simulated event samples. The $\pm$~1~$\sigma$ material samples are simulated using $\pm$~1.5~$\!\%$ of a radiation length of copper. We observe good agreement between data and our default simulation in the region below $E/p =$ 2.5. In the high $E/p$ tail above this value, the comparison is biased by dijet background events in the data. \begin{figure} \includegraphics[width=3.5in]{figures/eop_log.eps} \caption{\label{fig:epmatch} Comparison of $E/p$ distribution for electron candidates in $Z \rightarrow e e$ events in data and simulation. 
The $\pm$~1~$\sigma$ samples are simulated with $\pm$~1.5~$\!\%$ of a radiation length of copper in the tracking volume.} \end{figure} The tracks associated with electron candidates in the calorimeter plug modules have a low reconstruction efficiency due to the limited number of tracking layers in the forward region. Therefore, the plug preradiator detector is used to study the detector material in the simulation for plug electron candidates. The amount of energy deposited in the plug preradiator depends on the shower evolution of the electron in front of the calorimeter which is itself dependent on the amount of material the electron passes through before entering the calorimeter. On average, electrons passing through more material inside the tracking volume will have more evolved showers at the inner edges of the calorimeter and therefore deposit more energy in the plug preradiator. To study the detector simulation material description in the forward part of the tracking volume, we compare the ratio of energies observed in the plug preradiator and remaining plug calorimeter sections for forward electron candidates in data and simulation. As in the central region, we study the material in the simulation by varying the thickness of an iron disk in the volume between the tracking chamber endplate and the inner edge of the plug calorimeter. These studies indicate that our model for detector material in the forward region has an uncertainty corresponding to $\pm$~16.5~$\!\%$ of a radiation length ($X_0$) in the iron disk. Fig.~\ref{fig:pprmatch} shows the ratio of energies observed in the plug preradiator (PPR) and the plug electromagnetic calorimeter (PEM) for electron candidates in data and simulation as a function of both the combined energy (PPR + PEM) and pseudorapidity of the candidates. 
\begin{figure*}[t] \includegraphics[width=6.in]{figures/ppr_491ratio.eps} \caption{\label{fig:pprmatch} Comparison of observed ratio of energies in plug preradiator and plug electromagnetic calorimeter for electron candidates as a function of the combined energy (left) and pseudorapidity (right) of candidates. Data distributions are denoted by the filled circles. The open triangles and associated shaded band show the distribution and uncertainty range obtained from simulation when the thickness of the iron disk in the detector material description is varied by $\pm$~16.5~$\!\%$ of a radiation length.} \end{figure*} Acceptance uncertainties coming from the simulation material model are determined by generating simulated event samples with the thicknesses of the extra material layers set one at a time in the simulation to the lower and upper limits of their uncertainty ranges. The changes in measured acceptance for the $\wenu$ and $\gamma^{*}/Z \rightarrow e e$ samples relative to the default simulation for the modified detector material models are summarized in Table~\ref{tab:matsyst}. \begin{table} \caption{Summary of acceptance uncertainties due to detector tracking volume material model in simulation.} \begin{tabular}{l c r} \hline \hline Material Model & $\Delta A_{\wenu}$ & $\Delta A_{Z \rightarrow e e}$ \\ \hline Central & 0.73~$\!\%$ & 0.94~$\!\%$ \\ Plug & - & 0.21~$\!\%$ \\ \hline \hline \end{tabular} \label{tab:matsyst} \end{table} \subsection{Acceptance Uncertainty Summary} The acceptance uncertainties on our event samples are summarized in Table~\ref{tab:accerr}. 
\begin{table*}[t] \caption{Summary of estimated uncertainties on the measured acceptances for our four candidate samples.} \begin{tabular}{l c c c c } \hline \hline Uncertainty Category & $\Delta A_{\wenu}$ & $\Delta A_{\wmnu}$ & $\Delta A_{Z \rightarrow e e}$ & $\Delta A_{Z \rightarrow \mu \mu}$ \\ & ($\!\%$) & ($\!\%$) & ($\!\%$) & ($\!\%$) \\ \hline NNLO $d\sigma/dy$ Calculation & 0.29 & 0.25 & 0.06 & 0.72 \\ PDF Model (positive) & 1.16 & 1.13 & 0.69 & 1.72 \\ PDF Model (negative) & 1.50 & 1.47 & 0.84 & 2.26 \\ Boson $\ensuremath{p_{T}}$ Model & 0.04 & 0.04 & 0.06 & 0.08 \\ Recoil Energy Model & 0.25 & 0.35 & 0.00 & 0.00 \\ Track $\ensuremath{p_{T}}$ Scale/Resolution & 0.03 & 0.21 & 0.04 & 0.05 \\ Cluster $\ensuremath{E_{T}}$ Scale/Resolution & 0.34 & 0.00 & 0.26 & 0.00 \\ Detector Material Model & 0.73 & 0.00 & 0.96 & 0.00 \\ Simulated Event Statistics & 0.13 & 0.14 & 0.24 & 0.41 \\ Total (positive) & 1.46 & 1.22 & 1.23 & 1.94 \\ Total (negative) & 1.75 & 1.57 & 1.26 & 2.44 \\ \hline \hline \end{tabular} \label{tab:accerr} \end{table*} \section{Efficiency} \label{sec:eff} \subsection{Introduction} The acceptance values estimated from our simulated samples are corrected for additional inefficiencies from event selection criteria that are either not modeled in the simulation or are better measured directly from data. We determine a combined efficiency, $\epsilon_{\mathrm{tot}}$, for each candidate sample based on measured efficiencies for the individual selection criteria. We account for correlations between different selection criteria by having a specific order in which individual efficiency measurements are made. The efficiency measurement for a given selection criterion is made using a subset of candidates that passes the full set of selection criteria ordered prior to the one being measured. 
In addition, since the efficiency is applied as a correction to the acceptance, candidates used to measure efficiencies are also required to meet the geometrical and kinematic requirements used to define these acceptances. The ordering and definitions of the individual selection criteria efficiencies are presented in this introductory section. The following two sections describe how these individual efficiencies are combined to obtain the total event efficiencies for our $\wlnu$ and $Z \rightarrow \ell \ell$ candidate samples. The remaining sections describe how each of the individual efficiency terms is measured. The first efficiency term is $\epsilon_{\mathrm{vtx}}$, the fraction of $p\overline{p}$ collisions that occur within $\pm$~60~$\mathrm{cm}$ of the center of the detector along the $z$-axis. We impose this requirement as a fiducial cut to ensure that $p\overline{p}$ interactions are well-contained within the geometrical acceptance of the detector. The $z$-coordinate of the event vertex for a given event is taken from the closest intersection point of the reconstructed high $\ensuremath{p_{T}}$ lepton track(s) with the $z$-axis. Since event selection criteria can bias our samples against events originating in the outer interaction region, the efficiency of our vertex position requirement, $\epsilon_{\mathrm{vtx}}$, is measured directly from the observed vertex distribution in minimum-bias events. We define $\epsilon_{\mathrm{trk}}$ as the efficiency for reconstructing the track of the high $\ensuremath{p_{T}}$ lepton in the COT and $\epsilon_{\mathrm{rec}}$ as the efficiency for matching the found track to either a reconstructed electromagnetic cluster in the calorimeter (electrons) or a reconstructed stub in the muon chambers (muons). The $\epsilon_{\mathrm{rec}}$ term incorporates both the reconstruction efficiency for the cluster or stub and the matching efficiency for connecting the reconstructed cluster or stub with its associated COT track. 
For reconstructed leptons (tracks matched to clusters or stubs), $\epsilon_{\mathrm{id}}$ is the efficiency of the lepton identification criteria used to increase the purity of our lepton samples. To increase the number of events in our $Z \rightarrow \ell \ell$ candidate samples, we use a looser set of identification criteria on the second lepton leg in these events. The loose lepton selection criteria are a subset of the set of cuts applied to the single lepton in $\wlnu$ events and the first lepton leg in $Z \rightarrow \ell \ell$ events. The combined efficiency for the loose subset of cuts is referred to as $\epsilon_{\mathrm{lid}}$, and we define $\epsilon_{\mathrm{tid}}$ as the efficiency for the set of remaining identification cuts not included in the loose subset. The efficiency of our lepton isolation requirement, which helps to reduce non-$\ensuremath{W}/\ensuremath{Z}$ backgrounds in our samples, is defined independently as $\epsilon_{\mathrm{iso}}$. It is important to avoid double-counting correlated efficiency losses when measuring the efficiencies for our two sets of identification cuts and the isolation requirement. We eliminate this problem by defining a specific ordering of these terms ($\epsilon_{\mathrm{lid}}$, $\epsilon_{\mathrm{iso}}$, $\epsilon_{\mathrm{tid}}$) and measuring each efficiency term using the subset of lepton candidates that meets the requirements associated with all of the efficiency terms ordered prior to that being measured. A natural consequence of using this procedure is that the total lepton identification efficiency, $\epsilon_{\mathrm{id}}$, is necessarily equal to the product of $\epsilon_{\mathrm{lid}}$ and $\epsilon_{\mathrm{tid}}$. As discussed previously, the high $\ensuremath{p_{T}}$ electron and muon data samples used to make the production cross section measurements are collected with lepton-only triggers. 
We define $\epsilon_{\mathrm{trg}}$ as the efficiency for an isolated, high quality reconstructed lepton to have satisfied all of the requirements of the corresponding lepton-only trigger path. CDF has a three-level trigger system, and the value of $\epsilon_{\mathrm{trg}}$ is determined from the product of the efficiencies measured for each of the levels. The measured efficiency for a specific level of the trigger is based on the subset of reconstructed track candidates that satisfy the trigger requirements of the levels beneath it. This additional requirement is made to avoid double-counting correlated losses in efficiency observed in the different trigger levels. Finally, there are two efficiencies that are applied only in measurements made in the muon decay channels. We define $\epsilon_{\mathrm{cos}}$ as the efficiency for signal events not to be tagged as cosmic ray candidates via our tagging algorithm. The cosmic ray tagging algorithm is not based on the properties of a single muon, but rather on the full set of tracking data available from the COT in each event. As a result, $\epsilon_{\mathrm{cos}}$ is determined as an overall event efficiency rather than an additional lepton efficiency. Due to topological differences between $\wmnu$ and $Z \rightarrow \mu \mu$ events, the fraction of signal events tagged by the algorithm as cosmic rays is different for the two candidate samples. We refer to the efficiency term for the $\wmnu$ sample as $\epsilon^{W}_{\mathrm{cos}}$ and that for the $Z \rightarrow \mu \mu$ sample as $\epsilon^{Z}_{\mathrm{cos}}$. One additional event selection made only in the case of our $\wmnu$ candidate sample is the $\ensuremath{Z}$-rejection criteria. Due to the non-uniform coverage of the muon chambers, we find cases in which only one of the two high $\ensuremath{p_{T}}$ muon tracks originating from a $\ensuremath{Z}$-boson decay has a matching stub in the muon detector. 
The additional selection criterion used to eliminate these events from our $\wmnu$ candidate sample has a corresponding efficiency defined as $\epsilon_{\mathrm{z-rej}}$. \subsection{$\wlnu$ Efficiency Calculation} The efficiency of detecting a $\wlnu$ decay that satisfies the kinematic and geometrical criteria of our samples is obtained from the formula shown in Eq.~\ref{eq:weffcalc}. \begin{eqnarray} \epsilon_{\mathrm{tot}} = && \epsilon_{\mathrm{vtx}} \times \epsilon_{\mathrm{trk}} \times \epsilon_{\mathrm{rec}} \times \epsilon_{\mathrm{id}} \nonumber\\ && \times \epsilon_{\mathrm{iso}} \times \epsilon_{\mathrm{trg}} \times \epsilon_{\mathrm{z-rej}} \times \epsilon^{W}_{\mathrm{cos}} \label{eq:weffcalc} \end{eqnarray} As described in detail above, the ordering of the cuts, as shown by their left-to-right order in the formula, is important. Each efficiency term is an efficiency for the subset of $\wlnu$ events that satisfies the kinematic and geometric criteria of our samples as well as the requirements associated with each of the efficiency terms to the left of the term under consideration. For example, the trigger efficiency term in the formula, $\epsilon_{\mathrm{trg}}$, is an efficiency for reconstructed leptons that satisfy the geometrical, kinematic, identification, and isolation criteria used to select the high $\ensuremath{p_{T}}$ lepton in our $\wlnu$ candidate events. As noted previously, the $\epsilon_{\mathrm{z-rej}}$ and $\epsilon^{W}_{\mathrm{cos}}$ terms in the formula apply to the $\wmnu$ candidate sample only. Table~\ref{tb:ceneff} summarizes the measurements of the individual efficiency terms (described in detail below) and the resulting combined efficiencies for our $\wlnu$ candidate samples. The electron efficiencies shown in Table~\ref{tb:ceneff} are for central calorimeter electrons only since our $\wenu$ cross section measurement is also restricted to candidates in this part of the detector.
\begin{table*}[t] \caption{Summary of the individual efficiency terms for $\wlnu$.} \begin{tabular}{l c c r} \hline \hline Selection Criteria & Label & $\wenu$ & $\wmnu$ \\ \hline Fiducial Vertex & $\epsilon_{\mathrm{vtx}}$ & 0.950 $\pm$ 0.004 & 0.950 $\pm$ 0.004 \\ Track Reconstruction & $\epsilon_{\mathrm{trk}}$ & 1.000 $\pm$ 0.004 & 1.000 $\pm$ 0.004 \\ Lepton Reconstruction & $\epsilon_{\mathrm{rec}}$ & 0.998 $\pm$ 0.004 & 0.954 $\pm$ 0.007 \\ Lepton ID & $\epsilon_{\mathrm{id}}$ & 0.840 $\pm$ 0.007 & 0.893 $\pm$ 0.008 \\ Lepton Isolation & $\epsilon_{\mathrm{iso}}$ & 0.973 $\pm$ 0.003 & 0.982 $\pm$ 0.004 \\ Trigger & $\epsilon_{\mathrm{trg}}$ & 0.966 $\pm$ 0.001 & 0.925 $\pm$ 0.011 \\ $\ensuremath{Z}$-Rejection Cut & $\epsilon_{\mathrm{z-rej}}$ & - & 0.996 $\pm$ 0.002 \\ Cosmic Ray Tagging & $\epsilon^{\ensuremath{W}}_{\mathrm{cos}}$ & - & 0.9999 $\pm$ 0.0001 \\ \hline Total & $\epsilon_{\mathrm{tot}}$ & 0.749 $\pm$ 0.009 & 0.732 $\pm$ 0.013 \\ \hline \hline \end{tabular} \label{tb:ceneff} \end{table*} \subsection{$Z \rightarrow \ell \ell$ Efficiency Calculation} For both electrons and muons, we define a loose set of lepton selection criteria for the second leg of $Z \rightarrow \ell \ell$ events to increase the size of our candidate samples. The efficiency calculation for these samples is complicated by the fact that in many events both leptons from the $\ensuremath{Z}$ boson decay can satisfy the tight lepton selection criteria which are required for only one of the two legs. In the electron channel, we allow for two different types of loose lepton legs. The second leg can be either a central calorimeter electron candidate passing a looser set of selection criteria or an electron reconstructed in the forward part of the calorimeter (plug modules). For $Z \rightarrow \mu \mu$ candidates, a loose track leg is not required to have a matching reconstructed stub in the muon detectors. 
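As an illustrative cross-check (not part of the analysis code), the product in Eq.~\ref{eq:weffcalc} can be evaluated with the central values from Table~\ref{tb:ceneff}; up to rounding of the inputs, it reproduces the quoted totals:

```python
# Multiply the per-term central values from Table "tb:ceneff" in the order
# of Eq. (weffcalc); uncertainties are ignored in this sketch.
W_ENU = [0.950, 1.000, 0.998, 0.840, 0.973, 0.966]          # vtx, trk, rec, id, iso, trg
W_MUNU = [0.950, 1.000, 0.954, 0.893, 0.982, 0.925,
          0.996, 0.9999]                                    # + z-rejection, cosmic-ray terms

def product(terms):
    total = 1.0
    for t in terms:
        total *= t
    return total

eps_e = product(W_ENU)    # ~0.749, matching the quoted W -> e nu total
eps_mu = product(W_MUNU)  # ~0.732, matching the quoted W -> mu nu total
```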
For this sample, the second muon leg is simply required to be a high $\ensuremath{p_{T}}$, isolated track satisfying the subset of muon identification cuts corresponding to the track itself. The breakdown of lepton identification cut efficiencies between the loose and tight criteria is shown in Table~\ref{tb:looseeff} for both muons and central electrons. There is no reconstruction inefficiency associated with loose muon legs since track candidates are not required to have a matching muon detector stub. \begin{table}[t] \caption{Breakdown of loose and tight lepton identification efficiencies.} \begin{tabular}{l c c c} \hline \hline Selection Criteria & Label & Central Electron & Muon \\ \hline Loose Lepton ID & $\epsilon_{\mathrm{lid}}$ & 0.960 $\pm$ 0.004 & 0.933 $\pm$ 0.006 \\ Tight Lepton ID & $\epsilon_{\mathrm{tid}}$ & 0.876 $\pm$ 0.007 & 0.957 $\pm$ 0.005 \\ All Lepton ID & $\epsilon_{\mathrm{id}}$ & 0.840 $\pm$ 0.007 & 0.893 $\pm$ 0.008 \\ \hline \hline \end{tabular} \label{tb:looseeff} \end{table} Efficiencies for loose plug electrons are given in Table~\ref{tb:plugeff}. There is no track reconstruction component in the plug electron selection efficiency since a matched track is not required for candidates in the plug region of the calorimeter. Also, since no matching between tracks and clusters is done in this region, the plug lepton reconstruction efficiency is 100~$\!\%$. There are no dead calorimeter towers in the data-taking period used in these measurements. We also find that kinematic distributions for tight central electron legs in our central-plug $Z \rightarrow e e$ event sample are somewhat different from those in the central-central sample. These kinematic differences have a small effect on the electron identification efficiencies for the central legs in central-plug $Z \rightarrow e e$ events. 
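The factorization described above, in which the full identification efficiency is the product of the loose and tight terms, can be checked numerically against the central values in Table~\ref{tb:looseeff} (agreement holds up to rounding of the inputs):

```python
# Check eps_id = eps_lid * eps_tid for the central values in Table "tb:looseeff".
eps_lid = {"central_e": 0.960, "mu": 0.933}
eps_tid = {"central_e": 0.876, "mu": 0.957}
eps_id = {"central_e": 0.840, "mu": 0.893}

for lep in eps_id:
    # 2e-3 tolerance allows for rounding of the published values
    assert abs(eps_lid[lep] * eps_tid[lep] - eps_id[lep]) < 2e-3
```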
In order to correct for this effect, we measure a central leg scale factor, $S^{\mathrm{plug}}_{\mathrm{cl}}$, which is the ratio of central leg efficiencies in central-plug $Z \rightarrow e e$ events to those in central-central events. The value of this scale factor given in Table~\ref{tb:plugeff} is determined from simulation and is applied as an extra term in the overall selection efficiency for plug electrons. \begin{table} \caption{Plug electron efficiencies.} \begin{tabular}{l c c} \hline \hline Selection Criteria & Label & Plug Electron \\ \hline Lepton Reconstruction & $\epsilon^{\mathrm{plug}}_{\mathrm{rec}}$ & 1.000 \\ Lepton ID & $\epsilon^{\mathrm{plug}}_{\mathrm{id}}$ & 0.876 $\pm$ 0.015 \\ Lepton Isolation & $\epsilon^{\mathrm{plug}}_{\mathrm{iso}}$ & 0.993 $\pm$ 0.003 \\ Central Leg Scale Factor & $S^{\mathrm{plug}}_{\mathrm{cl}}$ & 1.014 $\pm$ 0.002 \\ Total & $\epsilon^{\mathrm{plug}}_{\mathrm{tot}}$ & 0.883 $\pm$ 0.015 \\ \hline \hline \end{tabular} \label{tb:plugeff} \end{table} To determine a total event selection efficiency for $Z \rightarrow e e$ events, we first calculate efficiencies for the central-central and central-plug samples which are independent of one another by definition. The total efficiency is a weighted sum of the efficiencies for the two samples. The weighting factors are determined from the relative numbers of central-central and central-plug events in our simulated sample. The fraction of central-plug events, $f_{\mathrm{cp}}$, is determined to be 0.655~$\pm$~0.001. 
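A quick arithmetic check (central values only) confirms that the total plug-electron efficiency in Table~\ref{tb:plugeff} is the product of its four terms:

```python
# eps_plug_tot = eps_rec * eps_id * central-leg scale factor * eps_iso,
# using the central values from Table "tb:plugeff".
eps_plug_tot = 1.000 * 0.876 * 1.014 * 0.993
# agrees with the quoted 0.883 up to rounding of the inputs
assert abs(eps_plug_tot - 0.883) < 2e-3
```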
Eq.~\ref{eq:cceff} shows the efficiency calculation for central-central $Z \rightarrow e e$ events: \begin{eqnarray} \epsilon_{\mathrm{tot}}^{\mathrm{cc}} = && \epsilon_{\mathrm{vtx}} \times \epsilon^{2}_{\mathrm{trk}} \times \epsilon^{2}_{\mathrm{rec}} \times \epsilon^{2}_{\mathrm{lid}} \times \epsilon^{2}_{\mathrm{iso}} \nonumber\\ && \times [\epsilon_{\mathrm{tid}} \times (2 - \epsilon_{\mathrm{tid}})] \nonumber\\ && \times [\epsilon_{\mathrm{trg}} \times (2 - \epsilon_{\mathrm{trg}})]. \label{eq:cceff} \end{eqnarray} The squared terms in the formula apply to efficiency terms that are applied twice (we require two reconstructed central electrons passing loose identification and isolation criteria). In order for this treatment to be correct, the efficiencies of the two electron legs in the $Z \rightarrow e e$ candidates are required to be uncorrelated. Using our sample of simulated $Z \rightarrow e e$ events, we look for correlations between the efficiencies for the two electron legs and find them to be negligible. The tight identification and trigger criteria can be satisfied by either of the two electrons. The combined efficiency for one of two objects to satisfy a particular requirement can be written as $\epsilon^{2} + 2 \times \epsilon \times (1 - \epsilon) = \epsilon \times (2 - \epsilon)$. The efficiency calculation for central-plug $Z \rightarrow e e$ events is given in Eq.~\ref{eq:cpeff}. 
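The combinatoric identity used for the tight-identification and trigger terms in Eq.~\ref{eq:cceff} can be verified numerically over the full range of per-leg efficiencies:

```python
# For two independent legs with per-leg efficiency eps, the probability that
# at least one leg passes is eps**2 + 2*eps*(1 - eps), which equals eps*(2 - eps).
for i in range(101):
    eps = i / 100.0
    at_least_one = eps**2 + 2.0 * eps * (1.0 - eps)
    assert abs(at_least_one - eps * (2.0 - eps)) < 1e-12
```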
\begin{eqnarray} \epsilon_{\mathrm{tot}}^{\mathrm{cp}} = && \epsilon_{\mathrm{vtx}} \times \epsilon_{\mathrm{trk}} \times \epsilon_{\mathrm{rec}} \times \epsilon_{\mathrm{lid}} \times \epsilon_{\mathrm{tid}} \times \epsilon_{\mathrm{iso}} \times \epsilon_{\mathrm{trg}} \nonumber\\ && \times \epsilon^{\mathrm{plug}}_{\mathrm{rec}} \times \epsilon^{\mathrm{plug}}_{\mathrm{id}} \times S^{\mathrm{plug}}_{\mathrm{cl}} \times \epsilon^{\mathrm{plug}}_{\mathrm{iso}} \label{eq:cpeff} \end{eqnarray} In these events only the central electron leg can satisfy the tight identification and trigger criteria so these efficiencies are only applied to the one central leg. Similarly, the plug efficiencies are applied only to the plug electron leg. Based on Eqs.~\ref{eq:cceff} and~\ref{eq:cpeff} the event efficiency for our combined $Z \rightarrow e e$ sample takes the form: \begin{eqnarray} \epsilon^{Z \rightarrow e e}_{\mathrm{tot}} = && \epsilon_{\mathrm{vtx}} \times \epsilon_{\mathrm{trk}} \times \epsilon_{\mathrm{rec}} \times \epsilon_{\mathrm{lid}} \times \epsilon_{\mathrm{tid}} \times \epsilon_{\mathrm{iso}} \times \epsilon_{\mathrm{trg}}\nonumber\\ && \times [(1 - f_{\mathrm{cp}}) \times \epsilon_{\mathrm{trk}} \times \epsilon_{\mathrm{rec}} \times \epsilon_{\mathrm{lid}} \times \epsilon_{\mathrm{iso}} \nonumber\\ && \times (2 - \epsilon_{\mathrm{tid}}) \times (2 - \epsilon_{\mathrm{trg}}) \nonumber\\ && + f_{\mathrm{cp}} \times \epsilon^{\mathrm{plug}}_{\mathrm{rec}} \times \epsilon^{\mathrm{plug}}_{\mathrm{id}} \times S^{\mathrm{plug}}_{\mathrm{cl}} \times \epsilon^{\mathrm{plug}}_{\mathrm{iso}}]. \label{eq:zeeeff} \end{eqnarray} The calculation of the total selection criteria efficiency for $Z \rightarrow \mu \mu$ candidate events is similar to that for events in the electron channel but involves some additional complications. 
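As a numerical sketch (central values only, uncertainties ignored), Eq.~\ref{eq:zeeeff} can be evaluated with the measured inputs from Tables~\ref{tb:ceneff}, \ref{tb:looseeff}, and~\ref{tb:plugeff} together with $f_{\mathrm{cp}} = 0.655$; the result reproduces the combined $Z \rightarrow e e$ efficiency quoted in Table~\ref{tb:zefffin}:

```python
# Central-electron terms (vtx, trk, rec, loose id, tight id, iso, trg)
vtx, trk, rec, lid, tid, iso, trg = 0.950, 1.000, 0.998, 0.960, 0.876, 0.973, 0.966
# Plug-electron terms (rec, id, central-leg scale factor, iso)
p_rec, p_id, p_scale, p_iso = 1.000, 0.876, 1.014, 0.993
f_cp = 0.655  # fraction of central-plug events

common = vtx * trk * rec * lid * tid * iso * trg
cc = (1 - f_cp) * trk * rec * lid * iso * (2 - tid) * (2 - trg)
cp = f_cp * p_rec * p_id * p_scale * p_iso
eps_zee = common * (cc + cp)  # ~0.713, cf. Table "tb:zefffin"
```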
As discussed above we increase our acceptance for $Z \rightarrow \mu \mu$ events by releasing the muon detector stub requirements for one of the two candidate track legs. The second muon leg in our candidate events can be any COT track passing the track quality, isolation, and minimum ionizing calorimeter energy deposition criteria used in this analysis for selecting muon track candidates. Since the track selection criteria are applied to both muon legs in our candidate events, the corresponding terms in the overall efficiency formula are squared. Only one of the two muon track candidates is required to have a matching stub in the muon detectors that satisfies our stub selection criteria. For roughly 40~$\!\%$ of our candidate events, both of the muon track legs point to active regions of the muon detectors. In these cases, either of the two legs can have a matching stub in the muon detectors and satisfy the tight leg criteria. In other cases, one of the two legs will not point to an active detector region, and the stub-matching criteria must be satisfied by the one leg that is pointed at the muon detectors. In order to determine the total efficiency for $Z \rightarrow \mu \mu$ candidate events, we first determine the total selection efficiencies for both of these event classes. The event selection efficiency for the combined sample is then extracted as the weighted sum of the efficiencies for the two different event types. The efficiency calculation for the subset of $Z \rightarrow \mu \mu$ events in which only one of the two muon tracks points to an active region of the muon detector is shown in Eq.~\ref{eq:zeffmu1}. The efficiencies corresponding to selection criteria applied to both muon legs (track reconstruction, loose identification, and isolation) enter into the formula as squared terms. 
The track leg pointing at an inactive region of the muon detector cannot have an associated reconstructed stub, so the other track leg in the event must have a matching stub for the event to satisfy the $Z \rightarrow \mu \mu$ selection criteria. This leg must also satisfy the tight muon identification and event trigger requirements since an associated reconstructed muon detector stub is a necessary pre-condition for a muon leg to meet these criteria. Since the muon stub reconstruction, tight identification, and trigger selection criteria can only be satisfied by one of the two muon legs in these events, the corresponding efficiency terms enter into Eq.~\ref{eq:zeffmu1} linearly. As previously mentioned, the efficiency for $Z \rightarrow \mu \mu$ events not to be tagged as cosmics, $\epsilon^{Z}_{\mathrm{cos}}$, is independent of the measured value for $\wmnu$ events. As described subsequently, we measure this efficiency in $Z \rightarrow \mu \mu$ events to be $\epsilon^{Z}_{\mathrm{cos}} =$~0.9994~$\pm$~0.0006. \begin{eqnarray} \epsilon^{\mu \mathrm{trk}}_{\mathrm{tot}} = && \epsilon_{\mathrm{vtx}} \times \epsilon^{Z}_{\mathrm{cos}} \times \epsilon^{2}_{\mathrm{trk}} \times \epsilon^{2}_{\mathrm{lid}} \times \epsilon^{2}_{\mathrm{iso}} \nonumber\\ && \times \epsilon_{\mathrm{rec}} \times \epsilon_{\mathrm{tid}} \times \epsilon_{\mathrm{trg}} \label{eq:zeffmu1} \end{eqnarray} This situation is more complicated for the class of $Z \rightarrow \mu \mu$ events where both muon legs point to active regions of the muon detector. For these events, both legs can individually satisfy the stub reconstruction, tight identification, and trigger criteria of the sample. In order to simplify the efficiency calculation, we require that at least one of the two muon legs in each candidate event satisfies the requirements associated with all three of the above criteria.
With this additional restriction, the overall event selection efficiency in the subset of $Z \rightarrow \mu \mu$ candidates where both muon legs point at active regions of the muon detector can be written as shown in Eq.~\ref{eq:zeffmu2}. The combined efficiency for a muon leg to satisfy the stub reconstruction, tight identification, and trigger criteria ($\epsilon^{*} = \epsilon_{\mathrm{rec}} \times \epsilon_{\mathrm{tid}} \times \epsilon_{\mathrm{trg}}$) enters into Eq.~\ref{eq:zeffmu2} in the form $\epsilon^{*} \times (2 - \epsilon^{*})$ which, as described above, is the resulting efficiency for a set of criteria required for one of two identical objects within an event. \begin{eqnarray} \epsilon^{\mu \mu}_{\mathrm{tot}} = && \epsilon_{\mathrm{vtx}} \times \epsilon^{Z}_{\mathrm{cos}} \times \epsilon^{2}_{\mathrm{trk}} \times \epsilon^{2}_{\mathrm{lid}} \times \epsilon^{2}_{\mathrm{iso}} \nonumber\\ && \times [\epsilon_{\mathrm{rec}} \times \epsilon_{\mathrm{tid}} \times \epsilon_{\mathrm{trg}} \nonumber\\ && \times (2 - \epsilon_{\mathrm{rec}} \times \epsilon_{\mathrm{tid}} \times \epsilon_{\mathrm{trg}})] \label{eq:zeffmu2} \end{eqnarray} In order to combine Eqs.~\ref{eq:zeffmu1} and~\ref{eq:zeffmu2} into a formula for the total event efficiency of our combined sample, we need to introduce an additional parameter, $f_{\mathrm{dd}}$, which is defined as the fraction of $Z \rightarrow \mu \mu$ events within our geometric and kinematic acceptance in which both muon legs are found to point at active regions of the muon detector. This quantity is determined from the simulated event sample. For our candidate sample, we obtain $f_{\mathrm{dd}} = 0.3889 \pm 0.0021$, which is a luminosity weighted average of the values for the different run periods in which the CMX was either offline or online. 
Using this additional factor, we determine a formula for the total event efficiency of our candidate sample by adding the expressions in Eqs.~\ref{eq:zeffmu1} and~\ref{eq:zeffmu2} weighted by factors of $1 - f_{\mathrm{dd}}$ and $f_{\mathrm{dd}}$ respectively. Finally, we obtain the expression shown in Eq.~\ref{eq:zeffmu3} for the total selection efficiency for events in our $Z \rightarrow \mu \mu$ candidate sample. \begin{eqnarray} \epsilon^{Z \rightarrow \mu \mu}_{\mathrm{tot}} = && \epsilon_{\mathrm{vtx}} \times \epsilon^{Z}_{\mathrm{cos}} \times \epsilon^{2}_{\mathrm{trk}} \times \epsilon^{2}_{\mathrm{lid}} \times \epsilon^{2}_{\mathrm{iso}} \nonumber\\ && \times \epsilon_{\mathrm{rec}} \times \epsilon_{\mathrm{tid}} \times \epsilon_{\mathrm{trg}} \times [1 + f_{\mathrm{dd}} \nonumber\\ && \times (1 - \epsilon_{\mathrm{rec}} \times \epsilon_{\mathrm{tid}} \times \epsilon_{\mathrm{trg}})] \label{eq:zeffmu3} \end{eqnarray} Based on the expressions in Eqs.~\ref{eq:zeeeff} and~\ref{eq:zeffmu3}, we can substitute our measured values for the individual efficiency terms and determine the combined event selection efficiencies for our $Z \rightarrow \ell \ell$ candidate samples. The resulting values are shown in Table~\ref{tb:zefffin}. \begin{table}[h] \caption{Results of $Z \rightarrow \ell \ell$ combined event efficiency calculations.} \begin{center} \begin{tabular}{l r} \hline \hline Candidate Sample & $\epsilon_{\mathrm{tot}}$ \\ \hline $Z \rightarrow e e$ & 0.713 $\pm$ 0.012 \\ $Z \rightarrow \mu \mu$ & 0.713 $\pm$ 0.015 \\ \hline \hline \end{tabular} \label{tb:zefffin} \end{center} \end{table} \subsection{Vertex Finding Efficiency} Our requirement that the $z$-position of the primary event vertex be within 60~$\mathrm{cm}$ of the center of the CDF detector ($|Z_{\mathrm{vtx}}| \leq$~60~$\mathrm{cm}$) limits the event acceptance to a fraction of the full luminous region for $p\overline{p}$ collisions. 
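A short numerical sketch (central values only, uncertainties ignored) confirms both that the weighted sum of Eqs.~\ref{eq:zeffmu1} and~\ref{eq:zeffmu2} reduces algebraically to the closed form of Eq.~\ref{eq:zeffmu3}, and that the measured inputs reproduce the combined $Z \rightarrow \mu \mu$ efficiency in Table~\ref{tb:zefffin} up to rounding:

```python
vtx, cos_z, trk, lid, iso = 0.950, 0.9994, 1.000, 0.933, 0.982
rec, tid, trg, f_dd = 0.954, 0.957, 0.925, 0.3889

base = vtx * cos_z * trk**2 * lid**2 * iso**2
eps_star = rec * tid * trg                 # stub + tight id + trigger on one leg
e1 = base * eps_star                       # Eq. (zeffmu1): one leg in muon-detector acceptance
e2 = base * eps_star * (2 - eps_star)      # Eq. (zeffmu2): both legs in acceptance
eps_zmm = (1 - f_dd) * e1 + f_dd * e2      # weighted sum of the two event classes
closed = base * eps_star * (1 + f_dd * (1 - eps_star))  # Eq. (zeffmu3)
```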
However, the luminosity estimate used in our cross section measurements is based on the full luminous range of the beam interaction region. We use minimum-bias data to measure the longitudinal profile of the $p\overline{p}$ luminous region, and this profile is subsequently used to estimate the fraction of interactions within our fiducial range in $z$. Fig.~\ref{zvSetA} shows the distribution of measured positions along the $z$-axis (parallel to beams) for reconstructed primary vertices in minimum-bias events. The minimum-bias events are taken from the same set of runs from which our candidate samples are constructed. In addition, the minimum-bias data is weighted to ensure that it has the same run-by-run integrated luminosity as the cross section event samples. We fit the distribution in Fig.~\ref{zvSetA} using the following form of the Tevatron beam luminosity function: \begin{equation} \frac{d {\cal L}(z)}{d z} = N_0 \:\: \frac{\exp{(-z^{2}/2 \sigma_{z}^{2})}} {\sqrt{ [1 + (\frac{ z - z_{01} }{\beta^*})^2 ] [1 + (\frac{ z - z_{02} }{\beta^*})^2 ] }} \label{eq:zvtx} \end{equation} The five free parameters of the fit are $N_0$, $\sigma_{z}$, $z_{01}$, $z_{02}$, and $\beta^{*}$. The $Z_{\mathrm{vtx}}$ distribution has some biases at large values of $z$ due to increased contamination from non-$p\overline{p}$ interactions such as those originating from beam-gas collisions and due to the decrease of COT tracking acceptance far away from the center of the detector. We avoid these biases by only fitting the measured $Z_{\mathrm{vtx}}$ distribution to our function for $d {\cal L}(z) / d z$ in the region where $|z| <$~60~$\mathrm{cm}$. Within this finite range in $z$, the fraction of events not from $p\overline{p}$ collisions is negligible and the COT tracking acceptance is high and uniform. \begin{figure} \includegraphics[width=3.5in]{figures/zvSetA.eps} \caption{The measured $Z_{\mathrm{vtx}}$ distribution. 
The units on the horizontal axis are $\mathrm{cm}$ and there are a total of 100 bins from $-$100~$\mathrm{cm}$ to $+$100~$\mathrm{cm}$. The curve is the fit to the luminosity function (Eq.~\ref{eq:zvtx}) for $|z| <$~60~$\mathrm{cm}$, and the resulting fit with 55 degrees of freedom has a $\chi^2$ of 119.} \label{zvSetA} \end{figure} The acceptance of our requirement on the $z$-position of the primary event vertex ($|Z_{\mathrm{vtx}}| <$~60~$\mathrm{cm}$) is calculated as \begin{equation} \epsilon_{\mathrm{vtx}}(|z| < 60) = \frac{ \int_{- 60}^{+ 60} \: [ d {\cal L}(z) / dz ] \: dz } { \int_{- \infty}^{+ \infty} \;\; [ d {\cal L}(z) / dz ] \: dz } \: . \end{equation} We perform the fit to the data and evaluate the acceptance for both the full sample and several sub-samples of our minimum-bias data set. We observe slight differences in the various sub-samples indicating small changes over time in the $z$-profile of $p\overline{p}$ collisions in the interaction region of our detector. The maximum shift seen in the measured acceptance among the various sub-samples is 0.6~$\!\%$, and we assign half of this value as a systematic uncertainty on the efficiency measurement. The statistical uncertainty on the measurement is assigned based on fit errors obtained from the $z_{\mathrm{vtx}}$ fit for the full minimum-bias sample. Using the techniques described above, we measure the signal acceptance of our cut on the $z$-position of the primary event vertex to be \begin{displaymath} \epsilon_{\mathrm{vtx}} = 0.950 \pm 0.002 \: (stat.) \pm 0.003 \: (syst.) \: . \end{displaymath} \subsection{Tracking Efficiency} We define tracking efficiency as the fraction of high $\ensuremath{p_{T}}$ leptons contained within our geometrical acceptance for which our offline tracking algorithm is able to reconstruct the lepton track from hits observed in the COT. 
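The acceptance integral above can be sketched numerically. The parameter values used below ($\sigma_z$, $\beta^*$, and the $z$ offsets, in cm) are purely illustrative placeholders, not the fitted Tevatron beam parameters; note that the normalization $N_0$ cancels in the ratio:

```python
import math

def dldz(z, sigma_z=28.0, z01=0.0, z02=0.0, beta_star=35.0):
    # Luminosity profile of Eq. (zvtx) with N0 = 1 (it cancels in the ratio);
    # all parameter values here are hypothetical.
    gauss = math.exp(-z * z / (2.0 * sigma_z**2))
    hour_glass = math.sqrt((1 + ((z - z01) / beta_star) ** 2)
                           * (1 + ((z - z02) / beta_star) ** 2))
    return gauss / hour_glass

def integrate(f, a, b, n=4000):
    # simple trapezoidal rule
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# +/- 300 cm is effectively +/- infinity for this profile
eps_vtx = integrate(dldz, -60.0, 60.0) / integrate(dldz, -300.0, 300.0)
```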
We measure this quantity using a sample of clean, unbiased $\wenu$ candidate events based on a tight set of calorimeter-only selection criteria. The events for this sample were collected using a trigger path based on calorimeter $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ requirements to ensure that the sample is unbiased with respect to XFT tracking requirements in the hardware portion of the trigger and track reconstruction in the software portion. Events are required to have an electromagnetic calorimeter cluster with $\ensuremath{E_{T}} >$~20~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and overall event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} >$~25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$. Since we cannot use a track matching requirement to help reduce non-electron backgrounds, we apply a tighter-than-normal set of electron identification criteria on the electromagnetic cluster itself. We also remove candidate events containing additional reconstructed jets in the calorimeter with $\ensuremath{E_{T}} >$~3~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and require that the $\ensuremath{p_{T}}$ of the reconstructed $\ensuremath{W}$ boson is above 10~$\ensuremath{\GeV\!/c}$. These cuts are designed to remove background events in our sample originating from QCD dijet processes. Our tracking efficiency measurement is obtained from the fraction of events in this candidate sample which have a COT track pointing to the electromagnetic cluster. Matching reconstructed tracks in the COT are required to point within 5~$\mathrm{cm}$ of the calorimeter electromagnetic cluster seed tower. To ensure that we do not include trackless background events in our efficiency calculation, we also require that our candidate events have a reconstructed track based entirely on hits in the silicon tracking detector (independent of the central outer tracking chamber) pointing at the electromagnetic cluster.
A total of 1368 candidate events in our 72.0~$\ensuremath{\mathrm{pb}^{-1}}$ sample have a matching silicon track. Of these, 1363 events also contain a matching reconstructed track based solely on hits in the central outer tracker yielding a COT tracking efficiency of $\epsilon_{\mathrm{trk}}(Data) =$~0.9963~$^{+0.0035}_{-0.0040}$. The uncertainty on the measurement is primarily systematic and is based on studies of both silicon-only track fake rates and correlated failures in COT and silicon based tracking algorithms. We compare the tracking efficiency measured in the data with an equivalent measurement based on our $\wenu$ simulated event sample. Using the same technique, we obtain a simulation tracking efficiency of $\epsilon_{\mathrm{trk}}(MC) =$ 0.9966~$^{+0.0015}_{-0.0024}$, consistent with our measured value from data. A study of failing simulated events reveals that the small tracking inefficiency we measure is mainly due to bremsstrahlung radiation where the silicon-only track points in the direction of the hard photon and the COT track follows the path of the soft electron (pointing away from the high $\ensuremath{E_{T}}$ electromagnetic cluster). Since the loss of events due to this process is already accounted for in our acceptance calculation, we avoid double-counting by taking the ratio of the tracking efficiency measured in data to that measured in simulation as our net tracking efficiency. Based on this approach, our final value for the COT tracking efficiency is $\epsilon_{\mathrm{trk}} =$~1.000~$\pm$~0.004 where the uncertainty is the combined statistical and systematic uncertainty of our measurement technique. \subsection{Reconstruction Efficiency} The lepton reconstruction efficiency is defined as the fraction of real leptons that is within our geometrical acceptance and has matching, reconstructed COT tracks which are subsequently reconstructed as leptons by our offline algorithms. 
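The arithmetic behind these tracking-efficiency numbers is straightforward and can be cross-checked directly from the quoted event counts (central values only):

```python
# Raw data efficiency from the quoted counts, then the data/simulation ratio
# taken as the net COT tracking efficiency.
eff_data = 1363 / 1368          # ~0.9963
eff_mc = 0.9966                 # from the simulated W -> e nu sample
eff_net = eff_data / eff_mc     # ~1.000, the quoted net tracking efficiency
```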
In the case of electrons, this efficiency corresponds to the combined probability for forming the electromagnetic cluster and matching it to the associated COT track. For muons, it is the probability for reconstructing a stub in the muon detectors and matching the stub to the corresponding COT track. The reconstruction efficiency is measured using the unbiased, second legs of $Z \rightarrow \ell \ell$ decays. The leptons from $\ensuremath{Z}$ boson decays have a similar momentum spectrum to those originating from $\ensuremath{W}$ boson decays and are embedded in a similar event environment. Events are required to have at least one fully reconstructed lepton leg that satisfies the complete set of lepton identification criteria used in the selection of our candidate samples. The same lepton leg must also satisfy the requirements of the corresponding high $\ensuremath{p_{T}}$ lepton trigger path to ensure that the second leg is unbiased with respect to the trigger. A lepton leg satisfying these requirements is then paired with a second opposite-sign, high $\ensuremath{p_{T}}$ track in the event. If the invariant mass of a lepton-track pair lies within the $\ensuremath{Z}$ boson mass window, 80~$\ensuremath{\GeV\!/c^2}$~$< M_{\ell\ell} <$~100~$\ensuremath{\GeV\!/c^2}$, the second track leg is utilized as a candidate for testing the lepton reconstruction efficiency. In the case of $Z \rightarrow \mu \mu$ candidate events only, the second track leg is also required to have associated calorimeter energy deposition consistent with a minimum ionizing particle which reduces backgrounds from fake muons without biasing the measurement. In the subset of $Z \rightarrow \ell \ell$ candidate events where each track leg is a reconstructed lepton passing the full set of identification and trigger criteria, both legs are unbiased lepton candidates and included in the efficiency measurement. 
\begin{figure} \includegraphics[width=8.5cm]{figures/recoeff_cmup.eps} \caption{\label{fig:recoeff_cmup}Invariant mass of muon-track pairs for the muon reconstruction-efficiency measurement. We show the distribution for pairs in which the track is reconstructed as a muon track (open histogram) and for pairs in which the track is not reconstructed as a muon track (solid histogram). Only the region between 80~$\ensuremath{\GeV\!/c^2}$ and 100~$\ensuremath{\GeV\!/c^2}$ is used for the efficiency calculation.} \end{figure} Each candidate track leg is extrapolated to determine if it points at an active area of the calorimeter or muon detectors as appropriate. If the track does point at an active detector region, it is expected to be reconstructed as a lepton. The fraction of this subset of candidate tracks which are in fact reconstructed as leptons provides our measurement of the reconstruction efficiency. Fig.~\ref{fig:recoeff_cmup} shows the invariant mass distributions for muon-track pairs in cases where the second track is and is not reconstructed as a muon. The small peak seen near the $\ensuremath{Z}$ boson mass in the latter case indicates that we do observe a non-negligible muon reconstruction inefficiency in the data. However, some of the measured inefficiency is attributable to the effects of multiple scattering. A particle associated with a track that points at an active detector region can in some cases pass outside of this region due to the cumulative effects of interactions with material in the detector. This effect is modeled using the simulated event samples. All real reconstruction inefficiencies observable in the simulation are accounted for in the acceptance calculation and must not be double-counted in the lepton reconstruction efficiency measurement. Therefore, we determine our net lepton reconstruction efficiency by dividing the value measured in data by the value obtained from an equivalent measurement using simulation.
The lepton reconstruction efficiency measurements for electrons and muons are summarized in Table~\ref{tb:leptrec}. Plug electron candidates are not required to have a matching reconstructed track and therefore by our definition have a fixed reconstruction efficiency of 100~$\!\%$. We make additional checks to confirm that the leptons in our test samples are a good match for the leptons in our candidate samples and, based on this agreement, take the statistical uncertainty of our measurements as the total uncertainty on the reconstruction efficiencies. \begin{table*}[t] \caption{Summary of lepton reconstruction efficiency measurements. Because plug electron candidates are not required to have a matching reconstructed track, the corresponding reconstruction efficiency is one by definition.} \begin{tabular}{l c c c} \hline \hline Lepton & Data Efficiency & Simulation Efficiency & Net Efficiency \\ \hline Central Electrons & 0.990 $\pm$ 0.004 & 0.992 $\pm$ 0.001 & 0.998 $\pm$ 0.004 \\ Plug Electrons & 1.000 & 1.000 & 1.000 \\ Muons & 0.935 $\pm$ 0.007 & 0.980 $\pm$ 0.001 & 0.954 $\pm$ 0.007 \\ \hline \hline \end{tabular} \label{tb:leptrec} \end{table*} \subsection{Lepton Identification and Isolation Cut Efficiencies} The efficiencies of our lepton identification and isolation cuts are also determined directly from the data using $Z \rightarrow \ell \ell$ events. We use slightly different techniques for measuring these efficiencies for electrons and muons. The motivation for using separate methods is the non-negligible fraction of background events in the $Z \rightarrow e e$ candidate sample in which at least one of the reconstructed electrons is either a fake or the direct semileptonic decay product of a hadron. In order to accurately measure the selection efficiencies for electrons originating from $\ensuremath{W}$ and $\ensuremath{Z}$ boson decays, it is important to correct for the contribution of these background events to our efficiency calculation.
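The net efficiencies in Table~\ref{tb:leptrec} follow from the data-to-simulation ratio described above, which can be verified directly from the central values:

```python
# Net reconstruction efficiency = (data efficiency) / (simulation efficiency),
# using the central values from Table "tb:leptrec".
rows = {"central_e": (0.990, 0.992, 0.998), "mu": (0.935, 0.980, 0.954)}
for data, mc, net in rows.values():
    assert abs(data / mc - net) < 1e-3
```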
Since these types of backgrounds are negligible in our $Z \rightarrow \mu \mu$ candidate sample, we are able to use a more aggressive approach which maximizes the statistical size of the muon candidates used to determine these efficiencies. As previously mentioned, the identification and isolation efficiencies for leptons are determined in a specific order to avoid double-counting correlated inefficiencies between different groups of selection criteria. The order we employ in making these measurements is the following: efficiencies from loose identification cuts, isolation cut efficiencies, and efficiencies from tight identification cuts. This ordering is chosen to simplify the extraction of combined selection efficiencies for $Z \rightarrow \ell \ell$ events from our individual, measured efficiency terms. To protect this ordering, we require that lepton candidates used to measure each group of selection efficiencies satisfy the selection criteria associated with all groups defined to be earlier within our assigned order. To minimize backgrounds in the $Z \rightarrow e e$ event sample used to make the efficiency measurements, we require that at least one of the two reconstructed electrons passes the full set of identification and isolation criteria used in the $\wenu$ analysis. The second electron leg in each event, referred to here as the probe electron, is simply required to satisfy the geometric and kinematic cuts that define the acceptance of our candidate samples. In addition, the invariant mass of the electron pair is required to be within a tight window centered on the measured $\ensuremath{Z}$ boson mass (75~$\ensuremath{\GeV\!/c^2} < M_{ee} <$~105~$\ensuremath{\GeV\!/c^2}$), which further reduces non-$\ensuremath{Z}$ backgrounds in the sample. By definition, the electron passing the complete set of identification and isolation criteria is a central electron. 
Central-central $Z \rightarrow e e$ events satisfying the criteria listed above are used to measure central electron efficiencies, and central-plug events are used to measure plug electron efficiencies. We define the number of central-central $Z \rightarrow e e$ candidates passing our criteria as $N_{\mathrm{tc}}$. As mentioned above, each event has at least one electron passing the full set of identification and isolation criteria. Electrons of this type are referred to as tight. In some number of events in our candidate sample, $N_{\mathrm{tt}}$, both electrons are found to satisfy the tight criteria. In the remaining events, the probe electron necessarily fails at least one part of our selection criteria. However, some number of these remaining events will satisfy a particular subset of the identification and isolation requirements corresponding to an efficiency term that we want to measure. The total number of events where the probe leg is found to satisfy a given subset of cuts is referred to as $N_{\mathrm{t}i}$. In this case, the corresponding efficiency for the subset of cuts being studied is determined from the expression given in Eq.~\ref{eq:eid1}. The variable $i$ in this expression refers to the three sets of selection cut efficiencies to be measured (1 = loose identification, 2 = isolation, and 3 = tight identification). In the latter two cases, we limit our sample of probe electrons to those that satisfy the criteria associated with the lower numbered efficiency terms to avoid the double-counting problem discussed above. The net result is that for the latter two cases $N_{\mathrm{tc}} = N_{\mathrm{t}(i-1)}$ and $N_{\mathrm{t}i}$ is re-defined as the number of events where the probe leg is found to pass the requirements associated with the efficiency term being measured and those numbered below it. This new definition implies that for the final case $N_{\mathrm{t}i}$ is simply equal to $N_{\mathrm{tt}}$.
\begin{eqnarray} \epsilon_{i} &=& {{N_{\mathrm{t}i} + N_{\mathrm{tt}}} \over {N_{\mathrm{tc}} + N_{\mathrm{tt}}}}, \label{eq:eid1} \end{eqnarray} One additional complication is that we must subtract the contribution of background to each of the input event totals in Eq.~\ref{eq:eid1} to accurately measure the efficiencies for electrons produced in $\ensuremath{W}$ and $\ensuremath{Z}$ boson decays. For central-central $Z \rightarrow e e$ events, the background in each event subset is determined from the number of equivalent same-sign events observed in the data sample. A correction for tridents (real $Z \rightarrow e e$ events where the charge of one electron is measured incorrectly due to the radiation of a hard bremsstrahlung photon) in the same-sign event totals is made based on the relative numbers of opposite-sign and same-sign events in our $Z \rightarrow e e$ simulated event sample. The event counts and background corrections for each of the input parameters used in the efficiency calculations are given in Table~\ref{tb:ceid}. 
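The efficiency extraction of Eq.~\ref{eq:eid1}, with the same-sign background estimates subtracted from each input event count, can be sketched as follows. The precise form of the subtraction is our reading of the text; the numerical inputs are the values listed in Table~\ref{tb:ceid}.

```python
# Background-subtracted selection efficiency, following Eq. (eid1):
#   eps_i = (N_ti + N_tt) / (N_tc + N_tt),
# where each count has its estimated background removed first.
# (How the subtraction enters each term is inferred from the text.)

def selection_efficiency(n_tc, n_ti, n_tt, b_tc, b_ti, b_tt):
    """Efficiency for one subset of cuts after background subtraction."""
    num = (n_ti - b_ti) + (n_tt - b_tt)
    den = (n_tc - b_tc) + (n_tt - b_tt)
    return num / den

# Loose identification cuts for central electrons (Table tb:ceid, i = 1)
eps_lid = selection_efficiency(1901, 1751, 1296, 28.3, 6.1, 0.6)
print(round(eps_lid, 3))  # 0.960, in agreement with the table
```

The same function reproduces the isolation (0.973) and tight identification (0.876) entries of Table~\ref{tb:ceid} when fed the corresponding rows, which supports this reading of the background treatment.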
\begin{table*}[t] \caption{$Z \rightarrow e e$ event counts used as inputs to the calculation of electron identification and isolation efficiencies.} \begin{tabular}{l c c c c c c c c c} \hline \hline Efficiency Measurement & Symbol & $i$ & $N_{\mathrm{tc}}$ & $N_{\mathrm{t}i}$ & $N_{\mathrm{tt}}$ & $N_{\mathrm{tc}}$ & $N_{\mathrm{t}i}$ & $N_{\mathrm{tt}}$ & Efficiency \\ & & & & & & Background & Background & Background & \\ \hline Loose Identification Cuts & $\epsilon_{\mathrm{lid}}$ & 1 & 1901 & 1751 & 1296 & 28.3 & 6.1 & 0.6 & 0.960 $\pm$ 0.004 \\ Isolation Cut & $\epsilon_{\mathrm{iso}}$ & 2 & 1751 & 1663 & 1296 & 6.1 & -0.4 & 0.6 & 0.973 $\pm$ 0.003 \\ Tight Identification Cuts & $\epsilon_{\mathrm{tid}}$ & 3 & 1663 & 1296 & 1296 & -0.4 & 0.6 & 0.6 & 0.876 $\pm$ 0.007 \\ \hline \hline \end{tabular} \label{tb:ceid} \end{table*} The fraction of background events in the central-plug $Z \rightarrow e e$ candidate sample used to measure plug electron efficiencies is much larger than that in the central-central sample. In order to eliminate some of this additional background, we make an even tighter set of requirements on the isolation and electron quality variables associated with the central electron to pick the candidate events used to measure these efficiencies. As the probe leg in these candidates is the only plug electron of interest in the event, efficiencies are measured simply as the fraction of probe legs that satisfy the associated set of selection criteria. In the analyses reported here, plug electrons are utilized only as loose second legs for selecting $Z \rightarrow e e$ candidate events. There is therefore no corresponding tight identification cut efficiency to measure for plug electrons. However, the ordering of the loose identification and isolation cuts for plug electrons is identical to that used for electrons in the central region. 
We account for this ordering by requiring that the probe electrons used to measure the efficiency of the isolation cut satisfy the full set of loose plug electron identification cuts. We correct the number of probe legs in both the numerator and denominator of our efficiency calculations for the residual backgrounds remaining in our candidate sample. These backgrounds are estimated using electron fake rate calculations outlined in Sec.~\ref{sec:backg}. Based on this method, we obtain independent estimates for the background contributions from both QCD dijet and $\wenu$ plus jet processes and sum them to obtain our final background estimates. The inputs to our plug electron efficiency measurements and the resulting efficiency values are summarized in Table~\ref{tb:peid}. \begin{table*}[t] \caption{Input parameters to plug electron identification and isolation efficiency measurements using central-plug $Z \rightarrow e e$ candidates.} \begin{tabular}{l c c c c c c} \hline \hline Efficiency & Symbol & Number of & Number passing & Probe Electron & Passing Electron & Efficiency \\ Measurement & & Probe Electrons & Selection Criteria & Background & Background & \\ \hline Identification Cuts & $\epsilon^{\mathrm{plug}}_{\mathrm{id}}$ & 2517 & 2126 & 108.4 & 15.0 & 0.876 $\pm$ 0.015 \\ Isolation Cut & $\epsilon^{\mathrm{plug}}_{\mathrm{iso}}$ & 2126 & 2111 & 15.0 & 14.1 & 0.993 $\pm$ 0.003 \\ \hline \hline \end{tabular} \label{tb:peid} \end{table*} The calculation of muon identification and isolation efficiencies is simplified by the lack of significant backgrounds in our $Z \rightarrow \mu \mu$ candidate samples. To obtain the largest possible sample of probe muons for measuring these efficiencies, we make only a minimal set of requirements on the first muon leg in these events. 
In order to avoid selection biases, we simply require that at least one muon leg in each event satisfies both the trigger requirements and loose cuts used to select events into our high $\ensuremath{p_{T}}$ muon sample from which the candidate events are chosen. The second muon leg in each of these events is then utilized as an unbiased probe leg for measuring our selection efficiencies. In the subset of candidate events where both muon legs satisfy the trigger and loose selection requirements of our sample, both muons are unbiased and included in our sample of probe muons. To ensure that we are selecting probe muons from a clean (low background) sample of $Z \rightarrow \mu \mu$ candidate events, we do require that the invariant mass of each muon pair lies within a tight window around the measured $\ensuremath{Z}$ boson mass (80~$\ensuremath{\GeV\!/c^2} < M_{\mu\mu} <$~100~$\ensuremath{\GeV\!/c^2}$) and remove any events identified by our tagging algorithm as cosmic ray candidates. After applying these criteria, we find that only 3 of over 1,500 probe muons come from same-sign candidate events, confirming the negligible background fraction in the event sample used for these measurements. As in the case of electrons, the full set of muon identification cuts is divided into loose and tight subsets to simplify the calculation of the combined event selection efficiency for $Z \rightarrow \mu \mu$ candidate events. The second muon track leg in these events is not required to have a matching stub in the muon detector. Therefore, the identification cuts for muons which we refer to as loose are those that are applied to the track itself. The remaining tight selection cuts are those applied only to muon track legs with matching muon detector stubs. In some sense, the reconstruction of a matching stub in the muon detector is therefore also a tight selection criterion, although we choose to treat the efficiency for this requirement separately.
We use the same ordering of selection criteria (loose identification, isolation, and tight identification) as that used for electrons to avoid the double-counting of correlated muon inefficiencies. Muon probe legs used to measure the efficiency for each set of selection criteria are required to satisfy all selection cuts corresponding to previously ordered efficiency terms. Table~\ref{tb:mid} summarizes the inputs to the muon efficiency calculations and the resulting efficiency values. \begin{table*}[t] \caption{Input parameters to muon identification and isolation efficiency measurements using $Z \rightarrow \mu \mu$ candidates.} \begin{tabular}{l c c c c c c} \hline \hline Efficiency Measurement & Symbol & Number of & Number passing & Efficiency \\ & & Probe Muons & Selection Criteria & \\ \hline Loose Identification Cuts & $\epsilon_{\mathrm{lid}}$ & 1574 & 1469 & 0.933 $\pm$ 0.006 \\ Isolation Cut & $\epsilon_{\mathrm{iso}}$ & 1469 & 1443 & 0.982 $\pm$ 0.003 \\ Tight Identification Cuts & $\epsilon_{\mathrm{tid}}$ & 1443 & 1381 & 0.957 $\pm$ 0.005 \\ \hline \hline \end{tabular} \label{tb:mid} \end{table*} \subsection{Trigger Efficiency} As described in Sec.~\ref{sec:data}, the data samples used to select our candidate events are collected via high $\ensuremath{p_{T}}$ lepton-only trigger paths. The three-level trigger system utilized by the upgraded CDF data acquisition system reduces the 2.5~$\mathrm{MHz}$ beam-interaction rate to a final event collection rate on the order of 75~$\mathrm{Hz}$. The first two levels utilize dedicated hardware to select events for readout from the detector, and the third level is a processor farm that runs a fast version of the full event reconstruction to pick out the final set of events to be written to tape.
Level~1 lepton triggers are constructed from high $\ensuremath{p_{T}}$ COT tracks identified in the fast tracking hardware matched with single tower electromagnetic energy deposits in the calorimeter (electrons) or groups of hits in the outer wire chambers (muons). Level~2 hardware is used to perform a more sophisticated calorimeter energy clustering algorithm on electron candidates to obtain improved $\ensuremath{E_{T}}$ resolution. The improved $\ensuremath{E_{T}}$ variable is utilized at Level~2 to make tighter kinematic cuts on the electron candidates. No additional requirements are made on muon candidates at Level~2. Events selected at Level~2 are read out of the detector and passed to the Level~3 processor farm. A fast version of the offline lepton reconstruction algorithms is run on each event, and the identified leptons are subjected to both kinematic and loose quality selection cuts. The measurement of trigger efficiencies for electrons is simplified by the availability of secondary trigger paths that feed into our $\wenu$ candidate sample. A trigger path based solely on calorimeter quantities is used to measure the efficiency of tracking requirements at each of the three trigger levels. This path utilizes identical calorimeter cluster requirements to those in the default electron path but does not require matching tracks to be found at any level. Instead, events are selected based on the presence of large $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ in the calorimeter (15~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ at Level~1/Level~2 and 25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ at Level~3) associated with the high energy neutrino in the $\ensuremath{W}$ boson decays. For $\wmnu$ candidate events, the muon deposits only a small fraction of its energy into the calorimeter and hence the residual $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ in the calorimeter is too small to allow for an equivalent trigger path for muon candidates.
To measure the efficiencies of the electron trigger path track requirements, we select events from the secondary trigger path that pass the complete set of $\wenu$ selection criteria. The fraction of events in this unbiased sample that satisfy the track requirements of our lepton-only trigger path at each of the three levels gives the corresponding efficiency for those requirements. The double-counting of correlated inefficiencies between the different trigger levels is avoided by requiring that events used to measure higher level trigger efficiencies pass all of the tracking requirements associated with levels below that being measured. Due to slight changes in the track trigger requirements over time, the corresponding efficiencies are measured in three run ranges. A final efficiency is determined by taking the luminosity weighted average of the results obtained for each run range. The event samples used to make these measurements were studied to look for possible trigger efficiency dependencies on other event variables such as electron isolation, number of additional jets in the events, total event energy, and electron charge. No dependencies were found for these variables, within the statistical uncertainties of our sample. We did observe a small trigger efficiency dependence as a function of the measured pseudorapidity of the electron track. We observe a small inefficiency for tracks near $\eta_{\mathrm{det}} \sim$~0 due to wire spacers in the tracking chamber and reduced overall charge collection due to the shorter track path length through the chamber. However, the effect of this dependence on our final efficiency results was found to be negligible within our measurement uncertainties. The final efficiency results for the electron trigger path tracking requirements at each trigger level are shown in Table~\ref{tb:etrgeff1}. 
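The bookkeeping described above involves two simple operations: multiplying the per-level tracking efficiencies of Table~\ref{tb:etrgeff1} into a combined path efficiency, and luminosity-weighting results across run ranges. A minimal sketch follows; the per-level values are those of the table, while the per-run-range efficiencies and luminosities are hypothetical, since the individual run-range results are not listed here.

```python
# Combined trigger-path efficiency and luminosity-weighted averaging.
# Per-level inputs below are from Table tb:etrgeff1; the run-range
# (efficiency, luminosity) pairs are hypothetical illustrations.

def combined_efficiency(levels):
    """Total path efficiency: product of the per-level efficiencies."""
    out = 1.0
    for eps in levels:
        out *= eps
    return out

def lumi_weighted(eff_by_range):
    """Luminosity-weighted average over (efficiency, luminosity) pairs."""
    total_lumi = sum(lumi for _, lumi in eff_by_range)
    return sum(eps * lumi for eps, lumi in eff_by_range) / total_lumi

# Level 1 x Level 2 x Level 3 tracking efficiencies
print(round(combined_efficiency([0.974, 1.000, 0.992]), 3))  # 0.966

# Hypothetical example: three run ranges with slightly different efficiencies
print(lumi_weighted([(0.970, 20.0), (0.975, 30.0), (0.976, 22.0)]))
```

The product reproduces the combined Level~1 $\rightarrow$ Level~3 entry of 0.966 quoted in the table.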
\begin{table*}[t] \caption{Efficiencies for tracking requirements in high $\ensuremath{E_{T}}$ electron trigger path.} \begin{tabular}{l c r} \hline \hline Trigger Level & Track Requirement & Measured Efficiency \\ \hline Level~1 & Fast Tracker ($\ensuremath{p_{T}} >$ 8~$\ensuremath{\GeV\!/c}$) & 0.974 $\pm$ 0.002 \\ Level~2 & Fast Tracker ($\ensuremath{p_{T}} >$ 8~$\ensuremath{\GeV\!/c}$) & 1.000 $\pm$ 0.000 \\ Level~3 & Full Reconstruction ($\ensuremath{p_{T}} >$ 9~$\ensuremath{\GeV\!/c}$) & 0.992 $\pm$ 0.001 \\ Combined & Level~1 $\rightarrow$ Level~3 & 0.966 $\pm$ 0.002 \\ \hline \hline \end{tabular} \label{tb:etrgeff1} \end{table*} In order to measure the total efficiency of our electron trigger path, we additionally need to measure the efficiencies of the calorimeter cluster requirements at each level of the trigger. The requirement of an electromagnetic cluster with $\ensuremath{E_{T}} >$ 8~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ at Level~1 is studied using reconstructed electromagnetic objects found in muon-triggered events. We determine the highest energy trigger tower associated with each object and check to see if the Level~1 trigger bit corresponding to this tower is turned on in the data. We measure a turn-on efficiency of 99.5~$\!\%$ for trigger towers with a measured electromagnetic energy between 8~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and 14~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ and 100~$\!\%$ for those measured above 14~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$. The small inefficiency observed for towers with measured energies below 14~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ is due to an additional Level~1 requirement placed on the ratio of hadronic and electromagnetic energies ($E_{\mathrm{had}}/E_{\mathrm{em}} <$~0.05) in towers with energies below this cut-off value. 
The effect of this inefficiency on the fully reconstructed electrons in our $\wenu$ candidate events is determined by checking how often the associated trigger tower with the highest electromagnetic $\ensuremath{E_{T}}$ has a measured energy below 14~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$. We find that less than 1~$\!\%$ of the reconstructed electrons in our candidate sample ($\ensuremath{E_{T}} >$ 25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$) do not have at least one associated trigger tower with $\ensuremath{E_{T}} >$ 14~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$. Based on these numbers, we estimate the overall trigger efficiency for the Level~1 electromagnetic cluster requirement to be 100~$\!\%$ for the events in our candidate samples. Additional secondary trigger paths are used to measure the efficiencies of the Level~2 and Level~3 cluster requirements in our default electron trigger path. The efficiency of the Level~2 cluster requirement is obtained using events collected with two additional secondary trigger paths that have no Level~2 selection requirements other than simple prescales. The Level~1 and Level~3 trigger requirements in these paths are equivalent, in one case, to those of the default path and, in the other, to those of the path used to collect events for measuring the efficiencies of the track requirements. The subset of these events that pass our full set of $\wenu$ selection criteria is also found to satisfy the Level~2 cluster trigger criteria. Based on these samples we conclude that the efficiency of the Level~2 electron cluster requirement is 100~$\!\%$ for reconstructed electrons also satisfying our selection criteria for tight central electrons. Since the electron clustering algorithm run in the Level~3 processor farms is nearly identical to that used in offline reconstruction, we expect candidate events with high $\ensuremath{E_{T}}$ electrons to also satisfy the Level~3 cluster requirements of our trigger path.
However, due to slight differences in the calorimeter energy corrections applied at Level~3 and offline, it is possible that we could observe trigger inefficiencies close to the $\ensuremath{E_{T}}$ threshold utilized for Level~3 clusters. To check for this inefficiency, we collect events on an additional secondary trigger path which is based on the Level~1 and Level~2 requirements of our default electron trigger path but no requirements at Level~3 other than a simple prescale. We find that all of the events collected on this path which satisfy our event selection criteria also satisfy the Level~3 cluster criteria of our default trigger path. Based on this study, the efficiency of the Level~3 cluster requirement for events in our candidate samples is also 100~$\!\%$. Since we do not measure inefficiencies for the cluster requirements of our trigger path at any of the three levels, we conclude that the overall efficiency of our default trigger path for electrons is completely determined by the measured efficiencies of the track criteria given in Table~\ref{tb:etrgeff1}. As mentioned above we do not have the benefit of an equivalent set of secondary trigger paths for collecting $\wmnu$ candidate events to measure the efficiencies of our muon trigger path requirements. Instead, we use $Z \rightarrow \mu \mu$ candidate events in which both muons satisfy the full set of isolation and identification cuts used to define our samples. To avoid background events we require that the invariant mass of the dimuon pair lies in a tight window around the $\ensuremath{Z}$ boson mass (76~$\ensuremath{\GeV\!/c^2} < M_{\mu\mu} <$~106~$\ensuremath{\GeV\!/c^2}$) and that the event has not been identified as a cosmic ray by our tagging algorithm. 
In this sample we know that at least one of the two muons in the event satisfied the muon trigger path requirements and can make a measurement of the muon trigger efficiency based on the fraction of events in which both muons meet the criteria of our trigger path. If we define $\epsilon_{\mathrm{trg}}$ as the single muon trigger efficiency we want to measure, then $(\epsilon_{\mathrm{trg}})^{2}$ is the fraction of events containing two triggered muons, and $2(\epsilon_{\mathrm{trg}})(1-\epsilon_{\mathrm{trg}})$ is the fraction of events with only one triggered muon. There is also a remaining fraction of events $(1-\epsilon_{\mathrm{trg}})^{2}$ which contain no triggered muons, but these events do not make it into our $Z \rightarrow \mu \mu$ candidate sample. Based on these definitions, the number of candidate events in our sample in which both muons meet the trigger criteria, $N_{2\mathrm{trg}}$ divided by the total number of events in the sample, $N_{\mathrm{tot}}$, can be expressed with Eq.~\ref{eq:mutrg1}. From this expression we obtain the formula shown in Eq.~\ref{eq:mutrg2} which gives the muon trigger efficiency as a function of this fraction $F$. \begin{equation} F = \frac{N_{2\mathrm{trg}}}{N_{\mathrm{tot}}} = \frac{(\epsilon_{\mathrm{trg}})^{2}}{(\epsilon_{\mathrm{trg}})^{2} + 2(\epsilon_{\mathrm{trg}})(1-\epsilon_{\mathrm{trg}})} \label{eq:mutrg1} \end{equation} \begin{equation} \epsilon_{\mathrm{trg}} = \frac{2 \cdot F}{1 + F} \label{eq:mutrg2} \end{equation} To check whether an individual muon in our candidate sample satisfies the requirements of our muon trigger path, we first look at the hits on the reconstructed muon stub to determine the position of the muon with respect to the 144 Level~1 muon trigger towers (2.5 degrees each in $\phi$) defined in the hardware. We then check to see if the trigger bits corresponding to each individual requirement of our trigger path are set for the matched trigger tower. 
The Level~1 requirements of our trigger path include both a high $\ensuremath{p_{T}}$ COT track identified in the fast tracking hardware and a sufficient set of matching hits in the muon detector wire chamber(s) along the path of the reconstructed muon. Matching CSX scintillator hits are additionally required in the region of the muon detector between 0.6 and 1.0 in $\eta_{\mathrm{det}}$ (CMX region). No significant additional trigger requirements are made at Level~2 for muon candidates. In order to measure the efficiency of the muon reconstruction algorithms at Level~3, we use the subset of events in the $Z \rightarrow \mu \mu$ candidate sample described above in which both muons are found to satisfy the Level~1 trigger criteria. This restriction is made to ensure that we do not double-count correlated inefficiencies between the different trigger levels. In addition, we require that one of the two muons is found in the region of the muon detector between 0.0 and 0.6 in $\eta_{\mathrm{det}}$ (CMUP region) while the other is found in the region between 0.6 and 1.0 in $\eta_{\mathrm{det}}$ (CMX region). Since different Level~3 muon reconstruction algorithms are run in these two regions, it is simple to check if both or only one of the muons in these events satisfy the Level~3 requirements of our muon trigger path. The input parameters to our muon trigger path efficiency calculations are shown in Table~\ref{tb:mutrgeff} along with the final results of these calculations. 
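The inversion in Eqs.~\ref{eq:mutrg1} and~\ref{eq:mutrg2} can be sketched directly, using the event counts of Table~\ref{tb:mutrgeff} as inputs:

```python
# Single-muon trigger efficiency from Z -> mu mu events in which both legs
# pass the full selection, following Eqs. (mutrg1)-(mutrg2):
#   F = N_2trg / N_tot,   eps_trg = 2F / (1 + F).
# Event counts are taken from Table tb:mutrgeff.

def single_muon_trigger_eff(n_2trg, n_tot):
    """Invert F = eps^2 / (eps^2 + 2*eps*(1 - eps)) for eps."""
    f = n_2trg / n_tot
    return 2.0 * f / (1.0 + f)

eps_l1 = single_muon_trigger_eff(293, 338)   # Level 1
eps_l3 = single_muon_trigger_eff(137, 138)   # Level 3
print(round(eps_l1, 3), round(eps_l3, 3), round(eps_l1 * eps_l3, 3))
# 0.929 0.996 0.925, matching the table entries
```

Because events with no triggered muon never enter the candidate sample, normalizing to the one-or-two-trigger population is what produces the $2F/(1+F)$ form rather than a simple square root of $F$.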
\begin{table*}[t] \caption{Efficiencies for high $\ensuremath{p_{T}}$ muon trigger path.} \begin{tabular}{l c c c} \hline \hline Trigger Level & Number of $Z \rightarrow \mu \mu$ & Number of Events & Efficiency \\ & Candidate Events & with 2 Muon Triggers & \\ \hline Level~1 & 338 & 293 & 0.929 $\pm$ 0.011 \\ Level~3 & 138 & 137 & 0.996 $\pm$ 0.004 \\ Combined & - & - & 0.925 $\pm$ 0.011 \\ \hline \hline \end{tabular} \label{tb:mutrgeff} \end{table*} \subsection{Cosmic Tagger Efficiency} The tagging algorithm used to remove cosmic ray events in our $\wmnu$ and $Z \rightarrow \mu \mu$ candidate samples is discussed in Sec.~\ref{sec:cosmicwbkg}. We measure the fraction of real events tagged as cosmic rays by this algorithm for both candidate samples using the corresponding electron decay mode samples. The tagging algorithm is based solely on the hit timing information associated with reconstructed tracks in the COT. Since the kinematics of $\ensuremath{W}$ and $\ensuremath{Z}$ boson decays into electrons and muons are nearly identical, we expect that the reconstructed electron tracks in $\wenu$ and $Z \rightarrow e e$ candidate events are a good model for the muon tracks in the corresponding decay channels. Unlike the muon channels, however, the electron decay mode candidate samples have a negligible cosmic background. Therefore, we obtain a measurement of the fraction of real $\wmnu$ and $Z \rightarrow \mu \mu$ signal events tagged as cosmic ray candidates directly from the observed fraction of events in the corresponding electron channels which our algorithm identifies as cosmic ray candidates. In order to make the tracks in the electron events match as closely as possible with those in the muon events, we first apply the muon track impact parameter cut described in Sec.~\ref{sec:evsel} to each of the electron candidate tracks in these samples. This additional requirement reduces the number of events in the $\wenu$ candidate sample to 37,070. 
Of the remaining events, only five are tagged as cosmic ray candidates by our modified version of the cosmic tagging algorithm. The resulting efficiency for a $\ensuremath{W}$ boson decay not to be tagged as a cosmic by our algorithm is $\epsilon^{\ensuremath{W}}_{\mathrm{cos}} =$~0.9999~$\pm$~0.0001. Applying the track impact parameter cut to the $Z \rightarrow e e$ sample reduces the total number of candidate events to 1,680. Of these events, only one is tagged as a cosmic by our modified tagging algorithm. The resulting efficiency for a $\ensuremath{Z}$ boson decay not to be tagged as a cosmic by our algorithm is $\epsilon^{\ensuremath{Z}}_{\mathrm{cos}} =$~0.9994~$\pm$~0.0006. \subsection{Over-Efficiency of $\ensuremath{Z}$-Rejection Criteria} The criteria for rejecting $Z \rightarrow \mu \mu$ events in our $\wmnu$ candidate sample are defined in Sec.~\ref{sec:evsel}. A small fraction of real signal events are also removed from our candidate sample via these selection criteria. We measure the efficiency for signal events to survive the $\ensuremath{Z}$-rejection cuts directly from simulation. The resulting value, 0.9961~$\pm$~0.0001, is determined by the number of $\wmnu$ candidate events in our simulated sample that exclusively fail the $\ensuremath{Z}$-rejection criteria. The systematic uncertainty on this efficiency is based on a comparison of the shape of the invariant mass spectrum for the muon plus track candidate events rejected solely due to these criteria to the shape of the same spectrum obtained from $\gamma^{*}/Z \rightarrow \mu \mu$ simulated events. A comparison of the ratio of rejected events inside and outside the $\ensuremath{Z}$-mass window (66~$\ensuremath{\GeV\!/c^2} < M_{\mu\mu} <$~116~$\ensuremath{\GeV\!/c^2}$) to that found in the $\gamma^{*}/Z$ simulation sample provides a good measure of whether our rejected events are a relatively pure sample of $\gamma^{*}/Z$ decays.
Based on this approach, we measure an additional systematic uncertainty of $\pm$~0.17~$\!\%$ to apply to the $\ensuremath{Z}$-rejection efficiency value obtained from simulation. The final result is $\epsilon_{\mathrm{z-rej}} =$~0.9961~$\pm$~0.0017. \section{Backgrounds} \label{sec:backg} Other physics processes can produce events that mimic the signature of $\wlnu$ and $Z \rightarrow \ell \ell$ events in our detector. Some processes have similar final-state event topologies to those of our signal samples, and others can fake similar topologies if a non-lepton object within the event is misidentified as an electron or muon. In this section, the sources of backgrounds to $\ensuremath{W}$ and $\ensuremath{Z}$ events are discussed. We separate the background sources into three main categories: events in which hadronic jets fake leptons; events from other electroweak processes; and events from non-collision cosmic ray backgrounds. The techniques used to estimate the contribution to our candidate samples from each background source are given in this section along with the final estimates. \subsection{Hadron Jet Background in $\wlnu$} \label{sec:qcdwbkg} Extracting the contribution of events to the $\wlnu$ candidate samples in which real or fake leptons from hadronic jets are reconstructed in the detector is one of the more challenging components of our measurements. Real leptons are produced both in the semileptonic decay of hadrons and by photon conversions in the detector material. Some events also contain other particles in hadronic jets which are misidentified and reconstructed as leptons. Typically, these types of events will not be accepted into our $\ensuremath{W}$ candidate samples because we require large event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$. In a small fraction of these events, however, a significant energy mismeasurement does reproduce the $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ signature of our samples.
Because of the large total cross section for hadronic jets in our detector, even this small fraction results in a substantial number of background events in our $\ensuremath{W}$ candidate samples. These events are particularly difficult to model in the simulation since the associated energy mismeasurement makes them unrepresentative of typical hadronic events. In order to estimate the background contribution of these sources to our samples, we relax the selection criteria on lepton isolation and event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ and use events with low lepton isolation and low $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ as a model of the background in the low lepton isolation and high $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ $\ensuremath{W}$ signal region. The contributions in the low and high $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ regions are normalized to the number of events in those regions with high lepton isolation based on the assumption that there is no correlation between lepton isolation and $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ in the hadronic background. \begin{figure}[t] \begin{center} \includegraphics[width=3.5in]{figures/isovsmet_ele.eps} \end{center} \caption{$\ensuremath{E_{T}}^{\mathrm{iso}}/\ensuremath{E_{T}}$ versus event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ for $\wenu$ candidates (no cuts on the lepton isolation fraction variable or the event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$).
The definitions of regions A, B, and C which are used in the calculation of the hadronic background are provided in the text.} \label{fig:IsoVsMet} \end{figure} Fig.~\ref{fig:IsoVsMet} shows the lepton isolation fraction variable plotted against event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ for $\wenu$ candidates (no cuts on lepton isolation fraction or event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$). In the lepton isolation fraction versus $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ parameter space, we define four regions as follows: \begin{itemize} \item{Region A: $\ensuremath{E_{T}}^{\mathrm{iso}}/\ensuremath{E_{T}} <$ 0.1 and $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} <$ 10 $\ensuremath{\mathrm{Ge\kern -0.1em V}}$} \item{Region B: $\ensuremath{E_{T}}^{\mathrm{iso}}/\ensuremath{E_{T}} >$ 0.3 and $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} <$ 10 $\ensuremath{\mathrm{Ge\kern -0.1em V}}$} \item{Region C: $\ensuremath{E_{T}}^{\mathrm{iso}}/\ensuremath{E_{T}} >$ 0.3 and $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} >$ 25 $\ensuremath{\mathrm{Ge\kern -0.1em V}}$ (20 $\ensuremath{\mathrm{Ge\kern -0.1em V}}$ for $\wmnu$)} \item{Region W: $\ensuremath{E_{T}}^{\mathrm{iso}}/\ensuremath{E_{T}} <$ 0.1 and $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} >$ 25 $\ensuremath{\mathrm{Ge\kern -0.1em V}}$ (20 $\ensuremath{\mathrm{Ge\kern -0.1em V}}$ for $\wmnu$)} \end{itemize} Region W is the $\wlnu$ signal region and the others contain mostly hadronic background events. 
The background contribution to the $\ensuremath{W}$ signal region, $N_W^{\mathrm{bck}}$, is estimated using \begin{eqnarray} \frac{N_W^{\mathrm{bck}}}{N_{\mathrm{evt}}^{\mathrm{C}}} = \frac{N_{\mathrm{evt}}^{\mathrm{A}}}{N_{\mathrm{evt}}^{\mathrm{B}}}, \label{eqn:back} \end{eqnarray} \noindent where $N_{\mathrm{evt}}^{\mathrm{A}}$, $N_{\mathrm{evt}}^{\mathrm{B}}$, $N_{\mathrm{evt}}^{\mathrm{C}}$ are the number of events in regions A, B and C, respectively, as defined above. This technique has been previously described in~\cite{det:dettop,int:cdf_ratio2} and more recently in~\cite{back:qcdrun2}. A simple approach would be to assume that all of the events in regions A, B, and C are hadronic background events. In that case, the observed number of data events in each region can be used directly in Eq.~\ref{eqn:back} to extract the hadronic background contribution to the $\ensuremath{W}$ signal region. We further improve our estimate, however, by accounting for the fact that these regions contain small fractions of signal events and events from other electroweak background processes such as $Z \rightarrow \ell \ell$ and $\wtnu$ in addition to hadronic background events. Fig.~\ref{fig:isometmc} shows distributions of lepton isolation fraction versus $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ for simulated events passing the full set of selection criteria (no cuts on lepton isolation fraction or $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$) for the $\wenu$ signal, $Z \rightarrow e e$ background, and $\wtnu$ background samples. From these distributions, and the equivalent ones for $\wmnu$ candidates, we obtain modeled event fractions in regions A, B, and C relative to the signal region for the signal and other electroweak background processes. 
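The arithmetic of Eq.~\ref{eqn:back} can be illustrated with a short numerical sketch. The helper below is hypothetical (not part of the analysis code); the inputs are the uncorrected $\wenu$ event counts for regions A, B, and C quoted in Table~\ref{tab:qcdwback}.

```python
def abcd_background(n_a, n_b, n_c):
    # Eq. (back): N_W_bck / N_C = N_A / N_B, solved for N_W_bck.
    return n_a * n_c / n_b

# Uncorrected W -> e nu counts in regions A, B, and C.
print(round(abcd_background(30023, 5974, 228)))  # -> 1146 events
```

This reproduces the uncorrected electron-channel estimate before the signal and electroweak contamination corrections are applied.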
Based on these fractions and our estimates for the relative contributions of $\wlnu$, $Z \rightarrow \ell \ell$, and $\wtnu$ in the signal region (see Sec.~\ref{sec:ewkwbkg}), we correct the observed numbers of events in regions A, B, and C to remove the contributions from non-hadronic backgrounds. A more accurate estimate of the hadronic background in the $\ensuremath{W}$ signal region is then obtained from Eq.~\ref{eqn:back} using these corrected inputs. Table~\ref{tab:qcdwback} summarizes both the corrected and uncorrected hadronic background estimates for the $\ensuremath{W}$ signal region obtained from Eq.~\ref{eqn:back} for the $\wenu$ and $\wmnu$ decay channels. \begin{figure}[t!] \includegraphics[width=3.5in]{figures/isovsmet_wenumc.eps} \includegraphics[width=3.5in]{figures/isovsmet_wtaunumc.eps} \includegraphics[width=3.5in]{figures/isovsmet_zeemc.eps} \caption{$\ensuremath{E_{T}}^{\mathrm{iso}}/\ensuremath{E_{T}}$ versus event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ for the simulated $\wenu$ signal, $\wtnu$ background, and $Z \rightarrow e e$ background samples. We correct the observed number of data events in regions A, B, C to account for events from these processes when estimating the hadronic background in the $\wlnu$ candidate samples.} \label{fig:isometmc} \end{figure} \begin{table*}[t] \caption{Summary of hadronic background event contribution estimates to the $\wenu$ and $\wmnu$ candidate samples. 
The statistical and systematic uncertainties are indicated.} \begin{tabular}{l c c c c} \hline \hline & Uncorrected & Corrected & Uncorrected & Corrected \\ & $\wenu$ & $\wenu$ & $\wmnu$ & $\wmnu$ \\ \hline Region A & 30023 & 26655 & 3926 & 3575 \\ Region B & 5974 & 5972 & 5618 & 5615 \\ Region C & 228 & 131 & 496 & 345 \\ Region W & 37584 & 37584 & 31722 & 31722 \\ \hline Hadronic Background & 1146 & 587 & 346 & 220 \\ Statistical Error & 78 & 52 & 17 & 13 \\ Systematic Error & - & 294 & - & 110 \\ Background Fraction & 3.0 $\pm$ 0.2$\!\%$ & 1.6 $\pm$ 0.8$\!\%$ & 1.1 $\pm$ 0.1$\!\%$ & 0.7 $\pm$ 0.4$\!\%$ \\ \hline \hline \end{tabular} \label{tab:qcdwback} \end{table*} Since the lower limit on lepton isolation fraction and upper limit on event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ used to define regions A, B, and C are arbitrary choices, we check the robustness of our technique for obtaining the hadronic background estimates by raising and lowering the cuts used to define these regions. We take observed changes in the estimated hadronic backgrounds as a systematic uncertainty on our measurement technique. Fig.~\ref{fig:backdep} shows the dependence of the estimated hadronic background contribution to the signal region as a function of the lepton isolation fraction and event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ values used to define the non-signal regions both before and after correcting the number of observed events in regions A, B, C for $\wenu$ signal and other background processes. We observe similar dependencies using the $\wmnu$ candidate sample. The background estimate is mostly independent of the selection of the lower $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ border for regions A and B but does depend on the location of the upper lepton isolation fraction border for regions B and C. 
Although we observe some evidence from simulated event samples that the observed fluctuations are a feature of the hadronic background, we choose to use a conservative systematic uncertainty that covers the full range of the fluctuations seen in Fig.~\ref{fig:backdep}. We estimate the range of the observed fluctuations to be within 50~$\!\%$ of our central values, corresponding to uncertainty estimates of $\pm$~294 events in the $\wenu$ candidate event sample and $\pm$~110 events in the $\wmnu$ candidate sample (see Table~\ref{tab:qcdwback}). \begin{figure}[t!] \includegraphics[width=3.5in]{figures/bkg_dep_met_ele.eps} \includegraphics[width=3.5in]{figures/bkg_dep_iso_ele.eps} \caption{Dependence of the hadronic background estimate on the $\ensuremath{E_{T}}^{\mathrm{iso}}/\ensuremath{E_{T}}$ and event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ cut values used to define the control regions for $\wenu$. Results both before (triangles) and after (circles) applying corrections for signal and electroweak background contributions to regions A, B, and C are shown.} \label{fig:backdep} \end{figure} We make an independent cross-check of the estimated hadronic background in $\wenu$ events by applying a measured rate for jets faking electrons to a generic hadronic jet sample. The rate for jets faking electrons is measured from events with at least two jets with $\ensuremath{E_{T}} >$ 15~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$, $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} <$ 15~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$, and no more than one loose electron. These requirements ensure that the input sample has a negligible contribution from real $\ensuremath{W}$ and $\ensuremath{Z}$ events. From this sample, the jet fake rate is defined as the fraction of reconstructed jets with $\ensuremath{E_{T}} >$ 30~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ that are also found to pass the standard set of tight electron cuts.
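The fake-rate cross-check amounts to weighting jets in a generic sample by the measured fake rate and integrating the resulting missing transverse energy distribution above the candidate-sample cut. A minimal sketch follows; the constant fake rate and toy event list are hypothetical stand-ins for the measured $\ensuremath{E_{T}}$-dependent parameterization.

```python
def fake_rate(et_scaled):
    # Hypothetical flat rate; the measurement fits an E_T dependence.
    return 1.0e-4

def weighted_background(jets, et_min=25.0, met_cut=25.0):
    """Sum fake-rate weights over jets whose scaled E_T passes the
    electron threshold, in events with missing E_T above met_cut."""
    return sum(fake_rate(et) for et, met in jets
               if et > et_min and met > met_cut)

# Toy sample of (scaled jet E_T, event missing E_T) pairs.
toy = [(30.0, 30.0), (30.0, 10.0), (20.0, 40.0)]
print(weighted_background(toy))  # only the first jet contributes
```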
We use the $\ensuremath{E_{T}}$ dependence of the jet fake rate in the background estimate. Because of differences in the clustering algorithms used for electrons and jets, the reconstructed energies of electrons originating from hadronic jets are smaller than the reconstructed energies of the jets. Scale factors are applied to convert the measured jet energies into corresponding electron cluster energies, and as a consequence the lowest $\ensuremath{E_{T}}$ bins are not included in the fitted constant for the jet fake rate. A significant uncertainty on the final background estimate is assigned, however, based on the results obtained using different models for fitting the $\ensuremath{E_{T}}$ dependence of the jet fake rate. The measured fake rate is applied to jets in an inclusive jet data sample to determine how often these types of hadronic events with fake electrons satisfy the additional selection criteria of our $\wenu$ candidate sample. Jets in the inclusive sample are required to have $\ensuremath{E_{T}}^{\mathrm{scaled}} >$ 25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ where $\ensuremath{E_{T}}^{\mathrm{scaled}}$ is the jet $\ensuremath{E_{T}}$ scaled down to the $\ensuremath{E_{T}}$ of the fake electron to match the electron selection criteria of our sample. The distribution in Fig.~\ref{fig::jetfakerate} is the resulting $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ distribution for the inclusive jet sample weighted by the jet fake rate. The events above the candidate sample $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ cut of 25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ are integrated giving 800~$\pm$~300 background events, consistent with the result obtained using our default technique. \begin{figure}[t] \begin{center} \includegraphics[width=3.5in]{figures/prd_wfakeratemet.eps} \end{center} \caption{$\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ distribution for inclusive jet sample weighted by measured jet fake rate. 
The arrow indicates the location of the selection cut on $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ used to select $\wenu$ candidate events.} \label{fig::jetfakerate} \end{figure} \subsection{Hadron Jet Background in $\gamma^{*}/Z \rightarrow \ell \ell$} \label{sec:qcdzbkg} Our $Z \rightarrow \ell \ell$ candidate samples have smaller overall contributions from background sources than the $\wlnu$ samples. One common background source is events in which one or both leptons are either real or fake leptons from hadronic jets. We expect that the two leptons in these types of events have no charge correlation so that the numbers of opposite-sign and same-sign lepton pairs from this source are roughly equal. Based on this assumption, we use the number of same-sign lepton pair candidates to place an upper limit on the number of hadronic background events in our opposite-sign dilepton pair candidate samples. This approach is only viable for events with two central leptons where the lepton charge is taken from the reconstructed track. As discussed later in this section, the background contribution to $Z \rightarrow e e$ events with one central electron and one plug electron is measured using a variation of the jet fake rate method described previously. Since the calorimeter energy associated with muon candidates is required to be consistent with a minimum-ionizing particle, the probability for a hadronic jet to fake a muon is significantly smaller than that for an electron. Despite the fact that we make no opposite-sign charge requirement on the two muon legs in our $Z \rightarrow \mu \mu$ candidate events, none of the 1,785 events in this sample are observed to contain a same-sign muon pair. Based on finding no such events, we estimate a background contribution of 0.0~$\!^{+1.1}_{-0.0}$ events from muons produced in hadronic jets. 
The number of same-sign events observed in the $Z \rightarrow e e$ candidate sample needs to be corrected for the fraction of real $Z \rightarrow e e$ events that are reconstructed as same-sign electron pairs. We observe a total of 22 events with same-sign electron pairs corresponding to our sample of 1730 $Z \rightarrow e e$ candidate events with two central electrons. The invariant mass distributions for both the opposite-sign and same-sign electron pairs in our candidate sample are shown in Fig.~\ref{fig:etfit} and Fig.~\ref{fig:invmass}. Both distributions show a peak in the $\ensuremath{Z}$ boson mass window indicating that at least some fraction of the same-sign electron pairs are produced in $\ensuremath{Z}$ decays. These events result from decays in which one of the electrons radiates a high $\ensuremath{E_{T}}$ photon which subsequently converts in the detector material producing an electron-positron pair. We call this type of event a ``trident'' event. If the track associated with the positron from the photon conversion is matched to the corresponding electron cluster, both electrons in the event will be assigned the same charge. We remove the contribution of real $Z \rightarrow e e$ events from the number of observed same-sign electron pairs by subtracting the observed number of opposite-sign events in the data scaled by the fraction of same-sign to opposite-sign candidates in our simulated samples. The remaining number of same-sign electron pair candidate events is then used to estimate the background contribution from electrons produced in hadronic jets to the opposite-sign candidate sample. Using this technique, we estimate 20.4 same-sign events from $\ensuremath{Z}$ decays in the invariant mass window between 66~$\ensuremath{\GeV\!/c^2}$ and 116~$\ensuremath{\GeV\!/c^2}$.
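Numerically, the same-sign subtraction reduces to removing the predicted trident contribution from the observed same-sign count. The sketch below uses the numbers quoted above; the same-sign to opposite-sign fraction is the illustrative value implied by the 20.4-event prediction, not a separately measured quantity.

```python
def ss_hadronic_background(n_ss_obs, n_os_obs, f_ss_over_os):
    # Observed same-sign pairs minus the predicted same-sign
    # ("trident") contribution from real Z -> ee decays.
    return n_ss_obs - f_ss_over_os * n_os_obs

# Fraction implied by the 20.4 predicted trident events
# out of 1730 opposite-sign candidates (illustrative).
f = 20.4 / 1730
print(round(ss_hadronic_background(22, 1730, f), 1))  # -> 1.6
```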
Subtracting this estimate from the total number of observed same-sign events (22), we estimate the contribution from electrons originating from hadronic jets to be 1.6~$\!^{+4.7}_{-1.6}$ where the uncertainty is based solely on the statistics of our sample. \begin{figure}[t!] \begin{center} \includegraphics[width=3.4in,height=3.in]{figures/Z_InvMassSS_nocuts.eps} \caption{Reconstructed invariant mass of two central electrons in $Z \rightarrow e e$ candidate events in data (points) and simulation (histogram). This distribution is for events in which the electrons are reconstructed with the same charge. The distribution for events with two electrons of opposite sign is shown in Fig.~\ref{fig:etfit}. The number of events in the simulated distributions is normalized so that the number of opposite-sign events in the simulated sample is equal to the number of opposite-sign events in the data. The arrows indicate the location of the invariant mass cuts used to select our candidate samples.} \label{fig:invmass} \end{center} \end{figure} The dominant source of systematic uncertainty on the background contribution from events with electrons originating from hadronic jets comes from the detector material model used in the simulation. The probability for an electron to radiate a bremsstrahlung photon prior to entering the calorimeter is strongly dependent on the amount of material in the tracking volume. We study the effect of the material model using the two previously described samples of simulated events generated with $\pm$~1.5~$\!\%$ of a radiation length of copper added in a cylinder between the silicon and COT tracking detectors. We estimate the systematic uncertainty based on differences in the number of same-sign events observed after subtracting the predicted number of real $Z \rightarrow e e$ events based on the default and modified simulations.
The resulting systematic uncertainty on our estimate is 5.2 events, which, when added in quadrature with the statistical error, results in a final background estimate of 1.6~$\!^{+7.0}_{-1.6}$. The technique outlined above cannot be used to estimate the background contribution from electrons originating from hadronic jets in $Z \rightarrow e e$ candidates with one central and one plug electron owing to the undetermined charge of the plug candidate. Instead, we estimate the background contamination based on a variation of the previously described method using measured jet fake rates. In order to measure the background contribution to the combined $Z \rightarrow e e$ sample from hadronic events producing two fake electrons, we need to measure the jet fake rates for tight central, loose central, and plug electrons. We remove $\ensuremath{W}$ and $\ensuremath{Z}$ boson candidates from the inclusive jet sample used to make the fake rate measurements by selecting events with no more than one loose electron and $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} <$ 15~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$. Based on this inclusive sample, the jet fake rates are defined as the fractions of central jets reconstructed as either tight or loose electrons and plug jets reconstructed as plug electrons. The measured jet fake rates for reconstructed tight central and plug electrons as a function of jet $\ensuremath{E_{T}}$ are shown in Fig.~\ref{fig::fakerates}. \begin{figure}[t] \begin{center} \includegraphics*[width=3.5in]{figures/prd_fakerates_central.eps} \includegraphics*[width=3.5in]{figures/prd_fakerates_plug.eps} \end{center} \caption{Measured tight central and plug electron jet fake rates as a function of jet $\ensuremath{E_{T}}$.} \label{fig::fakerates} \end{figure} As previously mentioned, the reconstructed energy of the electrons produced by hadronic jets is smaller than the reconstructed energy of the jets themselves.
To account for these differences, we fit the distributions of $\ensuremath{E_{T}}^{\mathrm{ele}}/\ensuremath{E_{T}}^{\mathrm{jet}}$ to a Gaussian for the jets reconstructed as tight central, loose central, and plug electrons. The means of the fits are used as scaling factors to convert raw jet energies into scaled electron energies, $\ensuremath{E_{T}}^{\mathrm{scaled}}$. To obtain the background contribution of events with two electrons originating from hadronic jets to the $Z \rightarrow e e$ sample, we apply the measured jet fake rates and energy scalings to a generic multi-jet data sample. Events containing either two central jets with $\ensuremath{E_{T}}^{\mathrm{scaled}} >$~25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ or one central jet and a plug jet with $\ensuremath{E_{T}}^{\mathrm{scaled}} >$~20~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ are used to extract dijet invariant mass distributions to model the hadronic background for $Z \rightarrow e e$. The weight assigned to each event in these distributions is set equal to the product of the jet fake rates for the two jets based on the parameterizations shown in Fig.~\ref{fig::fakerates}. The final weighted dijet invariant mass distributions for central-central and central-plug events are shown in Fig.~\ref{fig::mjjE}. The resulting distributions are integrated over the invariant mass window of our measurements (66~$\ensuremath{\GeV\!/c^2} < M_{ee} <$~116~$\ensuremath{\GeV\!/c^2}$) to obtain an estimate for the number of background events in the $Z \rightarrow e e$ candidate sample (after scaling upward by the trigger prescale used to collect events in the generic multi-jet sample). \begin{figure}[hbt] \begin{center} \includegraphics*[width=3.5in]{figures/prd_dijetmass.eps} \caption{Di-jet invariant mass distributions for central-central and central-plug events in generic multi-jet data. Events are weighted by the product of the measured jet fake rates for each jet.
The scaled energies of both jets must pass the electron $\ensuremath{E_{T}}$ requirements of our $Z \rightarrow e e$ candidate sample (25~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ for central electrons and 20~$\ensuremath{\mathrm{Ge\kern -0.1em V}}$ for plug electrons).} \label{fig::mjjE} \end{center} \end{figure} As illustrated in Fig.~\ref{fig::fakerates}, the jet fake rates measured as a function of jet $\ensuremath{E_{T}}$ need to be assigned an additional uncertainty based on the assumed shape of the fit. We fit the jet fake rate distributions using several different functional forms and assign an additional systematic uncertainty of 30~$\!\%$ based on the resulting spread in background estimates. Based on this technique the measured background contribution of events with two electrons originating from hadronic jets to the $Z \rightarrow e e$ candidate sample is 2.4~$\pm$~1.0 central-central and 39~$\pm$~17 central-plug events. The estimated number of central-central events is in good agreement with the result obtained using the observed number of same-sign events in our candidate sample. Using the central-central background estimate based on same-sign events and the central-plug estimate based on the jet fake rate method, we obtain a combined estimate for the background contribution of events with two electrons originating from hadronic jets of 41~$\pm$~18 events. \subsection{Electroweak Backgrounds in $\wlnu$} \label{sec:ewkwbkg} $Z \rightarrow \ell \ell$ events mimic the signature of $\wlnu$ events in cases where one of the two leptons passes through an uninstrumented region of the detector creating an imbalance in the observed event $\ensuremath{E_{T}}$. The $\wlnu$ signature can also be reproduced by $\wtnu$ events in which the $\tau$ lepton subsequently decays into an electron or muon. Background contributions from both diboson and $t \bar{t}$ production processes are negligibly small. 
The contribution of these electroweak background sources to our $\wlnu$ candidate samples are obtained from simulation. The $\gamma^{*}/Z \rightarrow \ell \ell$ and $\wtnu$ simulated event samples are obtained from the equivalent {\sc pythia} event generation and detector simulation used to produce the signal samples (see Sec.~\ref{sec:acc}). The complete set of $\wlnu$ selection criteria as described in Sec.~\ref{sec:evsel} are applied to the simulated events in these samples to obtain the fraction of events from each process that satisfy the criteria of our candidate samples. Then, based on Standard Model predictions for the relative production rates of our signal process and the two background processes, we use the estimated acceptances from simulation to obtain the relative contributions of each process to our candidate samples. The Standard Model predicts equivalent production cross sections for $\wenu$, $\wmnu$ and $\wtnu$, while the $Z \rightarrow \ell \ell$ production cross sections are related to the corresponding $\wlnu$ cross sections via the ratio $R$ defined in Eq.~\ref{eq:rdef}. In order to extract the relative contributions of $\gamma^{*}/Z \rightarrow \ell \ell$ events to our $\wlnu$ candidate samples, an input value for $R$ is required. We choose to use the value $R =$~10.67~$\pm$~0.15 for $\ensuremath{W}$ and $\ensuremath{Z}$ boson production at $\sqrt{s} =$~1.96~$\ensuremath{\mathrm{Te\kern -0.1em V}}$ obtained from the NNLO theoretical calculation ~\cite{int:nnlo4,int:nnlo1,int:nnlo2,int:nnlo3}. However, to be conservative we inflate the uncertainty on the predicted value for $R$ based on the CDF Run~I measured value of $R =$~10.90~$\pm$~0.43~\cite{int:cdf_ratio1,int:cdf_ratio2}. The difference in the values of $R$ at $\sqrt{s} =$~1.80~$\ensuremath{\mathrm{Te\kern -0.1em V}}$ and 1.96~$\ensuremath{\mathrm{Te\kern -0.1em V}}$ is expected to be negligible. 
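Under the stated assumptions (equal $\ensuremath{W}$ production cross sections for the three lepton flavors and $\sigma_Z = \sigma_W / R$), the relative contributions reduce to acceptance-weighted fractions of the candidate sample. The sketch below is schematic; the acceptance values in the example are hypothetical placeholders, not the measured ones.

```python
def ewk_fractions(n_obs, acc_w, acc_tau, acc_z, r=10.67):
    """Split n_obs (candidates after non-electroweak background
    subtraction) among W->l nu, W->tau nu, and Z->ll, assuming equal
    W cross sections and sigma_Z = sigma_W / R."""
    weights = {"Wlnu": acc_w, "Wtaunu": acc_tau, "Zll": acc_z / r}
    norm = sum(weights.values())
    return {proc: n_obs * w / norm for proc, w in weights.items()}

# Hypothetical acceptances, for illustration only.
parts = ewk_fractions(30000, acc_w=0.20, acc_tau=0.004, acc_z=0.015)
```

The fractions sum to the observed total by construction, so the normalization step described in the text is automatic.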
The Run~I measured value is in good agreement with this theoretical prediction. To account for the current level of experimental uncertainty, we add an additional 3.9~$\!\%$ systematic uncertainty to the NNLO prediction resulting in a value of $R =$~10.67~$\pm$~0.45. \begin{table}[t] \caption{Estimated $\wlnu$ backgrounds from other electroweak production processes.} \begin{center} \begin{tabular}{l c c} \hline \hline Source & $\wenu$ & $\wmnu$ \\ & Background & Background \\ \hline $Z \rightarrow \ell \ell$ & 426~$\pm$~19 & 2229~$\pm$~96 \\ $\wtnu$ & 749~$\pm$~17 & 988~$\pm$~24 \\ \hline \hline \end{tabular} \label{tab:ewkwback} \end{center} \end{table} The relative contributions of $\wlnu$, $Z \rightarrow \ell \ell$, and $\wtnu$ in our $\wlnu$ candidate samples are estimated based on the above value for $R$ and the simulated acceptances for each process. The relative acceptances are normalized to the total number of events in each candidate sample after subtracting contributions from non-electroweak backgrounds (events with reconstructed leptons originating from hadronic jets and cosmic rays). The final background estimates for electroweak backgrounds in the $\wlnu$ candidate samples are summarized in Table~\ref{tab:ewkwback}. \subsection{Electroweak Backgrounds in $\gamma^{*}/Z \rightarrow \ell \ell$} \label{sec:ewkzbkg} Several electroweak processes also contribute background events to our $Z \rightarrow \ell \ell$ candidate samples. $Z \rightarrow \tau \tau$ events mimic the $Z \rightarrow \ell \ell$ event signature when both $\tau$ leptons decay into or are reconstructed as an electron or muon pair with a reconstructed invariant mass within the mass window of our $Z \rightarrow \ell \ell$ measurements.
As in the previous section, this background is estimated using a simulated $Z \rightarrow \tau \tau$ event sample obtained from the equivalent {\sc pythia} event generation and detector simulation used to produce the $Z \rightarrow \ell \ell$ signal samples. The full set of $Z \rightarrow \ell \ell$ selection criteria is applied to the simulated $Z \rightarrow \tau \tau$ and $Z \rightarrow \ell \ell$ samples to determine the relative acceptances. Based on the Standard Model prediction of equivalent production cross sections for $Z \rightarrow e e$, $Z \rightarrow \mu \mu$, and $Z \rightarrow \tau \tau$, the number of $Z \rightarrow \tau \tau$ background events in each candidate sample is extracted using the relative acceptances from the total number of events after removing non-electroweak background contributions. A comparison of the reconstructed invariant mass distributions for simulated $\gamma^{*}/Z \rightarrow e e$ and $\gamma^{*}/Z \rightarrow \tau \tau$ events passing the $Z \rightarrow e e$ selection criteria is shown in Fig.~\ref{fig:invmassbackg}. The majority of $\gamma^{*}/Z \rightarrow \tau \tau$ events are observed to have a reconstructed invariant mass below the mass window used in our measurements. As a result, the contribution of this background source to our candidate samples is small, 3.7~$\pm$~0.4 events in the $Z \rightarrow e e$ sample and 1.5~$\pm$~0.3 events in the $Z \rightarrow \mu \mu$ sample. An identical approach is used to estimate $Z \rightarrow \ell \ell$ background contributions from both top quark and diboson production. The estimated background contribution from each of these sources is found in all cases to be less than one event and is therefore considered negligible.
\begin{figure}[t] \begin{center} \includegraphics[width=3.5in]{figures/Z_e_tau_invmass_allcuts_overlay.eps} \caption{Reconstructed invariant mass distribution for simulated $\gamma^{*}/Z \rightarrow e e$ (open histogram) and $\gamma^{*}/Z \rightarrow \tau \tau$ (solid histogram) events satisfying the $Z \rightarrow e e$ candidate sample selection criteria.} \label{fig:invmassbackg} \end{center} \end{figure} An additional source of background events to the $Z \rightarrow e e$ candidate sample is $\wenu$ events with an associated hadronic jet that results in a second reconstructed electron within the event. We use our simulated $\wenu$ sample to estimate the background contribution from this source by applying previously determined jet fake rates for the hadronic jets in these events with scaled $\ensuremath{E_{T}}$ above the corresponding electron thresholds. The relative acceptance of simulated $\wenu$ events, weighted by the measured jet fake rates, and $Z \rightarrow e e$ signal events are used to extract the number of background events from this process based on the value for $R$ presented in the previous section. Once again, the relative acceptances are applied to the final candidate sample after subtracting the estimated number of background events from non-electroweak sources. The estimated number of $\wenu$ background events in the $Z \rightarrow e e$ sample is 16.8~$\pm$~2.8 events. \subsection{Cosmic Ray Backgrounds in $\wmnu$} \label{sec:cosmicwbkg} Energetic cosmic ray muons traverse the detector at a significant rate, depositing hits in both muon and COT chambers, and in some cases can mimic the signatures of our $\wmnu$ and $Z \rightarrow \mu \mu$ candidate events. A cosmic ray muon passing through the detector is typically reconstructed as a pair of incoming and outgoing legs relative to the beam line of the detector. The reconstructed muon legs tend to be isolated and pass our muon selection criteria. 
In some cases, one of the two cosmic legs is not reconstructed due to fiducial and/or timing constraints. These events typically satisfy both the $Z$-rejection and $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ criteria of our $\wmnu$ candidate sample due to the lack of an additional track and the resulting transverse momentum imbalance. We remove cosmic ray events from our $\wmnu$ candidate sample using a tagging algorithm based on the timing information associated with hits in the COT. The algorithm uses a multi-parameter fit over the full set of hits left by the incoming and outgoing cosmic legs. The leg belonging to the reconstructed muon serves as the seed track for the fit. The other leg is referred to as the opposite-side track. The algorithm performs the following steps to determine if an event is consistent with the cosmic ray hypothesis: \begin{itemize} \item Hits belonging to the seed track are refitted with the five helix parameters and a floating global time shift, $t_{0}$. \item Based on the best fit values, an incoming or outgoing hypothesis is assigned to the seed track. \item The refitted seed track is used to search for the hits belonging to the second cosmic leg on the opposite side of the COT. \item If enough hits are found on the other side of the COT, a similar fit procedure is performed to identify the opposite-side track. \item If both legs are found, a simultaneous fit is performed to combine all hits from the seed and opposite-side legs into a single helix. \end{itemize} The final decision of the cosmic tagger depends on the quality of the simultaneous fit to the hits on both legs. If one leg is recognized as incoming and fits well to an outgoing leg on the other side of the detector, the event is tagged as a cosmic ray. As described in greater detail below, we observe that our tagging algorithm identifies most of the cosmic background events in our candidate sample. 
We also find that the algorithm tags very few real events as cosmic rays (see Sec.~\ref{sec:eff}). After removing tagged events from our $\wmnu$ sample, we need to estimate the remaining background from cosmic rays. This estimate is made by searching for hits in the muon chambers on the opposite side of the reconstructed muon track in our final candidate events. These hits are present for a large fraction of cosmic ray muons even in cases where the second leg is not identified by our algorithm. Since the muon chamber hits are not used in the tagging algorithm, their presence is unbiased with respect to its decision. The $\Delta \phi$ distribution for matched hits produced by cosmic ray muons with respect to the direction of the muon candidate track is sharply peaked in the region around 180~$\!^{\circ}$. These events sit on top of a flat event background in $\Delta \phi$ originating from random coincidences between the muon track and unrelated matched hits in the muon chambers. The contribution of cosmic ray events to the candidate $\Delta \phi$ distribution is determined by counting the number of events with matched muon chamber hits in a 10~$\!^{\circ}$ window centered on $\Delta \phi =$~180~$\!^{\circ}$ and subtracting a fitted contribution from the flat background. Using this approach, we would estimate a cosmic background contribution of 54.7~$\pm$~5.0 events in our 31,722 event $\wmnu$ candidate sample. Some of the cosmic ray background events in our candidate sample, however, do not have opposite-side muon chamber hits due to gaps in the muon detector coverage. In order to estimate the total cosmic ray background in our candidate sample from the observed number of events with matched opposite-side hits, we apply an acceptance correction based on the fraction of $\wmnu$ candidate events in which the reconstructed muon track points at an active region of muon chambers when extrapolated to the opposite side of the detector. 
We extrapolate the 31,722 muon tracks in our $\wmnu$ candidate events to the opposite side of the detector and find that 58~$\pm$~30~$\!\%$ point at active regions of the muon chambers. Our acceptance correction assumes that the spatial distribution of muons originating from cosmic rays is similar to that of our $\wmnu$ candidate sample. We assign a large systematic uncertainty on the measured acceptance to account for the non-uniform spatial distribution (most enter from the top side of the detector) of cosmic rays and the reconstruction biases associated with their entry locations and angles of incidence on the detector. To complete the cosmic background measurement for our $\wmnu$ candidate sample, we also need to estimate the contribution of $Z \rightarrow \mu \mu$ events to the observed excess of events in the window around $\Delta \phi =$~180~$\!^{\circ}$. $Z \rightarrow \mu \mu$ events that contain a second reconstructed track passing a loose set of minimum ionizing cuts are rejected from our candidate sample via the $Z$-rejection selection criteria. However, a small fraction of muon tracks from $Z \rightarrow \mu \mu$ events are embedded in jets and fail the loose minimum ionizing cuts or in other cases are simply not reconstructed. Since the muons in $Z \rightarrow \mu \mu$ decays are typically produced in roughly opposite directions to one another, the non-identified tracks in these events can also produce muon chamber hits on the opposite side of the one reconstructed muon in these events. This background is estimated from our simulated $Z \rightarrow \mu \mu$ event sample. Based on this sample, we estimate the number of $Z \rightarrow \mu \mu$ background events in our $\wmnu$ candidate sample with matched muon chamber hits in the 10~$\!^{\circ}$ window centered on $\Delta \phi =$~180~$\!^{\circ}$ to be 35.4~$\pm$~9.1.
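Combining the matched-hit excess, the predicted $Z \rightarrow \mu \mu$ contribution, and the opposite-side acceptance yields the final estimate. As a numerical sketch using the rounded values quoted above (the small difference from the quoted 33.1 events reflects rounding of the inputs):

```python
def cosmic_background(n_mh_evt, n_mh_z, acc_opp):
    # Matched-hit excess minus the predicted Z -> mumu component,
    # corrected for the opposite-side muon chamber acceptance.
    return (n_mh_evt - n_mh_z) / acc_opp

print(round(cosmic_background(54.7, 35.4, 0.58), 1))  # ~33 events
```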
The uncertainty assigned to this background is based on our use of different techniques for looking at opposite side muon chamber hits in data and simulation. The final estimate of the cosmic ray background in the $\wmnu$ sample, $N^{\mathrm{cos}}_{\mathrm{bg}}$, is obtained from \begin{eqnarray} N^{\mathrm{cos}}_{\mathrm{bg}} = \frac{N^{\mathrm{MH}}_{\mathrm{evt}} - N^{\mathrm{MH}}_{Z \rightarrow \mu \mu}}{A^{\mathrm{opp}}_{\mu}} \end{eqnarray} where $N^{\mathrm{MH}}_{\mathrm{evt}}$ is the number of $\wmnu$ candidate events with matched hits in the tight window centered on $\Delta \phi =$~180~$\!^{\circ}$, $N^{\mathrm{MH}}_{Z \rightarrow \mu \mu}$ is the predicted number of $Z \rightarrow \mu \mu$ background events with matched hits in the same window, and $A^{\mathrm{opp}}_{\mu}$ is the muon chamber acceptance for muon tracks in $\wmnu$ candidate events extrapolated to the opposite side of the detector. Using the input values obtained above, we estimate a total cosmic background of 33.1~$\pm$~22.9 events for our $\wmnu$ candidate sample. \subsection{Cosmic Ray Backgrounds in $Z \rightarrow \mu \mu$} \label{sec:cosmiczbkg} Cosmic rays also contribute to the $Z \rightarrow \mu \mu$ candidate sample. The majority of these events are removed using the cosmic ray tagging algorithm described in the previous section. The remaining cosmic ray background is estimated based on the distribution of the three-dimensional opening angle between the muon tracks in candidate events. The two reconstructed muon legs in the cosmic ray background events are typically back-to-back with opening angles at or near 180~$\!^{\circ}$. The residual background is estimated by fitting the opening angle distribution for data events in the region below 2.8~radians (assumed to be background free) to the same distribution for simulated $Z \rightarrow \mu \mu$ events. The output of the fit is a scale factor for the distribution from simulation, which is also applied in the region above 2.8~radians.
The number of scaled simulation events with an opening angle greater than 2.8~radians is compared to the number of data candidate events in the same region. The observed excess in data over simulation is taken as our estimate of the cosmic ray background. Using this technique, we estimate a total of 12~$\pm$~12 cosmic ray background events in our $Z \rightarrow \mu \mu$ candidate sample, where the quoted uncertainty is based on the statistics of our data sample. A comparison of the opening angle distribution between data and scaled simulation is shown in Fig.~\ref{fig:cosmic4}. \begin{figure}[t] \begin{center} \includegraphics[width=3.5in]{figures/zback.eps} \caption{Comparison of the three-dimensional opening angle distribution for muon tracks in $Z \rightarrow \mu \mu$ candidate events with the same distribution from simulated events. The simulated distribution is scaled to match the data in the region below 2.8~radians.} \label{fig:cosmic4} \end{center} \end{figure} \subsection{Background Summary} \label{sec:bkgsummary} Based on the information presented in the preceding sections, the estimated background contributions to the $\wenu$, $\wmnu$, $Z \rightarrow e e$ and $Z \rightarrow \mu \mu$ candidate samples are summarized in Table~\ref{tab:backsum}. \begin{table*}[t] \caption{Summary of background event estimates for the $\wlnu$ and $Z \rightarrow \ell \ell$ candidate samples.} \begin{center} \begin{tabular}{l c c c c} \hline \hline Background source & $\wenu$ & $\wmnu$ & $Z \rightarrow e e$ & $Z \rightarrow \mu \mu$ \\ \hline Multi-jet & 587 $\pm$ 299 & 220 $\pm$ 112 & 41 $\pm$ 18 & $0^{+1}_{-0}$ \\ $Z \rightarrow \ell \ell$ & 426 $\pm$ 19 & 2229 $\pm$ 96 & - & - \\ $Z \rightarrow \tau \tau$ & negl. & negl. & 3.7 $\pm$ 0.4 & 1.5 $\pm$ 0.3 \\ $\wtnu$ & 749 $\pm$ 17 & 988 $\pm$ 24 & negl. & negl. \\ $\wlnu$ & - & - & 16.8 $\pm$ 2.8 & negl. \\ Cosmic rays & negl. & 33 $\pm$ 23 & negl.
& 12 $\pm$ 12 \\ \hline Total & 1762 $\pm$ 300 & 3469 $\pm$ 151 & 62 $\pm$ 18 & 13 $\pm$ 13 \\ \hline \hline \end{tabular} \label{tab:backsum} \end{center} \end{table*} \section{Results} \label{sec:results} Using the measured event counts, kinematic and geometric acceptances, event selection efficiencies, background estimates, and integrated luminosities for our candidate samples, we extract the $\ensuremath{W}$ and $\gamma^{*}/Z$ boson production cross sections multiplied by the leptonic ($e$ and $\mu$) branching ratios. We also determine a value for the ratio of $\wlnu$ to $Z \rightarrow \ell \ell$ cross sections, $R$, taking advantage of correlated uncertainties in the two cross section measurements which cancel in the ratio. To test for lepton universality, we use the measured ratio of $\wlnu$ cross sections in the muon and electron channels to extract a ratio of the $\wlnu$ coupling constants, $g_{\mu}/g_e$. Then, based on the assumption of lepton universality, we increase the precision of our results by combining the production cross section and cross section ratio measurements obtained from the electron and muon candidate samples. The resulting combined value of $R$ is used to extract the total decay width of the $\ensuremath{W}$ boson, $\Gamma(W)$, and the $\ensuremath{W}$ leptonic branching ratio, $Br(\wlnu)$, which are compared with Standard Model predictions. The measurement of $\Gamma(W)$ is also used to constrain the CKM matrix element $V_{cs}$. \subsection{$\wlnu$ Cross Section} \label{subsec:res_wlnu} The cross section $\sigma (p\overline{p} \rightarrow W)$ times the branching ratio $Br(\wlnu)$ is calculated using Eq.~\ref{eq:wsigma} given in Sec.~\ref{sec:intro}. The measurements of the required input parameters for the electron and muon candidate samples are described in the previous sections and summarized in Table~\ref{tab:wlnusigma}. 
Based on these values, we obtain \begin{eqnarray} \sigma_{W} \cdot Br(\wenu) = 2.771 &\pm& 0.014({\it stat.}) \nonumber\\ &\pm& ^{0.062}_{0.056}({\it syst.})\nonumber\\ &\pm& 0.166({\it lum.})~\ensuremath{\mathrm{nb}} \nonumber\\ \end{eqnarray} and \begin{eqnarray} \sigma_{W} \cdot Br(\wmnu) = 2.722 &\pm& 0.015({\it stat.}) \nonumber\\ &\pm& ^{0.066}_{0.061}({\it syst.}) \nonumber\\ &\pm& 0.163({\it lum.})~\ensuremath{\mathrm{nb}}~{\rm .}\nonumber\\ \end{eqnarray} We compare our measurements to a recent NNLO total cross section calculation for $\sqrt{s} =$~1.96~$\ensuremath{\mathrm{Te\kern -0.1em V}}$~\cite{int:mrst1} which utilizes the MRST 2002 NNLL PDF set~\cite{int:mrst1,int:mrst2}. The resulting predicted $\wlnu$ cross section is 2.687~$\pm$~0.054~$\ensuremath{\mathrm{nb}}$, which agrees well with our measured values in both lepton channels. The uncertainty on the predicted cross section is mostly due to PDF model uncertainties derived from the MRST error PDF sets. We also perform an independent calculation of the uncertainty on the total $\wlnu$ cross section originating from uncertainties in the PDF model using the method described in Sec.~\ref{sec:acc}. Based on this method, we obtain a consistent 1.3~$\!\%$ uncertainty based on the MRST error PDF sets and a 3.9~$\!\%$ uncertainty based on the CTEQ6 error PDF sets. 
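As a numerical cross-check, these central values can be reproduced from the inputs summarized in Table~\ref{tab:wlnusigma}, assuming the standard counting-experiment form $\sigma \cdot Br = (N_{W}^{\mathrm{obs}}-N_{W}^{\mathrm{bck}})/(A_{W}\,\epsilon_{W} \int \mathcal{L} dt)$ of Eq.~\ref{eq:wsigma}. The sketch uses rounded central values only, so the last digit can differ slightly from the quoted results:

```python
def sigma_br_nb(n_obs, n_bkg, acc, eff, lumi_pb):
    """Counting-experiment cross section times branching ratio, in nb."""
    return (n_obs - n_bkg) / (acc * eff * lumi_pb) / 1000.0  # pb -> nb

# central values from Table tab:wlnusigma
sigma_e  = sigma_br_nb(37584, 1762, 0.2397, 0.749, 72.0)  # ~2.771 nb
sigma_mu = sigma_br_nb(31722, 3469, 0.1970, 0.732, 72.0)  # ~2.721 nb
```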
\begin{table}[t] \caption{Summary of the input parameters to the $\wlnu$ cross section calculations for the electron and muon candidate samples.} \begin{center} \begin{tabular}{l c c} \hline \hline & $\wenu$ & $\wmnu$ \\ \hline $N_{W}^{\mathrm{obs}}$ & 37584 & 31722 \\ $N_{W}^{\mathrm{bck}}$ & 1762 $\pm$ 300 & 3469 $\pm$ 151 \\ $A_W$ & 0.2397 $^{+0.0035}_{-0.0042}$ & 0.1970 $^{+0.0024}_{-0.0031}$ \\ $\epsilon_W$ & 0.749 $\pm$ 0.009 & 0.732 $\pm$ 0.013 \\ $\int \mathcal{ L} dt$ ($\ensuremath{\mathrm{pb}^{-1}}$) & 72.0 $\pm$ 4.3 & 72.0 $\pm$ 4.3 \\ \hline \hline \end{tabular} \label{tab:wlnusigma} \end{center} \end{table} \begin{figure*}[t] \begin{center} \includegraphics[width=3.2in]{figures/WPT_CMUPX.eps} \includegraphics[width=3.2in]{figures/et_bless_mulike.eps} \caption{Muon $\ensuremath{p_{T}}$ (left) and electron $\ensuremath{E_{T}}$ (right) distributions for $\wlnu$ candidate events in data (points). The solid lines are the sum of the predicted shapes originating from the signal and background processes weighted by their estimated contributions to our candidate samples. The separate contributions originating from the signal and each individual background process are also shown.} \label{fig:ptmuo_etele} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[width=3.2in]{figures/WMET_CMUPX.eps} \includegraphics[width=3.2in]{figures/met_bless_mulike.eps} \caption{Event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ distributions for $\wlnu$ candidate events in data (points). The selection requirement on event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ has been removed to include candidate events with low $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$. The solid lines are the sum of the predicted shapes originating from the signal and background processes weighted by their estimated contributions to our candidate samples. 
The separate contributions originating from the signal and each individual background process are also shown. The arrows indicate the location of the event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ selection criteria used to define our candidate samples.} \label{fig:wmet} \end{center} \end{figure*} \begin{figure*}[t] \begin{center} \includegraphics[width=3.2in]{figures/WTMass_CMUPX.eps} \includegraphics[width=3.2in]{figures/mt_bless_mulike.eps} \caption{Transverse mass ($M_{T}$) distributions for $\wlnu$ candidate events in data (points). The solid lines are the sum of the predicted shapes originating from the signal and background processes weighted by their estimated contributions to our candidate samples. The separate contributions originating from the signal and each individual background process are also shown.} \label{fig:wmt} \end{center} \end{figure*} Distributions of electron $\ensuremath{E_{T}}$, muon $\ensuremath{p_{T}}$, event $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$, and $\ensuremath{W}$ transverse mass $(M_{T} = \sqrt{2 [\ensuremath{E_{T}}\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$} - (\ensuremath{E_{x}} \mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{T,x}\:$} + \ensuremath{E_{y}}\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_{T,y}\:$})]})$ for events in our $\wlnu$ candidate samples are shown in Figs.~\ref{fig:ptmuo_etele} -~\ref{fig:wmt}. The data distributions are compared against a sum of the predicted shapes of these distributions for the $\wlnu$ signal and each contributing background process ($Z \rightarrow \ell \ell$, $\wtnu$, and hadronic jets). The predicted shapes are obtained from our simulated event samples except in the case of the background arising from hadronic jets, which is modeled using events in the data containing non-isolated leptons that otherwise satisfy the $\wlnu$ selection criteria. 
In the sum, the predicted shape obtained for each process is weighted by the estimated number of events in our $\wlnu$ candidate samples originating from that process (see Table~\ref{tab:backsum}). In the case of the $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ distribution, we remove the selection cut on the $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ variable to include events with low $\mbox{${\hbox{$E$\kern-0.6em\lower-.1ex\hbox{/}}}_T\:$}$ in the comparison and highlight the significant background contribution from hadronic jets in this region. \subsection{$\gamma^{*}/Z \rightarrow \ell \ell$ Cross Section} \label{subsec:res_zll} Similarly, the cross section $\sigma (p\overline{p} \rightarrow \gamma^{*}/Z)$ times the branching ratio $Br(\gamma^{*}/Z \rightarrow \ell \ell)$ is calculated using Eq.~\ref{eq:zsigma} given in Sec.~\ref{sec:intro}. The measurements of the required input parameters for the electron and muon candidate samples are described in the previous sections and summarized in Table~\ref{tab:zllsigma}. Based on these values, we obtain \begin{eqnarray} \sigma_{\gamma^{*}/Z} \cdot Br(\zgee) = 255.8 &\pm& 3.9({\it stat.}) \nonumber\\ &\pm& ^{5.5}_{5.4}({\it syst.}) \nonumber\\ &\pm& 15.3({\it lum.})~\ensuremath{\mathrm{pb}} \nonumber\\ \end{eqnarray} and \begin{eqnarray} \sigma_{\gamma^{*}/Z} \cdot Br(\zgmm) = 248.0 &\pm& 5.9({\it stat.}) \nonumber\\ &\pm& ^{8.0}_{7.2}({\it syst.}) \nonumber\\ &\pm& 14.8({\it lum.})~\ensuremath{\mathrm{pb}}~{\rm .} \nonumber\\ \end{eqnarray} These measurements are the cross sections for dileptons produced in the mass range 66~$\ensuremath{\GeV\!/c^2} < M_{\ell\ell} <$~116~$\ensuremath{\GeV\!/c^2}$ where both $\gamma^{*}$ and $\ensuremath{Z}$ boson exchange contribute. 
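As in the $\ensuremath{W}$ case, these central values follow from the inputs of Table~\ref{tab:zllsigma}, assuming the counting-experiment form of Eq.~\ref{eq:zsigma}; a minimal check using rounded central values:

```python
def sigma_br_pb(n_obs, n_bkg, acc, eff, lumi_pb):
    """Counting-experiment cross section times branching ratio, in pb."""
    return (n_obs - n_bkg) / (acc * eff * lumi_pb)

# central values from Table tab:zllsigma
sigma_ee = sigma_br_pb(4242, 62, 0.3182, 0.713, 72.0)  # ~255.9 pb
sigma_mm = sigma_br_pb(1785, 13, 0.1392, 0.713, 72.0)  # ~248.0 pb
```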
A correction factor of $F =$~1.004~$\pm$~0.001, determined from a NNLO $d\sigma/dy$ calculation, {\sc phozpr}~\cite{int:nnlo1,int:nnlo2,int:nnlo3}, using MRST 2002 NNLL PDFs~\cite{int:mrst2}, is needed to convert these measured cross sections into those for pure $\ensuremath{Z}$ boson exchange over the entire dilepton mass range: the measured cross sections are multiplied by $F$. We compare the corrected cross sections for pure $\ensuremath{Z}$ boson exchange to the recent NNLO total cross section calculations for $\sqrt{s} =$~1.96~$\ensuremath{\mathrm{Te\kern -0.1em V}}$~\cite{int:mrst1}. The $Z \rightarrow \ell \ell$ production cross section predicted by these calculations is 251.3~$\pm$~5.0~$\ensuremath{\mathrm{pb}}$, which is in good agreement with the corrected, measured values obtained in both lepton channels. The uncertainty on the predicted $\ensuremath{Z}$ boson production cross section is also primarily due to uncertainties in the PDF model derived from the MRST error PDF sets. Our independent estimates for these uncertainties using the method described in Sec.~\ref{sec:acc} are a consistent 1.2~$\!\%$ uncertainty based on the MRST error PDF sets and a somewhat larger 3.7~$\!\%$ uncertainty based on the CTEQ6 error PDF sets. Figs.~\ref{fig:ptfit} and~\ref{fig:etfit} show the invariant mass distributions for events in our $Z \rightarrow \ell \ell$ candidate samples. The data distributions are compared against predicted shapes from our simulated $Z \rightarrow \ell \ell$ event samples. The predicted shapes are normalized to the total number of events in the candidate samples. In making these comparisons, we ignore background processes which account for less than 1~$\!\%$ of the events in these samples (see Table~\ref{tab:backsum}).
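The effect of the correction factor on the measured values is simple multiplication; a one-line check using the rounded central values (uncertainties omitted):

```python
F = 1.004                                # NNLO d(sigma)/dy correction factor
measured = {"ee": 255.8, "mumu": 248.0}  # pb, 66-116 GeV/c^2 mass window
pure_z = {ch: F * s for ch, s in measured.items()}  # pb, full mass range
# both corrected values are compared with the NNLO prediction of
# 251.3 +/- 5.0 pb for pure Z exchange
```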
\begin{table}[t] \caption{Summary of the input parameters to the $\gamma^{*}/Z \rightarrow \ell \ell$ cross section calculations for the electron and muon candidate samples.} \begin{center} \begin{tabular}{l c c} \hline \hline & $\gamma^{*}/Z \rightarrow e e$ & $\gamma^{*}/Z \rightarrow \mu \mu$ \\ \hline $N_{Z}^{\mathrm{obs}}$ & $4242$ & 1785 \\ $N_{Z}^{\mathrm{bck}}$ & 62 $\pm$ 18 & 13 $\pm$ 13 \\ $A_Z$ & 0.3182 $^{+0.0039}_{-0.0041}$ & 0.1392 $^{+0.0027}_{-0.0033}$ \\ $\epsilon_Z$ & 0.713 $\pm$ 0.012 & 0.713 $\pm$ 0.015 \\ $\int \mathcal{ L} dt$ ($\ensuremath{\mathrm{pb}^{-1}}$) & 72.0 $\pm$ 4.3 & 72.0 $\pm$ 4.3 \\ \hline \hline \end{tabular} \label{tab:zllsigma} \end{center} \end{table} \subsection{Ratio of $\wlnu$ to $Z \rightarrow \ell \ell$} \label{subsec:res_eratio} \begin{table}[t] \caption{Summary of the input parameters to the $R$ calculations for the electron and muon candidate samples.} \begin{center} \begin{tabular}{ l c c } \hline \hline & R$_e$ & R$_{\mu}$ \\ \hline $N_{W}^{\mathrm{obs}}$ & 37584 & 31722 \\ $N_{W}^{\mathrm{bck}}$ & 1762 $\pm$ 300 & 3469 $\pm$ 151 \\ $N_{Z}^{\mathrm{obs}}$ & 4242 & 1785 \\ $N_{Z}^{\mathrm{bck}}$ & 62 $\pm$ 18 & 13 $\pm$ 13 \\ $\frac{A_Z}{A_W}$ & 1.3272 $\pm$ 0.0109 & 0.7066 $\pm$ 0.0068 \\ $\frac{\epsilon_Z}{\epsilon_W}$ & 0.952 $\pm$ 0.011 & 0.974 $\pm$ 0.010 \\ $F$ & 1.004 $\pm$ 0.001 & 1.004 $\pm$ 0.001 \\ \hline \hline \end{tabular} \label{tab:finalr} \end{center} \end{table} Precision measurements of the ratio of $\wlnu$ to $Z \rightarrow \ell \ell$ production cross sections, $R$, are used to test the Standard Model. The Standard Model parameters $\Gamma(W)$ and $Br(\wlnu)$ can be extracted from our measured values of this ratio and are sensitive to non-Standard Model processes that result in additional decay modes for the $\ensuremath{W}$ boson. A new high-mass resonance which decays to either $\ensuremath{W}$ or $\ensuremath{Z}$ bosons could also have a direct effect on the measured value for $R$. 
The ratio of cross sections can be expressed in terms of measured quantities: \begin{equation} R = \frac{1}{F} \cdot \frac{N_{W}^{\mathrm{obs}}-N_{W}^{\mathrm{bck}}} {N_{Z}^{\mathrm{obs}}-N_{Z}^{\mathrm{bck}}} \cdot \frac{A_Z}{A_W} \cdot \frac{\epsilon_Z}{\epsilon_W}, \label{eq::rex} \end{equation} where $F$ is the correction factor for converting the measured $\gamma^{*}/Z \rightarrow \ell \ell$ cross section into the cross section for pure $\ensuremath{Z}$ boson exchange and the other parameters are as defined for the $\ensuremath{W}$ and $\ensuremath{Z}$ production cross section measurements. The integrated luminosity terms in the $\ensuremath{W}$ and $\ensuremath{Z}$ cross section calculations along with their associated uncertainties cancel completely in the $R$ calculation, allowing for a significantly more precise measurement of the ratio than is possible for the individual cross sections. In addition, we take advantage of many correlated uncertainties in the event selection efficiencies and kinematic and geometric acceptances of our $\ensuremath{W}$ and $\ensuremath{Z}$ candidate samples which cancel in the ratios $A_{Z}/A_{W}$ and $\epsilon_{Z}/\epsilon_{W}$. For example, uncertainties on the acceptances arising from the PDF model are significantly smaller for the ratio of the $Z \rightarrow \ell \ell$ and $\wlnu$ acceptances than for either individual acceptance. The calculation of $A_{Z}/A_{W}$ and $\epsilon_{Z}/\epsilon_{W}$ for our electron and muon candidate samples and the treatment of the correlated uncertainties in these ratios are discussed in Secs.~\ref{sec:acc} and~\ref{sec:eff}. The event counts and background estimates for the $\wlnu$ and $Z \rightarrow \ell \ell$ candidate samples are the same as those used in the individual cross section calculations. Table~\ref{tab:finalr} summarizes the input parameters used to calculate $R$ using the electron and muon candidate samples. 
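Evaluating Eq.~\ref{eq::rex} with the central values from Table~\ref{tab:finalr} reproduces the quoted results (a sketch; rounding of the inputs shifts the last digit slightly):

```python
def ratio_r(nw_obs, nw_bkg, nz_obs, nz_bkg, az_aw, ez_ew, f):
    """Eq. eq::rex: R = (1/F) (NW - NWbkg)/(NZ - NZbkg) (AZ/AW) (eZ/eW)."""
    return (nw_obs - nw_bkg) / (nz_obs - nz_bkg) * az_aw * ez_ew / f

# central values from Table tab:finalr
r_e  = ratio_r(37584, 1762, 4242, 62, 1.3272, 0.952, 1.004)  # ~10.78
r_mu = ratio_r(31722, 3469, 1785, 13, 0.7066, 0.974, 1.004)  # ~10.93
```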
Substituting these values into Eq.~\ref{eq::rex}, we obtain \begin{equation} R_{e} = 10.79 \pm 0.17({\it stat.}) \pm 0.16({\it syst.}) \end{equation} and \begin{equation} R_{\mu} = 10.93 \pm 0.27({\it stat.}) \pm 0.18({\it syst.})~{\rm .} \end{equation} Based on the calculations of the production cross sections for $\wlnu$ and $Z \rightarrow \ell \ell$ provided by \cite{int:nnlo0,int:nnlo4,int:nnlo1,int:nnlo2,int:nnlo3}, the expected value for $R$ at $\sqrt{s} =$~1.96~$\ensuremath{\mathrm{Te\kern -0.1em V}}$ is 10.69. To obtain an accurate estimate for the uncertainty on this prediction, we need to account for correlated uncertainties in the individual cross section predictions. The error originating from PDF model uncertainties makes the largest contribution to the total uncertainty. We estimate the magnitude of this contribution using the method described previously in Sec.~\ref{sec:acc} and obtain a 0.45~$\!\%$ uncertainty based on the MRST error PDF sets and a larger 0.56~$\!\%$ uncertainty based on the CTEQ6 error PDF sets. We also need to account for the effect of additional uncertainties in the values of the electroweak parameters and CKM matrix elements used in the cross section calculations. We estimate these uncertainties using the $\overline{\mathrm{MS}}$ NNLO total cross section calculation, {\sc zwprod}~\cite{int:nnlo1,int:nnlo2}. We have updated the calculation code to incorporate the CTEQ and MRST PDFs and variations of the electroweak parameters and CKM matrix elements. We obtain an uncertainty of 0.15~$\!\%$ for the $\sigma_{Z}$ calculation and 0.40~$\!\%$ for the $\sigma_{W}$ calculation. The larger uncertainty associated with the $\sigma_{W}$ calculation is due primarily to experimental uncertainties on the CKM matrix values.
To be conservative, we add the larger PDF model uncertainty (0.56~$\!\%$) in quadrature with the individual cross section calculation uncertainties (0.15~$\!\%$ and 0.40~$\!\%$) to obtain a combined uncertainty on the prediction for $R$ of 0.70~$\!\%$. The resulting prediction, 10.69 $\pm$ 0.08, agrees with the measured values of $R$ in both lepton channels. \subsection{$\mu$-e Universality in $\ensuremath{W}$ Decays} \label{subsec:gmuge} Stringent tests of lepton universality at LEP provide strong evidence for lepton universality in $Z \rightarrow \ell \ell$ production. We make a similar test for lepton universality in $\wlnu$ production by extracting the ratio of $\wlnu$ couplings, $g_{\mu}/g_e$, from the measured ratio of the $\wmnu$ and $\wenu$ cross sections. The $\wlnu$ couplings are related to the measured ratio $U$ of the cross sections, defined as \begin{equation} U \equiv \frac{\sigma_{W} \cdot Br(\wmnu)} {\sigma_{W} \cdot Br(\wenu)} = \frac{\Gamma(\wmnu)}{\Gamma(\wenu)} = \frac{g_{\mu}^2}{g_{e}^2}~{\rm .} \label{eq:gratio} \end{equation} As in the case of the $R$ measurements described in the previous section, many of the uncertainties associated with the individual cross section measurements cancel in the ratio. Table~\ref{tab:Usys} summarizes the uncorrelated uncertainties between the two cross section measurements that contribute to the overall uncertainty on $g_{\mu}/g_e$. The uncertainties due to the PDF model cancel almost completely in the ratio. The major remaining contributions to the systematic uncertainty come from the uncorrelated event selection efficiencies for the electron and muon candidate samples. Since these efficiencies are measured directly from $Z \rightarrow \ell \ell$ candidate events in the data, the associated uncertainties will decrease as additional data are analyzed. In this sense, the remaining uncertainty on $g_{\mu}/g_e$ is primarily statistical in nature and can be reduced with larger data samples. 
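Two pieces of arithmetic above can be checked directly: the quadrature sum giving the 0.70~$\!\%$ uncertainty on the predicted $R$, and the coupling ratio of Eq.~\ref{eq:gratio} evaluated from the measured $\wlnu$ cross sections (central values only):

```python
import math

# combined theory uncertainty on the predicted R: PDF model (0.56%) added
# in quadrature with the sigma_Z (0.15%) and sigma_W (0.40%) terms
dr_pred = math.sqrt(0.56**2 + 0.15**2 + 0.40**2)  # ~0.70 %

# Eq. eq:gratio: U = sigma.Br(W->mu nu) / sigma.Br(W->e nu) = g_mu^2/g_e^2
u = 2.722 / 2.771
g_mu_over_g_e = math.sqrt(u)  # ~0.991
```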
Using the input parameters to our $\wlnu$ cross section measurements, we obtain \begin{equation} \frac{g_\mu}{g_e} = 0.991 \pm 0.012~{\rm .} \end{equation} Using Eq.~\ref{eq:gratio} and the current world average of experimental results for $Br(\wmnu) =$~0.1057~$\pm$~0.0022 and $Br(\wenu) =$~0.1072~$\pm$~0.0016~\cite{int:pdg}, the expected value of $g_{\mu}/g_{e}$ is 0.993~$\pm$~0.013, which is in good agreement with our measured value. \begin{table}[t] \caption{Uncertainties on the measured ratio of $\wmnu$ and $\wenu$ cross sections, $U$.} \begin{center} \begin{tabular}{l r} \hline \hline Category & Uncertainty \\ \hline Statistical Uncertainty & 0.0075 \\ \hline Acceptance Ratio: & \\ Simulation Statistics & 0.0019 \\ Boson $\ensuremath{p_{T}}$ Model & 0.0001 \\ PDF Model & $^{+0.0003}_{-0.0004}$ \\ $\ensuremath{p_{T}}$ Scale and Resolution & 0.0018 \\ $\ensuremath{E_{T}}$ Scale and Resolution & 0.0034 \\ Material Model & 0.0072 \\ Recoil Energy Model & 0.0010 \\ \hline Efficiency Ratio: & \\ Uncorrelated & 0.0199 \\ \hline Backgrounds: & \\ Hadronic & 0.0043 \\ Electroweak & 0.0030 \\ Cosmic Ray & 0.0008 \\ \hline \hline \end{tabular} \label{tab:Usys} \end{center} \end{table} \subsection{Combined Results from the Electron and Muon Channels} \label{subsec:res_emu} Since our measurement of $g_\mu/g_e$ supports the conclusion of lepton universality in $\wlnu$ production, we proceed to combine the $\wlnu$ and $Z \rightarrow \ell \ell$ production cross section measurements in the electron and muon channels to increase the overall precision of these results. We also combine our measurements of $R_{e}$ and $R_{\mu}$ to determine a precision value for $R$, which is used to test the Standard Model. \subsubsection{Combination of the Cross Sections} \label{subsec:res_combcs} \begin{table} \caption{Uncertainty categories for the inclusive $\ensuremath{W}$ cross section measurements.
These values are absolute contributions to $\sigma_{W}$ in $\ensuremath{\mathrm{pb}}$. The uncertainties in the electron and muon channels for each category are treated as either 100~$\!\%$ correlated (1.0) or uncorrelated (0.0).} \begin{center} \begin{tabular}{l c c c} \hline \hline Category & Electron & Muon & Correlation \\ \hline Statistical Uncertainty & 14.3 & 15.3 & 0.0 \\ \hline Acceptance: & & & \\ Simulation Statistics & 3.6 & 3.9 & 0.0 \\ Boson $\ensuremath{p_{T}}$ Model & 1.2 & 1.0 & 1.0 \\ PDF Model & 36.9 & 35.4 & 1.0 \\ $\ensuremath{p_{T}}$ Scale and Resolution & 0.8 & 5.6 & 1.0 \\ $\ensuremath{E_{T}}$ Scale and Resolution & 9.5 & 0.0 & 0.0 \\ Material Model & 20.2 & 0.0 & 0.0 \\ Recoil Energy Model & 6.8 & 9.4 & 1.0 \\ \hline Efficiencies: & & & \\ Vertex $z_0$ Cut & 11.7 & 11.5 & 1.0 \\ Track Reconstruction & 11.1 & 10.9 & 1.0 \\ Trigger & 2.9 & 32.7 & 0.0 \\ Lepton Reconstruction & 11.1 & 19.7 & 0.0 \\ Lepton Identification & 24.1 & 23.8 & 0.0 \\ Lepton Isolation & 9.4 & 9.7 & 0.0 \\ $\ensuremath{Z}$-rejection Cut & 0.0 & 4.6 & 0.0 \\ Cosmic Ray Algorithm & 0.0 & 0.3 & 0.0 \\ \hline Backgrounds: & & & \\ Hadronic & 23.1 & 10.8 & 1.0 \\ $Z \rightarrow \ell \ell$ & 1.5 & 9.2 & 1.0 \\ $\wtnu$ & 1.3 & 2.3 & 1.0 \\ Cosmic Ray & 0.0 & 2.2 & 0.0 \\ \hline \hline \end{tabular} \label{tab:Werrcat} \end{center} \end{table} We use the Best Linear Unbiased Estimate (BLUE)~\cite{res:blue,res:blue2} method to combine measurements in the electron and muon channels. For the $\wlnu$ measurements, we identify twenty categories of uncertainties, several of which are correlated in the electron and muon channels. Table~\ref{tab:Werrcat} lists these categories and summarizes the raw contribution of each (in $\ensuremath{\mathrm{pb}}$) to the $\ensuremath{W}$ cross section measurements in the electron and muon channels. 
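For two measurements $x_1$ and $x_2$ with covariance matrix $C$, the BLUE prescription reduces to inverse-covariance weights $w \propto C^{-1}\mathbf{1}$. A minimal sketch for the two-channel case follows; the numbers in the example are illustrative only, not the full twenty-category covariance of Table~\ref{tab:Werrcat}:

```python
def blue2(x1, x2, s1, s2, rho):
    """BLUE combination of two measurements with total uncertainties s1, s2
    and correlation coefficient rho between them."""
    c12 = rho * s1 * s2
    w1 = (s2**2 - c12) / (s1**2 + s2**2 - 2.0 * c12)
    w2 = 1.0 - w1
    xc = w1 * x1 + w2 * x2
    var = w1**2 * s1**2 + w2**2 * s2**2 + 2.0 * w1 * w2 * c12
    return xc, var**0.5

# illustrative, uncorrelated case: inverse-variance weighting is recovered
xc, sc = blue2(10.0, 12.0, 1.0, 2.0, 0.0)  # xc = 10.4
```

With nonzero correlation, the weight of the less precise measurement is reduced further, which is why the correlated categories in Table~\ref{tab:Werrcat} must be identified explicitly.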
Based on the information in this table, we combine the measurements in the two lepton channels and obtain \begin{eqnarray} \sigma_{W} \cdot Br(\wlnu) = 2.749 &\pm& 0.010({\it stat.}) \nonumber\\ &\pm& 0.053({\it syst.})\nonumber\\ &\pm& 0.165({\it lum.})~\ensuremath{\mathrm{nb}}, \nonumber\\ \end{eqnarray} which has a precision of 2.0~$\!\%$, not including the uncertainty associated with the measured integrated luminosity of our samples. Since the luminosity uncertainty is fully correlated between the two channels, it is excluded from the combination procedure and quoted separately. The combination of the $Z \rightarrow \ell \ell$ cross section measurements in the electron and muon channels is based on the same procedure. In this case, we identify seventeen categories of uncertainties, some of which are correlated between channels. Table~\ref{tab:Zerrcat} provides a list of these categories and summarizes the raw contribution of each (in $\ensuremath{\mathrm{pb}}$) to the $\ensuremath{Z}$ cross section measurements in the electron and muon channels. The additional acceptance for forward electrons in the plug calorimeter modules reduces the statistical uncertainty associated with the $\ensuremath{Z}$ cross section measurement in the electron channel, which thus has a larger weight in the final combination. The combined result is \begin{eqnarray} \sigma_{\gamma^{*}/Z} \cdot Br(\zgll) = 254.9 &\pm& 3.3({\it stat.}) \nonumber\\ &\pm& 4.6({\it syst.}) \nonumber\\ &\pm& 15.2({\it lum.})~\ensuremath{\mathrm{pb}}, \nonumber\\ \end{eqnarray} which has a precision of 2.2~$\!\%$, not including the uncertainty associated with the measured integrated luminosity of our samples. As discussed previously, the combined cross section given here is the cross section for dileptons in the mass range 66~$\ensuremath{\GeV\!/c^2} < M_{\ell\ell} <$~116~$\ensuremath{\GeV\!/c^2}$ including contributions from both $\gamma^{*}$ and $\ensuremath{Z}$ boson exchange.
In order to convert the measured cross section into a cross section for pure $\ensuremath{Z}$ boson exchange over the entire mass range, one must multiply the measured value by the correction factor presented earlier, $F =$~1.004~$\pm$~0.001. A comparison of the predictions from~\cite{int:nnlo0,int:nnlo4,int:nnlo1,int:nnlo2,int:nnlo3} for $\sigma_{W} \cdot Br(\wlnu)$ and $\sigma_{Z} \cdot Br(\zll)$ as a function of the $p\overline{p}$ center-of-mass energy, $E_{\mathrm{CM}}$, with our measured values and other experimental results~\cite{int:ua1,int:cdf_sigmas,int:d0_B} is shown in Fig.~\ref{fig:sigma_th}. \begin{figure*}[t] \begin{center} \includegraphics[width=5.in]{figures/Xsec_th.eps} \caption{$\wlnu$ and $Z \rightarrow \ell \ell$ cross section measurements as a function of the $p\overline{p}$ center-of-mass energy, $E_{\mathrm{CM}}$. The solid lines correspond to the theoretical NNLO Standard Model calculations from \cite{int:nnlo0,int:nnlo4,int:nnlo1,int:nnlo2,int:nnlo3}.} \label{fig:sigma_th} \end{center} \end{figure*} \begin{table} \caption{Uncertainty categories for the inclusive $\ensuremath{Z}$ cross section measurements. These values are absolute contributions to $\sigma_{Z}$ in $\ensuremath{\mathrm{pb}}$.
The uncertainties in the electron and muon channels for each category are treated as either 100~$\!\%$ correlated (1.0) or uncorrelated (0.0).} \begin{center} \begin{tabular}{l c c c} \hline \hline Category & Electron & Muon & Correlation \\ \hline Statistical Uncertainty & 3.93 & 5.87 & 0.0 \\ \hline Acceptance: & & & \\ Simulation Statistics & 0.61 & 1.01 & 0.0 \\ Boson $\ensuremath{p_{T}}$ Model & 0.16 & 0.19 & 1.0 \\ PDF Model & 1.96 & 4.94 & 1.0 \\ $\ensuremath{p_{T}}$ Scale and Resolution & 0.10 & 0.13 & 1.0 \\ $\ensuremath{E_{T}}$ Scale and Resolution & 0.67 & 0.00 & 0.0 \\ Material Model & 2.45 & 0.00 & 0.0 \\ Recoil Energy Model & 0.00 & 0.00 & 0.0 \\ \hline Efficiency: & & & \\ Vertex $z_0$ Cut & 1.08 & 1.04 & 1.0 \\ Track Reconstruction & 1.42 & 1.98 & 1.0 \\ Trigger & 0.17 & 2.05 & 0.0 \\ Lepton Reconstruction & 1.43 & 1.24 & 0.0 \\ Lepton Identification & 3.39 & 3.48 & 0.0 \\ Lepton Isolation & 1.21 & 1.77 & 0.0 \\ Cosmic Ray Algorithm & 0.00 & 0.15 & 0.0 \\ \hline Backgrounds: & & & \\ Hadronic & 1.10 & 0.08 & 1.0 \\ $Z \rightarrow \tau \tau$ & 0.02 & 0.04 & 1.0 \\ $\wlnu$ & 0.17 & 0.00 & 1.0 \\ Cosmic Ray & 0.00 & 1.76 & 0.0 \\ \hline \hline \end{tabular} \label{tab:Zerrcat} \end{center} \end{table} \subsubsection{Combination of the $R$ Measurements} \label{subsec:res_combr} The same BLUE method is also used to combine our measurements of $R_{e}$ and $R_{\mu}$. For our cross section ratio measurements, we identify fifteen categories of uncertainties, some of which are correlated between our measurements in the electron and muon channels. Table~\ref{tab:Rerrcat} lists these categories and summarizes the raw contribution of each to the $R_{e}$ and $R_{\mu}$ measurements. Since most of the uncertainties related to efficiency factors are uncorrelated in the electron and muon channels, the corresponding terms are combined into a single net uncertainty for uncorrelated efficiencies.
The exception is the uncertainty on COT track reconstruction efficiency which is 100~$\!\%$ correlated between the two channels. The combined result is \begin{equation} R = 10.84 \pm 0.15({\it stat.}) \pm 0.14({\it syst.}) \end{equation} which is precise to 1.9~$\!\%$. \begin{table} \caption{Uncertainty categories for the $R$ measurements. The uncertainties in the electron and muon channels for each category are treated as either 100~$\!\%$ correlated (1.0) or uncorrelated (0.0).} \begin{center} \begin{tabular}{l c c c} \hline \hline Category & Electron & Muon & Correlation \\ \hline Statistical Uncertainty & 0.1748 & 0.2659 & 0.0 \\ \hline Acceptance Ratio: & & & \\ Simulation Statistics & 0.0293 & 0.0472 & 0.0 \\ Boson $\ensuremath{p_{T}}$ Model & 0.0020 & 0.0044 & 1.0 \\ PDF Model & 0.0701 & 0.0836 & 1.0 \\ $\ensuremath{p_{T}}$ Scale and Resolution & 0.0012 & 0.0167 & 1.0 \\ $\ensuremath{E_{T}}$ Scale and Resolution & 0.0184 & 0.0000 & 0.0 \\ Material Model & 0.0322 & 0.0000 & 0.0 \\ Recoil Energy Model & 0.0267 & 0.0377 & 1.0 \\ \hline Efficiency Ratio: & & & \\ Uncorrelated & 0.1204 & 0.0999 & 0.0 \\ Track Reconstruction & 0.0169 & 0.0437 & 1.0 \\ \hline Backgrounds: & & & \\ Hadronic & 0.0437 & 0.0399 & 1.0 \\ Uncorrelated Electroweak & 0.0089 & 0.0094 & 0.0 \\ Correlated Electroweak & 0.0057 & 0.0369 & 1.0 \\ Cosmic Ray & 0.0000 & 0.0689 & 0.0 \\ \hline Correction Factor, $F$ & 0.0107 & 0.0109 & 1.0 \\ \hline \hline \end{tabular} \label{tab:Rerrcat} \end{center} \end{table} \subsection{Extraction of Standard Model Parameters} As previously discussed, the precision value for $R$ obtained from the combination of our measurements in the electron and muon channels can be used to measure various Standard Model parameters and in the process test the predictions of the model. 
The ratio of cross sections can be expressed as \begin{equation} R = \frac{\sigma (p\overline{p} \rightarrow W)}{\sigma (p\overline{p} \rightarrow Z)} \frac{\Gamma(\wlnu)}{\Gamma(Z \rightarrow \ell \ell)} \frac{\Gamma(Z)}{\Gamma(W)} \label{eq:rth}. \end{equation} Using the precision LEP measurements for $\Gamma(Z \rightarrow \ell \ell)/ \Gamma(Z)$ at the $\ensuremath{Z}$ pole mass and the NNLO calculation of $\sigma (p\overline{p} \rightarrow W)/\sigma (p\overline{p} \rightarrow Z)$ by \cite{int:nnlo0,int:nnlo4,int:nnlo1,int:nnlo2,int:nnlo3}, we extract the Standard Model parameter $Br(\wlnu) = \Gamma(\wlnu)/\Gamma(W)$ from Eq.~\ref{eq:rth} using our measured value of $R$. Using the Standard Model prediction for $\Gamma(\wlnu)$, we also make an indirect measurement of $\Gamma(W)$ and based on this value place a constraint on the CKM matrix element $V_{cs}$. \subsubsection{Extraction of $Br(\wlnu)$} \label{subsec:res_br} \begin{figure}[t!] \begin{center} \includegraphics[width=3.5in]{figures/brlept_summary_cdf.eps} \caption{Comparison of our measured value of $Br(\wlnu)$ with previous hadron collider measurements~\cite{int:cdf_ratio1,int:cdf_ratio2,int:d0_B}, the current world average of experimental results~\cite{int:pdg}, and the Standard Model expectation~\cite{int:pdg}.} \label{fig:brleptsummary} \end{center} \end{figure} \begin{figure}[t!] 
\begin{center} \includegraphics[width=3.5in]{figures/gamma_summary_cdf.eps} \caption{Comparison of our measured value of $\Gamma(W)$ with previous hadron collider measurements~\cite{int:ua1,int:ua2,int:cdf_ratio1,int:cdf_ratio2, int:d0_B}, the current world average of experimental results~\cite{int:pdg}, and the Standard Model expectation~\cite{int:pdg}.} \label{fig:gammasummary} \end{center} \end{figure} The required parameters to extract $Br(\wlnu)$ from our measured $R$ value using Eq.~\ref{eq:rth} are the predicted ratio of $\ensuremath{W}$ and $\ensuremath{Z}$ production cross sections and the measured value of $Br(Z \rightarrow \ell \ell) = \Gamma(Z \rightarrow \ell \ell)/\Gamma(Z)$. The value of $\sigma_{W}/ \sigma_{Z}$ obtained from the NNLO calculations provided by \cite{int:nnlo0,int:nnlo4,int:nnlo1,int:nnlo2,int:nnlo3} is 3.3696 with associated relative uncertainties of 0.0056 coming from the PDF model and 0.0043 coming from electroweak and CKM matrix parameters used in the calculations (see Sec.~\ref{subsec:res_eratio}). The experimental value of $Br(Z \rightarrow \ell \ell) =$ 0.033658~$\pm$~0.000023 as measured at LEP is taken from~\cite{int:pdg}. When extracting $Br(\wlnu)$ from $R$, it is important to consider correlated uncertainties in the ratio of predicted cross sections and the ratio of acceptances, $A_{Z}/A_{W}$, used in the measurement of $R$. 
In a sense, we measure $Br(\wlnu)$ by equating $R_{\mathrm{phys}}$ to $R_{\mathrm{meas}}$, where \begin{equation} R_{\mathrm{phys}} \equiv \frac{\sigma_{W}}{\sigma_{Z}} \, \frac{Br(\wlnu)}{Br(Z \rightarrow \ell \ell)} \end{equation} and \begin{equation} R_{\mathrm{meas}} \equiv \frac{N_{W}^{\mathrm{obs}}-N_{W}^{\mathrm{bck}}}{A_W\epsilon_W} \, \frac{A_Z\epsilon_Z}{N_{Z}^{\mathrm{obs}}-N_{Z}^{\mathrm{bck}}}~{\rm .} \end{equation} Then, \begin{eqnarray} Br(\wlnu) = && \frac{N_{W}^{\mathrm{obs}}-N_{W}^{\mathrm{bck}}}{N_{Z}^{\mathrm{obs}}-N_{Z}^{\mathrm{bck}}} \, \frac{\epsilon_Z}{\epsilon_W} \, \nonumber\\ && \times \left( \frac{A_Z\sigma_Z}{A_W\sigma_W} \right) \, Br(Z \rightarrow \ell \ell)~{\rm .} \nonumber\\ \label{eq:wlvbr} \end{eqnarray} The ratio of the acceptance times the cross section for $\ensuremath{Z}$ and $\ensuremath{W}$ bosons on the right-hand side of Eq.~\ref{eq:wlvbr} is affected by uncertainties in the PDF model. To account properly for correlations between the PDF uncertainties associated with each of these four quantities, we independently calculate a PDF model uncertainty for the quantity contained within the parentheses using the method described in Sec.~\ref{sec:acc}. The measured PDF model uncertainties on this quantity are found to be slightly larger than for those on $A_Z/A_W$ alone (0.9~$\!\%$ versus 0.6~$\!\%$ in the electron channel and 1.0~$\!\%$ versus 0.8~$\!\%$ in the muon channel). These correlated uncertainties are separately accounted for in our extraction of $Br(\wlnu)$ from the measured value of $R$. We obtain \begin{equation} Br(\wlnu) = 0.1082~\pm~0.0022 \end{equation} where the uncertainty contributions are from $R$ ($\pm$~0.00212), the predicted ratio of cross sections ($\pm$~0.00047), and the $Z \rightarrow \ell \ell$ branching ratio ($\pm$~0.00007). 
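As a numerical cross-check of Eq.~\ref{eq:rth}, the central value of $Br(\wlnu)$ follows directly from the combined $R$, the LEP branching ratio, and the NNLO cross-section ratio. The simple error propagation below treats all inputs as uncorrelated and therefore ignores the PDF correlation treatment described above, so it only approximately reproduces the quoted uncertainty breakdown.

```python
import math

R = 10.84                                # combined cross-section ratio
dR = math.hypot(0.15, 0.14)              # stat. and syst. uncertainties on R
xsec_ratio = 3.3696                      # NNLO sigma_W / sigma_Z
d_xsec_rel = math.hypot(0.0056, 0.0043)  # PDF and EW/CKM relative uncert.
br_z = 0.033658                          # LEP Br(Z -> ll)
d_br_z = 0.000023

# Eq. (rth) solved for the leptonic W branching ratio.
br_w = R * br_z / xsec_ratio

# Naive uncorrelated error propagation (ignores the A*sigma correlation).
d_br_w = br_w * math.sqrt((dR / R)**2 + d_xsec_rel**2 + (d_br_z / br_z)**2)
```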
The Standard Model value for this parameter is 0.1082~$\pm$~0.0002, and the world average of experimental results is 0.1068~$\pm$~0.0012~\cite{int:pdg}, both of which are in good agreement with our measured value. A summary of $Br(\wlnu)$ measurements is shown in Fig.~\ref{fig:brleptsummary}. \subsubsection{Extraction of $\Gamma(W)$} \label{subsec:res_emugamma} An indirect measurement of $\Gamma(W)$ can be made from our measured value of $Br(\wlnu)$ using the Standard Model value for the leptonic partial width, $\Gamma(\wlnu)$. We use the fitted value for $\Gamma(\wlnu)$ of 226.4 $\pm$ 0.4~$\ensuremath{\mathrm{Me\kern -0.1em V}}$~\cite{int:pdg}. Based on this value, we obtain \begin{equation} \Gamma(W) = 2092 \pm 42 ~\ensuremath{\mathrm{Me\kern -0.1em V}} \end{equation} which can be compared to the Standard Model prediction of 2092~$\pm$~3~$\ensuremath{\mathrm{Me\kern -0.1em V}}$~\cite{int:pdg} and the world average of experimental results, 2118~$\pm$~42~$\ensuremath{\mathrm{Me\kern -0.1em V}}$~\cite{int:pdg}. A summary of $\Gamma(W)$ experimental measurements is shown in Fig.~\ref{fig:gammasummary}. Our indirect measurement is in good agreement with the fit~\cite{int:pdg} and the theoretical prediction as well as other measurements in the literature. An alternative approach for obtaining $\Gamma(W)$ is to first use the predicted values for both $\Gamma(\wlnu)$ and $\Gamma(Z \rightarrow \ell \ell)$ to extract a ratio of the total widths, $\Gamma(W)/\Gamma(Z)$, from the measured value of $R$. The precisely measured value of $\Gamma(Z)$ from the LEP experiments (2495.2 $\pm$ 2.3~$\ensuremath{\mathrm{Me\kern -0.1em V}}$~\cite{int:pdg}) is then used to extract a value for $\Gamma(W)$. Using this approach we obtain \begin{equation} \frac{\Gamma(W)}{\Gamma(Z)} = 0.838 \pm 0.017 \end{equation} for the ratio of total widths, which can be compared to the Standard Model prediction of 0.8382 $\pm$ 0.0011~\cite{int:pdg}.
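Both routes to $\Gamma(W)$ described above reduce to one-line calculations, sketched here as a consistency check of the quoted central values.

```python
# Route 1: Gamma(W) = Gamma(W -> l nu) / Br(W -> l nu).
gamma_lnu = 226.4          # SM leptonic partial width in MeV
br_w = 0.1082              # measured branching ratio
gamma_w_1 = gamma_lnu / br_w          # ~2092 MeV

# Route 2: scale the measured width ratio by the LEP Z width.
width_ratio = 0.838        # measured Gamma(W)/Gamma(Z)
gamma_z = 2495.2           # LEP Gamma(Z) in MeV
gamma_w_2 = width_ratio * gamma_z     # ~2091 MeV
```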
Based on the measured value of $\Gamma(Z)$ we obtain \begin{equation} \Gamma(W) = 2091 \pm 42 ~\ensuremath{\mathrm{Me\kern -0.1em V}}~{\rm ,} \end{equation} where the uncertainty on the measured value for $\Gamma(Z)$ makes a negligible contribution to the total uncertainty. Since the measurement of $\Gamma(Z)$ is independent of the measurement of the branching ratio $Br(Z \rightarrow \ell \ell)$, both extracted values of $\Gamma(W)$ are independent to some degree. \subsubsection{Extraction of $V_{cs}$} \label{subsec:res_vcs} In the Standard Model the total $\ensuremath{W}$ width is a sum over partial widths for leptons and quarks where the latter subset involves a sum over certain CKM matrix elements \cite{int:pdg}: \begin{eqnarray} \label{vcseq} \Gamma_W \simeq && 3\Gamma_W^0 + 3\left( 1 + \frac{\alpha_{\mathrm{s}}}{\pi} + 1.409(\frac{\alpha_{\mathrm{s}}}{\pi})^2 - 12.77(\frac{\alpha_{\mathrm{s}}}{\pi})^3 \right) \,\nonumber\\ && \times \sum_{\mathrm{[no~top]}} | V_{qq^\prime} |^2 \, \Gamma_W^0~{\rm .} \end{eqnarray} Only the first two rows of the CKM matrix contribute as decays to the top quark are kinematically forbidden. Thus the relevant CKM matrix elements are $V_{\mathrm{ud}}$, $V_{\mathrm{us}}$, $V_{\mathrm{cd}}$, $V_{\mathrm{cs}}$, $V_{\mathrm{ub}}$, and $V_{\mathrm{cb}}$. Of these, $V_{cs}$ contributes the largest uncertainty. We use the indirect measurement of $\Gamma(W)$ from our measured value of $Br(\wlnu)$ as a constraint on $V_{cs}$ based on world average measurements of all the other CKM matrix elements and find \begin{equation} |V_{cs}| = 0.976 \pm 0.030~{\rm ,} \end{equation} using $\alpha_{\mathrm{s}} = 0.120$ and $\Gamma_W^0 = 226.4$~$\ensuremath{\mathrm{Me\kern -0.1em V}}$ \cite{int:pdg}. Our measured value is more precise than the direct measurement at LEP, $|V_{cs}| =$~0.97~$\pm$~0.11~\cite{res:vcs1,res:vcs2}, but not as precise as the combined value from LEP and Run~I at the Tevatron, $|V_{cs}| =$~0.996~$\pm$~0.013~\cite{res:pete}. 
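Solving Eq.~\ref{vcseq} for $|V_{cs}|$ can be sketched as follows. The non-$V_{cs}$ CKM magnitudes below are approximate world-average values inserted only for illustration; they are not the exact inputs used in the fit.

```python
import math

gamma_w = 2092.0    # indirect Gamma(W) in MeV
gamma_0 = 226.4     # Gamma_W^0 in MeV
alpha_s = 0.120

# QCD correction factor for the hadronic partial widths, Eq. (vcseq).
a = alpha_s / math.pi
qcd = 1.0 + a + 1.409 * a**2 - 12.77 * a**3

# Approximate world-average CKM magnitudes (illustrative inputs only).
ckm_others = {"Vud": 0.9738, "Vus": 0.2200, "Vcd": 0.224,
              "Vub": 0.0037, "Vcb": 0.0413}
sum_others = sum(v**2 for v in ckm_others.values())

# Gamma_W ~ 3*Gamma_0 + 3*qcd*sum(|Vqq'|^2)*Gamma_0, solved for sum(|V|^2),
# then the remainder after subtracting the other elements gives |Vcs|^2.
sum_all = (gamma_w / gamma_0 - 3.0) / (3.0 * qcd)
v_cs = math.sqrt(sum_all - sum_others)   # ~0.976
```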
\subsection{Summary} \label{subsec:res_sum} \begin{table*}[t] \caption{Standard Model parameters extracted from the measured ratio of $\ensuremath{W}$ and $\ensuremath{Z}$ production cross sections, $R$.} \begin{tabular}{l c c c} \hline \hline Quantity & Our Measurement & World Average & SM Value \\ \hline $Br(\wlnu)$ & 0.1082 $\pm$ 0.0022 & 0.1068 $\pm$ 0.0012 & 0.1082 $\pm$ 0.0002 \\ $\Gamma(W)$ in $\ensuremath{\mathrm{Me\kern -0.1em V}}$ & 2092 $\pm$ 42 & 2118 $\pm$ 42 & 2092 $\pm$ 3 \\ $\Gamma(W)/\Gamma(Z)$ & 0.838 $\pm$ 0.017 & 0.849 $\pm$ 0.017 & 0.838 $\pm$ 0.001 \\ $V_{cs}$ & 0.976 $\pm$ 0.030 & 0.996 $\pm$ 0.013 & N/A \\ $g_{\mu}/g_{e}$ & 0.991 $\pm$ 0.012 & 0.993 $\pm$ 0.013 & 1 \\ \hline \hline \end{tabular} \label{tab:extracted} \end{table*} We have performed measurements for the $\ensuremath{W}$ and $\ensuremath{Z}$ boson production cross sections in the electron and muon decay channels based on 72~$\ensuremath{\mathrm{pb}^{-1}}$ of $p\overline{p}$ collision data at $\sqrt{s} =$~1.96~$\ensuremath{\mathrm{Te\kern -0.1em V}}$. We calculate the ratio of the $\ensuremath{W}$ and $\ensuremath{Z}$ cross sections, $R$, in each lepton channel and combine them to obtain a value which is precise to 1.9~$\!\%$. The precision will improve when more data are analyzed. From this ratio we extract the leptonic $\ensuremath{W}$ branching ratio, the $\ensuremath{W}$ width, the ratio of the $\ensuremath{W}$ and $\ensuremath{Z}$ widths, and constrain the CKM matrix element $V_{cs}$. A summary of extracted quantities is given in Table~\ref{tab:extracted}. \begin{acknowledgments} We thank the Fermilab staff and the technical staffs of the participating institutions for their vital contributions. This work was supported by the U.S. 
Department of Energy and National Science Foundation; the Italian Istituto Nazionale di Fisica Nucleare; the Ministry of Education, Culture, Sports, Science and Technology of Japan; the Natural Sciences and Engineering Research Council of Canada; the National Science Council of the Republic of China; the Swiss National Science Foundation; the A.P. Sloan Foundation; the Bundesministerium fuer Bildung und Forschung, Germany; the Korean Science and Engineering Foundation and the Korean Research Foundation; the Particle Physics and Astronomy Research Council and the Royal Society, UK; the Russian Foundation for Basic Research; the Comision Interministerial de Ciencia y Tecnologia, Spain; in part by the European Community's Human Potential Programme under contract HPRN-CT-2002-00292; and the Academy of Finland. \end{acknowledgments} \clearpage
\section{Introduction} \label{sec:introduction} The molecular simulation of polymers and oligomers is in a mature state, which allows chemistry-specific predictions of many physical properties to be made. This includes, in particular, the prediction, or reproduction, of density~\cite{Vagilis11mac,Du12JCTC}, viscosity~\cite{Kremer98BI,Bair2002PRL,Habchi2010TI,Jadhao2019TL}, and mechanical properties~\cite{Goddard91JPC,Rutlege94JPC,Root16Mac} as functions of temperature, pressure, and shear rate but also the computation of complex phase diagrams~\cite{Kroeger04PR,Mueller20PPS,Mukherji20AR}. Molecular simulation has even reached levels making it possible to design lubricants with small viscosity index~\cite{Kajita2020CP}. However, we did not manage to find any successful predictions for the specific heat $c_p$ of systems containing chain molecules, although, in principle, the specific heat could (falsely) be deemed a profane property to compute. It only requires the temperature derivative of the enthalpy to be taken and/or the energy or enthalpy fluctuation to be determined. There are certainly two main reasons impeding the calculation of the specific heat from molecular simulations. First, united-atom descriptions ignore the presence of hydrogen atoms so that their small but non-zero contribution to $c_p$ is missing. Second, and more importantly, both united-atom and all-atom descriptions generally assume nuclei to be classical objects, while in reality, their motion is quantum mechanical. This difference makes classical simulations overestimate the specific heat at low temperatures. It explains why Bhowmik \textit{et al.}~\cite{Bhowmik2019B} found that the specific heat predicted from classical all-atom molecular dynamics (MD) simulations of hydrocarbon chains was almost a factor of three too high, while results for polytetrafluoroethylene (PTFE) exceeded experimental values by only 20\%. These findings can be rationalized in a back-of-the-envelope calculation.
The vibrational frequency of a CF bond is near 20~THz while that of a CH bond lies near 90~THz. At room temperature, each such mode contributes to the specific heat with approximately 0.45~$k_\textrm{B}$ (CF) and $1\cdot 10^{-4}$~$k_\textrm{B}$ (CH), respectively, while a classical harmonic mode would contribute $k_\textrm{B}$ according to the Dulong-Petit law. Many other modes also become more classical in PTFE compared to hydrocarbon chains, because fluorine atoms are heavier than hydrogen atoms, while bond stiffnesses do not depend substantially on the termination. Approximating all modes in PTFE other than the CF-stretching bond as perfectly classical would suggest that the specific heat obtained from a classical PTFE simulation at room temperature should be reduced by twice 0.55~$k_\textrm{B}$ per CF$_2$ repeat unit, so that the quantum effect of the CF vibration can be estimated to reduce the specific heat of PTFE by roughly 15\%. A similarly accurate estimate for hydrocarbons is difficult to make, because a rather large fraction of characteristic frequencies require corrections spanning the entire domain from very small to unity. However, for a crude approximation, one could argue hydrogen atoms to be completely quantum and carbon atoms to be close to classical. One possibility to account accurately for the quantum nature of nuclear degrees of freedom is to treat them in a path-integral framework, as done more than 20 years ago by Marto{\v{n}}{\'{a}}k \textit{et al.}~\cite{Martonak1998PRE}. However, this approach is computationally demanding. Reaching the proper quantum limit needed for a reasonably accurate, direct estimate for condensed matter systems necessitates the simulation of $P$ replicas of the system, where the so-called Trotter number $P$ needs to slightly exceed the ratio $h\nu/k_\textrm{B}T$~\cite{Muser1995PRB,Herrero2014JPCM}.
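The mode contributions quoted in this back-of-the-envelope estimate follow from the specific heat of a quantum harmonic oscillator, $c_p^\textrm{qm}(\nu,T)/k_\textrm{B} = x^2/\sinh^2 x$ with $x = h\nu/2k_\textrm{B}T$, derived in Sect.~\ref{sec:theory}. A few lines of code reproduce them.

```python
import math

H = 6.62607015e-34    # Planck constant (J s)
KB = 1.380649e-23     # Boltzmann constant (J/K)

def c_qm(nu_hz, temp_k):
    """Specific heat of one quantum harmonic mode, in units of k_B."""
    x = H * nu_hz / (2.0 * KB * temp_k)
    return (x / math.sinh(x))**2

# Room-temperature contributions of the bond-stretching modes.
c_cf = c_qm(20e12, 300.0)   # CF stretch, ~0.45 k_B
c_ch = c_qm(90e12, 300.0)   # CH stretch, ~1e-4 k_B
```

In the low-frequency limit the function approaches the Dulong-Petit value of one $k_\textrm{B}$ per mode, as expected.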
Here $h$ is Planck's constant, $\nu$ is the maximum characteristic frequency in the system (e.g., the CH bond-stretching vibration), while $k_\textrm{B}T$ is the thermal energy. A related approach to simulate the effect of quantum mechanics is the use of potentials that implicitly include quantum effects through the Wigner-Kirkwood expansion~\cite{Wigner1932PR,Kirkwood1933PR} of the free energy in powers of Planck's constant. Using the leading-order terms, the temperature range, in which experimental data on the specific heat of magnesium oxide was successfully reproduced, extended to temperatures a little below the Debye temperature, but not further below~\cite{Matsui1989JCP}. Moreover, both the extra programming and computing time associated with the Wigner-Kirkwood expansion exceed those of path integrals substantially, so that an alternative, feasible, and easy-to-implement way to correct the specific heat of polymeric systems for quantum effects remains sought after. In this paper, we extend a method introduced by Horbach \textit{et al.}~\cite{Horbach1999JPCB} to calculate the low-temperature specific heat of a quantum mechanical system, namely silica well below its glass transition temperature. To this end, they first computed the mass-weighted, velocity autocorrelation function $C(\Delta t)$ using classical MD. For a fictitious harmonic reference yielding the same $C(\Delta t)$, the Fourier transform of this function, $g(\nu)$, allows the vibrational density of states to be directly deduced and from it the specific heat. Rather than reporting that number directly, as done by Horbach \textit{et al.}~\cite{Horbach1999JPCB}, we use it to estimate the specific-heat \emph{difference} between a classical system and a corresponding quantum mechanical system. This way, we correct predominantly the stiff, high-frequency modes, which should obey the harmonic approximation reasonably well, while leaving the specific-heat contributions of the slow modes unaffected.
The latter are certainly anharmonic in the liquid phase, whereby they contribute in a non-trivial fashion to the heat balance. Specific heats obtained in simulations not containing all degrees of freedom (DOFs) explicitly, such as in coarse-grained models, cannot be corrected as straightforwardly as those measured in classical all-atom simulations representing all DOFs explicitly. The optimum way to proceed depends not only on the type of coarse graining but also on whether or not an (unconstrained) all-atom simulation can be conducted at one or two representative temperatures. Thus, several avenues to estimate specific-heat corrections due to missing hydrogen atoms will also be discussed in this work. The remainder of this article is organized as follows: the simulation methods are presented in Sect.~\ref{sec:method}. Sect.~\ref{sec:theory} describes our approach to correcting specific heats. Sect.~\ref{sec:results} contains the results. Conclusions are drawn in Sect.~\ref{sec:conclusions}. \section{Simulation Methods} \label{sec:method} The simulations in this work were conducted by three different people, each one with his own preferences for software, potentials, and other details pertaining to methods, such as thermostats. Since all of the choices are made routinely in different contexts, the diversity of approaches allows the robustness of the observed trends to be tested. \begin{figure}[ptb] \includegraphics[width=0.49\textwidth,angle=0]{schematic.eps} \caption{Schematics showing different monomeric structures investigated in this study. Parts (a-c) show hydrocarbon structures for $n$-octane ($n=8$, including end groups) and $n$-hexadecane ($n=16$) in part (a), decene-dimer ($n=2$), -trimer ($n=3$) and -tetramer ($n=4$) in part (b), and isohexadecane in part (c). 
Parts (d-h) show commodity polymer structures for poly(methyl methacrylate) (PMMA), poly(N-acryloyl piperidine) (PAP), poly(acrylic acid) (PAA), poly(acrylamide) (PAM), and poly(N-isopropyl acrylamide) (PNIPAM), respectively. Note that for parts (a-b) and (d-h) chain ends outside the bracket are terminated with hydrogen atoms. \label{fig:schem}} \end{figure} For this study we chose two different sets of chain molecules: (1) linear and branched hydrocarbon oligomers and (2) commodity polymers containing elements in addition to carbon and hydrogen in the repeat units, see Fig.~\ref{fig:schem} for more details of the molecular structures. All simulations were conducted in the $NpT$-ensemble at atmospheric pressure. Temperature $T$ was raised from $T \approx 300$~K to $T \approx 560$~K for all hydrocarbons, except for $n$-octane for which $T$ varied from 200~K to 380~K. In the case of commodity polymers, $T$ lay between $440$~K and $600$~K. The specific heat was computed in two ways: first, by taking finite differences of the enthalpy $H(T)$ according to \begin{equation} \label{eq:finiteDiffcP} c^\textrm{cla}_p(T) \approx \frac{H(T+\Delta T) - H(T-\Delta T)}{2\,\Delta T}, \end{equation} and second by fitting a third-order polynomial to $H(T)$. Since the temperature dependence of $c_{p}$ is rather weak in the considered temperature range, the second method may be slightly preferable. For the initial set of hydrocarbon simulations, we have chosen six different linear and branched oligomers, see Figs.~\ref{fig:schem}(a-c). The all-atom simulations are performed using the LAMMPS molecular dynamics package~\cite{Plimpton1995JCP}. The improved L-OPLS-AA force field parameters are used to simulate all the hydrocarbons~\cite{Price2001JCC,Siu2012JCTC}, except for $n$-octane where we have used the standard OPLS-AA \cite{OPLS}. The potentials were chosen because they reproduced experimental data on density, viscosity, and diffusion coefficient quite accurately~\cite{Siu2012JCTC}.
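The central difference of Eq.~\ref{eq:finiteDiffcP} can be sketched directly. The synthetic quadratic enthalpy below is a stand-in for simulation data (its coefficients are arbitrary illustrative numbers), for which the central difference recovers $\mathrm{d}H/\mathrm{d}T$ exactly.

```python
def cp_central(h_of_t, temp, dt):
    """Classical specific heat via the central difference of Eq. (finiteDiffcP)."""
    return (h_of_t(temp + dt) - h_of_t(temp - dt)) / (2.0 * dt)

# Synthetic enthalpy H(T) = a + b*T + c*T^2 (arbitrary illustrative numbers),
# whose exact heat capacity is dH/dT = b + 2*c*T.
a, b, c = 10.0, 2.0, 1e-3
H = lambda T: a + b * T + c * T**2

cp_300 = cp_central(H, 300.0, 20.0)   # exact answer: b + 2*c*300 = 2.6
```

For noisy simulation data, fitting a low-order polynomial to $H(T)$ and differentiating the fit, as done in the text, is the more robust variant of the same idea.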
The number of chains in a cubic simulation box was adjusted such that each system consists of approximately $10^4$ atoms. The temperature and pressure are imposed using the Nos\'e-Hoover thermostat and barostat, respectively. For the temperature coupling, the time constant is chosen as $\tau_{T} = 0.1$~ps and for pressure as $\tau_{p} = 1$~ps. The long-range electrostatic interactions are treated using the particle-particle particle-mesh (PPPM) solver \cite{Hockney1988}. The interaction cutoff is chosen as $r_c = 1$~nm. The simulations for $n$-octane and $n$-hexadecane were performed for 6~ns, while for the other hydrocarbon oligomers we conducted 10~ns simulations. These simulation times ensure good equilibration of the samples, and the average of $H(T)$ is calculated from the last 2~ns of data. The typical time step for the all-atom simulation is chosen as $\Delta t = 1$~fs. For $n$-hexadecane, we have also performed simulations using the united-atom TraPPE-UA force field~\cite{Martin1998JPCB}. In this case, the employed time step was set to $\Delta t = 2$~fs. For the second set of systems, we investigated five different commodity polymers, namely poly(methyl methacrylate) (PMMA), poly(N-acryloyl piperidine) (PAP), poly(acrylic acid) (PAA), poly(acrylamide) (PAM), and poly(N-isopropyl acrylamide) (PNIPAM), see Figs.~\ref{fig:schem}(d-h). The choice of these polymers is motivated by their possible use for the design of advanced polymeric materials~\cite{Cahill16Mac,Mukherji19PRM}. The chain length $N = 30$ is taken for PMMA, PAP, PAA and PAM, while $N = 40$ for PNIPAM. Different numbers of repeat units were used, because all-atom chain configurations were available from earlier studies by one of us \cite{Mukherji19PRM,Mukherji17NC,Mukherji17JCP}. Each configuration consists of 100 polymer chains randomly distributed within a cubic simulation box.
All these polymers were equilibrated earlier in their (solvent free) melt states at $T = 600$~K, which is at least 150~K above their calculated glass transition temperatures~\cite{Mukherji19PRM}. All commodity polymers are modelled only in the full atomistic description. The standard OPLS-AA force field parameters \cite{OPLS} are used for PAP, PAA, and PNIPAM, while the modified parameters are used for PMMA \cite{Mukherji17NC} and PAM \cite{Mukherji17JCP}. The potentials used reproduce not only bulk polymer properties, such as the density and elastic response \cite{Mukherji19PRM}, but also capture their solvation in dilute aqueous solutions~\cite{Mukherji17NC,Mukherji17JCP}. The simulations of commodity polymers are performed using the GROMACS molecular dynamics package \cite{gro}. 500~ns long $NpT$ simulations were conducted for each system at each temperature. The total accumulated MD time for the commodity polymers is 25 $\mu$s. Here, the temperature is imposed using the ``canonical sampling through velocity rescaling'' thermostat~\cite{Vscale} with $\tau_{T}=1$~ps and the pressure is set to 1~atm with a Berendsen barostat using $\tau_p = 0.5$~ps \cite{Berend}. Electrostatics are treated using the particle-mesh Ewald method \cite{PME}. The interaction cutoff for non-bonded interactions is chosen as 1.0 nm. The simulation time step is taken as $\Delta t = 1$~fs and the equations of motion are integrated using the leap-frog algorithm. For the calculation of $H(T)$, we have used the last 50~ns of data after $H(T)$ reached a reasonable plateau. All polymeric systems described above are simulated in their liquid phase, where the equilibration of the individual samples is still possible. Moreover, for the case of $n$-octane we have also performed simulations with a crystalline phase at $T = 40$~K and a quenched phase, where an $n$-octane liquid at $T = 300$~K was shock-quenched to $T = 40$~K.
\section{Theory} \label{sec:theory} The central property to be computed in this work is the mass-weighted velocity autocorrelation function (ACF), \begin{equation} \label{eq:globalACF} C(\Delta t) = \sum_n m_n \, \left\langle \mathbf{v}_n(t) \cdot \mathbf{v}_n(t+\Delta t) \right\rangle, \end{equation} where $m_n$ is the mass of atom $n$ and $\mathbf{v}_n(t)$ its velocity at time $t$, while the angular brackets $\langle ... \rangle$ denote a thermal equilibrium average. A typical example for $C(\Delta t)$ is presented in Fig.~\ref{fig:hexadecaneACF}. It shows long-lived fluctuations, unlike the velocity ACF of simple liquids, in which all interactions are of similar strength. \begin{figure}[hbtp] \centering \includegraphics[width=.4\textwidth]{Ct_hexadecane_aa.eps} \caption{Normalized mass-weighted velocity autocorrelation function $C(\Delta t)/C(0)$ of hexadecane at temperature $T = 300$~K (blue) and at $T = 560$~K (red). } \label{fig:hexadecaneACF} \end{figure} Depending on whether $C(\Delta t)$ is measured using the coarse-grained descriptions of the polymer, as for the united-atom potentials, or in an all-atom simulation, different strategies can be pursued to estimate how the specific heat needs to be corrected to account for nuclear quantum effects. These are described in the following. \subsection{All-atom descriptions} In an equilibrated harmonic system, as much energy is contained in the potential energy as in the kinetic energy. If the frequency of a harmonic mode is known, e.g., from the measurement of its classical velocity ACF, the specific heat of this mode after quantization is given by \begin{equation} c_p^\textrm{qm}(\nu,T) = k_\textrm{B} \frac{(h\nu/2k_\textrm{B}T)^2}{\sinh^2(h\nu/2k_\textrm{B}T)}, \end{equation} as can be easily derived from the partition function of the quantum mechanical harmonic oscillator, see Ref.~\cite{Horbach1999JPCB}, or most textbooks on statistical mechanics.
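A minimal estimator for the mass-weighted ACF of Eq.~\ref{eq:globalACF} is sketched below for analytic harmonic trajectories, for which the normalized ACF reduces to a mass-weighted sum of cosines; the two "atoms", masses, and periods are artificial inputs chosen only to exercise the estimator.

```python
import math

def mass_weighted_acf(masses, velocities, lag):
    """C(lag) per Eq. (globalACF): sum_n m_n <v_n(t) v_n(t+lag)>."""
    total = 0.0
    for m, v in zip(masses, velocities):
        n = len(v) - lag
        total += m * sum(v[k] * v[k + lag] for k in range(n)) / n
    return total

# Two 1D harmonic "atoms" with periods of 100 and 50 samples.
steps = 1000
v1 = [math.cos(2 * math.pi * k / 100) for k in range(steps)]
v2 = [math.cos(2 * math.pi * k / 50) for k in range(steps)]
masses = [12.0, 1.0]

c0 = mass_weighted_acf(masses, [v1, v2], 0)
c100 = mass_weighted_acf(masses, [v1, v2], 100)  # both modes back in phase
```

In production runs the same quantity is of course accumulated with FFT-based correlators rather than the double loop shown here.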
Since the specific heat of a classical harmonic mode satisfies the Dulong-Petit law, $c_p^\textrm{cla}(T) = k_\textrm{B}$, the difference between the specific heat of a classical and a quantum system simply is $\Delta c_p = k_\textrm{B} - c_p^\textrm{qm}$ for each degree of freedom (DOF). In a harmonic system, the global ACF defined in Eq.~(\ref{eq:globalACF}) results from the superposition of individual normal modes so that its Fourier transform allows us to determine what percentage of modes has what resonance frequency. Towards this end, we define the spectrum \begin{equation} \label{eq:DOSfromACF} g(\nu) = \frac{1}{G} \int_0^\infty \!\! \mathrm{d}\Delta t\, \cos(2\pi\nu\Delta t) \, \frac{C(\Delta t)}{C(0)}, \end{equation} where we have divided $C(\Delta t)$ by $C(0)$, whose exact value is $D\,N\,k_\textrm{B}T$, where $D=3$ is the spatial dimension and $N$ the number of explicitly considered atoms. Finally, we chose the prefactor $G$ in Eq.~(\ref{eq:DOSfromACF}) such that the integral over $g(\nu)$ is unity. This way, $g(\nu)$ can be interpreted as the vibrational density of states (DOS) normalized to an individual degree of freedom and in a unit system, in which Planck's constant defines the unit of angular momentum. The typical DOS for all molecules in Fig.~\ref{fig:schem} are shown in the Supplementary Fig.~S1~\cite{SM}. The relative difference between the specific heat of a classical and a quantum system can now be obtained as \begin{equation} \label{eq:specificHeat} \Delta c_\textrm{rel}(T) = \int_0^\infty \!\! \mathrm{d}\nu\, g(\nu)\,\left\{ 1 - c_p^\textrm{qm}(\nu,T)/k_\textrm{B}\right\}.
\end{equation} Thus, the specific heat of a system of quantum mechanical harmonic oscillators would read \begin{equation} \label{eq:correctAllAtom} c_p(T) = c_p^\textrm{cla}(T)-c_p^\textrm{DP}\, \Delta c_\textrm{rel}(T), \end{equation} where $c_p^\textrm{cla}(T)$ is the specific heat of the classical system and $c_p^\textrm{DP}$ the specific heat of the system assuming the Dulong-Petit law to be valid, i.e., $c_p^\textrm{DP} = k_\textrm{B} n_\textrm{DOF}$, where $n_\textrm{DOF}$ is the number of DOFs. We propose to use Eq.~(\ref{eq:correctAllAtom}) for any system whose degrees of freedom can be partitioned into slow modes, which are typically soft and/or anharmonic, and high-frequency modes, which tend to be quasi harmonic. This procedure leaves (low-frequency) contributions to the specific heat that deviate from Dulong-Petit's law unchanged, but distinctly reduces the specific heat associated with the high-frequency modes involving hydrogen atoms. Ideally, $g(\nu)$ is determined in the vicinity of the temperature at which the specific heat is computed. However, we demonstrate in Sect.~\ref{sec:results} that the high-frequency spectra and thereby the specific-heat corrections are relatively insensitive to the temperature at which $g(\nu)$ is determined. Thus, it should be generally sufficient to compute $g(\nu)$ at a single, medium temperature, or, alternatively, to compute $g(\nu)$ at the lowest and highest temperature and to interpolate continuously between the spectra (or the two resulting specific-heat corrections) at intermediate temperatures. \subsection{United-atom descriptions} \label{sec:methodB} In united-atom descriptions and/or when using bond length constraints, the number of degrees of freedom (DOFs) is reduced compared to the real system.
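For the all-atom case just described, Eqs.~\ref{eq:specificHeat} and \ref{eq:correctAllAtom} can be sketched with a discretized spectrum. The two-mode $g(\nu)$ and the classical per-DOF $c_p$ below are artificial placeholders, not measured values.

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def c_qm_over_kb(nu, temp):
    """Quantum harmonic-mode specific heat in units of k_B."""
    x = H * nu / (2.0 * KB * temp)
    return (x / math.sinh(x))**2

def delta_c_rel(spectrum, temp):
    """Eq. (specificHeat) for a normalized, discretized g(nu):
    a list of (weight, frequency) pairs with weights summing to one."""
    return sum(w * (1.0 - c_qm_over_kb(nu, temp)) for w, nu in spectrum)

# Artificial two-mode spectrum: half the DOFs at 20 THz, half at 90 THz.
g = [(0.5, 20e12), (0.5, 90e12)]

corr_300 = delta_c_rel(g, 300.0)   # sizeable quantum correction
corr_hot = delta_c_rel(g, 1e6)     # classical limit: correction vanishes

# Eq. (correctAllAtom): corrected c_p from a classical value and the
# Dulong-Petit reference (illustrative per-DOF numbers in units of k_B).
cp_cla, cp_dp = 1.3, 1.0
cp_corrected = cp_cla - cp_dp * corr_300
```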
While only stiff modes not contributing significantly to the specific heat are usually eliminated in chemistry-specific, coarse-grained descriptions of polymers, a precise calculation of $c_p$ may necessitate the estimation of the contribution of the eliminated DOFs to the specific heat. Thus, the full (quantum) contributions of the $N_\textrm{ig}$ ignored degrees of freedom to $c_p(T)$ must be added to the estimate of the $N_\textrm{ex}$ explicitly treated degrees of freedom. If specific heats are normalized to individual degrees of freedom, this yields \begin{equation} \label{eq:fullUA-cp} c_p(T) = \frac{N_\textrm{ex}\,c_p^\textrm{ex}(T)\,\{1 -\Delta c_\textrm{rel}^\textrm{ex}(T)/k_{\rm B}\}+N_\textrm{ig}\,c_p^\textrm{ig}(T)}{N_\textrm{ex}+N_\textrm{ig}}, \end{equation} where the contribution of the ignored DOFs can be estimated with the help of the density of states associated with the motion of the ignored DOFs, $g_\textrm{ig}(\nu)$, i.e., with \begin{equation} c_p^\textrm{ig}(T) = \int_0^\infty \!\! \mathrm{d}\nu\, g_\textrm{ig}(\nu)\,c_p^\textrm{qm}(\nu,T). \end{equation} In the following, we propose three different ways to estimate the density of states of the ignored degrees of freedom. \subsubsection{Difference method} In the first method, which we call the difference method, the all-atom and the united-atom $g(\nu)$ are both computed and normalized to the same entity, e.g., to a single polymer or to an atom as in a count of all atoms, including those that were eliminated in the united-atom simulation. The missing contribution then reads $g_\textrm{ig}(\nu) = g_\textrm{aa}(\nu) - g_\textrm{ua}(\nu)$. Note that $g_\textrm{ig}(\nu)$ may have negative contributions, which, however, do not cause any trouble in practice. \subsubsection{Explicit method} In the second method, which we call the explicit method, an all-atom system is first equilibrated at a representative temperature.
All heavy atoms are then fixed in space and only hydrogen atoms are propagated in time and thermostatted, however, only so moderately that peaks in $g(\nu)$ do not broaden substantially. In this follow-up simulation, the hydrogen velocity ACF is measured and a first estimate for $g_\textrm{ig}(\nu)$ is obtained through a Fourier transform of that ACF. Since the mass of carbon atoms is finite, we suggest reinterpreting a frequency $\nu$ as $\alpha\nu$ with $\alpha=\sqrt{13/12}$ so that reduced-mass effects are accounted for approximately. At the same time, it needs to be ensured that the integral over $g_\textrm{ig}(\nu)$ yields the relative number of hydrogen atoms so that the full transformation can be cast as $g(\nu)\to g(\alpha\nu)/\alpha$. \subsubsection{Crude method} While only one or at most two all-atom simulations need to be run for the difference method and the explicit method to be executed, it might still be beneficial if setting up an all-atom system can be avoided altogether. We thus need a third way to compute specific heat corrections, which could be called the I-don't-want-to-run-an-all-atom-simulation-but-still-need-a-rough-guess-for-the-specific-heat-correction method (quantum chemists would probably introduce the catchy and easy-to-remember abbreviation IDW2RA3SBSNARG4TSHC). To this end, we suggest approximating $g_\textrm{ig}(\nu)$ with a set of delta-functions: \begin{equation} \label{eq:IDW2a} g_\textrm{ig}(\nu) = n_\textrm{H in CH$_x$}^\textrm{rel}\,\sum_{i=1}^{n_x} w_{x,i} \,\delta(\nu-\nu_{x,i}), \end{equation} where $n_\textrm{H in CH$_x$}^\textrm{rel}$ with $x = 2$ or $3$ is the relative number of hydrogen atoms being part of a CH$_2$ or CH$_3$ unit, respectively, while the $w_{x,i}$ are weights and the $\nu_{x,i}$ are frequencies. We describe in the Supplementary Information how the pairs $(w_{x,i}, \nu_{x,i})$ were obtained and merely note their results here. For CH$_2$, we used $(1/6,20)$, $(1/2, 37.5)$, and $(1/3, 90)$.
For CH$_3$, we used $(1/9, 8.5)$, $(1/9, 23)$, $(1/9, 30)$, $(2/9, 39)$, $(1/9, 50)$, $(1/9, 75)$, and $(2/9, 93)$. Frequencies are stated in THz. \subsubsection{Comparison of united-atom correction methods} The difference method is directly applicable to coarse-graining approaches going beyond the elimination of hydrogen atoms. The same holds for the explicit method, however, with the restriction that the corrective factor $\alpha$ would have to be modified when deuterium atoms are involved and/or hydrogen atoms terminate atoms other than carbon. The crude method is only meant to be used directly when hydrogen atoms bonded to carbons are eliminated. When all hydrogen terminations are replaced with deuterium atoms, it might suffice to divide all used frequencies by $\sqrt{2}$. However, simple rescaling of frequencies would not be advised for partial deuterium termination. Finally, we note that a highly accurate knowledge of the respective spectra is not needed, unless $c_p$ must be known with great accuracy. If a vibrational frequency has an error of, say, 10\%, which most contemporary force-fields should be in a position to reproduce, then the temperature range in which the absolute error of the quantum correction of that mode exceeds 0.1~$k_\textrm{B}$ is roughly $0.3 < k_\textrm{B}T/(h\nu) < 1.2$. Since the density of states spans a broad range of frequencies, the relative number of modes lying in such a range is typically at best around 30\%. \section{Results} \label{sec:results} \subsection{Explicit-atom simulations} The first step of estimating the specific-heat corrections in an explicit-atom simulation consists of measuring the full mass-weighted velocity ACF, $C(\Delta t)$, which is worth discussing in its own right. Fig.~\ref{fig:hexadecaneACF} shows $C(\Delta t)$ for $n$-hexadecane at the lowest and highest temperatures investigated, i.e., at $T = 300$~K and at $T = 560$~K, each time normalized such that $C(0) = 1$.
Both correlation functions have maxima and minima at similar locations. Peak heights and intensities are almost identical at very small times but start to differ at large times. As a consequence, the Fourier transform of $C(\Delta t)$, i.e., the spectrum or density of states (DOS), which is shown in Fig.~\ref{fig:HD_PMMA}(a), is essentially identical at high frequencies for $300$ and $560$~K. Significant differences appear only at frequencies below what could be called the thermal frequency, which we define as $\nu_\textrm{t}=k_\textrm{B}T/h$. The numerical value of the ``room-temperature thermal frequency'' is $\nu_\textrm{rt} = k_\textrm{B}~300~\mathrm{K}/h \approx 6.25$~THz. \begin{figure*}[ptb] \centering \includegraphics[width=.309\textwidth]{spectrum_hexadecane_aa.eps} \includegraphics[width=.3\textwidth]{delta_cp_hexadecane_aa.eps} \includegraphics[width=.3\textwidth]{cp_hexadecane_aa.eps} \includegraphics[width=0.309\textwidth]{spectrum_pmma_aa.eps} \includegraphics[width=0.3\textwidth]{delta_cp_pmma_aa.eps} \includegraphics[width=0.3\textwidth]{cp_pmma_aa.eps} \caption{ \label{fig:HD_PMMA} Vibrational spectra $g(\nu)$, panels (a) and (d), specific heat corrections, $\Delta c_p$, (b) and (e), as well as specific heats, (c) and (f), for hexadecane in the top row (a)--(c), and for PMMA in the bottom row (d)--(f). In each case, $g(\nu)$ was obtained at a low (blue) and a high (red) temperature and $\Delta c_p$ deduced from it. The corresponding blue and red curves essentially overlap in panels (b) and (e). Their differences (diff.) are shown in their insets. Experimental data on $c_p$ for $n$-hexadecane~\cite{Regueira2017JCT} and PMMA~\cite{expPMMA} are shown as black lines. They are compared to three numerical data sets: classical all-atom simulations (blue circles), results obtained using the harmonic~reference method~\cite{Horbach1999JPCB} (green triangles down), and results from the methodology proposed in this work (red triangles up).
} \end{figure*} Since the $c_p$ correction for a single mode with thermal frequency is merely around 8\%, the total specific heat corrections are rather insensitive to the temperature at which the DOS was deduced, as long as that temperature lies in a reasonable interval. This claim is confirmed in panel (b) of Fig.~\ref{fig:HD_PMMA}, particularly in its inset, where the $c_p$ corrections obtained at 300 and 560~K are shown to differ by no more than 0.5\%. Fig.~\ref{fig:HD_PMMA}(c) confirms the previously made observation~\cite{Bhowmik2019B} that classical, all-atom-based simulations of chain molecules with hydrogen termination overestimate the specific heat at room temperature by a factor smaller than but close to three. The discrepancy reduces with increasing temperature, but is still close to a factor of two at $T = 550$~K. However, after applying the specific-heat corrections to the classical $c_p(T)$ data, agreement with experimental results is obtained within 0.1~$k_\textrm{B}$ per atom, which translates to a relative accuracy of approximately 6\%. At the same time, our analysis reveals that the specific heat of the harmonic reference is clearly below both experimental data sets. Thus, while the original correction method pursued by Horbach \textit{et al.}~\cite{Horbach1999JPCB} clearly reduces the error from approximately 200\% to 20\%, our modification reduces the error by another factor of three. We note in passing that our treatment would not have improved the accuracy of the $c_p$ prediction for their system in a similar fashion, as they kept their supercooled silica at a relatively low temperature, where thermal anharmonicity effects are small. The just-reported methodology was repeated for all investigated systems. However, only one more example is presented explicitly, namely PMMA in Fig.~\ref{fig:HD_PMMA}(d)--(f).
At high frequencies, an additional (double) peak shows up in $g(\nu)$ near 80~THz, which we attribute to the H vibrations of the methyl group attached to the side group, while the extra peak at 50~THz is due to the stretching vibrations of the CO double bond. Differences between spectra measured at different temperatures are again only substantial at frequencies at or below the lower of the two investigated temperatures, this time $T = 440$ and 600~K. Thus, specific heat corrections are again essentially identical irrespective of the temperature at which the DOS was acquired. Finally, Fig.~\ref{fig:HD_PMMA}(f) confirms that the original harmonic reference reduces the $c_p$ deviation between classical simulations of hydrocarbons and experiment by a factor close to ten and that using the proposed difference methodology reduces the error much further. Given the currently available data, agreement appears to be within 2\%. At this point, it is difficult to speculate what the main reason for the small \emph{absolute} discrepancies of order 0.1~$k_\textrm{B}$ between experimentally and \textit{in-silico} measured specific heats may be, i.e., whether they are mainly due to errors in the classical reference, whether they originate from the quantum corrections, or, unlikely but not impossible, whether they stem from experimental errors. Irrespective of the answer to this question, it appears to us that simulations should be in a position to predict specific heat \emph{differences} between different polymers to within clearly less than 0.1~$k_\textrm{B}$, at least as long as consistent potentials are used, i.e., it should be ensured that dispersive interactions, bond stiffnesses, bond angles, etc.\ are parameterized consistently when trying to ascertain specific heat differences between two liquids. This way, absolute errors would be highly correlated, so that differences between the specific heats of different liquids can be resolved with great accuracy.
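The single-mode numbers quoted above are easy to verify against the standard harmonic-oscillator expression for $c_p^\textrm{qm}(\nu,T)$. The short sketch below is our own illustration (not code from this work); it reproduces both the $\approx 6.25$~THz room-temperature thermal frequency and the $\approx 8\%$ correction for a single mode at $\nu = \nu_\textrm{t}$.

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def cp_qm(nu, T):
    """Quantum specific heat of a single harmonic mode, in units of k_B."""
    x = H * nu / (KB * T)
    return x**2 * math.exp(x) / math.expm1(x)**2

# "room-temperature thermal frequency" nu_rt = k_B * 300 K / h
nu_rt = KB * 300.0 / H
print(f"nu_rt = {nu_rt / 1e12:.2f} THz")                    # -> 6.25 THz

# correction for a single mode oscillating at its thermal frequency (x = 1)
deficit = 1.0 - cp_qm(nu_rt, 300.0)
print(f"single-mode c_p deficit: {100 * deficit:.1f} %")    # ~8 %
```

Using `math.expm1` keeps the denominator numerically stable for small $x$, i.e., in the classical limit where $c_p^\textrm{qm} \to k_\textrm{B}$.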
An interesting observation that can be made when comparing the simulation data for hexadecane (HEX) and PMMA is that the specific-heat corrections at 450~K are quite similar, i.e., 1.68~$k_\textrm{B}$ (HEX) versus 1.56~$k_\textrm{B}$ (PMMA). In fact, Fig.~\ref{fig:cpCorr} reveals that the specific-heat corrections of most of the investigated molecules obey an almost universal function $\Delta c_p(T)$ in the investigated temperature range to within less than 0.1~$k_\textrm{B}$. However, even the two exceptions, namely PMMA and PAA, do not stray too far from the general trend. This is somewhat surprising given the significant differences in the monomer architectures shown in Fig.~\ref{fig:schem}. The relatively small $\Delta c_p$ of PAA can be rationalized as follows: the side group provides an extra classical degree of freedom, i.e., the libration of the side group, while having only one hydrogen atom per three heavy atoms. The $g(\nu)$, from which the $c_p$ corrections presented in Fig.~\ref{fig:cpCorr} were deduced, are shown in Supplementary Fig.~S1~\cite{SM}. \begin{figure}[hbtp] \centering \includegraphics[width=.4\textwidth]{cpcorr_combine.eps} \caption{Specific heat corrections for explicit-atom simulations of all chain molecules investigated in this study. Only lines are shown at temperatures where commodity polymers could not be equilibrated within feasible computing times. } \label{fig:cpCorr} \end{figure} Unfortunately, we did not manage to improve the superposition of the various $\Delta c_p(T)$ curves by scaling the corrections with the relative (inverse) ratio of estimated ``quantum'' DOFs per total DOFs. Thus, at this point in time, we can only recommend using the quasi-universal correction for those polymers (carbon-based molecules with predominantly hydrogen termination) that are not included in our list, for a ``quick and dirty'' assessment of the specific heat from classical explicit-atom simulations.
The $c_p$ corrections do not appear to change substantially upon crystallization. For octane, we found the $\Delta c_p$ estimated from a 40~K crystal to exceed that deduced from a 300~K liquid, both at atmospheric pressure, by approximately 0.05~$k_\textrm{B}$ per DOF at temperatures in between these two limits, see the Supplementary Figs. S2(b) and (d)~\cite{SM}. The increase is predominantly due to the fact that the ordering and the subsequent densification of octane increase vibrational frequencies, because atoms are pushed more deeply into the stiff, repulsive part of their interaction. A similar comment holds for pressurized liquids, as found when setting the pressure in $n$-octane to 2 and 4~GPa. The corresponding data are shown in the Supplementary Figs. S2(a) and (c)~\cite{SM}. Of course, it is only worth knowing $c_p$ if variations in $c_p$ from one polymer to the next generally exceed those in $\Delta c_p$. Indeed, Fig.~\ref{fig:cpall} reveals that this appears to be the case. It shows our results for the final specific heat of polymers for which we could not find experimental results in the temperature range where the polymers can be equilibrated; for the experimentally and technologically relevant polymers PAP, PAM, PAA, and PNIPAM, experimental data exist only at lower temperatures~\cite{Cahill16Mac}. Computed $c_p$ values together with $\Delta c_p$ estimates are listed in the Supplementary Table S1~\cite{SM}. \begin{figure}[hbtp] \centering \includegraphics[width=.4\textwidth]{cp_aa_combine_sep.eps} \caption{Specific heat predictions from all-atom simulations of various chain molecules after applying quantum corrections, grouped into (a) hydrocarbon oligomers and (b) commodity polymers.} \label{fig:cpall} \end{figure} \subsection{United-atom simulations} The explicit-atom model simulations were repeated for a united-atom model of hexadecane~\cite{Martin1998JPCB}.
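Obtaining $g(\nu)$ from the mass-weighted velocity ACF, the step underlying all of the above, can be sketched as follows. The function name, array layout, and normalization convention are our own assumptions; a production analysis would additionally average over time origins and window the ACF.

```python
import numpy as np

def dos_from_velocities(vel, masses, dt):
    """Vibrational density of states g(nu) from the mass-weighted velocity
    autocorrelation function. vel: (n_frames, n_atoms, 3); dt: sampling
    interval in seconds. g is normalized so that its integral equals the
    number of degrees of freedom, 3 * n_atoms."""
    n_frames, n_atoms, _ = vel.shape
    w = vel * np.sqrt(masses)[None, :, None]          # mass-weighted velocities
    acf = np.zeros(n_frames)
    for a in range(n_atoms):                          # ACF via zero-padded FFT
        for k in range(3):
            f = np.fft.rfft(w[:, a, k], n=2 * n_frames)
            acf += np.fft.irfft(f * np.conj(f))[:n_frames]
    acf /= acf[0]                                     # normalize C(0) = 1
    g = np.abs(np.fft.rfft(acf)) * dt                 # one-sided spectrum
    nu = np.fft.rfftfreq(n_frames, d=dt)              # frequencies in Hz
    g *= 3 * n_atoms / (g.sum() * (nu[1] - nu[0]))    # integral -> 3N DOFs
    return nu, g

# synthetic check: a single mode at 10 THz shows up as a peak near 10 THz
dt, n = 1.0e-15, 4096
t = np.arange(n) * dt
vel = np.zeros((n, 1, 3))
vel[:, 0, 0] = np.cos(2 * np.pi * 10e12 * t)
nu, g = dos_from_velocities(vel, np.array([12.0]), dt)
print(nu[np.argmax(g)] / 1e12, "THz")
```

The Wiener--Khinchin route via `np.fft.rfft`/`irfft` keeps the cost at $O(N \log N)$ per degree of freedom instead of the $O(N^2)$ of a direct lag sum.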
The crucial task of estimating $g(\nu)$ is now divided into two parts: the computation of the spectra associated with the explicitly treated units and that of the missing DOFs. The course of action differs depending on which of the three methods proposed in Sect.~\ref{sec:methodB} to estimate $c_p$ from united-atom-based simulations is chosen. However, in either case, the first step is to deduce $g(\nu)$ for the united atoms. Fig.~\ref{fig:UA-spectra}(a) reveals that the low-frequency parts of the UA and AA spectra ($\nu \lesssim 16$~THz, {related to C-C-C bond angle vibrations}) are quite similar. The first peak missing in the UA spectrum lies slightly above $\nu = 20$~THz and can be associated with torsional vibrations of terminal CH$_3$ groups. The highest frequencies in the UA spectrum, i.e., those slightly above 30~THz, can be associated with united-atom bond vibrations. The difference between all-atom and united-atom spectra (reweighted to the true number of DOFs), $g_\textrm{H}(\nu)$, is shown in Fig.~\ref{fig:UA-spectra}(b) (violet solid line) and compared to the spectrum that is obtained when all carbon atoms are frozen in place and only the hydrogen atoms are explicitly propagated (green dashed line). Qualitative agreement is obtained, which is further improved when rescaling the explicit spectrum according to $g(\alpha \nu)/\alpha$ with $\alpha = \sqrt{13/12}$ (green solid line). The integrated spectrum $G(\nu) \equiv \int_0^\nu \! \textrm{d}\nu'\,g_\textrm{H}(\nu')$ can be approximated as a linear combination of step functions, whose derivative is given in Eq.~(\ref{eq:IDW2a}), as demonstrated in Fig.~\ref{fig:UA-spectra}(c). It turns out that the different methods to account for the ignored density of states do not strongly affect the predicted $\Delta c_p$. They differ by at most 0.05~$k_\textrm{B}$ in the investigated temperature interval, as demonstrated in Fig.~\ref{fig:UA-spectra}(d).
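To make the bookkeeping concrete, the sketch below combines Eq.~(\ref{eq:fullUA-cp}) with the delta-function (crude) estimate of Eq.~(\ref{eq:IDW2a}), using the CH$_2$/CH$_3$ weights and frequencies listed above. The function names and the hexadecane hydrogen count (28 H atoms in CH$_2$ units, 6 in CH$_3$ units) are our own illustration, not code from this work.

```python
import math

H_PLANCK = 6.62607015e-34   # J s
KB = 1.380649e-23           # J/K
THZ = 1e12

def cp_qm(nu, T):
    """Quantum specific heat of one harmonic mode, in units of k_B."""
    x = H_PLANCK * nu / (KB * T)
    return x**2 * math.exp(x) / math.expm1(x)**2

# crude-method (weight, frequency/THz) pairs for eliminated H atoms
CH2 = [(1/6, 20.0), (1/2, 37.5), (1/3, 90.0)]
CH3 = [(1/9, 8.5), (1/9, 23.0), (1/9, 30.0), (2/9, 39.0),
       (1/9, 50.0), (1/9, 75.0), (2/9, 93.0)]

def cp_ignored(T, n_H_in_CH2, n_H_in_CH3):
    """c_p per ignored DOF (units of k_B), delta-function g_ig."""
    n = n_H_in_CH2 + n_H_in_CH3
    s = (n_H_in_CH2 / n) * sum(w * cp_qm(f * THZ, T) for w, f in CH2)
    s += (n_H_in_CH3 / n) * sum(w * cp_qm(f * THZ, T) for w, f in CH3)
    return s

def cp_total(T, cp_ex, dcp_ex, n_ex, n_ig, n_H_in_CH2, n_H_in_CH3):
    """Blend explicit and ignored contributions per DOF (units of k_B),
    with dcp_ex the relative correction of the explicit DOFs."""
    cp_ig = cp_ignored(T, n_H_in_CH2, n_H_in_CH3)
    return (n_ex * cp_ex * (1 - dcp_ex) + n_ig * cp_ig) / (n_ex + n_ig)

# hexadecane (C16H34): 14 CH2 and 2 CH3 units -> 28 + 6 eliminated H atoms
print(cp_ignored(300.0, 28, 6))   # ~0.13 k_B per ignored DOF at 300 K
```

At 300~K the eliminated hydrogen modes are thus largely frozen out, which is why they must not be counted classically.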
\begin{figure*}[ptb] \centering \hspace{-7pt} \includegraphics[width=.348\textwidth]{spectrum_ua_aa.eps} \includegraphics[width=.35\textwidth]{spectrum_H.eps} \includegraphics[width=.336\textwidth]{spectrum_integration.eps} \hspace{0pt} \includegraphics[width=.342\textwidth]{cp_ua_ignore.eps} \caption{(a) A comparison of spectra of $n$-hexadecane from all-atom and united-atom models at 430 K. (b) The difference spectrum (diff), $g_\textrm{diff} \equiv g_\textrm{AA}-g_\textrm{UA}$, is compared to the explicit-H spectrum $g_\textrm{H}$ obtained as described in Sect.~\ref{sec:methodB}. The latter is shown in its original (orig.) and rescaled (resc.) versions in green dotted and solid lines, respectively. (c) Integral over the spectra shown in (b). Here, the data for the crude estimate are obtained by the weighted linear combination of the data shown in Fig.~S3. (d) Ignored $c_p$ of $n$-hexadecane in united-atom models retrieved via the three approaches described in Sect.~\ref{sec:methodB}. } \label{fig:UA-spectra} \end{figure*} Finally, we find that $c_p$ as predicted with a UA potential from classical simulations near room temperature might falsely be believed to be accurate, since the values turn out close to experimentally measured ones, see Fig.~\ref{fig:UAcp}. However, $c_p$ (of $n$-hexadecane) decreases upon heating in classical UA simulations, while it increases experimentally. To make accurate predictions for the right reason, the specific heat must be corrected, e.g., in one of the three ways proposed in Sect.~\ref{sec:methodB}. This leads to agreement with the available experimental data~\cite{Regueira2017JCT} within 0.1~$k_\textrm{B}$ per atom throughout the investigated temperature range, as revealed by Fig.~\ref{fig:UAcp}.
\begin{figure}[hbtp] \centering \includegraphics[width=.4\textwidth]{cp_hexadecane_ua.eps} \caption{ Specific heat $c_p$ of $n$-hexadecane as a function of temperature: experimental data (black lines), $c_p$ of a classical, united-atom-based simulation before (blue circles) and after (green triangles down) applying quantum corrections, as well as full estimates for $c_p$ (red triangles up) obtained using Eq.~(\ref{eq:fullUA-cp}), which includes corrections for ignored H~atoms. The experimental data on $c_p$ are taken from Ref.~\cite{Regueira2017JCT}. } \label{fig:UAcp} \end{figure} It is interesting to note that the united-atom potential again leads to a slight overestimation of $c_p(T)$ in comparison to the available experimental data~\cite{Regueira2017JCT}. This could be a coincidence; however, there may also be a reason why different potentials lead to similar errors. Both potentials were optimized to closely match density and viscosity as a function of temperature and pressure. Neither one, however, explicitly includes many-body dispersion terms, which are not entirely negligible for molecular systems~\cite{Elrod1994CR}. \section{Conclusions and Outlook} \label{sec:conclusions} We presented a method allowing the specific heat of molecular systems to be corrected for vibrational quantum effects and demonstrated for various chain molecules that the specific heat can be predicted as reliably from molecular simulations as any other quantity. In principle, the presented method also applies to systems other than chain molecules. In fact, it will most likely improve the specific-heat prediction of any classically treated system with vibrational frequencies above what we call the thermal frequency. However, the method does not capture quantum-mechanical anharmonicity effects, which occur in a non-negligible way, for example, in the case of water at room temperature~\cite{Tuckerman1997S}.
Likewise, whenever the temperature of a system is below its Debye temperature, anharmonicity will affect the specific heat to some degree. For a truly accurate computation of the specific heat of such systems, we see no way around the use of path-integral simulations~\cite{Tuckerman1993JCP,Muser1995PRB,Martonak1998PRE}. However, for molecules with closed valence shells other than a few small selected molecules, such as water, methane, and ammonia, any intermolecular (including rotational) motion can be classified as classical at room temperature. Of course, even for polymers---like the ones investigated in this study---anharmonic quantum effects do exist. To compute them using an all-atom framework, it may not be necessary to use Trotter numbers as large as $P \gtrsim h\nu_\textrm{max}/(k_\textrm{B} T)$, where maximum frequencies are typically associated with vibrations of terminating hydrogen atoms. The idea presented in this work, namely to compute a mass-weighted velocity auto-correlation function to correct for an insufficient handling of intramolecular, vibrational quantum effects, can be generalized to path-integral simulations. This is possible because it can be readily worked out how the predicted specific heat of a harmonic reference depends on the Trotter number $P$ and the ratio $h\nu/(k_\textrm{B} T)$, so that the excess specific heat obtained at finite $P$ can be estimated. Such an approach should be particularly beneficial when intermolecular interactions are clearly weaker than intramolecular forces, but not necessarily for regular metals and ceramics. An indirect result of our study is that replacing hydrogen atoms with deuterium would not only enhance their chemical stability due to a reduction of zero-point energy, which was argued to benefit the tribological properties of hydrogen-terminated coatings~\cite{Mo2009PRB}, but would also increase the specific heat and thereby presumably the heat conduction.
We estimate the increase in $c_p$ due to full deuteration in paraffins and polyalphaolefins to be 0.25~$k_\textrm{B}$/atom at $T = 300$~K and 0.3~$k_\textrm{B}$/atom at $T = 400$~K, which would correspond to an increase of roughly 25\% in the specific heat and potentially to a similar increase in heat conduction. However, this insight is at best relevant for small-scale, niche applications, given that the currently achieved production of deuterated mineral oils is in the decagram range~\cite{Klenner2020PC}. A more immediate implication of our work is that a successful computation of thermal transport properties will necessitate a correct assessment of the specific heat~\cite{Cahill16Mac}. When simulations using accurate potentials are conducted carefully, but a classically computed heat conductivity $\kappa$ is not reweighted with a factor similar to that for the specific heat to account for quantum effects, we would expect $\kappa$ to be overestimated~\cite{KappaMDExp,Mukherji19PRM}. This might explain why one of us~\cite{Mukherji19PRM} found $\kappa \simeq 0.304$~W/Km and 0.264~W/Km for \textit{in-silico} PMMA and PAP, respectively, while the corresponding experimental values are 0.200~W/Km and 0.160~W/Km~\cite{Cahill16Mac}. \section{Acknowledgement} M.M. thanks Markus Gallei for useful discussions. D.M. thanks the Canada First Research Excellence Fund (CFREF) for financial support and the ARC Sockeye computational facility, where the commodity polymer simulations were performed.
\section{Introduction} Magnetospheric coherent radio emission from pulsars consists of a main pulse, which is the brightest structure in the pulse profile and is associated with a linear polarization position angle (PPA) swing across the pulse; sometimes an interpulse, which is located 180$^{\circ}$ away from the main pulse and is also associated with a PPA swing; occasionally pre/post-cursor emission, which is a highly polarized temporal structure with a flat PPA, connected via a bridge to the main pulse but located significantly away from it; and the recently discovered off-pulse emission (Basu et al. 2011), which is a broad emission component observed in regions where no obvious temporal structures are seen in a pulse profile. To date there is no self-consistent theory that can explain all aspects of pulsar emission. Most theories use the idea that the region around the neutron star is a charge-separated magnetosphere which is ``force free'', meaning that the electromagnetic energy is significantly larger than all other inertial, pressure and dissipative forces. The magnetosphere is initially charge starved, and a supply of charged particles can come from the neutron star or from pair creation in strong magnetic fields. Subsequently the magnetosphere attains charge neutrality by accelerating particles with density equal to the Goldreich--Julian density. In all models of pulsar radio emission there is general agreement that the radio emission arises due to the growth of plasma instabilities in the relativistic plasma streaming along curved magnetic field lines. Such processes in pulsar astrophysics are: the cyclotron maser (Kazbegi, Machabeli \& Melikidze 1991), two-stream instabilities (Usov 1987; Asseo \& Melikidze 1998), collapsing solitons (Weatherall 1998), charged relativistic solitons (Melikidze, Gil \& Pataraya 2000, hereafter MGP00; Gil, Lyubarsky \& Melikidze 2004, hereafter GLM04) and the linear acceleration maser (Melrose 1978).
The aim of this article is to summarize the key observational results that have given, or have the possibility of providing, constraints on understanding the coherent radio emission from pulsars. Here we will concentrate on the main pulse emission and will also take the position that the coherent radio emission mechanism is excited by curvature radiation from charged bunches (like solitons). The physical process includes the formation of an inner vacuum gap (IVG) near the pulsar polar cap, where non-stationary, spark-associated relativistic ($\gamma_p \sim 10^6$) primary particles are generated (Ruderman \& Sutherland 1975, hereafter RS75). These particles further radiate in the strong magnetic field, and the photons thereby produce secondary e$^+$e$^-$ plasma with $\gamma_s \sim 400$. The two-stream instability in the plasma generates Langmuir plasma waves, and the modulational instability of the Langmuir waves leads to the formation of charged solitons which can excite extraordinary (X) and ordinary (O) modes of curvature radiation in the plasma, as shown by MGP00 and GLM04. For this process to work, a highly non-dipolar surface magnetic field is essential (Gil, Melikidze \& Mitra 2002). \section{The Shape of the main pulse emission region} The radio emission from pulsars almost invariably arises from regions of open dipolar field lines. The linear PPA swings for a large number of pulsars are in very good agreement with the rotating-vector model (RVM, Radhakrishnan \& Cooke 1969), which predicts the behaviour of the PPA arising from open dipolar field lines. The average PPA in several pulsars has a complex non-RVM behaviour; however, this is mostly due to the presence of orthogonal polarization modes (OPMs). Single-pulse polarization can be used to separate the OPMs, after which the RVM is clearly satisfied for the individual modes (Gil \& Lyne 1994).
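The RVM prediction mentioned above is simple enough to state explicitly. The sketch below is our own illustration (sign conventions for the swing differ between authors); it also checks the well-known result that the steepest PPA gradient, reached at the fiducial phase, equals $\sin\alpha/\sin\beta$.

```python
import numpy as np

def rvm_ppa(phi, alpha, beta, phi0=0.0, psi0=0.0):
    """Rotating-vector-model position angle (Radhakrishnan & Cooke 1969).
    alpha: magnetic inclination; beta: impact angle of the line of sight;
    zeta = alpha + beta is the angle between line of sight and spin axis.
    All angles in radians; sign conventions vary between authors."""
    zeta = alpha + beta
    num = np.sin(alpha) * np.sin(phi - phi0)
    den = (np.sin(zeta) * np.cos(alpha)
           - np.cos(zeta) * np.sin(alpha) * np.cos(phi - phi0))
    return psi0 + np.arctan2(num, den)

# the steepest gradient of the PPA swing occurs at phi = phi0 and
# equals sin(alpha)/sin(beta)
alpha, beta, eps = np.radians(45.0), np.radians(5.0), 1e-7
slope = (rvm_ppa(eps, alpha, beta) - rvm_ppa(-eps, alpha, beta)) / (2 * eps)
print(slope, np.sin(alpha) / np.sin(beta))
```

Fitting this curve to observed PPA traverses is what yields the geometrical angles $\alpha$ and $\beta$ used later for emission-height estimates.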
The distribution of pulse width with rotation period shows a lower bound which scales with pulsar period as $P^{-0.5}$, reflecting the change of the dipolar open field-line region with the changing light-cylinder distance. Most pulse profiles have one or several subpulses or components. Detailed phenomenological studies of average pulse shape and polarization reveal that the pulsar emission beam has a central core emission (surrounding the magnetic axis, Rankin 1990) with typically two or three nested cones around it (Rankin 1993, Mitra \& Deshpande 1999). Each of the cones scales as $P^{-0.5}$, and the observed subpulses are the line-of-sight cuts across the core or conal regions. The distribution of the subpulse widths of the core or cones follows a lower bound of $2.4^{\circ} P^{-0.5}$ (Maciesiak et al. 2012), where $2.4^{\circ}$ equals the polar cap size for a pulsar with a 1 second period and a neutron star radius of 10 km. In this core--cone model, it is important to understand that the nested cones are not uniformly illuminated, and hence a given intensity pattern in a pulsar can have significant variation in component intensity; however, the location of the components appears to follow the conal structure. The cone structure in the RS75 model results from the $E \times B$ drift of plasma columns or ``sparks'' produced in the IVG. The core emission is the central spark (Gil \& Sendyk 2000), while the other sparks populate the polar cap, maintaining a distance between them of the order of the vacuum gap height. The sparks themselves have dimensions of the order of the VG height. While the formation of the IVG is theoretically possible, the physical mechanism of how exactly the sparks populate the polar cap and develop in size is not clear.
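The $2.4^{\circ}\,P^{-0.5}$ lower bound quoted above can be recovered from dipole geometry. In this back-of-the-envelope sketch (our own check), the polar-cap half-angle is $\theta_\textrm{pc} \simeq \sqrt{2\pi R/(cP)}$, and the conventional $3/2$ flaring factor for the edge field lines is an assumption on our part.

```python
import math

C = 2.998e8  # speed of light (m/s)

def polar_cap_beam_deg(P, R=1.0e4, flare=1.5):
    """Full opening angle (degrees) of the open-field-line beam rooted at
    the polar cap of a neutron star with radius R (m) and spin period P (s).
    `flare` = 3/2 accounts for the curvature of the edge field lines
    (a standard, but here assumed, convention)."""
    theta_pc = math.sqrt(2.0 * math.pi * R / (C * P))  # half-angle, radians
    return 2.0 * flare * math.degrees(theta_pc)

print(polar_cap_beam_deg(1.0))   # ~2.5 deg for P = 1 s, R = 10 km
print(polar_cap_beam_deg(4.0))   # halves for P = 4 s (the P**-0.5 scaling)
```

The result of $\approx 2.5^{\circ}$ at $P = 1$~s matches the quoted $2.4^{\circ}$ to within a few per cent, and the $P^{-0.5}$ dependence is explicit in the square root.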
Nonetheless, if these sparks eventually generate a streaming flow of plasma and produce coherent radio emission at about 500 km from the neutron star surface, then it is possible to explain the observed subpulse widths and the core--cone structure observed in pulsars. \articlefiguretwo{r_em.eps}{hires_spul.ps}{height}{The left figure (adapted from Krzeszowski et al. 2009) shows the comparison between radio emission heights estimated from the geometrical (y axis) and delay (x axis) methods. The conal emission in pulsars is consistent with emission arising at around 500 km (refer to Krzeszowski et al. 2009 for details). The right panel shows an Arecibo observation of a single pulse for the 3.7-sec pulsar PSR B0525+21, where microstructure of around 180 $\mu$sec is seen (Backus, Mitra \& Rankin 2012, in preparation): a much smaller timescale than the angular beaming timescale of 3 millisec.} \section{The emission height of the main pulse} Two methods have provided emission-height determinations in pulsars, namely the geometrical method and the delay method. In the geometrical method, the PPA traverse is used to infer the magnetic axis inclination angle and the line-of-sight angle, and using the pulse width along with a model of the open dipolar field lines, the height can be estimated. The delay method, as suggested by Blaskiewicz et al. (1991), is based on the kinematical effects of aberration and retardation (A/R), and a subsequent careful derivation shows that emission heights can be estimated independently of pulsar geometry (see Dyks et al. 2004). The A/R effect is seen as a shift between the center of the total intensity profile and the fiducial plane containing the magnetic and spin axes, which is often identified with the steepest gradient point of the PPA traverse or the peak of the core emission. The merits/demerits and usage of the height estimation methods can be found in Mitra \& Li (2004) and Dyks et al. (2004).
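As an order-of-magnitude illustration of the delay method, the sketch below uses the first-order A/R relation $\Delta\phi = 4\Omega h/c$; this is our own numerical example, and the refinements discussed by Dyks et al. (2004) are ignored.

```python
import math

C = 2.998e8  # speed of light (m/s)

def delay_height(P, dphi_deg):
    """Emission height (m) from the aberration/retardation phase shift
    between the profile centre and the PPA inflection point, using the
    first-order relation delta_phi = 4 * Omega * h / c,
    i.e. h = c * P * delta_phi / (8 * pi)."""
    return C * P * math.radians(dphi_deg) / (8.0 * math.pi)

# a shift of ~2.4 deg in a 1-s pulsar corresponds to an emission height
# of ~500 km, the scale quoted in the text for conal emission
print(delay_height(1.0, 2.4) / 1e3, "km")
```

Because the relation is linear in the measured shift, the fractional uncertainty of the height is simply that of the phase-shift measurement.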
One important point to note here is that the methods mentioned here can only give emission heights for the conal emission region. Determination of the core emission height has not been possible to date. A few notable works dedicated to finding emission heights using the geometrical method are Rankin (1993) and Kijak \& Gil (1998), and using the delay method: Blaskiewicz et al. (1991, BCW), von Hoensbroech et al. (1999), Malov \& Suleimanova (2000), Gangadhara \& Gupta (2001), and Krzeszowski et al. (2009). The left panel of Fig~\ref{height} shows a comparison between the geometrical and delay methods by Krzeszowski et al. (2009), where one can clearly see that the emission arises from about 500 km above the neutron star's surface. This finding is a very significant input to the pulsar emission-mechanism problem. The only plasma instability that can grow at these heights (where the magnetic field is very strong and the plasma is constrained to move along the magnetic field) is the two-stream instability. Hence models like the cyclotron maser, which can only give rise to coherent radio emission near the light cylinder, can be ruled out. \section{Single pulse dynamics of the main pulse} A range of phenomena occurring at different time scales are observed in pulsars. The ones which are intrinsic to the pulsar emission are nulling, moding, drifting and microstructure. In pulsar nulling, the radio emission suddenly switches off for timescales as short as a pulsar period and up to a few weeks or months. During pulsar moding, the average pulse profile suddenly changes from one stable form to another, and a given mode can last for intervals of a few minutes to hours. The phenomena of nulling and moding are perhaps the most difficult emission phenomena to explain, and any further discussion of them is beyond the scope of this article. The pulsar drifting and microstructure phenomena give indirect hints about the radio-loud plasma. Drifting subpulses exhibit drift through the average profile in a very regular manner.
In fact, for several pulsars very detailed analysis of the observations reveals that a given subpulse seems to have a periodic behaviour which can be modeled as a circular carousel of emitting sparks (e.g. Deshpande \& Rankin 2001; Mitra \& Rankin 2008), while there are quite a few other pulsars where the carousel timescale can be inferred. In the RS75 model, the carousel rotation is explained as the $E \times B$ drift of the spark-associated plasma columns in the IVG. However, it is found that the observed timescales of carousel rotation are much longer (several tens of seconds) than the RS75 prediction. A major refinement of this model was given by Gil, Melikidze \& Geppert (2006), where the longer carousel rotation time was explained by a partially screened vacuum gap. They also estimated the thermal X-ray emission that arises due to the bombardment of the neutron star surface by charged particles created in the VG. Such thermal X-ray emission has been found in several isolated neutron stars, and its connection to carousel rotation is an active area of pulsar research. Additionally, these observations provide direct evidence for the model that IVGs populated with sparks exist in pulsars. The pulsar microstructure phenomenon is observed as short-timescale features in single pulses, ranging from 1 to several hundreds of microseconds (Cordes 1979; Lange et al. 1998). The microstructure has been thought either to reveal the Lorentz factor of the emitting plasma (Cordes 1979) or to be a signature of plasma disturbances (Weatherall 1998). Cordes (1979) finds that the average microstructure timescale ($t_{\mu}$) scales with pulsar period as $t_{\mu} \sim 10^{-3} P$. If we assume that pulsar microstructure results from the angular beaming of relativistic particles, then we obtain a Lorentz factor of around 150 based on the Cordes (1979) relation. We have, however, ourselves tried to establish this relation but have not been successful so far.
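If microstructure is read as angular beaming, the implied Lorentz factor follows from $t_{\mu} \simeq P/(2\pi\gamma)$. The sketch below is our own, geometry-factor-free rendering of that argument (viewing-angle factors are neglected), not a calculation from the cited works.

```python
import math

def beaming_timescale(P, gamma):
    """Time for an emission cone of half-angle ~1/gamma to rotate past the
    observer; geometry factors (viewing angle, etc.) are neglected."""
    return P / (2.0 * math.pi * gamma)

# Lorentz factor implied by t_mu ~ 1e-3 * P (Cordes 1979):
gamma = 1.0 / (2.0 * math.pi * 1.0e-3)
print(round(gamma))                                # ~159, i.e. "around 150"

# angular beaming timescale for the 3.7-s pulsar PSR B0525+21
print(beaming_timescale(3.7, gamma) * 1e3, "ms")   # a few milliseconds
```

The few-millisecond result for PSR B0525+21 matches the angular beaming timescale quoted in the figure caption, against which the observed 180-$\mu$s microstructure is clearly too short.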
We also find that even for very long period pulsars (see Fig~\ref{height}), in some bright single pulses the microstructure scale is much smaller than the angular beaming time scale. The theory needed to understand these short-timescale temporal effects is still not fully developed. However, microstructures in pulsars are the best examples of the smallest spatial- and temporal-scale plasma variations producing coherent radio emission. In the IVG model, the microstructures correspond to spark--associated plasma columns of secondary plasma with Lorentz factors of about 100--500. A large number of such sparks add up incoherently to produce a given pulsar subpulse. \section{Orientation of the escaping waves of the main pulse} Lai et al. (2001) used the X-ray image of the Vela pulsar wind nebula and the absolute PPA to establish that the electric vector emanating from the pulsar is orthogonal to the magnetic field planes, and hence represents the extraordinary (X) mode. This significant observational result demonstrated for the first time that the electric fields emerging from the Vela pulsar magnetosphere are perpendicular to the dipolar magnetic field planes. Lai et al. (2001) also showed that the proper motion direction (PM) of the pulsar is aligned with the rotation axis. Johnston et al. (2005) \& Rankin (2007) produced a distribution of $\mid$PM $-$ absolute PPA$\mid$ for a few pulsars and found a bimodal distribution around zero and 90$^{\circ}$. Assuming that the pulsars' PMs are parallel to the rotation axis, the bimodality could be explained as the emerging radiation being either parallel or perpendicular to the magnetic field planes, since pulsars are known to have orthogonal polarization modes. Alternatively, the PMs of pulsars can also be parallel or perpendicular to the rotation axis. While both of the above explanations are possible, it is clear that the electric vectors of the waves which detach from the pulsar magnetosphere to reach the observer track the magnetic field planes.
This is in agreement with the IVG class of models, where MGP00 and GLM04 demonstrate that curvature radiation can excite the X and O modes in the plasma at around 500 km, and that the X mode can escape the pulsar magnetosphere almost as in vacuum and reach the observer. \section{Pulsar Polarization and adiabatic walking condition (AWC)} Mitra, Gil \& Melikidze (2009) argued that single pulses with close to 100\% linear polarization are most suitable for unraveling the pulsar emission mechanism. They showcased highly polarized single pulses from several pulsars where the PPA followed the rotating--vector model. These pulses, which are relatively free from depolarization, must consist exclusively of a single polarization mode, which they associate with the X-mode excited by coherent--curvature radiation. This argument however only holds if the wave polarization at the generation point in the magnetosphere is not modified as the wave propagates through the magnetosphere. Cheng \& Ruderman (1979) argued that if the AWC (which is given by $|\Delta N| k l \gg 1$, where $\Delta N$ is the difference in the index of refraction $N=ck/\omega$ between the X and O modes) is satisfied, then the wave polarization slowly rotates, and hence as the wave detaches from the magnetosphere it no longer carries information about the generation point. However, based on a rigorous treatment of the radiation mechanism, GLM04 and Melikidze, Mitra \& Gil (2012, in preparation) argue that the AWC does not hold in the pulsar magnetosphere and hence the X--mode can escape the pulsar magnetosphere unaffected. There are two aspects of pulsar polarization which are difficult to understand. One is the existence of OPMs where both the X and O modes are present. Most theories predict that the O-mode should be damped in the magnetosphere. How it escapes is still a puzzle to be solved. The second issue regards circular polarization.
If the emission results from the incoherent addition of smaller coherently emitting units, then one does not expect any phase relation between the parallel and perpendicular electric fields, and hence the circular polarization should vanish. Propagation effects are often invoked to explain circular polarization; however, these explanations ignore a wide range of other phenomena, and we feel that currently there is no good explanation for the circular polarization behaviour in pulsars. \section{Summary} The formation of an IVG leading to radio emission excited by coherent-curvature radiation is by far the most successful theory that explains the main pulse emission phenomena. Challenging simultaneous X-ray and radio observations are currently being carried out to understand the IVG conditions in pulsars. We are still uncertain about the origin of pre/post-cursor and off-pulse emission in pulsars. More observations are needed to search for and characterize such emission. It is possible that different coherent radio emission mechanisms are responsible for such emission. \acknowledgements I would like to thank my esteemed colleagues - Joanna Rankin, Janusz Gil \& G. Melikidze, with whom I have worked extensively on the pulsar emission problem over the years.
\subsection{Results on feasibility of MILP and CA-MPC} \begin{theorem}[Existence of the zero-slack solution] \label{th:MILP_CAMPC_relation} Feasibility of the MILP problem~\eqref{eq:CentralMILP} implies the existence of a zero-slack solution of the CA-MPC optimization \eqref{eq:campc}. \end{theorem} Theorem~\ref{th:MILP_CAMPC_relation} states that the binary decision variables $b^i_k$ selected by a feasible solution of the MILP problem \eqref{eq:CentralMILP}, when used to select the constraints (defined by $H,\,g$) for the CA-MPC formulations for UAS 1 and 2, imply the existence of a zero-slack solution of \eqref{eq:campc}. \section{Experimental evaluation} \label{sec:experiments} \input{chapters/implementation} \input{chapters/experimental_results} \subsection{Introduction to Signal Temporal Logic} \label{sec:STL_intro} Signal Temporal Logic (STL) \cite{MalerN2004STL} is a behavioral specification language that can be used to encode requirements on signals. The grammar of STL~\cite{Raman14_MPCSTL} allows for capturing a rich set of behavioral requirements using temporal operators, such as \textit{Always} ($\always$) and \textit{Eventually} ($\eventually$), as well as logical operators like \textit{And} ($\land$), \textit{Or} ($\lor$), and \textit{Negation} ($\neg$). With these operators, an STL specification $\varphi$ is defined over a signal, e.g. over the trajectories of quad-rotor robots, and evaluates to either \textit{True} or \textit{False}. The following example demonstrates the use of STL to capture operational requirements for two UAS: \begin{exmp} \label{ex:reach_avoid_exmp} \textit{(A two-UAS timed reach-avoid problem)} Two quad-rotor UAS are tasked with a mission with spatial and temporal requirements in the workspace shown in Fig. \ref{fig:dronetube}: \begin{enumerate} \item The two UAS have to reach a $\text{Goal}$ set (shown in green), or a region of interest, within $6$ seconds of starting.
UAS $j$ (where $j\in \{1,2\}$), with position denoted by $p_j$, has to satisfy: $\varphi_{\text{reach},j} = \eventually_{[0,6]} (p_j \in \text{Goal})$. The \textit{Eventually} operator over the time interval $[0,6]$ requires UAS $j$ to be inside the set $\text{Goal}$ at some point within $6$ seconds. \item In addition, the two UAS also have an $\text{Unsafe}$ set (in red) to avoid, e.g. a no-fly zone. For each UAS $j$, this is encoded with the \textit{Always} and \textit{Negation} operators: $\varphi_{\text{avoid},j} = \always_{[0,6]} \neg (p_j \in \text{Unsafe})$. \item Finally, the two UAS should also be separated by at least $\delta$ meters along every axis of motion: $\varphi_{\text{separation}} = \always_{[0,6]} ||p_1 - p_2||_{\infty} \geq \delta$. \end{enumerate} The two-UAS timed reach-avoid specification is thus: \begin{equation} \label{eq:timed_RA} \varphi_{\text{reach-avoid}} = \land_{j=1}^2 ( \varphi_{\text{reach},j} \land \varphi_{\text{avoid},j}) \land \varphi_\text{separation} \end{equation} \end{exmp} In order to satisfy $\varphi$, a planning method generates trajectories $\mathbf{p}_1$ and $\mathbf{p}_2$ of duration at least $hrz(\varphi)= 6$\,s, where $hrz(\varphi)$ is the time \textit{horizon} of $\varphi$. If the trajectories satisfy the specification, i.e. $(\mathbf{p}_1,\, \mathbf{p}_2) \models \varphi$, then the specification $\varphi$ evaluates to \textit{True}; otherwise it is \textit{False}. In general, an upper bound on the time horizon can be computed as shown in \cite{Raman14_MPCSTL}. In this work, we consider specifications such that the horizon is bounded. More details on STL can be found in \cite{MalerN2004STL} or \cite{Raman14_MPCSTL}. In this paper, we consider discrete-time STL semantics, which are defined over discrete-time trajectories.
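As a concrete illustration of the Boolean semantics used in Example~\ref{ex:reach_avoid_exmp}, over a discrete-time trajectory the \textit{Eventually} and \textit{Always} operators reduce to \texttt{any}/\texttt{all} over the samples. A minimal 1D sketch, where the traces, the $\text{Goal}$/$\text{Unsafe}$ intervals and $\delta$ are illustrative assumptions, not values from the paper:

```python
import numpy as np

def eventually(sat):
    """F_[0,T]: satisfied if the predicate holds at some sample."""
    return bool(np.any(sat))

def always(sat):
    """G_[0,T]: satisfied if the predicate holds at every sample."""
    return bool(np.all(sat))

# Hypothetical 1D position traces, sampled at 1 s for t = 0..6.
p1 = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
p2 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])

GOAL = (5.0, 7.0)      # assumed Goal interval
UNSAFE = (8.0, 9.0)    # assumed Unsafe (no-fly) interval
DELTA = 0.5            # assumed minimum separation

in_goal = lambda p: (p >= GOAL[0]) & (p <= GOAL[1])
in_unsafe = lambda p: (p >= UNSAFE[0]) & (p <= UNSAFE[1])

phi_reach = all(eventually(in_goal(p)) for p in (p1, p2))
phi_avoid = all(always(~in_unsafe(p)) for p in (p1, p2))
phi_sep = always(np.abs(p1 - p2) >= DELTA)
satisfied = phi_reach and phi_avoid and phi_sep
```

With these hypothetical traces all three conjuncts of \eqref{eq:timed_RA} hold, so the conjunction evaluates to \textit{True}.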
\subsection{UAS planning with STL specifications} \label{sec:problem_planning} Fly-by-logic, the method of \cite{pant2018fly}, generates trajectories by centrally planning for fleets of UAS with STL mission specifications, e.g. for the specification $\varphi_{\text{reach-avoid}}$ of example \ref{ex:reach_avoid_exmp}. It does so by maximizing a smooth approximation ($\srob_\formula$) of the robustness function \cite{pant2017smooth}, picking waypoints (connected via jerk-minimizing splines \cite{MuellerTRO15}) for each UAS through centralized, non-convex optimization. While the method is successful in planning for multiple multi-rotor UAS, its performance degrades as the number of UAS being planned for increases. There are two main reasons for this: a) it solves a (non-convex) optimization involving the variables of all the UAS, so the problem becomes harder as the number of variables increases, and b) for $J$ UAS, it has to account for $J \choose 2$ terms for pair-wise separation between the UAS (eq. \ref{eq:timed_RA}). So increasing the number of UAS makes the problem harder to solve. For these reasons, the method cannot be used for real-time planning. In addition, for a large airspace with a small number of UAS, centrally taking the $J \choose 2$ constraints into account might not be necessary, as collisions between UAS would likely be infrequent. Taking these observations into account, we use the underlying optimization of \cite{pant2018fly} to generate trajectories\footnote{We could also have used the method of \cite{Raman14_MPCSTL}, but instead use \cite{pant2018fly} because it is computationally faster and tailored to multi-rotor robots.}, but allow each UAS to independently (and in parallel) solve for its own STL specification. This is captured in the following problem: \begin{myprob} \label{prob:STLtraj} For UAS $j$, given STL specification $\varphi_j$, generate trajectory $\sstraj_j$ such that $\sstraj_j \models \varphi_j$.
\end{myprob} Each UAS solves this problem (generating a trajectory of duration $hrz(\varphi_j)$) by solving an optimization of the form: \begin{equation} \label{eq:single_FbL} \begin{split} \max_{\mathbf{w}_j} \; &\srob_{\varphi_j}(L\mathbf{w}_j) \\ \text{s.t. } &F\mathbf{w}_j \leq l \end{split} \end{equation} Here, $\srob_{\varphi_j}$ is the smooth robustness, and $L$ is a linear map that takes a sequence of waypoints $\mathbf{w}_j$ (vectors in 3D position and velocity space) to a discrete-time trajectory $\sstraj_j=L(\mathbf{w}_j)$ of duration $hrz(\varphi_j)$. This trajectory consists of spline segments that minimize the integral of jerk over time in going from one waypoint to the next \cite{MuellerTRO15}. The linear constraints on the waypoints ensure kinematic feasibility (Theorem 3.2 in \cite{pant2018fly}) of the resulting trajectories, i.e. the velocities and accelerations of the UAS are within predefined bounds. For brevity, we omit the details here, but the full formulation can be found in \cite{pant2018fly}. The robustness value $\rho_{\varphi_j}$ of the generated trajectory $\sstraj_j$ defines the robustness tube around the trajectory. By allowing each UAS to plan for itself independently of the others, we have ignored the mutual separation requirement in the STL-based planning problem. Instead we rely on online collision avoidance to ensure pairwise that UAS do not collide with each other. This is covered in the following section. \textbf{Note:} In the following sections, $\mathbf{x}$ will refer to a full-state (discrete-time, finite-duration) trajectory for a UAS. We will also use $\mathbf{p}$ to refer to the position components of that trajectory, or the position trajectory. $x_k$ (and $p_k$) refer to the components of the trajectory at time step $k$.
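The smooth robustness $\srob_\varphi$ of \cite{pant2017smooth} replaces the $\max/\min$ operators of the robustness semantics with differentiable surrogates; a common choice is the log-sum-exp approximation. A minimal sketch, where the trace, the $\text{Goal}$ interval and the smoothing constant $k$ are illustrative assumptions:

```python
import numpy as np

def smooth_max(x, k=10.0):
    """Log-sum-exp surrogate for max; approaches max(x) as k grows.
    It over-approximates: max(x) <= smooth_max(x) <= max(x) + log(n)/k."""
    x = np.asarray(x, dtype=float)
    return np.log(np.sum(np.exp(k * x))) / k

def smooth_min(x, k=10.0):
    """Dual surrogate for min (under-approximates the true min)."""
    return -smooth_max(-np.asarray(x, dtype=float), k)

# Robustness of "eventually reach Goal" for a hypothetical 1D trace:
# the true value is the max over time of the margin of being inside Goal.
p = np.array([0.0, 2.0, 4.0, 6.0])
goal_lo, goal_hi = 5.0, 7.0
margins = np.minimum(p - goal_lo, goal_hi - p)   # positive inside Goal
rho_true = margins.max()                          # best margin, at p = 6
rho_smooth = smooth_max(margins, k=20.0)          # differentiable in p
```

Because the surrogate is differentiable in the trace (and hence in the waypoints through the linear map $L$), gradient-based solvers can be applied to the otherwise non-smooth objective.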
\subsection{Robustness of STL specifications} \label{sec:stl_rob_short} For a time domain $\TDom = [0, T]$ with sampling time $dt$, the signal space $\SigSpace$ is the set of all signals $\sstraj: \TDom \rightarrow X$. The \textit{robustness} value \cite{FainekosP09tcs} $\rho_\formula$ of an STL formula $\formula$, with respect to the signal $\mathbf{x}$ that it is defined over, is a real-valued function of $\mathbf{x}$ that has the following important property: \begin{theorem} \cite{FainekosP09tcs} \label{thm:rob objective new} For any $\sstraj \in \SigSpace$ and STL formula $\formula$, if $\robf(\sstraj) <0$ then $\sstraj$ violates $\formula$, and if $\robf(\sstraj) > 0$ then $\sstraj$ satisfies $\formula$. The case $\robf(\sstraj) =0$ is inconclusive. \end{theorem} Intuitively, the robustness value indicates the degree of satisfaction or violation of a specification. For simplicity, the distances are defined in the inf-norm sense. This, combined with Theorem \ref{thm:rob objective new}, gives us the following result: \begin{corollary} \label{cor:rob_tube} Given a discrete-time trajectory $\sstraj$ such that $\sstraj \models \formula$ with robustness value $\rho>0$, any trajectory $\mathbf{x}'$ that is within $\rho$ of $\sstraj$ at each time step, i.e. $||x_t-x_t'||_\infty < \rho \, \forall t \in \TDom$, is such that $\mathbf{x}' \models \formula$ (it also satisfies $\formula$). \end{corollary} \section{Problem formulation: Collision Avoidance} \label{sec:CA} While flying their planned trajectories (from the previous section), two UAS that are within communication range share a look-ahead of their trajectories and check for a potential collision at any time step $k$ in this look-ahead horizon of $N$ time steps. We assume the UAS can communicate with each other in a manner that allows enough advance notice for avoiding collisions, e.g. using 5G technology. The details of this are beyond the scope of this paper.
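The quantitative semantics and the tube property of Corollary~\ref{cor:rob_tube} can be checked numerically. A minimal 1D sketch for $\always \neg(p \in \text{Unsafe})$, where the trajectory, the $\text{Unsafe}$ interval and the perturbation bound are illustrative assumptions:

```python
import numpy as np

def rho_avoid(p, lo, hi):
    """Quantitative robustness of "always not(p in [lo, hi])" in 1D:
    the minimum over time of the signed distance to the interval."""
    outside = (p < lo) | (p > hi)
    dist = np.where(outside,
                    np.minimum(np.abs(p - lo), np.abs(p - hi)),
                    -np.minimum(p - lo, hi - p))   # negative when inside
    return float(dist.min())

p = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # assumed satisfying trajectory
lo, hi = 6.0, 8.0                          # assumed Unsafe interval
rho = rho_avoid(p, lo, hi)                 # closest approach: 2.0 (at p = 4)

# Corollary: any trajectory within rho of p (per time step, inf-norm)
# still avoids Unsafe. Perturb by strictly less than rho and re-check.
rng = np.random.default_rng(0)
p_pert = p + rng.uniform(-0.9 * rho, 0.9 * rho, size=p.shape)
rho_pert = rho_avoid(p_pert, lo, hi)       # remains strictly positive
```

Any perturbation strictly smaller than $\rho$ at every step keeps the robustness positive, which is exactly the robustness-tube guarantee the collision avoidance scheme exploits.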
\begin{mydef} \label{def:msep} \textbf{2-UAS Conflict}: Two UAS, with discrete-time positions $\mathbf{p}_1$ and $\mathbf{p}_2$, are said to be in \textit{conflict} at time step $k$ if $||p_{1,k}-p_{2,k}||_\infty < \delta$, where $\delta$ is a predefined minimum separation distance\footnote{A more general polyhedral constraint of the form $H(p_{1,k}-p_{2,k})< g$ can be used for defining the conflict without loss of generality.}. Here, $p_{j,k}$ represents the position of UAS $j$ at time step $k$. \end{mydef} \begin{figure}[tb] \begin{center} \includegraphics[width=0.35\textwidth,trim={6cm 0cm 7.5cm 3cm},clip]{figures/DroneTube.pdf} \end{center} \vspace{-10pt} \caption{\small Discrete-time trajectories of two UAS and their associated robustness tubes (see def. \ref{def:robustnesstube}) in gray and purple. The trajectories satisfy a reach-avoid specification, see example \ref{ex:reach_avoid_exmp}. The $\text{Unsafe}$ set is in red and the $\text{Goal}$ set is in green.} \label{fig:dronetube} \vspace{-10pt} \end{figure} \begin{mydef} \label{def:robustnesstube} \textbf{Robustness tube}: Given an STL formula $\varphi$ and a discrete-time position trajectory $\mathbf{p}_j$ that satisfies $\varphi$ (with associated robustness $\rho_{\varphi}$), the (discrete) \textit{robustness tube} around $\mathbf{p}_j$ is given by $\mathbf{P}_j = \mathbf{p}_j\oplus \mathbb{B}_{\rho_{\varphi}}$. We say the \textit{radius} of this tube is $\rho_{\varphi}$ (in the inf-norm sense). Here $\mathbb{B}_\rho$ denotes a 3D cube with sides of length $2\rho$ and $\oplus$ is the Minkowski sum operation. \end{mydef} An example of a robustness tube is shown in Figure~\ref{fig:dronetube}. \textbf{Note}: As long as a UAS stays within its robustness tube, it will satisfy the STL specification $\varphi$ for which the trajectory was generated (see Corollary \ref{cor:rob_tube}). The following assumption now relates the minimum allowable radius $\rho$ of the robustness tube to the minimum allowable separation $\delta$ between two UAS.
\begin{myass} \label{assumption1} For each of the two UAS in conflict, the radius of the robustness tube is at least $\delta/2$, i.e. $\min (\rho_1,\rho_2) \geq \delta/2$, where $\rho_1$ and $\rho_2$ are the robustness values of UAS 1 and 2, respectively. \end{myass} This assumption covers the limiting case where the robustness tubes are just wide enough to have the two UAS placed along opposing faces (of a cube, at the same time step) and still achieve the minimum separation between them. We assume that all the trajectories generated by the independent planning have sufficient robustness to satisfy this assumption (see Sec. \ref{sec:problem_planning}). We now define the problem of collision avoidance with satisfaction of STL specifications: \begin{myprob} \label{prob:deconfliction} Given two planned $N$-step UAS trajectories $\mathbf{p}_1$ and $\mathbf{p}_2$ that have a conflict, the collision avoidance problem is to find new sequences of positions $\mathbf{p}_1'$ and $\mathbf{p}_2'$ that meet the following conditions: \begin{subequations} \begin{align} ||p_{1,k}'-p_{2,k}'||_\infty &\geq \delta \, \forall k = 0,\dotsc,N \label{eq:msep}\\ p_{j,k}' &\in \mathbf{P}_j \, \forall j=1,2, \, \forall k = 0,\dotsc,N \label{eq:intube} \end{align} \end{subequations} \end{myprob} This means that we need a new trajectory for each UAS such that the two achieve the minimum separation distance and also stay within the robustness tubes around their originally planned trajectories (see Corollary \ref{cor:rob_tube}). \subsection{Convex constraints for collision avoidance} Let $z_k$ be the difference in UAS positions at time step $k$. For the two UAS not to be in conflict, we need \begin{equation} \label{eq:noconf} z_k = p_{1,k} - p_{2,k} \not \in \mathbb{B}_\delta \, \forall k \end{equation} This is a non-convex constraint. For a computationally tractable controller formulation which solves problem \ref{prob:deconfliction}, we define convex constraints that, when satisfied, imply eq. \eqref{eq:noconf}.
The $3$D cube $\mathbb{B}_\delta$ can be defined by a set of linear inequality constraints of the form $\widetilde{H}^i z \leq \widetilde{g}^i \, \forall i=1,\dotsc,6$. Eq.~\eqref{eq:noconf} is satisfied when $\exists i \, | \, \widetilde{H}^i z > \widetilde{g}^i$. Let $H = -\widetilde{H}$ and $g = -\widetilde{g}$; then for any $i \in \{1,\dotsc,6\}$, \begin{equation} \label{eq:pickaside} H^i(p_{1,k}-p_{2,k}) < {g}^i \Rightarrow (p_{1,k}-p_{2,k}) \not \in \mathbb{B}_\delta \end{equation} Intuitively, picking one $i$ at time step $k$ results in a configuration (in position space) where the two UAS are separated in one of two ways along one of three axes of motion\footnote{Two ways along one of three axes define the $6$ options, $i\in\{1,\ldots,6\}$.}, e.g. if at time step $k$ we select $i \,|\, H^i=\begin{bmatrix}0& 0& 1\end{bmatrix},\, g^i = -\delta$, it implies that UAS 2 flies over UAS 1 by $\delta$\,m, and so on. \section{Centralized solution: MILP formulation} \label{sec:subsec_milp} Let the dynamics of either UAS\footnote{For simplicity we assume both UAS have identical dynamics, associated with multi-rotor robots; however, our approach would work otherwise.} be of the form $x_{k+1} = Ax_k + Bu_k$. The states $x_k \in \mathbb{R}^6$ are the positions and velocities in 3D space, i.e. $x_k = [p_k,\,v_k]^T$. The inputs $u_k \in \mathbb{R}^3$ are the thrust, roll and pitch of the UAS. The matrices $A$ and $B$ can be obtained through linearization of the UAS dynamics around hover and discretization in time \cite{PantAMNDM15_Anytime}. Let $C$ be the observation matrix such that $p_k=Cx_k$. For a conflict within the next $N$ steps, solving the following receding-horizon MILP over the variables of the two UAS results in new trajectories $\mathbf{p}_1', \, \mathbf{p}_2'$ that satisfy the minimum separation requirement \eqref{eq:noconf}.
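The face-selection implication \eqref{eq:pickaside} is easy to verify numerically: satisfying any one of the six convex half-space constraints places $z_k$ outside the cube $\mathbb{B}_\delta$. A sketch with an assumed $\delta$:

```python
import numpy as np

DELTA = 1.0   # assumed minimum separation (m)

# B_delta = {z : Htilde z <= gtilde}: the six axis-aligned faces of the cube.
Htilde = np.vstack([np.eye(3), -np.eye(3)])   # outward normals: +x,+y,+z,-x,-y,-z
gtilde = DELTA * np.ones(6)
H, g = -Htilde, -gtilde                       # sign convention of eq. (pickaside)

def outside_cube(z):
    """The non-convex condition: z is NOT inside B_delta."""
    return bool(np.any(Htilde @ z > gtilde))

def face_satisfied(i, z):
    """The convex single-face condition H^i z < g^i."""
    return bool(H[i] @ z < g[i])

# Example from the text: the face with H^i = [0, 0, 1], g^i = -delta
# (index i = 5 in this ordering) means UAS 2 flies over UAS 1 by delta.
z = np.array([0.0, 0.0, -1.5 * DELTA])        # z = p1 - p2
implication_holds = (not face_satisfied(5, z)) or outside_cube(z)
```

The face index and the ordering of the rows of `Htilde` are implementation choices for this sketch; any labeling of the six faces works as long as it is used consistently.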
Let $\mathbf{x}_j \in \mathbb{R}^{6(N+1)}$ be the pre-planned full state trajectories, $\mathbf{x}_j' \in \mathbb{R}^{6(N+1)}$ the new full state trajectories and $\mathbf{u}_j' \in \mathbb{R}^{3N}$ the new controls to be computed for the two UAS ($j=1,\,2$). Let $\mathbf{b} \in \{0,1\}^{6(N+1)}$ be binary decision variables, and let $M$ be a large positive number; then the MILP problem is defined as: \begin{equation} \label{eq:CentralMILP} \resizebox{.442\textwidth}{!}{% $ \begin{aligned} \min_{\mathbf{u}_1', \, \mathbf{u}_2', \, \mathbf{b} |\mathbf{x}_1,\, \mathbf{x}_2} &L(\mathbf{x}_1', \, \mathbf{u}_1', \, \mathbf{x}_2', \, \mathbf{u}_2') \\ x_{j,0}' &= x_{j,0} \, \forall j \in \{1,2\} \\ x_{j,k+1}' &= Ax_{j,k}' + Bu_{j,k}' \, \forall k \in \{0,\dotsc,N-1\} , \, \forall j \in \{1,2\}\\ Cx'_{j,k} &\in P_{j,k} \, \forall k \in \{0,\dotsc,N\} , \, \forall j \in \{1,2\}\\ H^{i}C(x_{1,k}'\!-\!x_{2,k}') &\leq {g}^{i} \!+\! M(1\!-\!b^i_{k}) \, \forall k \in \{0,\dotsc\!,N\}, \forall i \in \{1,\dotsc,\!6\} \\ \sum_{i=1}^6 b^i_{k} &\geq 1 \, \forall k \in \{0,\dotsc,N\} \\ u_{j,k}' &\in U \, \forall k \in \{0,\dotsc,N-1\} , \, \forall j\in \{1,2\}\\ x_{j,k}' &\in X \, \forall k \in \{0,\dotsc,N\}, \forall j\in \{1,2\} \end{aligned}$% } \end{equation} Here $b^i_k$ encodes the action $i=1,\dotsc,6$ taken for avoiding a collision at time step $k$, which corresponds to a particular side of the cube $\mathbb{B}_\delta$. A solution (when it exists) to this MILP results in new trajectories that avoid collisions and stay within the robustness tubes of the original trajectories. However, this method relies on solving for a pair of UAS in a centralized manner. It also introduces $6$ binary variables (and associated constraints) per time step of the horizon, which can make the MILP computationally intractable for a real-time implementation. Therefore, we develop a decentralized approach in the following sections.
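The role of the binary variables in \eqref{eq:CentralMILP} is the standard big-M construction: $b^i_k=1$ activates face constraint $i$ at step $k$, while $b^i_k=0$ makes it vacuous. A minimal numerical sketch, where the constant $M$ and the test vectors are illustrative assumptions:

```python
import numpy as np

M = 1e4          # assumed big-M constant
DELTA = 1.0      # assumed minimum separation
H_i = np.array([0.0, 0.0, 1.0])   # one face: "UAS 2 above UAS 1"
g_i = -DELTA

def bigM_ok(z, b):
    """H^i z <= g^i + M (1 - b): binding when b = 1, vacuous when b = 0."""
    return bool(H_i @ z <= g_i + M * (1 - b))

z_sep = np.array([0.0, 0.0, -2.0])    # z = p1 - p2, separated along this face
z_conf = np.array([0.0, 0.0, 0.0])    # not separated

# With b = 1 only the separated configuration passes; with b = 0 both do.
# The MILP's constraint sum_i b^i_k >= 1 then forces at least one face
# to be binding at every time step.
```

In a MIP solver the same effect is usually obtained with indicator constraints; the explicit big-M form above is only one way to encode the disjunction.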
\section{Decentralized solution: Learning-to-Fly} \label{sec:CA_MPC} The distributed and co-operative collision avoidance MPC scheme of Section \ref{sec:ca_mpc}, together with the conflict resolution algorithm described in Section \ref{sec:learning_supervised}, forms the online collision avoidance scheme \textit{Learning-to-Fly} (L2F), our main contribution. We assume that the two UAS can communicate their pre-planned $N$-step trajectories $\mathbf{p}_1,\, \mathbf{p}_2$ to each other (see Sec. \ref{sec:problem_planning}). Instead of solving the centralized MILP, we solve problem \ref{prob:deconfliction} by following these steps: \begin{enumerate} \item \textbf{Conflict resolution:} UAS 1 and 2 make a \textit{sequence of decisions}, $\mathbf{d}=(d_0,\ldots,d_N)$, to avoid collision. Each $d_k\in\{1,\ldots,6\}$ represents a particular choice of $H$ and $g$ at time step $k$, see eq.~\eqref{eq:pickaside}. Section \ref{sec:learning_supervised} describes our proposed learning-based method for conflict resolution. \item \textbf{UAS 1 CA-MPC:} UAS 1 takes the conflict resolution sequence $\mathbf{d}$ from step 1 and solves a convex optimization to try to deconflict while assuming UAS 2 maintains its original trajectory. After the optimization, the new trajectory for UAS 1 is sent to UAS 2. \item \textbf{UAS 2 CA-MPC:} (If needed) UAS 2 takes the same conflict resolution sequence $\mathbf{d}$ from step 1 and solves a convex optimization to try to avoid UAS 1's new trajectory. Section~\ref{sec:ca_mpc} provides more details on CA-MPC steps 2 and 3. \end{enumerate} The overall algorithm is shown in Alg. \ref{alg:l2f}. A visualization of the above steps is presented in Fig.~\ref{fig:concept}. Such a decentralized approach differs from the centralized MILP approach, where both the binary decision variables and the continuous control variables for each UAS are decided concurrently.
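The sequential logic of steps 2 and 3 can be sketched with a toy stand-in for CA-MPC that simply shifts a trajectory along the decided axis by up to the tube radius. All dynamics and optimization details are abstracted away, and the numbers are illustrative assumptions consistent with Assumption~\ref{assumption1}:

```python
import numpy as np

DELTA, RHO = 1.0, 0.6   # assumed separation and tube radius (RHO >= DELTA/2)

def try_avoid(p_self, p_other, d_sign, prty):
    """Toy stand-in for CA-MPC: shift p_self along z by the tube radius in
    the direction given by the decision, and return the new trajectory plus
    the total remaining separation violation (the "slack")."""
    p_new = p_self.copy()
    p_new[:, 2] += prty * d_sign * RHO            # stays inside the tube
    gap = np.abs(p_new[:, 2] - p_other[:, 2])
    return p_new, float(np.maximum(0.0, DELTA - gap).sum())

N = 5
p1 = np.zeros((N, 3))
p2 = np.zeros((N, 3))        # head-on conflict at every time step
d_sign = 1.0                 # decision: separate along +z

# Step 2: UAS 1 (prty = -1) tries alone.
p1_new, s1 = try_avoid(p1, p2, d_sign, prty=-1)
if s1 == 0.0:
    p2_new = p2              # UAS 2 keeps its original trajectory
else:
    # Step 3: UAS 2 (prty = +1) maneuvers against UAS 1's revised trajectory.
    p2_new, _ = try_avoid(p2, p1_new, d_sign, prty=+1)

min_sep = float(np.min(np.abs(p1_new[:, 2] - p2_new[:, 2])))
```

Here one UAS cannot resolve the head-on conflict alone (its tube radius is less than $\delta$), but the two opposite-priority maneuvers together achieve the required separation, mirroring the two-stage structure of L2F.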
\subsection{Distributed and co-operative collision avoidance MPC} \label{sec:ca_mpc} Each UAS $j\in\{1,2\}$ solves the following Collision Avoidance MPC optimization\footnote{Enforcing the separation constraint at each time step can lead to a restrictive formulation, especially in cases where the two UAS are only briefly close to each other. It does however give us an optimization whose structure does not change over time, and which can avoid collisions in cases where the UAS could run across each other more than once in quick succession (e.g. \url{https://tinyurl.com/uex7722}), which is something ACAS-Xu was not designed for.}: \textbf{$\text{CA-MPC}_j(\mathbf{x}_j,\, \mathbf{x}_{avoid},\, P_j,\, \mathbf{d},\, prty_j)$}: \begin{equation} \label{eq:campc} \resizebox{.44\textwidth}{!}{ $ \begin{aligned} & \min_{\mathbf{u}_j',\mathbf{\lambda}_j|\mathbf{x}_j,\mathbf{x}_{avoid}} \sum_k \lambda_{j,k} \\ x_{j,0}' &= x_{j,0} \\ x_{j,k+1}' &= Ax_{j,k}' + Bu_{j,k}' \, \forall k \in \{0,\dotsc,N-1\} \\ Cx_{j,k}' &\in P_{j,k} \, \forall k \in \{0,\dotsc,N\} \\ prty_j \!\cdot\! H^{d_k}C(x_{avoid,k}\!-x_{j,k}') &\leq g^{d_k} \!+\! \lambda_{j,k}\, \forall k \in \{0,\dotsc,N\} \\ \lambda_{j,k} &\geq 0\, \forall k \in \{0,\dotsc,N\} \\ u_{j,k}' &\in U\, \forall k \in \{0,\dotsc,N-1\} \\ x_{j,k}' &\in X \, \forall k \in \{0,\dotsc,N\} \end{aligned} $ } \end{equation} where $\mathbf{x}_j$ is the pre-planned trajectory of UAS $j$, $\mathbf{x}_{avoid}$ is the pre-planned trajectory from which UAS $j$ must attain a minimum separation, and $prty_j \in \{-1, +1\}$ is the priority of UAS $j$ w.r.t.\ the other UAS in conflict. The decision sequence $\mathbf{d}$ enters through $H^{d_k},\, g^{d_k}$. This MPC optimization tries to find a new trajectory $\mathbf{x}_j'$ for UAS $j$ that minimizes the slack variables $\lambda_{j,k}$, which correspond to violations of the minimum separation constraint \eqref{eq:pickaside} w.r.t.\ the pre-planned trajectory $\mathbf{x}_{avoid}$ of the UAS in conflict.
The constraints in \eqref{eq:campc} ensure that UAS $j$ respects its dynamics and input constraints, and its state constraints to stay inside the robustness tube. An objective value of $0$ implies that UAS $j$'s new trajectory satisfies the minimum separation between the two UAS, see eq.~\eqref{eq:pickaside}. \textbf{CA-MPC optimization for UAS 1:} UAS 1, with the lower priority $prty_1 = -1$, first attempts to resolve the conflict for the given sequence of decisions $\mathbf{d}$. An objective value of $0$ implies that UAS 1 alone can satisfy the minimum separation between the two UAS. Otherwise, UAS 1 alone could not create separation, and UAS 2 now needs to maneuver as well. \textbf{CA-MPC optimization for UAS 2:} If UAS 1 is unsuccessful at collision avoidance, it communicates its revised trajectory $\mathbf{x}_1'$ to UAS 2, which has $prty_2 = +1$. UAS 2 then creates a new trajectory $\mathbf{x}_2'$ (w.r.t.\ the same decision sequence $\mathbf{d}$). Alg. \ref{alg:l2f} is designed to be computationally lighter than the MILP approach (see Section~\ref{sec:subsec_milp}), but unlike the MILP it is not complete. In Section \ref{sec:experiments}, through extensive simulations, we show that the L2F approach achieves a significant improvement in runtime while maintaining comparable performance in terms of separation.
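For fixed trajectories, the slack variables of \eqref{eq:campc} can be evaluated in closed form, which makes the zero-slack test explicit. A minimal sketch, where the face library and the example trajectories are illustrative assumptions:

```python
import numpy as np

DELTA = 1.0
# Face library, as in the separation constraint: H z < g with H = -Htilde.
H = -np.vstack([np.eye(3), -np.eye(3)])
g = -DELTA * np.ones(6)

def slacks(p_self, p_avoid, d, prty):
    """lambda_k = max(0, prty * H^{d_k} (p_avoid_k - p_self_k) - g^{d_k})."""
    lam = np.empty(len(d))
    for k, dk in enumerate(d):
        lam[k] = max(0.0, prty * (H[dk] @ (p_avoid[k] - p_self[k])) - g[dk])
    return lam

N = 4
d = [5] * N                                  # face with H^5 = [0,0,1], g^5 = -DELTA
p2 = np.zeros((N, 3))                        # trajectory to avoid
p1_sep = np.tile([0.0, 0.0, -1.2 * DELTA], (N, 1))  # UAS 1 already 1.2*delta below
p1_conf = np.zeros((N, 3))                   # UAS 1 in conflict

lam_sep = slacks(p1_sep, p2, d, prty=-1)     # zero slack: separation satisfied
lam_conf = slacks(p1_conf, p2, d, prty=-1)   # positive slack at every step
```

A total slack of zero is exactly the success condition checked after each CA-MPC solve; a positive total slack triggers the next stage of the scheme.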
\begin{algorithm} \KwData{Pre-planned trajectories, robustness tubes} \KwResult{Sequence of control signals $\mathbf{u}_1'$, $\mathbf{u}_2'$ for the two UAS} Get $\mathbf{d}$ from conflict resolution\; UAS 1 solves CA-MPC optimization \eqref{eq:campc}:\\ $(\mathbf{x}_1', \mathbf{u}_1', \mathbf{\lambda}_1)=\textbf{CA-MPC}_1(\mathbf{x}_1,\mathbf{x}_2, P_1, \mathbf{d}, -1)$\; \eIf{$\sum_k \lambda_{1,k} = 0$} {\textbf{Done}: UAS 1 alone has created separation; Set $\mathbf{u}_2'=\mathbf{u}_2$ } { UAS 1 transmits its solution to UAS 2\; UAS 2 solves CA-MPC optimization \eqref{eq:campc}: \\ $(\mathbf{x}_2', \mathbf{u}_2', \mathbf{\lambda}_2)=\textbf{CA-MPC}_2(\mathbf{x}_2,\mathbf{x}_1', P_2, \mathbf{d}, +1)$\; \eIf{$\sum_k \lambda_{2,k} = 0$}{\textbf{Done:} UAS 2 has created separation } {\eIf{$||p_{1,k}'-p_{2,k}'||_\infty \geq \delta \, \forall k = 0,\dotsc,N$} {\textbf{Done}: UAS 1 and UAS 2 created separation} {\textbf{Not done}: UAS still violate eq. \eqref{eq:msep}}} } Apply control signals $\mathbf{u}_1'$, $\mathbf{u}_2'$ if \textbf{Done}; else \textbf{Fail}. \caption{Learning-to-Fly: Decentralized and cooperative collision avoidance for two UAS. Also see Fig. \ref{fig:concept}.} \label{alg:l2f} \end{algorithm} The solution of CA-MPC can be characterized as follows: \begin{definition}[Zero-slack solution] \label{def:zero_slack} The solution of the CA-MPC optimization \eqref{eq:campc} is called a \textit{zero-slack solution} if, for a given decision sequence $\mathbf{d}$, either 1) there exists an optimal solution of \eqref{eq:campc} such that $\sum_k\lambda_{1,k}=0$, or 2) problem \eqref{eq:campc} is feasible with $\sum_k\lambda_{1,k}>0$ and there exists an optimal solution of \eqref{eq:campc} such that $\sum_k\lambda_{2,k}=0$. \end{definition} The following two theorems make important connections between feasible solutions of the MILP and CA-MPC formulations. They are a consequence of the construction of the CA-MPC optimizations. We omit the proofs for brevity.
\begin{theorem}[Sufficient condition for CA] \label{th:CAMPC_success} A zero-slack solution of \eqref{eq:campc} implies that the resulting trajectories for the two UAS are non-conflicting and within the robustness tubes of the initial trajectories\footnote{Theorem~\ref{th:CAMPC_success} formulates a conservative result, as \eqref{eq:pickaside} is a convex under-approximation of the originally non-convex collision avoidance constraint \eqref{eq:noconf}. Indeed, non-zero slack $\exists k| \lambda_{2,k}>0$ does not necessarily imply the violation of the mutual separation requirement \eqref{eq:msep}. The control signals $u_1',u_2'$ computed by alg. \ref{alg:l2f} can therefore in some instances still create separation between drones even when the conditions of Theorem \ref{th:CAMPC_success} are not satisfied.}. \end{theorem} \input{chapters/feasMILPCAMPC} \input{chapters/learning_supervised} \subsection{Results and comparison to other methods} \label{sec:exp_results} We analyzed three other methods alongside the proposed learning-based approach for L2F. \begin{enumerate} \item A \textbf{random} decision approach, which outputs a sequence sampled from the discrete uniform distribution. \item A \textbf{greedy} approach, which at each time step selects the discrete decision for which the two UAS already have the most separation available. \item A centralized \textbf{MILP} solution, which picks the decision corresponding to a binary decision variable in \eqref{eq:CentralMILP}. \end{enumerate} For the evaluation, we measured and compared the \textbf{separation rate} and the \textbf{computation time} over 10K test trajectories. The \textit{separation rate} is the fraction of initially conflicting trajectories for which the UAS managed to achieve minimum separation. \begin{figure}[t!]
\begin{center} \includegraphics[width=0.49\textwidth, trim={0 0.5cm 0 0.25cm}]{figures/rho_rate_uniform.pdf} \end{center} \vspace{-5pt} {\footnotesize \caption{\small Model sensitivity analysis with respect to variations of the fraction $\rho/\delta$, which connects the minimum allowable robustness tube radius $\rho$ to the minimum allowable separation $\delta$ between two UAS, see Assumption~\ref{assumption1}. A higher $\rho/\delta$ implies there is more room to maneuver within the robustness tubes for CA.} \label{fig:rho_rate_33}} \vspace{-10pt} \end{figure} Figure~\ref{fig:rho_rate_33} shows the trade-off between performance in terms of separation rate and the $\rho/\delta$ fraction, which defines the connection between the robustness tube radius $\rho$ and the minimum separation $\delta$. A higher $\rho/\delta$ implies wider robustness tubes for the UAS to maneuver within, which should make the CA task easier. In the case of $\rho/\delta=0.5$, where the robustness tubes are just wide enough to fit two UAS (see Assumption~\ref{assumption1}), we see that L2F significantly outperforms the other methods (excluding the MILP). As the ratio grows, the performance of all methods improves, with L2F still outperforming the others and topping out at a best-case separation rate of $1$. The worst-case performance for L2F is $0.9$, which is again significantly better than the other approaches. Table \ref{tbl:success-time} shows the separation rates for three different $\rho/\delta$ values, as well as the computation times for the conflict resolution schemes plus the CA-MPC optimizations. In terms of separation rate, L2F outperformed the random and greedy approaches. The centralized MILP outperformed L2F; however, the computation time for the centralized approach was orders of magnitude higher than that of L2F. These results show the benefits of L2F compared to the other approaches, especially when considering the success-computation time trade-off.
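For reference, the greedy baseline in the comparison above can be sketched as picking, at each time step, the face of $\mathbb{B}_\delta$ along which the two UAS already have the largest separation (a hypothetical implementation for illustration, not the paper's code):

```python
import numpy as np

# Outward face normals of B_delta: +x, +y, +z, -x, -y, -z.
Htilde = np.vstack([np.eye(3), -np.eye(3)])

def greedy_decisions(p1, p2):
    """At each step pick the face along which the two UAS already have the
    most separation, i.e. the i maximizing Htilde^i (p1_k - p2_k)."""
    z = p1 - p2                   # shape (N, 3)
    scores = z @ Htilde.T         # shape (N, 6)
    return np.argmax(scores, axis=1)

p1 = np.array([[0.2, 0.0, -0.4],
               [2.0, 0.0,  0.0]])
p2 = np.zeros((2, 3))
d = greedy_decisions(p1, p2)      # separate along -z first, then +x
```

Being purely myopic, this heuristic ignores dynamic feasibility and future time steps, which is consistent with its lower separation rate in Table~\ref{tbl:success-time}.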
\begin{table}[h] \vspace{-10pt} \renewcommand{\arraystretch}{1.3} \setlength{\tabcolsep}{2.5pt} \centering \begin{tabular}{|l||c|c|c||c|c|} \hline \multirow{2}{*}{\textbf{CA Scheme}} & \multicolumn{3}{c||}{\textbf{Separation Rate}} & \multicolumn{2}{c|}{\textbf{Computation time}} \\ \cline{2-6} & \textbf{$\rho/\delta$ = 0.5} & \textbf{$\rho/\delta$ = 0.95} & \textbf{$\rho/\delta$ = 1.15} & \textbf{Mean} & \textbf{Std} \\ \hline\hline \textbf{Random} & 0.311 & 0.609 & 0.661 & 2.02ms & 0.17ms \\ \hline \textbf{Greedy} & 0.529 & 0.836 & 0.994 & 3.82ms & 0.25ms \\ \hline \textbf{L2F} & 0.901 & 0.999 & 1 & 9.36ms & 1.75ms \\ \hline \textbf{MILP} & 1 & 1 & 1 & 68.5s & 87.3s \\ \hline \end{tabular} \caption{{\small Separation rates and computation times (mean and standard deviation) of the different CA schemes. \textit{Separation rate} is the fraction of conflicting trajectories for which the separation requirement \eqref{eq:msep} is satisfied after CA. }} \label{tbl:success-time} \vspace{-5pt} \end{table} \begin{figure}[tb] \begin{center} \includegraphics[width=0.49\textwidth,trim={0 1cm 0 0.25cm}]{figures/non_parabola2.pdf} \end{center} \vspace{-2pt} {\footnotesize \caption{\small Trajectories for 2 UAS from different angles. The dashed (planned) trajectories have a collision at the halfway point. The solid ones, generated through the L2F method, avoid the collision while remaining within the robustness tubes of the original trajectories. Initial UAS positions are marked as stars. A playback of the scenario is at \url{https://tinyurl.com/y8cm65ya}.} \label{fig:scen1_w_ca}\vspace{-10pt}} \end{figure} Figure~\ref{fig:scen1_w_ca} shows an example of two UAS trajectories before and after collision avoidance through the L2F method. In addition, in order to evaluate the feasibility of the deconflicted trajectories, we have also run experiments using two Crazyflie quad-rotor robots.
Video recordings of the actual flights and additional simulations can be found at {\footnotesize\url{https://tinyurl.com/yxttq7l5}}. \subsection{Overview of approach and paper outline} \label{sec:overview} In this paper, we aim to develop a framework for UAS traffic management (UTM) that solves problems \ref{prob:traj_STL_highlevel} and \ref{prob:CA_highlevel}. Figure~\ref{fig:concept} depicts the proposed planning and control process and indicates the relevant sections in the paper. \begin{enumerate} \item \textit{Trajectory planning with Signal Temporal Logic (STL) specifications:} Each UAS, $j$, given the mission as an STL specification $\varphi_j$, generates a trajectory that robustly satisfies $\varphi_j$. The \emph{robustness value} $\rho_{\varphi_j}$, associated with this trajectory, corresponds to the maximum deviation from the planned trajectory such that the UAS $j$ still satisfies its mission $\varphi_j$. \end{enumerate} Two UAS within communication range share a look-ahead of planned trajectories and if a future collision is detected, new trajectories are needed that still satisfy their original mission specifications. For this, we develop our decentralized approach L2F, which consists of two stages: \begin{enumerate} \setcounter{enumi}{1} \item \textit{Collision detection} and \textit{Conflict resolution:} When a potential collision is detected, a supervised-learning based conflict resolution policy (CR-S), with pre-defined priority among the two UAS, generates a sequence of discrete decisions corresponding to maneuvers to avoid the collision. \item \textit{Distributed and co-operative Collision Avoidance MPC (CA-MPC):} The CA-MPC for each UAS takes as input the conflicting trajectories and the output of the conflict resolution policy, and controls the UAS to avoid collision. 
\end{enumerate} In Section \ref{sec:experiments} we evaluate our framework for 2 UAS collision avoidance through extensive simulations and compare its performance to other approaches. Section \ref{sec:casestudy} demonstrates a particular UTM framework case study. Finally, in Section \ref{sec:future} we discuss potential future directions. \subsection{Outline of the paper} The following section (Section \ref{sec:overview}) presents an overview of the framework we develop to solve problems \ref{prob:traj_STL_highlevel} and \ref{prob:CA_highlevel}. Section \ref{sec:STL_intro} is an introduction to Signal Temporal Logic (STL), which we use for formally representing mission requirements. Section \ref{sec:problem_planning} presents the variant of the algorithm from \cite{pant2018fly} that we use for generating UAS trajectories that satisfy STL specifications. Section \ref{sec:CA} formalizes the two-UAS collision-avoidance problem and formulates its solution as a Mixed-Integer Linear Program (MILP). Sections \ref{sec:CA_MPC} and \ref{sec:learning} outline our approach to the problem, where instead of solving the computationally expensive MILP, we first solve a discrete conflict-resolution problem and then apply a decentralized and co-operative collision avoidance MPC controller. These two components form the basis of our framework. We present a learning-based approach for the conflict resolution, as well as a greedy, computationally lightweight approach to the problem that serves as a baseline for comparison. In Section \ref{sec:experiments} we evaluate our framework for 2 UAS collision avoidance through extensive simulations and compare its performance to the MILP approach and the greedy algorithm. Section \ref{sec:casestudy} demonstrates how we combine the trajectory generation method of Section \ref{sec:problem_planning} and the collision avoidance schemes developed in this paper into a UTM framework.
Finally, in Section \ref{sec:future} we discuss potential future directions to overcome the limitations of the current approach. \subsection{CR-S: A Supervised Learning Approach} \label{sec:learning_supervised} Motivated by Theorem~\ref{th:MILP_CAMPC_relation}, we propose to learn the conflict resolution policy from the MILP solutions. To do so, we use a \textit{Long Short-Term Memory} (LSTM)~\cite{hochreiter1997long} recurrent neural network augmented with fully-connected layers. LSTMs perform better than traditional recurrent neural networks on sequential prediction tasks~\cite{gers2002learning}. \begin{figure}[tb] \begin{center} \includegraphics[width=0.49\textwidth, trim={0cm 0.5cm 0cm 0cm}]{figures/lstm_arc2.pdf} \end{center} {\caption{\small Proposed LSTM model architecture for CR-S. LSTM layers are shown unrolled over $N$ time steps. The inputs are $z_k$, the differences between the planned UAS positions, and the outputs are the decisions $d_k$ for conflict resolution at each time $k$ in the horizon.} \label{fig:lstm_arc}} \vspace{-15pt} \end{figure} The network is trained to map a difference trajectory $\mathbf{z}=\mathbf{x}_1-\mathbf{x}_2$ (as in eq.~\eqref{eq:noconf}) to a decision sequence $\mathbf{d}$ that deconflicts the pre-planned trajectories $\mathbf{x}_1$ and $\mathbf{x}_2$. For creating the training set, $\mathbf{d}$ is produced by solving the MILP problem~\eqref{eq:CentralMILP}, i.e., obtaining a sequence of binary decision variables $\mathbf{b}\in\{0,1\}^{6(N+1)}$ and translating it into the decision sequence $\mathbf{d}\in\{1,\ldots,6\}^{N+1}$. The proposed architecture is presented in Figure~\ref{fig:lstm_arc}. The input layer is connected to a block of three stacked LSTM layers. The output layer is a time-distributed dense layer with a softmax activation function such that each output value is a decision $d_k$, $k=\{0,\ldots,N\}$.
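The translation from the MILP binaries $\mathbf{b}$ to the training labels $\mathbf{d}$ amounts to reading off, at each time step, which of the six binary variables is active. A minimal sketch follows; the function name and the per-time-step grouping of $\mathbf{b}$ are our assumptions, not taken from the paper's code.

```python
# Illustrative sketch: translate the MILP binary decision variables
# b in {0,1}^(6(N+1)) into the decision sequence d in {1,...,6}^(N+1).
# Assumption: b is a flat list grouped per time step (6 entries per step).

def binaries_to_decisions(b, N):
    """Return the decision sequence d, where d_k in {1,...,6} is the
    index of the binary variable active at time step k."""
    assert len(b) == 6 * (N + 1)
    d = []
    for k in range(N + 1):
        step = b[6 * k: 6 * (k + 1)]
        # exactly one side of the separating hypercube is selected per step
        d.append(step.index(1) + 1)
    return d
```

Each label $d_k$ then serves as the target class for the softmax output of the network at step $k$.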
\subsection{UAS planning with STL specifications} \label{sec:problem_planning} Fly-by-logic \cite{pant2018fly} generates trajectories by centrally planning for fleets of UAS with STL specifications, e.g. the specification $\varphi_{\text{reach-avoid}}$ of example \ref{ex:reach_avoid_exmp}. It maximizes a smooth approximation $\srob_\formula$ of the robustness function \cite{pant2017smooth} by picking waypoints (connected via jerk-minimizing splines \cite{MuellerTRO15}) for all UAS through a centralized, non-convex optimization. While successful in planning for multiple multi-rotor UAS, performance degrades as the number of UAS being planned for increases. The non-convex optimization involving the variables of all the UAS becomes harder as the number of variables increases \cite{pant2018fly}, in particular because for $J$ UAS, ${J \choose 2}$ terms for pair-wise separation between the UAS are needed. Taking this into account, we use the underlying optimization of \cite{pant2018fly} to generate trajectories, but ignore the mutual separation requirement, allowing each UAS to independently (and in parallel) solve for its own STL specification. For the timed reach-avoid specification \eqref{eq:timed_RA} in example \ref{ex:reach_avoid_exmp}, this is equivalent to each UAS generating its own trajectory to satisfy $\varphi_j = \varphi_{reach, j} \land \varphi_{avoid, j}$, independently of the other UAS. Associated with each of these trajectories $\sstraj_j$ is a robustness value $\rho_{\varphi_j}$. Ignoring the collision avoidance requirement ($\varphi_\text{separation}$) in the planning stage allows the specification \eqref{eq:timed_RA} to be decoupled across UAS, but now requires online pairwise UAS collision avoidance if the planned trajectories are in conflict. This is covered in the following section. \textbf{Note:} In the following sections, $\mathbf{x}$ will refer to a full-state (discrete-time, finite duration) trajectory for a UAS.
We will also use $\mathbf{p}$ to refer to the position components of that trajectory, the position trajectory. $x_k$ (and $p_k$) refer to the components of the trajectory at time step $k$. \subsection{Experimental Setup} \label{sec:exp_setup} \section{Preliminaries} \label{sec:preliminaries} We use Signal Temporal Logic (STL) to specify the mission objectives that the UAS need to satisfy (Problem \ref{prob:traj_STL_highlevel}). This section provides a brief introduction to STL and the trajectory generation approach. \input{chapters/STL_short} \input{chapters/stl_rob_short} \input{chapters/problemStatement_planning} \section{Conclusion} \label{sec:future} \textbf{Summary.} We developed \textit{Learning-to-Fly} (L2F), a two-stage, on-the-fly and predictive collision avoidance approach that combines learning-based decision-making for conflict resolution with decentralized linear programming-based UAS control. Through extensive simulations and demonstrations on real quad-rotor drones, we show that L2F, with a run-time of $<10$ms, is computationally fast enough for real-time implementation. It is successful in resolving $100\%$ of collisions in most cases, with a graceful degradation to a worst-case performance of $90\%$ when there is little room for the UAS to maneuver. L2F also enables independent UAS planning, speeding up the process compared to centrally planning for all the UAS in the airspace. A 4-UAS case study shows that the independent planning is $3.5\times$ faster. \textbf{Limitations and Future Work.} While pairwise collision avoidance is sufficient when the airspace density is low, in the future we will extend the approach to cases where more than two UAS could be in conflict with each other. As L2F does not always succeed, we plan to investigate this further and use the failure cases as counterexamples to improve the learning-based models. In general, we expect L2F to be realized within a larger UTM system with additional contingencies (e.g.
FAA Lost Link procedures \cite{pastakia_faa_2015}), including the possibility of online re-planning of missions when L2F cannot guarantee collision avoidance. \section{Appendix} \label{sec:Appendix} \input{chapters/appendix/greedyAlgo} \input{chapters/appendix/rural_case} \input{chapters/appendix/separation} \input{chapters/appendix/slack-separation} \input{chapters/appendix/crazy_exp_appendix} \section{Introduction} The development of safe and reliable UAS Traffic Management (UTM) is necessary to enable Urban Air Mobility (UAM) \cite{NAP25646}. The two fundamental issues here are: a) mission planning for UAS fleets with guarantees on safety and performance, and b) real-time airborne collision avoidance (CA) methods so UAS run by different operators can share the airspace without a priori approval of all flight plans. Tackling the planning and inter-UAS collision avoidance jointly yields a computationally intractable problem as the number of UAS in the airspace increases \cite{pant2018fly, SahaRSJ14}. So we separate these two aspects in a manner where individual UAS (or those in the same fleet) plan independently, which in turn requires an approach for runtime collision avoidance. The scalability of this will be essential in UAM applications as there will be no central authority to monitor and enforce UAS safety for hundreds of drones per square mile. This stands in contrast to the existing Air-Traffic Control and collision avoidance methods for commercial aviation, like TCAS-II, which was designed to operate in traffic densities of up to 0.3 aircraft per square nautical mile (nmi), i.e., 24 aircraft within a 5 nmi radius, the highest traffic density envisioned over the next 20 years \cite{TCAS}. Airborne collision avoidance, however, is a complex problem. With high-speed UAS operating at low altitudes in cluttered urban airspace, decisions for collision avoidance need to be made within fractions of a second.
The CA system must also be able to take into account the environment (e.g. buildings and other infrastructure, altitude limits, geofenced areas etc.) around it, making the problem harder than simply avoiding inter-UAS collisions. \begin{figure}[t] \centering \includegraphics[width=0.35\textwidth]{figures/CADiagram2.pdf} \caption{\small Two UAS communicating their planned trajectories, and cooperatively maneuvering within their \textit{robustness tubes} to avoid a potential collision in the future.} \vspace{-10pt} \label{fig:CAdiagram} \end{figure} To overcome the limitations outlined above, we aim to solve the following problems: \begin{myprob} \label{prob:traj_STL_highlevel} \textbf{Independent trajectory planning} for an individual UAS (or fleets run by the same operator) to satisfy spatial, temporal and reactive mission objectives specified using Signal Temporal Logic (STL), independently of other UAS that could be in the airspace. \end{myprob} \begin{myprob} \label{prob:CA_highlevel} \textbf{Real-time pairwise predictive airborne collision avoidance} such that UAS mission requirements satisfied by the trajectories obtained by solving problem \ref{prob:traj_STL_highlevel} are not violated, e.g. UAS make it to their destinations in time while avoiding collisions with each other. \end{myprob} The airborne collision avoidance (Problem \ref{prob:CA_highlevel}) poses a larger challenge, and is the primary focus here. \begin{figure*}[t] \includegraphics[width=0.99\textwidth]{figures/Concept.pdf} \vspace{-10pt} \caption{\small Step-wise explanation and visualization of the framework. Each UAS generates its own trajectories to satisfy a mission expressed as a Signal Temporal Logic (STL) specification, e.g. regions in green are regions of interest for the UAS to visit, and the obstacle corresponds to infrastructure that all the UAS must avoid. 
When executing these trajectories, UAS communicate their trajectories to others in range to detect any collisions that may happen in the near future. If a collision is detected, the two UAS execute a conflict resolution scheme that generates a set of additional constraints that the UAS must satisfy in order to avoid the collision. A co-operative CA-MPC controls the UAS in order to best satisfy these constraints while ensuring each UAS's STL specification is still satisfied. This results in new trajectories (in solid blue and pink) that will avoid the conflict and still stay within the pre-defined robustness tubes.} \vspace{-15pt} \label{fig:concept} \end{figure*} \input{chapters/contributions} \input{chapters/overview} \input{chapters/relatedWork} \subsection{Experimental setup} All the simulations were performed on a computer with an AMD Ryzen 7 2700 8-core processor and 16GB RAM, running Ubuntu 18.04. The MILP formulation was implemented in MATLAB using Yalmip \cite{lofberg2004yalmip} with MOSEK v8 as the solver. The learning-based approach was implemented in Python 3 with Tensorflow 1.14 and the Keras API, and Casadi with QPOASES as the solver. We implemented the CA-MPC using CVXGEN for measuring computation times and for the real-time implementation in experiments on actual hardware. For the experiments, we set the minimum separation to $\delta = 0.1$m. The learning-based CR scheme was trained for $\rho = 0.055$, which is close to the lower bound in Assumption \ref{assumption1}. We generated a data set of 14K training and 10K test conflicting trajectories using the minimum-jerk trajectory generation algorithm from~\cite{pant2018fly}. The time horizon was set to $T=4$s with $dt=0.1$s. The initial and final waypoints were sampled uniformly at random from two 3D cubes close to the fixed collision point, and the initial velocities were set to zero. We trained and ran experiments for various network configurations.
For each model, the number of training epochs was set to 2K with a batch size of 2K. Each network was trained to minimize the categorical cross-entropy loss using the Adam optimizer with a learning rate of $0.001$. The model with 3 LSTM layers of 128 neurons each was chosen as the default learning-based CR model. \subsection{Contributions of this work} Our main contribution is \textit{Learning-to-Fly} (L2F)\footnote{Videos of the simulations and demonstrations in this paper can be viewed at \url{https://tinyurl.com/vvvuukh}}, a scheme for real-time, on-the-fly collision avoidance between two UAS whose main features are: \begin{enumerate} \item \textit{Systematic composition of machine learning and control theory:} We combine learning-based decision-making and linear programming-based control to solve the problem in a decentralized manner. Unlike many other ad-hoc Machine Learning-based solutions, we provide a sound theoretical justification for our approach in Theorem \ref{th:MILP_CAMPC_relation}. We also provide a sufficient condition for the scheme to work successfully (Theorem \ref{th:CAMPC_success}). \item \textit{A notion of priority among the UAS} can be encoded naturally in L2F, where the UAS with higher priority does not have to deviate from its originally planned trajectory until absolutely necessary. \item \textit{Computationally lightweight enough for real-time implementation:} Experimental results show that L2F, with a computation time of milliseconds, can be used in a real-time implementation at a high rate ($10$ Hz). \item \textit{High performance:} In the best case, L2F successfully results in 2-UAS collision avoidance in $100\%$ of the test cases, gracefully degrading to $90\%$ in the worst case. Comparisons with other methods also show the superior performance of L2F.
\item \textit{Enabling fast, independent planning for UAS with temporal logic objectives}, as individual UAS, or fleets of UAS run by the same operator, can plan for themselves without considering other UAS in the airspace while calling upon L2F for on-the-fly collision avoidance. For a 4-UAS case study, we demonstrate a speed-up of $3.5\times$ over the centralized planning method of \cite{pant2018fly}. \item \textit{Proof-of-concept demonstration} on Crazyflie quad-rotor robots to show feasibility on real UAS. \end{enumerate} \subsection{Related work} \label{sec:relatedwork} \subsubsection{UTM and Automatic Collision Avoidance approaches} The UAS Traffic Management (UTM) problem has been studied in various contexts. In the NASA/FAA Concept of Operations document \cite{FAA2018UTM}, an airspace allocation scheme is outlined where individual UAS reserve airspace in the form of 4D polygons (space and time), and the polygons of different UAS are not allowed to overlap. Similarly, \cite{maxetal} presents a voxel-based airspace allocation approach. Our approach is less restrictive and allows overlaps in the 4D polygons, but performs maneuvers for collision avoidance when two UAS are on track to a collision (see Fig. \ref{fig:CAdiagram}). The TCAS \cite{TCAS} and ACAS \cite{kochenderfer2012next} systems for collision avoidance in commercial aircraft rely on transponders in the two aircraft to communicate information for the collision avoidance modules. These generate recommendations for the pilots to follow and create vertical separation between aircraft \cite{ACASX}. In the context of UAS, \cite{UTMTCL4} uses vehicle-to-vehicle communication and tree-search based planning to achieve collision avoidance. ACAS-Xu \cite{ACASXu}, an automatic collision avoidance scheme for UAS, relies on a look-up table to provide high-level recommendations to two UAS that have potentially colliding trajectories.
It restricts desired maneuvers for CA to the vertical axis for cooperative traffic, and the horizontal axis for uncooperative traffic. While we consider only the cooperative case in this work, our method does not restrict CA maneuvers to any single axis of motion. Finally, in its current form, ACAS-Xu also does not take into account any higher-level mission objectives, unlike our approach. This excludes its application to low-level flights in urban settings; e.g., it can result in situations where ACAS-Xu recommends an action that avoids a nearby UAS but results in the primary UAS going close to a static obstacle. Our method avoids this as CA maneuvers are restricted to keeping UAS inside \textit{robustness tubes} (see Fig. \ref{fig:concept}) such that mission requirements are not violated. For this reason, ACAS-Xu is currently only being explored for large, high-flying UAS \cite{ACASXu} and is not directly applicable to the problem we study here. \subsubsection{Multi-agent planning with temporal logic objectives} Many approaches exist for the problem of planning for multiple robotic agents with temporal logic specifications. Most rely on abstract grid-based representations of the workspace \cite{SahaRSJ14, DeCastro17}, or abstract dynamics of the agents \cite{Drona,AksarayCDC16}. \cite{MaICUAS16} combines a discrete planner with a continuous trajectory generator. Some methods \cite{4459804, 1582935, 1641832} work for subsets of Linear Temporal Logic (LTL) that do not allow for explicit timing bounds on the mission requirements. \cite{SahaRSJ14} uses a subset of LTL, safe-LTL$_f$, that allows them to express reach-avoid specifications with explicit timing constraints; however, in addition to a discretization of the workspace, they also restrict motion to a simple, discrete set of motion primitives. The predictive control method of \cite{Raman14_MPCSTL} allows for using the expressiveness of the complete grammar of STL for mission specifications.
It handles a continuous workspace and linear dynamics of robots; however, its reliance on a mixed-integer encoding (similar to \cite{Saha_acc16,KaramanF11_LTLrouting}) of the STL specification limits its use in planning/control for multiple agents in 3D workspaces, as seen in \cite{pant2017smooth}. The approach of \cite{pant2018fly} instead relies on optimizing a smooth (non-convex) function for generating trajectories for fleets of multi-rotor UAS with STL specifications. In our framework, we use the planning method of \cite{pant2018fly}, but we let each UAS plan independently of each other. We ensure the safe operation of all UAS in the airspace through the use of our predictive collision avoidance scheme. \section{Case study: Independent planning and L2F for a 4-UAS example} \label{sec:casestudy} Figure~\ref{fig:scenario} depicts a UAS case study with a reach-avoid mission. The scenario consists of four UAS which must reach desired goal states within 4 seconds while avoiding the wall obstacle and each other. The specification for each UAS $j\in\{1,\ldots,4\}$ can be defined as: \begin{equation} \label{eq:scenario_spec_d} \varphi_j = \eventually_{[0,4]} (p_j \in \text{Goal}) \ \wedge\ \always_{[0,4]} \neg (p_j \in \text{Wall}) \vspace{-2pt} \end{equation} A pairwise separation requirement of $0.1$ meters is enforced for all UAS; therefore, the overall mission specification is: \vspace{-2pt} \begin{equation} \label{eq:case_mission} \varphi_{\text{mission}} = \bigwedge_{j=1}^4 \varphi_j\ \wedge\ \bigwedge_{j\not=j'} \always_{[0,4]}||p_j-p_{j'}||\geq 0.1 \vspace{-3pt} \end{equation} \begin{figure}[h] \includegraphics[width=0.49\textwidth,trim={0 0cm 0.2cm 4.9cm},clip]{figures/case_4drones.png} \vspace{-10pt} \caption{\small Workspace for the case study scenario. Trajectories for 4 UAS (magenta stars) reaching their goal sets (green boxes) within 4 seconds, while not crashing into the vertical wall (in red). A pairwise separation requirement of $0.1$m is enforced.
Simulations are available at \url{https://tinyurl.com/t8bwwqk}.} \label{fig:scenario} \vspace{-10pt} \end{figure} First, we solved the planning problem for all four UAS in a centralized manner following the approach from~\cite{pant2018fly}. Next, we solved the planning problem for each UAS $j$ and its specification $\varphi_j$ independently, calling L2F on-the-fly after planning is complete. This way, independent planning with the online collision avoidance scheme guarantees the satisfaction of the overall mission specification \eqref{eq:case_mission}. \textbf{Simulation results.} We simulated the scenario for 100 different initial conditions. The average computation time to generate trajectories in a centralized manner was $0.35$ seconds. The average time per UAS when planning independently (and in parallel) was $0.1$ seconds. These results demonstrate a speed-up of $3.5\times$ for individual UAS planning versus centralized planning \cite{pant2018fly}. \subsection{Separation Profile} \label{sec:separation} Figure~\ref{fig:sep_profile} depicts the separation profile over time before and after L2F for one of the 10K minimum-jerk random trajectories described in Section~\ref{sec:exp_setup}. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{figures/t3.png} \end{center} {\footnotesize \caption{\small The separation profile before and after L2F for one of the minimum-jerk random trajectories described in Section~\ref{sec:exp_setup}.} \label{fig:sep_profile}} \end{figure} \subsection{A greedy algorithm for conflict resolution} \label{sec:greedy_alg} For a horizon length of $N$ steps, there are $6^{N}$ possible selections for the deconfliction constraints $H^i_k, \, g^i_k$ (6 at each time step in the horizon). The centralized MILP formulation picks which constraint to use by activating the corresponding binary variable \eqref{eq:CentralMILP}.
For CA-MPC, we need a decision-making algorithm to pick these constraints, as there are no binary variables in the CA-MPC formulations \eqref{eq:drone1mpc}, \eqref{eq:drone2mpc}. The following is a greedy approach to this: \begin{algorithm} \KwData{Pre-planned trajectories: $\mathbf{x_1}, \, \mathbf{x_2}$, a preset maneuver: $\text{preset} \in \{1,\dotsc,6\}$} \KwResult{$d_k \in \{1,\dotsc,6\}\, \forall k\in\{0,\dotsc, N+1\}$ for conflict resolution maneuvers $H^{d_k},\, g^{d_k}$} initialization $k=0$\; \\ \While{$k \leq N+1$}{ \For{$i\in\{1,\dotsc,6\}$} {$r^{i}_k = g^i - H^i C(x_{1,k}-x_{2,k})$ } \eIf{$\exists i|r^i_k \geq 0 $}{ $d_k = \text{argmax}_i\, r^{i}_k$ } {$d_k = \text{preset}$ } k = k+1 } \caption{Greedy conflict resolution} \label{alg:greedy} \end{algorithm} For time steps $k$ at which there is no conflict, Algorithm \ref{alg:greedy} selects the direction (or side of the hypercube \eqref{eq:noconf}) with the most separation between the UAS. For the time steps where there is a conflict, it picks a pre-defined maneuver. \subsection{Case Study: Equipment Surveillance Scenario} \label{sec:rural_case} \begin{figure}[tb] \begin{center} \includegraphics[width=0.49\textwidth]{figures/rural_out} \end{center} {\footnotesize \caption{\small Trajectories for 4 UAS tasked with flying over the pumpjacks by reaching all of the green-colored goal sets within 10 minutes, while avoiding the black-colored obstacles (the pumpjacks themselves).} \label{fig:rural}} \end{figure} The equipment surveillance scenario, taken from~\cite{wargo2014unmanned}, outlines a low-altitude mission profile for four UAS operating within a sparsely populated area. The airspace setting is depicted in Figure~\ref{fig:rural}. This scenario can serve as a template for other rural use cases for UAS such as end-point package deliveries and wildfire management.
The mission for each UAS $j\in\{1,\ldots,4\}$ is formalized in STL as follows: \begin{equation} \label{eq:rural_d} \begin{aligned} \varphi_j = \bigwedge_{n=1}^5 \eventually_{[0,10]} (p_j \in \text{Zone}_n) &\land \eventually_{[9,10]} (p_j \in \text{EndZone}_j) \\ &\land\always_{[0,10]} \neg (p_j \in \text{Unsafe}) \end{aligned} \end{equation} where $T=10$ minutes is the maximum allowable flight time allocated to the mission, $\text{Zone}_n$ denotes the airspace region directly above each of the five pumpjacks, $\text{Unsafe}$ denotes the no-fly zones within the mission environment, and $\text{EndZone}_j$ defines the region signifying the end of the UAS mission. Table~\ref{tbl:runtimes_rural} shows the computation times for solving the planning problem in a centralized manner versus solving for each UAS $j$ and its specification $\varphi_j$ separately. \begin{table}[h] \renewcommand{\arraystretch}{1.3} \caption{Computation times for the centralized versus individual UAS planning} \label{tbl:runtimes_rural} \centering \begin{tabular}{c|c|c|c|c|c|} \cline{2-6} \multicolumn{1}{l|}{} & \textbf{Centralized} & \multicolumn{4}{c|}{\textbf{Independent planning}} \\ \cline{2-6} & \textbf{4 UAS} & \textbf{UAS 1} & \textbf{UAS 2} & \textbf{UAS 3} & \textbf{UAS 4} \\ \cline{2-6} \hline\hline \multicolumn{1}{|c|}{\textbf{Mean (s)}} & 81.918 & 15.596 & 17.444 & 17.553 & 16.128 \\ \hline \multicolumn{1}{|c|}{\textbf{SD (s)}} & 27.215 & 6.257 & 7.404 & 6.912 & 6.764 \\ \hline \end{tabular} \end{table} \subsection{Brief discussion on minimum separation and CA-MPC objective value} \label{sec:slackbad} \begin{figure} \begin{subfigure}[b]{0.4\columnwidth} \includegraphics[width=1.25\linewidth]{figures/slack_distance153.png} \label{fig:1} \end{subfigure} \hspace{10pt} \begin{subfigure}[b]{0.4\columnwidth} \includegraphics[width=1.25\linewidth]{figures/slack_distance32.png} \label{fig:2} \end{subfigure} \vspace{-10pt} \caption{\small Slack $\lambda_2$ and minimum separation over time for two
different initial conditions after the random sequence scheme.} \label{fig:slackbad} \end{figure} The condition of Theorem \ref{th:CAMPC_success} could be violated while the two UAS are still separated by more than $\delta$m. This can happen because, at time step $k$, the separation between the two UAS is achieved through some relative position configuration ($i$ in eq.~\eqref{eq:pickaside}) other than the one used as a constraint in the optimization. At the time steps $k$ where the robustness tubes of the two UAS are themselves more than $\delta$m apart, having $\lambda_{2,k}>0$ cannot result in a violation of eq.~\eqref{eq:msep}. Fig.~\ref{fig:slackbad} shows a couple of such examples. \subsection{Additional Resources} \label{sec:crazyflie_appendix} Videos of additional experimental recordings and simulations can be found at \url{https://tinyurl.com/yxttq7l5}. \begin{figure}[tb] \begin{center} \includegraphics[width=0.45\textwidth]{figures/scenario1_wo_ca_traj_plot.png} \end{center} {\footnotesize \caption{\small Conflicting trajectories at the halfway point for 2 Crazyflie quad-rotors. The 2 drones are unable to complete the mission due to the collision at the conflict point. Video recordings of the actual flight and tracking can be found at \url{https://youtu.be/1YOO-vOh6Zg}, \url{https://youtu.be/Of3EIwCGCrk}.} \label{fig:scen1_wo_ca}} \end{figure} \section*{ACKNOWLEDGMENT} \bibliographystyle{IEEEtran}
\section{Introduction} Since their launch in 1999, both XMM--{\it Newton} and {\it Chandra} have performed a large ($>$ 30) number of surveys covering a wide fraction of the area vs. depth plane (see Fig.~1 in Brandt \& Hasinger 2005). Thanks to vigorous programs of multiwavelength follow-up observations which, in recent years, have become customary, our understanding of AGN evolution has received a major boost. The high level of completeness in redshift determination for a large number of X--ray selected AGN (up to a few thousand) has made possible a robust determination of the luminosity function and evolution of unobscured and mildly obscured AGN (Ueda et al. 2003; Hasinger et al. 2005; La Franca et al. 2005), which turned out to be luminosity dependent: the space density of bright QSOs ($L_X > 10^{44}$ erg s$^{-1}$) peaks at z$\sim$ 2--3, to be compared with the z$\sim$0.7--1 peak of lower luminosity Seyfert galaxies. The fraction of obscured AGN also depends strongly on the X--ray luminosity (Ueda et al. 2003; La Franca et al. 2005). Even though the shape and the normalization of the function describing the obscured fraction vs. luminosity are still a matter of debate, it is clear that absorption is much more common at low X--ray luminosities. Such a trend has also been observed in the optical (Simpson 2005) and in the near infrared (Maiolino et al. 2007), and may be linked to the AGN radiative power, which is able to ionize and expel gas and dust from the nuclear regions. More debated is the claim of an increase of the obscured fraction towards high redshifts. First suggested by La Franca et al. (2005), it has been confirmed by Treister \& Urry (2006), while it is not required in the models discussed by Ueda et al. (2003) and Gilli et al. (2007). A redshift dependence of the obscured fraction would be naturally explained in the current framework of AGN formation and evolution (see, for example, Hopkins et al.
2006): the anti--hierarchical behaviour observed in AGN evolution (similar to that observed in normal galaxies), along with several other lines of evidence, suggests that supermassive black holes (SMBH) and their host galaxies co--evolve and that their formation and evolution are most likely different aspects of the same astrophysical problem. At early times large quantities of cold gas were available to efficiently feed and obscure the growing black holes. Later on, the ionizing nuclear flux is able to ``clean'' its environment, appearing as an unobscured QSO. However, the picture sketched above is likely to be much more complicated, depending on many other parameters (such as the BH mass, the Eddington ratio, and the QSO duty cycle), and may not necessarily result in an increasing fraction of obscuration towards high redshifts. It should also be remarked that sensitive X--ray observations are highly efficient at unveiling weak and/or elusive accreting black holes which would be missed, or not classified as such, by surveys at longer wavelengths. Among them are XBONG (Comastri et al. 2002), sources with high X--ray to optical flux ratio (Fiore et al. 2003), Extremely Red Objects (Brusa et al. 2005), and Sub-Millimeter Galaxies (Alexander et al. 2005). Although they are probably not representative of the sources of the X--ray background, their study has allowed us to better understand the physics of accreting black holes. Finally, the coverage of several {\it Chandra} and XMM fields has made it possible to uncover several redshift spikes in the distribution of X--ray sources (Gilli et al. 2005) and has demonstrated that AGN may be used as reliable tracers of large scale structures. The underlying large scale structure may play an important role in triggering AGN activity. According to the analysis of a sample of X--ray selected AGN in the AEGIS survey, Georgakakis et al. (2006) suggest that $z \sim$ 1 AGN are more frequent in dense environments.
\section{Open Questions} The most important achievements, briefly outlined above, were obtained by combining data from both deep and large area surveys. The former were conceived to reach extremely faint fluxes at the expense of a reduced sky coverage, while for the latter the search for a trade-off between area and exposure time is the main driver. Both approaches have their own scientific goals, which cannot be directly compared, but rather should be considered as a necessary synergy. The deepest surveys in the {\it Chandra} Deep Fields North and South (CDF) have reached extremely faint fluxes over a relatively small ($\sim$ 400 arcmin$^2$ each) portion of the sky and represent a unique resource for the study of the faint end of the luminosity function and the discovery of distant and obscured AGN. The search for the most luminous quasars and the study of the clustering of the AGN population are uniquely pursued by large area surveys. At present the most relevant projects in terms of area, X--ray flux and multiwavelength follow--up programs are the XMM--COSMOS 2 sq. degree survey (Hasinger et al. 2007, Cappelluti et al. 2007), the {\it Chandra}--AEGIS survey (Nandra et al. 2005) and the Extended CDFS (Lehmer et al. 2005). New results always bring new questions. An exhaustive review of open problems is beyond the purposes of the present paper; we highlight what we consider the most pressing questions and which of them could be answered by future XMM--{\it Newton} surveys. As far as SMBH are most directly concerned, attention is now focused on a few key questions: \par $\bullet$ How are star formation and gas accretion linked? How strong is the relative feedback? What is the role of the environment in triggering AGN activity? \par $\bullet$ How many highly obscured AGN are still missing, and how much do they contribute to the accretion history and SMBH mass budget in the Universe? \par $\bullet$ How strong is the dependence of obscuration upon luminosity and redshift?
Are K$\alpha$ emission lines also common at high redshift? \par $\bullet$ What is the space density of high redshift ($>$ 3) quasars? How many of them are obscured? Some of these topics are being addressed with the available multiwavelength data. \subsection{``Missing'' AGN} The X--ray background (XRB) below 5--6 keV (Worsley et al. 2005) has been almost completely resolved into single sources, the large majority of them being obscured by substantial amounts of gas (up to 10$^{24}$ cm$^{-2}$, i.e. in the Compton thin regime), thus nicely confirming the predictions of AGN synthesis models. The {\it unresolved} XRB fraction, which provides an integral constraint on the number of ``missing'' AGN, increases with energy and is close to 100\% at the 30 keV peak. A good fit is obtained if a population of heavily obscured Compton Thick (CT; $N_H > 10^{24}$ cm$^{-2}$) AGN, whose space density is of the same order as that of Compton thin AGN, is added. Such an estimate is model dependent and probably subject to large uncertainties. However, while abundant in the local Universe, only a handful of CT AGN are known at cosmological distances (see Comastri 2004 for a review). Either CT AGN are numerous also at high $z$ and the bulk of the population is still undetected, or they represent a ``local'' phenomenon, possibly associated with a ``final'' phase in the evolution of nuclear obscuration. In the latter hypothesis the 30 keV peak of the XRB would originate in different, yet unknown, sources. According to synthesis models the fraction of CT AGN is expected to steeply increase just beyond the present limits (Fig.~1), and indeed the deepest search for CT AGN in the X--ray band (Tozzi et al. 2006) uncovered a number of candidates which is fairly consistent with the model predictions, supporting the hypothesis that CT AGN are too faint to be directly detected in the X--ray band.
\begin{figure}[!t] \includegraphics[width=80mm,height=80mm]{pico.m2.cfr.ps} \caption{The predicted fraction of CT AGN (Gilli et al. 2007) as a function of the X--ray flux in the 2--10 keV band, along with the estimate of Tozzi et al. (2006). The model predicted fraction in the 10--60 keV band is also reported.} \label{label1} \end{figure} Alternative techniques, based on selection via infrared and radio data, seem promising (e.g. Martinez Sansigre et al. 2005). Indeed Fiore et al. (2007) and Daddi et al. (2007) were able to recover the ``missing'' obscured CT AGN population making use of 24$\mu$m {\it Spitzer} observations and deep near--infrared and optical data. Stacking of the X--ray counts of CT candidates, selected on the basis of an infrared excess and individually undetected in the CDF Ms exposures, revealed a strong signal in the hard ($\sim$ 4--8 keV) band which implies, at their average redshift ($z\sim$2), absorption column densities in the CT regime. The census of the CT and obscured AGN population bears important consequences for the study of the assembly and evolution of SMBH. For example the mass function of Compton thin AGN, estimated from the luminosity function of X--ray selected AGN, falls short by a factor of 2 of that of ``relic'' SMBH in local bulges (Marconi et al. 2004). The candidate CT AGN population selected in the infrared would have the right size to reconcile the two SMBH mass function estimates. It has also been suggested that the absorption by CT matter of high energy photons may be able to efficiently heat the surrounding material through Compton scattering (Daddi et al. 2007). CT sources may thus play a key role in AGN feedback, eventually leading to the quenching of star formation. \subsection{High-redshift Quasars} The space density and evolution of high redshift QSOs are still an open issue. In the optical band, mainly thanks to the SDSS, their luminosity function is relatively well known up to $z \sim$ 6.
Although the highest redshift QSOs are known to be X--ray emitters (Vignali et al. 2003), the lack of sufficiently sensitive and wide surveys with a fairly homogeneous and complete coverage has prevented reliable estimates of high redshift X--ray selected QSOs. The XMM-COSMOS survey is starting to fill this gap. So far seven spectroscopically confirmed QSOs at $z >$ 3 were found above a flux of $2\times 10^{-15}$ cgs (where the sky coverage is flat over the entire 2 sq deg area). By fully exploiting the multicolor optical and near infrared diagrams, several candidate $z >$ 3 QSOs are found at the same limiting flux, bringing the total number of candidates in the COSMOS field to $\sim$ 40 (Brusa et al., in preparation). The upper and lower limits of the $z >$ 3 QSO counts are reported in Fig.~2, along with the number counts of the overall X-ray source population from a compilation of different surveys (Cappelluti et al. 2007). The two magenta lines show the contribution of $z >$ 3 QSOs predicted by XRB synthesis models (Gilli, Comastri \& Hasinger 2007) under two different assumptions on their high redshift evolution: an exponential decline at $z >$ 3 as parameterized by Schmidt et al. (1995), or a constant space density. Although the actual number of $z >$ 3 QSOs in the COSMOS field suffers from large uncertainties, the present data would suggest that a constant space density at high redshift is ruled out. \begin{figure}[!t] \includegraphics[width=80mm,height=80mm]{madrid.zgt3.cosmos.ps} \caption{The observed space density of $z>$ 3 quasars in the COSMOS field (blue vertical line) compared with the predicted number counts (Gilli et al. 2007) under two different hypotheses (see text).} \label{label2} \end{figure} \section{Perspectives} In order to properly address the key questions, better sensitivity over a larger sky area, as well as an extended energy range with respect to what has been obtained to date, is required.
While in principle one would need all of them at once, in practice this is impossible. An ultradeep (2--3 Ms) hard X--ray survey with XMM--{\it Newton} would allow the scientific community to address with unprecedented accuracy several of the ``key'' questions outlined above, and in particular those concerning the most obscured ``missing'' AGN and the X--ray spectral properties (absorption, emission lines, etc.) of distant sources. The large collecting area and hard X--ray sensitivity of the imaging detectors onboard XMM have not yet been pushed to their limits (the deepest XMM survey, in the Lockman Hole, reached an exposure time of $\sim$ 700 ks). According to our calculations (see also Carrera et al., this volume) the {\tt EPIC} camera would not be confusion limited down to fluxes of the order of $3-5 \times 10^{-16}$ cgs in the 5--10 keV band, corresponding to a factor 3--4 deeper than the present limits (Fig.~3). \begin{figure*} \includegraphics[width=160mm,height=120mm]{madrid.510.ps} \caption{The 5--10 keV counts, from a compilation of several surveys as labeled, compared with the Gilli et al. 2007 model predictions for unobscured (red line), Compton thin (blue line), Compton Thick (black line), and total AGN (magenta line).} \label{label3} \end{figure*} The model predicted logN--logS of CT AGN in the 5--10 keV energy range has a steep slope down to faint fluxes, and thus a deeper exposure would allow several new candidates to be detected, presumably at moderate to high redshifts, responsible for the unresolved XRB. Given that the Gilli et al. (2007) model reproduces the XRB level measured by HEAO1 (Fig.~4), the predicted counts should be considered as a lower limit. The very shape and normalization of the XRB spectrum in the $\approx$ 5--15 keV band are, at present, quite uncertain. On the one hand, recent {\tt BeppoSAX} (Frontera et al. 2007), {\tt INTEGRAL} (Churazov et al. 2007) and {\tt Swift} (Ajello et al.
in preparation) observations have unambiguously demonstrated that the XRB intensity above 15 keV is close to that measured by HEAO1 (Fig.~4). On the other hand, according to a detailed analysis of deep {\it Chandra} fields (Hickox \& Markevitch 2006), the XRB level below about 5 keV is some 30\% higher than the extrapolation of higher energy data. Given that the summed contribution of faint {\it Chandra} sources in the 1--8 keV band already exceeds, at face value, the HEAO1 level, it may well be possible that a much larger fraction than that quoted by Worsley et al. (2005) has already been resolved. Alternatively, a sizable population of extremely hard sources, appearing only above 5--6 keV and not accounted for in the XRB synthesis model, may be providing a significant contribution. \begin{figure*} \includegraphics[width=160mm,height=125mm]{madrid.xrb.ps} \caption{A selection of XRB measurements, including the resolved fraction in deep fields (blue squares) and the best fit (magenta line) AGN synthesis model of Gilli et al. (2007). The vertical lines mark the 5--12 keV energy range.} \label{label4} \end{figure*} \par Interestingly enough, fully ($4\pi$) covered AGN with a negligible fraction of scattered flux in the soft X--rays may provide a relevant contribution only above 5 keV or even higher, depending on their redshift and column density. A few examples of these sources in the local Universe were recently discovered with Suzaku (Ueda et al. 2007; Comastri et al. 2007). Should a population of heavily obscured, fully covered AGN be abundant at moderate to high redshifts, they might be discovered only by sensitive surveys above 5 keV. \par A deep XMM survey would make it possible to collect good quality X--ray spectra for a large number of sources.
The obvious advantages of an improved spectral analysis concern a better determination of the absorption column densities, especially at high redshifts where determinations based on Hardness Ratios suffer from the largest uncertainties. \par Studies based on the stacking of X--ray detected sources would also benefit from improved counting statistics. Such a technique has been widely adopted for different purposes, among them the study of the intensity and profile of the iron K line in the average spectrum of faint AGN. Convincing evidence for the presence of iron emission up to high ($z \sim$ 2--3) redshifts has been presented by Brusa et al. (2005), while Streblyanska et al. (2005) reported the discovery of a broad line with an extended red wing in the average spectra of both Type 1 and Type 2 AGN. The line profile carries unique information on the SMBH spin and is the signature that General Relativistic effects are at work in the innermost regions of AGN. \par An ultradeep X--ray survey is also well suited to search for high redshift AGN. As shown in Fig.~2, the differences in the counts between a model with and without a cut--off in the evolution at high redshift increase towards faint fluxes. Moreover, the search for accreting SMBH among various classes of objects selected on the basis of longer wavelength properties (see $\S$ 2.1) can be exploited in more detail. It is worth mentioning that selection on the basis of an extreme infrared to optical color seems quite promising for picking up the first ($z >$ 6) QSOs (Koekemoer et al. 2004). \par The merit of an ultradeep XMM survey should be judged in the context of multiwavelength deep surveys, and its scientific return would be maximized if it were carried out in the best studied fields. The excellent coverage of the Chandra Deep Field South with both present and future (e.g.
ALMA and SKA) facilities, along with the already planned additional 1 Ms of DDT {\it Chandra} data, is making the CDFS a legacy field for the years to come. The so--called co--evolution of AGN and host galaxies and their mutual feedback represents one of the most important achievements in the field of observational cosmology. It is by now clear that further advances can be obtained only by joining the efforts of people working in apparently different fields and at different wavelengths (crudely: optical/near--IR for galaxy evolution, X--ray surveys for AGN). Deep XMM observations will, at the same time, benefit from and enhance the return of the many common scientific goals. \acknowledgements We thank Roberto Gilli for useful discussions and for help with Fig.~2. We acknowledge financial contribution from contract ASI--INAF I/023/05/0, PRIN--MUR grant 2006--02--5203, and the BMWI/DLR FKZ 50OX 0002 grant.
\section{Introduction}\label{sec:intro} Intergalactic C~{\sc iv}~ has been studied in the spectra of quasars at essentially all redshifts, from the very local neighborhood in the observed-frame UV \citep{cooksey_civ} to redshift six in the observed-frame near-infrared \citep{becker_civ, ryan_weber_civ, simcoe_z6, ryan_weber_1}. Yet most of our detailed knowledge about the ionization-adjusted metal content of intergalactic matter comes from studies at $z\sim 3$ \citep{schaye_civ_pixels, simcoe2004}. In this interval the observed-frame transitions of both C~{\sc iv}~ and Ly-$\alpha$~ fall in an optimal region for optical spectrometers, in terms of both sensitivity and lack of foreground contaminants. However, even at $z\sim 3$ the density of intergalactic gas is sufficiently low, and the UV ionizing background field is sufficiently intense, that most carbon is ionized to the C~{\sc v}~ state and higher. At typical IGM densities for $z\sim 3$, about 2\% of all carbon atoms are in the C~{\sc iv}~ state. This fact, coupled with the low value of both the particle density and the overall heavy element abundance, leads to an extremely weak C~{\sc iv}~ signal in the more tenuous regions of the cosmic web. Consequently studies of metal abundances have required extremely high signal-to-noise ratio data ($\sim 50-200$). To date the most sensitive surveys have either utilized statistical methods to tease signal from the distribution of C~{\sc iv}~ and H~{\sc i}~ pixel optical depths \citep{schaye_civ_pixels, ellison_civ, songaila_new_civ}, or used the O~{\sc vi}~ line which has a higher ionization potential (and therefore a stronger signal) to trace metallicity \citep{simcoe2004,dave_ovi, bergeron_ovi}. When the same UV backgrounds are used to estimate ionization corrections, both the C~{\sc iv}~ pixel method and the O~{\sc vi}~ method yield an IGM abundance of roughly [C/H]$=-3.1$ or [O/H]$=-2.8$ at $z\sim 3$. 
The O~{\sc vi}~ line does not lend itself well to studies of abundance evolution at high redshifts, because it is blended in the Ly-$\alpha$~ forest, whose overall opacity increases at higher $z$. Above $z\sim 3$ it becomes effectively impossible to deblend O~{\sc vi}~ doublets from H~{\sc i}~ forest lines, even in high resolution spectra. C~{\sc iv}~ {\em is} seen at these redshifts, with a distribution of strengths that is nearly independent of redshift \citep{songaila_new_civ}. \begin{figure*} \epsscale{1.0} \plotone{f1.eps} \caption{MIKE spectra of the C~{\sc iv}~ region for the three objects used for the $z\sim 4.3$ sample. At top, the mean telluric absorption spectrum determined by standard star observations is shown for reference. The masked region at 8350\AA\ in the spectrum of BR0353 is at the location of a known Mg~{\sc ii}~ interloper.} \label{fig:spectra} \end{figure*} \citet{schaye_civ_pixels} systematically examined the ionization corrected carbon abundance between $3.0 < z < 4.0$, finding no statistically significant evidence of evolution. However, the bulk of this sample was centered at lower redshift, with only one object at $z > 4$. In this paper, we revisit the question of abundance evolution by obtaining three new high-SNR spectra of $z\sim 4.5$ quasars. We use these to make a detailed comparison of C~{\sc iv}~ absorption at $z\sim 4.3$ with the O~{\sc vi}~ and C~{\sc iv}~ measurements we obtained earlier at $z\sim 2.5$. The $z\sim 4.0-4.5$ range is a ``sweet spot'' for observing C~{\sc iv}~ because it is the interval when the fraction of carbon in the triply ionized state peaks over cosmic time. As one moves to higher redshifts, the predominant ionization state moves to progressively lower levels due to the combined $(1+z)^3$ increase in the baryon density and decline of the hard-UV radiation field provided by quasars.
Thus a survey of {\em individual} C~{\sc iv}~ systems at $z\sim 4.3$ is comparably sensitive to a survey for O~{\sc vi}~ at $z\sim 2.5$ or a statistical detection of C~{\sc iv}~ at $z\sim 3$. These different surveys all probe gas structures with baryonic overdensities of $\rho/\bar{\rho}\sim 1.5$ relative to the cosmic mean at their respective epochs. In Section \ref{sec:observations} we describe the new observations and methods for data processing; Section \ref{sec:sample} describes our sample selection, and Section \ref{sec:km_lines} describes our observational measurements. Section \ref{sec:uvbg} discusses ionization corrections applied to derive the abundance measurements presented in Section \ref{sec:abundance}. The resultant cosmic abundance evolution and implications for galaxy formation are discussed in Section \ref{sec:discussion}. Throughout we assume a cosmology with $\Omega_M=0.3, \Omega_\Lambda=0.7$, and $H_0=71$ km/s/Mpc. \section{Observations}\label{sec:observations} We observed the three bright, southern quasars listed in Table 1 using Magellan/MIKE over several observing runs between 2004 and 2006. Spectra were obtained using a 0.875\arcsec slit, which yields a combined velocity resolution of 14 km s$^{-1}$, as measured from concurrent exposures of ThAr arc lamps. Extraction was performed using a customized software package for MIKE \citep{bernsteinMIKE}, which makes use of the Poisson-limited 2D sky subtraction methods outlined in \citet{kelson}. The Echelle orders of individual exposures were flux calibrated using observations of hot spectrophotometric standard stars. Finally, the calibrated orders from all exposures for each object were combined onto a single 1-D wavelength grid and converted to vacuum-heliocentric units.
\input{tab1} \begin{figure*} \epsscale{1.0} \plottwo{f2a.eps}{f2b.eps} \plottwo{f2c.eps}{f2d.eps} \plottwo{f2e.eps}{f2f.eps} \caption{Montage of absorbers and corresponding best-fit Voigt profiles, illustrating the use of partial profiles for multiple Lyman series transitions. For each plot, the column density and redshift of the line shown in red are given at top. Blue curves show blended absorption for other lines with $N_{{\rm H \; \mbox{\tiny I}}}>10^{14.5}$, which also are included in the C~{\sc iv}~ sample.} \label{fig:systems} \end{figure*} At $z=4.0-4.5$, the C~{\sc iv}~ transition falls between $\lambda=$7745\AA ~and 8520\AA. This region is rich with telluric absorption features from the atmospheric ``A'' and ``B'' bands of molecular oxygen and water. We carefully removed these features from our QSO spectra by fitting transmission functions to observations of our standard stars interspersed throughout each night. The final 1-D spectra are shown in Figure \ref{fig:spectra}. The resultant signal-to-noise ratio in the C~{\sc iv}~ region ranges from 20 to 40; in the Ly-$\alpha$~ forest the ratio is somewhat lower but still suitable for measuring H~{\sc i}~ given the much greater strength of the features being measured. \section{Sample Selection}\label{sec:sample} To choose our spectral search regions for C~{\sc iv}, we began by fitting Voigt profiles to the entire H~{\sc i}~ Ly-$\alpha$~ forest using the {\tt vpfit}\footnote{http://www.ast.cam.ac.uk/$\sim$rfc/vpfit.html} software package. We selected $N_{{\rm H \; \mbox{\tiny I}}} \ge 10^{14.5}$ as the minimum threshold column density for an absorber to be included in the C~{\sc iv}~ candidate sample. For a standard \citet{haardt_cuba} model of the UV background field, an absorber with this column density and [C/H]=-3.0 would have $N_{{\rm C \; \mbox{\tiny IV}}}=10^{12}$cm$^{-2}$---roughly the detection limit of our data. 
At $z\sim 4.3$, this H~{\sc i}~ column density corresponds to a baryonic overdensity of $\rho/\bar{\rho}=1.6$ relative to the cosmic mean \citep{schaye_forest}. Coincidentally, this threshold is nearly identical to the minimum $\rho/\bar{\rho}$ where individual O~{\sc vi}~ lines can be detected at $z\sim 2.5$; C~{\sc iv}~ lines can only be seen at higher overdensity at $z\sim 2.5$. Throughout the present paper we quote comparisons to both O~{\sc vi}~ and C~{\sc iv}~ samples at lower redshift. \subsection{Reliability Tests for $N_{\rm H \; \mbox{\tiny I}}$ Fits} When fitting for $N_{{\rm H \; \mbox{\tiny I}}}$ at $z > 4$, much of the Lyman series is accessible provided there are no strong Lyman limit systems. We used at least three transitions of the Lyman series, and often 4 or 5 to constrain the column densities of each H~{\sc i}~ line. If no unsaturated Lyman series line or profile wing was available, a line was discarded from the sample. Figure \ref{fig:systems} shows several examples of H~{\sc i}~ absorbers fit in this manner, selected to span the range of column densities in our sample. Clearly there is significant blending of lines in the Ly-$\alpha$~ and higher order transitions, and it is rare to have a single absorber with many unblended transitions. However roughly 60\% of the sample has at least one Lyman series transition completely free of blending, and an additional 20\% were significantly constrained by the unsaturated wings of blended lines in the high order transitions. Figure \ref{fig:lyseries} shows a graphical summary of which transitions were used to constrain each line in the sample, over the full range in redshift. \begin{figure} \epsscale{1.0} \plotone{f3.eps} \caption{Illustration of which Lyman series lines were used in H~{\sc i}~ fits at varying redshifts. For each system, multiple transitions are used. Horizontal lines indicate components with at least partial saturation (which sometimes contain information in absorption wings). 
Open circles are transitions which may be blended but are not saturated in absorption and hence provide upper limits on $N_{{\rm H \; \mbox{\tiny I}}}$. Solid circles indicate distinct, apparently unblended lines which align in velocity with other Lyman series absorption.} \label{fig:lyseries} \end{figure} In practice, very few systems were rejected because of saturation in all Lyman series lines. For each sightline we truncated our C~{\sc iv}~ search below the redshift where the Ly-$\gamma$~ line becomes saturated from Lyman limit (or completely blended Lyman series) absorption. The high redshift boundary for each sightline is located $5000$ km s$^{-1}$ blueward of the QSO's emission redshift. \begin{figure}[b] \epsscale{1.0} \plotone{f4.eps} \caption{The H~{\sc i}~ Column Density Distribution Function, $f_{(N_{{\rm H \; \mbox{\tiny I}}},X)}$ as measured in three sightlines at $z\sim 4.3$ (black points), and five sightlines at $z\sim 2.4$ (red points). The distributions are offset by roughly 0.5 dex, or a factor of three in normalization. Solid line shows a $N^{-1.5}$ power law, which was used as input for the Monte Carlo calculations described below.} \label{fig:cddf} \end{figure} In total, our H~{\sc i}~ sample contains 139 lines, with 13 additional candidates discarded from the sample because of saturation in the Lyman series. Just one of the discarded systems is associated with a possible C~{\sc iv}~ absorber. When using high order Lyman transitions to constrain $N_{{\rm H \; \mbox{\tiny I}}}$, there is danger of over-estimating the column because of blended Ly-$\alpha$~ absorption at lower redshift contaminating the signal. To assess this potential error, we generated Monte Carlo realizations of the Ly-$\alpha$~ forest in our redshift range to test whether the fitting procedure introduces systematic errors. 
The details of Ly-$\alpha$~ forest line samples are not extensively characterized above $z\sim 3.5$ and it is beyond our present goals to provide a comprehensive statistical analysis of the H~{\sc i}~ forest. However, as a byproduct of our fitting procedure, we obtain a determination of the H~{\sc i}~ column density distribution at $z\sim 4.2$, shown in Figure \ref{fig:cddf}. As at lower redshift, it follows a power-law form with incompleteness roll off at $N_{{\rm H \; \mbox{\tiny I}}}<10^{14}$cm$^{-2}$---well below the cutoff $N_{{\rm H \; \mbox{\tiny I}}}$ of our C~{\sc iv}~ sample. \citet{kim_forest} analyzed Ly-$\alpha$~ forest spectra at $z=2.31$ and $z=3.35$, finding $f(N_{\rm H \; \mbox{\tiny I}})\propto N_{{\rm H \; \mbox{\tiny I}}}^{-1.46}$. This same shape provides an accurate fit to our observed $f(N)$ if a slightly higher overall normalization is used to account for the higher density of forest lines at $z\sim 4.2$. For the Monte Carlo tests, we {\em assumed} this form for $f(N)$, with an overall normalization evolving as $(1+z)^{2.17}$ \citep{kim_forest} to approximate the evolution in density of lines. Artificial spectra were generated using lines drawn from these distributions, with the full Lyman series included for each absorber. Ly-$\alpha$~ lines at lower redshift were intentionally added into the Ly-$\beta$~ and higher order regions at appropriate density to simulate blending and contamination. Finally, Gaussian noise was injected into each spectrum with an amplitude set by the noise arrays in our sample spectra. We do not claim the resulting spectra to be a perfect representation of the Ly-$\alpha$~ forest at $z>4$, but they form a useful tool for bootstrapping estimates of our systematic errors in $N_{{\rm H \; \mbox{\tiny I}}}$ measurement from blending. We ran VPFIT on our simulated spectra without knowledge of the input parameters, in identical fashion to the true data set. 
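The line-sampling step of the Monte Carlo construction described above can be sketched as follows. This is an illustrative reconstruction under the stated assumptions ($f(N)\propto N^{-1.46}$ and a line density per unit redshift evolving as $(1+z)^{2.17}$); the function name and the column-density bounds are chosen for the example, not taken from the paper.

```python
import numpy as np

def sample_forest_lines(n_lines, z_lo=4.0, z_hi=4.5,
                        logN_min=13.0, logN_max=17.0,
                        slope=-1.46, gamma=2.17, rng=None):
    """Draw H I column densities from f(N) proportional to N**slope, and
    line redshifts whose density per unit z evolves as (1+z)**gamma, by
    inverse-transform sampling of the truncated power-law CDFs."""
    rng = np.random.default_rng(rng)

    # Column densities: invert the CDF of N**slope on [N_min, N_max].
    a = slope + 1.0
    N_min, N_max = 10.0 ** logN_min, 10.0 ** logN_max
    u = rng.uniform(size=n_lines)
    N = (N_min ** a + u * (N_max ** a - N_min ** a)) ** (1.0 / a)

    # Redshifts: invert the CDF of (1+z)**gamma on [z_lo, z_hi].
    b = gamma + 1.0
    v = rng.uniform(size=n_lines)
    z = ((1 + z_lo) ** b + v * ((1 + z_hi) ** b - (1 + z_lo) ** b)) ** (1.0 / b) - 1.0
    return N, z

# A realization comparable in size to a few sightlines' worth of forest lines.
N, z = sample_forest_lines(5000, rng=42)
```

Each sampled line would then be assigned the full Lyman series, injected into an artificial spectrum together with lower-redshift Ly-$\alpha$~ contaminants, and degraded with Gaussian noise matched to the data, as described in the text.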
Figure \ref{fig:montecarlo} shows a comparison of the best-fit $N_{{\rm H \; \mbox{\tiny I}}}$ determinations with the true input values, as a function of $N_{{\rm H \; \mbox{\tiny I}}}$ and redshift. We find a scatter of $\pm 0.1$ dex between the true and fitted values, with a small systematic (median) offset of 0.04 dex, in the sense that the fitted $N_{{\rm H \; \mbox{\tiny I}}}$ over-estimates the true value. Individual outliers can miss by 0.5--0.7 dex in either direction, but [65,85]\% of systems are determined to better than [0.1,0.2] dex. \begin{figure} \epsscale{1.0} \plotone{f5.eps} \caption{Input vs. recovered column densities in our Monte Carlo tests of the {\tt vpfit} method for selecting the H~{\sc i}~ sample. We detect an overall scatter of $\sim 0.1$ dex in determining $N_{{\rm H \; \mbox{\tiny I}}}$, with a systematic offset of 0.04 dex. Ionization corrections render the effective error in [C/H] smaller than the error in $N_{{\rm H \; \mbox{\tiny I}}}$ by a factor of 3.} \label{fig:montecarlo} \end{figure} \subsection{Comparison Sample}\label{sec:z2.5sample} To study evolutionary trends, we compare the $z\sim 4.3$ C~{\sc iv}~ sample with a separate sample of absorbers at $z\sim 2.4$, taken from previously published work \citep{simcoe2004}. In that work, O~{\sc vi}~ column densities were measured for a sample of 230 absorbers with $N_{{\rm H \; \mbox{\tiny I}}} \ge 10^{13.6}$cm$^{-2}$. A smaller sample of 83 systems was used to measure C~{\sc iv}~ absorption at $N_{{\rm H \; \mbox{\tiny I}}} \ge 10^{14}$ cm$^{-2}$. \begin{figure} \epsscale{1.0} \plotone{f6.eps} \caption{Distribution of overdensity in our H~{\sc i}~-selected samples at $z\sim 4.3$ and $z\sim 2.4$. The C~{\sc iv}~ sample at $z\sim 4.3$ probes smaller overdensities than the $z\sim 2.4$ C~{\sc iv}~ sample, by a factor of 2. 
The distribution is very similar to O~{\sc vi}~ samples at lower redshift.} \label{fig:km_oden} \end{figure} In Figure \ref{fig:km_oden}, we plot the cumulative distribution of $\rho/\bar{\rho}$ for each sample. We estimate the overdensity at each H~{\sc i}~ absorber location using the formalism of \citet{schaye_civ_pixels}: \begin{equation} n_H = 10^{-5}~{\rm cm^{-3}}\left({{N_{{\rm H \; \mbox{\tiny I}}}}\over{2.3\times 10^{13}{\rm ~cm^{-2}}}}\right)^{\frac{2}{3}} T_4^{0.17}\Gamma_{12}^{\frac{2}{3}}\left({{f_g}\over{0.16}}\right)^{-\frac{1}{3}}. \end{equation} Here, $T_4$ represents the temperature in units of $10^4$ K, $\Gamma_{12}$ represents the H~{\sc i}~ photoionization rate in units of $10^{-12}~{\rm s^{-1}}$, and $f_g$ represents the gas-to-total mass fraction in Ly-$\alpha$~ forest systems, which is assumed to be near $\Omega_b/\Omega_M$. To obtain the overdensity, we normalize this density by the mean gas density, $\Omega_b\rho_c(1+z)^3$. We assume $T_4$ to be constant and equal to unity over the $z=2.4-4.3$ range \citep{schaye_temp}. Figure \ref{fig:km_oden} shows that the C~{\sc iv}~ sample at $z=4.3$ probes a median {\em overdensity} that is $2\times$ lower than the C~{\sc iv}~ sample at $z\sim 2.4$, despite having a limiting $N_{{\rm H \; \mbox{\tiny I}}}$ that is $3\times$ higher. The O~{\sc vi}~ sample at lower redshift probes an almost identical range in gas overdensity to the high redshift C~{\sc iv}~ sample. Therefore, comparing C~{\sc iv}~ at both redshifts removes any ambiguities in the [C/O] relative abundance but means that the high redshift sample is probing lower densities. On the other hand, comparing [O/H] at $z\sim 2.4$ and [C/H] at $z\sim 4.3$ traces the abundance evolution at fixed overdensity for an assumed value of [O/C]. Unfortunately, this quantity is degenerate with the choice of UV background spectrum used for calculating ionization corrections \citep{simcoe2004, schaye_civ_pixels, aguirre_ovi_pixels}.
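Equation 1 and the mean-density normalization can be combined into a short numerical sketch. The formula and fiducial values ($\Gamma_{12}$, $T_4$, $f_g$) follow the text; the cosmological parameters ($\Omega_b$, $h$, hydrogen mass fraction) are illustrative assumptions of ours, not values quoted above.

```python
def n_H(N_HI, T4=1.0, Gamma12=0.5, f_g=0.16):
    """Total hydrogen density (cm^-3) from Equation 1."""
    return (1e-5 * (N_HI / 2.3e13) ** (2.0 / 3.0)
            * T4**0.17 * Gamma12 ** (2.0 / 3.0)
            * (f_g / 0.16) ** (-1.0 / 3.0))

def mean_n_H(z, omega_b=0.045, h=0.70, X_H=0.76):
    """Mean hydrogen density Omega_b*rho_c*(1+z)^3*X_H/m_H (assumed cosmology)."""
    rho_crit = 1.8788e-29 * h**2   # critical density today, g cm^-3
    m_H = 1.6726e-24               # hydrogen mass, g
    return omega_b * rho_crit * X_H / m_H * (1.0 + z) ** 3

def overdensity(N_HI, z, **kw):
    """rho/rho_bar implied by an observed H I column density at redshift z."""
    return n_H(N_HI, **kw) / mean_n_H(z)
```

With these assumptions, the median sample columns ($\log N_{\rm H\,I}\approx 15.2$ at $z=4.3$ and $14.5$ at $z=2.4$) land near the overdensities of a few quoted in the text.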
In the discussions below we assume [O/C]$>0$ in the IGM, consistent with ionization by a soft spectrum composed of both galaxies and quasars \citep{haardt_cuba}. This model is favored by pixel optical depth measurements of C~{\sc iii}~/C~{\sc iv}~ and O~{\sc vi}~/Si~{\sc iv}~ \citep{schaye_civ_pixels, aguirre_ovi_pixels}. We caution, however, that observations of the most metal-poor damped Ly-$\alpha$~ systems show a recovery toward [O/C]=0 at very low metallicities \citep{pettini_oc}. Solar [O/C] ratios can be achieved with a hard background composed of quasar light with negligible galaxy contribution. \subsection{C~{\sc iv}~ Sample Definition and Measurements}\label{sec:selection} For each of the H~{\sc i}~ absorbers in the sample, we examined the spectral region corresponding to the C~{\sc iv}~ doublet. Using VPFIT, we fit C~{\sc iv}~ absorption profiles to any absorbers within a velocity range of $\pm 20$ km s$^{-1}$~of the H~{\sc i}~ centroid. In blended systems, we matched individual C~{\sc iv}~ components to their corresponding H~{\sc i}~ components where possible. If a single H~{\sc i}~ line contained multiple components of C~{\sc iv}~, then the total column density of these components was recorded. Finally, if no C~{\sc iv}~ was detected, we used VPFIT to calculate $1\sigma$ upper limits on the C~{\sc iv}~ column density, using the error array. \begin{figure} \epsscale{1.0} \plotone{f7.eps} \caption{Cumulative distribution of C~{\sc iv}~ column densities at $z\sim 4.3$ and $z\sim 2.4$.} \label{fig:civ_km} \end{figure} The final sample therefore consists of a mix of detected systems with measured column densities, and non-detections with upper limits. We used a $\ge 3\sigma$ cut as the threshold between detections and non-detections.
For the non-detections, VPFIT usually finds a formal solution $N_{{\rm C \; \mbox{\tiny IV}}}\pm\sigma_{{\rm C \; \mbox{\tiny IV}}}$ with $N_{{\rm C \; \mbox{\tiny IV}}} \le 3\sigma_{{\rm C \; \mbox{\tiny IV}}}$; the upper limit is then $N_{{\rm C \; \mbox{\tiny IV}}} + 3\sigma_{{\rm C \; \mbox{\tiny IV}}}$. The final sample of $z=4.0-4.5$ H~{\sc i}~ and C~{\sc iv}~ redshifts and column densities is presented in Table 2. The three sightlines contain 131 systems with $N_{{\rm H \; \mbox{\tiny I}}} > 10^{14.5}$ cm$^{-2}$. Among this sample, 39 lines have C~{\sc iv}~ detections, while 92 are non-detections. The approximate limiting C~{\sc iv}~ column density is between $1-2\times 10^{12}$ cm$^{-2}$. \section{Cumulative Distributions of C~{\sc iv}~ and $N_{{\rm C \; \mbox{\tiny IV}}}/N_{{\rm H \; \mbox{\tiny I}}}$}\label{sec:km_lines} Figure \ref{fig:civ_km} shows the distribution of C~{\sc iv}~ column densities in the samples drawn at the two different redshifts considered. Because the C~{\sc iv}~ samples contain a mixture of detections and upper limits, we have plotted the cumulative distribution as determined by the Kaplan-Meier estimator. The Kaplan-Meier product limit is a fundamental tool of survival statistics, which deals with censored datasets such as the one presented here \citep{feigelson_survival}. Its application for measuring abundances from absorption lines is discussed in \citet{simcoe2004}. For the Kaplan-Meier distribution to accurately represent the true distribution, two conditions must be satisfied. First, the upper limits must be statistically independent of one another, which is clearly true for the resolved, discrete absorption lines in our sample. Second, the probability of a non-detection must not depend on the criterion used to select the sample. Since we are measuring C~{\sc iv}~ detections and upper limits for a sample selected on the basis of H~{\sc i}~ column density, this condition should hold.
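The product-limit construction for a sample containing upper limits can be sketched as below. Upper limits are left-censored data, which are handled by the standard trick of reflecting the values about a large constant so that the usual right-censored Kaplan-Meier formula applies. This is a minimal illustration of the estimator, not the survival-analysis code actually used.

```python
def kaplan_meier_left(values, detected):
    """Cumulative distribution P(X < x) for data containing upper limits.

    values:   measured value if detected, otherwise the upper limit
    detected: True for detections, False for upper limits (left-censored)
    Returns a list of (x, P) step points, obtained by reflecting the data
    and applying the right-censored product-limit estimator.
    """
    M = max(values) + 1.0                     # reflection constant
    flipped = sorted((M - v, d) for v, d in zip(values, detected))
    n_at_risk = len(flipped)
    surv = 1.0
    steps = []
    for t, event in flipped:
        if event:                             # a detection is an 'event'
            surv *= (1.0 - 1.0 / n_at_risk)
            steps.append((M - t, surv))       # map back: P(X < x) = S(M - x)
        n_at_risk -= 1
    return steps[::-1]
```

When every point is a detection, the estimator reduces to the ordinary empirical distribution function; the censored points only reweight the jumps.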
There may be slight discrepancies at the very high end of the H~{\sc i}~ sample because of local chemical enrichment. But because the H~{\sc i}~ column density distribution is a steep power law, the vast majority of our sample is at the low end where the censoring only depends on variation in signal-to-noise in the C~{\sc iv}~ region and is therefore quite random. Returning to Figure \ref{fig:civ_km}, it is clear that the distribution of C~{\sc iv}~ evolves very little between $z\sim 2.4$ and $z\sim 4.3$. The fact that the cumulative distributions only reach $\sim 40\%$ stems from the survival analysis --- recall that C~{\sc iv}~ is only detected in 39 of 131 systems, or 30\%. Most of the upper limits are clustered near low $N_{{\rm C \; \mbox{\tiny IV}}}$, but some are at slightly higher values. Kaplan-Meier accounts for the difference between the fraction of detections (30\%) and the value of the cumulative distribution near the limiting $N_{{\rm C \; \mbox{\tiny IV}}}$ (40\%). \begin{figure} \epsscale{1.0} \plotone{f8.eps} \caption{Cumulative distribution of the C~{\sc iv}~/H~{\sc i}~ ratio at $z\sim 4.3$ and $z\sim 2.4$. This is the fundamental observed quantity in our analysis. } \label{fig:civhi_km} \end{figure} The constancy of the C~{\sc iv}~ distribution with redshift has been noted by other authors, most notably \citet{songaila_new_civ}. Our result is consistent with these previous findings. For studies of the IGM metallicity, our fundamental observable is $N_{{\rm C \; \mbox{\tiny IV}}}/N_{{\rm H \; \mbox{\tiny I}}}$. Because C~{\sc iv}~ evolves very little with redshift but the H~{\sc i}~ opacity increases, we expect the C~{\sc iv}~/H~{\sc i}~ ratio to decrease toward higher redshift. The cumulative distribution of this quantity is shown in Figure \ref{fig:civhi_km}, with the expected difference between $z\sim 2.4$ and 4.3. 
Note that this is the distribution of the $N_{{\rm C \; \mbox{\tiny IV}}}/N_{{\rm H \; \mbox{\tiny I}}}$ ratio for each system in the sample, not the ratio of the C~{\sc iv}~ and H~{\sc i}~ distributions. The shapes are very similar at the two epochs, with the difference coming from a systematic shift downward by roughly 0.4 dex, or a factor of 2.5. \section{Ionization Corrections}\label{sec:ionization_corrections} To translate the evolution in $N_{{\rm C \; \mbox{\tiny IV}}}/N_{{\rm H \; \mbox{\tiny I}}}$ into an evolution in the carbon abundance, we apply an ionization correction to each individual absorber as follows: \begin{equation} \left[{{C}\over{H}}\right] = \log\left({{{N_{{\rm C \; \mbox{\tiny IV}}}}\over{N_{{\rm H \; \mbox{\tiny I}}}}}}\right) + \log\left({{f_{{\rm H \; \mbox{\tiny I}}}}\over{f_{\rm C \; \mbox{\tiny IV}}}}\right) - \log\left({{C}\over{H}}\right)_\sun. \label{eqn:metallicity} \end{equation} Here, $f_{\rm C \; \mbox{\tiny IV}}$ and $f_{\rm H \; \mbox{\tiny I}}$ represent the triply ionized fraction of carbon and the neutral fraction of hydrogen, respectively. The last term is a zero-point offset to normalize relative to the solar abundance. Throughout the paper we have assumed the meteoritic Solar abundances of \citet{grevesse_solar_abund}, where $A_{\rm carbon}=8.52$ on a scale where $A_{\rm Hydrogen}=12$. Several revisions have been proposed to the Grevesse solar abundances \citep{allende_prieto_oc, allende_prieto_oxygen, holweger_solar_abundances}; we have kept the older values for consistency with prior work by many authors on IGM abundances. Ionization fractions for carbon and hydrogen were determined using the CLOUDY software package \citep{cloudy}. We assumed the IGM is optically thin (consistent with the low H~{\sc i}~ column densities in the sample) and in photoionization equilibrium.
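Equation \ref{eqn:metallicity} reduces to a one-line computation once the ionization fractions are in hand. The sketch below uses illustrative fractions rather than actual CLOUDY output; only the solar zero point ($A_{\rm C}=8.52$ on the $A_{\rm H}=12$ scale) is taken from the text.

```python
import math

# Solar carbon abundance from Grevesse & Sauval: A_C = 8.52 with A_H = 12,
# so log(C/H)_sun = 8.52 - 12.
LOG_CH_SOLAR = 8.52 - 12.0

def carbon_abundance(N_CIV, N_HI, f_HI, f_CIV):
    """[C/H] from Equation 2, given column densities (cm^-2)
    and the ionization fractions of H I and C IV."""
    return (math.log10(N_CIV / N_HI)
            + math.log10(f_HI / f_CIV)
            - LOG_CH_SOLAR)
```

For example, a system with $N_{\rm C\,IV}=10^{12}$, $N_{\rm H\,I}=10^{15}$, and illustrative fractions $f_{\rm H\,I}=10^{-4}$, $f_{\rm C\,IV}=0.5$ yields [C/H]$\approx -3.2$, in the range spanned by the measured distribution.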
\subsection{Comments on choice of the UV background spectrum}\label{sec:uvbg} \begin{figure} \epsscale{1.0} \plotone{f9.eps} \caption{Mean spectrum of the UV background radiation field, at $z\sim 4.3$ and $z\sim 2.4$. We show the calculations of both \citet[including a galaxy contribution]{haardt_cuba} and \citet{faucher_uvbg} for comparison. The spectra are very similar at high redshift; at lower redshift the \citet{faucher_uvbg} version is harder, leading to higher overall abundances and a correspondingly larger evolution in the observed [C/H] ratio.} \label{fig:uvbg} \end{figure} The largest uncertainty in our ionization correction arises from the choice of a particular prescription for the ionizing background spectrum. Several groups have recently provided improved constraints on the overall intensity of the background field at 1 Rydberg - denoted as $J_{912}$. This can be derived from the opacity of the Ly-$\alpha$~ forest, which relates in turn to the H~{\sc i}~ photoionization rate $\Gamma_{12}$. In a sample of 63 high resolution spectra spanning $2.0 < z < 5.5$, \citet{becker_gamma} find $\Gamma_{12}=[0.8, 0.5]$ at $z=[2.4, 4.3]$, where by convention $\Gamma_{12}$ is normalized to units of $10^{-12}{\rm s}^{-1}$. \citet{faucher_gamma} find $\Gamma_{12}=[0.6, 0.4]$ over the same range using slightly different analysis techniques on a set of 86 high resolution spectra. Earlier analyses by \citet{bolton_gamma} and \citet{scott_uv_bkgd} derived slightly higher estimates closer to $\Gamma_{12}=1$. However, all of these studies find that the evolution of the H~{\sc i}~ photoionization rate is quite weak over the redshift range studied here, perhaps $10-30\%$ smaller at $z=4.3$ but not much more. Throughout the rest of the paper, we use the values from \citet{becker_gamma}. For a power law ionizing spectrum $\propto \nu^{-1.8}$, this corresponds to a flux at the H~{\sc i}~ ionization edge of $\log(J_{912})=[-21.5,-21.7]$ at $z=[2.4, 4.3]$. 
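The quoted conversion from $\Gamma_{12}$ to $J_{912}$ can be reproduced with a short closed-form integral for a power-law spectrum $J_\nu \propto \nu^{-1.8}$ and a hydrogen cross section $\sigma_\nu \approx \sigma_0(\nu/\nu_0)^{-3}$. The cross-section normalization $\sigma_0$ and the $\nu^{-3}$ scaling are standard values we assume here, not numbers given in the text.

```python
import math

SIGMA_0 = 6.30e-18    # H I photoionization cross section at 1 Ryd, cm^2 (assumed)
H_PLANCK = 6.626e-27  # Planck constant, erg s

def log_J912(Gamma12, alpha=1.8):
    """log10 of J_912 (erg s^-1 cm^-2 Hz^-1 sr^-1) implied by Gamma_12.

    Gamma = integral of 4*pi*J_nu/(h*nu) * sigma_nu over nu >= nu_0, which
    for J_nu ~ nu^-alpha and sigma ~ nu^-3 gives
    Gamma = 4*pi*J912*sigma_0 / (h * (alpha + 3)).
    """
    Gamma = Gamma12 * 1e-12                       # s^-1
    J912 = Gamma * H_PLANCK * (alpha + 3.0) / (4.0 * math.pi * SIGMA_0)
    return math.log10(J912)
```

Plugging in the \citet{becker_gamma} values $\Gamma_{12}=0.8$ and $0.5$ recovers $\log(J_{912})\approx -21.5$ and $-21.7$, as quoted.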
Even if the normalization of the background spectrum at 1 Rydberg does not evolve, it is generally agreed that the {\em shape} of the spectrum will change. The quasar luminosity function falls off between $z=4$ and $z=2$ \citep{hopkins_lf}. While ionizing photons at $z=2.4$ come from both quasars and galaxies, the background at $z=4.3$ contains a higher fractional contribution from galaxies \citep{faucher_gamma}. This should lead to a softer UV background at earlier epochs, with fewer photons at the higher energies which regulate the ionization balance of heavy elements like carbon and oxygen. \begin{figure} \epsscale{1.0} \plotone{f10.eps} \caption{Neutral fraction of hydrogen as calculated by CLOUDY, compared with the analytic approximation of Equation 3. This relation is redshift-independent, except for the implicit dependence on the IGM temperature and $\Gamma_{12}$. It has a logarithmic slope of $\frac{2}{3}$ dex change in $f_{\rm H \; \mbox{\tiny I}}$ per unit dex in $N_{{\rm H \; \mbox{\tiny I}}}$.} \label{fig:fhi} \end{figure} We have tested two main prescriptions of the UV background with our data: the commonly used version from \citet{haardt_cuba}, and an independent new determination from \citet{faucher_uvbg}. In Figure \ref{fig:uvbg} we show these spectra, with the energies of relevant transitions for the C~{\sc iv}~ ionization balance labeled. The Haardt \& Madau spectrum includes contributions from both QSOs and galaxies; the QSO portion is determined from the \citet{croom_qso_lf} luminosity function, while the galaxies are added through population synthesis modeling, assuming a 10\% escape fraction of photons beyond the Lyman limit. The Haardt \& Madau spectra have been normalized to match the observational constraints on $\Gamma_{12}$ described above.
\citet{faucher_uvbg} take the QSO luminosity function from \citet{hopkins_lf}, which falls more steeply toward higher redshift, and add a galaxy contribution to keep the evolution of $\Gamma_{12}$ constant. When the Haardt \& Madau spectrum is renormalized in CLOUDY to match observations of $\Gamma_{12}$ at 13.6 eV, its soft X-ray background is also artificially boosted by a factor of $\sim 10$ at energies above 10 Rydbergs. This has consequences for the C~{\sc iv}~ ionization balance because it affects the C~{\sc v}~ to C~{\sc vi}~ transition. \begin{figure} \epsscale{0.95} \plotone{f11a.eps} \plotone{f11b.eps} \caption{CLOUDY carbon ionization balance at redshift $4.3$ for the \citet{haardt_cuba} model of the UV background (top) and \citet[bottom]{faucher_uvbg} models. The histogram at top indicates the cumulative distribution of lines in our sample, with the minimum, median, and 90th percentile shown by vertical lines. At this redshift, the C~{\sc iv}~ fraction peaks in the IGM, at $f_{\rm C \; \mbox{\tiny IV}}\sim 0.5$.} \label{fig:fciv_z4} \end{figure} In their calculations, Haardt \& Madau treat the UV and X-ray backgrounds independently, trying to match each with observed luminosity functions by balancing the relative importance of Type I and II QSOs. In its raw form, the Haardt \& Madau X-ray background is slightly lower at $z=4.3$ than at $z=2.4$ (as would be expected), but after renormalization to match the observed $\Gamma_{12}$ at 1 Ry, the situation reverses, such that the X-ray background is higher at $z=4.3$ than it is at $z=2.4$. Since the X-rays originate in AGN, which are proportionately fewer at high redshift, this situation is probably unphysical. We therefore applied a downward correction of 0.8 dex to the $z=4.3$ HM spectrum above 10 Ryd, softening it by an amount comparable to the softening of the UV background and bringing it back into agreement with the original X-ray intensity.
Coincidentally, this downward correction brings the HM spectrum into fairly close agreement with the \citet{faucher_uvbg} spectrum, which by construction does not suffer from this effect of renormalization. While the HM and \citet{faucher_uvbg} spectra do not agree in every detail, their general shapes match fairly well, particularly at $z=4.3$. At lower redshift the difference is larger owing to assumptions on the AGN contribution as well as the treatment of He~{\sc ii}~ reionization. However, in both models the main change from $z=2.4$ to $z=4.3$ is a factor of 10-20 decrease in the hard UV background. It should be noted that neither model captures the He~{\sc ii}~ ``sawtooth'' effect described in \citet{HM_sawtooth}; this is discussed in Section \ref{sec:sawtooth}. \subsection{H~{\sc i}~ ionization fractions}\label{sec:fhi} Since the Ly-$\alpha$~ forest is optically thin and in photoionization equilibrium, a calculation of $f_{{\rm H \; \mbox{\tiny I}}}$ is relatively straightforward and may be made even without detailed knowledge of the spectral shape. In equilibrium, the ionization and recombination rates may be balanced as: \begin{equation} f_{{\rm H \; \mbox{\tiny I}}}n_{H} \Gamma \approx n_H^2 R \end{equation} where the recombination rate $R\approx 4\times 10^{-13}T_4^{-0.76}$ cm$^{3}$s$^{-1}$, $n_H$ is given in Equation 1, and we assume the gas is mostly ionized. Equations 1 and 3 may be combined to derive the hydrogen ionization fraction as a function of (observed) H~{\sc i}~ column density, plotted in Figure \ref{fig:fhi} alongside its numerical CLOUDY determination. This relation is independent of redshift, except implicitly through the (weak) evolution of the ionization rate $\Gamma$, and the temperature $T_4$. \subsection{C~{\sc iv}~ ionization fractions}\label{sec:fciv} \begin{figure} \epsscale{0.95} \plotone{f12a.eps} \plotone{f12b.eps} \caption{CLOUDY carbon ionization balance at $z\sim 2.4$, again for both models of the UV background. 
The mean H~{\sc i}~ column density is lower, as is the ionization fraction in C~{\sc iv}. The lower ionization fraction may be directly mapped to the inferred change in abundance.} \label{fig:fciv_z2} \end{figure} The ionization balance of C~{\sc iv}~ is a far more difficult calculation, because it is governed by photons at frequencies where the UV background spectrum is less well constrained observationally. Figure \ref{fig:fciv_z4} shows the CLOUDY-derived ionization balance for carbon as a function of H~{\sc i}~ column density, at $z=4.3$ (the balance at $z=2.4$ is shown in Figure \ref{fig:fciv_z2}). To calculate the ionization fractions, we first estimated the total hydrogen density associated with each H~{\sc i}~ column density as calculated from Equation 1. This value of $n_H$ was input into CLOUDY along with the prescriptions shown in Figure \ref{fig:uvbg} for the shape and amplitude of the background spectrum. Together, the determinations of the density and ionizing background flux determine the ionization parameter $U$ for the cloud which allows CLOUDY to calculate the abundance fraction of each carbon ionization state. The top panel of each figure shows the cumulative distribution of H~{\sc i}~ column densities for the C~{\sc iv}~ samples, as a visual aid to indicate what ionization corrections are appropriate for which fractions of the sample. Vertical lines on the bottom panels indicate the sample minimum $N_{{\rm H \; \mbox{\tiny I}}}$, and also its median value and 90th percentile. Plots are shown both for the Haardt \& Madau and Faucher-Giguere forms of the background spectrum. These plots show that the C~{\sc iv}~ ionization state predominates at $z\sim 4.3$, at a value of $f_{\rm C \; \mbox{\tiny IV}} = 0.50-0.65$ for the median H~{\sc i}~ system in the sample. This range reflects the difference between the \citet{haardt_cuba} and \citet{faucher_uvbg} backgrounds, respectively. 
Roughly equal amounts of carbon are in higher and lower ionization states. Conversely, the median system at $z\sim 2.4$ contains only 4-10\% C~{\sc iv}~, with the vast majority of carbon inhabiting higher ionization states. So, the typical ionization correction at $z=4.3$ is an upwards factor of 1.5-2.0 (0.2-0.3 dex), compared to a correction factor of $10-25$ (1.0-1.4 dex) at lower redshift. \section{Abundance Results}\label{sec:abundance} \subsection{Evolution from $z=4.3$ to $z=2.4$: Qualitative Results}\label{sec:qualitative} Before presenting the full carbon abundance distribution results, it is instructive to examine the evolution of the median system using qualitative arguments. From Figure \ref{fig:civhi_km}, the median H~{\sc i}~ absorber decreases in its C~{\sc iv}:H~{\sc i}~ ratio by 0.4 dex between $z=4.3$ and $z=2.4$. Suppose now that there were no evolution in the [C/H] ratio. To achieve this, our measured 0.4 dex decrease in C~{\sc iv}~/H~{\sc i}~ would need to be offset by a corresponding 0.4 dex {\em increase} in $\log\left({f_{\rm H \; \mbox{\tiny I}}}/{f_{\rm C \; \mbox{\tiny IV}}}\right)$, according to Equation 2. We have seen already that $f_{\rm H \; \mbox{\tiny I}}$ depends on $N_{{\rm H \; \mbox{\tiny I}}}$ but only very weakly on redshift. For a median $\log(N_{{\rm H \; \mbox{\tiny I}}})=14.5$ at $z=2.4$ and $15.2$ at $z=4.3$, the H~{\sc i}~ ionization fractions can be read from Figure \ref{fig:fhi}, yielding $\log (f_{{\rm H \; \mbox{\tiny I}},4.3})-\log (f_{{\rm H \; \mbox{\tiny I}},2.4}) \approx 0.5$. Thus for fixed $f_{\rm C \; \mbox{\tiny IV}}$, the first term in Equation 2 is 0.4 dex {\em lower} at $z\sim 4.3$, while the second is 0.5 dex {\em higher}. The first term is observationally determined, and the second is fairly model-independent, and they approximately cancel each other out. The degree of evolution in [C/H] therefore corresponds almost exactly to the change in $f_{\rm C \; \mbox{\tiny IV}}$ with redshift. 
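The bookkeeping in this argument can be made explicit. The term-by-term differences below are the median values quoted in the text; the change in $f_{\rm C\,IV}$ is left as a free parameter since it depends on the background model.

```python
def delta_CH(d_log_ratio, d_log_fHI, d_log_fCIV):
    """Change in [C/H] between epochs implied by Equation 2.

    [C/H] = log(N_CIV/N_HI) + log(f_HI) - log(f_CIV) - const,
    so the difference between epochs is the sum of the term differences.
    """
    return d_log_ratio + d_log_fHI - d_log_fCIV

# Median differences quoted in the text (z=4.3 minus z=2.4, in dex):
d_ratio = -0.4   # C IV / H I ratio is 0.4 dex lower at z=4.3
d_fHI = +0.5     # H I neutral fraction is 0.5 dex higher at z=4.3
d_fCIV = +0.7    # C IV ionization fraction rises by ~0.7 dex (model-dependent)
```

With these numbers the first two terms nearly cancel and the inferred [C/H] change tracks $\Delta\log f_{\rm C\,IV}$, giving roughly $-0.6$ dex, consistent with the 0.5-0.6 dex estimate below.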
Even if we adopt the most conservative possible hypothesis, namely that the UV background spectrum at $z=4.3$ is identical to the spectrum at $z=2.4$, the ionization parameter $U$ will still decrease because of the increase in gas density at higher redshift. Moreover, observational studies of the QSO and galaxy luminosity functions at these redshifts indicate that the background spectrum should soften as it is increasingly dominated by galaxies toward higher redshift \citep{faucher_gamma}. Both factors cause an increase in $f_{\rm C \; \mbox{\tiny IV}}$, which leads to a decrease in the implied [C/H]. Thus the observed distribution of $N_{\rm C \; \mbox{\tiny IV}}/N_{\rm H \; \mbox{\tiny I}}$, coupled with minimal assumptions about ionization, suggests that [C/H] is lower at $z=4.3$ than it is at $z=2.4$. To estimate by how much, consider the CLOUDY calculations from both the \citet{haardt_cuba} and \citet{faucher_uvbg} spectra. These illustrate that the C~{\sc iv}~ ionization fraction for the median system increases by 0.7 dex toward higher redshift. Taken together, these qualitative arguments suggest that [C/H] in the median system is lower at $z\sim 4.3$ by 0.5-0.6 dex (a factor of $\sim 3$) than it is at $z=2.4$. They also suggest that the {\em scatter} in the C~{\sc iv}~ to H~{\sc i}~ ratio may be an interesting diagnostic of fluctuations in the hard UV background. The scatter in this ratio reflects the convolved scatter in the intrinsic [C/H] ratio, measurement errors, and variance in the ionization correction. Even if one makes no assumptions about the relative weighting of these three factors, one may set upper bounds on the scatter in the hard background under the conservative assumption of zero measurement error and abundance scatter. \subsection{The Cumulative [C/H] Distribution at $z\sim 4.3$}\label{sec:km_metals} \begin{figure} \epsscale{1.0} \plotone{f13.eps} \caption{Kaplan-Meier cumulative probability function for carbon abundance in the IGM at $z\sim 4.3$ and $z\sim 2.4$.
Note that the low redshift curves sample the IGM at a median overdensity roughly $2\times$ larger than the high redshift sample. Solid curves indicate lognormal probability distributions. At $z\sim 4.3$, the curve represents a median abundance of [C/H]=-3.65, and $\sigma=0.8$. At lower redshift the median rises to [C/H]=-3.1 for the \citet{haardt_cuba} background, or [C/H]=-2.8 for \citet{faucher_uvbg}, with similar scatter. The difference of 0.5-0.7 dex represents a $3-5\times$ increase in the median abundance for the low redshift sample.} \label{fig:km_ch} \end{figure} Figure \ref{fig:km_ch} shows the Kaplan-Meier distribution of [C/H] at $z=4.3$ and $z=2.4$, derived by applying ionization corrections individually to each C~{\sc iv}~ system (or its upper limit). The distributions are shown for both the \citet{haardt_cuba} and \citet{faucher_uvbg} forms of the UV background radiation field. Recall that we show the [C/H] distributions for each redshift to eliminate ambiguity in [O/C], but the $z=4.3$ sample probes gas with a median overdensity of $\rho/\bar{\rho} \approx 3$, whereas the C~{\sc iv}~ sample at $z=2.4$ only reaches a median $\rho/\bar{\rho}\approx 6$ (the sample minima are $\rho/\bar{\rho}\approx 1.6$ and $\rho/\bar{\rho} \approx 3$). At $z=4.3$, our sample reaches nearly to the median [C/H] ($50\%$ on the plot), which lies near [C/H]$=-3.55$, or $\sim 1/3500$ the Solar abundance. Roughly half of the lines in our sample have $3\sigma$ upper limits restricting them below this value, so the shape of the distribution below the median is not known. Above the median, the distribution appears to be very crudely lognormal, with the upper quartile falling above [C/H]$=-3.1$. The distributions are very similar for the two choices of UV background radiation spectrum at $z=4.3$. The smooth solid curve shows the cumulative distribution for a lognormal probability distribution. 
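Since the fitted curves are lognormal in [C/H], the cumulative fraction is simply a Gaussian CDF in dex. A minimal sketch, using the best-fit $z\sim 4.3$ parameters quoted in the text (median $-3.55$, $\sigma=0.8$ dex):

```python
import math

def lognormal_cdf(ch, median=-3.55, sigma=0.8):
    """Cumulative fraction of absorbers with carbon abundance below `ch`,
    for a lognormal (Gaussian-in-dex) [C/H] distribution."""
    return 0.5 * (1.0 + math.erf((ch - median) / (sigma * math.sqrt(2.0))))
```

By construction half the absorbers lie below the median, and the model places roughly a quarter of the sample above [C/H]$\approx -3.1$, matching the upper quartile described above.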
Our best-fit lognormal model has a median abundance of [C/H]$=-3.55$ and a lognormal deviation of $\sigma=0.8$ dex. At $z=2.4$, we find a median abundance of [C/H]$=-3.1$ with $\sigma=0.8$ dex for the \citet{haardt_cuba} form of the background spectrum, consistent with prior results of \citet{simcoe2004} and close to, though slightly higher than, those of \citet{schaye_civ_pixels}. The \citet{faucher_uvbg} form of the background spectrum results in a slightly higher carbon abundance but similar scatter, with median [C/H]$=-2.8$ and $\sigma=0.7$ dex. The higher median results from the increased flux in the \citet{faucher_uvbg} spectrum near 5 Rydbergs, at the C~{\sc iv}~ to C~{\sc v}~ ionization edge. This flux enhancement favors higher ionization and hence larger ionization corrections. Comparing the two distributions, we see that the sample median carbon abundance has increased by $\sim 0.45-0.65$ dex (depending on the background model) during the epoch between $z=4.3$ and $z=2.4$, although the density probed at $z\sim 2.4$ is a factor of 2 larger. It is interesting to compare this result to that of \citet{schaye_civ_pixels}, who find $d/dz([{\rm C/H}])=+0.08^{+0.09}_{-0.10}$ and $d/d(\log(\rho))({\rm [C/H]})=0.65^{+0.10}_{-0.14}$. Not accounting for density, our measurement implies a derivative of roughly $-0.26$ dex per unit redshift, a factor of $3.3$ larger in magnitude than reported by Schaye et al. Moreover, the evolutionary trend has the opposite sign from Schaye's; our new result evolves in the expected direction (i.e. increasing abundance with time), whereas the prior result was basically consistent with zero evolution. Since Schaye et al. reported weak correlation with redshift but strong correlation with density, it is worth considering whether our measured evolution is simply an artifact of the different densities probed at the different redshifts.
According to the regression analysis of Schaye et al., a $2\times$ change in density alone could account for $\approx 0.3$ dex difference in [C/H] between the samples, but their reported evolution of +0.08 dex per unit redshift for $\Delta z=1.9$ would counteract the density effect by 0.15 dex, yielding a total change in [C/H] of 0.15 dex, compared to our 0.5 dex. Schaye et al. interpret the small statistical significance of their evolution coefficient and its non-intuitive sign as a non-detection. We have detected a larger difference in [C/H] between our high and low redshift samples, in the expected direction, even accounting for the difference in density. To decouple the effects of density and true evolution, we ran our analysis for a subset of our high redshift sample restricted to $N_{{\rm H \; \mbox{\tiny I}}}\ge 10^{14.7}$ cm$^{-2}$, thus ensuring that [C/H] is measured at fixed overdensity between the two redshifts. The resulting Kaplan-Meier distribution is shifted to higher [C/H] by 0.2 dex for this factor of 2 increase in $\rho/\bar{\rho}$, suggesting that the abundance is indeed lower in regions of smaller overdensity, but by a smaller degree than measured by Schaye et al. If we budget 0.2 dex of the total 0.5 change seen in Figure \ref{fig:km_ch} to the change in gas density, the remaining 0.3 dex represents the temporal evolution signal. \begin{figure} \epsscale{1.0} \plotone{f14.eps} \caption{Kaplan-Meier distribution function for carbon and oxygen in the $z\sim 2.4$ IGM, used for calibrating the [O/C] ratio, which is best fit with [O/C]=0.25. A positive [O/C] has been found by many authors, though the value used here is slightly smaller than other estimates in the literature (see text).} \label{fig:km_oc} \end{figure} As described earlier, we may also compare our measurements of [C/H] at $z\sim 4.3$ to pre-existing measurements of [O/H] at $z\sim 2.4$, which cover an identical range of overdensity but must be corrected for any non-Solar value of [O/C].
We estimate the value of [O/C] by comparing the [O/H] and [C/H] distributions at $z\sim 2.4$, and applying this same value at $z=4.3$ (i.e. assuming no evolution in the relative [O/C] abundance). Figure \ref{fig:km_oc} shows the result of this calculation. The [C/H] distribution is shown in the thick solid line, while the [O/H] distribution is the thin line; here the [O/H] curve has been shifted downward by 0.25 dex, simulating the predicted [C/H] for a best-fit [O/C]=0.25. Other studies \citep{simcoe2004, schaye_civ_pixels, aguirre_ovi_pixels} have generally found positive [O/C] distributions for soft photoionizing spectra, with values near [O/C]$\sim 0.5$. Our slightly smaller value likely results from a slightly higher normalization of the background spectrum to match recent observations \citep{becker_gamma, bolton_gamma}. The best-fit [O/C] for the \citet{faucher_uvbg} UV background is near [O/C]=0, which is to be expected considering that this spectrum is much harder than the \citet{haardt_cuba} background at $z=2.4$. \begin{figure} \epsscale{1.0} \plotone{f15.eps} \caption{Kaplan-Meier distribution function for [C/H] at $z\sim 4.3$ and [O/H]-0.25 at $z\sim 2.4$. Subject to estimates of [O/C], this distribution is our best representation of the change in [C/H] at fixed overdensity. The curve at $z\sim 4.3$ is the same as in Figure \ref{fig:km_ch}; the lognormal fits shown here for $z\sim 2.4$ have a median [C/H]=-3.1, -2.9 for the two models of the UV background.} \label{fig:km_constrho} \end{figure} Figure \ref{fig:km_constrho} shows the distributions of [C/H] at $z=4.3$, [O/H]$-0.25$ at $z=2.4$ for the \citet{haardt_cuba} UV background, and [O/H] for the \citet{faucher_uvbg} model (which has best-fit [O/C]=0). Lognormal curves are shown for each model. The two $z=2.4$ models have median abundances of $-2.9$ and $-3.1$, with similar scatter.
Evidently there is a 0.5 dex evolution in the median [C/H] at fixed overdensity for [O/C]=0.25; adopting [O/C]=0.5 would reduce the absolute change by 0.25 dex. Figure \ref{fig:evolution} shows this evolution as evidenced in the individual sample measurements at each redshift. The large circles show our median estimates at each redshift, while the central point is taken from \citet{schaye_civ_pixels}. Taken together, these calculations indicate that the intergalactic carbon abundance increased by 0.3-0.5 dex between $z\sim 4.3$ and $z\sim 2.4$, with little change in the lognormal abundance scatter. The range quoted reflects uncertainty in the various choices for the UV background, the relation between density and metallicity, and the value of [O/C] used. A change of 0.3-0.5 dex represents a factor of $\sim 2-3$ increase in the median intergalactic carbon abundance at fixed overdensity. It suggests that roughly half of the carbon seen in the IGM at $z\sim 2.4$ was deposited in the 1.3 Gyr interval between $z\sim 4.3$ and $z\sim 2.4$. \begin{figure} \epsscale{1.0} \plotone{f16.eps} \caption{Illustration of the individual measurements for each line in the full sample. Large red dots indicate the median abundance at each redshift, calculated using the Kaplan-Meier product limit estimator. The median points lie below the center of the scatter because of the inclusion of numerous upper limits. The solid point at $z=3.1$ represents the measurement of \citet{schaye_civ_pixels}.} \label{fig:evolution} \end{figure} \subsection{Uncertainties}\label{sec:uncertainties} \subsubsection{Ionizing Background Fluctuations}\label{sec:uvbg_fluctuations} The results presented above all assume a spatially uniform shape and intensity of the ionizing background, following the convention of all similar previous studies.
Although we find similar results for two representations of the {\em mean} background (Haardt \& Madau, and Faucher-Giguere), we did not explore the uncertainty or scatter that would be introduced by place-to-place variations in the radiation field. This subject has largely been ignored in the previous literature on IGM abundances, but recent work on the reionization of H~{\sc i}~ and He~{\sc ii}~ contains arguments that are relevant to observations of heavy element lines. Following the development of \citet{fardal_uvbg} and \citet{meiksin}, the variance in the background intensity at a given energy is a function of the source density of photon emitters at that energy, the mean free path of photons with that wavelength through the IGM, and the variance in source spectrum from object to object. When ionizing sources are rare, differ in SED from object to object, and photons have a small mean free path, the radiation seen by a random absorber tends to be dominated by a single source and will vary significantly from place to place. The production of H~{\sc i}~ ionizing photons at $z=4.3$ occurs mostly in galaxies, which are relatively numerous, and the mean free path for these photons is large because H~{\sc i}~ is already reionized. So, many sources will be contained within an H~{\sc i}~ photo-attenuation volume and the background should be comparatively uniform at 1 Ryd (except in the immediate vicinity of QSOs). Overall variations in [C/H] are therefore more sensitive to the C~{\sc iv}~ ionization balance, which is governed by photons between $3-30$ Rydbergs. These photons originate in less numerous AGN, which could lead to an increased variation in spatial intensity.
Since our high redshift sample is measured at the probable leading edge of He~{\sc ii}~ reionization \citep{reimers_2347, 0302_heii, 0302_heap, davidsen_kriss_1700, kriss_2347, fechner_1700, shull_2347, songaila_siiv}, the IGM opacity from photoelectric absorption must be considered above the He~{\sc ii}~ ionization potential (4 Ryd). For photons of energy $E$, the mean free path is given by \citet{furlanetto} as \begin{equation} \lambda = 0.41 \left({1+z}\over{5.3}\right)^{-2} \left({E}\over{4 {\rm ~Ryd}}\right)^3 \bar{x}_{{\rm He \; \mbox{\tiny II}}}^{-1} ~{\rm Mpc} \end{equation} At $z=4.3$, $\bar{x}_{{\rm He \; \mbox{\tiny II}}} \approx 1$ since He~{\sc ii}~ reionization is only in its earliest phases. For C~{\sc iv}~$\rightarrow$C~{\sc v}~ ionizing photons at 4.74 Ryd, this implies $\lambda < 1$ Mpc because of He~{\sc ii}~ attenuation, while for C~{\sc v}~$\rightarrow$C~{\sc vi}~ (24.2 Ryd) the path is $90$ Mpc. Thus the most important effect we must consider is stochastic fluctuation in the 4.74 Ryd background governing the C~{\sc iv}~ to C~{\sc v}~ transition. At $z=4.3$, simulations indicate that 10-20\% of the universe resides in ionized ``bubbles'' of He~{\sc iii}~ \citep{mcquinn}. He~{\sc ii}~ Ly-$\alpha$~ observations focus only on the interiors of these bubbles, where the ionized fraction is high and He~{\sc ii}~ absorption troughs are not totally saturated. The lower oscillator strength of C~{\sc iv}~ renders it observable both in He~{\sc iii}~ ionized bubbles and in the He~{\sc ii}~-neutral walls in between. In the bubble walls, He~{\sc ii}~ absorption strongly attenuates the UV background near 4 Ryd, affecting the ionization balance of carbon. Inside of bubbles, stochastic variations in the proximity of hard photon sources and attenuation by He~{\sc ii}~ Lyman-limit systems generate intensity variations both above and below the mean spectrum. In the following two sections we consider each of these environments separately.
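As a quick arithmetic check, the path lengths quoted above follow directly from this expression, taking the coefficients exactly as written:

```python
# Numerical check of the mean free path expression above:
# lambda = 0.41 * ((1+z)/5.3)**-2 * (E / 4 Ryd)**3 / x_HeII  [Mpc].

def mfp_heii(E_ryd, z=4.3, x_heii=1.0):
    """Mean free path (Mpc) of a photon of energy E_ryd through He II."""
    return 0.41 * ((1.0 + z) / 5.3) ** -2 * (E_ryd / 4.0) ** 3 / x_heii

print(round(mfp_heii(4.74), 2))   # 0.68 -- well under 1 Mpc (C IV -> C V edge)
print(round(mfp_heii(24.2), 1))   # 90.8 -- the ~90 Mpc quoted for C V -> C VI
```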
\subsubsection{UV background fluctuations in the interior of He~{\sc iii}~ bubbles}\label{sec:bubbles} To study ionization variations of carbon inside He~{\sc iii}~ bubbles, we follow the arguments of \citet{furlanetto}, who develops a simple Monte Carlo framework suited to the problem. The method was originally intended for studying variations in the 4 Ryd background in He~{\sc iii}~ bubbles at $z\sim 3$; a minor set of modifications renders it useful for studying the C~{\sc iv}~$\rightarrow$C~{\sc v}~ transition at $E=4.74$ Ryd and $z=4.3$. Essentially all opacities in the model are scaled down to reflect the smaller cross section of He~{\sc ii}~ at higher energy, and fluxes from the ionizing sources are also scaled down to reflect their SED. We defer a full description of the method to Furlanetto's paper. Very briefly, all quasars are assumed to live inside of He~{\sc iii}-ionized bubbles. The space outside of bubbles is opaque to He~{\sc ii}-edge photons; inside of bubbles there is still attenuation from the He~{\sc ii}~ analog of Lyman limit systems, but between these systems the mean free path is large. Because QSOs are rare, the typical bubble contains only one to a few sources, so a Monte Carlo approach is used. Ionizing sources are drawn at random from the luminosity function of \citet{hopkins_lf}, and placed uniformly within each bubble volume (no sources are located outside of bubbles). Then, the spatial variation of the background $J$ is calculated by compiling trial statistics with varying QSO luminosities, spatial locations in the bubble, Lyman limit attenuation, and UV spectral indices from the sources. The Monte Carlo calculation outputs a distribution of $j=J/\left<{J}\right>$, where $\left<{J}\right>$ represents the mean value of the background in a fully He~{\sc ii}-ionized IGM. The $J$ distribution at the C~{\sc iv}~$\rightarrow$C~{\sc v}~ ionization edge can be used to examine variations in our C~{\sc iv}~ ionization corrections.
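A heavily simplified toy version of this Monte Carlo can be sketched as follows, for intuition only; the source density \texttt{n\_qso}, the luminosity-function slope \texttt{alpha}, and all normalizations below are invented placeholders rather than the actual inputs of the calculation:

```python
import math, random

# Toy sketch of the bubble Monte Carlo described above: random QSOs
# inside a He III bubble of radius R, seen by an absorber at the bubble
# center, with flux attenuated over a length r0. The space density
# n_qso, luminosity-function slope alpha, and all normalizations are
# invented placeholders, not the values used in the actual calculation.

def rng_poisson(rng, lam):
    """Knuth's Poisson sampler (adequate for small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def sample_j(R=30.0, r0=35.0, n_qso=5e-5, alpha=2.5, trials=20000, seed=1):
    """Draw a distribution of j = J/<J> at bubble-center absorbers."""
    rng = random.Random(seed)
    mean_n = n_qso * (4.0 / 3.0) * math.pi * R ** 3  # expected QSOs per bubble
    fluxes = []
    for _ in range(trials):
        flux = 0.0
        for _ in range(rng_poisson(rng, mean_n)):
            lum = rng.paretovariate(alpha - 1.0)     # power-law luminosities
            u = 1.0 - rng.random()                   # uniform in (0, 1]
            r = R * u ** (1.0 / 3.0)                 # uniform within the sphere
            flux += lum * math.exp(-r / r0) / (4.0 * math.pi * r ** 2)
        fluxes.append(flux)
    mean = sum(fluxes) / len(fluxes)
    return [f / mean for f in fluxes]
```

Even this stripped-down version reproduces the qualitative behavior described below: a high-$j$ tail from chance proximity to a single bright source, and a low-$j$ tail from shot noise in the number of sources per bubble.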
The spread of $j$ depends strongly on the radii of the He~{\sc iii}~ bubbles, denoted as $R$, since variations diminish once bubbles become large enough for absorbers to ``see'' multiple QSOs. \citet{mcquinn} and \citet{bolton_heii} have shown in simulations that the distribution of $R$ does not evolve strongly with redshift and is governed primarily by QSO duty cycle. Typical values range from $20-40$ comoving Mpc, with the spread at any redshift larger than the difference between $z=4$ and $z=2$. The spread in $j$ also depends on the photoattenuation length $r_0$, which parameterizes the distance that C~{\sc iv}~ ionizing photons travel from their point of origin before reaching a He~{\sc ii}~ Lyman limit system. \citet{furlanetto} calculates $r_0$ for He~{\sc ii}~ ionizing photons at $z=2,3,4$ and demonstrates that its luminosity-weighted mean evolves very little. C~{\sc iv}~ ionizing photons have slightly higher energies; their attenuation is still dominated by He~{\sc ii}~ absorption, but the relevant $r_0$ increases very slightly because of the smaller He~{\sc ii}~ cross section at 4.74 Ryd. \begin{figure} \epsscale{1.0} \plotone{f17.eps} \caption{Distribution function of flux at the C~{\sc iv}~$\rightarrow$C~{\sc v}~ ionization edge, in He~{\sc iii}-ionized bubbles at $z\sim 4.3$. The flux is normalized in units of the mean, $\left<{J}\right>$. Scatter above the mean is dominated by proximity to individual QSOs, which enhances the background over a wide range of frequencies above 1 Ryd. Scatter on the low side arises from stochastic variations in the density of nearby QSOs and shielding from He~{\sc ii}~ patches.} \label{fig:bubbles} \end{figure} Figure \ref{fig:bubbles} shows the distribution of $J$ at $4.74$ Ryd in logarithmic (top panel) and linear (bottom panel) forms, for bubbles ranging from $R=15$ to 50 comoving Mpc. In all cases we have assumed $r_0=35$ Mpc, as in \citet{furlanetto}.
As expected, the shape is very similar to what was derived for He~{\sc ii}~ by \citet{furlanetto}, except the distributions are slightly narrower because of the increased transparency of He~{\sc ii}~ at 4.74 Ryd. Above the mean background flux, the curves converge for all bubble sizes. \citet{furlanetto} observed this same effect for He~{\sc ii}; it represents environments where the background is dominated by a single source. In this case the variance is driven by the shape of the QSO luminosity function and the random distribution of QSO-absorber spacings. According to Figure \ref{fig:bubbles}, variations on the high side are limited to a factor of $\sim 10\times$ enhancement in the background. Because this enhancement is driven by proximity to a single QSO, it should lead to an increase in the UV background across a wide range in frequency. When examining the C~{\sc iv}~ ionization balance in these neighborhoods, we generate an ``enhanced'' toy model of the background spectrum which boosts the flux for all energies above 1 Ryd by a constant factor of $5-10$. Below 1 Ryd the background remains the same, since it should still be dominated by integrated starlight from more numerous galaxies. Below the mean $J$, the distribution in Figure \ref{fig:bubbles} is more extended and varies with bubble size. The increased variance for small bubble sizes results primarily from shot noise in the number of quasars: by $R=15$ Mpc most bubbles have either no source or one source, and sources outside the bubble are shielded from view. This effect is strongest at the He~{\sc ii}~ ionization edge, and still significant at 4.74 Ryd. But the bubble walls are optically thin to photons with $E<4$ Ryd and to higher energy photons, including the $24.2$ Ryd C~{\sc v}~$\rightarrow$C~{\sc vi}~ edge.
Thus it is appropriate simply to use the mean UV background spectrum for H~{\sc i}~, as well as all carbon transitions except C~{\sc iv}~$\rightarrow$C~{\sc v}~ (and possibly C~{\sc iii}~$\rightarrow$C~{\sc iv}~, see Section \ref{sec:sawtooth}). To model effects at 4.74 Ryd, we attenuate the \citet{haardt_cuba} spectrum by an absorbing column with $\tau = \tau_0(E/4 {\rm Ryd})^{-3}$. This produces the desired suppression at $4.74$ Ryd while preserving the cosmic-averaged spectrum in optically-thin regions. The normalization $\tau_0$ is adjusted to produce a factor of 10-100 decrease in the background at 4.74 Ryd, consistent with the distribution in Figure \ref{fig:bubbles}. \begin{figure} \epsscale{1.0} \plotone{f18.eps} \caption{Model UV background spectra used to evaluate the effects of variations on our inferred [C/H] estimates. The thick solid line represents the mean \citet{haardt_cuba} background. Curves below this mimic the suppression of 4.74 Ryd photons by He~{\sc ii}~ Lyman limit systems (inside of bubbles) or the neutral walls between bubbles. Curves above the mean represent the proximity zones of QSOs.} \label{fig:toymodels} \end{figure} \subsubsection{UV background fluctuations in the He~{\sc ii}~-neutral inter-bubble medium} In simulations of He~{\sc ii}~ reionization, He~{\sc iii}~ bubbles only permeate 10-20\% of the IGM by volume, so we must consider the shape of the spectrum in the He~{\sc ii}~ walls between bubbles. This scenario is discussed in \citet{faucher_uvbg}; essentially, flux at the He~{\sc ii}~ edge is completely suppressed because in He~{\sc ii}-neutral regions the optical depth can easily exceed $\tau\sim 100$. Once again, the background at lower energies is spatially uniform, since the IGM is optically thin to these photons after H~{\sc i}~ reionization (and, at low energies, the background is dominated by galaxies). Likewise at higher energies the He~{\sc ii}~ cross section declines and the hard flux recovers to a uniform background.
We therefore model the background spectrum in He~{\sc ii}~ walls in the same way as in the ``suppressed flux'' regions of ionized bubbles: a \citet{haardt_cuba} standard mean background having He~{\sc ii}~ absorption blueward of the edge with $\tau = \tau_0(E/4 {\rm Ryd})^{-3}$. However, in the neutral walls the normalization of the optical depth is much higher. Whereas attenuation from Lyman limit systems in bubbles reduces the flux by a factor of 10-100, the completely neutral bubble walls with $\tau\sim 100$ experience attenuation by 44 orders of magnitude. \subsubsection{The effect of spatial fluctuations on [C/H] estimates} \begin{figure} \epsscale{1.0} \plotone{f19.eps} \caption{CLOUDY calculations showing correction factors to the ionization term in Equation 2. In regions where the 4.74 Ryd background is suppressed, little or no change is apparent. In regions of enhanced flux and high density, we may overestimate the abundance by up to $\sim 1$ dex.} \label{fig:chchange} \end{figure} Figure \ref{fig:toymodels} shows the range of ionizing background models we considered when investigating spatial fluctuations. The thick solid line represents the mean \citet{haardt_cuba} quasar+galaxy spectrum at $z=4.3$. Four models with He~{\sc ii}~ attenuation of $\tau_0=1,3,10,100$ fall below the mean. Smaller values of $\tau_0$ represent conditions on the lower half of the $J$ distribution inside of He~{\sc iii}~ bubbles, while the larger values are characteristic of the He~{\sc ii}~ bubble walls. Above the mean curve are two models with enhanced hard UV flux (i.e. proximate to QSOs), shown in dot-dashed lines. We ran each of these spectra through our CLOUDY model grid to examine the change to the ionization fraction as a function of column density. Figure \ref{fig:chchange} shows the result of this exercise, where we plot the change in ionization correction for each different choice of background spectrum.
A value of zero indicates no change in [C/H] for a given line; a positive value implies a {\em lower} metallicity for a given set of C~{\sc iv}~ and H~{\sc i}~ measurements. The solid lines near zero represent the four ``attenuated'' models with varying optical depth at 4 Ryd, increasing upwards. For these models the H~{\sc i}~ ionization fraction does not change at all because the universe is already optically thin to 912 \AA ~photons at $z\sim 4.3$ and the background is dominated by flux from numerous galaxies. The C~{\sc iv}~ ionization fraction changes very little as well, which results in a very small total correction of $\lesssim 0.2$ dex to [C/H] across our range of $N_{{\rm H \; \mbox{\tiny I}}}$. The small change in $f_{{\rm C \; \mbox{\tiny IV}}}$ is surprising at first glance given that the flux at the C~{\sc iv}~$\rightarrow$C~{\sc v}~ ionization edge changes by many orders of magnitude. The reason this effect is so small is that even for the mean spectrum, over 50\% of the carbon is already in the C~{\sc iv}~ state. So, a reduction in the C~{\sc iv}~$\rightarrow$C~{\sc v}~ rate---which would increase $f_{\rm C \; \mbox{\tiny IV}}$---cannot change the C~{\sc iv}~ fraction by more than a factor of $\sim 2$. A much larger correction is obtained when considering upward fluctuations in the UV background from proximity to bright quasars. While this change can be significant for selected systems, the number of affected data points should be small. Major upward fluctuations occur only in the largest bubbles surrounding rare, luminous QSOs, and even in these bubbles the probability of lying in a proximate region is low according to Figure \ref{fig:bubbles}. Such an error would push our abundances {\em lower}, enhancing the evolutionary trend discussed in Sections \ref{sec:qualitative} and \ref{sec:km_metals}.
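The attenuation toy model used in these tests amounts to a one-line filter on the mean spectrum. In this sketch a unit-normalized placeholder stands in for the actual tabulated \citet{haardt_cuba} background:

```python
import math

# One-line version of the attenuation filter described above: multiply
# the mean background by exp(-tau) with tau = tau0 * (E / 4 Ryd)**-3
# above the He II edge. The unit input intensity is a placeholder for
# the tabulated mean spectrum.

def attenuate(E_ryd, j_mean, tau0):
    """Background intensity after toy He II photoelectric absorption."""
    if E_ryd < 4.0:
        return j_mean                       # optically thin below the edge
    return j_mean * math.exp(-tau0 * (E_ryd / 4.0) ** -3)

for tau0 in (1, 3, 10, 100):                # suppression at the 4.74 Ryd edge
    print(tau0, attenuate(4.74, 1.0, tau0))
print(attenuate(24.2, 1.0, 100))            # hard flux largely recovers (~0.64)
```

The steep $E^{-3}$ cross section is what confines the suppression to a narrow band: even $\tau_0=100$ leaves the 24.2 Ryd C~{\sc v}~$\rightarrow$C~{\sc vi}~ edge mostly transparent, consistent with the recovery of the hard flux described above.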
\subsubsection{He~{\sc ii}~ Sawtooth}\label{sec:sawtooth} \begin{figure} \epsscale{1.0} \plotone{f20.eps} \caption{Effect of flux suppression at the C~{\sc iii}~ ionization edge from the He~{\sc ii}~ Ly-$\alpha$~ forest ``sawtooth'' spectral modulation.} \label{fig:sawtooth} \end{figure} \citet{HM_sawtooth} have recently emphasized the importance of a ``sawtooth'' imprint left on the ionizing background spectrum between 3-4 Ryd, from line absorption of the He~{\sc ii}~ ``Lyman'' series. This relates to the C~{\sc iv}~ balance because the transition from C~{\sc iii}~ to C~{\sc iv}~ occurs at 3.5 Ryd. These authors' updated model of the UV background including the sawtooth modulation leads to a decrease in flux at the C~{\sc iii}~ ionization edge of 0.2-0.3 dex at $z=3$. They also explored models where an artificially delayed He~{\sc ii}~ reionization led to a reduction in flux at the C~{\sc iii}~ edge of over a full dex. This latter case may be more appropriate for studying the $z=4.3$ IGM since He~{\sc ii}~ reionization should not be complete at this epoch and the resulting opacity may be quite high. Statistical studies of metal lines at high redshift provide some evidence for this effect \citep{agafonova}. For the mean spectrum with no He~{\sc ii}~ sawtooth, the majority of carbon is in the C~{\sc iv}~ state at $z\sim 4.3$. Therefore, in principle a softening of the background should shift the balance toward C~{\sc iii}~ and C~{\sc ii}, reducing $f_{\rm C \; \mbox{\tiny IV}}$ and increasing [C/H] estimates in the process (Equation 2), thereby reducing the evolution signal. To explore this effect, we obtained a copy of the sawtooth spectrum from F. Haardt, and used it to recalculate ionization corrections. In its raw form, the sawtooth model has a very hard spectrum that is disfavored by IGM abundance studies at lower redshift \citep{schaye_civ_pixels, simcoe2004}.
It produces an abundance gradient that decreases with density, leaving very high abundances in weak Ly-$\alpha$~ forest lines. Rather than use this form as-is, we instead examined the ratio of the sawtooth spectrum (HM+S in their Figure 1) to a separate model calculated with identical input parameters but no sawtooth included (HM in their Figure 1). This isolates the effect of the He~{\sc ii}~ Lyman series, which we then multiply into the softer background spectra shown in Figure \ref{fig:uvbg}. Clearly this is not a self-consistent way to model He~{\sc ii}~ absorption in the IGM, and \citet{HM_sawtooth} point out that even their models do not fully capture the patchy nature of He~{\sc ii}~ reionization that could be important here. But lacking a full simulation suite, our approach captures the spectral shape of the sawtooth, and provides a heuristic tool for examining its impact on our results. Figure \ref{fig:sawtooth} illustrates the change in [C/H] resulting from use of our modified sawtooth spectrum, as a function of $N_{{\rm H \; \mbox{\tiny I}}}$. The solid line shows the MH09 model; for the dashed curve we arbitrarily increase the optical depth across the sawtooth by a factor of 2. Depending on the degree to which the spectrum is changed, the correction can be anywhere from 0.08 to 0.2 dex, and always in the upward sense. The sawtooth modulation therefore will decrease the evolution signal between our $z=4.3$ and $z=2.4$ samples. \begin{figure} \epsscale{1.0} \plotone{f21.eps} \caption{Comparison of Kaplan-Meier distributions for the standard $z=4.3$ \citet{haardt_cuba} model, the $z=2.4$ standard model, and the $z=4.3$ model incorporating a 1 dex decrement at 3-4 Ryd from the sawtooth modulation.
This modification changes the shape of the Kaplan-Meier distribution, such that the high [C/H] regions do not evolve, but the low [C/H] regions do.} \label{fig:km_saw} \end{figure} Figure \ref{fig:km_saw} shows the Kaplan-Meier distribution in [C/H] obtained by running our survival analysis with the HM+S background at $z=4.3$. The two comparison curves show the distributions at $z=2.4$ and 4.3 for the unmodulated spectrum. For our modified HM+S spectrum, the [C/H] distribution shifts uniformly to the right by $\sim 0.06$ dex; a similar plot using the background with $2\times$ inflated He~{\sc ii}~ optical depths would shift $\sim 0.15$ dex higher than the distribution from the unmodulated spectrum. Recall that we observe a 0.5 dex change in [C/H] between $z=4.3$ and $z=2.4$; if one forces [O/C]=0.5, or requires a strong correlation between density and metallicity, the evolution could be as small as 0.3 dex. A very strong sawtooth modulation in the UV background could in principle remove 0.15 dex of our evolution signal. If one applies both the sawtooth spectrum {\em and} forces abundances to [O/C]=0.5, it is possible to derive a solution with only 0.10-0.15 dex of evolution in the carbon abundance, much closer to the no-evolution scenario. This situation arises only for a set of assumptions stressed to a particular end of our parameter space, but its possibility should not be ignored. \begin{figure} \plotone{f22.eps} \caption{Si~{\sc iv}~/C~{\sc iv}~ ratio for systems in the sample with detected C~{\sc iv}~. Model curves show the predicted trends for various degrees of sawtooth absorption in the UV background spectrum. The solid curve incorporates no sawtooth, the dotted line is the \citet{HM_sawtooth} form modified as described in the text. The dashed and dot-dash lines are for sawtooth spectra with optical depth arbitrarily multiplied by $2\times$ and $4\times$, respectively.
These models, which would be needed to substantially shift the [C/H] distribution, are disfavored by the few measurements shown here.} \label{fig:sivciv} \end{figure} The chief effect of the 3-4 Ryd decrement at $z\sim 4.3$ is to take carbon from the C~{\sc iv}~ state and move it to C~{\sc iii}, so we should in principle be able to see an increase in the C~{\sc iii}/C~{\sc iv}~ ratio using the C~{\sc iii}~ 977\AA ~line, or even an increase in C~{\sc ii}/C~{\sc iv}~ at 1334\AA. We searched our data for systems where this measurement could be made, but unfortunately the results do not provide strong constraints on the magnitude of the sawtooth effect. The C~{\sc iii}~ lines tend to be blended with Ly-$\alpha$~ forest absorption, and C~{\sc ii}~ lines are too weak to detect in all but our strongest few systems. One of the strongest constraints may instead come from pixel-optical-depth analyses. Although the C~{\sc iii}/C~{\sc iv}~ ratio itself is not a good diagnostic at $z\gtrsim4$, the models in \citet{schaye_civ_pixels}---which show no trend of [C/H] with redshift---may yield a cosmic abundance that declines from high values in the past toward low values at present if a sawtooth background is used---a solution we are generally prejudiced against. The Si~{\sc iv}~/C~{\sc iv}~ ratio is another diagnostic of spectral hardness which is accessible for a limited number of systems in our sample. Figure \ref{fig:sivciv} shows the calculated values, or upper limits (where no Si~{\sc iv}~ is detected). The model curves show predicted values of $N_{\rm Si \; \mbox{\tiny IV}}/N_{\rm C \; \mbox{\tiny IV}}$ for different prescriptions of the sawtooth, calculated using CLOUDY with Solar relative abundances \citep{grevesse_solar_abund}. At high $N_{{\rm H \; \mbox{\tiny I}}}$ where the most reliable measurements are located, the data are broadly consistent with the non-sawtooth spectrum and our modified \citet{HM_sawtooth} sawtooth form.
Artificially boosted sawtooth absorption overpredicts the amount of Si~{\sc iv}~ seen in these systems, although the number of points is too small to draw broad conclusions. This diagnostic suffers a slight degree of ambiguity because of possible non-solar values of [Si/C]. However, many absorbers at high redshift show [Si/C]$>0$, in which case there would be an even stronger conflict for the heavily absorbed sawtooth spectra. \subsubsection{Continuum Fitting}\label{sec:contiuum} At high redshifts an additional possible bias is introduced from uncertainties in the absolute continuum flux from the background quasar over the Ly-$\alpha$~ forest region. We have followed the typical procedure of estimating continuum levels by fitting a low-order spline across relatively unobscured regions of the forest. This procedure becomes increasingly inaccurate at high redshifts, as the opacity of the Ly-$\alpha$~ forest increases and it becomes increasingly difficult to find regions with transmission of unity. \citet{faucher_continuum} present a treatment of this same problem in the context of Ly-$\alpha$~ forest opacity measurements. Drawing mock spectra from Ly-$\alpha$~ forest simulations at various redshifts, they performed manual continuum fits similar to the ones described here and examined the resultant deviation from the known, true continuum as a function of redshift. Their results indicate that at $z=4.2$ (our sample redshift) a manual estimate could systematically place the continuum too low, by $\sim 17\%$. By comparison, at $z=2.5$, where the O~{\sc vi}~ and other studies were focused, the correction is of order $2\%$. To examine this effect, we created a ``continuum corrected'' version of our highest quality sample spectrum (BR0353-3820) and refit H~{\sc i}~ column densities to the full Ly-$\alpha$~ forest.
Following the empirical approach of \citet{faucher_continuum}, we adjusted the continuum-normalized flux at each wavelength in the Ly-$\alpha$~ forest downward by a factor of $(1-C(z))$, where $C(z)=1.58\times 10^{-5}(1+z)^{5.63}$, to account approximately for the bias introduced by incorrect continuum fits. We then re-calculated $N_{{\rm H \; \mbox{\tiny I}}}$ at each redshift for our full sample. Figure \ref{fig:contin} shows the results of the re-fitting exercise for BR0353-3820, where as before we use multiple Lyman series transitions to constrain $N_{{\rm H \; \mbox{\tiny I}}}$. \begin{figure} \epsscale{1.0} \plotone{f23.eps} \caption{Error introduced in $N_{{\rm H \; \mbox{\tiny I}}}$, and hence [C/H], caused by errors in our estimate of the QSO continuum in the Ly-$\alpha$~ forest. No systematic trend is apparent, and most of the scatter is at the $\sim \pm 0.2$ dex level.} \label{fig:contin} \end{figure} Somewhat surprisingly, we found that the column density for the median system was nearly unchanged, or at least was offset by an amount smaller than the $\sim \pm 0.2$ dex point-to-point scatter. This may result from the fact that in each system, the $N_{{\rm H \; \mbox{\tiny I}}}$ measurement is driven by that Lyman series line which is closest to saturation but not bottomed-out across the profile. The fractional error introduced by poor continuum fits is smallest at low flux levels, so choosing the right transitions minimizes the bias. At least it seems clear that changes in the continuum level at $z\sim 4$ lead to errors at the level of $\lesssim 0.5$ dex in $N_{{\rm H \; \mbox{\tiny I}}}$. In general, continuum errors enter into [C/H] in two ways. One is a simple error in $N_{\rm C \; \mbox{\tiny IV}}/N_{\rm H \; \mbox{\tiny I}}$.
However, because $f_{{\rm H \; \mbox{\tiny I}}}\propto N_{\rm H \; \mbox{\tiny I}}^{2/3}$ (Figure \ref{fig:fhi}), we also underestimate the neutral fraction, which partly counteracts the misestimate of $N_{\rm C \; \mbox{\tiny IV}}/N_{\rm H \; \mbox{\tiny I}}$. Thus the error in [C/H] is only $\frac{1}{3}$ the logarithmic error in $N_{{\rm H \; \mbox{\tiny I}}}$---in our case, $\sim 0.15$ dex of random scatter with no systematic offset. There is an additional, much smaller contribution from an incorrect estimate of $f_{{\rm C \; \mbox{\tiny IV}}}$, which is derived from $N_{{\rm H \; \mbox{\tiny I}}}$. Over the column density range of our sample, the C~{\sc iv}~ fraction is near its peak. This results in a broad minimum in the logarithmic derivative of the C~{\sc iv}~ fraction with $N_{{\rm H \; \mbox{\tiny I}}}$. For the \citet{faucher_uvbg} background (as a representative example; see Figure \ref{fig:fciv_z4}), \begin{equation} \left|{{{d \log(f_{{\rm C \; \mbox{\tiny IV}}})}\over{d\log (N_{{\rm H \; \mbox{\tiny I}}})}}}\right|\lesssim 0.1 \end{equation} over $14.5<\log (N_{{\rm H \; \mbox{\tiny I}}}) < 16.5$, with the derivative near zero between 15.0 and 15.75. The C~{\sc iv}~ column density in Equation 2 is not sensitive to continuum fitting errors and remains constant. Taken all together, the corrections to $N_{{\rm H \; \mbox{\tiny I}}}, f_{{\rm H \; \mbox{\tiny I}}},$ and $f_{{\rm C \; \mbox{\tiny IV}}}$ would lead to a combined correction in [C/H] of only 0.032-0.034 dex for each 0.1 dex change in $N_{{\rm H \; \mbox{\tiny I}}}$. Moreover the sense of the change is such that the measured abundance at $z=4.3$ becomes {\em lower} as $N_{{\rm H \; \mbox{\tiny I}}}$ is increased, which tends to enhance the trend of declining carbon abundance with increasing redshift.
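Both the continuum-bias formula and the error-propagation budget above can be checked numerically; the sign bookkeeping in the helper below is our own convention, with the small $f_{{\rm C \; \mbox{\tiny IV}}}$ derivative passed in as a parameter:

```python
import math

# Two checks from this section. First, the continuum bias
# C(z) = 1.58e-5 * (1 + z)**5.63 reproduces the quoted corrections.
# Second, the error budget: a +1 dex error in N_HI lowers
# log(N_CIV / N_HI) by 1 dex but raises f_HI by 2/3 dex, leaving a net
# 1/3 dex effect before the small f_CIV term. The sign bookkeeping in
# dch_per_dex_nhi is our own convention, not a formula from the paper.

def continuum_bias(z):
    return 1.58e-5 * (1.0 + z) ** 5.63

print(round(continuum_bias(4.2), 3))   # 0.17  (the ~17% quoted at z = 4.2)
print(round(continuum_bias(2.5), 3))   # 0.018 (the ~2% quoted at z = 2.5)

def dch_per_dex_nhi(dlogf_civ_dlognhi=0.0):
    """d[C/H] per dex of N_HI error: -1 from the ratio, +2/3 from f_HI."""
    return -1.0 + 2.0 / 3.0 - dlogf_civ_dlognhi

# Net effect per 0.1 dex of N_HI error, with the f_CIV derivative near zero:
print(abs(dch_per_dex_nhi()) * 0.1)    # ~0.033 dex, matching 0.032-0.034
```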
\subsection{Summary of systematic uncertainties} To synthesize the results of this section: the main systematic sources of uncertainty in our abundance estimates come from the UV background used to calculate ionization corrections, and the determination of H~{\sc i}~ column densities in the thick Ly-$\alpha$~ forest at $z\sim 4.3$. We have tried two choices for the ionizing background: the commonly-used \citet{haardt_cuba} model with quasars + galaxies, and a newer calculation of \citet{faucher_uvbg}. Both versions of the spectrum yield an evolving carbon abundance with redshift, with the \citet{faucher_uvbg} version giving a slightly larger derivative. We investigated the effect of spatial variations in the UV background due to photoelectric absorption from He~{\sc ii}~, and stochastic fluctuations in the ionizing source population. These variations do not affect the H~{\sc i}~ neutral fraction, or transitions relevant to C~{\sc iv}~ other than the C~{\sc iv}~ to C~{\sc v}~ transition at 4.74 Ryd. Near this edge, the background can vary by factors of 10-100 inside of He~{\sc iii}-ionized bubbles, and many orders of magnitude in the neutral bubble walls. However, CLOUDY tests revealed that changes in the 4.74 Ryd flux, absent changes at other wavelengths, affect our abundances only at the 0.1 dex level. It is possible that we have overestimated abundances for systems in the proximity zones of bright quasars by $\sim 1$ dex or more, but these systems are quite rare and should not drastically alter our distributions. The inclusion of ``sawtooth'' absorption from the He~{\sc ii}~ Lyman series at 3-4 Ryd leads to a systematic increase of $0.06$ dex in our median [C/H], for an absorption strength consistent with the results of \citet{HM_sawtooth}. If the sawtooth absorption strength is artificially inflated this offset becomes correspondingly larger.
Abundance errors from a systematic underestimate of $N_{{\rm H \; \mbox{\tiny I}}}$ (as would be found from continuum fitting errors) are limited to $\lesssim 0.1$ dex, because for every 0.1 dex increase in $N_{{\rm H \; \mbox{\tiny I}}}$, there is a corresponding increase of $0.07$ dex in $f_{\rm H \; \mbox{\tiny I}}$ offsetting the effect, and little change in the C~{\sc iv}~ ionization fraction. Taken together these tests suggest that [C/H] abundances at $z\sim 4.3$ are less sensitive to systematic errors than one might naively expect. Different choices of the mean background will change the overall median abundance but do not affect our conclusion that the carbon abundance is evolving except under stressed sets of assumptions. Most of the effects studied above tend to {\em enhance} the evolutionary signal, if we have underestimated their importance. \section{Discussion}\label{sec:discussion} \subsection{The Integrated Carbon Production Between $z=4.25$ and $z=2.4$}\label{sec:c_production} The total mass flux of carbon into the Ly-$\alpha$~ forest may be estimated simply from our data by integrating the abundance-weighted H~{\sc i}~ column density distribution. As described in \citet{simcoe2004}, the contribution of carbon atoms (in all ionization states) to the closure density may be estimated as: \begin{eqnarray} \Omega_{C}={{1}\over{\rho_c}}\left({{C}\over{H}}\right)_\sun\left<{10^{[C/H]}}\right>\left({{c}\over{H_0}}\right)^{-1}\times \nonumber\\ \int{N_{{\rm H \; \mbox{\tiny I}}}f_{(N_{{\rm H \; \mbox{\tiny I}}},X)}dN_{\rm H \; \mbox{\tiny I}}} \end{eqnarray} where $\rho_c$ is the (current) closure density, and $(C/H)_\sun$ is the solar carbon abundance by number. The H~{\sc i}~ column density distribution $f_{(N_{{\rm H \; \mbox{\tiny I}}},X)}$ is defined as $d^2{\cal N}/dXdN_{{\rm H \; \mbox{\tiny I}}}$ with ${\cal N}$ being the number of absorbers in a survey of comoving pathlength $dX$ (shown in Figure \ref{fig:cddf}).
The mean carbon abundance, shown in angled brackets, may be derived from the distributions in Figure \ref{fig:km_ch}. Assuming a lognormal parameterization of the abundance, the mean may be determined as \begin{equation} \left<{10^{[C/H]}}\right> = \exp \left[{\ln10\left<{\left[{{C}\over{H}}\right]}\right>+\frac{1}{2}(\ln 10\times \sigma)^2}\right] \end{equation} \noindent For our measured median abundance of $[C/H]=-3.55$ and scatter of $\sigma=0.7$ dex, this amounts to a mean carbon abundance of $10^{-2.81}$ at $z\sim 4.3$. The same equation applied to $z\sim 2.4$ yields a mean abundance of $10^{-2.36}$ at lower redshift. We substitute these values into Equation 8 to derive $\Omega_{C}=2.7\times 10^{-8}$ at $z=4.3$, and $4.8\times 10^{-8}$ at $z\sim 2.4$. This represents a factor of 1.7 increase over an interval of just 1.3 Gyr. Put another way, {\em roughly half of the carbon observed in the Ly-$\alpha$~ forest at $z\sim 2.4$ was deposited into the IGM after $z\sim 4.3$.} Even in the conservative limit where 0.2 dex of the difference between our redshift points is attributed to density effects, the fraction of ``recently deposited'' carbon would be about one-third of the total at $z\sim 2.4$. The rise in intergalactic abundance suggests a connection with the concurrent rise in the cosmic star formation rate and/or the luminosity density of AGN in this same redshift interval. Many of these metals may reside in enriched regions near high redshift galaxies, given their relatively recent production. Outflows are of course commonly seen in the spectra of star forming galaxies at these redshifts \citep{lbg_winds, steidel2010,vvds_lbg} and there is evidence of spatial association between strong C~{\sc iv}~ systems and UV-selected galaxies at $z\sim 3$ \citep{kurt_winds, steidel2010}. Our measurement would seem to connect these correlations in the local galactic environment to the global growth of metal abundance in the IGM, in a temporal sense. 
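The lognormal mean in the equation above transcribes directly to code; the call below uses the quoted median and scatter purely to illustrate the size of the median-to-mean offset:

```python
import math

# Direct transcription of the lognormal mean above: for [C/H] normally
# distributed in dex with median mu and scatter sigma,
# <10^[C/H]> = exp(ln10 * mu + 0.5 * (ln10 * sigma)**2).

def lognormal_mean_dex(mu, sigma):
    ln10 = math.log(10.0)
    return math.exp(ln10 * mu + 0.5 * (ln10 * sigma) ** 2)

# Illustrative call: the mean sits 0.5 * ln(10) * sigma**2 dex above the
# median, i.e. ~0.56 dex for sigma = 0.7.
print(math.log10(lognormal_mean_dex(-3.55, 0.7)))
```

It is this median-to-mean offset, growing quadratically with the scatter, that makes the abundance-weighted integral sensitive to the width of the lognormal fit as well as to its median.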
Estimates of the stellar population ages for $z\sim 3$ galaxies typically range between $100-200$ Myr to $\sim 1 $ Gyr \citep{alice_spitzer}. This means that the archetypal Lyman Break galaxy at $z=2.4$ could be built from scratch within the redshift interval of our C~{\sc iv}~ survey. The winds seen in LBG spectra travel at several hundred km/s, so even if these galaxies began driving totally unbound winds at birth, the wind material could only have traversed several hundred kpc and not deep into the IGM. The C~{\sc iv}~ column densities studied here ($N_{{\rm C \; \mbox{\tiny IV}}} \sim 10^{12}$) generally fall below the range where correlations with galaxies are strongest \citep[$N_{{\rm C \; \mbox{\tiny IV}}}>10^{13.5}$---][]{kurt_winds, steidel2010}. Moreover \citet{songaila_winds} studied similar C~{\sc iv}~ systems to the ones presented here, concluding that the weaker lines dominating absorption selected samples are too quiescent to trace actively evolving wind bubbles. However they do trace densities that would be found in the filaments seen in cosmic simulations, where galaxies also reside. These filaments may be populated with moderately recent, cooled wind relics that would manifest as the weaker C~{\sc iv}~ systems seen in our survey and other sensitive C~{\sc iv}~ searches in the Ly-$\alpha$~ forest, although further theoretical work would be required to assess if this scenario holds up in detail. \subsection{Comparisons with the Global Star Formation Rate}\label{sec:sfr} For any moment in time, the difference between the intergalactic carbon growth rate and the yield-weighted star formation rate measures the global efficiency of feedback processes at ejecting metals from nascent galaxies. Observations of individual star forming galaxies in the local universe indicate that galactic winds can be significantly loaded in mass and especially in metals \citep{crystal_wind_yield}. 
During periods of intense star formation, it is possible that all metals produced in a burst will escape; in more quiescent periods these same elements would settle back into the galaxy's ISM. At $z=2.5-4.5$ individual galaxy measurements are difficult, but using our C~{\sc iv}~ sample we may estimate a globally averaged efficiency of metal ejection. First, we calculate the total stellar mass formed over our redshift interval by integrating the cosmic star formation rate density of \citet{reddy}. Over the interval $2.4<z<4.3$ ($\Delta t=1.34$ Gyr), this yields a total formed stellar mass of $1.6\times 10^{8} M_\sun$/Mpc$^3$, or an average of about 0.16 $M_\sun$ Mpc$^{-3}$yr$^{-1}$. We then estimate the carbon output resulting from this star formation activity using yields from \citet{woosley_weaver_yields}\footnote{The IMF correction from UV luminosity to stellar mass is counterbalanced by the IMF-weighting of the yield, and it is the UV-bright stars which matter most for enrichment. As long as the same IMF is applied consistently for both calculations the detailed form matters less for metal production. We thank the referee for raising this point.}. The IMF-weighted carbon yield, calculated assuming a Salpeter IMF from 0.5 to 40 $M_\sun$ and for solar metallicity, is $1.2\times 10^{-4}$. The yield for progenitors with $M<10M_\sun$ (which mostly still lie on the main sequence) is set to zero. Folding this into the star formation rate, we find that the total production of carbon from core collapse supernovae between $z=4.3$ and $z=2.4$ is \begin{equation} \Delta M_C \approx 2.1\times 10^4 M_\sun ~{\rm Mpc}^{-3} ~{\rm (Produced ~in ~galaxies)}. \end{equation} In the previous section, we found that the total growth of intergalactic carbon came to $\Delta \Omega_{\rm C}=2.2\times10^{-8}$, or \begin{equation} {{d}\over{dt}}\Omega_{\rm C} \sim 1.9\times10^{-17} {\rm yr}^{-1}. 
\end{equation} Multiplying by the critical density, we obtain the integrated volumetric increase in carbon mass: \begin{equation} \Delta M_{\rm C}=7.1\times 10^3 M_\sun ~{\rm Mpc}^{-3} ~{\rm(Added ~to ~IGM)}. \end{equation} A simple comparison of Equations 8 and 10 shows that the rate of carbon enrichment in the IGM is roughly a factor of $\sim 3$ lower than the rate of carbon production in stars at this epoch. This analysis is subject to substantial uncertainties in calculating the SFRD, IGM carbon abundance, and yields. It is somewhat remarkable that in spite of this, the carbon production and pollution rates are of similar magnitude. Taken at face value, the result suggests that galaxies at $z\sim 2-4$ keep roughly $70\%$ of the heavy elements they produce, and donate the other $30\%$ back to the IGM. There is precedent for this behavior at both low and high redshift. At the high redshift end, \citet{simcoe2004} estimated galaxy yields based on a closed-box chemical enrichment model of the Ly-$\alpha$~ forest. This analysis relates to the present one, except that it is an integral calculation whereas the present version is differential. However the results are similar: the closed-box model requires that galaxies lose at least $\sim 10\%$ of their heavy element mass to enrich the IGM to observed levels by $z\sim 2.4$, and possibly more. Our escape fraction of $\sim 30\%$ is also comparable to those derived from very limited samples of dwarf starbursts with outflows in the local universe. \citet{crystal_wind_yield} report ejection efficiencies of $15-20\%$ for the dwarf starburst NGC1569. For the local prototype starburst M82, \citet{strickland_m82} report a star formation rate of 4-6$M_\sun$ yr$^{-1}$ with mass outflow rates of 1-2 $M_\sun$ yr$^{-1}$. 
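The global budget comparison above amounts to a few lines of arithmetic. A sketch using the values quoted in the text (the paper's own rounding of the produced mass differs slightly):

```python
# Carbon produced in galaxies over 2.4 < z < 4.3, vs. carbon added to the
# IGM over the same interval. All inputs are the values quoted in the text.
stellar_mass_formed = 1.6e8   # M_sun / Mpc^3, integrated Reddy et al. SFRD
carbon_yield = 1.2e-4         # IMF-weighted carbon yield per unit stellar mass
produced = stellar_mass_formed * carbon_yield   # ~1.9e4 M_sun / Mpc^3
added_to_igm = 7.1e3          # M_sun / Mpc^3, from Delta Omega_C

ratio = produced / added_to_igm
print(f"produced/added ~ {ratio:.1f}")   # ~2.7, i.e. galaxies retain ~70%
```

The ratio of roughly 3 is what drives the quoted $\sim 70\%/30\%$ split between metals retained by galaxies and metals donated to the IGM.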
If the SFR is weighted by yield and the ejecta have abundances above $\frac{1}{10}Z_\sun$ (observations suggest $Z\sim 5Z_\sun$ in the wind), the metal outflow rate even exceeds the metal production rate, presumably because of mass loading from the ISM. While selected individual galaxies in the local universe have mass and metal outflow rates similar to these values, the high redshift growth in abundance would require that essentially every galaxy drive outflows with this efficiency. \subsection{Metal Mixing and the Smoothing Scale for Abundance Measurements} The abundances measured in this work---as well as in \citet{simcoe2004} and \citet{schaye_civ_pixels}---are understood to be smoothed on scales comparable to the Jeans length for the structures in question, which fall in the range $\sim 50-150$ kpc. Ly-$\alpha$~ forest clouds have small overdensities relative to the cosmic mean, and are generally thought to arise in structures that are marginally Jeans unstable \citep{schaye_forest}. This sets the relation between $N_{{\rm H \; \mbox{\tiny I}}}$ and $n_H$ seen in simulations, which we have exploited in calculating our ionization corrections. It indicates that the sizes, densities, and H~{\sc i}~ ionization fractions employed here should accurately represent the true physical conditions inside of H~{\sc i}~ absorbers. However, the mixing of C~{\sc iv}~ on $\sim 100$ kpc scales is neither well mapped observationally nor well resolved in numerical simulations. What evidence we do have suggests that C~{\sc iv}~ is not smoothly distributed throughout Ly-$\alpha$~ forest clouds. This is suggested observationally by the slight velocity offsets often observed between H~{\sc i}~ and C~{\sc iv}~ lines at similar redshift. Moreover, observations of starburst outflows in the local universe show a highly non-uniform geometry.
Typically there is a uniform hot X-ray halo powered by the feedback source, but cooler gas that would be seen in absorption is often clumpy or filamentary. Several recent hydrodynamic codes have explicitly tracked the fate of metals flowing out of galaxies, and these also tend to find a very non-uniform distribution, with metal absorption coming from small, higher density regions embedded in the more diffuse H~{\sc i}~ medium \citep{cen_enrichment_sims,oppenheimer}. However, these simulations employ SPH or other particle-based methods, tracking heavy element mixing by tagging outflow particles. These methods will underestimate the diffusion of metals on scales at or below that of a particle, which are crucial to the mixing process. Several authors have studied the sizes of C~{\sc iv}~ absorbing regions along the line of sight using ionization modeling \citep{simcoe2004}, or using transverse separation provided by gravitational lenses \citep{rauch_civ_lens}. Invariably these studies find small absorber sizes of $\sim 1$ kpc or less, sometimes much smaller. The extreme argument for non-uniform mixing is presented by \citet{schaye_mixing}, based on fitting heavy element components on the wings of larger C~{\sc iv}~ profiles. \begin{figure} \plotone{f24.eps} \caption{Comparison of the C~{\sc iv}~ column density distribution from sample spectra with the redshift-independent power law determined by \citet{songaila_omegaz}. For this plot only, we use Songaila's cosmology for direct comparison. Small number statistics set in at $N_{\rm C \; \mbox{\tiny IV}} \gtrsim 10^{14}$, while incompleteness is apparent at $N_{\rm C \; \mbox{\tiny IV}}\lesssim 10^{12.5}$. The few strong systems missed have minimal impact on the survival analysis; the statistics are dominated by the more numerous, low column density absorbers.} \label{fig:civ_cddf} \end{figure} Using simple scalings, \citet{schaye_mixing} examine the fate of non-uniform, high metallicity clumps outside of galaxies.
If these systems start out as hot clumps from superwinds, they are overpressurized with respect to the intergalactic surroundings, and will expand until they reach pressure and temperature equilibrium with the ambient medium. Once in equilibrium, the metal-rich patch also reaches a common density with the IGM. This process occurs on timescales of 1-10 Myr, essentially instantaneously with respect to the Hubble time or even the star formation age of galaxies at $z\sim 2-4$. Small patches of high metallicity embedded in the Ly-$\alpha$~ forest clouds may therefore dominate the C~{\sc iv}~ absorption profile, but the H~{\sc i}~ profile is dominated by the extended, metal poor Ly-$\alpha$~ forest absorber. Because their densities are the same once equilibrium is reached, the ionization correction for C~{\sc iv}~ (which depends on $n_H$ derived from $N_{{\rm H \; \mbox{\tiny I}}}$) should actually still be accurate for calculating $f_{\rm C \; \mbox{\tiny IV}}$. This implies that we have correctly measured the relative amounts of hydrogen and carbon for calculating [C/H]. However this estimate is essentially averaged over the whole Ly-$\alpha$~ absorber. The actual abundance in any single volume element may be either substantially lower or higher depending on the degree of mixing. \subsection{Comparison with Previous Studies}\label{sec:thepast} A number of authors have studied the evolution of C~{\sc iv}~ at $z>2$. \citet{songaila_new_civ} notably evaluated $\Omega_{{\rm C \; \mbox{\tiny IV}}}$ and the C~{\sc iv}~ column density distribution function from $z=2.1$ to $z\sim 6$. A key finding of this work was the lack of evolution observed in the C~{\sc iv}~ CDDF. Our results are broadly consistent with this result. Figure \ref{fig:civ_km} illustrates that our data yield a very similar column density distribution of C~{\sc iv}~ at redshifts $2.4$ and $4.3$. 
The Kaplan-Meier distribution differs from a standard CDDF in that we have not scaled by redshift pathlength; however, because it is reported as a percentage of the sample, the differing pathlengths of the two samples normalize out. Also, the KM distribution better captures information about non-detections, which matter here because C~{\sc iv}~ is detected in only about $50\%$ of our sample (which in turn consists of stronger Ly-$\alpha$~ forest lines). To test whether the C~{\sc iv}~ systems found in our sightlines are consistent with other surveys from the literature, we show the classic C~{\sc iv}~ CDDF in Figure \ref{fig:civ_cddf}, made from a count of (C~{\sc iv}~-selected) absorbers in our spectra. The power-law fit from \citet{songaila_omegaz} is shown as a solid line; it agrees with our data over the range $12.4<\log (N_{\rm C \; \mbox{\tiny IV}})<14.0$. Both our survey and Songaila's are substantially complete in the range $13<\log(N_{\rm C \; \mbox{\tiny IV}})<14$. A straight sum of systems in this $N_{\rm C \; \mbox{\tiny IV}}$ interval yields $\Omega_{\rm C \; \mbox{\tiny IV}}=1.3\times 10^{-8}$ for our sample at $z=4.3$, while her result (scaled to $\Omega_M=0.3$) yields $1.1\times 10^{-8}$, in good agreement. Using her full sample with $13.0<\log(N_{\rm C \; \mbox{\tiny IV}})<15.0$, \citet{songaila_new_civ} estimates that $\Omega_{{\rm C \; \mbox{\tiny IV}}}=1.8-2.7\times 10^{-8}$ over our redshift range. In Section \ref{sec:c_production}, we calculated the total {\em ionization corrected} $\Omega_{C}$ for an H~{\sc i}~-selected sample, with results $2.7\times 10^{-8}$ and $4.8\times 10^{-8}$ at $z=4.3$ and $2.4$, respectively. From Figure \ref{fig:fciv_z4}, we see that the C~{\sc iv}~ fraction in our sample ranges from $35-50\%$ at $z=4.3$ to $17-36\%$ at $z=2.4$. These imply an approximate $\Omega_{{\rm C \; \mbox{\tiny IV}}}\sim 1.0-1.5\times 10^{-8}$, on the low side of Songaila's range but generally consistent.
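To make the product-limit construction concrete, here is a minimal sketch of a Kaplan-Meier estimator. It is not the survival-analysis code actually used for the measurements, and it ignores the tie-breaking and left-censoring conventions a real analysis must handle (abundance upper limits are left-censored and are conventionally sign-flipped into right-censored data before applying the estimator).

```python
# Minimal Kaplan-Meier product-limit estimator for right-censored data.
def kaplan_meier(values, detected):
    """Return the survival curve as a list of (value, S) steps."""
    data = sorted(zip(values, detected))
    n = len(data)
    surv, steps = 1.0, []
    for i, (t, det) in enumerate(data):
        if det:                          # uncensored event: S *= (1 - 1/n_at_risk)
            surv *= 1.0 - 1.0 / (n - i)
            steps.append((t, surv))
    return steps

# Toy example: detections at 1, 3, 4 and censored points at 2, 5
steps = kaplan_meier([1, 2, 3, 4, 5], [True, False, True, True, False])
print(steps)    # steps at t = 1, 3, 4 with S ~ 0.80, 0.53, 0.27
```

Censored points do not produce steps but do shrink the at-risk count, which is how the estimator extracts information from non-detections.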
It is not surprising that our estimate made in this alternate way is slightly low, since about half the signal in raw measurements of $\Omega_{\rm C \; \mbox{\tiny IV}}$ is produced by the rare, strong systems (i.e. outliers) that we have not included in our statistical calculation. The strong systems picked up in $\Omega_{\rm C \; \mbox{\tiny IV}}$ are not well sampled here and may reside in circum-galactic environments that are locally enriched. \citet{schaye_civ_pixels} performed a systematic investigation of C~{\sc iv}~ at similar redshifts and, importantly, included ionization corrections in their analysis. Unlike our method, which relies on measurements of line column densities and upper limits, \citet{schaye_civ_pixels} performed a pixel-optical-depth (POD) analysis and compared with forward-modeled simulations of the IGM to infer [C/H] versus redshift. The POD method yields a median [C/H]$=-3.47$ at a pathlength-weighted mean redshift of $3.1$. This lies between our estimate of [C/H]=-3.1 at $z\sim 2.4$ and [C/H]=-3.6 at $z\sim 4.3$. A simple linear interpolation between our two redshift points would yield [C/H]=-3.3 at $z=3.1$, about 50\% higher than the POD estimate. Schaye et al. find no statistically significant evidence of redshift evolution, in contrast to our result. Some of this discrepancy may reflect a real difference in the measurements and ionization corrections. But there are several factors to consider when comparing the two. First, while we and Schaye have both employed the updated Haardt \& Madau model of the UV background with galaxies included, we have softened our spectrum above 10 Ryd at $z=4.3$, to avoid a rise in the X-ray background that would be inconsistent with the observed decline in the space density of AGN. Schaye et al noted that in experimenting with models softened above 4 Ryd (simulating He~{\sc ii}~ reionization), a transition in the middle of their redshift range would result in a positive detection of evolution. 
For reference, when we use the updated calculations of \citet{faucher_uvbg} which are {\em not} artificially tuned but which incorporate He~{\sc ii}~ ionization into the mean calculation, we measure an even {\em stronger} evolution by about 0.3 dex. A second point is that the present study (by design) has its pathlength weighted at the two endpoints of the redshift interval where we have detected a time derivative in the metallicity. The Schaye et al sample has its pathlength weighted toward the middle of its redshift range (around $z\sim 3.0$) to maximize signal-to-noise, and therefore has somewhat less leverage for measuring evolutionary trends. For example, of the 19 QSOs in their sample, only 2 have pathlength above $z=3.6$, and only one has pathlength above $z=4.0$. Indeed, the relative lack of high redshift, high resolution data in the Schaye sample was a motivating factor for obtaining our MIKE data set. This effect manifests partly as a reduced signal-to-noise for measuring evolution. However, it is also the case that the lowest densities probed in any large sample spanning $\Delta z >1$ will be found at higher redshift, because of the reduced ionization of the IGM. Likewise the highest redshifts are best for measuring low [C/H] because of the large $f_{\rm C \; \mbox{\tiny IV}}$ at earlier times. Since density, abundance, and redshift can be artificially correlated, a large regression analysis like the POD technique may encounter degeneracies along these basis vectors. The POD analysis projects a strong signal along the density axis with weak redshift evolution. By studying slices of samples with individual absorbers, we favor a slightly weaker correlation with density, and a stronger evolution with redshift. Finally, we note that simulations of metal enrichment in the IGM have improved substantially over the last several years. 
\citet{oppenheimer} and \citet{cen_enrichment_sims} have performed extensive studies of metal enrichment in cosmological simulations with particular attention to carbon and the C~{\sc iv}~ ion. These represent substantial updates to the early work of \citet{aguirre_outflows}. They find that the total budget of heavy elements in the IGM increases by a factor of 2.7, or 0.4 dex over this range. This is very closely in line with the change that we observe, indicating that wind models motivated by local observations--but implemented at high redshift--provide a good match to the trends seen in the carbon abundance. \section{Conclusions}\label{sec:conclusions} We have derived the distribution function of carbon abundance in Ly-$\alpha$~ forest clouds at $z\sim 4.3$, using a set of high signal-to-noise ratio spectra taken with the MIKE echelle spectrograph on the Magellan Clay telescope. The C~{\sc iv}~ column density or its upper limit was measured for an H~{\sc i}~-selected sample of 131 discrete absorbers with $N_{{\rm H \; \mbox{\tiny I}}} > 10^{14.5}$ cm$^{-2}$, corresponding to $\rho/\bar{\rho} \ge 1.6$. These measurements were converted into [C/H] abundances via density-dependent ionization corrections. The [C/H] distribution was then determined via the Kaplan-Meier product limit estimator for censored data. Our main findings are that: \begin{enumerate} \item{Over the range we can probe, the [C/H] distribution at $z\sim 4.3$ appears to be crudely lognormal, with a median of [C/H]=-3.55 and scatter of $0.8$ dex.} \item{The median abundance at $z\sim 4.3$ is about 0.3-0.5 dex lower than at $z\sim 2.4$, at fixed cosmic overdensity $\rho/\bar{\rho}$. The range quoted reflects differing assumptions about the UV ionizing background, and the gradient of abundance with density.} \item{We examined several sources of uncertainty in the measurements, including spatial variations in the radiation field and misestimates of the continuum.
Variations in the background may contribute to scatter in abundance estimates at the $\sim 0.2$ dex level, but do not contribute a significant systematic error. The same is true for continuum errors. If systematic errors are present they would tend to enhance, rather than diminish, the evolutionary signal. The one exception to this rule is the proposed ``sawtooth'' shaped attenuation of the 3-4 Ryd background from He~{\sc ii}~ absorption. Absorption at the levels indicated in \citet{HM_sawtooth} leads to a systematic increase of 0.06 dex in our abundances. A substantially stronger absorption signature could weaken the detected evolution signal, but this may also cause conflict with measurements of the Si~{\sc iv}~/C~{\sc iv}~ ratio in the limited cases where this could be measured.} \item{The total carbon contribution to closure density in our fiducial model is $\Omega_C\sim 2.7\times 10^{-8}$. The same quantity calculated at $z\sim 2.4$ is 1.7 times larger, implying that roughly half of the heavy elements seen in the Ly-$\alpha$~ forest at $z\sim 2.4$ were distributed into the IGM in the 1.3 Gyr between $z\sim 4.3$ and $z\sim 2.4$.} \item{The mass flux of carbon into the IGM needed to sustain this growth is comparable in magnitude to the carbon yield inferred from newly formed stellar populations at these redshifts. We estimate that on global scales (and within very large uncertainties), the metal feedback rate from galaxies is $\sim 30\%$ of the yield-weighted star formation rate. This large rate is consistent with observations of star forming galaxies in the local universe.} \end{enumerate} The main result of the paper---our detection of a decline in abundance at higher redshift---shows that at least some of the enrichment of the IGM is taking place at the time when we observe it.
High redshift C~{\sc iv}~ observations show that some metals already existed at $z\sim 5.5$, but a substantial fraction of the heavy elements seen at $z\sim 2.4$ must have been ejected from galaxies we can readily observe. The transport of metals from galaxies to the IGM is one of the strongest lines of evidence supporting the notion that feedback is a crucial element of galaxy formation. This has long been inferred indirectly, based on observations and models of galaxy properties in the local universe. Our measurements suggest that observations at high redshift may start to place meaningful constraints on the feedback process during the epoch when it was actually taking place. \acknowledgements It is a pleasure to thank the staff of the Magellan telescopes for their assistance in obtaining the data contained here. George Becker kindly assisted in reducing much of the data taken with the MIKE spectrograph. Francesco Haardt supplied private versions of the UV background models used, and Claude-Andre Faucher-Giguere kindly supplied his new models as well and offered several useful suggestions. Steve Furlanetto deserves credit for pointing out the possibility of using He~{\sc ii}~ methods to study fluctuations in the C~{\sc iv}~ background. Finally, I wish to acknowledge financial support from the Alfred P. Sloan foundation, and the NSF under grant AST-0908920. I also gratefully acknowledge generous lumbar support from the Adam J. Burgasser Chair in Astrophysics.
\section{Introduction} In many different situations, ranging from low to high energy physics, we are interested in --or have access to-- only a part of the coordinates describing a given physical system. The problem is finding the effective dynamics that drive the interesting variables by reducing the uninteresting ones or, vice versa, given the effective dynamics of accessible variables, introducing extra coordinates that simplify the overall dynamical picture. Dimensional reduction may be induced by a number of very different mechanisms which in general leave an imprint in the lower dimensional dynamics. Typical examples are Kaluza-Klein and brane-world reduction in high energy physics, quantum dots/lines/surfaces in semiconductor physics, magnetic confinement in plasma physics and so on. However, there are features that only depend on the selection of the `interesting' coordinates and not on the specific mechanism under consideration. In this paper we focus on these universal features that depend on the selection of a subset of coordinates and not on specific reduction schemes. It should be stressed that even if in certain cases --like brane-worlds and quantum lines/surfaces-- the effective lower dimensional configuration space can be identified with a regularly embedded metric submanifold, this is not the general situation. A classical example is provided by Kaluza-Klein theories where the physical spacetime is obtained by identifying higher dimensional points connected by a special class of diffeomorphisms that will eventually be identified with gauge transformations. The resulting quotient space cannot be given the structure of a metric submanifold. The classical theory of embeddings \cite{submanifolds,embST} is not sufficient to describe the general situation.
In this paper we further investigate the geometry of coordinate separation with emphasis on residual general covariance and provide a unifying framework that successfully generalizes the theory of metric submanifolds. The paper is organized as follows. In Section \ref{sec1} we find that dimensional reduction is completely characterized by lower dimensional tensors, generalizing, on the one hand, Kaluza-Klein gauge fields~\cite{KK,nonAbKK,KKrev} and, on the other, extrinsic curvature and torsion --i.e.\ second and normal fundamental forms-- of embedded spaces \cite{submanifolds,embST}. In terms of these we obtain in Section \ref{sec2} general reduction formulas for the Riemann tensor, Ricci tensor, scalar curvature, geodesic equations, Laplace and Dirac operators, providing what is probably the maximal possible generalization of the Gauss, Codazzi, and Ricci equations \cite{GCR} and various other standard identities in embeddings and Kaluza-Klein theories. These equations also represent the natural starting point to investigate higher dimensional unification scenarios in which physics is allowed to fully depend on all the introduced coordinates. In Section~\ref{sec3} special attention is given to induced gauge structures. We show how residual general covariance in the reduced variables always emerges in the effective dynamics as gauge covariance. The induced gauge group is in general infinite dimensional and reduces to finite dimensions in Kaluza-Klein and a few other remarkable backgrounds, all characterized by the vanishing of appropriate lower dimensional tensors. Finally, in Section~\ref{sec4} a discussion of the findings is presented and concluding remarks are made. For the sake of concreteness we tackle the problem from the viewpoint of higher dimensional unification. We consider a higher dimensional (HD) spacetime ${\mathbf M}_D$ parameterized by $D$ continuous coordinates ${\mathbf x}^I$, $I=0,1, ...
,D-1$, endowed with a pseudo-Riemannian metric ${\mathbf g}_{IJ}$. In addition to the coordinate system, we set up reference frames at each spacetime point ${\mathbf r}_A^{\ I}({\mathbf x})$, $A=0,1, ... ,D-1$, ${\mathbf r}_A^{\ I} {\mathbf r}_B^{\ J}{\mathbf g}_{IJ}=\eta_{AB}$. Physical laws are assumed to be covariant under general coordinate transformations and local redefinitions of reference frames \cite{Weinberg72} \begin{equation} {\mathbf x}^I \rightarrow {\mathbf x}'^I({\mathbf x})\hskip0.5cm {\mathbf r}_A^{\ I} \rightarrow {\mathbf\Lambda}_A^{\ B}({\mathbf x}){\mathbf r}_B^{\ I} \label{genD} \end{equation} At low energies the spacetime ${\mathrm M}_d$ (e.g. $d=4$) is parameterized by $d$ continuous coordinates $x^\mu$, $\mu=0,1,...,d-1$, and reference frames are made up of $d$ reference vectors $r_\alpha^{\ \mu}$, $\alpha=0,1,...,d-1$. Physical laws are covariant under (electroweak and strong) gauge transformations, in addition to general coordinate transformations $x^\mu \rightarrow x'^\mu(x)$ and local redefinitions of reference frames $r_\alpha^{\ \mu}\rightarrow \Lambda_\alpha^{\ \beta}(x) r_\beta^{\ \mu}$. The original motivation for considering higher dimensional unification is the hope that HD covariance can account for lower dimensional (LD) gauge symmetries in addition to LD spacetime covariance. To make contact with LD physics, we split HD coordinates into two groups ${\mathbf x}^I =(x^\mu,y^i)$ with $\mu=0,1,...,d-1$, $i=1,2,...,c\equiv D-d$. We refer to $x^\mu$ and $y^i$ as {\em external} and {\em internal} coordinates, respectively. Consequently, reference frames split into four blocks ${\mathbf r}_\alpha^{\ \mu}\equiv r_\alpha^{\ \mu}$, ..., ${\mathbf r}_{a+d-1}^{\ i+d-1}\equiv \rho_a^{\ i}$ with $\alpha=0,1,...,d-1$, $a=1,2,...,c$.
As we are willing to make no a priori hypothesis on specific reduction mechanisms, we proceed by noticing that the minimal assumption that drives us to recover the desired LD spacetime covariance is that the HD transformation group (\ref{genD}) is effectively broken down to \begin{equation} \left\{ \begin{array}{l} x^\mu\rightarrow x'^\mu(x)\\ y^i\rightarrow y'^i(x,y) \end{array} \right. \hskip0.5cm \left\{ \begin{array}{l} r_\alpha^{\ \mu}\rightarrow \Lambda_\alpha^{\ \beta}(x)r_\beta^{\ \mu}\\\ \vdots\\ \rho_a^{\ i}\rightarrow \Lambda_a^{\ b}(x,y)\rho_b^{\ i} \end{array} \right.\label{STr} \end{equation} We take this as a characterization of dimensional reduction. In working out the consequences that it implies, as a check of our results and to make contact with the most important applications, we constantly specialize in appropriate subsections to Kaluza-Klein theories \cite{KK,nonAbKK,KKrev} and spacetimes embedded in a flat \footnote{The assumption of flatness is clearly not necessary and is made because it is common in applications and to keep our explanatory formulas as simple as possible.} higher dimensional space~\cite{embST}. While in the former case the topology reduces to that of a direct product and in the latter the system is localized on a submanifold, in the general case the structure of the HD spacetime is more complex. For every choice of external coordinates $x^\mu$, the internal coordinates $y^i$ span a $c$-dimensional {\em internal spacetime} ${\mathrm M}_c^x$ regularly embedded in ${\mathbf M}_D$. Every internal spacetime ${\mathrm M}_c^x$ has to be identified with a point of the $d$-dimensional {\em external spacetime} ${\mathrm M}_d$ and may possess a geometry --and even a topology-- that vary from point to point. Strictly speaking, ${\mathrm M}_d$ cannot be identified with the effective spacetime before internal coordinates have been completely removed.
In spite of this we will talk about LD external metric, curvature or general tensors, with the bona fide assumption that internal coordinate dependence will be eventually removed from the effective theory. Clearly, any realistic reduction mechanism will eventually involve such a removal. However we will not address this issue in this paper. \section{\label{sec1} The Geometry of \protect\\ Dimensional Reduction} The HD spacetime ${\mathbf M}_D$ is endowed with standard pseudo-Riemannian geometry. \subsection{\label{sec1.1}Tensors} HD {\em tensors} ${\mathbf t}_{...I...}^{\ ...J...}$ transform according to \[ {\mathbf t}_{...I...}^{\ ...J...}\rightarrow ...\ {\mathbf J}_{I}^{\ K} ...\ {\mathbf t}_{...K...}^{\ ...L...}\ ...\ {{\mathbf J}^{-1}}_{L}^{\ J}... \] with ${\mathbf J}_{I}^{\ J}=\frac{\partial{\mathbf x}^J}{\partial{\mathbf x}'^I}$ the Jacobian matrix associated with the transformation of HD coordinates. LD {\em external tensors} $t_{...\mu...}^{\ ...\nu...}$ and LD {\em internal tensors} $t_{...i...}^{\ ...j...}$, respectively carrying external and internal indices, transform according to \begin{eqnarray*} t_{...\mu ...}^{\ ...\nu...}&\rightarrow& ...\ J_{\mu}^{\ \kappa}...\ t_{...\kappa ...}^{\ ...\lambda...}\ ...\ {J^{-1}}_{\lambda}^{\ \nu}...\\ t_{...i...}^{\ ...j...}&\rightarrow& ...\ J_{i}^{\ k}...\ t_{...k...}^{\ ...l...}\ ...\ {J^{-1}}_{l}^{\ j}... \end{eqnarray*} with $J_\mu^{\ \nu}=\frac{\partial x^\nu}{\partial x'^\mu}$ and $J_i^{\ j}=\frac{\partial y^j}{\partial y'^i}$ the Jacobian matrices associated with the transformations of $x^\mu$ and $y^i$ respectively. LD {\em hybrid tensors} $t_{...\mu... i...}^{\ ...\nu... j...}$, carrying internal and external indices that transform with $J_\mu^{\ \nu}$ and $J_i^{\ j}$, respectively, will also be considered. 
When HD covariance is broken from (\ref{genD}) to (\ref{STr}), ${\mathbf J}_{I}^{\ J}$ takes the block non-diagonal form \begin{equation} {\mathbf J}_{I}^{\ J}({\mathbf x}')= \left( \begin{array}{cc} J_\mu^{\ \nu}(x') & \frac{\partial y^j}{\partial x'^\mu}(x',y') \\ 0 & J_i^{\ j}(x',y')\nonumber \end{array}\right) \end{equation} The off-diagonal block turns covariant external ${\mathbf t}_{...\mu...}$, contravariant internal ${\mathbf t}^{...i...}$ and analogous hybrid components ${\mathbf t}_{...\mu...}^{\ ...j ...}$ of HD tensors, into non-covariant LD objects. On the other hand, contravariant external ${\mathbf t}^{...\mu...}$, covariant internal ${\mathbf t}_{...i...}$ and analogous hybrid components ${\mathbf t}_{...i...}^{\ ...\nu...}$ of HD tensors, transform like LD tensors. As an explicit example, external and internal components of a HD covariant vector ${\mathbf v}_I$ transform like \[ {\mathbf v}_\mu\rightarrow J_{\mu}^{\ \kappa}{\mathbf v}_\kappa+\frac{\partial y^k}{\partial x'^\mu}{\mathbf v}_k\ \ \ \text{and}\ \ \ {\mathbf v}_i\rightarrow J_{i}^{\ k}{\mathbf v}_k \] so that ${\mathbf v}_\mu$ cannot be identified with an external vector, while ${\mathbf v}_i\equiv v_i$ transforms like a LD internal vector. External and internal components of a HD contravariant vector ${\mathbf v}^I$ transform according to \[ {\mathbf v}^\mu\rightarrow {\mathbf v}^\kappa{J^{-1}}_{\kappa}^{\ \mu} \ \ \ \text{and}\ \ \ {\mathbf v}^i\rightarrow {\mathbf v}^\kappa\frac{\partial y'^i}{\partial x^\kappa}+{\mathbf v}^k{J^{-1}}_{k}^{\ i} \] so that ${\mathbf v}^\mu\equiv v^\mu$ can be identified with a LD contravariant external vector, while ${\mathbf v}^i$ is not a LD vector. \\ When constructed from HD tensors, LD tensors are in general functions of external and internal coordinates. In internal directions the $x^\mu$ dependence just labels the internal space ${\mathrm M}^x_c$ under consideration. In external directions the $y^i$ dependence will be eventually removed.
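The covariance pattern just described is easy to verify numerically: with a block-triangular Jacobian, covariant internal components transform with $J_i^{\ j}$ alone, while contravariant external components transform with the external block of the inverse Jacobian alone. A sketch (block sizes and random seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 4, 3
Jx = rng.normal(size=(d, d))        # external Jacobian J_mu^nu
Jy = rng.normal(size=(c, c))        # internal Jacobian J_i^j
B = rng.normal(size=(d, c))         # off-diagonal block dy^j/dx'^mu
J = np.block([[Jx, B], [np.zeros((c, d)), Jy]])

v = rng.normal(size=d + c)          # covariant HD vector v_I
v_new = J @ v                       # v'_I = J_I^K v_K
# internal covariant components transform with Jy alone:
assert np.allclose(v_new[d:], Jy @ v[d:])

w = rng.normal(size=d + c)          # contravariant HD vector v^I
w_new = np.linalg.inv(J).T @ w      # v'^I = v^K (J^{-1})_K^I
# external contravariant components transform with the external block alone:
assert np.allclose(w_new[:d], np.linalg.inv(Jx).T @ w[:d])
```

The complementary components ($v_\mu$ and $v^i$) pick up contributions from the off-diagonal block $B$ and are therefore not LD tensors, as stated in the text.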
\subsection{\label{sec1.2} Metric} The most general parameterization of the HD spacetime metric ${\mathbf g}_{IJ}$ covariant under (\ref{STr}) reads \begin{equation} {\mathbf g}_{IJ}= \left( \begin{array}{cc} g_{\mu\nu}+{h}_{kl}a_\mu^k a_\nu^l & a_\mu^k {h}_{kj}\\ {h}_{il}a_\nu^l & {h}_{ij} \end{array} \right) \label{metric} \end{equation} with $g_{\mu\nu}(x,y)$, ${h}_{ij}(x,y)$ and $a_\mu^i(x,y)$ functions of external and internal coordinates that transform according to \begin{subequations} \begin{eqnarray} g_{\mu\nu}&\rightarrow& J_\mu^{\ \kappa}J_\nu^{\ \lambda}g_{\kappa\lambda}\label{g trans}\\ {h}_{ij}&\rightarrow& J_i^{\ k}J_j^{\ l}{h}_{kl}\label{h trans}\\ a_\mu^i&\rightarrow&J_\mu^{\ \kappa}\left(a_\kappa^k {J^{-1}}_k^{\ i} -\partial_\kappa y'^i\right)\label{a trans} \end{eqnarray} \end{subequations} The square matrices $g_{\mu\nu}$ and ${h}_{ij}$ respectively transform like LD external and internal tensors and can be identified with metrics on ${\mathrm M}_d$ (after $y^i$ removal) and ${\mathrm M}^x_c$. The rectangular matrix $a_\mu^i$ transforms like a LD hybrid tensor up to an inhomogeneous term reminiscent of the transformation rule of a gauge potential. By means of $a_\mu^i$ it is also possible to construct a genuine LD hybrid tensor \begin{equation} f_{\mu\nu}^i=\partial_\mu a_\nu^i-\partial_\nu a_\mu^i -a_\mu^j\partial_j a_\nu^i+a_\nu^j\partial_j a_\mu^i \label{f} \end{equation} appearing as the associated gauge curvature \footnote{ The vanishing of $f_{\mu\nu}^i$ implies the existence of an internal coordinate transformation setting $a_\mu^i=0$. In general relativity --identifying space-like coordinates with external variables and time with the internal coordinate-- the vanishing of $f_{\mu\nu}^i$ characterizes static gravitational fields.}.
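As a quick numerical sanity check of the parameterization (\ref{metric}): for invertible $g_{\mu\nu}$ and ${h}_{ij}$ the HD metric is invertible, its determinant factorizes into the LD determinants, and its inverse takes a simple block form built from $g^{-1}$, $h^{-1}$ and $a_\mu^i$. A minimal sketch with numpy, using randomly generated Euclidean-signature blocks:

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 3, 2  # external and internal dimensions (illustrative)

# random symmetric positive-definite LD metrics g_{mu nu}, h_{ij}, and a_mu^i
g = rng.normal(size=(d, d)); g = g @ g.T + d*np.eye(d)
h = rng.normal(size=(c, c)); h = h @ h.T + c*np.eye(c)
a = rng.normal(size=(d, c))

# HD metric in the block parameterization:
# G = [[ g + a h a^T , a h ],
#      [ h a^T       , h   ]]
G = np.block([[g + a @ h @ a.T, a @ h],
              [h @ a.T,         h]])

# the determinant factorizes into the LD determinants
assert np.isclose(np.linalg.det(G), np.linalg.det(g)*np.linalg.det(h))

# and the inverse takes the block form built from g^{-1}, h^{-1} and a
gi, hi = np.linalg.inv(g), np.linalg.inv(h)
Gi = np.block([[gi,        -gi @ a],
               [-a.T @ gi, hi + a.T @ gi @ a]])
assert np.allclose(G @ Gi, np.eye(d + c))
```

The factorization follows from the triangular decomposition of the block matrix; the same structure underlies the frame decomposition used later in the text.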
It is well known that this is more than a similarity in Kaluza-Klein \cite{KK,nonAbKK,KKrev} and embedded spacetime \cite{gaugeEST} theories, where (\ref{a trans}) precisely corresponds to the transformation rule of a ${\mathrm G}^\mathrm{KK}$ or $SO(c)$ gauge potential. On the other hand, apparently unnoticed is the fact that (\ref{a trans}) always corresponds to the transformation rule of a vector potential. To see this explicitly, we read $x$-dependent internal coordinate transformations (\ref{STr}) as the actions of the internal diffeomorphism group ${\mathcal Di\!f\!f}_{c}$ on ${\mathrm M}_c^x$ \begin{equation} y^i\rightarrow \exp\{\xi^k(x,y)\partial_k\}y^i \nonumber \end{equation} with $\xi^k(x,y)$ an appropriate internal vector. By introducing the operator-valued external covariant vector \begin{equation} a_\mu=-i a_\mu^i\partial_i \label{a operator} \end{equation} and denoting by ${T} =\exp\{-\xi^k(x,y)\partial_k\}$ the inverse of the operator generating the transformation, it is straightforward to check that (\ref{a trans}) can be rewritten in the familiar gauge transformation form \begin{equation} a_\mu\rightarrow {T}a_\mu {T}^{-1}+i{T}(\partial_\mu {T}^{-1}) \label{a gauge trans} \end{equation} The off-diagonal term of the HD metric has to be identified with a vector potential taking values in the internal diffeomorphism algebra ${di\!f\!f}_{c}$. The associated curvature $f_{\mu\nu}=\partial_\mu a_\nu -\partial_\nu a_\mu -i[a_\mu,a_\nu]$ is the operator associated with $f_{\mu\nu}^i$ \begin{equation} f_{\mu\nu}=-if_{\mu\nu}^i\partial_i \label{f operator} \end{equation} and transforms in the adjoint representation \begin{equation} f_{\mu\nu}\rightarrow {T}f_{\mu\nu}{T}^{-1} \end{equation} General coordinate transformations do not preserve lengths and angles, so that the operator $T$ is in general not unitary.
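To first order in the generating vector $\xi$, the equivalence of the operator form (\ref{a gauge trans}) with the inhomogeneous rule (\ref{a trans}) can be verified symbolically. A minimal sympy sketch, assuming a single external coordinate $x$ and a single internal coordinate $y$ (function names are illustrative):

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')
a = sp.Function('a')(x, y)      # single component a_mu^y of the potential
xi = sp.Function('xi')(x, y)    # generator of y -> y + eps*xi(x,y)
phi = sp.Function('phi')(x, y)  # test scalar

def A(f):    return -sp.I*a*sp.diff(f, y)     # a_mu = -i a_mu^y d_y
def T(f):    return f - eps*xi*sp.diff(f, y)  # T = exp(-eps xi d_y) to O(eps)
def Tinv(f): return f + eps*xi*sp.diff(f, y)

# (d_mu T^{-1}) phi = d_x(T^{-1} phi) - T^{-1}(d_x phi)
dT = sp.diff(Tinv(phi), x) - Tinv(sp.diff(phi, x))

# gauge-transformed operator: T a_mu T^{-1} + i T (d_mu T^{-1}), acting on phi
lhs = T(A(Tinv(phi))) + sp.I*T(dT)

# infinitesimal form of the coordinate rule: delta a = -(xi d_y a - a d_y xi + d_x xi)
a_new = a - eps*(xi*sp.diff(a, y) - a*sp.diff(xi, y) + sp.diff(xi, x))
rhs = -sp.I*a_new*sp.diff(phi, y)

expr = sp.expand(lhs - rhs)
assert expr.coeff(eps, 0) == 0                # operators agree at O(1)
assert sp.simplify(expr.coeff(eps, 1)) == 0   # and at O(eps)
```

The check stops at first order; the operator expansion of $T$ to higher orders would require keeping further terms in the exponential.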
The vanishing of the divergence of $\xi^i$ makes $T$ formally unitary, a condition always met in Kaluza-Klein and embedded spacetime theories. \vskip0,2cm\noindent{\small {\bf Kaluza-Klein}: The HD spacetime ${\mathbf M}_D={\mathrm M}_d\times {\mathcal K}_c$ is the product manifold of a Lorentz space ${\mathrm M}_d$ and the internal space ${\mathcal K}_c$ admitting an isometry group ${\mathrm G}^\mathrm{KK}$. The metric ansatz reads \begin{equation} {\mathbf g}_{IJ}= \left( \begin{array}{cc} {\rm g}_{\mu\nu}+{\rm A}_\mu^{\sf a}{\rm A}_\nu^{\sf b}{\rm K}_{\sf a}^k{\rm K}_{\sf b}^l\kappa_{kl} & {\rm A}_\mu^{\sf a}{\rm K}_{\sf a}^k\kappa_{kj}\\ \kappa_{il}{\rm A}_\nu^{\sf a}{\rm K}_{\sf a}^l & \kappa_{ij} \end{array} \right)\tag{\ref{metric}KK} \label{metricKK} \end{equation} with ${\rm g}_{\mu\nu}(x)$ a metric on ${\mathrm M}_d$, $\kappa_{ij}(y)$ a metric on ${\mathcal K}_c$, ${\mathrm K}_{\sf a}^k(y)$ Killing vector fields on ${\mathcal K}_c$ and ${\rm A}_\mu^{\sf a}(x)$ identified with the gauge potential taking values in the algebra of ${\mathrm G}^\mathrm{KK}$. By assumption $L_{{\rm K}_{\sf a}}\kappa=0$, equivalently $(\partial_i{\rm K}_{\sf a}^k)\kappa_{kj}+(\partial_j{\rm K}_{\sf a}^k)\kappa_{ik}+{\rm K}_{\sf a}^k\partial_k\kappa_{ij}=0$ or $\nabla_i {\rm K}_{{\sf a}j}+\nabla_j {\rm K}_{{\sf a}i}=0$. Allowed internal coordinate transformations are generated by Killing vector fields $\xi^k(x,y)=\epsilon^{\sf a}(x){\rm K}_{\sf a}^k(y)$. Because of the above identity, $\nabla_i{\rm K}_{\sf a}^i=0$, so that $T$ is unitary. 
The transformation rule (\ref{a gauge trans}) yields for ${\rm A}_\mu^{\sf a}$ the ${\mathrm G}^\mathrm{KK}$ gauge potential transformation rule, which infinitesimally takes the standard form \begin{equation} {\rm A}_\mu^{\sf a}\rightarrow {\rm A}_\mu^{\sf a} +{\rm A}_\mu^{\sf b}\epsilon^{\sf c}c_{\sf bc}^{\sf a} - \partial_\mu\epsilon^{\sf a}\nonumber \end{equation} The corresponding curvature is related to (\ref{f}) by \begin{equation} f_{\mu\nu}^i=\left(\partial_\mu {\rm A}_\nu^{\sf c}- \partial_\nu {\rm A}_\mu^{\sf c}-c_{\sf ab}^{\sf c}{\rm A}_\mu^{\sf a}{\rm A}_\nu^{\sf b} \right){\rm K}_{\sf c}^i={\rm F}_{\mu\nu}^{\sf c}{\rm K}_{\sf c}^i \tag{\ref{f}KK} \label{fKK} \end{equation}} \vskip0,1cm\noindent{\small {\bf Embedded spacetime}: The HD spacetime ${\mathbf M}_D\equiv\mathbb{R}^D$ is reduced to a Lorentz space ${\mathrm M}_d$. Denoting by $x^\mu$ the coordinates on ${\mathrm M}_d$, by ${\mathbf t}_\mu$ the associated tangent vectors and by ${\mathbf n}_i(x)$ a smooth assignment of $c$ orthonormal vectors, ${\mathbf n}_i\!\cdot\!{\mathbf n}_j=\eta_{ij}$, ${\mathbf n}_i\!\cdot\!{\mathbf t}_\mu=0$, coordinates are adapted by parameterizing internal directions by the distances $y^i$ along the geodesics leaving ${\mathrm M}_d$ with velocity ${\mathbf n}_i$.
In adapted coordinates the flat HD metric reads \begin{equation} {\mathbf g}_{IJ}= \left( \begin{array}{cc} g_{\mu\nu}+{\rm A}_{\mu m}^{\ \ \ k}{\rm A}_{\nu n}^{\ \ \ l}y^my^n \eta_{kl}& {\rm A}_{\mu m}^{\ \ \ k}y^m\eta_{kj}\\ \eta_{il} {\rm A}_{\nu n}^{\ \ \ l}y^n & \eta_{ij} \end{array} \right)\tag{\ref{metric}{\scriptsize\rm emb}} \label{metricES} \end{equation} where $g_{\mu\nu}={\rm g}_{\mu\nu}+2\mathrm{II}_{k\mu\nu}y^k+ \mathrm{II}_{k\mu\kappa} \mathrm{II}_{l\nu}^{\ \ \kappa}y^ky^l$ with ${\rm g}_{\mu\nu}(x)={\mathbf t}_\mu\!\cdot\!{\mathbf t}_\nu$ the induced metric and $\mathrm{II}_{i\mu\nu}(x)={\mathbf t}_\mu\!\cdot\!\partial_\nu{\mathbf n}_i$ the extrinsic curvature (or \emph{second fundamental form}) of the embedding; $\eta_{ij}$ is a (pseudo-)Euclidean metric in extra directions; ${\rm A}_{\mu ij}(x)={\mathbf n}_i\!\cdot\! \partial_\mu{\mathbf n}_j$ is the extrinsic torsion (or \emph{normal fundamental form}) of the embedding \cite{submanifolds}. The off-diagonal blocks of (\ref{metricES}) are proportional to the Killing vectors generating (pseudo-)rotations around the point $y^i=0$ in the flat internal space. However, the metric is not Kaluza-Klein because of terms that make $g_{\mu\nu}$ explicitly dependent on $y^i$. Allowed internal coordinate transformations correspond to the $x$-dependent (pseudo-)rotation ${\mathbf n}_i \rightarrow \Lambda_i^{\ j}(x){\mathbf n}_j$ and are generated by the Killing vector fields $\xi^k(x,y)=y^l\omega_l^{\ k}(x)$ with $\omega_{kl}=-\omega_{lk}$. Since $\nabla_k (y^l\omega_l^{\ k})=\omega_k^{\ k}=0$, $T$ is unitary.
Under (\ref{STr}) ${\rm A}_{\mu i}^{\ \ j}$ transform like an $SO(c)$ gauge potential \begin{equation} {\rm A}_{\mu i}^{\ \ j}\rightarrow \Lambda_i^{\ k}{\rm A}_{\mu k}^{\ \ l}{\Lambda^{-1}}_l^{\ j} -\Lambda_i^{\ k}\partial_\mu{\Lambda^{-1}}_k^{\ j} \nonumber \end{equation} The associated curvature is related to (\ref{f}) by \begin{equation} f_{\mu\nu}^i=\left(\partial_\mu {\rm A}_{\nu j}^{\ \ \ i}-\partial_\nu {\rm A}_{\mu j}^{\ \ \ i}-[{\rm A}_\mu,{\rm A}_\nu]_j^{\ i} \right)y^j={\rm F}_{\mu\nu j}^{\ \ \ \ i}y^j \tag{\ref{f}{\scriptsize\rm emb}} \label{fES} \end{equation}} \vskip0,2cm\noindent Denoting by ${\mathbf g}$ the HD metric determinant and by $g$ and ${h}$ the LD metric determinants, we have that ${\mathbf g}=gh$. The HD volume element factorizes into the product of LD volume elements $|{\mathbf g}|^{1/2}= |g|^{1/2}|{h}|^{1/2}$. The HD inverse metric ${\mathbf g}^{IJ}$ can be evaluated in general terms as \[ {\mathbf g}^{IJ}= \left( \begin{array}{cc} g^{\mu\nu} & -g^{\mu\kappa}a_\kappa^j\\ -a_\lambda^ig^{\lambda\nu} & {h}^{ij}+a_\kappa^ia_\lambda^jg^{\kappa\lambda} \end{array} \right) \] with $g^{\mu\nu}$ and ${h}^{ij}$ the inverses of the LD metrics.\\ The parameterization (\ref{metric}) is particularly convenient in connecting HD with LD geometrical quantities. It generalizes the Kaluza-Klein and embedded spacetime metric ans\"atze to the case where no a priori symmetries or special submanifolds have been introduced. \subsection{\label{sec1.3}Connections and Curvature Tensors} The HD covariant derivative induced by ${\mathbf g}_{IJ}$ is denoted by $\bm{\nabla}_I$ and acts on tensors as \[ \bm{\nabla}_I{\mathbf t}_{...J...}^{\ ...}=\partial_I{\mathbf t}_{...J...}^{\ ...}+ ...
-{\mathbf\Gamma}_{IJ}^{\ \ \ K}{\mathbf t}_{...K...}^{\ ...}+\ ...\ \] with intrinsic connection coefficients given by ${\mathbf\Gamma}_{IJ}^{\ \ K} = \frac{1}{2}{\mathbf g}^{KL}\left(\partial_I{\mathbf g}_{LJ}+ \partial_J{\mathbf g}_{IL} -\partial_L{\mathbf g}_{IJ}\right)$; by definition $\bm{\nabla}_K{\mathbf g}_{IJ}=0$ and $\bm{\nabla}_K|{\mathbf g}|^{1/2}=0$. The commutator of two covariant derivatives \[ \left[\bm{\nabla}_I,\bm{\nabla}_J\right]{\mathbf t}_{...K...}^{\ ...}=...-{\mathbf R}_{IJK}^{\ \ \ \ L}{\mathbf t}_{...L...}^{\ ...}+... \] defines the intrinsic curvature tensor \begin{equation} {\mathbf R}_{IJK}^{\ \ \ \ L}=\partial_I{\mathbf\Gamma}_{JK}^{\ \ L}- \partial_J{\mathbf\Gamma}_{IK}^{\ \ L} -{\mathbf\Gamma}_{IK}^{\ \ H} {\mathbf\Gamma}_{JH}^{\ \ L} + {\mathbf\Gamma}_{JK}^{\ \ H}{\mathbf\Gamma}_{IH}^{\ \ L}\label{HDinR} \end{equation} We also denote the Ricci tensor by ${\mathbf R}_{IJ}={\mathbf R}_{IKJ}^{\ \ \ \ K}$ and the scalar curvature by ${\mathbf R}={\mathbf g}^{IJ}{\mathbf R}_{IJ}$. The covariant derivative $\bm{\nabla}_I$ and the associated curvature tensor ${\mathbf R}_{IJK}^{\ \ \ \ L}$ completely characterize the geometry of the HD spacetime ${\mathbf M}_D$. We now consider analogous quantities for the LD internal spaces ${\mathrm M}_c^x$ and external space ${\mathrm M}_d$. \subsubsection{\label{sec1.3.1}Internal connection and curvatures} The LD {\em internal covariant derivative} $\nabla_i$ induced by the metric tensor ${h}_{ij}$ \begin{equation}\nabla_it_{...j...}^{\ ...}=\partial_it_{...j...}^{\ ...}+...-{\mathit\Gamma}_{ij}^{\ \ k}t_{...k...}^{\ ...}+...\label{IntD} \end{equation} with {\em internal intrinsic connection coefficients} \begin{equation} {\mathit\Gamma}_{ij}^{\ \ k} =\frac{1}{2}{h}^{kl}(\partial_i{h}_{lj}+ \partial_j{h}_{il}-\partial_l{h}_{ij}) \end{equation} is covariant under (\ref{STr}) when acting either on LD internal, external or hybrid tensors. 
As a consequence, new LD tensors can be generated by the action of $\nabla_i$. The commutator \[ \left[\nabla_i,\nabla_j\right]t_{...k...}^{\ ...}=...-R_{ijk}^{\ \ \ l} t_{...l...}^{\ ...}+... \] defines the {\em internal intrinsic curvature} \begin{equation} R_{ijk}^{\ \ \ l}= \partial_i{\mathit\Gamma}_{jk}^{\ \ l}- \partial_j{\mathit\Gamma}_{ik}^{\ \ l}- {\mathit\Gamma}_{ik}^{\ \ m}{\mathit\Gamma}_{jm}^{\ \ l} + {\mathit\Gamma}_{jk}^{\ \ m}{\mathit\Gamma}_{im}^{\ \ l}\label{LDinR} \end{equation} Internal Ricci tensor and scalar curvature are defined as in higher dimensions. The internal metric ${h}_{ij}$ and the internal volume element $|{h}|^{1/2}$ are parallel transported $\nabla_k{h}_{ij}=0$, $\nabla_k |{h}|^{1/2}=0$. The internal covariant derivative, however, is not compatible with the external metric structure as $\nabla_ig_{\mu\nu}\not=0$. External indices cannot be raised, lowered or contracted regardless of the position of $\nabla_i$. To overcome the problem we extend the action of $\nabla_i$ to external indices. We define an {\it internal total covariant derivative} $\nabla^\text{\tiny{tot}}_i$ by \begin{equation} \nabla^\text{\tiny{tot}}_it_{...\mu...j...}^{\ ...}=\nabla_it_{...\mu... j...}^{\ ...}+...-\hat{E}_{i\mu}^{\ \ \nu} t_{...\nu...j...}^{\ ...} ...\ \label{IntDtot} \end{equation} with {\em internal extrinsic connection coefficients} $\hat{E}_{i\mu}^{\ \ \nu}$ chosen so that $\nabla^\text{\tiny{tot}}_kg_{\mu\nu}=0$ (also implying $\nabla^\text{\tiny{tot}}_k |g|^{1/2}=0$). This requirement fixes the symmetric part of the extrinsic connection to $\hat{E}_{i(\mu\nu)}=\frac{1}{2}\partial_ig_{\mu\nu}$, leaving the antisymmetric part completely arbitrary. It is possible and even natural to include in $\hat{E}_{i\mu\nu}$ a term proportional to the hybrid tensor $f_{i\mu\nu}$. Different choices correspond to different internal extrinsic geometries.
In Section \ref{sec2}, equation (\ref{HvLconnectionC}), we will see that the internal extrinsic connection induced by HD geometry corresponds to the choice $\hat{E}_{i[\mu\nu]}= \frac{1}{2}f_{i\mu\nu}$. We therefore set \begin{equation} \hat{E}_{i\mu}^{\ \ \nu}=\frac{1}{2}\left(\partial_ig_{\mu\kappa} +f_{i\mu\kappa}\right)g^{\kappa\nu} \label{IIex} \end{equation} Under coordinate redefinitions (\ref{STr}), $\hat{E}_{i\mu}^{\ \ \nu}$ transforms like a genuine LD hybrid tensor \begin{equation} \hat{E}_{i\mu}^{\ \ \nu}\rightarrow J_i^{\ j }J_\mu^{\ \kappa}\hat{E}_{j\kappa}^{\ \ \lambda}{J^{-1}}_\lambda^{\ \nu}\nonumber \end{equation} \vskip0,2cm\noindent{\small {\bf Kaluza-Klein}: The symmetric part of the internal extrinsic connection vanishes identically; the antisymmetric part reduces to the gauge curvature \begin{equation} \hat{E}_{i\mu\nu}=\frac{1}{2}{\rm F}_{\mu\nu}^{\sf c}{\rm K}_{{\sf c}i} \tag{\ref{IIex}KK}\label{IIexKK} \end{equation}} \vskip0,05cm \noindent{\small {\bf Embedded spacetime}: The internal extrinsic connection equals the second fundamental form $\mathrm{II}_{i\mu\nu}$ of ${\mathrm M}_d$ plus a term linear in $y^i$ \begin{equation} \hat{E}_{i\mu\nu}=\mathrm{II}_{i\mu\nu}+\frac{1}{2}\left( \mathrm{II}_{i\mu\kappa}\mathrm{II}_{j\nu}^{\ \ \kappa}+ \mathrm{II}_{i\nu\kappa}\mathrm{II}_{j\mu}^{\ \ \kappa}- {\rm F}_{\mu\nu ij}\right)y^j \tag{\ref{IIex}{\scriptsize\rm emb}}\label{IIexES} \end{equation} On ${\mathrm M}_d$ the linear term vanishes and $\hat{E}_{i\mu\nu}$ coincides with the second fundamental form $\hat{E}_{i\mu\nu}|_{y=0} \equiv \mathrm{II}_{i\mu\nu}$.} \vskip0,2cm\noindent The hybrid tensor $\hat{E}_{i\mu\nu}$ reduces to the gauge curvature of the external space in Kaluza-Klein backgrounds and to the extrinsic curvature --second fundamental form-- of the external spacetime in embedded spacetime models. 
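That the choice (\ref{IIex}) fulfills the compatibility requirement $\nabla^\text{\tiny{tot}}_kg_{\mu\nu}=0$ can be checked symbolically: the symmetric part reproduces $\frac{1}{2}\partial_ig_{\mu\nu}$ while the antisymmetric part $\frac{1}{2}f_{i\mu\nu}$ drops out of the symmetrized contraction. A minimal sympy sketch, assuming two external dimensions and a single internal coordinate $y$ with generic function entries:

```python
import sympy as sp

y = sp.symbols('y')
d = 2  # external dimensions in this toy check

# symmetric external metric g_{mu nu}(y) and antisymmetric f_{y mu nu}
g = sp.Matrix(d, d, lambda m, n: sp.Function(f'g{min(m,n)}{max(m,n)}')(y))
f = sp.Function('f')(y)
F = sp.Matrix([[0, f], [-f, 0]])

# E-hat_{y mu}^{ nu} = (1/2)(d_y g_{mu kappa} + f_{y mu kappa}) g^{kappa nu}
E = sp.Rational(1, 2)*(g.diff(y) + F)*g.inv()

# nabla^tot_y g_{mu nu} = d_y g_{mu nu} - E_{y mu}^{ k} g_{k nu} - E_{y nu}^{ k} g_{mu k}
nabla_g = g.diff(y) - E*g - (E*g).T
assert sp.simplify(nabla_g) == sp.zeros(d, d)
```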
In Section \ref{sec2} we will see that $\hat{E}_{i\mu\nu}$ enters the general equations (\ref{HvLRiemann}) relating higher and lower dimensional curvatures in the very same way as the second fundamental form enters Gauss, Codazzi and Ricci equations. For these reasons we will also refer to $\hat{E}_{i\mu\nu}$ as the {\em external fundamental form}. The `hat' is introduced to remind us that ${\mathrm M}_d$ is not in general an embedded object and $\hat{E}_{i\mu\nu}$ is not a fundamental form in the standard sense of embedding theory. The commutator of two total internal covariant derivatives \begin{eqnarray*} \left[\nabla^\text{\tiny{tot}}_i,\nabla^\text{\tiny{tot}}_j \right]t_{...\mu...k...}\!=...-R_{ijk}^{\ \ \ l}t_{...\mu...l...}+...\\ ...-F_{ij\mu}^{\ \ \ \nu}t_{...\nu...k...}+... \end{eqnarray*} defines the {\em internal extrinsic curvature} \begin{equation} F_{ij\mu}^{\ \ \ \nu}\!=\partial_i \hat{E}_{j\mu}^{\ \ \nu}-\partial_j\hat{E}_{i\mu}^{\ \ \nu}- \hat{E}_{i\mu}^{\ \ \kappa}\hat{E}_{j\kappa}^{\ \ \nu} + \hat{E}_{j\mu}^{\ \ \kappa}\hat{E}_{i\kappa}^{\ \ \nu}\label{LDinF} \end{equation} carrying two internal and two external indices. A direct computation allows one to rewrite $F_{ij\mu\nu}$ as \begin{equation} F_{ij\mu\nu}= \frac{1}{2}\partial_if_{j\mu\nu}-\frac{1}{2}\partial_jf_{i\mu\nu} +\hat{E}_{i\mu}^{\ \ \kappa}\hat{E}_{j\nu\kappa}- \hat{E}_{j\mu}^{\ \ \kappa}\hat{E}_{i\nu\kappa}\label{LDinF'} \end{equation} \subsubsection{\label{sec1.3.2}External connection and curvatures} The definition of covariant differentiation along external directions is less straightforward. The derivative $\nabla_\mu$ associated with the external metric $g_{\mu\nu}$ is not a covariant LD object. Difficulties already emerge at the scalar level.
The allowed external coordinate dependence of internal coordinate redefinitions produces an inhomogeneous term in the transformation rule of partial derivatives \begin{equation} \partial_\mu\rightarrow\partial'_\mu=J_\mu^{\ \nu}\left(\partial_\nu+\frac{\partial y^i}{\partial x^\nu}\partial_i\right) \nonumber \end{equation} The problem can be resolved by adding a counter term proportional to $a_\mu^i$, which also transforms inhomogeneously. The derivative operator \begin{equation} \hat\partial_\mu=\partial_\mu-i a_\mu \label{hat-d} \end{equation} transforms like a genuine LD external vector when acting on scalars \begin{equation} \hat\partial_\mu\rightarrow\hat\partial'_\mu= J_\mu^{\ \nu}\hat\partial_\nu \nonumber \end{equation} On the other hand, the commutator of two hatted derivatives is no longer vanishing \[ \left[\hat\partial_\mu,\hat\partial_\nu\right]=-if_{\mu\nu} \] Differentiation is extended to LD external tensors by introducing the generalized Christoffel symbols \begin{equation} {\mathit{\hat\Gamma}}_{\mu\nu}^{\ \ \kappa}=\frac{1}{2}g^{\kappa\lambda}(\hat\partial_\mu g_{\lambda\nu}+\hat\partial_\nu g_{\mu\lambda}-\hat\partial_\lambda g_{\mu\nu}) \end{equation} where ordinary derivatives are replaced by hatted ones in the standard definition. Generalized Christoffel symbols transform like proper connection symbols. The {\em external covariant derivative} $\hat\nabla_\mu$ \begin{equation} \hat\nabla_\mu t_{...\nu...}^{\ ...}=\hat\partial_\mu t_{...\nu...}^{\ ...}+... - {\mathit{\hat\Gamma}}_{\mu\nu}^{\ \ \kappa}t_{...\kappa...}^{\ ...} \end{equation} is covariant under ($\ref{STr}$) when acting on LD external tensors. New LD tensors can be generated by the action of $\hat\nabla_\mu$ on external tensors. The commutator \[ \left[\hat\nabla_\mu,\hat\nabla_\nu\right]t_{...\kappa...}^{\ ...}=...-\hat{R}_{\mu\nu\kappa}^{\ \ \ \lambda}t_{...\lambda...}^{\ ...}+...
-f_{\mu\nu}^i\nabla^\text{\tiny{tot}}_it_{...\kappa...}^{\ ...} \] defines a genuine {\em external intrinsic curvature} tensor as \begin{eqnarray} \hat{R}_{\mu\nu\kappa}^{\ \ \ \lambda}= \hat\partial_\mu{\mathit{\hat\Gamma}}_{\nu\kappa}^{\ \ \lambda} -\hat\partial_\nu{\mathit{\hat\Gamma}}_{\mu\kappa}^{\ \ \lambda}+\hskip2,5cm\nonumber\\ -{\mathit{\hat\Gamma}}_{\mu\kappa}^{\ \ \rho}{\mathit{\hat\Gamma}}_{\nu\rho}^{\ \ \lambda}+{\mathit{\hat\Gamma}}_{\nu\kappa}^{\ \ \rho}{\mathit{\hat\Gamma}}_{\mu\rho}^{\ \ \lambda}+ f_{\mu\nu}^i\hat{E}_{i\kappa}^{\ \ \lambda} \label{LDexR} \end{eqnarray} External Ricci and scalar curvatures are defined as usual by contraction $\hat{R}_{\mu\nu}=\hat{R}_{\mu\kappa\nu}^{\ \ \ \kappa}$ and $\hat{R}=g^{\mu\nu}\hat{R}_{\mu\nu}$. It is worth noticing that $\hat{R}_{\mu\nu\kappa}^{\ \ \ \lambda}$, $\hat{R}_{\mu\nu}$ and $\hat{R}$ are reducible tensors. \vskip0,2cm\noindent{\small {\bf Kaluza-Klein}: In Kaluza-Klein theories $\hat{R}$ does not coincide with the scalar curvature ${\mathrm R}$ associated with the four dimensional metric ${\rm g}_{\mu\nu}(x)$. Equation (\ref{LDexR}) yields \begin{equation} \hat{R}={\mathrm R}+{\rm F}^{\sf a}_{\mu\nu} {\rm F}^{{\sf a}\mu\nu}/2 \tag{\ref{LDexR}KK}\label{LDexRKK} \end{equation} with gauge indices contracted with the group metric.} \vskip0,05cm \noindent{\small {\bf Embedded spacetime}: The corresponding equation in embedded spacetime theories is more complicated, involving, apart from the gauge field ${\rm F}_{\mu\nu i}^{\ \ \ \ j}$, the external fundamental forms $\hat{E}_{i\mu\nu}$.
Specializing to $y^i=0$ we obtain \begin{equation} \hat{R}={\mathrm R}+ {\mathcal O}(y)\tag{\ref{LDexR}{\scriptsize\rm emb}}\label{LDexRES} \end{equation} with ${\mathrm R}$ the intrinsic curvature associated with the metric induced on the submanifold.} \vskip0,2cm\noindent The external metric $g_{\mu\nu}$ and the external volume element $|g|^{1/2}$ are parallel transported $\hat\nabla_\kappa g_{\mu\nu}=0$, $\hat\nabla_\kappa|g|^{1/2}=0$. On the other hand, it is not even possible to ask whether the external derivative is compatible with internal metric structures, because $\hat\nabla_\mu$ is not covariant when acting on internal and hybrid tensors. Both problems can be resolved by extending the action of $\hat\nabla_\mu$ to internal indices. We define the {\it external total covariant derivative} $\hat\nabla^\text{\tiny{tot}}_\mu$ by \begin{equation} \hat\nabla^\text{\tiny{tot}}_\mu t_{...\kappa... k...}=\hat\nabla_\mu t_{...\kappa... k...}+...-\hat{C}_{\mu k}^{\ \ l}t_{...\kappa... l...}+...\label{ExtDtot} \end{equation} where the {\em external extrinsic connection coefficients} $\hat{C}_{\mu k}^{\ \ l}$ are determined by the requirement of covariance and by the compatibility condition $\hat\nabla^\text{\tiny{tot}}_\mu{h}_{ij}=0$ (also implying $\hat\nabla^\text{\tiny{tot}}_\mu|{h}|^{1/2}=0$). We obtain \begin{equation} \hat{C}_{\mu k}^{\ \ l}=\partial_k a_\mu^l +{E}_{\mu km}{h}^{ml} \label{Ain} \end{equation} where \begin{equation} {E}_{\mu ij}=\frac{1}{2}[\hat\partial_\mu{h}_{ij}- (\partial_i a_\mu^k){h}_{kj}-(\partial_j a_\mu^k){h}_{ik}] \label{IIin} \end{equation} transforms like a genuine LD hybrid tensor. At any given external point ${E}_{\mu ij}|_x$ corresponds to the standard second fundamental form describing the embedding of ${\mathrm M}^x_c$ in ${\mathbf M}_D$. ${E}_{\mu ij}$ generalizes the notion of second fundamental form to the whole foliation of the HD spacetime in internal spaces.
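That (\ref{Ain}), with ${E}_{\mu ij}$ given by (\ref{IIin}), indeed enforces $\hat\nabla^\text{\tiny{tot}}_\mu{h}_{ij}=0$ can be verified symbolically; a minimal sympy sketch, assuming one external coordinate and two internal coordinates with generic function entries:

```python
import sympy as sp

x, y0, y1 = sp.symbols('x y0 y1')
ys = (y0, y1)
c = 2  # internal dimensions in this toy check

# symmetric internal metric h_{ij}(x,y) and potential a_x^i(x,y)
h = sp.Matrix(c, c, lambda i, j: sp.Function(f'h{min(i,j)}{max(i,j)}')(x, y0, y1))
a = [sp.Function('a0')(x, y0, y1), sp.Function('a1')(x, y0, y1)]

def hat_d(f):  # hatted derivative on scalars: d_x - a^i d_i
    return sp.diff(f, x) - sum(a[i]*sp.diff(f, ys[i]) for i in range(c))

# internal fundamental form E_{x ij}
E = sp.Matrix(c, c, lambda i, j: sp.Rational(1, 2)*(
    hat_d(h[i, j])
    - sum(sp.diff(a[k], ys[i])*h[k, j] for k in range(c))
    - sum(sp.diff(a[k], ys[j])*h[i, k] for k in range(c))))

# external extrinsic connection C_{x k}^{ l} = d_k a^l + E_{x k m} h^{m l}
C = sp.Matrix(c, c, lambda k, l: sp.diff(a[l], ys[k])) + E*h.inv()

# compatibility: hat-nabla^tot_x h_{ij} = hat_d h_{ij} - C_{x i}^l h_{lj} - C_{x j}^l h_{il}
nabla_h = sp.Matrix(c, c, lambda i, j: hat_d(h[i, j])) - C*h - (C*h).T
assert sp.simplify(nabla_h) == sp.zeros(c, c)
```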
For this reason we refer to ${E}_{\mu ij}$ as the {\em internal fundamental form}. For later use we also rewrite ${E}_{\mu ij}$ as \begin{equation} {E}_{\mu ij}=\frac{1}{2}(\partial_\mu h_{ij}-\nabla_ia_{\mu j}-\nabla_ja_{\mu i})\label{IIinKK'} \end{equation} and note that the following identity holds \begin{equation} \hat\nabla^\text{\tiny{tot}}_\mu {E}_{\nu ij}- \hat\nabla^\text{\tiny{tot}}_\nu {E}_{\mu ij}= \frac{1}{2}\left(\nabla_if_{j\nu\mu}+\nabla_jf_{i\nu\mu}\right) \label{IIinIdentity} \end{equation} Under the residual general covariance group (\ref{STr}) external extrinsic connection coefficients transform like a genuine $GL(c)$ connection \begin{equation} \hat{C}_{\mu k}^{\ \ l}\rightarrow J_\mu^{\ \nu}(J_k^{\ m} \hat{C}_{\nu m}^{\ \ n} {J^{-1}}_n^{\ l}-J_{k}^{\ m}\hat\partial_\nu {J^{-1}}_{m}^{\ l}) \nonumber \end{equation} \vskip0,2cm\noindent{\small {\bf Kaluza-Klein}: By virtue of the identity $(\partial_i{\rm K}_{\sf a}^k)\kappa_{kj}+(\partial_j{\rm K}_{\sf a}^k)\kappa_{ik}+{\rm K}_{\sf a}^k\partial_k\kappa_{ij}=0$ the internal fundamental form vanishes identically \begin{equation} {E}_{\mu ij}=0 \tag{\ref{IIin}KK}\label{IIinKK} \end{equation} The embedding of each ${\mathcal K}_c$ is {\em totally geodesic} \cite{submanifolds}. 
The external extrinsic connection coefficients depend only on the off-diagonal blocks of the metric \begin{equation} \hat{C}_{\mu k}^{\ \ l}= {\rm A}_\mu^{\sf a} (\partial_k {\rm K}_{\sf a}^l) \tag{\ref{Ain}KK} \end{equation} } \vskip0,05cm \noindent{\small {\bf Embedded spacetime}: Since the internal metric $\eta_{ij}$ does not depend on coordinates and ${\rm A}_{\mu kl}$ are antisymmetric in internal indices, the embedding of internal spaces is again {\em totally geodesic} \begin{equation} {E}_{\mu ij}=0 \tag{\ref{IIin}{\scriptsize\rm emb}}\label{IIinES} \end{equation} The external extrinsic connection reduces to the normal fundamental form of the embedding \begin{equation} \hat{C}_{\mu k}^{\ \ l}= {\rm A}_{\mu k}^{\ \ l}\tag{\ref{Ain}{\scriptsize\rm emb}} \end{equation}} \vskip0,2cm\noindent The commutator of two external total covariant derivatives yields the associated curvature forms \begin{eqnarray*} \left[\hat\nabla^\text{\tiny{tot}}_\mu,\hat\nabla^\text{\tiny{tot}}_\nu \right]t_{...\kappa... k...}=...-\hat{R}_{\mu\nu\kappa}^{\ \ \ \lambda}t_{...\lambda... k...}+\hskip1cm\\ +... - \hat{F}_{\mu\nu k }^{\ \ \ l}t_{...\kappa... l...}+ ...-f_{\mu\nu}^i\nabla^\text{\tiny{tot}}_i t_{...\kappa...
k...} \end{eqnarray*} where the {\em external extrinsic curvature} tensor, carrying two external and two internal indices, is defined as \begin{eqnarray} \hat{F}_{\mu\nu k }^{\ \ \ l}=\hat\partial_\mu\hat{C}_{\nu k}^{\ \ l}-\hat\partial_\nu \hat{C}_{\mu k}^{\ \ l} +\hskip2,5cm\nonumber\\ -\hat{C}_{\mu k}^{\ \ m}\hat{C}_{\nu m}^{\ \ l}+\hat{C}_{\nu k}^{\ \ m}\hat{C}_{\mu m}^{\ \ l}+f_{\mu\nu}^i{\mathit\Gamma}_{ik}^{\ \ l}\label{LDexF} \end{eqnarray} With the help of (\ref{IIinIdentity}) a straightforward computation allows one to evaluate $\hat{F}_{\mu\nu k }^{\ \ \ l}$ directly in terms of $f_{\mu\nu}^i$ and ${E}_{\mu ij}$ as \begin{equation} \hat{F}_{\mu\nu kl}= \frac{1}{2}\partial_kf_{l\mu\nu}-\frac{1}{2}\partial_lf_{k\mu\nu}+ {E}_{\mu k}^{\ \ i}{E}_{\nu li}- {E}_{\nu k}^{\ \ i}{E}_{\mu li}\label{LDexF'} \end{equation} a formula that closely resembles (\ref{LDinF'}). \subsubsection{\label{sec1.3.3} Hybrid curvatures} The commutator of external and internal total covariant derivatives defines one more curvature tensor that describes the tangling of ${\mathrm M}_d$ and ${\mathrm M}_c^x$ in ${\mathbf M}_D$ \begin{eqnarray*} \left[\hat\nabla^\text{\tiny{tot}}_\mu,\nabla^\text{\tiny{tot}}_i\right] t_{...\kappa... k...}=...-\hat{H}_{\mu i\kappa}^{\ \ \ \lambda}t_{...\lambda... k...}+\hskip0.4cm \\ +... -H_{\mu i k }^{\ \ \ l}t_{...\kappa... l...}+\hskip0.2cm \\ +...+\hat{E}_{i\mu}^{\ \ \nu} \hat\nabla^\text{\tiny{tot}}_\nu t_{...\kappa... k...}- {E}_{\mu i}^{\ \ j}\nabla^\text{\tiny{tot}}_j t_{...\kappa...
k...} \end{eqnarray*} where the two {\em hybrid curvature} tensors $\hat{H}_{\mu i\kappa}^{\ \ \ \lambda}$ and $H_{\mu ik}^{\ \ \ l}$ have the form \begin{eqnarray} \hat{H}_{\mu i\kappa}^{\ \ \ \lambda}=\hat\partial_\mu\hat{E}_{i\kappa}^{\ \ \lambda}-\partial_i{\mathit{\hat\Gamma}}_{\mu\kappa}^{\ \ \lambda}+\hskip2.5cm\nonumber\\ -{\mathit{\hat\Gamma}}_{\mu\kappa}^{\ \ \nu}\hat{E}_{i\nu}^{\ \ \lambda} +\hat{E}_{i\kappa}^{\ \ \nu}{\mathit{\hat\Gamma}}_{\mu\nu}^{\ \ \lambda} -(\partial_ia_\mu^j)\hat{E}_{j\kappa}^{\ \ \lambda}\label{LDhyH} \end{eqnarray} and \begin{eqnarray} H_{\mu ik}^{\ \ \ l}=\hat\partial_\mu{\mathit\Gamma}_{ik}^{\ \ l} -\partial_i\hat{C}_{\mu k}^{\ \ l}+\hskip2.5cm \nonumber\\ - \hat{C}_{\mu k}^{\ \ j}{\mathit\Gamma}_{ij}^{\ \ l} +{\mathit\Gamma}_{ik}^{\ \ j}\hat{C}_{\mu j}^{\ \ l} -(\partial_ia_\mu^j){\mathit\Gamma}_{jk}^{\ \ l}\label{LDhyHh} \end{eqnarray} A direct computation allows one to reexpress the hybrid curvatures in terms of the sole fundamental forms $\hat{E}_{i\mu\nu}$ and ${{E}}_{\mu ij}$ as \begin{eqnarray} \hat{H}_{\mu i\kappa\lambda}= \hat\nabla^\text{\tiny{tot}}_\lambda\hat{E}_{i\kappa\mu}- \hat\nabla^\text{\tiny{tot}}_\kappa\hat{E}_{i\lambda\mu}+\hskip2.0cm\nonumber\\ +{E}_{\lambda i}^{\ \ k}\hat{E}_{k\mu\kappa}+ {E}_{\kappa i}^{\ \ k}\hat{E}_{k\mu\lambda}+ f_{\kappa\lambda}^k{E}_{\mu ik}\label{LDhyH'} \end{eqnarray} \begin{eqnarray} H_{\mu ikl}= \nabla^\text{\tiny{tot}}_k{E}_{\mu li} -\nabla^\text{\tiny{tot}}_l{E}_{\mu ki}+\hskip2.0cm\nonumber\\ +\hat{E}_{k\mu}^{\ \ \nu}{E}_{\nu li}- \hat{E}_{l\mu}^{\ \ \nu}{E}_{\nu ki}\label{LDhyHh'} \end{eqnarray} Therefore, the four LD tensors $\hat{R}_{\mu\nu\kappa}^{\ \ \ \lambda}$, $R_{ijk}^{\ \ \ l}$, ${{E}}_{\mu ij}$, $\hat{E}_{i\mu\nu}$ give a complete characterization of the intrinsic and extrinsic geometry of external and internal spaces. Note that $f_{i\mu\nu}$ is twice the antisymmetric part of $\hat{E}_{i\mu\nu}$ and $a_\mu^i$ is related to it by (\ref{f}).
It is curious that in spite of the different role played by external and internal coordinates the formalism is symmetric under their interchange. The symmetry is substantial only when $f^i_{\mu\nu}\equiv0$ and ${\mathbf M}_D$ double foliates in internal and external directions. \subsection{\label{sec1.4} Reference Frames} Besides standard tensor calculus in holonomic coordinates, there is a second formalism that allows one to deal successfully with geometrical problems: the tetrad (in four dimensions) or reference frame formalism. Among other things, it allows one to clarify the role of gauge invariance for the gravitational field \cite{UtiKib} and is indispensable to deal with general relativistic interactions of spinors. In this section we show that the reference frame formalism is also the natural language to deal with dimensional reduction problems.\\ In the HD spacetime, we consider pseudo-orthogonal covariant and contravariant {\em reference frames} ${\mathbf r}_I^{\ A}$ and ${\mathbf r}_A^{\ I}$, decomposing the metric and its inverse as ${\mathbf g}_{IJ}={\mathbf r}_I^{\ A}{\mathbf r}_J^{\ B}\eta_{AB}$, ${\mathbf g}^{IJ}={\mathbf r}_A^{\ I}{\mathbf r}_B^{\ J}\eta^{AB}$. In terms of the metric parametrization (\ref{metric}) \begin{equation} {\mathbf r}_I^{\ A}= \left( \begin{array}{cc} r_\mu^{\ \alpha} & a_\mu^k\rho_k^{\ a}\\ 0 & \rho_i^{\ a} \end{array} \right)\ \ {\mathbf r}_A^{\ I}= \left( \begin{array}{cc} r_\alpha^{\ \mu} & -r_\alpha^{\ \kappa}a_\kappa^i\\ 0 & \rho_a^{\ i} \end{array} \right)\label{rf} \end{equation} with $r_\mu^{\ \alpha}$, $r_\alpha^{\ \mu}$ and $\rho_i^{\ a}$, $\rho_a^{\ i}$ decomposing the LD metrics, $r_\mu^{\ \alpha}r_\nu^{\ \beta}\eta_{\alpha\beta}=g_{\mu\nu}$, $\rho_i^{\ a}\rho_j^{\ b}\eta_{ab}={h}_{ij}$ etc. Reference vectors are determined up to point dependent pseudo-rotations expressing the observer's freedom of arbitrarily choosing the reference frame.
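As a consistency check, the block frames (\ref{rf}) indeed reproduce the parameterization (\ref{metric}) through ${\mathbf g}_{IJ}={\mathbf r}_I^{\ A}{\mathbf r}_J^{\ B}\eta_{AB}$. A numerical sketch in Euclidean signature ($\eta_{AB}=\delta_{AB}$), with Cholesky factors standing in for the LD frames:

```python
import numpy as np

rng = np.random.default_rng(1)
d, c = 3, 2  # external and internal dimensions (illustrative)

g = rng.normal(size=(d, d)); g = g @ g.T + d*np.eye(d)
h = rng.normal(size=(c, c)); h = h @ h.T + c*np.eye(c)
a = rng.normal(size=(d, c))

# LD frames in Euclidean signature: g = r r^T, h = rho rho^T
r = np.linalg.cholesky(g)
rho = np.linalg.cholesky(h)

# block frame r_I^A
R = np.block([[r, a @ rho],
              [np.zeros((c, d)), rho]])

# reproduces the HD metric parameterization
G = np.block([[g + a @ h @ a.T, a @ h],
              [h @ a.T,         h]])
assert np.allclose(R @ R.T, G)
```

The triangular structure of $R$ also makes the determinant factorization $|{\mathbf g}|^{1/2}=|g|^{1/2}|h|^{1/2}$ manifest.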
Hence, reference frames transform as holonomic vectors under general coordinate transformations and like pseudo-Euclidean vectors under pseudo-rotations. The theory is covariant under \[ {\mathbf r}_A^{\ I}\rightarrow {\mathbf r}_A^{\ J}{J^{-1}}_J^{\ I}\text{,}\ \ \ {\mathbf r}_A^{\ I}\rightarrow {\mathbf\Lambda}_A^{\ B}{\mathbf r}_B^{\ I} \] with ${\mathbf\Lambda}_A^{\ B}({\mathbf x})$ any point dependent, pseudo-orthogonal matrix, ${\mathbf\Lambda}_A^{\ C} {\mathbf\Lambda}_B^{\ D}\eta_{CD}=\eta_{AB}$. When coordinate invariance is broken, local pseudo-orthogonal transformations get restricted to the block diagonal form \begin{equation} {\mathbf\Lambda}_A^{\ B}(\bm{\mathrm x})= \left( \begin{array}{cc} \Lambda_\alpha^{\ \beta}(x)& 0 \\ 0& \Lambda_a^{\ b}(x,y) \end{array}\right)\nonumber \end{equation} with $\Lambda_\alpha^{\ \beta}(x)$ and $\Lambda_a^{\ b}(x,y)$ lower dimensional pseudo-orthogonal matrices: $\Lambda_\alpha^{\ \gamma}\Lambda_\beta^{\ \delta}\eta_{\gamma\delta}=\eta_{\alpha\beta}$ and $\Lambda_a^{\ c}\Lambda_b^{\ d}\eta_{cd}=\eta_{ab}$.
The LD vectors $r_\alpha^{\ \mu}$ and $\rho_a^{\ i}$ correctly transform as LD reference frames \begin{subequations} \begin{eqnarray*} r_\alpha^{\ \mu}&\rightarrow& r_\alpha^{\ \kappa}{J^{-1}}_\kappa^{\ \mu} \text{,}\ \ r_\alpha^{\ \mu}\rightarrow \Lambda_\alpha^{\ \beta} r_\beta^{\ \mu}\\ \rho_a^{\ i}&\rightarrow& \rho_a^{\ k}{J^{-1}}_k^{\ i} \text{,}\ \ \ \rho_a^{\ i}\rightarrow\Lambda_a^{\ b}\rho_b^{\ i} \end{eqnarray*} \end{subequations} We fix the following notation for Kaluza-Klein and embedded spacetime models \vskip0,2cm\noindent{\small {\bf Kaluza-Klein}: LD reference frames are denoted by \begin{equation} r_\mu^{\ \alpha} ={\rm r}_\mu^{\ \alpha}(x)\hskip0,5cm \rho_i^{\ a}=k_i^{\ a}(y) \tag{\ref{rf}KK}\label{rfKK} \end{equation} with ${\rm g}_{\mu\nu}={\rm r}_\mu^{\ \alpha} {\rm r}_\nu^{\ \beta}\eta_{\alpha\beta}$ and $\kappa_{ij}= k_i^{\ a}k_j^{\ b}\eta_{ab}$.} \vskip0,05cm \noindent{\small {\bf Embedded spacetime}: LD reference frames are chosen as \begin{equation} r_\mu^{\ \alpha} =(\delta_\mu^\kappa+y^i\mathrm{II}_{i\mu}^{\ \ \kappa}){\rm t}_\kappa^{\ \alpha}(x) \hskip0,5cm \rho_i^{\ a}={\rm n}_i^{\ a}(x) \tag{\ref{rf}{\scriptsize\rm emb}}\label{rfES} \end{equation} with ${\rm g}_{\mu\nu}={\rm t}_\mu^{\ \alpha}{\rm t}_\nu^{\ \beta}\eta_{\alpha\beta}$ and $\eta_{ij}= {\rm n}_i^{\ a}{\rm n}_j^{\ b}\eta_{ab}$.} \subsection{\label{sec1.5} More on Tensors} Instead of specifying HD tensors by giving their components with respect to the holonomic coordinate system, we can specify them by giving their projections on the reference frame \[ {\mathbf t}_{...A...}^{...B...}=...{\mathbf r}_{A}^{\ I}...\ {\mathbf t}_{...I...}^{...J...}\ ...{\mathbf r}_{J}^{\ B}... 
\] These quantities are invariant under general coordinate transformations and transform like pseudo-Euclidean tensor components under point dependent reference frame redefinition \[ {\mathbf t}_{...A...}^{...B...}\rightarrow ...{\mathbf\Lambda}_{A}^{\ C} ...\ {\mathbf t}_{...C...}^{...D...}\ ...{{\mathbf\Lambda}^{-1}}_{D}^{\ B}... \] LD external, internal and hybrid tensor components $t_{...\alpha...}^{...\beta...}$, $t_{...a...}^{...b...}$ and $t_{...\alpha ... a... }^{...\beta...b...}$ are introduced with analogous conventions and transformation properties \begin{eqnarray*} t_{...\alpha...}^{...\beta...} \!&\rightarrow&\! ...\Lambda_{\alpha}^{\ \gamma}...\ t_{...\gamma...}^{...\delta...}\ ...{\Lambda^{-1}}_{\delta}^{\ \beta}...\\ t_{...a...}^{...b...} \!&\rightarrow& \!...\Lambda_{a}^{\ c}...\ t_{...c...}^{...d...}\ ...{\Lambda^{-1}}_{d}^{\ b}...\\ t_{...\alpha...a...}^{...\beta...b...}\!&\rightarrow&\!...\Lambda_{\alpha}^{\ \gamma}...\Lambda_{a}^{\ c}...\ t_{...\gamma...c...}^{...\delta...d...}...\ {\Lambda^{-1}}_{\delta}^{\ \beta}...{\Lambda^{-1}}_{d}^{\ b}... \end{eqnarray*} It is readily checked that, when HD covariance is broken {\em pseudo-orthogonal components of HD tensors transform like (pseudo-)orthogonal components of LD tensors.} For example, external and internal components of a HD covariant vector ${\mathbf v}_A={\mathbf r}_A^{\ I}{\mathbf v}_I$ transform like \[ {\mathbf v}_\alpha\rightarrow \Lambda_\alpha^{\ \beta}{\mathbf v}_\beta\ \ \ \text{and}\ \ \ {\mathbf v}_a\rightarrow \Lambda_{a}^{\ b}{\mathbf v}_b \] so that $v_\alpha\equiv{\mathbf v}_\alpha$ and $v_a\equiv{\mathbf v}_a$ may be identified with the components of two LD external and internal vectors. 
A HD rank-two covariant tensor ${\mathbf b}_{AB}$ produces an external $b_{\alpha\beta}\equiv{\mathbf b}_{\alpha\beta}$, an internal $b_{ab}\equiv{\mathbf b}_{ab}$ and two hybrid $b_{\alpha b}\equiv{\mathbf b}_{\alpha b}$, $b'_{\alpha b}\equiv{\mathbf b}_{b\alpha}$ LD rank-two covariant tensors. This makes the use of pseudo-orthogonal reference frames particularly convenient in investigating dimensional reduction problems. \subsection{\label{sec1.6} More on Connections and Curvature Tensors} The whole machinery of calculus on manifolds is readily transposed in the reference frame formalism by defining a covariant derivative acting on both, curved and flat spacetime indices \begin{equation} {\mathbf D}_I{\mathbf t}_{...A...}=\bm{\nabla}_I{\mathbf t}_{...A...}+...-{\mathbf \Omega}_{I,A}^{\ \ \ B}{\mathbf t}_{...B...}+... \end{equation} with connection coefficients ${\mathbf\Omega}_{I,AB}=(\bm{\nabla}_I {\mathbf r}_A^{\ K}) {\mathbf r}_B^{\ L}{\mathbf g}_{KL}$. With these conventions ${\mathbf D}_I{\mathbf r}_A^{\ J}\equiv0$. The commutator of two covariant derivatives yields the intrinsic curvature tensor \begin{eqnarray} {\mathbf R}_{IJAB}=\partial_I{\mathbf\Omega}_{J,AB}- \partial_J{\mathbf\Omega}_{I,AB}+\hskip1,5cm\nonumber\\ -{\mathbf\Omega}_{I,A}^{\ \ \ C}{\mathbf\Omega}_{J,CB} +{\mathbf\Omega}_{J,A}^{\ \ \ C}{\mathbf\Omega}_{I,CB} \label{HDRrf} \end{eqnarray} which is related to (\ref{HDinR}) by contraction with reference frames, ${\mathbf R}_{IJKL}={\mathbf R}_{IJAB}{\mathbf r}_K^{\ A} {\mathbf r}_L^{\ B}$. In LD internal and external spaces we proceed along the very same lines. \subsubsection{\label{sec1.6.1} Internal connection and curvatures} On internal spaces, we define an {\em internal total covariant derivative} $D^\text{\tiny{tot}}_i$ as \begin{eqnarray} D^\text{\tiny{tot}}_it_{...\alpha... a...}= \nabla^\text{\tiny{tot}}_it_{...\alpha... a...}+ ...- {\mathit\Omega}_{i,a}^{\ \ \ b}t_{...\alpha... 
b...}+\nonumber \\+...- {A}_{i,\alpha}^{\ \ \ \beta}t_{...\beta... a...}+... \end{eqnarray} with connection coefficients ${\mathit\Omega}_{i,ab}= (\nabla^\text{\tiny{tot}}_i\rho_a^{\ k})\rho_b^{\ l}{h}_{kl}$ and ${A}_{i,\alpha\beta}= (\nabla^\text{\tiny{tot}}_ir_\alpha^{\ \kappa})r_\beta^{\ \lambda}g_{\kappa\lambda}$. Under coordinate redefinitions ${\mathit\Omega}_{i,ab}$ and ${A}_{i,\alpha\beta}$ transform like genuine internal tensors. Under local (pseudo-)rotations of reference frames, ${\mathit\Omega}_{i,ab}$ transforms like an $SO(c)$ gauge connection while ${A}_{i,\alpha\beta}$ behaves like a tensor \begin{eqnarray} {\mathit{\Omega}}_{i,a}^{\ \ \ b}&\rightarrow& \Lambda_a^{\ c}{\mathit{\Omega}}_{i,c}^{\ \ \ d} \Lambda_d^{\ b}-\Lambda_a^{\ c}(\partial_i \Lambda_c^{\ b})\\ {A}_{i,\alpha}^{\ \ \ \beta} & \rightarrow&\Lambda_\alpha^{\ \gamma}{A}_{i,\gamma}^{\ \ \ \delta} \Lambda_\delta^{\ \beta}\label{A trans} \end{eqnarray} With these conventions $D^\text{\tiny{tot}}_i\rho_a^{\ j}=0$ and $D^\text{\tiny{tot}}_ir_\alpha^{\ \mu}=0$. The commutator of two total internal covariant derivatives yields the intrinsic and extrinsic curvature tensors \begin{equation} R_{ijab}= \partial_i{\mathit\Omega}_{j,ab}-\partial_j{\mathit\Omega}_{i,ab}- {\mathit\Omega}_{i,a}^{\ \ \ c}{\mathit\Omega}_{j,cb}+{\mathit\Omega}_{j,a}^{\ \ \ c}{\mathit\Omega}_{i,cb} \label{LDinRrf} \end{equation} and \begin{equation} {F}_{ij\alpha\beta}= \partial_i{A}_{j,\alpha\beta}-\partial_j{A}_{i,\alpha\beta}- {A}_{i,\alpha}^{\ \ \ \gamma}{A}_{j,\gamma\beta}+ {A}_{j,\alpha}^{\ \ \ \gamma}{A}_{i,\gamma\beta} \label{LDinFrf} \end{equation} which are related to (\ref{LDinR}) and (\ref{LDinF}) by contraction with LD reference frames, $R_{ijkl}={R}_{ijab}\rho_k^{\ a}\rho_l^{\ b}$ and $F_{ij\kappa\lambda}= {F}_{ij\alpha\beta}r_\kappa^{\ \alpha}r_\lambda^{\ \beta}$. 
\subsubsection{\label{sec1.6.2}External connection and curvatures} On the external space, we define an {\em external total covariant derivative} $\hat{D}^\text{\tiny{tot}}_\mu$ as \begin{eqnarray} \hat{D}^\text{\tiny{tot}}_\mu t_{...\alpha... a...}= \hat\nabla^\text{\tiny{tot}}_\mu t_{...\alpha... a...}+ ...- {\mathit{\hat\Omega}}_{\mu,\alpha}^{\ \ \ \beta} t_{...\beta... a...}+ \nonumber\\+...- \hat{A}_{\mu,a}^{\ \ \ b}t_{...\alpha...b...}+... \end{eqnarray} with connection coefficients ${\mathit{\hat\Omega}}_{\mu, \alpha\beta}= (\hat\nabla^\text{\tiny{tot}}_\mu r_\alpha^{\ \kappa})r_\beta^{\ \lambda}g_{\kappa\lambda}$ and $\hat{A}_{\mu,ab} =(\hat\nabla^\text{\tiny{tot}}_\mu\rho_a^{\ k})\rho_b^{\ l}{h}_{kl}$. Under coordinate transformations ${\mathit{\hat\Omega}}_{\mu,\alpha\beta}$ and $\hat{A}_{\mu,ab}$ behave like genuine external tensors. Under local redefinition of reference frames ${\mathit{\hat\Omega}}_{\mu,\alpha\beta}$ and $\hat{A}_{\mu,ab}$ transform as $SO(d)$ and $SO(c)$ connections respectively \begin{eqnarray} {\mathit{\hat\Omega}}_{\mu,\alpha}^{\ \ \ \beta}&\rightarrow& \Lambda_\alpha^{\ \gamma}{\mathit{\hat\Omega}}_{\mu,\gamma}^{\ \ \ \delta} \Lambda_\delta^{\ \beta}-\Lambda_\alpha^{\ \gamma}(\hat\partial_\mu \Lambda_\gamma^{\ \beta})\\ \hat{A}_{\mu,a}^{\ \ \ b}&\rightarrow&\Lambda_a^{\ c}\hat{A}_{\mu,c}^{\ \ \ d} \Lambda_d^{\ b}-\Lambda_a^{\ c}(\hat\partial_\mu \Lambda_c^{\ b})\label{hatA gauge trans} \end{eqnarray} As above, $\hat{D}^\text{\tiny{tot}}_\mu r_\alpha^{\ \nu}=0$ and $\hat{D}^\text{\tiny{tot}}_\mu\rho_a^{\ i}=0$. 
The commutator of two external total covariant derivatives again yields the intrinsic and extrinsic curvature tensors \begin{eqnarray} \hat{R}_{\mu\nu\alpha\beta}= \hat\partial_\mu{\mathit{\hat\Omega}}_{\nu,\alpha\beta} -\hat\partial_\nu{\mathit{\hat\Omega}}_{\mu,\alpha\beta}+\hskip2,5cm\nonumber \\-{\mathit{\hat\Omega}}_{\mu,\alpha}^{\ \ \ \gamma}{\mathit{\hat\Omega}}_{\nu,\gamma\beta}+{\mathit{\hat\Omega}}_{\nu,\alpha}^{\ \ \ \gamma}{\mathit{\hat\Omega}}_{\mu,\gamma\beta} +f_{\mu\nu}^i{A}_{i,\alpha\beta} \label{LDexRrf} \end{eqnarray} and \begin{eqnarray} \hat{F}_{\mu\nu ab}=\hat\partial_\mu\hat{A}_{\nu,ab}- \hat\partial_\nu\hat{A}_{\mu,ab}+\hskip2,5cm\nonumber\\ - \hat{A}_{\mu,a}^{\ \ \ c}\hat{A}_{\nu,cb}+ \hat{A}_{\nu,a}^{\ \ \ c}\hat{A}_{\mu,cb}+f_{\mu\nu}^i{\mathit\Omega}_{i,ab} \label{LDexFrf} \end{eqnarray} again related to (\ref{LDexR}) and (\ref{LDexF}) by contraction with LD reference frames, $\hat{R}_{\mu\nu\kappa\lambda}=\hat{R}_{\mu\nu\alpha\beta}r_\kappa^{\ \alpha}r_\lambda^{\ \beta}$ and $\hat{F}_{\mu\nu kl}=\hat{F}_{\mu\nu ab} \rho_k^{\ a}\rho_l^{\ b}$. 
\subsubsection{\label{sec2.6.3}Hybrid curvatures} The commutator of the total external and internal derivatives yields the hybrid curvatures \begin{eqnarray} \hat{H}_{\mu i\alpha\beta}= \hat\partial_\mu A_{i,\alpha\beta}- \partial_i{\mathit{\hat\Omega}}_{\mu,\alpha\beta}+\hskip2,5cm\nonumber\\ -{\mathit{\hat\Omega}}_{\mu,\alpha}^{\ \ \ \gamma}A_{i,\gamma\beta}+ A_{i,\alpha}^{\ \ \ \!\gamma}{\mathit{\hat\Omega}}_{\mu,\gamma\beta}- (\partial_ia_\mu^j)A_{j,\alpha\beta} \label{LDHrf} \end{eqnarray} and \begin{eqnarray} H_{\mu iab}= \hat\partial_\mu{\mathit\Omega}_{i,ab}- \partial_i\hat{A}_{\mu,ab}+\hskip2,5cm\nonumber\\ -\hat{A}_{\mu,a}^{\ \ \ c}{\mathit\Omega}_{i,cb}+ {\mathit\Omega}_{i,a}^{\ \ \ \!c}\hat{A}_{\mu,cb}- (\partial_ia_\mu^j){\mathit\Omega}_{j,ab} \label{LDHhrf} \end{eqnarray} related to (\ref{LDhyH}) and (\ref{LDhyHh}) by contraction with LD reference frames, and can be rewritten in terms of the pseudo-orthogonal components of fundamental forms \begin{eqnarray} {E}_{\gamma ab}=r_\gamma^{\ \kappa}\rho_a^{\ i} \rho_b^{\ j}{E}_{\kappa ij}, \hskip0,5cm \hat{E}_{c \alpha\beta}=\rho_c^{\ k}r_\alpha^{\ \mu}r_\beta^{\ \nu}\hat{E}_{k\mu\nu} \label{IIrf} \end{eqnarray} Nothing has really changed; the pseudo-Euclidean tensors $\hat{R}_{\alpha\beta\gamma\delta}$, $R_{abcd}$, ${E}_{\gamma ab}$ and $\hat{E}_{c \alpha\beta}$ completely characterize the geometry of dimensional reduction. \section{\label{sec2} Reducing Geometry} We are now in a position to write down general equations that relate the higher and lower dimensional geometries. In holonomic coordinates this task requires very long and tedious calculations with results that are not always transparent. Instead, within the reference frames formalism, it is almost straightforward to establish the desired relations. The formulas obtained in this section extend and unify well known identities of Kaluza-Klein and submanifold theories. 
\subsection{\label{sec2.1} Connection coefficients} \begin{subequations}\label{HvLconnection} In the reference frames formalism, HD connection coefficients directly relate to LD intrinsic connection coefficients, fundamental forms and extrinsic connection coefficients in the following way \begin{equation} {\mathbf r}_\gamma^{\ I}{\mathbf\Omega}_{I,\alpha\beta}= r_\gamma^{\ \mu}{\mathit{\hat\Omega}}_{\mu,\alpha\beta} \label{HvLconnectionA} \end{equation} \begin{equation} {\mathbf r}_c^{\ I}{\mathbf\Omega}_{I,ab}= \rho_c^{\ i} {\mathit\Omega}_{i,ab} \label{HvLconnectionB} \end{equation} \begin{equation} {\mathbf r}_\gamma^{\ I}{\mathbf\Omega}_{I,a\beta}= \hat{E}_{a\gamma\beta}\label{HvLconnectionC} \end{equation} \begin{equation} {\mathbf r}_c^{\ I}{\mathbf\Omega}_{I,\alpha b}= {E}_{\alpha cb} \label{HvLconnectionD} \end{equation} \begin{equation} {\mathbf r}_\gamma^{\ I}{\mathbf\Omega}_{I,ab}= r_\gamma^{\ \mu}\hat{A}_{\mu,ab} \label{HvLconnectionE} \end{equation} \begin{equation} {\mathbf r}_c^{\ I}{\mathbf\Omega}_{I,\alpha\beta}= \rho_c^{\ i} A_{i,\alpha\beta}\label{HvLconnectionF} \end{equation} Analogous equations connecting HD Christoffel symbols with LD quantities are much more complicated. By means of relations (\ref{HvLconnection}) it is straightforward to relate HD to LD curvatures, geodesic equations and geometric operators. \end{subequations} \subsection{\label{sec2.2} Riemann Curvatures: \protect\\ extension of Gauss, Codazzi and Ricci equations} Gauss, Codazzi and Ricci equations give relations between HD curvature and LD curvatures, second and normal fundamental forms of a submanifold and provide, at the same time, integrability conditions for a subspace to be embeddable in a HD spacetime~\cite{submanifolds}. They are important in a variety of physical applications, especially in general relativity. Recently, they have been extended to foliations and applied to the analysis of embedded spacetimes \cite{GCR}. 
Equations of an apparently different nature relating HD curvature to LD curvatures and gauge fields are also the key ingredient of Kaluza-Klein unification schemes \cite{KK,nonAbKK,KKrev}. Both sets of equations are special cases of the general equations relating the HD Riemann tensor ${\mathbf R}_{ABCD}$ to LD Riemann tensors $\hat{R}_{\alpha\beta \gamma\delta}$, $R_{abcd}$ and fundamental forms ${E}_{\gamma ab}$, $\hat{E}_{c\alpha\beta}$. The symmetries of the Riemann tensor allow only six independent projections on external/internal directions. \begin{subequations}\label{HvLRiemann} \subsubsection{Gauss type equations} The external components of the HD Riemann tensor are related to the external intrinsic curvature and fundamental forms by an equation which is formally identical to the Gauss equation for an embedded space \begin{equation} {\mathbf R}_{\alpha\beta\gamma\delta}= \hat{R}_{\alpha\beta\gamma\delta}+ \hat{E}_{a\alpha\gamma} \hat{E}^a_{\ \beta\delta}- \hat{E}_{a\beta\gamma} \hat{E}^a_{\ \alpha\delta} \label{GaussEX} \end{equation} In spite of this analogy it is worth remarking that the external space ${\mathrm M}_d$ is not an embedded object, $\hat{R}_{\alpha\beta \gamma\delta}$ is not a standard Riemannian curvature tensor and $\hat{E}_{c \alpha\beta}$ has an antisymmetric part keeping track of the gauge field $f^i_{\mu\nu}$. The internal components of the HD Riemann tensor are related to the internal intrinsic curvature and the fundamental forms yielding again an equation formally identical to the Gauss equation for an embedded space \begin{equation} {\mathbf R}_{abcd}= R_{abcd}+{E}_{\alpha ac}{E}^{\alpha}_{\ bd} -{E}_{\alpha bc}{E}^{\alpha }_{\ ad}\label{GaussIN} \end{equation} This time the analogy is more than formal. For every given value $x^\mu$ of the external coordinates the internal space ${\mathrm M}_c^x$ is an embedded object in ${\mathbf M}_D$. 
In this case, $R_{abcd}|_x$ is the relative Riemann tensor and ${E}_{\gamma ab}|_x$ is the second fundamental form so that (\ref{GaussIN}) corresponds to a genuine Gauss equation for the embedding. \subsubsection{Codazzi type equations} HD Riemann tensor components with three indices of one sort and one index of the other are related to the LD hybrid curvatures and the fundamental forms. These terms yield the generalization of the Codazzi equation for the external space ${\mathrm M}_d$ \begin{equation} {\mathbf R}_{\alpha b\gamma\delta}= \hat{H}_{\alpha b\gamma\delta} +{E}_{\gamma b}^{\ \ a}\hat{E}_{a\alpha\delta} -{E}_{\delta b}^{\ \ a}\hat{E}_{a\alpha\gamma} \label{CodazziEX} \end{equation} and for the foliation of ${\mathbf M}_D$ in the internal spaces ${\mathrm M}_c^x$ \begin{equation} {\mathbf R}_{\alpha bcd}=H_{\alpha bcd} -{\hat{E}}_{c \alpha}^{\ \ \gamma}{E}_{\gamma bd} +{\hat{E}}_{d \alpha}^{\ \ \gamma}{E}_{\gamma bc} \label{CodazziIN} \end{equation} The explicit appearance of the hybrid curvatures $\hat{H}_{\alpha b\gamma\delta}$ and $H_{\alpha bcd}$ can be eliminated by means of (\ref{LDhyH'}) and (\ref{LDhyHh'}), giving the Codazzi equations in their more familiar form \begin{equation} {\mathbf R}_{\alpha b\gamma\delta}= \hat{D}^\text{\tiny{tot}}_\delta\hat{E}_{b \gamma\alpha}- \hat{D}^\text{\tiny{tot}}_\gamma\hat{E}_{b \delta\alpha}+ f^i_{\gamma\delta}{E}_{\alpha ib} \tag{\ref{CodazziEX}$'$}\label{CodazziEX'} \end{equation} \begin{equation} {\mathbf R}_{\alpha bcd}= D^\text{\tiny{tot}}_c{E}_{\alpha db}- D^\text{\tiny{tot}}_d{E}_{\alpha cb} \tag{\ref{CodazziIN}$'$}\label{CodazziIN'} \end{equation} The interpretation of these equations requires the same caution used for generalized Gauss equations. While (\ref{CodazziIN'}) are genuine Codazzi equations for the embedded spaces ${\mathrm M}_c^x$, (\ref{CodazziEX'}) corresponds to standard Codazzi equations only when $f^i_{\mu\nu}=0$ and the external space ${\mathrm M}_d$ reduces to an embedded object. 
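The ``more than formal'' character of the internal Gauss equation can be illustrated on the simplest possible example (a hypothetical check, not taken from the text): a round two-sphere of radius $r$ embedded in flat ${\mathbb R}^3$, whose second fundamental form is proportional to the induced metric, ${\rm II}_{ab}=h_{ab}/r$ up to a sign. With vanishing ambient curvature the Gauss equation reduces to $R_{1212}={\rm II}_{11}{\rm II}_{22}-{\rm II}_{12}^2$, which the following sympy sketch verifies:

```python
import sympy as sp

th, ph, r = sp.symbols('theta phi r', positive=True)
x = [th, ph]

# Induced metric of a sphere of radius r and its inverse
h = sp.Matrix([[r**2, 0], [0, r**2 * sp.sin(th)**2]])
hinv = h.inv()

# Christoffel symbols Gamma^k_{ij} of the induced metric
def Gamma(k, i, j):
    return sp.Rational(1, 2) * sum(
        hinv[k, l] * (sp.diff(h[l, i], x[j]) + sp.diff(h[l, j], x[i])
                      - sp.diff(h[i, j], x[l]))
        for l in range(2))

# Riemann tensor R^l_{kij}
def Riem(l, k, i, j):
    expr = sp.diff(Gamma(l, j, k), x[i]) - sp.diff(Gamma(l, i, k), x[j])
    expr += sum(Gamma(l, i, m) * Gamma(m, j, k)
                - Gamma(l, j, m) * Gamma(m, i, k) for m in range(2))
    return sp.simplify(expr)

# Fully covariant component R_{1212} (coordinate indices 0, 1 here)
R_0101 = sp.simplify(sum(h[0, m] * Riem(m, 1, 0, 1) for m in range(2)))

# Second fundamental form of the sphere in flat R^3 (up to an overall sign)
II = h / r
gauss_rhs = sp.simplify(II[0, 0] * II[1, 1] - II[0, 1]**2)

# Gauss equation with flat ambient space
assert sp.simplify(R_0101 - gauss_rhs) == 0
```

The intrinsic curvature $R_{\theta\varphi\theta\varphi}=r^2\sin^2\theta$ is entirely accounted for by the term quadratic in the second fundamental form, independently of the sign convention chosen for ${\rm II}_{ab}$.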
\subsubsection{Ricci type equations} HD Riemann tensor components with the first two indices of one sort and the last two indices of the other, relate the LD extrinsic curvatures (\ref{LDexF}), (\ref{LDinF}) to the hybrid tensor (\ref{f}), yielding a single equation \begin{eqnarray} {\mathbf R}_{\alpha\beta cd}=\hat{F}_{\alpha\beta cd}+F_{cd\alpha\beta}+\hskip2.0cm\nonumber\\ -\frac{1}{2}r_\alpha^{\ \mu}r_\beta^{\ \nu}\rho_c^{\ k}\rho_d^{\ l} \left(\partial_kf_{l\mu\nu}-\partial_lf_{k\mu\nu}\right) \label{Ricci} \end{eqnarray} This generalizes the Ricci equation for both, the external space ${\mathrm M}_d$ and the foliation in internal subspaces ${\mathrm M}_c^x$. The explicit appearance of the external extrinsic curvature $\hat{F}_{\alpha\beta cd}$ or of the internal extrinsic curvature $F_{cd\alpha\beta}$ (or of both of them), can be removed by means of (\ref{LDexF'}) and (\ref{LDinF'}). It respectively yields the standard form of the Ricci equation for the external space ${\mathrm M}_d$ \begin{equation} {\mathbf R}_{\alpha\beta cd}=\hat{F}_{\alpha\beta cd}+ \hat{E}_{c\alpha}^{\ \ \gamma}\hat{E}_{d\beta\gamma} -\hat{E}_{c\beta}^{\ \ \gamma}\hat{E}_{d\alpha\gamma} \tag{\ref{Ricci}$'$}\label{RicciEX} \end{equation} and for the foliation in internal spaces ${\mathrm M}_c^x$ \begin{equation} {\mathbf R}_{\alpha\beta cd}=F_{cd\alpha\beta}+ {E}_{\alpha c}^{\ \ a}{E}_{\beta da}- {E}_{\beta c}^{\ \ a}{E}_{\alpha da} \tag{\ref{Ricci}$''$}\label{RicciIN} \end{equation} Once again, a little caution in the interpretation of (\ref{Ricci}), or (\ref{RicciEX}), or (\ref{RicciIN}), is necessary. 
\subsubsection{The sixth equation} The remaining group of HD Riemann tensor components relates the fundamental forms to their total covariant derivatives, yielding an equation that has no equivalent in the theory of embedding \begin{eqnarray} {\mathbf R}_{\alpha b \gamma d}= \hat{D}^\text{\tiny{tot}}_\alpha{E}_{\gamma bd} +D^\text{\tiny{tot}}_b\hat{E}_{d\alpha\gamma}+\hskip2.0cm\nonumber\\ +{E}_{\alpha b}^{\ \ a}{E}_{\gamma ad}+ \hat{E}_{b\alpha}^{\ \ \beta}\hat{E}_{d\beta\gamma} \label{sixth} \end{eqnarray} This equation appears as a further integrability condition for the tangling of ${\mathrm M}_d$ and ${\mathrm M}_c^x$ in ${\mathbf M}_D$ and constitutes a new result obtained by this approach. \end{subequations} \subsection{\label{sec2.3} Ricci curvatures} \begin{subequations} By contracting the generalized Gauss, Codazzi, Ricci equations and (\ref{sixth}) we easily obtain the external \begin{eqnarray} {\mathbf R}_{\alpha\beta}=\hat{R}_{\alpha\beta}+ \hat{D}^\text{\tiny{tot}}_\alpha{E}_{\beta c}^{\ \ c}+{E}_{\alpha cd}{E}_\beta^{\ dc}+\nonumber\\ +D^\text{\tiny{tot}}_c\hat{E}_{\ \alpha\beta}^{c} +\hat{E}_{c\alpha\beta}\hat{E}_{\ \gamma}^{c\ \gamma} \end{eqnarray} hybrid \begin{eqnarray} {\mathbf R}_{\alpha b}=\hat{D}^\text{\tiny{tot}}_\alpha \hat{E}_{b\gamma}^{\ \ \gamma}- \hat{D}^\text{\tiny{tot}}_\gamma \hat{E}_{b\alpha}^{\ \ \gamma}-f_{\alpha\gamma}^c\hat{E}_{\ cb}^\gamma+\nonumber\\ +D^\text{\tiny{tot}}_b{E}_{\alpha c}^{\ \ c}- D^\text{\tiny{tot}}_c{E}_{\alpha b}^{\ \ c} \end{eqnarray} and internal \begin{eqnarray} {\mathbf R}_{ab}=R_{ab}+\hat{D}^\text{\tiny{tot}}_\gamma {E}_{\ ab}^\gamma+{E}_{\gamma ab} {E}_{\ c}^{\gamma\ c}+\nonumber\\ +D^\text{\tiny{tot}}_a\hat{E}_{b\gamma}^{\ \ \gamma} +\hat{E}_{a\gamma\delta}\hat{E}_{b}^{\ \delta\gamma} \end{eqnarray} components of the HD Ricci tensor. From the viewpoint of pure higher dimensional gravity these equations display the most general kind of LD matter that can be obtained in induced-matter theories \cite{STM}. 
\label{HvLRicci} \end{subequations} \subsection{\label{sec2.4} Scalar curvatures} A further contraction of equations (\ref{HvLRicci}) yields the identity connecting the HD scalar curvature with LD intrinsic and extrinsic curvatures, lying at the heart of Lagrangian reduction of HD Einstein gravity. We display it in standard tensor formalism \begin{eqnarray} {\mathbf R}= \hat{R}+2\nabla_i\hat{E}_{\ \mu}^{i\ \mu}+\hat{E}_{i \mu}^{\ \ \mu}\hat{E}_{\ \nu}^{i\ \nu}+\hat{E}_{i\mu\nu}\hat{E}^{i\nu\mu}+\nonumber\\ +R+2\hat{\nabla}_\mu{E}_{\ i}^{\mu\ i}+{E}_{\mu i}^{\ \ i} {E}_{\ j}^{\mu\ j}+{E}_{\mu ij}{E}^{\mu ji} \label{HvLscalar} \end{eqnarray} This equation generalizes well known relations holding in Kaluza-Klein and submanifold theories. \vskip0,2cm\noindent{\small {\bf Kaluza-Klein}: By virtue of (\ref{IIexKK}), (\ref{LDexRKK}) and (\ref{IIinKK}) equation (\ref{HvLscalar}) reduces to \begin{equation} {\mathbf R}={\mathrm R}+R+\frac{1}{4} {\rm F}^{\sf a}_{\mu\nu} {\rm F}^{{\sf a}\mu\nu} \tag{\ref{HvLscalar}KK} \end{equation} with ${\mathrm R}$ the standard scalar curvature associated with the four dimensional metric ${\rm g}_{\mu\nu}(x)$.} \vskip0,05cm \noindent{\small {\bf Embedded spacetime}: By recalling (\ref{IIexES}), (\ref{LDexRES}), (\ref{IIinES}) and the fact that we are considering spacetimes embedded in flat HD spacetime, equation (\ref{HvLscalar}) evaluated at $y^i=0$ reproduces the well known identity \begin{equation} {\mathrm R}+\mathrm{II}_{i \mu}^{\ \ \mu}\mathrm{II}_{\ \nu}^{i\ \nu}-\mathrm{II}_{i\mu\nu}\mathrm{II}^{i\mu\nu}=0 \tag{\ref{HvLscalar}{\scriptsize\rm emb}} \end{equation} that relates intrinsic and extrinsic curvature scalars for a submanifold embedded in a HD flat space.} \vskip0,2cm\noindent By means of equations (\ref{GaussEX})-(\ref{sixth}), (\ref{HvLRicci}) and (\ref{HvLscalar}) it is possible to obtain general reduction formulas for the Weyl conformal tensor, which also plays an important role in the analysis of dimensional 
reduction~\cite{Jackiw06}. \subsection{\label{sec2.5} Geodesic motion} Free motion in HD spacetime is described by geodesic equations \begin{equation} \ddot{\mathbf x}^K+{\mathbf\Gamma}^K_{IJ}\dot{\mathbf x}^I\dot{\mathbf x}^J =0\nonumber \end{equation} where $\dot{\mathbf x}^I=d\mathbf x^I/d\tau$ is the HD velocity vector. As discussed in Section \ref{sec1.1}, external contravariant components of HD vectors behave like LD vectors, so that $\dot{\mathbf x}^\mu=\dot{x}^\mu$ is identified with the LD external velocity. On the other hand internal contravariant components do not, so that $\dot{\mathbf x}^i=\dot{y}^i$ is not a LD object. The definition of a LD vector once again involves $a_\mu^i$ \[ \hat{\dot{y}}^i=\dot{y}^i+a^i_\mu \dot{x}^\mu \] HD geodesic equations split into two groups that separately transform under the residual covariance group (\ref{STr}). The first group, describing the no longer free motion in external directions and its coupling to internal variables through the fundamental forms, is given by \begin{subequations} \begin{equation} \ddot{x}^\kappa+{\hat\Gamma}^\kappa_{\mu\nu}\dot{x}^\mu\dot{x}^\nu+2\hat{E}_{i\ \mu}^{\ \kappa}\hat{\dot{y}}^i \dot{x}^\mu - {E}_{\ ij}^{\kappa}\hat{\dot{y}}^i \hat{\dot{y}}^j =0 \label{geoex} \end{equation} The second group takes into account internal motion and its dynamical interaction with external variables \begin{equation} \dot{\hat{\dot{y}}}^k+\Gamma^k_{ij}\hat{\dot{y}}^i\hat{\dot{y}}^j +(\partial_ia_\mu^k)\dot{x}^\mu \hat{\dot{y}}^i+ 2{E}_{\mu\ i}^{\ k}\dot{x}^\mu \hat{\dot{y}}^i -\hat{E}_{\ \mu\nu}^{k}\dot{x}^\mu \dot{x}^\nu=0 \label{geoin} \end{equation} (the first three terms of the left-hand side can be recast in the LD covariant expression $\dot{x}^\mu\hat\nabla^\text{\tiny{tot}}_\mu \hat{\dot{y}}^k+\hat{\dot{y}}^i\nabla^\text{\tiny{tot}}_i \hat{\dot{y}}^k-{E}_{\mu\ i}^{\ \ \!k} \hat{\dot{y}}^i\dot{x}^\mu$). 
The interaction between internal and external motion vanishes if and only if the fundamental forms identically vanish, $\hat{E}_{i\mu\nu}=0$ and $\mathit{E}_{\mu ij}=0$. Specializing to Kaluza-Klein and embedded spacetime models we obtain: \label{geo} \end{subequations} \vskip0,2cm\noindent{\small {\bf Kaluza-Klein}: Taking into account (\ref{IIexKK}), (\ref{IIinKK}), equations (\ref{geo}) reduce to \begin{equation} \begin{array}{l} \ddot{x}^\kappa+\Gamma^\kappa_{\mu\nu}\dot{x}^\mu\dot{x}^\nu+ {\rm q}_{\sf a}{\rm F}_{\ \!\mu}^{{\sf a}\ \!\kappa}\dot{x}^\mu=0\\ \dot{{\rm q}}_{\sf a}-c_{\sf ab}^{\sf c}\dot{x}^\mu{\rm A}^{\sf b}_\mu {\rm q}_{\sf c}=0 \end{array} \tag{\ref{geo}KK} \end{equation} where ${\rm q}_{\sf a}={\rm K}_{{\sf a}i}(y)\hat{\dot{y}}^i$. The first equation describes the external motion of a particle of vector charge ${\rm q}_{\sf a}$ in the possibly non-Abelian gauge field ${\rm F}_{\ \!\mu\nu}^{\sf a}$. The second equation describes the rotation of the charge-vector in the group space. In the case of a one dimensional Abelian group ${\rm q}_{\sf 1}$ is constant in time and the first equation reduces to the classical Lorentz equation of a charged particle moving on a manifold in an electromagnetic field.} \vskip0,05cm \noindent{\small {\bf Embedded spacetime}: In a neighborhood of radius $\epsilon$ of a submanifold the equations (\ref{IIexES}), (\ref{IIinES}) allow one to rewrite (\ref{geo}) in the form \begin{equation} \begin{array}{l} \ddot{x}^\kappa+\Gamma^\kappa_{\mu\nu}\dot{x}^\mu\dot{x}^\nu+ \frac{1}{2}{\rm L}_{ij}{\rm F}_{\mu}^{\ \kappa ij}\dot{x}^\mu +2\hat{\dot{y}}^i\mathrm{II}_{i\mu}^{\ \ \!\kappa}\dot{x}^\mu+ {\cal O}(\epsilon)=0\\ \dot{{\rm L}}^{ij}-\dot{x}^\mu{\rm A}_{\mu\ k}^{\ \ \! i}{\rm L}^{kj} +\dot{x}^\mu{\rm A}_{\mu\ k}^{\ \ \! 
j}{\rm L}^{ki} +{\cal O}(\epsilon)=0 \end{array}\tag{\ref{geo}{\scriptsize\rm emb}} \end{equation} where ${\rm L}^{ij}=y^i\hat{\dot{y}}^j-y^j\hat{\dot{y}}^i$ is the angular momentum in internal directions and $\mathrm{II}_{i\mu\nu}(x)$ the second fundamental form of the embedding. Higher order terms in $\epsilon$ can be neglected only if some physical mechanism constrains the system in a sufficiently small neighborhood of the submanifold. As in standard Kaluza-Klein theories the first equation describes the external motion of a particle of charge $\frac{1}{2}{\rm L}_{ij}$ in the gauge field ${\rm F}_{\mu\nu}^{\ \ \ ij}$; the non-trivial dependence of external metric on internal coordinates produces the extra term $2\hat{\dot{y}}^i\mathrm{II}_{i\mu}^{\ \ \!\kappa}\dot{x}^\mu$ making geodesics drift away from the submanifold. The second equation describes the precession of internal angular momentum produced by the extrinsic torsion of the embedding.} \vskip0,2cm \noindent Equations (\ref{geo}) can also be obtained from the Lagrangian \begin{equation} {\mathcal L}=\frac{1}{2}g_{\mu\nu}\dot{x}^\mu\dot{x}^\nu +\frac{1}{2}h_{ij}\left(\dot{y}^i+a^i_\mu \dot{x}^\mu\right) \left(\dot{y}^j+a^j_\nu \dot{x}^\nu\right) \end{equation} For later considerations it is also useful to write the corresponding Hamiltonian \begin{equation} {\mathcal H}=\frac{1}{2}g^{\mu\nu}\left(p_\mu-a_\mu^i\pi_i\right) \left(p_\nu-a_\nu^j\pi_j\right)+\frac{1}{2}h^{ij}\pi_i\pi_j \label{H} \end{equation} with $p_\mu=\partial{\mathcal L}/\partial\dot{x}^\mu$, $\pi_i=\partial{\mathcal L}/\partial\dot{y}^i$ the momenta conjugate to external and internal coordinates respectively. Internal momenta $\pi_i$ correctly transform as LD vectors, while LD external covariant momenta have to be defined as $\hat{p}_\mu\equiv p_\mu-a_\mu^i\pi_i$. \subsection{\label{sec2.6}Geometric operators} We now consider the dimensional reduction of Laplace and Dirac operators. 
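Before turning to the operators, note that the Hamiltonian (\ref{H}) encodes the block structure of the inverse HD metric. A quick numerical sanity check (a sketch with random blocks, assuming the HD metric read off from the Lagrangian above, ${\mathbf g}_{\mu\nu}=g_{\mu\nu}+a_\mu^ia_\nu^jh_{ij}$, ${\mathbf g}_{\mu j}=a_\mu^ih_{ij}$, ${\mathbf g}_{ij}=h_{ij}$) confirms that its inverse reproduces exactly the combinations appearing in (\ref{H}):

```python
import numpy as np

rng = np.random.default_rng(0)
d, c = 4, 3

def random_spd(n):
    # Random symmetric positive definite matrix (Euclidean signature for simplicity)
    m = rng.standard_normal((n, n))
    return m @ m.T + n * np.eye(n)

g = random_spd(d)                  # external metric g_{mu nu}
h = random_spd(c)                  # internal metric h_{ij}
a = rng.standard_normal((c, d))    # mixing field a^i_mu

# HD metric blocks read off from the geodesic Lagrangian
# L = (1/2) g xdot xdot + (1/2) h (ydot + a xdot)(ydot + a xdot)
G = np.block([[g + a.T @ h @ a, a.T @ h],
              [h @ a,           h      ]])

Ginv = np.linalg.inv(G)
ginv, hinv = np.linalg.inv(g), np.linalg.inv(h)

# Inverse blocks appearing in the Hamiltonian
# H = (1/2) g^{mu nu}(p - a pi)(p - a pi) + (1/2) h^{ij} pi pi
assert np.allclose(Ginv[:d, :d], ginv)                   # G^{mu nu} = g^{mu nu}
assert np.allclose(Ginv[:d, d:], -ginv @ a.T)            # G^{mu j}  = -g^{mu nu} a^j_nu
assert np.allclose(Ginv[d:, d:], hinv + a @ ginv @ a.T)  # G^{ij} = h^{ij} + g^{mu nu} a^i_mu a^j_nu
```

The same inverse block structure underlies the decomposition of the Laplace operator below, which is Hermitian with respect to the factorized measure $|g|^{1/2}|h|^{1/2}dxdy$.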
\subsubsection{Laplace operator} In every local coordinate frame the HD scalar Laplace operator ${\mathbf\Delta}$ takes the form \begin{equation} {\mathbf\Delta} =|{\mathbf g}|^{-1/2}\partial_I {\mathbf g}^{IJ}|{\mathbf g}|^{1/2}\partial_J\nonumber \end{equation} ${\mathbf\Delta}$ is Hermitian with respect to the standard scalar product constructed by means of the HD covariant measure $|{\mathbf g}|^{1/2}d{\mathbf x}=|g|^{1/2}|h|^{1/2}dxdy$. By rewriting the operator in terms of covariant derivatives, recalling the inverse metric decomposition and the relations (\ref{HvLconnection}) between HD and LD connection coefficients, we obtain the most general decomposition covariant under (\ref{STr}) \begin{eqnarray} {\mathbf\Delta} =\nonumber|g|^{-1/2} \left(\hat\partial_\mu\!+\! \frac{1}{2}{E}_{\mu i}^{\ \ i} \right) g^{\mu\nu}|g|^{1/2}\left(\hat\partial_\nu\!+\! \frac{1}{2}{E}_{\nu i}^{\ \ i}\right)\!+ \nonumber\\ \!+|{h}|^{-1/2}\left(\partial_i\! +\! \frac{1}{2}\hat{E}_{i\mu}^{\ \ \mu}\right) {h}^{ij}|{h}|^{1/2}\left(\partial_j\!+\! \frac{1}{2}\hat{E}_{j\mu}^{\ \ \mu}\right)\!+ \nonumber\\ \!-\!\frac{1}{2}\hat\nabla_\mu{E}_{\ i}^{\mu\ i} \!-\!\frac{1}{4}{E}_{\mu i}^{\ \ i}{E}_{\ j}^{\mu\ j}\!-\!\frac{1}{2}\nabla_i\hat{E}_{\ \mu}^{i\ \mu}\!-\! 
\frac{1}{4}\hat{E}_{i\mu}^{\ \ \mu} \hat{E}_{\ \nu}^{i\ \nu}\hskip0,2cm \label{LapRecuction} \end{eqnarray} The first righthand side term of this equation corresponds to the LD external Laplace operator \begin{equation} \Delta^{\rm ext} =|g|^{-1/2}\partial_\mu g^{\mu\nu}|g|^{1/2}\partial_\nu\nonumber \end{equation} (Hermitian with respect to the external scalar product constructed by means of the LD volume element $|g|^{1/2}dx$) with partial derivatives $\partial_\mu$ replaced by the HD Hermitian operators \[ \hat\partial_\mu+\frac{1}{2}{E}_{\mu i}^{\ \ i}=\partial_\mu+\left(\partial_\mu\ln |h|^{1/4}\right)-ia_\mu-\frac{1}{2}\nabla_ia_\mu^i \] The total derivative $\partial_\mu\ln |h|^{1/4}$ takes into account the different normalization of HD and LD states. It amounts to the rescaling $\Delta^{\rm ext}\rightarrow|{h}|^{-1/4} \Delta^{\rm ext}|{h}|^{1/4}$. The Hermitian internal operator $a_\mu-\frac{i}{2}\nabla_ia_\mu^i$ enters the expression as a gauge potential. The second righthand side term of (\ref{LapRecuction}) corresponds to the LD internal Laplace operator \begin{equation} \Delta^{\rm int}= |{h}|^{-1/2}\partial_i{h}^{ij}|{h}|^{1/2}\partial_j \nonumber \end{equation} (Hermitian with respect to the internal scalar product constructed by means of the LD volume element $|h|^{1/2}dy$) with partial derivatives $\partial_i$ replaced by \[ \partial_i+ \frac{1}{2}\hat{E}_{i\mu}^{\ \ \mu}=\partial_i+\left(\partial_i\ln |g|^{1/4}\right) \] As above, the total derivative amounts to the rescaling $\Delta^{\rm int}\rightarrow |g|^{-1/4}\Delta^{\rm int} |g|^{1/4}$, necessary to correct the different normalization of HD and LD states. The remaining terms in the righthand side of (\ref{LapRecuction}) are identified with a scalar potential induced by dimensional reduction. They are known to produce observable effects in low energy physics \cite{gaugeEST}. 
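As an elementary illustration of this gauge structure (a standard Abelian example built on assumptions not made in the text: a single internal circle of radius $R$, $h_{11}=1$, $a_\mu^1=A_\mu(x)$, an external metric independent of $y$, and the hatted derivative $\hat\partial_\mu=\partial_\mu-a_\mu^i\partial_i$), an expansion of $\bm\Psi$ in internal Fourier modes $\psi_n(x)\,e^{iny/R}$ turns the hatted derivative into a minimally coupled one:

```latex
\[
\hat\partial_\mu\bigl(\psi_n\,e^{iny/R}\bigr)
  =\bigl(\partial_\mu-A_\mu\,\partial_y\bigr)\,\psi_n\,e^{iny/R}
  =\Bigl[\bigl(\partial_\mu-i\,\tfrac{n}{R}\,A_\mu\bigr)\psi_n\Bigr]\,e^{iny/R}
\]
```

so each mode $\psi_n$ behaves as a LD scalar of charge $n/R$; for this flat, $x$-independent internal geometry the fundamental form traces, and with them the induced scalar potential, vanish.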
Specializing to Kaluza-Klein and embedded spacetime models we obtain: \vskip0,2cm\noindent{\small {\bf Kaluza-Klein}: By recalling (\ref{metricKK}), (\ref{IIexKK}), (\ref{LDexRKK}), (\ref{IIinKK}), the fact that $\nabla_i{\rm K}_{\sf a}^i=0$ and assuming that the external metric only depends on external coordinates, (\ref{LapRecuction}) reduces to the well known expression \begin{equation} \begin{array}{l} {\mathbf\Delta}^{\rm KK}= |{\rm g}|^{-1/2}\left(\partial_\mu- i{\rm A}_\mu^{\sf a}\hat{\rm K}_{\sf a}\right){\rm g}^{\mu\nu} |{\rm g}|^{1/2}\left(\partial_\nu-i {\rm A}_\nu^{\sf a}\hat{\rm K}_{\sf a}\right)+\\ \hskip1.2cm+|\kappa|^{-1/2}\partial_i\kappa^{ij}|\kappa|^{1/2}\partial_j \end{array}\tag{\ref{LapRecuction}KK}\label{Laplace KK} \end{equation} where $\hat{\rm K}_{\sf a}=-i{\rm K}_{\sf a}^i\partial_i$ are infinite dimensional Hermitian generators of the isometry algebra.} \vskip0,05cm \noindent{\small {\bf Embedded spacetime}: By recalling (\ref{metricES}), (\ref{IIexES}), (\ref{LDexRES}), (\ref{IIinES}), the fact that $\nabla_i{\rm A}_j^{\ i}y^j={\rm A}_i^{\ i}=0$ and after rescaling fields and operators by \begin{equation} \begin{array}{rcl} \bm\Psi&\rightarrow& |g|^{1/4}|{\rm g}|^{-1/4}\bm\Psi \\ {\mathbf\Delta} &\rightarrow& |g|^{1/4} |{\rm g}|^{-1/4}{\mathbf\Delta}\ |g|^{-1/4}|{\rm g}|^{1/4} \end{array}\nonumber \end{equation} in a neighborhood of radius $\epsilon$ of ${\mathrm M}_d$ (\ref{LapRecuction}) reduces to \begin{equation} \begin{array}{l} {\mathbf\Delta}^{\rm emb}= |{\rm g}|^{-1/2} \left(\partial_\mu-\frac{i}{2}{\rm A}_\mu^{\ ij}{\rm L}_{ij} \right) {\rm g}^{\mu\nu}|{\rm g}|^{1/2} \left(\partial_\nu- \frac{i}{2}{\rm A}_\mu^{\ kl}{\rm L}_{kl} \right)+\\ \hskip1.2cm +\frac{1}{2}\mathrm{II}_{i\mu\nu}\mathrm{II}^{i\mu\nu}- \frac{1}{4}\mathrm{II}_{i\mu}^{\ \ \mu}\mathrm{II}_{\ \nu}^{i\ \nu} +\partial^i\partial_i+{\cal O}(\epsilon) \end{array}\tag{\ref{LapRecuction}{\rm emb}}\label{Laplace emb} \end{equation} where ${\rm 
L}_{ij}=-i(y_i\partial_j-y_j\partial_i)$ are orbital angular momentum operators in internal directions.} \subsubsection{Dirac operator} The HD Dirac operator $\bm{\mathcal D\!\!\!\!/}$ acts on $2^{[D/2]}$-dimensional Dirac fermions. In every local coordinate frame $\bm{\mathcal D\!\!\!\!/}$ is written in terms of HD gamma matrices $\bm\gamma^A$, reference frames, partial derivatives, pseudo-orthogonal connection coefficients and spin pseudo-orthogonal generators $\bm\Sigma^{AB}=-\frac{i}{4}[\bm\gamma^A,\bm\gamma^B]$ as \begin{equation} \bm{\mathcal D\!\!\!\!/}=\bm\gamma^C{\mathbf r}_C^{\ I} \left(\partial_I-\frac{i}{2}{\mathbf\Omega}_{I,AB}\bm\Sigma^{AB}\right) \nonumber \end{equation} $\bm{\mathcal D\!\!\!\!/}$ is Hermitian with respect to the measure constructed by means of Dirac adjoint and HD covariant volume element $|{\mathbf g}|^{1/2}d{\mathbf x}$. HD gamma matrices $\bm\gamma^A$ can be decomposed in terms of LD external $\gamma^\alpha$ and internal $\gamma^a$ gamma matrices as \begin{eqnarray} \bm{\gamma}^\alpha &=&\gamma^\alpha \otimes \bm{1}^{\rm int}\nonumber\\ \bm{\gamma}^a&=&\gamma^{\rm ext} \otimes \gamma^a\nonumber \end{eqnarray} where here and in what follows, $\bm{1}^{\rm ext}$, $\gamma^{\rm ext}$ and $\bm{1}^{\rm int}$, $\gamma^{\rm int}$ denote identity and chiral matrices in external and internal spin spaces, respectively. 
Correspondingly, the HD spin generators $\bm\Sigma^{AB}$ decompose in terms of LD external $\Sigma^{\alpha\beta}=-\frac{i}{4}[\gamma^\alpha,\gamma^\beta]$ and internal $\Sigma^{ab}=-\frac{i}{4}[\gamma^a,\gamma^b]$ ones as \begin{eqnarray} \bm\Sigma^{\alpha\beta} &=& \Sigma^{\alpha\beta} \otimes {\mathbf 1}^{\rm int}\nonumber\\ \bm\Sigma^{\alpha b} &=& \frac{i}{2} \gamma^{\rm ext}\gamma^\alpha \otimes\gamma^b\nonumber\\ \bm\Sigma^{ab} &=& {\mathbf 1}^{\rm ext} \otimes \Sigma^{ab}\nonumber \end{eqnarray} By recalling the reference frames decomposition (\ref{rf}), the relation between HD and LD connection coefficients (\ref{HvLconnection}), suppressing --as customary-- tensor product symbols and spin identity matrices, we obtain the most general LD decomposition covariant under (\ref{STr}) \begin{eqnarray} \bm{\mathcal D\!\!\!\!/}=\gamma^\gamma r_\gamma^{\ \mu} \left(\hat\partial_\mu \!+\!\frac{1}{2}{E}_{\mu i}^{\ \ i}\!-\!\frac{i}{2} \hat{A}_{\mu,ab} \Sigma^{ab}\!-\! \frac{i}{2}{{\mathit{\hat\Omega}}}_{\mu,\alpha\beta} \Sigma^{\alpha\beta}\right) \!+ \nonumber\\ +\gamma^{\rm ext} \gamma^c \rho_c^{\ i}\left(\partial_i \!+\!\frac{1}{2}\hat{E}_{i\mu}^{\ \ \mu}\!-\!\frac{i}{2}A_{i,\alpha\beta}\Sigma^{\alpha\beta}\!-\!\frac{i}{2} {\mathit\Omega}_{i,ab}\Sigma^{ab}\right)\!+\nonumber\\ +\frac{i}{2}\gamma^{\rm ext} \gamma^cf_{c\alpha\beta}\Sigma^{\alpha\beta} \hskip0,5cm\label{DirRecuction} \end{eqnarray} The first righthand side term reproduces the four dimensional Dirac operator \begin{equation} {\mathcal D\!\!\!\!/}^{\rm\ ext}= \gamma^\gamma r_\gamma^{\ \mu} \left(\partial_\mu- \frac{i}{2} {{\mathit{\Omega}}}_{\mu,\alpha\beta} \Sigma^{\alpha\beta}\right)\nonumber \end{equation} with connection coefficients replaced by hatted ones and partial derivatives $\partial_\mu$ replaced by \[ \hat\partial_\mu +\frac{1}{2}{E}_{\mu i}^{\ \ i} - \frac{i}{2} \hat{A}_{\mu,ab} \Sigma^{ab} \] As in the scalar case, the total derivative hidden in the trace of the second fundamental form 
$\frac{1}{2}{E}_{\mu i}^{\ \ i}$ corrects the different HD and LD normalization, while the operator gauge potential $a_\mu-\frac{i}{2}\nabla_ia_\mu^i$ is now supplemented by the Hermitian internal spin matrix $\frac{1}{2} \hat{A}_{\mu,ab} \Sigma^{ab}$. The second righthand side term corresponds to $\gamma^{\rm ext} $ times the internal Dirac operator \begin{equation} {\mathcal D\!\!\!\!/}^{\rm\ int}= \gamma^c \rho_c^{\ i}\left(\partial_i -\frac{i}{2} {\mathit\Omega}_{i,ab}\Sigma^{ab}\right) \nonumber \end{equation} with partial derivatives replaced by \[ \partial_i+\frac{1}{2}\hat{E}_{i\mu}^{\ \ \mu}-\frac{i}{2}A_{i,\alpha\beta}\Sigma^{\alpha\beta} \] Once again $\frac{1}{2}\hat{E}_{i\mu}^{\ \ \mu}$ remedies the different normalization of HD and LD states, while the Hermitian external spin matrix $\frac{1}{2}A_{i,\alpha\beta}\Sigma^{\alpha\beta}$ enters the expression as a gauge potential. The third righthand side term $\frac{i}{2}\gamma^{\rm ext} \gamma^cf_{c\alpha\beta}\Sigma^{\alpha\beta}$ is an induced Pauli term.
Specializing to Kaluza-Klein and embedded spacetime models we obtain: \vskip0,2cm\noindent{\small {\bf Kaluza-Klein}: By recalling (\ref{rfKK}), (\ref{IIexKK}), (\ref{LDexRKK}), (\ref{IIinKK}), the fact that $\nabla_i{\rm K}_{\sf a}^i=0$ and assuming that the external metric only depends on external coordinates, (\ref{DirRecuction}) reduces to the well-known Kaluza-Klein decomposition of the Dirac operator \begin{equation} \begin{array}{l} \bm{\mathcal D\!\!\!\!/}^{\rm KK}=\gamma^\alpha{\rm r}_\alpha^{\ \mu}\left(\partial_\mu - i{\rm A}_\mu^{\sf a}\hat{\rm K}_{\sf a}- \frac{i}{2}\Omega_{\mu,\alpha\beta}\Sigma^{\alpha\beta}\right)+\\ \hskip0.4cm + \gamma^{\rm ext}\gamma^ak_a^{\ i}\left(\partial_i- \frac{i}{2}\Omega_{i,ab}\Sigma^{ab}\right) +\frac{i}{2}\gamma^{\rm ext}\gamma^i{\rm F}_{\alpha\beta}^{\sf a} {\rm K}_{{\sf a}i}\Sigma^{\alpha\beta} \end{array} \tag{\ref{DirRecuction}KK}\label{Dirac KK} \end{equation} where \begin{equation} \hat{\rm K}_{\sf a}= -i{\rm K}^i_{\sf a}\partial_i+\frac{1}{2} \left[k_a^{\ i}(\partial_i{\rm K}^j_{\sf a})k_{bj}-{\rm K}^i_{\sf a}(\partial_i k_a^{\ j})k_{bj}\right]\Sigma^{ab}\nonumber \end{equation} are infinite dimensional Hermitian generators of the isometry group algebra.} \vskip0,05cm \noindent{\small {\bf Embedded spacetime}: By recalling (\ref{rfES}), (\ref{IIexES}), (\ref{LDexRES}), (\ref{IIinES}), that $\nabla_i{\rm A}_j^{\ i}y^j=0$ and by rescaling fields and operators by \begin{equation} \begin{array}{rcl} \bm\Psi & \rightarrow & |g|^{1/4}|{\rm g}|^{-1/4}\bm\Psi\\ \bm{\mathcal D\!\!\!\!/}\ & \rightarrow & |g|^{1/4}|{\rm g}|^{-1/4} \bm{\mathcal D\!\!\!\!/}\ |g|^{-1/4}|{\rm g}|^{1/4} \end{array} \nonumber \end{equation} we obtain the following expression for the Dirac operator in a neighborhood of radius $\epsilon$ of ${\mathrm M}_d$ \begin{equation} \begin{array}{l} \bm{\mathcal D\!\!\!\!/}^{\rm\ emb}=\gamma^\alpha{\rm t}_\alpha^{\ \mu}\left(\partial_\mu- \frac{i}{2}{\rm A}_\mu^{\ ij}{\rm J}_{ij}- 
\frac{i}{2}\Omega_{\mu,\alpha\beta}\Sigma^{\alpha\beta}\right)+\\ \hskip0.4cm+\gamma^{\rm ext}\gamma^i\partial_i +{\cal O}(\epsilon) \end{array} \tag{\ref{DirRecuction}{\scriptsize\rm emb}}\label{Dirac emb} \end{equation} with ${\rm J}_{ij}={\rm L}_{ij}+\Sigma_{ij}$ the total angular momentum in internal directions.} \subsubsection{Higher spin operators} HD higher spin operators decompose in the very same way: as sums of LD spin operators with partial derivatives replaced by `gauge covariant' ones, possibly supplemented by scalar potential terms. In particular, external partial derivatives $\partial_\mu$ are replaced by \begin{equation} \hat\partial_\mu+\frac{1}{2}{E}_{\mu i}^{\ \ i} -\frac{i}{2} \hat{A}_{\mu,ab}{\mathrm S}^{ab} \label{hs_gauge_derivative} \end{equation} with ${\mathrm S}^{ab}$ appropriate internal spin generators. \section{\label{sec3} Gauge Symmetries from \protect\\ Higher Dimensional Covariance} One of the most interesting features of dimensional reduction is the possibility of geometrically inducing gauge structures in the effective LD dynamics. In this section we see that, after general covariance breaking, residual internal coordinate and reference frame transformations are always perceived by effective LD observers as gauge transformations. We also discuss conditions for the induced gauge group to be finite dimensional, providing a covariant characterization of Kaluza-Klein and a few other remarkable backgrounds.\\ Gauge fields are identified by their coupling to matter and by their transformation rules. From the point of view of the classical equations of motion, a quick look at (\ref{H}) shows that $a_\mu^i\pi_i$ enters the Hamiltonian as a gauge potential. To make the gauge structure explicit we may rewrite the interaction term as $\mathbf{tr}(qa_\mu)$ with $q=iy^j\pi_j$ a suitable charge operator and $a_\mu=-ia_\mu^i\partial_i$ the gauge connection introduced in Section \ref{sec1.2}.
The corresponding curvature $f_{\mu\nu}=-if_{\mu\nu}^i\partial_i$ enters the third term of equation (\ref{geoex}). From the operatorial/quantum viewpoint, after adapting the state measure to the external spacetime by the scale transformation \begin{equation} \begin{array}{l} \bm\Psi\rightarrow|h|^{1/4}\bm\Psi\hskip0,3cm\mbox{and} \nonumber\\ {\mathbf\Delta}\rightarrow|h|^{1/4} {\mathbf\Delta}|h|^{-1/4}, \hskip0,2cm \bm{\mathcal D\!\!\!\!/}\rightarrow|h|^{1/4} \bm{\mathcal D\!\!\!\!/}|h|^{-1/4} \hskip0,2cm ... \end{array}\nonumber \end{equation} the expressions (\ref{LapRecuction}), (\ref{DirRecuction}) and (\ref{hs_gauge_derivative}) taken by the Laplace, Dirac and higher spin operators show that \begin{eqnarray} {\mathcal A}_\mu&=& -ia_\mu^i\left(\partial_i-\frac{i}{2}{\mathit\Omega}_{i,ab} {\mathrm S}^{ab} \right) -\frac{i}{2}\nabla_ia_\mu^i+\nonumber\\ & & +\frac{1}{2}(\partial_\mu\rho_a^{\ k})\rho_{bk}{\mathrm S}^{ab} \label{gauge_potential} \end{eqnarray} couples to the effective LD degrees of freedom as a gauge potential.
Under the residual covariance group ${\mathcal A}_\mu$ transforms like a gauge potential: \begin{description} \item -- internal diffeomorphisms \begin{equation} y^i\rightarrow \exp\{\xi^k(x,y)\partial_k\}y^i \nonumber \end{equation} make $a_\mu^i$ and hence ${\mathcal A}_\mu$ transform like \begin{equation} {\mathcal A}_\mu\rightarrow {T}{\mathcal A}_\mu {T}^{-1}+i{T}(\partial_\mu {T}^{-1}) \end{equation} with $T=\exp\{-\xi^k\partial_k\}$ \item -- internal reference frame redefinitions \begin{equation} \rho_a^{\ i} \rightarrow \Lambda_a^{\ b}(x,y)\rho_b^{\ i}\nonumber \end{equation} make $(\partial_\mu\rho_a^{\ k})\rho_{bk}$ and hence ${\mathcal A}_\mu$ transform like \begin{equation} {\mathcal A}_\mu\rightarrow {\mathit \Lambda}{\mathcal A}_\mu {\mathit \Lambda}^{-1}+i{\mathit \Lambda}(\partial_\mu {\mathit \Lambda}^{-1}) \end{equation} with ${\mathit \Lambda}=\exp\{\frac{i}{2} \Lambda_{ab}{\mathrm S}^{ab}\}$ \end{description} The commutator of two gauge covariant derivatives defines the operator ${\mathcal F}_{\mu\nu}=\partial_\mu {\mathcal A}_\nu-\partial_\nu{\mathcal A}_\mu-i[{\mathcal A}_\mu,{\mathcal A}_\nu]$. A direct computation yields \begin{eqnarray} {\mathcal F}_{\mu\nu} &=& -i f_{\mu\nu}^i\left(\partial_i-\frac{i}{2}{\mathit\Omega}_{i,ab} {\mathrm S}^{ab} \right) -\frac{i}{2}\nabla_if_{\mu\nu}^i+\nonumber\\ &&+\frac{1}{2}(\nabla_af_{b\mu\nu}){\mathrm S}^{ab}+{E}_{\mu a}^{\ \ c}{E}_{\nu bc} {\mathrm S}^{ab}\label{gauge_field} \end{eqnarray} Under internal diffeomorphisms and reference frame redefinitions ${\mathcal F}_{\mu\nu}$ correctly transforms like a gauge curvature \begin{equation} {\mathcal F}_{\mu\nu}\rightarrow{T}{\mathcal F}_{\mu\nu}{T}^{-1} \hskip0,3cm\mbox{and}\hskip0,3cm {\mathcal F}_{\mu\nu}\rightarrow{\mathit \Lambda}{\mathcal F}_{\mu\nu}{\mathit \Lambda}^{-1} \end{equation} ${\mathcal A}_\mu$ and ${\mathcal F}_{\mu\nu}$ are Hermitian operators --i.e.\ infinite dimensional matrices-- acting on internal tensors/spinors.
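In the abelian case the transformation rule ${\mathcal A}_\mu\rightarrow T{\mathcal A}_\mu T^{-1}+iT(\partial_\mu T^{-1})$ collapses to the familiar shift of a $U(1)$ potential. A minimal symbolic sketch (the profiles $A(x)$ and $\xi(x)$ are hypothetical, introduced only for the check):

```python
import sympy as sp

x = sp.symbols('x', real=True)
xi = sp.Function('xi', real=True)(x)   # hypothetical gauge parameter
A = sp.Function('A', real=True)(x)     # hypothetical abelian potential

T = sp.exp(-sp.I*xi)                   # one-parameter abelian transformation
A_new = T*A*(1/T) + sp.I*T*sp.diff(1/T, x)

# for commuting T the conjugation is trivial and the rule reduces to
# the standard U(1) shift A -> A - d(xi)/dx
assert sp.simplify(A_new - (A - sp.diff(xi, x))) == 0
```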
After HD covariance breaking, residual internal covariance is perceived by effective LD observers as an infinite dimensional gauge group, with internal coordinate and spin playing the role of --one of the many possible choices of-- gauge indices. The gauge curvature ${\mathcal F}_{\mu\nu}$ receives contributions from two independent LD tensors: $f_{\mu\nu}^i$ and ${E}_{\mu ij}$. In general, the two contributions are simultaneously active, producing an effective infinite-dimensional gauge group. In some special backgrounds the gauge group may reduce to finite dimensions. \subsection{\label{sec4.1} ${E}_{\mu ij}=0$: gauge structures related \\ to the isometric structure of internal spaces} Let us first consider the case where the internal fundamental form ${E}_{\mu ij}$ vanishes identically while $f_{\mu\nu}^i$ is arbitrary. This requirement is equivalent to the statement that the induced gauge structure is of the Kaluza-Klein type. We have already seen in Section \ref{sec1.3.2}, equations (\ref{IIinKK}) and (\ref{IIinES}), that gauge structures of the Kaluza-Klein type imply ${E}_{\mu ij}=0$. To prove the converse, we note that under the vanishing of the internal second fundamental form, equation (\ref{IIinIdentity}) implies that \begin{equation} \nabla_if_{j\mu\nu}+\nabla_jf_{i\mu\nu}=0\nonumber \end{equation} In every internal space the vector $f_{\mu\nu}^i$ is Killing. In principle the Killing structure of ${\mathrm M}_c^x$ can depend on the external point $x$. However, the fact that $f_{\mu\nu}^i$ belongs to the Killing algebra also implies that $a_\mu^i$ takes values on the same algebra up to a pure gauge term. It is therefore possible to choose internal coordinates in which $a_\mu^i$ is Killing, $\nabla_ia_{\mu j}+\nabla_ja_{\mu i}=0$. In such adapted coordinate frames equation (\ref{IIinKK'}) implies $\partial_\mu h_{ij}=0$, that is $h_{ij}(x,y)=\kappa_{ij}(y)$.
Thus, the intrinsic geometry of internal spaces does not depend on the external spacetime point. Having the same intrinsic and extrinsic geometry, all internal spaces are isomorphic: ${\mathrm M}_c^x\equiv{\mathcal K}_c$. By choosing a Killing vector basis ${\mathrm K}_{\mathsf a}^i(y)$, ${\mathsf a}=1,...,n$, for the isometry algebra $iso({\mathcal K}_c)\equiv {\mathrm g}_{\mathrm{KK}}$, $[{\mathrm K}_{\mathsf a},{\mathrm K}_{\mathsf b}]^i=k_\mathsf{ab}^{ \ \mathsf{c}}{\mathrm K}_{\mathsf c}^i$, the off-diagonal metric term $a_\mu^i$ and the antisymmetric hybrid tensor $f_{\mu\nu}^i$ can be expanded as in (\ref{fKK}) or (\ref{fES}) by \begin{eqnarray} a^i_\mu(x,y)&=&{\mathrm A}^{\mathsf a}_\mu(x){\mathrm K}_{\mathsf a}^i(y)\nonumber\\ f^i_{\mu\nu}(x,y)&=&{\mathrm F}^{\mathsf a}_{\mu\nu}(x){\mathrm K}_{\mathsf a}^i(y)\nonumber \end{eqnarray} with ${\mathrm F}^{\mathsf c}_{\mu\nu}=\partial_\mu{\mathrm A}^{\mathsf c}_\nu-\partial_\nu{\mathrm A}^{\mathsf c}_\mu-k_\mathsf{ab}^{ \ \mathsf{c}}{\mathrm A}^{\mathsf a}_\mu{\mathrm A}^{\mathsf b}_\nu$. The gauge potential (\ref{gauge_potential}) and the gauge field (\ref{gauge_field}) acting on spin-$\mathrm{s}$ matter take then the standard Kaluza-Klein form \begin{eqnarray} {\mathcal A}_{\mu}&=&{\mathrm A}_\mu^{\mathsf a}(x)\hat{\mathrm K}_{\mathsf a}\label{KKpotential}\\ {\mathcal F}_{\mu\nu}&=& {\mathrm F}^{\mathsf a}_{\mu\nu}(x)\hat{\mathrm K}_{\mathsf a}\label{KKfiled} \end{eqnarray} with \begin{eqnarray} \hat{\mathrm K}_{\mathsf a}\!=\!{\mathrm K}_{\mathsf a}^i\left(\!-i\partial_i\! 
-\!\frac{1}{2}\Omega_{i,ab}{\mathrm S}^{ab}\!\right)\!+\!\frac{1}{2} (\nabla_a{\rm K}_{{\sf a}b}){\mathrm S}^{ab}\nonumber \end{eqnarray} spin-$\mathrm{s}$ valued Hermitian differential operators closing the finite-dimensional algebra $iso({\mathcal K}_c)$, $[\hat{\mathrm K}_{\mathsf a},\hat{\mathrm K}_{\mathsf b}]=-ik_\mathsf{ab}^{ \ \mathsf{c}}\hat{\mathrm K}_{\mathsf c}$.\\ The theory is still covariant under the whole residual covariance group (\ref{STr}). However, a generic diffeomorphism $T=\exp\{-\xi^i(x,y)\partial_i\}$ will bring $a_\mu^i$ outside $iso({\mathcal K}_c)$. To keep the group structure of the background explicit it is necessary to work in adapted coordinates. This is achieved by restricting the allowed covariance group to Killing transformations, that is, by restricting attention to $\xi^i(x,y)=\epsilon^{\mathsf a}(x){\mathrm K}_{\mathsf a}^i(y)$ as standard in Kaluza-Klein theories. In arbitrary coordinate frames, Kaluza-Klein gauge structures are completely characterized by the LD covariant condition \begin{equation} {E}_{\mu ij}=0 \end{equation} Kaluza-Klein backgrounds, in the strict sense, further require the induced external metric to be independent of the internal coordinates, a condition enforced by the vanishing of the symmetric part of the external fundamental form $\hat{E}_{i(\mu\nu)}=0$. In contrast with the diffeomorphism algebra, the Killing algebra of a manifold is always finite-dimensional, having dimension at most $c(c+1)/2$ \cite{Kill}. As a consequence, in the Kaluza-Klein context, at least two internal dimensions are necessary to produce non-Abelian gauge structures. Thus, a minimum of seven extra dimensions is required to realize the Standard Model group $U(1)\times SU(2)\times SU(3)$ \cite{Witten81}.
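The dimension counting behind the last two statements can be made explicit. A short sketch (the minimal internal spaces $S^1$, $S^2$ and $\mathbb{CP}^2$ follow Witten's count cited above):

```python
# maximal dimension of the Killing algebra of a c-dimensional manifold
def max_iso_dim(c):
    return c*(c + 1)//2

# a single internal dimension only allows an abelian (at most 1d) isometry
assert max_iso_dim(1) == 1

# Witten's count: minimal internal dimensions carrying each Standard Model
# factor as an isometry group: S^1 for U(1), S^2 for SU(2), CP^2 for SU(3)
min_dim = {'U(1)': 1, 'SU(2)': 2, 'SU(3)': 4}
assert sum(min_dim.values()) == 7

# seven internal dimensions leave ample room: dim U(1)xSU(2)xSU(3) = 1+3+8
assert max_iso_dim(7) >= 1 + 3 + 8
```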
\subsection{\label{sec4.2} ${E}_{\mu ij}-\frac{1}{c} {E}_{\mu k}^{\ \ k}h_{ij}=0$: gauge structures related \\ to the conformal structure of internal spaces} Let us now weaken the Kaluza-Klein condition by requiring the proportionality of the internal fundamental form ${E}_{\mu ij}$ to the internal metric $h_{ij}$, ${E}_{\mu ij}=\frac{1}{c}{E}_{\mu k}^{\ \ k}h_{ij}$ (this condition is trivial for $c=1$). Assuming this, equation (\ref{IIinIdentity}) implies that \begin{equation} \nabla_if_{j\mu\nu}+\nabla_jf_{i\mu\nu}=\frac{2}{c} (\nabla_kf^{k}_{\mu\nu})h_{ij}\nonumber \end{equation} In every internal space the internal vector $f^{i}_{\mu\nu}$ belongs to the conformal algebra of the manifold. As above, the fact that $f^{i}_{\mu\nu}$ belongs to an algebra implies that $a^i_\mu$ also belongs to the same algebra up to a pure gauge term. It is then possible to adapt internal coordinates in such a way that $\nabla_ia_{\mu j}+\nabla_ja_{\mu i}=\frac{2}{c}(\nabla_ka^k_\mu) h_{ij}$. Equation (\ref{IIinKK'}) implies that $|h|^{1/c}\partial_\mu\left(|h|^{-1/c}h_{ij}\right)=0$. Hence, in the adapted coordinate system $h_{ij}(x,y)=\lambda(x){\mathrm c}_{ij}(y)$ for some conformal factor $\lambda(x)$ and some internal metric ${\mathrm c}_{ij}(y)$. All internal spaces are conformal to a given manifold ${\mathcal C}_c$.
Choosing a basis ${\mathrm C}_{\mathsf a}^i(y)$, ${\mathsf a}=1,...,n$ for the conformal algebra $con\!f({\mathcal C}_c)$, $[{\mathrm C}_{\mathsf a},{\mathrm C}_{\mathsf b}]^i=c_\mathsf{ab}^{ \ \mathsf{c}}{\mathrm C}_{\mathsf c}^i$, $a^i_\mu$ and $f^{i}_{\mu\nu}$ can be expanded as \begin{eqnarray} a^i_\mu(x,y)&=&{\mathrm A}^{\mathsf a}_\mu(x){\mathrm C}_{\mathsf a}^i(y)\nonumber\\ f^i_{\mu\nu}(x,y)&=&{\mathrm F}^{\mathsf a}_{\mu\nu}(x){\mathrm C}_{\mathsf a}^i(y)\nonumber \end{eqnarray} where again ${\mathrm F}^{\mathsf c}_{\mu\nu}=\partial_\mu{\mathrm A}^{\mathsf c}_\nu-\partial_\nu{\mathrm A}^{\mathsf c}_\mu-c_\mathsf{ab}^{ \ \mathsf{c}}{\mathrm A}^{\mathsf a}_\mu{\mathrm A}^{\mathsf b}_\nu$. Also (\ref{gauge_potential}) and (\ref{gauge_field}) take the standard gauge potential and gauge curvature form \begin{eqnarray} {\mathcal A}_{\mu}&=&{\mathrm A}_\mu^{\mathsf a}(x)\hat{\mathrm C}_{\mathsf a}\label{CONFpotential}\\ {\mathcal F}_{\mu\nu}&=& {\mathrm F}^{\mathsf a}_{\mu\nu}(x)\hat{\mathrm C}_{\mathsf a}\label{CONFfiled} \end{eqnarray} where the spin-$\mathrm{s}$ valued Hermitian operators $\hat{\mathrm C}_{\mathsf a}$ now take the slightly more complicated form \begin{equation} \hat{\mathrm C}_{\mathsf a}\!= \!{\mathrm C}_{\mathsf a}^i\left(\!-i\partial_i\!- \!\frac{1}{2}\Omega_{i,ab}{\mathrm S}^{ab}\!\right)\!+ \!\frac{1}{2}(\nabla_a{\mathrm C}_{{\sf a}b}){\mathrm S}^{ab}\!-\!\frac{i}{2}\nabla_i{\mathrm C}_{\sf a}^i \nonumber \end{equation} It is readily checked that the $\hat{\mathrm C}_{\mathsf a}$ do not depend on external coordinates and close $con\!f({\mathcal C}_c)$, $[\hat{\mathrm C}_{\mathsf a},\hat{\mathrm C}_{\mathsf b}]=-ic_\mathsf{ab}^{ \ \mathsf{c}}\hat{\mathrm C}_{\mathsf c}$.\\ As in the previous case, gauge invariance is only explicit when the allowed covariance group is restricted to conformal transformations $\xi^i(x,y)=\epsilon^{\mathsf a}(x){\mathrm C}_{\mathsf a}^i(y)$, while in arbitrary coordinates the background is completely characterized by the 
LD covariant condition \begin{equation} {E}_{\mu ij}-\frac{1}{c}{E}_{\mu k}^{\ \ k}h_{ij}=0 \end{equation} The conformal algebra of a manifold contains the isometry algebra as a subalgebra and is always finite dimensional with maximal dimension $(c+1)(c+2)/2$. As a consequence, non-Abelian gauge fields may be induced even with a single internal dimension. \vskip0,05cm \noindent{\small {\bf Example:} To check this explicitly we consider a one-dimensional internal space with the topology of a circle parameterized by the internal coordinate $\theta\in[-\pi,\pi]$. The corresponding conformal algebra $so(2,1)$ is generated by the vector fields ${\mathrm C}_{\mathsf 1}^\theta=1$, ${\mathrm C}_{\mathsf 2}^\theta=\sin\theta$ and ${\mathrm C}_{\mathsf 3}^\theta=\cos\theta$. Assuming the off-diagonal term of the HD metric to be of the form \begin{equation} a_\mu^\theta(x,\theta)=A_\mu^1(x)+A_\mu^2(x)\sin\theta+A_\mu^3(x)\cos\theta\nonumber \end{equation} the gauge field (\ref{gauge_field}) takes the form (\ref{CONFfiled}) with \begin{eqnarray} \hat{\mathrm C}_1&=&-i\frac{\partial}{\partial\theta}\nonumber\\ \hat{\mathrm C}_2&=&-i\sin\theta \frac{\partial}{\partial\theta}- \frac{i}{2}\cos\theta\nonumber\\ \hat{\mathrm C}_3&=&-i\cos\theta \frac{\partial}{\partial\theta}+ \frac{i}{2}\sin\theta\nonumber \end{eqnarray} which are easily checked to close the $so(2,1)$ algebra \begin{eqnarray} [\hat{\mathrm C}_{\mathsf 1},\hat{\mathrm C}_{\mathsf 2}]=i\hat{\mathrm C}_{\mathsf 3},\hskip0,2cm [\hat{\mathrm C}_{\mathsf 2},\hat{\mathrm C}_{\mathsf 3}]=-i\hat{\mathrm C}_{\mathsf 1}, \hskip0,2cm [\hat{\mathrm C}_{\mathsf 3},\hat{\mathrm C}_{\mathsf 1}]=i\hat{\mathrm C}_{\mathsf 2} \nonumber \end{eqnarray} We should remark, however, that $so(2,1)$ is the only non-Abelian Lie algebra that can be embedded in $di\!f\!f({\mathrm M}_{1})$.
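The closure of these three operators can be verified symbolically; a minimal sketch (the brackets are evaluated on a generic function; the signs asserted below follow the convention $[\hat{\mathrm C}_{\mathsf a},\hat{\mathrm C}_{\mathsf b}]=-ic_\mathsf{ab}^{\ \mathsf{c}}\hat{\mathrm C}_{\mathsf c}$ with vector-field structure constants, and an overall redefinition $\hat{\mathrm C}_{\mathsf 1}\rightarrow-\hat{\mathrm C}_{\mathsf 1}$ flips all three of them):

```python
import sympy as sp

th = sp.symbols('theta', real=True)
f = sp.Function('f')(th)

# the three spin-0 operators of the circle example
C1 = lambda g: -sp.I*sp.diff(g, th)
C2 = lambda g: -sp.I*sp.sin(th)*sp.diff(g, th) - sp.I/2*sp.cos(th)*g
C3 = lambda g: -sp.I*sp.cos(th)*sp.diff(g, th) + sp.I/2*sp.sin(th)*g

comm = lambda A, B, g: sp.expand(A(B(g)) - B(A(g)))

# closure on so(2,1): each bracket is +/- i times another basis operator
assert sp.simplify(comm(C1, C2, f) + sp.I*C3(f)) == 0   # [C1,C2] = -i C3
assert sp.simplify(comm(C2, C3, f) - sp.I*C1(f)) == 0   # [C2,C3] =  i C1
assert sp.simplify(comm(C3, C1, f) + sp.I*C2(f)) == 0   # [C3,C1] = -i C2
```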
} \subsection{\label{sec4.3} $f_{\mu\nu}^i=0$: gauge structures related to the local\\ freedom of choosing internal reference frames} Finally, we consider the case where the antisymmetric hybrid tensor $f_{\mu\nu}^i$ vanishes identically while ${E}_{\mu ij}$ is arbitrary. Under these circumstances it is always possible to choose internal coordinates in such a way that the off-diagonal block of the HD metric vanishes identically, $a_\mu^i=0$. In such adapted coordinate systems the internal fundamental form reduces to the external derivative of the internal metric \begin{equation} {E}_{\mu ij}=\frac{1}{2}\partial_\mu h_{ij} \nonumber \end{equation} The gauge potential (\ref{gauge_potential}) and the gauge curvature (\ref{gauge_field}) acting on spin-$\mathrm{s}$ matter take the form of standard (pseudo-)orthogonal gauge fields, with internal spin generators ${\mathrm S}^{ab}$ playing the role of gauge algebra generators \begin{eqnarray} {\mathcal A}_{\mu}&=&\frac{1}{2}(\partial_\mu\rho_a^{\ k})\rho_{bk} {\mathrm S}^{ab}\\ {\mathcal F}_{\mu\nu}&=&\frac{1}{4}\rho_a^{\ i}\rho_b^{\ j}(\partial_\mu h_{ik})h^{kl}(\partial_\nu h_{jl}) {\mathrm S}^{ab} \end{eqnarray} We see that LD gauge structures can be induced even when the off-diagonal block of the HD metric vanishes identically, but they only act on matter carrying spin. As in the previous cases, the theory is still covariant under the whole residual covariance group. The gauge structure emerges explicitly only when adapted coordinates are introduced and the covariance group is restricted to (pseudo-)rotations of internal reference frames. In generic coordinate systems the background is fully characterized by the LD covariant condition \begin{equation} f_{\mu\nu}^i=0 \end{equation} Under these circumstances a minimum of three internal dimensions is required to generate non-Abelian gauge structures, while ten extra dimensions naturally provide the background for $SO(10)$ grand unification \cite{SO10}.
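That ${\mathcal A}_\mu$ is valued in the (pseudo-)orthogonal algebra follows from differentiating $\rho_a^{\ k}\rho_{bk}=\delta_{ab}$, which forces $(\partial_\mu\rho_a^{\ k})\rho_{bk}$ to be antisymmetric in $ab$. A two-dimensional toy check (the rotation angle $\alpha(x)$ is a hypothetical profile):

```python
import sympy as sp

x = sp.symbols('x', real=True)
alpha = sp.Function('alpha', real=True)(x)   # hypothetical frame-rotation angle

# internal zweibein taken as a pure rotation, so rho rho^T = 1
rho = sp.Matrix([[sp.cos(alpha), -sp.sin(alpha)],
                 [sp.sin(alpha),  sp.cos(alpha)]])
M = sp.simplify(rho.diff(x) * rho.T)         # (d rho) rho^T

# antisymmetry: M lies in so(2), proportional to the single generator
assert sp.simplify(M + M.T) == sp.zeros(2, 2)
J = sp.Matrix([[0, -1], [1, 0]])
assert sp.simplify(M - sp.diff(alpha, x)*J) == sp.zeros(2, 2)
```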
Internal gauge indices like isospin and color can be nicely understood as internal spin indices and a complete matter unification can be achieved in terms of a single fourteen-dimensional spinor \cite{Maraner04}. \section{\label{sec4} Discussion and Conclusion} The selection of a subset of coordinates --with the associated breaking of general covariance-- does not in itself imply either the selection of a reduced space or a dimensional reduction procedure. However, it determines the geometrical features of all reduction schemes leading to that subset of coordinates as residual coordinates. By investigating invariant/covariant quantities under the residual transformation group we constructed LD tensors that fully characterize the geometry of the coordinate choice and hence of the associated dimensional reduction schemes. These allow us to see in the same light reduction procedures that otherwise seem totally unrelated, like Kaluza-Klein models --where the system is totally delocalized in internal directions-- and embedded spacetimes --where, on the contrary, the system gets localized at an internal space point. Most of the formulas of Kaluza-Klein and embedded spacetime theories do not depend on the averaging procedure employed, but only on the geometry of the coordinate choice. In this paper we presented general formulas for the reduction of the main tensors and operators of Riemannian geometry. In particular, the reduction of the HD Riemann tensor provides what is probably the maximal possible generalization of the Gauss, Codazzi and Ricci equations.
Our work also sheds some new light on the nature of geometrically induced gauge structures, tracing their origin to residual general covariance in internal directions.\\ \indent We conclude by remarking that --from the separation of radial and angular coordinates in the two-body problem to the latest theories of everything-- {\em adapting}, {\em selecting an appropriate subset} and {\em exactly or effectively separating} coordinates is such a basic procedure in solving physical problems, that it is unthinkable to compile even a partial list of the papers where particular adapted/reduced expressions of geometric tensors, equations and operators have been obtained. Our hope is that the formulas presented in this paper may be of help and save some tedious computational work for all researchers working on problems involving adapted coordinates. \acknowledgements J.K.P. would like to thank KITP for its hospitality.
\section{Introduction} The dynamics of charge carriers in low-dimensional quantum structures has been in the focus of interest for many years. A special role in its study is played by probeless methods allowing the influence of contacts to be avoided. In one of these methods the attenuation and the phase shift of a surface acoustic wave (SAW) are measured. These are caused by charge carriers in a two-dimensional (2D) layer located close to the surface supporting the SAW. To the best of our knowledge, the first results of acoustic studies of 2D electron systems were reported in Ref.~\onlinecite{Wixforth1986}. In that paper, the interaction of a SAW with the 2D electrons in a GaAs/AlGaAs heterostructure was investigated in the integer quantum Hall effect (IQHE) regime. In that work, the SAW was excited and propagated directly on the surface of a piezoelectric GaAs/AlGaAs sample. This procedure caused the sample to be somewhat mechanically stressed and thereby deformed. Later the authors suggested a configuration in which no mechanical deformation is transferred from the piezoelectric substrate to the sample, such that only the electric field matters.~\cite{Wixforth1989} This made it possible to determine the complex AC conductance, $\sigma (\omega)$, from the measured SAW attenuation and phase shift. A relevant theoretical model developed in Refs.~\onlinecite{Efros1990,Simon1996,Kagan1997} allowed quantitative information to be extracted, as was shown in subsequent works.~\cite{Drichko2000,Drichko2005,Drichko2008,Drichko2009,Drichko2013} However, the acoustic method has an upper frequency limit, associated with technological problems in producing the inter-digital transducers (IDTs) used for the excitation and detection of the SAW, see Sec.~\ref{experiment_b}. At the same time, measurements of the electron response in low-dimensional systems to high-frequency perturbations are especially important, as they are related to several intrinsic properties of low-dimensional systems.
Among these properties are peculiarities of electron localization, collective modes and their pinning, mechanisms of integer and fractional quantum Hall effects, etc. A powerful approach is provided by microwave spectroscopy (MWS) suggested in Ref.~\onlinecite{Engel1993}. Various modifications of this method, including probeless ones, were developed in Refs.~\onlinecite{Stone2012,Endo2013,Chen_thesis,Stone_thesis}. The MWS provides a much broader frequency domain compared with acoustic spectroscopy~(AS), from hundreds of megahertz to tens of gigahertz. It is very efficient for studying the dependence of the electron response on magnetic field, temperature, etc. However, it is rather difficult to calibrate the measured AC conductance in absolute units, which is why the results of many papers on this subject are presented in relative units. The aim of the present work is to compare the results of microwave measurements of $\sigma (\omega)$ with those obtained using AS, as well as DC transport measurements. This comparison will facilitate the calibration of the dynamical electromagnetic response of the low-dimensional electron gas in absolute units. The paper is organized as follows. In Sec.~\ref{experiment} we describe the sample, as well as both AS and MWS. The results are reported in Sec.~\ref{results} and discussed in Sec.~\ref{discussion}. \section{Experiment} \label{experiment} \subsection{Sample}\label{sample} \begin{figure}[h!] \centerline{ \includegraphics[width=.4\columnwidth]{fig1.eps} } \caption{Structure of the sample. \label{fig1}} \end{figure} Since it is very important to use well-characterized samples, we have chosen the \textit{p}-SiGe/Ge/SiGe heterostructure (K6016) investigated earlier in Refs.~\onlinecite{Drichko2009,Drichko2013} by means of AS. The structure of the sample is schematically shown in Fig.~\ref{fig1}. The sample was made by low-energy plasma-enhanced chemical vapor deposition (LEPECVD), see~\cite{lepecvd} for details.
The active part of the sample is a 2D hole channel formed in a strained Ge layer. The density and mobility of holes are $p=6\times10^{11}$~cm$^{-2}$ and $\mu_p=6\times 10^4$~cm$^2$/(V$\cdot$s), respectively, at 4.2 K. \subsection{Experimental methods} \label{methods} In the following, we briefly explain the two methods employed for the AC conductance measurements: one (AS) based on the propagation of SAWs and the other on microwave spectroscopy (MWS). \subsubsection{Acoustic spectroscopy} \label{experiment_b} A sketch of the acoustic method is shown in Fig.~\ref{fig2}. \begin{figure}[b] \centerline{ \includegraphics[width=.8\columnwidth]{fig2.eps} } \caption{Sketch of the AS setup. \label{fig2}} \end{figure} A SAW is excited on the surface of a piezoelectric LiNbO$_3$ crystal by electromagnetic pulses applied to an interdigital transducer IDT1 and detected by another interdigital transducer IDT2 placed on the same surface. The SAW, generated by the piezoelectric effect in LiNbO$_3$, is accompanied by a travelling electromagnetic wave. The sample is pressed onto the surface by a spring. As a result, the electric field of the travelling wave penetrates into the 2D hole gas and interacts with the holes. This interaction leads to an attenuation, $\Gamma$, and a phase shift of the SAW, both of which are measured. The latter manifests itself as a renormalization of the SAW phase velocity, $v$. It is important to note that both real and imaginary parts of the conductance vanish in a strong perpendicular magnetic field. This makes it possible to single out the electron contribution by subtracting the zero-field values of $\Gamma$ and $\Delta v$ from those in a strong magnetic field. The expressions for these differences, as well as the parameters which we use for finding the conductances, are given in Appendix~\ref{A} as Eqs.~\eqref{eq:01} and \eqref{eq:02}. They are based on the derivation given in Ref.~\onlinecite{Kagan1997}.
In principle, Equations~\eqref{eq:01} and \eqref{eq:02} allow finding both the real and imaginary parts of the complex conductance from the measured $\Gamma$ and $\Delta v/v$. \subsubsection{Microwave spectroscopy} \label{microwave} In Fig.~\ref{fig3} is shown a sketch of the experimental setup for microwave spectroscopy. In this case, the sample is placed on a meandered coplanar waveguide (CPW) formed on the surface of an insulating GaAs substrate. The microwave pulses applied to the CPW center conductor excite a quasi-transverse electromagnetic mode (quasi-TEM mode). Similarly to the case of a SAW, the interaction with holes in the 2D layer leads to the attenuation of this mode, as well as to a change of its phase. Again, both effects are due to the complex conductance of the 2D layer. To separate these contributions, we will subtract the results in a transverse magnetic field from the zero-field ones. \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{fig3a.eps} \\[1mm] \includegraphics[width=.8\columnwidth]{fig3b.eps} \\[1mm] \includegraphics[width=0.6\columnwidth]{fig3c.eps} \caption{(a) - Sketch of the MWS setup; (b) - Sizes of the elements of the CPW. (c) - A simple transmission line model for using the CPW to measure the AC conductance of the 2D hole layer. \label{fig3}} \end{figure} To relate the measurable quantities with the complex conductance, a simple transmission line model is used as outlined in Fig.~\ref{fig3}c. Here $L^\prime$ is the inductance of the center conductor per unit length and $C^\prime$ the capacitance between the center conductor and ground (side plane) per unit length. The 2D layer constitutes a shunt admittance from the CPW center conductor to the ground. $C^\prime_c =sC_g$ is the coupling capacitance per unit length from the center conductor (of width $s$) to the 2D layer (located at a distance $d$ below the surface), where $C_g=\varepsilon/d$ is the capacitance per unit area. 
If $\xi =\sqrt{|\sigma_{\alpha \alpha}|/(\omega C_g)} =\sqrt{|\sigma_{\alpha \alpha}|d/\varepsilon} \ll w$, the microwave electric field is mainly confined in the slots, and the shunt capacitance ($C^\prime_c$ term) has a negligible contribution compared to that of the shunt conductance ($G^\prime$ term, where $G^\prime =2\sigma_{\alpha \alpha}/w$ is the conductance per unit length of the 2D layer under both slots (thus the factor of 2)). In this case, the CPW simply acts as contacts to the 2D hole layer. Here $\sigma_{\alpha \alpha}$ is the conductance in the direction of the electric field. The wave attenuation is given as~\cite{Simons,Chen_thesis}: \begin{equation}\label{CPW-r} \Gamma= - \frac{1}{2l}\ln \left(\frac{P_{\text{out}}}{P_{\text{in}}}\right) = \Re \left[\sqrt{ i \omega L^\prime (i\omega C^\prime +G^\prime)}\right]\,. \end{equation} Substituting the expressions for $L^\prime$, $C^\prime$ and $G^\prime$ (given, e.g., by Eqs. (D.1)-(D.3) in Appendix D of~\cite{Chen_thesis}), we can express $\Re \sigma_{\alpha \alpha}$ through the attenuation coefficient, $\Gamma$. This expression can be cast in the form (cf.~\cite{Endo2013}): \begin{equation} \label{sigma_full} \frac{Z_0 \Re(\sigma_{\alpha \alpha})}{w}=\Gamma \sqrt{1+ \left(\frac{v_{\text{ph}}\Gamma}{\omega}\right)^2}\, . \end{equation} Here $v_{\text{ph}}=1/\sqrt{L^\prime C^\prime} =c\sqrt{2/(1+\varepsilon_1)}$ is the phase velocity of the wave, $c$ is the velocity of light in vacuum, $\varepsilon_1$ is the dielectric constant of $i$-GaAs, and $Z_0 =\sqrt{L^\prime/C^\prime}$ is the characteristic impedance. In our experiment the parameters of the CPW are selected in order to have $v_{\text{ph}}=1.02 \times 10^8$~m/s and $Z_0 = 50$~Ohm, see Fig.~\ref{fig3}. 
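Equation~\eqref{sigma_full} is straightforward to evaluate numerically for a measured attenuation coefficient; below is a minimal Python sketch using the CPW parameters quoted above ($v_{\text{ph}}=1.02\times 10^{8}$~m/s, $Z_0=50$~Ohm, $w=26~\mu$m). The function name is ours.

```python
import math

V_PH = 1.02e8   # CPW phase velocity, m/s (value quoted in the text)
Z0 = 50.0       # characteristic impedance, Ohm
W = 26e-6       # slot width, m

def re_sigma(gamma, f):
    """Re(sigma_aa) from the measured attenuation coefficient gamma (1/m)
    at frequency f (Hz):  Z0*Re(sigma)/w = Gamma*sqrt(1 + (v_ph*Gamma/omega)^2)."""
    omega = 2.0 * math.pi * f
    return (W / Z0) * gamma * math.sqrt(1.0 + (V_PH * gamma / omega) ** 2)
```

In the weak-attenuation limit $v_{\text{ph}}\Gamma/\omega \ll 1$ this reduces to $\Re\sigma_{\alpha\alpha}\approx w\Gamma/Z_0$.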
Equation~\eqref{sigma_full} is valid under the following conditions~\cite{Engel1993}: (i) There are no reflections at the CPW edges; (ii) The AC current is concentrated inside the slots, i.~e.,~the width of the slot, $w$, should exceed the wave penetration depth in the stripes. The inequality can be written as \begin{equation} \label{concentration} w \gtrsim \sqrt{\frac{\sigma_1}{\pi f C_c}}\, , \quad C_c= \varkappa_0\frac{\varepsilon_s\varepsilon_0}{a \varepsilon_s+d\varepsilon_0}\, . \end{equation} Here $C_c$ is the capacitance between the metallic grounded stripe and the 2D layer per unit area, $\varkappa_0= 8.854\times 10^{-12}$~F/m is the vacuum permittivity, $a$ is the clearance between the sample and the $i$-GaAs surface; $\varepsilon_0$=1 and $\varepsilon_s$=16.2 are the dielectric constants of vacuum and of the semiconductor, respectively. Putting $a= 1$~$\mu$m and $d=0.145$~$\mu$m, we get $C_c=0.88\times 10^{-5}$~F/m$^2$. Then the inequality~\eqref{concentration} can be rewritten as \begin{equation}\label{c1} \sigma_1 \ (\text {Ohm}^{-1}) \lesssim 1.87\times 10^{-8} f \ (\text{MHz})\, , \end{equation} where we have substituted $w=26$~$\mu$m. The inequality~\eqref{c1} is a serious limitation for the quantitative determination of $\sigma_1$ for samples with relatively large conductance. Therefore, one should be careful while extracting quantitative information from electromagnetic measurements, especially at relatively low frequencies. In what follows we will report our procedure for extracting $\sigma_1 (\omega)$ in a broad frequency domain by combining the results of acoustic and electromagnetic measurements. \section{Results} \label{results} \subsubsection{Acoustic spectroscopy} Acoustic spectroscopy is most suitable for low frequencies, the upper frequency limit being mainly defined by the design of the IDT. 
Shown in Fig.~\ref{fig4} are magnetic field dependences of the attenuation, $\Gamma$ (a), and the SAW velocity, $\Delta v/v$ (b) at a frequency $f=\omega/2\pi=30$~MHz and temperature $T=1.7$~K. The filling factors are shown by arrows. \begin{figure}[t] \centering \includegraphics[width=.8\columnwidth]{fig4a.eps} \\ \includegraphics[width=.8\columnwidth]{fig4b.eps}\\ \includegraphics[width=.8\columnwidth]{fig4c.eps} \caption{Magnetic field dependences of the acoustic absorption $\Gamma$ (a), the variation of the velocity $\Delta v/v$ (b) and the conductance $\sigma_1$, as obtained from these experimental data (c). $f=30$~MHz, $T=1.7$~K. Values of the filling factor $\nu$ are shown close to corresponding conductance minima. The gray line in panel (c) shows the DC conductance, $\sigma_0(H)$. \label{fig4}} \end{figure} Using the procedure based on the expressions given in Appendix~\ref{A} and described in detail in~\cite{Drichko2000,Drichko2008,Drichko2013}, we can extract both the real and imaginary parts of the complex AC conductance. The extracted magnetic field dependence of $\sigma_1$ is shown in Fig.~\ref{fig4}c. \subsubsection{Microwave spectroscopy} \label{em} Shown in Fig.~\ref{fig7} is the magnetic field dependence of the output signal, $U_{\text{out}}$, of the CPW at a frequency of 1102~MHz and a temperature of 1.7~K. The sample is the same as that used for the acoustic measurements. \begin{figure}[h!] \centering \includegraphics[width=.9\columnwidth]{fig5.eps} \caption{ Magnetic field dependence of $U_{\text{out}}$. Inset - Dependence of $\ln (U_s, V)$ on $H^{-2}$. $f = 1102$~MHz, $T=1.7$~K. \label{fig7}} \end{figure} To relate the output signal to the sample conductance, one also needs to know the input signal, $U_{\text{in}}$. This is not a trivial task since we observed a significant background signal independent of the magnetic field. 
We attribute this signal to some leakage in the structure and present the total output signal as $U_{\text{out}}=U_s(H)+U_l$. Here $U_s$ is the signal having interacted with the sample, while $U_l$ (shown by the solid red line in Fig.~\ref{fig7}) represents the leakage. According to our measurements, the magnetic field dependence of the phase shift of the total output signal is relatively weak, the field-dependent shift being 20-50$^\circ$. Therefore, we simply subtract the background amplitude,~i.e.~$U_l$, from the total output amplitude. Now we take into account that the oscillations of $U_s$ versus magnetic field are caused by oscillations of the diagonal conductance in the regime of the integer quantum Hall effect. As is well known, the maxima of the diagonal conductance correspond to extended states close to the Landau level centers. At the same time, the wave frequencies are much below the typical electron relaxation rate, $\omega \ll \tau^{-1}$. Therefore, one can expect that the maximal values of the AC conductance should coincide with the values of the static conductance at the same magnetic field. This is illustrated in Fig.~\ref{fig4}c. At the same time, the minima of the AC conductance in the IQHE regime are determined by hopping between localized states, see, e.~g.,~Ref.\onlinecite{Polyakov1993} and references therein. The hopping AC conductance is also suppressed by an external transverse magnetic field, and in strong magnetic fields $\sigma_1 (\omega) \propto H^{-2}$ (with logarithmic accuracy), see, e.g.,~Ref.\onlinecite{Galperin1991-review}. This dependence is experimentally confirmed, and we use it to find $U_{\text{in}}$. An example including three signal maxima (conductance minima), corresponding to the filling factors $\nu = 4,6,8$, is shown in the inset of Fig.~\ref{fig7}, where we have plotted $\ln (U_s, \text{V})$ versus $H^{-2}$. 
The intercept of the straight line fit with the ordinate axis provides the value of $\ln (U_{\text{in}}, \text{V})=-1.077$ corresponding to $U_{\text{in}}=0.34$~V. Knowing $U_{\text{in}}$, we then find the real part of the conductance from Eq.~\eqref{sigma_full}. The results for $\sigma_1 (H)$ obtained by AS for $f=30$~MHz and MWS for $f=1102$~MHz are shown in Fig.~\ref{fig:fig4_new}a. \begin{figure}[h!] \centering \includegraphics[width=0.8\columnwidth]{fig6a.eps} \\ \includegraphics[width=0.8\columnwidth]{fig6b.eps} \\[0.05in] \includegraphics[width=0.8\columnwidth]{fig6c.eps} \caption{(a)~Magnetic field dependences of the conductance $\sigma_1$ at $T=1.7$~K extracted from AS and MWS. The frequencies are 30 and 1102 MHz, respectively. (b)~The same dependences, but the results of microwave measurements are multiplied by the factor $\mathcal{K}=3.7$. (c)~Results of acoustic and electromagnetic measurements at frequencies of 142~MHz and 148~MHz, respectively. \label{fig:fig4_new}} \end{figure} According to our estimates, the inequality~\eqref{c1} needed for the validity of Eq.~\eqref{sigma_full} is met only within an order of magnitude. Therefore, it is hard to expect high numerical accuracy of the values of $\sigma_1$ extracted from the CPW measurements. However, we know that the conductance at the maxima of the $\sigma_1 (H)$ curves should coincide with the static one for any frequency, so the maxima should be \textit{frequency-independent}. Therefore, we suggest rescaling the curve by some factor, $\mathcal{K}(\omega,T)$, such that the values of $\sigma_1(H)$ at the maxima will be equal to the static conductance, $\sigma_0 (T)$, for the same temperature. The results of low-frequency acoustic measurements do not need such rescaling since, as shown in Fig.~\ref{fig:fig4_new}c, the required frequency independence of the maxima is fulfilled automatically. The result of such rescaling for MWS at $f=1102$~MHz is shown in Fig.~\ref{fig:fig4_new}b. 
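The determination of $U_{\text{in}}$ from the straight-line fit of $\ln U_s$ versus $H^{-2}$, and the subsequent rescaling of $\sigma_1(H)$ to the static conductance, can be sketched as follows (a minimal illustration; the function names are ours, and the intermediate conversion of $U_s/U_{\text{in}}$ into $\sigma_1$ is Eq.~\eqref{sigma_full}):

```python
import math
import numpy as np

def fit_u_in(H_minima, U_s_minima):
    """Straight-line fit of ln(U_s) versus H^-2 at the conductance minima;
    the intercept at H^-2 = 0 gives ln(U_in)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(H_minima) ** 2,
                                  np.log(np.asarray(U_s_minima)), 1)
    return math.exp(intercept)

def rescale_to_static(sigma1, sigma0_max):
    """Rescale sigma_1(H) by a factor K chosen so that its maxima
    coincide with the static conductance sigma0_max."""
    K = sigma0_max / np.max(sigma1)
    return K * np.asarray(sigma1), K
```

For the intercept quoted above, $\exp(-1.077)\approx 0.34$~V.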
The scale factor turns out to be $\mathcal{K}(1102~\text{MHz}, 1.7~\text{K}) = 3.7$. At the same time, the values of $\sigma_1$ at the minima strongly depend on frequency. This is natural because, for magnetic fields far from the maxima of $\sigma_1(H)$, the transport is governed by hopping, which inevitably depends on frequency. Based on the above considerations, we use the following procedure to analyze $\sigma_1 (\omega)$ at the conductance minima: (i) For each frequency and temperature, after subtracting the $H$-independent background $U_l$, we determine the input signal, $U_{\text{in}}$, using the procedure described earlier; (ii) Then we determine $\sigma_1$ using Eq.~\eqref{sigma_full}; (iii) After that we rescale the data by a factor $\mathcal{K}(\omega, T)$ determined in such a way that the $\sigma_1 (H)$ maxima coincide with those obtained from either acoustic or DC measurements. To verify the suggested procedure, we applied it to both AS and MWS at closely similar frequencies. The result is shown in Fig.~\ref{fig:fig4_new}c. Since the frequencies are close, the curves should coincide. This can indeed be seen to be the case, such that our procedure can be considered consistent. \subsubsection{Frequency dependence of AC conductance} In Fig.~\ref{fig7_new} are shown the magnetic field dependences of $\sigma_1$ for different frequencies. They are obtained using the procedure outlined in Sec.~\ref{em}. The inset shows the frequency dependence of the scaling factor $\mathcal{K}$. \begin{figure}[h!] \centering \includegraphics[width=.9\columnwidth]{fig7.eps} \caption{Magnetic field dependences of $\sigma_1$ for different frequencies at $T=1.7$~K. The inset shows the frequency dependence of the scaling factor $\mathcal{K}$. \label{fig7_new}} \end{figure} The frequency dependence of the conductance in the minima with $\nu =4, \, 6$ and at $T=1.7$~K is shown in Fig.~\ref{fig8_new}. 
\begin{figure}[ht] \centering \includegraphics[width=0.9\columnwidth]{fig8.eps} \caption{Frequency dependence of the conductance in the minima with $\nu =4$ and 6 at $T=1.7$~K. The solid horizontal line shows the DC conductance, which is practically the same for both minima. \label{fig8_new}} \end{figure} It is clear that at sufficiently high frequencies, $f \gtrsim 100$~MHz, the minimal values of $\sigma_1 (\omega)$ are roughly proportional to $\omega$, as expected for AC hopping conductance. The solid line shows the value of the static conductance, $\sigma_0$. At very low frequencies, the two-site model leading to the $\sigma_1 \propto \omega$ dependence is not applicable, and more complicated clusters become important, see Refs.~\onlinecite{Efros1985,Dyre2000} for a review. As a result, $\sigma_1(\omega)\to \sigma_0$. This might be the reason why the point corresponding to $f=30$~MHz falls above the line of slope 1 in the $\log \sigma_1$ vs. $\log f$ dependence. \section{Discussion and conclusions} \label{discussion} We have developed a procedure for quantitative probeless measurements of the AC conductance of 2D electron/hole layers in a broad frequency domain. The main ingredient of this procedure is to measure the attenuation of surface acoustic waves (at low frequencies) and electromagnetic modes in the CPW (at high frequencies) in a transverse magnetic field in the regime of the IQHE. Since the transverse magnetic field suppresses the electronic contribution to the conductance in both the diffusive and hopping regimes, it is possible to resolve the contribution of the charge carriers. Another important point is to rescale the data obtained by MWS such that the maxima of $\sigma_1(H)$ for all frequencies coincide with the static conductance (and with the results of low-frequency acoustic measurements). This rescaling to some extent compensates for the leakage of the electromagnetic modes outside the slots of the CPW. 
The results of acoustic measurements do not need such rescaling since the maxima of $\sigma_1(H)$ for all SAW frequencies coincide with those of the static conductance. This fact allows avoiding DC measurements, which would require contacts. On the other hand, $\sigma_1(H)$ extracted from MWS at any frequency can be rescaled to make the maxima coincide with those extracted from AS. In this way, we can determine $\sigma_1(\omega,H)$ in a broad domain of frequencies and magnetic fields without the need for contacts. This is the main conclusion of this work. The suggested procedure has been tested using a well-characterized sample. It is shown that at frequencies close to 150~MHz, where both AS and MWS can be performed, the dependences $\sigma_1(H)$ obtained by the two spectroscopies practically coincide with each other. The advantage of the procedure is that it can be applied to various materials and structures. In particular, systems without intrinsic piezoelectric effect can be studied acoustically since the sample is mounted on the surface of a piezoelectric crystal. The procedure is especially useful for studies of the AC conductance in the hopping regime, which is the case in the minima of the IQHE. \acknowledgments This work was supported by the Russian Foundation for Basic Research (grant RFBR 14-02-0023214), the Presidium of the Russian Academy of Sciences, the U.M.N.I.K grant 16906, and, partially, by Era.Net-Rus.
\section{Introduction} \label{sec:introduction} It is now widely accepted that the supermassive black holes residing in the vast majority of early-type galaxies and bulges \citep[e.g.,][]{yu02} can have a sizeable impact on the evolution of their host galaxies. Semi-analytic models require a strong source of energy to balance the overcooling of gas onto dark matter halos and to avoid a strong excess in baryonic mass and star formation in galaxies compared to observations. The discrepancy is strongest at the high-mass end of the galaxy mass function, which is dominated by early-type galaxies \citep[e.g.,][]{Benson2003}, whose mass and structural properties appear closely related to the mass of their central supermassive black hole \citep[e.g.,][]{tremaine02}. Powerful AGNs release approximately the equivalent of the binding energy of a massive galaxy during their short activity period, either in the form of radiation, or through jets of relativistic particles, or both. They are thus in principle able to offset the excess cooling out to the highest galaxy masses \citep[][]{silk98}. If sufficiently large parts of this energy are deposited in the interstellar medium of the host galaxy, they may drive winds \citep[][]{dimatteo05} or turbulence \citep[][]{Nesvadba2011c}. The mechanisms that cause this, however, are still not very well understood. Even very basic questions, e.g., whether feedback is dominated by radio jets or the bolometric energy output of radiatively efficient accretion disks, are still heavily debated in the literature. There is clear observational evidence that jets can perturb the ISM strongly even at kpc distance from the nucleus, while the observational evidence for winds driven by quasar radiation over kpc scales is still mixed \citep[e.g.,][]{husemann13,liu13,CanoDiaz2012,Harrison2012}. 
Hydrodynamic models of radio jets are now finding deposition rates of kinetic energy from the jet into the gas that are broadly consistent with observations \citep[][]{Wagner2012}. Following the most popular galaxy evolution models, AGN feedback should have been particularly important in the early evolution of massive galaxies at high redshift, where AGN-driven winds may have blown out the remaining gaseous reservoirs that fueled the main phase of galaxy growth, inhibiting extended periods of subsequent star formation from the residual gas. Whereas the jets of powerful radio galaxies in the local Universe are known to affect the gas locally within extended gas disks \citep[``jet-cloud interactions'', e.g.,][]{tadhunter98, vanbreugel85}, it was found only recently that outflows driven by the most powerful radio jets in the early universe at z$\sim$2 can encompass very high gas masses, up to about $10^{10}$ M$_{\odot}$ in the most powerful radio galaxies at high redshift \citep[][]{Nesvadba2006,Nesvadba2008}. This is similar to the typical total gas masses in massive, intensely star-forming, high-redshift galaxies of a few $10^{10}$ M$_{\odot}$ \citep[e.g.,][]{greve05, tacconi08}. This gas is strongly kinematically perturbed with FWHM up to $\sim2000$ km s$^{-1}$ and high, abrupt velocity gradients of similar amplitude, consistent with the expected signatures of vigorous jet-driven winds. Galaxies with extreme radio power at 1.4~GHz of up to P$_{1.4}=$few$\times 10^{29}$ W Hz$^{-1}$ like those studied by \citet{Nesvadba2006,Nesvadba2008} are, however, very rare, which raises the question of the impact that AGNs may have on their host galaxies when their radio power is significantly lower. The present paper gives a first answer to this question. It is part of a systematic study of the warm ionized gas in 49 high-redshift radio galaxies at z$\sim$2 with SINFONI, which span three decades in radio power and two decades in radio size. 
Our sources cover the lower half of the radio power of this sample, P$_{1.4}=$few$\times 10^{26-27}$ W Hz$^{-1}$ at 1.4~GHz in the rest frame. Toward lower radio power, contamination from the non-thermal radio continuum of vigorous starbursts becomes increasingly important. The high-redshift radio luminosity function of \citet{Willott2001} and \citet{Gendre2010} suggests that such galaxies are factors of 100 more common than the very powerful radio sources, with co-moving number densities on the order of a few $10^{-7}$ Mpc$^{-3}$, sufficient to represent a short, generic phase in the evolution of massive galaxies, as we will argue below in \S\ref{sec:ensemble}. The organization of the paper is as follows. In \S\ref{sec:sample} we present our sample and in \S\ref{sec:observationsAndDataReduction_NVSS} our SINFONI NIR imaging spectroscopy and ATCA centimeter continuum observations, and the methods with which we reduced these data. In \S\ref{s:PresentationOfOurAnalysisTools} we present our methods of analysis and the results for each individual target, before presenting the overall results drawn from our sample in \S\ref{sec:ensembleproperties}. In \S\ref{ss:agnproperties} we discuss the AGN properties, and in \S\ref{s:additionalLineEmittersTracersOfEnvironmentDynamicalProbes} we use additional line emitters near our radio galaxies to estimate dynamical masses of our high-redshift radio galaxies (HzRGs). In \S\ref{sec:results.globalproperties} we compare our data with other classes of HzRGs before discussing the implications of our results for AGN feedback. We argue in \S\ref{sec:ensemble} that sources similar to those studied here may well be a representative subset of massive high-redshift galaxies overall, seen in a short but important phase of their evolution, and we summarize our results in \S\ref{ssec:summary}. Throughout our analysis we adopt a flat cosmology with H$_0$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}$=0.7, and $\Omega_{M}=0.3$. 
\section{Sample} \label{sec:sample} Our sources have a radio power of a few 10$^{27-28}$ W Hz$^{-1}$ at 500~MHz in the rest frame, about 2 to 3 orders of magnitude fainter than the most powerful high-redshift radio galaxies known, which reach up to nearly $1\times 10^{30}$ W Hz$^{-1}$ \citep[][ and references therein]{miley08}, but powerful enough to safely neglect contamination from intense star formation. For comparison, an intensely star-forming HyLIRG (hyper-luminous infrared galaxy) with far-infrared luminosity $L_{FIR}=1\times 10^{13} L_{\odot}$ would produce a rest-frame 1.4~GHz radio power of $10^{25.0}$ W Hz$^{-1}$, assuming a far-infrared-to-radio luminosity ratio of 2.0, as found for high-redshift submillimeter galaxies \citep[e.g.,][]{vlahakis07, seymour09, thomson14}, and a radio spectral index typical of star formation of $\alpha=-0.7$ to $-0.8$. This is very similar to the steep spectral indices $\alpha \sim -1.0$ that are characteristic of high-redshift radio galaxies, making it even more difficult to disentangle the contributions of AGNs and star formation in radio sources of lower power than those studied here. In spite of their faintness relative to other high-redshift radio galaxies, even the fainter sources in the present study have a radio power comparable to that of the most powerful radio galaxies known at low redshift \citep[e.g.,][]{tadhunter93}. Our targets come from two different surveys. One is the southern sample of 234 distant radio galaxies of \citet{Broderick2007}, \citet[][]{Bryant2009a}, \citet[][]{Bryant2009b}, and Johnston et al. (in prep.), which we refer to hereafter as the ``MRCR-SUMSS'' sample. The other is the sample of radio galaxies within the fields of the ESO imaging survey (EIS) by \citet{Best2003CENSORS} and \citet{Brookes2006CENSORS, Brookes2008CENSORS}, which we call the ``CENSORS'' sample (``Combined EIS-NVSS Survey Of Radio Sources''). 
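The $10^{25.0}$~W~Hz$^{-1}$ estimate for a HyLIRG quoted above can be checked explicitly with the conventional definition of the far-infrared-to-radio luminosity ratio, $q=\log_{10}[(L_{FIR}/3.75\times10^{12}\,\mathrm{Hz})/L_{1.4}]$; the normalization frequency follows the standard convention and the function name is ours:

```python
import math

L_SUN = 3.846e26  # solar luminosity, W

def radio_power_from_fir(L_fir_in_Lsun, q=2.0):
    """Rest-frame 1.4 GHz radio power (W/Hz) implied by a far-infrared
    luminosity via q = log10[(L_FIR / 3.75e12 Hz) / L_1.4GHz]."""
    L_fir = L_fir_in_Lsun * L_SUN            # L_sun -> W
    return L_fir / 3.75e12 / 10.0 ** q       # W/Hz

p_hylirg = radio_power_from_fir(1e13)        # HyLIRG with L_FIR = 1e13 L_sun
```

This gives $\log_{10} P_{1.4} \approx 25.0$, consistent with the number quoted above.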
The MRCR-SUMSS sources have steep radio spectral indices $\alpha_{408-843} \le -1.0$ between 408~MHz and 843~MHz, and fluxes at 408~MHz $S_{408} \ge 200$~mJy. From this catalogue, we selected 12 moderately low-power sources at z~$\ge$~2 with $P_{1.4}$~=~few$\times 10^{27}$~W~Hz$^{-1}$. They have radio sizes between $\sim$~2\arcsec\ and 24\arcsec\ at 1.4~GHz frequency, a typical range of radio sizes of powerful HzRGs. The six galaxies with the lowest radio power ($\mathcal{P}_{1.4} \sim 10^{26}$~W~Hz$^{-1}$) come from the CENSORS survey. This catalogue of 150 radio galaxies results from cross-matching the ESO Imaging Survey (EIS) patch D with the NVSS radio survey \citep[][]{best03}. Radio sources detected in the NVSS were re-observed at 1.4~GHz with the VLA in the BnA configuration at a spatial resolution of 3\arcsec$-$4\arcsec, compared to the initial 45\arcsec\ spatial resolution of the NVSS; the resulting catalogue is complete down to 7.2~mJy. This made it possible to study the structure of the radio sources and to identify the most likely rest-frame optical counterparts of 102 sources. Optical spectroscopy provided redshifts of 81 sources \citep{Brookes2008CENSORS}. Among these, we selected six sources with a radio power of a few $\times 10^{26}$ W Hz$^{-1}$ and appropriate redshifts for ground-based NIR follow-up spectroscopy. Three have extended radio morphologies and three have compact, unresolved radio cores. \section{Observations and data reduction} \label{sec:observationsAndDataReduction_NVSS} \subsection{Near-infrared imaging spectroscopy} \label{ss:SINFONIobservations_NVSS} We observed all MRCR-SUMSS galaxies with the near-infrared imaging spectrograph SINFONI \citep[][]{SINFONI2003} between late 2009 and early 2010 under program ID 084.A-0324 at the ESO Very Large Telescope. SINFONI observations of the CENSORS sources were carried out between late 2010 and early 2011 under program ID 086.B-0571. All data were taken in Service Mode under variable conditions. 
SINFONI is an image slicer that operates between 1.1$\mu$m and 2.4$\mu$m. We used the seeing-limited mode with the largest available field of view of 8\arcsec$\times$8\arcsec\ and a pixel scale of 250~mas. All data were taken with the $H+K$ grating which covers wavelengths between 1.45~$\mu$m and 2.4~$\mu$m at a spectral resolving power R~$\sim$~1500 ($\sim$~200~km s$^{-1}$). We observed each MRCR-SUMSS galaxy for 180$-$230 min of on-source observing time (300-440~min for the CENSORS sources), split into individual observations of 5~min. Most of our galaxies are smaller than the field of view. We therefore adopted a dither pattern where the object is shifted between two opposite corners of the field of view, which allows us to use two subsequent frames for the sky subtraction and makes taking dedicated sky frames unnecessary. The spatial resolution of our data is limited by the size of the seeing disk, which is typically around 1.0\arcsec\ for both samples. The FWHM sizes of the PSFs of individual targets are given in Table~\ref{tab:obslog}. They are measured from a standard star observed at the end of each hour of data taking. Data reduction relies on a combination of IRAF \citep[][]{tody93} and our own custom IDL routines \citep[e.g.,][]{nesvadba11a}. All frames are dark-subtracted and flat-fielded. We then remove the curvature from the spectra in each slit and put them onto a common wavelength scale by using the bright night-sky lines superimposed on each frame, using only arc lamp spectra to set the absolute wavelength scale. We then sky subtract our data and rearrange them into three-dimensional data cubes, which are then combined. To account for the variability of the night sky we scale the total flux in each sky frame to the total flux in each object frame, after masking the target. We use the standard star observations to correct for telluric and instrumental effects and to set the absolute flux scale. 
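The sky-frame scaling described above amounts to matching total fluxes outside a target mask; a minimal sketch (the array names and the mask convention are ours):

```python
import numpy as np

def scale_sky(obj_frame, sky_frame, target_mask):
    """Scale the sky frame so that its total flux matches that of the
    object frame, after masking the target (mask True = target pixel)."""
    keep = ~np.asarray(target_mask)
    scale = obj_frame[keep].sum() / sky_frame[keep].sum()
    return scale * sky_frame
```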
In this analysis, we discuss the optical emission-line properties of 8 of the 12 MRCR-SUMSS galaxies we observed. Two of the other four, NVSS~J004136$-$345046 and NVSS~J103615$-$321659, have continuum emission but no line emission at the redshifts previously measured in the rest-frame UV. Their redshifts are relatively high, z$=$2.6, placing [OIII] and H$\alpha$ at wavelengths outside the near-infrared atmospheric bands, where the atmospheric transmission is below 10\%, and is strongly variable both in time and in wavelength. At the expected wavelength of H$\alpha$, the telluric thermal background is already a factor of $\sim 10$ greater than at 2.2$\mu$m. A third source, NVSS~J233034$-$330009, was found to coincide with a foreground star after our data had already been taken \citep[][]{Bryant2009b}. The fourth source, NVSS~J210626$-$314003 shows a strong misalignment between the radio source and the extended gas, and no gas along the radio jet axis, which is very different from the other galaxies presented here. This source has already been discussed by \citet{collet14}, so we do not describe its characteristics in detail again, but we do include it in the overall discussion of the properties of our sources. For the CENSORS sources we focus our discussion on the three sources where we detected line emission at the redshifts previously measured in the rest-frame UV. \subsection{Radio continuum observations} \label{ss:radioObservations_NVSS} We observed our MRCR-SUMSS and CENSORS sources in two runs on 2012 January 28 and February 02 with the Australia Telescope Compact Array (ATCA, project C2604). Observations were carried out simultaneously at 5.5 and 9.0 GHz using the Compact Array Broadband Backend, with bandwidths of 2 GHz and channel widths of 1~MHz. The array configuration was 6A, with baselines between 337 and 5939~m. For flux density and bandpass calibration we observed PKS B1934$-$638 at the beginning and end of each session. 
Poor phase stability was due to heavy rain and high humidity, and this significantly affected the signal-to-noise ratio. Individual sources were observed in 13--15 five-minute snapshots spread over 8.5 hours to ensure good {\it uv\,} coverage; an exception was NVSS J144932$-$385657, which set early with only five snapshots spanning 3 hours. We did not obtain any new data for CENSORS 072 because we used incorrect coordinates from Brooks et al.\ (2008). Standard data reduction was carried out in {\sc miriad} (Sault et al. 1995). The ATCA observing log is given in Tab.~2 and lists for each source the date of observation, total on-source integration time, the secondary phase calibrator used, and the synthesized beam at each frequency. The data reduction was done with \textsc{Miriad} \citep[][]{MIRIAD1995} in the standard way. We find typical beam sizes of 4\arcsec$\times$1.5\arcsec\ at 5.5~GHz and 2.5\arcsec$\times$0.9\arcsec\ with position angles between $-$6$^\circ$ and 13$^\circ$ (except for NVSS~J144932$-$385657, where PA~=~$-$40$^\circ$). Details are given in Tab.~\ref{tab:atcalog}. The radio morphologies are shown in Fig.~\ref{fig:radioMorphologies}. They generally confirm those previously measured at 1.4~GHz and 2.4~GHz with larger beams. In NVSS~J002431$-$303330 we detect a fainter second component to the southwest of the main radio emission below the detection threshold of previous observations. In NVSS~J234235$-$384526 we tentatively detect a radio core that is coincident with the galaxy. Radio sizes are given in Tab.~\ref{tab:atcaresults}. The largest angular scale (LAS) gives the separation between the two lobes, if the source is resolved, or the deconvolved size, if it is not. 
\subsubsection{Relative astrometric alignment of the radio and SINFONI data} \label{sss:ancillaryDataSetsRelativeAlignment} Studying the effects of the radio jet on the interstellar medium of high-redshift galaxies requires a relative alignment between the radio and near-infrared data sets that is accurate to better than an arcsecond, i.e., better than the absolute astrometry of the VLT. Unfortunately, in most of our galaxies with extended radio lobes we did not detect the radio core. Moreover, owing to the small field of view of SINFONI, aligning our data cubes accurately within the World Coordinate System (WCS) is not trivial. We therefore register our cubes relative to the K-band imaging of \citet{Bryant2009a}, which is accurately aligned with the WCS, and assume that the radio frame of ATCA aligns with the WCS to better than 1\arcsec\ \citep[][]{Broderick2007}. For compact radio sources (LAS~$\lesssim$~2.0\arcsec\ in Tab.~\ref{tab:atcaresults}), we assume that the K-band continuum is aligned with the radio source, corresponding to the assumption that the radio emission in compact sources originates from the nucleus of the galaxy. Figure~\ref{fig:radioMorphologies} shows the radio contours of the MRCR-SUMSS and CENSORS sources, and the red box gives the adopted position of the SINFONI maps based on this method. \section{Results} \label{s:PresentationOfOurAnalysisTools} For each galaxy we show integrated spectra and emission-line maps of [OIII]$\lambda$5007 surface brightness, relative velocities, and FWHM line widths (Figs.~\ref{fig:spec1}~to~\ref{fig:spec3} and Tabs.~\ref{tab:spec1} and~\ref{tab:spec2}). Unless stated otherwise, we give intrinsic FWHMs, $FWHM_{intrinsic}$, that are corrected for the instrumental resolution $FWHM_{inst}$, setting $FWHM_{intrinsic} = \sqrt{FWHM_{obs}^{2}-FWHM_{inst}^{2}}$. The instrumental resolution, $FWHM_{inst}$, is wavelength dependent and was measured from the width of night-sky lines.
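The instrumental correction above, and the PSF correction applied to source sizes below, are the same quadrature operation. A minimal sketch (the numerical values in the example are hypothetical, chosen only to resemble typical values in this paper):

```python
import math

def correct_quadrature(observed, broadening):
    """Remove a Gaussian broadening term in quadrature.

    Applies both to line widths (observed FWHM vs. instrumental FWHM) and
    to source sizes (observed extent vs. seeing FWHM). Returns None when
    the measurement is unresolved (observed <= broadening), in which case
    only an upper limit on the intrinsic value can be quoted.
    """
    if observed <= broadening:
        return None
    return math.sqrt(observed ** 2 - broadening ** 2)

# hypothetical numbers: a 550 km/s observed FWHM with ~120 km/s
# instrumental resolution is barely changed by the correction
fwhm_intrinsic = correct_quadrature(550.0, 120.0)   # ~537 km/s
```

Because the terms add in quadrature, the correction only matters when the broadening term is comparable to the observed width, which is why broad lines like those in this sample are nearly unaffected.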
Maps are only given for spatial pixels where the signal-to-noise ratio of the line core exceeds 5. We used a Monte Carlo method to confirm that this threshold is sufficient to measure the line properties robustly in spite of strong non-Gaussianities in the noise related to the imperfect night-sky line subtraction, bad pixels, and potentially intrinsic line profiles. Integrated spectra include all pixels where [OIII]$\lambda$5007 is detected at a significant level. We adopt the redshift estimated from the brightest pixels near the center of the galaxy as systemic. Before adding the spectrum of a pixel, we shift it to the systemic redshift in order to avoid artificial broadening of the line in the integrated spectrum by the large-scale velocity gradient. For each galaxy we also mapped the surface brightness, the velocity relative to the systemic redshift, and the FWHM line widths of [OIII]$\lambda$5007 (Fig.~\ref{fig:maps1} to~\ref{fig:maps3}) by fitting Gaussian profiles to the lines extracted from small apertures across the cube. Aperture sizes are 3~pixels $\times$ 3~pixels, corresponding to 0.4\arcsec\ $\times$ 0.4\arcsec, or 5~pixels $\times$ 5~pixels (0.6\arcsec\ $\times$ 0.6\arcsec) for the faintest regions of the source. This improves the signal-to-noise ratio of the data, but still oversamples the seeing disk and avoids loss of spatial information. Since the sizes of the extended emission-line regions, $S_{maj,obs}$, are typically only a few times larger than the size of the seeing disk, we list sizes of the semi-major and semi-minor axes that are corrected for the broadening by the PSF, setting $S_{maj,intrinsic} = \sqrt{ S_{maj,obs}^2-S_{PSF}^2}$, where $S_{PSF}$ is the PSF size along the same position angle. The same correction was applied along the semi-minor axis, where resolved. Contours in Fig.~\ref{fig:maps1} show the line-free continuum emission for the galaxies where the continuum was detected.
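The per-aperture maps rest on extracting a flux, a centroid velocity, and a line width from each spectrum. The paper fits Gaussian profiles; as an illustration of the quantities involved, a simpler moment-based estimator, which agrees with a Gaussian fit for a clean single-Gaussian profile (the velocity grid and line parameters below are hypothetical):

```python
import math

def line_moments(velocities, fluxes):
    """Moments of a continuum-subtracted line profile on a uniform grid.

    Returns (integrated flux, centroid velocity, Gaussian-equivalent FWHM).
    Moments are a quick, model-free alternative to the Gaussian fits used
    in the paper; for noisy or blended lines fitting is more robust.
    """
    dv = velocities[1] - velocities[0]              # channel width, km/s
    total = sum(fluxes)
    centroid = sum(v * f for v, f in zip(velocities, fluxes)) / total
    var = sum((v - centroid) ** 2 * f
              for v, f in zip(velocities, fluxes)) / total
    fwhm = 2.0 * math.sqrt(2.0 * math.log(2.0) * var)   # FWHM = 2*sqrt(2 ln 2)*sigma
    return total * dv, centroid, fwhm

# hypothetical spectrum: Gaussian line at +100 km/s with FWHM = 500 km/s
sigma = 500.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))
vgrid = [25.0 * j for j in range(-80, 81)]
spec = [math.exp(-0.5 * ((v - 100.0) / sigma) ** 2) for v in vgrid]
flux, vcen, fwhm = line_moments(vgrid, spec)
```

For real data the measured width would then be corrected for the instrumental resolution in quadrature, as described above.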
In most galaxies the continuum is only detected after collapsing the line-free cube along the wavelength axis. However, we detect relatively bright continuum emission in NVSS~J002431$-$303330 and NVSS~J201943$-$364542, which we need to subtract from the spectra before fitting the emission lines. To perform this subtraction, we mask strong emission lines and strong night-sky line residuals, fit a fifth-order polynomial over the whole spectrum, and subtract it afterward. \subsection{Description of individual sources} \label{ss:NVSSssourcesDescription} \subsubsection{NVSS~J002431$-$303330} \label{sss:J0024} NVSS~J002431$-$303330 is a double radio source, dominated by a bright component associated with the optical counterpart, with a weaker component at 16\arcsec\ toward the southwest (Fig.~\ref{fig:radioMorphologies}). We find this source at a redshift of $z=2.415 \pm 0.001$, in good agreement with the estimate of Johnston et al. (in prep.) from the rest-frame UV lines, $z_{\mathrm{\textsc{uv}}}=2.416 \pm 0.001$. In the H band, we detect the [OIII]$\lambda\lambda 4959,5007$ doublet and H$\beta$. In the K band, we find H$\alpha$ and [NII]$\lambda\lambda$6548,6583, which are strongly blended owing to their large intrinsic widths. The H$\alpha$ line has a broad component with FWHM~=~3250~km s$^{-1}$. Figure~\ref{fig:spec1} shows the integrated spectrum of this source. All line properties are listed in Tab.~\ref{tab:spec1}. As shown in Fig.~\ref{fig:maps1}, NVSS~J002431$-$303330 has a strong continuum associated with a bright emission-line region with [OIII]$\lambda$5007 surface brightness of $(5-25)\times 10^{-16}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$. Line widths in this region are very broad, FWHM $\sim$ 1200~km s$^{-1}$, and the velocity field is perturbed, with two small, unresolved regions that show abrupt velocity jumps of about 250 km s$^{-1}$ relative to their surroundings.
This area extends over $\sim$~1.0\arcsec$\times$1.0\arcsec\ around the peak of the continuum and [OIII]$\lambda$5007 line emission, corresponding to 8~kpc at $z=2.415$. The H$\alpha$ surface brightness in this region is $\Sigma_{\mathrm{H}\alpha} \sim (5 - 9) \times 10^{-16}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$. Toward the southwest, the line emission becomes fainter, but can be traced over another 2\arcsec, with a typical [OIII]$\lambda$5007 surface brightness of $\Sigma_{\mathrm{[OIII]}}=(1-5)\times 10^{-16}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$. The gas there is more quiescent, with line widths of FWHM$=$300$-$400 km s$^{-1}$. We show integrated spectra of both regions in Fig.~\ref{fig:J0024_OIIISpectra}. The box in the right panel of Fig.~\ref{fig:maps1} shows the region from which we extracted the narrow-line emission. This extended emission-line region to the southwest extends along the axis between the two radio lobes. \subsubsection{NVSS~J004000$-$303333} \label{sss:J0040} NVSS~J004000$-$303333 is a double radio source with a size of 17\arcsec\ (Fig.~\ref{fig:radioMorphologies}). With SINFONI we find the [OIII]$\lambda\lambda$4959,5007 doublet and H$\beta$ in the K band at wavelengths that correspond to $z = 3.399 \pm 0.001$. The H$\alpha$ and [NII]$\lambda\lambda6548,6583$ lines fall outside the atmospheric windows. In the H band, we detect the [OII]$\lambda$3727 doublet, whose two lines are too close to each other to be spectrally resolved with our data (Fig.~\ref{fig:spec1}). All line properties are listed in Tab.~\ref{tab:spec1}. Line emission extends over 2.5\arcsec$\times$1.5\arcsec\ with surface brightnesses between $\Sigma_{\mathrm{[\textsc{Oiii}]}} \sim 5 \times 10^{-16}$ erg~s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ and $3 \times 10^{-15}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ (Fig.~\ref{fig:maps1}).
Faint continuum emission is detected in a knot at the far eastern edge of the emission-line region, and about 1.5\arcsec\ southwest from the center, outside the bright line emission. The K-band image of \citet{Bryant2009a} shows a very similar continuum morphology, likewise at low signal-to-noise ratio. The local velocities of [OIII]$\lambda$5007 decrease monotonically from the southwest to the northeast with a total gradient of about 300 km s$^{-1}$. The easternmost knot shows an abrupt velocity increase of 300 km s$^{-1}$ relative to the nearby blueshifted gas. The line widths are lower in the north (FWHM~=~200$-$400~km s$^{-1}$) than in the south (FWHM~=~700$-$1000~km s$^{-1}$). At fainter flux levels than shown in Fig.~\ref{fig:maps1}, but still above 3$\sigma$, we detect another source of line emission about 2\arcsec\ south of the radio galaxy (about 15~kpc at z~$\sim$~3). The redshift of this second source is $z_{\mathrm{south}} = 3.395 \pm 0.001$, i.e., it is blueshifted by $350 \pm 90$~km s$^{-1}$ relative to the radio galaxy. This source is shown in Fig.~\ref{fig:J0040_SN3} and is discussed in \S\ref{s:additionalLineEmittersTracersOfEnvironmentDynamicalProbes}. \subsubsection{NVSS~J012932$-$385433} \label{sss:J0129} The SINFONI maps of NVSS~J012932$-$385433 are shown in Fig.~\ref{fig:maps1}. We find the optical emission lines at redshift $z = 2.185 \pm 0.001$. The radio source is compact with a deconvolved size of 0.7\arcsec, and is associated with the optical counterpart. [OIII]$\lambda\lambda$4959,5007 is bright in the H band. H$\beta$ is detected at 5.6$\sigma$. In the K band, H$\alpha$ and [NII]$\lambda\lambda$6548,6583 are detected and strongly blended. H$\alpha$ also shows a broad component with FWHM $\sim$ 3500~km s$^{-1}$. The $[$SII$]$ doublet is clearly detected, although its two components are strongly blended owing to their intrinsic widths (Fig.~\ref{fig:spec1}). All line properties are listed in Tab.~\ref{tab:spec1}.
The emission-line region is extended over 1.6\arcsec$\times$1.2\arcsec. It is brighter in the center with $\Sigma_{\mathrm{H}\alpha} \sim (1.0 - 1.7) \times 10^{-15}$ erg~s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ and fades toward the periphery. We detect continuum emission coincident with the emitting gas. The [OIII]$\lambda\lambda$4959,5007 lines show a clear, relatively regular velocity gradient of $\Delta$v~$\sim$~350~km s$^{-1}$ along a northeast-southwest axis. The lines are narrower toward the southeast, with FWHM $\sim$ 700$-$800~km s$^{-1}$. In the northwest, FWHMs are higher, $\sim$ 900$-$1000~km s$^{-1}$. \subsubsection{NVSS~J030431$-$315308} \label{sss:J0304} NVSS~J030431$-$315308 is a single, relatively compact source at 9~GHz and 5.5~GHz, with a deconvolved size of 1.8\arcsec\ in our highest resolution data at 9~GHz (Fig.~\ref{fig:radioMorphologies}). The [OIII]$\lambda\lambda$4959,5007 doublet is clearly detected in the H band with SINFONI and is well fitted with single Gaussians (Fig.~\ref{fig:spec1}). The same holds for the H$\alpha$ and [NII]$\lambda$6583 lines. The $[$SII$]$ doublet is not detected. All line properties are listed in Tab.~\ref{tab:spec1}. The line emission is marginally spatially resolved with a size of 1.5\arcsec$\times$1.5\arcsec, given a PSF with FWHM=1.2\arcsec$\times$1.0\arcsec, the largest in this program (Tab.~\ref{tab:obslog}). Faint continuum emission is also detected, at a slightly different position ($\sim$~0.5\arcsec\ to the west) from the peak in [OIII]$\lambda$5007 surface brightness, but at the same position as the peak of H$\alpha$ surface brightness. The velocity maps show two small redshifted (by 50$-$100 km s$^{-1}$) regions north and south of the continuum, and uniform velocities in the rest of the source. Line widths are between 500 and 1200 km s$^{-1}$, and higher in the western parts of the emission-line region associated with the continuum.
\subsubsection{NVSS~J144932$-$385657} \label{sss:J1449} NVSS~J144932$-$385657 has one of the largest radio sources in our sample; the lobes are offset by 7.5\arcsec\ relative to each other. We find the [OIII] line at $z = 2.149 \pm 0.001$. In the H band, we detect the [OIII]$\lambda\lambda$4959,5007 doublet and H$\beta$ (Fig.~\ref{fig:spec2}). In the K band, H$\alpha$ and [NII]$\lambda$6583 are narrow enough not to be blended. FWHM$=$350~km s$^{-1}$ for H$\alpha$, and the width of [NII]$\lambda$6583 is dominated by the spectral resolution. NVSS~J144932$-$385657 has a very extended emission-line region of nearly 4\arcsec\ ($\sim 30$ kpc at $z \sim 2$) along a northeast-southwest axis (Fig.~\ref{fig:maps1}). We identify two parts: a large, elongated region, which coincides with the continuum and extends over another 2\arcsec\ toward the southwest, and a fainter, smaller region in the northwest. Surface brightnesses of [OIII]$\lambda$5007 are between $1\times 10^{-16}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ and $10\times 10^{-16}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$. The northwestern region is near the edge of the SINFONI data cube, and it is therefore possible that it extends farther beyond the field of view of our data. Both emission-line regions are aligned with the axis of the radio jet. The velocity offset between the two regions is about 800 km s$^{-1}$. While the compact northeastern part has a uniform velocity field with a redshift of about v $\sim 400$~km s$^{-1}$, the kinematics in the very extended southwestern region are more complex with a maximum blueshift of about $-400$ km s$^{-1}$, before the velocities approach the systemic redshift again at the largest radii. Line widths are FWHM=200$-$500 km s$^{-1}$ in the southwest, and up to 800 km s$^{-1}$ in the northeast. 
In the southwestern region we find elevated widths in particular near the continuum and at about 1.5\arcsec, a distance associated with the sudden velocity shift from $-400$ km s$^{-1}$ to the systemic velocity. In Fig.~\ref{fig:J1449}, we compare the surface brightness maps of H$\alpha$ and [NII]$\lambda$6583, obtained by fitting three Gaussians corresponding to H$\alpha$, [NII]$\lambda$6548, and [NII]$\lambda$6583 with the velocities and line widths measured from [OIII]$\lambda$5007, and leaving the line flux as a free parameter. The map highlights the similarity of the H$\alpha$ and [NII]$\lambda$6583 morphologies, which justifies our assumption that the Balmer and the optical forbidden lines originate from the same gas, at least at the spatial resolution of our data. \subsubsection{NVSS~J201943$-$364542} \label{sss:J2019} The radio source of NVSS~J201943$-$364542 is very extended, with LAS = 14.7\arcsec, corresponding to $\sim$ 120~kpc at $z = 2.1$ (Fig.~\ref{fig:radioMorphologies}). In the H band with SINFONI, we detect the [OIII]$\lambda\lambda$4959,5007 doublet at a redshift of $z = 2.120 \pm 0.001$, but not H$\beta$. In the K band, in addition to the narrow components of H$\alpha$ and [NII]$\lambda$6583, we find a broad H$\alpha$ emission line (FWHM $\ge$ 8000~km s$^{-1}$) from a compact region aligned with the nucleus. We come back to this line in \S\ref{s:additionalLineEmittersTracersOfEnvironmentDynamicalProbes}. Integrated line properties are listed in Tab.~\ref{tab:spec1}. NVSS~J201943$-$364542 has a compact emission-line region of 1.0\arcsec\ with an [OIII]$\lambda$5007 surface brightness $\Sigma_{\mathrm{[\textsc{Oiii}]}} = (0.5 - 1.0) \times 10^{-15}$ erg~s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ (Fig.~\ref{fig:maps2}). It also has relatively bright continuum emission associated with the line emission. The velocity field of the narrow-line component shows a scatter of $\le$ 200 km s$^{-1}$, with the highest velocities reached in the far east and west. 
The line widths are up to 800 km s$^{-1}$. At about 3\arcsec\ to the southeast we marginally detect another compact line emitter at a very similar redshift, $z_{\mathrm{south}} = 2.116 \pm 0.001$ (Fig.~\ref{fig:J2019_Spectrum2ndRegion}). The proximity on the sky and in redshift suggests that this emitter is physically related to the radio galaxy. We will discuss this source in more detail in \S\ref{s:additionalLineEmittersTracersOfEnvironmentDynamicalProbes}. \subsubsection{NVSS~J204601$-$335656} \label{sss:J2046} NVSS~J204601$-$335656 has a compact radio source with a deconvolved size of 1.8\arcsec\ in our data, consistent with the 1.6\arcsec\ previously found by \citet{Broderick2007}. With SINFONI in the H band, we detect the [OIII]$\lambda\lambda$4959,5007 doublet and H$\beta$ at $z = 2.499 \pm 0.001$. In the K band, H$\alpha$ and the [NII]$\lambda\lambda$6548,6583 doublet are blended, but well fitted with a single Gaussian component for each line (Fig.~\ref{fig:spec2}). The $[$SII$]\lambda\lambda$6716,6731 doublet is not detected. Table~\ref{tab:spec1} summarizes the line properties of NVSS~J204601$-$335656. The line emission is marginally spatially resolved with a FWHM size of 1\arcsec\ compared to a 0.7\arcsec$\times$0.6\arcsec\ PSF. The emission-line region is associated with a faint continuum source; the region is roughly circular and extends over 1.0\arcsec, with $\Sigma_{\mathrm{[OIII]}} = (0.3 - 1.7) \times 10^{-15}$ erg~s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$. NVSS~J204601$-$335656 has a small velocity gradient of 150 km s$^{-1}$, rising from northeast to southwest. Typical line widths are FWHM$=900-1200$~km s$^{-1}$. \subsubsection{NVSS~J234235$-$384526} \label{sss:J2342} NVSS~J234235$-$384526 is an extended radio source with two lobes at a relative distance of 9.8\arcsec\ \citep[Fig.~\ref{fig:radioMorphologies}, see also][]{Broderick2007}.
It is at redshift $z = 3.515 \pm 0.001$, where the [OIII]$\lambda\lambda$4959,5007 doublet and the H$\beta$ emission lines fall into the K band. Fitting the [OIII]$\lambda\lambda$4959,5007 line profiles adequately requires two components per line (Fig.~\ref{fig:spec1}): a narrow component with FWHM = 360~km s$^{-1}$ (which we consider to be at the systemic redshift), and a broader blue wing with FWHM = 1300~km s$^{-1}$, blueshifted by 700~km s$^{-1}$. In the H band, we detect the [OII]$\lambda$3727 emission lines. Line properties are listed in Tab.~\ref{tab:spec1}. Line emission extends over 2.0\arcsec$\times$1.0\arcsec\ along an axis going from northeast to southwest, with surface brightness $\Sigma_{\mathrm{[\textsc{Oiii}]}} = (5 - 30) \times 10^{-16}$~erg~s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ (Fig.~\ref{fig:maps2}). The velocity map shows a gradient of about 400~km s$^{-1}$, rising from the northeast to the southwest and well aligned with the axis of the radio source. Line widths are larger in the center of the galaxy (FWHM $\simeq 1000-1200$~km s$^{-1}$) than in the periphery (FWHM $\simeq$ 200-400~km s$^{-1}$). We do not detect any continuum emission. \subsection{CENSORS sample} \label{s:CENSORS} \subsubsection{NVSS~J094949$-$211508 (CEN~072)} This source has a redshift of z~=~$2.427 \pm 0.001$, in agreement with the previous UV redshift measured by \citet{Brookes2008CENSORS}. Because of a coordinate mismatch, we do not have new radio measurements at 5.5 and 9.0~GHz for this source. \citet{Best2003CENSORS} found LAS$<$0.7\arcsec, which is small enough to infer that the extended emission-line region is larger than the radio emission. The integrated spectrum shown in Fig.~\ref{fig:spec3} shows that [OIII]$\lambda\lambda$4959,5007 and H$\beta$ are well detected in the H band, and H$\alpha$ and [NII]$\lambda\lambda$6548,6583 in the K band. The lines are not very strongly blended. [SII]$\lambda\lambda$6716,6731 and [OI]$\lambda$6300 are not detected.
Line emission extends over 1.7\arcsec$\times$1.2\arcsec\ (14~kpc$\times$10~kpc) along a northwest-southeast axis (Fig.~\ref{fig:maps3}). It has a velocity gradient of $\sim$~300~km s$^{-1}$. The stellar continuum is associated with the northwestern part of the emission-line region, where the line widths also reach their maximum (FWHM~$\sim$~800~km s$^{-1}$), compared to FWHM~$\sim$~300-500~km s$^{-1}$ in the southeast. \subsubsection{NVSS~J095226$-$200105 (CEN~129)} This source is at redshift z~=~$2.422 \pm 0.001$, in agreement with the previous estimate of z$=$2.421 based on rest-frame UV lines \citep{Brookes2008CENSORS}. The radio morphology is resolved in our ATCA observations at 5.5 and 9.0~GHz (see Fig.~\ref{fig:radioMorphologies}), but was compact in the previous 1.4~GHz data of \citet{Best2003CENSORS}. We find two radio lobes along an east-west axis at a position angle of 95$^\circ$, separated by 2.5\arcsec\ (20~kpc at z$=$2.4). The integrated spectrum of CEN~129 shows H$\beta$ and the [OIII]$\lambda\lambda$4959,5007 doublet in the H band, and H$\alpha$ and the [NII]$\lambda\lambda$6548,6583 lines in the K band (Fig.~\ref{fig:spec3} and Tab.~\ref{tab:spec2}). The emission-line morphology of CEN~129 is fairly circular, with a small extension toward the northwest, and a size of 2.3\arcsec$\times$1.7\arcsec\ (19~kpc$\times$14~kpc). The major axis is along the northwest-southeast direction. The stellar continuum is detected at the center of the line emission. The velocity field has a gradient of $\sim$~200~km s$^{-1}$ and is roughly aligned with the radio axis, along an east-west direction. The northwestern extension has the most blueshifted emission ($-$200~km s$^{-1}$). Line widths are fairly uniform with FWHM~$\sim$~600-700~km s$^{-1}$. \subsubsection{NVSS~J094949$-$213432 (CEN~134)} The integrated spectrum of CEN~134 shows H$\beta$ and the [OIII]$\lambda\lambda$4959,5007 doublet in the H band, and H$\alpha$ and [NII]$\lambda$6583 in the K band.
We find a redshift of z~=~$2.355 \pm 0.001$ for CEN~134, in good agreement with the value estimated from rest-frame UV lines \citep{Brookes2008CENSORS}. At 1.4~GHz, the radio source is very extended, with LAS=22.4\arcsec\ along a position angle of 125$^\circ$ \citep[][]{Best2003CENSORS}. At 5.5~GHz we measure LAS~=~21.9\arcsec\ and PA~=~131$^\circ$, but we do not detect the second lobe in the 9.0~GHz observations. Line emission in CEN~134 extends over 3.1\arcsec$\times$1.8\arcsec\ (25~kpc$\times$15~kpc), with the major axis going from the northwest to the southeast (Fig.~\ref{fig:maps3}). Continuum emission is detected in the southern part of the emission-line region. The velocity field shows two blueshifted regions, one south of the continuum, one at the very north of the emission-line region, with velocities of about $-120$ km s$^{-1}$ relative to the velocities near the center. Line widths are higher in the south near the continuum position, with FWHM~$\sim$~600~km s$^{-1}$. In the north, the gas is more quiescent with FWHM$\sim$~300~km s$^{-1}$. \section{Ensemble properties of the CENSORS and MRCR-SUMSS samples} \label{sec:ensembleproperties} After discussing each of our targets individually and in detail, we now turn to characterizing the overall properties of our sample. Although we do not have a complete sample in a strict statistical sense, this is a fairly common situation for studies of the spatially resolved properties of small to mid-sized samples of high-redshift galaxies \citep[e.g.,][]{nmfs06, nmfs09}, and our sample size is comparable to most of these studies. It is also important to note that our study is a parameter study, not a population study. This means that we wish to analyze certain source properties as a function of the AGN characteristics of our sources. Therefore, we have to sample the range of AGN properties as uniformly as we can, and can put less emphasis on matching, e.g., the shape of the radio luminosity function. 
For this reason, we do not require a statistically complete sample in order to identify global trends in our data. \subsection{Rest-frame optical continuum} We detect rest-frame optical continuum emission in 12 sources: 9 from the MRCR-SUMSS and 3 from the CENSORS sample. At redshifts $z=2-3$, the observed H and K bands correspond roughly to the rest-frame V and R bands, and at redshifts $z=3-4$ to the rest-frame B and V bands. Since the continuum fluxes in all sources are too faint to measure detailed spectral profiles or even spectral slopes, we merely extract the spectrally integrated continuum image (Figs.~\ref{fig:maps1} to~\ref{fig:maps3}). We generally find at most one spatially unresolved continuum source per target, with one notable exception. In NVSS~J004000$-$303333 we find two very faint unresolved continuum emitters of about equal flux, perhaps indicating an ongoing interaction of two galaxies, or AGN light scattered off extended dust \citep[e.g.,][]{hatch09}. Both blobs lie roughly along the radio jet axis, somewhat reminiscent of the alignment effect found in more powerful HzRGs \citep[][]{tadhunter98,vernet01}. In NVSS~J234235$-$384526 we do not detect the continuum at all. \subsection{Morphology of the extended emission-line gas} Gas morphologies and kinematics are the two primary sets of constraints where the advantages of imaging spectrographs become particularly evident. Among our 12 targets, 10 are spatially resolved into a maximum of 10 elements along the major axis of the bright emission-line regions. The emission-line morphologies and kinematics of our sources are very diverse.
Isophotal sizes down to the 3$\sigma$ detection limit of a few $10^{-17}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ range from about 5$-$6~kpc, corresponding to our resolution limit, to very extended sources where line emission is detected over at least 30~kpc (e.g., in NVSS~J144932$-$385657), and potentially more, because we cannot exclude in all cases (e.g., NVSS~J144932$-$385657) that parts of the emission-line region fall outside the 8\arcsec$\times$8\arcsec\ field of view of SINFONI. Resolved emission-line regions are often elongated, with ellipticities between 0.2 and 0.7. We do find a correlation between elongation and size, but cannot rule out that this is an artifact of the relatively low spatial resolution. Typical emission-line surface brightnesses are between a few $10^{-16}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ and a few $10^{-15}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ (Figs.~\ref{fig:maps1} to \ref{fig:maps3}). Eight of our galaxies have spatially resolved gas and a single rest-frame optical continuum peak. In those cases we can evaluate whether the line emission extends symmetrically about the nucleus, taking the continuum peak as an approximation for the location of the central regions of the HzRGs \citep[see][for a justification]{Nesvadba2008}. It is interesting that this is not always the case; in particular, we note a trend that the galaxies with the most regular velocity fields also appear to have gas that is well centered about the nucleus, as would be expected from a more dynamically relaxed system. While lopsidedness does exist even in isolated low-z galaxies, it would need to reach extreme levels to be seen at a resolution of 5 kpc and more. In NVSS~J004000$-$303333, the source with the two equally faint continuum emitters, most of the gas is between the two continuum sources.
This may somewhat favor the interpretation that the continuum is from two regions of scattered light rather than a merging galaxy pair, in which case we would expect at least some line emission to be associated with the nuclear regions of each interacting galaxy. \subsection{Blending of circumnuclear and extended emission} \label{ssec:toymodels} A potential worry for our morphological study is that unresolved circumnuclear emission might be bright enough to dominate the emission-line maps even beyond the central PSF. Even very compact ($\le 1$~kpc) emission-line regions akin to classical narrow-line regions (NLRs), if bright enough, could bias our measurements. In radio galaxies, where the direct view into the circumnuclear regions is obscured, this may be less important than in quasars, but since some of our sources do show a prominent nucleus, a more quantitative analysis is in order. We constructed a suite of toy data cubes with and without a bright NLR embedded in extended, fainter gas with more quiescent kinematics. The NLRs were approximated by a bright point source, and the extended line emission by a region of uniform surface brightness, with sizes, spatial and spectral sampling, and a spatial resolution comparable to our data (Fig.~\ref{fig:toymodel}). Both components have Gaussian line profiles, with FWHM$=$300 km s$^{-1}$ in the extended emission and FWHM$=$800 km s$^{-1}$ for the circumnuclear gas, as measured in the final, fitted data cube. The extended component has a velocity gradient of 400~km s$^{-1}$ over the full source diameter. The signal-to-noise ratios in the final data cubes are comparable to our data, and for simplicity we assumed Gaussian noise. The beam smearing due to the seeing was approximated by convolving the data cube with a two-dimensional Gaussian in each wavelength plane.
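The essence of these toy models can be sketched in one spatial dimension. All numbers below are illustrative choices in the spirit of the text, not the actual simulation parameters, and noise and the velocity gradient are omitted for brevity:

```python
import math

FWHM2SIG = 1.0 / (2.0 * math.sqrt(2.0 * math.log(2.0)))

dx, dv = 0.125, 25.0                       # spaxel (arcsec), channel (km/s)
xs = [i * dx for i in range(-32, 33)]      # 1D spatial axis, +-4 arcsec
vs = [j * dv for j in range(-80, 81)]      # velocity axis, +-2000 km/s

sig_ext = 300.0 * FWHM2SIG                 # quiescent extended gas, km/s
sig_nuc = 800.0 * FWHM2SIG                 # circumnuclear component, km/s
sig_psf = 0.6 * FWHM2SIG                   # seeing FWHM of 0.6 arcsec (assumed)

def gauss(u, sigma):
    return math.exp(-0.5 * (u / sigma) ** 2)

def true_spec(x, v):
    """Unsmeared cube: uniform extended disk plus a bright nuclear point source."""
    s = gauss(v, sig_ext) if abs(x) <= 3.0 else 0.0
    if abs(x) < dx / 2.0:                  # point source at x = 0
        s += 10.0 * gauss(v, sig_nuc)
    return s

def smeared_spec(x0):
    """Beam smearing: convolve each velocity channel with the spatial PSF."""
    den = sum(gauss(x0 - x, sig_psf) for x in xs)
    return [sum(true_spec(x, v) * gauss(x0 - x, sig_psf) for x in xs) / den
            for v in vs]

def fwhm(spec):
    """Crude FWHM: total width of the channels above half the peak."""
    half = max(spec) / 2.0
    return dv * sum(1 for s in spec if s > half)

fwhm_center = fwhm(smeared_spec(0.0))      # broadened by the smeared nucleus
fwhm_outer = fwhm(smeared_spec(2.5))       # ~ the intrinsic 300 km/s
```

Even with the broad nuclear line ten times brighter at its origin, the apparent width a few PSF widths away from the nucleus returns to the quiescent value, which is the behavior the full two-dimensional toy cubes quantify.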
For cubes with a dynamic range of at least 10 in gas surface brightness between the nuclear point source and the extended emission in the fitted data, we find an apparent increase in line width of 30$-$50\% at a distance of 1$\times$ the PSF from the nucleus due to the NLR, and of 10$-$20\% at 1.5$\times$ that radius (Fig.~\ref{fig:toymodel}); 10$-$20\% corresponds to our measurement uncertainties. The NLR component in these cases is easily seen in the integrated spectrum (Fig.~\ref{fig:toymodelspec}). However, only one galaxy (NVSS~J144932$-$385657) has a dynamic range as high as 10. In most cases, the nucleus is only about 4$-$6 times brighter than the faintest extended emission (Figs.~\ref{fig:maps1} to \ref{fig:maps3}). Such a contrast is not sufficient to produce the observed strong increase in FWHM from 300 to 800~km s$^{-1}$. The integrated line flux scales linearly with the line-core amplitude and the FWHM. An increase in surface brightness by a factor of 2.5$-$3 is therefore already implied by the line broadening alone, and the core of the broader component does not make a large contribution to the core of the measured line profile in the combined spectrum of the circumnuclear and extended components. At signal-to-noise ratios of 5$-$10, the line wings of the circumnuclear component are hidden in the noise. The broadening is stronger if the extended line emission is distributed asymmetrically about the nucleus, as is the case for CEN~134. For example, if we truncate the extended emission on one side at 1$\times$ the FWHM of the PSF from the nucleus, prior to the smoothing with the seeing disk, the central line width increases by a factor of~2 compared to the symmetric case. However, the circumnuclear spectral component then becomes even more prominent in the integrated spectrum. For an extended component that reaches out to at least 1.5 times the FWHM size of the PSF, the symmetric and truncated cases rapidly become indistinguishable.
This shows that our objects are not comparable to low-redshift Seyfert galaxies, for example, where a systemic component and a narrow-line region, often with very different properties, can clearly be distinguished. The gas producing the relatively broad emission lines must extend over larger radii, even if these regions are not clearly resolved at the 5$-$8~kpc resolution of our data. \subsection{Kinematics} \label{ssec:kinematics} Our sample shows a wide variety of kinematic properties. Velocity fields range from very regular, dominated by a single, smooth, large-scale velocity gradient, to very irregular. The total velocity gradients are typically on the order of $\Delta$v~$=$~200$-$300 km s$^{-1}$. Given that the spatial resolution of our data is not very high for many of our sources, this may underestimate the intrinsic velocity gradient owing to beam-smearing effects. We used a set of Monte Carlo simulations to estimate that beam smearing may lower the measured velocity gradients by about a factor of~2, comparable to the effect of inclination (Collet et al. 2014, PhD~thesis). Overall, the line widths in our galaxies are relatively broad, FWHM$=$400$-$1000 km s$^{-1}$, down to the spatial resolution limit of our data. This is more than can be attributed to the overall velocity shifts and beam smearing. Visual inspection of Figures~\ref{fig:maps1}~to \ref{fig:maps3} shows that at most two galaxies, NVSS~J012932$-$385433 and CEN~072, have regular velocity gradients without very obvious perturbations. All the other galaxies have significantly perturbed velocity fields. Even the very regular galaxy NVSS~J012932$-$385433 may have slight irregularities in the blueshifted gas in a small region of the southwestern hemisphere, which is strongly blurred by the size of the seeing disk (Fig.~\ref{fig:maps1}). However, both galaxies have irregular distributions of line widths, which are at variance with quiescent disk rotation.
A good example for a galaxy with an irregular velocity field is CEN~134 (Fig.~\ref{fig:maps3}). This galaxy, and others with irregular velocity fields, exhibits sudden velocity jumps in at least parts of the emission-line region. These jumps occur at signal-to-noise levels, and over regions at least as large as the seeing disk (which is oversampled in the seeing-limited SINFONI mode, with 4$-$6 pixels per FWHM of the PSF), where they must be intrinsic to our sources; simple noise features would not behave in this way. It is interesting that these velocity jumps often coincide with the position of the continuum where we would expect the galaxy nucleus, and hence the AGN. They are also associated with a local broadening of the emission lines, but are not large enough to be the sole cause of this broadening. We will come back to the interpretation of these kinematic properties in \S\ref{sec:results.globalproperties}. \subsection{Extinction} \label{ssec:extinction} The extended gas of powerful HzRGs is often very dusty, causing several magnitudes of extinction in the rest-frame V band \citep{Nesvadba2008}. We measure the H$\alpha$/H$\beta$ decrement to estimate the extinction in the warm ionized gas of our sources, assuming an intrinsic Balmer decrement of H$\alpha$/H$\beta$~=~2.9 and adopting the galactic extinction law of \citet{Cardelli1989}. We detect H$\alpha$ in all seven galaxies where it falls into the atmospheric windows. For NVSS~J004000$-$303333 and NVSS~J234235$-$384526, which are at z~$\gtrsim$~3.5, we cannot observe H$\alpha$ from the ground. H$\beta$ is detected in eight of the nine integrated spectra of the galaxies of our sample, and both lines are detected in six galaxies. For NVSS~J201943$-$364542, we can only set an upper limit to the H$\beta$ flux and, consequently, we give lower limits to the extinction.
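The decrement-to-extinction conversion can be sketched as follows. The values of the \citet{Cardelli1989} curve at H$\alpha$ and H$\beta$ are approximate numbers assumed here for illustration, and the input decrement is hypothetical:

```python
import math

# Approximate Cardelli et al. (1989) R_V = 3.1 curve values at Halpha and
# Hbeta; these k = A(lambda)/E(B-V) numbers are assumptions for illustration.
K_HA, K_HB = 2.53, 3.61
R_INT = 2.9                       # intrinsic Halpha/Hbeta assumed in the text

def balmer_extinction(ratio_obs):
    """Return (E(B-V), A(Hbeta)) in mag from an observed Halpha/Hbeta ratio."""
    ebv = 2.5 / (K_HB - K_HA) * math.log10(ratio_obs / R_INT)
    ebv = max(ebv, 0.0)           # clip formally negative extinctions to zero
    return ebv, K_HB * ebv

# a hypothetical observed decrement of 5.0:
ebv, a_hb = balmer_extinction(5.0)
print(f"E(B-V) = {ebv:.2f} mag, A(Hbeta) = {a_hb:.2f} mag")
```

An observed decrement at the intrinsic value of 2.9 returns zero extinction by construction; larger decrements map to a few tenths of a magnitude of reddening per unit increase in the ratio.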
We find typical extinctions between formally $A_{H\beta}=$~0~mag in four galaxies, and up to $A_{H\beta}=$~1.9~mag, where $A_{H\beta} = A_V - 0.14$ mag. Results for individual sources are listed in Tab.~\ref{table:ExtinctionElectronDensityMassEnergy}. \subsection{Electron densities} The ratio of the lines in the $[$SII$]$$\lambda\lambda$6716,6731 doublet is sensitive to density over a large range, from about 100~cm$^{-3}$ to $10^5$~cm$^{-3}$, and can be used to estimate the electron density in the emission-line gas \citep{Osterbrock1989}. These lines are well detected in two galaxies, NVSS~J012932$-$385433, where they are broad and blended, and in NVSS~J210626$-$314003 (see Fig.~\ref{fig:spec1} and~\ref{fig:spec2}). We find $n_e = 750$~cm$^{-3}$ for NVSS~J012932$-$385433 and $n_e = 500$~cm$^{-3}$ for NVSS~J210626$-$314003, assuming a temperature $T = 10^4$~K. These electron densities of 500$-$750 cm$^{-3}$ are higher by factors of a few than those in low-redshift AGNs with powerful radio sources, which have electron densities of a few tens to about 100 cm$^{-3}$ \citep[e.g.,][]{stockton76}. This mirrors the higher electron densities of a few~100~cm$^{-3}$ in the interstellar gas of star-forming galaxies at z$\sim$2 \citep[][]{letiran11} compared to starburst galaxies in the nearby Universe, and it also explains the higher surface brightnesses of extended gas in our galaxies compared to low-z AGN hosts. Similar electron densities of a few 100~cm$^{-3}$ have previously been found in other HzRGs \citep[e.g.,][Nesvadba et al., 2015, in prep.]{Nesvadba2006, humphrey08, Nesvadba2008}, but we caution nonetheless that the value we adopt here is uncertain by factors of a few. We adopt a fiducial value of $n_e = 500$~cm$^{-3}$ for the other galaxies in our sample, which corresponds to the average of HzRGs with appropriate measurements.
\subsection{Ionized gas masses} \label{ssec:gasmass} Estimating the mass of warm ionized gas in high-redshift galaxies is straightforward, and can be done by measuring the flux of the bright Balmer lines and the electron density in the gas. The basic principle of the measurement is to count the number of recombining photons at a given electron density. Assuming case B recombination, we can estimate the ionized gas mass following \cite{Osterbrock1989} by setting \begin{equation} M_{ion} = 3.24 \times 10^{8}\frac{L_{H\alpha}}{10^{43}\ {\rm erg\ s}^{-1}} \frac{10^2\ {\rm cm}^{-3}}{n_{e}} M_{\odot} ,\end{equation} where $L_{H\alpha}$ is the H$\alpha$ luminosity corrected for extinction and $n_e$ is the electron density. We find masses of ionized gas in our sample in the range $1 - 10 \times 10^8$~M$_\odot$ when using extinction-corrected luminosities, and in the range $0.5 - 5 \times 10^8$~M$_\odot$ when using the observed luminosities of H$\alpha$ without taking extinction into account. This is generally less than in previous studies of more powerful radio galaxies \citep[e.g.,][Nesvadba et al., 2015, in prep.]{Nesvadba2006,Nesvadba2008}, which have masses of warm ionized gas between $10^9$ and a few $\times 10^{10}$~M$_\odot$. \section{Properties of AGNs and black holes } \label{ss:agnproperties} \subsection{Centimeter radio continuum} \label{sec:radiocontinuum} Our sources cover a range of radio sizes and morphologies, from single compact sources of the size of the ATCA beam (typically 1\arcsec\ $-$3\arcsec\ at the highest observed frequency of 9~GHz), through compact sources that potentially have faint extended structures, to doubles with sizes of up to 25\arcsec. Only in one case do we potentially detect the radio core along with the lobes. This spatial resolution is lower than can be achieved with the JVLA at the highest resolution, but most of our targets are too far south to be observed with the JVLA with a good, symmetric beam.
Nonetheless, the resolution of these data is sufficient to distinguish between radio sources that have and have not yet broken out of the ISM of the host galaxy (i.e., sources with sizes below or above approximately 10~kpc), which is the most important distinction for our purposes. We use our observed radio fluxes, previous ATCA results from \citet{Broderick2007} and \citet{Bryant2009b}, and the NVSS results from \citet{Brookes2008CENSORS} to constrain the radio power at different frequencies, the radio spectral index, and the kinetic energy of the radio lobes. In Tab.~\ref{tab:atcaresults} we list the radio power of each source at a rest-frame frequency of 1.4~GHz, which is frequently given in the literature for high-redshift quasars \citep[e.g.,][]{Harrison2012} and at 500~MHz in the rest-frame, which is the reference frequency of the sample of powerful radio galaxies at z$\sim$2 in the compilation of \citet{miley08}. With a rest-frame radio power at 1.4~GHz between $3\times 10^{26}$ W Hz$^{-1}$ and $8\times 10^{27}$ W Hz$^{-1}$ ($8\times 10^{26}$ W Hz$^{-1}$ to $3\times 10^{28}$ W Hz$^{-1}$ at 500~MHz, Tab.~\ref{tab:atcaresults}), our sources are intermediate between typical dusty starburst galaxies and the most powerful radio galaxies at similar redshifts. To calculate radio spectral indices, we used our own measurements at 5.5~GHz and 9.0~GHz, 1.4~GHz observations from the NVSS catalog, and for the MRCR-SUMSS sample the \citet{Broderick2007} results at 0.408~GHz, 0.843~GHz, and 2.368~GHz. Spectral indices are estimated from best fits to the power law of the radio spectral energy distribution, giving values between $\alpha=-0.8$ and $-1.4$, without clear evidence of spectral breaks. Table~\ref{tab:atcaresults} lists the results for each individual source. For the MRCR-SUMSS sample, our results agree with those of \citet{Broderick2007}. For the CENSORS sample, no previous measurements of the radio spectral index were available. 
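This bookkeeping can be made concrete in a short sketch. The code below K-corrects an observed flux density to a rest-frame radio power under an assumed power law $S_\nu \propto \nu^\alpha$, and evaluates the two kinetic-power calibrations used in this paper (those of Willott et al. 1999 and Cavagnolo et al. 2010). The luminosity distance, the example flux, and the $10^{40}$~erg s$^{-1}$ normalization of $P_{1.4}$ (taken here as $\nu L_\nu$, following the original Cavagnolo calibration) are assumptions for illustration:

```python
import math

MPC_CM = 3.0857e24        # cm per Mpc

def rest_frame_power(s_obs_jy, nu_obs_ghz, z, d_l_mpc, alpha, nu_rest_ghz):
    """Rest-frame radio power in W/Hz at nu_rest_ghz, assuming S_nu ~ nu^alpha."""
    s_cgs = s_obs_jy * 1e-23                                # erg s^-1 cm^-2 Hz^-1
    d_l = d_l_mpc * MPC_CM
    p = 4.0 * math.pi * d_l**2 * s_cgs / (1.0 + z)          # at nu_obs * (1+z)
    p *= (nu_rest_ghz / (nu_obs_ghz * (1.0 + z))) ** alpha  # power-law shift
    return p * 1e-7                                         # erg/s/Hz -> W/Hz

def willott_jet_power(l151_whzsr, f=10.0):
    """Kinetic jet power (erg/s), Willott et al. (1999); input in W/Hz/sr."""
    return 3e45 * f**1.5 * (l151_whzsr / 1e28) ** (6.0 / 7.0)

def cavagnolo_jet_power(p14_erg_s):
    """Cavity power (erg/s), Cavagnolo et al. (2010):
    log(Pcav/1e42) = 0.75 log(P1.4/1e40) + 1.91, with P1.4 taken here as
    nu*L_nu at rest-frame 1.4 GHz (an assumption about the normalization)."""
    return 1e42 * 10.0 ** (0.75 * math.log10(p14_erg_s / 1e40) + 1.91)

# purely illustrative numbers: 30 mJy at 5.5 GHz for a z = 2 source
z, alpha, d_l = 2.0, -1.0, 15800.0           # D_L in Mpc from an assumed cosmology
l151 = rest_frame_power(0.030, 5.5, z, d_l, alpha, 0.151) / (4.0 * math.pi)
p14 = rest_frame_power(0.030, 5.5, z, d_l, alpha, 1.4) * 1e7 * 1.4e9
print(f"L_151 ~ {l151:.1e} W/Hz/sr -> L_jet(Willott) ~ {willott_jet_power(l151):.1e} erg/s")
print(f"L_jet(Cavagnolo) ~ {cavagnolo_jet_power(p14):.1e} erg/s")
```

For these illustrative inputs the two calibrations agree to within a factor of about 2, consistent with the level of agreement quoted below for the actual sample.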
We use the measured radio fluxes and spectral indices to extrapolate the radio power down to the rest-frame frequencies for which empirical calibrations of the kinetic energy of the radio source have been derived. The observed radio luminosity is only a small fraction ($\le 1$\%) of the mechanical power of the radio jet. Specifically, we use the relationship of \citet{Willott1999} which is based on the 151~MHz flux in the rest frame (and given in units of $10^{28}$ W Hz$^{-1}$ sr$^{-1}$), $L_{151,28}$, and set $L_{\rm jet} = 3 \times 10^{45} f^{3/2} L_{151, 28}^{6/7} {\rm erg} \ {\rm s}^{-1}$, where $f$ represents the astrophysical uncertainties and is typically between 1 and 20. Here, we use $f = 10$ \citep[see also][]{cattaneo09}. We also use the calibration of \citet{Cavagnolo2010} which measures the work needed to inflate the X-ray cavities found in low-redshift galaxy clusters as a proxy for the mechanical power of the radio jet, $\log P_{cav}= 0.75(\pm 0.14)\, \log P_{1.4} + 1.91(\pm 0.18)$, where $P_{cav}$ is the kinetic power of the X-ray cavity given in units of $10^{42}$ erg s$^{-1}$, and $P_{1.4}$ the radio power at 1.4~GHz in the rest frame. Both approaches give broadly similar results with differences of typically about 0.1~dex for the MRCR-SUMSS sources, and differences of 0.3~dex for the CENSORS sources, which have steeper spectral indices. We list all results in Tab.~\ref{tab:atcaresults}. \subsection{Bolometric AGN emission} For a first estimate we consider the [OIII]$\lambda$5007 luminosity (Tab.~\ref{table:ExtinctionElectronDensityMassEnergy}) as a signature of the nuclear activity in our galaxies. Even without correcting for extinction, our measured [OIII]$\lambda$5007 line fluxes indicate luminosities of 0.1 to a few $\times 10^{44}$~erg~s$^{-1}$, in the range of powerful quasars and only somewhat fainter than the [OIII]$\lambda$5007 luminosities of the most powerful HzRGs. As discussed in more detail in Nesvadba et al.
(2015, in prep.), the line ratios in our galaxies are also consistent with being photoionized by their powerful AGNs. Correcting the fluxes for extinction, which is relatively low in our targets (see \S\ref{ssec:extinction}), increases these values by factors of a few. Using the relationship of \citet{Heckman2004}, this would correspond to bolometric luminosities of the AGNs on the order of ${\cal L}_{bol}$ = 3500$\times {\cal L}$([OIII]) or a few $10^{46-47}$ erg s$^{-1}$. Although this estimate is known to overestimate the intrinsic bolometric luminosities of radio-loud quasars by factors of up to a few, this does not change the result at an order-of-magnitude level as we are stating here. \subsection{Broad-line components and properties of black holes} \label{ssec:BLR} Constraining the AGN properties of high-z radio galaxies is notoriously difficult since it is the very nature of radio galaxies that the direct line of sight into the nucleus is obscured. However, in a few fortuitous cases \citep[e.g.,][]{Nesvadba2011a}, broad H$\alpha$\ lines have been observed that are likely to trace the gas kinematics within a few light-days from the supermassive black hole \citep[][]{kaspi00,peterson04}. Such lines can be used to constrain the mass of the black hole and its accretion rate \citep[e.g.,][]{GreeneHo2005}. As described in \S\ref{s:PresentationOfOurAnalysisTools}, we fitted the spectra of our sources with single Gaussian components. Within the uncertainties of our data this yields acceptable fits to the integrated spectra of most targets and for most emission lines. However, three of our targets, NVSS~J002431$-$303330, NVSS~J012932$-$385433, and NVSS~J201943$-$364542, require additional H$\alpha$ components. These components have FWHM $\ge$ 3000~km s$^{-1}$, significantly broader than the systemic line emission. Moreover, NVSS~J234235$-$384526 clearly shows a second [OIII]$\lambda$5007 component. 
Fainter, marginally detected broad [OIII]$\lambda$5007 components may also be present in NVSS~J002431$-$303330 and NVSS~J004000$-$303333. The origin of broad (FWHM $\gg$ 1000~km s$^{-1}$) H$\alpha$ line emission at high redshift has been attributed either to galaxy-wide winds driven by starbursts \citep[e.g.,][]{leTiran2011b, Shapiro2009,Nesvadba2007b} or active galactic nuclei \citep[e.g.,][]{Alexander2010, Nesvadba2011a, Harrison2012}, or alternatively to gas motions in the deep gravitational potential wells very near the supermassive black hole \citep[e.g.,][]{Alexander2008, Coppin2008, Nesvadba2011a}. For galaxies like NVSS~J234235$-$384526, and perhaps NVSS~J002431$-$303330 and NVSS~J004000$-$303333, which only have relatively broad [OIII] components, it is clear that the broad-line emission probes gas in the narrow-line region or outside, at larger galactocentric radii. This is similar to high-z quasars \citep[e.g.,][]{Harrison2012, Nesvadba2011b}, since forbidden lines are collisionally de-excited at the high electron densities of the broad-line region \citep[e.g.,][]{Sulentic2000}. The non-detection of these components in the Balmer lines makes it unlikely that these winds encompass large gas masses. High electron densities and ionization parameters can boost the luminosity of the [OIII]$\lambda\lambda$4959,5007 lines without implying large gas masses (e.g., Ferland 1993). For galaxies where we only observe broad components in the H$\alpha$ and [NII]$\lambda\lambda$6548,6583 complex, with widths that make it difficult to uniquely associate the broad-line emission with either line, the situation is less clear.
Line widths of $\ge 3000$~km s$^{-1}$ have been considered evidence of BLRs in submillimeter galaxies and quasars at z~$\sim$~2 \citep{Alexander2008, Coppin2008}; however, extended emission lines with FWHM $\sim$ 3500~km s$^{-1}$ have been observed in very powerful radio galaxies at similar redshifts \citep[e.g.,][]{Nesvadba2006}, and given the generally large line widths in our sources, it is clear that the ISM is experiencing a phase of strong kinetic energy injection. Among our sources, NVSS~J201943$-$364542 clearly stands out in terms of line width and line luminosity. It is the only galaxy with a line as broad (FWHM = 8250~km s$^{-1}$) and as luminous (${\cal L}=1.1\times 10^{44}$~erg~s$^{-1}$) as the H$\alpha$ emission from nuclear broad-line regions in powerful radio galaxies at similar redshifts \citep{Nesvadba2011a}, and it exceeds the ``typical'' bona fide AGN-driven wind by nearly an order of magnitude in line width. We do not find a broad component in [OIII] that would correspond to the one seen in H$\alpha$. Following \citet{GreeneHo2005} we can use the width and the luminosity of the H$\alpha$ line to estimate the mass and accretion rate of the supermassive black hole, finding M$_{\mathrm{BH}} = 2.1^{+2.2}_{-1.1} \times 10^9$ M$_{\odot}$ and $\mathcal{L}_{\mathrm{bol,AGN}} = 1.4\times10^{12}\ {\cal L}_{\odot} = 5.3 \times 10^{45}$ erg~s$^{-1}$. The Eddington luminosity of a M$_{\mathrm{BH}} = 2.1^{+2.2}_{-1.1} \times 10^9$ M$_{\odot}$ black hole is $\mathcal{L}_{\mathrm{Edd}} = 2.8 \times 10^{47}$ erg~s$^{-1}$, corresponding to an Eddington ratio of 2\%. These values are well within the range found for more powerful radio galaxies at z~$\ge$~2 \citep{Nesvadba2011a}, including the remarkably low black hole Eddington ratio compared to many bright high-redshift quasars \citep[see][for a detailed discussion]{Nesvadba2011a}.
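For reference, the \citet{GreeneHo2005} broad-H$\alpha$ scaling behind these numbers can be evaluated directly. This minimal sketch uses the published coefficients of that relation and the standard Eddington normalization of $1.26\times10^{38}$ erg s$^{-1}$ per solar mass, with the bolometric luminosity taken from the text:

```python
def mbh_greene_ho(l_halpha, fwhm_halpha):
    """Black-hole mass (Msun) from the broad-Halpha scaling of Greene & Ho
    (2005): M_BH = 2.0e6 (L_Ha/1e42 erg/s)^0.55 (FWHM_Ha/1e3 km/s)^2.06."""
    return 2.0e6 * (l_halpha / 1e42) ** 0.55 * (fwhm_halpha / 1e3) ** 2.06

mbh = mbh_greene_ho(1.1e44, 8250.0)          # NVSS J201943-364542 values
l_edd = 1.26e38 * mbh                        # Eddington luminosity, erg/s
print(f"M_BH ~ {mbh:.1e} Msun, L_Edd ~ {l_edd:.1e} erg/s")
print(f"Eddington ratio ~ {5.3e45 / l_edd:.1%}")   # with L_bol from the text
```

Plugging in the measured FWHM and luminosity reproduces the black-hole mass, Eddington luminosity, and $\sim$2\% Eddington ratio quoted above.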
We caution, however, that we did not take into account that the extinction might be greater than that of optically selected quasars. Our non-detection of H$\beta$ implies A(H$\beta$)~$\ge$~1.7, which would be less than the $A_V=3.5$ mag used by \citet{GreeneHo2005}, if taken at face value. If the unified model applies for these galaxies \citep[][]{antonucci93, drouart12}, then differences in inclination are likely the largest uncertainty of about a factor of 2, with little impact on our results. \citet{drouart14} find higher Eddington ratios from Herschel/SPIRE observations of five of the \citet{Nesvadba2011a} sources when adopting a bolometric correction factor of 6 between the far-infrared and bolometric luminosity of these galaxies. The measured far-infrared luminosities are lower by factors of up to 5 than the ${\cal L}_{bol}$ derived from the H$\alpha$ line luminosity. The more moderate line widths (FWHM $\ge$ 3000~km s$^{-1}$) and line luminosities of NVSS~J002431$-$303330 and NVSS~J012932$-$385433 make interpreting the nature of the broad-line components in these galaxies more difficult, and it is not possible with the present data alone to clearly distinguish between the wind and the black hole hypothesis. Under the assumption that these lines probe the AGN broad-line region, we find (using the same approach as above and using the measurements listed in Tabs.~\ref{tab:spec1}~and~\ref{tab:spec2}) $L_{bol,AGN}^{0129}= 6.4\times 10^{44}$~erg~s$^{-1}$ and $L_{bol,AGN}^{0024}= 8.5\times 10^{44}$~erg~s$^{-1}$ for NVSS~J012932-385433 and NVSS~J002431$-$303330, respectively, and black hole masses of M$_{\mathrm{BH}}^{0129} = 2.8 \times 10^8$~M$_{\odot}$ and M$_{\mathrm{BH}}^{0024} = 2.4 \times 10^8$~M$_{\odot}$, respectively. The Eddington ratios would be as low as for NVSS~J201943$-$364542, about 2$-$3\%. 
We caution, however, that we have no unique constraint to distinguish between AGN broad-line emission and gas interacting with the radio jet on larger scales in these two sources. If confirmed to be tracing BLR gas, these two galaxies would have supermassive black holes with masses closer to submillimeter galaxies than more powerful radio galaxies at z~$\sim$~2, although their accretion rates would be significantly lower than in submillimeter galaxies \citep[which accrete near the Eddington limit;][]{Alexander2008}. However, their kinetic jet power \citep[$5\times 10^{46}$ erg s$^{-1}$ for both sources, using the approach of][]{Cavagnolo2010}, would slightly exceed their Eddington luminosities, $L_{Edd}=3.6\times 10^{46}$ and $3.1\times 10^{46}$ erg s$^{-1}$, respectively, and their bolometric luminosities implied by H$\alpha$ would be two orders of magnitude lower than those implied by their [OIII] luminosities. Although super-Eddington accretion is not impossible, it is very rarely observed, which is why we are doubtful that these are bona fide AGN broad-line regions. \section{Additional line emitters and dynamical mass estimates of our HzRGs} \label{s:additionalLineEmittersTracersOfEnvironmentDynamicalProbes} Many high-redshift radio galaxies do not exist in solitude. A large number of imaging and spectroscopic studies have demonstrated conclusively that many massive radio galaxies at z~$\ge$~2 are surrounded by galaxy overdensities at the same redshift, as well as significant reservoirs of molecular, atomic, and ionized gas, including diffuse intra-halo gas \citep[][]{leFevre1996, Venemans2007, Hayashi2012, Galametz2012, vanOjik1997, VillarMartin2003, DeBreuck2003a, Nesvadba2009, wylezalek13, collet14}.
With the small field of view of only 8\arcsec$\times$8\arcsec, SINFONI can only constrain the very nearby environment of HzRGs out to a few tens of kpc (8\arcsec\ corresponds to 64~kpc at z~$\sim$~2); however, this small-scale environment is particularly interesting, e.g., to study how accretion and merging may affect the evolutionary state of the radio galaxy. Given the presence of extended gas clouds well outside the radio galaxy itself, which we attribute to AGN-driven winds and bubbles, it may not be immediately clear how to identify a secondary line emitter within a few tens of kpc from the radio galaxy as another galaxy in each individual case. We have three such examples, NVSS~J004000$-$303333, NVSS~J201943$-$364542, and NVSS~J144932$-$385657. For the last of these, we argue in \S\ref{ssec:rotation} why we favor the interpretation of an AGN-driven bubble. For NVSS~J004000$-$303333 and NVSS~J201943$-$364542 the situation is more difficult. In both cases, the line emission cannot be geometrically associated with the radio jet, which strongly disfavors a direct physical connection between jet and gas. The redshifts in both cases are significantly offset from that of the radio galaxy itself. In NVSS~J201943$-$364542, the line width of this second component is also very narrow, FWHM $=$ 320~km s$^{-1}$, and the light is emitted from within about 1\arcsec\ (8~kpc at z~$=$~2.1), as would be expected from a low-mass galaxy in the halo of NVSS~J201943$-$364542. The high [OIII]/H$\beta$ ratios observed in this putative companion are consistent with the values observed in rest-frame UV selected, fairly low-mass galaxies such as Lyman-break galaxies (LBGs) and other blue, intensely star-forming galaxies at high redshifts. The [OIII]$\lambda$5007 luminosity of our source, ${\cal L} = 5.3 \times 10^{42}$~erg~s$^{-1}$, does not stand out compared to the LBGs of \citet{Pettini2001}, for example, which typically have ${\cal L} = {\rm a\ few} \times 10^{42}$~erg~s$^{-1}$.
In NVSS~J004000$-$303333, the line emission is closer to the HzRG, within about 1\arcsec, and brighter, ${\cal L}$([OIII]) = $4\times 10^{43}$~erg~s$^{-1}$. This may indicate that the gas is at least partially lit up by photons originating from the AGN in the radio galaxy. The proximity to the radio galaxy may also suggest that this gas is already dominated by the gravitational potential of the radio galaxy itself, either as part of a satellite galaxy that is being accreted (e.g., Ivison et al. 2008, Nesvadba et al. 2007), or perhaps because it is associated with an extended stellar halo forming around the HzRG, as observed in other cases \citep[][see also \citealt{collet14}]{Hatch2009}. Under both hypotheses, the projected velocity of these additional line emitters would be dominated by gravitational motion, and can therefore be used as an order-of-magnitude measure of the dynamical mass of the central radio galaxy and its underlying dark matter halo. Assuming that the system is approximately virialized, we set $M = v_c^2\ R / G$, where the circular velocity, $v_c$, is $v_c = v_{{\rm obs}} / \sin{i}$, and where $R$ is the projected radius, and $G$ the gravitational constant. With $v_{\rm obs} = 408$~km s$^{-1}$ for the companion of NVSS~J201943$-$364542, $v_{\rm obs}=350$~km s$^{-1}$ for NVSS~J004000$-$303333, and projected distances of 24~kpc and 8~kpc, respectively, we find dynamical mass estimates of $9\times 10^{11}\ \sin^{-1}{i}$~M$_{\odot}$ for NVSS~J004000$-$303333 and of $3\times 10^{11}\ \sin^{-1}{i}$~M$_{\odot}$ for NVSS~J201943$-$364542. Both are in the range of stellar and dynamical masses estimated previously for the most powerful HzRGs \citep{Seymour2007, deBreuck2010, Nesvadba2007b, VillarMartin2003}.
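The order-of-magnitude virial estimate can be written out explicitly. The sketch below omits the inclination correction (i.e., it sets $\sin i = 1$, giving a lower limit) and uses round, illustrative input values:

```python
KM, KPC = 1.0e5, 3.0857e21            # cm per (km/s), cm per kpc
G, MSUN = 6.674e-8, 1.989e33          # cgs gravitational constant, g per Msun

def virial_mass(v_obs_kms, r_kpc):
    """M = v_obs^2 R / G in Msun, without inclination correction."""
    v = v_obs_kms * KM
    return v**2 * (r_kpc * KPC) / G / MSUN

# e.g. a companion at 400 km/s projected velocity and 24 kpc projected radius:
print(f"M_dyn ~ {virial_mass(400.0, 24.0):.1e} Msun")
```

A few hundred km s$^{-1}$ at a few tens of kpc yields masses of several $10^{11}$~M$_\odot$, the order of magnitude quoted in the text.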
\section{Signatures of AGN feedback} \label{sec:results.globalproperties} \subsection{Comparison with other massive high-z galaxies} \label{ssec:buitrago} Given the complexity of the gas kinematics of high-redshift galaxies and incomplete sets of observational constraints, the astrophysical mechanism that dominates the gas kinematics is not always easy to identify \emph{ab initio}. It is therefore illustrative to compare our sources with other samples of massive, contemporary galaxy populations observed with imaging spectroscopy to highlight the peculiarities of our galaxies in an empirical way. Specifically, we compare our sample with the stellar-mass selected sample of \citet{buitrago13} of ten galaxies with M$_{stellar}$$\ge 10^{11}$ M$_{\odot}$ at z$\sim$1.4 without obvious AGN signatures, and with the submillimeter selected dusty starburst galaxies of \citet{alaghband12} and \citet{menendez13} without very powerful AGN, and those of \citet{harrison12} and \citet{alexander10} with powerful obscured quasars. The \citet{buitrago13} galaxies and the starburst-dominated submillimeter galaxies (SMGs) generally have similar or even larger velocity gradients than our sources, and these gradients are often very regular with a monotonic rise from one side of the emission-line region to the other. Total velocity offsets are between 200 and 600 km s$^{-1}$ in the \citet{buitrago13} sample, and between 100 and 600 km s$^{-1}$ in the SMGs of \citet{alaghband12} and \citet{menendez13}. In this comparison we discard one galaxy of \citet{alaghband12} which shows AGN characteristics in the optical but not in the far-infrared. The sizes of emission-line regions in both samples are between 1\arcsec\ and 3\arcsec. A significant difference between our HzRGs and the comparison samples is, however, the larger FWHMs in our sources, between 400 km s$^{-1}$ and 1500 km s$^{-1}$.
FWHMs in the \citet{buitrago13} sample are 100$-$300 km s$^{-1}$, and in the starburst-dominated SMGs they are between 160 and 470 km s$^{-1}$. Even in the SMGs, where the gas kinematics may be severely affected by ongoing galaxy mergers, the line widths are significantly lower than in our galaxies. Larger FWHMs than in the mass-selected and starburst-dominated sources are also found by \citet[][see also \citealt{Alexander2010}]{Harrison2012}, who study the ionized gas kinematics in eight sub-mm selected starburst/AGN composites at z~$\sim$~1.4$-$3.4 with imaging spectroscopy of [OIII]$\lambda$5007. Their galaxies are detected at 1.4~GHz with luminosities of $10^{24-25}$~W~Hz$^{-1}$, too low to disentangle the contribution of star formation and radio source. Bolometric luminosities of their obscured AGN are a few $10^{46-47}$ erg s$^{-1}$, similar to powerful radio galaxies. They find FWHM=200$-$1500 km s$^{-1}$, not very different from our targets, and total velocity offsets of 100$-$800 km s$^{-1}$. However, their galaxies have integrated [OIII] line profiles that are very different from those in our sample. Where signal-to-noise ratios permit, their lines show a narrow component superimposed onto a broad line, often with widths well in excess of FWHM $\sim$ 1000~km s$^{-1}$. Similar [OIII] profiles were reported by \citet{Nesvadba2011b} and \citet{CanoDiaz2012} for other obscured quasars at similar redshifts. In turn, our HzRGs have one, generally broad, line component with FWHM$=$500$-$1000 km s$^{-1}$. This suggests that significant parts of the gas in high-z quasar hosts are not strongly perturbed by the AGN. \citet{Nesvadba2011b} find that the broadest line widths are found in an unresolved region near the AGN, where [OIII] luminosities are likely boosted by high electron densities. 
Consistent with this finding, the quasars of \citet{Harrison2012} exhibit velocity curves that are fairly regular and flattened at low surface brightness, consistent with the turnover expected for rotation curves. Although our sources do have extended lower surface-brightness line emission with more quiescent kinematics, their contribution to the integrated line profile is smaller, and this gas is outside the region where a connection with the radio source is obvious \citep[][]{villar03}. In quasars and Seyfert galaxies at low redshift, \citet{husemann13} and \citet{mullersanchez11} also find that the gas is only perturbed near the radio source, in broad agreement with our findings \citep[but see][]{liu13}. \subsection{Outflows or disk rotation?} \label{ssec:rotation} The large velocity offsets of up to 2500 km s$^{-1}$ in the most powerful HzRGs make it relatively easy to conclude that the gas is driven by powerful, non-gravitational processes \citep[][]{Nesvadba2006,Nesvadba2008}. In the sources we are studying here, the situation is not so clear. Although we do find total velocity offsets of a few 100 km s$^{-1}$ in the six galaxies with resolved gas kinematics, in four cases they are relatively small, $\Delta v=$ 300-450 km s$^{-1}$, and, as we just discussed in \S\ref{ssec:buitrago}, within the range found in massive early-type galaxies at high redshift without powerful AGNs. If we assume that this gas is in a rotating disk and derive a mass estimate from the virial theorem (\S\ref{s:additionalLineEmittersTracersOfEnvironmentDynamicalProbes}), this corresponds to masses of $0.5-1.2\times 10^{11}$ M$_{\odot}$ for typical velocities $v/\sin{i} =$~150$-$225 km s$^{-1}$, radii R$=$5 kpc (corresponding to a typical observed disk radius of 0.6\arcsec), and an average inclination of 45$^\circ$ \citep[][]{drouart12}.
This is similar to the expected mass range of a few $10^{11}$ M$_{\odot}$ of our targets that we derived previously (\S\ref{s:additionalLineEmittersTracersOfEnvironmentDynamicalProbes}), and within the stellar mass range of powerful HzRGs of a few $10^{11}$ M$_{\odot}$ \citep[][]{seymour07, debreuck10}. Moreover, the existence of kpc-scale rotating disks in a few low-redshift radio galaxies \citep[e.g.,][]{emonts05, Nesvadba2011c} suggests that disk rotation is very possible in galaxies with radio sources not much weaker than we find here. It is interesting, however, that our sources have at the same time smaller velocity gradients and larger nebulosities than in the mass-selected sample of \citet{buitrago13}. If the velocity gradients we observe are indeed due to rotation, then this would imply that our radio galaxies have shallower mass profiles, which might point toward a different mass assembly history in the two populations. A similar situation is found, however, in the radio-loud, intermediate-redshift quasars of \citet{fu09}, which also have gas that is relatively extended compared to the size of the stellar component of their galaxies, and probes velocity ranges that are smaller than those expected from stellar velocity dispersions. They conclude that the gas kinematics are unlikely to be directly related to rotational motion. At high redshift we cannot probe a similarly clear tracer of galaxy kinematics such as stellar absorption line widths, and therefore cannot address this question directly. Additional arguments point toward the outflow hypothesis. The extended emission-line regions in all but one source \citep[discussed further in][]{collet14} are aligned with the radio jet axis within 20$^\circ-30^\circ$ as would be expected if the jet were inflating an overpressurized bubble of hot gas. 
Moreover, hydrodynamic models of jet-driven winds do predict that jets with kinetic power of $10^{46-47}$ erg s$^{-1}$ should accelerate gas to about 500 km s$^{-1}$, which is consistent with our findings, in particular if we take seeing (\S\ref{ssec:kinematics}) and projection effects into account (see also \S\ref{ssec:wagner}). Most galaxies have velocity maps with significant irregularities even at our relatively low spatial resolution of several kpc. Inspection of Figs.~\ref{fig:maps1} to~\ref{fig:maps3} shows that all but perhaps three sources, NVSS~J012932$-$385433, NVSS~J204601$-$335656, and CEN~072, have significant residuals from simple, smooth, monotonic velocity gradients. This implies significant non-circular motion with sudden velocity jumps of up to several 100 km s$^{-1}$, over scales of a few kpc. A good example is NVSS~J002431$-$303330, where we see two areas that are redshifted relative to the surrounding gas by about 200 km s$^{-1}$, or CEN~134, which shows two large regions toward the northwest and southeast, which are blueshifted by up to about 150$-$200 km s$^{-1}$ relative to a central ridge of more redshifted gas. It is clear that such kinematics cannot arise from simple disk rotation. We also find very extended gas in some cases, in particular in NVSS~J144932$-$385657, where the line emission extends over 30~kpc, clearly larger than an individual galaxy. The velocity offset between the two bubbles is 800 km s$^{-1}$, and the gas has FWHM line widths of up to about 800 km s$^{-1}$. The K-band image of \citet{Bryant2009a} shows fuzzy, clumpy structures at the sensitivity limit of the data near the eastern emission-line region, but no single, clearly detected continuum source consistent with a galaxy. 
At the depth of the image, we would likely have missed galaxies with masses significantly lower than that of the radio galaxy; however, it is not clear in this case how the lopsided morphology of the southern bubble relative to the continuum of the radio galaxy, or the fairly high gas surface brightness in this putative small galaxy, could be explained. We therefore consider this source as an example of an AGN-driven wind, similar to other HzRGs. The eastern bubble is much smaller than the western bubble; however, similar asymmetries are found in more powerful radio galaxies at the same redshift (Nesvadba et al. 2015 in prep.). Moreover, the eastern blueshifted bubble is near the edge of our data cube and may extend farther outside the field of view. In CEN~134, NVSS~234235$-$384526, NVSS~J144932$-$385657, and NVSS~J004000$-$303333, the gas is elongated along the radio jet axis, and in NVSS~J002431$-$303330, NVSS~J004000$-$303333, NVSS~J030431$-$315308, NVSS~J144932$-$385657, NVSS~J234235$-$384526, and CEN~134 the largest velocity offsets are also found near that direction (Figs.~\ref{fig:maps1} to~\ref{fig:maps3}). This is reminiscent of the ``jet-cloud interactions'' found in radio galaxies near and far \citep[e.g.,][]{tadhunter91} and even in AGNs with low radio power \citep[][]{fu09,mullersanchez11,husemann13} in the nearby Universe. Given the small velocity gradients and relatively small gas masses, only a very small fraction of the kinetic jet power would be needed to accelerate the gas to the observed velocities, even if all of the gas were participating in outflows.
We approximate the total observed bulk kinetic energy of the gas simply by summing over the bulk kinetic energy in each spatial pixel \begin{equation} \mathrm{E_{bulk}} = \frac{1}{2} \times \mathrm{M_{ion}^{corr}} \: \sum_{i \ \in \ \mathrm{EELR}} \frac{\Sigma(i)}{\Sigma_{\mathrm{tot}}} \times v(i)^2 ,\end{equation} where $\mathrm{M_{ion}^{corr}}$ is the extinction-corrected mass of warm ionized gas estimated in \S\ref{ssec:gasmass}, and $v$ the velocity offset in each pixel from the central velocity, which we consider an acceptable approximation of the systemic velocity of the galaxy. The parameters $\Sigma(i)$ and $\Sigma_{\mathrm{tot}}$ are the gas surface brightness in each spatial pixel and the total line surface brightness, respectively. Measuring the energy in each spatial pixel allows us to take the irregular gas kinematics into account. To probe the disk kinematics out to faint surface brightness, we use the gas kinematics as measured from [OIII], and scale by the H$\alpha$/[OIII] ratio in the integrated spectrum. In galaxies where both lines are bright enough to be probed individually, their surface brightness and kinematics are sufficiently similar to justify this approach \citep[see also][]{nesvadba08}. With the velocities and warm ionized gas masses in Tab.~\ref{table:Energy}, we find values between $E_{bulk,min}=0.2\times 10^{56}$ erg and $E_{bulk,max}=1\times 10^{57}$ erg in bulk kinetic energy in these galaxies. This corresponds to a small fraction of the mechanical energy carried by the radio jet, a few percent or less (Tab.~\ref{table:Energy}), if we assume typical jet lifetimes on the order of $10^{6-7}$ yrs. This age range is expected from the dynamical time of our radio jets, assuming a jet advance speed of $0.1 \ c$ \citep{Wagner2012}. It is consistent with the general range of a few $10^6$ yrs found by \citet{blundell99} from spectral aging considerations of powerful HzRGs \citep[see also][]{kaiser97}.
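The pixel sum above is straightforward to reproduce numerically. The following Python sketch is a minimal illustration; the gas mass, surface-brightness weights, and velocities are placeholder values for demonstration, not our measured maps:

```python
import numpy as np

M_SUN = 1.989e33  # solar mass in grams

def bulk_kinetic_energy(m_ion_g, sb_map, v_map_cms):
    """Surface-brightness-weighted bulk kinetic energy (erg):
    E_bulk = 1/2 * M_ion^corr * sum_i [Sigma(i)/Sigma_tot] * v(i)^2."""
    w = sb_map / sb_map.sum()             # Sigma(i) / Sigma_tot
    return 0.5 * m_ion_g * np.sum(w * v_map_cms**2)

# Toy maps: 1e8 M_sun of ionized gas, per-pixel offsets of a few 100 km/s
sb = np.array([1.0, 2.0, 1.5, 0.5])                 # line surface brightness
v = np.array([100.0, 200.0, -150.0, 300.0]) * 1e5   # km/s -> cm/s
E_bulk = bulk_kinetic_energy(1e8 * M_SUN, sb, v)    # a few 1e55 erg
```

With these toy inputs the estimate lands near the lower end of the range quoted above, as expected for a uniform low-mass, moderate-velocity map.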
Thus, energetically, it would be possible for jets with the observed kinetic power to accelerate the gas to the observed velocities. In particular, this is the case for J144932$-$385657, where the bulk kinetic energy amounts to 4\% of the kinetic energy carried by the radio jet. \subsection{Random motion and kinetic energy} The large line widths are perhaps the most outstanding kinematic property of the gas in our galaxies. They are similar to those found in the most powerful radio galaxies at z$\sim$2 \citep[][]{nesvadba08}, and greater by factors of 2$-$3 than in other types of massive high-z galaxies. Each source shows a range of line widths, and we deliberately compare them with the broadest widths near the nucleus because we want to quantify the impact of the AGN on the surrounding gas, and this is the gas that we expect to be most affected by the radio jet. Comparing the amplitudes of the velocity gradients, $\Delta v/2$, with Gaussian line widths $\sigma=FWHM/2.355$, we generally find ratios of $v/\sigma$ $=$0.3$-$0.8 (NVSS~J144932$-$385657 has a value of 2.3), compared to $v/\sigma =$1$-$3.5 in the sample of \citet{buitrago13}. Rotationally supported disk galaxies in the nearby universe typically have $v/\sigma \sim$10. Individual values of our sources are listed in Tab.~\ref{table:ExtinctionElectronDensityMassEnergy}. At z$\sim$2 we cannot infer directly whether these line widths reflect spatially unresolved velocity offsets on smaller scales, strong turbulent motion, or a combination of both. In all cases, this suggests the presence of an additional source of kinetic energy in our galaxies that is stirring the gas up, and which is absent in the general population of very massive high-z galaxies. Finding v/$\sigma\lesssim$1 also implies that the gas, even if it is in a rotating disk, cannot be in a stable configuration. 
Except for implausibly high inclination angles, gas in the line wings is at velocities above the local escape velocity from the disk, and is therefore not gravitationally bound. Nevertheless, most of the gas may be bound to the galaxy itself, since the escape velocity of galaxies of a few $10^{11}$ M$_{\odot}$ is about 700 km s$^{-1}$ \citep[][]{nesvadba06}. This would imply that most of the gas that is being lifted off the disk will slow down as it rises to larger galactic radii, and ultimately rain back toward the center of the radio galaxy \citep[e.g.,][]{Alatalo2011}. This is particularly the case if turbulence is indeed the cause of the line broadening, since turbulent dissipation times are very short \citep[][]{maclow99}. Disks with low v/$\sigma$ values that are highly turbulent and not gravitationally bound have been studied in a few individual cases at low redshift \citep[][]{Alatalo2011,Nesvadba2011c}, and are characterized by large line widths and complex line profiles as we find here. These disks are very different from classical thin disks, for example, in spiral galaxies. The complex line profiles and low volume filling factors suggest the gas is generally filamentary and diffuse, and cannot form gravitationally bound clouds and stars. Densities even in the molecular gas traced by CO millimeter line emission are only on the order of a few 1000 cm$^{-3}$ \citep[][]{nesvadba10}, not very different from those we find in the ionized gas of HzRGs \citep[][]{collet14, nesvadba08}. To understand the peculiar properties of these disks, in particular the absence of clear signatures of ongoing and past star formation, \citet{Nesvadba2011c} proposed that the dense gas in these disks may have formed from the diffuse ISM through the pressure enhancement in the cocoon inflated by the radio jet. 
Although this scenario requires more observations at low redshift to be confirmed, the broad properties of our targets suggest that they may be fundamentally similar to these gas-rich radio galaxies in the nearby Universe. It is interesting to compare the kinetic energy in random motion with that in bulk motion. To constrain the energy of random motion (which we loosely refer to as turbulent energy) we set \begin{equation} \mathrm{E_{turb}} = \frac{3}{2} \times \mathrm{M_{ion}^{corr}} \sum_{i \ \in \ \mathrm{EELR}} \frac{\Sigma(i)}{\Sigma_{\mathrm{tot}}} \times \sigma(i)^2 \end{equation} with velocity dispersion $\sigma$, i.e., FWHM/2.355. With the line widths measured previously, we find values between $E_{turb,min}=2.6\times 10^{57}$ erg and $E_{turb,max}=3.1\times 10^{58}$ erg in turbulent kinetic energy. For typical jet ages of a few $10^{6-7}$ yrs, this corresponds to energy injection rates of a few $10^{42}$ to $10^{44}$ erg s$^{-1}$ (our precise numbers are for a fiducial $10^7$ yrs). The turbulent energy corresponds to a few tenths of a percent to a few percent of the mechanical energy carried by the radio jet. Under the assumption that bulk velocities are solely from radial motion, ratios between bulk and turbulent kinetic energy are typically between 0.2\% and 10\%; only NVSS~J144932$-$385657, which has very extended gas, has a ratio of 30\%. The small contribution of the bulk kinetic energy to the total kinetic energy budget of the gas implies that the uncertainties owing to the unknown split between rotational and outflow motion do not have a large impact on the total kinetic energy budget of the ionized gas. It is interesting that the largest ratio of bulk to turbulent motion is found in the galaxy with the most extended emission-line regions, NVSS~J144932$-$385657. More extended emission-line regions would be consistent with an outflow where dissipational losses through turbulence or random motion are less important.
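The turbulent-energy sum can be sketched in the same way as the bulk-energy estimate; again, all inputs below are illustrative placeholders, not our measured maps:

```python
import numpy as np

M_SUN = 1.989e33           # solar mass in grams
FWHM_TO_SIGMA = 1 / 2.355  # Gaussian FWHM -> dispersion sigma

def turbulent_kinetic_energy(m_ion_g, sb_map, fwhm_map_cms):
    """E_turb = 3/2 * M_ion^corr * sum_i [Sigma(i)/Sigma_tot] * sigma(i)^2 (erg)."""
    w = sb_map / sb_map.sum()
    sigma = fwhm_map_cms * FWHM_TO_SIGMA
    return 1.5 * m_ion_g * np.sum(w * sigma**2)

# Toy example: 1e8 M_sun of gas with a uniform FWHM of 800 km/s
sb = np.array([1.0, 2.0, 1.5, 0.5])
fwhm = np.full(4, 800e5)   # 800 km/s in cm/s
E_turb = turbulent_kinetic_energy(1e8 * M_SUN, sb, fwhm)
```

For a uniform line-width map the brightness weights drop out and $E_{turb}$ reduces to $(3/2)\,M\sigma^2$, which serves as a quick sanity check.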
Summing the bulk and turbulent kinetic energy, we find that the AGN deposits 3$-$10\% of the jet kinetic energy in the gas. This is similar to the range found in very powerful HzRGs \citep[][]{nesvadba08}, but a significant difference is that the energy in random motion appears to be an order of magnitude greater than that in ordered motion (on kpc scales), whereas both are roughly equal in very powerful sources. Depending on the nature of the `turbulent' motion, this can have one of two consequences. Either the line broadening is caused by strong velocity changes over very small scales, in which case we may underestimate the intrinsic velocity of the gas because of beam smearing effects and perhaps blending of high-velocity gas components with more quiescent gas in an underlying disk; low-redshift equivalents include the compact radio galaxies of \citet{holt08}. Or the broadening is indeed from turbulent motion, in which case even fairly high-velocity gas may decelerate rapidly, because turbulent dissipation times are likely short \citep[][]{maclow99}, and rain back onto the disk. It appears uncertain in either case whether significant parts of the observed gas will escape from these galaxies in the current radio activity cycle, unless our radio sources undergo phases of significantly higher power during the current activity cycle, and we have merely observed them in an atypically low-power phase. In either case, our findings are at odds with the simple scenario whereby AGN feedback acts mainly by removing the gas \citep[e.g.,][]{dimatteo05}. A cyclical model whereby gas cools down and accumulates at small radii near the supermassive black hole before igniting another feedback episode, as has previously been suggested for the central galaxies of massive galaxy clusters \citep[][]{PizzolatoSoker2005}, may therefore be more appropriate here. Statistical evidence for recurrent AGN activity has recently also been discussed by \citet{hickox14}.
\subsection{Comparison with hydrodynamic jet models} \label{ssec:wagner} We now compare our observations with the hydrodynamic models of \citet{Wagner2012}, who quantify the energy transfer from jets into the ambient gas in radio jet cocoons. They assume a clumpy, fractal two-phase medium with a density distribution set by interstellar turbulence like that found by \citet{padoan11}. Their 29 models cover jet powers of $10^{43-46}$ erg s$^{-1}$, well matched to our sources, and Eddington ratios of $\eta=10^{-4}$ to $1$. They find that 30\%\ of the jet energy is transferred into the gas, mainly through ram pressure transfer of partially thermalized gas streams occurring in low-density channels between denser clouds. This energy transfer corresponds approximately to the ratio of gas kinetic to jet mechanical energy that we measure in our data. The values found in the simulations are somewhat higher than the values we find; however, our data can only provide a lower limit to the actual energy transfer since we do not observe all gas phases. In particular the hot, X-ray emitting gas and the molecular gas are missing. Likewise, uncertainties may arise from the details of the density distribution of the gas, filling factors, etc., since the ISM properties of high-redshift galaxies are not very well constrained yet. In particular, cloud sizes -- a parameter that is not observationally constrained at z$\sim$2 -- appear to play a major role \citep[][]{Wagner2012}. Overall, given these uncertainties, we consider the correspondence between the model and data of an energy transfer of a few percent very encouraging. This correspondence is also illustrated in Fig.~\ref{fig:wagnerfig}, which was inspired by Fig.~11 in \citet{Wagner2012}. It shows the expected gas velocities as a function of radio power for a range of Eddington ratios.
Our jets span the range of about $10^{46-47}$ erg s$^{-1}$, which for Eddington ratios of 0.1 to 0.01 corresponds to velocities of about 250$-$500 km s$^{-1}$. This is somewhat higher than the velocity offsets we observe, which might in part be attributable to orientation effects and beam smearing. However, what we plot in Fig.~\ref{fig:wagnerfig} are not the velocity offsets, but the Gaussian widths of the integrated spectral lines, i.e., the overall luminosity-weighted range of velocities from random gas motion. This may further underline that a large part of the energy injected by the radio jet in our galaxies does not result in an ordered, large-scale outflow, but is either causing local small-scale bulk motion or is ultimately being transformed into turbulent motion. As \citet{Wagner2012} point out, estimating the long-term behavior of the gas kinematics over kpc scales is not possible with the current set of simulations, which only follow the evolution of the gas in the first $10^3$ kyrs. The hatched area in Fig.~\ref{fig:wagnerfig} indicates the range of velocity dispersions measured by \citet{buitrago13} and illustrates that for jets less powerful than those of our sources, it will be difficult to identify the fingerprints of high-redshift AGNs with observations of the kinematics of warm ionized gas in the presence of other processes. \subsubsection{Radiative quasar feedback?} Another possible feedback mechanism that has recently been widely discussed in the literature is radiative feedback from the bolometric energy output of AGNs. It is attractive to explain the black hole bulge scaling relationships via radiative processes because the accretion rates implied by number counts of optically selected quasars appear to be a good match to the local black hole demographics \citep[e.g.,][]{yu02}. In spite of recent claims of AGN-driven bubbles and winds in bright, low-redshift quasars \citep[e.g.,][]{liu13}, statistical evidence is still rather scarce.
For example, so far it has not been possible to find differences between the star formation rates of X-ray selected AGN hosts and those of galaxies without AGNs \citep[e.g.,][]{mullaney12}. Radiation pressure has received particular attention in the recent literature. Each time a photon scatters on a dust (or gas) particle in the ISM it produces a small recoil in the ISM particle, with a net momentum transfer for fluxes that are high enough, in particular when the gas is optically thick, so that many interactions happen per photon. \citet{murray05} derived analytic equations to approximate the expected gas velocities as a function of quasar luminosity. Using their Equation~17, we set \begin{eqnarray} V(r) = 2\sigma \sqrt{ \left(\frac{{\cal L}}{{\cal L}_M} - 1\right) \ln{\frac{r}{R_0}}}, \end{eqnarray} where $\sigma$ is the stellar velocity dispersion of the host galaxy, which we estimate below; ${\cal L}$ is the quasar luminosity; $R_0$ is the launch radius of the outflow; and $r$ the radius at which the velocity of the wind is measured; ${\cal L}_M = \frac{4\ f_g\ c} {G} \sigma^4$ is a critical luminosity that depends on the stellar velocity dispersion $\sigma$, the speed of light $c$, gravitational constant $G$, and the gas fraction $f_g$. For ${\cal L}>{\cal L}_M$, radiation pressure may launch a wind. These equations are appropriate for the limiting case of an optically thick wind, in which case the interaction is extremely efficient. For these estimates we assume a launch radius of the wind ($R_0$) of a few 100 pc \citep[the sizes of the circumnuclear molecular disks found in low-redshift ULIRGs][ and the lowest value in the AGN feedback models of \citealt{ciotti09,ciotti10}]{downes98}, and an outflow radius, $r$, of 5~kpc, roughly the radius that we spatially resolve. We have an $L_{bol}$ estimate for J201943$-$364542 from H$\alpha$, which gives $5 \times 10^{45}$ erg s$^{-1}$.
For a fiducial mass of $5\times 10^{10}$ M$_{\odot}$ in a pressure-supported isothermal sphere (approximated by the lowest dynamical mass estimate found in \S\ref{s:additionalLineEmittersTracersOfEnvironmentDynamicalProbes}), we estimate the velocity dispersion as $\sigma=\sqrt{M\ G / (5\times R)}$, with mass $M$, gravitational constant $G$, and radius $R$. For higher mass estimates, the critical luminosity will also be greater. We assume $R=3$~kpc, which gives a velocity dispersion of $\sigma=$210 km s$^{-1}$. The critical luminosity to launch a wind, ${\cal L}_M$, for this velocity dispersion and a gas fraction of 10\% is ${\cal L}_M = 3.5\times 10^{46}$ {\rm erg\ s}$^{-1}$. This suggests that the AGNs in our sources, unless J201943$-$364542 is atypically weak, do not have sufficiently powerful quasars to launch fast outflows. We should also note that the Murray et al. estimate is, strictly speaking, only valid for buried quasars, whereas the overall low extinction in our sources (\S\ref{ssec:extinction}) suggests that our galaxies are not very dusty. Hence, the actual energy transfer from the AGNs to the gas should be lower than estimated here. For this estimate, we used a very low value for the fiducial mass. Had we used the average mass of the HzRGs from the Herge sample instead \citep[][]{seymour07,debreuck10}, $2\times 10^{11}$ M$_{\odot}$, we would have found a circular velocity of 420 km s$^{-1}$, and a critical luminosity of $5.5\times 10^{47}$ erg s$^{-1}$.
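The threshold argument is easy to reproduce. This Python sketch evaluates ${\cal L}_M = 4 f_g c \sigma^4 / G$ in cgs units for the two velocity dispersions quoted in this section; the input values come from the text, and the comparison with the quoted bolometric luminosity is only a rounding check:

```python
C_CGS = 2.998e10   # speed of light [cm/s]
G_CGS = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]

def critical_luminosity(sigma_cms, f_gas=0.1):
    """Murray et al. (2005) critical luminosity L_M = 4 f_g c sigma^4 / G [erg/s]."""
    return 4.0 * f_gas * C_CGS * sigma_cms**4 / G_CGS

L_M_low = critical_luminosity(210e5)    # sigma = 210 km/s -> ~3.5e46 erg/s
L_M_high = critical_luminosity(420e5)   # sigma = 420 km/s -> ~5.5e47 erg/s
L_bol = 5e45                            # erg/s, estimate for J201943-364542
wind_possible = L_bol > L_M_low         # False: the quasar sits below threshold
```

Because ${\cal L}_M\propto\sigma^4$, doubling $\sigma$ raises the threshold by a factor of 16, which is why the higher fiducial mass is so much harder to satisfy.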
\section{A generic phase in the evolution of massive high-redshift galaxies} \label{sec:ensemble} The radio luminosity functions of \citet{Willott2001} and \citet{Gendre2010} suggest that galaxies with a radio power of $10^{27-28}$ W Hz$^{-1}$ have co-moving number densities of $10^{-(7-6)}$~Mpc$^{-3}$ at z$\sim2$, whereas the general population of massive high-redshift galaxies has densities of a few $10^{-5}$ Mpc$^{-3}$ \citep[e.g.,][for M$_{stellar}\ge 10^{10.5}$ M$_{\odot}$]{mancini09}. This suggests that phases of radio activity in this power range could be a generic phase in the evolution of massive galaxies at these redshifts, if the activity timescales are short enough, a few $10^7$ yrs. To estimate the duty cycle correction, we assumed a typical formation epoch of z$\sim$10 for massive high-z galaxies \citep[e.g.,][]{rocca13}, which implies an age of about 2~Gyrs for massive galaxies at z$\sim2$. The ratio of co-moving number densities of radio galaxies and massive galaxies overall would then imply a duty cycle of about 100, i.e., an activity timescale of a few $10^7$ yrs, which is consistent with the young ages of high-redshift radio sources estimated from adiabatic jet expansion models \citep[][see also \citealt{kaiser97}]{blundell99}, and the empirical finding of \citet{sajina07} that one-third of the infrared-selected starburst galaxies at z$\sim$2 have extended radio jets with powers of a few $10^{25}$ W Hz$^{-1}$. It thus appears possible that the majority of massive galaxies experienced such a phase of moderate radio activity if these phases were sufficiently short \citep[a few $10^7$ yrs, see also][]{venemans07, nesvadba08}. By studying a sample of radio-selected galaxies, we might expect to predominantly probe galaxies in dense environments \citep[][]{best00,hatch14}.
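The duty-cycle arithmetic above can be condensed into a few lines; the specific densities chosen below are assumptions within the quoted ranges:

```python
# Co-moving number densities (midpoints of the ranges quoted in the text)
n_radio = 10 ** -6.5      # Mpc^-3, radio galaxies with P ~ 1e27-28 W/Hz
n_massive = 3e-5          # Mpc^-3, massive galaxies ("a few 1e-5")
galaxy_age_yr = 2e9       # age at z~2 for a formation epoch of z~10

duty_cycle = n_massive / n_radio            # ~ 100
t_active_yr = galaxy_age_yr / duty_cycle    # ~ a few 1e7 yr
```

Any choice within the quoted density ranges yields a duty cycle of order 100 and hence activity timescales of a few $10^7$ yrs.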
Eight of our sources are part of the CARLA survey \citep[][]{wylezalek13} with Spitzer, which measures the density of galaxies with mid-infrared colors consistent with being at redshifts z$\ge$1.3 around HzRGs. Only one source, NVSS~J204601$-$335656, has an environment that exceeds the density of galaxies in the field by more than 3$\sigma$. This suggests that environment does not likely play a dominant role, particularly as massive galaxies generally prefer dense environments \citep[e.g.,][]{baldry06}. \section{Summary} \label{ssec:summary} We presented a combined SINFONI and ATCA study of 18 high-redshift radio galaxies at z$\sim2-3.5$ taken from the MRCR-SUMSS and CENSORS surveys. Their radio power is in the range of a few $10^{26-27}$ W Hz$^{-1}$ at 1.4~GHz, $1-2$ orders of magnitude lower than in the most powerful high-redshift radio galaxies known, but higher than that produced by the most intense high-z starbursts alone. Our goal is to investigate the ability of moderately powerful radio jets to accelerate and heat the gas of their host galaxies, thereby regulating star formation, and to assess the significance of this process. In the near-infrared imaging spectroscopy observations of our sample, we typically detect faint, unresolved continuum emission around which we find extended emission-line regions (clearly seen in [OIII] and H$\alpha$). The kinematic properties of this ionized gas are diverse among our sample: some sources (e.g., NVSS~J012932$-$385433, CEN~072) show large-scale and smooth velocity gradients, but of small amplitude (typically $\lesssim 400$~km s$^{-1}$), while other sources have very irregular velocity fields (e.g., NVSS~J002431$-$303330). A common feature of all sources in our sample is their large velocity dispersions, FWHM~=~$400 - 1000$~km s$^{-1}$.
The small ratios of bulk velocities to velocity dispersions indicate that the observed ionized gas cannot be in a stable rotating disk, even in cases where smooth velocity gradients are detected. Our estimates of ionized gas masses are in the range of a few $10^{8}$~M$_{\odot}$, which is at least one order of magnitude less than was previously found in the most powerful radio galaxies. For two sources, we found distinct emission-line regions in the vicinity of the radio galaxy. Assuming they are associated with satellite galaxies, we used them to estimate the dynamical mass of the radio galaxy, and we found a few $10^{11}$~M$_{\odot}$, which is comparable to the typical mass of HzRGs \citep[e.g.,][]{Seymour2007, deBreuck2010}. In one source (NVSS~J201943$-$364542), we find a broad component of H$\alpha$ with FWHM~$\sim$~8250~km s$^{-1}$. We interpret this as the signature of the nuclear broad-line region and derive the mass of the supermassive black hole ($\rm M_{BH} \sim 2.1 \times 10^{9}$~M$_{\odot}$) and the bolometric luminosity of the AGN ($5.3 \times 10^{45}$ erg s$^{-1}$). This suggests an Eddington ratio of $\sim$~2~\%. We explore different possible sources of energy that can explain the large observed kinetic energy in the ionized gas: transfer from the radio jet or radiation pressure from the large bolometric luminosity of the AGNs. We show that an energy transfer from the radio jet to the ionized gas is a plausible scenario. Our estimates demonstrate that a fraction of the radio jet power is sufficient to power the kinematics of the ionized gas. Our observations are in agreement with the predictions of hydrodynamical models \citep[see, e.g.,][]{Wagner2012}. \section*{Acknowledgments} We are very grateful to the staff at Paranal for having carried out the observations on which our analysis is based and to the staff at the ATCA for their hospitality during our visitor-mode observations. We thank the anonymous referee for inspiring comments which helped improve the paper.
NPHN wishes to thank G.~Bicknell, C.~Tadhunter, and J.~Silk for interesting discussions. She also wishes to thank C.~Harrison for interesting discussions and for pointing out a missing factor of 1/3 in her previous estimates of warm ionized gas masses. CC acknowledges support from the Ecole Doctorale Astronomie \& Astrophysique de l'Ile de France. Parts of this research were conducted by the Australian Research Council Centre of Excellence for All-sky Astrophysics (CAASTRO), through project number CE110001020. \bibliographystyle{aa}
\section{Introduction} In this paper we investigate utility maximization problems for a financial market where asset prices follow a diffusion process with an unobservable Gaussian mean reverting drift modelled by an Ornstein-Uhlenbeck process. It is a companion paper to \cite{Gabih et al (2022) PowerFixed} where we examine in detail the maximization of expected power utility of terminal wealth which is treated as a stochastic optimal control problem under partial information. A special feature of that paper is that for the construction of optimal portfolio strategies investors estimate the unknown drift not only from observed asset prices. They also incorporate external sources of information such as news, company reports, ratings or their own intuitive views on the future asset performance. These outside sources of information are called expert opinions. In the present paper we focus on the well posedness of the above stochastic control problem which in the literature is often overlooked or taken for granted. For Gaussian drift processes which are potentially unbounded, well posedness in general cannot be guaranteed for utility functions which are not bounded from above. This is the case for log-utility and for power utility with relative risk aversion smaller than that of log-utility. For log-utility well posedness can be shown quite easily and holds without restrictions on the model parameters. However, the case of power utility is much more demanding and leads to restrictions on the choice of model parameters such as the investment horizon and parameters controlling the variance of the asset price and drift processes. This phenomenon was already observed in Kim and Omberg \cite{Kim and Omberg (1996)} for a financial market with an observable drift modeled by an Ornstein-Uhlenbeck process. They coined the terminology \textit{nirvana strategies}. Such strategies generate in finite time a terminal wealth with a distribution leading to infinite expected utility.
Note that this is a property of the distribution of terminal wealth, and realizations of terminal wealth need not be infinite. The same holds for the generating strategies which might even be suboptimal. That phenomenon was also observed in Korn and Kraft \cite[Sec. 3]{Korn and Kraft (2004)} who coined it ``I-unstability'', in Angoshtari \cite{Angoshtari2013,Angoshtari2016} who studied power utility maximization problems and their well posedness for financial market models with cointegrated asset price processes, and in Battauz et al.~\cite{Battauz et al (2017)} for markets with defaultable assets. Kim and Omberg also pointed out that financial market models allowing investors to attain nirvana do not properly reflect reality. Thus, there are not only mathematical reasons to exclude combinations of model parameters allowing for attaining nirvana, i.e., not ensuring well posed optimization problems. This problem is addressed in the present paper and we derive sufficient conditions on the model parameters leading to bounded maximum expected utility of terminal wealth. \smallskip Portfolio selection problems for market models with partial information on the drift have been intensively studied in the last years. We refer to Lakner \cite{Lakner (1998)} and Brendle \cite{Brendle2006} for models with Gaussian drift and to Rieder and Bäuerle \cite{Rieder_Baeuerle2005}, Sass and Haussmann \cite{Sass and Haussmann (2004)} for models in which the drift is described by a continuous-time hidden Markov chain. A generalization of these approaches and further references can be found in Björk et al. \cite{Bjoerk et al (2010)}.
Utility maximization problems for investors with logarithmic preferences in market models with non-observable Gaussian drift process and discrete-time expert opinions are addressed in a series of papers \cite{Gabih et al (2014),Gabih et al (2019) FullInfo,Sass et al (2017),Sass et al (2021),Sass et al (2022)} of the present authors and of Sass and Westphal. The case of continuous-time expert opinions and power utility maximization is treated in a series of papers by Davis and Lleo, see \cite{Davis and Lleo (2013_1),Davis and Lleo (2020)}. For models with drift processes described by continuous-time hidden Markov chains and power utility maximization we refer to Frey et al.~\cite{Frey et al. (2012),Frey-Wunderlich-2014}. Finally, explicit solutions of the power utility maximization problem addressed in this paper can be found in our companion paper \cite{Gabih et al (2022) PowerFixed}. \smallskip The paper is organized as follows. In Section \ref{market_model} we introduce the financial market model with partial information on the drift and formulate the portfolio optimization problem. The well posedness of that problem is studied in Section \ref{sec_wellposedness}. The main result is Theorem \ref{theo_bound_V}, providing an upper bound for the expected utility of terminal wealth expressed in terms of the solution to some matrix Riccati differential equation and involving the initial value of the non-observable drift. That result allows us to deduce sufficient conditions on the model parameters ensuring the well posedness of the utility maximization problem under full information. The respective conditions for the case of partial information follow from projecting the above upper bound onto the $\sigma$-algebra representing the initial information of the partially informed investor. These conditions become quite explicit for market models with a single risky asset, which are considered in Subsection \ref{WellPosedSpecialCase}.
Section \ref{numeric_result} illustrates the theoretical findings by results of some numerical experiments and visualizes the derived restrictions on the model parameters. The appendix collects proofs which are deferred from the main text. \smallskip \paragraph{Notation} Throughout this paper, we use the notation $I_d$ for the identity matrix in $\R^{d\times d}$, $0_{d}$ denotes the null vector in $\R^d$, and $0_{d\times m}$ the null matrix in $\R^{d\times m}$. For a symmetric and positive-semidefinite matrix $A\in\R^{d\times d}$ we call a symmetric and positive-semidefinite matrix $B\in\R^{d\times d}$ the \emph{square root} of $A$ if $B^{\top}B=A$. The square root is unique and will be denoted by $A^{1/2}$. For a generic process $X$ we denote by $\mathbb{G}^X$ the filtration generated by $X$. \section{Financial Market and Optimization Problem} \medskip \label{market_model} \subsection{Price Dynamics} \label{PriceDynamics} We model a financial market with one risk-free and multiple risky assets. The setting is based on Gabih et al.~\cite{Gabih et al (2014),Gabih et al (2019) FullInfo} and Sass et al.~\cite{Sass et al (2017),Sass et al (2022),Sass et al (2021)}. For a fixed date $T>0$ representing the investment horizon, we work on a filtered probability space $(\Omega,\mathcal{G},\mathbb{G},\P)$, with filtration $\mathbb{G}=(\mathcal {G}_t)_{t \in [0,T]}$ satisfying the usual conditions. All processes are assumed to be $\mathbb{G}$-adapted. We consider a market model for one risk-free bond with constant price $S^0_t=1$ and $d$ risky securities whose return process $R=(R^{1},\ldots,R^{d})$ is defined by \begin{align} dR_t=\mu_t\, dt+\sigma_R\, dW^{R}_t, \label{ReturnPro} \end{align} for a given $d_1$-dimensional $\mathbb{G}$-adapted Brownian motion $W^{R}$ with $d_1\geq d$. The volatility matrix $\sigma_R\in\mathbb R^{d\times d_1}$ is assumed to be constant over time such that $\Sigma_{R}:=\sigma_R\sigma_R^{\top}$ is positive definite.
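The symmetric positive-semidefinite square root $A^{1/2}$ introduced in the notation above can be computed from the spectral decomposition; the following NumPy sketch illustrates this (the example matrix is arbitrary):

```python
import numpy as np

def sqrtm_psd(a):
    """Unique symmetric PSD square root B of a symmetric PSD matrix a:
    for the eigendecomposition a = V diag(l) V^T, B = V diag(sqrt(l)) V^T."""
    vals, vecs = np.linalg.eigh(a)
    vals = np.clip(vals, 0.0, None)   # guard against tiny negative round-off
    return (vecs * np.sqrt(vals)) @ vecs.T

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
B = sqrtm_psd(A)   # B is symmetric, and B @ B recovers A
```

Since $B$ is symmetric, the defining relation $B^{\top}B=A$ coincides with $B^2=A$, which is what the construction above verifies.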
In this setting the price process $S=(S^1,\ldots,S^{d})$ of the risky securities reads as \begin{align} dS_t&=\mathrm{diag}(S_t)\, dR_t,~~ S_0=s_0, \label{stockmodel} \end{align} with some fixed initial value $s_0=(s_0^1,\ldots,s_0^d)$. Note that for the solution to the above SDE it holds that \begin{align*} \log S_t^{i}-\log s_0^{i} &= \int\limits_0^t \mu_s^{i}ds +\sum\limits_{j=1}^{d_1}\Big( \sigma_R^{ij}W_t^{R,j}-\frac{1}{2} (\sigma_R^{ij})^2 t\Big) =R_t^{i}-\frac{1}{2}\sum\limits_{j=1}^{d_1} (\sigma_R^{ij})^2 t ,\quad i=1,\ldots,d. \end{align*} So we have the equality $\mathbb{G}^R = \mathbb{G}^{\log S} = \mathbb{G}^S$. This is useful since it allows us to work with $R$ instead of $S$ in the filtering part. The dynamics of the drift process $\mu=(\mu_t)_{t\in[0,T]}$ in \eqref{ReturnPro} are given by the stochastic differential equation (SDE) \begin{eqnarray} \label{drift} d\mu_t=\kappa(\overline{\mu}-\mu_t)\, dt+\sigma_{\mu}\, dW^{\mu}_t, \end{eqnarray} where $\kappa\in\mathbb R^{d\times d}$, $\sigma_{\mu}\in\mathbb R^{d\times d_2}$ and $\overline{\mu}\in\mathbb R^{d} $ are constants such that the matrices $\kappa$ and $\Sigma_{\mu}:=\sigma_{\mu}\sigma_{\mu}^{\top}$ are positive definite, and $W^{\mu}$ is a $d_2$-dimensional Brownian motion with $d_2\geq d$. For the sake of simplification and shorter notation we assume that the Wiener processes $W^{R}$ and $W^{\mu}$ driving the return and drift process, respectively, are independent. We refer to Brendle \cite{Brendle2006}, Colaneri et al. \cite{Colaneri et al (2021)} and Fouque et al.~\cite{Fouque et al. (2015)} for the general case. Here, $\overline{\mu}$ is the mean-reversion level, $\kappa$ the mean-reversion speed and $\sigma_{\mu}$ describes the volatility of $\mu$.
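For intuition, the return/drift pair \eqref{ReturnPro} and \eqref{drift} can be simulated with a simple Euler-Maruyama scheme. The sketch below treats the scalar case $d=d_1=d_2=1$ with purely illustrative, uncalibrated parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (not calibrated): scalar OU drift and returns,
#   dR_t  = mu_t dt + sigma_R dW^R_t
#   dmu_t = kappa (mu_bar - mu_t) dt + sigma_mu dW^mu_t
kappa, mu_bar, sigma_mu, sigma_R = 2.0, 0.05, 0.2, 0.25
m0, q0 = 0.05, 0.01          # mu_0 ~ N(m0, q0)
T, n = 1.0, 1000
dt = T / n

mu = np.empty(n + 1)
R = np.empty(n + 1)
mu[0] = rng.normal(m0, np.sqrt(q0))
R[0] = 0.0
for k in range(n):
    # independent Brownian increments for W^R and W^mu
    dW_R, dW_mu = rng.normal(0.0, np.sqrt(dt), size=2)
    R[k + 1] = R[k] + mu[k] * dt + sigma_R * dW_R
    mu[k + 1] = mu[k] + kappa * (mu_bar - mu[k]) * dt + sigma_mu * dW_mu
```

The mean reversion keeps $\mu$ fluctuating around $\overline{\mu}$ with stationary variance $\sigma_{\mu}^2/(2\kappa)$, while the returns accumulate the drift plus independent observation noise.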
The initial value $\mu_0$ is assumed to be a normally distributed random variable independent of $W^{\mu}$ and $W^{R}$ with mean $\overline{m}_0\in \R^{d}$ and covariance matrix $\overline{q}_0\in\mathbb R^{d\times d}$ assumed to be symmetric and positive semi-definite. \subsection{Expert Opinions} \label{Expert_Opinions} We assume that investors observe the return process $R$ but they observe neither the factor process $\mu$ nor the Brownian motion $W^{R}$. They do, however, know the model parameters such as $\sigma_R,\kappa, \overline{\mu}, \sigma_{\mu} $ and the distribution $\mathcal{N}(\overline{m}_0,\overline{q}_0)$ of the initial value $\mu_0$. Information about the drift $\mu$ can be drawn from observing the returns $R$. A special feature of our model is that investors may also have access to additional information about the drift in form of \textit{expert opinions} such as news, company reports, ratings or their own intuitive views on the future asset performance. The expert opinions provide noisy signals about the current state of the drift arriving at known deterministic time points $0=t_0<t_1<\ldots<t_{N-1}<T$. For the sake of convenience we also write $t_N = T$ although no expert opinion arrives at time $t_N$. The signals or ``expert views'' at time $t_k$ are modelled by $\R^d$-valued Gaussian random vectors $Z_k=(Z_k^1,\ldots,Z_k^{d})^{\top}$ with \begin{align} \label{Expertenmeinungen_fest} Z_k=\mu_{t_k}+{\Gamma}^{\frac{1}{2}}\varepsilon_k,\quad k=0,\ldots,N-1, \end{align} where the matrix $\Gamma\in\R^{d\times d}$ is symmetric and positive definite. Further, $(\varepsilon_k)_{k=0,\ldots,N-1}$ is a sequence of independent standard normally distributed random vectors, i.e., $\varepsilon_k\sim \mathcal{N}(0,I_d)$. It is also independent of both the Brownian motions $W^R, W^\mu$ and the initial value $\mu_0$ of the drift process. Thus, given $\mu_{t_k}$, the expert opinion $Z_k$ is $\mathcal{N}(\mu_{t_k},\Gamma)$-distributed.
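To make the drift and expert-opinion model concrete, the following minimal sketch (our own illustration, not part of the model specification; the function name, time grid and parameter values are arbitrary) simulates the scalar case $d=1$ of \eqref{drift} and \eqref{Expertenmeinungen_fest}, using the exact Gaussian transition of the Ornstein--Uhlenbeck process:

```python
import math
import random

def simulate_drift_and_experts(kappa, mu_bar, sigma_mu, gamma, t_grid, rng):
    """Simulate the scalar OU drift on t_grid via its exact Gaussian
    transition, starting from the stationary law N(mu_bar, sigma_mu^2/(2*kappa)),
    and return the path together with expert views Z_k = mu_{t_k} + sqrt(gamma)*eps_k."""
    var_stat = sigma_mu ** 2 / (2.0 * kappa)
    mu = [rng.gauss(mu_bar, math.sqrt(var_stat))]
    for k in range(1, len(t_grid)):
        h = t_grid[k] - t_grid[k - 1]
        decay = math.exp(-kappa * h)
        mean = mu_bar + decay * (mu[-1] - mu_bar)
        var = var_stat * (1.0 - decay ** 2)
        mu.append(rng.gauss(mean, math.sqrt(var)))
    # Expert views: unbiased estimates of mu_{t_k} with variance gamma
    # (the scalar analogue of the matrix Gamma).
    z = [m + math.sqrt(gamma) * rng.gauss(0.0, 1.0) for m in mu]
    return mu, z
```

Starting from the stationary law, each $\mu_{t_k}$ again has variance $\sigma_\mu^2/(2\kappa)$ and $Z_k-\mu_{t_k}$ has variance $\Gamma$, which can be checked by Monte Carlo.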
So, $Z_k$ can be considered as an unbiased estimate of the unknown state of the drift at time $t_k$. The matrix $\Gamma$ is a measure of the expert's reliability. Its diagonal entries $\Gamma^{ii}$ are just the variances of the expert's estimate of the drift component $\mu^i$ at time $t_k$. The larger $\Gamma^{ii}$, the less reliable is the expert's estimate. Note that one may also allow for relative expert views where experts give an estimate for the difference in the drift of two stocks instead of absolute views. This extension is studied in Sch\"ottle et al.~\cite{Schoettle et al. (2010)}, where the authors show how to switch between these two models for expert opinions by means of a pick matrix. In addition to expert opinions arriving at discrete time points we also consider expert opinions arriving continuously over time as in the BLCT model of Davis and Lleo \cite{Davis and Lleo (2013_1),Davis and Lleo (2020)}. This is motivated by the results of Sass et al.~\cite{Sass et al (2021)} who show that, for increasing arrival intensity $\lambda$ and an expert's variance $\Gamma$ growing linearly in $\lambda$, asymptotically for $\lambda\to \infty $ the information drawn from these expert opinions is essentially the same as the information one gets from observing yet another diffusion process. This diffusion process can then be interpreted as an expert who gives a continuous-time estimation of the current state of the drift. Let this estimate be given by the diffusion process \begin{align}\label{continuous-expert} dJ_t = \mu_t dt +\sigma_{J} dW_t^{J} \end{align} where $W^{J}$ is a $d_3$-dimensional Brownian motion independent of $W^R$ and $W^{\mu}$ with $d_3\geq d$. The volatility matrix $\sigma_{J}\in\mathbb R^{d\times d_3}$ is assumed to be constant over time such that the matrix $\Sigma_{J}:=\sigma_{J}\sigma_{J}^{\top}$ is positive definite.
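The expert views \eqref{Expertenmeinungen_fest} involve the matrix square root $\Gamma^{1/2}$ introduced in the notation paragraph. As an aside (our own illustration, not from the model): for $d=2$ the symmetric positive-semidefinite square root has the closed form $A^{1/2}=\big(A+\sqrt{\det A}\,I_2\big)/\big(\sqrt{\lambda_1}+\sqrt{\lambda_2}\big)$ with eigenvalues $\lambda_1,\lambda_2$ of $A$, which a short sketch can compute and verify:

```python
import math

def sqrtm_2x2(a, b, c):
    """Symmetric PSD square root of the symmetric PSD matrix A = [[a, b], [b, c]].
    Uses A^{1/2} = (A + sqrt(det A) * I) / (sqrt(l1) + sqrt(l2)) with the
    eigenvalues l1, l2 of A; returns the entries (p, q, r) of [[p, q], [q, r]]."""
    tr = a + c
    det = max(a * c - b * b, 0.0)
    s = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    r1 = math.sqrt(max(tr / 2.0 + s, 0.0))   # sqrt of the larger eigenvalue
    r2 = math.sqrt(max(tr / 2.0 - s, 0.0))   # sqrt of the smaller eigenvalue
    denom = r1 + r2                          # equals sqrt(tr + 2*sqrt(det))
    g = math.sqrt(det)                       # equals r1 * r2
    return (a + g) / denom, b / denom, (c + g) / denom
```

For diagonal $A$ this reduces to taking square roots of the diagonal entries; for general $d$ one would use an eigendecomposition instead.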
In \cite{Gabih et al (2022) PowerFixed} we show that based on this model and on the diffusion approximations provided in \cite{Sass et al (2022),Sass et al (2021)} one can find efficient approximate solutions to utility maximization problems for partially informed investors observing high-frequency discrete-time expert opinions. \subsection{Investor Filtration} \label{Investor_Filtration} We consider various types of investors with different levels of information. The information available to an investor is described by the \textit{investor filtration} $\mathbb{F}^H=(\mathcal{F}^H_t)_{t\in[0,T]}$. Here, $H$ denotes the information regime for which we consider the cases $H=R,Z,J,F$, where \[\begin{array}{rcll} \mathbb{F}^{R}&=& (\mathcal {F}_t^{R})_{t \in [0,T]} & \text{with }\mathcal {F}_t^{R}=\sigma(R_s,~ s\le t) \vee \mathcal{F}_0^I, \\[0.5ex] \mathbb{F}^{Z}&=& (\mathcal {F}_t^{Z})_{t \in [0,T]} & \text{with }\mathcal {F}_t^{Z}=\sigma(R_s,~ s\le t,~(t_k,Z_k),~ t_k\le t) \vee \mathcal{F}_0^I, \\[0.5ex] \mathbb{F}^{J}&=& (\mathcal {F}_t^{J})_{t \in [0,T]} & \text{with }\mathcal {F}_t^{J}=\sigma(R_s,J_s,~ s\le t) \vee \mathcal{F}_0^I, \\[0.5ex] \mathbb{F}^F&=& (\mathcal {F}_t^{F})_{t \in [0,T]} & \text{with }\mathcal {F}_t^{F}=\sigma(R_s, \mu_s,~ s\le t). \end{array} \] We assume that the $\sigma$-algebras $\mathcal{F}_t^H$ are augmented by the null sets of $\P$. Above we denote by $\mathcal{F}_0^I$ the $\sigma$-algebra representing the initial information of the partially informed investors ($H=R,J, Z$) about $\mu_0$. We assume that at $t=0$ these investors start with the same initial information, i.e., $\mathcal{F}_0^H=\mathcal {F}_0^I\subset \mathcal F_0^{F}$, $H=R,J,Z$. This initial information $\mathcal{F}_0^I$ models prior knowledge about the drift process at time $t=0$, e.g., from observing returns or expert opinions in the past before the trading period $[0,T]$.
We assume that the conditional distribution of the initial drift value $\mu_0$ given $\mathcal{F}_0^H$ is the normal distribution $\mathcal{N}(m_0,q_0)$ with mean $m_0\in \R^{d}$ and covariance matrix $q_0\in\mathbb R^{d\times d}$ assumed to be symmetric and positive semi-definite. For typical examples we refer to the companion paper \cite[Sec.~2.3]{Gabih et al (2022) PowerFixed}. We call the investor with filtration $\mathbb{F}^H=(\mathcal{F}^H_t)_{t\in[0,T]}$ the $H$-investor. The $R$-investor observes only the return process $R$, the $Z$-investor combines return observations with the discrete-time expert opinions $Z_k$ while the $J$-investor observes the return process together with the continuous-time expert $J$. Finally, the $F$-investor has full information and can observe the drift process $\mu$. For stochastic drift this case is not realistic, but we use it as a benchmark. It will serve as a limiting case for high-frequency expert opinions with fixed covariance matrix $\Gamma $. \subsection{Portfolio and Optimization Problem} We describe the self-financing trading of an investor by the initial capital $x_0>0$ and the $\mathbb{F}^H$-adapted trading strategy $\pi=(\pi_t)_{t\in[0,T]} $ where $\pi_t\in\R^{d}$. Here $\pi_t^{i}$ represents the proportion of wealth invested in the $i$-th stock at time $t$. The assumption that $\pi$ is $\mathbb{F}^H$-adapted models that investment decisions have to be based on information available to the $H$-investor, which he obtains from observing asset prices ($H=R$) combined with expert opinions ($H=Z,J$) or with the drift process ($H=F$). Following the strategy $\pi$ the investor generates a wealth process $(X_t^{\pi})_{t\in [0,T]}$ whose dynamics reads as \begin{eqnarray} \label{wealth_phys} \frac{dX_t^{\pi}}{X_t^{\pi}}= \pi_t^{\top}dR_t &= & \pi_t^{\top}\mu_t\; dt+\pi_t^{\top}\sigma_R\; dW_t^{R},\quad X_0^{\pi}=x_0.
\end{eqnarray} We denote by \begin{equation} \label{set_admiss_0} \mathcal{A}_0^H=\Big\{\pi= (\pi_t)_{t} \colon \pi_t\in\mathbb R^{d}, \text{ $\pi$ is $\mathbb{F}^H$-adapted }, X^\pi_t > 0, {\mathbb{E}}\Big[ \int\nolimits_0^T \|\pi_t\|^2\, dt \Big]<\infty \Big\} \end{equation}% the class of {\em admissible trading strategies}. We assume that the investor wants to maximize the expected utility of terminal wealth for a given utility function $U : \R_+\rightarrow\R$ modelling the risk aversion of the investor. In our approach we will use the function \begin{align} \label{util_def} \mathcal U_{\theta}(x):=\frac{x^{\theta}}{\theta},\quad \theta\in(-\infty,0)\cup(0,1). \end{align} The limiting case $\theta\rightarrow 0$ of the power utility function leads to the logarithmic utility $\mathcal U_0(x):=\ln x$, since we have $\mathcal U_{\theta}(x)-\frac{1}{\theta}=\frac{x^{\theta}-1}{\theta} \xrightarrow[\theta\rightarrow 0]{} \ln x.$ The optimization problem thus reads as \begin{align} \label{opti_org} \mathcal V_0^H(x_0):=\sup\limits_{\pi\in\mathcal{A}_0^{H}} \mathcal D_0^H(x_0;\pi) \quad \text{where}\quad \mathcal D_0^H(x_0;\pi) = {\mathbb{E}}\left[\mathcal U_{\theta}(X_T^{\pi})~|~\mathcal{F}^H_0, X_0^{\pi}=x_0 \right],~\pi\in\mathcal A^{H}_0, \end{align} where we call $\mathcal D_0^H(x_0;\pi)$ the \textit{reward function} or \textit{performance criterion} of the strategy $\pi$ and $ \mathcal V_0^H(x_0)$ the \textit{value function} to given initial capital $x_0$. For $H\neq F$ this is a maximization problem under partial information, since the strategy $\pi$ is required to be adapted to the investor filtration $\mathbb F^{H}$ while the drift coefficient of the wealth equation \eqref{wealth_phys} is not $\mathbb F^{H}$-adapted; it depends on the non-observable drift $\mu$. Note that for $x_0 > 0$ the solution of the SDE \eqref{wealth_phys} is strictly positive. This guarantees that $X_T^{\pi}$ is in the domain of logarithmic and power utility.
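The convergence of the shifted power utility to the logarithmic utility stated above can be checked numerically. A small sketch (our own illustration; the function name is arbitrary):

```python
import math

def shifted_power_utility(x, theta):
    """The power utility U_theta(x) = x^theta / theta shifted by the
    constant 1/theta, i.e. (x^theta - 1)/theta, which tends to ln(x)
    as theta -> 0."""
    return (x ** theta - 1.0) / theta

# For small theta the shifted power utility is close to the logarithm:
for x in (0.5, 1.0, 2.0, 10.0):
    gap = abs(shifted_power_utility(x, 1e-6) - math.log(x))
    assert gap < 1e-5
```

The first-order error is of size $\theta(\ln x)^2/2$, consistent with the expansion $x^\theta = e^{\theta\ln x} = 1+\theta\ln x+O(\theta^2)$.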
\section{Well Posedness of the Optimization Problem} \label{sec_wellposedness} \subsection{Well Posedness} \label{Well Posedness} Solving the utility maximization problem \eqref{opti_org} for the various information regimes $H=R,Z,J,F$ requires conditions under which the optimization problem is well posed. Under these conditions the maximum expected utility of terminal wealth cannot explode in finite time as is the case for the so-called nirvana strategies described in Kim and Omberg \cite{Kim and Omberg (1996)} and Angoshtari \cite{Angoshtari2013}. Such strategies generate in finite time a terminal wealth whose distribution leads to infinite expected utility, although the realizations of terminal wealth may be finite. We start by describing the model of the financial market via the parameter \begin{align} \varrho:=\{T,\theta,d,\sigma_R,\sigma_{\mu},\kappa,\overline{\mu},x_0,\overline{m}_0,\overline{q}_0, m_0,q_0 \} \label{modellparameter_1} \end{align} taking values in a suitably chosen set of parameter values $\mathcal P$. To emphasize the dependence on the parameter $\varrho$ we rewrite \eqref{opti_org} as \begin{align} \label{opti_org2} \mathcal V_0^H(\varrho):=\sup\limits_{\pi\in\mathcal{A}_0^{H}} \mathcal D_0^H(\varrho;\pi) \quad \text{where}\quad \mathcal D_0^H(\varrho;\pi) = {\mathbb{E}}\left[\mathcal U_{\theta}(X_T^{\pi})~|~\mathcal{F}^H_0, \varrho\right],\quad~\pi\in\mathcal A^{H}_0. \end{align} Compared to the notation in \eqref{opti_org} this is a slight abuse of notation, but it facilitates the study of the dependence of the reward and value function on \textit{all} model parameters contained in $\varrho$ and not only on one of its components, the initial wealth $x_0$. For a given parameter $\varrho$ we want to study whether the performance criterion of the optimization problem \eqref{opti_org} is well-defined in the following sense.
\begin{definition} \label{def_wohlgesteltheit} For a given financial market with parameter $\varrho\in \mathcal P$ we say that the utility maximization problem \eqref{opti_org2} for the $H$-investor is \textit{well-posed}, if there exists a constant $C_{\mathcal V}^{H}=C_{\mathcal V}^{H}(\varrho)<\infty$ depending on $\varrho$ such that $$\mathcal V_0^{H}(\varrho)\leq C_{\mathcal V}^{H}. $$ The set \begin{align} \nonumber \mathcal P^{H}=\{\varrho \in\mathcal P:~ \text{problem } \eqref{opti_org2} \text{ is well posed} \} \subset \mathcal P \end{align} is called the set of \textit{feasible parameters} of the financial market model for which \eqref{opti_org2} is well-posed. \end{definition} \subsection{Log-Utility and Power Utility with ${\theta<0}$} \label{well_posed_log} For power utility with parameter $\theta<0$ it holds $\mathcal U_\theta(x)<0$. Hence, in that case we can choose $C_{\mathcal V}^{H}=0$ and the optimization problem is well-posed for all model parameters $\varrho\in\mathcal P$ with $\theta<0$. For log-utility ($\theta=0$) the utility function is no longer bounded from above, but as we show in our companion paper \cite[Subsec.~4.1]{Gabih et al (2022) PowerFixed} the value function $\mathcal V_0^H(x_0)$ is bounded from above by some positive constant $C_{\mathcal V}^{H}$ for any selection of the remaining model parameters in $\varrho\in\mathcal P$. Hence it holds $\{\varrho\in \mathcal P: \theta\le 0\}\subset \mathcal P^{H} $. More challenging is the case of power utility with positive parameter $\theta\in (0,1)$ which is also not bounded from above. That case is investigated in the remainder of this section. We note that this approach can also be applied to log-utility, leading to an alternative proof of well posedness for the maximization of expected log-utility; for details we refer to Kondakji \cite[Sec.~4.2]{Kondkaji (2019)}.
\subsection{Power Utility with ${\theta\in(0,1)}$} \label{well_posed_power} For the study of well posedness it will be convenient to extend the concept of the fully informed $F$-investor, who has access to observations of the return and drift process, to an (artificial) investor who also observes expert opinions. That investor is called the $G$-investor and is defined by the investor filtration $\mathbb F^{G}=\mathbb{G}$, which is the underlying filtration to which all stochastic processes of the financial market model are adapted. Comparing the $F$- and the $G$-investor, the additional information from the observation of expert opinions (and the driving Wiener processes $W^R,W^\mu, W^J$) will not lead to superior performance of the $G$-investor in the considered utility maximization problem, but it leads to the inclusion $\mathbb F^{H}\subset \mathbb F^{G}$ for $H=R,Z,J,F$. Note that for the $F$-investor we only have $\mathbb F^{R} \subset \mathbb F^{F}$ but in general $\mathbb F^{Z}, \mathbb F^{J} \not\subset \mathbb F^{F}$. Analogous to the other investors we define for $H=G$ the set of admissible strategies $\mathcal{A}^{G}_0$, the performance criterion $\mathcal D_0^{G}$ and the value function $\mathcal{V}^{G}_0$ as in \eqref{set_admiss_0} and \eqref{opti_org2}, respectively. Next we want to derive estimates of the value function $\mathcal V_0^{H}$ of the $H$-investor in terms of the value function $\mathcal V_0^{G}$ of the $G$-investor. Let us fix a strategy $\pi\in\mathcal A^{H}_0$; then the tower property of the conditional expectation with $ \mathcal F_0^{H}\subset\mathcal F_0^{G}$ implies \begin{align} \mathcal D_0^{H}(\varrho;\pi)&={\mathbb{E}} \big[\mathcal U_{\theta}(X_T^{\pi}) \big{|} \mathcal F_0^{H}\big] ={\mathbb{E}} \big[ {\mathbb{E}} \big[\mathcal U_{\theta}(X_T^{\pi}) \big| \mathcal F_0^{G} \big] \big{|} \mathcal F_0^{H}\big] ={\mathbb{E}} \big[\mathcal D_0^{G}(\varrho;\pi)\big{|}\mathcal F_0^{H}\big].
\end{align} Taking the supremum on both sides and using $\mathcal A^{H}_0\subset\mathcal A^{G}_0$ we obtain \begin{align} \nonumber \mathcal V_0^{H}(\varrho)= \sup_{\pi\in\mathcal A^{H}_0} \mathcal D_0^{H}(\varrho;\pi) &=\sup_{\pi\in\mathcal A^{H}_0} {\mathbb{E}} \big[\mathcal D_0^{G}(\varrho;\pi)\big{|} \mathcal F_0^{H}\big]\leq {\mathbb{E}} \big[\sup_{\pi\in\mathcal A^{H}_0} \mathcal D_0^{G}(\varrho;\pi)\big{|} \mathcal F_0^{H}\big] \\ & \leq {\mathbb{E}} \big[\sup_{\pi\in\mathcal A^{G}_0} \mathcal D_0^{G}(\varrho;\pi)\big{|} \mathcal F_0^{H}\big] = {\mathbb{E}} \big[ \mathcal V_0^{G}(\varrho)\big{|} \mathcal F_0^{H}\big]. \label{bounded_2} \end{align} In the sequel we will derive conditions under which $\mathcal V_0^{G}(\varrho)$ is bounded. Then estimate \eqref{bounded_2} will allow us to derive conditions for the boundedness of $\mathcal V_0^{H}(\varrho)$ for the other information regimes $H$. We will need the following lemma, where we denote by $\mu_u^{t,m}$ the drift at time $u\in[t,T]$ starting at time $t\in[0,T]$ from $m\in\R^d$. \begin{lemma} \label{Helplemma_Psi_Expect} Let $\gamma\in\mathbb R\backslash\{0\}$, $t\in[0,T)$, $z>0$ and the stochastic process $(\Psi_s^{t,z,m})_{s\in[t,T]}$ be defined by \begin{align} \Psi_s^{t,z,m}:=z\exp\Big\{ \gamma \int\nolimits_t^s (\mu_u^{t,m})^{\top} \Sigma_{R}^{-1}\mu_u^{t,m}\; du \Big\} \label{Psi_pro} \end{align} and the function $d:[0,T]\times \R^d\to\R_{+}$ be defined by $~d(t,m):={\mathbb{E}} \big[\Psi_T^{t,1,m}\big],~$ for $t\in[0,T]$ and $m\in\R^d$.
Then it holds \begin{align} d(t,m)=\exp\Big\{m^{\top} A_{\gamma}(t) m+B_{\gamma}^{\top} (t)m +C_{\gamma} (t) \Big\}. \label{Psi_4} \end{align} Here $A_{\gamma}(t)$, $B_{\gamma}(t)$ and $C_{\gamma}(t)$ are functions in $t\in[0,T]$ taking values in $\mathbb R^{{d}\times {d}}$, $\mathbb R^{d}$ and $\mathbb R$, respectively, satisfying the following system of ODEs \begin{align} \label{A_bound} \frac{d A_{\gamma} (t)}{dt}&=-2 A_{\gamma} (t)\Sigma_{\mu}A_{\gamma} (t)+\kappa^{\top} A_{\gamma} (t)+A_{\gamma} (t)\kappa-\gamma\Sigma_R^{-1},\quad A_{\gamma}(T)=0_{d\times d},\\[2ex] \label{B_bound} \frac{dB_{\gamma}(t)}{dt}&=-2A_{\gamma}(t)\kappa\overline{\mu}+\left[\kappa^{\top}-2A_{\gamma}(t)\Sigma_{\mu}\right] B_{\gamma}(t),\quad \hspace{1.5cm}B_{\gamma}(T)=0_{d}, \\[2ex] \label{C_bound} \frac{dC_{\gamma}(t)}{dt}&=-\frac{1}{2}B_{\gamma}^{\top}(t) \Sigma_{\mu} B_{\gamma}(t)-B_{\gamma}^{\top}(t)\kappa\overline{\mu}-\operatorname{tr}\{ \Sigma_{\mu} A_{\gamma}(t)\},\quad \hspace{0.4cm}C_{\gamma}(T)=0. \end{align} \end{lemma} \begin{proof} The proof is given in Appendix \ref{proof_lemma_Helplemma_Psi_Expect}. \end{proof} Note that equation \eqref{A_bound} is a Riccati equation for the symmetric matrix-valued function $A_{\gamma}$, while equation \eqref{B_bound} is a system of $d$ linear differential equations whose solution $B_{\gamma}$ is obtained given $A_{\gamma}$. Finally, given $A_{\gamma}$ and $B_{\gamma}$, the scalar function $C_{\gamma}$ is obtained by integrating the right-hand side of \eqref{C_bound}.
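For $d=1$ the system \eqref{A_bound}--\eqref{C_bound} reduces to three scalar ODEs that are easy to integrate numerically. The following sketch (our own illustration; the function name, solver and step count are arbitrary choices, not part of the paper) integrates them backwards from the terminal conditions with a classical fourth-order Runge--Kutta scheme:

```python
def riccati_abc(T, kappa, mu_bar, sigma_mu, sigma_r, gamma, n_steps=2000):
    """Backward RK4 integration of the scalar (d = 1) ODE system for
    (A_gamma, B_gamma, C_gamma) from t = T (where all are 0) down to t = 0."""
    sig_mu2, sig_r2 = sigma_mu ** 2, sigma_r ** 2

    def rhs(a, b, c):
        # Scalar right-hand sides of dA/dt, dB/dt and dC/dt.
        da = -2.0 * sig_mu2 * a * a + 2.0 * kappa * a - gamma / sig_r2
        db = -2.0 * a * kappa * mu_bar + (kappa - 2.0 * a * sig_mu2) * b
        dc = -0.5 * sig_mu2 * b * b - b * kappa * mu_bar - sig_mu2 * a
        return da, db, dc

    h = T / n_steps
    a = b = c = 0.0              # terminal conditions at t = T
    for _ in range(n_steps):     # step backwards: t -> t - h
        k1 = rhs(a, b, c)
        k2 = rhs(a - 0.5 * h * k1[0], b - 0.5 * h * k1[1], c - 0.5 * h * k1[2])
        k3 = rhs(a - 0.5 * h * k2[0], b - 0.5 * h * k2[1], c - 0.5 * h * k2[2])
        k4 = rhs(a - h * k3[0], b - h * k3[1], c - h * k3[2])
        a -= h * (k1[0] + 2.0 * k2[0] + 2.0 * k3[0] + k4[0]) / 6.0
        b -= h * (k1[1] + 2.0 * k2[1] + 2.0 * k3[1] + k4[1]) / 6.0
        c -= h * (k1[2] + 2.0 * k2[2] + 2.0 * k3[2] + k4[2]) / 6.0
    return a, b, c
```

As a sanity check: for $\overline{\mu}=0$ the equation for $B_\gamma$ with terminal value $0$ yields $B_\gamma\equiv 0$, and then $C_\gamma(0)=\int_0^T \Sigma_\mu A_\gamma(t)\,dt>0$.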
\begin{theorem} \label{theo_bound_V} For a model parameter $\varrho$ with $\theta\in(0,1)$ the value function of the $G$-investor satisfies \begin{align*} \mathcal V_0^{G}(\varrho)\leq \frac{x_0^{\theta}}{\theta} \, d^{1-\theta}(0,\mu_0), \end{align*} where the function $d:[0,T]\times\R^d\to \R_{+}$ is given by \eqref{Psi_4} for $\gamma=\frac{\theta}{2(1-\theta)^2}$ and $\mu_0$ is the initial value of the drift. \end{theorem} % % \begin{proof} The proof is given in Appendix \ref{proof_lemma_lemma_bound_V}. \end{proof} The last theorem together with the fact that for $\theta\le 0$ the problem is well posed (see the reasoning at the beginning of this section) allows us to give the following characterization of the $G$-investor's set $\mathcal P^{G}$ of feasible model parameters by the inclusion $\underline{\mathcal P}^{G}\subset \mathcal P^{G}$ where \begin{align} \underline{\mathcal P}^{G}=\{\varrho\in\mathcal P: \theta\in(0,1) \text{ and } d(0,\mu_0) \text{ given in \eqref{Psi_4} is bounded}\} \cup \{\varrho\in\mathcal P: \theta\le 0\} . \label{modellparameter_3} \end{align} We are now in a position to characterize the set of feasible model parameters $\mathcal P^H$ for $H=R,Z,J,F$ by substituting the estimate for $\mathcal V_0^{G}(\varrho)$ from Theorem \ref{theo_bound_V} into \eqref{bounded_2}, yielding \begin{align} \mathcal V_0^{H}(\varrho)\leq \frac{x_0^{\theta}}{\theta}{\mathbb{E}} [d(0,\mu_0) | \mathcal F_0^{H}]. \label{bound_value_H} \end{align} \medskip \paragraph{Well posedness for full information (${H=F}$)} For the $F$-investor the drift is known and from the above inequality it follows \begin{align} \mathcal V_0^{F}(\varrho)\leq \frac{x_0^{\theta}}{\theta} {\mathbb{E}} [d(0,\mu_0) | \mathcal F_0^{F}]= \frac{x_0^{\theta}}{\theta} d(0,\mu_0), \nonumber \end{align} which implies that the inclusion given in \eqref{modellparameter_3} for $\mathcal P^{G}$ also holds for the set of feasible model parameters $\mathcal P^{F}$ for the $F$-investor.
The restrictions in \eqref{modellparameter_3} on the feasible model parameters $\varrho$ for $\theta\in(0,1)$ are given implicitly via the boundedness of $d(0,\mu_0)$, where $d$ is given by \eqref{Psi_4}. They can be further analysed by studying conditions for non-explosive solutions of the Riccati equation \eqref{A_bound} for the matrix-valued function $A_{\gamma}$ on the investment horizon $[0,T]$. The boundedness of the solution to \eqref{A_bound} carries over to the boundedness of the solution to the linear differential equation \eqref{B_bound} for $B_{\gamma}$ and also to $C_{\gamma}$, which is obtained by integrating the right-hand side of \eqref{C_bound}. Thus we obtain \begin{cor}[Sufficient condition for well posedness, full information] \label{suff_cond_full} \ \\ The utility maximization problem \eqref{opti_org2} for the fully informed $F$-investor is well-posed for all parameters $\varrho\in \underline{\mathcal P}^{F}\subset \mathcal P^{F}$ where \begin{align} \nonumber \underline{\mathcal P}^{F}= & \Big\{\varrho\in\mathcal P: \theta\in(0,1) \text{ and for } \gamma=\frac{\theta}{2(1-\theta)^2} \text{ the function } A_{\gamma} \text{ is bounded on } [0,T] \Big\}\\& \cup \{\varrho\in\mathcal P: \theta\le 0\} . \label{modellparameter_full} \end{align} \end{cor} It is well known that in general closed-form solutions of Riccati differential equations are available only in the one-dimensional case ($d=1$). More details about this special case can be found below in Subsec.~\ref{WellPosedSpecialCase}. \medskip \paragraph{Well posedness for partial information (${H=R,Z,J}$)} For the partially informed investors the random variable $d(0,\mu_0)$ in \eqref{bound_value_H} is no longer $\mathcal F_0^{H}$-measurable and we have to compute the conditional expectation using the conditional distribution of the initial drift value $\mu_0$ given $\mathcal F_0^{H}$.
The result is given below in Theorem \ref{theorem_partial_Inv}, for which we need the following lemma. The proofs can be found in Appendix \ref{proof_lemma_Erw_exp_normal} and \ref{proof_theorem_partial_Inv}, respectively. \begin{lemma} \label{Erw_exp_normal} Let $Y\sim\mathcal N(\mu_Y,\Sigma_Y)$ be a $d$-dimensional Gaussian random vector, ${U}\in \mathbb R^{d\times d}$ a symmetric and invertible matrix such that ${I}_{d}-2{U}\Sigma_Y$ is positive definite and $b\in \mathbb R^d$. Then it holds \begin{align}\nonumber {\mathbb{E}} \big[\exp\{(Y+b)^{\top} {U}(Y+b)\}\big]=& \big(\det({I}_{d}-2{U} \Sigma_Y)\big)^{-1/2}\times \\& \exp\big\{(\mu_Y+b)^{\top}\; ({I}_{d}-2{U}\Sigma_Y)^{-1} {U} \,(\mu_Y+b) \big\}. \label{Erw_exp_normal2} \end{align} \end{lemma} \begin{theorem} \label{theorem_partial_Inv} For $H=R,Z,J$, let the conditional distribution of the initial drift value $\mu_0$ given $\mathcal{F}_0^H$ be the normal distribution $\mathcal{N}(m_0,q_0)$ with mean $m_0\in \R^{d}$ and covariance matrix $q_0\in\mathbb R^{d\times d}$. Further assume that ${K}=I_{d}-2A_{\gamma}(0)q_0$ for $\gamma=\frac{\theta}{2(1-\theta)^2}$ is positive definite. Then it holds \begin{align} \mathcal V_0^{H}(\varrho)&\leq \frac{x_0^{\theta}}{\theta}C_{\mathcal V}^{H} \quad \text{with}\quad C_{\mathcal V}^{H}= d(0,m_0) (\det({K}))^{-1/2} \exp\Big\{\frac{1}{2}{a}^{\top}q_0{K}^{-1}{a}\Big\}, \label{bound_7} \end{align} where $a:=2A_{\gamma}(0)m_0+B_{\gamma}(0)$. \end{theorem} From the above theorem we deduce the following sufficient condition for well posedness.
\begin{cor}[Sufficient condition for well posedness, partial information] \label{suff_cond_partial} \ \\ The utility maximization problem \eqref{opti_org2} for the partially informed $H$-investor, $H=R,J, Z$, is well-posed for all parameters $\varrho\in \underline{\mathcal P}^{H}\subset \mathcal P^{H}$ where \begin{align} \nonumber \underline{\mathcal P}^{H}= & \Big\{\varrho\in\mathcal P: \theta\in(0,1) \text{ and for } \gamma=\frac{\theta}{2(1-\theta)^2} \text{ the function } A_{\gamma} \text{ is bounded on } [0,T]\text{ and } \\& I_{d}-2A_{\gamma}(0)q_0 \text{ is positive definite} \Big\} \cup \{\varrho\in\mathcal P: \theta\le 0\} \label{modellparameter_partial}. \end{align} \end{cor} For $\theta\in(0,1)$ well posedness can be guaranteed for a model parameter $\varrho$ for which the Riccati equation \eqref{A_bound} has a bounded solution on $[0,T]$ and for which the initial value of the conditional covariance $q_0$ is not ``too large'', such that ${K}$ is positive definite. While the first condition already implies the well posedness for the problem under full information, the latter condition is an additional restriction for the partially informed case. Note that for $H=F$ we can set $q_0=0$, which yields the positive definite matrix ${K}=I_{d}$. Further we note that the specific model settings for the observations available to the partially informed investor do not play a role for the well posedness except for the initial conditional covariance $q_0$. \subsection{Market Models With a Single Risky Asset} \label{WellPosedSpecialCase} The above conditions for the well posedness, given in terms of the boundedness of $d(0,\mu_0)$ and $A_{\gamma}(0)$, are quite abstract and their verification requires that the solution of the Riccati ODE \eqref{A_bound} is bounded on $[0,T]$. While in the multi-dimensional case Riccati ODEs in general can be solved only numerically, these equations enjoy a closed-form solution in the one-dimensional case.
This allows us to give more explicit characterizations of the set of feasible parameters for market models with a single risky asset only. The following lemma gives explicit conditions on the model parameters under which \eqref{A_bound} has a bounded solution on $[0,T]$. For the proof we refer to Kondakji \cite[Lemma A.1.3, A.2.2 and A.2.3]{Kondkaji (2019)}. \begin{lemma} \label{lemma_Beding_beschr} Let $d=1$, $\theta\in(0,1)$, $\gamma=\frac{\theta}{2(1-\theta)^2}$ and \begin{align} \label{Diskriminante_Barier} \Delta_{\gamma}=4\kappa^2\Big(1-2 \gamma\Big(\frac{\sigma_{\mu}}{\kappa\sigma_{R}}\Big)^2 \Big) \quad \text{and} \quad \delta_{\gamma}:=\frac{1}{2}\sqrt{|\Delta_{\gamma}|}. \end{align} Then it holds for the Riccati differential equation \eqref{A_bound} on $[0,T]$ \begin{enumerate} \item For $\Delta_{\gamma}\geq0$ there is a bounded solution for all $T>0$. \item For $\Delta_{\gamma}<0$ a bounded solution exists only if $T<T_{\gamma}^{E}$ with the explosion time \begin{align} T_{\gamma}^{E}:=\frac{1}{\delta_{\gamma}} \Big(\frac{\pi}{2}+\arctan\frac{\kappa}{\delta_{\gamma}}\Big). \label{t_Explosion_AV} \end{align} \end{enumerate} \end{lemma} The above lemma allows us to give more explicit sufficient conditions for well posedness than those given for the general multi-dimensional case in Corollaries \ref{suff_cond_full} and \ref{suff_cond_partial}. They can be formulated in terms of the parameters $\kappa,\sigma_\mu,\sigma_{R}$ describing the variance of the asset price and drift process, the investment horizon $T$, the parameter $\theta$ of the utility function and the initial conditional variance $q_0$.
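The quantities in the above lemma are cheap to evaluate. As a small illustration (our own sketch; the function name is arbitrary and the sample values anticipate the numerical section), the following code computes $\Delta_{\gamma}$ and, if it is negative, the explosion time $T_{\gamma}^{E}$:

```python
import math

def wellposed_horizon(theta, kappa, sigma_mu, sigma_r):
    """Evaluate Delta_gamma for gamma = theta / (2*(1-theta)^2); return
    (Delta_gamma, T_E) where T_E is the explosion time if Delta_gamma < 0
    and None otherwise (then the problem is well-posed for every T > 0)."""
    gamma = theta / (2.0 * (1.0 - theta) ** 2)
    delta_big = 4.0 * kappa ** 2 * (1.0 - 2.0 * gamma * (sigma_mu / (kappa * sigma_r)) ** 2)
    if delta_big >= 0.0:
        return delta_big, None
    delta_small = 0.5 * math.sqrt(abs(delta_big))   # delta_gamma = sqrt(|Delta|)/2
    t_e = (math.pi / 2.0 + math.atan(kappa / delta_small)) / delta_small
    return delta_big, t_e
```

For instance, for $\kappa=3$, $\sigma_\mu=1$, $\sigma_R=0.25$ and $\theta=0.3$ this sketch gives $\Delta_{\gamma}<0$ with an explosion time of roughly $3.2$, so a horizon $T=1$ lies below the critical horizon.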
Analyzing the inequality $T<T_{\gamma}^{E}$ and \eqref{t_Explosion_AV} we obtain \begin{cor}[Sufficient condition for well posedness, single risky asset] \label{suff_cond_d1} \ \\ Let $d=1$, $\gamma=\frac{\theta}{2(1-\theta)^2}$ and $\Delta_{\gamma}, T_{\gamma}^{E}$ as given in \eqref{Diskriminante_Barier} and \eqref{t_Explosion_AV}, respectively. \begin{enumerate} \item The utility maximization problem \eqref{opti_org2} for the \textbf{fully informed} $F$-investor is well-posed for all parameters $\varrho\in \underline{\mathcal P}^{F}\subset \mathcal P^{F}$ where \begin{align} \label{modellparameter_full_d1} \underline{\mathcal P}^{F}= & \big\{\varrho\in\mathcal P: \theta\in(0,1), \kappa, \sigma_{\mu}, \sigma_{R}~ \text{such that}~\Delta_{\gamma}\geq0 \big\}~ \cup\\ \nonumber & \big\{\varrho\in\mathcal P: \theta\in(0,1), \kappa, \sigma_{\mu}, \sigma_{R}~ \text{such that}~\Delta_{\gamma}<0 \text{ and } T<T_{\gamma}^{E} \big\} \cup \big\{\varrho\in\mathcal P: \theta\le 0\big\} . \end{align} \item The utility maximization problem \eqref{opti_org2} for the \textbf{partially informed} $H$-investor, $H=R,J, Z$, is well-posed for all parameters $\varrho\in \underline{\mathcal P}^{H}\subset \mathcal P^{H}$ where \begin{align} \nonumber \underline{\mathcal P}^{H}= & \big\{\varrho\in\mathcal P: \theta\in(0,1), \kappa, \sigma_{\mu}, \sigma_{R}~ \text{such that}~\Delta_{\gamma}\geq0, q_0<(2A_{\gamma}(0))^{-1} \big\}~ \cup\\ \nonumber & \big\{\varrho\in\mathcal P: \theta\in(0,1), \kappa, \sigma_{\mu}, \sigma_{R}~ \text{such that}~\Delta_{\gamma}<0 \text{ and } T<T_{\gamma}^{E}, q_0<(2A_{\gamma}(0))^{-1} \big\} \cup\\& \label{modellparameter_partial_d1} \big\{\varrho\in\mathcal P: \theta\le 0\big\} .
\end{align} \end{enumerate} \end{cor} \section{Numerical Results} \label{numeric_result} In this section we illustrate the theoretical findings of the previous sections by results of some numerical experiments. They are based on a stock market model where the unobservable drift $(\mu_t)_{t\in[0,T]}$ follows an Ornstein--Uhlenbeck process as given in \eqref{drift}, whereas the volatility is known and constant. For simplicity, we assume that there is only one risky asset in the market, i.e., $d=1$. If not stated otherwise our numerical experiments are based on the model parameters given in Table \ref{parameter}. \begin{table}[ht] \begin{tabular}{|lll|r||ll|r|} \hline \rule{0mm}{2.5ex}% Drift & mean reversion level& $\overline{\mu}$ & $0$ & Investment horizon & $T$ & $1~$ year \\ \hline \rule{0mm}{2.5ex}% & mean reversion speed& $\kappa$ & $3$ & Power utility parameter & $\theta$ & $0.3$ \\ \hline & volatility &$\sigma_{\mu}$ &$1$ & Volatility of stock & $\sigma_R$ & $0.25$ \\\hline \rule{0mm}{2.5ex}% & mean of $\mu_0$ & $\overline{m}_0$ & $\overline{\mu}=0$& Initial estimate & $m_0=\overline{m}_0$ & $0$ \\ \hline &variance of $\mu_0$ & $\overline{q}_0$ & $\frac{\sigma_{\mu}^2}{2\kappa}=0.1\overline{6}$ & & $q_0=\overline{q}_0$ & $0.1\overline{6}$ \\ \hline \end{tabular} \\[1ex] \centering \caption{\label{parameter} \small Model parameters for the numerical experiments } \end{table} \begin{figure}[t!h] \hspace*{-0mm} \includegraphics[width=1\textwidth]{fig_nirvana_zoom.png} \centering \caption{\label{nirvana_abbild} \parbox[t]{0.7\textwidth}{ \small Subset $\underline{\mathcal P}^{F}$ of the set of feasible parameters $\mathcal P^{F}$ \newline depending on~~ $\theta~~$ and $T$ (top left), \hspace*{2em}$\theta$ and $\sigma_R$ (top right), \newline \phantom{depending on~~} $\sigma_{\mu}$ and $T$ (middle left), $\sigma_{\mu}$ and $\theta$ ~ (middle right), \newline \phantom{depending on~~} $\sigma_R$ and $T$ (bottom left), $\sigma_R$ and $\sigma_{\mu}$ (bottom right).
\newline The other parameters are given in Table \ref{parameter}. }} \end{figure} In Section \ref{sec_wellposedness} we have specified sufficient conditions on the model parameters for which the optimization problem is well-posed. For market models with a single risky asset, which is assumed for the numerical experiments, these conditions are given in Corollary \ref{suff_cond_d1}. In Figure \ref{nirvana_abbild} we visualize the subset $\underline{\mathcal P}^{F}$ of the set of feasible parameters $\mathcal P^{F}$ for which well posedness for the utility maximization problem of the fully informed investor can be guaranteed. In particular, we show the dependence of $\mathcal P^{F}$ on the investment horizon $T$, the power utility parameter $\theta$, the volatility $\sigma_{\mu}$ of the drift and the volatility $\sigma_{R}$ of the stock price. The two top panels show the subset $\underline{\mathcal P}^{F}$ depending on $\theta, T$ and $\sigma_R$. It can be seen that for negative $\theta$, i.e.~for investors who are more risk-averse than the log-utility investor, the optimization problem is always well-posed. Moreover, the top left panel shows that for the selected parameters the optimization problem is well-posed for all $T>0$ if the parameter $\theta$ does not exceed the critical value $\theta^{E}\approx 0.36$. For $\theta> \theta^{E}$, i.e.~for investors with sufficiently small risk-aversion, the optimization problem is no longer well-posed for all investment horizons $T$, but only up to a critical investment horizon $T^E=T^E(\theta)$ depending on $\theta$ and given in \eqref{t_Explosion_AV}. The larger $\theta$, the smaller is that critical investment horizon $T^E(\theta)$. For the limiting case $\theta\rightarrow 1$ it holds $T^E(\theta)\rightarrow 0$. The top right panel shows, for an investment horizon fixed to $T=1$, the subset $\underline{\mathcal P}^{F}$ depending on $\theta$ and the volatility $\sigma_R$ of the stock price.
It can be seen that larger values of the stock volatility allow one to choose larger values of $\theta$. The two panels in the middle illustrate the influence of the drift volatility $\sigma_{\mu}$ on the subset $\underline{\mathcal P}^{F}$. The left panel shows that the optimization problem is well-posed for all $T>0$ as long as the volatility $\sigma_{\mu}$ of the drift does not exceed the critical value $\sigma_{\mu}^{E}\approx 1.15$. For volatilities $\sigma_{\mu}>\sigma_{\mu}^{E}$ the optimization problem is well-posed only for investment horizons $T$ smaller than the critical horizon $T^E=T^E(\sigma_{\mu})$ that depends on $\sigma_{\mu}$ and is given in \eqref{t_Explosion_AV}. In the right panel we investigate, for the fixed investment horizon $T=1$, the dependence of $\mathcal P^F$ on the drift volatility $\sigma_{\mu}$ and the power utility parameter $\theta$. While for $\theta<0$ there are no further restrictions on the parameters, this is no longer true for $\theta\in(0,1)$: the larger the volatility $\sigma_{\mu}$, the smaller one has to choose $\theta$. The two bottom panels illustrate the influence of the stock volatility $\sigma_R$ on the subset $\underline{\mathcal P}^{F}$. In contrast to the volatility $\sigma_{\mu}$ of the drift, smaller values of $\sigma_{R}$ imply that the optimization problem is well-posed only for smaller $T$, as can be seen in the bottom left panel. If $\sigma_{R}$ does not exceed the critical value $\sigma_{R}^E\approx 0.22$, then the optimization problem is well-posed only up to a critical investment horizon $T^E=T^E(\sigma_{R})$ that depends on $\sigma_{R}$; the larger $\sigma_R$, the larger that horizon. For $\sigma_R$ exceeding the critical value $\sigma_{R}^E$, however, the optimization problem is well-posed for any investment horizon $T>0$. The bottom right panel shows the dependence of $\mathcal P^F$ on the two volatilities $\sigma_R$ and $\sigma_{\mu}$.
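The qualitative features just described can be explored with a short computation. The sketch below is an illustration only: it assumes that, in the scalar case, the function $A_{\gamma}$ of the ODE system \eqref{A_bound} solves a Riccati equation of the form $A'(s)=2\sigma_{\mu}^2A(s)^2-2\kappa A(s)+\gamma/\sigma_R^2$ in the time-to-go variable $s$ with $A(0)=0$ (the exact coefficients are those of \eqref{A_bound} and may differ), so that finite-time blow-up of $A$ before the horizon corresponds to loss of well-posedness.

```python
# Illustrative sketch only: the Riccati coefficients below are an
# assumption modelled on the separation ansatz of Appendix A, not the
# exact system of the paper.

def blows_up(gamma, kappa=3.0, sigma_mu=1.0, sigma_r=0.25,
             horizon=1.0, steps=100_000, cap=1e6):
    """Euler-integrate A'(s) = 2 sigma_mu^2 A^2 - 2 kappa A + gamma / sigma_r^2
    on [0, horizon] with A(0) = 0 and report finite-time blow-up."""
    h = horizon / steps
    a = 0.0
    c = gamma / sigma_r ** 2
    for _ in range(steps):
        a += h * (2.0 * sigma_mu ** 2 * a * a - 2.0 * kappa * a + c)
        if abs(a) > cap:
            return True        # A explodes before the horizon
    return False

theta = -0.5                                        # more risk averse than log utility
print(blows_up(theta / (2 * (1 - theta) ** 2)))     # False: bounded on [0, 1]

theta = 0.9                                         # weakly risk averse
print(blows_up(theta / (2 * (1 - theta) ** 2)))     # True: blow-up before T = 1
```

In this toy version, negative $\theta$ (hence negative $\gamma$) keeps $A$ bounded for every horizon, while $\theta$ close to $1$ produces blow-up almost immediately, mirroring the monotone dependence of $T^E$ on $\theta$ seen in the top left panel.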
Note that the two regions are separated by a straight line, as can be deduced from \eqref{Diskriminante_Barier} and \eqref{t_Explosion_AV}. Finally, we consider the case of a partially informed investor. Then an additional condition on the initial covariance $q_0$ of the filter has to be imposed to ensure well-posedness. We refer to Corollary \ref{suff_cond_partial} for the multi-dimensional case and to Corollary \ref{suff_cond_d1} and \eqref{modellparameter_partial_d1} for the special case of markets with a single risky asset. That condition requires that $1-2A_{\gamma}(0)q_0>0$, which is satisfied for the model parameters from Table \ref{parameter} since $q_0=0.1\overline{6}<0.675=\frac{1}{2A_{\gamma}(0)}$.
\begin{appendix}
\section{Proof of Lemma \ref{Helplemma_Psi_Expect}} \label{proof_lemma_Helplemma_Psi_Expect}
\begin{proof}
Consider first the function $D\in\mathcal C^{1,2}$ defined by \begin{align} {D}:[0,T]\times \mathbb R^{+}\times\mathbb R^{d}\rightarrow \mathbb R_{+};\qquad {D}(t,z,m): = {\mathbb{E}} \left[\Psi_T^{t,z,m}\right]. \label{Def_Psi} \end{align} Then it holds that ${D}(T,z,m)= {\mathbb{E}} \left[\Psi_T^{T,z,m}\right]={\mathbb{E}} \left[z\right]=z$.\\ The dynamics of the drift $\mu$ and of the process $\Psi$ for $s\in[t,T]$ read \begin{align} \begin{pmatrix} d\mu_s^{t,m} \\ d\Psi_s^{t,z,m} \end{pmatrix} = \begin{pmatrix} \kappa(\overline{\mu}-\mu_s^{t,m}) \\ \gamma\Psi_s^{t,z,m}\big(\mu_s^{t,m}\big)^{\top} \Sigma_{R}^{-1} \mu_s^{t,m} \end{pmatrix} ds + \begin{pmatrix} \sigma_{\mu} \\ {0}_{1\times d_2} \end{pmatrix} dW_s^{\mu} ;\quad \begin{pmatrix} \mu_t^{t,m} \\ \Psi_t^{t,z,m} \end{pmatrix} = \begin{pmatrix} m \\ z \end{pmatrix}. \nonumber \end{align} The drift and diffusion coefficients of this equation satisfy Lipschitz and linear growth conditions.
Moreover, the Feynman-Kac formula applied to the expectation in \eqref{Def_Psi} leads to the following partial differential equation for ${D}$ \begin{align} 0=\frac{\partial }{\partial t}{D}(t,z,m)+\nabla_{m}^{\top}{D}(t,z,m)\; \kappa(\overline{\mu}-m) &+\frac{1}{2}tr\{\nabla_{mm}{D}(t,z,m) \Sigma_{\mu}\} + \gamma zm^{\top}\Sigma_R^{-1} m \frac{\partial {D}(t,z,m)}{\partial z} \label{Cauchy_psi} \end{align} with ${D}(T,z,m)=z$ as terminal condition. The above terminal value problem can be solved with the separation ansatz \begin{align} {D}(t,z,m)=z\; d(t,m) ,\quad d(T,m)=1. \label{subst_phi_0} \end{align} At time $t$ we have $\mu_t^{t,m}=m$ and $\Psi_t^{t,1,m}=1$ so that we obtain ${\mathbb{E}} \big[\Psi_T^{t,1,m}\big]={D}(t,1,m)= d(t,m)$, which is the function $d$ defined in Lemma \ref{Helplemma_Psi_Expect}. Plugging \eqref{subst_phi_0} into \eqref{Cauchy_psi} leads to the following linear parabolic PDE for $d$ \begin{align} \nonumber 0=\frac{\partial}{\partial t}d(t,m)+\nabla_{m}^{\top} d(t,m)\; \kappa(\overline{\mu}-m)& +\frac{1}{2}tr\{\nabla_{mm} d(t,m) \Sigma_{\mu}\}+\gamma m^{\top}\Sigma_R^{-1}m \; d(t,m) \end{align} with terminal value $d(T,m)=1$. For solving the above PDE the ansatz \begin{align} \nonumber d(t,m)=\exp\big\{m^{\top}A_{\gamma}(t)m+B_{\gamma}^{\top}(t)m+C_{\gamma}(t)\big\} \end{align} leads, by collecting the coefficients of the quadratic, linear and constant terms in $m$, to the system of ODEs for $A_{\gamma}$, $B_{\gamma}$ and $C_{\gamma}$ given in \eqref{A_bound}, \eqref{B_bound} and \eqref{C_bound}.
\end{proof}
%
%
%
%
\section{Proof of Theorem \ref{theo_bound_V}} \label{proof_lemma_lemma_bound_V}
\begin{proof}
In the proof we follow an approach presented in Angoshtari \cite[Theorem 1.8]{Angoshtari2013}.
Let $(\xi_t)_{t\in[0,T]}$ be the stochastic process satisfying the SDE \begin{align} d\xi_t= -\xi_t \mu_t^{\top}\;\Sigma_{R}^{-1}\sigma_R\;dW_t^{R} ,\quad \xi_0=1, \; ~ \mu_0=\overline{m}_0, \label{xi_prozess_DGL} \end{align} with the solution \begin{align} \xi_t=\exp\Big\{ -\frac{1}{2}\int\nolimits_0^t \| \mu_s^{\top}\;\Sigma_{R}^{-1}\sigma_R \|^2 ds -\int\nolimits_0^t \mu_s^{\top}\;\Sigma_{R}^{-1}\sigma_R\;dW_s^{R}\Big\}. \label{xi_prozess} \end{align} For $t_0\in[0,T]$ we denote by $\mu_t^{t_0,\overline{m}}$ the solution to the drift equation \eqref{drift} starting at time $t_0$ with initial value $\overline{m}$, by $X_t^{\pi,t_0,x,\overline{m}}$ the solution to the wealth equation \eqref{wealth_phys} with initial values $(x,\overline{m})$, and by $\xi_t^{t_0,z,\overline{m}}$ the solution of \eqref{xi_prozess_DGL} at time $t$ with initial values $(z,\overline{m})$. Applying It\^{o}'s formula yields \begin{align} d(X^{\pi,0,x,\overline{m}_0}~\xi^{0,1,\overline{m}_0})_t&=X_t^{\pi,0,x,\overline{m}_0}~d\xi_t^{0,1,\overline{m}_0}+\xi_t^{0,1,\overline{m}_0}\;dX_t^{\pi,0,x,\overline{m}_0}+d\langle X^{\pi,0,x,\overline{m}_0},\xi^{0,1,\overline{m}_0}\rangle_t\nonumber\\ &=\xi_t^{0,1,\overline{m}_0}~X_t^{\pi,0,x,\overline{m}_0}[\pi_t^{\top}\sigma_R-\big(\mu_t^{0,\overline{m}_0}\big)^{\top}\Sigma_{R}^{-1}\sigma_R]\;dW_t^{R}. \end{align} Moreover, Fatou's Lemma implies that the non-negative process $(X^{\pi}\xi)_t$ is a supermartingale, and as a consequence it holds that \begin{align} x-{\mathbb{E}} [X_T^{\pi,0,x,\overline{m}_0}~\xi_T^{0,1,\overline{m}_0} ]\geq0. \label{super_M1} \end{align} On the other hand, let $f:\mathbb R^+\rightarrow \mathbb R$ be the Legendre-Fenchel transform of the utility function $ \mathcal U_{\theta}(x)$, defined for every $w>0$ by \begin{align} \label{Konjugierte} f(w):=\sup\limits_{x\in\mathbb R^+}\left\{ \mathcal U_{\theta}(x)-xw\right\}= \frac{1-\theta}{\theta}w^{-\frac{\theta}{1-\theta}}.
\end{align} Since $\xi_T^{0,1,\overline{m}_0}>0$, it holds for every $w>0$ that \begin{align} \label{Konj_Disk} f(\xi_T^{0,1,\overline{m}_0}~ w) =\sup\limits_{x\in\mathbb R^+} \Big\{\mathcal U_{\theta}(x)-x\;\xi_T^{0,1,\overline{m}_0}\; w\Big\} \geq \mathcal U_{\theta}(X_T^{\pi,0,x,\overline{m}_0})-X_T^{\pi,0,x,\overline{m}_0}\; \xi_T^{0,1,\overline{m}_0}\; w. \end{align} Now for $w>0$ inequality \eqref{super_M1} implies that \begin{align} {\mathbb{E}} \big[\mathcal U_{\theta}(X_T^{\pi,0,x,\overline{m}_0})\big]& \leq {\mathbb{E}} \big[ \mathcal U_{\theta}(X_T^{\pi,0,x,\overline{m}_0}) \big]+w \big(x-{\mathbb{E}} \big[X_T^{\pi,0,x,\overline{m}_0}~\xi_T^{0,1,\overline{m}_0}\big]\big)\nonumber\\ &={\mathbb{E}} \big[\mathcal U_{\theta}(X_T^{\pi,0,x,\overline{m}_0})-X_T^{\pi,0,x,\overline{m}_0}\;\xi_T^{0,1,\overline{m}_0}\; w\big]+xw\nonumber\\ & \leq {\mathbb{E}} \Big[f(\xi_T^{0,1,\overline{m}_0}~w)\Big]+xw,\nonumber \end{align} where the last inequality follows from \eqref{Konj_Disk}. For the term $f(\xi_T^{0,1,\overline{m}_0}~w)$ we now apply \eqref{Konjugierte} to obtain \begin{align} {\mathbb{E}} \big[\mathcal U_{\theta}(X_T^{\pi,0,x,\overline{m}_0})\big]& \leq\frac{1-\theta}{\theta} w^{-\frac{\theta}{1-\theta}} {\mathbb{E}} \big[\big(\xi_T^{0,1,\overline{m}_0}\big)^{-\frac{\theta}{1-\theta}}\big]+xw. \label{Zielfkt_Absch2} \end{align} Since the last inequality holds for every admissible strategy $\pi\in\mathcal A^{G}$ and for every $w>0$, we can take the supremum over all strategies $\pi\in\mathcal A^{G}$ on the left-hand side and the infimum over all $w>0$ on the right-hand side to obtain \begin{align} \mathcal V_0^{G} =\sup\limits_{\pi \in\mathcal A^{G}} {\mathbb{E}} \big[\mathcal U_{\theta}(X_T^{\pi,0,x,\overline{m}_0})\big] &\leq \frac{1}{\theta}x^{\theta}\ \big( {\mathbb{E}} \big[\big(\xi_T^{0,1,\overline{m}_0}\big)^{-\frac{\theta}{1-\theta}}\big] \big)^{1-\theta}.
\label{Zielfkt_Absch3} \end{align} The problem is thus reduced to investigating whether the expectation on the right-hand side of \eqref{Zielfkt_Absch3} is finite. It holds that \begin{align} \big(\xi_T^{0,1,m}\big)^{-\frac{\theta}{1-\theta}}&=\exp\Big\{ \frac{\theta}{1-\theta}\Big( \frac{1}{2}\int\nolimits_0^T \big\| \big(\mu_s^{0,m}\big)^{\top}\Sigma_{R}^{-1}\sigma_R \big\|^2\;ds +\int\nolimits_0^T\big(\mu_s^{0,m}\big)^{\top}\Sigma_{R}^{-1}\sigma_R \; dW_s^{R} \Big)\Big\} =\Lambda_T \cdot \Psi_T^{0,1,m}, \nonumber \end{align} where $\Psi_T^{0,1,m}$ is given in \eqref{Psi_pro} with $\gamma=\frac{\theta}{2(1-\theta)^2}$, and the term $\Lambda_T$ is given by \begin{align} \Lambda_T= \exp\Big\{\int\nolimits_0^T \frac{\theta}{1-\theta} \big(\mu_s^{0,m}\big)^{\top}\Sigma_{R}^{-1}\sigma_R \; dW_s^{R} -\frac{1}{2}\int\nolimits_0^T \big\|\frac{\theta}{1-\theta} \big(\mu_s^{0,m}\big)^{\top}\Sigma_{R}^{-1}\sigma_R \big\|^2\;ds\Big\}. \nonumber \end{align} We now introduce a new probability measure $P^{\ast}$ given by $\frac{dP^{\ast}}{dP}=\Lambda_T$, so that the expectation from \eqref{Zielfkt_Absch3} can be expressed as \begin{align} {\mathbb{E}} \Big[\big(\xi_T^{0,1,m}\big)^{\frac{-\theta}{1-\theta}} \Big]& ={\mathbb{E}} \big[\Lambda_T \cdot \Psi_T^{0,1,m}\big] ={\mathbb{E}} ^{\ast}\Big[\Psi_T^{0,1,m}\Big]=d(0,m), \nonumber \end{align} where ${\mathbb{E}} ^{\ast}$ denotes the expectation under the new probability measure; the last equality is representation \eqref{Psi_4} from Lemma \ref{Helplemma_Psi_Expect}.
\end{proof}
\section{Proof of Lemma \ref{Erw_exp_normal}} \label{proof_lemma_Erw_exp_normal}
\begin{proof}
It holds that \begin{align} {\mathbb{E}} \big[\exp\big\{(Y+{b})^{\top}{U} (Y+{b}) \big\}\big] &= \big((2\pi)^d\rm{det}(\Sigma_Y)\big)^{-1/2}\int\nolimits_{\mathbb R^{d}}\exp\{f_{Y}(y)\}dy \label{quad_fx} \end{align} with $f_{Y}(y):=-\frac{1}{2}(y-\mu_Y)^{\top} \Sigma_Y^{-1}(y-\mu_Y) +(y+{b})^{\top} {U}(y+{b})$.
Completing the square with respect to $y$, with the notation $\Sigma_{Z}:=(\Sigma_Y^{-1}-2 {U})^{-1}$ and $\mu_{Z}:=\Sigma_{Z}(2{U} {b}+\Sigma_Y^{-1}\mu_Y)$, yields \begin{align} f_{Y}(y)&=-\frac{1}{2}(y-\mu_{Z})^{\top}\;\Sigma_{Z}^{-1}\;(y-\mu_{Z})+\frac{1}{2}\mu_{Z}^{\top}\Sigma_{Z}^{-1}\mu_{Z} +{b}^{\top}{U}{b}-\frac{1}{2}\mu_Y^{\top}\Sigma_Y^{-1}\mu_Y\nonumber\\ &=-\frac{1}{2}(y-\mu_{Z})^{\top}\; \Sigma_{Z}^{-1}\;(y-\mu_{Z})+(\mu_Y+{b})^{\top}\; ({I}_{d}-2{U}\Sigma_Y)^{-1} {U} (\mu_Y+{b}). \label{expo_fx} \end{align} Substituting \eqref{expo_fx} into \eqref{quad_fx}, using that for a Gaussian random vector $Z\sim\mathcal N(\mu_{Z},\Sigma_{Z})$ the probability density function is $f_Z(y)=\big((2\pi)^d\rm{det}(\Sigma_Z)\big)^{-1/2}\exp\{-\frac{1}{2}(y-\mu_{Z})^{\top}\; \Sigma_{Z}^{-1}\;(y-\mu_{Z})\}$ with $\int_{\mathbb R^{d}} f_Z(y)\, dy =1$, and using the relation $\rm{det}(\Sigma_Z)^{1/2}\,\rm{det}(\Sigma_Y)^{-1/2}= (\rm{det}(\Sigma_Y^{-1}-2 {U})\,\rm{det}(\Sigma_Y))^{-1/2} =(\rm{det}(I_d-2{U}\Sigma_Y))^{-1/2}$ proves the assertion.
\end{proof}
\section{Proof of Theorem \ref{theorem_partial_Inv}} \label{proof_theorem_partial_Inv}
\begin{proof}
We recall \eqref{bound_value_H} stating $\mathcal V_0^{H}(\varrho)\leq \frac{x_0^{\theta}}{\theta}{\mathbb{E}} [d(0,\mu_0) | \mathcal F_0^{H}]$. For the $H$-investors with $H=R,J,Z$ the conditional distribution of $\mu_0$ given $\mathcal F_0^{H}$ is the Gaussian distribution $\mathcal N(m_0,q_0)$. Thus we can write the conditional expectation ${\mathbb{E}} [ d(0,\mu_0) | \mathcal F_0^{H}]$ as ${\mathbb{E}} \big[ d(0,m_0+q_0^{1/2}\varepsilon ) \big]$ with a random variable $\varepsilon\sim\mathcal N(0,I_{d})$ independent of $\mathcal F_0^{H}$. To simplify the notation we write in the following $A,B,C$ instead of $A_{\gamma}(0), B_{\gamma}(0), C_{\gamma}(0)$, respectively.
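As a quick numerical illustration (not part of the proof, and with arbitrary parameter values), the identity of Lemma \ref{Erw_exp_normal} can be checked in dimension one, where it reads ${\mathbb{E}}[\exp\{u(Y+b)^2\}]=(1-2u\sigma^2)^{-1/2}\exp\{u(\mu+b)^2/(1-2u\sigma^2)\}$ for $Y\sim\mathcal N(\mu,\sigma^2)$ and $2u\sigma^2<1$:

```python
import math

def closed_form(u, b, mu, s2):
    # one-dimensional specialization of the Gaussian identity (illustration)
    k = 1.0 - 2.0 * u * s2
    return k ** -0.5 * math.exp(u * (mu + b) ** 2 / k)

def by_quadrature(u, b, mu, s2, n=200_000, width=12.0):
    # trapezoidal rule for E[exp{u (Y+b)^2}] with Y ~ N(mu, s2)
    s = math.sqrt(s2)
    lo, hi = mu - width * s, mu + width * s
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        y = lo + i * h
        dens = math.exp(-0.5 * ((y - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))
        f = math.exp(u * (y + b) ** 2) * dens
        total += f if 0 < i < n else 0.5 * f
    return total * h

u, b, mu, s2 = 0.2, 0.3, -0.1, 0.5      # arbitrary values with 2*u*s2 < 1
print(abs(closed_form(u, b, mu, s2) - by_quadrature(u, b, mu, s2)) < 1e-5)
```

The integrability restriction $2u\sigma^2<1$ is exactly the one-dimensional analogue of the invertibility of $I_d-2U\Sigma_Y$ used in the proof.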
Substituting into \eqref{bound_value_H} and using representation \eqref{Psi_4} we deduce \begin{align} {\mathbb{E}} [ d(0,\mu_0) | \mathcal F_0^{H}] &= {\mathbb{E}} \big[ d(0,m_0+q_0^{1/2}\varepsilon ) \big]\nonumber\\ &={\mathbb{E}} \big[\exp\big\{ (m_0+{q_0}^{1/2}\varepsilon)^{\top} A (m_0+{q_0}^{1/2}\varepsilon) +B^{\top} (m_0+{q_0}^{1/2}\varepsilon) +C\big\} \big]\nonumber\\ &={\mathbb{E}} \big[\exp\big\{ m_0^\top A m_0+B^{\top} m_0 +C\big\} \exp\big\{ ({q_0}^{1/2}\varepsilon)^{\top} A{q_0}^{1/2}\varepsilon +(2m_0^\top A+B^{\top}) {q_0}^{1/2}\varepsilon \big\} \big]\nonumber\\ &=d(0,m_0){\mathbb{E}} \big[ \exp\big\{ ({q_0}^{1/2}\varepsilon)^{\top} A{q_0}^{1/2}\varepsilon +(2 Am_0+B)^{\top} {q_0}^{1/2}\varepsilon \big\} \big]\nonumber\\ &=d(0,m_0)\exp\Big\{-\frac{1}{4}{a}^{\top}A^{-1}{a}\Big\} {\mathbb{E}} \big[\exp\{Y^{\top} {A}Y\}\big]\label{exponents} \end{align} where $Y=q_0^{1/2}\varepsilon+\frac{1}{2}A^{-1}a \sim\mathcal{N}(\mu_Y,\Sigma_Y)$ is a Gaussian random vector with mean $\mu_Y=\frac{1}{2}A^{-1}a$ and covariance matrix $\Sigma_Y=q_0$. We recall that $a=2Am_0+B $ and ${K}=I_{d}-2Aq_0$. Applying Lemma \ref{Erw_exp_normal} with ${U}=A$ and ${b}=0_d$ yields \begin{align*} {\mathbb{E}} \big[\exp\{Y^{\top} {A}Y\}\big] &= \big(\rm{det}(K)\big)^{-1/2} \exp\big\{\mu_Y^{\top}\; K^{-1} A \,\mu_Y\big\} \end{align*} and we obtain from \eqref{exponents} \begin{align} \label{expect_d} {\mathbb{E}} [ d(0,\mu_0) | \mathcal F_0^{H}] &= d(0,m_0) \big(\rm{det}(K)\big)^{-1/2} \exp\Big\{\mu_Y^{\top}\; K^{-1} A \,\mu_Y -\frac{1}{4}{a}^{\top}A^{-1}{a}\Big\}. \end{align} Rearranging terms in the exponent using the symmetry of $A$ yields \begin{align*} \mu_Y^{\top}\; K^{-1} A \,\mu_Y -\frac{1}{4}{a}^{\top}A^{-1}{a} &= \frac{1}{4}{a}^{\top} A^{-1}K^{-1} \,{a}-\frac{1}{4}{a}^{\top}A^{-1}{a}\\ &= \frac{1}{4}{a}^{\top} A^{-1}\big(I_{d}-K\big)K^{-1} \,{a} =\frac{1}{4}{a}^{\top} A^{-1}2Aq_0 K^{-1} \,{a} = \frac{1}{2}{a}^{\top}q_0 K^{-1} \,{a}.
\end{align*} Substituting into \eqref{expect_d} and \eqref{bound_value_H} proves the claim. \end{proof} \end{appendix} \bibliographystyle{acm}
\section{Appendix} \subsection{Proof of Lemma~\ref{lemma:functor-gfp-general}} \begin{proof} We have $\cF(g,\Fungfp\cF(B))\in\cA(\Fungfp\cF(B),\cF(B',\Fungfp\cF(B)))$ thus defining an $\cF_{B'}$-coalgebra structure on $\Fungfp\cF(B)$ and hence there exists a unique morphism $\Fungfp\cF(g)$ such that \[ \cF(B',\Fungfp\cF(g))\Compl\cF(g,\Fungfp\cF(B))=\Fungfp\cF(g)\,, \] that is $\cF(g,\Fungfp\cF(g))=\Fungfp\cF(g)$. Functoriality follows: consider also $g'\in\cB(B',B'')$, then we know that $h=\Fungfp\cF(g'\Compl g)$ satisfies $\cF(g'\Compl g,h)=h$ by the definition above. Now $h'=\Fungfp\cF(g')\Compl\Fungfp\cF(g)$ satisfies the same equation by functoriality of $\cF$ and because $\cF(g,\Fungfp\cF(g))=\Fungfp\cF(g)$ and $\cF(g',\Fungfp\cF(g'))=\Fungfp\cF(g')$, and hence $h'=h$ by Lemma~\ref{lemma:equations-final-coalgebra}, taking $l=\cF(g'\Compl g,\Fungfp\cF(B))$. In the same way one proves that $\Fungfp\cF(\Id)=\Id$. \end{proof} \subsection{Proof of Lemma~\ref{lemma:strfun-gfp-general}} \begin{proof} The part of the statement which concerns the functor $\Strfun{\Fungfp{\Vsnot F}}$ is a direct application of Lemma~\ref{lemma:functor-gfp-general} so we only have to deal with the strength. 
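Before turning to the strength, it may help to keep in mind a concrete set-theoretic (hence outside this paper's relational setting) instance of Lemma~\ref{lemma:functor-gfp-general}: for $\cF(X,Y)=X\times Y$, the final coalgebra $\Fungfp\cF(X)$ is the set of streams over $X$, and the defining equation $\cF(g,\Fungfp\cF(g))=\Fungfp\cF(g)$ says exactly that $\Fungfp\cF(g)$ applies $g$ to the head and itself to the tail, i.e.\ maps $g$ over the stream. A minimal sketch:

```python
from itertools import islice

def gfp_map(g, stream):
    """Unique h with h = F(g, h) for F(X, Y) = X x Y on streams:
    apply g to the head, recurse corecursively on the tail."""
    it = iter(stream)
    yield g(next(it))          # F(g, h) acts as g on the first component...
    yield from gfp_map(g, it)  # ...and as h on the rest of the stream

def nats():                    # the stream 0, 1, 2, ...
    n = 0
    while True:
        yield n
        n += 1

print(list(islice(gfp_map(lambda n: n * n, nats()), 5)))  # [0, 1, 4, 9, 16]
```

Uniqueness of $\Fungfp\cF(g)$ corresponds to the fact that a stream function satisfying this recursion is determined on every finite prefix.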
Let us prove naturality so let $\Vect f\in\LCAT^n(\Vect X,\Vect{X'})$ and $g\in\LCAT(Y,Y')$, we must prove that the following diagram commutes \[ \begin{tikzpicture}[->, >=stealth] \node (1) {$\Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)}$}; \node (2) [right of=1, node distance=5.4cm] {$\Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl Y}{\Vect X})$}; \node (3) [below of=1, node distance=1.2cm] {$\Tens{\Excl {Y'}}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect{X'})}$}; \node (4) [below of=2, node distance=1.2cm] {$\Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl {Y'}}{\Vect{X'}})$}; \tikzstyle{every node}=[midway,auto,font=\scriptsize] \draw (1) -- node {$\Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X}$} (2); \draw (1) -- node [swap] {$\Tens{\Excl g}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect f)}$} (3); \draw (2) -- node {$\Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl g}{\Vect f})$} (4); \draw (3) -- node {$\Strnat{\Fungfp{\Vcsnot F}}_{Y',\Vect{X'}}$} (4); \end{tikzpicture} \] Let $h_1=\Strnat{\Fungfp{\Vcsnot F}}_{Y',\Vect{X'}} \Compl(\Tens{\Excl g}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect f)})$ and $h_2=\Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl g}{\Vect f})\Compl\Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X}$ be the two morphisms we must prove equal. We use Lemma~\ref{lemma:equations-final-coalgebra}, taking the following morphism $l$. 
\begin{equation*} \begin{tikzpicture}[->, >=stealth] \node (1) {$\Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)} =\Excl Y\ITens \Strfun{\Vcsnot F}(\Vect X,\Strfun{\Fungfp{\Vcsnot F}}(\Vect X))$}; \node (2) [below of=1, node distance=1.2cm] {$\Strfun{\Vcsnot F}(\Tens{\Excl Y}{\Vect X}, \Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)})$}; \node (3) [below of=2, node distance=1.2cm] {$\Strfun{\Vcsnot F}(\Tens{\Excl {Y'}}{\Vect{X'}}, \Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)})$}; \tikzstyle{every node}=[midway,auto,font=\scriptsize] \draw (1) -- node {$\Strnat{\Vcsnot F}_{Y,(\Vect X, \Strfun{\Fungfp{\Vcsnot F}}(\Vect X))}$} (2); \draw (2) -- node {$\Strfun{\Vcsnot F}(\Tens{\Excl g} {\Vect f},\Id)$} (3); \end{tikzpicture} \end{equation*} With these notations we have \begin{align*} &\Strfun{\Vcsnot F}(\Tens{\Excl {Y'}}{\Vect{X'}},h_1)\Compl l\\ &= \Strfun{\Vcsnot F} (\Tens{\Excl {Y'}}{\Vect{X'}},\Strnat{\Fungfp{\Vcsnot F}}_{Y',\Vect{X'}}) \Compl \Strfun{\Vcsnot F}(\Tens{\Excl {Y'}}{\Vect{X'}}, \Tens{\Excl g}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect f)})\\ &\quad\quad \Compl \Strfun{\Vcsnot F}(\Tens{\Excl g}{\Vect f}, \Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)}) \Compl \Strnat{\Vcsnot F}_{Y,(\Vect X,\Strfun{\Fungfp{\Vcsnot F}}(\Vect X))}\\ &= \Strfun{\Vcsnot F} (\Tens{\Excl {Y'}}{\Vect{X'}},\Strnat{\Fungfp{\Vcsnot F}}_{Y',\Vect{X'}}) \Compl \Strfun{\Vcsnot F}(\Tens{\Excl g}{\Vect f}, \Tens{\Excl g}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect f)}) \Compl \Strnat{\Vcsnot F}_{Y,(\Vect X,\Strfun{\Fungfp{\Vcsnot F}}(\Vect X))}\\ &= \Strfun{\Vcsnot F} (\Tens{\Excl {Y'}}{\Vect{X'}},\Strnat{\Fungfp{\Vcsnot F}}_{Y',\Vect{X'}}) \Compl \Strnat{\Vcsnot F}_{Y',(\Vect{X'},\Strfun{\Fungfp{\Vcsnot F}}(\Vect{X'}))} \Compl (\Tens{\Excl g}{\Strfun{\Vcsnot F}} (\Vect f,\Strfun{\Fungfp{\Vcsnot F}}(\Vect f)))\\ &\quad\quad\quad\quad\quad\quad \text{ by naturality of }\Strnat{\Vcsnot F}\\ &= \Strnat{\Fungfp{\Vcsnot F}}_{Y',\Vect{X'}} \Compl(\Tens{\Excl g}{\Strfun{\Vcsnot F}} (\Vect 
f,\Strfun{\Fungfp{\Vcsnot F}}(\Vect f))) \text{ by~\Eqref{eq:final-coalg-strength-charact}}\\ &= \Strnat{\Fungfp{\Vcsnot F}}_{Y',\Vect{X'}} \Compl(\Tens{\Excl g}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect f)}) \text{ by~Lemma~\ref{lemma:functor-gfp-general}} \end{align*} so that $\Strfun{\Vcsnot F}(\Tens{\Excl {Y'}}{\Vect{X'}},h_1)\Compl l=h_1$ as required. On the other hand we have \begin{align*} &\Strfun{\Vcsnot F}(\Tens{\Excl {Y'}}{\Vect{X'}},h_2)\Compl l\\ &= \Strfun{\Vcsnot F}(\Tens{\Excl {Y'}}{\Vect{X'}}, \Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl g}{\Vect f})) \Compl \Strfun{\Vcsnot F}(\Tens{\Excl {Y'}}{\Vect{X'}}, \Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X})\\ &\quad\quad \Compl \Strfun{\Vcsnot F}(\Tens{\Excl g}{\Vect f}, \Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)}) \Compl \Strnat{\Vcsnot F}_{Y,(\Vect X,\Strfun{\Fungfp{\Vcsnot F}}(\Vect X))}\\ &= \Strfun{\Vcsnot F}(\Tens{\Excl {Y'}}{\Vect{X'}}, \Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl g}{\Vect f})) \Compl \Strfun{\Vcsnot F}(\Tens{\Excl g}{\Vect f}, \Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)})\\ &\quad\quad \Compl \Strfun{\Vcsnot F}(\Tens{\Excl Y}{\Vect X}, \Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X}) \Compl \Strnat{\Vcsnot F}_{Y,(\Vect X,\Strfun{\Fungfp{\Vcsnot F}}(\Vect X))}\\ &= \Strfun{\Vcsnot F}(\Tens{\Excl g}{\Vect f}, \Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl g}{\Vect f})) \Compl\Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X} \text{ by~\Eqref{eq:final-coalg-strength-charact}}\\ &= \Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl g}{\Vect f}) \Compl\Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X} \text{ by~Lemma~\ref{lemma:functor-gfp-general}} \end{align*} so that $\Strfun{\Vcsnot F}(\Tens{\Excl {Y'}}{\Vect{X'}},h_2)\Compl l=h_2$ which proves our contention. 
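The diagram chase above is abstract, so a finite sanity check may help intuition. The sketch below verifies the analogous naturality square for a much simpler strong functor on finite relations, namely the Egli-Milner lifting of the powerset functor with the canonical strength $(y,S)\mapsto\{y\}\times S$ and the plain tensor in place of $\Tens{\Excl Y}{(\cdot)}$; for this simplified strength, naturality can fail for non-functional $g$, so below $g$ is the graph of a function. Everything here is an illustration, not the paper's construction.

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def compose(r2, r1):
    """Relational composition r2 . r1 (apply r1 first)."""
    return {(a, c) for (a, b) in r1 for (b2, c) in r2 if b == b2}

def tensor(g, f):
    return {((y, x), (y2, x2)) for (y, y2) in g for (x, x2) in f}

def lift(f, X, X2):
    """Egli-Milner lifting of a relation f <= X x X2 to the powersets."""
    return {(S, T) for S in powerset(X) for T in powerset(X2)
            if all(any((a, b) in f for b in T) for a in S)
            and all(any((a, b) in f for a in S) for b in T)}

def strength(Y, X):
    """Canonical strength (y, S) |-> {y} x S."""
    return {((y, S), frozenset((y, x) for x in S))
            for y in Y for S in powerset(X)}

Y = Y2 = X = X2 = {0, 1}
YX = {(y, x) for y in Y for x in X}
Y2X2 = {(y, x) for y in Y2 for x in X2}
g = {(0, 1), (1, 0)}                  # graph of a function Y -> Y2
f = {(0, 0), (0, 1), (1, 1)}          # an arbitrary relation X -> X2

lhs = compose(lift(tensor(g, f), YX, Y2X2), strength(Y, X))
rhs = compose(strength(Y2, X2), tensor(g, lift(f, X, X2)))
print(lhs == rhs)  # the naturality square commutes in this instance
```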
The commutation of the diagrams of Figure~\ref{fig:strength-monoidality} for $\Strnat{\Fungfp{\Vcsnot F}}$ is proven similarly.%
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:rel-embedding-retraction}}
\begin{proof}
Let $a\in E$; since $(a,a)\in\Id_E=t\Compl s$, there must exist $b\in F$ such that $(a,b)\in s$ and $(b,a)\in t$. If $(a,b')\in s$ then $(b,b')\in s\Compl t\subseteq\Id_F$ and hence $b'=b$. It follows that $s$ is a total function $E\to F$. Let $(a,b)\in s$ (that is $a\in E$ and $b=s(a)$). Since $t\Compl s=\Id_E$, we must have $(b,a)\in t$. Conversely let $(b,a)\in t$; we have $(b,s(a))\in s\Compl t$ and hence $b=s(a)$. We have proven that $t=\Eset{(s(a),a)\mid a\in E}$. If $a,a'\in E$ satisfy $s(a)=s(a')$ we have therefore $(a,a')\in t\Compl s=\Id_E$ and hence $a=a'$; this shows that $s$ is injective.
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:rel-fixpoint-final}}
\begin{proof}
Let $(E,t)$ be an $\Vsnot F$-coalgebra. We define a sequence $e_n\in\REL(E,\Funfp{\Vsnot F})$ as follows: $e_0=\emptyset$ and $e_{n+1}=\Vsnot F(e_n)\Compl t$. Then $e_n\subseteq e_{n+1}$ for all $n$ by an easy induction, using the fact that $\Vsnot F$ is locally continuous. Let $e=\Union_{n=0}^\infty e_n\in\REL(E,\Funfp{\Vsnot F})$; by local continuity of $\Vsnot F$ we have \( \Vsnot F(e)\Compl t = \left(\Union_{n=0}^\infty\Vsnot F(e_n)\right)\Compl t = \Union_{n=0}^\infty (\Vsnot F(e_n)\Compl t) = \Union_{n=0}^\infty e_{n+1}=e \) which means that \[ e\in\COALGFUN\REL{\Vsnot F}((E,t),(\Funfp{\Vsnot F},\Id))\,. \] We end the proof by showing that $e$ is the unique such morphism, so let \(e'\in\COALGFUN\REL{\Vsnot F}((E,t),(\Funfp{\Vsnot F},\Id))\), which means that $e'\in\REL(E,\Funfp{\Vsnot F})$ and $\Vsnot F(e')\Compl t=e'$. Let $i_n\in\REL(\Funfp{\Vsnot F},\Funfp{\Vsnot F})$ be defined by induction by $i_0=\emptyset$ and $i_{n+1}=\Vsnot F(i_n)$. Then $(i_n)_{n\in\Nat}$ is monotone and $\Union_{n=0}^\infty i_n=\Id$ by definition of $\Funfp{\Vsnot F}$.
We prove by induction on $n$ that $\forall n\in\Nat\ i_n\Compl e'=i_n\Compl e$. Clearly $i_0\Compl e'=i_0\Compl e=\emptyset$. Next \begin{align*} i_{n+1}\Compl e' &= \Vsnot F(i_n)\Compl \Vsnot F(e')\Compl t\\ &= \Vsnot F(i_n\Compl e')\Compl t\\ &= \Vsnot F(i_n\Compl e)\Compl t\text{\quad by inductive hypothesis}\\ &= i_{n+1}\Compl e\,. \end{align*} Therefore $e'=\left(\Union_{n\in\Nat}i_n\right)\Compl e'=\Union_{n\in\Nat}(i_n\Compl e')=\Union_{n\in\Nat}(i_n\Compl e)=e$.
\end{proof}
\subsection{Proof of Proposition~\ref{prop:hom-continuous-dir-cocontinuous}}
\begin{proof}
Let $\cD$ be a directed set of sets and let $H$ be a set. For each $E\in\cD$ let $s_E\in\REL(\Vsnot F(E),H)$ so that $(s_E)_{E\in\cD}$ defines a cocone, that is, for each $E,F\in\cD$ such that $E\subseteq F$, one has $s_E=s_F\Compl\Vsnot F(\Relii_{E,F})$. Let $L=\Union\cD$. Let $s\in\REL(\Vsnot F(L),H)$ be given by \( s=\Union_{E\in\cD}s_E\Compl\Vsnot F(\Relip_{E,L}) \). Let $E\in\cD$; we have \( s\Compl\Vsnot F(\Relii_{E,L}) =\Union_{F\in\cD}s_F\Compl\Vsnot F(\Relip_{F,L}\Compl\Relii_{E,L}) \) so that $s_E\subseteq s\Compl\Vsnot F(\Relii_{E,L})$ (since $s_F\Compl\Vsnot F(\Relip_{F,L}\Compl\Relii_{E,L})=s_E$ when $F=E$). We prove the converse inclusion. Let $F\in\cD$ and let $G\in\cD$ be such that $E,F\subseteq G$ (remember that $\cD$ is directed). We have \begin{align*} &s_F\Compl\Vsnot F(\Relip_{F,L}\Compl\Relii_{E,L}) =s_F\Compl\Vsnot F(\Relip_{F,G}\Compl\Relip_{G,L} \Compl\Relii_{G,L}\Compl\Relii_{E,G})\\ &\quad=s_F\Compl\Vsnot F(\Relip_{F,G}\Compl\Relii_{E,G}) =s_G\Compl\Vsnot F(\Relii_{F,G})\Compl\Vsnot F(\Relip_{F,G}\Compl\Relii_{E,G})\\ &\quad\subseteq s_G\Compl\Vsnot F(\Relii_{E,G})=s_E \end{align*} where we have used the fact that $\Relii_{F,G}\Compl\Relip_{F,G}\subseteq\Id_G$ and hence $\Vsnot F(\Relii_{F,G}\Compl\Relip_{F,G})\subseteq\Id_{\Vsnot F(G)}$ by local continuity of $\Vsnot F$.
So $s_F\Compl\Vsnot F(\Relip_{F,L}\Compl\Relii_{E,L})\subseteq s_E$ for all $F\in\cD$ and hence $s\Compl\Vsnot F(\Relii_{E,L})\subseteq s_E$ as contended. Let now $s'\in\REL(\Vsnot F(L),H)$ be such that $s'\Compl\Vsnot F(\Relii_{E,L})=s_E$ for each $E\in\cD$; we show that $s'=s$, thus proving the uniqueness part of the universal property. For $E\in\cD$, let $\theta_E=\Relii_{E,L}\Compl\Relip_{E,L}\in\REL(L,L)$. Then $(\theta_E)_{E\in\cD}$ is a directed family (for $\subseteq$) and $\Union_{E\in\cD}\theta_E=\Id_L$. By local continuity of $\Vsnot F$, we have \begin{align*} s' &=s'\Compl\Id_{\Vsnot F(L)} =s'\Compl\Union_{E\in\cD}\Vsnot F(\theta_E)\\ &=\Union_{E\in\cD}s'\Compl\Vsnot F(\Relii_{E,L}) \Compl\Vsnot F(\Relip_{E,L}) =\Union_{E\in\cD}s_E\Compl\Vsnot F(\Relip_{E,L})=s \end{align*} by our assumption on $s'$ and by definition of $s$. This shows that the cocone $(\Vsnot F(\Relii_{E,L}))_{E\in\cD}$ on $\Vsnot F\Compl\Relii$ is colimiting, thus proving that $\Vsnot F\Compl\Relii$ is directed cocontinuous.
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:hom-conts-stable-fixpoint}}
\begin{proof}
As usual we assume that $n=1$ to increase readability. We need to prove first that $\Fungfp{\Vsnot F}$ is monotone on morphisms, so let $s,t\in\REL(E,F)$ with $s\subseteq t$. We have $\Fungfp{\Vsnot F}(s)=\Union_{n\in\Nat}s_n$ and $\Fungfp{\Vsnot F}(t)=\Union_{n\in\Nat}t_n$ with $s_0=t_0=\emptyset$, $s_{n+1}=\Vsnot F(s,s_n)$ and $t_{n+1}=\Vsnot F(t,t_n)$ (we use the action of $\Fungfp{\Vsnot F}$ on morphisms resulting from Lemma~\ref{lemma:functor-gfp-general} and from the characterization of the morphisms to the final object given in the proof of Lemma~\ref{lemma:rel-fixpoint-final}). By induction and hom-monotonicity of $\Vsnot F$ we have $\forall n\in\Nat\ s_n\subseteq t_n$ and hence $\Fungfp{\Vsnot F}(s)\subseteq\Fungfp{\Vsnot F}(t)$.
Let us prove now local continuity, so let $D\subseteq\REL(E,F)$ be directed and let $t=\Union D$; we prove that $\Fungfp{\Vsnot F}(t)=\Union_{s\in D}\Fungfp{\Vsnot F}(s)\in\REL(\Fungfp{\Vsnot F}(E),\Fungfp{\Vsnot F}(F))$ using Lemma~\ref{lemma:equations-final-coalgebra} (with the notations of that lemma, we take $l=\Vsnot F(t,\Fungfp{\Vsnot F}(E))$). We have \[ \Vsnot F_F(\Fungfp{\Vsnot F}(t))\Compl\Vsnot F(t,\Fungfp{\Vsnot F}(E))=\Fungfp{\Vsnot F}(t) \] by definition of the functor $\Fungfp{\Vsnot F}$ and \begin{align*} &\Vsnot F_F(\Union_{s\in D}\Fungfp{\Vsnot F}(s)) \Compl\Vsnot F(t,\Fungfp{\Vsnot F}(E))\\ &= \Union_{s\in D}\Vsnot F(F,\Fungfp{\Vsnot F}(s)) \Compl\Union_{s\in D}\Vsnot F(s,\Fungfp{\Vsnot F}(E)) \text{\quad by hom-cont.}\\ &= \Union_{s\in D}\Vsnot F(s,\Fungfp{\Vsnot F}(s)) = \Union_{s\in D}\Fungfp{\Vsnot F}(s)\,. \end{align*} In the second equation we used that $D$ is directed, together with the monotonicity of $\Vsnot F$ and $\Fungfp{\Vsnot F}$ on morphisms. Let $E\subseteq F$; we prove that $\Fungfp{\Vsnot F}(E)\subseteq\Fungfp{\Vsnot F}(F)$. This results from the observation that if $E'\subseteq F'$, then $\Vsnot F_E(E')\subseteq\Vsnot F_F(F')$ and hence $\forall n\in\Nat\ \Vsnot F_E^n(\emptyset)\subseteq\Vsnot F_F^n(\emptyset)$. Let us check that $\Fungfp{\Vsnot F}(\Relii_{E,F})=\Relii_{\Fungfp{\Vsnot F}(E),\Fungfp{\Vsnot F}(F)}\in\REL(\Fungfp{\Vsnot F}(E),\Fungfp{\Vsnot F}(F))$.
We have \begin{multline*} \Vsnot F(F,\Fungfp{\Vsnot F}(\Relii_{E,F}))\Compl\Vsnot F(\Relii_{E,F},\Fungfp{\Vsnot F}(E))\\ =\Vsnot F(\Relii_{E,F},\Fungfp{\Vsnot F}(\Relii_{E,F}))=\Fungfp{\Vsnot F}(\Relii_{E,F}) \end{multline*} by definition of the functor $\Fungfp{\Vsnot F}$ and \begin{align*} \Vsnot F(F,\Relii_{\Fungfp{\Vsnot F}(E),\Fungfp{\Vsnot F}(F)})\Compl\Vsnot F(\Relii_{E,F},\Fungfp{\Vsnot F}(E)) &=\Relii_{\Vsnot F(E,\Fungfp{\Vsnot F}(E)), \Vsnot F(F,\Fungfp{\Vsnot F}(F))}\\ &=\Relii_{\Fungfp{\Vsnot F}(E),\Fungfp{\Vsnot F}(F)} \end{align*} by strictness of $\Vsnot F$. The equation follows by Lemma~\ref{lemma:equations-final-coalgebra}, so that the functor $\Fungfp{\Vsnot F}$ is strict.
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:nuts-limpl-charact}}
\begin{proof}
Let $t\in\Total{\Limpl XY}$ and let $u\in\Total X$. Let $v'\in\Orth{\Total Y}$, since $u\times v'\in\Total{\Tens X{\Orth Y}}$ we have $t\cap(u\times v')\not=\emptyset$ and hence $(\Matappa tu)\cap v'\not=\emptyset$. Therefore $\Matappa tu\in\Biorth{\Total Y}=\Total Y$. Conversely assume that $\forall u\in\Total X\ \Matappa tu\in\Total Y$. Let $u\in\Total X$ and $v'\in\Total{\Orth Y}=\Orth{\Total Y}$. Since $\Matappa tu\in\Total Y$ we have $(\Matappa tu)\cap v'\not=\emptyset$ and hence $t\cap(u\times v')\not=\emptyset$ and this shows that $t\in\Total{\Limpl XY}$.
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:NUTS-iso-charact}}
\begin{proof}
Assume that $t$ is an iso in $\NUTS$ so that there is $t'\in\NUTS(Y,X)$ such that $\Matapp{t'}t=\Id_{\Web X}$ and $\Matapp t{t'}=\Id_{\Web Y}$ and since we know that the isos in $\REL$ are the bijections we know that $t$ is a bijection. The fact that $\forall u\subseteq\Web X\ u\in\Total X\Equiv t(u)\in\Total Y$ results from the fact that both $t$ and $t'=\Funinv t$ are morphisms in $\NUTS$. The converse implication is obvious.
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:limpl-tens-charact}}
\begin{proof}
The condition is obviously necessary; let us prove that it is sufficient, so assume that $t$ fulfills it and let us prove that $t\in\Total{\Limpl{\Tens{X_1}{X_2}}{Y}}$. To this end it suffices to prove that $\Orth t\in\Total{\Limpl{\Orth Y}{\Orth{\Tensp{X_1}{X_2}}}}$. So let $v'\in\Total{\Orth Y}$ and let us prove that $\Matappa{\Orth t}{v'}\in\Total{\Orth{\Tensp{X_1}{X_2}}}=\Orth{\Eset{\Tens{u_1}{u_2}\mid u_1\in\Total{X_1}\text{ and }u_2\in\Total{X_2}}}$. So let $u_i\in\Total{X_i}$ for $i=1,2$. We know that $\Matappa t{\Tensp{u_1}{u_2}}\in\Total Y$ and hence $\Matappap t{\Tensp{u_1}{u_2}}\cap v'\not=\emptyset$, that is $\Tensp{u_1}{u_2}\cap\Matappap{\Orth t}{v'}\not=\emptyset$, proving our contention.
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:Assoc-tens-limpl}}
\begin{proof}
Let $t\in\Total{\Limpl{\Tensp{X_1}{X_2}}{Y}}$ and let us prove that $s=\Matappa\Assoc t\in\Total{\Limpl{X_1}{\Limplp{X_2}{Y}}}$. Given $u_i\in\Total{X_i}$, it suffices to prove that $\Matappa{\Matappap{s}{u_1}}{u_2}\in\Total Y$, which results from the fact that $\Matappa{\Matappap{s}{u_1}}{u_2}=\Matappa t{\Tensp{u_1}{u_2}}$. Conversely let $s\in\Total{\Limpl{X_1}{\Limplp{X_2}{Y}}}$ and let us prove that $t=\Matappa{\Funinv\Assoc}{s}\in\Total{\Limpl{\Tensp{X_1}{X_2}}{Y}}$. This results from Lemma~\ref{lemma:limpl-tens-charact} and from the equation $\Matappa{\Matappap{s}{u_1}}{u_2}=\Matappa t{\Tensp{u_1}{u_2}}$.
\end{proof}
\subsection{Proof of Lemma~\ref{lemma:nuts-excl-map}}
\begin{proof}
The condition is obviously necessary, so let us assume that it holds. By Lemma~\ref{lemma:nuts-orth-morph}, it suffices to prove that $\Orth t\in\NUTS(\Orth Y,\Orthp{\Excl X})$. Let $v'\in\Total{\Orth Y}$, we prove that $\Matappa{\Orth t}{v'}\in\Orth{\Total{\Excl X}}$.
So let $u\in\Total X$; since $\Matappa t{\Promhc u}\in\Total Y$ we have $\Matappap t{\Promhc u}\cap v'\not=\emptyset$, that is $\Matappap{\Orth t}{v'}\cap\Promhc u\not=\emptyset$. \end{proof} \subsection{Proof of Lemma~\ref{lemma:nuts-excl-map-bil}} \begin{proof} We deal with the case $k=2$. The condition is necessary since, if $u_1\in\Total{X_1}$ and $u_2\in\Total{X_2}$, then $\Tens{\Promhc{u_1}}{\Promhc{u_2}}\in\Total{\Tens{\Excl{X_1}}{\Excl{X_2}}}$. So assume that it holds. Let $t'=\Curlin(t)\in\REL(\Web{\Excl{X_1}},\Web{\Limpl{\Excl{X_2}}Y})$. Let $u_1\in\Total{X_1}$, we have $\Matappa{t'}{\Promhc{u_1}}\in\Part{\Web{\Limpl{\Excl{X_2}}Y}}$. Let $u_2\in\Total{X_2}$, we have $\Matappa{\Matappap{t'}{\Promhc{u_1}}}{\Promhc{u_2}} =\Matappa t{\Tensp{\Promhc{u_1}}{\Promhc{u_2}}}\in\Total Y$ by our assumption. It follows by Lemma~\ref{lemma:nuts-excl-map} that $\Matappa{t'}{\Promhc{u_1}}\in\Total{\Limpl{\Excl{X_2}}Y}$ and since this holds for any $u_1\in\Total{X_1}$ we actually have $t'\in\NUTS(\Excl{X_1},\Limpl{\Excl{X_2}}Y)$. It follows that $t=\Funinv{\Curlin}(t')\in\NUTS(\Tens{\Excl{X_1}}{\Excl{X_2}},Y)$ as contended. \end{proof} \subsection{Proof of Lemma~\ref{lemma:NUTS-excl}} \begin{proof} Given an object $X$ of $\NUTS$, we set $\Der X=\Der{\Web X}\in\REL(\Web{\Excl X},\Web X)$ and $\Digg X=\Digg{\Web X}\in\REL(\Web{\Excl X},\Web{\Excll X})$. Given $u\in\Total X$, we have $\Matappa{\Der X}{\Promhc u}=u\in\Total X$ and $\Matappa{\Digg X}{\Promhc u}=\Prommhc u\in\Total{\Excll X}$. It follows by Lemma~\ref{lemma:nuts-excl-map} that $\Der X\in\NUTS(\Excl X,X)$ and $\Digg X\in\NUTS(\Excl X,\Excll X)$. Naturality and monadicity trivially hold because they hold in $\REL$: we have an obvious faithful forgetful functor $\NUTS\to\REL$ which commutes with all $\LL$ categorical constructs. We are left with defining the strong monoidality structure of $\Excl\_$ (Seely isomorphisms): for $\Seelyz\in\NUTS(\One,\Excl\top)$ we take the same morphism as in $\REL$.
And we set $\Seelyt_{X_1,X_2}=\Seelyt_{\Web{X_1},\Web{X_2}} \in\REL(\Web{\Tens{\Excl{X_1}}{\Excl{X_2}}},\Web{\Exclp{\With{X_1}{X_2}}})$. Let $u_i\in\Total{X_i}$ for $i=1,2$. We have \( \Matappa{\Seelyt_{X_1,X_2}}{\Tensp{\Promhc{u_1}}{\Promhc{u_2}}} =\Promhc{(\With{u_1}{u_2})}\in\Total{\Exclp{\With{X_1}{X_2}}} \) since $\With{u_1}{u_2}\in\Total{\With{X_1}{X_2}}$, and hence by Lemma~\ref{lemma:nuts-excl-map-bil} we have \( \Seelyt_{X_1,X_2}\in\NUTS(\Tensp{\Excl{X_1}}{\Excl{X_2}}, \Exclp{\With{X_1}{X_2}}) \). Any element $w$ of $\Total{\With{X_1}{X_2}}$ is of shape $w=\With{u_1}{u_2}$ with $u_i\in\Total{X_i}$, namely $u_i=\Matappa{\Proj i}w$. We have \( \Matappa{\Funinv{(\Seelyt_{X_1,X_2})}}{\Promhc w} =\Tens{\Promhc{u_1}}{\Promhc{u_2}}\in\Total{\Tens{\Excl{X_1}}{\Excl{X_2}}} \) and hence by Lemma~\ref{lemma:nuts-excl-map} we have $\Funinv{(\Seelyt_{X_1,X_2})} \in\NUTS(\Exclp{\With{X_1}{X_2}},\Tensp{\Excl{X_1}}{\Excl{X_2}})$. This ends the proof that $\NUTS$ is a model of classical Linear Logic since the required commutations obviously hold because they hold in $\REL$. \end{proof} \subsection{Full proof of Theorem~\ref{th:VNUTS-model}} \begin{proof} Concerning Condition~(\ref{enum:seel-mull-3}), let $(\Vsnot X_i)_{i=1}^k$ be elements of $\VNUTS n$ and let $\Vsnot X\in\VNUTS k$. Considering $\Vsnot X$ and the $\Vsnot X_i$'s as strong functors, we know that $\Vsnot X\Comp\Vect{\Vsnot X}$ is a strong functor $\NUTS^n\to\NUTS$. We simply have to exhibit a VNUTS whose associated strong functor is $\Vsnot X\Comp\Vect{\Vsnot X}$. Let $\Vsnot F=\Web{\Vsnot X}\Comp\Web{\Vect{\Vsnot X}}$ (composition of variable sets, Section~\ref{sec:strong-VS-Seely-model}). Let $\Vect X\in\NUTS^n$, each $\Strfun{\Vsnot X_i}(\Vect X)$ is an object of $\NUTS$ and hence\\ $(\Strfun{\Vsnot F}(\Web{\Vect{X}}), \Total{\Vsnot X}(\Strfun{\Vsnot X_1}(\Vect X),\dots,\Strfun{\Vsnot X_k}(\Vect X)))$ is a NUTS. 
Moreover given $\Vect t\in\NUTS^n(\Vect X,\Vect Y)$, we know that for each $i=1,\dots,k$, one has $\Strfun{\Vsnot X_i}(\Vect t)\in\NUTS(\Strfun{\Vsnot X_i}(\Vect X),\Strfun{\Vsnot X_i}(\Vect Y))$ since $\Vsnot X_i$ is a VNUTS. Since $\Vsnot X$ is a VNUTS we have \begin{multline*} \Strfun{\Vsnot F}(\Vect t)\\ \in\NUTS(\Strfun{\Vsnot X}(\Strfun{\Vsnot X_1}(\Vect X),\dots,\Strfun{\Vsnot X_k}(\Vect X)),\Strfun{\Vsnot X}(\Strfun{\Vsnot X_1}(\Vect Y),\dots,\Strfun{\Vsnot X_k}(\Vect Y)))\,. \end{multline*} Let $X\in\Obj\NUTS$ and $\Vect Y\in\Obj{\NUTS^k}$. For $i=1,\dots,k$ we know that $\Strnat{\Vsnot X_i}_{X,\Vect Y}\in\NUTS(\Tens{\Excl X}{\Strfun{\Vsnot X_i}(\Vect Y)},\Strfun{\Vsnot X_i}(\Tens{\Excl X}{\Vect Y}))$. Therefore \begin{multline*} \Strfun{\Vsnot X}((\Strnat{\Vsnot X_i}_{X,\Vect Y})_{i=1}^k)\\ \in\NUTS(\Strfun{\Vsnot X}((\Tens{\Excl X}{\Strfun{\Vsnot X_i}(\Vect Y)})_{i=1}^k), \Strfun{\Vsnot X}((\Strfun{\Vsnot X_i}(\Tens{\Excl X}{\Vect Y}))_{i=1}^k)) \end{multline*} and hence \begin{multline*} \Strfun{\Vsnot X}((\Strnat{\Vsnot X_i}_{X,\Vect Y})_{i=1}^k)\Compl\Strnat{\Vsnot X}_{X,(\Strfun{\Vsnot X_i}(\Vect Y))_{i=1}^k}\\ \in \NUTS(\Tens{\Excl X}{\Strfun{\Vsnot X} ((\Strfun{\Vsnot X_i}(\Vect Y))_{i=1}^k)},\Strfun{\Vsnot X}((\Strfun{\Vsnot X_i}(\Tens{\Excl X}{\Vect Y}))_{i=1}^k) )\,. 
\end{multline*} Moreover we have \begin{align*} \Strnat{\Vsnot F}_{\Web X,\Web{\Vect Y}} &=\Strfun{\Web{\Vsnot X}}((\Strnat{\Web{\Vsnot X_i}}_{\Web X,\Web{\Vect Y}})_{i=1}^k)\Compl\Strnat{\Web{\Vsnot X}}_{\Web X,(\Web{\Strfun{\Vsnot X_i}(\Vect Y)})_{i=1}^k}\\ &\hspace{5em}\text{\quad by definition of }\Vsnot F\\ &=\Strfun{\Web{\Vsnot X}}((\Strnat{\Web{\Vsnot X_i}}_{\Web X,\Web{\Vect Y}})_{i=1}^k) \Compl\Strnat{\Web{\Vsnot X}}_{\Web X,(\Strfun{\Web{\Vsnot X_i}}(\Web{\Vect Y}))_{i=1}^k}\\ &=\Strfun{\Vsnot X}((\Strnat{\Vsnot X_i}_{X,\Vect Y})_{i=1}^k)\Compl\Strnat{\Vsnot X}_{X,(\Strfun{\Vsnot X_i}(\Vect Y))_{i=1}^k} \end{align*} using again the fact that $\Vsnot X$ and the $\Vsnot{X}_i$'s are VNUTS. This shows that the pair $\Vsnot Y=(\Web{\Vsnot Y},\Total{\Vsnot Y})$ given by $\Web{\Vsnot Y}=\Vsnot F$ and $\Total{\Vsnot Y}(\Vect X)=\Total{\Vsnot X}(\Strfun{\Vsnot X_1}(\Vect X),\dots,\Strfun{\Vsnot X_k}(\Vect X))$ is a VNUTS whose associated strong functor is $\Vsnot X\Comp\Vect{\Vsnot X}$ thus proving our contention. Concerning Condition~(\ref{enum:seel-mull-4}), let us deal only with the case of $\Excl\_$, the others being similar. We have to exhibit a unary VNUTS $\Vsnot X$ whose associated strong functor $\NUTS\to\NUTS$ coincides with $\Excl\_$ (which is known to be a strong functor $\NUTS\to\NUTS$ by Section~\ref{sec:NUTS-exponential} and by the general considerations of Section~\ref{sec:strong-fun-LL-operations}). For $\Web{\Vsnot X}$, which has to be a variable set $\REL\to\REL$, we take the interpretation $\Vsnot E$ of $\Excl\_$ in the model $\REL$ (Section~\ref{sec:strong-VS-Seely-model}) which is an element of $\VREL 1$, that is, a unary variable set. Next, given $X\in\Obj\NUTS$, we take $\Total{\Vsnot X}(X)=\Total{\Excl X}$. Condition~(\ref{enum:vnuts-cond-tot}) in the definition of VNUTS holds by functoriality of $\Excl\_$ on $\NUTS$. 
Condition~(\ref{enum:vnuts-cond-strength}) holds by the definition of $\Strnat{\Vsnot F}_{\Web X,\Web Y}$ as described in Section~\ref{sec:strong-fun-LL-operations} which coincides with $\Monoidalt\Compl\Tensp{\Digg X}{\Excl Y}\in\NUTS(\Tens{\Excl X}{\Excl Y},\Excl{\Tensp{\Excl X}{Y}})$. Let us now turn to Condition~(\ref{enum:seel-mull-5}) which is a bit more challenging. \subsubsection{Fixed Points of VNUTS}\label{sec:fix-VNUTS} First, let $\Vsnot X=(\Web{\Vsnot X},\Total{\Vsnot X})$ be a \emph{unary} VNUTS. Let $E=\Funfp{\Strfun{\Web{\Vsnot X}}}$, which is the least set such that $\Strfun{\Web{\Vsnot X}}(E)=E$, that is $E=\Union_{n=0}^\infty\Strfun{\Web{\Vsnot X}}^n(\emptyset)$. Let $\Phi:\Tot E\to\Tot E$ be defined as follows: given $\cS\in\Tot E$, then $(E,\cS)$ is a NUTS, and we set $\Phi(\cS)=\Total{\Vsnot X}(E,\cS)\in\Tot{\Strfun{\Web{\Vsnot X}}(E)}=\Tot E$. This function $\Phi$ is monotone. Indeed, let $\cS_1,\cS_2\in\Tot E$ with $\cS_1\subseteq \cS_2$. Then we have $\Id\in\NUTS((E,\cS_1),(E,\cS_2))$ and therefore, by Condition~(\ref{enum:vnuts-cond-tot}) satisfied by $\Vsnot X$, we have \begin{align*} \Id=\Strfun{\Web{\Vsnot X}}(\Id) &\in\NUTS(\Strfun{\Vsnot X}(E,\cS_1),\Strfun{\Vsnot X}(E,\cS_2))\\ &\hspace{3em}=\NUTS((E,\Phi(\cS_1)),(E,\Phi(\cS_2))) \end{align*} which means that $\Phi(\cS_1)\subseteq\Phi(\cS_2)$. By the Knaster--Tarski Theorem (remember that $\Tot E$ is a complete lattice), $\Phi$ has a greatest fixpoint $\cT$ that we can describe as follows. Let $(\cT_\alpha)_{\alpha\in\Ordinals}$, where $\Ordinals$ is the class of ordinals, be defined by: $\cT_0=\Part E$ (the largest possible notion of totality on $E$), $\cT_{\alpha+1}=\Phi(\cT_\alpha)$ and $\cT_\lambda=\Inter_{\alpha<\lambda}\cT_\alpha$ when $\lambda$ is a limit ordinal.
This sequence is decreasing (easy induction on ordinals using the monotonicity of $\Phi$) and there is an ordinal $\theta$ such that $\cT_{\theta+1}=\cT_\theta$ (by a cardinality argument; we can assume that $\theta$ is the least such ordinal). The greatest fixpoint of $\Phi$ is then $\cT_\theta$ as easily checked. By construction $((E,\cT_\theta),\Id)$ is an object of $\COALGFUN{\NUTS}{\Strfun{\Vsnot X}}$; we prove that it is the final object. So let $(Y,t)$ be another object of the same category. Since $(\Web Y,t)$ is an object of $\COALGFUN\REL{\Strfun{\Web{\Vsnot X}}}$ and since $(E,\Id)$ is the final object in that category, we know by Lemma~\ref{lemma:rel-fixpoint-final} that there is exactly one $e\in\REL(\Web Y,E)$ such that $\Strfun{\Web{\Vsnot X}}(e)\Compl t=e$. We prove that actually $e\in\NUTS(Y,(E,\cT_\theta))$, so let $v\in\Total Y$. We prove by induction on the ordinal $\alpha$ that $\Matappa ev\in\cT_\alpha$. For $\alpha=0$ it is obvious since $\cT_0=\Part E$. Assume that the property holds for $\alpha$ and let us prove it for $\alpha+1$. We have $\Matappa tv\in\Total{\Vsnot X}(Y)=\Total{\Strfun{\Vsnot X}(Y)}$ since $t\in\NUTS(Y,\Strfun{\Vsnot X}(Y))$. Since $\Strfun{\Vsnot X}(e)\in\NUTS(\Strfun{\Vsnot X}(Y),\Strfun{\Vsnot X}(E,\cT_\alpha))$ and since $\Strfun{\Vsnot X}(E,\cT_\alpha)=(E,\cT_{\alpha+1})$ we have $\Matappa{(\Strfun{\Vsnot X}(e)\Compl t)}v\in\cT_{\alpha+1}$, that is $\Matappa ev\in\cT_{\alpha+1}$. Last, if $\lambda$ is a limit ordinal and if we assume $\forall\alpha<\lambda\ \Matappa ev\in\cT_\alpha$ we have $\Matappa ev\in\Inter_{\alpha<\lambda}\cT_\alpha=\cT_\lambda$. Therefore $\Matappa ev\in\cT_\theta$. We use $\Fungfp{\Strfun{\Vsnot X}}$ to denote this final coalgebra $(E,\cT_\theta)$ (its definition depends only on $\Strfun{\Vsnot X}$ and does not involve the strength $\Strnat{\Vsnot X}$).
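When the web $E$ is finite, the ordinal iteration above stabilizes after finitely many steps, so the greatest fixpoint of a monotone operator can be computed by iterating downwards from the top element of the lattice. The following sketch illustrates this on a finite powerset lattice (the operator \texttt{phi} below is a hypothetical stand-in for $\Phi$, not the actual totality operator):

```python
# Minimal sketch: greatest fixpoint of a monotone map by downward
# iteration from the top element of a finite lattice (here: subsets
# of a finite set). The operator `phi` is a hypothetical example.
def gfp(phi, top):
    current = top
    while True:
        nxt = phi(current)
        if nxt == current:      # reached the stationary point T_theta
            return current
        current = nxt

# Monotone operator: keep the points having a predecessor in the set.
succ = {(0, 1), (1, 2), (2, 0), (3, 4)}
phi = lambda s: {y for (x, y) in succ if x in s}
assert gfp(phi, set(range(5))) == {0, 1, 2}  # only the cycle survives
```

The decreasing sequence here is exactly $\cT_0\supseteq\cT_1\supseteq\dots$ specialized to a finite lattice, where the iteration necessarily becomes stationary.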
So we have proven the first part of Condition~(\ref{enum:seel-mull-5}) in the definition of a Seely model of $\MULL$ (see Section~\ref{def:categorical-muLL-models}). As to the second part, let $\Vsnot X$ be an $(n+1)$-ary VNUTS. We know by the general Lemma~\ref{lemma:strfun-gfp-general} that there is a uniquely defined strong functor $\Fungfp{\Vsnot X}:\NUTS^n\to\NUTS$ such that \begin{Itemize} \item $\Strfun{\Fungfp{\Vsnot X}}(\Vect X)=\Fungfp{(\Strfun{\Vsnot X}_{\Vect X})}$, so that $\Strfun{\Vsnot X}(\Vect X,\Strfun{\Fungfp{\Vsnot X}}(\Vect X))=\Strfun{\Fungfp{\Vsnot X}}(\Vect X)$, for all $\Vect X\in\Obj{\NUTS^n}$, \item $\Strfun{\Vsnot X}(\Vect t,\Strfun{\Fungfp{\Vsnot X}}(\Vect t))=\Strfun{\Fungfp{\Vsnot X}}(\Vect t)$ for all $\Vect t\in\NUTS^n(\Vect X,\Vect Y)$, \item and $\Strfun{\Vsnot X}(\Tens{Y}{\Vect X},\Strnat{\Fungfp{\Vsnot X}}_{Y,\Vect X})\Compl\Strnat{\Vsnot X}_{Y,(\Vect X,\Strfun{\Fungfp{\Vsnot X}}(\Vect X))}=\Strnat{\Fungfp{\Vsnot X}}_{Y,\Vect X}$ for all $Y\in\Obj\NUTS$ and $\Vect X\in\Obj{\NUTS^n}$. \end{Itemize} To end the proof, it will be enough to exhibit an $n$-ary VNUTS $\Vsnot Y=(\Web{\Vsnot Y},\Total{\Vsnot Y})$ whose associated strong functor coincides with $\Fungfp{\Vsnot X}$. We know that $\Web{\Vsnot X}$ is a variable set $\REL^{n+1}\to\REL$ so let $\Vsnot F=\Fungfp{\Web{\Vsnot X}}=\Funfp{\Web{\Vsnot X}}$ which is a variable set $\REL^n\to\REL$ (see Section~\ref{sec:VS-fixpoints}). Let $\Vect X\in\Obj{\NUTS^n}$, we have $\Web{\Strfun{\Fungfp{\Vsnot X}}(\Vect X)}=\Web{\Fungfp{(\Strfun{\Vsnot X}_{\Vect X})}}=\Union_{n=0}^\infty\Web{\Strfun{\Vsnot X}_{\Vect X}}^n(\emptyset)=\Strfun{\Vsnot F}(\Web{\Vect X})$.
Let $\Vect t\in\NUTS^n(\Vect X,\Vect Y)$, then $\Strfun{\Fungfp{\Vsnot X}}(\Vect t)$ is the unique element $s$ of $\NUTS(\Strfun{\Fungfp{\Vsnot X}}(\Vect X),\Strfun{\Fungfp{\Vsnot X}}(\Vect Y))$ (this hom-set is a subset of $\REL(\Vsnot F(\Web{\Vect X}),\Vsnot F(\Web{\Vect Y}))$) which satisfies $\Strfun{\Vsnot X}(\Vect t,s)=s$, that is $\Strfun{\Web{\Vsnot X}}(\Vect t,s)=s$. This means that $\Strfun{\Fungfp{\Vsnot X}}(\Vect t)=s=\Strfun{\Vsnot F}(\Vect t)$. By a completely similar uniqueness argument we have $\Strnat{\Fungfp{\Vsnot X}}_{X,\Vect Y}=\Strnat{\Vsnot F}_{\Web X,\Web{\Vect Y}}$ for all $X\in\Obj{\NUTS}$ and $\Vect Y\in\Obj{\NUTS^n}$. So we set $\Web{\Vsnot Y}=\Vsnot F$. Next, given $\Vect X\in\Obj{\NUTS^n}$ we set \begin{multline*} \Total{\Vsnot Y}(\Vect X)=\Total{\Strfun{\Fungfp{\Vsnot X}}(\Vect X)}\\ \in\Tot{\Web{\Strfun{\Fungfp{\Vsnot X}}(\Vect X)}}=\Tot{\Strfun{\Vsnot F}(\Web{\Vect X})}\,. \end{multline*} Given $\Vect t\in\NUTS^n(\Vect X,\Vect Y)$ we have \begin{multline*} \Strfun{\Vsnot F}(\Vect t)=\Strfun{\Fungfp{\Vsnot X}}(\Vect t)\\ \in\NUTS((\Strfun{\Vsnot F}(\Web{\Vect X}),\Total{\Vsnot Y}(\Vect X)),(\Strfun{\Vsnot F}(\Web{\Vect Y}),\Total{\Vsnot Y}(\Vect Y))) \end{multline*} since $(\Strfun{\Vsnot F}(\Web{\Vect X}),\Total{\Vsnot Y}(\Vect X))=\Strfun{\Fungfp{\Vsnot X}}(\Vect X)$ and similarly for $\Vect Y$. Last, since $\Strnat{\Vsnot F}_{\Web X,\Web{\Vect Y}}=\Strnat{\Fungfp{\Vsnot X}}_{X,\Vect Y}\in\NUTS(\Tens{\Excl X}{\Strfun{\Fungfp{\Vsnot X}}(\Vect Y)},\Strfun{\Fungfp{\Vsnot X}}(\Tens X{\Vect Y}))$ we know that $\Vsnot Y=(\Web{\Vsnot Y},\Total{\Vsnot Y})$ is a VNUTS whose associated strong functor is $\Fungfp{\Vsnot X}$. This ends the proof that $(\NUTS,(\VNUTS n)_{n\in\Nat})$ is a Seely model of $\MULL$. \end{proof} \section{Introduction} Propositional Linear Logic is a well-established logical system introduced by Girard in~\cite{Girard87}.
It provides a fine-grained analysis of proofs in intuitionistic and classical logic, and more specifically of their cut-elimination. $\LL$ features a logical account of the structural rules (weakening, contraction) which are handled implicitly in intuitionistic and classical logic. For this reason, $\LL$ has many useful outcomes in the Curry-Howard-based approach to the theory of programming: logical understanding of evaluation strategies, new syntax of proofs/programs (proof-nets), connections with other branches of mathematics (linear algebra, functional analysis, differential calculus), new operational semantics (geometry of interaction), etc. However, propositional $\LL$ is not a reasonable programming language, for lack of data-types and of iteration or recursion principles. This is usually remedied by extending propositional $\LL$ to the $2^{\mathrm{nd}}$ order, thus defining a logical system in which Girard's System $\SF$~\cite{Girard89} can be embedded. Another option to turn propositional $\LL$ into a programming language --~closer to usual programming~-- is to extend it with least and greatest fixed points of formulas. Such an extension was suggested early on by Girard in an unpublished note~\cite{Girard92}, though the first comprehensive proof-theoretic investigation of such an extension of $\LL$ is recent: in~\cite{Baelde12} Baelde considers an extension $\MUMALL$ of Multiplicative Additive $\LL$ sequent calculus with least and greatest fixed points. His motivations arose from a proof-search and system verification perspective, and therefore his $\MUMALL$ logical system is a predicate calculus. Our purpose is to develop a more Curry-Howard oriented point of view on $\LL$ with fixed points and therefore we stick to the propositional calculus setting of~\cite{Girard89}.
But, unlike~\cite{Baelde12}, we include the exponentials in our system from the beginning%
\footnote{Exponentials are not considered in $\MUMALL$ because some form of exponential can be encoded using inductive/coinductive types; however, these exponentials are not fully satisfactory from our point of view because their denotational interpretation does not satisfy all required isomorphisms; specifically, the \emph{Seely isos} are lacking.}%
, so we call it $\MULL$ rather than propositional $\MUMALL$ and we consider it as an alternative to the ``system $\SF$'' approach to representing programs in $\LL$. Our system $\MULL$ could also have applications to session types, in the line of~\cite{LindleyMorris16}. The $\nu$-introduction rule of $\MULL$ (Park's rule, that is rule~\Ngfp{} of Section~\ref{sec:MULL-syntax}) leads to subtle rewrite rules, for which Baelde proved cut-elimination in $\MUMALL$, showing for instance that a proof of the type of integers $\Lfpll\zeta{(\Plus\One\zeta)}$ necessarily reduces to an integer (in contrast with $\LL$, $\MUMALL$ enjoys only a restricted form of the sub-formula property). There are alternative proof-systems for the same logic, involving infinite or cyclic proofs, see~\cite{BaeldeDoumaneSaurin16}, whose connections with the aforementioned finitary proof-system are not completely clear yet. Since the proof-theory (and hence the ``operational semantics'') of $\MULL$ is still under development, it is important to investigate its categorical semantics, whose definition does not rely on the precise choice of inference and rewrite rules we equip $\MULL$ with, see the \emph{Outcome} § below.
We develop here a categorical semantics of $\MULL$ extending the standard notion of Seely category%
\footnote{Sometimes called new-Seely category: it is a cartesian symmetric monoidal closed category with a $\ast$-autonomous structure and a comonad $\Excl\_$ with a strong symmetric monoidal structure from the cartesian product to the tensor product.}
of classical $\LL$, see~\cite{Mellies09}. Such a model of $\MULL$ consists of a Seely category $\cL$ and of a class of functors $\cL^n\to\cL$ for all possible arities $n$ which will be used for interpreting $\MULL$ formulas with free variables. These functors have to be equipped with a strength to deal properly with contexts in the rule~\Ngfp, see Section~\ref{sec:MULL-comments} for a discussion on these contexts in particular. Then we develop a simple instance of this setting which consists in taking for $\cL$ the category of sets and relations, a well-known Seely model of $\LL$. The \emph{variable sets} are the strong functors we consider on this category. They are the pairs $\Vsnot F=(\Strfun{\Vsnot F},\Strnat{\Vsnot F})$ where $\Strnat{\Vsnot F}$ is the strength and $\Strfun{\Vsnot F}:\REL^n\to\REL$ is a functor which is Scott-continuous in the sense that it commutes with directed unions of morphisms. This property implies that $\Strfun{\Vsnot F}$ maps injections to injections and is cocontinuous on the category of sets and injections. There is no special requirement about the strength $\Strnat{\Vsnot F}$ beyond naturality, monoidality and compatibility with the comultiplication of the comonad $\Excl\_$. Variable sets form a Seely model of $\MULL$ where linear negation is the identity on objects. The formulas $\Lfpll\zeta F$ and $\Gfpll\zeta F$ are interpreted as the same variable set, exactly as $\ITens$ and $\IPar$ are (and similarly for additives and exponentials).
This denotational ``degeneracy'' at the level of types is a well-known feature of $\REL$ which does not mean at all that the model is trivial. For instance normal multiplicative exponential $\LL$ proofs which have distinct associated proof-nets have distinct relational interpretations~\cite{CarvalhoDeFalco12,Carvalho16}. Last we enrich this model $\REL$ by considering sets equipped with an additional structure of \emph{totality}: a \emph{non-uniform totality space} (NUTS) is a pair $X=(\Web X,\Total X)$ where $\Web X$ is a set and $\Total X$ is a set of subsets of $\Web X$ whose elements intuitively represent the total, that is terminating, computations of type $X$. This set $\Total X$ is required to coincide with its bidual for a duality expressed in terms of non-empty intersections. This kind of definition by duality is ubiquitous in $\LL$ since~\cite{Girard87} and has been categorically formalized as \emph{double gluing} in~\cite{HylandSchalk03}. We don't use this categorical formalization here however, as it would not simplify the presentation. One nice feature of this specific duality is that the bidual of a set of subsets of $\Web X$ is simply its upwards-closure (wrt.~inclusion)\footnote{This new model is a major simplification wrt.~notions of totality on coherence spaces~\cite{Girard86} or Loader's totality spaces~\cite{Loader94} where biduality is much harder to deal with because it combines totality with a form of determinism.}, see Lemma~\ref{lemma:NUTS-biorth-uppper-closed}. Given two NUTS $X$ and $Y$ there is a natural notion of \emph{total relation} $t\subseteq\Web X\times\Web Y$ giving rise to a category $\NUTS$ which is easily seen to be a Seely model of $\LL$. To turn it into a categorical model of $\MULL$, we need a notion of strong functors $\NUTS^n\to\NUTS$.
Rather than considering them directly as functors, we define \emph{variable non-uniform totality spaces} (VNUTS) as pairs $\Vsnot X=(\Web{\Vsnot X},\Total{\Vsnot X})$ where $\Web{\Vsnot X}:\REL^n\to\REL$ is a variable set and, for each tuple $\Vect X=(\List X1n)$ of NUTS, $\Total{\Vsnot X}(\Vect X)$ is a totality structure on the set $\Strfun{\Web{\Vsnot X}}(\Web{\Vect X})$. It is also required that the action of the functor $\Strfun{\Web{\Vsnot X}}$ on $\NUTS$ morphisms and the strength $\Strnat{\Vsnot X}$ respect these totality structures. Then it is easy to derive from such a VNUTS $\Vsnot X$ a strong functor $\NUTS^n\to\NUTS$ and we prove that, equipped with these strong functors, $\NUTS$ is a model of $\MULL$. \paragraph{Outcome}
%
One major benefit of this construction is that it gives a value to all proofs of $\MULL$, invariant under cut-elimination. Moreover, the fact that this value is total shows in a \emph{syntax-independent way} that when $\pi$ is for instance a $\MULL$ proof of $\Plus\One\One$ (the type of booleans), the value associated with $\pi$ is non-empty, that is, $\pi$ has a defined boolean value $\True$ or $\False$%
\footnote{Or both because our $\NUTS$ model accepts non-determinism. By adding a \emph{non-uniform} coherence relation as defined in~\cite{BucciarelliEhrhard99,Boudes11} to the model one can show that this value is actually a uniquely defined boolean. See also Section~\ref{sec:example-integers}.}%
. We could also obtain this by a normalization theorem: $\pi$ reduces to one of the two normal proofs of $\Plus\One\One$ (and if we prove for instance a Church-Rosser theorem we will know that this proof is unique). Such proofs would depend of course on the actual presentation of the syntax whereas our denotational argument does not. \paragraph{Related work}%
There is a vast literature on extending logic with fixed points that we cannot reasonably summarize, see the discussions in~\cite{DoumanePHD,BaeldeDoumaneSaurin16}.
Cut-elimination of such systems has been extensively investigated, see for instance~\cite{BrotherstonSimpson11,MomiglianoTiu12,McDowellMiller00,CamposFiore20}. Closer to ours is the work of Santocanale~\cite{Santocanale02} and its categorical interpretation in $\mu$-bicomplete categories~\cite{FortierSantocanale13} which, unlike most contributions in this field, also considers categorical interpretations of proofs. Santocanale \emph{et al.}~consider circular proofs whereas we use Park's rule. A deeper difference lies in the logic itself: from an $\LL$ point of view the logic considered by Santocanale \emph{et al.}~is \emph{purely additive linear logic with least and greatest fixed points $\MUALL$} which seems too weak from our Curry-Howard perspective. And indeed $\mu$-bicomplete categories do not provide the monoidal and exponential structures required for interpreting $\MULL$. In~\cite{Loader97}, which we became aware of only recently (and which seems related to the earlier report~\cite{Geuvers92}), Loader extends the simply typed $\lambda$-calculus with inductive types and develops its denotational semantics. His models are cartesian closed categories $\cC$ equipped with a class of strong functors and seem very close to ours (Section~\ref{sec:cat-models}): one might think that any of our models yields a Loader model as its Kleisli category. This is not the case because in a Loader model the category $\cC$ is cocartesian%
\footnote{To account for the disjunction of his logical system which is crucial for defining interesting data-types such as the integers.}
whereas the Kleisli category of a Seely category is not cocartesian in general: this would require an iso between $\Exclp{\Plus XY}$ and $\Plus{\Excl X}{\Excl Y}$ which is usually absent. Loader studies two concrete instances of his models: one is based on recursion theory (partial equivalence relations) and the other on a notion of domains with totality described as a model of $\LL$.
This model might give rise to one of our Seely models; this point requires further study. Our NUTS are quite different from Loader totality domains which feature a notion of ``consistency'' enforcing some kind of determinism and which, combined with totality, allows the Kleisli category to be cocartesian as well. Our model is based on $\REL$ and therefore is compatible with non-determinism~\cite{BucciarelliEhrhardManzonetto12} and PCF recursion. This is important for us because we would like to consider rules beyond Park's rule for inductive and coinductive types, based on PCF fixed points --~with further guardedness conditions for guaranteeing termination~-- in the spirit of~\cite{Coquand93,Paulin-Mohring93,Gimenez98} or even on infinite terms in the spirit of~\cite{BaeldeDoumaneSaurin16}. We mention also the work of Clairambault~\cite{Clairambault09,Clairambault13} who investigates a game semantics with totality for an extension of intuitionistic logic with least and greatest fixed points (independently of~\cite{Loader97,Geuvers92}). A Kleisli-like connection with his work should be sought too. \paragraph{Notations} We use the following conventions: $\Vect a$ stands for a list $(\List a1n)$. A unary operation $f$ is extended to lists of arguments in the obvious way: $f(\Vect a)=(f(a_1),\dots,f(a_n))$. When we write natural transformations, we very often omit the objects where they are taken, keeping them implicit for the sake of readability when they can easily be retrieved from the context. If $\cA$ is a category then $\Obj\cA$ is its class of objects and if $A,B\in\Obj\cA$ then $\cA(A,B)$ is the set of morphisms from $A$ to $B$ in $\cA$ (all the categories we consider are locally small). If $\cF:\cA\times\cB\to\cC$ is a functor and $A\in\Obj\cA$ then $\cF_A:\cB\to\cC$ is the functor defined by $\cF_A(B)=\cF(A,B)$ and $\cF_A(f)=\cF(A,f)$ (we often write $A$ instead of $\Id_A$). Most proofs can be found in an Appendix.
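To make the duality used in the definition of NUTS tangible, the orthogonality on subsets of a finite web and the upward-closure description of biduals can be checked directly. A minimal sketch in Python (the helper names and the example family are ours, purely illustrative):

```python
from itertools import chain, combinations

def powerset(web):
    """All subsets of a finite web, as frozensets."""
    xs = list(web)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def orth(family, web):
    """Orthogonal: subsets of the web meeting every member of the family."""
    return {v for v in powerset(web) if all(v & u for u in family)}

def upward_closure(family, web):
    """Supersets (wrt. inclusion) of the members of the family."""
    return {v for v in powerset(web) if any(u <= v for u in family)}

web = {0, 1, 2}
family = {frozenset({0}), frozenset({1, 2})}
# Biduals are exactly upward closures wrt. inclusion.
assert orth(orth(family, web), web) == upward_closure(family, web)
```

On an arbitrary web the same argument goes through: a subset $w$ containing no member of the family is separated from the bidual by the complement of $w$, which belongs to the orthogonal.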
\section{Categorical models of $\LL$}\label{sec:LL-models} \subsection{Seely categories} \label{sec} We recall the basic notion of categorical model of $\LL$. Our main reference is the notion of a \emph{Seely category} as presented in~\cite{Mellies09}. We refer to that survey for all the technical material that we do not recall here. A Seely category is a symmetric monoidal closed category (SMCC) $(\LCAT,\ITens,\One,\Leftu,\Rightu,\Assoc,\Sym)$ where $\Leftu_X\in\cL(\Tens\One X,X)$, $\Rightu_X\in\cL(\Tens X\One,X)$, $\Assoc_{X,Y,Z}\in\cL(\Tens{(\Tens XY)}{Z},\Tens{X}{(\Tens{Y}{Z})})$ and $\Sym_{X,Y}\in\cL(\Tens XY,\Tens YX)$ are natural isomorphisms satisfying coherence diagrams that we do not record here. We use $\Limpl XY$ for the object of linear morphisms from $X$ to $Y$, $\Evlin\in\LCAT(\Tens{(\Limpl XY)}{X},Y)$ for the evaluation morphism and $\Curlin$ for the linear currying map $\LCAT(\Tens ZX,Y)\to\LCAT(Z,\Limpl XY)$. We assume $\LCAT$ to be $\ast$-autonomous with dualizing object $\Sbot$ (this object is part of the structure of a Seely category). We use $\Orth X$ for the object $\Limpl X\Sbot$ of $\LCAT$ (the dual, or linear negation, of $X$). It is also assumed that $\LCAT$ is cartesian with final object $\top$, product $\With {X_1}{X_2}$ with projections $\Proj1,\Proj2$. By $\ast$-autonomy $\LCAT$ is cocartesian with initial object $0$, coproduct $\IPlus$ and injections $\Inj i$.
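For intuition, in the relational model $\REL$ used later in the paper these data are very concrete: morphisms are relations, composition is relational composition, and a morphism $t\subseteq X\times Y$ applied to $u\subseteq X$ takes the direct image. A minimal sketch (our own illustrative helper names, not part of the paper's development):

```python
# Hedged sketch of the category REL: objects are sets, morphisms relations.
def compose(s, t):
    """Relational composition s o t, where t : X -> Y comes first."""
    return {(x, z) for (x, y1) in t for (y2, z) in s if y1 == y2}

def identity(web):
    return {(x, x) for x in web}

def apply_rel(t, u):
    """Direct image t(u) of u under t, the application used in the proofs."""
    return {y for (x, y) in t if x in u}

t = {(0, 'a'), (1, 'b'), (1, 'c')}   # t : {0,1} -> {'a','b','c'}
s = {('a', True), ('c', True)}       # s : {'a','b','c'} -> {True}
assert compose(s, t) == {(0, True), (1, True)}
# Application commutes with composition.
assert apply_rel(compose(s, t), {1}) == apply_rel(s, apply_rel(t, {1}))
```

In this model the tensor of webs is their cartesian product and linear negation acts as the identity on objects, which is why the totality structure of NUTS is needed to recover a non-degenerate duality.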
We also assume to be given a comonad $\Excl\_:\LCAT\to\LCAT$ with counit $\Der X\in\LCAT(\Excl X,X)$ (\emph{dereliction}) and comultiplication $\Digg X\in\LCAT(\Excl X,\Excl{\Excl X})$ (\emph{digging}) together with a strong symmetric monoidal structure (Seely natural isos $\Seelyz:\Sone\to\Excl\top$ and $\Seelyt$ with $\Seelyt_{X_1,X_2}:\Tens{\Excl{X_1}}{\Excl{X_2}}\to\Exclp{\With{X_1}{X_2}}$ for the functor $\Excl\_$, from the symmetric monoidal category $(\LCAT,\IWith)$ to the symmetric monoidal category $(\LCAT,\ITens)$, satisfying an additional coherence condition wrt.~$\Digg{}$). This strong monoidal structure allows one to define a lax monoidal structure $(\Monoidalz,\Monoidalt)$ of $\Excl\_$ from $(\LCAT,\ITens)$ to itself. More precisely $\Monoidalz\in\LCAT(\Sone,\Excl\Sone)$ and $\Monoidalt_{X_1,X_2}\in \LCAT(\Tens{\Excl{X_1}}{\Excl{X_2}},\Exclp{\Tens{X_1}{X_2}})$ are defined using $\Seelyz$, $\Seelyt$, $\Der{}$ and $\Digg{}$ (and are not isos in most cases). Also, for each object $X\in\Obj\cL$, there is a canonical structure of commutative $\ITens$-comonoid on $\Excl X$ given by $\Weak X\in\cL(\Excl X,\One)$ and $\Contr X\in\cL(\Excl X,\Tens{\Excl X}{\Excl X})$. The definition of these morphisms involves all the structure of $\Excl\_$ explained above, and in particular the Seely isos. We use $\Int\_$ for the ``De Morgan dual'' of $\Excl\_$: $\Int X=\Orthp{\Exclp{\Orth X}}$ and similarly for morphisms. \subsection{Oplax monoidal comonads} \label{sec:oplax-monoidal} Let $\cM$ be a symmetric monoidal category (with the same notations as above for the tensor product) and $(T,\epsilon,\mu):\cM\to\cM$ be a comonad ($\epsilon$ is the counit and $\mu$ the comultiplication).
An \emph{oplax monoidal} structure on $T$ consists of a morphism $\theta^0\in\cM(T\Sone,\Sone)$ and a natural transformation $\theta^2_{X_1,X_2}\in\cM(T(\Tens{X_1}{X_2}),\Tens{T(X_1)}{T(X_2)})$ subject to standard symmetric monoidality and compatibility with $\epsilon$ and $\mu$, the latter reading $\Tensp{\epsilon_{X_1}}{\epsilon_{X_2}}\Compl\theta^2_{X_1,X_2} =\epsilon_{\Tens{X_1}{X_2}}$ and: \begin{center} \begin{tikzcd} T(\Tens{X_1}{X_2}) \arrow[r,"\theta^2_{X_1,X_2}"] \arrow[d,"\mu_{\Tens{X_1}{X_2}}"] &[-0.3em]\Tens{TX_1}{TX_2} \arrow[r,"\Tens{\mu_{X_1}}{\mu_{X_2}}"] &[0.8em]\Tens{T^2X_1}{T^2X_2}\\ T^2(\Tens{X_1}{X_2}) \arrow[rr,"T(\theta^2_{X_1,X_2})"] &&T(\Tens{TX_1}{TX_2}) \arrow[u,"\theta^2_{TX_1,TX_2}"] \end{tikzcd} \end{center} Then the Kleisli category $\cM_T$ has a canonical symmetric monoidal structure, with unit $\Sone$ and tensor product $\Tens{X_1}{X_2}$ defined as in $\cM$ for objects. Given $f_i\in\cM_T(X_i,Y_i)$, $\Tensi T{f_1}{f_2}\in\cM_T(\Tens{X_1}{X_2},\Tens{Y_1}{Y_2})$ is defined as \begin{center} \begin{tikzcd} T(\Tens{X_1}{X_2}) \arrow[r,"\theta^2_{X_1,X_2}"] &\Tens{TX_1}{TX_2} \arrow[r,"\Tens{f_1}{f_2}"] &\Tens{Y_1}{Y_2} \end{tikzcd}. \end{center} Let $\Klfree T:\cM\to\cM_T$ be the canonical functor which acts as the identity on objects and maps $f\in\cM(X,Y)$ to $f\Compl\epsilon_X\in\cM_T(X,Y)$. \subsection{Eilenberg-Moore category and free comodules} \label{sec:EM-Kl-category} Let $\LCAT$ be a Seely category. Since $\Excl\_$ is a comonad we can define the category $\Em\LCAT$ of $\IExcl$-coalgebras (Eilenberg-Moore category of $\Excl\_$). An object of this category is a pair $P=(\Coalgca P,\Coalgstr P)$ where $\Coalgca P\in\Obj\LCAT$ and $\Coalgstr P\in\LCAT(\Coalgca P,\Excl{\Coalgca P})$ is such that $\Der{\Coalgca P}\Compl\Coalgstr P=\Id$ and $\Digg{\Coalgca P}\Compl\Coalgstr P=\Excl{\Coalgstr P}\Compl\Coalgstr P$.
Then $f\in\Em\LCAT(P,Q)$ if $f\in\LCAT(\Coalgca P,\Coalgca Q)$ and $\Coalgstr Q\Compl f=\Excl f\Compl\Coalgstr P$. The functor $\Excl\_$ can be seen as a functor from $\LCAT$ to $\Em\LCAT$ mapping $X$ to $(\Excl X,\Digg X)$ and $f\in\LCAT(X,Y)$ to $\Excl f$. It is right adjoint to the forgetful functor $\Em\LCAT\to\LCAT$. Given $f\in\LCAT(\Coalgca P,X)$, we use $\Prom f\in\Em\LCAT(P,\Excl X)$ for the morphism associated with $f$ by this adjunction; one has $\Prom f=\Excl f\Compl\Coalgstr P$. If $g\in\Em\LCAT(Q,P)$, we have $\Prom f\Compl g=\Promp{f\Compl g}$. Then $\Em\LCAT$ is cartesian with final object $(\One,\Coalgstr\One=\Monoidalz)$ still denoted as $\One$ and product $\Tens{P_1}{P_2}=(\Tens{\Coalgca{P_1}}{\Coalgca{P_2}}, \Coalgstr{\Tens{P_1}{P_2}})$ with $\Coalgstr{\Tens{P_1}{P_2}}: \begin{tikzcd} \Tens{\Coalgca{P_1}}{\Coalgca{P_2}} \rightarrow[r,"\Tens{\Coalgstr{P_1}}{\Coalgstr{P_2}}"] &[1em]\Tens{\Excl{\Coalgca{P_1}}}{\Excl{\Coalgca{P_2}}} \rightarrow[r,"\Monoidalt_{\Coalgca{P_1},\Coalgca{P_2}}"] &[0.4em]\Exclp{\Tens{\Coalgca{P_1}}{\Coalgca{P_2}}} \end{tikzcd}$. This category is also cocartesian with initial object $(0,\Coalgstr0)$ still denoted as $0$ and coproduct $\Plus{P_1}{P_2} =(\Plus{\Coalgca{P_1}}{\Coalgca{P_2}},\Coalgstr{\Plus{P_1}{P_2}})$ with $\Coalgstr{\Plus{P_1}{P_2}}$ defined as follows. For $i=1,2$ one defines $h_i:\Coalgca{P_i}\to\Exclp{\Plus{\Coalgca{P_1}}{\Coalgca{P_2}}}$ as \begin{tikzcd} \Coalgca{P_i} \rightarrow[r,"\Coalgstr{P_i}"] &[-1em]\Excl{\Coalgca{P_i}} \rightarrow[r,"\Excl{\Inj i}"] &[-1em]\Exclp{\Plus{\Coalgca{P_1}}{\Coalgca{P_2}}} \end{tikzcd} and then $\Coalgstr{\Plus{P_1}{P_2}}$ is the unique morphism $\Plus{\Coalgca{P_1}}{\Coalgca{P_2}}\to\Exclp{\Plus{\Coalgca{P_1}}{\Coalgca {P_2}}}$ such that $\Coalgstr{\Plus{P_1}{P_2}}\Compl\Inj i=h_i$ for $i=1,2$. More details can be found in~\cite{Mellies09}. 
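Both the Kleisli tensor of Section~\ref{sec:oplax-monoidal} and the two coalgebra equations above can be illustrated informally on a toy set-level comonad. The sketch below is our own (it is not one of the models considered in this paper): nonempty lists with counit given by the head and comultiplication given by the nonempty suffixes, where the oplax monoidal map $\theta^2$ is implemented by unzipping. For the cofree coalgebra $(\Excl X,\Digg X)$ the coalgebra laws boil down to the comonad laws, which the last two checks verify.

```python
# Toy set-level comonad (ours, not one of the paper's models): nonempty
# lists with counit = head and comultiplication = suffixes.  It carries an
# oplax monoidal map theta2 ("unzip"), so its Kleisli category has a
# tensor; and (T X, comult) satisfies the two coalgebra equations.

def counit(t):                   # epsilon / dereliction: take the head
    return t[0]

def comult(t):                   # mu / digging: the nonempty suffixes
    return [t[i:] for i in range(len(t))]

def theta2(tpairs):              # T(X1 (x) X2) -> T X1 (x) T X2
    return [x for x, _ in tpairs], [y for _, y in tpairs]

def kleisli_tensor(f, g):        # f : T X1 -> Y1, g : T X2 -> Y2
    def fg(tpairs):              # induced map T(X1 (x) X2) -> Y1 (x) Y2
        t1, t2 = theta2(tpairs)
        return (f(t1), g(t2))
    return fg

tz = [(1, 'a'), (2, 'b'), (3, 'c')]
# compatibility of theta2 with the counit: (eps (x) eps) o theta2 = eps
t1, t2 = theta2(tz)
assert (counit(t1), counit(t2)) == counit(tz)
assert kleisli_tensor(sum, lambda t: ''.join(t))(tz) == (6, 'abc')

# the cofree coalgebra (T X, comult) satisfies the two laws of Em(L):
l = [1, 2, 3]
assert counit(comult(l)) == l                               # der o sigma = Id
assert comult(comult(l)) == [comult(s) for s in comult(l)]  # dig o sigma = T(sigma) o sigma
```

Only the equations visible at this set level are checked; the monoidality coherence conditions are of course not exhausted by these two instances.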
We use $\Contr P\in\Em\LCAT(P,\Tens PP)$ (\emph{contraction}) for the diagonal and $\Weak P\in\Em\LCAT(P,\One)$ (\emph{weakening}) for the unique morphism to the final object. \subsubsection{The $\LL$ model of free comodules on a given coalgebra}\label{sec:free-comodules-model} Given an object% \footnote{In this paper we could restrict to the case where $P$ is a tensor of ``free coalgebras'' $(\Excl{X_i},\Digg{X_i})$ but it is more natural to deal with the general case, which will be quite useful in further work, see Section~\ref{sec:conclusion}.} $P$ of $\Em\LCAT$, we can define a functor $\Fcomod P:\LCAT\to\LCAT$ which maps an object $X$ to $\Tens{\Coalgca P}{X}$ and a morphism $f$ to $\Tens{\Coalgca P}{f}$. This functor is clearly an oplax monoidal comonad (with structure maps defined using $\Weak P$, $\Contr P$ and the monoidal structure of $\LCAT$)\footnote{The definition of this comonad uses only the comonoid structure of $\Coalgca P$. The $\Excl\_$-structure will be used later.}. A coalgebra for this comonad is a \emph{$P$-comodule}. By Section~\ref{sec:oplax-monoidal} the Kleisli category $\Kcomod\LCAT P=\Klp\LCAT{\Fcomod P}$ of this comonad (that is, the category of free $P$-comodules) has a canonical structure of symmetric monoidal category (SMC). We set $\Klfree P=\Klfree{\Fcomod P}:\LCAT\to\Kcomod\LCAT P$. Girard showed in~\cite{Girard98c} that $\Kcomod\LCAT P$ is a Seely model of $\LL$ with operations on objects defined in the same way as in $\LCAT$, and using the coalgebra structure of $P$ for the operations on morphisms. Intuitively, $P$ should be considered as a given context and $\Kcomod\LCAT P$ as a model in this context. This idea appears at various places in the literature, see for instance~\cite{CurienFioreMunch16,UustaluVene08}. Let us summarize this construction. 
If $f_i\in\Kcomod\LCAT P(X_i,Y_i)$ for $i=1,2$ then $\Mtens P{f_1}{f_2}=\Tensi{\Fcomod P}{f_1}{f_2}\in\Kcomod\LCAT P(\Tens{X_1}{X_2},\Tens{Y_1}{Y_2})$ is given by \begin{center} \begin{tikzcd} \Coalgca P\ITens X_1\ITens X_2 \rightarrow[r,"\Contr{\Coalgca P}\ITens\Id"] &[1.4em]\Coalgca P\ITens \Coalgca P\ITens X_1\ITens X_2 \rightarrow[d,phantom,"\nvisom"]\\[-1em] \Tens{Y_1}{Y_2} &[-1.2em]\Coalgca P\ITens X_1\ITens \Coalgca P\ITens X_2 \rightarrow[l,swap,"\Tens{f_1}{f_2}"] \end{tikzcd} \end{center} The object of linear morphisms from $X$ to $Y$ in $\Kcomod\LCAT P$ is $\Limpl XY$, and the evaluation morphism $\Mevlin P\in\Kcomod\LCAT P(\Tens{(\Limpl XY)}{X},Y)$ is simply $\Klfree P(\Evlin)$. Then it is easy to check that if $f\in\Kcomod\LCAT P(\Tens ZX,Y)$, that is $f\in\LCAT(\Coalgca P\ITens Z\ITens X,Y)$, the morphism $\Curlin(f)\in\Kcomod\LCAT P(Z,\Limpl XY)$ satisfies the required monoidal closedness equations. With these definitions, the category $\Kcomod\LCAT P$ is $\ast$-autonomous, with $\Sbot$ as dualizing object. Specifically, given $f\in\Kcomod\LCAT P(X,Y)$, $\Morth Pf$ is the following composition of morphisms: \begin{center} \begin{tikzcd} \Tens{\Coalgca P}{\Orth Y} \rightarrow[r,"\Tens{\Coalgca P}{\Orth f}"] &\Coalgca P\ITens(\Limpl{\Coalgca P}{\Orth X}) \rightarrow[r,"\Evlin"] &[-1em]\Orth X \end{tikzcd} \end{center} using implicitly the iso between $\Orthp{\Tens{Z}{X}}$ and $\Limpl Z{\Orth X}$, and the $\ast$-autonomy of $\LCAT$ allows one to prove that indeed $\Mbiorth Pf=f$. The category $\Kcomod\LCAT P$ is easily seen to be cartesian with $\top$ as final object, $\With{X_1}{X_2}$ as cartesian product (and projections defined in the obvious way, applying $\Klfree P$ to the projections of $\LCAT$). 
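The comonad $\Fcomod P=\Tens{\Coalgca P}{\_}$ and its Kleisli tensor admit a naive set-level reading in which weakening discards the context value and contraction copies it. The following sketch is our own toy rendering of this reading (all names are ours); it is not the paper's construction, which lives in an arbitrary Seely category.

```python
# Very informal sketch (ours) of the comonad X |-> P (x) X on sets:
# the counit discards the context value (weakening), the
# comultiplication copies it (contraction).

def counit(px):                    # P (x) X -> X
    p, x = px
    return x

def comult(px):                    # P (x) X -> P (x) (P (x) X)
    p, x = px
    return (p, (p, x))

def kleisli_compose(g, f):         # f : P(x)X -> Y,  g : P(x)Y -> Z
    # composite in the Kleisli category: g o (P (x) f) o comult
    return lambda px: g((px[0], f(px)))

def tensor(f1, f2):                # Kleisli tensor: duplicate the context p
    def h(pxs):
        p, (x1, x2) = pxs
        return (f1((p, x1)), f2((p, x2)))
    return h

f1 = lambda px: px[0] + px[1]      # "add the context"
f2 = lambda px: px[1] * px[0]      # "scale by the context"
assert tensor(f1, f2)((10, (1, 2))) == (11, 20)
assert kleisli_compose(f1, f1)((10, 3)) == 23
```

The point of the example is that morphisms in $\Kcomod\LCAT P$ behave as ordinary morphisms parameterized by a shared context that can be freely discarded and duplicated.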
Lastly, we define a functor $\Mexcl P\_:\Kcomod\LCAT P\to\Kcomod\LCAT P$ by $\Mexcl PX=\Excl X$ and, given $f\in\Kcomod\LCAT P(X,Y)$, we define $\Mexcl Pf\in\Kcomod\LCAT P(\Excl X,\Excl Y)$ as \begin{tikzcd} \Tens{\Coalgca P}{\Excl X} \rightarrow[r,"\Tens{\Coalgstr P}{\Excl X}"] &\Tens{\Excl{\Coalgca P}}{\Excl X} \rightarrow[r,"\Monoidalt"] &[-1em]\Exclp{\Tens{{\Coalgca P}}{X}} \rightarrow[r,"\Excl f"] &[-1em]\Excl Y \end{tikzcd} and this functor has a comonad structure $(\Mder P,\Mdigg P)$ defined by $\Mder P=\Klfree P(\Der{})$ and $\Mdigg P=\Klfree P(\Digg{})$\footnote{The definition of $\Mexcl Pf$ requires $P$ to be a $\oc$-coalgebra and not simply a commutative $\ITens$-comonoid. Of course if $\oc$ is the free exponential as in~\cite{Girard98c} the latter condition implies the former.}. \begin{remark} Any $p\in\Em\LCAT(P,Q)$ induces a functor $\Kcomod\LCAT p:\Kcomod\LCAT Q\to\Kcomod\LCAT P$ which acts as the identity on objects and maps $f\in\Kcomod\LCAT Q(X,Y)$ to $\Kcomod\LCAT p(f)=f\Compl(\Tens pX)\in\Kcomod\LCAT P(X,Y)$. This functor is strict symmetric monoidal and preserves all the constructions of $\LL$, for instance $\Kcomod\LCAT p(\Mdigg Q)=\Mdigg P$ (simply because $\Kcomod\LCAT p\Comp\Klfree Q=\Klfree P$) and also $\Kcomod\LCAT p(\Mexcl Qf)=\Mexcl P{(\Kcomod\LCAT p(f))}$. We can actually consider $\Kcomod\LCAT\_$ as a functor from $\Op{\Em\LCAT}$ to the category of Seely categories and functors which preserve their structure on the nose. This functor could probably more suitably be considered as a fibration in the line of~\cite{PowerRobinson97}, Section~7. 
\end{remark} \subsection{Strong functors on $\LCAT$}\label{sec:gen-strong-functors} Given $n\in\Nat$, an \emph{$n$-ary strong functor} on $\LCAT$ is a pair $\Vcsnot F=(\Strfun{\Vcsnot F},\Strnat{\Vcsnot F})$ where $\Strfun{\Vcsnot F}:\LCAT^n\to\LCAT$ is a functor and $\Strnat{\Vcsnot F}_{X,\Vect Y}\in\LCAT(\Tens{\Excl X}{\Strfun{\Vcsnot F}(\Vect Y)},\Strfun{\Vcsnot F}(\Tens{\Excl X}{\Vect Y}))$ is a natural transformation, called the \emph{strength} of $\Vcsnot F$. We use the notation $\Tens Z{(\List Y1n)}=(\Tens{Z}{Y_1},\dots,\Tens{Z}{Y_n})$. It is assumed moreover that the diagrams of Figure~\ref{fig:strength-monoidality} commute, expressing the monoidality of this strength as well as its compatibility with the comultiplication of $\Excl\_$. \begin{figure*}[t] {\footnotesize \begin{tikzcd} (\Excl{X_1}\ITens\Excl{X_2}) \ITens{\Strfun{\Vcsnot F}}(\Vect{Y}) \rightarrow[r,"\Tens{\Seelyt}{\Strfun{\Vcsnot F}(\Vect Y)}"] \rightarrow[d,swap,"\Tens{\Excl{X_1}} {\Strnat{\Vcsnot F}_{X_2,\Vect Y}}"] &[2.2em]\Exclp{\With{X_1}{X_2}}\ITens \Strfun{\Vcsnot F}(\Vect Y) \rightarrow[dd,"\Strnat{\Vcsnot F}_{\With{X_1}{X_2},\Vect Y}"] \\ \Excl{X_1}\ITens \Strfun{\Vcsnot F}(\Excl{X_2}\ITens\Vect Y) \rightarrow[d,swap,"\Strnat{\Vcsnot F}_{X_1, \Tens{\Excl{X_2}}{\Vect Y}}"] & \\ \Strfun{\Vcsnot F}(\Excl{X_1}\ITens\Excl{X_2} \ITens \Vect Y) \rightarrow[r,"\Strfun{\Vcsnot F}(\Tens{\Seelyt}{\Vect Y})"] &\Strfun{\Vcsnot F}(\Exclp{\With{X_1}{X_2}}\ITens\Vect Y) \end{tikzcd} \quad \begin{tikzcd} \Tens{\Sone}{\Strfun{\Vcsnot F}(\Vect Y)} \rightarrow[r,"\Tens{\Seelyz}{\Strfun{\Vcsnot F}(\Vect Y)}"] \rightarrow[d,phantom,"\visom"] &[2.2em]\Tens{\Excl\top}{\Strfun{\Vcsnot F}(\Vect Y)} \rightarrow[d,"\Strnat{\Vcsnot F}_{\top,\Vect Y}"] \\ \Strfun{\Vcsnot F}(\Tens\Sone{\Vect Y}) \rightarrow[r,"\Strfun{\Vcsnot F}(\Tens{\Seelyz}{\Vect Y})"] &\Strfun{\Vcsnot F}(\Excl{\top}\ITens\Vect Y) \end{tikzcd} \quad \begin{tikzcd} \Tens{\Excl X}{\Strfun{\Vcsnot F}(\Vect Y)} 
\rightarrow[r,"\Tens{\Digg X}{\Strfun{\Vcsnot F}(\Vect Y)}"] \rightarrow[d,swap,"\Strnat{\Vcsnot F}_{X,\Vect Y}"] &[2.2em]\Tens{\Excll X}{\Strfun{\Vcsnot F}(\Vect Y)} \rightarrow[d,"\Strnat{\Vcsnot F}_{\Excl X,\Vect Y}"]\\ \Strfun{\Vcsnot F}(\Tens{\Excl X}{\Vect Y}) \rightarrow[r,"\Strfun{\Vcsnot F}(\Tens{\Digg X}{\Vect Y})"] &\Strfun{\Vcsnot F}(\Tens{\Excll X}{\Vect Y}) \end{tikzcd} } \caption{Monoidality and $\Digg{}$ diagrams for strong functors} \label{fig:strength-monoidality} \end{figure*} The main purpose of this definition is that for any object $P$ of $\Em\cL$ one can lift $\Vcsnot F$ to a functor $\Mfun{\Vcsnot F}P:\Kcomod\LCAT P^n\to\Kcomod\LCAT P$ as follows. First one sets $\Mfun{\Vcsnot F}P(\Vect X)=\Strfun{\Vcsnot F}(\Vect X)$. Then, given $\Vect f\in\Kcomod\LCAT P^n(\Vect X,\Vect Y)$ we define $\Mfun{\Vcsnot F}{P}(\Vect f)\in\Kcomod\LCAT P({\Vcsnot F}(\Vect X),{\Vcsnot F}(\Vect Y))$ as % {\footnotesize \begin{center} \begin{tikzcd} \Tens{\Coalgca P}{\Vcsnot F(\Vect X)} \rightarrow[r,"\Tens{\Coalgstr P}{\Id}"] &[1.2em]\Tens{\Excl{\Coalgca P}}{\Vcsnot F(\Vect X)} \rightarrow[r,"\Strnat{\Vcsnot F}"] &[-1em]\Vcsnot F(\Tens{\Excl{\Coalgca P}}{\Vect X}) \rightarrow[r,"\Strfun{\Vcsnot F}(\Der{\Coalgca P}\ITens\Vect X)"] &[2em]\Vcsnot F(\Tens{{\Coalgca P}}{\Vect X}) \rightarrow[d,swap,"\Vcsnot F(\Vect f)"]\\ &&&\Vcsnot F(\Vect Y) \end{tikzcd} \end{center}% }% \noindent The fact that we have defined a functor results from the three diagrams of Figure~\ref{fig:strength-monoidality} and from the definition of $\Weak P$ and $\Contr P$ based on the Seely isomorphisms. \begin{remark} Since the seminal work of Moggi~\cite{Moggi89}, strong functors play a central role in semantics for representing \emph{effects}. Our adaptation of this notion to the present $\LL$ setting follows the definition of an $\cL$-tensorial strength in~\cite{KobayashiS97}. 
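For instance, on sets the canonical strength of the list functor, and the resulting lifting of a morphism $f$ in context $P$, can be sketched as follows. This is an informal toy of ours (with $\Coalgstr P$ and $\Der{}$ collapsed to identities, which is what they become at this naive set level); it only mirrors the shape of the composite above: copy the context, push it inside the functor with the strength, then apply $\Strfun{\Vcsnot F}(f)$.

```python
# Set-level shadow (ours) of the lifting of f : P (x) X -> Y to
# P (x) F(X) -> F(Y) for the list functor F, via the strength
# followed by the functorial action of F.

def fmap(f, ys):                  # functorial action of F = list
    return [f(y) for y in ys]

def strength(c, ys):              # st : C (x) F(Y) -> F(C (x) Y)
    return [(c, y) for y in ys]

def lift(f):                      # f : P (x) X -> Y  |->  P (x) F X -> F Y
    def Ff(p_xs):
        p, xs = p_xs
        return fmap(f, strength(p, xs))
    return Ff

scale = lambda px: px[0] * px[1]  # multiply each element by the context
assert lift(scale)((3, [1, 2, 4])) == [3, 6, 12]
```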
\end{remark} \subsubsection{Operations on strong functors}% \label{sec:strong-fun-LL-operations}% There is an obvious unary identity strong functor $\Strid$ and for each object $Y$ of $\LCAT$ there is an $n$-ary $Y$-valued constant strong functor $\Strcst Y$; in the first case the strength natural transformation is the identity morphism and in the second case, it is defined using $\Weak{\Excl X}$. Let $\Vcsnot F$ be an $n$-ary strong functor and $\List{\Vcsnot G}1n$ be $k$-ary strong functors. Then one defines a $k$-ary strong functor $\Vcsnot H=\Vcsnot F\Comp(\List{\Vcsnot G}1n)$: the functorial component $\Strfun{\Vcsnot H}$ is defined in the obvious compositional way. The strength is% {\footnotesize \begin{center} \begin{tikzcd} \Tens{\Excl X}{\Strfun{\Vcsnot H}(\Vect Y)} \rightarrow[r,"\Strnat{\Vcsnot F}"] &[-1em]\Strfun{\Vcsnot F}((\Tens{\Excl X} {\Strfun{\Vcsnot G_i}(\Vect Y)})_{i=1}^n) \rightarrow[r,"\Strfun{\Vcsnot F}((\Strnat{\Vcsnot G_i})_{i=1}^n)"] &[1.2em]\Strfun{\Vcsnot F}(( {\Strfun{\Vcsnot G_i}(\Tens{\Excl X}{\Vect Y})})_{i=1}^n) \end{tikzcd} \end{center} }% \noindent% and is easily seen to satisfy the commutations of Figure~\ref{fig:strength-monoidality}. Given an $n$-ary strong functor $\Vcsnot F$, we can define its \emph{De Morgan dual} $\Orth{\Vcsnot F}$ which is also an $n$-ary strong functor. On objects, we set $\Strfun{\Orth{\Vcsnot F}}(\Vect Y)=\Orth{\Strfun{\Vcsnot F}(\Orth{\Vect Y})}$ and similarly for morphisms. 
The strength of $\Orth{\Vcsnot F}$ is defined as the Curry transpose of the following morphism (remember that $\Limpl{\Excl X}{\Orth{\Vect Y}}=\Orthp{\Tens{\Excl X}{\Vect Y}}$ up to canonical iso):% {\footnotesize% \begin{center} \begin{tikzcd} \Excl X\ITens{\Orth{\Strfun{\Vcsnot F}(\Orth{\Vect Y})}} \ITens\Strfun{\Vcsnot F}(\Limpl{\Excl X}{\Orth{\Vect Y}}) \rightarrow[r,phantom,"\isom"] &[-1em]\Excl X \ITens\Strfun{\Vcsnot F}(\Limpl{\Excl X}{\Orth{\Vect Y}}) \ITens{\Orth{\Strfun{\Vcsnot F}(\Orth{\Vect Y})}} \rightarrow[d,"\Strnat{\Vcsnot F}\ITens\Id"]\\[-0.2em] \Strfun{\Vcsnot F}(\Orth{\Vect Y}) \ITens{\Orth{\Strfun{\Vcsnot F}(\Orth{\Vect Y})}} \rightarrow[d,swap,"\Evlin\Compl\Sym"] & \Strfun{\Vcsnot F}(\Excl X \ITens(\Limpl{\Excl X}{\Orth{\Vect Y}})) \ITens{\Orth{\Strfun{\Vcsnot F}(\Orth{\Vect Y})}} \rightarrow[l,swap,"\Strfun{\Vcsnot F}(\Evlin)\ITens\Id"]\\[-0.2em] \Sbot & \end{tikzcd} \end{center} }% \noindent Then it is possible to prove, using the $\ast$-autonomy of $\LCAT$, that $\Biorth{\Vcsnot F}$ and $\Vcsnot F$ are canonically isomorphic (as strong functors)\footnote{In the concrete settings considered in this paper, these canonical isos are actually identity maps.}. As a direct consequence of the definition of $\Orth{\Vcsnot F}$ and of the canonical iso between $\Biorth{\Vcsnot F}$ and $\Vcsnot F$ we get: \begin{lemma}\label{lemma:strfun-comp-orth} $\Orthp{\Vcsnot F\Comp(\List{\Vcsnot G}1n)}=\Orth{\Vcsnot F}\Comp(\List{\Orth{\Vcsnot G}}1n)$ up to canonical iso. 
\end{lemma} The bifunctor $\mathord\ITens$ can be turned into a strong functor: one defines the strength as% \footnote{This definition, as well as the following one, shows that our assumption that the strength is available for ``context object'' of shape $\Excl X$ only cannot be disposed of.}% {\footnotesize \begin{center} \begin{tikzcd} \Excl X\ITens Y_1\ITens Y_2 \rightarrow[r,"\Contr{\Excl X}\ITens\Id"] &[2em]\Excl X\ITens \Excl X\ITens Y_1\ITens Y_2 \rightarrow[r,phantom,"\isom"] &[-1em]\Excl X\ITens Y_1\ITens \Excl X\ITens Y_2 \end{tikzcd} \end{center} }% \noindent By De Morgan duality, this endows $\IPar$ with a strength as well. The bifunctor $\IPlus$ is also endowed with a strength, simply using the distributivity of $\ITens$ over $\IPlus$ (which results from the monoidal closedness of $\LCAT$). By duality again, $\IWith$ inherits a strength. The functor $\Excl\_$ is equipped with the strength\\ {\footnotesize \begin{tikzcd} \Tens{\Excl X}{\Excl Y} \rightarrow[r,"\Tens{\Digg X}{\Excl Y}"] &[0.6em]\Tens{\Excl{\Excl X}}{\Excl Y} \rightarrow[r,"\Monoidalt"] &[-1em]\Exclp{\Tens{{\Excl X}}{Y}} \end{tikzcd} }. \subsection{Fixed Points of strong functors} \label{sec:fixpoints-functors} The following facts are standard in the literature on fixed points of functors. \begin{definition} Let $\cA$ be a category and $\cF:\cA\to\cA$ be a functor. A \emph{coalgebra}% \footnote{Not to be confused with the coalgebras of Section~\ref{sec:EM-Kl-category} which must satisfy additional properties of compatibility with the comonad structure of $\Excl\_$.}% of $\cF$ is a pair $(A,f)$ where $A$ is an object of $\cA$ and $f\in\cA(A,\cF(A))$. Given two coalgebras $(A,f)$ and $(A',f')$ of $\cF$, a coalgebra morphism from $(A,f)$ to $(A',f')$ is an $h\in\cA(A,A')$ such that $f'\Compl h=\cF(h)\Compl f$. The category of coalgebras of the functor $\cF$ will be denoted as $\COALGFUN\cA\cF$. 
The notion of algebra of an endofunctor is defined dually (reverse the directions of the arrows $f$ and $f'$) and the corresponding category is denoted as $\ALGFUN\cA\cF$. \end{definition} By Lambek's Lemma, if $(A,f)$ with $f\in\cA(A,\cF(A))$ is a final object in $\COALGFUN\cA\cF$ then $f$ is an iso. We assume that this iso is always the identity% \footnote{This assumption is highly debatable from the view point of category theory where the notion of equality of objects is not really meaningful. It will be dropped in a longer version of this paper.}% as this holds in our concrete models so that this final object $(\Fungfp\cF,\Id)$ satisfies $\cF(\Fungfp\cF)=\Fungfp\cF$. We focus on coalgebras rather than algebras for reasons which will become clear when we deal with fixed points of strong functors. This universal property of $\Fungfp{\cF}$ gives us a powerful tool for proving equalities of morphisms. \begin{lemma}\label{lemma:equations-final-coalgebra} Let $A\in\Obj\cA$ and let $f_1,f_2\in\cA(A,\Fungfp{\cF})$. If there exists $l\in\cA(A,\cF(A))$ such that $\cF(f_i)\Compl l=f_i$ for $i=1,2$, then $f_1=f_2$. \end{lemma} \begin{lemma}\label{lemma:functor-gfp-general} Let $\cF:\cB\times\cA\to\cA$ be a functor such that, for all $B\in\Obj\cB$, the category $\COALGFUN \cA{\cF_B}$ has a final object. Then there is a functor $\Fungfp\cF:\cB\to\cA$ such that $(\Fungfp\cF(B),\Id)$ is the final object of $\COALGFUN \cA{\cF_B}$ (so that $\cF(B,\Fungfp\cF(B))=\Fungfp\cF(B)$) for each $B\in\Obj\cB$, and, for each $g\in\cB(B,B')$, $\Fungfp\cF(g)$ is uniquely characterized by $\cF(g,\Fungfp\cF(g))=\Fungfp\cF(g)$. \end{lemma} We consider now the same $\Fungfp\cF$ operation applied to strong functors on a model $\LCAT$ of $\LL$. Let $\Vcsnot F$ be an $n+1$-ary strong functor on $\LCAT$ (so that $\Strfun{\Vcsnot F}$ is a functor $\LCAT^n\times\LCAT\to\LCAT$). 
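Before applying these notions to strong functors, the finality used in Lemmas~\ref{lemma:equations-final-coalgebra} and~\ref{lemma:functor-gfp-general} can be illustrated informally: for $\cF(X)=\Tens AX$ on sets, the final coalgebra is the set of streams over $A$, and each coalgebra $l$ induces its unfolding, the unique coalgebra morphism into the streams. The sketch below is ours; it observes streams only through finite prefixes.

```python
# Informal sketch (ours): streams as the final coalgebra of F(X) = A x X.
# A coalgebra l : S -> A x S unfolds into the unique morphism to streams;
# we observe only finite prefixes of length n.

def unfold(l, s, n):
    """Prefix of length n of the stream obtained by corecursion from seed s."""
    out = []
    for _ in range(n):
        a, s = l(s)        # one unfolding step of the coalgebra
        out.append(a)
    return out

# the coalgebra n |-> (n, n+1) unfolds to the stream of naturals
assert unfold(lambda n: (n, n + 1), 0, 5) == [0, 1, 2, 3, 4]

# a Fibonacci coalgebra on pairs of consecutive values
fib = lambda s: (s[0], (s[1], s[0] + s[1]))
assert unfold(fib, (0, 1), 6) == [0, 1, 1, 2, 3, 5]
```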
Assume that for each $\Vect X\in\Obj{\LCAT^n}$ the category $\COALGFUN{\LCAT}{\Strfun{\Vcsnot F}_{\Vect X}}$ has a final object. We have defined $\Fungfp{\Strfun{\Vcsnot F}}:\LCAT^n\to\LCAT$ characterized by $\Strfun{\Vcsnot F}(\Vect X,\Fungfp{\Strfun{\Vcsnot F}}(\Vect X))=\Fungfp{\Strfun{\Vcsnot F}}(\Vect X)$ and $\Strfun{\Vcsnot F}(\Vect f,\Fungfp{\Strfun{\Vcsnot F}}(\Vect f))=\Fungfp{\Strfun{\Vcsnot F}}(\Vect f)$ for all $\Vect f\in\LCAT^n(\Vect X,\Vect{X'})$ (Lemma~\ref{lemma:functor-gfp-general}). For each $Y,\Vect X\in\LCAT$, we define $\Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X}\in\LCAT(\Tens{\Excl Y}{\Fungfp{\Strfun{\Vcsnot F}}(\Vect X)},\Fungfp{\Strfun{\Vcsnot F}}(\Tens{\Excl Y}{\Vect X}))$. We have% {\footnotesize% \begin{center} \begin{tikzcd} \Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)} =\Excl Y\ITens \Strfun{\Vcsnot F}(\Vect X,\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)) \rightarrow[r,"\Strnat{\Vcsnot F}_{Y,(\Vect X, \Strfun{\Fungfp{\Vcsnot F}}(\Vect X))}"] &[2em]\Strfun{\Vcsnot F}(\Tens{\Excl Y}{\Vect X}, \Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)}) \end{tikzcd} \end{center} }% \noindent% exhibiting a $\Strfun{\Vcsnot F}_{\Tens{\Excl Y}{\Vect X}}$-coalgebra structure on $\Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)}$. 
Since $\Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl Y}{\Vect X})$ is the final coalgebra of the functor $\Strfun{\Vcsnot F}_{\Tens{\Excl Y}{\Vect X}}$, we define $\Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X}$ as the unique morphism $\Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)}\to\Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl Y}{\Vect X})$ such that the following diagram commutes% \begin{equation}\label{eq:final-coalg-strength-charact} {\footnotesize \begin{tikzcd} \Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)} \rightarrow[r,dotted,"\Strnat{\Vcsnot F}_{Y,(\Vect X, \Strfun{\Fungfp{\Vcsnot F}}(\Vect X))}"] \rightarrow[d,swap,"\Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X}"] &[0.5em]\Strfun{\Vcsnot F}(\Tens{\Excl Y}{\Vect X}, \Tens{\Excl Y}{\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)}) \rightarrow[d,"{\Strfun{\Vcsnot F}(\Tens{\Excl Y}{\Vect X}, \Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X})}"] \\ \Strfun{\Vcsnot F}(\Tens{\Excl Y}{\Vect X}, \Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl Y}{\Vect X})) \rightarrow[r,equals] &\Strfun{\Fungfp{\Vcsnot F}}(\Tens{\Excl Y}{\Vect X}) \end{tikzcd} } \end{equation} \begin{lemma}\label{lemma:strfun-gfp-general} Let $\Vcsnot F$ be an $n+1$-ary strong functor on $\LCAT$ such that for each $\Vect X\in\Obj{\LCAT^n}$, the category $\COALGFUN\LCAT{\Strfun{\Vcsnot F}_{\Vect X}}$ has a final object $\Fungfp{\Strfun{\Vcsnot F}_{\Vect X}}$. Then there is a unique $n$-ary strong functor $\Fungfp{\Vcsnot F}$ such that $\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)=\Fungfp{\Strfun{\Vcsnot F}_{\Vect X}}$ (and hence $\Strfun{\Vcsnot F}(\Vect X,\Strfun{\Fungfp{\Vcsnot F}}(\Vect X))=\Strfun{\Fungfp{\Vcsnot F}}(\Vect X)$), $\Strfun{\Vcsnot F}(\Vect f,\Strfun{\Fungfp{\Vcsnot F}}(\Vect f))=\Strfun{\Fungfp{\Vcsnot F}}(\Vect f)$ for all $\Vect f\in\LCAT^n(\Vect X,\Vect{X'})$ and $\Strfun{\Vcsnot F}(\Tens{\Excl Y}{\Vect X},\Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X})\Compl\Strnat{\Vcsnot F}_{Y,(\Vect X,\Strfun{\Fungfp{\Vcsnot F}}(\Vect X))}=\Strnat{\Fungfp{\Vcsnot F}}_{Y,\Vect X}$. 
\end{lemma} \begin{lemma}\label{lemma:strfun-lfp-general} Let $\Vcsnot F$ be an $n+1$-ary strong functor on $\LCAT$ such that for each $\Vect X\in\Obj{\LCAT^n}$, the category $\ALGFUN\LCAT{\Strfun{\Vcsnot F}_{\Vect X}}$ has an initial object $\Funlfp{\Strfun{\Vcsnot F}_{\Vect X}}$. Then there is a unique $n$-ary strong functor $\Funlfp{\Vcsnot F}$ such that $\Strfun{\Funlfp{\Vcsnot F}}(\Vect X)=\Funlfp{\Strfun{\Vcsnot F}_{\Vect X}}$ (and hence $\Strfun{\Vcsnot F}(\Vect X,\Strfun{\Funlfp{\Vcsnot F}}(\Vect X))=\Strfun{\Funlfp{\Vcsnot F}}(\Vect X)$), $\Strfun{\Vcsnot F}(\Vect f,\Strfun{\Funlfp{\Vcsnot F}}(\Vect f))=\Strfun{\Funlfp{\Vcsnot F}}(\Vect f)$ for all $\Vect f\in\LCAT^n(\Vect X,\Vect{X'})$ and $\Strfun{\Vcsnot F}(\Tens{\Excl Y}{\Vect X},\Strnat{\Funlfp{\Vcsnot F}}_{Y,\Vect X})\Compl\Strnat{\Vcsnot F}_{Y,(\Vect X,\Strfun{\Funlfp{\Vcsnot F}}(\Vect X))}=\Strnat{\Funlfp{\Vcsnot F}}_{Y,\Vect X}$. Moreover $\Orth{(\Funlfp{\Vcsnot F})}=\Fungfp{(\Orth{\Vcsnot F})}$. \end{lemma} \begin{proof} Apply Lemma~\ref{lemma:strfun-gfp-general} to the strong functor $\Orth{\Vcsnot F}$. \end{proof} \subsection{A categorical axiomatization of models of $\MULL$} \label{sec:cat-models} Our general definition of Seely categorical model of $\MULL$ is based on the notions and results above. We refer in particular to Section~\ref{sec:gen-strong-functors} for the basic definitions of operations on strong functors in our $\LL$ categorical setting. 
\begin{definition}\label{def:categorical-muLL-models} A \emph{categorical model} or \emph{Seely model} of $\MULL$ is a pair $(\cL,\Vect\cL)$ where \begin{Enumerate} \item\label{it:def-muLL-model-1} $\cL$ is a Seely category\label{enum:seel-mull-1} \item\label{it:def-muLL-model-2} $\Vect\cL=(\cL_n)_{n\in\Nat}$ where $\cL_n$ is a class of strong functors $\cL^n\to\cL$, and $\cL_0=\Obj\cL$\label{enum:seel-mull-2} \item\label{it:def-muLL-model-3} if $\Vcstnot X\in\cL_n$ and $\Vcstnot X_i\in\cL_k$ (for $i=1,\dots,n$) then $\Vcstnot X\Comp\Vect{\Vcstnot X}\in\cL_k$ and all $k$ projection strong functors $\cL^k\to\cL$ belong to $\cL_k$\label{enum:seel-mull-3} \item\label{it:def-muLL-model-4} the strong functors $\ITens$ and $\IWith$ belong to $\cL_2$, the strong functor $\Excl\_$ belongs to $\cL_1$ and, if $\Vcstnot X\in\cL_n$, then $\Orth{\Vcstnot X}\in\cL_n$ \label{enum:seel-mull-4} \item\label{it:def-muLL-model-5} and last, for all $\Vcstnot X\in\cL_1$ the category $\COALGFUN{\cL}{\Strfun{\Vsnot X}}$ (see Section~\ref{sec:fixpoints-functors}) has a final object. So for any $\Vsnot X\in\cL_{k+1}$ there is a strong functor $\Fungfp{\Vsnot X}:\cL^k\to\cL$ (see Lemma~\ref{lemma:strfun-gfp-general}). It is required that $\Fungfp{\Vsnot X}\in\cL_k$. \label{enum:seel-mull-5} \end{Enumerate} \end{definition} \begin{remark} By Conditions~\ref{it:def-muLL-model-2} and~\ref{it:def-muLL-model-3} (applied with $n=0$), all constant strong functors are in $\cL_n$, for all $n$. Therefore given $\Vcstnot X\in\cL_{k+1}$ and $\Vect X\in\Obj \cL^k$, the strong functor $\Vcsnot X(\_,\Vect X)$ is in $\cL_1$ by Condition~\ref{it:def-muLL-model-3}. This explains why we can apply Lemma~\ref{lemma:strfun-gfp-general} in Condition~\ref{it:def-muLL-model-5}. \end{remark} Our goal is now to outline the interpretation of $\MULL$ formulas and proofs in such a model. This requires first to describe the syntax of formulas and proofs. 
\begin{remark} One can certainly also define a notion of categorical model of $\MULL$ in a linear-non-linear adjunction setting as presented in~\cite{Mellies09}. This is postponed to further work. \end{remark} \subsubsection{Syntax of $\MULL$}\label{sec:MULL-syntax} We assume to be given an infinite set of propositional variables $\LLvars$ (ranged over by Greek letters $\zeta,\xi\dots$). We introduce a language of propositional $\LL$ formulas with least and greatest fixed points. \begin{multline*} A,B,\dots \Bnfeq \One \Bnfor {\mathord{\perp}} \Bnfor \Tens AB \Bnfor \Par AB\\ \Bnfor 0 \Bnfor \top \Bnfor \Plus AB \Bnfor \With AB \Bnfor \Excl A \Bnfor \Int A \Bnfor\zeta \Bnfor \Lfpll\zeta A \Bnfor \Gfpll\zeta A\,. \end{multline*} The notion of closed formula is defined as usual, the last two constructions being the only binders. \begin{remark} In contrast with second-order $\LL$ or dependent type systems where open formulas play a crucial role, in the case of fixed points, all formulas appearing in sequents and other syntactical devices used to give types to programs will be closed. In our setting, open types/formulas appear only locally, for allowing the expression of (least and greatest) fixed points. \end{remark} We can define two basic operations on formulas. \begin{Itemize} \item \emph{Substitution}: $\Subst AB\zeta$, taking care of avoiding variable capture (using $\alpha$-conversion). \item \emph{Negation} or \emph{dualization}: defined by induction on formulas: $\Orth\One={\mathord{\perp}}$, $\Orth{\mathord{\perp}}=\One$, $\Orthp{\Par AB}=\Tens{\Orth A}{\Orth B}$, $\Orthp{\Tens AB}=\Par{\Orth A}{\Orth B}$, $\Orth0=\top$, $\Orth\top=0$, $\Orthp{\With AB}=\Plus{\Orth A}{\Orth B}$, $\Orthp{\Plus AB}=\With{\Orth A}{\Orth B}$, $\Orthp{\Excl A}=\Int{\Orth A}$, $\Orthp{\Int A}=\Excl{\Orth A}$, $\Orth\zeta=\zeta$, $\Orthp{\Lfpll\zeta A}=\Gfpll\zeta{\Orth A}$ and $\Orthp{\Gfpll\zeta A}=\Lfpll\zeta{\Orth A}$. Obviously $\Biorth A=A$ for any formula $A$. 
\end{Itemize} \begin{remark} The only subtle point of this definition is negation of propositional variables: $\Orth\zeta=\zeta$. This entails $\Orthp{\Subst BA\zeta}=\Subst{\Orth B}{\Orth A}\zeta$ by an easy induction on $B$. If we consider $B$ as a compound logical connective with placeholders labeled by variables then $\Orth B$ is its De~Morgan dual. This definition of $\Orth\zeta$ is also a natural way of preventing the introduction of fixed points wrt.~variables with negative occurrences. % As an illustration, if we define as usual $\Limpl AB$ as $\Par{\Orth A}{B}$ then we can define $E=\Lfpll\zeta{(\With\One{(\Limpl{\Excl\zeta}\zeta)})}$ which \emph{looks like} the definition of a model of the pure $\lambda$-calculus as a recursive type. But this is only an illusion since we actually have $E=\Lfpll\zeta{(\With\One{(\Par{\Int\zeta}\zeta)})}$ so that $\Limpl{\Excl E}{E}$ \emph{is not a retract} of $E$. And indeed if it were possible to define a type $D$ such that $\Limpl{\Excl D}{D}$ is isomorphic to (or is a retract of) $D$ then we would be able to type all pure $\lambda$-terms in our system and this would contradict the fact that $\MULL$ enjoys strong normalization and has a denotational semantics based on totality as shown below. \end{remark} Our logical system extends the usual unilateral sequent calculus of classical propositional $\LL$~\cite{Girard87}, see also~\cite{Mellies09} Section~3.1 and~3.13. In this setting we deal with sequents $\Seq{\List A1n}$ where the $A_i$'s are formulas. It is important to notice that the order of formulas in this list is not relevant, which means that we keep the exchange rule implicit as is usual in sequent calculus. 
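The grammar and the dualization defined above are straightforwardly implementable. The following sketch (ours; restricted to a fragment of the connectives, with constructor names of our own choosing) checks the involutivity $\Biorth A=A$ and the De~Morgan duality of $\mu$ and $\nu$ on the example formula $E$ of the remark.

```python
# Sketch (ours) of a fragment of the formula grammar and of dualization.
from dataclasses import dataclass

@dataclass(frozen=True)
class Var:  name: str
@dataclass(frozen=True)
class Tens: left: object; right: object
@dataclass(frozen=True)
class Par:  left: object; right: object
@dataclass(frozen=True)
class Excl: body: object          # !A
@dataclass(frozen=True)
class Int:  body: object          # ?A
@dataclass(frozen=True)
class Mu:   var: str; body: object
@dataclass(frozen=True)
class Nu:   var: str; body: object

def dual(a):
    # negation is defined by induction, with zeta^orth = zeta on variables
    if isinstance(a, Var):  return a
    if isinstance(a, Tens): return Par(dual(a.left), dual(a.right))
    if isinstance(a, Par):  return Tens(dual(a.left), dual(a.right))
    if isinstance(a, Excl): return Int(dual(a.body))
    if isinstance(a, Int):  return Excl(dual(a.body))
    if isinstance(a, Mu):   return Nu(a.var, dual(a.body))
    if isinstance(a, Nu):   return Mu(a.var, dual(a.body))

a = Mu('z', Par(Int(Var('z')), Var('z')))
assert dual(a) == Nu('z', Tens(Excl(Var('z')), Var('z')))
assert dual(dual(a)) == a      # involutivity: A^{orth orth} = A
```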
To the standard rules% \footnote{Notice that the \emph{promotion rule} of $\LL$ has a condition on contexts similar to that of the rule \Ngfp{} below: to deduce $\Seq{\Delta,\Excl A}$ from $\Seq{\Delta,A}$ it is required that all formulas in the context $\Delta$ are of shape $\Int B$, that is $\Delta=\Int\Gamma$.} % of~\cite{Mellies09} Fig.~1, we add the next two introduction rules for fixed point formulas which are essentially borrowed from~\cite{Baelde12} (see Section~\ref{sec:MULL-comments}) {\footnotesize \[ \begin{prooftree} \hypo{\Seq{\Gamma,\Subst{F}{\Lfpll\zeta F}{\zeta}}} \infer1[\Nlfp]{\Seq{\Gamma,\Lfpll\zeta F}} \end{prooftree} \quad \begin{prooftree} \hypo{\Seq{\Delta,A}} \hypo{\Seq{\Int\Gamma,\Orth A,\Subst{F}{A}{\zeta}}} \infer2[\Ngfp]{\Seq{\Delta,\Int\Gamma,\Gfpll\zeta F}} \end{prooftree}\,. \]% } By taking, in the last rule, $\Delta=\Orth A$ and proving the left premise by an axiom, we obtain the following derived rule \[\begin{prooftree} \hypo{\Seq{\Int\Gamma,\Orth A,\Subst{F}{A}{\zeta}}} \infer1[\Ngfpbis]{\Seq{\Int\Gamma,\Orth A,\Gfpll\zeta F}} \end{prooftree}\,.\] \noindent% The corresponding cut-elimination rule is described in Section~\ref{sec:MULL-cut-elim}. For the other connectives (which are the standard connectives of $\LL$), the cut-elimination rules are the usual ones as described in~\cite{Girard87,Mellies09}. \subsubsection{Comments}\label{sec:MULL-comments} Let us summarize and comment on the differences between our system and Baelde's $\MUMALL$. \begin{Itemize} \item Baelde's logical system is a \emph{predicate calculus} whereas our system is a \emph{propositional calculus}. Indeed, Baelde is mainly interested in applying $\MUMALL$ to program verification where the predicate calculus is essential for expressing properties of programs. We have a Curry-Howard perspective where formulas are seen as types and proofs as programs and where a propositional logical system is sufficient. 
\item Our system has exponentials whereas Baelde's system does not, because they can be encoded in $\MUMALL$ to some extent. However, the exponentials encoded in that way do not satisfy all required isos (in particular the ``Seely morphisms'' are not isos with Baelde's exponentials) and this is a serious issue if we want to encode some form of $\lambda$-calculus in the system and consider it as a programming language. \item Our \Ngfp{} rule differs from Baelde's in that we admit a context in the right premise. Notice that all formulas of this context must bear a $\Int\_$ modality: this restriction is absolutely crucial for expressing the cut-elimination rule in Section~\ref{sec:MULL-cut-elim} which uses an operation of substitution of proofs in formulas and this operation uses structural rules on the context. The semantic counterpart of this operation is described in Section~\ref{sec:strong-fun-LL-operations} where it appears clearly that it uses the fact that $P$ is an object of $\Em\LCAT$. Such a version of \Ngfp{} with a context would be problematic in Baelde's system for lack of built-in exponentials. \end{Itemize} \subsubsection{Syntactic functoriality of formulas}% \label{sec:synt-functoriality}% The reduction rule for the \Nlfp{}/\Ngfp{} cut requires the possibility of substituting a proof for a propositional variable in a formula. More precisely, let $(\zeta,\List\xi 1k)$ be a list of pairwise distinct propositional variables containing all the free variables of a formula $F$ and let $\Vect C=(\List C1k)$ be a sequence of closed formulas. 
Let $\pi$ be a proof of $\Seq{\Int\Gamma,\Orth A,B}$, then one defines% \footnote{Again the fact that the formulas of the context bear a $\Int\_$ is absolutely necessary to make this definition possible.} % a proof $\Substbis F{\pi/\zeta,\Vect C/\Vect\xi}$ of \[ \Seq{\Int\Gamma,\Orthp{\Substbis F{A/\zeta,\Vect C/\Vect\xi}},\Substbis F{B/\zeta,\Vect C/\Vect\xi}} \] by induction on $F$, adapting the corresponding definition in~\cite{Baelde12}. We illustrate this definition by two inductive steps. Observe, in these examples, how the exchange rule is used implicitly. Assume first that $F=\Lfpll\xi G$ (so that $(\zeta,\xi,\List\xi 1k)$ is a list of pairwise distinct variables containing all free variables of $G$). Let $G'=\Subst G{\Vect C}{\Vect\xi}$ whose only possible free variables are $\zeta$ and $\xi$. The proof $\Substbis F{\pi/\zeta,\Vect C/\Vect\xi}$ is defined by \[{\footnotesize \begin{prooftree} \hypo{} \ellipsis{$\Substbis {G}{\pi/\zeta, \Subst{(\Lfpll\xi {G'})}{B}{\zeta}/\xi,\Vect C/\Vect\xi}$} {\Seq{\Int\Gamma,\Orthp{\Substbis {G'}{A/\zeta, \Subst{(\Lfpll\xi {G'})}{B}{\zeta}/\xi}}, \Substbis {G'}{B/\zeta,\Subst{(\Lfpll\xi {G'})}{B}{\zeta}/\xi}}} \infer1[\Nlfp]{\Seq{\Int\Gamma,\Orthp{\Substbis {G'}{A/\zeta, \Subst{(\Lfpll\xi {G'})}{B}{\zeta}/\xi}}, \Subst{(\Lfpll\xi {G'})}{B}{\zeta}}} \infer1[\Ngfpbis]{\Seq{\Int\Gamma, \Orthp{\Subst{(\Lfpll\xi {G'})}{A}{\zeta}}, \Subst{(\Lfpll\xi {G'})}{B}{\zeta}}} \end{prooftree} } \] Notice that this case uses the additional parameters $\Vect C$ in the definition of this substitution with $k+1$ parameters in the inductive hypothesis. To see that the last inference in this deduction is an instance of \Ngfpbis{}, set $H=\Subst{\Orth{G'}}{\Orth A}\zeta$ and notice that % $\Orthp{\Substbis {G'}{A/\zeta, \Subst{(\Lfpll\xi {G'})}{B}{\zeta}/\xi}}= \Substbis{H}{ \Orth{\Subst{(\Lfpll\xi {G'})}{B}{\zeta}}/\xi}$ and $\Orthp{\Subst{(\Lfpll\xi {G'})}{A}{\zeta}}=\Gfpll\xi{H}$. 
Another example is $F=\Tens{G_1}{G_2}$: $\Substbis F{\pi/\zeta,\Vect C/\Vect\xi}$ is defined as \begin{center}{\footnotesize \begin{prooftree} \hypo{} \ellipsis{$\Substbis{G_1}{\pi/\zeta,\Vect C/\Vect\xi}$} {\Seq{\Int\Gamma,\Orthp{{\Subst{G_1'}{A}{\zeta}}},{\Subst{G_1'}{B}{\zeta}}}} \hypo{} \ellipsis{$\Substbis{G_2}{\pi/\zeta,\Vect C/\Vect\xi}$} {\Seq{\Int\Gamma,\Orthp{{\Subst{G_2'}{A}{\zeta}}},{\Subst{G_2'}{B}{\zeta}}}} \infer2[\Ntens] {\Seq{\Int\Gamma,\Int\Gamma,\Orthp{{\Subst{G_1'}{A}{\zeta}}}, \Orthp{{\Subst{G_2'}{A}{\zeta}}}, \Tens{\Subst{G_1'}{B}{\zeta}}{\Subst{G_2'}{B}{\zeta}}}} \infer[double]1[\Ncontr]{\Seq{\Int\Gamma,\Orthp{{\Subst{G_1'}{A}{\zeta}}}, \Orthp{{\Subst{G_2'}{A}{\zeta}}}, \Tens{\Subst{G_1'}{B}{\zeta}}{\Subst{G_2'}{B}{\zeta}}}} \infer1[\Npar]{\Seq{\Int\Gamma, \Orthp{{\Subst{G_1'}{A}{\zeta}}}\IPar\Orthp{{\Subst{G_2'}{A}{\zeta}}}, \Tens{\Subst{G_1'}{B}{\zeta}}{\Subst{G_2'}{B}{\zeta}}}} \end{prooftree} } \end{center} Observe that we use in an essential way the fact that all formulas of the context are of shape $\Int H$ (even if $F$ is exponential-free) when we apply contraction rules on this context. \subsubsection{Cut elimination}% \label{sec:MULL-cut-elim}% The only reduction that we will mention here is \Nlfp{}/\Ngfp{}. Let $\theta$ be \[{\footnotesize \begin{prooftree} \hypo{} \ellipsis{$\pi$}{\Seq{\Lambda,\Subst F{\Lfpll\zeta F}\zeta}} \infer1[\Nlfp]{\Seq{\Lambda,\Lfpll\zeta F}} \hypo{} \ellipsis{$\lambda$}{\Seq{\Delta,\Orth A}} \hypo{} \ellipsis{$\rho$}{\Seq{\Int\Gamma,A,\Orthp{\Subst FA\zeta}}} \infer2[\Ngfp]{\Seq{\Delta,\Int\Gamma,\Orthp{\Lfpll\zeta F}}} \infer2[\Ncut]{\Seq{\Lambda,\Delta,\Int\Gamma}} \end{prooftree} } \] and let $\rho'$ be the proof \[{\footnotesize \begin{prooftree} \hypo{} \ellipsis{$\rho$}{\Seq{\Int\Gamma,A,\Orthp{\Subst FA\zeta}}} \infer1[\Ngfpbis]{\Seq{\Int\Gamma,A,\Orthp{\Lfpll\zeta F}}} \end{prooftree}} \] Then $\theta$ reduces to the proof shown in Figure~\ref{fig:fp-reduction}. 
\begin{figure*} \centering {\footnotesize % \begin{prooftree} \hypo{} \ellipsis{$\Subst F{\rho'}\zeta$} {\Seq{\Int\Gamma,\Subst FA\zeta,\Orthp{\Subst F{\Lfpll\zeta F}\zeta}}} \hypo{} \ellipsis{$\pi$}{\Seq{\Lambda,\Subst F{\Lfpll\zeta F}\zeta}} \infer2[\Ncut]{\Seq{\Lambda,\Int\Gamma,\Subst FA\zeta}} \hypo{} \ellipsis{$\lambda$}{\Seq{\Delta,\Orth A}} \hypo{} \ellipsis{$\rho$}{\Seq{\Int\Gamma,A,\Orthp{\Subst FA\zeta}}} \infer2[\Ncut]{\Seq{\Delta,\Int\Gamma,\Orthp{\Subst FA\zeta}}} \infer2[\Ncut]{\Seq{\Lambda,\Delta,\Int\Gamma,\Int\Gamma}} \infer[double]1[\Ncontr]{\Seq{\Lambda,\Delta,\Int\Gamma}} \end{prooftree} % } \caption{Reduction \Nlfp{}/\Ngfp} \label{fig:fp-reduction} \end{figure*} This reduction rule uses the functoriality of formulas as well as the $\wn$-contexts in the \Ngfp{} rule. \begin{remark} In~\cite{Baelde12} it is shown that $\MUMALL$ enjoys cut-elimination. We will show in a forthcoming paper how this method based on reducibility can be adapted to our $\MULL$. Notice that a cut-free proof does not enjoy the sub-formula property in general because of rule \Ngfp{}. However, the normalization theorem ensures that a proof of a sequent \emph{which does not contain any $\nu$-formula} reduces to a cut-free proof enjoying the sub-formula property. \end{remark} \subsubsection{Interpreting formulas and proofs (outline)} We assume given a $\MULL$ Seely model $(\cL,\Vect\cL)$, see Section~\ref{sec:cat-models}. With any formula $A$ and any repetition-free sequence $\Vect\zeta=(\List\zeta 1k)$ of type variables containing all the free variables of $A$, we associate $\Tsem A_{\Vect\zeta}\in\cL_k$ in the obvious way, for instance $\Tsem{\Tens AB}_{\Vect\zeta}=\mathord\ITens\Comp(\Tsem A_{\Vect\zeta},\Tsem B_{\Vect\zeta})\in\cL_k$ by conditions~(\ref{enum:seel-mull-4}) and~(\ref{enum:seel-mull-3}) in Definition~\ref{def:categorical-muLL-models} and $\Tsem{\Gfpll\zeta A}_{\Vect\zeta}=\Fungfp{(\Tsem A_{\Vect\zeta,\zeta})}$ using condition~(\ref{enum:seel-mull-5}).
Then $\Tsem{\Orth A}_{\Vect\zeta}=\Orth{\Tsem{A}_{\Vect\zeta}}$ up to a natural isomorphism. In this outline, we keep symmetric monoidality isomorphisms of $\cL$ and of $\Excl\_$ implicit (see for instance~\cite{Ehrhard18} for how \emph{monoidal trees} allow taking them into account). With any $\Gamma=(\List A1n)$ we associate an object $\Tsem\Gamma$ of $\cL$ and with any proof $\pi$ of $\Seq\Gamma$ we associate a morphism $\Psem\pi\in\cL(\One,\Tsem\Gamma)$ using the categorical constructs of $\cL$ in a straightforward way, see~\cite{Mellies09}. Then one proves that if $\pi$ and $\pi'$ are proofs of $\Seq\Gamma$ and $\pi$ reduces to $\pi'$ by the cut-elimination rules, then $\Psem\pi=\Psem{\pi'}$. This is done by an inspection of the various cut-elimination rules. In the case of \Nlfp{}/\Ngfp{} cut-elimination, we need the following lemma. \begin{lemma} Let $\Gamma=(\List D1n)$ be a sequence of closed formulas and let $P=\Excl{\Tsem{D_1}}\ITens\cdots\ITens\Excl{\Tsem{D_n}}$, considered as an object of $\Em\LCAT$. Let $F$ be a formula and $\zeta,\List\xi 1k$ be a repetition-free list of variables containing all the free variables of $F$ so that $\Vcsnot F=\Tsem F_{\zeta,\Vect\xi}$ is a strong functor $\LCAT^{k+1}\to\LCAT$ which belongs to $\LCAT_k$. As shown in Section~\ref{sec:gen-strong-functors} this strong functor lifts to a functor $\Mfun{\Vcsnot F}P:\Kcomod\LCAT P^{k+1}\to \Kcomod\LCAT P$. Let $\pi$ be a proof of $\Seq{\Int\Gamma,\Orth A,B}$, so that $\Psem\pi\in\Kcomod\LCAT P(\Tsem A,\Tsem B)$. Let $\Vect C=(\List C1k)$ be a list of closed formulas. Then \begin{multline*} \Psem{\Substbis F{\pi/\zeta,\Vect C/\Vect \xi}} =\Mfun{\Vcsnot F}P(\Psem\pi,\Tsem{C_1},\dots,\Tsem{C_k})\\ \in\Kcomod\LCAT P( \Strfun{\Vcsnot F}(\Tsem A,\Tsem{C_1},\dots,\Tsem{C_k}), \Strfun{\Vcsnot F}(\Tsem B,\Tsem{C_1},\dots,\Tsem{C_k}))\,. \end{multline*} \end{lemma} \noindent% The proof of the lemma is a simple verification.
Notice that we use the fact that the objects of $\Kcomod\LCAT P$ are the same as those of $\LCAT$. \section{Sets and relations}\label{sec:sets-rel} The category $\REL$ has sets as objects, and given sets $E$ and $F$, $\REL(E,F)=\Part{E\times F}$. Identity is the diagonal relation and composition is the usual composition of relations, denoted by simple juxtaposition. If $t\in\REL(E,F)$ and $u\subseteq E$ then $\Matappa tu=\Eset{b\in F\mid\exists a\in u\ (a,b)\in t}$. \subsection{$\REL$ as a model of $\LL$.}\label{sec:REL-LL-model} This category is a well-known model of $\LL$ in which $\One=\Botlin=\Eset{\Onelem}$, $\Tens EF=(\Limpl EF)=\Par EF=E\times F$ so that $\Orth E=E$. As to the additives, $0=\top=\emptyset$ and $\Bwith_{i\in I}E_i=\Bplus_{i\in I}E_i=\Union_{i\in I}\Eset i\times E_i$. The exponentials are given by $\Excl E=\Int E=\Mfin E$ (finite multisets of elements of $E$ which are functions $m:E\to\Nat$ such that $m(a)\not=0$ for finitely many $a$'s; addition of multisets is defined in the obvious pointwise way, and $\Mset{\List a1k}$ is the multiset which maps $a$ to the number of $i$'s such that $a_i=a$). For the additives and multiplicatives, the operations on morphisms are defined in the obvious way. Let us be more specific about the exponentials. Given $s\in\REL(E,F)$, $\Excl s\in\REL(\Excl E,\Excl F)$ is \(\Excl s=\{(\Mset{\List a1n},\Mset{\List b1n})\mid n\in\Nat \text{ and }\forall i\ (a_i,b_i)\in s\}\), $\Der E\in\REL(\Excl E,E)$ is given by $\Der E=\Eset{(\Mset a,a)\mid a\in E}$ and $\Digg E\in\REL(\Excl E,\Excll E)$ is given by $\Digg E=\{(m_1+\cdots+m_n,\Mset{\List m1n})\mid\forall i\ m_i\in\Mfin E\}$. Last $\Seelyz\in\REL(\One,\Excl\top)$ is $\Seelyz=\Eset{(\Onelem,\Mset{})}$ and $\Seelyt_{E,F}\in\REL(\Tens{\Excl E}{\Excl F},\Excl{(\With EF)})$ is given by \begin{multline*} \Seelyt_{E,F}=\{((\Mset{\List a1k},\Mset{\List b1l}),\\ \Mset{(1,a_1),\dots,(1,a_k),(2,b_1),\dots,(2,b_l)}) \mid\\ \List a1k\in E\text{ and }\List b1l\in F\}\,. 
\end{multline*} Weakening $\Weak E\in\REL(\Excl E,\One)$ and $\Contr E\in\REL(\Excl E,\Tens{\Excl E}{\Excl E})$ are given by $\Weak E=\Eset{(\Mset{},\Onelem)}$ and \(\Contr E=\{(m_1+m_2,(m_1,m_2))\mid m_i\in\Mfin E\text{ for }i=1,2\}\). \subsection{Locally continuous functors on $\REL$} \label{sec:hom-continuous} The following considerations on continuity of functors are standard, see~\cite{Wand79}. A functor $\Vsnot F:\REL^n\to\REL$ is locally continuous if, for all $\Vect E,\Vect F\in\REL^n$ and every directed set $D\subseteq\REL^n(\Vect E,\Vect F)$, one has $\Vsnot F(\Union D)=\Union\Eset{\Vsnot F(\Vect s)\mid\Vect s\in D}$ (we use $\Union D$ for the component-wise union). This implies in particular that if $\Vect s\subseteq\Vect t$, one has $\Vsnot F(\Vect s)\subseteq\Vsnot F(\Vect t)$ (taking $D=\Eset{\Vect s,\Vect t}$). To simplify notation, assume that $n=1$ (but what follows holds for all values of $n$). \begin{lemma}\label{lemma:rel-embedding-retraction} Let $E$ and $F$ be sets and let $s\in\REL(E,F)$ and $t\in\REL(F,E)$. Assume that $t\Compl s=\Id_E$ and that $s\Compl t\subseteq\Id_F$. Then $s$ is (the graph of) an injective function and $t=\{(b,a)\in F\times E\mid (a,b)\in s\}$. \end{lemma} \begin{lemma} Let $\Vsnot F:\REL\to\REL$ be a locally continuous functor. Assume that $E\subseteq F$ and let $\Relii_{E,F}=\Eset{(a,a)\mid a\in E}\in\REL(E,F)$ and $\Relip_{E,F}=\Eset{(a,a)\mid a\in E}\in\REL(F,E)$. Then $\Vsnot F(\Relii_{E,F})\in\REL(\Vsnot F(E),\Vsnot F(F))$ is an injective function. \end{lemma} \begin{proof} We have $\Relip_{E,F}\Compl\Relii_{E,F}=\Id_E$ and $\Relii_{E,F}\Compl\Relip_{E,F}\subseteq\Id_F$ and hence $\Vsnot F(\Relip_{E,F})\Compl\Vsnot F(\Relii_{E,F})=\Id$ by functoriality and $\Vsnot F(\Relii_{E,F})\Compl\Vsnot F(\Relip_{E,F})\subseteq\Id$ by local continuity. The announced property results from Lemma~\ref{lemma:rel-embedding-retraction}.
\end{proof} Let $\RELI$ be the category whose objects are sets and morphisms are set inclusions (so that $\RELI(E,F)$ has $\Relii_{E,F}$ as unique element if $E\subseteq F$ and is empty otherwise). Then $\Relii$ can be thought of as the ``inclusion functor'' $\RELI\to\REL$, acting as the identity on objects. Obviously, $\RELI$ is cocomplete\footnote{Notice that it is not complete, for instance it has no final object.}. \begin{proposition}\label{prop:hom-continuous-dir-cocontinuous} If $\Vsnot F:\REL\to\REL$ is locally continuous then $\Vsnot F\Compl\Relii:\RELI\to\REL$ is directed-cocontinuous (that is, preserves the colimits of directed sets of sets). \end{proposition} The proof can be found in \cite{Wand79}. We know that a locally continuous functor $\Vsnot F$ maps inclusions to injections; we shall say that $\Vsnot F$ is \emph{strict} if it maps inclusions to inclusions, that is, if $E\subseteq F$ then $\Vsnot F(E)\subseteq\Vsnot F(F)$ and $\Vsnot F(\Relii_{E,F})=\Relii_{\Vsnot F(E),\Vsnot F(F)}$ (which implies $\Vsnot F(\Relip_{E,F})=\Relip_{\Vsnot F(E),\Vsnot F(F)}$). As a direct consequence of Proposition~\ref{prop:hom-continuous-dir-cocontinuous}, we get: \begin{lemma}\label{lemma:str-hom-cont-scott} If $\Vsnot F$ is strict locally continuous then, for any directed set of sets $\cD$, one has $\Vsnot F(\Union\cD)=\Union_{E\in\cD}\Vsnot F(E)$. \end{lemma} \subsection{Variable sets and basic constructions on them}\label{sec:variable-sets} \begin{definition} An $n$-ary \emph{variable set} is a strong functor $\Vsnot V:\REL^n\to\REL$ such that $\Strfun{\Vsnot V}$ is locally continuous and strict. \end{definition} \label{sec:basic-variable-sets} By the general considerations of Section~\ref{sec:gen-strong-functors}, there is a constant strong functor $\REL^n\to\REL$ with value $E$ for each set $E$.
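To make the relational operations of Section~\ref{sec:REL-LL-model} more concrete, here is a minimal Python sketch (the encoding of finite multisets as sorted tuples, the size bound on $\Excl s$ and all function names are our own illustrative choices; the full relation $\Excl s$ is infinite as soon as $s$ is non-empty):

```python
import itertools

def compose(s, t):
    """Relational composition: first s, then t."""
    return {(a, c) for (a, b1) in s for (b2, c) in t if b1 == b2}

def matappa(t, u):
    """Application t . u = {b | there is a in u with (a, b) in t}."""
    return {b for (a, b) in t if a in u}

def mset(xs):
    """A finite multiset, encoded as a sorted tuple."""
    return tuple(sorted(xs))

def excl(s, bound):
    """!s, restricted to multisets of size <= bound."""
    result = set()
    for n in range(bound + 1):
        for pairs in itertools.product(sorted(s), repeat=n):
            result.add((mset(a for (a, b) in pairs),
                        mset(b for (a, b) in pairs)))
    return result
```

Note that `excl(s, bound)` always contains the pair of empty multisets, reflecting the fact that $\Excl s$ relates $\Mset{}$ to $\Mset{}$.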
There are projection strong functors $\REL^n\to\REL$; $\times$ (that is $\ITens$) and $+$ (that is $\IPlus$) define strong functors $\REL^2\to\REL$; and $\Mfin\_$ (that is $\Excl\_$) defines a strong functor $\REL\to\REL$. Strong functors on $\REL$ are stable under composition, and if $\Vsnot V$ is a strong functor $\REL^n\to\REL$ then there is a ``dual'' strong functor $\Orth{\Vsnot V}$ (which is actually identical to $\Vsnot V$ in this very simple model). To check that these strong functors $\Vsnot V$ are variable sets we have only to check that the underlying functors $\Strfun{\Vsnot V}$ are strict locally continuous. We deal with $\Excl\_$ and composition; the other cases are similar. We already defined the functor% \footnote{We use the same notation for the strong functor and its underlying functor, this slight abuse of notation should not lead to confusion.} % $\Excl\_$ in Section~\ref{sec:REL-LL-model}. If $s\subseteq t\in\REL(E,F)$, it follows from the definition that $\Excl s\subseteq\Excl t$. Let $D\subseteq\REL(E,F)$ be directed; we prove $\Excl{(\Union D)}\subseteq\Union_{s\in D}\Excl s$: an element of $\Excl{(\Union D)}$ is a pair $(\Mset{\List a1k},\Mset{\List b1k})$ with $(a_i,b_i)\in\Union D$ for $i=1,\dots,k$. Since $D$ is directed, there is an $s\in D$ such that $(a_i,b_i)\in s$ for $i=1,\dots,k$ and the inclusion follows. Strictness is obvious. \label{sec:comp-var-sets} Let $\Vsnot V_i:\REL^n\to\REL$ be variable sets for $i=1,\dots,k$ and let $\Vsnot W:\REL^k\to\REL$ be a variable set. Then the functor $\Strfun{\Vsnot W}\Comp\Vect{\Strfun{\Vsnot V}}:\REL^n\to\REL$ is clearly strict locally continuous (since these conditions are preservation properties) from which it follows that the strong functor $\Vsnot U=\Vsnot W\Comp\Vect{\Vsnot V}$ is a variable set. \subsubsection{Fixed point of a variable set}\label{sec:VS-fixpoints} Let $\Vsnot F:\REL\to\REL$ be a strict locally continuous functor.
Since $\emptyset\subseteq\Vsnot F(\emptyset)$ we have $\Vsnot F^n(\emptyset)\subseteq\Vsnot F^{n+1}(\emptyset)$ for all $n\in\Nat$, by induction on $n$ and hence $\Vsnot F(\Union_{n=0}^\infty\Vsnot F^n(\emptyset))=\Union\Vsnot F^n(\emptyset)$ by Lemma~\ref{lemma:str-hom-cont-scott} since $\Eset{\Vsnot F^n(\emptyset)\mid n\in\Nat}$ is directed. Let $\Funfp{\Vsnot F}=\Union_{n=0}^\infty\Vsnot F^n(\emptyset)$, so that $(\Funfp{\Vsnot F},\Id_{\Funfp{\Vsnot F}})$ is an $\Vsnot F$-coalgebra. \begin{lemma}\label{lemma:rel-fixpoint-final} The coalgebra $(\Funfp{\Vsnot F},\Id)$ is final in $\COALGFUN\REL{\Vsnot F}$. \end{lemma} Notice that $(\Funfp{\Vsnot F},\Id)$ is also an initial object in $\ALGFUN\REL{\Vsnot F}$. When we insist on considering $\Funfp{\Vsnot F}$ as a final coalgebra, we denote it as $\Fungfp{\Vsnot F}$. \begin{lemma}\label{lemma:hom-conts-stable-fixpoint} Let $\Vsnot F:\REL^{n+1}\to\REL$ be a strict locally continuous functor. The functor $\Fungfp{\Vsnot F}:\REL^n\to\REL$ is strict locally continuous. \end{lemma} Let $\Vsnot V:\REL^{n+1}\to\REL$ be a variable set; by Lemma~\ref{lemma:strfun-gfp-general}, there is a unique strong functor $\Fungfp{\Vsnot V}:\REL^n\to\REL$ which is characterized by: $\Strfun{\Fungfp{\Vsnot V}}(\Vect E)=\Fungfp{\Strfun{\Vsnot V}_{\Vect E}}$, for each $\Vect s\in\REL^n(\Vect E,\Vect F)$, $\Strfun{\Fungfp{\Vsnot V}}(\Vect s)=\Strfun{\Vsnot V}(\Vect s,\Fungfp{\Strfun{\Vsnot V}}(\Vect s))$ and last $\Strfun{\Vsnot V}(\Excl E\ITens\Vect F,\Strnat{\Vsnot V}_{E,\Vect F})=\Strnat{\Vsnot V}_{E,\Vect F}$. \begin{lemma}\label{lemma:REL-variable-set-fixpoint} The functor $\Fungfp{\Vsnot V}$ is a variable set. \end{lemma} \begin{proof} By the conditions above satisfied by $\Fungfp{\Vsnot V}$ we have that $\Strfun{\Fungfp{\Vsnot V}}=\Fungfp{\Strfun{\Vsnot V}}$ and hence $\Strfun{\Fungfp{\Vsnot V}}$ is strict locally continuous by Lemma~\ref{lemma:hom-conts-stable-fixpoint}.
\end{proof} \subsubsection{A model of $\MULL$ based on variable sets} \label{sec:strong-VS-Seely-model} Let $\VREL n$ be the class of all $n$-ary variable sets, so that $\VREL 0=\Obj\REL$. That $(\REL,(\VREL n)_{n\in\Nat})$ is a Seely model of $\MULL$ in the sense of Definition~\ref{def:categorical-muLL-models} results mainly from the fact that we take \emph{all} variable sets in the $\VREL n$'s so that there is essentially nothing to check. More explicitly: (\ref{enum:seel-mull-1}) holds by Section~\ref{sec:REL-LL-model}, (\ref{enum:seel-mull-2}) holds by construction, (\ref{enum:seel-mull-3}) holds by the fact that variable sets compose as explained in Section~\ref{sec:comp-var-sets} (notice that this condition refers to the general composition of strong functors defined at the beginning of Section~\ref{sec:strong-fun-LL-operations}), (\ref{enum:seel-mull-4}) holds by Section~\ref{sec:basic-variable-sets} and by the fact that the De~Morgan dual of a strong functor is strong, see Section~\ref{sec:strong-fun-LL-operations} and (\ref{enum:seel-mull-5}) holds by Lemma~\ref{lemma:REL-variable-set-fixpoint}. \section{Non-uniform totality spaces}\label{sec:NUTS}% We enrich the model of Section~\ref{sec:sets-rel} with a notion of totality; we use notations from that section for operations on sets and relations. \subsection{Basic definitions.} Let $E$ be a set and let $\cT\subseteq\Part E$. We define $ \Orth\cT=\{u'\subseteq E\mid\forall u\in\cT\ u\cap u'\not=\emptyset\} $. If $\cS\subseteq\cT\subseteq\Part E$ then $\Orth\cT\subseteq\Orth\cS$. We also have $\cT\subseteq\Biorth\cT$ and therefore $\Triorth\cT=\Orth\cT$. The biorthogonal closure has a nice and simple characterization. \begin{lemma}\label{lemma:NUTS-biorth-uppper-closed} Let $\cT\subseteq\Part E$; then $\Biorth\cT=\Upcl\cT=\{v\subseteq E\mid\exists u\in\cT\ u\subseteq v\}$. \end{lemma} \begin{proof} The $\supseteq$ direction is obvious, and the converse is not difficult either: let $u\in\Biorth\cT$.
Then $E\setminus u\notin\Orth\cT$, so there is $v\in\cT$ such that $v\cap(E\setminus u)=\emptyset$, that is $v\subseteq u$. Hence $u\in\Upcl\cT$. \end{proof} A \emph{non-uniform totality space} (NUTS) is a pair $X=(\Web X,\Total X)$ where $\Web X$ is a set and $\Total X\subseteq\Part{\Web X}$ satisfies $\Total X=\Biorth{\Total X}$, that is $\Total X=\Upcl{\Total X}$. Of course we set $\Orth X=(\Web X,\Orth{\Total X})$. \begin{example} Let $X=(\Nat,\Total X)$ where $\Total X$ is the set of all infinite subsets of $\Nat$. It is a NUTS because a superset of an infinite set is infinite. Then $\Web{\Orth X}=\Nat$ and $\Total{\Orth X}$ is the set of all cofinite subsets of $\Nat$ (the subsets $u$ of $\Nat$ such that $\Nat\setminus u$ is finite). If, with the same web $\Nat$, we take $\Total X=\Eset{u\subseteq\Nat\mid u\not=\emptyset}$ (again $\Total X=\Upcl{\Total X}$ obviously), then $\Total{\Orth X}=\Eset\Nat$. \end{example} We define four basic NUTS: $0=(\emptyset,\emptyset)$, $\top=(\emptyset,\Eset\emptyset)$ and $\One=\Botlin=(\Eset\Onelem,\Eset{\Eset\Onelem})$. Given NUTS $X_1$ and $X_2$ we define a NUTS $\Tens{X_1}{X_2}$ by $\Web{\Tens{X_1}{X_2}}=\Web{X_1}\times\Web{X_2}$ and \( \Total{\Tens{X_1}{X_2}} =\Upcl{\Eset{\Tens{u_1}{u_2}\mid u_i\in\Total{X_i}\text{ for }i=1,2}} \) where $\Tens{u_1}{u_2}=u_1\times u_2$. And then we define $\Limpl XY=\Orthp{\Tens X{\Orth Y}}$. \begin{lemma}\label{lemma:nuts-limpl-charact} $t\in\Total{\Limpl XY}\Equiv\forall u\in\Total X\ \Matappa tu\in\Total Y$. \end{lemma} We define the category $\NUTS$ whose objects are the NUTS and $\NUTS(X,Y)=\Total{\Limpl XY}$, composition being defined as the usual composition in $\REL$ (relational composition) and identities as the diagonal relations. Lemma~\ref{lemma:nuts-limpl-charact} shows that we have indeed defined a category. \subsubsection{Multiplicative structure} \begin{lemma}\label{lemma:NUTS-iso-charact} Let $X$ and $Y$ be NUTS and $t\in\NUTS(X,Y)$.
Then $t$ is an iso in $\NUTS$ iff $t$ is (the graph of) a bijection $\Web X\to\Web Y$ such that $\forall u\subseteq\Web X\ u\in\Total X\Equiv t(u)\in\Total Y$. \end{lemma} \begin{lemma}\label{lemma:nuts-orth-morph} Let $t\subseteq\Web X\times\Web Y$. One has $t\in\NUTS(X,Y)$ iff $\Orth t=\Eset{(b,a)\mid (a,b)\in t}\in\NUTS(\Orth Y,\Orth X)$. \end{lemma} \begin{proof} This is an obvious consequence of Lemma~\ref{lemma:nuts-limpl-charact} and of the fact that $\Limplp XY=\Orth{\Tensp X{\Orth Y}}$ and $\Limplp{\Orth Y}{\Orth X}=\Orthp{\Tens{\Orth Y}{X}}$. \end{proof} \begin{lemma}\label{lemma:limpl-tens-charact} Let $t\subseteq\Web{\Limpl{\Tens{X_1}{X_2}}{Y}}$. One has $t\in\NUTS(\Tens{X_1}{X_2},Y)$ iff for all $u_1\in\Total{X_1}$ and $u_2\in\Total{X_2}$ one has $\Matappa t{\Tensp{u_1}{u_2}}\in\Total Y$. \end{lemma} \begin{lemma}\label{lemma:Assoc-tens-limpl} The bijection $\Assoc_{\Web{X_1},\Web{X_2},\Web Y}$ is an isomorphism from $\Limpl{\Tensp{X_1}{X_2}}{Y}$ to $\Limpl{X_1}{\Limplp{X_2}{Y}}$. \end{lemma} We now turn $\ITens$ into a functor, its action on morphisms being defined as in $\REL$. Let $t_i\in\NUTS(X_i,Y_i)$ for $i=1,2$; we have $\Tens{t_1}{t_2}\in\NUTS(\Tens{X_1}{X_2},\Tens{Y_1}{Y_2})$ by Lemma~\ref{lemma:limpl-tens-charact} and by the equation \( \Matappa{\Tensp{t_1}{t_2}}{\Tensp{u_1}{u_2}} =\Tens{\Matappap{t_1}{u_1}}{\Matappap{t_2}{u_2}} \). This functor is monoidal, with unit $\One$ and symmetric monoidality isomorphisms $\Leftu$, $\Rightu$, $\Sym$ and $\Assoc$ defined as in $\REL$. The only non-trivial thing to check is that $\Assoc$ is indeed a morphism, namely \( \Assoc_{\Web{X_1},\Web{X_2},\Web{X_3}} \in\NUTS(\Tens{\Tensp{X_1}{X_2}}{X_3},\Tens{X_1}{\Tensp{X_2}{X_3}}) \). This results from Lemma~\ref{lemma:Assoc-tens-limpl} and from the observation that \( \Orthp{\Tens{\Tensp{X_1}{X_2}}{X_3}} =\Limplp{\Tensp{X_1}{X_2}}{\Orth{X_3}}\) and\\ \(\Orthp{\Tens{X_1}{\Tensp{X_2}{X_3}}} =\Limplp{X_1}{\Limplp{X_2}{\Orth{X_3}}} \).
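The equation $\Matappa{\Tensp{t_1}{t_2}}{\Tensp{u_1}{u_2}}=\Tens{\Matappap{t_1}{u_1}}{\Matappap{t_2}{u_2}}$ invoked above can be checked directly on small finite relations; the following Python sketch does so (the function names are ours):

```python
def matappa(t, u):
    """t . u = {b | there is a in u with (a, b) in t}."""
    return {b for (a, b) in t if a in u}

def tens_rel(t1, t2):
    """Tensor of two relations, acting componentwise on pairs."""
    return {((a1, a2), (b1, b2)) for (a1, b1) in t1 for (a2, b2) in t2}

def tens_set(u1, u2):
    """Tensor of two subsets: their cartesian product."""
    return {(a1, a2) for a1 in u1 for a2 in u2}

# the functoriality equation on a small example
t1, t2 = {(1, 10), (2, 20)}, {(3, 30)}
u1, u2 = {1}, {3}
lhs = matappa(tens_rel(t1, t2), tens_set(u1, u2))
rhs = tens_set(matappa(t1, u1), matappa(t2, u2))
assert lhs == rhs == {(10, 30)}
```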
The SMC category $\NUTS$ is closed, with $\Limpl XY$ as internal hom object from $X$ to $Y$, and evaluation morphism \( \Evlin=\Eset{(((a,b),a),b)\mid a\in\Web X\text{ and }b\in\Web Y} \) which indeed belongs to $\NUTS(\Tens{\Limplp XY}{X},Y)$ by Lemma~\ref{lemma:limpl-tens-charact} since, for all $t\in\Total{\Limpl XY}$ and $u\in\Total X$, we have \( \Matapp\Evlin{\Tensp tu}=\Matapp tu\in\Total Y \). This category $\NUTS$ is also $\ast$-autonomous with dualizing object ${\mathord{\perp}}=\One$. \subsubsection{Additive structure} Let $(X_i)_{i\in I}$ be an at most countable family of objects of $\NUTS$. We define $X=\Bwith_{i\in I}X_i$ by: $\Web X=\Union_{i\in I}\Eset i\times\Web{X_i}$ and \( \Total X=\{u\subseteq\Web X\mid\forall i\in I\ \Matappa{\Proj i}{u}\in\Total{X_i}\} \) where $\Proj i=\Eset{((i,a),a)\mid a\in \Web{X_i}}$. It is clear that $\Total X=\Upcl{\Total X}$ and hence $X$ is an object of $\NUTS$. By definition of $X$ and by Lemma~\ref{lemma:nuts-limpl-charact} we have $\forall i\in I\ \Proj i\in\NUTS(X,X_i)$. Given $\Vect t=(t_i)_{i\in I}$ with $\forall i\in I\ t_i\in\NUTS(Y,X_i)$, we have $\Tuple{\Vect t}\in\NUTS(Y,X)$ as easily checked (using Lemma~\ref{lemma:nuts-limpl-charact} again). It follows that $(\Bwith_{i\in I}X_i,(\Proj i)_{i\in I})$ is the cartesian product of the $X_i$'s in $\NUTS$. This shows that the category $\NUTS$ has all countable products and hence is cartesian. Since it is $\ast$-autonomous, the category $\NUTS$ is also cocartesian, coproduct being given by $\Bplus_{i\in I}X_i=\Orthp{\Bwith_{i\in I}\Orth{X_i}}$. It follows that $X=\Bplus_{i\in I}X_i=\Orthp{\Bwith_{i\in I}\Orth{X_i}}$ satisfies $\Web X=\Union_{i\in I}\Eset i\times\Web{X_i}$ and % {\footnotesize \[\Total X= \{v\subseteq\Union_{i\in I}\Eset i\times\Web{X_i} \mid\exists i\in I\,\exists u\in\Total{X_i}\ \Eset i\times u\subseteq v\} \]}% as easily checked. Notice that the final object is $\top=(\emptyset,\Eset\emptyset)$ and that $0=\Orth\top=(\emptyset,\emptyset)$.
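On a finite web the orthogonality $\Orth\cT$ can be computed exhaustively, giving a direct check of the characterization $\Biorth\cT=\Upcl\cT$ of Lemma~\ref{lemma:NUTS-biorth-uppper-closed}. A small Python sketch on a toy three-element web (all names are ours; actual NUTS usually have infinite webs):

```python
from itertools import combinations

def powerset(E):
    """All subsets of the finite web E, as frozensets."""
    E = sorted(E)
    return [frozenset(c) for n in range(len(E) + 1)
            for c in combinations(E, n)]

def orth(T, E):
    """Orthogonal of T: all u' meeting every u in T."""
    return {up for up in powerset(E) if all(u & up for u in T)}

def upcl(T, E):
    """Upward closure of T in P(E)."""
    return {v for v in powerset(E) if any(u <= v for u in T)}

E = {0, 1, 2}
T = {frozenset({0}), frozenset({1, 2})}
# biorthogonal closure = upward closure
assert orth(orth(T, E), E) == upcl(T, E)
```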
\subsubsection{Exponential}\label{sec:NUTS-exponential} We extend the exponential of $\REL$ with totality. If $u\in\Part{\Web X}$ we set $\Promhc u=\Mfin u\in\Part{\Web{\Excl X}}$. Then we set $\Web{\Excl X}=\Mfin{\Web X}$ and \( \Total{\Excl X}=\Biorth{\Eset{\Promhc u\mid u\in\Total X}} =\Upcl{\Eset{\Promhc u\mid u\in\Total X}} \). \begin{lemma}\label{lemma:nuts-excl-map} Let $t\subseteq\Mfin{\Web X}\times\Web Y$. One has $t\in\NUTS(\Excl X,Y)$ iff for all $u\in\Total X$ one has $\Matappa t{\Promhc u}\in\Total Y$. \end{lemma} \begin{lemma}\label{lemma:nuts-excl-map-bil} Let $t\subseteq\Mfin{\Web{X_1}}\times\cdots\times\Mfin{\Web{X_k}}\times\Web Y$. One has $t\in\NUTS(\Excl{X_1}\ITens\cdots\ITens\Excl{X_k},Y)$ iff for all $\Vect u\in\prod_{i=1}^k\Total{X_i}$ one has $\Matappa t{(\Promhc{u_1}\ITens\cdots\ITens\Promhc{u_k})}\in\Total Y$. \end{lemma} \begin{lemma} If $t\in\NUTS(X,Y)$ then $\Excl t\in\NUTS(\Excl X,\Excl Y)$. \end{lemma} \begin{proof} By Lemma~\ref{lemma:nuts-excl-map} and the fact that $\Matappa{\Excl t}{\Promhc u}=\Promhc{\Matappap tu}$. \end{proof} To prove that $\NUTS$ is a categorical model of $\LL$, it suffices to show that the various relational morphisms defining the strong symmetric monoidal comonad structure of $\Excl\_$ in $\REL$ (see Section~\ref{sec:REL-LL-model}) are actually morphisms in $\NUTS$. This is essentially straightforward and based on Lemma~\ref{lemma:nuts-excl-map}. \begin{lemma}\label{lemma:NUTS-excl} Equipped with $\Der{}$, $\Digg{}$, $\Seelyz$ and $\Seelyt{}$ defined as in $\REL$, $\Excl\_$ is a symmetric monoidal comonad which turns $\NUTS$ into a Seely model of $\LL$. \end{lemma} \subsection{Variable non-uniform totality spaces (VNUTS)} \label{sec:VNUTS-definition} Let $E$ be a set; we use $\Tot E$ for the set of all \emph{totality candidates} on $E$, that is, of all subsets $\cT$ of $\Part E$ such that $\cT=\Biorth\cT$.
In other words $\cT\in\Tot E$ means that $\cT=\Upcl\cT$ by Lemma~\ref{lemma:NUTS-biorth-uppper-closed}. Ordered by $\subseteq$, this set $\Tot E$ is a complete lattice. We now need to define a notion of strong functors $\cX:\NUTS^n\to\NUTS$ for defining a model in the sense of Definition~\ref{def:categorical-muLL-models}. One crucial feature of such a functor will be that $\Web{\Strfun\cX(\Vect X)}$ depends only on $\Web{\Vect X}$. \begin{definition} Let $n\in\Nat$; an $n$-ary VNUTS is a pair $\Vsnot X=(\Web{\Vsnot X},\Total{\Vsnot X})$ where $\Web{\Vsnot X}:\REL^n\to\REL$ is a variable set $\Web{\Vsnot X}=(\Strfun{\Web{\Vsnot X}},\Strnat{\Web{\Vsnot X}})$ (see Section~\ref{sec:strong-VS-Seely-model}) and $\Total{\Vsnot X}$ is an operation which with each $n$-tuple $\Vect X$ of objects of $\NUTS{}$ associates an element $\Total{\Vsnot X}(\Vect X)$ of $\Tot{\Strfun{\Web{\Vsnot X}}(\Web{\Vect X})}$ in such a way that \begin{Enumerate} \item for any $\Vect t\in\NUTS^n(\Vect X,\Vect Y)$, the element $\Strfun{\Web{\Vsnot X}}(\Vect t)$ of\\ $\REL(\Strfun{\Web{\Vsnot X}}({\Web{\Vect X}}),{\Web{\Vsnot X}}(\Web{\Vect Y}))$ belongs to $\NUTS(\Strfun{\Vsnot X}(\Vect X),\Strfun{\Vsnot X}(\Vect Y))$\\ (where $\Strfun{\Vsnot X}(\Vect X)$ denotes the NUTS $(\Strfun{\Web{\Vsnot X}}(\Web{\Vect X}),\Total{\Vsnot X}(\Vect X))$) \label{enum:vnuts-cond-tot} \item and for any $\Vect Y\in\Obj{\NUTS^n}$ and any $X\in\Obj\NUTS$ one has $\Strnat{\Web{\Vsnot X}}_{\Web X,\Web{\Vect Y}}\in\NUTS(\Tens{\Excl X}{\Strfun{\Vsnot X}(\Vect Y)},\Strfun{\Vsnot X}(\Tens{\Excl X}{\Vect Y}))$. In other words, for any $u\in\Total X$ and $w\in\Total{\Vsnot X}(\Vect Y)$, one has $\Matappa{\Strnat{\Web{\Vsnot X}}_{\Web X,\Web{\Vect Y}}}{\Tensp{\Promhc u}{w}}\in\Total{\Vsnot X}(\Tens{\Excl X}{\Vect Y})$\label{enum:vnuts-cond-strength}.
\end{Enumerate} \end{definition} \begin{lemma}\label{lemma:VNUTS-strong-functor} Any VNUTS $\Vsnot X:\NUTS^n\to\NUTS$ induces a strong functor $\cX:\NUTS^n\to\NUTS$ which satisfies \begin{Itemize} \item $\Web{\Strfun\cX(\Vect X)}=\Strfun{\Web{\Vsnot X}}(\Web{\Vect X})$, \item $\Total{\Strfun\cX(\Vect X)}=\Total{\Vsnot X}(\Vect X)$, \item $\Strfun\cX(\Vect t)=\Strfun{\Web{\Vsnot X}}(\Vect t)\in\NUTS(\Strfun{\Vsnot X}(\Vect X),\Strfun{\Vsnot X}(\Vect Y))$ if $\Vect t\in\NUTS^n(\Vect X,\Vect Y)$, \item and $\Strnat\cX_{X,\Vect Y}=\Strnat{\Web{\Vsnot X}}_{\Web X,\Web{\Vect Y}}$ \end{Itemize} and the correspondence $\Vsnot X\mapsto\cX$ is injective. \end{lemma} \begin{proof} It is clear that $\cX$ so defined is a strong functor. Let us check that $\Vsnot X$ can be retrieved from $\cX$. Given a set $E$, $(E,\Part E)$ is a NUTS that we denote as $\Nutsm(E)$. Notice that $\Nutsm$ can be extended into a functor $\REL\to\NUTS$ which acts as the identity on morphisms. There is also a forgetful functor $\Nutsf:\NUTS\to\REL$ which maps $X$ to $\Web X$ and acts as the identity on morphisms ($\Nutsm$ is right adjoint to $\Nutsf$). Let $\Vsnot X$ be a unary VNUTS and let $\cX:\NUTS\to\NUTS$ be the associated strong functor. Then we have $\Strfun{\Web{\Vsnot X}}=\Nutsf\Comp\Strfun\cX\Comp\Nutsm$ and $\Strnat{\Web{\Vsnot X}}_{E,F}=\Strnat\cX_{\Nutsm(E),\Nutsm(F)}$ for any sets $E$ and $F$. Last, given a NUTS $X$, we have that $\Total{\Vsnot X}(X)$ is just the totality component of the NUTS $\cX(X)$. This shows that the VNUTS which induces $\cX$ can be retrieved from $\cX$. \end{proof} For this reason we use $\Vsnot X$ to denote the functor $\cX$. \begin{remark} Another possibility would be to define a VNUTS in the first place as a strong functor $\cX:\NUTS^n\to\NUTS$ satisfying additional properties whose purpose would be to make the definition of the underlying $\Vsnot X$ possible. This option, suggested by the reviewers, will be explored further.
It is important to notice that these additional properties (that is, the existence of $\Vsnot X$) are crucial in the proof of Theorem~\ref{th:VNUTS-model}. In particular, it is essential that $\Web{\cX(X)}$ depends only on $\Web X$. \end{remark} Given $n\in\Nat$ let $\VNUTS n$ be the class of strong $n$-ary VNUTS. We identify $\VNUTS 0$ with the class of objects of the Seely category $\NUTS$. The following refers to Definition~\ref{def:categorical-muLL-models}. \begin{theorem}\label{th:VNUTS-model} $(\NUTS,(\VNUTS n)_{n\in\Nat})$ is a Seely model of $\MULL$. \end{theorem} \begin{proof}[Partial proof] We deal with Condition~(\ref{enum:seel-mull-5}).\\ Let first $\Vsnot X=(\Web{\Vsnot X},\Total{\Vsnot X})$ be a unary VNUTS. Let $E=\Funfp{\Strfun{\Web{\Vsnot X}}}$, which is the least set such that $\Strfun{\Web{\Vsnot X}}(E)=E$, that is $E=\Union_{n=0}^\infty\Strfun{\Web{\Vsnot X}}^n(\emptyset)$. Let $\Phi:\Tot E\to\Tot E$ be defined as follows: given $\cS\in\Tot E$, then $(E,\cS)$ is a NUTS, and we set $\Phi(\cS)=\Total{\Vsnot X}(E,\cS)\in\Tot{\Strfun{\Web{\Vsnot X}}(E)}=\Tot E$. This function $\Phi$ is monotone. Let indeed $\cS_1,\cS_2\in\Tot E$ with $\cS_1\subseteq\cS_2$. Then we have $\Id\in\NUTS((E,\cS_1),(E,\cS_2))$ and therefore, by Condition~(\ref{enum:vnuts-cond-tot}) satisfied by $\Vsnot X$, we have $\Id=\Strfun{\Web{\Vsnot X}}(\Id)\in\NUTS(\Strfun{\Vsnot X}(E,\cS_1),\Strfun{\Vsnot X}(E,\cS_2))=\NUTS((E,\Phi(\cS_1)),(E,\Phi(\cS_2)))$ which means that $\Phi(\cS_1)\subseteq\Phi(\cS_2)$. By the Knaster--Tarski theorem (remember that $\Tot E$ is a complete lattice), $\Phi$ has a greatest fixpoint $\cT$ that we can describe as follows. Let $(\cT_\alpha)_{\alpha\in\Ordinals}$, where $\Ordinals$ is the class of ordinals, be defined by: $\cT_0=\Part E$ (the largest possible notion of totality on $E$), $\cT_{\alpha+1}=\Phi(\cT_\alpha)$ and $\cT_\lambda=\Inter_{\alpha<\lambda}\cT_\alpha$ when $\lambda$ is a limit ordinal.
This sequence is decreasing (easy induction on ordinals using the monotonicity of $\Phi$) and there is an ordinal $\theta$ such that $\cT_{\theta+1}=\cT_\theta$ (by a cardinality argument; we can assume that $\theta$ is the least such ordinal). The greatest fixpoint of $\Phi$ is then $\cT_\theta$ as easily checked. By construction $((E,\cT_\theta),\Id)$ is an object of $\COALGFUN{\NUTS}{\Strfun{\Vsnot X}}$; we prove that it is the final object. So let $(Y,t)$ be another object of the same category. Since $(\Web Y,t)$ is an object of $\COALGFUN\REL{\Strfun{\Web{\Vsnot X}}}$ and since $(E,\Id)$ is the final object in that category, we know by Lemma~\ref{lemma:rel-fixpoint-final} that there is exactly one $e\in\REL(\Web Y,E)$ such that $\Strfun{\Web{\Vsnot X}}(e)\Compl t=e$. We prove that actually $e\in\NUTS(Y,(E,\cT_\theta))$, so let $v\in\Total Y$. We prove by induction on the ordinal $\alpha$ that $\Matappa ev\in\cT_\alpha$. For $\alpha=0$ it is obvious since $\cT_0=\Part E$. Assume that the property holds for $\alpha$ and let us prove it for $\alpha+1$. We have $\Matappa tv\in\Total{\Vsnot X}(Y)=\Total{\Strfun{\Vsnot X}(Y)}$ since $t\in\NUTS(Y,\Strfun{\Vsnot X}(Y))$. Since $\Strfun{\Vsnot X}(e)\in\NUTS(\Strfun{\Vsnot X}(Y),\Strfun{\Vsnot X}(E,\cT_\alpha))$ and $\Strfun{\Vsnot X}(E,\cT_\alpha)=(E,\cT_{\alpha+1})$ we have $\Matappa{(\Strfun{\Vsnot X}(e)\Compl t)}v\in\cT_{\alpha+1}$, that is $\Matappa ev\in\cT_{\alpha+1}$. Lastly, if $\lambda$ is a limit ordinal and if we assume $\forall\alpha<\lambda\ \Matappa ev\in\cT_\alpha$, we have $\Matappa ev\in\Inter_{\alpha<\lambda}\cT_\alpha=\cT_\lambda$. Therefore $\Matappa ev\in\cT_\theta$. We use $\Fungfp{\Strfun{\Vsnot X}}$ to denote this final coalgebra $(E,\cT_\theta)$ (its definition depends only on $\Strfun{\Vsnot X}$ and does not involve the strength $\Strnat{\Vsnot X}$).
So we have proven the first part of Condition~(\ref{enum:seel-mull-5}) in the definition of a Seely model of $\MULL$ (see Definition~\ref{def:categorical-muLL-models}). As to the second part, let $\Vsnot X$ be an $(n+1)$-ary VNUTS. We know by the general Lemma~\ref{lemma:strfun-gfp-general} how to build a strong functor $\Fungfp{\Vsnot X}:\NUTS^n\to\NUTS$ with suitable properties. To end the proof, it suffices to exhibit an $n$-ary VNUTS $\Vsnot Y=(\Web{\Vsnot Y},\Total{\Vsnot Y})$ whose associated strong functor coincides with $\Fungfp{\Vsnot X}$. The construction of $\Vsnot Y$ is essentially straightforward, using the constructions available in $\REL$. \end{proof} \begin{remark} For any closed formula $A$, the web of its interpretation $\Tsemn A$ in $\NUTS$ coincides with its interpretation $\Tsemr A$ in $\REL$. It is also easy to check that for any proof $\pi$ of $\Seq A$, one has $\Psemn\pi=\Psemr\pi$ (this can be formalized using the functor $\Nutsf:\NUTS\to\REL$ introduced in the proof of Lemma~\ref{lemma:VNUTS-strong-functor}, which acts trivially on morphisms). \end{remark} \begin{remark} The same method can be applied in many contexts. For instance, we can replace $\REL$ with the category of coherence spaces --~where least and greatest fixpoints are interpreted in the same way~-- and $\NUTS$ with the category of coherence spaces with totality, where the interpretations will be different. One of the reviewers suggested that this situation might be generalized using the concept of \emph{topological functors}; this option will be explored in further work. \end{remark} \subsection{Examples of data-types} \subsubsection{Integers}\label{sec:example-integers} The type of ``flat integers'' is defined by $\Tnat=\Lfpll\zeta{(\Plus\One\zeta)}$.
In $\REL$, $\Plus\One\zeta$ is interpreted as the unary variable set $\Tsemr{\Plus\One\zeta}_\zeta:\REL\to\REL$ which maps a set $E$ to $\Plus\One E=\Eset{(1,\Onelem)}\cup(\Eset 2\times E)$. Hence $\Tsemr\Tnat$ is the least set such that $\Tsemr\Tnat=\Eset{(1,\Onelem)}\cup(\Eset 2\times \Tsemr\Tnat)$, that is, the set of all tuples $\Snum n=(2,(2,(\cdots(1,\Onelem)\cdots)))$ where $n$ is the number of occurrences of $2$, so that we can consider the elements of $\Tsemr\Tnat$ as integers. We have $\Web{\Tsemn\Tnat}=\Tsemr\Tnat$ and we compute $\Total{\Tsemn\Tnat}$ dually with respect to the proof of Theorem~\ref{th:VNUTS-model}: it is the least fixed point of the operator $\Phi:\Tot{\Tsemr\Tnat}\to\Tot{\Tsemr\Tnat}$ such that, if $\cT\in\Tot{\Tsemr\Tnat}$ then $\Phi(\cT)=\Eset{u\subseteq\Tsemr\Tnat\mid\Snum 0\in u\text{ or }\Eset{\Snum n\in\Tsemr\Tnat\mid\Snum{n+1}\in u}\in\cT}$. Therefore $\Total{\Tsemn\Tnat}=\Eset{u\subseteq\Tsemr\Tnat\mid u\not=\emptyset}$. \begin{theorem} If $\pi$ is a proof of $\Seq\Tnat$ then $\Psemn\pi=\Psemr\pi$ is a non-empty subset of $\Tsemr\Tnat$. \end{theorem} Indeed we know that $\Psemr\pi=\Psemn\pi\in\Total{\Tsemn\Tnat}$. Using an additional notion of coherence (which can be fully compatible with $\REL$ as in the non-uniform coherence space models of~\cite{BucciarelliEhrhard99,Boudes11}) we can also prove that $\Psemr\pi$ has at most one element, and hence is a singleton $\Eset{\Snum n}$. This is a denotational version of normalization expressing that indeed $\pi$ ``has a value'' (and actually exactly one, which expresses a weak form of confluence). \subsubsection{Binary trees with integer leaves} This type can be defined as $\tau=\Lfpll\zeta{(\Tnat\IPlus\Tensp\zeta\zeta)}$. Then an element of $\Tsemr\tau=\Web{\Tsemn\tau}$ is an element of the set described by the following syntax: $\alpha,\beta,\cdots\Bnfeq\Leaf n\Bnfor\Bnode\alpha\beta$. A computation similar to the previous one shows that $\Total{\Tsemn\tau}=\Eset{u\subseteq\Tsemr\tau\mid u\not=\emptyset}$.
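The least-fixpoint computation of the totality of flat integers can be replayed concretely. The following Python sketch is purely illustrative: to keep the lattice of totality candidates finite, it truncates the web to $\Snum 0,\dots,\Snum 3$, represented as the plain integers $0,\dots,3$ (an assumption of the sketch, not part of the model). Iterating the operator $\Phi$ from the bottom of the lattice recovers exactly the non-empty subsets, as stated above.

```python
from itertools import chain, combinations

# Truncated web of flat integers: snum(n) is represented by the plain
# integer n, for n = 0..3 (assumption made to keep the lattice finite).
E = frozenset(range(4))

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

SUBSETS = powerset(E)

def phi(T):
    """One step of the totality operator for flat integers:
    u is a candidate if 0 is in u, or if the set of predecessors
    {n | n+1 in u} already belongs to the collection T."""
    out = set()
    for u in SUBSETS:
        pred = frozenset(n for n in E if n + 1 in u)
        if 0 in u or pred in T:
            out.add(u)
    return frozenset(out)

# Least fixpoint: iterate from the empty collection of candidates.
T = frozenset()
while phi(T) != T:
    T = phi(T)

nonempty = frozenset(u for u in SUBSETS if u)
print(T == nonempty)  # the least fixpoint is exactly the non-empty subsets
```

On this truncation the iteration stabilizes after four steps, one for each element of the web, mirroring the ordinal iteration of the proof above.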
\subsubsection{An empty type of streams of integers} After reading~\cite{BaeldeDoumaneSaurin16}, one could be tempted to define the type of streams of integers as $\sigma_0=\Gfpll\zeta{(\Tens\Tnat\zeta)}$. The variable set $\Tsemr{\Tens\Tnat\zeta}_\zeta:\REL\to\REL$ maps a set $E$ to $\Nat\times E$. The least fixed point of this operation on sets is $\emptyset$, hence $\Web{\Tsemn{\sigma_0}}=\emptyset$; notice that $\Tot\emptyset=\Eset{\emptyset,\Eset\emptyset}$. In that case, the operation $\Phi:\Tot{\emptyset}\to\Tot\emptyset$ maps $\cT$ to $\Eset{u\times v\mid v\in\cT\text{ and }u\in\Part\Nat\setminus\Eset\emptyset}$ and hence maps $\Eset\emptyset$ to itself. It follows that $\Total{\Tsemn{\sigma_0}}=\Eset\emptyset$, that is, $\Tsemn{\sigma_0}=\top$, the final object of $\NUTS$. What is the meaning of this trivial interpretation? It simply reflects that, though $\sigma_0$ has a lot of non-trivial proofs in $\MULL$, it is impossible to extract any finite information from these proofs within $\MULL$, and accordingly all these proofs are interpreted as $\emptyset$. \begin{theorem} In $\MULL$, there is no proof of $\Seq{\Orth{\sigma_0},\Tnat}$. \end{theorem} In other words, there is no proof of $\Seq{\Limpl{\sigma_0}\Tnat}$ in $\MULL$; typically, a function extracting the first element of a stream would be a proof of this type\dots{} if it existed! Here is the argument: if $\pi$ were a proof of $\Seq{\Orth{\sigma_0},\Tnat}$, we would have $\Psem\pi\in\NUTS(\Tsemn{\sigma_0},\Tsemn\Tnat)$ and hence $\Matappa{\Psem\pi}\emptyset\in\Total{\Tsemn\Tnat}$, which is not the case since $\Matappa{\Psem\pi}\emptyset=\emptyset$. If types like $\sigma_0$ are meaningful from a proof-search perspective, their relevance as data-types in a Curry-Howard approach to $\MULL$ is dubious.
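The degeneracy of $\sigma_0$ is already visible at the level of webs: the least fixpoint of $E\mapsto\Nat\times E$ is reached at $\emptyset$, whereas the functor of flat integers produces elements at the very first unfolding. A minimal Python check of these two unfoldings (with $\Nat$ truncated to a finite range, an assumption of the sketch):

```python
def step_stream(E, n_max=100):
    """One unfolding of E |-> N x E (N truncated to 0..n_max-1)."""
    return {(n, x) for n in range(n_max) for x in E}

def step_nat(E):
    """One unfolding of E |-> 1 + E, the functor of flat integers."""
    return {(1, '*')} | {(2, x) for x in E}

# For sigma_0, the least-fixpoint iteration from the empty set stays empty;
# for flat integers, it starts producing elements immediately.
print(step_stream(set()) == set())  # True: the web of sigma_0 is empty
print(step_nat(set()))              # {(1, '*')}: the web of Tnat is not
```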
This type looks like the previous one, but the type $\One$ leaves space for a \emph{partial} empty stream. Warning: it is \emph{not} a type of finite or infinite streams; the $\IWith$ means that this empty stream will not be a total element: it will have to be complemented by some total element from the right argument of the $\IWith$. More precisely, $\Tsemr{\One\IWith\Tensp\Tnat\zeta}_\zeta:\REL\to\REL$ is the variable set which maps a set $E$ to $\Eset{(1,\Onelem)}\cup(\Eset 2\times\Nat\times E)$ so that, up to renaming, $\Web{\Tsemn\sigma}=\Fseq\Nat$ (all \emph{finite sequences} of integers). In this case, the operator $\Phi:\Tot{\Fseq\Nat}\to\Tot{\Fseq\Nat}$ maps $\cT$ to \[ \Eset{v\subseteq\Fseq\Nat\mid\Emptyseq\in v\text{ and }\exists n\in\Nat,u\in\cT\ \Eset n\times u\subseteq v} \] where we use $\Emptyseq$ for the empty sequence. So for instance {\footnotesize% \begin{align*} \Phi^0(\Part{\Fseq\Nat})&=\Part{\Fseq\Nat}\quad \Phi^1(\Part{\Fseq\Nat})=\Eset{u\in\Part{\Fseq\Nat}\mid\Emptyseq\in u}\\ \Phi^3(\Part{\Fseq\Nat})&=\Eset{u\in\Part{\Fseq\Nat}\mid \exists n_1,n_2\ \Emptyseq,(n_1),(n_1,n_2)\in u}\,. \end{align*}% } The greatest fixed point is reached in $\omega$ steps: \begin{multline*} \Total{\Tsemn{\sigma}} =\Inter_{n<\omega}\Phi^n(\Part{\Fseq\Nat})\\ =\Eset{u\subseteq\Fseq\Nat\mid\exists f\in\Nat^\omega\,\forall k<\omega\ (f(1),\dots,f(k))\in u}\,. \end{multline*} So a total subset of $\Web{\Tsemn\sigma}$ must contain all the finite prefixes of (at least) one infinite stream of integers. For this type of streams $\sigma$, it is easy to build a proof of $\Seq{\Orth\sigma,\Tnat}$ extracting the first element of a stream, interpreted as $\Eset{((n),n)\mid n\in\Nat}$. \section{Conclusion and further work}\label{sec:conclusion} We will next study the semantics of infinite proofs of $\MULL$ (whose definition extends that of~\cite{BaeldeDoumaneSaurin16} for $\MUMALL$).
A crucial step is to prove that these infinite proofs can be interpreted as total sets in $\NUTS$; this will be presented in a further paper. This interpretation of proofs is based on the interpretation of their finite approximations in $\REL$ (remember that the interpretations of a $\MULL$ proof in $\NUTS$ and in $\REL$ are \emph{exactly the same set}). Our models will also serve as guidelines for the design of a functional language based on $\MULL$, generalizing Gödel's System T in the spirit of~\cite{Loader97}, though, as explained in the Introduction, Loader's syntax is not fully compatible with $\LL$ as it is based on \emph{cocartesian} cartesian closed categories. Our system will primarily implement Park's rule, but we will also consider other options based on polymorphism in the spirit of~\cite{Mendler91,CamposFiore20} or~\cite{Matthes98}, or general recursion with guardedness restrictions as in~\cite{Coquand93,Paulin-Mohring93,Gimenez98}. Its syntax will be based on the idea of representing data-types as \emph{positive} formulas of $\MULL$ interpreted in $\Em\cL$ and therefore equipped with morphisms implementing weakening, contraction and promotion: as noticed in~\cite{Baelde12}, $\Lfpll\zeta\_$ is a positive operation whereas $\Gfpll\zeta\_$ is negative. In $\Em\cL$, the $\IPlus$ of $\LL$ is a coproduct and the $\ITens$ is a cartesian product as expected (and $\ITens$ distributes over $\IPlus$). The targeted calculus will feature a notion of values (positive terms) accounting for the morphisms of $\Em\cL$, substitution in terms being allowed only for values because only they can safely be discarded and duplicated. Thanks to this choice of design, weakening and contraction will remain implicit operations as in the usual $\lambda$-calculus. Our calculus will have explicit promotion and dereliction operations, making it possible to implement both CBN and CBV in the same setting, just as in Levy's Call-by-push-value~\cite{LevyP06,Ehrhard16a}.
We thank the reviewers of this paper for their careful reading and very useful suggestions. This work was partly funded by the ANR project PPS, ANR-19-CE48-0014. This paper is a preprint version of an article published at LICS'21.
n} \newcommand\Seelyinvz{\mathsf n^0} \newcommand\Seelyinvt{\mathsf n^2} \newcommand\Monoidal{\mu} \newcommand\Monoidalz{\mu^0} \newcommand\Monoidalt{\mu^2} \newcommand\Compl{\,} \newcommand\Curlin{\mathsf{cur}} \newcommand\Cur{\mathsf{Cur}} \newcommand\Op[1]{{#1}^{\mathsf{op}}} \newcommand\Kl[1]{{#1}_\oc} \newcommand\Em[1]{{#1}^\oc} \newcommand\NUM[1]{\underline{#1}} \newcommand\SUCC{\mathsf{succ}} \newcommand\PRED{\mathsf{pred}} \newcommand\IF{\mathsf{if}} \newcommand\IFV[3]{\IF(#1,(#2)\,#3)} \newcommand\Ifv[4]{\IF(#1,#2,#3\cdot #4)} \newcommand\FIX{\mathsf{fix}} \newcommand\FIXV[2]{\mathsf{fix}\,#1\cdot#2} \newcommand\FIXVT[3]{\mathsf{fix}\,#1^{#2}\,#3} \newcommand\APP[2]{(#1)#2} \newcommand\ABST[3]{\lambda #1^{#2}\, #3} \newcommand\MUT[3]{\mu #1^{#2}\, #3} \newcommand\MU[2]{\mu #1\, #2} \newcommand\NAMED[2]{[#1]#2} \newcommand\Abstpref[2]{\lambda #1^{#2}\,} \newcommand\NAT{\iota} \newcommand\IMPL[2]{#1\Rightarrow #2} \newcommand\SEQ[4]{#1\vdash #2:#3\mid #4} \newcommand\SEQI[3]{#1\vdash #2:#3} \newcommand\SEQN[3]{#1\vdash #2\mid #3} \newcommand\SEQP[3]{#1\vdash #2\mid #3} \newcommand\SEQS[4]{#1\mid #2:#3\vdash #4} \newcommand\HOLE{[\ ]} \newcommand\Redpcf{\mathord{\to}} \newcommand\Redpcfi{\mathord{\to_{\mathsf{PCF}}}} \newcommand\Redclean{\relstack\to{\mathsf{cl}}} \newcommand\Freenames{\mathsf{FN}} \newcommand\STARG[1]{\mathsf{arg}(#1)} \newcommand\STIF[2]{\mathsf{ifargs(#1,#2)}} \newcommand\STSUCC{\mathsf{succ}} \newcommand\STPRED{\mathsf{pred}} \newcommand\EMPTY{\epsilon} \newcommand\CONS[2]{#1\cdot #2} \newcommand\PUSH[2]{#1\cdot #2} \newcommand\STATE[3]{#1\star #2\mid #3} \newcommand\TAPP[2]{#1\star#2} \newcommand\PROC[2]{#1\star#2} \newcommand\Div[1]{#1\uparrow} \newcommand\Conv[2]{#1\downarrow#2} \newcommand\Convproba[1]{\mathrel{\downarrow^{#1}}} \newcommand\Pcftrad[1]{{#1}^\bullet} \newcommand\Transcl[1]{{#1}^*} \newcommand\Conn[1]{\gamma_{#1}} \newcommand\Sing[1]{\mathsf C(#1)} \newcommand\RELW{\operatorname{\mathbf{RelW}}} 
\newcommand\Natrelw{\mathsf{N}} \newcommand\Least{\bot} \newcommand\Posnat{\overline{\mathsf N}} \newcommand\Botnat{{\mathord{\perp}}} \newcommand\Coalg[1]{h_{#1}} \newcommand\Ifrel{\mathsf{if}} \newcommand\Eval[2]{\langle#1,#2\rangle} \newcommand\Let[3]{\mathsf{let}(#1,#2,#3)} \newcommand\Add{\mathsf{add}} \newcommand\Exp{\mathsf{exp}_2} \newcommand\Cmp{\mathsf{cmp}} \newcommand\Unift{\mathsf{unif}_2} \newcommand\Unif{\mathsf{unif}} \newcommand\Closed[1]{\Lambda^{#1}_0} \newcommand\Open[2]{\Lambda^{#2}_{#1}} \newcommand\Redmat[1]{\mathsf{Red}(#1)} \newcommand\Redmats{\mathsf{Red}} \newcommand\Redmato[2]{\mathsf{Red}(#1,#2)} \newcommand\Mexpset[2]{\mathsf L(#1,#2)} \newcommand\Expmonisoz{\Seely^0} \newcommand\Expmonisob[2]{\Seely^2_{#1,#2}} \newcommand\Expmonisobn{\Seely^2} \newcommand\Injms[2]{#1\cdot#2} \newcommand\Natobj{\mathsf{N}} \newcommand\Natalg{h_\Natobj} \newcommand\Snum[1]{\overline{#1}} \newcommand\Sif{\overline{\mathsf{if}}} \newcommand\Ssuc{\overline{\mathsf{suc}}} \newcommand\Sfix{\mathsf{Y}} \newcommand\Vect[1]{\vec{#1}} \newcommand\Rts[1]{\cR^{#1}} \newcommand\Probw[1]{\mathsf p(#1)} \newcommand\Spath[2]{\mathsf{R}(#1,#2)} \newcommand\Obseq{\sim} \newcommand\Obsleq{\lesssim} \newcommand\Ptest[1]{{#1}^+} \newcommand\Ntest[1]{{#1}^-} \newcommand\Plen[1]{|#1|^+} \newcommand\Nlen[1]{|#1|^-} \newcommand\Shift{\mathsf{shift}} \newcommand\Shvar[2]{\App{\Shift_{#1}}{#2}} \newcommand\Probe{\mathsf{prob}} \newcommand\Pprod{\mathsf{prod}} \newcommand\Pchoose{\mathsf{choose}} \newcommand\Argsep{\,} \newcommand\Simplex[1]{\Delta_{#1}} \newcommand\Embi{\eta^+} \newcommand\Embp{\eta^-} \newcommand\Shvec[2]{{#1}\left\{{#2}\right\}} \newcommand\ShvecBig[2]{{#1}\Big\{{#2}\Big\}} \newcommand\Msetu[1]{\mathsf o(#1)} \newcommand\Msetb[2]{\mathsf o(#1,#2)} \newcommand\Mfinex[2]{\cM^-(#1,#2)} \newcommand\Mfinr[2]{\Mfin{#1,#2}} \newcommand\Canb[1]{\mathsf e_{#1}} \newcommand\Prem[2]{\pi(#1,#2)} \newcommand\Nrem[2]{\mu(#1,#2)} \newcommand\Restrms[3]{\mathsf W(#1,#2,#3)} 
\newcommand\Shleft[2]{\mathsf S(#1,#2)} \newcommand\Listarg[3]{\,#1_{#2}\cdots#1_{#3}} \newcommand\Rseg[2]{[#1,#2]} \newcommand\Iftrans[2]{#1^\bullet_{#2}} \newcommand\Bnfeq{\mathrel{\mathord:\mathord=}} \newcommand\Bnfor{\,\,\mathord|\,\,} \newcommand\PPCF{\mathsf{pPCF}} \newcommand\Hempty{\ \ } \newcommand\Hole[2]{[\Hempty]^{#1\vdash #2}} \newcommand\Thole[3]{#1^{#2\vdash #3}} \newcommand\Thsubst[2]{#1[#2]} \newcommand\Tseqh[5]{\Tseq{#1}{\Thole{#2}{#3}{#4}}{#5}} \newcommand\Locpcs[2]{{#1}-{#2}} \newcommand\Deriv[1]{{#1}'} \newcommand\Klfun[1]{\Fun{#1}} \newcommand\Distsp[3]{\mathsf d_{#1}(#2,#3)} \newcommand\Distspsymb[1]{\mathsf d_{#1}} \newcommand\Distobs[2]{\mathsf d_{\mathsf{obs}}(#1,#2)} \newcommand\Fibloc[1]{\mathsf T#1} \newcommand\Subcoh{\subseteq} \newcommand\Coh[3]{#2\coh_{#1}#3} \newcommand\Incoh[3]{#2\incoh_{#1}#3} \newcommand\Scoh[3]{#2\scoh_{#1}#3} \newcommand\Sincoh[3]{#2\sincoh_{#1}#3} \newcommand\COH{\mathbf{Coh}} \newcommand\COHT{\mathbf{CohT}} \newcommand\COHPO{\COH_\subseteq} \newcommand\Cohlub{\cup} \newcommand\Tcohca[1]{\underline{#1}} \newcommand\Tcoht[1]{\mathsf T{#1}} \newcommand\Tcohtp[1]{\cT(#1)} \newcommand\Tcohlubm{\cup^-} \newcommand\Tcohlubp{\cup^+} \newcommand\Subcohtm{\Subcoh^-} \newcommand\Subcohtp{\Subcoh^+} \newcommand\TCOHPOP{\COHPO^+} \newcommand\TCOHPOM{\COHPO^-} \newcommand\Tot[1]{\mathop{\mathsf{Tot}}(#1)} \newcommand\Vcsnot[1]{\mathbb{#1}} \newcommand\Vcstnot[1]{\mathbb{#1}} \newcommand\Vsnot[1]{\mathbb{#1}} \newcommand\Vstnot[1]{\mathbb{#1}} \newcommand\Strfun[1]{\overline{#1}} \newcommand\Strnat[1]{\widehat{#1}} \newcommand\COHLIN{\mathbf{Coh}} \newcommand\Cohemb[2]{\eta_{#1,#2}^+} \newcommand\Cohret[2]{\eta_{#1,#2}^-} \newcommand\Indcoh[1]{{#1}^+} \newcommand\Procoh[1]{{#1}^-} \newcommand\Ddvcs[1]{\Orth{#1}} \newcommand\Ddvcst[1]{\Orthp{#1}} \newcommand\Ddvcstnp[1]{\Orth{#1}} \newcommand\Cohempty{\emptyset} \newcommand\Vcsfp[1]{\sigma\,#1} \newcommand\ALGFUN[2]{\mathbf{Alg}_{#1}(#2)} 
\newcommand\COALGFUN[2]{\mathbf{Coalg}_{#1}(#2)} \newcommand\COHLINT{\mathbf{CohT}} \newcommand\Vcstca[1]{\underline{#1}} \newcommand\Vcstot[1]{\mathsf T{#1}} \newcommand\Vcstotp[1]{\mathsf T{(#1)}} \newcommand\Vcstmuu[1]{\mu\,#1} \newcommand\Vcstmu[1]{\mu\,#1} \newcommand\Vcstnu[1]{\nu\,#1} \newcommand\Lfpll[2]{\mu#1\,#2} \newcommand\Gfpll[2]{\nu#1\,#2} \newcommand\LLvars{\mathcal V} \newcommand\LLvneg[1]{\overline{#1}} \newcommand\Oneelem{*} \newcommand\Tvarneg[1]{\overline{#1}} \newcommand\Tvcst[2]{[#1]_{#2}} \newcommand\Naxiom{(\textsf{ax})} \newcommand\Ncut{(\textsf{cut})} \newcommand\None{($\One$)} \newcommand\Ntens{($\ITens$)} \newcommand\Nbot{(${\mathord{\perp}}$)} \newcommand\Npar{($\IPar$)} \newcommand\Ntop{($\top$)} \newcommand\Nplusl{($\IPlus_1$)} \newcommand\Nplusr{($\IPlus_2$)} \newcommand\Nwith{($\IWith$)} \newcommand\Nweak{(\textsf{w})} \newcommand\Ncontr{(\textsf{c})} \newcommand\Nder{(\textsf{d})} \newcommand\Nprom{(\textsf{p})} \newcommand\Nlfp{($\mu$-\textsf{fold})} \newcommand\Ngfp{($\nu$-\textsf{rec})} \newcommand\Ngfpfold{($\nu$-\textsf{fold})} \newcommand\Ngfpbis{($\nu$-\textsf{rec}${}'$)} \newcommand\Nmixz{(\textsf{mix}${}_0$)} \newcommand\Nzero{($0$)} \newcommand\Nsucc{(\textsf{succ})} \newcommand\Nintit{(\textsf{it}_\Tnat)} \newcommand\Ngprom{(\textsf{gp})} \newcommand\Ngweak{(\textsf{gw})} \newcommand\Ngcontr{(\textsf{gc})} \newcommand\Cut[2]{\langle #1\mid #2\rangle} \newcommand\Proofs[1]{\mathsf{PR}(#1)} \newcommand\Redcand[1]{\mathsf{RC}(#1)} \newcommand\Redint[2]{|#1|(#2)} \newcommand\Redintc[1]{|#1|} \newcommand\Redparam[3]{#1:#2/#3} \newcommand\Prone{()} \newcommand\Prtens[2]{\mathord\otimes(#1,#2)} \newcommand\Prin[2]{\mathord\oplus_{#1}(#2)} \newcommand\Prprom[1]{\Prom{#1}} \newcommand\Prfoldmu[1]{\mathsf{fold}_\mu(#1)} \newcommand\Prrec[1]{\mathsf{rec(#1)}} \newcommand\Redcbot[1]{{\mathord{\perp}}(#1)} \newcommand\Ordinals{\mathbb O} \newcommand\Reduces{\rightarrow} \newcommand\MULL{\mu\mathsf{LL}} 
\newcommand\MUMALL{\mu\mathsf{MALL}} \newcommand\MUALL{\mu\mathsf{ALL}} \newcommand\LL{\mathsf{LL}} \newcommand\Conat{\mathsf{read}_\Tnat} \newcommand\Sone{1} \newcommand\Sbot{\mathord\bot} \newcommand*{\isoarrow}[1]{\rightarrow[#1,"\rotatebox{90}{\(\sim\)}"]} \tikzset{cong/.style={draw=none,edge node={node [sloped, allow upside down, auto=false]{$\cong$}}}, Isom/.style={draw=none,every to/.append style={edge node={node [sloped, allow upside down, auto=false]{$\cong$}}}}} \newcommand\isom{\mathrel{\stackon[-0.1ex]{\makebox*{\scalebox{1.08}{\AC}}{=\hfill\llap{=}}}{{\AC}}}} \newcommand\nvisom{\rotatebox[origin=cc] {-90}{$ \isom $}} \newcommand\visom{\rotatebox[origin=cc] {90} {$ \isom $}} \newcommand\Coalgca[1]{\underline{#1}} \newcommand\Coalgstr[1]{h_{#1}} \newcommand\LCAT{\cL} \newcommand\LCATLIN{\cL} \newcommand\LCATPO{\LCAT_{\mathord\subseteq}} \newcommand\Obj[1]{\mathsf{Obj}(#1)} \newcommand\Promp[1]{\Prom{(#1)}} \newcommand\Termm[1]{\tau_{#1}} \newcommand\Fcomod[1]{\mathsf{fc}_{#1}} \newcommand\Klp[2]{{#1}_{#2}} \newcommand\Mtens[3]{#2\otimes_{#1}#3} \newcommand\Mevlin[1]{\Evlin_{#1}} \newcommand\Kcomod[2]{#1[#2]} \newcommand\Morth[2]{{#2}^{{\mathord{\perp}}[#1]}} \newcommand\Mbiorth[2]{{#2}^{{\mathord{\perp}}[#1]{\mathord{\perp}}[#1]}} \newcommand\Mexcl[2]{\oc_{#1}{#2}} \newcommand\Mder[1]{\Der{}[#1]} \newcommand\Mdigg[1]{\Digg{}[#1]} \newcommand\Mfun[2]{{#1}[#2]} \newcommand\Strid{\cI} \newcommand\Strcst[1]{\cK^{#1}} \newcommand\Fungfp[1]{\nu#1} \newcommand\Funlfp[1]{\mu#1} \newcommand\Funfp[1]{\sigma#1} \newcommand\Totop[1]{\Theta(#1)} \newcommand\Lnat{\iota_l} \newcommand\Gtnat{\mathsf{nat}} \newcommand\Rec[3]{\mathsf{rec}(#1,#2,#3)} \newcommand\Gredwh{\beta_{wh}} \newcommand\Appsep{\,} \newcommand\GODELT{\mathsf T} \newcommand\Tgtrad[1]{#1^*} \newcommand\Tgtradc[1]{#1^*} \newcommand\Tgtradp[1]{#1^+} \newcommand\Pprom[3]{\chi(#1,\Excl{#2}/#3)} \newcommand\Ppromcl[1]{\chi(#1)} \newcommand\Greduc[1]{\mathsf{Red}_{#1}} \newcommand\Var[1]{\mathsf{#1}} 
\newcommand\Exclmuext[1]{\mathsf{ext^{\oc'}}(#1)} \newcommand\Seemuz{{\mathsf m^0}'} \newcommand\Seemu{{\mathsf m^2}'} \newcommand\Seeinvmuz{{\mathsf n^0}'} \newcommand\Seeinvmu{{\mathsf n^2}'} \newcommand\Wweak{\mathsf w} \newcommand\Wder[1]{\mathsf d(#1)} \newcommand\Wcontr[2]{\mathsf c(#1,#2)} \newcommand\Winl[1]{1\cdot#1} \newcommand\Winr[1]{2\cdot#1} \newcommand\Wshadow[1]{\mathsf w\cdot#1} \newcommand\Wprl[1]{\mathsf p_1(#1)} \newcommand\Wprr[1]{\mathsf p_2(#1)} \newcommand\Subtree{\mathsf{st}} \newcommand\Emptypath{\langle\rangle} \newcommand\Conspath[2]{#1\,#2} \newcommand\Pathes[1]{\mathsf{paths}(#1)} \newcommand\Tcohforg{\cU} \newcommand\Tcohempty{\cZ} \newcommand\ST{\mathsf{T}} \newcommand\SF{\mathsf{F}} \newcommand\Eset[1]{\left\{#1\right\}} \newcommand\Excll[1]{\oc\oc{#1}} \newcommand\RELI{\REL^{\subseteq}} \newcommand\Relii{\eta^+} \newcommand\Relip{\eta^-} \newcommand\Botlin{\mathord\perp} \newcommand\Vssupp[3]{\operatorname{\mathsf{supp}}_{#1,#2}(#3)} \newcommand\Vssuppv[2]{\Vect{\operatorname{\mathsf{supp}}}_{#1}(#2)} \newcommand\Vsproj[1]{\operatorname{\mathsf{Pr}}_{#1}} \newcommand\Indexv[1]{\Index^{\langle #1\rangle}} \newcommand\Ftimes{\mathord\times} \newcommand\Fplus{\mathord+} \newcommand\Fmfin{\cM} \newcommand\Upcl[1]{\mathord\uparrow#1} \newcommand\Total[1]{\mathcal T(#1)} \newcommand\Ltsnuts[1]{#1} \newcommand\NUTS{\mathbf{Nuts}} \newcommand\Cotuple[1]{[{#1}]} \newcommand\Promhc[1]{#1^{(\mathord\oc)}} \newcommand\Prommhc[1]{#1^{(\mathord\oc\mathord\oc)}} \newcommand\TreeZ{\langle\rangle} \newcommand\TreeO{*} \newcommand\TreeB[2]{\langle#1,#2\rangle} \newcommand\Nleaves{\mathsf L} \newcommand\Trees[1]{\cT_{#1}} \newcommand\TreeExt[2]{{\mathord{#1}}^{#2}} \newcommand\TreeExtp[3]{({\mathord{#1}}^{#2}(#3))} \newcommand\Treeiso[3]{{{\mathord{#1}}^{#2}_{#3}}} \newcommand\Symfunc[1]{\widehat{#1}} \newcommand\Symiso[3]{\widehat{\mathord{#1}}^{#2,#3}} \newcommand\VNUTS[1]{\mathbf{Vnuts}_{#1}} \newcommand\VREL[1]{\REL_{#1}} 
\newcommand\Nutsm{\mathsf p} \newcommand\Nutsf{\mathsf u} \newcommand\Leaf[1]{\langle#1\rangle} \newcommand\Bnode[2]{\langle#1,#2\rangle} \newcommand\Fseq[1]{{#1}^{<\omega}} \newcommand\Klfree[1]{\mathsf F_{#1}}
1,314,259,996,127
arxiv
\section{Introduction} Non-negatively curved Riemannian manifolds have been of interest since the beginning of global Riemannian geometry. Most examples are obtained via product and quotient constructions, starting from compact Lie groups. These include all homogeneous spaces and biquotients. Using a gluing method, J.~Cheeger constructed non-negatively curved metrics on the connected sum of any two rank one symmetric spaces, see \cite{Cheeger}. A breakthrough came with K.~Grove and W.~Ziller's generalization of this gluing method to the class of cohomogeneity one manifolds. A manifold $M$ is called a \emph{cohomogeneity one manifold} if there exists a compact Lie group $G$ acting on $M$ by isometries such that the cohomogeneity of the action, defined as $\mathrm{cohom}(M,G) = \dim(M/G)$, is equal to $1$. Since the orbit space is one dimensional, it is either a circle or a closed interval $I$. In the former case, $M$ always carries a $G$ invariant metric with non-negative sectional curvature. In the latter case, there are precisely two singular orbits $B_\pm$ with isotropy subgroups $K^\pm$ corresponding to the endpoints of $I$, a minimal geodesic between the singular orbits, and principal orbits corresponding to the interior points with isotropy subgroup $H$. By the slice theorem, $K^\pm/H$ are spheres $\mathbb{S}^{l_\pm-1}$ ($l_\pm \geq 1$), and $M$ can be reconstructed by gluing two disk bundles along a principal orbit as follows \begin{equation}\label{gluediskbundles} M = G \times_{K^-}\mathbb{D}^{l_-} \cup_{G/H}G \times_{K^+}\mathbb{D}^{l_+}, \end{equation} where $\mathbb{D}^{l_\pm}$ is the normal disk to $B_\pm$. Therefore we can identify $M$ with the group diagram $H\subset \set{K^-,K^+} \subset G$ via the gluing construction (\ref{gluediskbundles}). K.~Grove and W.~Ziller showed that any cohomogeneity one manifold with codimension two singular orbits admits a non-negatively curved metric \cite{GZMilnor}.
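As a simple illustration of the decomposition (\ref{gluediskbundles}) (a standard example, included here only for orientation), consider the round sphere $\mathbb{S}^n \subset \mathbb R^n \oplus \mathbb R$ with $SO(n)$ acting on the first factor. This is a cohomogeneity one manifold with group diagram $SO(n-1) \subset \set{SO(n), SO(n)} \subset SO(n)$: the two singular orbits are the poles $(0,\pm 1)$, the principal orbits are the spheres $\mathbb{S}^{n-1}$ of constant latitude, $l_\pm = n$, and $K^\pm/H = SO(n)/SO(n-1) = \mathbb{S}^{n-1}$. Here (\ref{gluediskbundles}) becomes
\begin{equation*}
\mathbb{S}^n = SO(n)\times_{SO(n)}\mathbb{D}^{n} \cup_{SO(n)/SO(n-1)} SO(n)\times_{SO(n)}\mathbb{D}^{n} = \mathbb{D}^{n}\cup_{\mathbb{S}^{n-1}}\mathbb{D}^{n},
\end{equation*}
the decomposition of the sphere into two hemispheres.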
In the same paper, it was conjectured that any cohomogeneity one manifold admits a non-negatively curved metric. This turns out to be false. The first examples of an obstruction were discovered by K.~Grove, L.~Verdiani, B.~Wilking and W.~Ziller in \cite{GVWZ}. The most interesting examples are the higher dimensional Kervaire spheres (of dimension $9$ and up), which are the only exotic spheres that can carry a cohomogeneity one action, see \cite{St}. In light of the construction of examples with non-negative sectional curvature and the examples of obstructions to non-negatively curved metrics, it is important to answer the question \emph{``How large is the class of cohomogeneity one manifolds that admit a non-negatively curved metric?''}, which was raised by W.~Ziller in \cite{Zsurvey1}. In this paper, we generalize the examples in \cite{GVWZ} to a larger family: \begin{thmmain} \label{thmmain} Let $K'/H' = \mathbb{S}^k$ with $k\geq 2$ and let $\rho : K' \longrightarrow SO(m)$ be a faithful irreducible representation, which is not the representation of $K'$ on $\mathbb R^{k+1}$, and such that $\rho(H')\subset SO(m-1)$. For any integer $n\geq m+2$, set $G = SO(n)$ and \begin{eqnarray} K^- & = & \rho(K')\times SO(n-m) \subset SO(m)\times SO(n-m) \subset SO(n) \nonumber\\ K^+ & = & \rho(H')\times SO(n-m+1)\subset SO(m-1)\times SO(n-m+1) \subset SO(n) \nonumber \\ H & = & \rho(H')\times SO(n-m) \subset SO(n),\nonumber \end{eqnarray} then the cohomogeneity one manifold $M$ defined by the groups $H\subset \set{K^-,K^+}\subset G$ does not admit a $G$ invariant metric with non-negative sectional curvature. \end{thmmain} In \cite{GVWZ}, the theorem was proved under the additional assumptions that the slice representation of $K'$ is not contained in the symmetric square $\mathrm{Sym}^2\rho$ and that $\rho(K')$ does not act transitively on the sphere $\mathbb{S}^{m-1} = SO(m)/SO(m-1)$.
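To keep the hypotheses concrete, the following instance may be kept in mind (it is included only as an illustration). Take $K' = SO(3)$ and $H' = SO(2)$, so that $K'/H' = \mathbb{S}^2$ and $k=2$, and let $\rho$ be the irreducible $5$-dimensional representation of $SO(3)$ on the space of harmonic quadratic polynomials in three variables, so $m=5$. The subgroup $H' = SO(2)$, acting on the variables $x_1, x_2$, fixes the harmonic polynomial $2x_3^2 - x_1^2 - x_2^2$, hence $\rho(H')\subset SO(4)$; moreover $\rho$ is faithful and is not the standard representation of $SO(3)$ on $\mathbb R^3$. Thus for any $n \geq 7$ the theorem produces a cohomogeneity one manifold with $G = SO(n)$ which admits no $G$ invariant metric of non-negative sectional curvature.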
This theorem is optimal in the sense that if $\rho$ is the representation of $K'$ on $\mathbb R^{k+1}$, then the manifold $M$ does admit an invariant non-negatively curved metric, since it is diffeomorphic to the homogeneous space $SO(n+1)/(\rho(K')\times SO(n-m+1))$ endowed with the cohomogeneity one action of $SO(n) \subset SO(n+1)$. On the other hand, if $n=m+1$ one obtains an interesting cohomogeneity one manifold for which it is not yet known whether it carries a non-negatively curved invariant metric or not. The theorem can be extended to the case where $\rho$ is not necessarily irreducible, see Theorem \ref{thmorth}. A representation $\rho : K'\longrightarrow SO(m)$ with $\rho(H') \subset SO(m-1)$ is called a \emph{class one} representation, see Definition \ref{defclassone}. Many class one representations are not faithful, see Proposition \ref{propclassonetypekernel} and Table \ref{tabletypeclassone}, so they are excluded by the faithfulness requirement in \cite{GVWZ}. However, Theorem \ref{thmorth} allows a non-faithful class one representation as a subrepresentation of $\rho$, which gives us many more examples, see Section \ref{secgroupcompweylgroup}. Similar results also hold if $\rho$ is a complex or quaternionic representation, i.e., $G=U(n)$ or $Sp(n)$; these are stated in Theorem \ref{thmexamplescomplex} and Theorem \ref{thmexamplesquater}. In the above theorem, Theorem \ref{thmorth}, Theorem \ref{thmexamplescomplex} and Theorem \ref{thmexamplesquater}, the group $K'$ does not need to be connected, but the corresponding groups $K^\pm$ on the universal cover $\widetilde{M}$ will be connected. Therefore, in the rest of the paper, we may assume that all groups are connected. \smallskip The paper is organized as follows. In Section \ref{secprelim}, we briefly recall basic properties of cohomogeneity one manifolds, the generalized Weyl group and metric properties. This section also includes an introduction to class one representations.
A more detailed discussion of such representations is given in Appendix \ref{appclassone}. In Section \ref{secgroupcompweylgroup} we prove some basic properties of the new examples which will be used in the proof of the orthogonal case. In Section \ref{secKillingvf}, we study the consequences of the non-negativity assumption on the metrics. Wilking's rigidity theorem for non-negatively curved manifolds \cite{Wilkingduality} plays an important role. Section \ref{secprooforth} is devoted to the proof in the orthogonal case. Using the properties of the metrics developed in the previous sections, we derive a contradiction by examining the sectional curvatures of two distinct classes of $2$-planes. In the last section, we sketch an outline of the proofs in the case where $G=U(n)$ or $Sp(n)$. \medskip \emph{Acknowledgements:} This paper is part of the author's Ph.D. thesis at the University of Pennsylvania. The author wants to thank his advisor, Prof. Wolfgang Ziller, for his generous support and great patience, and Prof. Kristopher Tapp for valuable discussions on Wilking's rigidity results. \section{Preliminaries}\label{secprelim} In this section, we recall some basic and well-known facts about cohomogeneity one manifolds. For more details, we refer to, for example, \cite{Alekseevsy} and \cite{GWZ}. As mentioned already, there are precisely two non-principal orbits $B_\pm$ in a simply connected cohomogeneity one manifold. Suppose $M$ is endowed with an invariant metric $g$ and the distance between the two non-principal orbits is $L$. Let $c(t)$, $t \in \mathbb R$, be a geodesic minimizing the distance with $c(0) = p_- \in B_-$ and $c(L) = p_+ \in B_+$. The isotropy subgroups at $p_\pm$ are denoted by $K^\pm$ and the principal isotropy subgroup at any point $c(t)$, $t\in (0,L)$, is denoted by $H$.
We can draw the following group diagram for the manifold $M$: \[ \xymatrix{ & G \\ K^- \ar@{-}[ur]^{ }& & K^+ \ar@{-}[ul]^{} \\ & H \ar@{-}[ul] \ar@{-}[ur] } \] The group diagram $H \subset \set{K^-, K^+}\subset G$ is not uniquely determined by the manifold, since one can switch $K^-$ with $K^+$, change $g$ to another invariant metric, and choose another minimal geodesic $c(t)$. \begin{defn} Two group diagrams are called \emph{equivalent} if they determine the same cohomogeneity one manifold up to equivariant diffeomorphism. \end{defn} The following lemma characterizes when two group diagrams are equivalent, see \cite{GWZ}. \begin{lem}\label{LemGroupEquivalent} Two group diagrams $H \subset \set{K^-, K^+} \subset G $ and $\tilde{H} \subset \set{\tilde{K}^-, \tilde{K}^+} \subset G$ are equivalent if and only if, after possibly switching the roles of $K^-$ and $K^+$, the following holds: there exist elements $b \in G$ and $a \in N(H)_0$, where $N(H)_0$ is the identity component of the normalizer of $H$, with $\tilde{K}^- = b K^- b^{-1}$, $\tilde{H} = bHb^{-1}$, and $\tilde{K}^+ = ab K^+ b^{-1} a^{-1}$. \end{lem} \begin{rem}\label{remequivalentdiag} If $c(t)$ is the minimal normal geodesic between the two singular orbits, then $b.c(t)$ is another minimal geodesic between $B_\pm$ and the associated group diagram is obtained by conjugating all isotropy groups by the element $b$. We can assume that $b \in N(H) \cap N(K^-)$ in order to fix $H$ and $K^-$. Conjugation by an element $a$ as in the above lemma usually corresponds to changing the invariant metric on the manifold. \end{rem} Let $C\subset M$ be the image of the minimal geodesic $c(t)$. Then the Weyl group $\mathcal{W}$ of the $G$-action on $M$ is by definition the stabilizer of $C$ modulo its kernel $H$. $\mathcal{W}$ is characterized in the following proposition. \begin{prop}\label{propWeyl} The Weyl group $\mathcal{W}$ of a cohomogeneity one manifold is a dihedral subgroup of $N(H)/H$.
It is generated by involutions $w_\pm \in (N(H)\cap K^\pm)/H$ and $C/\mathcal{W} = M/G =[0,L]$. Each of these involutions can be represented by an element $a\in K^\pm\cap N(H)$ such that $a^2$, but not $a$, lies in $H$. \end{prop} Using the group action, the invariant metric is determined by its restriction to the minimal geodesic $c(t)$. Suppose $c(t)$ is parameterized by arc length, i.e., $T=\frac{\mathrm{d}}{\mathrm{d} t}$ has length $1$; then we can write $g$ as \begin{equation*} g = \mathrm{d} t^2 + g_t, \end{equation*} where $\set{g_t}_{t\in [0,L]}$ is a one-parameter family of homogeneous metrics on the orbits $M_t = G.c(t)$. Fix a bi-invariant inner product $Q$ on the Lie algebra $\mathfrak{g}$ of $G$ and let $\mathfrak{p} = \mathfrak{h}^\perp$ be the orthogonal complement of the Lie algebra $\mathfrak{h}$ of $H$. For each $X \in \mathfrak{p}$, let $X^*$ be the Killing vector field generated by $X$ along $c(t)$, i.e., $X^*(t) = \frac{\mathrm{d}}{\mathrm{d} s}|_{s=0} \exp(sX).c(t)$. For each $t\in (0,L)$, $M_t$ is diffeomorphic to the homogeneous space $G/H$, and hence $T_{c(t)}M_t$ can be identified with $\mathfrak{p}$ by means of Killing vector fields via $X \mapsto X^*(t)$. Then $g_t$ defines an inner product on $\mathfrak{p}$ which is invariant under $\mathrm{Ad}_H$. We set \begin{equation}\label{eqnmetricgt} g_t(X,Y) = g_t(X^*(t), Y^*(t)) = Q(P_t X, Y) \mbox { for } X, Y \in \mathfrak{p}, \end{equation} where $P_t : \mathfrak{p} \longrightarrow \mathfrak{p}$ is a $Q$-symmetric $\mathrm{Ad}_H$-equivariant endomorphism. The metric $g$ is completely determined by the one-parameter family $\set{P_t}$, $t\in [0,L]$, and at $t=0$ and $t=L$, $P_t$ must satisfy further conditions to guarantee smoothness of $g$. On the other hand, each principal orbit $M_t$ is a hypersurface in $M$ with normal vector $T$.
If $S_t$ denotes the shape operator of $M_t$ at $c(t)$, defined by $S_t X^*(t) = -\nabla_{X^*} T$, then in terms of $P_t$ we have \begin{equation}\label{eqnshapeop} S_t = -\dfrac{1}{2} P_t^{-1}P_t', \end{equation} which follows by differentiating (\ref{eqnmetricgt}) in $t$ and using that $[T, X^*]=0$ along $c(t)$. \smallskip In the rest of this section, we give a short introduction to class one representations, with more details in Appendix \ref{appclassone}. This particular class of representations is well studied, see, for example, \cite{VKrepresentation} and \cite{WallachMin}. First we recall: \begin{defn}\label{defclassone} A representation $(\mu, W)$ of a compact Lie group $K$ is called a \emph{real (complex or quaternionic) representation} if $W$ is a vector space over $\mathbb R$ ($\mathbb{C}$ or $\mathbb{H}$). \end{defn} \begin{defn}\label{defreptype} Suppose $W$ is an irreducible representation over the complex numbers. Then $W$ is called a \emph{real} representation or of \emph{real type} if it comes from a representation over the reals by extension of scalars. It is of \emph{quaternionic type} if it comes from a representation over the quaternions by restriction of scalars. It is of \emph{complex type} if it is neither real nor quaternionic. \end{defn} For any complex representation $\mu$, let $\mu^*$ denote its complex conjugate. $\mu^*$ is equivalent to $\mu$ if $\mu$ is of real or quaternionic type, and they are non-equivalent if $\mu$ is of complex type. Suppose $K$ is a compact connected Lie group; then the complexification of a real irreducible representation $\sigma$ is one of the following \begin{enumerate} \item $\sigma_{\mathbb{C}} = \mu$, where $\mu$ is an irreducible representation of real type, \item $\sigma_{\mathbb{C}} = \mu \oplus \mu^*$, where $\mu$ is irreducible and of complex type, \label{itemrealcomplex} \item $\sigma_{\mathbb{C}} = \mu \oplus \mu$, where $\mu$ is irreducible and of quaternionic type. \label{itemrealquater} \end{enumerate} In classes (\ref{itemrealcomplex}) and (\ref{itemrealquater}), we often write $\sigma = [\mu]_\mathbb R$.
Conversely, suppose $\mu$ is a complex irreducible representation of $K$ of degree $n$. If $\mu$ is of real type, then there exists a real vector space $\mathbb R^n \subset \mathbb{C}^n$ which is $\mu(K)$ invariant. Let $\sigma$ be the restriction of $\mu$ to $\mathbb R^n$; then $\sigma$ is a real irreducible representation with $\sigma_{\mathbb{C}} = \mu$. If $\mu$ is not of real type, then we identify $\mathbb{C}^{n}$ with $\mathbb R^{2n}$ and forget the complex structure on it. Thus $\mathbb R^{2n}$ is invariant under the $\mu(K)$ action, and this gives us a real irreducible representation. We denote this representation by $\sigma$; then $\sigma_{\mathbb{C}}$ is either $\mu \oplus \mu^*$ or $\mu \oplus \mu$ depending on the type of $\mu$. A quaternionic irreducible representation is obtained from a complex irreducible representation by extending the scalar field to the quaternions, and the converse also holds. \smallskip Given an $n$-dimensional real irreducible representation $(\sigma, W)$ of a compact Lie group $K$, we briefly discuss the classification of the equivariant endomorphisms, i.e., the endomorphisms $f: W \longrightarrow W$ with $f(\sigma(g).v) = \sigma(g).f(v)$ for any $g\in K$ and $v\in W$. If $\sigma$ is in class $(1)$, then by Schur's lemma, $f = a \mathrm{Id}$ for some constant $a \in \mathbb R$, where $\mathrm{Id}$ is the identity map of $W$. If $\sigma$ is in class $(2)$, then $W$ has an orthonormal basis with respect to which $f$ has the form \begin{equation}\label{eqnequivarmapcpx} f= \begin{pmatrix} a_0 I_{m} & -a_1 I_{m} \\ a_1 I_{m} & a_0 I_{m} \end{pmatrix}, \end{equation} where $a_0$, $a_1$ are constants in $\mathbb R$, $n=2m$ and $I_m$ is the $m\times m$ identity matrix.
If $\sigma$ is in class $(3)$, then there exists an orthonormal basis of $W$ with respect to which $f$ has the form \begin{equation}\label{eqnequivarmapquater} f = \begin{pmatrix} a_0 I_m & -a_1 I_m & -a_2 I_m & -a_3 I_m \\ a_1 I_m & a_0 I_m & a_3 I_m & -a_2 I_m \\ a_2 I_m & -a_3 I_m & a_0 I_m & a_1 I_m \\ a_3 I_m & a_2 I_m & -a_1 I_m & a_0 I_m \end{pmatrix}, \end{equation} where $n=4m$ and $a_0$, $a_1$, $a_2$, $a_3$ are constants in $\mathbb R$. \smallskip \begin{defn} A pair $(K,H)$ of compact Lie groups with $H\subset K$ and $K/H= \mathbb{S}^k$ ($k\geq 2$) is called a \emph{spherical pair}. \end{defn} If we assume that $K$ is connected, the image of $\mu$ is a closed subgroup of $SO(l)$, $U(l)$ or $Sp(l)$ according to whether $\mu$ is over $\mathbb R$, $\mathbb{C}$ or $\mathbb{H}$. For each group pair $(K,H)$ with $H \subset K$ a closed subgroup, we have \begin{defn} A non-trivial irreducible representation $(\mu, W)$ of $K$ is called \emph{a class one representation of the pair ($K,H$)} if $\mu(H)$ fixes a nonzero vector $w_0 \in W$. \end{defn} \begin{rem} By Proposition \ref{propclassonetypekernel}, the class one representations of spherical pairs are almost effective, i.e., the kernel of $\mu$ is discrete, except for the following cases: \begin{enumerate} \item $(K,H)=(U(n),U(n-1)_m)$ and $\mu = ae_1 - ae_n$ where $a$ is a positive integer. The kernel is the diagonally embedded $U(1) \subset U(n)$. \item $(Sp(n)\times Sp(1), Sp(n-1)\times Sp(1))$ and $\mu = a \varpi_2$ where $a$ is a positive integer. The kernel contains the $Sp(1)$ factor. \item $(Sp(n)\times U(1), Sp(n-1)\times U(1)_m)$ and $\mu = (a\varpi_1 + b\varpi_2) \otimes \mathrm{id}$ where $a$, $b$ are non-negative integers with $a+b\geq 1$ and $\mathrm{id}$ is the trivial representation of $U(1)$. The kernel contains the $U(1)$ factor.
\end{enumerate} More information on the non-trivial kernels of class one representations is given in Proposition \ref{propclassonetypekernel} and listed in Table \ref{tabletypeclassone}. \end{rem} In the case where $(K,H)= (SO(k+1),SO(k))$, let $\varpi_1$ be the highest weight of the standard representation $\varrho_{k+1}$ on $\mathbb R^{k+1}$. Then the class one representations over $\mathbb R$ are precisely those with highest weights $m\varpi_1$, $m=1, 2, \ldots$. These representation spaces can be realized as spaces of homogeneous harmonic polynomials. Let $\set{x_1, \ldots, x_{k+1}}$ be the standard basis of $\mathbb R^{k+1}$ and let $SO(k+1)$ act by matrix multiplication. Then an element $A \in SO(k+1)$ acts on a polynomial $f(x_1, \ldots, x_{k+1})$ through the action on the variables, i.e., $(A.f)(x_1, \ldots, x_{k+1}) = f(A^{-1}.x_1, \ldots, A^{-1}.x_{k+1})$. A polynomial $f$ is called harmonic if $\Delta f = 0$, where $\Delta = \sum_{i=1}^{k+1} \frac{\partial^2}{\partial x_i^2}$. Let $H_m$ be the space of homogeneous harmonic polynomials in $x_1, \ldots, x_{k+1}$ of degree $m$; then the representation of $SO(k+1)$ on $H_m$ has highest weight $m\varpi_1$. All of them are of real type. If $k+1$ is odd, then the representation is faithful for every positive $m$. If $k+1$ is even, then the class one representation is faithful if and only if $m$ is odd. As we will see, the class one representations of the other spherical pairs $(K,H)$ are the irreducible components of the restriction of $H_m$ to the subgroup $K$ of $SO(k+1)$. The real, complex and quaternionic class one representations of the spherical pairs are classified in Appendix \ref{appclassone}. Further properties are discussed there as well.
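The dimension count behind these statements is elementary: since the Laplacian maps the degree-$m$ homogeneous polynomials onto those of degree $m-2$, we have $\dim H_m = \binom{k+m}{m} - \binom{k+m-2}{m-2}$. The following short script is an illustrative check of this standard formula, not part of the text:

```python
# Dimension of H_m, the degree-m homogeneous harmonic polynomials in N
# variables: dim P_m - dim P_{m-2}, since Delta: P_m -> P_{m-2} is onto.
from math import comb

def dim_harmonic(N, m):
    poly = comb(N + m - 1, m)                        # dim of P_m
    lower = comb(N + m - 3, m - 2) if m >= 2 else 0  # dim of P_{m-2}
    return poly - lower

# N = k+1 = 3: recovers the (2m+1)-dimensional representations of SO(3).
assert all(dim_harmonic(3, m) == 2 * m + 1 for m in range(1, 20))
# N = 6, m = 2: the 20-dimensional representation of SO(6) with highest
# weight 2*varpi_1, which reappears in an example in the next section.
assert dim_harmonic(6, 2) == 20
```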
\section{Weyl group and smoothness}\label{secgroupcompweylgroup} First let us state the most general result in the orthogonal case, a generalization of the theorem in the introduction using the concept of class one representations: \begin{thm}\label{thmorth} Let $K'/H' = \mathbb{S}^k$ with $k\geq 2$ and let $\rho : K' \longrightarrow SO(m)$ be a faithful representation that contains a (not necessarily faithful) irreducible class one representation $\mu$ of the pair $(K',H')$ such that one of the following holds: \begin{enumerate} \item $\deg \mu \geq k+2$ if $\mu$ is of real type or the multiplicity of $\mu$ in $\rho$, denoted by $\mathrm{mul}(\mu,\rho)$, is equal to $1$; \item $\deg \mu \geq 2(k+2)$ if $\mu$ is of complex type and $\mathrm{mul}(\mu,\rho)\geq 2$; \item $\deg \mu \geq 4(k+2)$ if $\mu$ is of quaternionic type and $\mathrm{mul}(\mu,\rho) \geq 2$. \end{enumerate} We assume that $n \geq m+2$ if $\mathrm{mul}(\mu,\rho) = 1$, and that $n \geq m+3$ if $\mathrm{mul}(\mu,\rho) \geq 2$. If we set $G = SO(n)$ and \begin{eqnarray} K^- & = & \rho(K')\times SO(n-m) \subset SO(m)\times SO(n-m) \subset SO(n) \nonumber\\ K^+ & = & \rho(H')\times SO(n-m+1)\subset SO(m-1)\times SO(n-m+1) \subset SO(n) \label{groupdiagram}\\ H & = & \rho(H')\times SO(n-m) \subset SO(n),\nonumber \end{eqnarray} then the cohomogeneity one manifold $M$ defined by the groups $H\subset \set{K^-,K^+}\subset G$ does not admit a $G$ invariant metric with non-negative sectional curvature. \end{thm} \begin{rem} Since we do not assume that $\mu$ is faithful, a non-faithful class one representation is allowed in this construction. In other words, if $\mu$ is a class one representation with $\deg \mu \geq k+2$, we choose a representation $\tau$ with $\ker \tau \cap \ker \mu =\set{1}$. Then $\rho = \tau \oplus \mu$ satisfies the conditions of the theorem.
For example, take $(K',H') = (SO(6),SO(5))$ with $k = 5$ and let $\mu$ be the $20$ dimensional representation of $SO(6)$ with the highest weight $2\varpi_1$. To construct a cohomogeneity one manifold, we can choose $\tau = \varrho_6$, the standard representation of $SO(6)$, which is faithful, and let $\rho = \tau \oplus \mu$. \end{rem} \begin{rem} If $\rho$ contains only one copy of $\mu$ or $\mu$ is of real type, then Table \ref{tableclassonedimcomparereal} in Proposition \ref{propdimcomparereal} lists all real class one representations which have dimensions smaller than $k+2$. We see that only the following representations are excluded by the assumption $\deg \mu \geq k+2$: the defining representation of $K'$ on $\mathbb R^{k+1}$, the $9$ dimensional representation $\varrho_9$ of the pair ($Spin(9)$, $Spin(7)$), the $5$ dimensional representation of $(Sp(2),Sp(1))$ and the $3$ dimensional representations of $(SU(2), \set{\mathrm{Id}})$ and $(U(2),U(1))$. None of these representations is faithful, so one needs to add another representation $\tau$ with $\ker \tau \cap \ker \mu =\set{\mathrm{Id}}$ to define a cohomogeneity one manifold. For such manifolds, we do not know whether they admit an invariant metric with non-negative curvature. \end{rem} \begin{rem} If $\rho$ contains more than one copy of $\mu$ and $\mu$ is not of real type, then the further restriction $\deg \mu \geq 2 (k+2)$ or $4 (k+2)$ excludes $8$ more representations, listed in the last part of Table \ref{tableclassonedimcomparereal}. Among them, the following representations are faithful: $\mu = [2\varpi_1]_\mathbb R$ for the pair $(SU(3),SU(2))$, $\mu=[\varpi_1 +\varpi_2]_\mathbb R$ for the pair $(Sp(2),Sp(1))$ and $\mu = [3\varpi_1]_\mathbb R$, $[5\varpi_1]_\mathbb R$ and $[7\varpi_1]_\mathbb R$ for the pair $(Sp(1),\set{1})$. The first one is of complex type and the other four are of quaternionic type.
They can be used to construct cohomogeneity one manifolds without adding other representations; for example, take $(K',H') = (Sp(1),\set{1})$ and $\rho = [3\varpi_1]_\mathbb R \oplus [3\varpi_1]_\mathbb R$. Thus Theorem \ref{thmorth} does not give an obstruction for such manifolds. \end{rem} \begin{rem} The lowest dimensional example of Theorem \ref{thmorth} is obtained as follows. Take $(K',H')= (SO(3),SO(2))$ with $K'/H' = \mathbb{S}^2$; then the lowest dimensional class one representation $\mu$ with $\deg \mu > 3$ is the unique $5$-dimensional representation of $SO(3)$, which is also faithful. If $m=5$ and $n=7$, the manifold $M$ has dimension $20$ and isotropy groups \begin{equation*} \mu(SO(2))\times SO(2) \subset \set{\mu(SO(3))\times SO(2), \mu(SO(2)) \times SO(3)} \subset SO(7). \end{equation*} Notice that this example is already covered by Theorem 3.2 in \cite{GVWZ}. \end{rem} \smallskip We now describe the explicit embedding of the groups. $SO(m)\times SO(n-m)$ is embedded in $G=SO(n)$ block-wise, i.e., $SO(m)$ sits in the upper-left $m\times m$ block and $SO(n-m)$ in the lower-right block. By assumption, $\rho$ is a faithful orthogonal representation of $K'$ with representation space $V=\mathbb R^m$. Let $W_1, \ldots, W_{\alpha}$ be pairwise orthogonal invariant subspaces of $V$ such that each restriction $\rho|_{W_i}$ is irreducible and equivalent to $\mu$. Let $U$ be the orthogonal complement of $W = W_1 \oplus \cdots \oplus W_{\alpha}$. According to the decomposition of $V$ into invariant spaces, we can write $\rho$ as \begin{equation}\label{eqndecomprep} \rho = \tau \oplus \mu \oplus \cdots \oplus \mu, \end{equation} where $\tau$ is the restriction of $\rho$ to $U$ and $\mu$ is the class one representation of $(K',H')$. Furthermore, $\mu$ is not a subrepresentation of $\tau$. Suppose $r=\dim U$ and $l=\dim W_i = \deg \mu$; then $m=r+ \alpha l$.
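As a quick arithmetic check of the lowest dimensional example above (a small illustrative script, not part of the argument):

```python
# Dimension bookkeeping for the example (K',H') = (SO(3), SO(2)) with the
# 5-dimensional representation mu, m = 5, n = 7, G = SO(7), H = SO(2) x SO(2).

def dim_SO(n):
    return n * (n - 1) // 2

k, deg_mu, m, n = 2, 5, 5, 7
assert deg_mu >= k + 2        # hypothesis (1) of the theorem (real type)
assert n >= m + 2             # size condition when mul(mu, rho) = 1

# dim M = dim(G/H) + 1: the principal orbit plus the normal direction.
dim_M = dim_SO(n) - 2 * dim_SO(2) + 1
assert dim_M == 20            # the dimension stated in the remark
```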
By choosing a suitable basis of $V$, $\rho(x)$, for $x\in K'$, is a block diagonal matrix in $SO(m)$: \begin{equation}\label{eqnrhox} \rho(x) = \begin{pmatrix} \tau(x) & & & \\ & \mu(x) & & \\ & & \ddots & \\ & & & \mu(x) \\ \end{pmatrix} \in SO(r) \times SO(l) \times \cdots \times SO(l). \end{equation} When $\mu$ is restricted to the subgroup $H'\subset K'$, it is no longer irreducible. Let $m_0$ be the multiplicity of the trivial representation in the restriction $\mu |_{H'}$. Hence for any element $y \in H'$, $\mu(y) \in SO(l-m_0) \subset SO(l)$, where $SO(l-m_0)$ is embedded as the upper left $(l-m_0)\times (l-m_0)$ block of $SO(l)$. \smallskip With the explicit description of the embeddings, we can prove the following proposition on the Weyl group of the new examples. \begin{prop}\label{propweyl} The Weyl group $\mathcal{W}$ is isomorphic to $\mathbb{Z}_2 \times \mathbb{Z}_2 $. \end{prop} \noindent \emph{Proof : } For any element $x\in K'$ and $A\in SO(n-m)$, let $M(x,A)$ denote the block diagonal matrix $\mathrm{diag}(\rho(x), A)$ with $\rho(x) \in SO(m)$. If $x \in H'$, then $A$ can be considered as a matrix in $SO(n-m+1)$ since $\rho(x) \in SO(m-1)$. First notice that $w_+$ can be represented by an element $a \in K^+ = \rho(H')\times SO(n-m+1)$ which is not in $H$ but satisfies $a^2 \in H$. Let \begin{equation}\label{eqnwep} w_+ = M(\mathrm{id}, \left(\begin{smallmatrix} -1 & & \\ & -1 & \\ & & I_{n-m-1} \end{smallmatrix}\right)) = a = \begin{pmatrix} I_{m-1} & & & \\ & -1 & & \\ & & -1 & \\ & & & I_{n-m-1} \end{pmatrix}, \end{equation} where $I_{k}$ is the $k\times k$ identity matrix. Suppose $b = M(x,A)$ is a representative of $w_-$, i.e., $b \in N_{K^-}(H)$, $b^2 \in H$ and $b \notin H$. In the following we will determine the element $b$ in three different cases depending on the class one representation $\mu$. \textsc{Case 1:} $\mu$ is of real type and $m_0 = 1$.
In each $W=W_i$ ($i =1, \ldots, \alpha$), let $v$ be a unit vector fixed by $\mu(H')$ (or equivalently by $H$) and let $X$ be its orthogonal complement. Then $\dim X = l-1$. Since $b \in N_{K^-}(H)$, $b.v$ is also fixed by $H$ and has the same length as $v$, i.e., $b.v= \pm v$. Since $b$ is an orthogonal transformation, $b$ maps $X$ to itself, i.e., when restricted to $W$, $b=\left(\begin{smallmatrix} b_1 & 0 \\ 0 & \det b_1 \end{smallmatrix}\right)$ where $b_1 \in O(l-1)$. Therefore the representative $b$ has the following matrix form: \begin{equation}\label{eqnwem} w_- = b = \begin{pmatrix} A_1 & & & & & &\\ & A_2 & & & & &\\ & & \det A_2 & & & &\\ & & & \ddots & & &\\ & & & & A_2 & & \\ & & & & & \det A_2 & \\ & & & & & & I_{n-m} \end{pmatrix}, \end{equation} where $A_1 = \tau(x) \in SO(r)$, $A_2 = b_1 \in O(l-1)$ and there are $\alpha$ copies of $\left(\begin{smallmatrix} A_2 & \\ & \det A_2 \end{smallmatrix}\right)$. \textsc{Case 2:} $\mu$ is not of real type and $m_0 = 2$. In this case we have $\mu = \nu \oplus \nu^*$ where $\nu$ is a complex class one representation of the pair $(K',H')$. If $\nu$ is of quaternionic type, then $\nu^* = \nu$ and $\mu = \nu \oplus \nu$. As in Case 1, we have $W= X \oplus_\perp Y$ where $Y$ is the $2$ dimensional subspace fixed by $\mu(H')$ and $X$ is its orthogonal complement. The orthogonal transformation $\mu(x)$ maps $X$ and $Y$ to themselves, and the matrix $\mu(x)$ has the form $\left(\begin{smallmatrix} b_1 & 0 \\ 0 & b_2 \end{smallmatrix}\right) \in S(O(l-2)\times O(2))$. Since $\mu(x)^2 \in H$, we have $b_2^2 = I_2$, and hence $b_2$ is a symmetric matrix. Since $\mu = \nu \oplus \nu^*$, $b_2$ commutes with the matrix $\left(\begin{smallmatrix} 0 & -1 \\ 1 & 0 \end{smallmatrix}\right)$, which implies $b_2 = \pm I_2$. Therefore $b$ also has the matrix form as in (\ref{eqnwem}). \textsc{Case 3: }$\mu$ is not in Case 1 or Case 2.
From the classification of class one representations in Theorem \ref{thmclassonecpx} and their types in Proposition \ref{propclassonetypekernel}, the only representation of this type occurs for $(K',H') = (Sp(k),Sp(k-1))$, where $\mu$ has the highest weight $p \varpi_1 + q \varpi_2$ with $p\geq 1$. Here $\varpi_1$ and $\varpi_2$ are the first and second fundamental weights of $Sp(k)$. Since $\rho$ is a faithful representation, we need an element $x\in N_{K'}(H') - H'$ with $x^2 \in H'$. Take $x = -I_k \in Sp(k)$; then $x$ satisfies these conditions. If we view $Sp(k)$ as a subgroup of $SO(4k)$, then $\mu$ is contained in the restriction of some class one representation $\nu$ of $SO(4k)$ to $Sp(k)$ by Theorem \ref{thmharmonic}. The representation space of $\nu$ consists of homogeneous harmonic polynomials, so the image $\nu(x)$ is $\pm \mathrm{Id}$ depending on the parity of the degree of the polynomials. Therefore $\mu(x)$ is equal to $\pm \mathrm{Id}$ and the element $w_-$ can be represented by the following matrix: \begin{equation}\label{eqnwemsp} w_- = b = \begin{pmatrix} A_1 & & \\ & \varepsilon I_{\alpha l} & \\ & & I_{n-m} \end{pmatrix}, \end{equation} where $A_1 = \tau(x) \in SO(r)$ and $\mu(x) = \varepsilon I_l$ with $\varepsilon = \pm 1$. In each case, from the given representatives of $w_\pm$, it is easy to check that $w_+ w_- = w_- w_+$ and that this product is not an element of $H$. Thus $\mathcal{W} = \langle w_-, w_+ \rangle \cong \mathbb{Z}_2 \times \mathbb{Z}_2$. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \begin{rem} Let $a$ be an element in $N(H)_0$ which does not lie in $N(K^-)$ or $N(K^+)$ and let $\bar{M}$ be the cohomogeneity one manifold defined by the group diagram $H \subset \set{K^-, aK^+ a^{-1}} \subset G$. As pointed out in Remark \ref{remequivalentdiag}, $M$ and $\bar{M}$ usually have different Weyl groups and different invariant metrics though they are $G$-equivariantly diffeomorphic.
Therefore, if one family of invariant metrics does not admit non-negative curvature, it does not necessarily follow that the other family is obstructed as well. In our example, if the multiplicity of the trivial representation in $\mu|_{H'}$ is equal to one, i.e., $m_0 = 1$, then our arguments showing the obstructions work for all equivalent diagrams. This is the case for most of the class one representations of spherical pairs, see, for example, Theorem \ref{thmclassonecpx}. On the other hand, if $m_0 \geq 2$, then we have to impose a further restriction on the diagram: the Lie algebra of $K^+$ contains the subspace $\mathrm{span}\set{E_{m,m+1}, \ldots, E_{m,n}}$. Here we use $E_{i,j}$, $i\neq j$, to denote the skew-symmetric matrix having $1$ in the $(i,j)$-entry, $-1$ in the $(j,i)$-entry and zeros elsewhere. \end{rem} \smallskip Using the explicit representatives of the generators of the Weyl group, we obtain the following smoothness condition for an invariant metric on $M$. \begin{lem}\label{lemweylsymmetry} For any $G$-invariant metric $g$ on $M$, let $h(t)$ be the length of the Killing vector field generated by $E_{m,m+1}$ along the geodesic $c(t)$. Then $h(t)$ is an even function with $h(0)\neq 0$ and $h(L) = 0$. \end{lem} \noindent \emph{Proof : } The fact that $E_{m,m+1}$ lies in the Lie algebra of $K^+$ but not of $K^-$ implies that $h(0)\neq 0$ and $h(L) = 0$. The generator $w_-$ is a reflection of $c(t)$ at the point $p_-$ and maps $c(t)$ to $c(-t)$. The induced map $\mathrm{d} w_-$ takes $T_{c(t)}M$ to the tangent space $T_{c(-t)}M$. From the matrix forms (\ref{eqnwem}) and (\ref{eqnwemsp}) of a representative of $w_-$, we have $\mathrm{d} w_- (E^*_{m,m+1}(t))$ = $\pm E^*_{m,m+1}(-t)$. Therefore $h(t)=h(-t)$, i.e., $h(t)$ is an even function. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \smallskip \smallskip \section{Restrictions on the Metric Along the Normal Geodesic}\label{secKillingvf} In this section, we begin the study of the invariant metrics in our examples.
In general, the family of $G$-invariant metrics on $M$ is very large. There are many rigidity results for positively curved metrics. For example, in even dimensions, L. Verdiani classified all positively curved cohomogeneity one manifolds, see \cite{VerdianiEven1} and \cite{VerdianiEven2}. In odd dimensions, K. Grove, B. Wilking and W. Ziller obtained a short list of cohomogeneity one manifolds which possibly carry invariant positively curved metrics in \cite{GVWZ}. Recently K. Grove, L. Verdiani and W. Ziller have succeeded in constructing a positively curved metric on one of them in \cite{GVZPos}. On the other hand, there are few rigidity results for non-negatively curved metrics, even in the cohomogeneity one case. Recently, B. Wilking proved some fundamental rigidity theorems for non-negatively curved manifolds in a general setting which will play an important role in our proof. \smallskip Let $c: \mathbb R \longrightarrow M$ be a geodesic and let $\Lambda$ be an $(n-1)$-dimensional family of normal Jacobi fields. The Riccati operator $L(t)$ is the endomorphism of the normal bundle $(\dot{c}(t))^\perp$ defined by $L(t)J(t) = J'(t)$ for $J \in \Lambda$. From Theorem 9 and Corollary 10 in \cite{Wilkingduality}, we have \begin{thm}[Wilking's Rigidity Theorem]\label{thmwilking} Suppose the Riccati operator for an $(n-1)$-dimensional family $\Lambda$ of normal Jacobi fields is self-adjoint, and define the smooth subbundle $\Upsilon$ of $(\dot{c}(t))^\perp$ by: \[ \Upsilon = \mathrm{span}\set{J \in \Lambda | J(t) = 0 \mbox{ for some }t \in \mathbb R}. \] Then, if $M$ has non-negative curvature, we have: \begin{equation} \Lambda = \Upsilon \oplus \set{J \in \Lambda | J \mbox{ is parallel }}, \label{eqndecompJacobi} \end{equation} and \[ \Upsilon(t) = \set{J(t) |J \in \Upsilon}\oplus\set{J'(t)|J \in \Upsilon, J(t) =0}. \] A point $t_0\in \mathbb R$ (or $c(t_0)$) is said to be \emph{singular} if $J(t_0) = 0$ for some $J\in \Upsilon$. Otherwise $t_0$ is said to be \emph{generic}.
Thus if $J\in \Lambda$ and $J(t_0) \perp \Upsilon(t_0)$ at a generic $t_0$, then $J$ is parallel along $c(t)$, $t\in \mathbb R$. \end{thm} \begin{rem} If there exists a subbundle $E \subset (\dot{c}(t))^\perp$ that is invariant under parallel transport, then Theorem \ref{thmwilking} can also be applied to a $(\mathrm{rank}\, E)$-dimensional family of normal Jacobi fields in $E$ with self-adjoint Riccati operator. \end{rem} In our example, let $c(t)$ be the minimal geodesic between $B_\pm$, and let $X^*(c(t))$, $X\in \mathfrak{h}^\perp$, be an $(n-1)$-dimensional family of Jacobi fields. Its Riccati operator $L(t)$ is self-adjoint, since it is equal to the shape operator $- \frac{1}{2} P_t^{-1}P'_t$, see (\ref{eqnshapeop}). We will apply Theorem \ref{thmwilking} to a subfamily of these Jacobi fields. Let $\mathfrak{g} = \mathfrak{so}(n)$ and $\mathfrak{h}$ be the Lie algebras of $G=SO(n)$ and $H=\rho(H')\times SO(n-m)$ respectively. Choose the bi-invariant inner product $Q= -\frac{1}{2} \mathrm{Tr}$ on $\mathfrak{g}$ for which $\set{E_{i,j}}$ is an orthonormal basis and let $\mathfrak{p}$ be the orthogonal complement of $\mathfrak{h} \subset \mathfrak{g}$. First we identify some subspaces of $\mathfrak{p}$. Let \begin{eqnarray} \mathfrak{q}_0 & = & \mathrm{span} \set{E_{i,j}|1\leq i\leq r, m+1\leq j \leq n }\nonumber \\ \mathfrak{q}_1 & = & \mathrm{span} \set{E_{i,j}|r+1\leq i\leq r+l, m+1\leq j \leq n }\label{eqnsubspaceq} \\ & \cdots & \nonumber\\ \mathfrak{q}_\alpha & = & \mathrm{span} \set{E_{i,j}|m+1-l\leq i\leq m, m+1\leq j \leq n },\nonumber \end{eqnarray} and $\mathfrak{q} = \mathfrak{q}_0 + \mathfrak{q}_1 + \cdots + \mathfrak{q}_\alpha$. We write the last subspace $\mathfrak{q}_\alpha$ as a sum of two subspaces as follows: \begin{eqnarray} \mathfrak{n}_1 & = & \mathrm{span}\set{E_{i,j}| m+1-l\leq i \leq m -1, m+1 \leq j \leq n},\label{eqnsubspacen}\\ \mathfrak{n}_2 & = & \mathrm{span}\set{E_{m,j}| m+1 \leq j \leq n}.\nonumber
\end{eqnarray} Let $\mathfrak{q}^\perp$ be the $Q$-orthogonal complement of $\mathfrak{q}$ in $\mathfrak{p}$. Since $\mathfrak{q}^\perp$ is the fixed point set of the isotropy action of the subgroup $SO(n-m) \subset H = \rho(H')\times SO(n-m)$, Schur's lemma implies that the Killing vector field $X^*$, $X \in \mathfrak{q}^\perp$, is orthogonal along $c(t)$ to $Y^*$ for any $Y \in \mathfrak{q}$. \smallskip \noindent \emph{Terminology. } In the rest of the paper, for any two subspaces $\mathfrak{p}_1$, $\mathfrak{p}_2 \subset \mathfrak{p}$, the notation $\mathfrak{p}_1^* \perp \mathfrak{p}_2^*$ means that any Killing vector field generated by an element of $\mathfrak{p}_1$ is orthogonal to any Killing vector field generated by an element of $\mathfrak{p}_2$ along $c(t)$. \smallskip Since parallel translation commutes with the action of $\mathrm{Ad}_{H}$, and since $\mathfrak{q}^\perp$ is the fixed point set of $SO(n-m)$ in $H$, it follows that $(\mathfrak{q}^\perp)^*$, and hence also $\mathfrak{q}^*$, is invariant under parallel translation. By the same reasoning, $P^{-1}P'$ preserves $\mathfrak{q}^*$, and thus $\mathfrak{q}^*$ forms a self-adjoint family of Jacobi fields to which we can apply Theorem \ref{thmwilking}. We now determine the component $\Upsilon$ in the splitting (\ref{eqndecompJacobi}) of the Jacobi fields $\mathfrak{q}^*$. The point $p_+ = c(L)$ is singular. The element $w_+$ fixes $p_+$ and reflects $c(t)$ about $p_+$. Let $q_- = c(2L) = w_+(p_-) \in B_-$; then the isotropy subgroup at $q_-$ is $K^-_1 = \mathrm{Ad}_{w_+} K^-$ with Lie algebra $\mathfrak{k}^-_1 = \mathrm{Ad}_{w_+} \mathfrak{k}^-$. Similarly, $w_-$ fixes $p_-$ and reflects $c(t)$ about $p_-$. Let $q_+ = c(-L) = w_-(p_+) \in B_+$; then $q_+$ has isotropy subgroup $K^+_1 = \mathrm{Ad}_{w_-} K^+$ with Lie algebra $\mathfrak{k}^+_1 = \mathrm{Ad}_{w_-} \mathfrak{k}^+$. Since $w_-.w_+ = w_+.w_-$, the image of $q_-$ under the reflection $w_-$ about $p_-$ is $w_-(q_-) = w_-.w_+(p_-) = w_+.w_-(p_-) = w_+(p_-) = q_-$, i.e., $c(2L) = c(-2L)$. Therefore $c(t)$ is a closed geodesic with period $4L$ and the singular points are $p_+ = c(L)$ and $q_+ = c(3L)$. The vanishing Killing vector fields are those generated by the vectors in the Lie algebras of the isotropy subgroups at singular points. Notice that if $X \in \mathfrak{q}$, then $X^*(p_-) \ne 0$ and $X^*(q_-) \ne 0$. Theorem \ref{thmwilking} implies \begin{lem}\label{lemvanshingKillingVF} If $Y \in \mathfrak{q}$ is such that $Y^*(p_-) \perp X^*(p_-)$ for all $X \in \mathfrak{n}_2$, then $Y^*$ is a parallel Jacobi field along $c(t)$. \end{lem} In the following, we prove some properties of the invariant metrics $g$ on $M$ under the assumption of non-negative sectional curvature. \smallskip First we observe \begin{lem}\label{lemq0orth} Suppose $(M,g)$ is non-negatively curved; then $\mathfrak{q}_0^*$ is orthogonal to $(\mathfrak{q}_1 + \cdots + \mathfrak{q}_\alpha)^*$ along $c(t)$. \end{lem} \noindent \emph{Proof : } At the generic point $p_-=c(0)$, the metric $g$ restricted to the singular orbit $B_- \cong G/K^-$ is $\mathrm{Ad}_{K^-}$ invariant. The actions of $\mathrm{Ad}_{K^-}$ on $\mathfrak{q}_0$ and $\mathfrak{q}_i$ $(i>0)$ are $\tau \otimes \varrho_{n-m}$ and $\mu \otimes \varrho_{n-m}$ respectively, where $\varrho_{n-m}$ is the standard representation of $SO(n-m)$ on $\mathbb R^{n-m}$. Since $\tau$ does not contain $\mu$ as a subrepresentation, $\mathfrak{q}_0^*$ is orthogonal to $\mathfrak{q}_i^*$ at $p_-$. In particular $\mathfrak{q}_0^*$ is orthogonal to $\mathfrak{n}_2^*$, so any Killing vector field generated by a vector in $\mathfrak{q}_0$ is parallel along $c(t)$. Hence $\mathfrak{q}_0^*$ is orthogonal to $(\mathfrak{q}_1 + \cdots + \mathfrak{q}_\alpha)^*$ along $c(t)$. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \smallskip Next we study the metric $g$ on the space $(\mathfrak{q}_1 + \cdots + \mathfrak{q}_\alpha)^*$.
Recall that $h^2(t) = g(E^*_{m,m+1},E^*_{m,m+1})$ is an even function with $h(0) \ne 0$; without loss of generality we may assume that $h(0) = 1$. \subsection{$\mu$ is of real type or the multiplicity of $\mu$ in $\rho$ is one.} We first consider the case where $\mu$ is of real type. We denote $E_{r+(i-1)l+a,m+\xi}$ by $E_{a,i,\xi}$ for $a = 1, \ldots, l$, $i =1, \ldots, \alpha$ and $\xi = 1, \ldots, n-m$. Since $\mathrm{Ad}_{K^-}$ commutes with $P(0)$, the restriction of $P(0)$ to $\mathfrak{q}_i$, composed with the projection to $\mathfrak{q}_j$, is an equivalence between the irreducible $K^-$ representations $\mathfrak{q}_i$ and $\mathfrak{q}_j$. Since these representations are orthogonal, Schur's lemma implies that the $\mathfrak{q}_j$-component of $P(0)(E_{a,i,\xi})$ is $f_{i,j} E_{a,j,\xi}$ for some constant $f_{i,j}\in \mathbb R$. Furthermore, $f_{i,j}=f_{j,i}$ by the $Q$-symmetry of $P_t$. In terms of inner products of Killing vector fields, we have \begin{equation}\label{eqnfijreal} f_{i,j} = g(E_{1,i,1}^*, E_{1,j,1}^*)_{c(0)}. \end{equation} The assumption $h(0) = 1$ implies that $f_{\alpha,\alpha} = 1$. \begin{lem}\label{lemorth} Suppose $(M,g)$ is non-negatively curved, then we have \begin{enumerate} \item $E_{a,i,\xi}^*$ is a parallel Jacobi field along $c(t)$ if $a\ne l$; \item $E_{a,i,\xi}^*$ is orthogonal to $E_{b,j,\zeta}^*$ along $c(t)$ if $a\neq b$ or $\xi \neq \zeta$; \item $E_{a,i,\xi}^*$ has the same length as $E_{a,i,\zeta}^*$ along $c(t)$; \item At the point $p_- = c(0)$, $P_0(E_{a,i,\xi})= \sum_{j=1}^{\alpha} E_{a,j,\xi} f_{i,j}$. \end{enumerate} \end{lem} \noindent \emph{Proof : } At the generic point $c(0)$, the $\mathrm{Ad}_{K^-}$ actions on $\mathfrak{q}_i$ and $\mathfrak{q}_j$ are equivalent and given by the irreducible representation $\mu \otimes \varrho_{n-m}$. From Schur's lemma and the fact that $\mu$ is of real type or the multiplicity of $\mu$ in $\rho$ is one, $E^*_{a,i,\xi}(0)$ is orthogonal to $\Upsilon(0)= \mbox{span} \set{E^*_{l,\alpha,\varsigma}(0)| \varsigma = 1,\ldots, n-m}$ for $a\ne l$.
Hence it is a parallel vector field by the last part of Theorem \ref{thmwilking}, which proves (1). On each principal orbit $M_t \cong G/H$, $\mathrm{Ad}_H$ acts on each $\mathfrak{q}_i$ ($i>0$) by the representation $Res^{K'}_{H'}(\mu) \otimes \varrho_{n-m}$. By Schur's lemma, $E^*_{a,i,\xi}$ is orthogonal to $E^*_{b,j,\zeta}$ along $c(t)$ if $\xi \neq \zeta$, and $E^*_{a,i,\xi}$ has the same length as $E^*_{a,i,\zeta}$. This proves (3) and the case $\xi \neq \zeta$ of (2). Suppose $\zeta = \xi$ and $a\neq b$. If neither $a$ nor $b$ is equal to $l$, then the two vector fields $E_{a,i,\xi}^*$ and $E^*_{b,j,\xi}$ are parallel by (1). Using Schur's lemma again and the fact that $a\ne b$, they are orthogonal to each other at $c(0)$ and hence along the normal geodesic $c(t)$. If one of $a$ and $b$, say $b$, is equal to $l$, then $E^*_{a,i,\xi}$ is a parallel vector field. Write $E_{l,j,\xi}^*(0) = (E_{l,j,\xi}-\lambda E_{l,\alpha,\xi})^*(0) + \lambda E_{l,\alpha, \xi}^*(0)$, where the constant $\lambda$ is determined by the following equation: \begin{equation*} g(E_{l,j,\xi}^*, E_{l,\alpha,\xi}^*)_{c(0)} = \lambda g(E_{l,\alpha,\xi}^*, E_{l,\alpha,\xi}^*)_{c(0)}. \end{equation*} Thus $(E_{l,j,\xi}-\lambda E_{l,\alpha,\xi})^*(0) \perp \Upsilon(0)$, and hence $(E_{l,j,\xi}-\lambda E_{l,\alpha,\xi})^*$ is a parallel vector field. Furthermore, $E_{a,i,\xi}^*$ is orthogonal to $(E_{l,j,\xi}-\lambda E_{l,\alpha,\xi})^*$ at $c(0)$, so they are orthogonal to each other along $c(t)$. Thus $E_{a,i,\xi}^*$ is orthogonal to $E^*_{l,j,\xi}$ along $c(t)$. The formula for $P_0(E_{a,i,\xi})$ in $(4)$ follows easily from the defining equation (\ref{eqnfijreal}) of $f_{i,j}$ and $(2)$.
\hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \medskip From Lemma \ref{lemorth} above, the restriction of the endomorphism $P$ to $(\mathfrak{q}_1 + \cdots + \mathfrak{q}_\alpha)^*$ at $t=0$ has the following matrix form: \begin{equation}\label{eqnPinitial} P_0 = \begin{pmatrix} f_{1,1} I_l & \cdots & f_{1,\alpha}I_l\\ \vdots & \ddots & \vdots\\ f_{\alpha,1}I_l & \cdots & f_{\alpha, \alpha}I_l \end{pmatrix}. \end{equation} We have seen that there are plenty of parallel Killing vector fields in $(\mathfrak{q}_1 + \cdots + \mathfrak{q}_\alpha)^*$. Using these parallel vector fields, we can determine the restriction of $P_t$ to $(\mathfrak{q}_1 + \cdots + \mathfrak{q}_\alpha)^*$. \begin{thm}\label{thmPtrealtype} If the cohomogeneity one manifold $(M,g)$ has non-negative sectional curvature and the class one representation $\mu$ is of real type, then for any $i = 1, \ldots, \alpha$ and $\xi = 1, \ldots, n-m$, we have \begin{enumerate} \item $P_t(E_{a,i,\xi}) = \sum_{j=1}^{\alpha} f_{i,j} E_{a,j,\xi}$, for $a = 1, \ldots, l-1$; \item $P_t(E_{l,i,\xi}) = \sum_{j=1}^{\alpha}p_{i,j}(t) E_{l,j,\xi}$ and $p_{i,j}(t)$ is given by \begin{equation}\label{eqnpijreal} p_{i,j}(t) = (h^2(t)-1) a_i a_j + f_{i,j}, \end{equation} where $a_i = f_{i,\alpha}$ and $a_\alpha = 1$. \end{enumerate} \end{thm} \noindent \emph{Proof : } Part (1) follows since, for $a \leq l-1$, all the Killing vector fields $E^*_{a,j,\xi}$ are parallel along $c(t)$, so their inner products are constant. For part (2), let \begin{equation*} X_i = E_{l,i,\xi} - a_i E_{l,\alpha,\xi}, \quad i = 1, \ldots, \alpha. \end{equation*} Then $X_i \in \mathfrak{q}_1 + \cdots + \mathfrak{q}_\alpha$ and generates a Killing vector field $X_i^*$ along $c(t)$. By the definition $a_i = f_{i,\alpha}$, the defining equation of $f_{i,\alpha}$ in (\ref{eqnfijreal}) and $(4)$ in Lemma \ref{lemorth}, we have \begin{equation*} g(X_i^*, E_{l,\alpha, \xi}^*)_{c(0)} = 0, \end{equation*} i.e., $X_i^*(0) \perp \Upsilon(0)$.
Therefore $X_i^*$ is a parallel vector field. By the formula (\ref{eqnshapeop}) for the shape operator, we have \begin{equation*} P'_t(X_i) = 0 \quad \mbox{for all } t \in \mathbb R. \end{equation*} Since $P_t(E_{l,i,\xi}) = \sum_{j=1}^{\alpha}p_{i,j}(t) E_{l,j,\xi}$ for some functions $p_{i,j}(t)$, we have \begin{eqnarray*} & & P_t'(E_{l,i,\xi} - a_i E_{l,\alpha,\xi}) = P_t'(E_{l,i,\xi}) - a_i P_t'(E_{l,\alpha,\xi}) \\ & = & \sum_{j=1}^{\alpha} p_{i,j}'(t)E_{l,j,\xi} - a_i \sum_{j=1}^{\alpha} p_{\alpha, j}'(t)E_{l,j,\xi} = 0. \end{eqnarray*} Therefore we have the following system of ordinary differential equations for the $p_{i,j}(t)$: \begin{equation}\label{eqnpijode} p_{i,j}'(t) - a_i p_{\alpha,j}'(t) = 0, \quad i,j = 1,\ldots, \alpha. \end{equation} One easily sees that it has the solution \begin{equation*} p_{i,j}(t) = a_i a_j h^2(t) + f_{i,j} - a_i f_{j,\alpha}, \end{equation*} which, since $a_j = f_{j,\alpha}$, agrees with (\ref{eqnpijreal}) and finishes our proof. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \smallskip Now we consider the case when the multiplicity of $\mu$ in $\rho$ is one, i.e., $\alpha = 1$. Since $P_{1,1}(0)$ is symmetric and $\mathrm{Ad}_{K^-}$ equivariant, the off-diagonal terms in the formulas (\ref{eqnequivarmapcpx}) and (\ref{eqnequivarmapquater}) of equivariant endomorphisms vanish, i.e., $P_{1,1}(0) = f_{1,1} I_l$ for some constant $f_{1,1}\in \mathbb R$. By an argument similar to the previous case, we have \begin{thm}\label{thmPtmulone} If the cohomogeneity one manifold $(M,g)$ has non-negative sectional curvature and the multiplicity of the class one representation $\mu$ in $\rho$ is one, then for any $\xi = 1, \ldots, n-m$, we have \begin{enumerate} \item $P_t(E_{a,1,\xi}) = f_{1,1} E_{a,1,\xi}$, if $a = 1, \ldots, l-1$; \item $P_t(E_{l,1,\xi}) = p_{1,1}(t) E_{l,1,\xi}$ and $p_{1,1}(t) = (h^2(t) -1)f_{1,1}^2 + f_{1,1}$. \end{enumerate} \end{thm} \subsection{$\mu$ is of complex or quaternionic type.} We consider the complex case first; the quaternionic case will follow easily.
Let $l=2p$ and $\beta = 2\alpha$. Recall that $W_i$ is a subspace of $V$ with $\rho|_{W_i} = \mu$. Choose an orthonormal basis $\set{e_1, \ldots, e_{2p}}$ of $W_i$ such that $\mu(x)$ has the form $\left(\begin{smallmatrix} A & -B \\ B & A \end{smallmatrix}\right)$ with $A, B$ being $p\times p$ matrices, and $\mu(H')$ fixes the two vectors $e_p$ and $e_{2p}$. Under this basis any $\mathrm{Ad}_{\mu(K')}$-equivariant endomorphism has the block form $\left(\begin{smallmatrix} aI_p & -b I_p \\ b I_p & a I_p \end{smallmatrix}\right)$ as in (\ref{eqnequivarmapcpx}) with constants $a$, $b \in \mathbb R$. Using the fact that the endomorphism commutes with the rotation in the $\set{e_p, e_{2p}}$ plane, we may assume that $E_{m,m+1} \in \mathfrak{k}^+$. Since $\mu$ is of complex type, the $\mathrm{Ad}_{K^-}$-equivariant map $P(0)$ has a block form whose $(i,j)$-block is given by \begin{equation*} \begin{pmatrix} f_{2i-1,2j-1} I_p & f_{2i,2j-1} I_p \\ f_{2i-1,2j} I_p & f_{2i,2j} I_p \end{pmatrix}, \end{equation*} where $f_{a,b}\in \mathbb R$ is constant for $a,b = 1,\ldots, \beta =2\alpha$ and satisfies the following identities: \begin{equation}\label{eqnfijidenitycpx} f_{2i-1,2j-1}= f_{2i,2j}, \quad f_{2i,2j-1} + f_{2i-1,2j} = 0, \quad f_{2i-1,2j-1} = f_{2j-1,2i-1}, \quad f_{2i,2j-1} + f_{2j,2i-1} = 0. \end{equation} The last two are due to the fact that $P(0)$ is $Q$-symmetric. Similarly to the case when $\mu$ is of real type, we define $E_{a,i,\xi} = E_{r+(i-1)p + a, m+\xi}$ for $a=1, \ldots, p$, $i=1,\ldots, \beta$ and $\xi= 1,\ldots, n-m$.
Then we have \begin{thm}\label{thmPijcpx} If the cohomogeneity one manifold $(M,g)$ has non-negative sectional curvature and the class one representation $\mu$ is of complex type, then for any $i = 1, \ldots, \beta$ and $\xi = 1, \ldots, n-m$, we have \begin{enumerate} \item $P_t(E_{a,i,\xi}) = \sum_{j=1}^{\beta} f_{i,j} E_{a,j,\xi}$, if $a = 1, \cdots, p-1$; \item $P_t(E_{p,i,\xi}) = \sum_{j=1}^{\beta}p_{i,j}(t) E_{p,j,\xi}$ and $p_{i,j}(t)$ is defined as \begin{equation}\label{eqnpijcpx} p_{i,j}(t) = (h^2(t)-1) a_i a_j + f_{i,j}, \end{equation} where $a_i = f_{i,\beta}$. \end{enumerate} \end{thm} \smallskip Now we consider the case when $\mu$ is of quaternionic type and the multiplicity of $\mu$ in $\rho$ is bigger than one. Let $l = 4p$ and $\beta = 4\alpha$. From the formula (\ref{eqnequivarmapquater}) of equivariant endomorphisms in this case and a similar argument in the complex case, $P(0) = (f_{a,b}I_p)_{1\leq a, b \leq \beta}$ where $f_{a,b}\in \mathbb R$ is constant and satisfies the following identities: \begin{eqnarray}\label{eqnfijidentityquater} & & f_{4i-3,4j-3} = f_{4i-2,4j-2} = f_{4i-1,4j-1} = f_{4i,4j} = f_{4j,4i} \nonumber\\ & & -f_{4i-2,4j-3} = f_{4i-3,4j-2} = f_{4i,4j-3} = -f_{4i-1,4j} = f_{4j-1,4i} \\ & & -f_{4i-1,4j-3} = -f_{4i,4j-2} = f_{4i-3,4j-1} = f_{4i-2, 4j} = -f_{4j-2,4i} \nonumber \\ & & -f_{4i,4j-3} = f_{4i-1,4j-2} = -f_{4i-2,4j-1} = f_{4i-3,4j} = -f_{4j-3,4i} \nonumber \end{eqnarray} for $i,j = 1, \ldots, \alpha$. 
We denote $E_{r+(i-1)p+a,m+\xi}$ by $E_{a,i,\xi}$ for $a=1,\ldots, p$, $i=1,\ldots, \beta$ and $\xi = 1, \ldots, n-m$, and then we have \begin{thm}\label{thmPijquater} If the cohomogeneity one manifold $(M,g)$ has non-negative sectional curvature and the class one representation $\mu$ is of quaternionic type, then for any $i = 1, \ldots, \beta$ and $\xi = 1, \ldots, n-m$, we have \begin{enumerate} \item $P_t(E_{a,i,\xi}) = \sum_{j=1}^{\beta} f_{i,j} E_{a,j,\xi}$, if $a = 1, \cdots, p-1$; \item $P_t(E_{p,i,\xi}) = \sum_{j=1}^{\beta} p_{i,j}(t) E_{p,j,\xi}$ and $p_{i,j}(t)$ is defined as \begin{equation}\label{eqnpijquater} p_{i,j}(t) = (h^2(t)-1) a_i a_j + f_{i,j}, \end{equation} where $a_i = f_{i,\beta}$. \end{enumerate} \end{thm} \section{Proof Of Theorem \ref{thmorth}}\label{secprooforth} In this section, we will derive a contradiction from the assumption that the manifold $(M,g)$ is non-negatively curved. First we list some lemmas which will be used in the proof of Theorem \ref{thmorth}. \begin{lem}\label{lemnontransitive} The image $\mu(K')\subset SO(l)$ does not act transitively on the sphere $\mathbb{S}^{l-1}=SO(l)/SO(l-1)$. \end{lem} \noindent \emph{Proof : } Recall that $l=\deg \mu \geq k+2$. If $\mu(K')$ could act transitively on $\mathbb{S}^{l-1}$, then $K'$ would act transitively on both $\mathbb{S}^k$ and $\mathbb{S}^{l-1}$ with $l-1\geq k + 1$. By the classification of transitive actions on spheres, $(K',H')$ is either $(SO(7),SO(6))$ with $\mu(SO(7)) = Spin(7)\subset SO(8)$ or $(SO(9),SO(8))$ with $\mu(SO(9))=Spin(9)\subset SO(16)$. But in both cases, by the classification in Theorem \ref{thmclassonecpx}, $\mu$ is not a class one representation for the pair $(K',H')$, which is a contradiction. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \smallskip The following was already used in \cite{GVWZ} and we state it as a lemma without proof.
\begin{lem}\label{lemanalytic} Suppose $f(t)$ is a $C^2$ nonconstant even function on $(-\varepsilon, \varepsilon)$ for some $\varepsilon >0$ with $f(0) = 0$. Then there is no constant $\gamma \geq 0$ such that the following inequality holds for all $t\in(-\varepsilon, \varepsilon)$: \begin{equation}\label{ineqnder} \gamma^2(f(t))^2 - (f'(t))^2 \geq 0. \end{equation} \end{lem} We will compute the sectional curvatures for some $2$-planes in our examples. The formula in terms of $P_t$ is well established in \cite{GZposRic} and we quote it for the convenience of the reader. \begin{thm}[Grove-Ziller]\label{thmformulasec} If $X, Y \in \mathfrak{p}$, the sectional curvatures of $M$ at $c(t)$ are determined by \begin{eqnarray*} (a)\quad \quad g(R(X,Y)X,Y) & = & Q(A_{-}(X,Y),[X,Y]) - \frac{3}{4}Q(P[X,Y]_{\mathfrak{p}},[X,Y]_{\mathfrak{p}}) \\ & & +Q(A_{+}(X,Y), P^{-1}A_{+}(X,Y)) - Q(A_{+}(X,X), P^{-1}A_{+}(Y,Y))\\ & & +\frac{1}{4}Q(P'X,Y)^2 -\frac{1}{4}Q(P'X,X)Q(P'Y,Y) \\ (b)\quad \quad g(R(X,Y)T,Y) & = & -\frac{1}{2} Q(P'X,P^{-1}A_{+}(Y,Y))+\frac{1}{2} Q(P'Y, P^{-1}A_{+}(X,Y)) \\ & & +\frac{3}{4}Q([X,Y],P'Y) \\ (c)\quad \quad g(R(X,T)X,T) & = & Q((-\frac{1}{2} P'' + \frac{1}{4} P'P^{-1}P')X,X). \end{eqnarray*} \end{thm} \begin{rem} In the formulas above, $[X,Y]_{\mathfrak{p}}$ is the $\mathfrak{p}$-component of $[X,Y]$ and $A_\pm : \mathfrak{p} \times \mathfrak{p} \longrightarrow \mathfrak{g}$ are defined as \begin{equation*} A_\pm(X,Y) = \frac{1}{2}([X, P_t Y]\mp [P_t X, Y]). \end{equation*} \end{rem} \smallskip \smallskip From Lemma \ref{lemweylsymmetry}, $h(t)$ is an even function with $h(0)\ne 0$ and $h(L) = 0$, so $f(t) = h^2(0)-h^2(t) = 1 - h^2(t)$ satisfies the assumptions in Lemma \ref{lemanalytic}. We will show that the inequality (\ref{ineqnder}) holds for some constant $\gamma \geq 0$ using the nonnegativity of the sectional curvature of a carefully chosen $2$-plane. Theorem \ref{thmorth} then follows from Lemma \ref{lemanalytic}.
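Although the lemma is quoted without proof, the argument behind it is a short Gronwall-type estimate; we sketch it here for completeness (this is a standard argument, not necessarily the one given in \cite{GVWZ}):

```latex
% Sketch: if (\ref{ineqnder}) held for all $t$, then $|f'(t)| \le \gamma |f(t)|$,
% so $g(t) := f(t)^2$ would satisfy
\[
  g'(t) = 2 f(t) f'(t) \;\le\; 2\gamma\, g(t), \qquad g(0) = 0,
\]
% and Gronwall's inequality gives $g(t) \le g(0)\, e^{2\gamma t} = 0$ for
% $0 \le t < \varepsilon$. By evenness, $f$ vanishes identically on
% $(-\varepsilon, \varepsilon)$, contradicting that $f$ is nonconstant.
```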
\smallskip \noindent \emph{Proof of Theorem \ref{thmorth}: } In the following argument, only the entries in the lower right $(n-r)\times (n-r)$-block of $\mathfrak{g}=\mathfrak{so}(n)$ are involved, so without loss of generality we may assume that $r=0$ or equivalently the representation $\rho$ is a sum of $\alpha$ copies of $\mu$. \smallskip \textsc{Case 1:} The representation $\mu$ is of real type. Since $P_0$ defined in (\ref{eqnPinitial}) is symmetric and positive definite, we can write \begin{equation*} \begin{pmatrix} f_{1,1} & \cdots & f_{1,\alpha} \\ \vdots & \ddots & \vdots \\ f_{\alpha,1} & \cdots & f_{\alpha,\alpha} \end{pmatrix} = A D A^\top, \end{equation*} where $A = (A_{i,j})_{\alpha\times \alpha}$ is an orthogonal matrix and $D = \mathrm{diag} (d_1, \ldots, d_\alpha)$ is a diagonal matrix with positive entries. Define the following vectors in $\mathfrak{q}_1 + \cdots + \mathfrak{q}_\alpha$: \begin{eqnarray} X^u & = & \sum_{i=1}^{l-1}b_iE_{i,u,1} + E_{l,u,2}\label{eqnXu}\\ Y^u & = & \sum_{i=1}^{l-1}b_iE_{i,u,2} + E_{l,u,1},\label{eqnYu} \end{eqnarray} where $u = 1, ..., \alpha$ and $\sum_{i=1}^{l-1}b_i^2 = 1$. Further conditions of the $b_i$'s will be determined later on. In the matrix $A$, there is a column, say the $i_0$-th column, with $A_{\alpha, i_0}\neq 0$. We denote $A_{u,i_0}$ by $A_u$, $u = 1,\ldots, \alpha$, and define the following two vectors $X$, $Y$ in $\mathfrak{p}$: \begin{equation} X = \sum_{u=1}^{\alpha} A_u X^u, \quad Y = \sum_{u=1}^{\alpha} A_u Y^u. \label{eqnXY} \end{equation} From the definitions of $X^u$ and $Y^v$, it is easy to see that $[X^u, Y^u] = 0$, and if $u \neq v$, then \[ [X^u, Y^v] = \sum_{i=1}^{l-1} b_i (E_{vl, i+(u-1)l} + E_{i+(v-1)l,ul}).\] Hence \begin{eqnarray*} [X,Y] & = & \sum_{u,v =1}^{\alpha}A_u A_v [X^u, Y^v]= \sum_{i=1}^{l-1}\sum_{u\neq v}A_u A_v b_i(E_{vl, i+(u-1)l} + E_{i+(v-1)l,ul})\\ & = & 0. 
\end{eqnarray*} The fact that $X$ commutes with $Y$ makes the computation of the sectional curvature of the $2$-plane spanned by $X^*$ and $Y^*$ easier since the first two terms in the curvature formula $(a)$ in Theorem \ref{thmformulasec} vanish. The other four terms are computed in Proposition \ref{propComputationReal} below. If we plug the result of each term into the formula of the sectional curvature, we have \begin{eqnarray*} ||X^*\wedge Y^*||^2 \sec(X^*,Y^*)_{c(t)} & = & (h^2(t)-1)^2 Q(X_0, P_t^{-1}(X_0)) - (d_{i_0} A_\alpha)^4 h^2(t)(h'(t))^2\\ & - & Q(A_{+} (X,X), P_t^{-1}A_{+} (X,X)). \end{eqnarray*} Here $X_0 \in \mathfrak{p}$ is specified in Proposition \ref{propComputationReal} and is orthogonal to $\mathfrak{k}^-$ with respect to $Q$. Since $P_t$ is positive definite and so is $P_t^{-1}$, we have $Q(A_{+} (X,X), P_t^{-1}A_{+} (X,X))\geq 0$. Therefore $\sec(X^*,Y^*) \geq 0$ implies that \begin{equation*} (h^2(t)-1)^2 Q(X_0, P_t^{-1}(X_0)) -\left(d_{i_0} A_\alpha \right)^4 h^2(t)(h'(t))^2 \geq 0. \end{equation*} The existence of the constant $\gamma \geq 0$ follows from the facts that $A_\alpha \neq 0$ and $Q(X_0,P_t^{-1}(X_0))$ is bounded from above near $t=0$. \smallskip \textsc{Case 2:} The representation $\mu$ is not of real type and the multiplicity of $\mu$ in $\rho$ equals $1$. In this case $P_0$ is a scalar matrix and $P_t$ is a diagonal matrix. It is easy to see that the proof in the previous case works if we choose $X=X^1$ and $Y = Y^1$ in (\ref{eqnXu}) and (\ref{eqnYu}). \smallskip \textsc{Case 3:} The representation $\mu$ is not of real type and the multiplicity of $\mu$ in $\rho$ is bigger than $1$. Let $p=\frac{1}{2} l$ and $\beta = 2\alpha$ if $\mu$ is of complex type and let $p=\frac{1}{4} l$ and $\beta = 4\alpha$ if it is of quaternionic type. In both cases, we have $p\geq k+2$. In each case we define the vectors $E_{a,i,\xi}$ for $a=1,\ldots, p$, $i = 1,\ldots,\beta$ and $\xi = 1, \ldots, n-m$.
Then similarly we can define vectors $X^u$ and $Y^u$ for $u=1,\ldots, \beta$ and use them to define the vectors $X$ and $Y$ as in \textsc{Case 1}. The formulas for $P_t(X)$ and $P_t(Y)$ are obtained from Theorem \ref{thmPijcpx} and Theorem \ref{thmPijquater} respectively and the rest of the proof follows \textsc{Case 1}. Note that the number of the constants $b_i$ in $X$ or $Y$ is equal to $p -1$. The existence of the vector $X_0$ which is orthogonal to $\mathfrak{k}^-$ follows from the fact that $p \geq k+2$. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \smallskip In the case where $\mu$ is of real type, the non-vanishing terms in the sectional curvature of the $2$-plane spanned by $X$ and $Y$ are computed in the following \begin{prop}\label{propComputationReal} For the vectors $X$ and $Y$ defined in (\ref{eqnXY}), by choosing proper values for the constants $b_i$, we have \begin{enumerate} \item There exists some $X_0 \in \mathfrak{p}$ which is orthogonal to $\mathfrak{k}^-$ with respect to $Q$ such that $A_{+} (X,Y) = (h^2(t)-1)X_0$; \item $A_{+} (X,X) = A_{+} (Y,Y)$; \item $Q(P_t'(X),Y) = 0$; \item $-\dfrac{1}{4}Q(P_t'(X),X)Q(P_t'(Y),Y) = -\left(d_{i_0} A_\alpha \right)^4 h^2(t)(h'(t))^2$. \end{enumerate} \end{prop} \noindent \emph{Proof : } First we compute the endomorphism $P_t$ on $X$ and $Y$.
From the defining equations (\ref{eqnXu}) of $X^u$ and (\ref{eqnYu}) of $Y^u$, we have \begin{equation}\label{eqnPXu} P_t(X^u) = \sum_{i=1}^{l-1}\sum_{r=1}^{\alpha}b_i f_{u,r}E_{i,r,1} + \sum_{r=1}^{\alpha}p_{u,r}(t)E_{l,r,2} \end{equation} and \begin{equation}\label{eqnPYv} P_t(Y^v) = \sum_{j=1}^{l-1}\sum_{s=1}^{\alpha}b_j f_{v,s}E_{j,s,2} + \sum_{s=1}^{\alpha}p_{v,s}(t)E_{l,s,1}, \end{equation} then \begin{eqnarray*} [X^u, P_t(Y^v)] & = & [\sum_{i=1}^{l-1}b_iE_{i+(u-1)l,\alpha l+1} + E_{ul,\alpha l+2}, \quad \sum_{j=1}^{l-1}\sum_{s=1}^{\alpha}b_j f_{v,s}E_{j+(s-1)l,\alpha l+2} + \sum_{s=1}^{\alpha}p_{v,s}(t)E_{sl,\alpha l+1}]\\ & = & \left(\sum_{i=1}^{l-1} b_i^2f_{v,u} - p_{v,u}(t)\right)E_{\alpha l+2,\alpha l+1} + \sum_{i=1}^{l-1}\sum_{s=1}^{\alpha}b_i(f_{v,s}E_{i+(s-1)l,ul}- p_{v,s}(t)E_{i+(u-1)l,sl}), \end{eqnarray*} and \begin{eqnarray*} [P_t(X^u), Y^v] & = & [\sum_{i=1}^{l-1}\sum_{r=1}^{\alpha}b_i f_{u,r}E_{i+(r-1)l,\alpha l+1} + \sum_{r=1}^{\alpha}p_{u,r}(t)E_{rl,\alpha l+2}, \quad \sum_{j=1}^{l-1}b_j E_{j+(v-1)l,\alpha l+2} + E_{vl,\alpha l+1}]\\ & = & \left(\sum_{i=1}^{l-1}b_i^2f_{u,v}-p_{u,v}(t)\right)E_{\alpha l+2,\alpha l+1} + \sum_{i=1}^{l-1} \sum_{r=1}^{\alpha}b_i(f_{u,r}E_{vl,i+(r-1)l}-p_{u,r}(t)E_{rl,i+(v-1)l}). \end{eqnarray*} Therefore \begin{equation}\label{eqnApXuYv} A_{+} (X^u,Y^v) = \dfrac{1}{2} \sum_{i=1}^{l-1}\sum_{s=1}^{\alpha}b_i(f_{v,s}E_{i+(s-1)l,ul}+f_{u,s}E_{i+(s-1)l,vl} -p_{v,s}(t)E_{i+(u-1)l,sl} - p_{u,s}(t)E_{i+(v-1)l,sl}).\\ \end{equation} From the above equation, only the terms as $E_{j+(r-1)l,wl}$ have nonzero coefficients in $A_{+} (X,Y)$ and it is denoted by $c_{j,r,w}$. 
From the formula (\ref{eqnApXuYv}) and the bi-linearity of $A_{+}$, we have \begin{eqnarray}\label{eqncjrw1} c_{j,r,w} & = & \dfrac{1}{2} b_j\left( \sum_{v=1}^{\alpha}(f_{v,r}A_wA_v - p_{v,w}(t)A_rA_v) + \sum_{u=1}^{\alpha}(f_{u,r}A_u A_w - p_{u,w}A_uA_r)\right)\nonumber\\ & = & b_j \left(A_w\sum_{v=1}^{\alpha} f_{v,r}A_v - A_r\sum_{v=1}^{\alpha} p_{v,w}(t)A_v\right). \end{eqnarray} We can compute the terms in (\ref{eqncjrw1}) explicitly as follows, \begin{equation}\label{eqnc1} \sum_{v=1}^{\alpha}f_{v,r}A_v = \sum_{v=1}^{\alpha}\sum_{i=1}^{\alpha}A_{v,i}d_i A_{r,i} A_{v,i_0} = (A^\top A D A^\top)_{i_0,r} = d_{i_0} A_{r,i_0} = d_{i_0} A_r \end{equation} and \begin{eqnarray} \sum_{v=1}^{\alpha}p_{v,w}(t)A_v & = & \sum_{v=1}^{\alpha}(a_v a_w h^2(t) A_v +f_{v,w}A_v -a_v f_{w,\alpha}A_v)\nonumber \\ & = & d_{i_0}A_w + (a_w h^2(t) -f_{w,\alpha})\sum_{v=1}^{\alpha}a_vA_v \nonumber\\ & = & d_{i_0}A_w + a_w(h^2(t) -1) \sum_{v=1}^{\alpha}f_{v,\alpha}A_v\nonumber\\ & = & d_{i_0}A_w + d_{i_0}a_w A_\alpha (h^2(t) - 1),\label{eqnc2} \end{eqnarray} where the first equality follows from Theorem \ref{thmPtrealtype}. By substituting the new expressions (\ref{eqnc1}) and (\ref{eqnc2}) back into the expression (\ref{eqncjrw1}) for $c_{j,r,w}$, we have \begin{equation}\label{eqnc} c_{j,r,w} = - d_{i_0} A_r A_\alpha a_w b_j (h^2(t) - 1). \end{equation} If $r\neq w$, then $E_{j+(r-1)l,wl}$ is orthogonal to $\mathfrak{k}^-$ with respect to $Q$. If $r=w$, by the formula (\ref{eqnc}) for $c_{j,r,w}$, $c_{j,r,r}$ is a multiple of $b_j$. From the assumption that the representation $\rho$ is the direct sum of $\mu$ and the embedding in (\ref{eqnrhox}), the image $\mu_*(v)$ (for any $v\in \mathfrak{k}'$) is block-wise diagonally embedded in $\mathfrak{so}(m)$. Hence if for some $r$, the vector $v_r = \sum_{j=1}^{l-1}c_{j,r,r}E_{j+(r-1)l,rl}$ is orthogonal to $\mathfrak{k}^-$, then all the vectors $v_q$ are orthogonal to $\mathfrak{k}^-$.
By Lemma \ref{lemnontransitive} of the non-transitivity of the action $\mu(K')$ on the sphere $SO(l)/SO(l-1)$ and by choosing the proper values of $b_i$'s, $v_r$ is orthogonal to $\mathfrak{k}^-$. Therefore $A_{+} (X,Y) = (h^2(t) - 1)X_0$ for some $X_0 \in \mathfrak{p}$ which is orthogonal to $\mathfrak{k}^-$ with respect to $Q$ and $(1)$ is proved. \smallskip By (\ref{eqnXu}) and (\ref{eqnPXu}), we have \begin{eqnarray*} [X^u, P_t(X^v)] & = & [\sum_{i=1}^{l-1}b_iE_{i+(u-1)l,\alpha l+1} + E_{ul,\alpha l+2}, \quad \sum_{j=1}^{l-1}\sum_{s=1}^{\alpha}b_j f_{v,s}E_{j+(s-1)l,\alpha l+1} + \sum_{s=1}^{\alpha}p_{v,s}(t)E_{sl,\alpha l+2}]\nonumber \\ & = & \sum_{i\neq j\mbox{, or }u\neq s}b_i b_jf_{v,s}E_{j+(s-1)l,i+(u-1)l} + \sum_{s\neq u}p_{v,s}(t)E_{sl,ul}. \end{eqnarray*} By (\ref{eqnYu}) and (\ref{eqnPYv}), we have the same result for $[Y^u, P_t(Y^v)]$, therefore $A_{+} (X,X) = [X,P_t(X)] = [Y,P_t(Y)] = A_{+} (Y,Y)$ which proves the formula in $(2)$. \smallskip Next we will prove the formulas in $(3)$ and $(4)$ which will finish the proof of the proposition. By (\ref{eqnPXu}) and the differential equation (\ref{eqnpijode}) for $p_{ij}$ we have \begin{equation*} P_t'(X^u) = \sum_{r=1}^{\alpha}p'_{r,u}(t)E_{l,r,2} = 2h(t)h'(t)\sum_{r=1}^{\alpha}a_u a_r E_{l,r,2}, \end{equation*} so \begin{eqnarray*} Q(P_t'(X^u), Y^v) & = & 2 a_u h(t)h'(t)\sum_{r=1}^{\alpha}a_r \left(\sum_{j=1}^{l-1}b_j Q(E_{l,r,2}, E_{j,v,2}) + Q(E_{l,r,2}, E_{l,v,1})\right)\\ & = & 0, \end{eqnarray*} and then $Q(P_t'(X),Y) = 0$. By taking the inner product with $X^v$ instead of $Y^v$, we have \begin{eqnarray*} Q(P_t'(X^u), X^v) & = & 2 a_u h(t)h'(t)\sum_{r=1}^{\alpha}a_r \left(\sum_{j=1}^{l-1}b_j Q(E_{l,r,2}, E_{j,v,1}) + Q(E_{l,r,2}, E_{l,v,2})\right)\\ & = & 2a_u a_v h(t)h'(t). 
\end{eqnarray*} Therefore \begin{eqnarray*} Q(P_t'(X),X) & = & 2\left(\sum_{u,v =1}^{\alpha}A_uA_v a_u a_v \right)h(t)h'(t) = 2\left(\sum_{u =1}^{\alpha}A_u a_u \right)^2 h(t)h'(t)\\ & = & 2 \left( d_{i_0} A_\alpha \right)^2 h(t)h'(t), \end{eqnarray*} where the last equality follows either from (\ref{eqnc1}) or (\ref{eqnc2}). Similarly for $Q(P_t'(Y),Y)$ and hence $(4)$ follows, which finishes the proof. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \smallskip \begin{rem} In the introduction, we pointed out that there are unknown but interesting cases when $m\geq k+2$ and $n=m+1$. The minimal dimension of these manifolds is $15$ when $k=2$, $m=5$ and $n=6$. This manifold has cohomology ring different from the two $15$ dimensional symmetric spaces $\mathbb{S}^{15}$ and $SO(8)/(SO(5)\times SO(3))$. The geometry of these examples will be studied in another paper. \end{rem} \smallskip \section{Cohomogeneity One Manifolds for $G=U(n)$ And $Sp(n)$}\label{secunitarysymp} In this section, we will generalize our examples to the cases where $G = U(n)$ and $Sp(n)$. First let us state the theorem in each case. \begin{thm}\label{thmexamplescomplex} Let $K'/H'=\mathbb{S}^k$ with $k\geq 2$ and $\rho : K'\longrightarrow U(m)$ be a faithful representation. Suppose $\rho$ contains a class one representation $\mu: K' \longrightarrow U(l)$ of the pair $(K',H')$ with $2l\geq k+2$ and the multiplicity of $\mu$ in $\rho$ is $1$. For any integer $n\geq m+2$, set $G=U(n)$ and \begin{eqnarray} K^- & = & \rho(K')\times U(n-m) \subset U(m)\times U(n-m) \subset U(n) \nonumber\\ K^+ & = & \rho(H')\times U(n-m+1)\subset U(m-1)\times U(n-m+1) \subset U(n) \label{groupdiagramcomplex}\\ H & = & \rho(H')\times U(n-m) \subset U(n),\nonumber \end{eqnarray} then the cohomogeneity one manifold $M$ defined by the groups $H\subset \set{K^-,K^+}\subset G$ does not admit a $G$ invariant metric with non-negative sectional curvature. 
\end{thm} \begin{rem} Proposition \ref{propdimcomparecpx} lists the complex class one representations which have dimension smaller than $\frac{1}{2} (k+2)$. It shows that if $\mathrm{mul}(\mu,\rho) = 1$, then only the defining representations of $SU(l)$, $U(l)$, $Sp(l)$ and $Sp(l)\times U(1)$ are excluded by the above theorem. The cohomogeneity one manifolds from these representations are equivariantly diffeomorphic to the homogeneous spaces $U(n+1)/\Phi(K')\times U(n-l+1)$ where $\Phi : K' \longrightarrow U(l)$ is the defining representation, so they carry non-negatively curved metrics. \end{rem} Similar to Lemma \ref{lemnontransitive} in the orthogonal case, we have the following lemma: \begin{lem}\label{lemnontransitivecpx} Assume that $K'$, $H'$ and $\mu$ are as in Theorem \ref{thmexamplescomplex} with $2l\geq k+2$. Then $\mu(K')$ does not act transitively on $\mathbb{C}\mathrm{P}^{l-1} = U(l)/(U(l-1)\times U(1))$. \end{lem} \noindent \emph{Proof : } From the classification of the transitive actions on complex projective spaces, see \cite{Bessegeodesic}, p.195, we only need to check the pairs $(SU(2),U(1))$ for $\mathbb{C}\mathrm{P}^1$, $(U(n),U(n-1))$ for $\mathbb{C}\mathrm{P}^{n-1}$ and $(Sp(n),Sp(n-1))$ for $\mathbb{C}\mathrm{P}^{2n-1}$. In the first case, the subgroup $U(1) \subset SU(2)$ does not fix any vector in $\mathbb{C}^2$. In the last two cases, we have $2l=k+1$ which contradicts the assumption on $l$. Therefore the action of $\mu(K')$ on $\mathbb{C}\mathrm{P}^{l-1}$ is not transitive. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \smallskip \smallskip \noindent \emph{Sketch of the Proof of Theorem \ref{thmexamplescomplex}: } Suppose $\rho$ has the decomposition $\tau \oplus \mu $ and $\mu$ is not equivalent to any of the subrepresentations in $\tau$. Let $c(t)$ be the normal geodesic connecting the two singular orbits $B_\pm = G/K^\pm$ with $c(0)=p_- \in B_-$ and $c(L)=p_+ \in B_+$.
Similar to the orthogonal case, the Weyl group is isomorphic to $\mathbb{Z}_2 \times \mathbb{Z}_2$ and the generators have the following representatives: \begin{equation*} w_+ = \begin{pmatrix} I_{m-1} & & \\ & -1 & \\ & & I_{n-m} \end{pmatrix}, \end{equation*} and \begin{equation*} w_- = \begin{pmatrix} A_1 & & & & \\ & & A_2 & & \\ & & & \varepsilon & \\ & & & & I_{n-m} \end{pmatrix} \mbox{ or } \begin{pmatrix} A_1 & & \\ & \varepsilon I_{l} & \\ & & I_{n-m} \end{pmatrix}, \end{equation*} where $A_1 \in U(r)$, $A_2 \in U(l-1)$ and $\varepsilon = \pm 1$. In addition to the matrices $\set{E_{i,j}}_{1\leq i\neq j \leq n}$, let $F_{i,j}$ be the symmetric matrix with $\imath = \sqrt{-1}$ in the $i,j-$ and $j,i-$entries if $i\neq j$ and $\sqrt{2}\imath$ in the $i,i-$entry if $i=j$. Then $\set{E_{i,j}}$ and $\set{F_{i,j}}$ form an orthonormal basis of the Lie algebra $\mathfrak{u}(n)$ of $U(n)$ with $Q = -\frac{1}{2} \Re \mathrm{Tr}$, where $\Re$ takes the real part of a complex number. Without loss of generality, we may assume that $r=0$, i.e., $\rho=\mu$ and then $m=l$. Let $\mathfrak{p}$ be the orthogonal complement of the Lie algebra $\mathfrak{h}$ of $H$ in the Lie algebra $\mathfrak{g}=\mathfrak{u}(n)$ of $G$ and $\mathfrak{q}$ be the subspace of $\mathfrak{p}$ spanned by the vectors $\set{E_{a,i}}$ and $\set{F_{a,i}}$ for $a=1,\ldots, m$ and $i=m+1,\ldots, n$. Let $h^2(t) = g(E_{m,m+1}^*,E_{m,m+1}^*)_{c(t)}$ and then we may assume that $h(0) = 1$. From Schur's Lemma and Wilking's Rigidity Theorem, we have \begin{prop}\label{propPtCpx} Suppose that the metric $g$ is non-negatively curved, then we have \begin{enumerate} \item $P(E_{a,i}) =E_{a,i} \mbox{ and } P(F_{a,i}) = F_{a,i}$, if $a=1,\cdots, m-1$; \item $P(E_{m,i}) =h^2(t) E_{m,i} \mbox{ and } P(F_{m,i}) = h^2(t) F_{m,i}$, \end{enumerate} where $i=m+1, \ldots, n$. 
\end{prop} From the collapsing of the Killing vector field $E^*_{m,m+1}$ at $p_+$ and Weyl symmetry at $p_-$, $h(t)$ is an even function and $h(L)=0$. Let \begin{eqnarray*} X & = & \sum_{i=1}^{m-1} b_i E_{i,m+1} + E_{m,m+2} + \sum_{i=1}^{m-1} c_i F_{i,m+1} + F_{m,m+2} \\ Y & = & \sum_{j=1}^{m-1} b_j E_{j,m+2} + E_{m,m+1} + \sum_{j=1}^{m-1} c_j F_{j,m+2} + F_{m,m+1}, \end{eqnarray*} where $\sum_{i=1}^{m-1} (b_i^2+c_i^2) = 2$. A computation shows that $[X,Y] = 0$ and results in the following claim: \begin{claim} For properly chosen values of the constants $b_i$ and $c_i$, we have \begin{enumerate} \item There exists some $X_0 \in \mathfrak{p}$ which is orthogonal to $\mathfrak{k}^-$, with respect to $Q$ such that $A_{+}(X,Y) = (h^2(t)-1)X_0$; \item $A_{+}(X,X) = A_{+}(Y,Y)$; \item $Q(P'_t(X),Y) = 0$; \item $-\dfrac{1}{4}Q(P'_t(X),X)Q(P'_t(Y),Y) = -4 (h(t)h'(t))^2$. \end{enumerate} \end{claim} The existence of $X_0$ follows from the non-transitivity of the $\mu(K')$ action on $\mathbb{C}\mathrm{P}^{l-1} = U(l)/(U(l-1)\times U(1))$ proved in Lemma \ref{lemnontransitivecpx}. The same argument as in the orthogonal case shows that the non-negativity of the sectional curvature of the $2-$plane spanned by $X^*$ and $Y^*$ gives the desired contradiction. This completes the proof in the unitary case. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \smallskip Finally we discuss the case where $G=Sp(n)$. \begin{thm}\label{thmexamplesquater} Let $K'/H'=\mathbb{S}^k$ with $k\geq 2$ and $\rho : K'\longrightarrow Sp(m)$ be a faithful representation. Suppose $\rho$ contains a class one representation $\mu : K' \longrightarrow Sp(l)$ of the pair $(K',H')$ with $4 l\geq k+2$ and the multiplicity of $\mu$ in $\rho$ is $1$. 
For any integer $n\geq m+2$, set $G=Sp(n)$ and \begin{eqnarray} K^- & = & \rho(K')\times Sp(n-m) \subset Sp(m)\times Sp(n-m) \subset Sp(n) \nonumber\\ K^+ & = & \rho(H')\times Sp(n-m+1)\subset Sp(m-1)\times Sp(n-m+1) \subset Sp(n) \label{groupdiagramqua}\\ H & = & \rho(H')\times Sp(n-m) \subset Sp(n),\nonumber \end{eqnarray} then the cohomogeneity one manifold $M$ defined by the groups $H\subset \set{K^-,K^+}\subset G$ does not admit a $G$ invariant metric with non-negative sectional curvature. \end{thm} \begin{rem} Proposition \ref{propdimcomparequa} lists the quaternionic class one representations which have dimension smaller than $\frac{1}{4}(k+2)$. It shows that only the standard representation of $Sp(l)$ for the pair $(Sp(l),Sp(l-1))$ is excluded by the above theorem. The cohomogeneity one manifold from this representation has a non-negatively curved metric since it is equivariantly diffeomorphic to the homogeneous space $Sp(n+1)/Sp(l)\times Sp(n-l+1)$. \end{rem} We have the following result on quaternionic projective spaces which is analogous to Lemma \ref{lemnontransitive} and Lemma \ref{lemnontransitivecpx}: \begin{lem}\label{lemnontransitivequater} Assume that $K'$, $H'$ and $\mu$ are as in Theorem \ref{thmexamplesquater} with $4l\geq k+2$. Then $\mu(K')$ does not act transitively on $\mathbb{H}\mathrm{P}^{l-1} = Sp(l)/(Sp(l-1)\times Sp(1))$. \end{lem} \noindent \emph{Proof : } From the classification of the transitive actions on $\mathbb{H}\mathrm{P}^{l-1}$, see \cite{Bessegeodesic}, p.195, we would have $\mu(K') = Sp(l)$, $H'=Sp(l-1)$, and $\mu$ the standard representation of $K' = Sp(l)$. However, in this case $k = 4l -1$, which contradicts the assumption $4l \geq k+2$. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \smallskip \noindent \emph{Sketch of the Proof of Theorem \ref{thmexamplesquater}: } The proof follows the complex case $G= U(n)$ step by step.
Suppose $\set{1,\imath, \jmath, \kappa}$ is the basis of $\mathbb{H}$ over the reals such that $\imath^2 = \jmath^2 = \kappa^2 = -1$ and $\kappa = \imath \jmath$. Let $G_{i,j}$ denote the symmetric matrix with $1$ in the $i,j-$ and $j,i-$entries if $i \ne j$, and $\sqrt{2}$ in the $i,i-$entries. Thus $\set{E_{i,j}, \imath G_{i,j}, \jmath G_{i,j}, \kappa G_{i,j}}$ forms an orthonormal basis of the Lie algebra $\mathfrak{sp}(n)$ of $Sp(n)$ with $Q= - \frac{1}{2} \Re \mathrm{Tr}$. Without loss of generality, we may assume that $r = 0$, i.e., $m = l$. Let $h^2(t) = g(E^*_{m,m+1},E^*_{m,m+1})_{c(t)}$ and we may assume that $h(0) = 1$. For the endomorphism $P_t$, one proves the following proposition which is similar to the orthogonal and complex cases. \begin{prop}\label{propPtQuater} Suppose that the metric $g$ is non-negatively curved, then we have \begin{enumerate} \item $P(E_{a,i}) = E_{a,i}$ and $P(\theta G_{a,i}) = \theta G_{a,i}$, if $a= 1,\cdots, m-1$; \item $P(E_{m,i}) = h^2(t) E_{m,i}$ and $P(\theta G_{m,i}) = h^2(t) \theta G_{m,i}$, \end{enumerate} where $i = m+1, \ldots, n$ and $\theta$ can be one of $\imath$, $\jmath$ and $\kappa$. \end{prop} Furthermore, $h(t)$ is an even function with $h(L) = 0$. To get the desired contradiction we choose the vectors $X$ and $Y$ as follows: \begin{eqnarray*} X & = & \sum_{i=1}^{m-1} b_i E_{i,m+1} + E_{m, m+2} + \sum_{i=1}^{m-1} c_i G_{i,m+1} + c G_{m, m+2}, \\ Y & = & \sum_{j=1}^{m-1} b_j E_{j,m+2} + E_{m, m+1} + \sum_{j=1}^{m-1} c_j G_{j,m+2} + c G_{m, m+1}, \end{eqnarray*} where $c=\imath + \jmath +\kappa$, $b_i$'s are real numbers and $c_i$'s are pure quaternionic numbers, i.e. the real part is zero. These constants satisfy the equation $1 - c^2 - \sum_{i=1}^{l-1}(b_i^2 -c_i^2) = 0$. 
A computation shows that $[X,Y] = 0$ and yields the following claim: \begin{claim} For properly chosen values of the constants $b_i$ and $c_i$, we have \begin{enumerate} \item There exists some $X_0 \in \mathfrak{p}$ which is orthogonal to $\mathfrak{k}^-$ with respect to $Q$ such that $A_{+}(X,Y) = (h^2(t)-h^2(0))X_0$; \item $A_{+}(X,X) = A_{+}(Y,Y)$; \item $Q(P'_t(X),Y) = 0$; \item $-\dfrac{1}{4}Q(P'_t(X),X)Q(P'_t(Y),Y) = -16 (h(t)h'(t))^2$. \end{enumerate} \end{claim} The existence of $X_0$ follows from the non-transitivity of the $\mu(K')$ action on $\mathbb{H}\mathrm{P}^{l-1}$ proved in Lemma \ref{lemnontransitivequater}. The contradiction now follows as in the orthogonal case. \hfill \hbox{$\rlap{$\sqcap$}\sqcup$} \medskip \medskip
\section{introduction} There is a continuing effort to describe supercooled liquids using different approaches \cite{Goetze1,Goetze2,Jaeckle,Leutheusser}. However, the phenomenon is generally not completely understood. Supercooled fluids normally reveal a stretched exponential decay of typical (e.g. density--density) correlation functions and a non-Arrhenius behavior of the associated relaxation times. This slowing down in the dynamical behavior can be illustrated by a strongly curved trajectory in the Arrhenius plot (relaxation time $\tau $ versus the inverse temperature $T^{-1}$), empirically described by the well known Williams--Landel--Ferry (WLF) relation \cite{WLF}. But in contrast to conventional phase transitions, a long range order is not developed.\\ The characteristic slowing down of the dynamics is usually explained by an increasing cooperativity of local processes with decreasing temperature \cite{Adams}. This behavior is a universal phenomenon of the glass transition.\\ Mode coupling theories \cite{Goetze1,Goetze3,Goetze4} (MCT) predict the existence of an ergodic behavior above a critical temperature $T_c$ and a nonergodic behavior below $T_c$. Note that $T_c$ lies between the melting temperature $T_m$ and the glass temperature $T_g$, i.e. $T_m>T_c>T_g$. At $T_c$ the system undergoes a sharp transition from an ergodic state to a state with partially frozen (density) fluctuations. The slow $\alpha$--process within the MCT is thought to correspond to the actual dynamic glass transition whereas the fast $\beta $--process is often identified with a cage rattling or the boson peak.\\ Actually, the nonergodic states obtained from the original MCT below $T_c$ are approximately stable only for a finite time interval. Strongly cooperative processes lead to a slow decay of apparently frozen structures.
This slow decay shows the typical properties mentioned above for the dynamics of the main glass transition (WLF-like behavior of the relaxation time, stretched exponential decay of the correlation function). This effect can be partially described in terms of an extended mode coupling theory \cite{Goetze2,Goetze3} introducing additional hopping processes.\\ There also exist various alternative descriptions \cite{Jaeckle,Fredrickson1} which explain the cooperative motion of the particles inside a supercooled liquid below $T_c$. One of these possibilities is the spin facilitated Ising model \cite{Fredrickson1,Fredrickson2,Fredrickson3,Fredrickson4}, originally introduced by Fredrickson and Andersen. The basic idea of these models consists of a coarse graining of space and time scales and simultaneously a reduction of the degrees of freedom. In detail this means: \begin{enumerate} \item {\it Coarse graining of spatial scales}: The supercooled liquid is divided into cells in such a way that each cell contains a sufficiently large number of particles which realize a representative number of molecular motions. \item {\it Reduction of the degrees of freedom}: Each cell is characterized by only one degree of freedom, i.e. the cell structure enables us to attach to each cell an observable $s_j$ (usually denoted as spin) which characterizes the actual dynamic state of the particles inside the cell $j$. The usual realization is given by the local density $\rho _j$ (particles per cell) with $s_j = 1$ if $\rho _j>\bar \rho $ and $s_j=-1$ if $\rho _j<\bar \rho $, where $\bar \rho $ is the averaged density of the system. This mapping consequently implies different mobilities of the particles inside such a cell, i.e. $s_j=1$ corresponds to the immobile, solid like state and $s_j=-1$ to the mobile state of cell $j$. The set of all spin observables forms a configuration; the time evolution of the corresponding probability distribution obeys a master equation.
\item {\it Coarse graining of the time scale}: This step is based on the assumption that fast processes (e.g. the $\beta$--process) are well separated from the slow $\alpha$--process. Hence, the original Liouville equation of the supercooled liquid can be projected onto a simple master equation without any memory terms. Therefore, the spin facilitated kinetic Ising model is suitable for the description of a supercooled liquid well below the critical temperature $T_c$ of the MCT and for sufficiently large time scales. \end{enumerate} To make the time evolution of the glass configurations more transparent we follow the idea of Fredrickson and Andersen \cite{Fredrickson1,Fredrickson2,Fredrickson3,Fredrickson4}, i.e. we suppose that the basic dynamics is a simple flip process $s_j=+1\leftrightarrow s_j=-1$ controlled by the thermodynamical Gibbs measure and by self-induced topological restrictions. In particular, an elementary flip at a given cell is allowed only if the number of nearest neighbored mobile cells ($s_j=-1$) is equal to or larger than a restriction number $f$ with $0<f<z$ ($z$: coordination number). So, elementary flip processes and geometrical restrictions lead to the cooperative rearrangement of the underlying system and therefore to a mesoscopic model describing a supercooled liquid below $T_c$. Such models \cite{Fredrickson1,Fredrickson2,Fredrickson3,Fredrickson4} are denoted as $f$--spin facilitated Ising models on a $d$--dimensional lattice, SFM$\left[ f,d\right]$. The SFM$\left[ f,d\right] $ can be classified as an Ising-like model whose kinetics is confined by restrictions on the ordering of the nearest neighbors of a given lattice cell. These self-adapting environments influence in particular the long time behavior of the spin--spin and therefore of the corresponding density--density correlation functions.
These models were studied numerically \cite{Schulz1,Schulz2,Schulz3,Harrowell} (SFM$\left[ 2,2\right] $) and recently also analytically \cite{Schulz4} (SFM$\left[ f,d\right] $).\\ For the present investigations, we generalize the usual SFM$\left[ f,d\right] $ by introducing additional short range interactions which favor (antiferromagnetic case) or prevent (ferromagnetic case) the formation of liquid--solid (mobile--immobile) interfaces. Our aim is to study a two--dimensional generalized SFM$\left[ 2,2\right]$ using Monte--Carlo simulations. \section{model} We consider a generalized spin facilitated Ising model with nearest-neighbor interactions in two dimensions. The Hamiltonian of the model is the same as that of the standard two-dimensional Ising model with an external field \begin{equation} H= - h (J\ \sum_{<ij>} s_i\ s_j + \sum_{i} s_i)\;,\qquad s_i=\pm 1. \label{esho10} \end{equation} In our notation, the inverse temperature $1/T$ and the Boltzmann constant $k$ have been absorbed into the field $h$. Physically, the field corresponds to the difference of the energy per cell in the liquid and the solid state. In our later discussions, for convenience, we simply denote $h=1/T$. Here the coupling constant $J$ describes the nearest-neighbor interactions. In the case of $J=0$, the original Fredrickson model is recovered. As mentioned above, the dynamic evolution of the generalized SFM$\left[ 2,2\right] $ is subject to the topological constraint that a spin flip is only possible if \begin{equation} \frac{1}{2} \sum_{i} (1+ s_i) \leq f, \label{esho20} \end{equation} where the sum runs over the nearest neighbors of the spin to be flipped. In our simulations, the Metropolis algorithm is used and $f$ takes its typical value $f=d$ with $d$ being the space dimension; here we chose $d=2$. To ensure that the system evolves into the physical section of the phase space, the initial configuration is always taken to be $s_i \equiv -1$, which means that we start from the completely liquid--like state.
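The constrained Metropolis dynamics described above can be sketched in a few lines. The following is a minimal illustration only, not the authors' production code: all function and variable names are our own, one "sweep" attempts $L^2$ random single-spin flips, and the Boltzmann factor is $\exp(-\Delta H)$ since the temperature is already absorbed into $h$.

```python
import numpy as np

rng = np.random.default_rng(0)

def sfm_sweep(spins, h, J, f=2):
    """One Monte Carlo sweep of the generalized SFM[2,2] (sketch).

    A flip at site (i, j) is attempted only if the kinetic constraint
    (1/2) * sum_nn (1 + s) <= f holds (at most f immobile, +1, nearest
    neighbours); acceptance follows Metropolis with the Hamiltonian
    H = -h (J * sum_<ij> s_i s_j + sum_i s_i).
    """
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        # sum over the four nearest neighbours (periodic boundaries)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        # kinetic constraint: number of immobile (+1) neighbours is (4 + nn)/2
        if 0.5 * (4 + nn) > f:
            continue
        dH = 2.0 * h * spins[i, j] * (J * nn + 1.0)  # energy change of the flip
        if dH <= 0 or rng.random() < np.exp(-dH):
            spins[i, j] *= -1
    return spins

# start from the fully mobile (liquid-like) state, as in the text
L = 50
spins = -np.ones((L, L), dtype=int)
spins = sfm_sweep(spins, h=0.4, J=0.1)
```

In a real run one would iterate such sweeps until equilibration before sampling observables.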
After the system has reached its equilibrium, we measure the auto-correlation function \begin{equation} A(t) = \frac{1}{L^d}\, \langle\sum_i s_i(t^\prime) s_i(t+t^\prime)\rangle \label{esho30} \end{equation} with $L$ being the lattice size. In practice, an average over $t^\prime$ is performed in the numerical measurements. The lattice sizes are taken to be $L=50$ or $L=100$ depending on $h$ and $J$. Within the time regime of our simulations, no visible finite-size effect has been observed. To achieve reliable results and estimate statistical errors, we have performed five runs of simulations. The total number of samples for averaging ranges from $50\ 000$ to $500\ 000$; more samples are used for larger values of $h$ and/or $J$. \section{results and discussion} In the low temperature regime, the original Fredrickson--Andersen model gives rise to a drastic enhancement of the relaxation time, which is characteristic of glassy materials. As demonstrated in \cite{schutri}, there is no indication of a real glass transition or a critical temperature as predicted by mode--coupling theory \cite{Goetze1,Goetze2}. For large time $t$, empirical approaches suggest a stretched exponential decay of the auto-correlation function \begin{equation} A(t) \sim \exp[-(t/\tau)^\gamma], \label{esho40} \end{equation} where the exponent $\gamma$ is presumably not a universal exponent, i.e. it depends weakly on $h$. As a function of $h$ (the inverse temperature), the relaxation time $\tau$ increases faster than an exponential law $\ln \tau \sim c+h$, manifesting non-Arrhenius behavior, but there is no singularity $\tau \rightarrow \infty$ at finite temperatures as suggested by the Williams--Landel--Ferry relation \cite{WLF}. The exponent $\gamma$ shows a weak dependence on $h$, which is also confirmed by our simulations presented below. For the generalized SFM$\left[ 2,2\right] $ we are interested in the role of the extra coupling $J$.
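The stretched exponential fits behind the quoted $\tau$ and $\gamma$ values can be reproduced with an elementary linearization: taking logarithms twice turns $A(t)=\exp[-(t/\tau)^\gamma]$ into a straight line in $\ln t$. The sketch below uses synthetic data with assumed parameters ($\tau=30$, $\gamma=0.45$) standing in for a measured correlation function; in practice one would restrict the fit to the large-$t$ regime and weight by statistical errors.

```python
import numpy as np

# synthetic stand-in for a measured auto-correlation A(t) (tau=30, gamma=0.45 assumed)
tau_true, gamma_true = 30.0, 0.45
t = np.linspace(1.0, 200.0, 400)
A = np.exp(-(t / tau_true) ** gamma_true)

# linearize the stretched exponential: ln(-ln A) = gamma * ln t - gamma * ln tau
y = np.log(-np.log(A))
gamma_fit, intercept = np.polyfit(np.log(t), y, 1)
tau_fit = np.exp(-intercept / gamma_fit)
```

On noiseless data the linear fit recovers the input parameters essentially exactly.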
Physically, for $h > 0$ a positive exchange coupling $J > 0$ tends to support the creation of solid--solid pairs which are partially frozen in, i.e. such a coupling tends to enhance the relaxation time.\\ We observe that for fixed $h$, even if it is small, the auto-correlation decays in a stretched exponential form for large time $t$ and, more interestingly, the relaxation time $\tau$ also increases rapidly following a non-Arrhenius law when the coupling $J$ increases. The exponent $\gamma$ also exhibits a weak dependence on the coupling $J$. In Fig. \ref{f1}, the auto-correlation for $h=0.40$ and for different values of the coupling $J$ is displayed in semi-log scale with lines of circles. Obviously the decay is not an exponential one. The dotted lines are the stretched exponential fit to the curves. We see clearly that the fit is rather good. The resulting relaxation time $\tau$ and the exponent $\gamma$ for different $h$ and $J$ are listed in Table \ref{t1}. For comparison, results for the original Fredrickson--Andersen model ($J=0$) are also included. In Fig. \ref{ftau}, we have plotted the correlation time $\tau$ as a function of the coupling $J$ for different values of $h$ in semi-log scale. Clearly it is a non-Arrhenius behavior. Our model has two parameters, $h$ and $J$. For large $h$ and/or $J$, a strong freezing process, manifested in a strong slowing down of the relaxation, is observed and the auto-correlation shows a similar dependence on both $h$ and $J$, respectively. As an alternative to the autocorrelation function, let us consider the magnetization $M(h,J)$. This quantity is also an essential one within our re-interpretation of the kinetic Ising model as an appropriate candidate to describe glasses. The magnetization corresponds to the density of the immobile particles. In Fig. \ref{f2}, the dependence of the relaxation time $\tau$ and the exponent $\gamma$ on $M(h,J)$ is depicted for different couplings $J$ and different fields $h$.
A nice collapse of the data is observed. A non-zero coupling $J$ practically induces a short range spatial correlation. However, our results show that such a short spatial correlation length does not dramatically change the properties of the glass system. The interaction can also be anti-ferromagnetic, i.e. with a negative coupling constant $J$. In this case, the relaxation time $\tau$ decreases when the magnitude of $J$ becomes larger. This can be seen in the last block of Table \ref{t1}. However, we again find a nice collapse of the data with negative $J$ and positive $J$ when the magnetization $M$ is chosen to be the scaling variable. This is shown in Fig.~\ref{f3} (a). The situation is slightly different for the exponent $\gamma$. In Fig. \ref{f3} (b), for small $|J|$, it joins the data points with positive $J$. But for larger $|J|$, i.e. small $M$ and short relaxation times $\tau$, the exponent $\gamma$ decreases rather than increases as in the case of positive $J$. This phenomenon is also understandable. A static antiferromagnetic coupling favors the coexistence of both liquid- and solid-like regions in the neighborhood. But the limit of $J \rightarrow -\infty$ for fixed $h$ does not exactly correspond to a high temperature state. \section{conclusions} We obtain as a main result that the dynamics of the generalized SFM$\left[ 2,2\right]$ is controlled only by the magnetization $M(h,J)$, i.e. the density of up (or down) spins. When we plot the correlation time $\tau$ as a function of $M(h,J)$, all data points for different values of $h$ and $J$ collapse onto a single curve. The collapse of the data points for the exponent $\gamma$ is also observed except in the case with a strong antiferromagnetic coupling. These results show that the stretched exponential behavior is rather universal. Our simulations have not yet covered the regime near the critical point of the standard Ising model ($h \rightarrow 0$ but $hJ$ remains finite at its critical value).
In this critical regime, both the glass transition and the second order phase transition take place. To study the mixed critical behavior of two phase transitions is an interesting extension of the present work. Finally, it should be remarked that the empirical stretching exponent $\gamma$ is rather close to 0.5. This result is in agreement with recent analytical calculations \cite{schutri}. \begin{table}[h]\centering \begin{tabular}{ccc|ccc} \hline $(h,J)$ & $\tau$ & $\gamma$ & $(h,J)$ & $\tau$ & $\gamma$ \\ (0.2,0.0) & 2.4(4) & 0.54(2) & (0.2,0.4) & 3.4(3) & 0.48(1)\\ (0.4,0.0) & 7.5(11) & 0.48(2) & (0.2,0.6) & 6.(1) & 0.48(2)\\ (0.5,0.0) & 16.(1) & 0.47(1) & (0.2,0.8) & 12.(1) & 0.48(1)\\ (0.6,0.0) & 33.(3) & 0.44(1) & (0.2,1.0) & 30.(3) & 0.47(1) \\ (0.7,0.0) & 89.(9) & 0.42(2) & (0.2,1.2) & 89.(8) & 0.45(1)\\ (0.8,0.0) & 378.(36) & 0.44(1) & (0.2,1.4) & 550.(52) & 0.44(1)\\ (1.0,0.0) & 10900.(1145) & 0.44(2) & (0.2,1.6) & 12000.(1040) & 0.44(2)\\ \hline \hline $(h,J)$ & $\tau$ & $\gamma$ & $(h,J)$ & $\tau$ & $\gamma$\\ (0.4,0.1) & 14.(3) & 0.48(2) & (0.5,0.1) & 39.(3) & 0.46(1)\\ (0.4,0.2) & 26.(4) & 0.47(2) & (0.5,0.2) & 132.(9) & 0.44(2)\\ (0.4,0.3) & 64.(5) & 0.45(1) & (0.5,0.3) & 1185.(98) & 0.45(2)\\ (0.4,0.4) & 210.(17) & 0.44(1) & (0.5,0.4) & 19800.(1940) & 0.43(2) \\ (0.4,0.5) &1620.(110) & 0.45(2) & & & \\ (0.4,0.6) &21900.(800) & 0.44(2) & & & \\ \hline \hline $(h,J)$ & $\tau$ & $\gamma$ & & & \\ (0.8,0.02) & 530.(54) & 0.43(2) & & & \\ (0.8,0.04) & 1250.(105) & 0.44(1) & & & \\ (0.8,0.06) & 3090.(201) & 0.44(2) & & & \\ (0.8,0.08) & 6950.(747) & 0.44(2) & & & \\ (0.8,0.10) &17750.(1140) & 0.44(2) & & & \\ \hline \hline $(h,J)$ & $\tau$ & $\gamma$ & $(h,J)$ & $\tau$ & $\gamma$\\ (0.4,0.1) & 14.(3) & 0.48(2) & (1.0,$-$0.02) & 2963.(274) & 0.44(2) \\ (0.5,0.1) & 39.(3) & 0.46(1) & (1.0,$-$0.06) & 840.(45) & 0.44(2)\\ (0.6,0.1) & 162.(17) & 0.44(1) & (1.0,$-$0.10) & 152(15) & 0.39(1)\\ (0.7,0.1) & 1220.(150) & 0.45(2) & (1.0,$-$0.14) & 57.(5) & 
0.39(1)\\ (0.8,0.1) &17750.(1140) & 0.44(2) & (1.0,$-$0.18) & 20(2) & 0.38(1)\\ & & & (1.0,$-$0.22) & 9.(1) & 0.36(1)\\ \hline \end{tabular} \caption{The correlation time $\tau$ and the exponent $\gamma$ measured for different couplings $J$ and fields $h$.} \label{t1} \end{table} \begin{figure}[h] \epsfysize=6.5cm \epsfclipoff \fboxsep=0pt \setlength{\unitlength}{0.6cm} \begin{picture}(9,9)(0,0) \put(-1.,-0.5){{\epsffile{glafig1a.eps}}} \put(0.0,8){\makebox(0,0){\footnotesize $A(t)$}} \put(8.0,0.5){\makebox(0,0){\footnotesize $t$}} \epsfysize=6.5cm \put(12.,-0.5){{\epsffile{glafig1b.eps}}} \end{picture} \caption{The auto-correlation in semi-log scale with $h=0.40$ (a) at the couplings $J=0.10$, $0.20$, $0.30$ and $0.40$; (b) at the couplings $J=0.50$, $0.60$ (from below). The dotted lines represent the stretched exponential form fitted to the curves.} \label{f1} \end{figure} \newpage \begin{figure}[p]\centering \epsfysize=12cm \epsfclipoff \fboxsep=0pt \setlength{\unitlength}{1cm} \begin{picture}(13.6,12)(0,0) \put(0,-0.5){{\epsffile{glafigtau.eps}}} \put(0.5,8.0){\makebox(0,0){\footnotesize $\tau$}} \put(9.5,0.5){\makebox(0,0){\footnotesize $J$}} \end{picture} \caption{ The dependence of the correlation time $\tau$ on the coupling constant $J$. The corresponding fields are $h=0.80$, $0.50$, $0.40$ and $0.20$ respectively (from left). All curves show a non-Arrhenius behavior. } \label{ftau} \end{figure} \newpage \begin{figure}[h] \epsfysize=6.5cm \epsfclipoff \fboxsep=0pt \setlength{\unitlength}{0.6cm} \begin{picture}(9,9)(0,0) \put(-1.,-0.5){{\epsffile{glafig2a.eps}}} \put(0.0,7.5){\makebox(0,0){\footnotesize $\tau$}} \put(9.5,0.5){\makebox(0,0){\footnotesize $M$}} \epsfysize=6.5cm \put(12.,-0.5){{\epsffile{glafig2b.eps}}} \put(13.0,7.5){\makebox(0,0){\footnotesize $\gamma$}} \end{picture} \caption{The collapse of (a) correlation times $\tau$ in semi-log scale and (b) the exponent $\gamma$.
Circles with a solid line, squares, diamonds, triangles and stars correspond to $J=0.00$, $h=0.20$, $h=0.40$ , $h=0.50$ and $h=0.80$ respectively. Data are taken from the first five blocks of Table \protect\ref {t1}.} \label{f2} \end{figure} \newpage \begin{figure}[h] \epsfysize=6.5cm \epsfclipoff \fboxsep=0pt \setlength{\unitlength}{0.6cm} \begin{picture}(9,9)(0,0) \put(-1.,-0.5){{\epsffile{glafig3a.eps}}} \put(0.0,7.5){\makebox(0,0){\footnotesize $\tau$}} \put(9.5,0.5){\makebox(0,0){\footnotesize $M$}} \epsfysize=6.5cm \put(12.,-0.5){{\epsffile{glafig3b.eps}}} \put(13.0,7.5){\makebox(0,0){\footnotesize $\gamma$}} \end{picture} \caption{The collapse of (a) correlation times $\tau$ in semi-log scale and (b) the exponent $\gamma$. Circles with a solid line, squares and diamonds correspond to $J=0.00$, $J=0.10$ and $h=1.00$ with negative $J$ respectively. Data are taken from the last two blocks of Table \protect\ref {t1}.} \label{f3} \end{figure} \newpage
\section{Introduction} Irregular surfaces seem harder to tackle than regular surfaces. It is not yet clear how they fit in the geography of surfaces of general type, although some numerical restrictions are known. For instance, by \cite{appendix}, $p_g\geq 2q-4$ and if equality holds then $S$ is birational to a product of a curve of genus 2 and a curve of genus $q-2$. Here we consider surfaces with $p_g=2q-3$. Since $p_g\ge q\ge 0$, such surfaces satisfy $q\geq 3$. Surfaces with $p_g=q=3$ have been completely classified by Hacon and the second author in \cite{pgq3} and, independently, by Pirola (\cite{pirola}), who completed the partial classification contained in \cite{ccm}. There are only two such surfaces: one is the symmetric product of a genus 3 curve and has $K^2=6 (=6\chi)$, while the other one is a free $\mathbb Z_2$-quotient of the product of a curve of genus 2 and a curve of genus 3 and has $K^2=8(=8\chi)$. The second example is characterized by the existence of an irrational pencil of genus $\ge 2$. In \cite{bnp} Barja, Naranjo and Pirola prove the inequality $K^2\ge 8\chi$ under some technical assumptions on the base locus of the canonical system. Furthermore, they make a detailed study of the case $q=4$ (and hence $p_g=5$), showing that the inequality $K^2\ge 8\chi$ holds in this case without any extra assumption. The only known surfaces with $q=4$ and $p_g=5$ have an irrational pencil of genus $\ge 2$. In \cite{bnp} it is shown that there are precisely two families of such surfaces. In both cases the surfaces are free $\mathbb Z_2$-quotients of products of curves and therefore satisfy $K^2=8\chi$. Here we study the case $q\geq 5$. If $S$ has an irrational pencil of genus $g\ge 2$, then we have the following classification: \begin{thm}\label{irrpencil} Let $q\ge 5$ be an integer and let $S$ be a minimal complex surface of general type with $q(S)=q$ and $p_g(S)=2q-3$. 
If there exists a fibration $f\colon S\to B$ with $B$ a curve of genus $\ge 2$, then there are the following possibilities: \begin{enumerate} \item $S$ is the product of two curves of genus 3; \item $S=(C\times F)/\mathbb Z_2$, where $C$ is a curve of genus $2q-3$ with a free action of $\mathbb Z_2$, $F$ is a curve of genus 2 with a $\mathbb Z_2$-action such that $F/\mathbb Z_2$ has genus 1 and $\mathbb Z_2$ acts diagonally on $C\times F$. In this case $f$ is the map induced by the projection $C\times F\to C$, the curve $B=C/\mathbb Z_2$ has genus $q-1$ and the general fibre $F$ of $f$ has genus 2. \end{enumerate} In either case, $S$ satisfies $K^2_S=8\chi(S)$. \end{thm} In the general case we prove inequalities for the invariants of $S$ which are weaker than the one in \cite{bnp} but require no extra assumptions: \begin{thm}\label{inequality} Let $q\ge 5$ be an integer and let $S$ be a minimal complex surface of general type with $q(S)=q$ and $p_g(S)=2q-3$. Then: \begin{enumerate} \item $K_S^2\geq 7\chi(S)-1$; \item if $K_S^2<8\chi(S)$, then $|K_S|$ has fixed components and the degree of the canonical map is 1 or 3; \item if $\chi(S)\geq 5$ and $K_S^2<8\chi(S)-6$, then the canonical map is birational. \end{enumerate} \end{thm} Finally, we analyze the Albanese map: \begin{prop}\label{albmap} Let $q\ge 5$ be an integer and let $S$ be a minimal complex surface of general type with $q(S)=q$ and $p_g(S)=2q-3$. Let $\alpha\colon S\to A$ be the Albanese map. \begin{enumerate} \item if $S$ has an irrational pencil of genus $\ge 2$ and it is not the product of two curves of genus 3, then $\alpha$ is 2-to-1 onto its image; \item if $S$ has no irrational pencil of genus $\ge 2$ or it is the product of two curves of genus 3, then $\alpha$ is birational onto its image. 
\end{enumerate} \end{prop} \medskip In view of the discussion and results above, it is natural to ask some questions: {\bf Question 1.} {\em Is the inequality $K^2\ge 8\chi$ true for all surfaces with $p_g=2q-3$ and $q\ge 4$?} \smallskip {\bf Question 2.} {\em Are there any surfaces with $p_g=2q-3$ and $q\ge 4$ which have no irrational pencil of genus $\ge 2$?} \smallskip Although we have no further evidence, we believe that the answer to Question 2 should be No and therefore, in view of Theorem \ref{irrpencil}, the answer to Question 1 should be Yes. \medskip \noindent{\it Acknowledgement:} We wish to thank the referee for his careful reading of the paper and for suggesting a simplification of the proofs of Theorem 1.2 and Proposition 1.3. \subsection{Notation and assumptions} Throughout all the paper $S$ denotes a minimal complex surface of general type such that $p_g(S)=2q-3$, where $q:=q(S)$ is the irregularity. Notice that $\chi(S)=q-2$. We also assume $q\ge 5$. We denote by $\varphi\colon S\to \mathbb P^{2q-4}$ the canonical map, by $A$ the Albanese variety and by $\alpha\colon S\to A$ the Albanese map. An {\em irrational pencil} of $S$ of genus $b>0$ is a fibration $f\colon S\to B$ where $B$ is a curve of genus $b$. \section{Proof of Theorem \ref{irrpencil}} In this section we prove Theorem \ref{irrpencil}. We assume that there exists a pencil $f\colon S\to B$ where $B$ is a curve of genus $b\ge 2$ and we denote by $g\ge 2$ the genus of a general fibre of $S$. \begin{comment} These surfaces can be completely classified as follows: \begin{thm}\label{irrpencil} There are the following possibilities for $S$: \begin{enumerate} \item $S$ is the product of two curves of genus 3; \item $S=(C\times F)/\mathbb Z_2$, where $C$ is a curve of genus $2q-3$ with a free action of $\mathbb Z_2$, $F$ is a curve of genus 2 with a $\mathbb Z_2$-action such that $F/\mathbb Z_2$ has genus 1 and $\mathbb Z_2$ acts diagonally on $C\times F$. 
\end{enumerate} In either case, $S$ satisfies $K^2_S=8\chi(S)$. \end{thm} \end{comment} \begin{proof} If $S$ is the product of two curves, then the computation of the invariants of $S$ shows that we are in case (i). So we assume from now on that $S$ is not a product. By the Lemme on p. 344 of \cite{appendix}, we have: \begin{equation}\label{b+g} q<b+g, \end{equation} and so $q-2< (b-1)+(g-1)$. By the Corollaire on p. 343 of \cite{appendix}, we have: \begin{equation}\label{chi} q-2=\chi(S)\ge (b-1)(g-1), \end{equation} and so $ (b-1)+(g-1)> (b-1)(g-1)$. The last inequality holds if and only if either $b=2$ or $g=2$. Suppose $b=2$. Then the inequalities \eqref{b+g} and \eqref{chi} give $g=q-1$ and one has equality in \eqref{chi}. Hence by the Corollaire on p. 343 of \cite{appendix} the fibration $f$ is isotrivial with every fibre smooth, namely, in the terminology of \cite{serrano}, $f$ is a {\em quasi-bundle}. We denote by $F$ the fibre of $f$. By \S 1 of \cite{serrano} (cf. also \cite{serranoC}) there exist a curve $C$ and a finite group $G$ that acts faithfully on $C$ and on $F$ in the following way: 1) the diagonal action on $C\times F$ is free and $S$ is isomorphic to $(C\times F)/G$; 2) $C/G$ is isomorphic to $B$ and $f$ is induced by the projection $C\times F\to C$; 3) $F/G$ has genus $q-2$. Since $F$ has genus $q-1$ and $q-2\ge 3$ by assumption, condition 3) contradicts the Hurwitz formula (an example of this type with $q=4$ is given in \cite[\S 7]{bnp}). Hence $b=2$ does not occur. Suppose now $g=2$. The same reasoning as above gives $b=q-1$. As in the previous case, by the results of Serrano there exist a curve $F$ of genus $2$, a curve $C$ and a finite group $G$ that acts on $C$ and $F$ in such a way that: 1) the diagonal action of $G$ on $C\times F$ is free and $S$ is isomorphic to the quotient surface $(C\times F)/G$; 2) $C/G$ is isomorphic to $B$ and $f$ is induced by the projection $C\times F\to C$; 3) $F/G$ has genus $1$. 
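The case analysis of the Hurwitz relation $\frac{2}{d}=\sum_{i=1}^k(1-\frac{1}{m_i})$ carried out in the proof below is a finite check: each term satisfies $1-1/m_i\ge 1/2$, which forces $k\le 2$ and $d\le 4$. A brute-force enumeration confirms that the three listed possibilities are the only ones; this sketch assumes only that each stabilizer order $m_i\ge 2$ divides the group order $d$.

```python
from fractions import Fraction

# enumerate all solutions of 2/d = sum_i (1 - 1/m_i) with m_i >= 2 dividing d
solutions = []
for d in range(2, 60):
    divisors = [m for m in range(2, d + 1) if d % m == 0]

    def search(rem, ms, pool):
        # rem is what is left of 2/d; ms is the nondecreasing list chosen so far
        if rem == 0:
            solutions.append((d, tuple(ms)))
            return
        for idx, m in enumerate(pool):
            term = 1 - Fraction(1, m)
            if term <= rem:
                search(rem - term, ms + [m], pool[idx:])

    search(Fraction(2, d), [], divisors)

# exactly the three cases in the text: (d,k,m) = (4,1,2), (3,1,3), (2,2,(2,2))
```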
\smallskip Let $d$ denote the order of $G$. The Hurwitz formula applied to the quotient map $F\to F/G$ gives: $$ \frac{2}{d}=\sum_{1}^k(1-\frac{1}{m_i}), $$ where $m_1, \dots m_k$ are the orders of the stabilizers of the special orbits of $G$ on $F$. It is easy to check that the possibilities are the following: \begin{eqnarray} d=4, k=1, m_1=2;\\ \nonumber d=3, k=1, m_1=3;\\ \nonumber d=2, k=2, m_1=m_2=2. \end{eqnarray} In particular, the group $G$ is abelian. We are going to exclude the first two possibilities by using the fundamental relations of \cite[Prop. 2.1]{ritaabel}. More precisely, denote by $E$ the quotient curve and by $P\in E$ the only branch point of the map $F\to F/G=E$. If $G=\mathbb Z_4$, then by \cite[Prop. 2.1]{ritaabel}, there exists $L\in \Pic(E)$ such that: $$4L\equiv 2P,$$ which is impossible for degree reasons. If $G=\mathbb Z_2\times \mathbb Z_2$, then by \cite[Prop. 2.1]{ritaabel}, there exists $L\in \Pic(E)$ such that: $$2L\equiv P,$$ which is again impossible for degree reasons. Finally, if $G=\mathbb Z_3$, then by \cite[Prop. 2.1]{ritaabel} there exists $L\in \Pic(E)$ such that: $$3L\equiv P,$$ and we have again a contradiction. So we have $G=\mathbb Z_2$. Since the diagonal action on $C\times F$ is free, the group $\mathbb Z_2$ acts freely on $C$. Hence $C$ has genus $2q-3$ and it is easy to check that the surface $(C\times F)/G$ has the right invariants. \end{proof} \section{Proofs of Theorem \ref{inequality} and Proposition \ref{albmap}} Since the proofs of Theorem \ref{inequality} and Proposition \ref{albmap} are very similar, we give both in the same section. \smallskip \begin{proof}[Proof of Theorem \ref{inequality}] If $S$ has an irrational pencil of genus $\ge 2$, then $K^2_S=8\chi(S)$ by Theorem \ref{irrpencil}. Hence we may assume that $S$ has no such pencil. We write $|K_S|=|D|+Z$, where $Z$ is the fixed part.
By \cite{xiaopencil}, a surface whose canonical image is a curve has irregularity at most $2$, hence the image of the canonical map of $S$ is a surface and the system $|D|$ is irreducible. \medskip \underline{Step 1:} {\em If $Z=0$, then $K^2_S\ge 8\chi(S)$.}\newline Consider the natural map $v\colon \bigwedge^2H^0(\Omega^1_S)\to H^0(K_S)$ and denote by $\bar{v}\colon \mathbb P(\bigwedge^2 H^0(\Omega^1_S))\to \mathbb P(H^0(K_S))$ the corresponding rational map of projective spaces. By the Castelnuovo--De Franchis Theorem, the kernel of $v$ does not contain any nonzero simple tensor $\eta_1\wedge \eta_2$. Hence, denoting by $G$ the Grassmannian of lines in $\mathbb P(H^0(\Omega^1_S))$, $\bar{v}$ restricts to a morphism $G\to |K_S|$ which is finite onto its image, and therefore surjective, since $G$ has dimension $2q-4$. So every $\sigma\in H^0(K_S)$ is of the form $\eta_1\wedge \eta_2$ for $\eta_1, \eta_2\in H^0(\Omega^1_S)$, and for $\sigma$ general there exist also $\eta_3, \eta_4$ such that $\eta_1, \dots, \eta_4$ are independent and $\sigma=\eta_3\wedge \eta_4$. Following \cite{bnp}, we say that $S$ is {\em generalized Lagrangian}. Given $\sigma$ and $\eta_1, \dots, \eta_4$ as above, one considers the subsystem $\mathcal W$ of $|K_S|$ generated by the divisors of zeros of the $2$-forms $\eta_i\wedge\eta_j$, $1\le i<j\le 4$. Since $|K_S|$ is irreducible and $\mathcal W$ contains the divisor of zeros of the general form $\sigma$, the system $\mathcal W$ has no fixed part. Then we have $K^2_S\ge 8\chi(S)$, by \cite[Thm. 1.2]{bnp}. \medskip By Step 1, from now on we may assume $Z\neq 0$. We analyze the behaviour of the canonical map, obtaining inequalities for each possible case. We denote by $d$ the degree of the canonical map and by $\Sigma$ the canonical image.
\medskip \underline{Step 2:} {\em If $d=1$, then $K^2_S\ge 7\chi(S)-1$.}\newline By Th\'eor\`eme 3.2 and Remarque 3.3 of \cite{deb1} we have: $$K^2_S\geq 3p_g(S)+q-7+K_SZ+\frac{1}{2}DZ,$$ and in our case this can be rewritten as: $$K^2_S\geq 7q-16+K_SZ+\frac{1}{2}DZ=7\chi(S)-2+K_SZ+\frac{1}{2}DZ\geq 7\chi(S)-1$$ where the last inequality is a consequence of the 2-connectedness of canonical curves. \medskip \underline{Step 3:} {\em If $d>1$, then $p_g(\Sigma)=0$.}\newline By \cite[Thm. 3.1]{beauville}, if $p_g(\Sigma)>0$ then $p_g(\Sigma)=p_g(S)=2q-3$ and $\Sigma$ is the canonical image of a smooth minimal surface of general type. Hence by the Castelnuovo inequality we have $\deg\Sigma\ge 3p_g(S)-7$. If $\varphi$ is not birational, this gives: $$K_S^2\geq 6p_g(S)-14=12q-32=12\chi(S)-8. $$ Since by assumption $\chi(S)=q-2\ge 3$, the above inequality contradicts the Bogomolov--Miyaoka-Yau inequality $K^2\le 9\chi$. This proves that if $p_g(\Sigma)>0$ the map $\varphi$ is birational. \medskip \underline{Step 4:} {\em The case $d=2$ does not occur.}\newline Assume that $d=2$ and denote by $\iota$ the involution of $S$ induced by $\varphi$. Since $S$ has no irrational pencil of genus $\ge 2$, the irregularity of $S/\iota$ is at most 1. It follows that the subspace $V\subseteq H^0(\Omega^1_S)$ on which $\iota$ acts as multiplication by $-1$ has dimension $\ge q-1\ge 4$. For any $\eta_1, \eta_2\in V$, the $2$-form $\eta_1\wedge \eta_2$ is invariant under $\iota$, hence it induces a global $2$-form on $S/\iota$. By Step 3, this $2$-form is identically zero. Hence $\eta_1\wedge \eta_2$ vanishes identically on $S$ and, by the Castelnuovo--De Franchis Theorem, $S$ has an irrational pencil of genus $\ge 2$, against the assumptions. \medskip \underline{Step 5:} {\em $d\le 4$ and if $d=4$, $K_S^2\geq 8\chi(S)$.}\newline By Theorem 3 of \cite{xiaoirreg} and the assumption $q\geq 5$, the image $\Sigma$ of the canonical map is not ruled by lines. 
Hence the degree $m$ of $\Sigma\subset \mathbb P^{2q-4}$ satisfies $m\geq 2q-4=2\chi(S)$. Thus $d\geq 5$ would yield a contradiction to the Miyaoka-Yau inequality, whilst $d=4$ yields $K_S^2\geq 8\chi(S)$. \medskip \underline{Step 6:} {\em If $d=3$, then $K^2_S\ge 8\chi(S)-6$. }\newline By Theorem 3 of \cite{xiaoirreg} and the assumption $q\geq 5$, the image $\Sigma$ of the canonical map is not ruled by lines and so, by \cite[(3.4) Addendum]{smalldeg}, the degree $m$ of $\Sigma \subset \mathbb P^n$ must satisfy $$m\geq \frac{4}{3}(n- 2).$$ Hence we have: $$\deg\Sigma \ge \frac{4}{3}(2\chi(S)- 2), $$ and so $D^2\geq 8\chi(S)-8$. Thus, again by 2-connectedness of the canonical divisors, $K^2_S\geq 8\chi(S)-6$. \medskip \underline{Step 7:} {\em If $d=3$ then $K_S^2\geq 7\chi(S)-1$. }\newline By Step 6 we need only show the inequality for $\chi(S)=3$ and for $\chi(S)=4$. As in Step 6, the canonical image $\Sigma$ is not ruled by lines. So we have $\deg\Sigma\ge 6$ for $\chi(S)=3$ and $\deg\Sigma\ge 8$ for $\chi(S)=4$. For $\chi(S)=3$, we have $D^2\ge 3\deg\Sigma\ge 18$, yielding $K_S^2\geq 20=7\chi(S)-1$. \smallskip Assume $\chi(S)=4$ and suppose for contradiction that $K_S^2\leq 26$. We have: $$24\ge D^2\ge 3\deg\Sigma\ge 24.$$ It follows that $\deg\Sigma=8$, $D^2=24$, $K_SD=26$ and the system $|D|$ is free. Since the surface $\Sigma$ is not ruled by lines, it is a (weak) Del Pezzo surface, i.e., it is the anticanonical image of $\mathbb P^1\times\mathbb P^1$, $\mathbb F_1$ or $\mathbb F_2$. So $\Sigma$ contains a pencil of conics, whose pull back to $S$ we denote by $|G|$. Then we can write $D=2G+H$, where $H$ is effective, $G D=6$ and $h^0(S,G)\geq 2$. The index theorem gives $G^2\leq 1$. On the other hand, by \cite{xiaoirreg}, $K_SG+G^2\geq 4q-4=20$ and this contradicts $K_SD=26$.
Also, it is easy to check statement (i) using Theorem \ref{irrpencil}. So assume that $S$ has no irrational pencil of genus $\ge 2$ and assume for contradiction that $\alpha$ is not birational. Denote the image of $\alpha$ by $Y$. Recall that a subvariety of an abelian variety is of general type if and only if it is not ruled by tori. Since $S$ has no irrational pencil of genus $\ge 2$, it follows that $Y$ is of general type and has no irrational pencil of genus $\ge 2$, either. We have $q(Y)\ge q$, since $Y$ generates $A$, and $q(Y)\le q$, $p_g(Y)\le p_g(S)=2q-3$, since $S$ dominates $Y$. On the other hand, the Th\'eor\`eme on p. 345 of \cite{appendix} gives $p_g(Y)\ge 2q(Y)-3$. Summing up, we have $q(Y)=q$ and $p_g(Y)=2q-3$. The canonical map of $S$ factors through the canonical map of $Y$, hence it is not birational. By Step 3 of the proof of Theorem \ref{inequality}, the canonical map of $Y$ is not birational either and by Step 4 of the same proof, the canonical map of $Y$ has degree $\ge 3$. Hence the canonical map of $S$ has degree at least 6, contradicting Step 5 of the proof of Theorem \ref{inequality}. \end{proof}
\section*{High-Q, nonlinear optical microresonators} High-Q, nonlinear optical microresonators (more precisely: dielectric whispering gallery mode or ring-type resonators) have recently attracted growing attention in the scientific community. In particular frequency comb generation in high-Q microresonators has, within a few years, evolved into a research field of its own\cite{DelHaye2007,Savchenkov2008c,Levy2010,Razzari2010,Kippenberg2011,Papp2012,Li2012}. In microresonator based frequency combs a cw pump laser is coupled to a high finesse resonator (whereby, for thermal stability reasons, the pump laser is effectively blue detuned; see below). The high light intensities resulting from the high cavity finesse and the strong modal confinement enable cascaded four-wave mixing (FWM), which can give rise to hundreds of equidistant and coherent optical lines that, together with the pump laser, can constitute a frequency comb (cf. Fig.~1c). The comb line spacing corresponds to the free-spectral range of the microresonator or, equivalently, the inverse cavity roundtrip time $T_R$. The achievable comb line spacings range from several GHz up to THz frequencies. FWM based microresonator combs can perform on a level required for optical frequency metrology applications\cite{DelHaye2008,Papp2012,DelHaye2012}. However, these systems often suffer from significant frequency and amplitude noise\cite{Herr2012} and, unlike conventional mode-locked laser based frequency combs, do not correspond to ultra-short pulses in the time domain. The latter can be understood from the constant but arbitrary phase relations between the comb lines, which result from the formation process (cf. Fig.~1c)\cite{Herr2012}. External line-by-line phase and intensity adjustment may be used after comb generation for pulse-shaping\cite{Ferdous2011,Papp2011a}, but this is restricted to only a small number of comb modes.
We note that very recently evidence for direct ultra-short pulse generation in a Si$_3$N$_4$ microresonator has been found, which is to date unexplained\cite{Saha2012a}. In a different system comprising a fiber cavity with laser gain and a nonlinear high-index silica microresonator in filter configuration, generation of high-repetition-rate picosecond pulses was shown\cite{Peccianti2012}. In the case of a strongly driven nonlinear microresonator, the intracavity field as a function of the pump laser detuning cannot be described by a Lorentzian-shaped resonance (cf. Fig.~1c). Instead, the resonance is asymmetrically shifted towards lower frequencies by the Kerr-nonlinearity when the intracavity power increases. This leads to a bistable behavior, that is, two possible solutions for the intracavity power can exist for a particular pump laser detuning. The two solutions are usually referred to as {\it upper} branch (higher intracavity power) and {\it lower} branch (lower intracavity power) solutions (cf. Fig.~1c). These two branches correspond respectively to blue and red detuned operation of the pump laser. Besides the Kerr-nonlinear resonance shift, an increased intracavity power also results in an additional shift of the resonance frequency towards lower frequencies via a combined effect of thermal expansion and thermal refractive index change (induced by absorptive heating). The combined Kerr-nonlinear and thermal effects lead to a non-Lorentzian, triangular resonance shape ({\it thermal triangle}) when the pump laser is scanned with decreasing optical frequency over the resonance (cf. Fig.~2a, inset)\cite{Ilchenko1992a,Carmon2004}. It is important to note that the resonance frequency self-locks to the pump laser\cite{Carmon2004} when the pump laser is blue detuned (upper branch) with respect to the effective resonance frequency; the system is thermally unstable if the pump laser is effectively red detuned (lower branch)\cite{Ilchenko1992a,Carmon2004}.
This self-stability is exploited in microresonator based frequency comb generation, where the pump laser is operated effectively blue detuned. In this work, we show that tuning the pump laser through the effective zero detuning frequency, into the lower branch (effectively red detuned) after previously following the upper branch (effectively blue detuned; observing concomitant FWM), leads to the formation of temporal dissipative cavity solitons in a microresonator. This regime is qualitatively different from the stable operating regime of microresonator based frequency combs used so far, which have relied on pumping the resonator from the blue side and have not crossed the zero-detuning point, beyond which the resonator becomes thermally unstable. In contrast to fiber cavity experiments\cite{Leo2010}, the soliton pulses form spontaneously without the need for external stimulation. The number of solitons formed can be controlled by the pump laser detuning. The generated solitons remain stable until the pump laser is switched off, without the need for active feedback on either the resonator or the pump laser. This remarkable stability in the presence of solitons, despite operating on the usually thermally unstable lower branch (effectively red detuned pump laser), will be discussed below. Our discovery enables converting a cw laser into a train of femto-second pulses, which in the frequency domain corresponds to a low-noise frequency comb with a smooth spectral envelope. In the present work we use a MgF$_2$ microresonator\cite{Grudinin2006,Liang2011,Grudinin2012,Wang2013} that meets the basic requirements for temporal dissipative soliton formation, that is, Kerr-nonlinearity (also responsible for the parametric gain) and anomalous dispersion. The microresonator is characterized by a free-spectral range (FSR) of 35.2~GHz (Fig.~1a and Methods) and a coupled resonance width of 450~kHz. The resonator's measured (cf.
Fig.~1b)\cite{DelHaye2009a} anomalous group velocity dispersion (GVD) is $\beta_2=-9.39$~ps$^2$km$^{-1}$, which in the context of microresonators can be conveniently expressed in terms of the parameter\cite{Savchenkov2011,Herr2012} $D_2 =-c/n_0\cdot D_1^2\cdot\beta_2=2\pi \cdot 16$~kHz that describes the deviation of the resonance frequencies $\omega_{\mu}=\omega_{0}+D_{1}\mu +\tfrac{1}{2}D_2\mu^{2} +\tfrac{1}{6}D_3\mu^{3} + ...$ from an equidistant frequency grid defined by $\omega_0+\mu D_1$. Here $c$ is the speed of light, $n_0$ the refractive index of MgF$_2$, and $\mu$ the relative mode number with respect to the pumped mode at $\omega_0$, which has $\mu=0$ by definition. $D_{1}/2\pi$ is the FSR of the resonator at the frequency $\omega_0$. The frequency deviation increases quadratically with increasing relative mode number $\mu$, as evidenced in Fig.~1b; $D_3$ and higher order terms can be neglected in the present case. To search for soliton states in the microresonator we scan a pump laser (fiber laser; wavelength $1553$~nm; linewidth $\sim10$~kHz) with decreasing optical frequency $\omega_{p}$ over a high-$Q$ resonance of the crystalline MgF$_{2}$ resonator. This approach is motivated by the pump laser detuning being a critical parameter for the existence of cavity solitons\cite{Wabnitz1993,Barashenkov1996,Leo2010}. Fig.~2b shows the evolution of the optical spectrum during the laser scan. Reducing the laser-cavity detuning leads to a build-up of intracavity power and, once a critical power threshold\cite{Kippenberg2004a,Matsko2005a} is reached, widely spaced primary sidebands are generated by FWM, followed by secondary lines filling in the spectral gaps, as frequently observed in FWM based microresonator combs\cite{Ferdous2011,Herr2012,Grudinin2012}. While performing the scan, the RF (radio frequency) signal (electronically down-mixed to 20 $\mathrm{MHz}$) that results from the beating between neighboring comb lines is sampled and Fourier-transformed.
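As a consistency check, the quoted dispersion values can be converted into one another. A short sketch using only numbers stated in the text ($\beta_2=-9.39$~ps$^2$km$^{-1}$, $D_1/2\pi=35.2$~GHz, $n_0=1.37$):

```python
import math

c = 299792458.0             # speed of light (m/s)
n0 = 1.37                   # refractive index of MgF2 (from the text)
D1 = 2 * math.pi * 35.2e9   # FSR in angular frequency (rad/s)
beta2 = -9.39e-27           # GVD: -9.39 ps^2/km converted to s^2/m

# D2 = -c/n0 * D1^2 * beta2 quantifies the deviation of the resonance
# frequencies from an equidistant grid (cf. the expansion of omega_mu).
D2 = -c / n0 * D1**2 * beta2
print(D2 / (2 * math.pi) / 1e3)   # in kHz; close to the quoted 16 kHz

# The cavity roundtrip time follows from the FSR: T_R = 2*pi/D1
T_R = 2 * math.pi / D1
print(T_R * 1e12)                 # in ps; close to the quoted 28.4 ps
```

Both printed values reproduce the numbers used throughout the text.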
A necessary signature of stable soliton formation is a low-noise, narrow RF signal, resulting from the repetitive output-coupling of a soliton pulse. The Fourier-transformed, sampled RF signal is shown in Fig.~2c. Indeed, we observe a transition from a broad, noisy RF signal to a single, low-noise RF beat note for a particular laser detuning. This transition coincides with the beginning of a series of discrete steps in the transmission, which deviates markedly from the expected thermal triangle (the RF beatnote remains narrow throughout all steps). Note that observations similar to the discrete transmission steps have been made in a $\chi^{(2)}$ nonlinear microwave resonator, and connected to soliton formation\cite{Gasch1984}. To determine the effective pump laser detuning, we record a Pound-Drever-Hall (PDH) error signal\cite{Drever1983} while scanning over the resonance. Strikingly, the first step, that is, the transition to low noise, coincides with the zero crossing of the PDH signal, which marks the effective zero detuning frequency. This observation implies that the occurrence of the steps coincides with the transition from the upper branch regime of microresonator based FWM combs to the so far unexplored lower branch regime. To understand the intriguing observations of discrete steps in the transmission and the possible connection to soliton formation we carry out numerical simulations based on the coupled mode equations approach (cf. Methods)\cite{Chembo2010}. Note that the coupled mode equations are equivalent to the Lugiato-Lefever equation when third and higher order dispersion can be neglected\cite{Matsko2011a}, which is the case in the present microresonator (cf. Fig.~1b). The simulated system corresponds to a typical MgF$_{2}$ microresonator similar to the one used for the experiments. The remaining resonator parameters are the refractive and nonlinear indices, as well as the effective mode volume.
We neglect effects of non-unity mode-overlap, interactions with other mode families, any particularities of the resonator geometry, and thermal effects. The resulting set of coupled mode equations is numerically propagated in time (cf. Methods and Supplementary Information, SI). Results of a numerical simulation including 101 comb modes are shown in Fig.~3. The blue curve in Fig.~3a shows the simulated intracavity power as a function of the normalized detuning $\zeta_{0}=2(\omega_{0}-\omega_{p})/\kappa$ of the pump laser frequency $\omega_\mathrm{p}$ from the cold resonance frequency, where $\kappa=\omega_{0}/Q$ denotes the cavity decay rate. Note that the intracavity power (the blue curve in Fig.~3a) is equivalent to the experimental transmission trace in Fig.~2a (an increased transmission corresponds to a drop in intracavity power). Due to the nonlinear Kerr-shift of the resonance frequency, the effective detuning between pump laser and resonance is smaller than $\zeta_{0}$. Remarkably, the step features are very well reproduced, implying that the simulation includes all relevant physical mechanisms. In agreement with the experiment, the number and height of the steps fluctuate in repeated numerical scans as a result of the random seeding of the optical modes (corresponding to vacuum fluctuations). Numerically tracing out all possible comb evolutions yields the orange curves in Fig.~3a. The first part of the evolution of the optical spectrum, shown in Fig.~3b, follows the known pathway for FWM based comb formation\cite{Herr2012}. Later on, with each step in the transmission the optical spectrum becomes less modulated, until it eventually reaches a perfectly smooth envelope state (frame XI). To reveal the potential underlying soliton formation we investigate the time dependent waveform in Fig.~3c by phase-coherently adding the individual simulated optical modes. Indeed, the first step (frame V) corresponds to a transition to a state where multiple pulses exist inside the cavity.
Further steps can be associated with a stepwise reduction of the number of pulses propagating in the resonator. The separation between multiple pulses in the resonator is random. To confirm the soliton nature of these pulses we perform a simulation with 501 modes (cf. Fig.~3d,e,f) and analyze a state of five pulses. We compare the numerical simulation with an approximate analytical solution of the Lugiato-Lefever equation\cite{Lugiato1987}. For multiple solitons the analytical solution\cite{Wabnitz1993} has the form \begin{equation} \Psi(\phi)\simeq C_{1}+C_{2}\cdot\sum_{j=1}^{N}{\rm sech}\left(\sqrt{\frac{2(\omega_0-\omega_p)}{D_{2}}}\,(\phi-\phi_{j})\right), \label{eq:AnalyticalSoliton} \end{equation} where $\Psi$ denotes the complex field amplitude, $\phi$ the angular coordinate inside the resonator, $\phi_{j}$ the angular coordinate of the $j$th soliton, $\omega_{p}$ the pump frequency, and $N$ the number of solitons. The complex numbers $C_{1}$ and $C_{2}$ are fully determined by the resonator parameters and the pump conditions, and the ratio $|C_2|^2/|C_1|^2$ of soliton peak power to cw background can typically exceed several hundred (cf. SI for details). Indeed, the close to perfect match between the analytic solution and the numerical result shows that the pulses forming in the microresonator are stable temporal dissipative cavity solitons. These solitons emerge from the modulated intracavity waveform, which may explain their spontaneous formation, as opposed to fiber cavities where stimulating writing pulses are required\cite{Leo2010}. Eq.
\ref{eq:AnalyticalSoliton} allows for the estimation of the minimal temporal soliton width (FWHM) \begin{align} \Delta t_\mathrm{min}^\mathrm{FWHM} \approx 2 \sqrt{\frac{-\beta_2}{\gamma {\cal F} P_\mathrm{in}}},\label{eq:PulseDuration} \end{align} where ${\cal F}$ is the cavity's finesse, $P_\mathrm{in}$ the coupled pump power, $\beta_2$ the GVD and $\gamma=\tfrac{\omega}{c}\tfrac{n_2}{A_\mathrm{eff}}$ the effective nonlinearity with the nonlinear mode area $A_\mathrm{eff}$ and the nonlinear refractive index $n_2$ (cf. SI). Having shown the soliton nature of the pulses in numerical simulations, we can interpret the blue curve in Fig.~3a based on general limits\cite{Barashenkov1996} applying to solitons as solutions of the Lugiato-Lefever equation. Adopting these criteria for the present case we identify three main regions in Fig.~3a, colored red, yellow and green (cf. SI). Solitons with a constant temporal envelope can only exist in the green area. While the yellow area still allows for solitons with a time varying envelope (``breather solitons''\cite{Matsko2012}), solitons cannot exist in the red area. Note that in the red area on the left, the system may undergo chaotic Hopf-bifurcations\cite{Barashenkov1996}. For different numbers of solitons we can derive the total power inside the resonator by averaging the respective analytic soliton solution (eq. \ref{eq:AnalyticalSoliton}) over one cavity roundtrip time. The result is shown as dark gray dashed lines in Fig.~3a, and is in excellent agreement with the numerically observed steps (to account for the limitations of the low mode number, an additional correction factor of order unity is applied). The intracavity power changes discontinuously with the number of solitons present in the cavity (cf. SI). To experimentally generate the soliton states, we develop a method that allows for reliably tuning into the desired soliton state.
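The analytical solution, eq. (\ref{eq:AnalyticalSoliton}), is straightforward to evaluate numerically. Below is a minimal sketch with hypothetical values for $C_1$, $C_2$, the pump detuning and the soliton positions; in the actual system these quantities are fixed by the resonator and pump parameters (cf. SI), and only $D_2$ is taken from the text:

```python
import numpy as np

# Hypothetical parameters for illustration only:
C1 = 0.05 + 0.02j               # cw background amplitude (assumed)
C2 = 1.0 + 0.0j                 # soliton amplitude (assumed)
detuning = 2 * np.pi * 2e6      # omega_0 - omega_p in rad/s (assumed)
D2 = 2 * np.pi * 16e3           # dispersion parameter from the text
phi_j = np.array([-2.0, 0.5, 2.4])   # arbitrary soliton positions (rad)

phi = np.linspace(-np.pi, np.pi, 4096)   # angular coordinate in the resonator
width = np.sqrt(2 * detuning / D2)       # inverse angular width of each soliton

# Eq. (1): cw background plus one sech pulse per soliton
psi = C1 + C2 * sum(1 / np.cosh(width * (phi - p)) for p in phi_j)
intensity = np.abs(psi)**2

print(intensity.max() / abs(C1)**2)      # peak-to-background power ratio
```

With these illustrative amplitudes the peak-to-background ratio comes out at a few hundred, in line with the statement that $|C_2|^2/|C_1|^2$ can typically exceed several hundred.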
This method relies on tuning the laser with an appropriately chosen tuning speed into the soliton state (see SI for details). Once generated, the solitons remain stable until the pump laser is switched off, and no active stabilization or feedback on either the resonator or the pump laser is required. The latter observation is surprising, as pumping a microresonator on the lower branch (effectively red detuned) implies thermal instability\cite{Ilchenko1992a,Carmon2004} and would usually require active stabilization techniques. In the presence of solitons, however, the fraction of the pump light that propagates at a similar velocity together with the high intensity soliton inside the resonator experiences a much larger phase shift in one cavity roundtrip. While the main portion of the pump light is effectively red detuned (lower branch), the small portion overlapping with the soliton inside the resonator is effectively blue detuned (upper branch) and responsible for the self-stability of the system\cite{Carmon2004}. This is evidenced by the series of steps that correspond to a set of small thermal triangles (characteristic for stable blue detuning; one triangle per realized soliton state), as shown in Fig.~2a (experiment) and Fig.~3a (simulation). We note that this self-stability is unique to microresonators and not observed in fiber cavities. A more detailed discussion can be found in the SI. For clarity we note that a hypothetical disturbance in the laser detuning larger than the length of the small thermal triangles would terminate the soliton operation. Having access to stable operation of these soliton states, we experimentally investigate their temporal characteristics, in addition to their RF beatnote and optical spectrum (Fig.~4a), by performing a frequency-resolved optical gating experiment (FROG, Fig.~4b)\cite{Kane1993,Dudley1999}.
This corresponds to a second-harmonic generation (SHG) autocorrelation experiment in which the frequency-doubled light is spectrally resolved (Fig.~5b and Methods). In contrast to autocorrelation, the FROG method allows reliable identification of ultra-short pulses via the associated minimal bandwidth of the SHG spectrum given by the time-bandwidth product (TBP). A comparison of the autocorrelation and FROG methods is provided in the SI. In full consistency with the numerical simulations, we observe single and multiple soliton states. The single soliton state is characterized by a smooth spectral envelope, without spectral gaps. The power spectral envelope exhibits a sech$^{2}$-shape (3~dB bandwidth 1.6~THz, corresponding to more than 45~modes), as expected from the Fourier transform of a sech-shaped soliton pulse. Based on the TBP for solitons of~0.315 (cf. SI) the expected pulse duration is 197~fs. The observed low phase noise RF beatnote is resolution bandwidth limited to 1 kHz and its signal-to-noise ratio exceeds 60 dB. The FROG trace shows a train of pulses well separated by the cavity roundtrip time of $T_{R}=$28.4~ps, corresponding to the FSR of 35.2~GHz. The multi-soliton states (here shown for the case of two and five solitons) show a more structured optical spectrum. This structure reflects the number and distribution of solitons in the cavity (cf. SI). The RF beat note generated in the multi-soliton states is of similar quality as in the single soliton state. Importantly, the FROG measurement allows for a full reconstruction (neglecting a time direction ambiguity) of the intensity and phase of the pulses (cf. Fig.~5a). The reconstructed intensity profile is consistent with the expected sech$^{2}$-shape for solitons and the reconstructed temporal width of 200 fs (FWHM) is in agreement with the bandwidth of the optical spectrum and the expectation based on eq. (\ref{eq:PulseDuration}).
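Both pulse-duration estimates quoted above can be reproduced with a few lines of arithmetic. The sketch below uses the time-bandwidth product and, as an independent check, eq. (\ref{eq:PulseDuration}); the finesse value ${\cal F}\approx7.8\times10^4$ is inferred from the FSR and resonance width quoted in the Methods, and the pump power is taken at the upper end of the quoted 5--30~mW range (both assumptions, since eq. (\ref{eq:PulseDuration}) is only an approximate lower bound):

```python
import math

# Time-bandwidth product estimate for a transform-limited sech^2 pulse
tbp = 0.315            # TBP for sech^2 pulses (from the text)
bw = 1.6e12            # 3 dB optical bandwidth in Hz (from the text)
dt_tbp = tbp / bw
print(dt_tbp * 1e15)   # in fs; ~197 fs, matching the stated expectation

# Independent estimate from eq. (2), using Methods parameters
beta2 = -9.39e-27      # GVD in s^2/m
gamma = 4.1e-4         # effective nonlinearity in W^-1 m^-1
finesse = 7.8e4        # inferred from FSR / resonance width = 35.2 GHz / 450 kHz
P_in = 0.030           # coupled pump power in W (upper end of quoted range)
dt_eq2 = 2 * math.sqrt(-beta2 / (gamma * finesse * P_in))
print(dt_eq2 * 1e15)   # in fs; also close to the ~200 fs reconstructed by FROG
```

The two independent estimates agree with each other and with the 200 fs FROG reconstruction.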
The FROG traces show that it is the full spectrum that contributes to each pulse. To further corroborate the presented results, an independent intensity sampling method is applied to a single soliton state. The high repetition rate prohibits direct sampling, which would require hundreds of GHz of bandwidth in detection and recording. This limitation can be overcome by stretching the optical waveform in time using an Ultrafast Temporal Magnifier\cite{Salem2009} (PicoLuz LLC, Fig.~5d). While the time resolution of about 2.5 ps does not allow for determining the duration of the pulse, this single-shot method, as opposed to autocorrelation-type experiments, does not rely on averaging. The results in Fig.~5c clearly show optical pulses separated by $T_{R}$, with constant pulse amplitudes, as expected for solitons. We emphasize that in all temporal characterization experiments no phase or intensity adjustment (except for pump suppression) and no spectral filtering are applied. The gain window ($\geq 4$~THz) of the optical amplifier used before temporal characterization supports more than 100 comb modes. Combining experimental, numerical and analytical results, we have demonstrated spontaneous formation of dissipative temporal cavity solitons in a MgF$_{2}$ microresonator. The duration of these soliton pulses depends on the pump power and the dispersion of the resonator (cf. eq.~2). In the present case the optical pulses have durations of around 200~fs. Given the possibility of dispersion engineering in microresonators and the broadband nature of the parametric gain, significantly shorter pulses are conceivable. The stable solitons allow ultra-short pulses to be continuously coupled out of the microresonator, yielding a train of ultra-short pulses. If only one soliton is present in the cavity, this pulse train corresponds to a low-noise optical frequency comb with only low line-to-line power variation.
Comparable frequency combs have so far not been generated in microresonators. As our results only depend on the generic properties of Kerr-nonlinear microresonators with anomalous dispersion, they apply equally to other microresonator comb platforms. We note that the observation of the step signature is not a spectrally local particularity of one specific resonance, but is made when driving any resonance of the same mode family within the tuning range ($\pm$0.5~nm) of the pump laser. Soliton formation, as revealed in our work, may also, at least partially, explain the generation of femto-second pulses in Si$_3$N$_4$ resonators\cite{Saha2012a} reported recently. Moreover, our results are in agreement with very recent numerical work on temporal dissipative soliton formation in microresonators\cite{Coen2013,Coen2013c,Chembo2013}. In contrast to mode-locked lasers, which rely on laser gain, no additional element, such as a saturable absorber, is required for stable operation in the microresonator case, which relies on parametric gain (a detailed discussion of the differences to mode-locked lasers is contained in the SI). It is moreover worthwhile emphasizing that temporal dissipative cavity solitons are different from dissipative solitons in lasers, which already find widespread use\cite{Grelu2012}. From a frequency domain perspective, soliton formation enables microresonator based frequency combs with low noise and smooth spectral envelopes. These low noise spectra with unprecedentedly small line-to-line power variations are essential to frequency domain applications in telecommunications\cite{Pfeifle2012}, broadband spectroscopy\cite{Diddams2007} and astronomy\cite{Steinmetz2008,Li2008a}. Recent theoretical work suggests that octave spanning combs may be obtainable in the soliton regime\cite{Coen2013,Lamont2013} directly from a microresonator.
From a time domain perspective, soliton formation enables ultra-short pulse generation in a microresonator. In combination with chip-scale\cite{Levy2010,Razzari2010} integration this opens the route towards compact, stable and low cost ultra-short pulse sources\cite{Grelu2012}, which can also operate in wavelength regimes (such as the mid-infrared\cite{Wang2013}) where broadband laser gain media do not exist. Moreover, femto-second pulses in conjunction with external broadening\cite{Dudley2006} (see SI for a first demonstration) provide a viable route to a microresonator RF-to-optical link\cite{Telle1999,Cundiff2003}. Furthermore, the unique stability of dissipative solitons\cite{Grelu2012} is of interest to low phase-noise microwave generation\cite{Savchenkov2008c}. \section*{Methods} \subsection*{Experimental setup and parameters:} The pump laser (fiber laser Koheras Adjustik; 1553~nm wavelength; short-term linewidth $10$~kHz) is amplified by an erbium doped fiber amplifier (EDFA) and evanescently coupled to the MgF$_2$ resonator (free spectral range $35.2$~GHz; refractive index $n_0=1.37$) via a tapered optical fiber. The coupled resonance width of $\kappa/2\pi=450$ kHz (quality factor $Q=4\times10^{8}$, finesse ${\cal F} = 7.8 \times 10^{4}$) has been measured using modulation sidebands of a scanning laser. The dispersion of the resonator has been determined following ref.~\cite{DelHaye2009a} to be $D_2/2\pi=16$~kHz, that is, $\beta_2=-9.39$~ps$^2$km$^{-1}$. A typical coupled pump power of $P_\mathrm{in}=5-30$~mW leads to circulating powers $P_\mathrm{circ}$ of several hundreds of Watts.
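The figures of merit quoted here are mutually consistent, which can be verified numerically. The circulating-power estimate below additionally assumes the textbook resonant buildup factor ${\cal F}/\pi$ of a critically coupled all-pass resonator, which is not stated in the text and ignores coupling details:

```python
import math

fsr = 35.2e9                      # free spectral range in Hz
kappa = 450e3                     # coupled resonance width in Hz
nu0 = 299792458.0 / 1553e-9       # optical carrier frequency at 1553 nm

Q = nu0 / kappa                   # quality factor, ~4e8 as quoted
finesse = fsr / kappa             # finesse, ~7.8e4

# Rough circulating power for a critically coupled all-pass cavity on
# resonance (buildup ~ finesse/pi; an approximation, not from the text)
P_in = 0.020                      # coupled pump power in W (within 5-30 mW)
P_circ = finesse / math.pi * P_in
print(Q, finesse, P_circ)         # several hundred watts circulating
```

The estimate lands at several hundred watts, consistent with the circulating powers quoted above.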
The estimated effective mode area $A_\mathrm{eff}=90 \times 10^{-12}$~m$^2$ (effective mode volume $V_\mathrm{eff}=5.6\times10^{-13}\mathrm{~m}^{3}$) and the nonlinear refractive index $n_{2}=0.9\times10^{-20}\mathrm{~m}^{2}\mathrm{W}^{-1}$ yield an estimated nonlinear parameter of $\gamma = \tfrac{\omega_\mathrm{p}}{c}\tfrac{n_2}{A_\mathrm{eff}}=4.1 \times 10^{-4}$m$^{-1}$W$^{-1}$. \subsection*{Numerical simulation:} The simulations are based on the coupled mode equations (cf. SI), which are numerically propagated in time using an adaptive step-size Runge-Kutta integrator. The simulated resonator is defined, similarly to the experimental resonator, by its resonance frequencies $\omega_{\mu}$ given by $D_{1}/2\pi=35.2$~GHz, $D_{2}/2\pi=10$~kHz, $D_{3}/2\pi=-130$~Hz (these values were measured for a resonator similar to the one used here) and a quality factor of $Q=2\times10^{8}$. All other parameters are equal to the values listed for the experimental resonator. The coupled pump power is set to $P_{\mathrm{in}}=100\mathrm{~mW}$ at a pump frequency of $\omega_{p}/2\pi=193$ THz. Short, simulated pump power drops can be used to induce transitions between different soliton states. \subsection*{FROG Experiment:} Prior to the FROG\cite{Kane1993,Dudley1999} experiment the optical spectra are sent through a fiber-Bragg grating for pump suppression ($-30$ dB) and are subsequently amplified to 50 mW. Dispersion compensating fiber (Thorlabs DCF3, DCF38) is used for approximate dispersion compensation. In the FROG setup (cf. Fig.~5b) the generated optical pulses are interferometrically split and recombined with a variable delay in a nonlinear BBO crystal. This results in second harmonic generation (SHG) whenever the optical pulses in the two arms of the interferometer overlap temporally in the BBO crystal. The generated SHG light is spectrally resolved and recorded as a function of delay by a CCD-spectrometer, yielding a FROG trace.
Each FROG trace consists of nearly 1000 spectra with individual exposure times of 800 ms. The N-by-N (N=63) FROG trace of the single pulse state in Fig.~5a is analyzed using a principal component generalized projection algorithm\cite{Kane2008}, after noise removal via Fourier-filtering. The FROG reconstruction error is defined as $\epsilon=\frac{1}{N}\sqrt{\sum_{i,j}^{N}(M_{ij}^\mathrm{meas}-M_{ij}^\mathrm{reco})^{2}}$, where $M_{ij}^\mathrm{meas}$ and $M_{ij}^\mathrm{reco}$ denote the elements of the $N\times N$ matrices representing the measured and reconstructed FROG traces. \section*{Acknowledgments} The authors thank R. Salem and A. Gaeta for providing the PicoLuz, LLC Ultrafast Temporal Magnifier and advice when evaluating the data. This work was supported by the DARPA program QuASAR, the Swiss National Science Foundation (SNF) as well as a Marie Curie IAPP program. MLG acknowledges support from the RFBR grant 13-02-00271. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 263500. \section*{Author Contribution} TH designed and performed the experiments and analyzed the data. MLG and TH performed the numerical simulations. MLG developed the analytic description. VB assisted in the experiments. JDJ assisted in the Temporal Magnifier experiment. TH and MLG fabricated the sample. CYW assisted in sample fabrication. NMK assisted in developing the analytic description. TH, MLG, and TJK wrote the manuscript. TJK supervised the project. \newpage \begin{figure}[htbp] \centering \includegraphics[width=1\textwidth]{figure1c_small} \caption{\textbf{MgF$_2$ microresonator, dispersion and bistability. a.} MgF$_{2}$ crystal carrying two whispering-gallery-mode microresonators of different size (the smaller one, with an FSR of 35.2 GHz, is used). An optical whispering-gallery mode propagates along the circumference $L$ of the resonator within the roundtrip time $T_{R}$.
The smaller panels show a magnified view of the resonator and the simulated optical mode profile. \textbf{b.} Second order anomalous dispersion (FSR increases with mode number) of the microresonator with 35.2 GHz FSR, shown as the deviation of the measured resonance frequencies (blue dots) $\omega_\mu = \omega_0+ D_1 \mu+\tfrac{1}{2}D_2\mu^2 + ... $ from an equidistant frequency grid $\omega_0+\mu D_1$ (horizontal grey line), where $\mu$ denotes the relative mode number and $D_1$ corresponds to the FSR at the frequency $\omega_0$. The resonator's anomalous dispersion is accurately described by $D_2/2\pi=16$~kHz, with higher order terms neglected (red dashed line). The grey vertical lines mark spectral intervals of 25~nm width ($\mu=0$ corresponds to 1553~nm). \textbf{c.} Bistable intracavity power as a function of laser detuning for a linear cavity (blue) and a nonlinear cavity (orange). The dashed line marks the unstable branch. The regimes of FWM based microresonator combs on the upper branch and solitons on the lower branch are marked. In FWM combs the phases of the comb lines are constant but random, leading to a periodic but un-pulsed intracavity waveform (cf. left inset). The presence of a soliton implies synchronized phases and a pulsed intracavity waveform (cf. right inset).} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.73\textwidth]{figure2b_small} \caption{\textbf{Transmission and beatnote. a.} Transmission observed when scanning a laser over a high-Q Kerr-nonlinear resonance in a MgF$_{2}$ resonator (coupled pump power 5 mW). The transmission signal follows the expected thermal triangle (cf. inset), with deviations in the form of discrete steps (green shading). \textbf{b.} Evolution of the optical power spectrum for three different positions in the scan; the spectrum (II) and in particular the mesa shaped one (III) exhibit a high-noise RF beat signal. \textbf{c.} Down-mixed RF beat signal.
\textbf{d.} Main experimental setup, composed of pump laser and resonator followed by an optical spectrum analyzer (OSA), an oscilloscope to record the transmission and to sample the down-mixed beatnote (via the third harmonic of a local oscillator LO at 11.7 GHz), and an electrical spectrum analyzer (ESA) to monitor the beatnote. Before beatnote detection the pump is filtered out by a narrow fiber-Bragg grating (FBG) in transmission; FPC: fiber polarization controller, EDFA: erbium-doped fiber amplifier, PD: photodetector. \textbf{e.} Transmission and Pound-Drever-Hall (PDH) error signal. Effective blue/red detunings are shaded blue/red.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1\textwidth]{figure3b_small} \caption{\textbf{Numerical simulations of dissipative temporal soliton formation in a microresonator. a.} Intracavity power (blue, corresponding to the transmission signal in Fig.~2a when mirrored horizontally) during a simulated laser scan (101 simulated modes) over a resonance in a MgF$_{2}$ resonator. The step features are well reproduced. The orange lines trace out all possible evolutions of the system during the scan. The dashed lines show an analytical description of the steps. The green area corresponds to the region where stable solitons exist, the yellow area allows for solitons with a time variable envelope; no solitons can exist in the red area. \textbf{b/c.} Optical spectrum and intracavity intensity for different positions I-XI in the laser scan. \textbf{d.} Optical spectrum obtained when simulating 501 modes and stopping the simulated laser scan in the soliton regime. \textbf{e.} Intracavity intensity for the comb state in (d), showing five solitons ($T_\mathrm{R}$: roundtrip time). \textbf{f.} Zoom into one of the soliton states, showing the numerical results for the field's real (red) and imaginary part (dark blue).
The respective analytical soliton solutions are shown in light blue and orange.} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1\textwidth]{figure4_small} \caption{\textbf{Experimental demonstration of temporal dissipative soliton states.} \textbf{a.} Optical spectra of three selected comb states with one, two and five solitons, respectively. The insets show the RF beat note, which is resolution bandwidth limited to 1 kHz width in all cases. The red line in the optical spectrum of the one pulse state shows the spectral hyperbolic-secant envelope expected for soliton pulses, with a 3~dB bandwidth of 1.6 THz. \textbf{b.} FROG traces of the comb states in (a), displaying the signal of the single and multiple pulses. (The FROG setup is shown in Fig.~5b.) } \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=1\textwidth]{figure5_small} \caption{\textbf{Temporal characterization of ultra-short pulses.} \textbf{a.} Higher resolution experimental FROG trace of a one soliton pulse (left). The reconstruction converges to a FROG error of $\epsilon=1.7$\%, in good agreement with the experimental trace (middle). The reconstruction (right) of intensity and phase yields an estimated pulse duration of 200 fs (FWHM). \textbf{b.} Setup of the FROG experiment. \textbf{c.} Sampled optical intensity of the microresonator output over a duration of $40$~ps. The three measurements (frames 1, 2, 3) are separated from one another by a duration of $4$~ns, corresponding to approx. 140 round-trip times $T_\mathrm{R}=28.4$~ps. \textbf{d.} Setup of the intensity sampling experiment, including the PicoLuz LLC Temporal Magnifier and a 4~GHz sampling oscilloscope. FBG: fiber-Bragg grating (in transmission), DCF: dispersion compensating fiber, EDFA: erbium-doped fiber amplifier, SHG: second harmonic generation, FLC: fiber laser comb (250 MHz rep. rate), PD: photodetector, FPC: fiber polarization controller, HWP: half-wave-plate, QWP: quarter-wave-plate.
} \end{figure} \clearpage \section*{\huge Supplementary Information} \section{Numerical simulation} When a pump laser with frequency $\omega_p$ is coupled to a high-Q Kerr-nonlinear resonator, a system of nonlinear coupled mode equations \cite{Chembo2010,Chembo2010a,Chembo2010b,Chembo2010d,Maleki2010,Matsko2011a,Herr2012,Chembo2013} can be used to describe the evolution of the mode amplitudes $A_\mu$, where $|A_\mu|^2$ is the number of photons in the mode with index $\mu$ and resonance frequency $\omega_\mu = \omega_0+D_1 \mu+\tfrac{1}{2} D_2 \mu^2 + \tfrac{1}{6}D_3\mu^3 + ...$ ($D_1=2\pi/T_R$, $D_2$ and $D_3$ corresponding to FSR, second and third order dispersion; fourth and higher order dispersion terms may be introduced in an analogous manner). All mode numbers $\mu$ are defined relative to the pumped mode $\mu=0$. The set of coupled mode equations reads: \begin{align} \frac{\partial A_\mu}{\partial t}&=-\frac{\kappa}{2} A_\mu+\delta_{\mu 0}\sqrt{\eta\kappa}s_\mathrm{in} e^{-i(\omega_p-\omega_0)t}+ig\!\!\!\!\!\sum_{\mu',\mu'',\mu'''}\!\!\!\!A_{\mu'}A_{\mu''}A^*_{\mu'''}e^{-i(\omega_{\mu'}+\omega_{\mu''}-\omega_{\mu'''}-\omega_\mu)t},\nonumber\\ s_\mathrm{out} &= s_\mathrm{in}-\sqrt{\eta\kappa}\sum A_\mu e^{-i(\omega_\mu-\omega_p)t}. \end{align} Here, $\kappa = \kappa_0 + \kappa_{ext}$ denotes the cavity decay rate as the sum of the intrinsic decay rate $\kappa_0$ and the coupling rate to the waveguide $\kappa_{ext}$, $\eta=\kappa_{ext}/\kappa$ is the coupling efficiency ($\eta=1/2$ for critical coupling), $|s_\mathrm{in,out}| = \sqrt{P_\mathrm{in,out}/\hbar\omega_0}$ denote the pump and output field amplitudes, and $\delta_{\mu 0}$ is Kronecker's delta. The nonlinear coupling coefficient \begin{align} g&=\frac{\hbar\omega_0^2 cn_2}{n_0^2V_\mathrm{eff}}
\end{align} describes the cubic Kerr-nonlinearity of the system, with the refractive index $n_0$, the nonlinear refractive index $n_2$, the effective cavity nonlinear volume $V_\mathrm{eff}={\cal A_\mathrm{eff}}L$ (with effective nonlinear mode-area ${\cal A_\mathrm{eff}}$ and circumference of the cavity $L$), the speed of light $c$ and the Planck constant $\hbar$. The summation includes all $\mu',\mu'',\mu'''$ respecting the relation $\mu=\mu'+\mu''-\mu'''$. The values of $D_{1}/2\pi=35.2$ GHz, $D_{2}/2\pi=10$ kHz and $D_{3}/2\pi=-130$ Hz were measured following ref.~\cite{DelHaye2009a} for a resonator similar to the one employed in the present experiments. The mode volume $V_\mathrm{eff}=5.6 \times 10^{-13}$~m$^3$ was inferred from finite element simulations (such as shown in Fig.~1 in the main text). The system of coupled mode equations may be rewritten in a dimensionless way, where the explicit time dependence in the nonlinear terms is removed. This is achieved by using the scaling $f=\sqrt{\frac{8g\eta P_\mathrm{in}}{\kappa^2 \hbar \omega_0}}$, $\tau=\kappa t/2$ and the phase transformation $a_\mu=A_\mu\sqrt{2g/\kappa}e^{-i(\omega_\mu-\omega_p-\mu D_1)t}$: \begin{align} \frac{\partial a_\mu}{\partial \tau}&=-[1+i\zeta_\mu] a_\mu +i\sum_{\mu'\leq \mu''}(2-\delta_{\mu'\mu''}) a_{\mu'} a_{\mu''} a^*_{\mu'+\mu''-\mu}+ \delta_{0\mu} f. \label{simeqs} \end{align} Here $a_\mu$ can be interpreted as the slowly varying amplitude of the comb mode close to the mode frequency $\omega_\mu$ and $\tau=\kappa t/2$ denotes the normalized time. The quantity $\zeta_{\mu}=2(\omega_\mu-\omega_p- \mu D_1)/\kappa$ is a formal measure of the detuning between the cold resonance frequencies $\omega_\mu$ and an equidistant $D_1$-spaced frequency grid.
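As a numerical cross-check of this definition, the pump-grid terms $\mu D_1$ cancel, so that $\zeta_\mu$ reduces to $\zeta_0+(D_2/\kappa)\mu^2+(D_3/3\kappa)\mu^3$. The short Python sketch below verifies this using the measured dispersion values; the cavity linewidth and pump detuning are assumed illustrative values, not taken from the text:

```python
import math
import numpy as np

# Measured dispersion parameters quoted in the text:
D1 = 2 * math.pi * 35.2e9      # FSR (rad/s)
D2 = 2 * math.pi * 10e3        # second order dispersion (rad/s)
D3 = -2 * math.pi * 130.0      # third order dispersion (rad/s)

# Assumed illustrative values (not from the text):
kappa = 2 * math.pi * 1e6      # cavity decay rate (rad/s)
omega0 = 2 * math.pi * 193e12  # pumped resonance frequency (rad/s)
omega_p = omega0 - 1.5 * kappa # pump red-detuned such that zeta_0 = 3

mu = np.arange(-50, 51)
omega_mu = omega0 + D1 * mu + D2 * mu**2 / 2 + D3 * mu**3 / 6

# definition of zeta_mu from the text
zeta_mu = 2 * (omega_mu - omega_p - mu * D1) / kappa

# equivalent form: the D1 terms cancel, leaving pump detuning plus dispersion
zeta0 = 2 * (omega0 - omega_p) / kappa
zeta_ref = zeta0 + (D2 / kappa) * mu**2 + (D3 / (3 * kappa)) * mu**3
assert np.allclose(zeta_mu, zeta_ref)
```

For anomalous dispersion ($D_2>0$) the resulting $\zeta_\mu$ grows quadratically with $\mu$, while the small third order term only introduces a weak asymmetry between the two sides of the comb.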
In the dimensionless form all frequencies, detunings and magnitudes are measured in units of the cold cavity resonance linewidth, so that $|a_\mu|^2=1$ corresponds to a nonlinear mode shift of half a cold resonance width, which also corresponds to both the single mode bistability and degenerate oscillation thresholds \cite{Chembo2010}. This set of coupled mode equations (\ref{simeqs}) serves as the basis for the numerical simulations. It is propagated in time using an adaptive stepsize Runge-Kutta integrator. Random vacuum field fluctuations are introduced to seed the initial degenerate four-wave-mixing process. The simulation time grows cubically with the number of modes taken into account: if $2K$ sidebands and the pump are considered, then the total number of nonlinear terms in all $2K+1$ equations is $\frac{1}{3}(K+1)(8K^2+7K+3)$. As opposed to the modes $A_\mu$, the fields $a_\mu$ correspond to an equidistant frequency grid. Note that the amplitude and phase modulations implicitly included in the time dependence of $a_\mu$ account for frequency deviations from the equidistant grid and, in particular, for noisy comb states with multiple lines per resonance. For the stationary soliton solutions discussed in the main text and later on here, the amplitudes and phases of $a_\mu$ are constant in time when third and higher order dispersion terms are neglected. Throughout the simulations we neglect thermal effects. We also neglect the frequency dependence of nonlinearity, losses and mode-overlap, interactions with other mode families, and any particularities of the resonator geometry. In our numerical simulations we also test the influence of the higher order dispersion terms on the formation of soliton states. We varied $D_{3}$ from 0 to $10^{4}$~s$^{-1}$ and found that this does not prevent soliton formation and only affects the pulse repetition rate.
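The quoted operation count refers to the reduced sum over $\mu'\leq\mu''$ in eq.~(\ref{simeqs}), where each term requires the idler mode $\mu'+\mu''-\mu$ to lie within the simulated range. It can be checked by brute-force enumeration; the following Python sketch (illustrative, not part of the original simulation code) compares the enumeration with the closed-form expression:

```python
def n_terms_brute(K):
    """Count terms in the reduced sum over mu' <= mu'' for all 2K+1 equations."""
    modes = range(-K, K + 1)
    count = 0
    for mu in modes:
        for m1 in modes:
            for m2 in modes:
                # term a_{m1} a_{m2} a*_{m1+m2-mu} exists if the idler mode is simulated
                if m1 <= m2 and -K <= m1 + m2 - mu <= K:
                    count += 1
    return count

def n_terms_formula(K):
    """Closed-form count quoted in the text."""
    return (K + 1) * (8 * K**2 + 7 * K + 3) // 3

# the two counts agree, e.g. 12 terms for K = 1 and 49 terms for K = 2
for K in range(8):
    assert n_terms_brute(K) == n_terms_formula(K)
```

The cubic growth of this count with $K$ is what makes simulations with many hundreds of modes expensive.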
\section{Analytical description of solitons in a microresonator} To describe the internal field in a nonlinear microresonator, the Nonlinear Schr\"odinger Equation (NLS) may be used when third and higher order dispersion terms are neglected \cite{Matsko2011a}: \begin{align} \frac{\partial A}{\partial t}-i\frac{1}{2} D_2 \frac{\partial^2 A}{\partial \phi^2} - i g |A|^2A = -\left(\frac{\kappa}{2} +i(\omega_0-\omega_p)\right)A+\sqrt{\frac{\kappa\eta P_\mathrm{in}}{\hbar\omega_0}}. \label{nls1} \end{align} Here $A(\phi,t)=\sum_\mu A_\mu e^{i\mu\phi-i(\omega_\mu-\omega_p) t}$ is the slowly varying field amplitude and $\phi$ is the angular coordinate inside the resonator. This equation may be formally obtained from the nonlinear equation for the slowly varying amplitude in the time domain by using the formal substitution: \begin{align} \omega_\mu = \omega_0+D_1 \mu+\frac{1}{2}D_2 \mu^2 = \omega_0+D_1 \mu-\frac{1}{2}D_2 \frac{\partial^2}{\partial \phi^2}, \end{align} as $\sum_\mu (i\mu)^n A_\mu e^{i\mu\phi-i(\omega_\mu-\omega_p) t}=F.T.\left[\frac{\partial^n}{\partial \phi^n}A(\phi,t)\right]$ (details of the analogous derivation for a fiber are given in e.g. \cite{Boyd2007}). Transforming eq. (\ref{nls1}) to its dimensionless form gives: \begin{align} i\frac{\partial\Psi}{\partial\tau}+\frac{1}{2}\frac{\partial^{2}\Psi}{\partial \theta^{2}}+|\Psi|^{2}\Psi=(-i+\zeta_{0})\Psi+if. \label{eq:nls} \end{align} Here $\theta=\phi\sqrt{\frac{1}{2d_{2}}}$ is the dimensionless longitudinal coordinate, $\Psi(\tau,\phi)=\sum a_{\mu}(\tau)e^{i\mu\phi}$ is the waveform, and $d_{2}=D_{2}/\kappa$ is the dimensionless dispersion. Equation (\ref{eq:nls}) is identical to the Lugiato-Lefever equation \cite{Lugiato1987}, where a transverse coordinate is used instead of the longitudinal one used in our case.
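A minimal time propagation of eq.~(\ref{eq:nls}) can be sketched pseudospectrally: the Kerr term is evaluated on the angular grid via FFT, which is equivalent to the coupled mode system above. The Python sketch below uses a fixed-step RK4 integrator (the actual simulations use an adaptive one, with noise seeding) and assumed illustrative parameters; for a pump below the bistable/modulational-instability regime the field simply relaxes to the homogeneous cw steady state:

```python
import numpy as np

# Assumed illustrative parameters (not taken from the paper):
zeta0, f, d2 = 2.0, 0.9, 0.01   # detuning, dimensionless pump, dispersion
N = 64                          # number of simulated modes / grid points

mu = np.fft.fftfreq(N, d=1.0 / N)             # relative mode numbers
lin = -(1.0 + 1j * zeta0) - 1j * d2 * mu**2   # linear operator per mode
pump = np.zeros(N, dtype=complex)
pump[0] = f                                   # drive acts on mu = 0 only

def rhs(a):
    """d a_mu / d tau of the LLE in mode space (Kerr term via FFT, no dealiasing)."""
    psi = np.fft.ifft(a) * N            # field on the angular grid
    kerr = 1j * np.abs(psi) ** 2 * psi  # i |psi|^2 psi
    return lin * a + np.fft.fft(kerr) / N + pump

a = np.zeros(N, dtype=complex)          # start from an empty cavity
dt = 0.01
for _ in range(2000):                   # fixed-step RK4 up to tau = 20
    k1 = rhs(a)
    k2 = rhs(a + dt / 2 * k1)
    k3 = rhs(a + dt / 2 * k2)
    k4 = rhs(a + dt * k3)
    a = a + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# The homogeneous power |a_0|^2 is the lowest real root of
# x^3 - 2 zeta0 x^2 + (zeta0^2 + 1) x = f^2.
roots = np.roots([1.0, -2.0 * zeta0, zeta0**2 + 1.0, -f**2])
x_low = min(r.real for r in roots if abs(r.imag) < 1e-9)
assert abs(abs(a[0]) ** 2 - x_low) < 1e-6
```

Reaching soliton states in such a simulation additionally requires seeding with vacuum noise and scanning the detuning $\zeta_0$, mirroring the laser tuning used in the experiment.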
Using the ansatz of a stationary ($\frac{\partial\Psi}{\partial\tau}=0$) soliton on a continuous-wave (cw) background \cite{Wabnitz1993} we find an expression for a single soliton \begin{align} & \Psi=\Psi_{0}+\Psi_{1}\simeq\Psi_{0}+B e^{i\varphi_{0}}{\rm sech}(B\theta),\label{eq:singlesoliton} \end{align} where the real number $B$ defines both the width and the amplitude of the soliton and $\varphi_{0}$ defines the phase angle. We note that eq. (\ref{eq:singlesoliton}) is not an exact solution of eq. (\ref{eq:nls}), for which exact soliton solutions are known only in the case of zero losses \cite{Barashenkov1996}. The constant cw background $\Psi_{0}$ can be found by inserting the constant ansatz $\Psi=\Psi_{0}$ into eq. (\ref{eq:nls}); it is given by the lowest branch \cite{Barashenkov1996} of the solution of \begin{align} & (|\Psi_{0}|^{2}-\zeta_{0}+i)\Psi_{0}=if, \end{align} which, for $\zeta_{0}>\sqrt{3}$ (bistability criterion) and $f^2<\frac{2}{27}\zeta_0(\zeta_0^2+9)$, results in: \begin{align} |\Psi_0|^2&=\frac{2}{3}\zeta_0-\frac{2}{3}\sqrt{\zeta_0^2-3}\cosh\!\left(\frac{1}{3}\,\,\mathrm{arcosh}\!\left(\frac{2\zeta_0^3+18\zeta_0-27f^2}{2(\zeta_0^2-3)^{3/2}}\right)\right),\nonumber\\ \Psi_0&=\frac{if}{|\Psi_0|^2-\zeta_0+i}\simeq\frac{f}{\zeta_{0}^{2}}-i\frac{f}{\zeta_{0}}. \label{eq:Psi0} \end{align} The soliton component $\Psi_{1}$ in eq. (\ref{eq:singlesoliton}) is approximated by the bright soliton solution of the undriven, undamped (ordinary) NLS, which is the limiting case for $\zeta_{0}\gg1$. The parameters $B$ and $\varphi_{0}$ can be derived from general conditions for the soliton attractor \cite{Wabnitz1993}, which yields \begin{align} B&\simeq\sqrt{2\zeta_{0}}, \label{eq:B} \end{align} \begin{align} \cos(\varphi_{0})& \simeq\frac{\sqrt{8\zeta_{0}}}{\pi f}. \label{solparms} \end{align} Based on eqs.
(\ref{eq:Psi0},\ref{eq:B}) we can estimate the ratio $R$ of the soliton peak power to the cw pump background: \begin{align} R=\frac{|B|^2}{|\Psi_0|^2}=\frac{2\zeta_0^3}{f^2}. \end{align} For the maximal detuning $\zeta_0 = \zeta_0^\mathrm{max}$ (see eq. (\ref{eq:lim1}) below) we find: \begin{align} R_\mathrm{max}=\frac{\pi^6 f^4}{2^8}= \left(\frac{\pi^3 g\eta P_\mathrm{in}}{2\kappa^2 \hbar \omega_0}\right)^2. \end{align} Extending eq. (\ref{eq:singlesoliton}) to the case of multiple solitons inside the resonator gives \begin{align} & \Psi(\phi)\simeq \underbrace{\Psi_{0}}_{C_1}+\underbrace{\left(\frac{4\zeta_{0}}{\pi f}+i\sqrt{2\zeta_{0}-\frac{16\zeta_{0}^{2}}{\pi^{2}f^{2}}}\right)}_{C_2}\sum_{j=1}^{N}\,\mathrm{sech}(\sqrt{\frac{\zeta_{0}}{d_{2}}}(\phi-\phi_{j})).\label{waveform} \end{align} It was shown in \cite{Wabnitz1993} that if a pair of solitons in a train is separated by a distance $\phi_{j+1}-\phi_{j} \gtrsim (8/B) \sqrt{2d_2}$, the solitons do not interact. This sets an upper limit $N_\mathrm{max} < \frac{2\pi}{8}\sqrt{\zeta_0/d_2}$ on the number of stationary solitons in the resonator and, consequently, on the maximum number of ``steps'' in intracavity power and transmission. Assuming that solitons can only emerge for $\zeta_0>\sqrt{3}$ (bistability criterion), we find $N_\mathrm{max} \approx \sqrt{\kappa/D_2}$, which remarkably coincides with the mode number $\mu_\mathrm{th, min}$ of the first generated primary sidebands in the process of comb generation \cite{Herr2012}.
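The closed-form background and the soliton parameters above can be evaluated numerically. The sketch below (with assumed illustrative values of $\zeta_0$, $f$ and $d_2$, not from the paper) checks the cosh expression for $|\Psi_0|^2$ against direct root finding and evaluates $B$, $\varphi_0$, $R$ and $N_\mathrm{max}$:

```python
import math
import numpy as np

# illustrative values (assumed, not from the paper): detuning, pump, dispersion
zeta0, f, d2 = 5.0, 2.1, 0.01

# |Psi_0|^2 is a root of the cubic (x - zeta0)^2 x + x = f^2
roots = np.roots([1.0, -2.0 * zeta0, zeta0**2 + 1.0, -f**2])
x_min = min(r.real for r in roots if abs(r.imag) < 1e-9)

# closed-form lowest branch of |Psi_0|^2
arg = (2 * zeta0**3 + 18 * zeta0 - 27 * f**2) / (2 * (zeta0**2 - 3) ** 1.5)
x_cf = 2 * zeta0 / 3 - (2 / 3) * math.sqrt(zeta0**2 - 3) * math.cosh(math.acosh(arg) / 3)
assert abs(x_min - x_cf) < 1e-8

# complex cw background Psi_0 = i f / (|Psi_0|^2 - zeta0 + i)
psi0 = 1j * f / (x_cf - zeta0 + 1j)

# soliton amplitude/width parameter and phase (requires zeta0 < pi^2 f^2 / 8)
B = math.sqrt(2 * zeta0)
phi0 = math.acos(math.sqrt(8 * zeta0) / (math.pi * f))

# peak-to-background ratio: exact background vs. the approximation 2 zeta0^3 / f^2
R_num = B**2 / abs(psi0) ** 2
R_approx = 2 * zeta0**3 / f**2

# upper bound on the number of non-interacting solitons
N_max = (2 * math.pi / 8) * math.sqrt(zeta0 / d2)
```

For these values the approximate ratio $2\zeta_0^3/f^2$ agrees with the exact background to within a few percent, consistent with the approximation $|\Psi_0|^2\simeq f^2/\zeta_0^2$ used above.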
\section{Limit conditions for solitons in microresonators} \label{limitssection} By substituting $|h|=|f|/\sqrt{2\zeta_{0}^{3}}$, $\gamma=1/\zeta_{0}$, $\tilde\theta=\sqrt{2\zeta_0}\theta$, $\Psi=\sqrt{2\zeta_0}\tilde\Psi$ and changing the phase of the pump, eq.~(\ref{eq:nls}) is transformed to the damped driven NLS equation for the stationary case: \begin{align} \frac{\partial^{2}\tilde\Psi}{\partial \tilde\theta^{2}}+2|\tilde\Psi|^{2}\tilde\Psi-\tilde\Psi=-i\gamma\tilde\Psi-h,\label{eq:nls-1} \end{align} which was analyzed for infinite boundary conditions in \cite{Barashenkov1996}. In particular, the condition for soliton existence, $h>2\gamma/\pi$, transforms into a maximum detuning: \begin{align} \zeta_{0}^\mathrm{max}=\pi^{2}f^{2}/8.\label{eq:lim1} \end{align} Eq. (\ref{eq:lim1}) can also be found from the requirement that the right-hand side of equation (\ref{solparms}) must be smaller than unity. In \cite{Barashenkov1996} it was further shown analytically that the boundaries separating the regimes of existence of solitons (as described in the main text) are defined by characteristic curves for $\tilde\Psi_{0}$ in (\ref{eq:nls-1}). In our case this translates into \begin{align} |\Psi|_{\pm}^{2}=\frac{2}{3}\zeta_{0}\pm\frac{1}{3}\sqrt{\zeta_{0}^{2}-3}. \end{align} Numerical simulations for our system with periodic boundary conditions show that all these limits remain valid with very good quantitative agreement for a sufficiently large number $K\gg \frac{2}{\pi}\sqrt{\frac{\zeta_0^\mathrm{max}}{d_2}}=f\mu_\mathrm{th, min}/\sqrt{2}$ of modes (typically a few hundred). \section{Analytical description of steps in the intracavity power} The height of steps in the intracavity power can be found by averaging the waveform amplitude (eq.
\ref{waveform}) squared over one roundtrip for different numbers $N$ of solitons: \begin{align} \overline{|\Psi|^{2}}&=|\Psi_{0}|^{2}+N\cdot\xi(K)\frac{1}{2\pi}\int_{0}^{2\pi}(|\Psi_{1}|^{2}+\Psi_{0}\Psi_{1}^{*}+\Psi_{1}\Psi_{0}^{*}) d\phi\nonumber\\ &=|\Psi_{0}|^{2}+N\sqrt{2d_2}(\Psi_0'\cos\phi_0+\Psi_0''\sin\phi_0+\sqrt{2\zeta_0}/\pi)\simeq \frac{f^2}{\zeta_0^2}+N\xi(K)\frac{2}{\pi}\sqrt{d_2\zeta_0}. \end{align} As shown in Fig.~3 in the main manuscript, this approach also describes the laser tuning dependence of the step height. When comparing to numerical simulations with a rather low mode number, we use a correction factor $\xi(K)$ of order unity ($\xi(K)\simeq1.3$ in the case of 101 simulated modes). For a higher number of simulated modes (e.g. $501$) this correction is not required. \section{Optical spectrum and temporal width of solitons in a microresonator} The optical spectrum $\Psi(\mu)$ of the soliton has the same hyperbolic secant form as the time domain waveform. Mathematically this corresponds to the fact that the Fourier transform of a hyperbolic secant is again a hyperbolic secant: \begin{align} \Psi(\mu)=\mathrm{F.T.}\left[\sqrt{2\zeta_{0}}\,\mathrm{sech}\left(\sqrt{\frac{\zeta_{0}}{d_{2}}}\phi\right)\right]=\sqrt{d_{2}/2}\,\mathrm{sech}\left(\frac{\pi\mu}{2}\sqrt{\frac{d_{2}}{\zeta_{0}}}\right). \end{align} Using the relation $\omega = \omega_p + \mu D_1$ for the optical frequency and $t=\tfrac{\phi}{2\pi} T_\mathrm{R}=\phi/D_1$ for the time, the spectral envelope and the soliton waveform can be rewritten as: \begin{align} \Psi(\omega-\omega_{p}) = \sqrt{d_{2}/2}\,\mathrm{sech}((\omega-\omega_{p})/\Delta\omega) \ \ \mathrm{with} \ \Delta\omega=\frac{2D_{1}}{\pi}\sqrt{\frac{\zeta_{0}}{d_{2}}}. \end{align} and \begin{align} \Psi(t)=\sqrt{2\zeta_{0}}\,\mathrm{sech}(t/\Delta t),\ \ \mathrm{with} \ \Delta t=\frac{1}{D_{1}}\sqrt{\frac{d_{2}}{\zeta_{0}}}. \end{align} The minimal achievable soliton duration can be found by using $\zeta_0^\mathrm{max}$ (eq.
\ref{eq:lim1}) in the above equation for $\Delta t$: \begin{align} \Delta t_\mathrm{min}=\frac{1}{\pi D_{1}}\sqrt{\frac{\kappa D_2 n_0^2 V_\mathrm{eff}}{\eta P_\mathrm{in} \omega_0 c n_2}}. \end{align} This equation can be recast in terms of the group velocity dispersion $\beta_2 = \tfrac{-n_0}{c}D_2/D_1^2$ and the nonlinear parameter $\gamma=\tfrac{\omega}{c}\tfrac{n_2}{\cal A_\mathrm{eff}}$ (for simplicity we assume critical coupling $\eta=1/2$ and resonant pumping): \begin{align} \Delta t_\mathrm{min}=\frac{2}{\sqrt{\pi}}\sqrt{\frac{-\beta_2}{\gamma \cal{F} P_\mathrm{in}}}, \end{align} where ${\cal F}=\tfrac{D_1}{\kappa}$ denotes the finesse of the cavity. Note that the values $\Delta \omega$ and $\Delta t$ need to be multiplied by a factor of $2\,\mathrm{arccosh}(\sqrt{2})=1.763$ to yield the FWHM of the sech$^2$-shaped power spectrum and pulse intensity, respectively. For the time bandwidth product (TBP) we find $\Delta t\Delta\omega=2/\pi$ or, when considering the FWHM of spectral and temporal power (in units of Hz and s), $\mathrm{TBP}=0.315$. In the case of $N$ multiple solitons inside the cavity the optical spectrum $\Psi_{N}$ is more structured, resulting from the interference of single soliton spectra $\Psi_{j}(\mu)$, where the relative phases of these spectra are determined by the positions $\phi_{j}$ of the individual solitons: $\Psi_{N}(\mu)=\Psi(\mu)\sum_{j=1}^{N}\mathrm{e^{i\mu\phi_{j}}}$. The line-to-line variations can be large; however, the overall averaged spectrum still follows the single soliton shape and is proportional to $\sqrt{N}\Psi(\mu)$. \section{Laser tuning method to achieve soliton states} Experimentally, the soliton states in the MgF$_{2}$ resonator cannot be stably reached by slowly (manually) tuning into the soliton state. The obstacle lies in the temperature drop the resonator experiences when transiting from the upper branch state (high intracavity power) to the lower branch soliton state (lower intracavity power).
This sudden temperature drop leads to a blue-shift of the resonance frequency and a loss of the soliton state. On the other hand, when tuning very quickly into the soliton state the resonator is still cold, and its subsequent heating will again lead to a loss of the soliton state. We solve this problem by tuning into the soliton state with an ideal, intermediate tuning speed, such that the resonator reaches the soliton state in thermal equilibrium, that is, neither too hot nor too cold. This method is illustrated in Fig.~\ref{fig:laserTuningMethod}. Practically, this is achieved by programming an electronic laser frequency ramp signal defined by three parameters: the laser tuning speed, and the laser start and end wavelengths. This signal is used to control the piezo tuning of the fiber laser. The laser frequency ramp is performed in the direction of decreasing optical frequency in order to first follow the upper branch before jumping onto the lower branch. Once found for a particular resonator, the parameters do not need to be changed and allow reliable generation of soliton states at the push of a button. In contrast to fiber cavity experiments, the soliton pulses form spontaneously without the need for external stimulation. The number of solitons formed can be controlled by the pump laser detuning. Once generated, the solitons remain stable for hours until the pump laser is switched off, without the need for active feedback on either the resonator or the pump laser. The stability of the soliton states is discussed in the next section. \section{Self-stability of soliton states} As discussed in the main text, the soliton states are achieved when the pump laser is detuned to the lower, effectively red detuned branch (cf. Fig.~2e in the main text). The generated solitons remain stable without any active external feedback applied to the system.
This is remarkable and indeed surprising, as operating on the lower branch (effectively red detuned) is usually thermally unstable\cite{Ilchenko1992a,Carmon2004} and would require active stabilization techniques. The thermal instability of the lower branch of the Kerr-bistability curve can be explained by its negative slope (decreasing intracavity power for increased laser detuning $\zeta_0$). In the presence of solitons, however, the fraction of the pump light that propagates at a similar velocity together with the high intensity solitons inside the resonator (the difference between phase and group velocity results in an effectively longer soliton pulse; the presented reasoning remains valid) experiences a much larger phase shift in one cavity roundtrip, compared to the fraction seeing only the cw component. One may interpret the situation in terms of two superimposed bistability curves; one bistability curve for the fraction of the pump light overlapping with the cw component and one bistability curve for the fraction of the pump light overlapping with the solitons. Fig.~\ref{fig:DualBistabCurve} illustrates the respective bistability curves of the intracavity average power for the case of one soliton. The bistability curve resulting from the soliton is much lower in height because the spatial extent of a soliton is small compared to the roundtrip length of the resonator. Due to the higher peak power (when compared to the intracavity cw component) the bistability curve resulting from the soliton extends to larger detunings. While the main portion of the pump light is effectively red detuned (lower branch), the small portion overlapping with the soliton inside the resonator is effectively blue detuned (upper branch). Importantly, the resulting combined average intracavity power (the sum of the two bistability curves) has a positive slope (increasing intracavity power for increasing laser wavelength), which is responsible for the effective thermal self-stability.
The resulting positive slope can indeed be evidenced for various numbers of solitons in the experiment (cf. main text Fig.~2a; a negative transmission slope corresponds to a positive slope in average intracavity power), as well as in the numerical simulation (blue curve, main text Fig.~3a). It is important to note that a combined positive slope requires that the positive slope (induced by the soliton) dominates over the negative slope (related to the intracavity cw component). This puts a limit on the maximum length of the cavity: for long cavities, such as fiber cavities, the intracavity power is dominated by the cw component and active stabilization\cite{Leo2010} of the system is required. This explains why self-stability of the solitons is only observed in microresonators. Finally, we note that while the intracavity power in the soliton state is dominated by the soliton component, the PDH signal is dominated by the large fraction of the pump light that is effectively red detuned (experimentally this can be seen from the only small changes to the PDH signal in the red detuned regime when transmission steps occur, cf. main text Fig.~2e). \section{Frequency resolved optical gating (FROG) vs. the auto-correlation technique} In microresonator based systems the proof of a truly pulsed time-domain waveform inside the microresonator (and of pulses coupled out directly from the microresonator) is challenging. The difficulty lies in distinguishing the truly pulsed scenario (where all phases are equal such that a pulse forms) from the case where all phases between comb lines are constant in time but arbitrary, which results in a periodic, modulated time domain output that is, however, not truly pulsed, e.g. \cite{Ferdous2011, Papp2011a}. Figure~\ref{fig_SI_FROGvsAutocorr} compares the two cases (not pulsed and pulsed), based on simulated data. While the FROG measurement can clearly distinguish between the two cases, a simple auto-correlation measurement cannot.
Indeed, the auto-correlation trace can exhibit narrow spikes that are easily confused with pulses, even in the case where no pulses are present in the cavity. This applies in particular to experimental data, where detector noise and background further complicate a reliable analysis of auto-correlation data. A detailed understanding and discussion of the background in experimental auto-correlation traces is essential to correctly interpret the results. \section{Soliton mode-locking in lasers vs. soliton formation in microresonators} \label{comparison} This section compares soliton formation in microresonators with soliton mode-locking in lasers, where a saturable absorber is necessary for soliton stability: \\ {\bf Soliton mode-locking in lasers:} Generally, mode-locking requires a pulse shaping mechanism, which can be achieved in different ways, for example via a fast saturable absorber that shapes the circulating intensity inside the laser cavity into a pulse\cite{Haus2000a}. Here, the shortest achievable pulse duration is limited by the relaxation time of the fast saturable absorber. Another mode-locking mechanism is soliton mode-locking, where the pulse shaping mechanism is provided by the formation of solitons in the presence of negative group velocity dispersion and self-phase modulation via the cavity's non-linearity. This method is widely used and well understood in the context of mode-locked lasers and ultra-short optical pulse generation\cite{Kaertner1996,Kaertner1998}. While the pulse shaping does not rely on the effect of a saturable absorber, it has been shown theoretically and experimentally that soliton mode-locked lasers still require a saturable absorber, which ensures the stability of the soliton against the growth of a narrow-bandwidth cw background\cite{Kaertner1996,Kaertner1998}.
This cw background arises from the interaction and reshaping of the soliton in the laser cavity and subsequently experiences a larger gain than the soliton, which, due to its broadband spectral nature, extends into the outer, lower gain parts of the spectral laser gain window. It is important to note that in the case of soliton mode-locking the relaxation time of the saturable absorber does not limit the achievable pulse duration; it merely ensures the suppression of the continuum on intermediate timescales\cite{Jung1995}. \\ \\ {\bf Soliton formation in microresonators:} Soliton formation in microresonators is similar to soliton mode-locking in lasers. As in the case of a soliton mode-locked laser, solitons are formed due to a balance between cavity dispersion and self-phase modulation. However, while microresonators are driven by a continuous wave pump laser, they are not lasers. The conversion of the continuous pump laser light into other frequency components and the amplification of the newly generated frequency components rely exclusively on the parametric gain due to the Kerr-nonlinearity of the resonator material. The cw pump laser coincides directly with a spectral component of the soliton. Importantly, a saturable absorber is not required for the stability of the solitons, as detailed below: Mathematically, the coherently driven, damped Kerr-nonlinear microresonator is described by the Lugiato-Lefever equation\cite{Lugiato1987}, which is identical to a damped, driven nonlinear Schrödinger equation. Dissipative temporal cavity solitons, superimposed onto a weak continuous wave background, have been proven to exist as stable mathematical solutions to this equation\cite{Wabnitz1993}. Due to the cavity loss, these solitons are dissipative in their nature and their persistence requires a source of energy for replenishment. The latter is provided by continuously and coherently driving the cavity.
\\ The continuous wave background on which the solitons exist in the case of a microresonator is very different from the detrimental cw background in soliton mode-locked lasers. It is a coherent internal field originating from the pump laser; it is not a narrow-bandwidth, low-intensity cw background produced by perturbation of the soliton. As opposed to a spectrally limited but continuous laser gain medium (continuous in the sense that it can amplify any frequency component within the gain bandwidth), the parametric gain profile is highly frequency selective (as energy conservation needs to be fulfilled in the frequency conversion processes). Moreover, the parametric gain profile depends on the light frequencies and intensities present in the cavity and relies on the phase coherent interaction between all these light frequencies. While it cannot replace a stringent mathematical stability analysis as mentioned above, this illustrates that the growth of a destabilizing cw background is generally not supported in a microresonator. Hence, stable soliton formation in a microresonator does not require a saturable absorber and the solitons are well described by the Lugiato-Lefever equation. This is also in perfect agreement with our experiments, which reveal the generation of stable solitons in optical microresonators. \section{Spectral Broadening} \label{broadening} Self-referencing via e.g. \textit{f-2f} or \textit{2f-3f} interferometry \cite{Telle1999,Cundiff2003} is an important future technical milestone for microresonator based combs (including soliton based combs). Necessary for these self-referencing schemes is a minimal optical comb bandwidth of an octave (\textit{f-2f}) or two thirds of an octave (\textit{2f-3f}). So far, however, self-referencing of microresonator combs has not been possible, as no system was capable of generating sufficiently broad spectra while maintaining the low-noise level required for metrology operation.
The discovery of soliton formation in microresonators and the generation of ultra-short optical pulses enables spectral broadening using techniques that have been developed for conventional mode-locked lasers. Here we demonstrate, in a first proof-of-concept experiment, external broadening of a soliton based microresonator frequency comb to a broadband spectrum. Note that we do not employ the same resonator as in the main text but a larger MgF$_2$ resonator with an FSR of only 14.1~GHz. Based on the spectral envelope, the pulse duration in the one soliton state is estimated to be 112~fs. The pulses are amplified in an erbium doped fiber amplifier to approximately 3.2~W average power. The amplified and dispersion-compensated pulses are sent into 2~m of highly-nonlinear fiber. The achieved spectral bandwidth is close to two thirds of an octave, as required for self-referencing (Fig.~\ref{fig:broadening}). No indication of added noise in the RF beatnote is observed. A more careful analysis of coherence\cite{Coen2002,Dudley2006} in the broadened spectrum is beyond the present scope and subject of future work. The presented results show that external broadening techniques can in principle be applied to microresonator based combs and open a viable route towards future self-referencing of microresonator based combs. \newpage \begin{figure} \centering \includegraphics[width=0.6\textwidth]{fig_laserTuningMethod_small} \caption{\textbf{Laser tuning method to achieve soliton states.} \textbf{a.} Illustration of the laser tuning method, where a laser frequency scan (green) is performed that stops when the targeted comb state is reached, marked by an orange dot in the corresponding transmission signal (the grey line illustrates the signal that would have been observed if the scan had been continued). The system stably remains in this state when the appropriate scan speed is chosen.
In this ideal scenario the temperature (which starts increasing as soon as light is coupled to the resonator) reaches the steady-state equilibrium temperature of the targeted state when the system has reached this state via laser detuning. If the laser scan is performed too slowly (quickly), then the resulting temperature will be too high (low) and destabilize the system. \textbf{b.} Experimental laser scan over a resonance, showing a pronounced step followed by multiple smaller steps. \textbf{c.} Demonstration of the adaptive scanning method. The laser scan is stopped after the transition to the soliton regime. The appropriate choice of scan speed allows the system to operate stably in a soliton state. The coupled pump power is 30~mW.} \label{fig:laserTuningMethod} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{soliton_bistabcurves_small} \caption{\textbf{Thermal self-stability of soliton states:} This figure is a more detailed version of Fig.~1c in the main text. The higher level of detail is required to explain the self-stability of soliton states. When tuning into the resonance with increasing wavelength (decreasing optical frequency), the intracavity power is described by the upper branch of the bistability curve. After the transition to a soliton state, the major fraction of the pump light is described by the lower branch bistability curve (red). The fraction of the pump light that propagates with the soliton inside the microresonator (cf. inset) experiences a larger phase shift and is effectively blue detuned on the upper branch of another bistability curve (blue). The shape of the latter bistability curve depends on the number of solitons present in the cavity and their peak power, as discussed in the text. The resulting intracavity power can be inferred by adding the bistability curves of all fractions of the pump light, resulting in the yellow curve.
A positive slope in the combined average intracavity power implies thermal stability of the system.} \label{fig:DualBistabCurve} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\textwidth]{fig_AutoCorr_vs_FROG_small} \caption{\textbf{Autocorrelation vs. frequency resolved optical gating} The left column (case 1) shows the situation of a comb spectrum with phases which are constant in time, but arbitrary and not such that a pulse forms inside the microresonator ($T_\mathrm{R}$ denotes the light roundtrip time of the microresonator). The right column (case 2) corresponds to the situation where a single pulse circulates inside the microresonator. The expected auto-correlation and FROG signals are shown for both cases. Note that the intra-cavity waveforms as well as the autocorrelation and FROG signals have been rescaled to the same peak values. In the pulsed case 2 the reached peak intensities in the intracavity waveform, and consequently in the auto-correlation and FROG signals, are much higher than in the not pulsed case 1.} \label{fig_SI_FROGvsAutocorr} \end{figure} \begin{figure} \centering \includegraphics[width=1\textwidth]{fig_broadening_small} \caption{\textbf{External broadening of ultra-short pulses (estimated 112~fs) from a microresonator:} \textbf{a} Optical single soliton spectrum (blue) generated in a 14.1~GHz MgF$_2$ resonator with 80~kHz resonance width. The spectrum is amplified and broadened to a supercontinuum (red) in a highly-nonlinear fiber. \textbf{b,c} Radio-frequency (RF) repetition rate beatnote observed when detecting the optical spectra on a fast photodetector before and after spectral broadening (resolution bandwidth 1~kHz).} \label{fig:broadening} \end{figure} \clearpage \renewcommand{\emph}{\textit} \bibliographystyle{naturemag}
\section{Introduction} Bicycling, as a traditional means of mobility, has gained more popularity in recent years. Bicycle mode share is rising in response to issues in modern cities, such as increases in traffic jams, land use, energy consumption, air pollution, climate change, and physical inactivity \cite{flusche2012bicycling,rupi2019visual}. This trend has continued during the COVID-19 pandemic, as it is reported that bicycling levels have significantly increased in many countries despite lockdowns and travel restrictions \cite{buehler2021covid}. However, there has been an alarming increase in bicyclist fatalities over the last decade. The National Highway Traffic Safety Administration's report shows that in the United States, the number of bicyclists' fatalities has increased by more than 35\% since 2010 \cite{national2021fatality}. One potential reason for this increase is due to the auto-centric nature of much of US roadway design, which often overlooks the safety and comfort needs of vulnerable road users like bicyclists \cite{schultheiss2018historical}. Furthermore, current crash reports only focus on vehicle-bicyclist crashes that result in fatalities, and have very limited information on bicyclists' discomfort, behaviors, and responses in different roadway designs and contextual settings \cite{robartes2017effect}. The absence of applicable data has been recognized as a limiting factor for many transport and urban planning studies, especially for bicyclists \cite{willberg2021comparing}. Therefore, it is necessary to identify innovative approaches to understand how bicyclists' behavior, sense of safety, and comfort is affected under different roadway conditions during the design and planning phases. To achieve this, bicyclists' behavior and psycho-physiological state data should be explored and evaluated in different roadway settings. Various methods of collecting bicycle safety, risk, comfort and behavior data have been utilized in the past. 
These studies span surveys, observational studies, naturalistic studies, and simulation studies. For example, interviews and surveys ask participants about their behaviors and comfort in a certain design context either by imagination or after an actual bike ride. However, the subjective response does not always reflect what the participant will do in a real-world setting and can suffer from hypothetical bias \cite{fitch2018relationship}. Observational studies can record realistic changes within the environment and bicyclists' responses in real-world conditions, but are unable to track bicyclists' physiological changes \cite{chidester2001pedestrian}. Naturalistic studies can further record bicyclists' responses in real-world conditions through different sensing modalities such as GPS, ECG, or mobile eye trackers \cite{rupi2019visual}. However, these studies have potential risks for participants, as they may be placed in dangerous roadway settings (e.g., distraction in a high traffic density area) \cite{stelling2018study}. Furthermore, in naturalistic studies, it is difficult to control the many environmental factors that may impact the independent variables, especially for physiological and behavioral factors, which makes causal inference especially difficult \cite{fitch2020psychological,teixeira2020does}. Experimental studies conducted through Immersive Virtual Environments (IVE) are an emerging approach that minimizes the hypothetical bias of subjective surveys while offering a controlled, low-risk, and immersive environment to evaluate the responses of bicyclists to different roadway designs and conditions. One of the main challenges in previous IVE simulation studies was the integration of human sensing techniques into the experiment.
In the IVE-related literature, participants' physiological responses have been applied to evaluate different design alternatives for buildings \cite{francisco2018occupant}, hospitals \cite{chias20193d}, and other civil infrastructure systems \cite{awada2021integrated}. However, only a few recent studies have applied bicyclist physiological sensing in IVE simulators \cite{cobb2021bicyclists}, and a deeper understanding of bicyclists' psychological and physiological responses in different roadway designs and conditions is still needed. Overall, we still have a very limited understanding of bicyclists' physiological responses in different roadway environments, especially in IVE studies. Some naturalistic studies have been conducted to evaluate bicyclists' behavior and physiological responses in different contextual settings \cite{guo_robartes_angulo_chen_heydarian_2021, mcneil2015influence,rupi2019visual,teixeira2020does}. These preliminary studies revealed that psycho-physiological metrics (e.g., heart rate (HR), gaze variability, and skin conductance) are indicators of how participants' behaviors and perceptions may change in different contextual settings. By integrating a bicycle simulator with an IVE, this research aims to overcome some of the limitations found in previous research by conducting a repeated measures experiment to collect bicyclists' physiological responses (specifically, gaze variability and HR) in different urban roadway designs. With an IVE, we are able to control other roadway environmental factors, such as infrastructure design, vehicle traffic volume, traffic signal phase, vehicular speeds and gaps, vehicle types, lighting, and weather conditions to better quantify the relationships between bicyclists' behavior (e.g., speed, lane position) and physiological responses (e.g., gaze variability, HR).
These measurements can be used as surrogate data to better understand how different types of roadway conditions and infrastructure designs result in higher rates of physiological stimulation. In this paper, we leverage human sensing tools (e.g., wearable devices) together with a bicycle simulator in an IVE to specifically evaluate and model bicyclists' psycho-physiological and behavioral responses. We consider three roadway design scenarios (shared bike lane [sharrow], standard dedicated bike lane, and protected bike lane with pylons) and evaluate bicyclists' HR, gaze measures, and speed within each environment. After performing extensive feature extraction on HR and gaze data, we leverage linear mixed effects models to compare bicyclists' responses across the simulated environments. In this paper, we first provide detailed background on previous bicycle safety studies in naturalistic, simulated, and virtual reality (VR) environments. We then discuss the applicability of bicyclists' physiological responses and gaze measures in understanding their behaviors and physiological states, such as stress level and cognitive load. We then present the methodology of our experimental design and the details of the experiment. Next, we provide the results for bicyclists' HR and gaze variability together with bicycling performance within each environment using a linear mixed effects modeling approach. We conclude with a discussion comparing the different physiological responses and the reasons behind the results. \section{Literature Review} This section is divided into literature reviews of different categories of bicyclists' behavior studies, including stated preference surveys (\ref{sec:preference}), observational studies (\ref{sec:observational}), naturalistic studies (\ref{sec:real_world}), and bicycle simulators (\ref{sec:simulators}).
An overview of different measures of human psycho-physiology in experimental settings, with a focus on bicyclist studies, follows (\ref{sec:psycho_measure}). \subsection{Stated Preferences Survey} \label{sec:preference} Surveys have been widely used to study bicyclists' behavior, particularly when faced with a lack of real-world data. Surveys, when composed carefully, can efficiently assess large populations of bicyclists and have been used to study a wide variety of topics such as perceived safety and comfort \cite{chaurand2013cyclists,abadi2018bicyclist}. For instance, \cite{chaurand2013cyclists} studied the perceived risk of bicyclists and drivers in certain interactions. The results suggest that perceived risk is higher for drivers compared to bicyclists. Additionally, the perceived risk of bicyclists is higher when interacting with a car than with another bike. Another study investigated the level of comfort perceived by bicyclists near urban truck loading zones under varying conditions of truck traffic, bicycle lane marking type, and traffic signs \cite{abadi2018bicyclist}. The results indicate that the existence of trucks in the traffic is a significant factor in reducing bicyclists' perceived comfort. Additionally, the study finds that women are generally more affected by truck traffic than men. While these types of studies add significant value to our understanding of the effect of contextual settings on bicyclists' perceived risk and comfort, they often suffer from issues such as limited external validity. In particular, these studies may suffer from hypothetical bias, in which participants' responses to surveys may not reflect their real responses in a naturalistic situation \cite{fitch2018relationship}. For instance, \cite{fitch2018relationship} reports that imagined ratings of comfort while biking may have a negative bias of as much as 15\% in comfort and safety compared to real-world situations.
Additionally, surveys and subjective measures generally cannot be used to understand the temporal dimension of the effect of certain contextual elements on bicyclists. For instance, in the case of perceived comfort, it is not possible to determine the exact moment at which a bicyclist felt discomfort, or how the level of discomfort varies among different people and at different locations. On the other hand, physiological measures can be used as surrogate metrics to understand the time span of contextual elements' effects on the participants. \subsection{Observational Study} \label{sec:observational} Observational studies can minimize the risk of hypothetical bias in stated preference surveys and provide a real-world assessment of bicyclists' responses in specific locations. For example, an observational study conducted in Boston, United States, investigated distracted cycling behavior and reported the prevalence of two types of distractions: auditory (ear buds/phones in or on ears) and visual/tactile (electronic device or other object in hand). Almost one-third of all bicyclists exhibited distracted behavior at four high-traffic intersections during peak commuting hours. The highest proportion of distracted bicyclists was observed during the midday commute (between 13:30-15:00) \cite{wolfe2016distracted}. Another observational study with 2187 cyclists in Germany shows that 22.7\% of bicyclists engage in a secondary task, such as wearing headphones or earphones (13.1\%) or interacting with other cyclists (7.0\%) \cite{huemer2019secondary}. In observational studies, the collected data rely only on the behavioral responses that the observers can visually discern, without the ability to manipulate different factors, such as traffic density or noise levels \cite{daniels2008effects}. In recent years, the utilization of cameras has greatly increased the popularity of video-based observational studies.
For instance, an observational study in China recorded 112 hours of video footage with 13,407 bicyclists riding shared bikes and 2061 riding personal bikes. Not wearing a helmet, violating traffic lights, riding in the opposite direction of traffic, not holding the handlebar with both hands, and riding in a non-bicycle lane were identified as the top unsafe behaviors \cite{gao2020unsafe}. Overall, observational studies can only evaluate bicyclists' behaviors in existing environments, and are unable to collect bicyclists' psycho-physiological responses. Psycho-physiological measures of bicyclists may be helpful for understanding the reason behind a distraction, the length of the distraction, and to what extent the stimulus affects the rider's decision-making. \subsection{Naturalistic Study} \label{sec:real_world} The development of mobile sensing technologies has enabled mobility data collection from a variety of modalities. In this line of research, a number of naturalistic studies have investigated bicyclists' behaviors and physiological responses, such as HR, heart rate variability (HRV) \cite{doorley2015analysis}, and gaze \cite{rupi2019visual}. For example, changes in ambient light levels can affect bicyclists' perception of the environment by changing their gaze reactions \cite{uttley2018eye}. These studies show preliminary evidence on how infrastructure design is associated with cycling stress, although variations are found between different cities and road types \cite{fitch2020psychological, teixeira2020does,guo_robartes_angulo_chen_heydarian_2021}.
In \cite{fitch2020psychological}, using a BodyGuard II beat-to-beat heart interval measuring device, the HRV results from 20 female participants suggest that only local roads (with no dividing yellow line and low car speeds and volumes) consistently provided less stress to the participants compared to collector roads (medium- to high-volume roads with striping for dividing traffic) and arterial roads (high-volume, multi-lane). One of the limitations identified in the study is that environments with protected or separate bike lanes were not included, so the results cannot indicate how those designs might compare to the as-built designs \cite{fitch2020psychological}. Lack of environmental control (e.g., traffic, weather, etc.) could be the main cause of the uncertainty, which undermines the interpretability of these results and suggests the need for further research. Additionally, only a limited number of human sensing devices can be applied in naturalistic settings, as many of these devices are intrusive. For example, electroencephalogram (EEG) measurement devices are better suited to in-lab testing. Intrusive devices can affect participants' behavior and safety, which may result in degraded data quality. Lastly, the potential risk of injuries and fatalities for participants raises ethical concerns for naturalistic studies. For example, \cite{stelling2018study} conducted a study in real traffic examining the glance behavior of teenage bicyclists when listening to music. The study was terminated after fourteen participants when the results indicated that a substantial percentage of participants cycling with music experienced a decrease in their visual performance \cite{stelling2018study}.
\subsection{Bicycle Simulator With IVE Study} \label{sec:simulators} The advancements in VR and bicycle simulation over the past decade have led to a rapid increase in their application among researchers, designers, and engineers to evaluate human responses to alternative infrastructure designs. The combination of an IVE and an instrumented physical bicycle simulator provides a high level of immersion and flexibility in experimental design. Furthermore, it enables user engagement and allows subjective analysis of participants to better understand their behavior and preferences in response to changes in simulated environments \cite{nazemi2018studying}. For instance, \cite{xu2017exploring} evaluated 30 participants' behaviors in an IVE by designing a straight path with four sections of varying traffic conditions. The results of this experiment suggest that the existence of a bike lane in low traffic conditions significantly improved cyclists' lane-keeping performance \cite{xu2017exploring}. A more recent study compared cycling behaviors in an IVE between a keyboard-controlled bicycle and an instrumented bicycle that participants could pedal. The results indicated more variance in measurements such as speed, head movement, acceleration, and braking behavior in the instrumented bicycle experiments \cite{bogacz2020comparison}. Validation studies have also been performed to compare bicycling behavior between IVE bicycle simulators and naturalistic studies \cite{o2017validation,guo_robartes_angulo_chen_heydarian_2021}. Although the number of validation studies is limited, promising results have been shown for the validity of cycling performance measures such as lane position and speed \cite{o2017validation}. However, most IVE-related studies are limited to observing cycling behaviors and preferences without exploring bicyclists' psycho-physiological responses.
\subsection{Measurement of Physiological Responses} \label{sec:psycho_measure} Past studies have pointed out the utility of human physiological signals such as cardiovascular measures (e.g., HR), skin temperature, skin conductance, brain signals, gaze variability, and gaze entropy in understanding emotions, stress levels, anxiety, and cognitive load \cite{tavakoli2021driver,kim2018stress,lohani2019review}. As we are focusing on wearable and eye tracking devices for this study, the following sections provide additional background on the correlation between stress level, cognitive load, HR, and gaze patterns. \subsubsection{HR/HRV} Studies in different areas have used HR measures to analyze different human states in response to changes within an environment. While in medical applications HR is generally retrieved through devices such as electrocardiography (ECG), wearable devices (e.g., smartwatches) generally use photoplethysmogram (PPG) technology. ECG measures the electrical activity of the heart through the application of contact electrodes, while PPG records the blood volume in veins using infrared technology. The blood volume measurement is then used to estimate the HR (i.e., beats per minute) and HRV \cite{lohani2019review,tavakoli2021harmony,tavakoli2021leveraging}. HRV features are a set of signal properties that are calculated based on the beat-to-beat intervals in a person's HR, such as the root mean square of the successive differences (RMSSD) \cite{tavakoli2020personalized,kim2018stress}. Both HR and HRV metrics are used in the literature for understanding human states. In general, studies have shown that an increase in stress level is associated with an increase in HR and a decrease in RMSSD features \cite{tavakoli2021harmony,kim2018stress,napoli2018uncertainty}. More specifically, in bicycling studies, an association has been found between perceptions of risk and HR \cite{doorley2015analysis,fitch2020psychological}.
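As a concrete illustration, the RMSSD feature can be computed from a sequence of beat-to-beat (RR) intervals in a few lines. The following Python sketch is illustrative only (the interval values are made up and this is not the pipeline used in this study):

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of the successive differences between
    beat-to-beat (RR) intervals, given in milliseconds."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    diffs = np.diff(rr)                      # successive differences
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical segments: larger beat-to-beat variation (often linked with
# lower stress in the literature) yields a higher RMSSD.
calm = [800, 820, 790, 830, 810]
stressed = [700, 702, 699, 701, 700]
print(rmssd(calm) > rmssd(stressed))  # True
```

Wearable HR output (beats per minute) can be converted to approximate RR intervals as $60000 / \mathrm{HR}$ milliseconds before applying such a feature.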
For instance, a naturalistic study in Ireland showed that situations bicyclists perceive to be risky are likely to elicit higher HR responses. This study also found that busy roads and roundabouts without bike lanes were perceived as more dangerous and risky compared to roads where cyclists are separated from traffic \cite{doorley2015analysis}. \subsubsection{Eye Tracking} \label{eye_tracking_definition} In addition to HR, eye gaze patterns have also been used in different studies to infer human state. Features such as blinking rate, saccade and fixation duration, gaze variability in different directions, stationary gaze entropy (SGE), and gaze transition entropy (GTE) have been shown to correlate with different states such as workload, stress level, and emotions \cite{shiferaw2019review}. A fixation in eye patterns refers to maintaining the eye gaze on a specific location \cite{purves2001types}. During each fixation, the gaze is approximately stationary. The transitions between fixations are called saccades, in which the point of fixation changes rapidly to a new fixation point. The variation and sequence of fixations and saccades have been shown to correlate with human states such as stress and workload \cite{may1990eye}. In addition to fixations and saccades, gaze variability is a feature that refers to the standard deviation of gaze angles in the vertical and horizontal directions. In general, two measures of entropy can be calculated. The first is based on the definition of uncertainty associated with a choice \cite{shiferaw2019review}: with more randomness in a system, the entropy increases. This is calculated through Shannon's equation \cite{shannon1948mathematical}. In gaze analysis research, this first entropy measure is referred to as the stationary gaze entropy (SGE), which shows the overall predictability of fixation locations and can serve as a proxy for gaze dispersion \cite{shannon1948mathematical}.
For a set of fixation locations in a sequence of eye movements, if we assign fixation locations to spatial bins with probabilities $p_i$, we can calculate the SGE as: \begin{equation} \label{equation:sge} SGE = -\sum_{i=1}^{n} p_{i} \log_{2}p_{i} \end{equation} Different studies have used SGE for human state analysis. For instance, SGE has been used for detecting task demand, complexity, experience, workload, drowsiness, and being under the influence of alcohol \cite{shiferaw2019review}. The second measure of gaze entropy is the gaze transition entropy (GTE), a conditional entropy that takes into account the temporal dependency between different fixations. GTE is a measure of the predictability of the next fixation location given the current fixation location. For a sequence of transitions between spatial bins $i$ and $j$, with transition probability $p_{ij}$, the GTE can be calculated as: \begin{equation} \label{equation:gte} GTE = -\sum_{i=1}^{n} p_{i} \sum_{j=1}^{n} p_{ij} \log_{2}p_{ij} \end{equation} Conceptually, for each specific combination of task demand and scene complexity, an optimal level of GTE exists \cite{shiferaw2019review}. The optimal GTE can be thought of as the result of the interaction between the human internal state and the amount of information provided by the external context. Deviation from the optimal GTE can provide information about changes in human state. For example, increases in stress, anxiety, and the frequency of emotional episodes are associated with an increased level of GTE (relative to the optimum), while a decrease in the level of GTE (relative to the optimum) can be due to the usage of depressants such as alcohol \cite{shiferaw2019review}. In addition to gaze entropy, gaze data are usually analyzed based on Areas of Interest (AOI). People divert their attention from one fixation to another, and mapping these shifts to AOIs reflects changes in mental concentration.
By mapping fixations to the AOIs, it is possible to obtain a statistical description of key gaze parameters, including the fixation duration, fixation counts, and saccade counts. In transportation-related studies, the road center is a frequently used AOI \cite{wang2014sensitivity}. The percentage of fixations in the road center AOI is referred to as the percentage of road center (PRC) feature \cite{guo2020interacting}. PRC has been shown to increase with elevated cognitive demand \cite{engstrom2005effects}. For bicycle-related studies, eye tracking behaviors have been analyzed in several real-world experiments. In a naturalistic study from Germany, 20 participants cycled at five defined test locations while wearing a mobile eye tracking system. The outcome shows that spatially open locations are associated with a higher level of perceived risk, and that more cycling experience and greater familiarity with a location may lead to more foresighted and focused gaze behavior \cite{von2020gaze}. Another naturalistic study in Italy investigated bicyclists' eye gaze behavior at signalized intersections. The authors collected this data through a mobile eye tracker from 16 participants in a 3-kilometer corridor. The results show that intersections that force bicyclists to merge with vehicle traffic yield notable differences in features of gaze behavior. For instance, when approaching intersections, the moment of first fixation on the traffic lights occurs earlier in the case of no bike lane as compared to the case with a separate bike lane. Additionally, for inexperienced cyclists, intersections without a separate bike lane were associated with an increase in gaze variability and looking around \cite{rupi2019visual}. To our knowledge, no previous research has studied bicyclists' gaze behaviors in an IVE with a bicycle simulator.
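To make the gaze features above concrete, the sketch below computes SGE (Equation \ref{equation:sge}), GTE (Equation \ref{equation:gte}), and a PRC-style feature from a toy sequence of binned fixations and gaze angles. The bin labels, AOI radius, and angle convention are illustrative assumptions, not the settings used in this study:

```python
import numpy as np
from collections import Counter

def stationary_gaze_entropy(fixation_bins):
    """SGE = -sum_i p_i log2 p_i over the distribution of fixation bins."""
    counts = Counter(fixation_bins)
    n = len(fixation_bins)
    return -sum((c / n) * np.log2(c / n) for c in counts.values())

def gaze_transition_entropy(fixation_bins):
    """GTE = -sum_i p_i sum_j p_ij log2 p_ij, where p_ij is the conditional
    probability of transitioning from bin i to bin j."""
    transitions = Counter(zip(fixation_bins[:-1], fixation_bins[1:]))
    from_counts = Counter(fixation_bins[:-1])
    total = len(fixation_bins) - 1
    gte = 0.0
    for (i, _j), c in transitions.items():
        p_i = from_counts[i] / total   # stationary probability of bin i
        p_ij = c / from_counts[i]      # conditional transition probability
        gte -= p_i * p_ij * np.log2(p_ij)
    return gte

def percent_road_center(yaw_deg, pitch_deg, radius_deg=8.0):
    """PRC-style feature: share of gaze samples inside a circular road-center
    AOI; radius_deg is an assumed threshold around straight ahead."""
    inside = np.hypot(np.asarray(yaw_deg), np.asarray(pitch_deg)) <= radius_deg
    return 100.0 * inside.mean()

# A perfectly alternating scan path is fully predictable: GTE = 0,
# even though its fixations are spread over two bins (SGE > 0).
bins = ["road", "sign", "road", "sign", "road"]
print(stationary_gaze_entropy(bins))  # about 0.97 bits
print(gaze_transition_entropy(bins))  # 0.0
```

The final example highlights why the two entropies are reported separately: SGE captures how dispersed the fixations are, while GTE captures how predictable the transitions between them are.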
Previous studies have shown that human physiology (e.g., HR) is correlated with changes in human states such as stress, comfort, and cognitive load, which may be used to infer bicyclists' behavioral changes with respect to environmental variations \cite{tavakoli2021harmony,roe2020urban,tavakoli2021leveraging}. While previous studies provided significant insights into the effect of the road environment on bicyclists' state and behaviors, most of these studies did not explore the variation in bicyclists' physiological measures in response to changes in roadway design. In this study, by integrating a physical bicycle simulator within an IVE, we collect multimodal human sensing data from 50 participants across three different bicycle infrastructure designs on the same road. Through linear mixed effects modeling, we provide evidence of a strong association between bicyclists' psycho-physiological features and different roadway designs. \section{Methodology} The methodology section is divided into multiple subsections which describe the experiment design (section \ref{sec:experiment_design}), alternative scenarios (section \ref{sec:alternative}), bicycle simulator setup (section \ref{sec:bicycle_simulator}), data collection (section \ref{sec:data_collection}), experiment procedure (section \ref{sec:experiment procedure}), participant recruitment (section \ref{sec:participant}), data preprocessing (section \ref{sec:preprocessing}), and statistical modeling (section \ref{sec:stat modeling}). Figure \ref{fig:System_archetecture} shows the general framework of this study. \begin{figure} \centering \includegraphics[width=\linewidth]{images/System_framework_2_notext.jpg} \caption{System architecture of data collection. Design context: 1) Road geometry information from Google map; 2) Road texture from real world measurement; 3) Vehicle modeling and traffic simulation in Unity asset store; 4) Buildings modeling from 3DMax.
IVE Platform: 5) Unity: 3D gaming engine; 6) SteamVR: integrating hardware with Unity. Simulator Setup: 7) Wahoo Kickr Climb: simulates physical grade changes by adjusting bike incline; 8) Wahoo Kickr Headwind: headwind proportional to bike speed; 9) Wahoo Kickr Smart Trainer and ANT+: biking dynamics simulation; 10) Trek Verve physical bike; 11) HTC VIVE Pro Eye: VR headset with eye tracking; 12) Controllers: steering and braking of the bike. Data Collection: 13) C\# scripts in Unity to record: 14) Position, 15) Cycling performance and 16) Interactions on controllers (touch, click or press); 17) TobiiPro Unity API collects: 18) Eye tracking data; 19) OBS studio: records room videos and VR videos simultaneously as shown in 20); 21) Android smartwatch collects: 22) Heart rate and 23) hand acceleration data. Data Preprocessing: 24) Opencv: video and image processing; 25) Openpose: pose data extraction from videos; 26) Amazon S3: smartwatch data on the cloud; 27) Python: numpy and pandas for data cleaning, management and analysis; 28) R: statistical modeling.} \label{fig:System_archetecture} \end{figure} \subsection{Experiment Design} \label{sec:experiment_design} This research studies the effect of different roadway designs on bicyclists' physiological states. The independent variables are demographic information (i.e., age, gender, bicycling attitude, and VR experience), the subjective realism of the IVE, and the categorical roadway design variable with three levels in the IVE with a bicycle simulator: (1) the as-built shared bike lane environment (sharrows), (2) separate bike lane, and (3) protected bike lane with pylons. The dependent variables are different measurements of cycling performance (i.e., speed, lane position) and physiological responses (i.e., eye tracking and HR features) from integrated or mobile sensors.
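As a sketch of how a dependent variable can be related to the design variable while accounting for repeated measures, the snippet below fits a linear mixed effects model with a random intercept per participant on synthetic data. The column names, effect sizes, and the use of Python's statsmodels (rather than an R workflow) are illustrative assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the repeated-measures data: one mean-HR value per
# participant per roadway design (all values and names below are made up).
rng = np.random.default_rng(42)
participants = np.repeat(np.arange(20), 3)
design = np.tile(["sharrow", "separate", "protected"], 20)
person_baseline = rng.normal(80, 5, 20)[participants]   # per-person intercept
design_effect = {"sharrow": 5.0, "separate": 2.0, "protected": 0.0}
hr = (person_baseline
      + np.array([design_effect[d] for d in design])
      + rng.normal(0, 1, 60))
df = pd.DataFrame({"participant": participants, "design": design, "hr": hr})

# Fixed effect: roadway design; random intercept: participant.
model = smf.mixedlm("hr ~ C(design)", df, groups=df["participant"])
result = model.fit()
print(result.params)
```

The random intercept absorbs each participant's baseline physiology, so the fixed-effect coefficients compare the design conditions within participants rather than across them.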
\subsection{Road Environment and Alternative Designs in IVE} \label{sec:alternative} The IVE is developed based on a real-world location, the Water Street corridor in the city of Charlottesville, Virginia. This area has a consistent volume of bicyclists and pedestrians and has been identified by the Virginia Department of Transportation as a priority corridor for pedestrian crashes. The simulated road includes four city blocks, with a 4\% downhill grade in one of the segments (Figure \ref{fig:road} - (f)). The road has shared lane markings (sharrows) without separate bike lanes. At the third intersection, there is a traffic signal, and there is a parking lane in the westbound direction. Figure \ref{fig:road} shows a comparison between the real environment and the IVE created in Unity, as well as the location of the real-world environment. \begin{figure} \centering \includegraphics[width=\linewidth]{images/Road_environment.jpg} \caption{Real-world road design and alternative designs in IVE, (a) Real world location on the map; (b) Street view of the real world location; (c) Street view of as-built environment in IVE; (d) Street view of separate bike-lane environment in IVE; (e) Street view of protected bike lane environment in IVE; (f) the vertical road profile, road segment one is a 4\% downhill road} \label{fig:road} \end{figure} Based on technical drawings provided by the City of Charlottesville and in-field measurements (Figure \ref{fig:System_archetecture}-1), a one-to-one road environment is built in Unity (Figure \ref{fig:System_archetecture}-5) with the SteamVR platform (Figure \ref{fig:System_archetecture}-6). The road textures are made from high-resolution images of the real-world surfaces to make sure colors and surface details are representative of the real-world environment (Figure \ref{fig:System_archetecture}-2). The traffic volume is calculated from two weeks of real-world observations of the number of vehicles passing through the selected corridor.
Based on the observations, four similar car models in Unity are generated randomly within the IVE to create traffic flows (Figure \ref{fig:System_archetecture}-3). The buildings in the IVE are modeled individually in 3DMax software and then imported into the IVE (Figure \ref{fig:System_archetecture}-4). The bike lane width for both the separate bike lane and the protected bike lane was designed to be 4 ft (1.2 m) wide, the minimum requirement based on the Manual on Uniform Traffic Control Devices (MUTCD) guidelines. Typically, bike lanes are to be between 4 and 6 feet (1.2 to 1.8 m) wide. However, re-configuring the roadway markings for vehicular traffic and the inclusion of a bike lane only left enough space for one 5-foot (1.5 m) bike lane on the roadway. Since this project aimed at having both a protected and a separate bike lane, the standard bike lane width of 4 feet (1.2 m) for both environments was deemed appropriate so that there would be no discrepancies in lane widths and user comfort during experimentation. The protected bike lane had an additional 1-foot (0.3 m) wide buffer zone between the edge of the bike lane and the vehicular traffic, with yellow pylons placed every 6 feet (1.8 m). \subsection{Bicycle Simulator Setup}\label{sec:bicycle_simulator} A series of Wahoo indoor bicycle training equipment \cite{wahoowebsite2021} is used to build the physical simulator. The Kickr Climb can apply grade changes to the bike (Figure \ref{fig:System_archetecture}-7). The Kickr Headwind provides headwind in front of the bicyclist based on the speed (Figure \ref{fig:System_archetecture}-8). The Kickr Smart Trainer and ANT+ are necessary to provide haptic feedback to the bicyclist and are compatible with Unity and SteamVR for data communication (Figure \ref{fig:System_archetecture}-9). A physical Trek Verve bike (Figure \ref{fig:System_archetecture}-10) is installed as the body structure of the bicycle simulator to improve realism.
An HTC VIVE Pro Eye headset (Figure \ref{fig:System_archetecture}-11) with controllers (Figure \ref{fig:System_archetecture}-12) is used in the simulator. The spatial location of the controllers allows the system to detect turning movements, and the buttons on the controllers can be modified for braking action to control the bicycle speed. A more detailed description of the bicycle simulator setup is available in our previous studies \cite{guo_robartes_angulo_chen_heydarian_2021, guo2021orclsim}. \subsection{Data collection} \label{sec:data_collection} This section introduces the data collection settings and procedures as demonstrated by the system framework in Figure \ref{fig:System_archetecture}. \subsubsection{Cycling Performance} As introduced above, the Wahoo indoor bicycle training equipment is connected to Unity and SteamVR. With scripts written in the C\# programming language in Unity (Figure \ref{fig:System_archetecture}-13), it is possible to extract the position, speed, and direction of any object in the IVE, including the headset, controllers, bicycle, and other virtual objects such as vehicles at any given time (Figure \ref{fig:System_archetecture}-14,15). Meanwhile, the standby scripts also collect any input from the controllers, such as the pulled trigger values (0 to 1), which represent the brake for the bike simulator (Figure \ref{fig:System_archetecture}-16). All cycling performance data are collected per frame with a system timestamp once Unity starts a scenario. Once an experimental trial is completed, the text data are saved locally on the computer for each participant. The Unity data are recorded at a frequency of 30 Hz. \subsubsection{Eye Tracking} The HTC VIVE Pro Eye has an integrated Tobii Pro eye tracker. With the Tobii Pro Unity SDK \cite{tobiiwebsite2021}, it can be utilized to collect eye tracking data for further data analysis (Figure \ref{fig:System_archetecture}-17).
We have created documents and sample code describing how to set up the Tobii Pro SDK in Unity \cite{xiangwebsite2021}. The output of the Tobii Pro raw data is the 3D gaze direction, gaze origin, and pupil diameter (Figure \ref{fig:System_archetecture}-18). The frequency of the eye tracking data is 120 Hz. \subsubsection{Video Recording} As can be seen in the bottom left of Figure \ref{fig:System_archetecture}(a,b), two cameras are placed in positions where they can capture different angles of the bicyclist. The bicyclist's point of view in the IVE is monitored throughout the experiment. We use Open Broadcaster Software (OBS) Studio (\cite{obsstudio2021}, Figure \ref{fig:System_archetecture}-19) to integrate all video recordings simultaneously with a fixed frequency of 30 Hz and a resolution of 1080p (1920×1080) for each video source (Figure \ref{fig:System_archetecture}-20). Video information for each video source (i.e., creation date, duration, height, and width) can be extracted using the Windows file system information, which is utilized for time synchronization. \subsubsection{Heart Rate Measurement} Our system uses two Android smartwatches (one for each wrist) equipped with the “SWEAR” app \cite{boukhechba2020swear} for collecting long-term data from smartwatches (Figure \ref{fig:System_archetecture}-21). The SWEAR app records HR (1 Hz) and hand acceleration (100 Hz) (Figure \ref{fig:System_archetecture}-22,23). The data collection is turned on before the experiment by the researcher, and then no further action is required. Both watches are connected to a smartphone via Bluetooth; the smartphone and computer are on the same WiFi network, and the clocks of all devices are synchronized with the online server before the experiment. All data from the smartwatches are stored on the local device and then uploaded to Amazon S3 cloud storage (Figure \ref{fig:System_archetecture}-26).
\subsubsection{Surveys} A pre-experiment survey collects participants' demographic information (age, gender), their prior VR experience, and what type of bicyclist they are (biking attitude). After the experiment, a post-experiment survey collects their experience in the IVE and their subjective safety preferences for the different scenarios. \subsection{Experiment Procedure} \label{sec:experiment procedure} Figure \ref{fig:experiment_procedure} shows the experiment procedure. Once a prospective participant signs up, a researcher contacts the participant both via email and phone call a day before the experiment to confirm their reservation and health condition (due to COVID-19 requirements). Upon arrival, each participant is asked to sign the consent form approved by the IRB office and to put a smartwatch on each wrist before completing the pre-experiment survey. After finishing the pre-experiment survey, instructions are given on how to use the VR headset, controllers, and the bike simulator. After the bike is adjusted to a comfortable position, the participant mounts the bike and puts on the headset. Next, the participant is guided through the eye tracker calibration. After the IVE system setup, the participant is placed into a familiarization scenario (without any vehicle traffic) to become accustomed to interacting with the IVE. In this environment, the participant can practice pedaling, steering, and braking, and the practice can be repeated until the participant feels comfortable. If the participant feels any motion sickness, they may stop the experiment at any point and still receive compensation for participation. Once the participant is comfortable in the training environment, they experience the three design scenarios in random order, where each experiment trial lasts about two minutes, with a two-minute break between each scenario.
Once the participant has completed all three scenarios, they are asked to complete the post-experiment survey. On average, each participant spends 30 minutes completing the experimental procedure. \begin{figure} \centering \includegraphics[width=\linewidth]{images/experiment_procedure_2.png} \caption{Experiment procedure} \label{fig:experiment_procedure} \end{figure} \subsection{Participants} \label{sec:participant} Fifty-one participants were recruited for the experiment. Most of the participants are local bicyclists, university students, and faculty members who are familiar with the study corridor. All participants are 18 or older and without color blindness. During the study, one participant could not finish the experiment due to motion sickness. For the remaining 50 participants (23 female and 27 male), the mean age is 34.1 with a standard deviation of 12.9 (one participant did not reveal their age); the age distribution is shown in Figure \ref{fig:age_gender}. \begin{figure} \centering \includegraphics[width=\linewidth]{images/age_gender.jpg} \caption{Age and gender distribution of participants based on the demographic data. Overall, 23 (46\%) are female and 27 (54\%) are male. One male participant did not reveal his age} \label{fig:age_gender} \end{figure} \subsection{Data Preprocessing} \label{sec:preprocessing} This section introduces the necessary data preprocessing steps to extract valid information, including time synchronization and feature extraction for the modeling section. Data extracted from different sensors have different frequencies, which requires resampling each sensor's data separately: the cycling performance and video data are resampled at 30 Hz, the eye tracking data at 120 Hz, and the HR at 1 Hz. All data are trimmed to only include the data from the moment the participants start pedaling until they pass the third intersection (Figure \ref{fig:road}) in the IVE.
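This resampling step can be sketched in Python with pandas; the column names, timestamps, and demo values below are illustrative and do not reflect the actual log format:

```python
import pandas as pd

def resample_stream(df: pd.DataFrame, freq: str) -> pd.DataFrame:
    """Resample a timestamp-indexed sensor stream onto a uniform grid,
    averaging samples within each bin and linearly interpolating gaps."""
    return df.resample(freq).mean().interpolate(method="linear")

# Illustrative per-sensor target rates (the actual logs use system timestamps):
#   cycling/video -> "33ms" (~30 Hz), gaze -> "8ms" (~120 Hz), HR -> "1s" (1 Hz)
hr_raw = pd.DataFrame(
    {"hr": [72.0, 74.0, 78.0]},
    index=pd.to_datetime(["2021-06-01 10:00:00",
                          "2021-06-01 10:00:02",
                          "2021-06-01 10:00:04"]),
)
hr_1hz = resample_stream(hr_raw, "1s")  # uniform 1 Hz grid with gaps filled
```

Because every stream carries a synchronized system timestamp, resampling each one onto its own uniform grid makes the later trimming and per-window feature extraction straightforward.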
The next step is combining the raw gaze direction from the eye tracking data with the point-of-view videos, which transforms the 3D gaze direction into the 2D videos. This process allows us to visualize what the participants are looking at in the IVE. As shown in the lower left of Figure \ref{fig:System_archetecture}(d), the green and blue dots represent the left and right eye gaze points, respectively. The scripts for eye tracking data preprocessing are posted online \cite{xiangwebsite2021}. \subsubsection{Fixations and Percentage of Road Center (PRC)} Fixations are the most commonly used eye tracking feature for making inferences about cognitive processes or states. Fixations are the moments when the eyes stop scanning the scene and hold the central foveal vision in place to gather detailed information about the target object. We adapt our program from \textit{pygaze}, an open-source toolbox for eye tracking \cite{dalmaijer2014pygaze}, with a 25 ms minimum duration and a 100 pixel maximum dispersion threshold to extract the fixation information from the 2D videos with gaze information. The mean fixation length is defined as the average length of all fixation events for any time interval of interest. From the distribution of fixations, the most frequent fixation bins are considered the road center, and any fixation within an angular distance of 12$^\circ$ (as suggested by the literature) is taken as a road center fixation \cite{wang2014sensitivity}. To convert the 12$^\circ$ to an actual pixel distance in the 2D gaze vector, equation (\ref{equ:distance}) is used: \begin{equation} \label{equ:distance} D = \frac{\tan(r) \cdot w/2}{\tan(FOV/2)} \end{equation} where $r$ is the angular radius of the road center, for which we choose 12$^\circ$; $w$ is the width of the field-of-view video, which is 1920 pixels; and $FOV$ is the field-of-view angle of the headset, which is 110$^\circ$ for the HTC VIVE Pro Eye.
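As a quick numerical check, equation (\ref{equ:distance}) can be evaluated directly (a minimal sketch; the function name is ours):

```python
import math

def degrees_to_pixels(r_deg: float, width_px: float, fov_deg: float) -> float:
    """Convert an angular radius r to a pixel distance on the 2D video
    via D = tan(r) * (w/2) / tan(FOV/2)."""
    return (math.tan(math.radians(r_deg)) * (width_px / 2)
            / math.tan(math.radians(fov_deg / 2)))

# Road-center radius of 12 degrees, 1920-pixel-wide video, 110-degree FOV:
D = degrees_to_pixels(12, 1920, 110)
print(round(D, 2))  # 142.88
```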
Therefore, $D$ in our study is 142.88 pixels, which means any fixation with a Euclidean distance less than $D$ from the road center is considered a road center fixation. The PRC is defined as the percentage of fixation length within the road center among all fixations for any time interval of interest. \subsubsection{Gaze Entropy} As discussed in section \ref{eye_tracking_definition}, there are two types of gaze entropy measures: stationary gaze entropy (SGE) and gaze transition entropy (GTE). SGE provides a measure of the overall predictability of fixation locations, which indicates the level of gaze dispersion during a given viewing period \cite{shiferaw2019review}. The SGE is calculated using equation (\ref{equation:sge}): \begin{equation} H(x) = -\sum_{i=1}^{n}(p_i)\log_2(p_i) \tag{\ref{equation:sge}} \end{equation} $H(x)$ is the value of SGE for a sequence of data $x$ with length $n$, $i$ is the index for each individual state, and $p_i$ is the proportion of each state within $x$. To calculate the SGE, the visual field is divided into spatial bins of discrete state spaces to generate probability distributions. Specifically, the coordinates are divided into spatial bins of 100 × 100 pixels. To get the trend of SGE, it is calculated in a rolling window of one second (120 data points of raw gaze data). Gaze transition entropy (GTE) is obtained by applying the conditional entropy equation to first-order Markov transitions of fixations with the following equation: \begin{equation} H_{c}(x) = -\sum_{i=1}^{n}(p_i) \sum_{j=1}^{n}p(i,j) \log_2 p(i,j) \tag{\ref{equation:gte}} \end{equation} Here $H_{c}(x)$ is the value of GTE, and $p(i, j)$ is the probability of transitioning from state $i$ to state $j$, conditional on being in state $i$. The other variables have the same definitions as in the SGE equation (\ref{equation:sge}). \subsubsection{Change Points in HR} In order to understand the abrupt changes in bicyclists' HR, we perform change point detection through Bayesian Change Point (BCP) analysis.
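The two entropy measures can be sketched directly from equations (\ref{equation:sge}) and (\ref{equation:gte}); the binned fixation sequence below is synthetic, and in the actual pipeline SGE is computed over one-second rolling windows of 100 × 100 pixel bins:

```python
import math
from collections import Counter

def stationary_gaze_entropy(states):
    """SGE: H(x) = -sum_i p_i * log2(p_i), where p_i is the proportion
    of gaze samples falling in spatial bin i."""
    n = len(states)
    return -sum((c / n) * math.log2(c / n) for c in Counter(states).values())

def gaze_transition_entropy(states):
    """GTE: -sum_i p_i * sum_j p(i,j) * log2 p(i,j), applied to first-order
    Markov transitions between successive fixation bins, with p(i,j)
    conditional on the source bin i."""
    sources = states[:-1]
    trans = Counter(zip(sources, states[1:]))
    out_counts = Counter(sources)
    h = 0.0
    for i, n_i in out_counts.items():
        p_i = n_i / len(sources)
        inner = sum((c / n_i) * math.log2(c / n_i)
                    for (a, _), c in trans.items() if a == i)
        h -= p_i * inner
    return h

# Synthetic fixation coordinates, binned into 100 x 100 pixel cells:
fixations = [(120, 540), (130, 560), (900, 300), (910, 310), (120, 550)]
bins = [(x // 100, y // 100) for x, y in fixations]
sge = stationary_gaze_entropy(bins)
gte = gaze_transition_entropy(bins)
```

A fully predictable gaze pattern (all samples in one bin) yields zero entropy, while dispersed sampling and random transitions drive SGE and GTE up, matching their interpretation above.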
As previously noted, an increase in HR can be associated with an increase in stress level, anxiety, and possibly negative emotions. Using a change point detector, we can detect the moments in the time series of a bicyclist's HR that can be associated with increased stress levels in the biking scenario. This has also been explored in previous studies using BCP for HR analysis \cite{tavakoli2021multimodal,tavakoli2021harmony,guo2021orclsim}. In order to perform BCP, we use the Bayesian change point model of Barry and Hartigan \cite{barry1993bayesian}. This model assumes a constant mean for the data and attempts to detect the points at which the mean changes. We perform this process in R \cite{team2013r} using the package \textit{bcp} \cite{erdman2007bcp}. \subsection{Statistical Modeling} \label{sec:stat modeling} A Linear Mixed Effects Model (LMM) is chosen to model the different response variables across participants. An LMM facilitates the analysis as it has the ability to (1) characterize group and individual behavior patterns in a formal way, (2) acknowledge both group and individual differences, and (3) incorporate additional covariates \cite{krueger2004comparison}. The LMM is centered around the idea of variability across participants \cite{brown2021introduction,fox2002linear}. Similar to simple linear regression, an LMM estimates the fixed factor effects (e.g., the effect of different scenarios on a participant's heart rate and gaze measures). An LMM also estimates random effects, which arise from individual differences across participants. For instance, the effect of having a bike lane on a participant's HR varies by participant (e.g., one participant experiences a 20 percent increase in HR while another experiences a 10 percent increase). An LMM is defined as: \begin{equation}\label{lmm} y = X\beta + Zb +\epsilon \end{equation} In equation (\ref{lmm}), $y$ is the dependent variable (e.g., heart rate or gaze measures), $X$ is the matrix of predictor variables, $\beta$ is the vector of fixed-effect regression coefficients, $Z$ is the design matrix for the random effects, $b$ is the vector of random-effect coefficients, and $\epsilon$ is the error term of unexplained residuals. Additionally, the elements of $b$ and $\epsilon$ are distributed as: \begin{equation} b_{k} \sim N(0,\psi_k^{2}), \quad Cov(b_k,b_{k'}) = \psi_{kk'} \end{equation} \begin{equation} \epsilon_{ij} \sim N(0,\sigma^{2}\lambda_{ijj}), \quad Cov(\epsilon_{ij},\epsilon_{ij'}) = \sigma^{2}\lambda_{ijj'} \end{equation} The LMM is applied using the lme4 package in R \cite{bates2007lme4}. To evaluate participants' physiological responses in different road environments, as shown in Table \ref{table:variable definition}, different behavioral and physiological responses are treated as dependent variables in each LMM, including cycling performance (speed and lateral lane position), eye tracking metrics (SGE, GTE, PRC, and mean fixation length), and HR metrics (mean HR and number of HR change points). Independent variables include the type of bicycle infrastructure (as-built, separate bike lane, and protected bike lane), age (older or younger than 30), attitude towards cycling (based on the pre-survey response), prior VR experience, and participants' sense of realism for bike speed and steering in the IVE. Additionally, each participant is treated as a random effect in the model. If statistically significant effects are revealed in any LMM for the scenario variable, post hoc contrasts are performed for multiple comparisons using Fisher's Least Significant Difference (LSD) \cite{williams2010fisher}. All statistical analyses were performed at a 95\% confidence level ($\alpha = 0.05$). \begin{table}[h!]
\caption{Summary of independent and dependent variables in LMM models}
\label{table:variable definition}
\centering
\begin{tabular}{c c c c}
\hline
Variable Type & Variable Name & Categories & Data Source \\
\hline
 & Scenario & 3 & IVE Design \\
 & Age & 2 & Pre-survey \\
Independent & Gender & 2 & Pre-survey \\
(Categorical) & Type of bicyclist & 4 & Pre-survey \\
 & VR experience & 4 & Pre-survey \\
 & Realism of bike steering & 5 & Post-survey \\
 & Realism of bike speed & 5 & Post-survey \\
\hline
 & Cycling speed & km/h & Unity \\
 & Lateral lane position & m & Unity \\
 & Horizontal gaze variability & pixel & Eye Tracking \\
Dependent & Percentage of road center gaze & percentage & Eye Tracking \\
(Continuous) & Mean fixation length & second & Eye Tracking \\
 & Stationary gaze entropy & bits & Preprocessing \\
 & Gaze transition entropy & bits & Preprocessing \\
 & HR & bpm & Smartwatch \\
 & Number of HR change points & count & Preprocessing \\
\hline
\end{tabular}
\end{table}
\section{Results} This section reports the results of the experiment. The following subsections describe the summary statistics (from the pre- and post-experiment surveys), the bicyclists' physical behavior (cycling speed and lateral lane position), and physiological responses (eye tracking and HR) under the different roadway designs. \subsection{Survey Response} \subsubsection{Pre-experiment Survey} All participants indicated that they have some level of prior knowledge of VR, although only one participant owns VR equipment and uses it regularly, as shown in Figure \ref{fig:survey}-a. The majority of the participants have a positive attitude towards cycling, as shown in Figure \ref{fig:survey}-b, with only two participants expressing hesitancy to cycle under any condition. \begin{figure} \centering \includegraphics[width=\linewidth]{images/biking_attitude_VR_experience_2.jpg} \caption{Summary of some survey responses.
(a) Prior knowledge of VR, (b) Type of bicyclist} \label{fig:survey} \end{figure} \subsubsection{Post-experiment Survey} In the post-experiment survey, the majority of participants indicated that the virtual environment was immersive, with 94\% of participants choosing a 4 or 5 on the 5-point Likert scale (mean=4.42), with 4 and 5 indicating ``immersed'' and ``very immersed'', respectively. Most participants also found that the virtual environment was to scale (94\% chose 4 or 5, mean=4.54). Participants' ratings of speed and steering realism were both above average, with 50\% indicating a 4 or 5 level of speed realism (mean = 3.56) and 54\% indicating a 4 or 5 for steering realism (mean = 3.60). Figure \ref{fig:preferences} shows the results of participants' scenario preferences. Participants indicated an overwhelming preference for the protected bike lane (69\% rate it as the safest), followed by the separate bike lane (22\% rate it as the safest), with the as-built scenario rated as the safest least often (10\%). \begin{figure} \centering \includegraphics[width=\linewidth]{images/scenario_scenario_preference_barplot.jpg} \caption{Scenario preference across participants based on the post-experiment survey} \label{fig:preferences} \end{figure} \subsection{Cycling Performance} Two LMMs are built individually for average speed and lateral lane position to estimate the relationship between the independent variables and participants' cycling performance. For the mean speed LMM, there is a significant difference between the as-built and protected bike lane scenarios ($\beta = -1.209, SE = 0.383, p < 0.01$). Bicyclists' mean speed in the protected bike lane with pylons scenario (13.88 km/h) is significantly lower compared to the as-built scenario (15.09 km/h).
No significant differences are found between the separate bike lane scenario (14.94 km/h) and the as-built scenario, as shown in Figure \ref{fig:cycling_performance} - a. Similarly, there is no significant difference in participants' speed between the separate bike lane and the protected bike lane with pylons. The random effect for the mean speed model is significant ($\beta =-1.209, SE = 0.383, p < 0.001$), suggesting that it is necessary to treat the participant as a random factor in the model. This is also indicative of individual differences across participants in their cycling performance. For the mean lateral lane position, no significant differences are found across the three scenarios, although the difference between the as-built and protected bike lane with pylons scenarios is marginally significant ($\beta =-0.129, SE = 0.070, p=0.068$). As shown in Figure \ref{fig:cycling_performance} - b, the average distances to the roadside curb for the three scenarios (as-built, separate bike lane, and protected bike lane with pylons) are 0.97 m, 0.88 m, and 0.84 m, respectively. The greater the average distance to the curb, the smaller the lateral distance between the bicycle and the vehicles. Therefore, there is a trend of participants moving closer to the curb to stay away from vehicles in the presence of a separate bike lane or protected bike lane with pylons. The random effect for this model is also significant ($\beta =0.971, SE = 0.067, p < 0.001$). \begin{figure} \centering \includegraphics[width=\linewidth]{images/Speed_lane_position_s.jpg} \caption{Cycling performance measured through speed (a) and lateral position (b) across different scenarios. Note that there is a trend for participants to move closer to the road curb to stay away from vehicles in the presence of a separate bike lane or protected bike lane with pylons.
} \label{fig:cycling_performance} \end{figure} \subsection{Eye Tracking} Five LMMs are built individually for each eye tracking dependent variable (Table \ref{table:variable definition}) to estimate the relationship between the independent variables and participants' eye tracking metrics (horizontal gaze variability, PRC, mean fixation length, SGE, and GTE). We first plot the eye tracking heat map in the field of view to get an overview of the gaze distribution. As shown in Figure \ref{fig:gaze_density}, visual observations from the gaze heat map indicate that the as-built scenario has a more dispersed distribution than the other two scenarios. The separate bike lane scenario appears to have a higher concentration in the center of the gaze area, followed by the protected bike lane with pylons scenario. \begin{figure} \centering \includegraphics[width=\linewidth]{images/Gaze_density_plot.jpg} \caption{Gaze density heat map for different scenarios. Note that visual observations from the gaze heat map indicate that the as-built scenario has a more dispersed distribution than the other two scenarios} \label{fig:gaze_density} \end{figure} \subsubsection{Horizontal Gaze Variability} As illustrated above, an LMM is built for evaluating the relationship between participants' horizontal gaze variability and the independent variables. In Figure \ref{fig:gaze_x_scenario}, the result of the horizontal gaze variability model shows that the random effects are significant ($\beta = 86.257, SE =32.990, p < 0.05$), which suggests that it is necessary to treat the participant as a random factor in the model. Both the separate bike lane and protected bike lane scenarios are statistically significant predictors of the horizontal gaze variability ($\beta = -16.349 , SE = 4.288, p< 0.001$ and $\beta = -12.645, SE = 4.278, p< 0.01$, respectively).
As shown in Figure \ref{fig:gaze_x_scenario} - a, significantly lower horizontal gaze variability is observed in both the separate bike lane and protected bike lane, which indicates that participants focus more directly ahead rather than looking around the road environment laterally. Another significant factor revealed by the model is the realism score of the bike speed from the post-experiment survey ($\beta = -13.991, SE = 6.287, p < 0.05$). Generally speaking, the higher the realism of bike speed the participants indicate, the lower the horizontal gaze variability they show during the experiment (Figure \ref{fig:gaze_x_scenario} - b, except for the small group who selected 2). No significant results are found in terms of the steering realism score. \begin{figure} \centering \includegraphics[width=\linewidth]{images/gaze_x.jpg} \caption{Horizontal gaze variability within different scenarios (a) as well as within different ratings of the realism of the bike speed (b). Note that significantly lower horizontal gaze variability is observed in both the separate bike lane and protected bike lane. Additionally, the higher the realism of bike speed the participants indicate, the lower the horizontal gaze variability they show during the experiment} \label{fig:gaze_x_scenario} \end{figure} \subsubsection{Percentage of Road Center Fixation} A similar LMM is built for the percentage of road center fixation. As shown in Figure \ref{fig:fixation_PRC}, the LMM presents a result similar to that for the horizontal gaze variability; the random effects are also significant ($\beta = 91.993, SE = 6.045, p < 0.001$). For the independent variables, both the separate bike lane ($\beta = 4.083, SE = 0.947, p< 0.001$) and protected bike lane ($\beta = 2.558, SE = 0.938, p< 0.01$) scenarios are statistically significant.
The percentage of road center fixation in the separate bike lane is slightly higher than in the protected bike lane, which aligns with the visual observation of the gaze heat map (Figure \ref{fig:gaze_density}). This result indicates participants focus their gaze on the road center the most in the separate bike lane. The realism score of the bike speed from the post-experiment survey is also significant ($\beta = 2.892, SE = 1.218, p < 0.05$). \begin{figure} \centering \includegraphics[width=\linewidth]{images/PRC.jpg} \caption{PRC and mean fixation length within different scenarios. Note that the percentage of road center fixation in the separate bike lane is slightly higher than in the protected bike lane, which aligns with the visual observation in the gaze heat map} \label{fig:fixation_PRC} \end{figure} \subsubsection{Mean Fixation Duration} The LMM for the mean fixation duration shows that the random effects are significant ($\beta = 0.242, SE = 0.051, p < 0.001$), and both the separate bike lane and protected bike lane scenarios are statistically significant predictors of mean fixation duration ($\beta = 0.015, SE = 0.007, p< 0.05$ and $\beta = 0.014, SE = 0.007, p< 0.05$, respectively). As shown in Figure \ref{fig:meanfixation}, a significantly higher fixation duration is observed in both the separate bike lane and protected bike lane scenarios compared to the as-built scenario. No significant results are found for the other independent variables. \begin{figure} \centering \includegraphics[width=\linewidth]{images/mean_fixation_length_scenario_boxplot.jpg} \caption{Mean fixation duration within different scenarios. A significantly higher fixation duration is observed in both the separate bike lane and protected bike lane scenarios compared to the as-built scenario.} \label{fig:meanfixation} \end{figure} \subsubsection{Gaze Entropy} Two LMMs are built for SGE and GTE.
In both models, the random effects are significant ($\beta = 1.951, SE = 0.413, p< 0.001$ and $\beta = 0.791, SE = 0.306, p< 0.05$, respectively). Other than the random effects, only the separate bike lane in the SGE model is a significant predictor ($\beta = -0.224, SE = 0.079, p< 0.01$). As shown in Figure \ref{fig:gaze_entropy} - a, the SGE in the separate bike lane environment is significantly lower than in the as-built environment. No significant results are observed in the GTE model. \begin{figure} \centering \includegraphics[width=\linewidth]{images/gaze_entropy.jpg} \caption{Stationary gaze entropy (a), and gaze transition entropy (b) within different scenarios. Note that the SGE in the separate bike lane environment is significantly lower than in the as-built environment.} \label{fig:gaze_entropy} \end{figure} \subsection{HR} \subsubsection{Mean HR} An LMM is built for the mean HR during each experiment to compare the overall HR levels under different infrastructure designs. The random effects are significant ($\beta = 92.892, SE = 29.191, p< 0.01$). No significant results are found between different road designs (Figure \ref{fig:mean_HR} - a). A significant effect is found for the type of bicyclist based on the survey response ($\beta = -8.410, SE = 4.082, p<0.05$). As shown in Figure \ref{fig:mean_HR} - b, the more positive an attitude participants have toward biking, the lower their mean HR during the experiment. Note that only two participants responded to the bicycling attitude question with 'No way, no how', so only six data points for this category are available. \begin{figure} \centering \includegraphics[width=\linewidth]{images/mean_HR.jpg} \caption{Mean HR within different scenarios (a) as well as attitude towards biking (b). Note that the more positive an attitude participants have toward biking, the lower their mean HR during the experiment.
} \label{fig:mean_HR} \end{figure} \subsubsection{HR Change Point} In addition to the overall HR level, we are also interested in the abrupt changes in participants' HR. By utilizing the BCP method, we are able to extract the abrupt HR changes for each scenario. Figure \ref{fig:hr_change_point} illustrates the average frequency of HR change points in different scenarios. The LMM shows that both the separate bike lane ($\beta = -0.393, SE = 0.145, p< 0.01$) and protected bike lane ($\beta = -0.360, SE = 0.145, p< 0.05$) have a significantly lower frequency of HR change points than the as-built scenario. The frequency of HR change points in the as-built design is almost twice that of the separate bike lane and protected bike lane. The distribution of HR change points is shown in Figure \ref{fig:hr_change_point_scenario_distribution}. There are three peaks in Figure \ref{fig:hr_change_point_scenario_distribution} - a, all of which take place before the participant arrives at an intersection. Among the three intersections, the peak of HR change points at the third intersection, which has a traffic signal, occurs earlier than at the other two intersections. Figure \ref{fig:hr_change_point_scenario_distribution} - b is the density plot of HR change points for the different scenarios. Each scenario appears to have two peaks, with the as-built design having higher peaks at the first intersection and the third intersection. The density plots of the separate bike lane and protected bike lane scenarios are smoother than that of the as-built design. This indicates that the HR change points in the as-built scenario are more sensitive to roadway environment changes. In other words, the separate bike lane and protected bike lane may reduce the effect of environment changes on HR changes. \begin{figure} \centering \includegraphics[width=\linewidth]{images/change_point_frequency_scenario_barplot.jpg} \caption{Frequency of HR change points within different scenarios.
Note that both the separate bike lane and protected bike lane have a significantly lower frequency of HR change points than the as-built scenario.} \label{fig:hr_change_point} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{images/HR_change_point_distribution.jpg} \caption{HR change point distributions: (a) overall distribution on the road; (b) distribution over each scenario. Note that each scenario appears to have two peaks, with the as-built design having higher peaks at the first intersection and the third intersection. The density plots of the separate bike lane and protected bike lane scenarios are smoother than that of the as-built design, which indicates that the HR change points in the as-built scenario are more sensitive to roadway environment changes. } \label{fig:hr_change_point_scenario_distribution} \end{figure} \section{Discussion} \subsection{Cycling Performance} The results show that the roadway design can affect cycling performance. Among the three roadway designs, the speed in the protected bike lane is significantly lower than in the other two designs. On average, participants cycled at a lower speed when they were separated from the vehicle lane. However, this contradicts a similar IVE study (with 50 participants) in which bicyclists rode at lower speeds in the no bike lane condition versus the bike lane condition \cite{cobb2021bicyclists}. The differences may be due to (1) the different IVE settings. In \cite{cobb2021bicyclists}, a screen provides the forward view, while in this study a head-mounted display provides the forward view; this implies that different IVE settings will affect bicyclists' responses, as discussed by previous studies \cite{bogacz2020comparison,guo2021orclsim}. (2) The different road environments. Our IVE is modeled from the real road, and all the participants are local bicyclists who are more familiar with the as-built design. (3) The vehicle settings.
The vehicles are randomly generated based on the empirically observed distribution of vehicle arrivals from the start point, with a fixed route in the vehicle lane. For some participants, when they are cycling in the shared vehicle lane in the as-built scenario, the approaching vehicles from behind will slow down and follow them until the vehicle lane is clear. Therefore, in those cases, the participants will only see an open street ahead of them without any cars passing by. In the post-experiment survey, some participants mentioned that they were motivated to ride faster when there were no vehicles passing by them. Based on our statistics of how many vehicles passed the participants during the experiment, the average number of passing vehicles for the as-built design (0.77) is lower than for the separate bike lane (0.88) and protected bike lane (1.03). (4) The traffic volume and speed. The traffic volume is relatively low in our experiment. On average, 0.9 vehicles pass the participant, which is lower than in related studies. Water Street is an urban road with a speed limit of 25 mph, and vehicles within the IVE were designed to travel constantly at this speed limit, which may indicate that the effect of different roadway designs on bicyclists' speed depends on traffic volume and vehicle speed. In addition, some participants reported that they felt the protected bike lane was narrower than expected and that they wanted to avoid hitting the pylons during the experiment, which may potentially lower their speed and decrease the lateral distance to the curb. For the lateral lane position, the LMM result is marginally significant: bicyclists tend to stay away from the vehicle lane when there is a bike lane, which leads to a larger lateral distance when a vehicle is passing them. This can help to increase bicyclists' comfort level and safety, as shown by many previous studies \cite{mcneil2015influence,nazemi2021studying}.
\subsection{Gaze Behavior} The gaze behaviors also vary across the roadway designs. Generally, compared with the separate bike lane and protected bike lane scenarios, participants in the as-built scenario have a wider horizontal distribution of fixations, indicated by a larger horizontal gaze variability and a lower PRC, which can be a sign of an active searching strategy in the environment. The higher PRC in the separate bike lane and protected bike lane scenarios might be an indicator of higher cognitive workload \cite{engstrom2005effects}, but this does not necessarily suggest a sub-optimal gaze strategy. According to the Yerkes-Dodson law \cite{yerkes1908relation}, there is an optimal range of cognitive load; if the current cognitive load is below the optimal range, an increased cognitive workload with a higher arousal level can help to improve performance. The addition of a separate bike lane may have increased the cognitive workload of the participants to focus on keeping the bicycle in the center of the bike lane. The increase in cognitive workload was lower for the protected bike lane, where bicyclists tend to keep closer to the curbside instead of staying in the center of the bike lane. In the as-built scenario, a shorter fixation duration is observed, which, as discussed by previous research, is related to a higher hazard estimation of bicyclists \cite{von2020gaze}. This is further supported by our post-experiment survey, in which most of the participants rated the as-built scenario as the least safe. In terms of the two alternative designs, the separate bike lane scenario seems to elicit a more focused gaze behavior than the protected bike lane. This phenomenon is also identified by the gaze entropy results: only the separate bike lane scenario has a significantly lower stationary gaze entropy, which quantifies the overall spatial dispersion of gaze.
To our knowledge, the effect of roadway design on the gaze entropy of bicyclists has not previously been examined. An increase in SGE indicates a change in the spatial areas from which information is being sampled, which is illustrated by a less populated and more dispersed fixation density, as discussed in a driver-related study \cite{shiferaw2018stationary}. Meanwhile, for the GTE, no significant differences are found between the three scenarios. An increase in GTE reflects a more random pattern of transitions between fixations, and variation in GTE is related to scene complexity and task demand \cite{shiferaw2019review}. One possible explanation is that, in the separate bike lane and protected bike lane scenarios, participants may feel obligated to maintain their lateral lane position (especially in the IVE), which offsets the reduced scene complexity for necessary visual information retrieval. \subsection{Heart Rate Variation} For the HR response, we did not find any significant difference in mean HR between the three scenarios. The correlations between HR/HRV and subjective safety ratings were also found to be weak in previous naturalistic studies \cite{doorley2015analysis,fitch2020psychological}; in our controlled experiment, the association is even less conclusive. However, when considering abrupt changes in HR, after extracting the HR change points from the raw data, we find that both the separate bike lane and protected bike lane scenarios have a significantly lower frequency of HR change points than the as-built scenario. We further explore the spatial distribution of the HR change points and find that their peaks occur more frequently prior to reaching the intersections. Based on our results, in our low-task-requirement scenario, the intersections are associated with a higher number of change points, possibly indicating a higher stress level than at other sites. 
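As an illustrative aside (this is not the analysis pipeline used in this study; the bin labels and toy scan paths below are hypothetical), the two entropy measures can be sketched as follows, with SGE computed as the Shannon entropy of the spatial fixation distribution and GTE as the conditional entropy of transitions between successive fixation locations:

```python
import math
from collections import Counter

def stationary_gaze_entropy(fixation_bins):
    """Shannon entropy (bits) of the spatial distribution of fixations.

    fixation_bins: sequence of discrete spatial bin labels, one per fixation.
    """
    counts = Counter(fixation_bins)
    n = len(fixation_bins)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def gaze_transition_entropy(fixation_bins):
    """Entropy (bits) of transitions between successive fixation bins:
    H = -sum_i p(i) * sum_j p(j|i) * log2 p(j|i)."""
    pairs = list(zip(fixation_bins[:-1], fixation_bins[1:]))
    from_counts = Counter(src for src, _ in pairs)
    pair_counts = Counter(pairs)
    total = len(pairs)
    h = 0.0
    for (src, dst), c in pair_counts.items():
        p_src = from_counts[src] / total   # marginal probability of the source bin
        p_cond = c / from_counts[src]      # conditional transition probability
        h -= p_src * p_cond * math.log2(p_cond)
    return h

# Two toy scan paths: spatially dispersed vs. concentrated fixations
dispersed = ["L", "C", "R", "L", "R", "C", "L", "R", "C", "L"]
focused = ["C", "C", "C", "C", "L", "C", "C", "C", "C", "C"]
print(stationary_gaze_entropy(dispersed) > stationary_gaze_entropy(focused))
```

A more dispersed scan path yields a higher SGE, matching the interpretation of SGE as overall spatial dispersion of gaze.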
The position of the HR change points depends on the type of intersection. The first intersection has more HR change points than the other two intersections, for two possible reasons. First, as the first intersection is at the beginning of the experiment, participants may still need some time to get used to the IVE, even if they have practiced beforehand. Second, as indicated by a previous study, spatially open locations increase the level of perceived risk \cite{von2020gaze}; the road segment after the first intersection is a downhill road, so after entering the first intersection, bicyclists have a wider field of view, which can lead to an increased level of perceived risk. Our study further verifies this finding through abrupt changes in HR. Moreover, participants seem to have HR change points earlier in more complex intersections (specifically, the third intersection with traffic lights and stop lines). This can be explained by the fact that more complex intersections require more time to prepare for crossing. It aligns with another naturalistic study showing that bicyclists' first fixation to the traffic light occurs earlier on a road without a bike lane \cite{rupi2019visual}. While not in biking studies, similar results were obtained in other transportation studies with respect to other road users, such as drivers' stress levels and emotions when approaching intersections. For instance, recent studies using subjective measures \cite{bustos2021predicting}, self-reports \cite{dittrich2021drivers}, and increases in HR \cite{tavakoli2021harmony,tavakoli2021leveraging} have all shown that drivers experience a higher stress level as they arrive at an intersection. However, we note that our findings need to be further verified by future studies due to the limited number and type of intersections in this study. Moreover, the distributions of HR change points for the separate bike lane and protected bike lane scenarios are smoother than for the as-built scenario. 
The separate bike lane has more delayed peaks than the other two designs; the reasons behind this should be further explored in future studies. One important point with respect to the HR in our study is the short duration of each scenario as well as of the overall experiment. Because the HR was sampled at a relatively low frequency (1 Hz), the number of data points per scenario is significantly smaller compared to other data sources such as gaze measures (120 Hz). The low frequency might be another reason for the insignificant results in the comparison of mean HR across the three scenarios. In future work, we are planning to use the raw PPG readings from the watches to enhance the depth of the HR modeling within different scenarios. While HR is sampled at 1 Hz, PPG is sampled at 100 Hz, with the caveat of being affected by motion artifacts. However, we should note that even with a lower number of data points, a change point detector, when applied to the overall data of a participant, can learn the proper distribution and find the moments of abrupt increase, which are spatially intuitive as well (e.g., close to intersections). \subsection{Demographics and Survey} Although a majority of the 50 participants rate the protected bike lane with pylons as the safest design, the post hoc comparison reveals few differences between the separate bike lane and protected bike lane scenarios. The protected bike lane scenario has a lower average speed, its average lane position is closer to the road curbside, and the separate bike lane design has a slightly higher gaze concentration. Other than these findings, there is little evidence of significant differences between these two alternative designs in terms of cycling behavior and physiological responses. These results indicate that some differences exist between participants' subjective ratings and their objective behavioral responses. 
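The change-point idea can be illustrated with a minimal sketch. This is a crude stand-in for a proper change-point detector, not the method used in this study, and the window size, threshold, and HR values are hypothetical:

```python
def hr_change_points(hr, window=5, threshold=8.0):
    """Flag abrupt mean shifts in a 1 Hz heart-rate series (bpm).

    Marks index i as a change point when the mean of the `window`
    samples after i differs from the mean of the `window` samples
    before i by more than `threshold` bpm.
    """
    points = []
    for i in range(window, len(hr) - window):
        before = sum(hr[i - window:i]) / window
        after = sum(hr[i:i + window]) / window
        if abs(after - before) > threshold:
            points.append(i)
    return points

# Toy series: resting ~75 bpm with an abrupt rise to ~90 bpm,
# e.g. shortly before an intersection
hr = [75] * 20 + [90] * 20
print(hr_change_points(hr))
```

Even at 1 Hz, a detector of this kind flags only the indices around the abrupt shift, which is why change points remain informative despite the small number of samples per scenario.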
No gender or age differences are found to be statistically significant in this study. It is widely accepted that female bicyclists have lower cycling participation rates \cite{mitra2019can}, as well as stronger preferences than males for greater separation from motor traffic \cite{aldred2017cycling}. While some studies report minor gender differences \cite{cobb2021bicyclists}, it has been argued that female bicyclists using the lanes had significantly more positive associations with protected lanes than males; in other words, protected bike lanes can help close the gender gap in cycling \cite{dill2014can,aldred2017cycling}. It is worth noting that most of the participants in this study are regular bicyclists, so they are not representative of the entire population. Evidence of stronger preferences among older people is also limited \cite{aldred2017cycling}. Another notable finding from this study is that the realism of the speed is more related to bicyclists' physiological changes than the realism of steering. This can be task-dependent, as in our experiment most of the road is straight and only a few steering maneuvers are required around the second intersection. \section{Limitations and Future Work} Limitations of this study are as follows. First, the duration of the experiment is short, so some findings need further study. Building a longer road segment in the IVE with more street blocks can be a solution; however, a longer duration is more likely to cause motion sickness or fatigue, which can lead to performance degradation. Thus, future work needs to find an optimal study duration that accounts for the trade-off between avoiding motion sickness and retrieving longer time series of data. Second, although a practice scenario is introduced at the beginning and the order of the scenarios is randomized, a learning effect can exist. 
This means that participants might become more familiar with the task as they progress through the scenarios, which can affect HR, gaze, and speed. Third, more benchmark studies are needed to verify the findings in real-world environments. We have conducted a pilot study with six participants both in the IVE and on a real road, and preliminary findings show that most of the physiological responses in the IVE are representative of the real world \cite{guo_robartes_angulo_chen_heydarian_2021}. Nonetheless, a benchmark study in the real world is needed for the same participant groups to validate the findings. Future work will focus on benchmarking the IVE setup in more diverse scenarios and with a higher number of participants. While this study focused on the design, different events within each design may also impact participants' psycho-physiology. For instance, when considering the effect of a separate bike lane, increases in traffic density, vehicle speed, and other environmental factors could affect how a participant's psycho-physiology changes within each scenario. In an attempt to isolate the effect of the roadway design, these variables had little or no variation in this experimental design. Future work will focus on simulating more detailed events within each alternative design to better illustrate the interaction between environmental properties (e.g., presence of a bike lane) and events (e.g., vehicles passing at high speed). While we focused on HR and gaze, we note that human psycho-physiology is complicated and not captured by only two measures. The addition of wearable devices with more detailed sensors (e.g., skin conductance, skin temperature, and breathing patterns) may provide additional insight into bicyclists' psycho-physiology. \section{Conclusion} This study explores bicyclists' physiological and behavioral changes in different urban roadway designs. 
In an immersive virtual environment, a bicycle simulator with integrated mobile sensing devices is used in the experiment to record bicyclists’ behavioral and physiological responses on the same road with three different roadway designs: shared bike lane (as-built), separate bike lane, and protected bike lane with pylons. Results from 50 participants indicate that (1) the protected bike lane design has the highest subjective safety rating; (2) participants in the protected bike lane scenario have the lowest cycling speed and highest lateral distance to the vehicle lane, indicating the potential for safer bicycling behavior with lower speeds and increased separation from vehicles; (3) bicyclists focus their gaze on the cycling task more in the separate and protected bike lane scenarios, indicating the potential for decreased distractions when cycling in a separate or protected bike lane compared to shared bike lane; (4) creating separation zones for bicyclists (whether separate bike lane or protected bike lane) has the potential to reduce the stress level, as indicated by decreased HR changes compared to the shared bike lane; and (5) the immersive virtual environment can be an efficient and safe tool to evaluate bicyclists' behavioral and physiological responses in different alternative roadway designs. \bibliographystyle{plain}
\section{Introduction} Particle and fluid manipulations utilizing electric fields have been extensively studied due to their advantages in various applications, including colloid science \cite{anderson1989colloid}, micro/nanofluidics \cite{bazant2010induced,Zhong2015a}, chemistry \cite{Vilkner2004,West2008}, biology \cite{jiang2010separating,Squires2013}, biomedicine \cite{keren2003protein,Huang2010}, etc. Among these manipulation methods, induced charge electrophoresis (ICEP) has received increasing interest because of its great potential in various applications, ranging from the manipulation of droplets \cite{schnitzer2013nonlinear,Schnitzer2013drop} and particles \cite{boymelgreen2012alternating,Feng2015} to device development in lab-on-a-chip systems, e.g., micromixers \cite{Daghighi2013,Feng2016pofchaotic}, microvalves \cite{sugioka2010high,daghighi2011micro} and micromotors \cite{squires2006breaking,Boymelgreen2014}. When a conducting (ideally polarizable) particle is subjected to an external electric field, it polarizes immediately. The polarization surface charges attract counterions in the electrolyte solution, establishing an induced electric double layer (EDL). The interactions between the applied electric field and the induced EDL lead to fluid flow, known as induced charge electroosmosis (ICEO). Particle motion due to ICEO is termed induced charge electrophoresis (ICEP). The ICEP velocity is a quadratic function of the strength of the applied electric field because the zeta potential of the conducting (ideally polarizable) particle is induced by the applied electric field \cite{Yariv2005}, \begin{equation} \zeta_i=-\phi+\int_A{\phi dA}/A, \label{eq1zetaidefinition} \end{equation} where $\zeta_i$ is the induced zeta potential of the conducting particle, $\phi$ is the applied electrical potential on the particle surface, and $A$ is the area of the particle surface. 
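As a quick numerical illustration of Eq.\ (\ref{eq1zetaidefinition}) (an aside, not part of the derivation below), the sketch evaluates the induced zeta potential for the textbook case of an isolated ideally polarizable cylinder in a uniform field along $x$, whose surface potential is $\phi=-2E_0R\cos\theta$; the numerical values are arbitrary:

```python
import math

E0, R = 1.0, 1.0  # field strength and cylinder radius (arbitrary units)
N = 720           # surface discretization

# Surface potential of an isolated ideally polarizable cylinder in a
# uniform field E0 along x (textbook 2D result): phi = -2 E0 R cos(theta).
thetas = [2 * math.pi * k / N for k in range(N)]
phi = [-2 * E0 * R * math.cos(t) for t in thetas]

# Eq. (1): zeta_i = -phi + surface average of phi
phi_mean = sum(phi) / N
zeta_i = [-p + phi_mean for p in phi]

print(max(zeta_i))  # peaks at 2*E0*R on the pole facing the field
```

The induced zeta potential is linear in $E_0$, and the ICEO slip it drives is proportional to $\zeta_i E_\sigma$, which is why the resulting ICEP velocity scales quadratically with the field strength.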
Pioneering studies of ICEP were carried out in colloid science decades ago \cite{Gamayunov1986,Dukhin1986}. Thanks to the rapid advancement of material science and nanotechnology, which provides various kinds of conducting particles for micro/nanofluidics \cite{Garcia-Sanchez2012,Zhong2014,Zhong2015}, ICEP has regained researchers' attention in recent years \cite{bazant2010induced,ramos2011electrokinetics,Feng2016}. As particles are often bounded by or contained in channels or chambers in practice, the boundary effect is of significant importance in the relevant applications. Some studies have considered the planar wall effect on the ICEP motion of particles \cite{gangwal2008induced,Yariv2009remote,Hamed2009near,kilic2011induced,Sugioka2011}. The repulsion or attraction effects of a planar wall on different particles, ranging from cylinders \cite{zhao2007effect,kilic2011induced}, Janus particles \cite{gangwal2008induced}, and spheres \cite{Yariv2009remote,Hamed2009near,kilic2011induced} to ellipsoids \cite{Sugioka2011}, have been reported. However, in such studies, the walls are all straight and uncharged. Investigations of the ICEP behavior of particles near a curved and/or charged non-conducting wall remain limited. To improve the physical insight into this problem, we hereby carry out a comprehensive study on the ICEP motion of a conducting cylinder suspended in a non-conducting cylindrical pore. The analytical evaluation of the cylinder velocities reveals that the cylinder is not only driven into translation but also, surprisingly, into rotation when the cylinder and the cylindrical pore are eccentric. 
The ICEP rotation of the cylinder presents promising potential as a micromotor, which has long been crucial in the development of micromachines for biomedicine \cite{zhao2013influence}, biochemistry \cite{guix2014nano}, environmental science \cite{gao2014environmental}, etc., and thus remains a topic of strong scientific and technological interest. Various micromotors have been proposed and studied \cite{liu2011autonomous,wu2013self,mou2013self}. Most such studies center on Janus particles \cite{squires2006breaking,mou2013self}, segmented rods with different metals \cite{liu2011autonomous}, and microtubes with layer-by-layer deposited metals \cite{wu2013self}. ICEP rotations of nonspherical particles \cite{Yariv2005,saintillan2006hydrodynamic,yariv2008slender}, Janus particles \cite{squires2006breaking} and cylinders in pair interactions \cite{Feng2016} have been reported. An ICEP micromotor composed of three Janus particles was proposed and theoretically analyzed years ago \cite{squires2006breaking}. Lately, the ICEP rotation of a doublet Janus particle has been experimentally captured \cite{Boymelgreen2014}. However, all these proposed structures are composed of different materials, which complicates fabrication. We hereby propose a micromotor utilizing the ICEP rotation of the cylinder in the cylindrical pore, which has the advantages of simple geometry and uniform material, and is thus easy to fabricate. The analysis shows that the micromotor is capable of providing a large rotational velocity and bearing a heavy load. This study contributes to the understanding of the ICEP behavior of a conducting cylinder in a non-conducting cylindrical pore, and provides helpful insights for micromotor development. \section{Mathematical formulation} A two-dimensional (2D) conducting cylinder is suspended in a non-conducting cylindrical pore filled with an electrolyte solution. 
A Cartesian coordinate system is introduced in which the centers of the cylinder and the cylindrical pore are on the positive $x-$axis (Fig.\ \ref{fig1}). As the cylinder and the cylindrical pore are generally eccentric, a bipolar coordinate system is defined in the Cartesian coordinates \cite{keh1991boundary}, \begin{equation} x=a\frac{\sinh\tau}{\cosh\tau-\cos\sigma},\quad y=a\frac{\sin\sigma}{\cosh\tau-\cos\sigma}, \label{eq1bipolarcoord} \end{equation} so as to conveniently describe the eccentric geometry (Fig.\ \ref{fig1}). Here $-\infty<\tau<\infty$; $0<\sigma\le 2\pi$; $(\tau, \, \sigma)$ denote the coordinates of the bipolar coordinate system; and $a$ is a positive constant of the bipolar coordinates. The surfaces of the cylinder and the cylindrical pore are indicated by $\tau =\tau_i$ and $\tau_o$, respectively, in the bipolar coordinates. \begin{figure}[!htb] \centering \includegraphics[width=3.25in]{Fig1.eps} \caption{Schematic illustration of the conducting cylinder suspended in the cylindrical pore. $(\pm a,0)$ are the two foci of the bipolar coordinates; $\mathbf{e}_{\tau}$ and $\mathbf{e}_{\sigma}$ are the unit vectors in the bipolar coordinates normal and tangential to the cylinder surface, respectively; $\tau=\tau_i$ and $\tau_o$ indicate the surfaces of the cylinder and the cylindrical pore, respectively; $(a\coth\tau_i,\,0)$ and $(a\coth\tau_o,\,0)$ are the centers of the cylinder and the cylindrical pore, respectively, in the Cartesian coordinates; $R_i=a/{\sinh\tau_i}$ and $R_o=a/{\sinh\tau_o}$ are the radii of the cylinder and the cylindrical pore, respectively; and $\tilde\varepsilon$ is the distance between the centers of the cylinder and the cylindrical pore. 
The uniform electric field $-(E_0 \sin\theta_0 \mathbf{e}_x+E_0 \cos\theta_0 \mathbf{e}_y)$ is imposed, where the electric field phase angle $\theta_0$ defines the direction of the electric field.} \label{fig1} \end{figure} To quantitatively describe the eccentric geometry, two parameters, i.e., the radius ratio $R_r$ and the eccentricity $\varepsilon$, are introduced, \begin{equation} R_r=\frac{R_i}{R_o},\quad \varepsilon=\frac{\tilde{\varepsilon}}{R_o-R_i}, \label{Rr} \end{equation} where $R_i$ and $R_o$ are the radii of the cylinder and the cylindrical pore, respectively; $\tilde{\varepsilon}$ is the distance between the centers of the cylinder and the cylindrical pore (Fig.\ \ref{fig1}). When the eccentricity $\varepsilon$ decreases to zero, the cylinder and the cylindrical pore become concentric. The bulk fluid outside the EDLs is electrically neutral. Thus, the Laplace equation is applied, \begin{equation} \nabla^2\phi=0, \label{eq2LaplaceEq} \end{equation} where $\phi$ is the electrical potential of the bulk fluid. The electric field lines are expelled by the EDL on the cylinder. Hence, the no-flux condition is applied, \begin{equation} \mathbf{e}_{\tau}\cdot\nabla\phi=0 \quad\text{at}\quad \tau=\tau_i, \label{eq3noflux} \end{equation} where $\mathbf{e}_{\tau}$ is the unit vector normal to the cylinder surface in the bipolar coordinates (Fig.\ \ref{fig1}). 
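The geometric relations implied by Eqs.\ (\ref{eq1bipolarcoord}) and (\ref{Rr}) can be checked numerically. The sketch below (with arbitrary illustrative values of $a$, $\tau_i$, $\tau_o$) recovers $R_i$, $R_o$, $R_r$ and $\varepsilon$ and confirms that the cylinder fits inside the pore:

```python
import math

def eccentric_geometry(a, tau_i, tau_o):
    """Radii, radius ratio, and eccentricity of Eq. (3) for a cylinder
    (tau = tau_i) inside a cylindrical pore (tau = tau_o), tau_i > tau_o > 0."""
    R_i = a / math.sinh(tau_i)
    R_o = a / math.sinh(tau_o)
    # centers sit at x = a*coth(tau); their separation is the offset eps_tilde
    offset = abs(a / math.tanh(tau_i) - a / math.tanh(tau_o))
    return R_i, R_o, R_i / R_o, offset / (R_o - R_i)

R_i, R_o, R_r, eps = eccentric_geometry(a=1.0, tau_i=2.0, tau_o=1.0)
print(R_r, eps)  # the cylinder must fit inside the pore: offset + R_i < R_o
```

Larger $\tau$ corresponds to the smaller, inner circle ($\sinh\tau$ grows with $\tau$), and as $\varepsilon\to 0$ the cylinder and the pore become concentric, consistent with the description above.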
Given the uniformly applied electric field $-(E_0 \sin\theta_0 \mathbf{e}_x+E_0 \cos\theta_0 \mathbf{e}_y)$, the boundary condition on the cylindrical pore is, \begin{equation} \phi =E_0 a \frac{\sin\sigma\cos\theta_0+\sinh\tau_o\sin\theta_0}{\cosh\tau_o-\cos\sigma} \quad \text{at} \quad \tau =\tau_o, \label{eq4potentialDirichlet} \end{equation} using the Dirichlet condition, or \begin{eqnarray} \frac{\partial \phi}{\partial \tau} &=&E_0 a \frac{(1-\cosh\tau_o\cos\sigma)\sin\theta_0-\sinh\tau_o\sin\sigma\cos\theta_0}{(\cosh\tau_o-\cos\sigma)^2} \nonumber\\ && \text{at} \quad \tau =\tau_o, \label{eq4potentialNeumann} \end{eqnarray} using the Neumann condition. Both conditions lead to the uniform electric field $-(E_0 \sin\theta_0 \mathbf{e}_x+E_0 \cos\theta_0 \mathbf{e}_y)$ in the fluid domain when the cylinder is absent, although Eq.\ (\ref{eq4potentialNeumann}) does not define the tangential electric field on the cylindrical pore. This paper presents the derivation using the Dirichlet condition (Eq.\ (\ref{eq4potentialDirichlet})). For the derivation using the Neumann condition (Eq.\ (\ref{eq4potentialNeumann})), please refer to Section C of the Supplementary. Solving Eq.\ (\ref{eq2LaplaceEq}) together with Eqs.\ (\ref{eq3noflux}) and (\ref{eq4potentialDirichlet}), the electrical potential is obtained, \begin{eqnarray} \phi&=&E_0 a \sin\theta_0+ E_0 a \sum_{n=1}^{\infty} \frac{2\text{e}^{-n\tau_o}\cosh n(\tau_i-\tau)}{\cosh n(\tau_i-\tau_o)} \nonumber\\ && \times(\sin\theta_0\cos n\sigma+\cos\theta_0\sin n\sigma). 
\label{eq5phiDirichlet} \end{eqnarray} The zeta potential of the non-conducting cylindrical pore is fixed at $\zeta_f$, while that of the conducting cylinder is induced by the imposed electric field and obtained by substituting Eq.\ (\ref{eq5phiDirichlet}) into Eq.\ (\ref{eq1zetaidefinition}), \begin{eqnarray} \zeta_i&=&-E_0 a\sum_{n=1}^{\infty}\frac{2\text{e}^{-n\tau_o}}{\cosh n(\tau_i-\tau_o)} \nonumber\\ && \times(\sin\theta_0\cos n\sigma+\cos\theta_0\sin n\sigma) \quad \text{at} \quad \tau=\tau_i. \label{eq7zetaiDirichlet} \end{eqnarray} The tangential electric fields $E_\sigma$ on the cylinder and the cylindrical pore are obtained from Eq.\ (\ref{eq5phiDirichlet}) through $\mathbf{E}=-\nabla\phi$, \begin{eqnarray} E_\sigma&=&E_0 (\cosh\tau_i-\cos\sigma)\sum_{n=1}^{\infty}\frac{2n\text{e}^{-n\tau_o}}{\cosh n(\tau_i-\tau_o)} \nonumber\\ && \times(\sin\theta_0 \sin n\sigma-\cos\theta_0 \cos n\sigma) \quad \text{at} \quad \tau=\tau_i,\quad \label{eq7E11innerDirichlet} \end{eqnarray} \begin{eqnarray} E_\sigma&=&E_0 (\cosh\tau_o-\cos\sigma)\sum_{n=1}^{\infty}2n\text{e}^{-n\tau_o} \nonumber\\ && \times(\sin\theta_0 \sin n\sigma-\cos\theta_0 \cos n\sigma) \quad \text{at} \quad \tau=\tau_o.\quad \label{eq7E11outerDirichlet} \end{eqnarray} The surrounding electric field exerts an electrostatic force and/or moment on the cylinder, \begin{equation} \mathbf{F}=\int_A{\mathbf{\Pi}\cdot \mathbf{e}_\tau dA}, \label{eq15-DEPforce} \end{equation} \begin{equation} \mathbf{M}=R_i\int_A{\mathbf{e}_\tau\times(\mathbf{\Pi}\cdot\mathbf{e}_\tau)dA}. \label{eq16-DEPmoment} \end{equation} The electric field is expelled by the EDLs on the cylinder. Thus, only the tangential electric field $E_\sigma$ remains. The Maxwell stress tensor $\mathbf{\Pi}_e=\varepsilon_w(\mathbf{E}\mathbf{E}-\frac{1}{2}E^2\mathbf{I})$ on the cylinder surface is normal, $\mathbf{\Pi}_e\cdot \mathbf{e}_\tau=-\varepsilon_wE_\sigma^2\mathbf{e}_\tau/2$. 
Therefore, it can be concluded that the electrostatic moment $M_e$ is zero. Substituting Eq.\ (\ref{eq5phiDirichlet}) into Eq.\ (\ref{eq15-DEPforce}) through the Maxwell stress tensor, the electrostatic forces per unit length on the cylinder are obtained, \begin{equation} F_{e,x}=\pi\varepsilon_w E_0^2 a \sum_{n=1}^{\infty} \frac{2 n^2 \text{e}^{-2n\tau_o}}{\cosh^2 n(\tau_i-\tau_o)}, \label{eq15FexD} \end{equation} \begin{equation} F_{e,y}=0. \label{eq15FeyD} \end{equation} The 2D fluid flow is described by the biharmonic equation of the stream function $\psi$, \begin{equation} \nabla^4\psi=0 \label{eq8-biharmonic}, \end{equation} where $\psi$ is related to the velocities in the bipolar coordinates through, \begin{equation} u_\sigma=h\frac{\partial \psi }{\partial \tau },\quad u_\tau=-h\frac{\partial \psi }{\partial \sigma }, \label{eq9-psiu} \end{equation} where $h=(\cosh\tau-\cos\sigma)/a$. The general solution of the stream function $\psi$ was given by Jeffery \cite{jeffery1922rotation}, \begin{eqnarray} h\psi &=& A_0\cosh\tau+B_0\tau(\cosh\tau-\cos\sigma)+C_0\sinh\tau \nonumber\\ && +D_0\tau\sinh\tau+\sum_{n=1}^{\infty}[a_n\cosh(n+1)\tau \nonumber\\ && +b_n\cosh(n-1)\tau+c_n\sinh(n+1)\tau \nonumber\\ && +d_n\sinh(n-1)\tau]\cos n\sigma+h_1\tau\sin\sigma \nonumber\\ &&+\sum_{n=1}^{\infty}[e_n\cosh(n+1)\tau+f_n\cosh(n-1)\tau \nonumber\\ && +g_n\sinh(n+1)\tau+h_n\sinh(n-1)\tau]\sin n\sigma.\quad \label{eq14-psi} \end{eqnarray} For practical electrolyte concentrations ($10^{-6}\sim10^{-3}$ mol/L), the EDL thickness $\lambda_D$ ranges from nanometers to sub-micrometers. It is typically much smaller than the characteristic length of either natural colloidal systems or artificial microfluidic devices. Therefore, the thin EDL approximation is adopted ($\lambda_D\ll \text{min}(R_i, \, R_o-R_i)$). 
Under this condition, the electric field is coupled to the flow field through the Helmholtz-Smoluchowski formula, \begin{equation} \mathbf{u}_\sigma=-\frac{\varepsilon_w\zeta}{\mu}E_\sigma\mathbf{e}_\sigma, \label{eq0slipu} \end{equation} where $\varepsilon_w$ and $\mu$ are the dielectric permittivity and the viscosity of the electrolyte solution, respectively; $\zeta$ is the zeta potential, given as $\zeta_i$ (Eq.\ (\ref{eq7zetaiDirichlet})) and $\zeta_f$ for the conducting cylinder and the non-conducting cylindrical pore, respectively. Eq.\ (\ref{eq0slipu}) holds for hydrophilic surfaces. Systematic analyses of electrokinetic phenomena occurring on hydrophilic surfaces can be found in Refs.\ \cite{yariv2009asymptotic,schnitzer2012macroscale}. For hydrophobic surfaces, the relationship between the slip velocity and the zeta potential differs; detailed derivations can be found in Ref.\ \cite{maduar2015electrohydrodynamics}. Substituting Eqs.\ (\ref{eq7zetaiDirichlet}) and (\ref{eq7E11innerDirichlet}) into Eq.\ (\ref{eq0slipu}), the slip velocity on the cylinder is obtained. After mathematical manipulations, the boundary condition of fluid flow on the cylinder is expressed as, \begin{eqnarray} \mathbf{u}&=&\sum_{n=1}^{\infty}\left(K_{i,n}\cos n\sigma+\Lambda_{i,n}\sin n\sigma\right)\mathbf{e}_\sigma \nonumber\\ && +U_x \mathbf{e}_x+U_y \mathbf{e}_y+\Omega R_i \mathbf{e}_\sigma \quad \text{at} \quad \tau=\tau_i, \label{eq16innerslipD} \end{eqnarray} where $K_{i,n}$ and $\Lambda_{i,n}$ are the coefficients of $\cos n\sigma$ and $\sin n\sigma$, respectively. Their expressions are given in Eqs.\ (S.12) $\sim$ (S.14) and (S.23) $\sim$ (S.24) of the Supplementary. 
Substituting Eq.\ (\ref{eq7E11outerDirichlet}) and $\zeta_f$ into Eq.\ (\ref{eq0slipu}), the boundary condition of fluid flow on the cylindrical pore is obtained, \begin{eqnarray} \mathbf{u} &=&U_i \tilde{\zeta} \left[-\text{e}^{-\tau_o}\cos\theta_0+\sinh\tau_o\sum_{n=1}^{\infty}2\text{e}^{-n\tau_o} \right.\nonumber\\ &&\left. \times\left(\cos\theta_0\cos n\sigma-\sin\theta_0\sin n\sigma\right)\right]\mathbf{e}_\sigma \quad \text{at} \quad \tau=\tau_o,\quad\quad \label{eq15outerslipD} \end{eqnarray} where $U_i=\frac{\varepsilon_w E_0^2 R_i}{\mu}$ and $\tilde{\zeta}=\frac{\zeta_f}{E_0 R_i}$ are the velocity scale of the cylinder and the dimensionless zeta potential of the cylindrical pore, respectively. The fluid flow can be decomposed into two parts according to its linearity. First, we consider the flow due to the electrokinetic slip velocities on the stationary cylinder and cylindrical pore. The stream function $\psi$ of this part is determined by substituting Eq.\ (\ref{eq14-psi}) into the first term of Eq.\ (\ref{eq16innerslipD}) and Eq.\ (\ref{eq15outerslipD}) through Eq.\ (\ref{eq9-psiu}). The expressions of the coefficients are listed in Section A of the Supplementary. Substituting the stream function $\psi$ with the obtained coefficients into Eqs.\ (\ref{eq15-DEPforce}) and (\ref{eq16-DEPmoment}) through the viscous stress tensor $\mathbf{\Pi}_H=-p\mathbf{I}+\mu[\nabla\mathbf{u}+(\nabla\mathbf{u})^T]$, the hydrodynamic forces and moment per unit length on the cylinder are obtained, as shown by Eqs.\ (S.25) $\sim$ (S.27) in Section A of the Supplementary. Next, we consider the flow due to the cylinder motion $U_x \mathbf{e}_x+U_y \mathbf{e}_y+\Omega R_i \mathbf{e}_\sigma$, which gives rise to drag forces and a moment on the cylinder. The derivation of the drag forces and moment is presented in Section B of the Supplementary. Since the cylinder is in free suspension, the net force and moment exerted on it must vanish. 
Summing up the obtained electrostatic, hydrodynamic, and drag forces along the $x-$axis, i.e., Eqs.\ (\ref{eq15FexD}), (S.25), (S.44); along the $y-$axis, i.e., Eqs.\ (\ref{eq15FeyD}), (S.26), (S.45); and the moments, i.e., Eqs.\ (S.27), (S.46), the cylinder velocities are obtained, \begin{equation} U_x =U_{ES,x}+U_{ICEO,x}+U_{EO,x}, \label{eq30UxD} \end{equation} where \begin{equation} \begin{split} U_{ES,x} &=\frac{1}{2} U_i \sum_{n=1}^{\infty}\frac{n^2 \text{e}^{-2n\tau_o}\sinh\tau_i \left[\tau_i-\tau_o-\tanh(\tau_i-\tau_o)\right]}{\cosh^2 n(\tau_i-\tau_o)}, \end{split}\tag{22a} \end{equation} \begin{equation} \begin{split} U_{ICEO,x} &=\frac{1}{2} U_i \left\{\frac{\text{e}^{-2\tau_o} \sinh\tau_i \tanh(\tau_i-\tau_o)}{\cosh(\tau_i-\tau_o)}\left[\frac{\cos2\theta_0}{\cosh(\tau_i-\tau_o)}\right.\right.\\ &\left.\left.-\frac{2\text{e}^{-\tau_o}\cosh\tau_i}{\cosh2(\tau_i-\tau_o)}+\frac{\text{e}^{-2\tau_o}}{\cosh3(\tau_i-\tau_o)}\right]\right.\\ & \left. +\sum_{n=2}^{\infty} \left[\frac{n\text{e}^{-2n\tau_o} \sinh\tau_i \tanh(\tau_i-\tau_o)}{\cosh n(\tau_i-\tau_o)}\right.\right.\\ &\left.\left. \times\left(\frac{2\text{e}^{\tau_o}\cosh\tau_i}{\cosh(n-1)(\tau_i-\tau_o)} -\frac{2\text{e}^{-\tau_o}\cosh\tau_i}{\cosh(n+1)(\tau_i-\tau_o)}\right.\right.\right.\\ & \left.\left.\left. 
+\frac{\text{e}^{-2\tau_o}}{\cosh(n+2)(\tau_i-\tau_o)}\right)\right.\right.\\ &\left.\left.-\frac{(n+1)\text{e}^{-2n\tau_o} \sinh\tau_i \tanh(\tau_i-\tau_o)}{\cosh(n-1)(\tau_i-\tau_o)\cosh(n+1)(\tau_i-\tau_o)}\right]\right\}, \end{split}\tag{22b} \end{equation} \begin{equation} \begin{split} U_{EO,x} &=-U_i \tilde{\zeta} \text{e}^{-\tau_o}\tanh(\tau_i-\tau_o)\sinh\tau_o \sin\theta_0, \end{split}\tag{22c} \end{equation} \begin{eqnarray} U_y &=& -\frac{1}{2}U_i \frac{\text{e}^{-2\tau_o} \cosh\tau_i \tanh(\tau_i-\tau_o) \sin2\theta_0}{\cosh^2(\tau_i-\tau_o)} \nonumber\\ && +U_i \tilde{\zeta} \frac{\text{e}^{-\tau_o}\sinh(\tau_i-\tau_o)\left[1-\frac{\cosh\tau_i\sinh\tau_o}{\cosh(\tau_i-\tau_o)}\right]\cos\theta_0}{\sinh\tau_i},\nonumber\\ \label{eq30UyD} \end{eqnarray} \begin{eqnarray} \Omega &=& -\frac{U_i}{R_i}\frac{\text{e}^{-2\tau_o}\tanh(\tau_i-\tau_o) \sin2\theta_0}{2\cosh^2(\tau_i-\tau_o)} \nonumber\\ && -\frac{U_i}{R_i} \frac{\tilde{\zeta}\text{e}^{-\tau_o}\sinh\tau_o\big[1+\tanh(\tau_i-\tau_o)\big]\cos\theta_0}{\sinh\tau_i}. \label{eq30UrotD} \end{eqnarray} Clearly, the cylinder velocities are trigonometric functions of the electric field phase angle $\theta_0$. The cylinder velocities can be manipulated by changing $\theta_0$. Three factors contribute to the cylinder motion, namely the electrostatic (ES) force, the induced charge electroosmotic (ICEO) flow, and the electroosmotic (EO) flow. All these three factors contribute to $U_x$ (Eqs.\ (\ref{eq30UxD}) and (S.70)). Only the ICEO and the EO flows contribute to $U_y$ and $\Omega$ (Eqs.\ (\ref{eq30UyD}), (\ref{eq30UrotD}), (S.71) and (S.72)). \section{Results and discussion} \subsection{Cylinder velocities}\label{Secvelocities} As introduced previously, three factors contribute to the cylinder velocities. The cylinder velocities due to these factors as well as the total velocities are characterized in this section. 
The influences of $R_r$ and $\varepsilon$ on the cylinder velocities are presented in Figs.\ \ref{Fig2} $\sim$ \ref{Fig4} with $\theta_0=\pi/6$ and $\tilde{\zeta}=1$. The influences of $\theta_0$ and $\tilde{\zeta}$ on the cylinder velocities are shown in Figs.\ S.1 $\sim$ S.3 of the Supplementary with $R_r=\varepsilon=0.5$. The ES component of the cylinder velocity is independent of $\tilde{\zeta}$ and $\theta_0$ (Fig.\ S.1). It solely contributes to $U_x$. As the induced zeta potential $\zeta_i$ is a function of the applied electric field, the ICEO component is independent of $\tilde{\zeta}$ but is a trigonometric function of $2\theta_0$ (Figs.\ S.1 $\sim$ S.3). The variations of the cylinder velocities with $\tilde{\zeta}$ and $\theta_0$ follow the same trend under the two conditions (Figs.\ S.1 $\sim$ S.3). \begin{figure*}[!htb] \centering \includegraphics[width=6.5in]{Fig2.eps} \caption{Variation of the cylinder velocity $U_x/U_i$ with the radius ratio $R_r$ at different eccentricities $\varepsilon$. The ES component is amplified 5 times for better observation.} \label{Fig2} \end{figure*} \begin{figure*}[!htb] \centering \includegraphics[width=5in]{Fig3.eps} \caption{Variation of the cylinder velocity $U_y/U_i$ with the radius ratio $R_r$ at different eccentricities $\varepsilon$.} \label{Fig3} \end{figure*} Fig.\ \ref{Fig2} shows the variation of the cylinder velocity $U_x$ with the radius ratio $R_r$ at different eccentricities $\varepsilon$. The ES components of $U_x$ obtained from the Dirichlet and the Neumann conditions follow the same trend as $R_r$ and $\varepsilon$ increase (Figs.\ \ref{Fig2}(a1) and \ref{Fig2}(b1)). They monotonically increase as $\varepsilon$ increases, and show a parabolic variation as $R_r$ increases. This is because the ES component of $U_x$ is caused by the asymmetric surrounding electric field: a stronger asymmetry leads to a larger ES force. 
The asymmetry of the surrounding electric field monotonically increases as $\varepsilon$ increases. At $R_r=0$, the cylindrical pore is infinitely large compared to the cylinder. The cylindrical pore shows negligible influence on the local electric field around the cylinder. Thus, the ES force is zero. At $R_r=1$, the cylinder and the cylindrical pore coincide with each other. The surrounding electric field becomes totally symmetric, which leads to zero ES force. As $R_r$ increases from 0 to 1, the asymmetry of the surrounding electric field first increases and then decreases. The ES components of $U_x$ obtained from these two conditions are of the same order of magnitude, although the Dirichlet condition leads to a faster decay as $R_r$ increases when $R_r$ is large. This trend can be more clearly observed in Fig.\ S.4(a). Although the electric fields obtained from the two conditions do not differ significantly, the resulting flow-induced cylinder velocities do. Using the Dirichlet condition, the ICEO components of the cylinder velocities ($U_x$, $U_y$ and $\Omega$) increase from zero and then diminish to zero as $R_r$ increases (Figs.\ \ref{Fig2}(a2), \ref{Fig3}(a1) and \ref{Fig4}(a1)). Using the Neumann condition, by contrast, they monotonically increase from zero as $R_r$ increases (Figs.\ \ref{Fig2}(b2), \ref{Fig3}(b1) and \ref{Fig4}(b1)). As $\varepsilon$ increases, the ICEO components of the cylinder velocities obtained from both conditions show monotonic increases. \begin{figure*}[!htb] \centering \includegraphics[width=5in]{Fig4.eps} \caption{Variation of the cylinder velocity $\Omega R_i/U_i$ with the radius ratio $R_r$ at different eccentricities $\varepsilon$.} \label{Fig4} \end{figure*} As $R_r$ increases, the EO components of $U_x$ and $U_y$ approach zero (Figs.\ \ref{Fig2}(a3) and \ref{Fig3}(a2)) and constant values (Figs.\ \ref{Fig2}(b3) and \ref{Fig3}(b2)) when they are obtained from the Dirichlet and the Neumann conditions, respectively.
The EO components of $\Omega$ obtained from both conditions approach constant values as $R_r$ increases (Figs.\ \ref{Fig4}(a2) and \ref{Fig4}(b2)). In addition, as $\varepsilon$ increases, the magnitudes of the EO components of $U_x$ monotonically decrease (Figs.\ \ref{Fig2}(a3) and \ref{Fig2}(b3)), the EO components of $U_y$ increase from negative to positive (Figs.\ \ref{Fig3}(a2) and \ref{Fig3}(b2)), and the magnitudes of the EO components of $\Omega$ monotonically increase (Figs.\ \ref{Fig4}(a2) and (b2)). From Figs.\ \ref{Fig2} $\sim$ \ref{Fig4}, we can conclude that the cylinder velocities obtained from the Neumann condition are larger than those obtained from the Dirichlet condition, especially at large $R_r$. One may refer to Figs.\ S.4 $\sim$ S.6 in Section D of the Supplementary for more details. The Dirichlet and the Neumann conditions specify the electrical potential and the electric field (i.e., surface charge density) on the cylindrical pore, respectively \cite{jackson1999classical}. The tangential electric field on the cylindrical pore is not defined by the Neumann condition. Hence, the electric fields within the cylindrical pore vary due to the different boundary conditions. As $R_r$ decreases, the difference between these two conditions becomes more significant. The difference between the ICEO components obtained under the two conditions is more pronounced than that between the EO components. This is due to the fact that the ICEO component is a quadratic function of the electric field, while the EO component is linearly proportional to the electric field. Cylinder velocities depend on the relative magnitudes of their components as shown in Figs.\ \ref{Fig2} $\sim$ \ref{Fig4}. The cylinder velocity maps at $\theta_0=0$ are shown in Fig.\ \ref{Fig5}. The vectors and contours indicate the translational and rotational velocities of the cylinder, respectively. For a nonzero $\theta_0$, the cylinder velocity map is the same as that at $\theta_0=0$ but tilts at an angle $\theta_0$.
To facilitate the discussion, a polar coordinate system is introduced (as seen in the upper right corner of Fig.\ \ref{Fig5}), where $\alpha$ indicates the angle of the polar coordinates. \begin{figure}[!htb] \centering \includegraphics[width=6in]{Fig5.eps} \caption{Cylinder velocity maps. $\tilde{\zeta}=0$, 0.2 and 1 in Figs.\ (a1,b1), (a2,b2) and (a3,b3), respectively. Radius ratio $R_r=0.1$ and electric field phase angle $\theta_0=0$. The vector field indicates the cylinder translational velocity $\mathbf{U}=U_x\mathbf{e}_x+U_y\mathbf{e}_y$. The contour plot shows the cylinder rotational velocity $\Omega$, where the positive and negative values indicate the counterclockwise and clockwise directions, respectively.} \label{Fig5} \end{figure} Both the translational and rotational velocities of the cylinder obtained from the Neumann condition (Fig.\ \ref{Fig5}(b)) are larger than those obtained from the Dirichlet condition (Fig.\ \ref{Fig5}(a)). The contour plot demonstrates that the cylinder possesses a greater rotational velocity $\Omega$ when it is near the cylindrical pore at $\alpha=(2n-1)\pi/4$. As $\tilde{\zeta}$ increases, the peak values of $\Omega$ at $\alpha=5\pi/4$ and $7\pi/4$ increase while those at $\alpha=\pi/4$ and $3\pi/4$ decrease or even disappear. This is due to the increasing EO component of $\Omega$. At a given electric field, the magnitude and the direction of $\Omega$ can be tuned by adjusting the position of the cylinder within the cylindrical pore. The vector fields in Fig.\ \ref{Fig5} show the magnitude distributions of $\mathbf{U}$ and the cylinder trajectories. The cylinder experiences large $\mathbf{U}$ when it is close to the cylindrical pore. When $\tilde{\zeta}=0$, the results show that the cylinder moves towards and becomes stationary at the center of the cylindrical pore regardless of its initial position (Figs.\ \ref{Fig5}(a1) and \ref{Fig5}(b1)).
At $\tilde{\zeta}=0.2$, the cylinder moves towards and becomes stationary at a stationary point near the cylindrical pore due to the increased EO component of $\mathbf{U}$ (Figs.\ \ref{Fig5}(a2) and \ref{Fig5}(b2)). As $\tilde{\zeta}$ increases to 1, two stationary points appear within the cylindrical pore because the EO component of $\mathbf{U}$ is greatly enhanced (Figs.\ \ref{Fig5}(a3) and \ref{Fig5}(b3)). Regardless of the initial position and the specific trajectory, the cylinder moves towards and becomes stationary at the stationary points using the Dirichlet condition (Fig.\ \ref{Fig5}(a3)); while it may become stationary at the stationary points or reach the lower side of the cylindrical pore using the Neumann condition (Fig.\ \ref{Fig5}(b3)). These different cylinder trajectories are caused by the different EO components of $\mathbf{U}$ obtained from the two conditions. \subsection{Micromotor} Since the cylinder rotates when it is eccentric with respect to the cylindrical pore (Figs.\ \ref{Fig4} and S.3), micromotors can be developed by letting the cylinder free to rotate but not translate. The rotation of the cylinder may influence the establishment of EDL on the cylinder. To ensure the influence is negligible, the rotational velocity of the cylinder must be slow compared to the establishment of EDL. The characteristic time of the EDL formation is the charging time $t_c$, defined as $t_c=\kappa^{-1}R_i/D_i$. Here $\kappa^{-1}=\sqrt{\varepsilon_wk_B N_A T/(2F^2c_0)}$ is the Debye length of the EDL, where $\varepsilon_w$ is the dielectric permittivity of the electrolyte solution; $k_B$ is the Boltzmann constant; $N_A$ is the Avogadro constant; $T$ is the absolute temperature of the electrolyte solution; $F$ is the Faraday constant; and $c_0$ is the molar concentration of the electrolyte solution. 
The EDL charging time of the cylinder $t_c$ is larger than the Debye relaxation time for ionic screening, $t_D=\kappa^{-2}/D_i$, while smaller than the diffusion time for the relaxation of bulk concentration gradient, $t_R=R_i^2/D_i$, by a factor of $\kappa R_i$. $\kappa R_i$ is typically large in microfluidics \cite{squires2004induced}. We hereby define the charging frequency $f_c=1/t_c=\Gamma_c \frac{\sqrt{c_0}}{R_i}$, where $\Gamma_c=\sqrt{2}D_i F/\sqrt{\varepsilon_w k_B N_A T}$ is a constant with the given parameters in Table \ref{table1}. The charging frequency $f_c$ is proportional to $\sqrt{c_0}$ and inversely proportional to $R_i$. To ensure the effect of the cylinder rotation on the establishment of EDL is insignificant, the rotational velocity $\Omega$ of cylinder should be much smaller than the charging frequency $f_c$, $\Omega\ll f_c$. \begin{table}[htb] \centering \footnotesize \caption{Parameters of the electrokinetic system of the electrolyte solution. \label{table1}} \begin{tabular}{l l l} \hline Dielectric permittivity & $\varepsilon_w$ & $7\times 10^{-10}\;{\rm kg\cdot m\cdot V^{-2}\cdot s^{-2}}$\\ Viscosity & $\mu$ & ${\rm 1\times 10^{-3}\; kg\cdot m^{-1}\cdot s^{-1}}$\\ Density & $\rho$ & ${\rm 1\times 10^{3}\; kg\cdot m^{-3}}$\\ Diffusivity & $D_i$ & ${\rm 1\times 10^{-9}\; m^2\cdot s^{-1}}$\\ Boltzmann constant & $k_B$ & $1.38\times 10^{-23}\; {\rm J\cdot K^{-1}}$ \\ Avogadro constant & $N_A$ & $6.02\times 10^{23}\; {\rm mol^{-1}}$\\ Absolute temperature & $T$ & $298.15\; {\rm K}$\\ Faraday constant & $F$ & $9.65\times 10^{4}\; {\rm C\cdot mol^{-1}}$\\ \hline \end{tabular} \end{table} The rotational velocity scale $\Omega_i=U_i/R_i=\Gamma_r E_0^2$ is used to represent cylinder rotation in the following analysis, where $\Gamma_r=\varepsilon_w/\mu$ is a constant with the given parameters in Table \ref{table1}. 
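As a quick numerical check, the characteristic scales defined above ($\kappa^{-1}$, $t_c$, $f_c$, $\Gamma_r$, $\Omega_i$) can be evaluated directly from the parameters in Table \ref{table1}. The operating point below ($c_0$, $R_i$, $E_0$) is illustrative only, chosen to match the representative values discussed in this section:

```python
import math

# Parameters from Table 1 (SI units)
eps_w = 7e-10      # dielectric permittivity [kg m V^-2 s^-2]
mu    = 1e-3       # viscosity [kg m^-1 s^-1]
D_i   = 1e-9       # diffusivity [m^2 s^-1]
k_B   = 1.38e-23   # Boltzmann constant [J K^-1]
N_A   = 6.02e23    # Avogadro constant [mol^-1]
T     = 298.15     # absolute temperature [K]
F     = 9.65e4     # Faraday constant [C mol^-1]

# Illustrative operating point (assumed, not fixed by Table 1)
c0  = 1.0          # 1e-3 mol/L expressed in mol/m^3
R_i = 10e-6        # cylinder radius [m]
E0  = 1e4          # electric field strength, 10 kV/m

# Debye length kappa^-1 = sqrt(eps_w k_B N_A T / (2 F^2 c0)), about 10 nm
debye = math.sqrt(eps_w * k_B * N_A * T / (2 * F**2 * c0))

t_c = debye * R_i / D_i    # EDL charging time t_c = kappa^-1 R_i / D_i
f_c = 1.0 / t_c            # charging frequency, about 1e4 s^-1

Gamma_r = eps_w / mu       # rotational-velocity prefactor
Omega_i = Gamma_r * E0**2  # rotational velocity scale U_i / R_i
```

The computed values reproduce the order-of-magnitude estimates used in the quasi-steady analysis that follows.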
The present study is carried out with the quasi-steady state assumption, i.e., the unsteady term, $\rho\partial\mathbf{u}/\partial t$, in the Stokes equation is neglected. To ensure the validity of this assumption, the diffusion time of fluid vorticity, $t_\nu=R_i^2/\nu$, should be much smaller than the advection time scale of the flow, $t_a=R_i/U_i$ \cite{pozrikidis2011introduction}, where $\nu=\mu/\rho$ is the kinematic viscosity. Clearly, $\Omega_i=t_a^{-1}$; thus, the assumption requires $\Omega_i\ll t_\nu^{-1}$. For $c_0=1\times 10^{-3}$ mol$\cdot$L$^{-1}$, $\kappa^{-1}\approx10$ nm. Taking the cylinder radius $R_i=10\;\mu$m gives $\kappa R_i\approx1\times10^3$, so the thin-EDL approximation holds. Moreover, $t_c$ is much larger than $t_D$ and much smaller than $t_R$. The charging frequency of the EDL establishment is $f_c\approx1\times10^4$ s$^{-1}$, which is much larger than the rotational velocity of most micromotors. Likewise, $t_\nu^{-1}=1\times10^4$ s$^{-1}$; the unsteady term in the Stokes equation remains negligible so long as $\Omega_i$ is much smaller than this value. The rotational velocity of the load-free micromotor is the same as that shown in Figs.\ \ref{Fig4} and S.3. When the micromotor works under a load $M_l$, the moment balance becomes $M_e+M_H+M_d=M_l$.
Substituting Eqs.\ (S.27) and (S.46) into this equation, the relationship between the rotational velocity of the micromotor $\Omega_m$ and the load $M_l$ is obtained, \begin{eqnarray} \Omega_m &=& \left\{\frac{2\tilde{\zeta}U_i}{R_i}\text{e}^{-\tau_o}\sinh\tau_o\sinh\tau_i\left[1-\left(\tau_i-\tau_o\right)\left(1+\coth(\tau_i-\tau_o)\right) \right.\right.\nonumber\\ &&\left.\left.-\cosh\tau_o\sinh\tau_o-\coth\tau_i\sinh^2\tau_o\right]\cos\theta_0 \right.\nonumber\\ &&\left.+\frac{U_i}{2R_i} \frac{\text{e}^{-2\tau_o}\sinh\tau_i}{\cosh^2(\tau_i-\tau_o)}\left[\cosh(\tau_i-2\tau_o)-\cosh\tau_i+2(\tau_i-\tau_o)\sinh\tau_i\right]\sin2\theta_0 \right.\nonumber\\ &&\left.+\frac{M_l}{4\pi\mu R_i^2}\left[(\tau_i-\tau_o)(\cosh2\tau_i+\cosh2\tau_o-2)-\sinh2\tau_i+\sinh2\tau_o+\sinh2(\tau_i+\tau_o)\right]\right\} \nonumber\\ && /\left[\cosh2\tau_i-\cosh2(\tau_i-\tau_o)-2(\tau_i-\tau_o)\coth(\tau_i-\tau_o)\sinh^2\tau_i\right]. \label{eq31loadc} \end{eqnarray} \begin{figure*}[!htb] \centering \includegraphics[width=4.7in]{Fig6.eps} \caption{Variation of the rotational velocity of the micromotor with the load $M_l$ obtained from (a) the Dirichlet condition and (b) the Neumann condition, and (c) schematic diagram of the micromotor. Here the radius ratio $R_r=0.5$, the eccentricity $\varepsilon=0.5$, the electric field phase angle $\theta_0=7\pi/4$, the dimensionless zeta potential of the cylindrical pore $\tilde{\zeta}=0$, the cylinder radius $R_i=10\,\mu$m, and the electric field strength $E_0=10$ kV$\cdot$m$^{-1}$.} \label{FIG10} \end{figure*} A schematic diagram of the micromotor and the variation of the rotational velocity of the micromotor $\Omega_m$ with the load $M_l$ are presented in Fig.\ \ref{FIG10}. $\Omega_m$ decreases linearly as $M_l$ increases and reaches zero when $M_l$ equals the hydrodynamic moment $M_H$. The ranges of $M_l$ and $\Omega_m$ obtained from the Neumann condition are much larger than those obtained from the Dirichlet condition.
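The linear dependence of $\Omega_m$ on $M_l$ implied by Eq.\ (\ref{eq31loadc}) can be summarised as a load line that stalls when the load balances the hydrodynamic moment. The sketch below is schematic only: the load-free speed and stall moment are hypothetical placeholders, not values evaluated from the equation:

```python
def motor_speed(M_l, omega_free, M_stall):
    """Schematic load line: the rotational velocity falls linearly
    with the load and stalls (Omega_m = 0) when M_l reaches M_stall,
    i.e. when the load balances the hydrodynamic moment M_H."""
    return omega_free * (1.0 - M_l / M_stall)

# Hypothetical operating values, for illustration only
omega_free = 50.0     # load-free rotational velocity [s^-1]
M_stall    = 2.0e-12  # stall moment [N m] (placeholder)
```

Under the Neumann condition both intercepts of this load line are larger, consistent with the wider $M_l$ and $\Omega_m$ ranges noted above.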
By choosing the appropriate parameters according to Figs.\ \ref{Fig4} and S.3, a micromotor can be developed with a controllable rotational velocity. $M_H$ increases as the radius ratio $R_r$ and the eccentricity $\varepsilon$ increase (Eqs.\ (S.27) and (S.69)). Thus, the upper limit of the load $M_l$ can be increased by increasing $R_r$ and $\varepsilon$. Accordingly, a micromotor with a much faster rotational velocity $\Omega_m$ and a larger bearing capacity, i.e., a larger load $M_l$, can be developed. The micromotor can also bear loads in the opposite direction by adjusting the electric field phase angle $\theta_0$. Both the Dirichlet and the Neumann boundary conditions of the electric field have been used in the studies of particle suspensions \cite{levine1974prediction,kozak1989electrokinetics,ohshima1997electrophoretic}. The different results are due to the fact that the Dirichlet boundary condition (Eq.\ (\ref{eq4potentialDirichlet})) defines the electrical potential on the cylindrical pore, while the Neumann boundary condition (Eq.\ (\ref{eq4potentialNeumann})) defines the surface charge density \cite{jackson1999classical}. Eq.\ (\ref{eq4potentialNeumann}) does not specify the tangential electric field on the cylindrical pore. It was reported that the statistical mechanics modelling on the electrophoresis of biocells favors the Dirichlet boundary condition \cite{keh2000diffusiophoresis}. \section{Conclusion} In this paper, the induced charge electrophoresis of a conducting cylinder suspended in a non-conducting cylindrical pore is theoretically studied, and a micromotor is proposed utilizing the cylinder rotation. Both the Dirichlet and the Neumann boundary conditions of the electric field are applied on the cylindrical pore. The analytical study on the cylinder velocities shows that the cylinder not only translates but also rotates when the cylinder and the cylindrical pore are eccentric. 
The cylinder velocities are examined with various values of the eccentricity, the radius ratio, the electric field phase angle, and the zeta potential of the cylindrical pore. The analysis shows that the cylinder velocities obtained under the two boundary conditions differ considerably. Moreover, the cylinder trajectories show that the cylinder always approaches and becomes stationary at certain stationary points within the cylindrical pore. By choosing the appropriate parameters of the electrokinetic system, the micromotor proposed in this paper can achieve a high rotational velocity without influencing the EDL establishment on the cylinder. A large eccentricity and a strong electric field are preferred to develop a micromotor with a high rotational velocity and a great bearing capacity. The micromotor proposed here has the advantages of a simple geometry and a low operating voltage, and great potential for application in lab-on-a-chip systems for chemical and biological analysis. \section*{Acknowledgement} The authors gratefully acknowledge research support from the Singapore Ministry of Education Academic Research Fund Tier 2 research Grant MOE2011-T2-1-036.
\section{Introduction} Temporal ordering of events from semi-/un-structured textual data (e.g., news articles, clinical narratives) has important applications in many practical clinical settings, such as question answering (e.g., personal assistants), timeline analysis (e.g., event monitoring, pathway extraction), and text summarisation. Chronological ordering of events involves the tasks of named entity recognition and classification (NER) or event extraction, including temporal entity recognition and normalisation (TERN), and temporal relation (TLINK) identification and classification. Moreover, temporal ordering of events from textual clinical data includes at least three NLP tasks: \begin{inparaenum}[(1)] \item event extraction (e.g., clinical events or concepts such as problems, treatments and tests), \item temporal entity extraction: identification (e.g., `January 4 1988', `twice daily') and normalisation, and \item temporal relation extraction (\textit{determining when a particular event occurred}).\end{inparaenum} For example, in Figure \ref{figure:note_ex} a number of events (highlighted) and TEs (underlined) have been identified in a sample clinical narrative. Subsequently, the chronological ordering of events has been visualised in the given timeline. \begin{figure}[h] \includegraphics*[scale=0.29]{figures/temporal_ordering.pdf} \caption{Chronologically ordered events from a sample clinical narrative} \label{figure:note_ex} \end{figure} The methods described in this report have been inspired by work derived from community-held evaluations in relevant NLP tasks: \subsubsection*{Event extraction} Recent work in clinical event extraction has been notably driven by community-held evaluations in clinical named entity recognition organised as part of the 2010 \cite{Uzuneretal11} and 2012 \cite{Sunetal13b} i2b2 challenges.
\subsubsection*{Temporal entity extraction} Likewise, temporal entity extraction has been notably driven by a number of general-domain SemEval/TempEval \cite{Verhagenetal07,Verhagenetal10,Uzzamanetal13} challenges and the domain-specific 2012 i2b2 \cite{Sunetal13b} challenge. \subsubsection*{Temporal relation extraction} The aim of temporal relation extraction is to anchor extracted events onto a temporal space. Recent work on this problem has resulted from the 2012 i2b2 challenge \cite{Sunetal13b} and, more recently, SemEval-2015 task 6, Clinical TempEval \cite{Bethard2015}.\\ The remainder of this paper is structured as follows: Section \ref{sec:event_extraction} describes the methods engineered to extract clinical events such as medical problems, treatments and tests. Section \ref{sec:temporalentity_extraction} describes the methods developed to identify and normalise (to ISO-8601) temporal entities. Section \ref{sec:tlink_extraction} describes the temporal relation identification and classification approach. Section \ref{sec:experiments_results_discussions} presents the experiments, results and discussion. The conclusion is given in the final Section \ref{sec:conclusion}. This paper is largely self-contained.\\ Note that this report is a reprint from the author's thesis [TBA] and a significant improvement on intermediate results previously published \cite{Kovacevicetal13}.\\ A number of components described herein are available as open source\footnote{Clinical NERC \url{http://sourceforge.net/projects/clinical-nerc/} and Clinical TERN \url{http://sourceforge.net/projects/clinical-tern/}}\\ \section{Event extraction} \label{sec:event_extraction} The aim of the event extraction method is to identify broad clinical event categories such as \textit{Problem}, \textit{Treatment} and \textit{Test} and map them to a medical knowledge base such as the UMLS Metathesaurus for fine-grained semantic characterisation of event instances\footnote{No evaluation is provided on event/concept mapping.}.
These event categories will collectively be referred to as EVENTs henceforth. We have adopted the i2b2 definitions of concept or event categories, which are largely based on the UMLS semantic types but not limited by their coverage\footnote{\url{https://www.i2b2.org/NLP/Relations/assets/Concept\%20Annotation\%20Guideline.pdf}}. \subsection{Methods} The core NER is a data-driven approach (using the state-of-the-art sequence labelling algorithm CRF) to identify clinical EVENTs from healthcare narratives. The EVENT extraction pipeline is made up of three main processing components: NLP pre-processing, the NER (see Figure \ref{figure:ClinicalNER}), and negation: \begin{figure}[h] \begin{center} \includegraphics*[scale=0.5]{figures/NERC.pdf} \caption{EVENT extraction architecture} \label{figure:ClinicalNER} \end{center} \end{figure} The NLP pre-processing pipeline is made up of lexical and syntactic processing components, specifically: \begin{inparaenum}[(1)] \item Tokeniser, \item sentence splitter, \item word stemmer, \item POS tagger, and \item chunker / shallow parser.\end{inparaenum} \subsubsection*{Data-driven NER} The Data-driven NER component utilises separate CRFs trained for each EVENT category: \textit{Problem}, \textit{Treatment} and \textit{Test}. A combination of the forward and backward feature selection approaches was adopted to select the 20 most discriminant features from an initial set of 120 features. The same set of features was used across all categories, as our analysis showed this was the best fit. The extracted features can be clustered into two sets: lexical and syntactic, with four feature groups (see the list below).
\begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[\textbf{Lexical}] \item $fg_1$: the token string or alphanumeric character sequence \item $fg_2$: the stem of each token \item $fg_3$: the POS-tag for each token \item[\textbf{Syntactic}] \item $fg_4$: the chunk tag for each token \end{itemize} Further, the feature space is also made up of contextual features of the neighbouring tokens with a feature window size of 5 or [-2,2] with respect to the current token. The \textit{window size} corresponds to the number of tokens to the \textit{left} and \textit{right}, including the current token, of which contextual token features are considered. Specifically, for each token $t$ and a given feature group $fg$, the feature space consists of: ($t_{fg}$), ($t_{fg}$+1), ($t_{fg}$+2), ($t_{fg}$-1), and ($t_{fg}$-2) (see Table \ref{table:NERtemplate}). \begin{table}[h] \caption{Feature template: clinical EVENTs} \label{table:NERtemplate} \begin{small} \textit{CRF feature template used for all EVENT categories: Problem, Treatment and Test.} \end{small} \begin{center} \begin{tabular}{llll} \hline \textbf{$fg_1$:Token} & \textbf{$fg_2$:Stem} & \textbf{$fg_3$:POS} & \textbf{$fg_4$:Chunk}\\ U00:\%x[-2,1] & U05:\%x[-2,2] & U10:\%x[-2,3] & U15:\%x[-2,4]\\ U01:\%x[-1,1] & U06:\%x[-1,2] & U11:\%x[-1,3] & U16:\%x[-1,4]\\ U02:\%x[0,1] & U07:\%x[0,2] & U12:\%x[0,3] & U17:\%x[0,4]\\ U03:\%x[1,1] & U08:\%x[1,2] & U13:\%x[1,3] & U18:\%x[1,4]\\ U04:\%x[2,1] & U09:\%x[2,2] & U14:\%x[2,3] & U19:\%x[2,4]\\ \hline \end{tabular} \end{center} \end{table} All CRFs were trained using a mix of BIO and W-BIO (W: single word, B: beginning, I: inside, O: outside) sequence label models with the following (default) CRF parameters: $C=1.00$, $ETA: 0.0001$ and L2-regularisation algorithm. 
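The windowed feature expansion defined by the template in Table \ref{table:NERtemplate} can be sketched as follows. This is a hypothetical helper for illustration, not the CRF toolkit's own template syntax; the feature names mirror the groups $fg_1$--$fg_4$:

```python
def token_features(seq, i, window=2):
    """Expand the [-2,2] contextual window for the token at index i.
    `seq` is a list of (token, stem, pos, chunk) tuples; each emitted
    feature name encodes the group and the offset, e.g. "pos[-1]"."""
    names = ("token", "stem", "pos", "chunk")
    feats = {}
    for off in range(-window, window + 1):
        j = i + off
        if 0 <= j < len(seq):  # offsets falling outside the sentence are skipped
            for name, value in zip(names, seq[j]):
                feats[f"{name}[{off:+d}]"] = value
    return feats

# Toy sentence fragment: (token, stem, POS, chunk) per token
sent = [("severe", "sever", "JJ", "B-NP"),
        ("stomach", "stomach", "NN", "I-NP"),
        ("ache", "ach", "NN", "I-NP")]
feats = token_features(sent, 1)
```

Each CRF then sees, for every token, the current and neighbouring token strings, stems, POS tags and chunk tags as separate features.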
The post-processing component contains three sub-components: \begin{enumerate} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item \textbf{Label fixer}\\ This component corrects sequence label predictions from the NER component. These corrections are simple heuristics based on commonly observed errors in the training data set. Table \ref{table:labelfixer} lists the full set of heuristics utilised. \begin{table}[h] \caption{Label fixer heuristics} \label{table:labelfixer} \begin{center} \begin{tabular}{|c|l|l|} \hline & \textbf{Raw predictions} & \textbf{Corrected predictions} \\ \hline a & ... O O O I I I I ... & ... O O \textbf{B} I I I I ... \\ \hline b & ... O O O B O O O ... & ... O O O B \textbf{I} O O ... \\ \hline c & ... O O O B O I I ... & ... O O O B \textbf{I} I I... \\ \hline d & ... O O O B I I B I I ... & ... O O O B I I \textbf{I} I I... \\ \hline \end{tabular} \end{center} \end{table} \item \textbf{Boundary adjustment} \\ This component attempts to expand the event boundary by including contextual tokens to the right and left of predictions that possess POS/chunk tags corresponding to nouns and noun phrases and their constituents, including adjectives and determiners (e.g., `a'; `this'; `her'). This sub-component is useful when the NER only tags part of an event. For example, if the NER component annotates only the word `severe', `stomach', or `ache' from the actual term `severe stomach ache', this component would capture the complete term boundary. \item \textbf{False positive filter}\\ This component removes common false-positive predictions observed during the validation of the NER, i.e., common model prediction errors. Examples of false positives include single-character predictions (e.g., `a'), pronouns (e.g., `he'; `she'), and determiners (e.g., `the').
\end{enumerate} \subsubsection*{Negation} To identify negated clinical EVENTs, we used the ConText negation tool as described in \citep{Harkemaetal09}. \section{Temporal entity extraction} \label{sec:temporalentity_extraction} The TERN task involves the recognition and normalisation of TEs. TEs, as defined by the TIMEX3 schema, are grouped into four temporal types: \textit{Date} (e.g., `August 23, 1993'), \textit{Time} (e.g., `2:23 p.m.'), \textit{Frequency} (e.g., `every morning'), and \textit{Duration} (e.g., `two weeks'). In addition, the \textit{Date and time format: ISO-8601} standard is used to normalise TEs into a standardised format. \subsection{Methods} We propose a hybrid TER component, together with a rule-based temporal normalisation component (ClinicalNorMA)\footnote{https://github.com/filannim/clinical-norma}. The motivation for adopting a hybrid approach for TER was to compare different approaches, and potentially combine the methods for the best possible performance. \subsection*{Architecture} The TERN component is made up of the following components (see Figure \ref{figure:TERNArchitecture} for an overview). \begin{figure}[h] \begin{center} \includegraphics[scale=0.5]{figures/TERN.pdf} \caption{TERN architecture} \label{figure:TERNArchitecture} \end{center} \end{figure} A pre-processing pipeline is made up of the following NLP components: \begin{inparaenum}[(1)] \item tokeniser, \item sentence splitter, and \item semantic temporal resources.\end{inparaenum} Specifically, several bespoke temporal knowledge resources were manually compiled and applied at this stage of processing, to be subsequently utilised as features for the rule- and ML-based TER components.
These semantic resources cover a broad set of temporal expression sub-categories: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item clinical frequency (e.g., qd (once a day), bid (twice a day)) \item duration (e.g., `over night', `weekend', `months') \item festival (e.g., `Yom Kippur', `Nowruz', `Christmas') \item season (e.g., `summer', `spring', `autumn', etc.) \item weekday (e.g., `Monday', `Tuesday', `Wednesday', etc.) \item month (e.g., `January', `February', `March', etc.) \item literal time (e.g., `morning', `afternoon', `evening') \item temporal modifier (e.g., `on', `after', `before') \item ordinal number (e.g., `first', `second', `third', etc.) \item literal number (e.g., `one', `two', `three', etc.) \end{itemize} \subsection*{Temporal expression recognition} The TER component consists of combined rule- and ML-based methods.\\ \textbf{The rule-based component} consists of a total of 65 rules containing patterns derived from an initial collocation extraction (i.e., bi- and tri-grams) and pattern analysis of TEs in the training data. For example, the TE patterns `MM/DD/YYYY', `MM/DD/YY', `YYYY/DD/MM' and `MM/DD' accounted for roughly 35\% of temporal expressions in the training data (i2b2-TRC). The rule set developed combines several types of features: \begin{inparaenum}[(a)] \item semantic: temporal categories derived from the set of specific temporal knowledge resources during the pre-processing (see previous sub-section), \item lexical: such as common recurring expressions (e.g., `postoperative day one', `hospital day five', `today'), and \item pattern features, e.g., `MM/DD/YYYY', `MM/DD/YY'.\end{inparaenum}\\ \textbf{The ML-based component} was developed using a set of features selected based on an initial literature review, and further refined using a combination of manual forward and backward feature selection approaches.
The 19 most discriminant features were selected from an initial set of 120 features. These features can be organised into three sets: \begin{itemize} \setlength{\itemsep}{1pt} \setlength{\parskip}{0pt} \setlength{\parsep}{0pt} \item[\textbf{Lexical}] \item $fg_1$: the token string or alphanumeric character sequence \item $fg_2$: semantic temporal categories derived from the `NLP pre-processing' \item[\textbf{Orthographic}] \item $fg_3$: token kind given by the literal representation: \textit{word}, \textit{number}, \textit{symbol}, or \textit{punctuation} \item $fg_4$: token-case given by the literal representation: \textit{lower-case}, \textit{upper-case}, \textit{upper-initial}, \textit{mixed-caps}, \textit{all-caps} \item[\textbf{Combined}] \item $fg_5$: concatenation of the features: $fg_1$, $fg_2$ and $fg_4$ \end{itemize} In addition to these features, the feature space consists of contextual features. Specifically, we found that a feature window size of 5 or [-2,2] was optimal for $fg_1$, $fg_3$ and $fg_4$, and a window size of 3 or [0,2] for $fg_2$ (Table \ref{table:TERtemplate} gives the complete feature space used). \begin{table}[h] \caption{Feature template: clinical TER} \label{table:TERtemplate} \begin{small} \textit{CRF feature template used for the TER.
} \end{small} \begin{center} \begin{tabular}{ll} \hline \textbf{$fg_1$:Token} & \textbf{$fg_2$:Dictionary} \\ U00:\%x[-2,1] & \\ U01:\%x[-1,1] & \\ U02:\%x[0,1] & U05:\%x[0,2] \\ U03:\%x[1,1] & U06:\%x[1,2] \\ U04:\%x[2,1] & U07:\%x[2,2] \\ & \\ \textbf{$fg_3$:TokenKind} & \textbf{$fg_4$:TokenCase} \\ U08:\%x[-2,5] & U13:\%x[-2,6] \\ U09:\%x[-1,5] & U14:\%x[-1,6] \\ U10:\%x[0,5] & U15:\%x[0,6] \\ U11:\%x[1,5] & U16:\%x[1,6] \\ U12:\%x[2,5] & U17:\%x[2,6] \\ & \\ \textbf{$fg_5$:Combined} & \\ \multicolumn{2}{l}{U18:\%x[0,1]/\%x[0,2]/\%x[0,4]} \\ \hline \end{tabular} \end{center} \end{table} The ML-based module uses a state-of-the-art sequence labelling algorithm (CRF) trained with the IO token representation schema with the following (default) CRF parameters: $C=1.00$, $ETA: 0.0001$ and L2-regularisation algorithm. \subsubsection*{Results integration} The outputs of the ML- and rule-based methods are combined at the mention level: the union of the respective overlapping and non-overlapping outputs. \subsubsection*{Post-processing} A rule-based post-processing component was developed in order to correct obvious and systematic errors from the hybrid TER method. This component removes common false-positive predictions identified during the development of the TER component. Common examples include single-character predictions and similar but unrelated numerical expressions, e.g., pulmonary artery pressure measures (e.g., `42/21') and other (partial) numerical expressions such as telephone, fax and ward numbers. \subsubsection*{TE normalisation} ClinicalNorMA \citep{Kovacevicetal13} is adopted as the TE normalisation component. The normaliser is based on the general-domain normalisation component TRIOS \citep{Uzzaman-Allen10}. Further, ClinicalNorMA is rule-based and adheres to the TIMEX3 schema, specifically, the extended schema described in \citep{Sunetal13}.
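As a minimal illustration of the slash-separated date patterns used by the rule-based recogniser (`MM/DD/YYYY', `MM/DD/YY', `MM/DD'), a single regular expression suffices. This is a simplified sketch, not the actual 65-rule set, which also uses semantic and lexical features:

```python
import re

# Simplified sketch of the slash-separated date patterns.  Note that
# look-alike expressions such as pressure readings ("42/21") also match
# here; in the pipeline these are removed by the post-processing filter.
DATE_RE = re.compile(r"\b(\d{1,2})/(\d{1,2})(?:/(\d{2}(?:\d{2})?))?\b")

def find_dates(text):
    """Return (matched_text, start, end) spans for MM/DD[/YY[YY]] patterns."""
    return [(m.group(0), m.start(), m.end()) for m in DATE_RE.finditer(text)]

spans = find_dates("Admitted on 08/23/1993, discharged 9/02.")
```

The recognised spans would then be passed to ClinicalNorMA for normalisation to ISO-8601 values.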
\section{Temporal relation identification and classification} \label{sec:tlink_extraction} The aim of temporal relation extraction is the chronological ordering of events. The identification of temporal links between entity pairs such as EVENTs (e.g., \textit{Problem}, \textit{Treatment}, \textit{Test}), TEs, and EVENTs and TEs, as well as the subsequent classification of these links into predefined categories (e.g., \textit{After}, \textit{Before}, \textit{Overlap}), is known as TLINK extraction. The TLINK method developed and described herein is rule-based. The approach is motivated by a gap in the current literature on purely knowledge-driven methods for clinical TLINK extraction (see Section \ref{sec:TIE}). The developed method has two main components. The first component takes as input extracted clinical concepts (\textit{Problem}, \textit{Treatment}, and \textit{Test}) and TEs (\textit{Date}, \textit{Time}, \textit{Duration} and \textit{Frequency}), generates TLINK candidate pairs (the \textit{identification} step) and subsequently \textit{classifies} the identified links into three different categories: \textit{After}, \textit{Before}, or \textit{Overlap}. A final component derives the transitive closure (refer to Appendix \ref{sec:appendic_trans_closure}) of the relations extracted in order to generate implied relations that have been missed by the preceding TLINK method. Figure \ref{figure:TLINKarchitecture} shows an abstract representation of the methodology. The remainder of this section describes our methods in detail.
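Since identification and classification are performed simultaneously by the first component, the core of the method can be sketched as a pattern table applied over abstracted entity sequences. The sketch below is hypothetical: the patterns shown are a small sample of the kind used, and the function name and three-token windowing are our own simplifications.

```python
# Hedged sketch of simultaneous TLINK identification and classification.
# Input tokens are an abstracted sentence: entities replaced by their
# types (PROBLEM, TREATMENT, TEST, DATE), other words left as-is.

RULES = [
    (("TREATMENT", "for", "PROBLEM"), "Before"),
    (("TREATMENT", "of", "PROBLEM"), "Before"),
    (("TEST", "showed", "PROBLEM"), "Before"),
    (("TREATMENT", "in", "DATE"), "Overlap"),
    (("TREATMENT", "on", "DATE"), "Overlap"),
    (("PROBLEM", "after", "TREATMENT"), "After"),
]

def extract_tlinks(tokens):
    """Scan abstracted tokens and emit (left, category, right) triples."""
    links = []
    for i in range(len(tokens) - 2):
        window = tuple(tokens[i:i + 3])
        for (left, cue, right), category in RULES:
            if window == (left, cue, right):
                links.append((left, category, right))
    # co-ordinate pattern: PROBLEM (, | and | or) PROBLEM -> Overlap
    for i in range(len(tokens) - 2):
        a, cue, b = tokens[i:i + 3]
        if cue in {",", "and", "or"} and a == b == "PROBLEM":
            links.append((a, "Overlap", b))
    return links
```

For instance, the abstracted sequence `["TREATMENT", "for", "PROBLEM"]` yields a single \textit{Before} link between the two entities.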
\begin{figure*}[h] \begin{center} \includegraphics*[scale=0.7]{figures/TLINK1.pdf} \caption{TLINK extraction architecture} \label{figure:TLINKarchitecture} \end{center} \end{figure*} \subsubsection*{TLINK identification and classification} A notable difference between previous work and our approach is that we \begin{inparaenum}[(i)] \item use a purely rule-based method for TLINK extraction, and \item combine the TLINK candidate generation (identification) and classification into a single simultaneous component.\end{inparaenum} The rule-based TLINK component is partitioned into two sub-components: \begin{enumerate}[(1)] \item intra-sentence: TLINKs within a sentence span; \item inter-sentence: TLINKs across sentences. \end{enumerate} \subsubsection*{Intra-sentence TLINKs} In order to analyse intra-sentence TLINKs, we first performed an initial semi-automatic analysis of the development dataset. For each sentence containing a TLINK, the TLINK pairs were abstracted to their respective EVENT or TIMEX3 types. Additionally, any context to the right and left of the TLINKs was removed to make patterns easier to spot. Subsequently, the abstracted TLINK pairs were manually analysed for common patterns by TLINK category. For example, in the following sentences (a, b) the underlined EVENTs and TEs are part of six different TLINKs (or three TLINKs per sentence): \begin{itemize} \item[(a)] `The patient reported \underline{vomiting}, \underline{nausea} and \underline{headaches}.' \item[(b)] `The patient received \underline{steroids} for \underline{his swelling} in \underline{2006}.' \end{itemize} In the following list, each pair-wise EVENT or TE that is part of a TLINK is abstracted to its respective label, and any context to the left and right of the pair-wise link is removed (shown as \sout{struck-out} text). \begin{itemize} \item[($a_1$)] `\sout{The patient reported} \underline{PROBLEM}, \underline{PROBLEM} \sout{and headaches}.'
\item[($a_2$)] `\sout{The patient reported} \sout{vomiting,} \underline{PROBLEM} and \underline{PROBLEM}.' \item[($a_3$)] `\sout{The patient reported} \underline{PROBLEM}, nausea and \underline{PROBLEM}.' \item[($b_1$)] `\sout{The patient received} \underline{TREATMENT} for \underline{PROBLEM} \sout{in 2006}.' \item[($b_2$)] `\sout{The patient received steroids for} \underline{PROBLEM} in \underline{DATE}.' \item[($b_3$)] `\sout{The patient received} \underline{TREATMENT} for his swelling in \underline{DATE}.' \end{itemize} This approach enabled us to profile the various TLINK categories and formalise extraction rules based on common abstraction patterns. For example, Table \ref{table:TLINKAbsEx} lists a number of common patterns found and their typically associated TLINK category. \begin{table}[h] \caption{TLINK patterns} \label{table:TLINKAbsEx} \begin{small} \textit{This table shows common patterns semi-automatically extracted from the development/training dataset. The patterns listed in this table make up the largest and most obvious TLINK patterns observed.} \end{small} \begin{center} \begin{tabular}{lcl}\hline \multicolumn{1}{c}{\textbf{TLINK abstraction patterns}} && \multicolumn{1}{c}{\textbf{Typical TLINK}} \\ \hline PROBLEM and PROBLEM & $\rightarrow$ & $[$PROBLEM$]$ \textit{Overlap} $[$PROBLEM$]$ \\ PROBLEM, PROBLEM & $\rightarrow$ & $[$PROBLEM$]$ \textit{Overlap} $[$PROBLEM$]$ \\ TREATMENT on DATE & $\rightarrow$ & $[$TREATMENT$]$ \textit{Overlap} $[$DATE$]$ \\ TREATMENT in DATE & $\rightarrow$ & $[$TREATMENT$]$ \textit{Overlap} $[$DATE$]$ \\ TREATMENT for PROBLEM & $\rightarrow$ & $[$TREATMENT$]$ \textit{Before} $[$PROBLEM$]$ \\ TREATMENT of PROBLEM & $\rightarrow$ & $[$TREATMENT$]$ \textit{Before} $[$PROBLEM$]$ \\ TEST showed PROBLEM & $\rightarrow$ & $[$TEST$]$ \textit{Before} $[$PROBLEM$]$ \\ PROBLEM after TREATMENT & $\rightarrow$ & $[$PROBLEM$]$ \textit{After} $[$TREATMENT$]$\\ TREATMENT post TEST & $\rightarrow$ & $[$TREATMENT$]$ \textit{After}
$[$TEST$]$\\ \hline \end{tabular} \end{center} \end{table} Profiling of TLINKs revealed different types of relations at the sentence level, which we group into three types: \textit{co-ordinate}, \textit{prepositional}, and \textit{other} TLINKs. Further, these three types of TLINKs directly correspond to the types of extraction rules, which take advantage of the features that characterise them. Specifically: \begin{itemize} \item \textbf{co-ordinate TLINKs} are links that are characterised by EVENTs separated by co-ordinate conjunctions such as `and', `or', or a comma (i.e., `,'). For example, in sentence (a) above, all events are part of co-ordinate TLINKs. In the development dataset we observed that co-ordinate TLINKs are predominantly categorised as `overlap'. \item \textbf{prepositional TLINKs} are characterised by EVENTs/TEs that are linked by a preposition. For example, in sentence (b), the preposition `for' between the two EVENTs indicates the presence of a TLINK (in this particular example the TLINK is $[$TREATMENT$]$ \textit{Before} $[$PROBLEM$]$). \item \textbf{other TLINKs} are links that do not fit either of the previously characterised types. A notable number of other TLINKs are characterised by linking verbs between EVENTs. For example, in the sentence `The TEST \textit{revealed} PROBLEM', TEST is linked, by the verb `revealed', to PROBLEM (in this particular example the TLINK is: $[$TEST$]$ \textit{Before} $[$PROBLEM$]$). \end{itemize} Table \ref{table:TLINK-features-ch5} lists and describes the types of features used to extract intra-sentence TLINKs. \subsubsection*{Inter-sentence TLINKs} TLINKs that span sentences fall into two types: SECTIME and co-referential TLINKs. \begin{itemize} \item \textbf{SECTIME TLINKs} represent the largest proportion of inter-sentence TLINKs (e.g., in the full i2b2-TRC corpus, 45.87\% of all TLINKs are SECTIME links \citep{Sunetal13}).
These are links anchored to the relevant document section date. Specifically, in the i2b2-TRC dataset, all events within the `History of Present Illness' or related sections are linked to the admission date, and events within the `Hospital course' section are linked to the discharge date. SECTIME links are predominantly categorised as \textit{Before}.\\ Notably, it is not uncommon for clinical narratives to lack section times; more commonly, events are anchored to the document creation time, a relation also known as DocTimeRel (document creation time relation). \item \textbf{Co-referential TLINKs} are EVENT co-references. These TLINKs arise when multiple EVENT mentions refer to the same EVENT. \end{itemize} The approaches for these two types of inter-sentence TLINKs differed. In the i2b2-TRC datasets, for development and testing, SECTIME TLINKs were addressed in a three-step approach: \begin{enumerate}[(1)] \item extract the admission and discharge dates; \item apply Section Boundary Detection (SBD), i.e., identify the `history of present illness' and `hospital course' sections accordingly; \item anchor each EVENT in a given document section to the appropriate section date and set each TLINK category to \textit{Before}. \end{enumerate} However, in the case-study data there were a couple of notable differences in how SECTIME TLINKs were extracted. Namely, as there existed only one section time, i.e., the DRD, the SBD step was omitted and each EVENT was anchored to the DRD. In addition, while each TLINK category was initially set to the default link type \textit{Before}, we observed a number of common events that occurred on the DRD: routine clinical measurements such as weight, height, blood pressure, and similar. The TLINK types for these events were accordingly amended to \textit{Overlap}.
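The three-step SECTIME procedure (together with the routine-measurement amendment used on the case-study data) can be sketched as follows. The data shapes, function name and routine-measurement list are illustrative assumptions; only the anchoring logic follows the description above.

```python
def sectime_links(sections, admission_date, discharge_date,
                  routine=("weight", "height", "blood pressure")):
    """Anchor each EVENT to its section date (hedged sketch).

    `sections` maps a section name to its list of EVENT strings.
    History-of-present-illness events anchor to the admission date and
    hospital-course events to the discharge date. The default category
    is Before, amended to Overlap for routine on-the-day measurements.
    """
    anchor = {"history of present illness": admission_date,
              "hospital course": discharge_date}
    links = []
    for section, events in sections.items():
        date = anchor.get(section.lower())
        if date is None:  # unrecognised section: no SECTIME link
            continue
        for ev in events:
            cat = "Overlap" if ev.lower() in routine else "Before"
            links.append((ev, cat, date))
    return links
```

Given a note with `"chest pain"` in the history section, this produces a \textit{Before} link to the admission date, while `"weight"` in the hospital course is linked to the discharge date as \textit{Overlap}.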
Co-referential TLINKs are approached by considering a novel feature: lexical-level similarity (i.e., comparing literal strings with no additional features considered) between EVENTs in a given clinical note. The SoftTFIDF algorithm \citep{Cohenetal03}, a combined token- and character-level string similarity metric, was adopted to determine the \textit{similarity} between two candidate events. Specifically, the SoftTFIDF component takes two strings as input and outputs a similarity score: a real number in [0,1], where 1 is a perfect match and 0 no match at all. The optimum threshold of 0.8 was determined through systematic experimentation with the i2b2-TRC development set. The pseudo-method developed for co-referential TLINKs is given below: \begin{enumerate}[(1)] \item using SoftTFIDF, $n^{2}-1$ comparisons are performed between events in a given document (including across document sections, if any); \item if the SoftTFIDF similarity score between any pair-wise EVENTs is greater than or equal to the threshold (0.8), create a TLINK between the EVENTs with the link category \textit{Overlap}. \end{enumerate} \subsubsection*{TLINK features} Table \ref{table:TLINK-features-ch5} lists the types of features used across both the intra- and inter-sentence TLINK methods. The features are used as part of formalised rules and heuristics to identify and classify TLINKs and include: \begin{table}[h] \caption{TLINK extraction features} \label{table:TLINK-features-ch5} \begin{small} \textit{The features listed herein were used for both TLINK identification and classification; description of each feature type follows this table.
Nota bene: EV=EVENT and ST=SECTIME.} \end{small} \begin{center} \begin{tabular}{|r|c|c|c|c|c|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\textbf{Feature type}}} & \multicolumn{3}{c|}{\textbf{Inter-sentence}} & \multicolumn{2}{c|}{\textbf{Intra-sentence}} \\ \cline{2-6} & EV-EV & EV-TE & EV-ST & EV-EV & EV-TE \\ \hline String similarity & \cmark & & & & \\ \hline Position information& & & \cmark & \cmark & \cmark \\ \hline Distance information& & & & \cmark & \cmark \\ \hline Preposition & & & & \cmark & \cmark \\ \hline Conjunction & & & & \cmark & \cmark \\ \hline TE-related & \cmark & & & & \cmark \\ \hline NE-related & \cmark & & & \cmark & \cmark \\ \hline \end{tabular} \end{center} \end{table} Description of the feature types listed in Table \ref{table:TLINK-features-ch5} follows. \begin{itemize} \item \textbf{String similarity}: the string similarity score between pair-wise EVENTs derived from SoftTFIDF, used to extract co-referential TLINKs; \item \textbf{Position information}: the position of an EVENT within a given section (SECTIME TLINKs); \item \textbf{Distance information}: \begin{inparaenum}[(i)] \item the token distance between entity pairs, and \item the number of EVENTs and TEs between entity pairs; \end{inparaenum} \item \textbf{Preposition}: prepositions between two candidate pairs, e.g., `in', `on', `after', `before' and so forth; \item \textbf{Conjunction}: lexical cues between two candidate pairs, e.g., `and', `both' and so forth; \item \textbf{TE-related}: TE type, i.e., date, time, duration, and frequency; \item \textbf{NE-related}: EVENT information such as type, i.e., \textit{Problem}, \textit{Treatment}, \textit{Test} (including HrQoL concept categories), and negation information. \end{itemize} \subsubsection*{Temporal links closure} In order to capture implied TLINKs not captured by the initial rule-based method, the transitive closure may be calculated.
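A simplified sketch of such a closure computation is given below. It covers only the \textit{Before} category (a full temporal closure would also compose \textit{Overlap} and \textit{After} relations); the function name and the fixed-point loop are our own illustrative choices.

```python
def transitive_closure(links):
    """Transitive closure of Before links (naive fixed-point sketch).

    `links` is a list of (left, category, right) triples; whenever
    a Before b and b Before c hold, a Before c is added, repeating
    until no new pairs appear.
    """
    before = {(a, b) for a, rel, b in links if rel == "Before"}
    changed = True
    while changed:
        changed = False
        for a, b in list(before):
            for c, d in list(before):
                if b == c and (a, d) not in before:
                    before.add((a, d))
                    changed = True
    return [(a, "Before", b) for a, b in sorted(before)]
```

For example, from $[$TEST$]$ \textit{Before} $[$PROBLEM$]$ and $[$PROBLEM$]$ \textit{Before} $[$TREATMENT$]$ the closure derives the implied link $[$TEST$]$ \textit{Before} $[$TREATMENT$]$.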
The final TLINK component calculates the full set of transitive relations, or temporal closure, of the links extracted by the initial rule-based component. A description of transitive closure is given in Appendix \ref{sec:appendic_trans_closure}. However, explicit results including the transitive closure have not been reported, beyond the inherent evaluation provided by the TempEval-3 metric. \section{Experiments, Results and Discussions} \label{sec:experiments_results_discussions} \subsection{Data} The NER, TERN and TLINK methods presented in this report were developed and validated using a set of publicly available research datasets. The NLP research datasets used were obtained from the clinical TM challenges organised by i2b2\footnote{The research datasets provided by i2b2 are not entirely public and require data use agreements to be signed; \url{https://www.i2b2.org/NLP/DataSets/}.}. Specifically, these datasets are derived from the following shared tasks: \begin{enumerate}[(i)] \item the 2010 i2b2 $4^{th}$ Shared Task, referred to as \textit{i2b2-CARC} hereafter \citep{Uzuneretal11}, and \item the 2012 i2b2 $6^{th}$ Shared Task, referred to as \textit{i2b2-TRC} hereafter \citep{Sunetal13b}. \end{enumerate} Table \ref{table:NLP-datasets} provides details such as the size (number of documents) of the training and test datasets.
\begin{table}[h] \caption{NLP datasets} \label{table:NLP-datasets} \begin{small} \textit{This table shows the NLP datasets used in this report.} \end{small} \begin{center} \begin{tabular}{l|l|rr} \hline \textbf{Dataset} & \textbf{Annotation} & \textbf{Training} & \textbf{Test}\\ \hline i2b2-TRC & EVENT\footnote{Includes annotated EVENTs such as \textit{Problem}, \textit{Treatment} and \textit{Test}, \textit{Occurrence}, \textit{Evidential} and \textit{Clinical department}.},TIMEX3,TLINK & 190 & 120\\ i2b2-CARC & EVENT\footnote{Includes annotated EVENTs such as \textit{Problem}, \textit{Treatment}, \textit{Test}.} & 170 & 256\\ \hline \end{tabular} \end{center} \end{table} These datasets were produced using multiple annotators, including domain experts. Specifically, the i2b2-TRC was produced using eight expert annotators, four of whom had a medical background; the i2b2-CARC was produced using twelve annotators, including six with a medical background\footnote{Annotation task information regarding the i2b2-CARC corpus was obtained by email from the responsible researcher Brett South, senior scientist (currently) at the University of Utah, Department of Biomedical Informatics.}. \subsubsection*{EVENT} The dataset utilised to engineer the event extraction method was composed of the i2b2-TRC and i2b2-CARC corpora. A total of 736 discharge summaries were used across the training (616 documents) and test (120 documents; the i2b2-TRC test dataset) sets. Table \ref{table:NER-corpus-stats} shows the label distribution by event/concept category across the combined datasets used in this report.
\begin{table}[h] \caption{EVENT label distribution} \label{table:NER-corpus-stats} \begin{small} \textit{In this report, the i2b2-TRC (training) and i2b2-CARC (training and test) data was combined as the training dataset, while the i2b2-TRC test dataset was used as the held-out test data for the clinical NER method described herein.} \end{small} \begin{center} \begin{tabular}{lrr} \hline \textbf{EVENT} & \textbf{Training} & \textbf{Test} \\ \hline Problem & 24,330 & 4,309 \\ Treatment & 17,773 & 3,285 \\ Test & 16,062 & 2,173 \\ \hline \textbf{Total} & 58,165 & 9,767 \\ \hline \end{tabular} \end{center} \end{table} Table \ref{table:event-iaa-trc} shows the IAA for the i2b2-TRC dataset \cite[p.808]{Sunetal13b}\footnote{These statistics are computed for \textit{Problem}, \textit{Treatment} and \textit{Test}.} and Table \ref{table:event-iaa-carc} shows the IAA for the i2b2-CARC dataset\footnote{These statistics are computed across six different EVENTs: \textit{Problem}, \textit{Treatment}, \textit{Test}, \textit{Occurrence}, \textit{Evidential} and \textit{Clinical department}. Only the first three EVENT categories are considered in this report.}. The IAA scores confirm that recognition of event boundaries in both i2b2-TRC and i2b2-CARC is a fairly straightforward task for manual processing, with the identification of \textit{Problem}, \textit{Treatment} and \textit{Test} event boundaries being the simpler task (see Table \ref{table:event-iaa-carc}). Likewise, classification of EVENT \textit{type} and concept negation proves to be a relatively straightforward manual annotation task for appropriately trained experts. \begin{table}[h] \begin{minipage}[h]{0.5\linewidth} \centering \caption {i2b2-TRC: EVENT IAA} \label{table:event-iaa-trc} \vspace{5mm} \begin{tabular}{lccccc} \hline \textbf{EVENT} & \multicolumn{2}{c}{\textbf{\textbf{$Avg.
P\&R$}}} & \multicolumn{2}{c}{\textbf{\textbf{$\kappa$}}} \\\hline Span (strict) & \multicolumn{2}{c}{0.83} & \multicolumn{2}{c}{-} \\ Span (lenient)& \multicolumn{2}{c}{0.87} & \multicolumn{2}{c}{-} \\ \cdashline{1-5} Type & \multicolumn{2}{c}{0.93} & \multicolumn{2}{c}{0.90} \\ Negation & \multicolumn{2}{c}{0.97} & \multicolumn{2}{c}{0.21} \\ \hline \end{tabular} \end{minipage}% \qquad \begin{minipage}[h]{0.45\linewidth} \centering \caption {i2b2-CARC: EVENT IAA} \label{table:event-iaa-carc} \vspace{5mm} \begin{tabular}{lccc} \hline \textbf{EVENT} & \multicolumn{2}{c}{\textbf{\textbf{$Avg. P\&R$}}} \\\hline Span (strict) & \multicolumn{2}{c}{0.85} \\ Span (lenient)& \multicolumn{2}{c}{0.91} \\ \hline & \\ \hfill \end{tabular} \end{minipage} \end{table} \newpage \subsubsection*{TIMEX3} The i2b2-TRC dataset was used for the development and evaluation of the TERN component. A total of 310 discharge summaries were used across the development (190 documents) and test (120 documents) datasets. Table \ref{table:temporal-corpus-stat} and Table \ref{table:timex3-iaa} show the label distribution across the dataset by TE type and the IAA, respectively \citep[p.808]{Sunetal13b}. Notably, while the IAA shows fairly good agreement for the recognition of TE spans (with strict boundary identification proving more challenging), normalisation of TEs (i.e., the \textit{value} attribute) appears even more challenging for manual annotation.
\begin{table}[h] \begin{minipage}[h]{0.5\linewidth} \centering \caption {TIMEX3 label distribution} \label{table:temporal-corpus-stat} \vspace{5mm} \begin{tabular}{lcccc} \hline \textbf{Type}& \multicolumn{2}{r}{\textbf{Training}} & \multicolumn{2}{r}{\textbf{Test}} \\ \hline Date & \multicolumn{2}{r}{1,641} & \multicolumn{2}{r}{1,222} \\ Duration & \multicolumn{2}{r}{407} & \multicolumn{2}{r}{341} \\ Frequency & \multicolumn{2}{r}{249} & \multicolumn{2}{r}{197} \\ Time & \multicolumn{2}{r}{69} & \multicolumn{2}{r}{60} \\ \hline \textbf{Total} & \multicolumn{2}{r}{2,366} & \multicolumn{2}{r}{1,820} \\ \hline \end{tabular} \end{minipage}% \qquad \begin{minipage}[h]{0.45\linewidth} \centering \caption {i2b2-TRC: TIMEX3 IAA} \label{table:timex3-iaa} \vspace{5mm} \begin{tabular}{lccccc} \hline \multicolumn{1}{c}{\textbf{TIMEX3}} & \multicolumn{2}{c}{\textbf{\textbf{$Avg. P\&R$}}} & \multicolumn{2}{c}{\textbf{\textbf{$\kappa$}}} \\\hline Span (strict) & \multicolumn{2}{c}{0.73} & \multicolumn{2}{c}{-} \\ Span (lenient)& \multicolumn{2}{c}{0.89} & \multicolumn{2}{c}{-} \\ \cdashline{1-5} Type & \multicolumn{2}{c}{0.90} & \multicolumn{2}{c}{0.37} \\ Value & \multicolumn{2}{c}{0.75} & \multicolumn{2}{c}{-} \\ Modifier & \multicolumn{2}{c}{0.83} & \multicolumn{2}{c}{0.21} \\ \hline \end{tabular} \end{minipage} \end{table} \subsubsection*{TLINK} The temporal relation component was developed and validated using the i2b2-TRC dataset. Note that only TLINKs involving the EVENTs \textit{Problem}, \textit{Treatment} and \textit{Test}, and TIMEX3s have been considered. Table \ref{table:tlink-corpus-stat} and Table \ref{table:tlink-iaa} show the label distribution and the IAAs, respectively \citep[p.808]{Sunetal13b}. Notably, in comparison with the EVENT and TE recognition tasks, it is apparent that TLINK identification is a challenging task for humans (0.39 average precision-recall).
However, manual TLINK classification (i.e., \textit{type}) shows reasonable performance. \begin{table}[h] \begin{minipage}[h]{0.5\linewidth} \centering \caption{TLINK label distribution} \label{table:tlink-corpus-stat} \vspace{5mm} \begin{tabular}{lcccc} \hline \textbf{Type}& \multicolumn{2}{r}{\textbf{Training}} & \multicolumn{2}{r}{\textbf{Test}} \\ \hline Before & \multicolumn{2}{r}{11,981} & \multicolumn{2}{r}{10,488} \\ Overlap & \multicolumn{2}{r}{7,276} & \multicolumn{2}{r}{5,694} \\ After & \multicolumn{2}{r}{1,415} & \multicolumn{2}{r}{1,275} \\ \hline \textbf{Total} & \multicolumn{2}{r}{20,672} & \multicolumn{2}{r}{17,457} \\ \hline \end{tabular} \end{minipage}% \qquad \begin{minipage}[h]{0.45\linewidth} \centering \caption {i2b2-TRC: TLINK IAA} \label{table:tlink-iaa} \vspace{5mm} \begin{tabular}{lccccc} \hline \multicolumn{1}{c}{\textbf{TLINK}} & \multicolumn{2}{c}{\textbf{\textbf{$Avg. P\&R$}}} & \multicolumn{2}{c}{\textbf{\textbf{$\kappa$}}} \\\hline Span (strict) & \multicolumn{2}{c}{0.39} & \multicolumn{2}{c}{-} \\ Span (lenient)& \multicolumn{2}{c}{-} & \multicolumn{2}{c}{-} \\ \cdashline{1-5} Type & \multicolumn{2}{c}{0.79} & \multicolumn{2}{c}{0.3} \\ \hline \hfill \end{tabular} \end{minipage} \end{table} \newpage \subsection{Event extraction} We explored a number of sequence label models: IO, BIO and W-BIO (where W: single-token word; B: beginning; I: inside; O: outside), in addition to a set of post-processing components. For the development/validation experiments we used the training data (Table \ref{table:NER-corpus-stats}), which we split into a validation training set (500 documents) and a validation test set (116 documents). \begin{table}[h]\caption{EVENT extraction validation test results} \label{table:NERvalidation} \begin{small} \textit{The validation test set results are obtained by training the models on a set of 500 documents and testing on a validation test set of 116 (shown here).
The best results, horizontally or by EVENT category, are highlighted. Of all the models experimented with, the IO model performed worst across all concept types, with strict scores notably lower than the other models (approximately 5\% across all concept categories). Further, the differences between BIO and W-BIO were minimal: the BIO models performed slightly better for the \textit{Problem} and \textit{Treatment} categories, while W-BIO performed better on identifying the \textit{Test} category.} \end{small} \begin{center} \begin{tabular}{lrccc} \hline \multirow{2}{*}{\textbf{EVENT}} & \multirow{2}{*}{\textbf{Model}} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{$F_1$-measure \%} \\ & & \small strict$\mid$lenient & \small strict$\mid$lenient & \small strict$\mid$lenient \\ \hline \multirow{3}{*}{Problem} & IO & 67.46$\mid$84.33 & 70.22$\mid$\textbf{87.78} & 68.81$\mid$86.02\\ & BIO & \textbf{73.20}$\mid$\textbf{85.95} & \textbf{74.63}$\mid$87.62 & \textbf{73.91}$\mid$\textbf{86.78}\\ & W-BIO & 72.32$\mid$85.83 & 73.54$\mid$87.28 & 72.92$\mid$86.55 \\ \hline \multirow{3}{*}{Treatment} & IO & 73.63$\mid$89.36 & 70.65$\mid$\textbf{85.74} & 72.11$\mid$87.51\\ & BIO & \textbf{79.45}$\mid$90.37 & \textbf{74.70}$\mid$84.97 & \textbf{77.00}$\mid$\textbf{87.59}\\ & W-BIO & 79.41$\mid$\textbf{90.91} & 73.45$\mid$84.09 & 76.31$\mid$87.37\\ \hline \multirow{3}{*}{Test} & IO & 75.00$\mid$89.20 & 72.13$\mid$\textbf{85.79} & 73.54$\mid$87.47\\ & BIO & 80.14$\mid$89.82 & 76.37$\mid$85.59 & 78.21$\mid$87.65 \\ & W-BIO & \textbf{80.72}$\mid$\textbf{90.34} & \textbf{76.50}$\mid$85.62 & \textbf{78.56}$\mid$\textbf{87.92}\\ \hline \hline \multirow{3}{*}{\textbf{Micro score}} & IO & 71.31$\mid$87.13 & 70.88$\mid$\textbf{86.60} & 71.09$\mid$86.86\\ & BIO & \textbf{76.92}$\mid$88.30 & \textbf{75.13}$\mid$86.24 & \textbf{76.02}$\mid$\textbf{87.26} \\ & W-BIO & 76.67$\mid$\textbf{88.54} & 74.33$\mid$85.84 & 75.48$\mid$87.17\\ \hline \end{tabular} \end{center} \end{table} Our experiments showed
that the IO models performed consistently worse than BIO and W-BIO, with the latter two models showing little difference (see Table \ref{table:NERvalidation}). For example, considering strict evaluation metrics, there is minimal difference between the BIO and W-BIO models, while a notable difference can be observed between the IO and BIO/W-BIO models (approximately 5\% micro $F_1$-measure). This suggests that the BIO and W-BIO models are better suited than the IO sequence label schema for strict boundary identification of the clinical concepts investigated. Moreover, when considering lenient evaluation scores, there is minimal difference among all models; however, the BIO and W-BIO models score consistently higher precision and $F_1$-measure, while the IO models score consistently higher recall. The final evaluation (test) results are presented in Table \ref{table:NERevaluation}. These are consistent with our findings during validation (Table \ref{table:NERvalidation}). As may be seen from both the validation and evaluation results, there is no notable difference between the BIO and W-BIO models, except for W-BIO on \textit{Test}, which shows notably better results. In light of evaluation results that are comparable to the IAA (Tables \ref{table:event-iaa-trc} and \ref{table:event-iaa-carc}), we have omitted a detailed error analysis. \begin{table}[h] \caption{EVENT extraction results on the held-out test set} \label{table:NERevaluation} \begin{small} \textit{The results on the held-out test set showed a similar trend to the validation results; the IO models have been excluded due to notably poor performance on the validation set.
Further, similar to the validation results, BIO performed better on the \textit{Problem} and \textit{Treatment} categories while the W-BIO model performed best on the \textit{Test} category.} \end{small} \begin{center} \begin{tabular}{lrccc} \hline \multirow{2}{*}{\textbf{EVENT}} & \multirow{2}{*}{\textbf{Model}} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{$F_1$-measure \%} \\ & & \small strict$\mid$lenient & \small strict$\mid$lenient & \small strict$\mid$lenient \\ \hline \multirow{2}{*}{Problem} & BIO & 81.52$\mid$90.68 & 82.62$\mid$\textbf{92.90} & 82.07$\mid$91.29\\ & W-BIO & \textbf{81.91}$\mid$\textbf{90.84} & \textbf{82.80}$\mid$91.83 & \textbf{82.35}$\mid$\textbf{91.33}\\ \hline \multirow{2}{*}{Treatment} & BIO & 87.24$\mid$94.43 & \textbf{80.12}$\mid$\textbf{86.73} & 83.53$\mid$\textbf{90.42}\\ & W-BIO & \textbf{88.00}$\mid$\textbf{94.72} & \textbf{80.12}$\mid$86.24 & \textbf{83.88}$\mid$90.28\\ \hline \multirow{2}{*}{Test} & BIO & 85.48$\mid$93.02 & 82.88$\mid$90.20 & 84.16$\mid$91.59\\ & W-BIO & \textbf{86.45}$\mid$\textbf{93.49} & \textbf{83.71}$\mid$\textbf{90.52} & \textbf{85.06}$\mid$\textbf{91.98}\\ \hline \hline \multirow{2}{*}{\textbf{Micro score}} & BIO & 84.22$\mid$92.39 & 81.84$\mid$\textbf{89.78} & 83.01$\mid$91.07\\ & W-BIO & \textbf{84.85}$\mid$\textbf{92.66} & \textbf{82.10}$\mid$89.66 & \textbf{83.45}$\mid$\textbf{91.13}\\ \hline \end{tabular} \end{center} \end{table} The final models selected for the clinical NER pipeline were BIO for \textit{Problem} and \textit{Treatment}, and W-BIO for \textit{Test}.
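The three sequence label schemas compared above differ only in how mention spans are encoded as per-token tags. A minimal sketch of the encoding (the function name and span convention are ours):

```python
def to_bio(tokens, spans, schema="BIO"):
    """Label tokens for a single EVENT category using IO, BIO or W-BIO.

    `spans` are (start, end) token indices with `end` exclusive.
    IO marks every mention token I; BIO distinguishes the beginning
    token; W-BIO further reserves a W tag for single-token mentions.
    """
    tags = ["O"] * len(tokens)
    for start, end in spans:
        if schema == "IO":
            for i in range(start, end):
                tags[i] = "I"
        elif schema == "W-BIO" and end - start == 1:
            tags[start] = "W"
        else:  # BIO, or multi-token mentions under W-BIO
            tags[start] = "B"
            for i in range(start + 1, end):
                tags[i] = "I"
    return tags
```

Under strict evaluation, the extra boundary information in BIO/W-BIO lets adjacent mentions be separated (e.g., `B I B I` versus the ambiguous `I I I I` under IO), which is consistent with the roughly 5\% strict-score gap observed above.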
The final evaluation scores, including negation, are given in Table \ref{table:NER_final}. \begin{table}[h] \caption{The clinical NER performance} \label{table:NER_final} \begin{center} \begin{tabular}{llcc}\hline & \textbf{$F_1$-micro} \% & \multirow{2}{*}{\textbf{Negation}} & \multirow{2}{*}{\textbf{Negation $\kappa$}} \\ &strict$\mid$lenient&& \\\hline \textbf{EVENT} & 83.21$\mid$91.17 & 0.93 & 0.65 \\ \hline \end{tabular} \end{center} \end{table} \subsubsection*{Impact analysis} In order to justify the use of the various features, datasets and post-processing components, a series of impact analyses was conducted, shown in Table \ref{table:feature_impact} (the impact of the different CRF features used), Table \ref{table:dataset_impact} (the impact of the datasets on overall performance) and Table \ref{table:pp_impact} (the impact of the post-processing components). Table \ref{table:feature_impact} shows the feature impact analysis by the micro score of EVENTs; the lexical features have been used as the baseline. Notably, word stems have the most impact (+3\% strict and +2\% lenient $F_1$); the POS and chunk features showed minimal impact on their own, with the latter having a negative impact of -0.01\% lenient $F_1$. Further, while the POS and chunk features adversely affect precision, both show a positive effect on recall.
\begin{table}[h] \caption{EVENT recognition: feature impact analysis} \label{table:feature_impact} \begin{small} \textit{This table shows the feature impact of several CRF feature groups.} \end{small} \begin{center} \begin{tabular}{l|ccc} \hline \multirow{3}{*}{\textbf{Feature vector}} & \multicolumn{3}{c}{\textbf{EVENTs}}\\ & $P$-micro \% & $R$-micro \% & $F_1$-micro \% \\ & strict$\mid$lenient&strict$\mid$lenient &strict$\mid$lenient \\\hline Baseline (Lexical) & 82.56$\mid$92.34 & 76.79$\mid$85.88 & 79.57$\mid$88.99 \\ Baseline $+$ Stem & 84.56$\mid$\textbf{92.96} & 81.37$\mid$89.44 & 82.93$\mid$91.17 \\ Baseline $+$ POS & 82.51$\mid$92.08 & 77.66$\mid$86.67 & 80.01$\mid$89.29\\ Baseline $+$ Chunk & 82.39$\mid$92.07 & 77.03$\mid$86.09 & 79.62$\mid$88.98 \\ \hline \textbf{All features} & \textbf{84.43}$\mid$92.50 & \textbf{82.02}$\mid$\textbf{89.85} & \textbf{83.21}$\mid$\textbf{91.17}\\ \hline \end{tabular} \end{center} \end{table} Notably, using the i2b2-CARC corpus improved the (strict and lenient) micro $F_1$-scores by +17\% and +12\%, respectively (see Table \ref{table:dataset_impact}). \begin{table}[h] \caption{EVENT recognition: dataset impact} \label{table:dataset_impact} \begin{small} \textit{This table shows the impact of the different datasets used to train the CRF models.} \end{small} \begin{center} \begin{tabular}{l|ccc} \hline \multirow{3}{*}{\textbf{Dataset}} & \multicolumn{3}{c}{\textbf{EVENTs}}\\ & $P$-micro \% & $R$-micro \% & $F_1$-micro \% \\ & strict$\mid$lenient& strict$\mid$lenient& strict$\mid$lenient\\ \hline i2b2-TRC & 69.03$\mid$82.97 & 63.04$\mid$75.77 & 65.90$\mid$79.20\\ \textbf{i2b2-TRC$+$i2b2-CARC } & \textbf{84.43}$\mid$\textbf{92.50} & \textbf{82.02}$\mid$\textbf{89.85} & \textbf{83.21}$\mid$\textbf{91.17}\\ \hline \end{tabular} \end{center} \end{table} Table \ref{table:pp_impact} shows the impact of the post-processing sub-components.
For example, while the label-fixer has an adverse effect on precision (-5\% strict and -4\% lenient), it has a positive impact on recall (+3\% strict and +5\% lenient). In addition, the label-fixer shows less than -1\% (strict) and more than +1\% (lenient) impact on the $F_1$-score. Boundary adjustment showed a positive effect on all strict metrics and, as expected, no effect on lenient scores. The FP filter showed a slight positive impact on precision and, interestingly, a slight negative impact on recall. \begin{table}[h] \caption{EVENT recognition: post-processing impact analysis} \label{table:pp_impact} \textit{This table lists the performance impact of the various post-processing components.} \begin{center} \begin{tabular}{l|ccc} \hline \multirow{3}{*}{\textbf{Component}} & \multicolumn{3}{c}{\textbf{EVENTs}}\\ & $P$-micro \% & $R$-micro \% & $F_1$-micro \% \\ & strict$\mid$lenient& strict$\mid$lenient& strict$\mid$lenient\\\hline No post-processing & 88.09$\mid$96.06 & 77.64$\mid$84.66 & 82.54$\mid$90.00\\ Only label-fixer & 82.85$\mid$92.23 & 80.81$\mid$\textbf{89.97} & 81.82$\mid$91.09\\ Only boundary-adjustment & \textbf{89.34}$\mid$96.06 & 78.73$\mid$84.66 & \textbf{83.70}$\mid$90.00\\ Only FP filter & 89.14$\mid$\textbf{96.45} & 77.63$\mid$84.00 & 82.99$\mid$89.79\\ \hline \hline \textbf{All post-processing} & 84.43$\mid$92.50 & \textbf{82.02}$\mid$89.85 & 83.21$\mid$\textbf{91.17}\\ \hline \end{tabular} \end{center} \end{table} \subsection{Temporal entity extraction} We explored a number of methods in order to adopt the best approach for TER (validation results are given in Table \ref{table:validationTER}). For the ML-based method, we experimented with various sequence label schemas (i.e., IO, BIO and W-BIO). Notably, we discovered that all the sequence label models explored performed relatively similarly in terms of lenient scores, but the W-BIO and BIO models performed notably better in terms of strict scores (e.g., by 3-4\% $F_1$-measure).
However, the rule-based method outperformed all ML models in terms of both strict and lenient scores (over 90\% lenient $F_1$-score). \begin{table}[h] \caption{TER validation results} \label{table:validationTER} \begin{small} \textit{The ML-based component was validated on the i2b2-TRC training data which was split 60/40\% for training and validation respectively. *The rule-based results shown were obtained using the whole training data.} \end{small} \begin{center} \begin{tabular}{cccc} \hline \multirow{2}{*}{\textbf{Method}} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{$F_1$-measure \%} \\ & \small strict$\mid$lenient & \small strict$\mid$lenient & \small strict$\mid$lenient \\ \hline \multicolumn{1}{r}{IO} & 66.03$\mid$86.94 & 67.17$\mid$88.44 & 66.60$\mid$87.69 \\ \multicolumn{1}{r}{BIO} & 71.26$\mid$87.85 & 70.95$\mid$87.47 & 71.10$\mid$87.66 \\ \multicolumn{1}{r}{W-BIO} & 71.80$\mid$87.85 & 71.49$\mid$87.47 & 71.65$\mid$87.66 \\ \cdashline{1-4} \multicolumn{1}{r}{Rule-based*} & \textbf{78.66}$\mid$\textbf{89.64} & \textbf{80.15}$\mid$\textbf{91.34} & \textbf{79.40}$\mid$\textbf{90.48} \\ \hline \end{tabular} \end{center} \end{table} Using the official i2b2-TRC test set, we further evaluated the various ML models (using the complete training set to derive the models) and the rule-based method. In addition, we explored a number of combinations of the various ML models and the rule-based method (results are given in Table \ref{table:evaluationTER}). \begin{table}[h] \caption{TER evaluation on the held-out test set} \label{table:evaluationTER} \begin{small} \textit{This table shows the evaluation results of various ML-, rule- and hybrid-based methods on the official i2b2-TRC test set.
} \end{small} \begin{center} \begin{tabular}{cccc} \hline \multirow{2}{*}{\textbf{Method}} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{$F_1$-measure \%} \\ & \small strict$\mid$lenient & \small strict$\mid$lenient & \small strict$\mid$lenient \\ \hline \multicolumn{1}{r}{IO} & 64.42$\mid$87.10 & 66.65$\mid$90.11 & 65.51$\mid$88.58 \\ \multicolumn{1}{r}{BIO} & 67.45$\mid$86.63 & 69.56$\mid$89.34 & 68.49$\mid$87.96 \\ \multicolumn{1}{r}{W-BIO} & 68.22$\mid$86.47 & 68.41$\mid$86.70 & 68.31$\mid$86.58 \\ \cdashline{1-4} \multicolumn{1}{r}{Rule-based} & \textbf{77.29}$\mid$\textbf{89.64} & 76.65$\mid$88.90 & \textbf{76.97}$\mid$89.27 \\ \cdashline{1-4} \multicolumn{1}{r}{IO$+$Rule-based} & 72.03$\mid$86.62 & \textbf{78.41}$\mid$\textbf{94.29} & 75.09$\mid$\textbf{90.29} \\ \multicolumn{1}{r}{BIO$+$Rule-based} & 71.15$\mid$86.05 & 77.64$\mid$93.90 & 74.25$\mid$89.81 \\ \multicolumn{1}{r}{W-BIO$+$Rule-based} & 71.66$\mid$85.73 & 78.08$\mid$93.41 & 74.73$\mid$89.40 \\\hline \end{tabular} \end{center} \end{table} The evaluation on the held-out test set (Table \ref{table:evaluationTER}) shows a similar trend to the validation results (Table \ref{table:validationTER}) in terms of strict scores, i.e., W-BIO and BIO perform notably better than IO (approximately +3\%). This indicates that the methods generalise well. However, the IO model shows a more notable improvement than in the validation results, with lenient $F_1$-measure gains over the W-BIO (+2\%) and BIO (+0.62\%) models. The rule-based method performs better than all ML models, except for the IO model's lenient recall. We also explored a number of combinations of the various ML models and the rule-based method (the union of the output of each respective method) in order to discover any possible improvements. In particular, since the normalisation of TEs is more important than the recognition results, we are specifically interested in improved recall.
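The hybrid combinations are simple unions of the span sets produced by each method, which explains the recall gains at some cost to precision. A minimal sketch (assuming spans are represented as (start, end, label) triples; not the actual implementation):

```python
def combine_outputs(ml_spans, rule_spans):
    """Recall-oriented hybrid: keep every span proposed by either method."""
    return sorted(set(ml_spans) | set(rule_spans))

ml = [(10, 18, "DATE")]
rules = [(10, 18, "DATE"), (42, 51, "DURATION")]
print(combine_outputs(ml, rules))  # -> [(10, 18, 'DATE'), (42, 51, 'DURATION')]
```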
The combination of the IO model and the rule-based method showed the best overall performance. A notable improvement in lenient recall, of +4.18\% and +5.39\% over the best ML model (IO) and the rule-based method respectively, was achieved by the `IO+rule-based' method. Similarly, the strict recall was improved by +8.85\% and +1.76\% over the best ML model (BIO) and the rule-based method respectively. In addition, the best $F_1$-measure of 90.29\% was also achieved with the `IO+rule-based' method; this slightly exceeds the state-of-the-art system \citep{Sohnetal13}, which reported an $F_1$-score of 90.03\%. As expected, the rule-based method achieved the best precision of all methods. The normalisation scores reproduced using the i2b2-TRC test dataset are given in Table \ref{table:evaluationNorm}. As is apparent from the `primary score', TERN is a challenging task and an open research problem. \begin{table}[htbp] \caption{TE normalisation results} \label{table:evaluationNorm} \begin{small} \textit{This table gives the normalisation scores. The primary score is the product of the TER lenient $F_1$-measure and normalisation value accuracy and is considered the main TERN metric.} \end{small} \begin{center} \begin{tabular}{cccc} \hline \textbf{Type} & \textbf{Value} & \textbf{Modifier} & \textbf{Primary score} \\ \hline 0.8473 & 0.7044 & 0.8275 & 0.63\\ \hline \end{tabular} \end{center} \end{table} While automated recognition of TEs has shown results comparable to, and exceeding, human-level benchmarks (e.g., \citep{Sunetal13b,Uzzamanetal13}), normalisation remains a challenge, both for human and automated methods. For instance, the current state-of-the-art clinical TERN methods achieve a mere 66\% (primary score), just below the human benchmark of 66.75\% \citep{Sunetal13b}. Similarly, the state-of-the-art system \citep{Sohnetal13} achieved a 73\% accuracy for the normalised value attribute, notably lower than the human benchmark of 75\%.
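As defined in the caption of Table \ref{table:evaluationNorm}, the primary score is simply the product of the TER lenient $F_1$-measure and the normalisation value accuracy; a one-line sketch (illustrative only, using the lenient $F_1$ of the `IO+rule-based' method and the value accuracy from the table):

```python
def primary_score(lenient_f1: float, value_accuracy: float) -> float:
    """TERN primary score: TER lenient F1 x normalisation value accuracy."""
    return lenient_f1 * value_accuracy

# Lenient F1 of `IO+rule-based' TER (0.9029) and value accuracy (0.7044)
print(primary_score(0.9029, 0.7044))
```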
Regardless, these scores, whether automated or human, are notably lower than the 90\%+ scores typically considered good in common IE tasks. One of the notable challenges of TERN is the normalisation of relative expressions (e.g., `two weeks ago', `post-operative day'). \subsection{Temporal relation extraction} \subsubsection*{Evaluation metrics} The methods described herein have been validated using multiple evaluation methods/metrics. The main evaluation metric used in the 2012 i2b2 temporal relation challenge \citep{Sunetal13} was a TempEval-3-style metric, in which redundant relations (i.e., a relation is redundant if it can be inferred from other relations) are removed to form a `reduced graph'. The \textit{TempEval-3 evaluation metric} used is described below: \begin{itemize} \item Precision: the total number of reduced system output TLINKs that can be verified in the gold standard closure divided by the total number of reduced system output TLINKs. \item Recall: the total number of reduced gold standard output TLINKs that can be verified in the system closure divided by the total number of reduced gold standard output TLINKs. \end{itemize} We initially developed and evaluated our method using gold standard EVENTs and TEs; the results of these experiments are shown in Table \ref{table:TLINKdevelopment} and Table \ref{table:TLINKtest}. In addition, an end-to-end evaluation, where EVENTs, TEs and TLINKs are all tagged using bespoke methods (described in Sections \ref{sec:event_extraction}, \ref{sec:temporalentity_extraction} and \ref{sec:tlink_extraction} respectively), is shown in Table \ref{table:TLINKtest-e2e}. \begin{table}[h] \caption{TLINK development set results} \label{table:TLINKdevelopment} \begin{small} \textit{This table shows the performance of the TLINK pipeline on the development/training dataset.
We used two evaluation metrics: common precision-recall and the TempEval-3 metric.} \end{small} \begin{center} \begin{tabular}{lccc} \hline \textbf{Evaluation setting} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{$F_1$-measure \%} \\\hline Customary precision-recall & 81.40 & 55.06 & 65.69 \\ TempEval-3 precision-recall & 80.43 & 48.05 & 62.59 \\ \hline \end{tabular} \end{center} \end{table} As expected, fairly precision-biased results were achieved, as this was the aim during design and development. This leaves room for future work to further extend the current method in order to balance recall and to further optimise the overall score. A direct comparison cannot be made between our results and work on the full i2b2-TRC dataset \citep{Sunetal13} because our experiments were based on a reduced set of TLINKs. The full i2b2-TRC dataset included pairwise TLINKs over six different EVENT categories, three more than used in our experiments. We did not include \textit{Occurrence}, \textit{Evidential} and \textit{Clinical department} as they were not relevant/useful for characterising patient journeys. Nonetheless, we note the performance of the best systems evaluated on the full i2b2-TRC dataset as a point of reference. The best systems to date, using gold annotations (for clinical EVENTs and TEs), achieved an $F_1$-measure of 69\% \citep{Tangetal13,Cherryetal13}. As previously mentioned in Chapter \ref{cha:Background}, both \cite{Tangetal13} and \cite{Cherryetal13} use complex hybrid methods with rule-based components for candidate generation (i.e., TLINK identification). For classification of TLINKs, \cite{Tangetal13} use a combination of CRF and SVM, whilst \cite{Cherryetal13} use a combination of MaxEnt and SVM. In contrast, our method uses a knowledge-based approach to recognise and simultaneously classify TLINKs. Our approach achieved an overall score of 62.96\% $F_1$-measure on the held-out test set (Table \ref{table:TLINKtest}).
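The TempEval-3 precision and recall defined earlier reduce to set operations over TLINK triples; a simplified illustration (the official challenge scripts additionally compute the closures and reduced graphs themselves):

```python
def tempeval_precision(system_reduced, gold_closure):
    """Reduced system TLINKs verifiable in the gold closure, over all reduced system TLINKs."""
    if not system_reduced:
        return 0.0
    return sum(1 for tlink in system_reduced if tlink in gold_closure) / len(system_reduced)

def tempeval_recall(gold_reduced, system_closure):
    """Reduced gold TLINKs verifiable in the system closure, over all reduced gold TLINKs."""
    if not gold_reduced:
        return 0.0
    return sum(1 for tlink in gold_reduced if tlink in system_closure) / len(gold_reduced)

system_reduced = {("A", "before", "B"), ("B", "before", "C")}
gold_closure = {("A", "before", "B"), ("A", "before", "C"), ("B", "before", "C")}
print(tempeval_precision(system_reduced, gold_closure))  # -> 1.0
```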
Further, using common IE evaluation metrics, where system predictions are evaluated against the manually annotated gold dataset without any further manipulation of labels, our approach achieved an $F_1$-measure of 65.34\%, compared with 62.96\% under the TempEval-3 metrics. \begin{table}[h] \caption{TLINK results on the held-out test set} \label{table:TLINKtest} \begin{small} \textit{This table shows the results of the TLINK pipeline on the held-out test set. The results are presented using common precision-recall and the TempEval-3 evaluation metric.} \end{small} \begin{center} \begin{tabular}{lccc} \hline \textbf{Evaluation setting} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{$F_1$-measure \%} \\\hline Customary precision-recall & 81.51 & 54.52 & 65.34 \\%& 65.36 \\ TempEval-3 precision-recall & 80.23 & 49.10 & 62.96 \\ \hline \end{tabular} \end{center} \end{table} A comparison of the results between the development (Table \ref{table:TLINKdevelopment}) and held-out test data (Table \ref{table:TLINKtest}) indicates good generalisability of the methods developed. For instance, consider the minimal variation in $F_1$-measures between the development and test sets. Except for a small drop in $F_1$-score when not including temporal closure (`Regular (no closure)'), the results on the test dataset were slightly better than those on the development set. Table \ref{table:tlink_component_evaluation} shows the specific component-based evaluation of SECTIME, intra-sentence and inter-sentence TLINKs. For each component, the held-out test set has been reduced to only the relevant type of TLINKs (i.e., when evaluating SECTIME, only SECTIME links are retained). These evaluation results are obtained using the test dataset with gold annotations. Similar to the findings of the TLINK challenge described in \citep{Sunetal13}, we found that SECTIME TLINKs were the easiest to extract (see Table \ref{table:tlink_component_evaluation}).
Secondly, as expected, intra-sentence TLINKs were easier to extract than inter-sentence TLINKs (when excluding SECTIME TLINKs). Lastly, as concluded by previous work \citep{Sunetal13}, and equally applicable to our rule-based approach, a better method to generate candidate pairs would be beneficial to optimise recall. \begin{table}[h] \caption{TLINK component-based evaluation} \label{table:tlink_component_evaluation} \begin{small} \textit{This table shows the individual component-based evaluation of the three TLINK sub-components: the SECTIME, intra-sentence and inter-sentence TLINK methods. For each TLINK component evaluated, the data has been reduced to only the relevant type of links.} \end{small} \begin{center} \begin{tabular}{lccc} \hline \textbf{TLINK} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{$F_1$-measure \%} \\ \hline SECTIME & 93.93 & 92.04 & 92.97 \\ Inter-sentence & 55.72 & 20.40 & 29.87 \\ Intra-sentence & 39.47 & 22.50 & 28.66 \\ \hline \end{tabular} \end{center} \end{table} The component-based analysis also reinforces the conclusion that an extension of our method for the recognition of candidate pairs ought to be explored. Currently, only neighbouring candidate EVENTs and co-referential inter-sentence TLINKs are addressed. Extensions may include intra-sentence TLINKs with multiple tokens in between (e.g., the first and last EVENTs in a sentence) and non-co-referential inter-sentence TLINKs. Moreover, Table \ref{table:tlink_component_evaluation} also indicates the sources of error. Despite the aim of generating high-precision rules, it was challenging to replicate the manual annotation effort. However, the highly inconsistent annotations (IAA: 0.39) indicate that the TLINK annotations themselves were a notable source of errors.
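The temporal closure used by these evaluation metrics (expanded in Appendix B) can be computed by a naive fixed-point iteration over the composition rules for `before' and `overlap'. A minimal sketch, assuming `after' links are stored inverted (i.e., `A after B' as (B, 'before', A)) and treating `overlap' as symmetric:

```python
def transitive_closure(relations):
    """Fixed-point closure over TLINK triples (a, rel, b).

    Composition rules: before o before = before; before o overlap = before;
    overlap o before = before; overlap o overlap = overlap.
    """
    rels = set(relations)
    # make 'overlap' symmetric
    rels |= {(b, "overlap", a) for (a, r, b) in rels if r == "overlap"}
    while True:
        new = set()
        for (a, r1, b) in rels:
            for (b2, r2, c) in rels:
                if b != b2 or a == c:
                    continue
                if "before" in (r1, r2):
                    new.add((a, "before", c))
                else:
                    new.add((a, "overlap", c))
                    new.add((c, "overlap", a))
        if new <= rels:
            return rels
        rels |= new

closure = transitive_closure({("A", "before", "B"), ("B", "overlap", "C")})
print(("A", "before", "C") in closure)  # -> True
```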
\subsubsection*{End-to-end evaluation} Table \ref{table:TLINKtest-e2e} shows the end-to-end evaluation: all entities are derived from bespoke methods such as clinical NER (described in Chapter \ref{cha:ConceptExtraction}) and the TERN method described in this chapter. \begin{table}[h] \caption{TLINK end-to-end results on the held-out test set} \label{table:TLINKtest-e2e} \begin{small} \textit{This table shows the results of the end-to-end system output: all annotations are derived from the bespoke clinical NER, TERN and TLINK methods described in this report thus far.} \end{small} \begin{center} \begin{tabular}{lccc} \hline \textbf{Evaluation method} & \textbf{Precision \%} & \textbf{Recall \%} & \textbf{$F_1$-measure \%} \\ \hline Customary precision-recall & 78.27 & 48.21 & 59.67 \\%& 59.84 \\ TempEval-3 precision-recall & 76.87 & 43.05 & 55.19 \\ \hline \end{tabular} \end{center} \end{table} As a point of reference, \cite{Tangetal13} achieved 62.78\% ($F_1$-measure) on the full i2b2-TRC dataset using the TempEval-3 evaluation method. Our method achieved 55.19\% using the same metric on the reduced dataset (in terms of the event categories considered). Further, evaluating our method as per typical IE evaluation (i.e., against the gold set without any manipulation of the temporal graph), we achieved an $F_1$-measure of 59.67\%. While our method shows good precision, an apparent limitation is its recall. We hypothesise that a better approach to candidate generation could address this gap. \section{Conclusion} \label{sec:conclusion} This report describes a set of NLP methods to order clinical events onto a temporal space or timeline. A number of notable observations were made from the validation of these methods: \begin{itemize} \item EVENTs or broad clinical concept categories (i.e., \textit{Problem}, \textit{Treatment}, \textit{Test}) can be automatically extracted (using CRF) with scores comparable to the human benchmark.
\item negation of concepts can be automatically determined (using the ConText negation tool with minor `tailoring') with accuracy comparable to the human benchmark. \item temporal entities can be automatically identified with scores comparable to human benchmarks. \item temporal entity normalisation is comparatively challenging (even for humans). Further, determining the value (ISO-8601) was harder than type identification. \item TLINK extraction remains an open research problem overall. DocTimeRel or SECTIME links can be extracted with good scores (93\%), while intra- and inter-sentence links are notably more challenging to extract. \end{itemize} In future work, we will investigate the expansion of lexical features by incorporating lexical variant generation for EVENT extraction. The expansion of the TE normalisation component is currently under way. Additionally, expanded candidate generation heuristics and the integration of machine learning classifiers are currently being investigated to improve the TLINK component. \clearpage \section*{Appendices} \section*{Appendix A: Event extraction} \subsection*{Event conceptualisation} Table \ref{table:event-definition} shows the semantic definition of the relevant EVENT categories.
\begin{table}[!h] \caption{Definition of EVENT categories} \label{table:event-definition} \begin{small} \textit{The definitions of the EVENT categories are given according to the annotation guidelines.} \end{small} \begin{center} \begin{tabular}{l|ll} \hline \textbf{EVENT} & \textbf{Semantic type} & \textbf{Semantic group} \\ \hline \multirow{12}{*}{\textit{Problem}} & acquired abnormality & \multirow{10}{*}{Disorders} \\ & anatomical abnormality & \\ & cell or molecular dysfunction & \\ & congenital abnormality & \\ & disease or syndrome & \\ & injury or poisoning & \\ & mental or behavioural dysfunction & \\ & neoplastic process & \\ & pathologic functions & \\ & sign or symptom & \\ \cdashline{2-3} & bacterium & \multirow{2}{*}{Living Beings}\\ & virus & \\ \hline \multirow{8}{*}{\textit{Treatment}} & antibiotic & \multirow{5}{*}{Chemicals \& Drugs} \\ & biomedical or dental material & \\ & clinical drug & \\ & pharmacologic substance & \\ & steroid & \\ \cdashline{2-3} & drug delivery device & \multirow{2}{*}{Devices}\\ & medical device & \\ \cdashline{2-3} & therapeutic or preventive procedure & Procedures \\ \hline \multirow{2}{*}{\textit{Test}} & diagnostic procedure & \multirow{2}{*}{Procedures} \\ & laboratory procedure & \\ \hline \end{tabular} \end{center} \end{table} \clearpage \section*{Appendix B: Transitive closure: an example} \label{sec:appendic_trans_closure} The \textbf{transitive closure} of a relation computes all implicit relations by taking its transitivity into account. Formally, a relation $R$ on a set $X$ is transitive if: \begin{equation} \forall a, b, c \in X : (a \, R \, b \wedge b \, R \, c) \Rightarrow a \, R \, c \end{equation} For example, in Table \ref{table:transitive-closure} the letters $A$, $B$, $C$ represent different EVENTs, and the arrows `$\rightarrow$', `$\leftarrow$', and `$\leftrightarrow$' represent the temporal relations `before', `after', and `overlap' respectively.
Hence, the TLINKs `EVENT $A$ before EVENT $B$', `EVENT $B$ after EVENT $C$', and `EVENT $A$ overlap EVENT $C$' are represented as $A \rightarrow B$, $B \leftarrow C$, and $A \leftrightarrow C$ respectively. \begin{table}[h] \caption{Transitive relations} \label{table:transitive-closure} \begin{small} \textit{This table shows a number of examples of transitive relations.} \end{small} \begin{center} \begin{tabular}{c} \hline If A $\rightarrow$ B and B $\rightarrow$ C, then A $\rightarrow$ C \\ If A $\leftarrow$ B and B $\leftarrow$ C, then A $\leftarrow$ C \\ If A $\leftrightarrow$ B and B $\leftrightarrow$ C, then A $\leftrightarrow$ C \\ If A $\rightarrow$ B and B $\leftrightarrow$ C, then A $\rightarrow$ C \\ If A $\rightarrow$ B and A $\leftrightarrow$ C, then C $\rightarrow$ B \\ \hline \end{tabular} \end{center} \end{table} \clearpage \section*{References} \bibliographystyle{elsarticle-num}
\section{Introduction} \label{section:introduction} It is one of the most important topics in both conformal and CR geometries to study invariant differential operators. Analytic properties of such operators are deeply connected to geometric problems, such as the Yamabe problem and the constant $Q$-curvature problem. In conformal geometry, Graham, Jenne, Mason, and Sparling~\cite{Graham-Jenne-Mason-Sparling1992} have constructed a family of conformally invariant differential operators, called GJMS operators. Let $(N, g)$ be a Riemannian manifold of dimension $n$. For $k \in \bbN$ and $k \leq n / 2$ if $n$ is even, the $k$-th GJMS operator $P_{k}$ is a differential operator acting on $C^{\infty}(N)$ such that its principal part coincides with the $k$-th power of the Laplacian, and it has the following transformation law under the conformal change $\whg = e^{2 \Upsilon} g$: \begin{equation} e^{(n/2 + k) \Upsilon} \whP_{k} = P_{k} e^{(n/2 - k) \Upsilon}, \end{equation} where $\whP_{k}$ is defined in terms of $\whg$. Analytic properties of $P_{k}$ on closed manifolds are quite simple. From standard elliptic theory, it follows that $P_{k}$ is essentially self-adjoint and has closed range. Moreover, its spectrum is a discrete subset of $\bbR$, and the eigenspace corresponding to each eigenvalue is a finite-dimensional subspace of $C^{\infty}(N)$. In CR geometry, Gover and Graham~\cite{Gover-Graham2005} have introduced a family of CR invariant differential operators, called CR GJMS operators, via Fefferman construction. Let $(M, T^{1,0}M, \theta)$ be a $(2n+1)$-dimensional pseudo-Hermitian manifold and $k \in \bbN$ with $k \leq n+1$. 
The \emph{$k$-th CR GJMS operator} $P_{k}$ is a differential operator acting on $C^{\infty}(M)$ such that its principal part is the $k$-th power of the sub-Laplacian, and its transformation rule under the conformal change $\hat{\theta} = e^{\Upsilon} \theta$ is given by \begin{equation} e^{(n+1+k) \Upsilon / 2} \whP_{k} = P_{k} e^{(n+1-k) \Upsilon / 2}, \end{equation} where $\whP_{k}$ is defined in terms of $\hat{\theta}$. Although $P_{k}$ is not elliptic, it is known to be \emph{subelliptic} for $1 \leq k \leq n$~\cite{Ponge2008-Book}; in particular, the same statements as in the previous paragraph also hold for $P_{k}$ on closed manifolds. However, the \emph{critical CR GJMS operator} $P_{n+1}$ is not even hypoelliptic. In fact, its kernel contains the space of CR pluriharmonic functions, which is infinite-dimensional on closed embeddable CR manifolds. In this paper, nevertheless, we will prove that similar results to the above are true for $P_{n+1}$ on the orthogonal complement of $\Ker P_{n+1}$. In what follows, we simply write $P$ for the critical CR GJMS operator. In the remainder of this section, let $(M, T^{1,0}M, \theta)$ be a closed embeddable pseudo-Hermitian manifold of dimension $2n+1$. Note that the embeddability automatically holds if $n \geq 2$~\cite{Boutet_de_Monvel1975}. We will first prove \begin{theorem} \label{thm:self-adjointness-of-critical-CR-GJMS-operator} The maximal closed extension of $P$ is self-adjoint and has closed range. \end{theorem} We use the same letter $P$ for the maximal closed extension of the critical CR GJMS operator by abuse of notation. Moreover, we obtain the following theorem on the spectrum of $P$: \begin{theorem} \label{thm:spectrum-of-critical-CR-GJMS-operator} The spectrum of $P$ is a discrete subset in $\bbR$ and consists only of eigenvalues. Moreover, the eigenspace corresponding to each non-zero eigenvalue of $P$ is a finite-dimensional subspace of $C^{\infty}(M)$. 
Furthermore, $\Ker P \cap C^{\infty}(M)$ is dense in $\Ker P$. \end{theorem} In dimension three, Hsiao~\cite{Hsiao2015} has shown \cref{thm:self-adjointness-of-critical-CR-GJMS-operator,thm:spectrum-of-critical-CR-GJMS-operator} by using Fourier integral operators with complex phase. Our proofs are similar to Hsiao's, but are based on the \emph{Heisenberg calculus}, the theory of Heisenberg pseudodifferential operators. The use of these operators simplifies some proofs and gives more precise regularity results. We will also give some applications of these theorems and their proofs. Let $\scrP$ and $\overline{\scrP}$ be the space of CR pluriharmonic functions and its $L^{2}$-closure respectively. Then $\Ker P$ contains $\overline{\scrP}$, and the \emph{supplementary space} $\scrW$ is defined by \begin{equation} \scrW \coloneqq \Ker P \cap \overline{\scrP}^{\perp}. \end{equation} \begin{proposition} \label{prop:supplementary-space} The supplementary space $\scrW$ is a finite-dimensional subspace of $C^{\infty}(M)$. \end{proposition} In dimension three, \cref{prop:supplementary-space} has already been proved by Hsiao~\cite{Hsiao2015}. However, in this case, the author~\cite{Takeuchi2019-preprint} has shown that $\scrW$ is equal to zero. On the other hand, for each $n \geq 2$, there exists a closed pseudo-Hermitian manifold $(M, T^{1, 0} M, \theta)$ of dimension $2n+1$ such that $\scrW \neq 0$; see the proof of~\cite{Takeuchi2018}*{Theorem 1.6}.
Marugame~\cite{Marugame2018} has proved that the total CR $Q$-curvature \begin{equation} \overline{Q} \coloneqq \int_{M} Q \theta \wedge (d \theta)^{n} \end{equation} is always equal to zero. Moreover, the CR $Q$-curvature itself is identically zero for pseudo-Einstein contact forms~\cite{Fefferman-Hirachi2003}. Hence it is natural to ask whether $(M, T^{1,0}M)$ admits a contact form whose CR $Q$-curvature vanishes identically; this is the \emph{zero CR $Q$-curvature problem}. This problem has been solved affirmatively for embeddable CR three-manifolds by the author~\cite{Takeuchi2019-preprint}. However, it is still open in general. By the transformation law \cref{eq:transformation-law-of-CR-Q-curvature}, it is necessary that \begin{equation} \int_{M} f Q \theta \wedge (d \theta)^{n} = 0 \end{equation} holds for any $f \in \Ker P \cap C^{\infty}(M)$. Note that this condition is independent of the choice of $\theta$. The following proposition states that it is also a sufficient condition for embeddable CR manifolds: \begin{proposition} \label{prop:zero-CR-Q-curvature-problem} There exists a contact form $\hat{\theta}$ on $M$ such that the CR $Q$-curvature $\whQ$ vanishes identically if and only if $Q \perp (\Ker P \cap C^{\infty}(M))$. \end{proposition} This paper is organized as follows. In \cref{section:CR-manifolds}, we recall basic facts on CR manifolds. \cref{section:model-operators-on-the-Heisenberg-group} deals with convolution operators on the Heisenberg group, which is a ``model'' of the Heisenberg calculus. In \cref{section:Heisenberg-calculus}, we give a brief exposition of the Heisenberg calculus. \cref{section:proofs-of-the-main-results} is devoted to proofs of the main results in this paper. \section{CR manifolds} \label{section:CR-manifolds} Let $M$ be an orientable smooth $(2n+1)$-dimensional manifold without boundary. 
A \emph{CR structure} is a rank $n$ complex subbundle $T^{1, 0} M$ of the complexified tangent bundle $T M \otimes \mathbb{C}$ such that \begin{equation} T^{1, 0} M \cap T^{0, 1} M = 0, \qquad \comm{\Gamma(T^{1 ,0} M)}{\Gamma(T^{1, 0} M)} \subset \Gamma(T^{1, 0} M), \end{equation} where $T^{0, 1} M$ is the complex conjugate of $T^{1, 0} M$ in $T M \otimes \bbC$. Define a hyperplane bundle $H M$ of $T M$ by $H M \coloneqq \Re T^{1, 0} M$. A typical example of CR manifolds is a real hypersurface $M$ in an $(n+1)$-dimensional complex manifold $X$; this $M$ has the canonical CR structure \begin{equation} T^{1, 0} M \coloneqq T^{1, 0} X |_{M} \cap (T M \otimes \bbC). \end{equation} Take a nowhere-vanishing real one-form $\theta$ on $M$ such that $\theta$ annihilates $T^{1, 0} M$. The \emph{Levi form} $\mathcal{L}_{\theta}$ with respect to $\theta$ is the Hermitian form on $T^{1,0} M$ defined by \begin{equation} \calL_{\theta}(Z, W) \coloneqq - \sqrt{-1} \, d \theta(Z, \ovW), \qquad Z, W \in T^{1, 0} M. \end{equation} A CR structure $T^{1, 0} M$ is said to be \emph{strictly pseudoconvex} if the Levi form is positive definite for some $\theta$; such a $\theta$ is called a \emph{contact form}. The triple $(M, T^{1, 0} M, \theta)$ is called a \emph{pseudo-Hermitian manifold}. Denote by $T$ the \emph{Reeb vector field} with respect to $\theta$; that is, the unique vector field satisfying \begin{equation} \theta(T) = 1, \qquad T \contr d\theta = 0. \end{equation} Define an operator $\delb_{b} \colon C^{\infty}(M) \to \Gamma((T^{0, 1} M)^{\ast})$ by \begin{equation} \delb_{b} f \coloneqq d f|_{T^{0, 1} M}. \end{equation} A smooth function $f$ is called a \emph{CR holomorphic function} if $\delb_{b} f = 0$. A \emph{CR pluriharmonic function} is a real-valued smooth function that is locally the real part of a CR holomorphic function. We denote by $\scrP$ the space of CR pluriharmonic functions. 
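For instance, let $M$ be a real hypersurface in a complex manifold $X$ as above, and let $F$ be a holomorphic function defined near $M$. Decomposing $d F = \partial F + \delb F$ gives
\begin{equation}
\delb_{b} (F|_{M}) = d F|_{T^{0, 1} M} = (\partial F + \delb F)|_{T^{0, 1} M} = 0
\end{equation}
since $\delb F = 0$ and $\partial F$ annihilates $T^{0, 1} X \supset T^{0, 1} M$; hence $F|_{M}$ is CR holomorphic and $\Re (F|_{M}) \in \scrP$.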
By using the Levi form and the volume form $\theta \wedge (d \theta)^{n}$, we obtain the formal adjoint $\delb_{b}^{\ast} \colon \Gamma((T^{0, 1} M)^{\ast}) \to C^{\infty}(M)$ of $\delb_{b}$. The \emph{Kohn Laplacian} $\Box_{b}$ and the \emph{sub-Laplacian} $\Delta_{b}$ are defined by \begin{equation} \Box_{b} \coloneqq \delb_{b}^{\ast} \delb_{b}, \qquad \Delta_{b} \coloneqq \Box_{b} + \overline{\Box}_{b}. \end{equation} Note that \begin{equation} \Box_{b} = \frac{1}{2} \Delta_{b} + \frac{\sqrt{-1}}{2} n T; \end{equation} see~\cite{Lee1986}*{Theorem 2.3} for example. The Gaffney extension of the Kohn Laplacian, also denoted by $\Box_{b}$, is a self-adjoint operator on $L^{2}(M)$. The kernel $\Ker \Box_{b}$ is the space of $L^{2}$ CR holomorphic functions. The \emph{critical CR GJMS operator} $P$ is a differential operator of order $2n+2$ acting on $C^{\infty}(M)$. It is known to be formally self-adjoint~\cite{Gover-Graham2005}*{Proposition 5.1}. Moreover, it annihilates CR pluriharmonic functions~\cite{Hirachi2014}*{Section 3.2}. A CR manifold $(M, T^{1,0}M)$ is said to be \emph{embeddable} if there exists a smooth embedding of $M$ to some $\mathbb{C}^{N}$ such that $T^{1, 0} M = T^{1,0} \mathbb{C}^{N}|_{M} \cap (T M \otimes \mathbb{C})$. It is known that a closed strictly pseudoconvex CR manifold $(M, T^{1,0}M)$ is embeddable if and only if $\Box_{b}$ has closed range~\cites{Boutet_de_Monvel1975,Kohn1986}. \section{Model operators on the Heisenberg group} \label{section:model-operators-on-the-Heisenberg-group} The Heisenberg group $G$ is the Lie group with the underlying manifold $\bbR \times \bbC^{n}$ and the multiplication \begin{equation} (t, z) \cdot (t^{\prime}, z^{\prime}) \coloneqq (t + t^{\prime} + 2 \Im (z \cdot \ovz^{\prime}), z + z^{\prime}). \end{equation} The left translation by $(t, z)$ and the inversion on $G$ are denoted by $l_{(t, z)}$ and $\iota$ respectively. 
For $\alpha = 1, \dots , n$, we introduce a left-invariant complex vector field $Z_{\alpha}^{0}$ by \begin{equation} Z_{\alpha}^{0} \coloneqq \pdv{}{z^{\alpha}} + \sqrt{-1} \ovz^{\alpha} \pdv{}{t}. \end{equation} The canonical CR structure $T^{1, 0} G$ is spanned by $Z_{1}^{0}, \dots , Z_{n}^{0}$. Define a left-invariant one-form $\theta^{0}$ on $G$ by \begin{equation} \theta^{0} \coloneqq d t + \sqrt{-1} \sum_{\alpha = 1}^{n} (z^{\alpha} d \ovz^{\alpha} - \ovz^{\alpha} d z^{\alpha}). \end{equation} Then $\theta^{0}$ annihilates $T^{1, 0} G$ and the Levi form $\calL_{\theta^{0}}$ satisfies $\calL_{\theta^{0}}(Z_{\alpha}^{0}, Z_{\beta}^{0}) = 2 \delta_{\alpha \beta}$; in particular, $\theta^{0}$ is a contact form on $G$. The Reeb vector field $T^{0}$ coincides with $\pdvf{}{t}$. The Lie algebra $\frakg$ of $G$ is isomorphic to $\bbR \times \bbC^{n}$ as a linear space via \begin{equation} \frakg \to \bbR \times \bbC^{n}; \qquad t T^{0} + 2 \sum_{\alpha = 1}^{n} \Re(z^{\alpha} Z_{\alpha}^{0}) \mapsto (t, z). \end{equation} Under this identification, the Lie bracket on $\frakg$ is given by \begin{equation} \comm{(t, z)}{(t^{\prime}, z^{\prime})} = (4 \Im(z \cdot \ovz^{\prime}), 0). \end{equation} Moreover, the exponential map $\frakg \to G$ coincides with the identity map on $\bbR \times \bbC^{n}$. Furthermore, the dual $\frakg^{\ast}$ of $\frakg$ is also canonically isomorphic to $\bbR \times \bbC^{n}$ as a linear space. We write this linear coordinate as $(\tau, \zeta)$. For $r \in \bbR_{+}$, the parabolic dilation $\delta_{r}$ on $\bbR \times \bbC^{n}$ is defined by \begin{equation} \delta_{r}(t, z) = (r^{2} t, r z). \end{equation} This dilation defines automorphisms on $G$, $\frakg$, and $\frakg^{\ast}$, for which we will use the same letter $\delta_{r}$ by abuse of notation. In what follows, the term ``homogeneous'' is defined in terms of $\delta_{r}$. We will sometimes write $v$ for a point of $G$.
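The normalization $\calL_{\theta^{0}}(Z_{\alpha}^{0}, Z_{\beta}^{0}) = 2 \delta_{\alpha \beta}$ above can be checked directly: from the formula for $\theta^{0}$,
\begin{equation}
d \theta^{0} = 2 \sqrt{-1} \sum_{\gamma = 1}^{n} d z^{\gamma} \wedge d \ovz^{\gamma},
\end{equation}
and since $d z^{\gamma}(Z_{\alpha}^{0}) = \delta_{\alpha}^{\gamma}$ and $d \ovz^{\gamma}(\overline{Z_{\beta}^{0}}) = \delta_{\beta}^{\gamma}$, we obtain
\begin{equation}
\calL_{\theta^{0}}(Z_{\alpha}^{0}, Z_{\beta}^{0}) = - \sqrt{-1} \, d \theta^{0}(Z_{\alpha}^{0}, \overline{Z_{\beta}^{0}}) = - \sqrt{-1} \cdot 2 \sqrt{-1} \, \delta_{\alpha \beta} = 2 \delta_{\alpha \beta}.
\end{equation}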
Denote by $d v$ the Lebesgue measure on $G$, which is a Haar measure on $G$. Let $\Schwartz(G)$ (resp.\ $\Schwartz(\frakg^{\ast})$) be the space of rapidly decreasing functions on $G$ (resp.\ $\frakg^{\ast}$), and $\Schwartz^{\prime}(G)$ (resp.\ $\Schwartz^{\prime}(\frakg^{\ast})$) be that of tempered distributions on $G$ (resp.\ $\frakg^{\ast}$). The coupling of $f \in \Schwartz(G)$ and $k \in \Schwartz^{\prime}(G)$ is written as $\coupling{k}{f}$. The pull-back by $\delta_{r}$ induces endomorphisms on $\Schwartz(G)$ and $\Schwartz(\frakg^{\ast})$, and these extend to those on $\Schwartz^{\prime}(G)$ and $\Schwartz^{\prime}(\frakg^{\ast})$. The Fourier transform $\Fourier$ defines isomorphisms \begin{equation} \Schwartz(G) \xrightarrow{\cong} \Schwartz(\frakg^{\ast}), \qquad \Schwartz^{\prime}(G) \xrightarrow{\cong} \Schwartz^{\prime}(\frakg^{\ast}); \end{equation} in our convention, the Fourier transform $\Fourier (f)$ of $f \in \Schwartz(G)$ is defined by \begin{equation} \Fourier (f)(\tau, \zeta) \coloneqq \int_{G} e^{- \sqrt{-1} (t \tau + \Re(z \cdot \overline{\zeta}))} f(t, z) d v. \end{equation} Now we consider ``model operators'' of the Heisenberg calculus. For $m \in \bbR$, set \begin{equation} \Sigma_{H}^{m} \coloneqq \Set{a \in C^{\infty}(\frakg^{\ast} \setminus \{0\}) | \delta_{r}^{\ast} a = r^{m} a}, \end{equation} which is the space of \emph{Heisenberg symbols} of order $m$. Let $\scrG^{m}$ be the space of $g \in \Schwartz^{\prime}(\frakg^{\ast})$ such that $g$ is smooth on $\frakg^{\ast} \setminus \{0\}$ and satisfies \begin{equation} \delta_{r}^{\ast} g = r^{m} g + (r^{m} \log r) h, \end{equation} where $h \in \Schwartz^{\prime}(\frakg^{\ast})$ with $\supp h \subset \{0\}$ and $\delta_{r}^{\ast} h = r^{m} h$. The restriction map $\scrG^{m} \to \Sigma_{H}^{m}$ is known to be surjective~\cite{Beals-Greiner1988}*{Proposition 15.8}.
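As a simple illustration of the Fourier convention and of homogeneous symbols (an elementary example added here for convenience), take the Reeb field $T^{0} = \pdvf{}{t}$. Integration by parts in $t$ gives, for $f \in \Schwartz(G)$, \begin{equation} \Fourier(T^{0} f)(\tau, \zeta) = \int_{G} e^{- \sqrt{-1} (t \tau + \Re(z \cdot \overline{\zeta}))}\, \pdv{f}{t}\, d v = \sqrt{-1}\, \tau \cdot \Fourier(f)(\tau, \zeta), \end{equation} and the function $a(\tau, \zeta) \coloneqq \sqrt{-1}\, \tau$ satisfies $\delta_{r}^{\ast} a = \sqrt{-1}\, r^{2} \tau = r^{2} a$, so $a \in \Sigma_{H}^{2}$. This is consistent with the formula $\sigma_{2}^{0}(T^{0})(\tau, \zeta) = \sqrt{-1} \tau$ used later in the paper.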
Moreover, the inverse Fourier transform gives an isomorphism \begin{equation} \Fourier^{-1} \colon \scrG^{m} \xrightarrow{\cong} \scrK_{-m-2n-2}, \end{equation} where $\scrK_{l}$ is the space of $k \in \Schwartz^{\prime}(G)$ such that $k$ is smooth on $G \setminus \{0\}$ and satisfies \begin{equation} \delta_{r}^{\ast} k = r^{l} k + (r^{l} \log r) \psi \end{equation} for a homogeneous polynomial $\psi$ of degree $l$~\cite{Beals-Greiner1988}*{Proposition 15.24}. We also introduce a function space on which Heisenberg symbols act. Let $\Schwartz_{0}(G)$ be the space of $f \in \Schwartz(G)$ such that \begin{equation} \int_{G} \psi(v) f(v) d v = 0 \end{equation} for any polynomial $\psi$ on $G$. This condition is equivalent to the condition that $\Fourier (f) \in \Schwartz(\frakg^{\ast})$ vanishes to infinite order at the origin. We denote by $\Psi_{H}^{m}$ the space of endomorphisms $A$ on $\Schwartz_{0}(G)$ commuting with left translation and admitting a formal adjoint $A^{\ast}$ which is homogeneous of degree $m$; that is, \begin{equation} A^{\ast} \circ \delta_{r}^{\ast} = r^{m} \delta_{r}^{\ast} \circ A^{\ast}. \end{equation} We would like to define a canonical isomorphism between $\Sigma_{H}^{m}$ and $\Psi_{H}^{m}$. \begin{proposition} \label{prop:well-defined-of-Hpsido} Let $a \in \Sigma_{H}^{m}$ and take $g \in \scrG^{m}$ with $g |_{\frakg^{\ast} \setminus \{0\}} = a$. Then the convolution operator \begin{equation} \label{eq:definition-of-O(p)} f \mapsto [\Fourier^{-1}(g) \ast f](v) \coloneqq \coupling{\Fourier^{-1}(g)}{f \circ l_{v} \circ \iota} \end{equation} defines an endomorphism on $\Schwartz_{0}(G)$ and is independent of the choice of $g$. Moreover, this operator commutes with left translation and is homogeneous of degree $m$. Furthermore, it is equal to zero if and only if $a = 0$. \end{proposition} \begin{definition} For $a \in \Sigma_{H}^{m}$, an operator $O^{0}(a) \colon \Schwartz_{0}(G) \to \Schwartz_{0}(G)$ is defined by \cref{eq:definition-of-O(p)}.
\end{definition} \begin{proof}[Proof of \cref{prop:well-defined-of-Hpsido}] It follows from~\cite{Christ-Geller-Glowacki-Polin1992}*{Proposition 2.2} that \cref{eq:definition-of-O(p)} defines an endomorphism on $\Schwartz_{0}(G)$ commuting with left translation and homogeneous of degree $m$. Assume that $g^{\prime}$ also satisfies $g^{\prime} |_{\frakg^{\ast} \setminus \{0\}} = a$. Then the support of $g^{\prime} - g$ is contained in $\{0\} \subset \frakg^{\ast}$. Hence $\Fourier^{-1}(g^{\prime} - g)$ is a polynomial on $G$, and so $\Fourier^{-1}(g^{\prime} - g) \ast f = 0$ for any $f \in \Schwartz_{0}(G)$. This implies the independence of the choice of $g$. Next, suppose that the operator \cref{eq:definition-of-O(p)} is equal to zero. For any $f \in \Schwartz_{0}(G)$, we have $\coupling{\Fourier^{-1}(g)}{f \circ \iota} = 0$. Hence $g$ annihilates $\Fourier(\Schwartz_{0}(G))$. Since $C^{\infty}_{c}(\frakg^{\ast} \setminus \{0\})$ is a subspace of $\Fourier(\Schwartz_{0}(G))$, the support of $g$ is contained in $\{0\} \subset \frakg^{\ast}$. Therefore $a = g |_{\frakg^{\ast} \setminus \{0\}} = 0$. \end{proof} The operator $O^{0}(a)$ is well-behaved under formal adjoint and composition. \begin{theorem} \label{thm:adjoint-and-composition-of-model-operators} (i) The formal adjoint of $O^{0}(a)$, $a \in \Sigma_{H}^{m}$, is given by $O^{0}(\ova)$. In particular, $O^{0}(a)$ is formally self-adjoint if and only if $a$ is real-valued. (ii) There exists a bilinear product \begin{equation} \ast^{0} \colon \Sigma_{H}^{m_{1}} \times \Sigma_{H}^{m_{2}} \to \Sigma_{H}^{m_{1} + m_{2}} \end{equation} such that $O^{0}(a_{1}) O^{0}(a_{2}) = O^{0}(a_{1} \ast^{0} a_{2})$ for any $a_{1} \in \Sigma_{H}^{m_{1}}$ and $a_{2} \in \Sigma_{H}^{m_{2}}$. 
\end{theorem} \begin{proof} (i) Take $g \in \scrG^{m}$ with $g |_{\frakg^{\ast} \setminus \{0\}} = a$. The formal adjoint of $O^{0}(a)$ is given by the convolution with respect to \begin{equation} \overline{\Fourier^{-1}(g)} \circ \iota = \Fourier^{-1}(\ovg); \end{equation} see~\cite{Christ-Geller-Glowacki-Polin1992}*{Section 3}. Thus we have $(O^{0}(a))^{\ast} = O^{0}(\ova)$. (ii) See \cite{Ponge2008-Book}*{Proposition 3.1.3(2)}. \end{proof} In particular, $O^{0}$ defines an injective map from $\Sigma_{H}^{m}$ to $\Psi_{H}^{m}$. In fact, this is an isomorphism. \begin{proposition} For any $A \in \Psi_{H}^{m}$, there exists a unique $a \in \Sigma_{H}^{m}$ such that $A = O^{0}(a)$. \end{proposition} \begin{proof} Let $A \in \Psi_{H}^{m}$. By~\cite{Christ-Geller-Glowacki-Polin1992}*{Proposition 3.2}, we have $k \in \scrK_{-m-2n-2}$ such that $A f = k \ast f$ for any $f \in \Schwartz_{0}(G)$. If we define $a \in \Sigma_{H}^{m}$ by $a \coloneqq (\Fourier k) |_{\frakg^{\ast} \setminus \{0\}}$, then $O^{0}(a)$ coincides with $A$ by definition. \end{proof} \begin{definition} The \emph{Heisenberg symbol} \begin{equation} \sigma_{m}^{0} \colon \Psi_{H}^{m} \to \Sigma_{H}^{m} \end{equation} is defined by the inverse map of $O^{0}$. \end{definition} It follows from \cref{thm:adjoint-and-composition-of-model-operators} that \begin{equation} \sigma_{m}^{0}(A^{\ast}) = \overline{\sigma_{m}^{0}(A)}, \qquad \sigma_{m_{1} + m_{2}}^{0}(A_{1} A_{2}) = \sigma_{m_{1}}^{0}(A_{1}) \ast^{0} \sigma_{m_{2}}^{0}(A_{2}) \end{equation} for $A \in \Psi_{H}^{m}$, $A_{1} \in \Psi_{H}^{m_{1}}$, and $A_{2} \in \Psi_{H}^{m_{2}}$. In particular, $A$ is formally self-adjoint if and only if $\sigma_{m}^{0}(A)$ is real-valued. Before ending this section, we note a relation between the Reeb vector field and $\Psi_{H}^{m}$. \begin{lemma} \label{lem:Reeb-commutes-model-operator} The Reeb vector field $T^{0}$ commutes with any $A \in \Psi_{H}^{m}$.
\end{lemma} \begin{proof} The vector field $T^{0}$ generates the flow $l_{(t, 0)}$. Since $A \in \Psi_{H}^{m}$ commutes with left translation, we have $\comm{T^{0}}{A} = 0$. \end{proof} \section{Heisenberg calculus} \label{section:Heisenberg-calculus} In this section, we recall basic properties of Heisenberg pseudodifferential operators; see~\cites{Beals-Greiner1988,Ponge2008-Book} for a comprehensive introduction to the Heisenberg calculus. Throughout this section, we fix a closed pseudo-Hermitian manifold $(M, T^{1,0}M, \theta)$ of dimension $2n+1$. Let \begin{equation} \frakg M \coloneqq (TM / HM) \oplus HM. \end{equation} The Reeb vector field $T$ defines a nowhere-vanishing section $[T]$ of $TM / HM$. For sections $X_{0}$ and $Y_{0}$ of $TM / HM$ and $X^{\prime}$ and $Y^{\prime}$ of $HM$, the Lie bracket $\comm{X_{0} + X^{\prime}}{Y_{0} + Y^{\prime}}$ is defined by \begin{equation} \comm{X_{0} + X^{\prime}}{Y_{0} + Y^{\prime}} \coloneqq - d \theta (X^{\prime}, Y^{\prime}) [T]. \end{equation} This bracket makes $\frakg M$ a bundle of two-step nilpotent Lie algebras. The dilation $\delta_{r}$ on $\frakg M$ is defined by \begin{equation} \delta_{r} |_{TM / HM} \coloneqq r^{2}, \qquad \delta_{r} |_{HM} \coloneqq r. \end{equation} It follows from the definition of the Lie bracket that $\delta_{r}$ is a fiberwise Lie algebra isomorphism. Set $G M \coloneqq \frakg M$ as a smooth fiber bundle with the fiberwise group structure defined via the Baker-Campbell-Hausdorff formula. The dilation $\delta_{r}$ on $\frakg M$ induces that on $G M$, which we write as $\delta_{r}$ for brevity. Take a local frame $(Z_{\alpha})$ of $T^{1,0}M$ on an open set $U \subset M$ such that \begin{equation} \calL_{\theta}(Z_{\alpha}, Z_{\beta}) = 2 \tensor{\delta}{_{\alpha}_{\beta}}.
\end{equation} Then the map \begin{equation} \label{eq:identification-with-trivial-Lie-alg-bundle} \frakg M |_{U} \to U \times \frakg; \qquad \pqty{ p, t [T] + 2 \Re \sum_{\alpha = 1}^{n} z^{\alpha} Z_{\alpha} } \mapsto (p, t, z) \end{equation} gives an isomorphism between fiber bundles of Lie algebras. This isomorphism is compatible with the dilation. The identification \cref{eq:identification-with-trivial-Lie-alg-bundle} induces those on $G M$ and the dual bundle $\frakg^{\ast} M \coloneqq (\frakg M)^{\ast}$ of $\frakg M$: \begin{equation} \label{eq:identification-with-trivial-Lie-grp-bundle} G M |_{U} \to U \times G, \qquad \frakg^{\ast} M |_{U} \to U \times \frakg^{\ast}. \end{equation} These are also compatible with the dilation. Let $(Z_{\alpha}^{\prime})$ be another local frame of $T^{1,0}M$ on $U$ satisfying $\calL_{\theta}(Z_{\alpha}^{\prime}, Z_{\beta}^{\prime}) = 2 \tensor{\delta}{_{\alpha}_{\beta}}$. This gives another identification $\frakg M |_{U} \to U \times \frakg$. These two identifications are related to each other by a smooth family $(U(p))_{p \in U}$ of unitary matrices; that is, \begin{equation} U \times \frakg \to U \times \frakg; \qquad (p, t, z) \mapsto (p, t, U(p) \cdot z). \end{equation} The same is true for $G M$ and $\frakg^{\ast} M$. For $m \in \mathbb{R}$, the space $\Sigma_{H}^{m}(M)$ consists of functions in $C^{\infty}(\frakg^{\ast} M \setminus \{0\})$ that are homogeneous of degree $m$ on each fiber. Under the identification \cref{eq:identification-with-trivial-Lie-grp-bundle}, the fiberwise product $\ast^{0}$ induces a well-defined bilinear product \begin{equation} \ast \colon \Sigma_{H}^{m_{1}}(M) \times \Sigma_{H}^{m_{2}}(M) \to \Sigma_{H}^{m_{1} + m_{2}}(M). \end{equation} Now we consider Heisenberg pseudodifferential operators. For $m \in \mathbb{R}$, denote by $\Psi_{H}^{m}(M)$ the space of \emph{Heisenberg pseudodifferential operators $A \colon C^{\infty}(M) \to C^{\infty}(M)$ of order $m$}.
This space is closed under complex conjugate, transpose, and formal adjoint~\cite{Ponge2008-Book}*{Proposition 3.1.23}. In particular, any $A \in \Psi_{H}^{m}(M)$ extends to a linear operator \begin{equation} A \colon \scrD^{\prime}(M) \to \scrD^{\prime}(M), \end{equation} where $\scrD^{\prime}(M)$ is the space of distributions on $M$. For example, $V \in \Gamma(H M)$ is an element of $\Psi_{H}^{1}(M)$ and $T \in \Psi_{H}^{2}(M)$. Note that $\Psi_{H}^{-\infty}(M) \coloneqq \bigcap_{m \in \bbR} \Psi_{H}^{m}(M)$ coincides with the space of smoothing operators on $M$. As in the usual pseudodifferential calculus, there exists the \emph{Heisenberg principal symbol} \begin{equation} \sigma_{m} \colon \Psi_{H}^{m}(M) \to \Sigma_{H}^{m}(M), \end{equation} which has the following properties: \begin{proposition}[\cite{Ponge2008-Book}*{Propositions 3.2.6 and 3.2.9}] \label{prop:Heisenberg-principal-symbol} (i) The Heisenberg principal symbol $\sigma_{m}$ gives the following exact sequence: \begin{equation} 0 \to \Psi_{H}^{m-1}(M) \to \Psi_{H}^{m}(M) \xrightarrow{\sigma_{m}} \Sigma_{H}^{m}(M) \to 0. \end{equation} (ii) For $A_{1} \in \Psi_{H}^{m_{1}}(M)$ and $A_{2} \in \Psi_{H}^{m_{2}}(M)$, the operator $A_{1} A_{2}$ is a Heisenberg pseudodifferential operator of order $m_{1} + m_{2}$, and \begin{equation} \sigma_{m_{1} + m_{2}}(A_{1} A_{2}) = \sigma_{m_{1}}(A_{1}) \ast \sigma_{m_{2}}(A_{2}). \end{equation} \end{proposition} On the other hand, there exists a crucial difference between the usual pseudodifferential calculus and the Heisenberg one. Since the product $\ast$ is non-commutative, the commutator $\comm{A_{1}}{A_{2}}$ of $A_{1} \in \Psi_{H}^{m_{1}}(M)$ and $A_{2} \in \Psi_{H}^{m_{2}}(M)$ is not an element of $\Psi_{H}^{m_{1} + m_{2} - 1}(M)$ in general. However, we have the following \begin{lemma} \label{lem:commutator-for-Reeb-vector-field} Let $A \in \Psi_{H}^{m}(M)$. Then $\comm{T}{A} \in \Psi_{H}^{m+1}(M)$.
\end{lemma} \begin{proof} It is enough to show that $\sigma_{m+2}(\comm{T}{A}) = 0$, or equivalently, \begin{equation} \sigma_{2}(T) \ast \sigma_{m}(A) = \sigma_{m}(A) \ast \sigma_{2}(T). \end{equation} Fix an identification \cref{eq:identification-with-trivial-Lie-grp-bundle}. Then $\sigma_{2}(T) \in \Sigma_{H}^{2}(M)$ is given by \begin{equation} \sigma_{2}(T)(p, \tau, \zeta) = \sqrt{-1} \tau = \sigma_{2}^{0}(T^{0})(\tau, \zeta); \end{equation} see~\cite{Ponge2008-Book}*{Example 3.25}. Hence it suffices to prove that $\sigma_{2}^{0}(T^{0}) \ast^{0} a = a \ast^{0} \sigma_{2}^{0}(T^{0})$ holds for any $a \in \Sigma_{H}^{m}$. From \cref{lem:Reeb-commutes-model-operator}, we obtain \begin{equation} O^{0}(\sigma_{2}^{0}(T^{0}) \ast^{0} a) = T^{0} O^{0}(a) = O^{0}(a) T^{0} = O^{0}(a \ast^{0} \sigma_{2}^{0}(T^{0})), \end{equation} which is equivalent to $\sigma_{2}^{0}(T^{0}) \ast^{0} a = a \ast^{0} \sigma_{2}^{0}(T^{0})$. \end{proof} Next, consider approximate inverses of Heisenberg pseudodifferential operators. We write $A \sim B$ if $A - B$ is a smoothing operator. \begin{definition} Let $A \in \Psi_{H}^{m}(M)$. An operator $B \in \Psi_{H}^{-m}(M)$ is called a \emph{parametrix} of $A$ if $A B \sim I$ and $B A \sim I$. \end{definition} The existence of a parametrix of a Heisenberg pseudodifferential operator is determined only by its Heisenberg principal symbol. \begin{proposition}[\cite{Ponge2008-Book}*{Proposition 3.3.1}] \label{prop:equivalent-conditions-for-existence-of-parametrix} Let $A \in \Psi_{H}^{m}(M)$ with Heisenberg principal symbol $a \in \Sigma_{H}^{m}(M)$. Then the following are equivalent: \begin{enumerate} \item $A$ has a parametrix; \item there exists $B \in \Psi_{H}^{-m}(M)$ such that $A B - I, B A - I \in \Psi_{H}^{-1}(M)$; \item there exists $b \in \Sigma_{H}^{-m}(M)$ such that $a \ast b = b \ast a = 1$. \end{enumerate} \end{proposition} Now consider the Heisenberg differential operator $\Delta_{b} + 1$ of order $2$. 
It is known that this operator has a parametrix; see the proof of~\cite{Ponge2008-Book}*{Proposition 3.5.7} for example. Since $\Delta_{b} + 1$ is positive and self-adjoint, the $s$-th power $(\Delta_{b} + 1)^{s}$ of $\Delta_{b} + 1$, $s \in \mathbb{R}$, is a Heisenberg pseudodifferential operator of order $2 s$~\cite{Ponge2008-Book}*{Theorems 5.3.1 and 5.4.10}. Using this operator, we define \begin{equation} W_{H}^{s}(M) := \Set{ u \in \scrD^{\prime}(M) \mid (\Delta_{b} + 1)^{s/2} u \in L^{2}(M) }. \end{equation} This space is a Hilbert space with the inner product \begin{equation} \iproduct{u}{v}_{s} = \iproduct{(\Delta_{b} + 1)^{s/2} u}{(\Delta_{b} + 1)^{s/2} v}_{L^{2}(M)}; \end{equation} write $\norm{\cdot}_{s}$ for the norm determined by $\iproduct{\cdot}{\cdot}_{s}$. The space $C^{\infty}(M)$ is dense in $W_{H}^{s}(M)$, and $C^{\infty}(M) = \bigcap_{s \in \bbR} W_{H}^{s}(M)$~\cite{Ponge2008-Book}*{Proposition 5.5.3}. Note that, for $k \in \mathbb{N}$, the Hilbert space $W_{H}^{k}(M)$ coincides with the Folland-Stein space $S^{k, 2}(M)$ as a topological vector space~\cite{Ponge2008-Book}*{Proposition 5.5.5}. Similar to the usual $L^{2}$-Sobolev space theory, we obtain the following \begin{lemma} \label{lem:Rellich's-lemma} For $s_{1} < s_{2}$, the embedding $W_{H}^{s_{2}}(M) \hookrightarrow W_{H}^{s_{1}}(M)$ is compact. \end{lemma} \begin{proof} The operator $(\Delta_{b} + 1)^{s^{\prime}/2}$, $s^{\prime} \in \mathbb{R}$, gives an isometry $W_{H}^{s + s^{\prime}}(M) \to W_{H}^{s}(M)$, and so we may assume that $s_{1} = 0$. From~\cite{Ponge2008-Book}*{Proposition 5.5.7}, we derive that the embedding $W_{H}^{s_{2}}(M) \hookrightarrow W_{H}^{0}(M) = L^{2}(M)$ is the composition of the two embeddings $W_{H}^{s_{2}}(M) \hookrightarrow H^{s_{2} / 2}(M)$ and $H^{s_{2} / 2}(M) \hookrightarrow L^{2}(M)$, where $H^{s}(M)$ is the usual $L^{2}$-Sobolev space on $M$ of order $s$. 
Thus the compactness of $W_{H}^{s_{2}}(M) \hookrightarrow L^{2}(M)$ follows from Rellich's lemma. \end{proof} Heisenberg pseudodifferential operators act on these Hilbert spaces as follows: \begin{proposition} \label{prop:mapping-properties-of-Hpsido} Any $A \in \Psi_{H}^{m}(M)$ extends to a continuous linear operator \begin{equation} A \colon W_{H}^{s+m}(M) \to W_{H}^{s}(M) \end{equation} for every $s \in \bbR$. In particular, if $m < 0$, the operator $A \colon L^{2}(M) \to L^{2}(M)$ is compact. \end{proposition} \begin{proof} The former statement follows from~\cite{Ponge2008-Book}*{Proposition 5.5.8}. The latter one is a consequence of the former one and \cref{lem:Rellich's-lemma}. \end{proof} \section{Proofs of the main results} \label{section:proofs-of-the-main-results} In this section, we prove the main results in this paper. In what follows, we fix a closed embeddable pseudo-Hermitian manifold $(M, T^{1, 0} M, \theta)$ of dimension $2n+1$. For $\mu \in \bbR$, we define a formally self-adjoint Heisenberg differential operator $L_{\mu}$ of order $2$ by \begin{equation} L_{\mu} \coloneqq \frac{1}{2} \Delta_{b} + \frac{\sqrt{-1}}{2} \mu T. \end{equation} It is known that $L_{\mu}$ has a parametrix $N_{\mu} \in \Psi_{H}^{-2}(M)$ if and only if $\mu \notin \pm (n + 2 \bbN)$; see the proof of~\cite{Ponge2008-Book}*{Proposition 3.5.7} for example. On the other hand, the embeddability of $M$ implies that there exist the partial inverse $N_{n} \in \Psi_{H}^{-2}(M)$ of $L_{n} = \Box_{b}$ and the orthogonal projection $S \in \Psi_{H}^{0}(M)$ to $\Ker \Box_{b}$, called the \emph{\Szego projection}~\cite{Beals-Greiner1988}*{Theorem 24.20 and Corollary 25.67}.
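For the reader's convenience we record the elementary identities behind these statements; they are immediate from the definition of $L_{\mu}$ and the formula $\Box_{b} = \frac{1}{2} \Delta_{b} + \frac{\sqrt{-1}}{2} n T$ recalled in the preliminaries, together with the fact that $\Delta_{b}$ and $T$ are real: \begin{equation} L_{n} = \Box_{b}, \qquad L_{-n} = \overline{L_{n}} = \overline{\Box}_{b}, \qquad L_{0} = \frac{1}{2} \Delta_{b} = \frac{1}{2} (L_{n} + L_{-n}). \end{equation} The last identity is what makes $L_{0}$ a useful auxiliary operator in the lemmas below.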
Taking the complex conjugate gives the partial inverse $N_{-n} \in \Psi_{H}^{-2}(M)$ of $L_{-n} = \overline{\Box}_{b}$ and the orthogonal projection $\ovS \in \Psi_{H}^{0}(M)$ to $\Ker \overline{\Box}_{b}$. \begin{lemma} \label{lem:commutator-of-Szego-projection} For any $\mu \in \bbR$, one has $\comm{L_{\mu}}{S} \in \Psi_{H}^{1}(M)$. \end{lemma} \begin{proof} We have \begin{equation} \comm{L_{\mu}}{S} = \comm{L_{n}}{S} + \frac{\sqrt{-1}}{2}(\mu - n)\comm{T}{S} = \frac{\sqrt{-1}}{2}(\mu - n)\comm{T}{S} \in \Psi_{H}^{1}(M) \end{equation} by \cref{lem:commutator-for-Reeb-vector-field}. \end{proof} This lemma implies a property of $S$ and $\ovS$. \begin{lemma} \label{lem:composition-of-Szego-projection-and-its-conjugate} One has $\ovS S, S \ovS \in \Psi_{H}^{-1}(M)$. \end{lemma} \begin{proof} Since $S L_{n} = L_{-n} \ovS = 0$, we have \begin{align} L_{0} S \ovS &= \comm{L_{0}}{S} \ovS + S L_{0} \ovS \\ &= \comm{L_{0}}{S} \ovS + \frac{1}{2} S (L_{n} + L_{-n}) \ovS \\ &= \comm{L_{0}}{S} \ovS. \end{align} By \cref{lem:commutator-of-Szego-projection}, $\comm{L_{0}}{S} \ovS \in \Psi_{H}^{1}(M)$. On the other hand, $L_{0}$ has a parametrix $N_{0} \in \Psi_{H}^{-2}(M)$. Hence \begin{equation} S \ovS \sim N_{0} \comm{L_{0}}{S} \ovS \in \Psi_{H}^{-1}(M). \end{equation} Taking the complex conjugate yields $\ovS S \in \Psi_{H}^{-1}(M)$. \end{proof} The critical CR GJMS operator $P$ on $(M, T^{1, 0} M, \theta)$ coincides with \begin{equation} L_{-n} L_{-n+2} \dotsm L_{n-2} L_{n} \end{equation} modulo $\Psi_{H}^{2n+1}(M)$; see~\cite{Ponge2008-Book}*{Proposition 3.5.7}.
Set \begin{equation} G_{0} \coloneqq N_{n} N_{n-2} \dotsm N_{-n+2} N_{-n} \in \Psi_{H}^{-2n-2}(M), \qquad \Pi_{0} \coloneqq S + \ovS \in \Psi_{H}^{0}(M). \end{equation} Then modulo $\Psi_{H}^{-1}(M)$, \begin{align} P G_{0} &\equiv L_{-n} L_{-n+2} \dotsm L_{n-2} L_{n} N_{n} N_{n-2} \dotsm N_{-n+2} N_{-n} \\ &= L_{-n} L_{-n+2} \dotsm L_{n-2} (I - S) N_{n-2} \dotsm N_{-n+2} N_{-n} \\ &\equiv (I - S) L_{-n} L_{-n+2} \dotsm L_{n-2} N_{n-2} \dotsm N_{-n+2} N_{-n} \\ &\sim (I - S) (I - \ovS) \\ &= I - S - \ovS + S \ovS \\ &\equiv I - \Pi_{0}. \end{align} Thus we have \begin{equation} R_{0} \coloneqq P G_{0} + \Pi_{0} - I \in \Psi_{H}^{-1}(M). \end{equation} \begin{lemma} The operator $I + R_{0} \in \Psi_{H}^{0}(M)$ has a parametrix $A_{0} \in \Psi_{H}^{0}(M)$. Moreover, $A_{0}$ satisfies $A_{0} - I \in \Psi_{H}^{-1}(M)$. \end{lemma} \begin{proof} Since \begin{equation} I (I + R_{0}) - I = (I + R_{0}) I - I = R_{0} \in \Psi_{H}^{-1}(M), \end{equation} $I + R_{0}$ has a parametrix $A_{0}$ by \cref{prop:equivalent-conditions-for-existence-of-parametrix}. From $R_{0} \in \Psi_{H}^{-1}(M)$ and \cref{prop:Heisenberg-principal-symbol}, we obtain \begin{equation} \sigma_{0}(A_{0}) = \sigma_{0}((I + R_{0}) A_{0}) = \sigma_{0}(I), \end{equation} which means $A_{0} - I \in \Psi_{H}^{-1}(M)$. \end{proof} The proof of the following proposition is inspired by that of~\cite{Beals-Greiner1988}*{Proposition 25.4}. \begin{proposition} There exist $G_{\infty} \in \Psi_{H}^{-2n-2}(M)$ and $\Pi_{\infty} \in \Psi_{H}^{0}(M)$ such that \begin{gather} G_{\infty}^{\ast} \sim G_{\infty}, \qquad \Pi_{\infty}^{\ast} \sim \Pi_{\infty}^{2} \sim \Pi_{\infty}, \\ P G_{\infty} + \Pi_{\infty} \sim G_{\infty} P + \Pi_{\infty} \sim I, \\ \Pi_{\infty} P \sim P \Pi_{\infty} = 0, \qquad \Pi_{\infty} G_{\infty} \sim G_{\infty} \Pi_{\infty} \sim 0, \\ \Pi_{\infty} - \Pi_{0} \in \Psi_{H}^{-1}(M).
\end{gather} \end{proposition} \begin{proof} Let $A_{0} \in \Psi_{H}^{0}(M)$ be a parametrix of $I + R_{0}$, and set \begin{equation} \Pi_{\infty} \coloneqq \Pi_{0} A_{0} \in \Psi_{H}^{0}(M), \qquad G_{\infty} \coloneqq (I - \Pi_{\infty}) G_{0} A_{0} \in \Psi_{H}^{-2n-2}(M). \end{equation} Note that \begin{equation} \Pi_{\infty} - \Pi_{0} = \Pi_{0}(A_{0} - I) \in \Psi_{H}^{-1}(M). \end{equation} Since $P \Pi_{0} = 0$, we have $P \Pi_{\infty} = 0$ and $\Pi_{\infty}^{\ast} P = 0$. Moreover, \begin{equation} P G_{\infty} + \Pi_{\infty} = (P G_{0} + \Pi_{0}) A_{0} = (I + R_{0}) A_{0} \sim I, \qquad G_{\infty}^{\ast} P + \Pi_{\infty}^{\ast} \sim I. \end{equation} Hence \begin{align} \Pi_{\infty}^{\ast} \sim \Pi_{\infty}^{\ast} (P G_{\infty} + \Pi_{\infty}) = \Pi_{\infty}^{\ast} \Pi_{\infty} = (G_{\infty}^{\ast} P + \Pi_{\infty}^{\ast}) \Pi_{\infty} \sim \Pi_{\infty}. \end{align} We also have \begin{equation} \Pi_{\infty} G_{\infty} = (\Pi_{\infty} - \Pi_{\infty}^{2}) G_{0} A_{0} \sim 0 \end{equation} and \begin{align} G_{\infty} \Pi_{\infty} &\sim (G_{\infty}^{\ast} P + \Pi_{\infty}^{\ast}) G_{\infty} \Pi_{\infty} \\ &\sim G_{\infty}^{\ast} (I - \Pi_{\infty}) \Pi_{\infty} \\ &= G_{\infty}^{\ast} (\Pi_{\infty} - \Pi_{\infty}^{2}) \\ &\sim 0. \end{align} Therefore \begin{align} G_{\infty}^{\ast} &\sim G_{\infty}^{\ast} (I - \Pi_{\infty}) \\ &\sim G_{\infty}^{\ast} P G_{\infty} \\ &\sim (G_{\infty}^{\ast} P + \Pi_{\infty}) G_{\infty} \\ &\sim G_{\infty}, \end{align} which completes the proof. \end{proof} Consider $P$ as an unbounded closed operator on $L^{2}(M)$ by the maximal closed extension. The domain $\Dom P$ contains $W_{H}^{2n+2}(M)$ by \cref{prop:mapping-properties-of-Hpsido}. Conversely, any $u \in \Dom P$ is an element of $W_{H}^{2n+2}(M)$ modulo $\Ker P$ by the lemma below. \begin{lemma} \label{lem:domain-of-critical-GJMS-operator} For $u \in \Dom P$, one has $u - \Pi_{\infty} u \in W_{H}^{2n+2}(M)$. In particular, $\Dom P = \Ker P + W_{H}^{2n+2}(M)$. 
\end{lemma} \begin{proof} Set \begin{equation} \label{eq:left-parametrix} R_{\infty} \coloneqq G_{\infty} P + \Pi_{\infty} - I \in \Psi_{H}^{- \infty}(M). \end{equation} If $v = P u \in L^{2}(M)$, then \begin{equation} u - \Pi_{\infty} u = G_{\infty} v - R_{\infty} u \in W_{H}^{2n+2}(M). \end{equation} In particular, $u \in \Ker P + W_{H}^{2n+2}(M)$ since $\Pi_{\infty} u \in \Ker P$. \end{proof} \begin{lemma} \label{lem:GJMS-orthogonal-to-Pi} The range $\Ran P$ of $P$ is orthogonal to $\Ran \Pi_{\infty}$ in $L^{2}(M)$. \end{lemma} \begin{proof} Let $u \in \Dom P$ and $v \in L^{2}(M)$. Take a sequence $(v_{j})$ in $C^{\infty}(M)$ such that $v_{j}$ converges to $v$ in $L^{2}(M)$ as $j \to + \infty$. Since $\Pi_{\infty} \in \Psi_{H}^{0}(M)$, the function $\Pi_{\infty} v_{j}$ is smooth and also converges to $\Pi_{\infty} v$ in $L^{2}(M)$ as $j \to + \infty$. Hence \begin{align} \iproduct{P u}{\Pi_{\infty} v}_{0} &= \lim_{j \to \infty} \iproduct{P u}{\Pi_{\infty} v_{j}}_{0} \\ &= \lim_{j \to \infty} \iproduct{u}{P \Pi_{\infty} v_{j}}_{0} \\ &= 0, \end{align} which completes the proof. \end{proof} \begin{proof}[Proof of \cref{thm:self-adjointness-of-critical-CR-GJMS-operator}] We first prove that $P$ is self-adjoint. To this end, it is enough to show that $P$ is symmetric. Let $u, v \in \Dom P$. From \cref{lem:domain-of-critical-GJMS-operator}, it follows that $v^{\prime} \coloneqq v - \Pi_{\infty} v$ is in $W_{H}^{2n+2}(M)$. Take a sequence $(v_{j})$ in $C^{\infty}(M)$ such that $v_{j}$ converges to $v^{\prime}$ in $W_{H}^{2n+2}(M)$ as $j \to + \infty$. Then $P v_{j}$ converges to $P v^{\prime} = P v$ in $L^{2}(M)$ as $j \to + \infty$ by the continuity of $P \colon W_{H}^{2n+2}(M) \to L^{2}(M)$.
From \cref{lem:GJMS-orthogonal-to-Pi}, we derive \begin{align} \iproduct{P u}{v}_{0} &= \iproduct{P u}{v^{\prime}}_{0} + \iproduct{P u}{\Pi_{\infty} v}_{0} \\ &= \lim_{j \to \infty} \iproduct{P u}{v_{j}}_{0} \\ &= \lim_{j \to \infty} \iproduct{u}{P v_{j}}_{0} \\ &= \iproduct{u}{P v}_{0}, \end{align} which means that $P$ is symmetric. We next prove that $P \colon \Dom P \to L^{2}(M)$ has closed range. It suffices to show that there exists $\epsilon > 0$ such that \begin{equation} \norm{P u}_{0} \geq \epsilon \norm{u}_{0} \end{equation} for any $u \in \Dom P \cap (\Ker P)^{\perp}$. Note that $(\Ker P)^{\perp} \subset \Ker \Pi_{\infty}^{\ast}$ since $\Ran \Pi_{\infty} \subset \Ker P$. Set \begin{equation} \label{eq:right-parametrix-of-GJMS} R_{\infty}^{\prime} \coloneqq P G_{\infty} + \Pi_{\infty} - I \in \Psi_{H}^{- \infty}(M). \end{equation} Note that \begin{equation} \label{eq:left-parametrix-of-GJMS} G_{\infty}^{\ast} P + \Pi_{\infty}^{\ast} = I + (R_{\infty}^{\prime})^{\ast}. \end{equation} Suppose that we can take a sequence $(u_{j})$ in $\Dom P \cap (\Ker P)^{\perp}$ such that \begin{equation} \norm{u_{j}}_{0} = 1, \qquad \norm{P u_{j}}_{0} \leq \frac{1}{j}. \end{equation} From \cref{eq:left-parametrix-of-GJMS}, it follows that \begin{equation} u_{j} = G_{\infty}^{\ast} (P u_{j}) - (R_{\infty}^{\prime})^{\ast} u_{j} \end{equation} is uniformly bounded in $W_{H}^{2n+2}(M)$. By \cref{lem:Rellich's-lemma}, we may assume that $u_{j}$ converges to some $u \in L^{2}(M)$ as $j \to + \infty$. From the definition of $u_{j}$, we derive that $u$ is in $(\Ker P)^{\perp}$ and $\norm{u}_{0} = 1$. However, since $\norm{P u_{j}}_{0} \leq 1 / j$, we have $u \in \Dom P$ and $P u = 0$. This is a contradiction. \end{proof} Since $P$ is a self-adjoint operator with closed range, there exist the partial inverse $G$ of $P$ and the orthogonal projection $\Pi$ to $\Ker P$. Next, we show that these operators are Heisenberg pseudodifferential operators.
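Here the \emph{partial inverse} is meant in the usual sense for a self-adjoint operator with closed range; we recall its defining relations (standard functional analysis, stated here for convenience): $G$ is the bounded operator on $L^{2}(M)$ determined by \begin{equation} G P = I - \Pi \ \text{on } \Dom P, \qquad P G = I - \Pi \ \text{on } L^{2}(M), \qquad G \Pi = \Pi G = 0, \end{equation} and $G$ is self-adjoint since $P$ is.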
\begin{theorem} \label{thm:partial-inverse-of-critical-CR-GJMS-operator} The operators $G$ and $\Pi$ are Heisenberg pseudodifferential operators of order $-2n-2$ and $0$ respectively. Moreover, $\Pi$ coincides with $\Pi_{0}$ modulo $\Psi_{H}^{-1}(M)$. \end{theorem} \begin{proof} First note that \begin{equation} \Pi \Pi_{\infty} = \Pi_{\infty}, \qquad \Pi_{\infty}^{\ast} \Pi = \Pi_{\infty}^{\ast} \end{equation} since $\Ran \Pi_{\infty} \subset \Ker P$. Composing \cref{eq:right-parametrix-of-GJMS} with $\Pi$ from the left and taking adjoints, we have \begin{equation} \Pi_{\infty} = \Pi + \Pi R_{\infty}^{\prime}, \qquad \Pi_{\infty}^{\ast} = \Pi + (R_{\infty}^{\prime})^{*} \Pi. \end{equation} Hence \begin{equation} \Pi - \Pi_{\infty} = - \Pi R_{\infty}^{\prime} = (R_{\infty}^{\prime})^{\ast} \Pi R_{\infty}^{\prime} - \Pi_{\infty}^{\ast} R_{\infty}^{\prime}, \end{equation} which is a smoothing operator. In particular, $\Pi$ is a Heisenberg pseudodifferential operator of order $0$. Moreover, $\Pi - \Pi_{0} \in \Psi_{H}^{-1}(M)$ since $\Pi_{\infty} - \Pi_{0} \in \Psi_{H}^{-1}(M)$. Next consider $G$. Set \begin{equation} \label{eq:right-parametrix-of-GJMS-v2} R_{\infty}^{\prime \prime} \coloneqq P G_{\infty} + \Pi - I \in \Psi_{H}^{- \infty}(M). \end{equation} Composing \cref{eq:right-parametrix-of-GJMS-v2} with $G$ from the left and taking adjoints give \begin{equation} (I - \Pi) G_{\infty} = G + G R_{\infty}^{\prime \prime}, \qquad (G_{\infty})^{*} (I - \Pi) = G + (R_{\infty}^{\prime \prime})^{*} G. \end{equation} Hence \begin{align} G - (I - \Pi) G_{\infty} &= - G R_{\infty}^{\prime \prime} \\ &= (R_{\infty}^{\prime \prime})^{\ast} G R_{\infty}^{\prime \prime} - (G_{\infty})^{*} (I - \Pi) R_{\infty}^{\prime \prime}, \end{align} which is a smoothing operator. Therefore $G$ is a Heisenberg pseudodifferential operator of order $-2n-2$. \end{proof} This theorem proves \cref{thm:spectrum-of-critical-CR-GJMS-operator}.
\begin{proof}[Proof of \cref{thm:spectrum-of-critical-CR-GJMS-operator}] From \cref{prop:mapping-properties-of-Hpsido,thm:partial-inverse-of-critical-CR-GJMS-operator}, we derive that the partial inverse $G \colon L^{2}(M) \to L^{2}(M)$ is a compact self-adjoint operator. Hence the spectrum $\sigma(G)$ of $G$ is bounded and consists only of eigenvalues, and $0$ is the only accumulation point of $\sigma(G)$. Moreover, for any non-zero eigenvalue $\lambda$, the eigenspace $H_{\lambda} \coloneqq \Ker (G - \lambda)$ is finite-dimensional, and there exists the following orthogonal decomposition: \begin{equation} L^{2}(M) = \Ker G \oplus \bigoplus_{\lambda \in \sigma(G) \setminus \{0\}} H_{\lambda}. \end{equation} Furthermore, since $G$ maps $W_{H}^{s}(M)$ to $W_{H}^{s+2n+2}(M)$, the eigenspace $H_{\lambda}$ is a linear subspace of $C^{\infty}(M)$. By the definition of the partial inverse, $H_{\lambda}$ is the eigenspace of $P$ with eigenvalue $1 / \lambda$, and $\Ker G = \Ker P$. Hence the spectrum $\sigma(P)$ is discrete and consists only of eigenvalues, and the eigenspace corresponding to each non-zero eigenvalue is a finite-dimensional subspace of $C^{\infty}(M)$. Moreover, $\Ker P \cap C^{\infty}(M)$ is dense in $\Ker P$ since the orthogonal projection $\Pi$ to $\Ker P$ is a Heisenberg pseudodifferential operator of order $0$. \end{proof} An argument similar to the proof of \cref{thm:partial-inverse-of-critical-CR-GJMS-operator} also gives \cref{prop:supplementary-space}. \begin{proof}[Proof of \cref{prop:supplementary-space}] Let $\pi$ be the orthogonal projection to $\overline{\scrP}$. Note that $\Pi - \pi$ is the orthogonal projection to $\scrW$. Hence it is enough to prove that $\Pi - \pi$ is a smoothing operator. Since $\Pi \sim \Pi_{\infty}$, it suffices to show that $\pi - \Pi_{\infty}$ is a smoothing operator. From the construction of $\Pi_{\infty}$, we derive \begin{equation} \Ran \Pi_{\infty} \subset \Ran \Pi_{0} \subset \Ran \pi. 
\end{equation} Hence \begin{equation} \pi \Pi_{\infty} = \Pi_{\infty}, \qquad \Pi_{\infty}^{\ast} = \Pi_{\infty}^{\ast} \pi. \end{equation} It follows from \cref{eq:right-parametrix-of-GJMS} that \begin{equation} \Pi_{\infty} = \pi + \pi R_{\infty}^{\prime}, \qquad \Pi_{\infty}^{\ast} = \pi + (R_{\infty}^{\prime})^{\ast} \pi. \end{equation} Therefore we have \begin{equation} \pi - \Pi_{\infty} = - \pi R_{\infty}^{\prime} = (R_{\infty}^{\prime})^{\ast} \pi R_{\infty}^{\prime} - \Pi_{\infty}^{\ast} R_{\infty}^{\prime}, \end{equation} which is a smoothing operator. \end{proof} As an application of results in this section, we give a necessary and sufficient condition for the zero CR $Q$-curvature problem. \begin{proof}[Proof of \cref{prop:zero-CR-Q-curvature-problem}] As we saw in the introduction, $Q \perp (\Ker P \cap C^{\infty}(M))$ if there exists a contact form with zero $Q$-curvature. Conversely, assume that $Q$ is orthogonal to $\Ker P \cap C^{\infty}(M)$. It follows from \cref{thm:spectrum-of-critical-CR-GJMS-operator} that $Q$ is in fact orthogonal to $\Ker P$. Then $\Upsilon \coloneqq - G Q \in C^{\infty}(M)$ and $P \Upsilon = - Q$. Hence $\widehat{\theta} \coloneqq e^{\Upsilon} \theta$ satisfies $\widehat{Q} = 0$. \end{proof} \section*{Acknowledgements} The author is grateful to Charles Fefferman, Kengo Hirachi, and Paul Yang for helpful comments. A part of this work was carried out during his visit to Princeton University with support from The University of Tokyo/Princeton University Strategic Partnership Teaching and Research Collaboration Grant, and the Program for Leading Graduate Schools, MEXT, Japan. He would also like to thank Princeton University for its kind hospitality.
\section{Introduction}\label{section1} To put our motivation and results into perspective, we start by recalling some results related to the second fundamental form of submanifolds in the unit spheres and standard complex projective spaces. Let $M$ be an $n$-dimensional compact minimal submanifold in the unit sphere $S^{n+r}$ with second fundamental form $\sigma$. In a seminal paper \cite{Si}, Simons discovered a gap phenomenon for the squared length of $\sigma$: if $|\sigma|^2\leq n/(2-\frac{1}{r})$ everywhere on $M$, then either $\sigma\equiv0$, i.e., $M$ is totally geodesic, or $|\sigma|^2\equiv n/(2-\frac{1}{r})$. Soon afterwards Chern, do Carmo and Kobayashi (\cite{CDK}) showed that Simons's result is sharp and classified the equality case. The case of $r=1$ was also independently obtained by Lawson (\cite{La}). Chern also proposed to study the subsequent gaps for $|\sigma|^2$ (\cite[p. 42]{Ch}, \cite[p. 75]{CDK}) and this problem was collected by Yau in his famous problem section (\cite[p. 693]{Ya}). For minimal hypersurfaces in $S^{n+1}$, i.e., the case of codimension $r=1$, the investigation of the second gap for $|\sigma|^2$ was initiated by Peng and Terng (\cite{PT1}, \cite{PT2}) and, after some further works (\cite{WX}, \cite{Zh}), their estimate was eventually improved by Ding and Xin (\cite{DX}), who showed that for each $n$ there exists a positive constant $\delta(n)$ depending only on $n$ such that if $|\sigma|^2> n$, then $|\sigma|^2\geq n+\delta(n)$. Owing to the method employed, the gap $\delta(n)$ is by no means optimal, and the conjectured optimal gap is $n$, i.e., if $|\sigma|^2> n$, then $|\sigma|^2\geq 2n$, which remains open. It is known that the value $2n$ is achieved by Cartan's isoparametric minimal hypersurfaces in $S^{n+1}$.
It is well-known that any (holomorphically immersed) complex submanifold of a K\"{a}hler manifold is minimal, so it is natural to explore similar gap phenomena for complex submanifolds in complex projective spaces endowed with the Fubini-Study metric of constant holomorphic sectional curvature $1$. This was initiated by Ogiue in \cite{Og1} and shortly afterwards there appeared a series of related papers on this topic, whose main results have been summarized by Ogiue in his influential semi-expository paper \cite{Og2}. Ogiue also posed several open problems in \cite[p. 112]{Og2} to characterize the totally geodesic submanifolds in $\mathbb{P}^{n+r}$ in terms of various curvature pinching conditions, and, to the best of the author's knowledge, some of them have been answered or partially answered (\cite{Che}, \cite{Ro}, \cite{RV}, \cite{Li1}, \cite{Li2}). In particular, Ogiue conjectured (\cite[p. 112, Problem 3]{Og2}) that if $|\sigma|^2<n$ everywhere on a complete complex $n$-dimensional holomorphically immersed submanifold $M$ in $\mathbb{P}^{n+r}$, then it must be totally geodesic. This was resolved by Cheng and Liao (\cite{Che}, \cite{Li1}) when $M$ is compact. The previously-mentioned results indicate that, in the class of compact minimal (resp. complex) submanifolds in a sphere (resp. complex projective space), the totally geodesic submanifolds are isolated. This motivated Gromov to conjecture in \cite{Gr} that every \emph{smooth} immersed map of a compact smooth manifold into a compact quotient of the complex hyperbolic space, whose second fundamental form is small, is homotopic to a totally geodesic submanifold. Here the term ``small" is only qualitative and has not been precisely formulated. Besson, Courtois and Gallot answered it in \cite{BGG} in the holomorphic case in terms of the $L^2$ and $L^{2n}$ norms of the second fundamental form. To be precise, they showed that (\cite[p.
151]{BGG}) a holomorphic immersion of a compact complex $n$-dimensional K\"{a}hler manifold into a compact quotient of the complex hyperbolic space, whose $|\sigma|^{2}_{L^2}$ and $|\sigma|^{2}_{L^{2n}}$ are smaller than a positive constant depending only on $n$, is totally geodesic. Various characterizations of $\mathbb{P}^n$ and the hyperquadric in $\mathbb{P}^{n+1}$ have a long history since the pioneering works of Hirzebruch-Kodaira, Brieskorn, and Kobayashi-Ochiai (\cite{HK}, \cite{Br}, \cite{KO}). Motivated by the result in \cite{BGG}, Loi and Zedda also addressed in \cite{LZ} the problem of finding the \emph{optimal} constant $c(n)$ depending only on $n$ to ensure that, if $|\sigma|^2_{L^2}<c(n)$ for an $n$-dimensional projective manifold in $\mathbb{P}^{n+r}$, then $M$ is totally geodesic, and formulated the following (\cite[p. 69]{LZ}) \begin{conjecture}[Loi-Zedda]\label{conj} Let $M$ be an $n$-dimensional projective manifold in $\mathbb{P}^{n+r}$ with the induced metric. If \be\label{inequality1} |\sigma|^2_{L^2}:=\int_M|\sigma|^2\ast1<2n \cdot\text{Vol}(\mathbb{P}^n),\ee where $\ast1$ is the volume form and $\text{Vol}(\mathbb{P}^n)$ the volume of the standard $n$-dimensional $\mathbb{P}^n$ in $\mathbb{P}^{n+r}$, then $M$ is isomorphic to the $n$-dimensional hyperplane $\mathbb{P}^n$. Moreover, equality holds in (\ref{inequality1}) if and only if $M$ is isomorphic to the complex quadric \be\label{quadric}Q^n:=\Big\{[z^0:\cdots:z^{n+1}: \underbrace{0:\cdots:0}_{r-1}] \in~\mathbb{P}^{n+r}~\big|~(z^0)^2+\cdots+ (z^{n+1})^2=0\Big\}.\nonumber\ee \end{conjecture} \begin{remark} Loi and Zedda verified Conjecture \ref{conj} in the cases where $n=1$, where $M$ is a complete intersection, and where $|\sigma|^2$ is constant (\cite[Theorem 3]{LZ}). \end{remark} The key idea in Simons's paper \cite{Si} is to calculate the Laplacian of $|\sigma|^2$ and estimate some of the terms involved to produce the desired results.
This idea was more or less inherited by most papers related to the second fundamental form of complex submanifolds in complex projective spaces (e.g., those summarized in \cite{Og2}), but with one exception in \cite{Che}, where some arguments are of an algebro-geometric nature. This motivates us to apply deeper results in algebraic geometry to treat this kind of problem, and the main tools employed in our paper are some classification results in the adjunction theory of algebraic geometry, mainly due to Fujita (\cite{Fu}). The rest of this paper is organized as follows. The main results and several of their corollaries are stated in Section \ref{section2}. In Section \ref{section3} we briefly recall the notions of sectional genus and $\Delta$-genus in algebraic geometry and some classification results in terms of them, mainly due to Fujita. Then Sections \ref{section4}, \ref{section5} and \ref{section4.3} are devoted to the proofs of various results described in Section \ref{section2}. \section*{Acknowledgements} The author wishes to thank Professor Yuan-Long Xin for drawing his attention to rigidity results of this kind in minimal submanifold theory. \section{Main results}\label{section2} Our first observation is the following formula for the mean value $\underline{|\sigma|}^2:=\int_M|\sigma|^2\ast1\big/\text{Vol}(M)$ of $|\sigma|^2$ over $M$. \begin{theorem}\label{firstresult} Let $M\overset{i}{\hooklongrightarrow}\mathbb{P}^{n+r}$ be an $n$-dimensional projective manifold in $\mathbb{P}^{n+r}$ with the induced metric, $\sigma$ the second fundamental form of $M$, and $L$ the hyperplane section bundle on $M$, i.e., $L=i^{\ast}\big(\mathcal{O}_{\mathbb{P}^{n+r}}(1)\big)$. Then \be\label{fundamentalformformula}\underline{|\sigma|}^2=2n \big[1+\frac{g(L)-1}{d(M)}\big],\ee where $d(M)$ is the degree of $M$ in $\mathbb{P}^{n+r}$ and $g(L)\in\mathbb{Z}$ the sectional genus of $L$.
\end{theorem} \begin{remark} The notion of \emph{sectional genus} of a line bundle comes from algebraic geometry and more related details shall be explained in Section \ref{section3}. By definition the degree of $M$ is nothing but $L^n$: $d(M)=L^n$. \end{remark} In order to state the classification result, we recall in the next example some special polarized manifolds, which are called \emph{rational normal scrolls} in the literature. We refer the reader to \cite[p. 5-7]{EH1} or \cite[\S 9.1.1]{EH2} for more details. \begin{example}[Rational normal scroll] Let $\varepsilon$ be a direct sum of $n$ line bundles of positive degrees over $\mathbb{P}^1$, i.e., $$\varepsilon=\bigoplus_{i=1}^n\mathcal{O}_{\mathbb{P}^1}(a_i),\qquad a_i\in\mathbb{Z}_{>0}.$$ Write $\mathbb{P}(\varepsilon)$ for the projectivization of $\varepsilon$ and let $\mathcal{O}_{\mathbb{P}(\varepsilon)}(1)$ be the tautological line bundle on $\mathbb{P}(\varepsilon)$, which is very ample under the condition that all the degrees $a_i>0$. Denote for our later convenience \be\label{notatinforscorll} \big(S(a_1,\ldots,a_n),\mathcal{O}(1)\big):= \big(\mathbb{P}(\varepsilon), \mathcal{O}_{\mathbb{P}(\varepsilon)}(1)\big)\ee and call this polarized pair a \emph{rational normal scroll}. \end{example} With this notion in hand we are able to classify the manifolds $M$ with $\underline{|\sigma|}^2\leq2n$. Since the classification for $\underline{|\sigma|}^2<2n$ can be made more explicit and the related applications also focus on it, in the sequel we separately describe the two cases ``$\underline{|\sigma|}^2<2n$" and ``$\underline{|\sigma|}^2=2n$".
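The numerology of formula (\ref{fundamentalformformula}) for the pairs just introduced can be sanity-checked with a short script. This is only an illustrative sketch, not part of the proofs: the degrees and the fact that all of these pairs have sectional genus $g(L)=0$ are classical values, assumed here rather than derived.

```python
from fractions import Fraction

def mean_sigma_sq(n, g, d):
    # Formula of Theorem firstresult: 2n * [1 + (g(L) - 1) / d(M)]
    return 2 * n * (1 + Fraction(g - 1, d))

n = 4  # an arbitrary complex dimension
# All pairs below are assumed to have sectional genus g(L) = 0.
assert mean_sigma_sq(n, 0, 1) == 0       # (P^n, O(1)): totally geodesic
assert mean_sigma_sq(n, 0, 2) == n       # (Q^n, O(1)), degree 2
assert mean_sigma_sq(2, 0, 4) == 3       # Veronese surface, n = 2, degree 4
a = (1, 2, 3, 4)                         # sample scroll degrees a_i
d = sum(a)                               # d(S(a_1,...,a_n)) = sum of a_i
assert mean_sigma_sq(n, 0, d) == 2 * n * (1 - Fraction(1, d))
```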
\begin{theorem}\label{secondresult} Let $(M,L)$ be as in Theorem \ref{firstresult} and assume that $\underline{|\sigma|}^2<2n.$ Then the pair $(M,L)$ is isomorphic to \begin{enumerate} \item $(\mathbb{P}^n,\mathcal{O}_{\mathbb{P}^n}(1))$, in which case $\sigma\equiv0$ and $d(M)=1$; \item $(Q^n,\mathcal{O}_{Q^n}(1))$, in which case $|\sigma|^2\equiv n$, $d(M)=2$ and the codimension $r\geq 1$; \item the Veronese surface $\big(\mathbb{P}^2,\mathcal{O}_{\mathbb{P}^2}(2)\big)$, in which case $\underline{|\sigma|}^2=3$, $d(M)=4$ and the codimension $r\geq 3$; or \item one of the rational normal scrolls $\big(S(a_1,\ldots,a_n),\mathcal{O}(1)\big)$ in the notation of (\ref{notatinforscorll}) with all $a_i>0$, in which case \begin{eqnarray}\label{norm of scroll} \left\{ \begin{array}{ll} \underline{|\sigma|}^2=2n(1-\frac{1} {\sum_{i=1}^na_i})\\ ~\\ d(M)=\sum_{i=1}^na_i,\\ \end{array} \right. \end{eqnarray} and the codimension $r\geq-1+\sum_{i=1}^na_i$. \end{enumerate} Furthermore, in the above cases, the lower bounds of the codimension $r$ are exactly realized by the Kodaira maps of these very ample line bundles $L$. \end{theorem} \begin{remark}\label{remark2.6} The four classes above are disjoint with exactly one exception $\big(Q^2,\mathcal{O}_{Q^2}(1)\big)$, which is also isomorphic to the rational normal scroll $\big(S(1,1), \mathcal{O}(1)\big)$. \end{remark} Observe from (\ref{fundamentalformformula}) that the pairs $(M,L)$ where $\underline{|\sigma|}^2=2n$ correspond exactly to those with sectional genera $1$, which have been classified by Fujita (\cite[p. 107]{Fu}). So consequently we have \begin{proposition} Let $(M,L)$ be as above. If $\underline{|\sigma|}^2=2n$, then the pair $(M,L)$ is either \begin{enumerate} \item a del Pezzo manifold, i.e., the canonical line bundle $K_M=(1-n)L$; or \item a scroll over an elliptic curve, i.e., $M$ is the projectivization of a vector bundle on an elliptic curve and $L$ its tautological line bundle. 
\end{enumerate} \end{proposition} Next we present several applications of Theorems \ref{firstresult} and \ref{secondresult}, whose detailed proofs shall be given in Section \ref{section4.3}. The first one is a confirmation of Loi and Zedda's Conjecture \ref{conj}: \begin{corollary}\label{thirdresult} Conjecture \ref{conj} is true. \end{corollary} As mentioned above, Cheng and Liao showed (\cite{Che}, \cite{Li1}) that if an $n$-dimensional compact (holomorphically immersed) complex submanifold $M$ satisfies $|\sigma|^2<n$ \emph{everywhere}, then $M$ is totally geodesic, which confirmed a conjecture of Ogiue (\cite[p. 112, Prob. 3]{Og2}) in the compact case. As an immediate consequence of Theorem \ref{secondresult} as well as Remark \ref{remark2.6}, in the embedded case this result can be improved from a pointwise condition to a condition on the mean. \begin{corollary}\label{fourthresult} For an $n$-dimensional projective manifold $M$ in $\mathbb{P}^{n+r}$ with the induced metric, if $\underline{|\sigma|}^2<n$ (resp. $\underline{|\sigma|}^2=n$), then $M$ is isomorphic to $\mathbb{P}^n$ (resp. the quadric $Q^n$). \end{corollary} Another by-product of this classification result is the \emph{optimal} second gap value for $|\sigma|^2$, which solves the complex analogue of the problem of Chern discussed in the Introduction. \begin{corollary}\label{fifthresult} For an $n$-dimensional projective manifold $M$ in $\mathbb{P}^{n+r}$ with the induced metric and $n\geq3$, if $\underline{|\sigma|}^2>n$, then $\underline{|\sigma|}^2\geq 2n-2$. In particular, if $|\sigma|^2\in(n,2n-2]$ everywhere on $M$, then $|\sigma|^2\equiv 2n-2$. These bounds are optimal, as $|\sigma|^2\equiv 2n-2$ is achieved by the rational normal scroll $S(\underbrace{1,\ldots,1}_{n})$. \end{corollary} \section{Preliminaries}\label{section3} We briefly recall in this section some related notation and results in algebraic geometry, mainly for our later purpose.
For more details on these materials we refer the reader to \cite{Fu} and \cite[\S 3]{BS}. Throughout this section we work over $\mathbb{C}$, the field of complex numbers. Let $M$ be an $n$-dimensional smooth projective variety, $L$ a line bundle on it, and $K_M$ its canonical line bundle. By applying the Hirzebruch-Riemann-Roch formula to the holomorphic Euler characteristic $\chi(M,tL)$ ($t\in\mathbb{Z}$) and considering some special coefficients in front of $t^i$, it turns out that (cf. \cite[p. 25-26]{Fu}) the integer $$\big(K_M+(n-1)L\big)\cdot L^{n-1}$$ is \emph{even}. With this fact in mind, the following two closely related notions were introduced by Fujita (cf. \cite[p. 26]{Fu}), who, in a series of papers, successfully described the structure of the pair $(M,L)$ for ample $L$ when these genera are small enough. These results have been summarized in his book \cite{Fu}. \begin{definition} Let $M$ be an $n$-dimensional smooth projective variety and $L$ a line bundle on it. The \emph{sectional genus} of $L$, $g(L)$, is defined by \be\label{sectional genus}g(L):=\frac{\big(K_M+(n-1)L\big)\cdot L^{n-1}}{2}+1.\ee The \emph{$\Delta$-genus} of $L$, $\Delta(L)$, is defined by \be\label{delta genus}\Delta(L):=n+L^n-\text{dim}_{\mathbb{C}}H^0(M,L).\ee Here, as usual, $H^0(M,L)$ denotes the complex vector space of holomorphic sections of $L$. \end{definition} A fundamental result related to $g(L)$ and $\Delta(L)$ is the following, due to Fujita (\cite{Fu0}, \cite[p. 107, p. 35]{Fu}). \begin{theorem}\label{Fujita1} The sectional genus $g(L)\geq0$ if $L$ is ample, and the equality holds if and only if the $\Delta$-genus $\Delta(L)=0$. In the latter case $L$ is necessarily very ample. \end{theorem} The following classification of the pairs with $\Delta$-genus zero for ample $L$ is due to Fujita (\cite[p. 41]{Fu}). \begin{theorem}\label{Fujita2} Let $M$ be an $n$-dimensional smooth projective variety and $L$ an ample line bundle on it. Suppose that $\Delta(L)=0$.
Then the pair $(M,L)$ is isomorphic to \begin{enumerate} \item $(\mathbb{P}^n,\mathcal{O}_{\mathbb{P}^n}(1))$, \item $(Q^n,\mathcal{O}_{Q^n}(1))$, \item the Veronese surface $\big(\mathbb{P}^2,\mathcal{O}_{\mathbb{P}^2}(2)\big)$, or \item the rational normal scroll $\big(S(a_1,\ldots,a_n),\mathcal{O}(1)\big)$, in the notation of (\ref{notatinforscorll}), with all integers $a_i>0$. \end{enumerate} \end{theorem} \begin{remark} It turns out that the manifolds listed in Theorem \ref{Fujita2} coincide with the smooth projective varieties of \emph{minimal degree} (cf. \cite[Thm 1]{EH1}), a fact that will be needed in our proof of Theorem \ref{secondresult}. \end{remark} \section{Proof of Theorem \ref{firstresult}}\label{section4} Let $(M,g,J)\overset{i}{\hooklongrightarrow}(\mathbb{P}^{n+r},g_{0},J_{0})$ be an $n$-dimensional projective manifold in $\mathbb{P}^{n+r}$. Here $\mathbb{P}^{n+r}$ is endowed with the standard complex structure $J_0$ and the Fubini-Study metric $g_0$ of constant holomorphic sectional curvature $1$, and $M$ is endowed with the induced metric $g=i^{\ast}(g_0)$. The associated (normalized) K\"{a}hler form $\omega_0$ of $g_0$ under the homogeneous coordinates $[z^0:z^1:\cdots:z^{n+r}]$ is defined by $$\omega_0:= \frac{\sqrt{-1}}{2\pi}\partial\bar{\partial}\log( \sum_{i=0}^{n+r}|z^i|^2).$$ With this normalized coefficient we have (cf. \cite[p. 165]{Zhe}) \begin{eqnarray} \left\{ \begin{array}{ll} \int_{\mathbb{P}^{n+r}}\omega_0^{n+r}=1\\ ~\\ c_1\big(\mathcal{O}_{\mathbb{P}^{n+r}}(1)\big) =[\omega_0]\in H^{1,1}(\mathbb{P}^{n+r};\mathbb{Z}).\\ \end{array} \right.\nonumber \end{eqnarray} Therefore $\omega:=i^{\ast}(\omega_0)$ is the associated normalized K\"{a}hler form of $g$ such that \begin{eqnarray}\label{1.5} \left\{ \begin{array}{ll} c_1(L)=[\omega]\in H^{1,1}(M;\mathbb{Z})\\ ~\\ \int_M\omega^n=L^n=d(M)\\ \end{array} \right. \end{eqnarray} as $L=i^{\ast}\big(\mathcal{O}_{\mathbb{P}^{n+r}}(1)\big)$.
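The first normalization above can be checked numerically in the simplest case $n+r=1$, where on an affine chart $z=x+\sqrt{-1}y$ of $\mathbb{P}^1$ one has $\omega_0=\frac{1}{\pi}\frac{dx\,dy}{(1+x^2+y^2)^2}$. The following sketch verifies $\int_{\mathbb{P}^1}\omega_0=1$ by elementary quadrature; the truncation radius and grid are arbitrary.

```python
import numpy as np

# On the affine chart of P^1, the normalized Fubini-Study form is
# omega_0 = (1/pi) dx dy / (1 + x^2 + y^2)^2.  In polar coordinates
# its total mass is int_0^inf 2 r / (1 + r^2)^2 dr, which equals 1.
r = np.linspace(0.0, 1.0e4, 2_000_001)
integrand = 2.0 * r / (1.0 + r**2) ** 2
# trapezoid rule, written out to stay independent of NumPy version
total = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r))
assert abs(total - 1.0) < 1e-4
```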
Let $$\text{Ric}(g):=-\frac{\sqrt{-1}}{2\pi}\partial\bar{\partial} \log\det{(g)}$$ be the (normalized) Ricci form of $g$, which represents the first Chern class of $M$ and thus of the anti-canonical line bundle: \be\label{2.5}[\text{Ric}(g)]=c_1(K_M^{-1}).\ee Denote by $S_g$ the scalar curvature function of $g$ on $M$; it is a well-known fact that (cf. \cite[p. 60]{Sz}) \be\label{3}S_g\cdot\omega^n=n\cdot\text{Ric}(g)\wedge\omega^{n-1}.\ee Another basic fact is that $S_g$ is related to the squared length of the second fundamental form $\sigma$ by \be\label{4}|\sigma|^2=n(n+1)-S_g,\ee which can be proved via the Gauss equation (cf. \cite[p. 77]{Og2}). With these facts understood, we can proceed to prove Theorem \ref{firstresult}. \begin{proof} \be\begin{split} \underline{|\sigma|}^2-2n &=\frac{\int_M|\sigma|^2\omega^n}{\int_M\omega^n}-2n\\ &=n(n-1)-\frac{\int_MS_g\cdot\omega^n}{\int_M\omega^n}\qquad\big(\text{by (\ref{4})}\big)\\ &=n\Big[\frac{\int_M\big((n-1)\omega-\text{Ric}(g)\big) \wedge\omega^{n-1}}{\int_M\omega^n}\Big]\qquad\big(\text{by (\ref{3})}\big)\\ &=n\frac{\big((n-1)L+K_M\big)\cdot L^{n-1}}{d(M)}\qquad\big(\text{by (\ref{1.5}) and (\ref{2.5})}\big)\\ &=2n\Big[\frac{g(L)-1}{d(M)}\Big].\qquad\big(\text{by (\ref{sectional genus})}\big) \end{split} \nonumber\ee This yields the desired formula (\ref{fundamentalformformula}) and thus completes the proof of Theorem \ref{firstresult}. \end{proof} \section{Proof of Theorem \ref{secondresult}}\label{section5} The proof shall be divided into three lemmas. \begin{lemma}The pair $(M,L)$ in question must be isomorphic to one of the four cases described in Theorem \ref{secondresult}. \end{lemma} \begin{proof} By assumption $\underline{|\sigma|}^2<2n$. Combining this with (\ref{fundamentalformformula}) gives $g(L)<1$, and hence $g(L)\leq 0$ since $g(L)$ is an integer. However, in our situation $L$ is ample (indeed very ample) and so Theorem \ref{Fujita1} tells us that $g(L)\geq0$.
Therefore the only possibility is that $g(L)=0$, which, again by Theorem \ref{Fujita1}, is equivalent to $\Delta(L)=0$. This enables us to apply Fujita's classification result, Theorem \ref{Fujita2}, to conclude that the pair $(M,L)$ in question must be isomorphic to one of the four cases claimed in Theorem \ref{secondresult}. \end{proof} \begin{lemma}The claims in the four cases on the second fundamental form $\sigma$ and the degree $d(M)$ are true. \end{lemma} \begin{proof} First note that the formula (\ref{fundamentalformformula}) reduces to \be\label{newfundamentalformformula} \underline{|\sigma|}^2=2n \big[1-\frac{1}{d(M)}\big]\ee since $g(L)=0$, as discussed in the above lemma. Case $(1)$ is clear. For Case $(2)$, $d(M)=\big[\mathcal{O}_{Q^n}(1)\big]^n=2$ and so (\ref{newfundamentalformformula}) implies that $\underline{|\sigma|}^2=n$. In fact, it is standard that the scalar curvature in this case is $S_g\equiv n^2$ (cf. \cite[p. 82]{Og2}), and so $|\sigma|^2\equiv n$ pointwise via (\ref{4}). Case $(3)$ is clear. For Case $(4)$, the degree satisfies (\cite[p. 7]{EH1}) $$d\big(S(a_1,\ldots,a_n)\big)=\big[\mathcal{O}(1)\big]^n=\sum_{i=1}^na_i$$ and so (\ref{norm of scroll}) follows from (\ref{newfundamentalformformula}). \end{proof} \begin{lemma}The lower bounds of the codimension $r$ are sharp and exactly attained by the Kodaira maps of these very ample line bundles. \end{lemma} \begin{proof} Let $(M,L)$ be any pair in Theorem \ref{secondresult} and $r_m$ the desired \emph{minimal} codimension. This means that there exists an embedding $M\overset{i}{\hooklongrightarrow}\mathbb{P}^{n+r_m}$ with $i^{\ast}\big(\mathcal{O}_{\mathbb{P}^{n+r_m}}(1)\big)=L$ and $M$ is not contained in any hyperplane of $\mathbb{P}^{n+r_m}$. A general fact tells us that in this case the codimension $r_m\leq L^n-1$ (\cite[Prop. 0]{EH1}). However, in our four cases the pairs $(M,L)$ exactly satisfy $r_m=L^n-1$ and are called smooth projective varieties of \emph{minimal degree} (\cite[Thm.
1]{EH1}), a fact that goes back to del Pezzo and Bertini; a modern treatment is presented in \cite{EH1}. It suffices to show that the codimension of the Kodaira map induced by the very ample line bundle $L$ is exactly $L^n-1$. Indeed, the Kodaira map of $(M,L)$ is of the following form (\cite[p. 176]{GH}): \be M\hooklongrightarrow \mathbb{P}\big(H^0(M,L)^{\ast}\big) \cong\mathbb{P}^{\text{dim}_{\mathbb{C}}H^0(M,L)-1}.\nonumber\ee Note that for these $L$ we have $\Delta(L)=0$ and so (\ref{delta genus}) tells us that $$\text{dim}_{\mathbb{C}}H^0(M,L)-1=n+L^n-1$$ and hence the codimension is exactly $L^n-1$. This completes the proof of the lemma and hence the whole proof of Theorem \ref{secondresult}. \end{proof} \section{Proof of Corollaries \ref{thirdresult} and \ref{fifthresult}}\label{section4.3} \subsection{Proof of Corollary \ref{thirdresult}} It suffices to show that the degree of $M$ in question is $1$ or $2$, respectively. \begin{proof} The inequality (\ref{inequality1}) is equivalent to \be\label{inequality1-equiv}\underline{|\sigma|}^2<\frac{2n} {d(M)}\ee as (cf. \cite[Lemma 5]{LZ}) \begin{eqnarray} \left\{ \begin{array}{ll} |\sigma|^2_{L^2}=\underline{|\sigma|}^2\cdot\text{Vol}(M)\\ ~\\ \text{Vol}(M)=d(M)\cdot\text{Vol}(\mathbb{P}^n).\\ \end{array} \right.\nonumber \end{eqnarray} Assume that the inequality (\ref{inequality1-equiv}) holds. This, together with (\ref{fundamentalformformula}), yields \be\label{1}d(M)+g(L)<2.\ee Again by Theorem \ref{Fujita1} the ampleness of $L$ implies that $g(L)\geq0$ and so the only solution to (\ref{1}) is $$\big(d(M),g(L)\big)=(1,0)$$ and so $M$ is isomorphic to the $n$-dimensional hyperplane $\mathbb{P}^n$. If the equality case in (\ref{inequality1-equiv}) holds, then $d(M)+g(L)=2$, whose only solution is $$(d(M),g(L))=(2,0),$$ from which the result follows.
\end{proof} \subsection{Proof of Corollary \ref{fifthresult}} Note that the degree of the rational normal scrolls in Theorem \ref{secondresult} is $$d\big(S(a_1,\ldots,a_n)\big)=\sum_{i=1}^na_i\geq n,$$ with the equality achieved by $S(\underbrace{1,\ldots,1}_{n})$. Accordingly their $\underline{|\sigma|}^2$, via (\ref{norm of scroll}), satisfies $$\underline{\big|\sigma\big(S(a_1,\ldots,a_n)\big)\big|}^2\geq 2n-2,$$ also with the equality achieved by $S(1,\ldots,1)$. It suffices to show that $$\big|\sigma\big(S(1,\ldots,1)\big)\big|^2\equiv 2n-2.$$ Indeed, in this case the embedding of $S(1,\ldots,1)$ into $\mathbb{P}^{2n-1}$ induced by the Kodaira map of $L$ is the famous \emph{Segre embedding} (\cite[p. 52-53]{EH2}): $$S(1,\ldots,1)\cong\mathbb{P}^{n-1} \times\mathbb{P}^1\hooklongrightarrow\mathbb{P}^{2n-1}.$$ Furthermore the induced metric on $\mathbb{P}^{n-1}\times\mathbb{P}^1$ from $\mathbb{P}^{2n-1}$ is isometric to the product metric of the Fubini-Study metrics of constant holomorphic sectional curvature $1$ (\cite[p. 401]{CR}), say $(g_1,g_2)$. Recall that the (constant) scalar curvature of the Fubini-Study metric of constant holomorphic sectional curvature $1$ on $\mathbb{P}^n$ is $n(n+1)$. So the scalar curvature of the product metric is $$S_{(g_1,g_2)} =S_{g_{1}}+S_{g_{2}}=(n-1)n+2=n^2-n+2$$ and thus by (\ref{4}) we have $|\sigma|^2\equiv 2n-2$. This completes the proof of Corollary \ref{fifthresult}.
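The arithmetic in this last step can be checked symbolically. The sketch below only verifies the algebra $n(n+1)-\big((n-1)n+2\big)=2n-2$, using the stated scalar-curvature values $S_{\mathbb{P}^m}=m(m+1)$ and relation (\ref{4}); it proves nothing beyond the bookkeeping.

```python
import sympy as sp

n = sp.symbols('n', positive=True)
# Scalar curvature of the product metric on P^{n-1} x P^1, using
# S_{P^m} = m(m+1) for the Fubini-Study metric of holomorphic
# sectional curvature 1:
S_product = (n - 1) * n + 1 * 2
# Relation (4): |sigma|^2 = n(n+1) - S_g, for the n-dimensional product.
sigma_sq = n * (n + 1) - S_product
assert sp.simplify(sigma_sq - (2 * n - 2)) == 0
```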
\section{Introduction} The vertically integrated pathway of topographically steered, rotating flows across finite amplitude topography can be understood from integral constraints on the potential vorticity (PV) of the fluid \citep{yang2000water, yang2007potential}, where topographic contours represent free streamlines and PV dissipation associated with boundary currents is balanced by net advective PV fluxes that arise from cross-slope transport. Thus, the expected circulation of a throughflow across a ridge is that of an anticyclonic western boundary current as the flow circulates from the deep ocean onto the ridge crest, and a cyclonic, eastern boundary current (pseudo-westward intensification) as the flow circulates into deeper water downstream from the ridge. Along the ridge crest, the flow is expected to follow $f/H$ contours, thus closing the circulation by connecting the two boundary current systems. Exchanges of watermass properties between ocean basins can be modeled as topographically steered, forced flows that are constrained laterally by abrupt coastal shelves, such as the flow of North Atlantic waters across the Iceland-Faroe Ridge \citep{hansen2003iceland, osterhus2005measured,hansen2008inflow,hansen2010stability}, and even exchanges between marginal seas like the throughflow in the South China Sea \citep{qu2006south,qu2009introduction}. An overlooked component of boundary currents (and therefore throughflows) is their vertical baroclinic structure, in particular the secondary ageostrophic circulation associated with the bottom boundary layer \citep{pedlosky1968overlooked}. Depending on the orientation of the mean (interior) flow with respect to $f/H$ contours, bottom friction can lead to a cross-slope Ekman transport that rapidly modifies the stratification within the boundary layer, and that of the interior flow if it results in a convectively unstable stratification \citep{condie1995descent, waahlin2001downward,brink2010buoyancy}.
If the along-slope interior flow experiences little variability and the transit time along the ridge is sufficiently long, bottom friction can lead to an Ekman arrest, in which the Ekman transport promotes a horizontal buoyancy gradient in thermal wind balance such that the geostrophic shear reduces the effect of the bottom boundary on the mean interior flow via \textit{buoyancy shutdown} \citep{maccready1991buoyant, trowbridge1991asymmetric,maccready1993slippery, ruan2016bottom}. Furthermore, baroclinicity within a \textit{thick} bottom boundary layer can lead to baroclinic instability, and thus to a rapid restratification of the bottom boundary layer \citep{wenegrat2018submesoscale}. Lastly, diapycnal mixing at the bottom can drive frictional flow up the slope by weakening the stratification, thus promoting a stable boundary layer \citep{wunsch1970oceanic, phillips1970flows,thorpe1987current, benthuysen2012friction}. In the case of throughflows across finite amplitude ridge topography with channel geometry, the flow can experience significant along-stream variability as it transits the ridge. Since the transit time is not infinite, it is not apparent whether the along-slope flow will experience Ekman arrest (buoyancy shutdown). Nonetheless, Ekman dynamics within a thick boundary layer can modify the upper interior ocean through rapid re- or de-stratification, resulting in high or low bottom PV anomalies, respectively. Then, depending on the resulting localized distribution of the PV within the bottom boundary layer, the mean flow or baroclinic eddies can advect such PV anomalies away into the interior. Such non-local redistribution by eddies has already been explored in idealized studies of subduction in surface fronts \citep{spall1995frontogenesis, manucharyan2013generation} and shelf break boundary currents \citep{spall2008western}, although in these studies bottom boundary layer dynamics are largely unresolved and the flows are not equilibrated.
How bottom boundary layer control may come about in topographically steered throughflows is unclear, particularly when the mean flow experiences significant along-stream changes as it navigates the ridge. That is, along-stream variability can lead to localized patterns of bottom diapycnal mixing, while advective fluxes can redistribute PV, leading to non-local effects. In this study we investigate the effects associated with baroclinic eddies and bottom boundary layer dynamics on the throughflow across a finite amplitude ridge using a combination of theory and numerical simulations of the primitive equations. In particular, we analyze the Ertel PV of the fluid, a dynamically active tracer with conservation properties that incorporate both thermodynamic and dynamical aspects of the flow \citep{haynes1987evolution, haynes1990conservation}. What makes the PV of the fluid ideal for this purpose is its ability to describe density and mean-flow anomalies as \textit{high} or \textit{low} PV with respect to background values, which is useful when examining the PV along the path of the flow (streamlines) or along isentropes (surfaces of constant temperature). In an accompanying paper, we study the effect these processes have on the vertically integrated circulation. The outline of the paper is as follows: In section \ref{secPV_theory} we derive expressions for boundary PV fluxes that generalize the integral PV balance, including a term associated with the presence of bottom topography. We split the topographic PV terms in the integral balance into those evaluated at lateral walls and those along $f/H$ contours.
In section \ref{model_conf} we outline the model configuration for all simulations, and then in section \ref{results} we present the main results beginning with a description of the vertical structure of the mean fields associated with equilibrated simulations, maps of PV along isentropes and flow fields along streamlines, in order to interpret changes the throughflow experiences as it navigates the ridge. We then present an integral PV balance, i.e. boundary PV sources to complement the previous analysis. Lastly, we summarize our findings and conclusions in section \ref{d_and_c}. \section{Boundary sources of potential vorticity}\label{secPV_theory} In the primitive equations, the conservation of (Ertel) potential vorticity $q=\bm{\omega}_{a}\cdot\nabla b$ in flux form is given by \begin{equation}\label{PV_eqn} \frac{\partial q}{\partial t} = -\nabla\cdot\mathbf{J} \end{equation} where $\bm{\omega}_{a}=(-v_{z},u_{z}, f+\zeta)$ is the absolute vorticity with $\mathbf{u}=(u,v,w)$ the 3D velocity, $\zeta=v_{x}-u_y$ the relative vorticity and $b=g\alpha T$ the buoyancy of the fluid (where we have assumed a linear equation of state, $T$ being the temperature). The PV flux vector can be written in two equivalent forms \begin{equation}\label{Jvector} \mathbf{J}=\underbrace{\mathbf{u}q}_{\mathbf{J_{A}}}+ \underbrace{\nabla b\times\mathbf{F}}_{\mathbf{J_{NA}}}= \nabla b\times\nabla B + \nabla b \times\left(\frac{\partial\mathbf{u}_{h}}{\partial t} \right) - \bm{\omega}_{a}\frac{\partial b}{\partial t} \end{equation} $B=(1/2)|\mathbf{u}|^2+g(\eta-z)+p/\rho_{0}$ is the Bernoulli potential, where $\eta$ is the surface elevation, $p$ the fluid pressure due to stratification, and $\mathbf{F}=\mathbf{F}_{h}+\rho^{-1}_{0}\partial_{z}\bm{\tau}$ represents dissipation of horizontal momentum, with $\mathbf{F}_{h}$ being the viscous, harmonic horizontal dissipation of momentum. 
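As a concrete illustration of the definition $q=\bm{\omega}_{a}\cdot\nabla b$, the following sketch evaluates $q$ by finite differences for an idealized solid-body flow in linear stratification, for which $q=(f+2\Omega)N^2$ exactly. All parameter values and grid sizes are arbitrary; this is a check of the formula, not of any simulation in this paper.

```python
import numpy as np

f, Omega, N2 = 1e-4, 2e-5, 1e-5       # Coriolis, rotation rate, N^2
x = y = np.linspace(-5e4, 5e4, 64)
z = np.linspace(-1e3, 0.0, 32)
Z, Y, X = np.meshgrid(z, y, x, indexing='ij')

u = -Omega * Y                         # solid-body flow: zeta = 2*Omega
v = Omega * X
b = N2 * Z                             # linear stratification

dz, dy, dx = z[1] - z[0], y[1] - y[0], x[1] - x[0]
v_z = np.gradient(v, dz, axis=0)
u_z = np.gradient(u, dz, axis=0)
zeta = np.gradient(v, dx, axis=2) - np.gradient(u, dy, axis=1)
b_x = np.gradient(b, dx, axis=2)
b_y = np.gradient(b, dy, axis=1)
b_z = np.gradient(b, dz, axis=0)

# Ertel PV: q = omega_a . grad(b), with omega_a = (-v_z, u_z, f + zeta)
q = -v_z * b_x + u_z * b_y + (f + zeta) * b_z

assert np.allclose(q, (f + 2 * Omega) * N2)
```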
The two equivalent formulations of the PV flux vector $\mathbf{J}$ in (\ref{Jvector}) allow complementary interpretations: $\mathbf{J}$ can be decomposed into an advective and a dissipative flux contribution \citep{haynes1987evolution, haynes1990conservation}, and so in the absence of dissipative or diabatic processes, the evolution of (Ertel) potential vorticity is given entirely by the divergence of advective fluxes. Furthermore, the advective PV flux can, in the presence of eddying variability, be split into mean and eddy fluxes, $\textit{i.e.}$ $\mathbf{J}_{A}=\overline{\mathbf{u}}\overline{q} + \overline{\mathbf{u}'q'}$, where the overbar $\overline{()}$ represents time-mean, and primed variables $()'$ represent the deviation (eddy) from such mean. Lastly, if the flow is steady (last two terms in (\ref{Jvector}) are zero), the intersections of surfaces of constant Bernoulli potential and surfaces of constant buoyancy/temperature represent streamlines for the total potential vorticity flux \citep{schar1993generalization, bretherton1993flux}. In the quasigeostrophic limit, the intersection of temperature/buoyancy surfaces with bottom topography can be represented as PV-delta sheets \citep{bretherton1966critical,rhines1979geostrophic}, which can be tapped by the circulation and be advected by the mean flow, leading to downstream formation of cyclonic anomalies \citep{hallberg2000boundary,schneider2003boundary}. In what follows, we derive a local flux formulation of the topographic PV flux in primitive equations that is general enough that it can incorporate dynamics beyond QG, and allow us to diagnose the topographic PV flux that is consistent with a volume integrated \textit{Eulerian} Ertel PV balance. 
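The mean/eddy splitting $\mathbf{J}_{A}=\overline{\mathbf{u}}\,\overline{q}+\overline{\mathbf{u}'q'}$ is simply the Reynolds decomposition of the advective flux; a minimal sketch with synthetic, randomly generated records illustrates the identity for a discrete time mean (the numbers carry no physical meaning).

```python
import numpy as np

rng = np.random.default_rng(0)
u = 0.3 + 0.1 * rng.standard_normal(10_000)   # synthetic velocity record
q = 2.0 + 0.5 * rng.standard_normal(10_000)   # synthetic PV record

u_bar, q_bar = u.mean(), q.mean()
u_p, q_p = u - u_bar, q - q_bar               # eddy (primed) parts

# Reynolds identity: mean(u q) = mean(u) mean(q) + mean(u' q')
lhs = (u * q).mean()
rhs = u_bar * q_bar + (u_p * q_p).mean()
assert np.isclose(lhs, rhs)
```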
Using the Bernoulli formulation of the PV flux vector in (\ref{Jvector}), the topographic PV flux can be written as the sum of two contributions as follows \begin{equation}\label{two_decomp} \overline{\mathbf{J}}\cdot\mathbf{n}_{bot} = \overline{\mathbf{J}}^t\cdot\mathbf{n}_{bot}+\overline{\mathbf{J}}^s\cdot\mathbf{n}_{bot} \end{equation} where the superscripts $t$ and $s$ differentiate between the fluxes that explicitly depend on the \textit{tendency} of flow variables (the last two terms in (\ref{Jvector})), and those that do not. Here $\mathbf{n}_{bot}=(0,-dh/dy,-1)$ is the local, outward-pointing normal vector to the bottom topography. The terms in (\ref{two_decomp}) are \begin{equation}\label{Jtbot} \overline{J}^t_{bot}=\mathbf{n}_{bot}\cdot\left[\overline{\nabla b\times\left(\frac{\partial\mathbf{u}_{h}}{\partial t}\right)} - \overline{\bm{\omega_{a}}\frac{\partial b}{\partial t}}\right]\bigg\rvert_{z=-h} \end{equation} and \begin{equation}\label{Jsbot} \overline{J}^s_{bot}=\mathbf{n}_{bot}\cdot\left(\overline{\nabla b\times\nabla B}\right)\big\rvert_{z=-h} \end{equation} In the presence of finite-amplitude topography, linearization of the bottom boundary condition is no longer possible. We therefore introduce a $\sigma$-coordinate transformation to represent a horizontal (\textit{i.e.} meridional) derivative along terrain-following coordinates \citep{shchepetkin2005regional}. The terrain-following derivative is \begin{equation}\label{sigma_transform} \frac{\partial (\cdot)}{\partial y}\bigg\rvert_{\sigma=-1} = \bigg(1-\gamma_{(\cdot)}\bigg)\frac{\partial (\cdot)}{\partial y}\bigg\rvert_{z=-h} \end{equation} where $\sigma=\sigma(\eta,h)$ is the terrain-following coordinate with a nonlinear \textit{stretching} function, such that $\sigma=0$ corresponds to $z=\eta(x,y,t)$ and $\sigma=-1$ to $z=-h(y)$.
The left hand side of (\ref{sigma_transform}) represents an along-bottom derivative of a scalar function denoted by $(\cdot)$, where $\gamma_{(\cdot)}=(dh/dy)\big/((\cdot)_{y}/(\cdot)_{z})$ is the ratio of the topographic slope to the local slope of the scalar $(\cdot)$, and vanishes over a flat bottom. When acting on buoyancy ($\gamma_{b}$), $\gamma_{(\cdot)}$ is the \textit{slope stability} parameter that measures the effect of the sloping bottom on baroclinic instability \citep{blumsack1972mars, lozier2005influence, isachsen2011baroclinic}. Given the transformation (\ref{sigma_transform}), the (time-mean) contribution of (\ref{Jtbot}) is \begin{equation}\label{Jtbot3} \overline{J}^{t}_{bot}=\underbrace{-\mathbf{\hat{k}}\cdot\left(\overline{\nabla_{\sigma}b\times\frac{\partial\mathbf{u}_{h}}{\partial t}}\right)\bigg|_{\sigma=-1}}_{\overline{J}^{t}_{b1}}+\underbrace{\bigg[\overline{\big(f+\zeta_{\sigma}\big)\frac{\partial b}{\partial t}}\bigg]\bigg|_{\sigma=-1}}_{\overline{J}^{t}_{b2}} \end{equation} where $\mathbf{\hat{k}}$ is the unit vector perpendicular to depth surfaces. Thus $\overline{J}^{t}_{bot}=0$ either when the flow variables are steady, or when surfaces of constant buoyancy (temperature) do not intersect the bottom ($\nabla_{\sigma}b=0$, so $\overline{J}^{t}_{b1}=0$) and the along-bottom relative vorticity satisfies $\zeta_{\sigma}=-f$ (so $\overline{J}^{t}_{b2}=0$).
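A minimal sketch of the terrain-following derivative in (\ref{sigma_transform}) follows; the uniform-gradient test fields are assumptions chosen so the result can be checked analytically against $b_{y}-b_{z}\,dh/dy$.

```python
import numpy as np

# Sketch of the along-bottom (terrain-following) derivative of a scalar b:
# db/dy|_sigma = (1 - gamma_b) * db/dy|_z, with gamma_b = (dh/dy)/(b_y/b_z).
# Input names and the pointwise (scalar) evaluation are assumptions.

def along_bottom_dy(b_y, b_z, dhdy):
    """All inputs evaluated at z = -h(y)."""
    gamma = dhdy / (b_y / b_z)       # slope-ratio parameter gamma_b
    return (1.0 - gamma) * b_y       # equals b_y - b_z * dh/dy

# Uniform gradients: b = N2*z + M2*y over a bottom slope dh/dy = s.
N2, M2, s = 1e-5, 2e-7, 5e-3
db_sigma = along_bottom_dy(b_y=M2, b_z=N2, dhdy=s)
print(np.isclose(db_sigma, M2 - N2 * s))  # True
```

The identity $(1-\gamma_{b})b_{y}=b_{y}-b_{z}\,dh/dy$ makes explicit that the transformation simply follows the scalar along $z=-h(y)$.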
The contribution to the net topographic PV flux by the second term in (\ref{two_decomp}) can be written as \begin{equation}\label{sJbot2} \overline{J}^{s}_{bot}=\bigg[\overline{B_{x}b_{y}\left(1- \gamma_{b} \right)} - \overline{b_{x}B_{y}\left(1-\gamma_{B}\right)}\bigg]_{z=-h(y)} \end{equation} or, using $\sigma$-coordinates, \begin{equation}\label{sigmaPVbot} \overline{J}^s_{bot}=\overline{\dfrac{\partial_{\sigma}(B,b)}{\partial(x,y)}}\bigg\rvert_{\sigma=-1} \end{equation} where the subscript $\sigma$ on the Jacobian term $\partial_{\sigma}(\cdot,*)/\partial(x,y)$ indicates that derivatives are taken along sigma (terrain-following) coordinates. We now introduce the functional $\Gamma_{b}=\Gamma[b;y,z]$ acting on the scalar function $b$ (although it can act on any scalar function), defined as \begin{equation} \Gamma_{b}=\left(\dfrac{\partial b}{\partial y}\bigg|_{\sigma=-1}\right)\bigg/ \left(\dfrac{dh}{dy}\right) \end{equation} The functional $\Gamma_{b}$ represents the ratio of the along-bottom northward gradient (\textit{i.e.} along the $\sigma=-1$ surface) to the topographic slope, where we have used the transformation (\ref{sigma_transform}) for compact notation. With this, the expression in (\ref{sJbot2}) is written compactly as \begin{equation}\label{sJbot2n} \overline{J}^{s}_{bot}=\bigg[\overline{B_{x}\Gamma_{b}} -\overline{b_{x}\Gamma_{_{B}}}\bigg]\bigg\rvert_{z=-h(y)}\frac{dh}{dy} \end{equation} We now decompose $\overline{J}^s_{bot}$ into contributions from (lateral) boundaries (\textit{i.e.} evaluated where topography intersects the lateral boundaries) and terms along $f/H$ contours.
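The steady contribution (\ref{sigmaPVbot}) reduces to a horizontal Jacobian of the bottom Bernoulli potential and buoyancy, which can be sketched as follows (the 2D bottom fields and grid spacings are assumptions). Note that the flux vanishes identically when $B$ is a function of $b$ alone, i.e. when Bernoulli and buoyancy surfaces coincide, consistent with \citet{schar1993generalization}.

```python
import numpy as np

# Sketch of the steady topographic PV flux, J^s_bot = d(B, b)/d(x, y)
# evaluated along the bottom. B_bot and b_bot are assumed 2D (y, x)
# fields already sampled on the sigma = -1 surface.

def jacobian_xy(B_bot, b_bot, dx, dy):
    B_x = np.gradient(B_bot, dx, axis=1); B_y = np.gradient(B_bot, dy, axis=0)
    b_x = np.gradient(b_bot, dx, axis=1); b_y = np.gradient(b_bot, dy, axis=0)
    return B_x * b_y - b_x * B_y

# If B is a function of b alone (B = F(b)), the Jacobian vanishes:
y, x = np.meshgrid(np.arange(6) * 2e3, np.arange(5) * 1e3, indexing="ij")
b_bot = 1e-6 * y + 2e-6 * x
B_bot = 3.0 * b_bot + 1.0
Js = jacobian_xy(B_bot, b_bot, dx=1e3, dy=2e3)
print(np.allclose(Js, 0.0))  # True
```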
Then, the time-mean net topographic PV flux $\overline{J}_{bot}$, given by the sum in (\ref{two_decomp}), is \begin{equation}\label{sJbot4n} \overline{J}_{bot}= \Delta_{x}\overline{J}^{*}_{bot} + \int\int\left(\overline{b_{bot}\frac{\partial\Gamma_{_{B}}}{\partial x}}-\overline{B_{bot}\frac{\partial\Gamma_{b}}{\partial x}}\right)\bigg\rvert_{z=-h}\frac{dh}{dy}dxdy + \int\int \overline{J}^{t}_{bot}dxdy \end{equation} where $\Delta_{x}(\cdot) = (\cdot)_{x=L} - (\cdot)_{x=0}$ is the cross-channel difference of $(\cdot)$ (\textit{i.e.} the difference between the evaluations at the eastern and western walls). The first term on the RHS of (\ref{sJbot4n}) is associated with effects on the lateral walls (\textit{e.g.} form stresses) and vanishes in a zonally periodic domain. The second term in (\ref{sJbot4n}) is associated with changes of buoyancy, pressure and kinetic energy across and along $f/H$ contours, while the third term is associated with the $\textit{tendency}$ terms in (\ref{Jtbot3}). The first term on the RHS of (\ref{sJbot4n}) can be expanded as follows: \begin{equation}\label{Jbot1} \Delta_{x}\overline{J}^{*}_{bot} = \Delta_{x}\overline{\mathcal{F}^{BT}_{_{\Gamma}}} + \Delta_{x}\overline{\mathcal{F}^{BC}_{_{\Gamma}}}+ \Delta_{x}\overline{\mathcal{E}_{_{\Gamma}}}-\Delta_{x}\overline{\mathcal{S}_{_{{\Gamma}}}} \end{equation} where \begin{eqnarray}\label{Lateral_PV_terms} \mathcal{F}^{BT}_{_{\Gamma}} & = &\int_{0}^{M}(g\eta\Gamma_{b})\big\rvert_{z=-h}\frac{dh}{dy}dy\nonumber\\ \mathcal{F}^{BC}_{_{\Gamma}} & = & \int_{0}^{M}\left(\frac{p\Gamma_{b}}{\rho_{0}}\right)\bigg\rvert_{z=-h}\frac{dh}{dy}dy \nonumber\\ \mathcal{E}_{_{\Gamma}} & = & \frac{1}{2}\int_{0}^{M}(\Gamma_{b}v^2)\big\rvert_{z=-h}\frac{dh}{dy}dy \nonumber\\ \mathcal{S}_{_{\Gamma}} & = & \int_{0}^{M} (b\Gamma_{_{B}})\big\rvert_{z=-h}\frac{dh}{dy}dy\nonumber \end{eqnarray} The first three terms in (\ref{Lateral_PV_terms}) are associated with the bottom Bernoulli potential, and in particular the first two integral terms
represent weighted form stresses (the weight being $\Gamma_{b}$) associated with flow separation from the lateral walls. Each integral in (\ref{Lateral_PV_terms}) vanishes when the integrand is symmetric with respect to the crest of the (symmetric) ridge. However, it is the difference between the wall contributions (\ref{Jbot1}) that results in net PV fluxes.\\ Our formulation of net topographic PV fluxes (\ref{sJbot4n}) shows the role of form stresses associated with lateral boundary currents in promoting local (and net) PV fluxes (\ref{Jbot1}), as well as the potential role of along-slope variations of buoyancy and bottom Bernoulli potential in promoting local and integral PV fluxes (the second integral terms in \ref{sJbot4n}). Along-slope variations of buoyancy and bottom Bernoulli potential are associated with viscous dissipation, ageostrophic flows and eddies. The questions of which boundary PV terms make the largest contribution, where the maxima and minima lie along $f/H$ contours, and what effect these fluxes have on the mean fields associated with an equilibrated throughflow past finite-amplitude topography are examined in subsequent sections. \section{Model Configuration}\label{model_conf} We use the Regional Ocean Modeling System (ROMS), which solves the primitive equations using a terrain-following vertical coordinate \citep{shchepetkin2005regional}. The rectangular channel domain has dimensions $L=250$km and $M=1000$km in the along-ridge ($x$) and across-ridge ($y$) directions. Bottom topography is characterized by a (symmetric) ridge $h=h(y)$ that intersects the lateral walls, with a minimum channel depth of $500$m at the crest ($h_{max}=500$m) and a maximum channel depth of $H_{0}=1000$m. The resolution of the vertical grid varies as a function of the distance from the ridge and from the top and bottom boundaries.
We use 40 vertical levels that allow for a maximum resolution of $dz\approx3\;$m near the sea surface and ocean bottom over the ridge crest, and a minimum resolution of $dz=32\;$m at intermediate depths, away from ridge topography and vertical boundaries. The horizontal resolution is constant, with $\Delta x=1\;$km and $\Delta y=2\;$km. The Coriolis parameter is kept constant at $f=1.25\times10^{-4}s^{-1}$, and the model is forced by a uniform lateral volume inflow of $Q_{in}\approx3\;$Sv (1 Sv=$10^6 m^{3}s^{-1}$) and an outflow $Q_{out}$ of equal magnitude. A sponge layer is applied within the nudging region to further bring the boundary condition to that of the imposed inflow/outflow. Furthermore, the inflow and outflow are relaxed towards a background stratification $N_{0}^2= 1\times10^{-5}s^{-2}$ with a 5-day relaxation timescale that decays linearly within 10 grid points of the open boundaries. The combination of volume flux, sponge layer and relaxation ensures that the flow is forced by a (purely) advective PV flux $\mathbf{J}_{_{A}}= N_{0}^2v_{0}f \mathbf{\hat{j}}\approx1.5\times10^{-11}ms^{-4}$ at both the inflow and outflow locations (see Fig \ref{fig:diagram}). The model dissipates momentum laterally via a harmonic viscous term with an amplitude defined by the coefficient $A_{_{H}}$. The model employs a quadratic bottom drag with drag coefficient $C_{D}=2.5\times10^{-3}$, and we implement the KPP parameterization for (nonlocal) vertical mixing \citep{large1994oceanic}. We vary $A_{_{H}}$ for each experiment, which has the effect of varying the width of the lateral boundary currents given by the scaling $\lambda_{_{M}}=(A_{_{H}}/\beta_{_{T}})^{1/3}$, where $\beta_{_{T}}=(f/h)dh/dy$ is the topographic beta (see Table \ref{table_one}). We measure the nonlinearity of the (lateral) boundary currents through the boundary current Reynolds number $Re=V_{_{max}}\lambda_{M}/A_{_{H}}$, a measure of the relative role of inertial and viscous dynamics.
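For reference, the boundary-current scalings just introduced can be evaluated with a short script; the parameter values below are assumptions for illustration and are not the entries of Table \ref{table_one}.

```python
import numpy as np

# Hypothetical estimate of the lateral boundary-current width lambda_M and
# boundary-current Reynolds number Re from the scalings in the text:
# lambda_M = (A_H / beta_T)^(1/3), beta_T = (f / h) * dh/dy,
# Re = V_max * lambda_M / A_H.

f = 1.25e-4             # s^-1, Coriolis parameter (from the model setup)
h, dhdy = 750.0, 5e-3   # m, local depth and topographic slope (assumed)
A_H = 100.0             # m^2/s, harmonic viscosity (assumed)
V_max = 0.5             # m/s, boundary-current speed (assumed)

beta_T = (f / h) * dhdy
lam_M = (A_H / beta_T) ** (1.0 / 3.0)
Re = V_max * lam_M / A_H
print(f"lambda_M = {lam_M / 1e3:.1f} km, Re = {Re:.0f}")
```

With these assumed values the width is a few kilometers and $Re\sim O(10)$; the simulations in Table \ref{table_one} span a range of such regimes.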
While we considered simulations with different values of $N_0^2$, as well as different topographic heights ($h_{max}=350$m and $h_{max}=650$m), we observed no significant dynamical differences, owing to the strongly barotropic nature of the simulations. The analysis takes place $100$km away from the nudging regions, which are localized to the northern and southern open boundaries. We consider both free-slip and no-slip tangential boundary conditions, resulting in flow configurations that either flux momentum laterally across the walls (no-slip) or do not (free-slip). We also consider unstratified reference simulations, but these are analyzed only in the accompanying paper, which examines the vertically integrated circulation (see Table \ref{table_one} for parameters associated with mean flow circulation across large amplitude topography). Each simulation is run for 7 years, of which the first 12 months represent the spin-up, a relatively short time due to the strongly barotropic nature of the simulations. Given the potential role of the bottom boundary layer on the along-slope flow far away from the influence of lateral boundary currents, we introduce the relevant parameters that arise in two-dimensional, (vertically) semi-infinite bottom boundary layer theory (see Table \ref{table_two}). A parameter of central importance for along-slope flows is the (\textit{small angle}) slope Burger number $S\approx (N\theta/f)^2$, where $\theta\approx dh/dy$. The timescale at which buoyancy forces balance an (initial) up/down-slope Coriolis force is given by the buoyancy shutdown timescale, defined as $\mathcal{T}_{shut}=Pr^{-1}S^{-2}f^{-1}$, valid for $S\ll1$ (here $Pr\approx1$). Similarly, the stratified frictional spindown time scale is given by $\mathcal{T}_{spin}=E^{-1/2}f^{-1}$, where $E=2A_{\nu}/fH^2_{p}$ is the Ekman number and $H_{p}=fL/2\pi N$ the Prandtl depth, the depth scale over which anomalies penetrate into the interior.
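A sketch of these boundary-layer timescale estimates follows, using assumed parameter values rather than those of Table \ref{table_two}; near the crest the slope $\theta$ is small, so $\mathcal{T}_{shut}\gg\mathcal{T}_{spin}$, consistent with the regime discussed in the text.

```python
import numpy as np

# Hypothetical evaluation of the slope Burger number and boundary-layer
# timescales from the text: S = (N*theta/f)^2, T_shut = S^-2/(Pr*f),
# T_spin = E^-1/2/f, E = 2*A_nu/(f*H_p^2), H_p = f*L/(2*pi*N).
# All parameter values below are assumptions for illustration.

f = 1.25e-4                      # s^-1
N = np.sqrt(1e-5)                # s^-1, from N0^2 = 1e-5 s^-2
theta = 1e-3                     # bottom slope near the crest (assumed)
A_nu, L, Pr = 1e-3, 1e5, 1.0     # m^2/s, m, dimensionless (assumed)

S = (N * theta / f) ** 2
H_p = f * L / (2 * np.pi * N)
E = 2 * A_nu / (f * H_p ** 2)
T_shut = 1.0 / (Pr * S ** 2 * f)
T_spin = 1.0 / (np.sqrt(E) * f)
print(f"S = {S:.1e}, T_shut/T_spin = {T_shut / T_spin:.1e}")
```

Even with these rough numbers the separation of timescales is several orders of magnitude, so frictional spindown dominates over buoyancy shutdown near the crest.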
The inertial and diffusive time scales are expressed as $\mathcal{T}_{inertial}= f^{-1}$ and $\mathcal{T}_{diffusive}=E^{-1} f^{-1}$. An additional time scale to consider is the \textit{transit} time scale along $f/H$ contours, defined as an advective time scale $T_{adv}=L_{w}/\overline{U}$, where $L_{w}$ is a lateral scale along the ridge away from the influence of lateral boundary currents, and $\overline{U}$ a scale of the velocity field there. Using $L_{w}\approx 100$km and a typical velocity $\overline{U}\approx 0.3m/s$, $T_{adv}\approx4$days (see Table \ref{table_two}). The diffusive boundary layer has a thickness of $\delta_{T}=\left(2A_{\kappa}\mathcal{T}_{diffusive}\right)^{1/2}$, whereas the bottom boundary layer thickness, defined by $\delta_{Ek}=\sqrt{2A_{\nu}/f}$ (where $A_{\nu}$ is not the background value, but rather a model output variable), can vary spatially according to patterns of enhanced vertical (turbulent) mixing and potentially due to eddy variability. Eddies that propagate along $f/H$ contours can introduce temporal variability in the thickness of the bottom boundary layer, measured by the ratio $r_{\omega}=\omega_{bbl}/f$, where $\omega_{bbl}$ is the frequency of the oscillation (eddies) within the bottom boundary layer \citep{ruan2016bottom}. \section{Results}\label{results} In this section, we analyze simulations of continuously stratified, rotating throughflows in a channel geometry, characterized by a net mean transport across finite-amplitude bottom topography. Throughout this paper, unless otherwise stated, we focus primarily on simulation $CSt_{ns}$, characterized by narrow ($\lambda_{_{M}}\approx 17$km), unstable lateral boundary currents and $Re\sim O(100)$ (see Table \ref{table_one}). Thus, the simulation best describes an equilibrated, topographically steered flow with significant along-stream variability.
At the end of this section we address the rest of the simulations, when we present the topographic PV fluxes derived in section \ref{secPV_theory}. The mean circulation is characterized by a bottom-intensified anticyclonic boundary current associated with the southward facing (decreasing $y$ direction) sloping bottom localized near the western wall ($x=0$), and a cyclonic boundary current associated with the northward facing sloping bottom localized near the eastern wall ($x=250$km). Atop the ridge the mean flow is bottom intensified with a vertical expression that reaches up to the surface, and follows along $f/H$ contours just north of the ridge crest. We examine the vertical structure of the mean flow along the path of (time) mean transport streamlines, calculated by inverting the 2D Laplacian of the (barotropic) vorticity ($\nabla^2\psi=\overline{\Omega}$, where $\Omega=\hat{\mathbf{k}}\cdot\nabla\times\langle\mathbf{u}_{h}\rangle$ and brackets represent vertical integration). While in the presence of sloping topography there is no longer a barotropic mode but instead a bottom-intensified topographic Rossby wave \citep{rhines1970edge}, the depth scale of topographic Rossby waves associated with the mean (boundary current) circulation atop the ridge is of the same order as the depth at the ridge crest, $H_{p}\sim O(500m)$. Moreover, given that atop the ridge all shear and baroclinicity are concentrated towards the bottom, the vertically integrated flow fields represent a good approximation of the dominant (throughflow scale) dynamics that are associated with topographically locked Rossby waves. The transport streamlines thus represent a good measure of the throughflow pathway. To better understand the changes experienced by the throughflow as it navigates topography, we examine flow quantities along individual streamlines (e.g.
$\psi=\{0.5,1.5, 2.5\}\;Sv$), recognizing that streamlines can diverge from each other and only approximate the path of the throughflow (Fig. \ref{fig:zero}a-b). Along the path of the mean flow atop the ridge crest, we find a laterally and temporally varying vertical (turbulent) viscosity $A_{\nu}$, a result of along-stream changes in the bottom-intensified current and of eddies that separate from the western boundary current and propagate along $f/H$ contours. As a result, the thickness of the bottom boundary layer $\delta_{Ek}=\sqrt{(2A_{\nu}/f)}$ varies along the path of the flow atop the ridge, with the greatest thickness within the vicinity of the boundary currents ($\delta_{Ek}\approx25$m). Along the $f/H$ path of the bottom-intensified flow away from boundary currents, $\delta_{Ek}\approx 10-12$m (Fig. \ref{fig:zero}c). Furthermore, baroclinic bottom-intensified eddies promote thickness changes to the boundary layer of $\delta'_{Ek}\approx\pm3$m off the mean value (see Fig. \ref{fig:zero}c, inset), with a timescale of approximately 4-5 days. The ratio of the frequency of boundary layer thickness variability to the inertial frequency is $r_{\omega}=\omega_{bbl}/f \ll1$, which means that eddies allow the bottom-intensified flow along $f/H$ contours to escape buoyancy shutdown \citep{ruan2016bottom}. Moreover, since $\mathcal{T}_{shut}\gg \mathcal{T}_{inertial}$, frictional spindown along the ridge is the dominant process, with the bottom boundary layer able to escape the buoyancy shutdown associated with the mean flow. Both the mean and eddy kinetic energy (MKE and EKE respectively) along the western boundary current are bottom intensified (Fig. \ref{fig:one}a-d, at $R\approx550$km). Their maxima are located close to each other, and decay rapidly as the flow moves along the ridge crest.
MKE and EKE associated with the cyclonic boundary current are laterally separated from each other ($\sim 150$km), with the EKE maximum occurring downstream (EKE is maximum at R$\approx1000$km downstream from the ridge, see Fig. \ref{fig:one}b,d; MKE is maximum at $R\approx850$km, see Fig. \ref{fig:one}a,c). The vertical structure of MKE and EKE along the cyclonic boundary current is puzzling (\textit{e.g.} the MKE behavior seen at R=850km in Fig. \ref{fig:one}c, and the EKE behavior at R=1000km in Fig. \ref{fig:one}d). Both are mid-depth intensified, above the isotherm $T=6.2^\circ\:C$ (Fig.\ref{fig:one}a-d). Their vertical structure decays rapidly with increasing depth and slowly towards the surface. This suggests that both MKE and EKE retain the vertical structure of a (bottom-intensified) topographic Rossby wave, a behavior expected for MKE, since it lies above the sloping bottom, but not for EKE, which lies downstream, away from sloping topography. The along-stream behavior of buoyancy production $\overline{w'b'}$, associated with the conversion of available potential energy into eddy kinetic energy by baroclinic instability, provides further insight into the vertical structure of the mean and eddying fields. Associated with the western intensified boundary current (Fig. \ref{fig:two}a, at R=550km) there is a local maximum in buoyancy production, a sign of (baroclinic) instability growth associated with the bottom-intensified boundary current. Buoyancy production shows another local maximum at intermediate depths, associated with the instability of the cyclonic boundary current ($R\approx900$km in Fig. \ref{fig:two}b). This local maximum is located below the local EKE maximum of the cyclonic boundary current and 100 km upstream of it (Fig. \ref{fig:one}d at $R=1000$ km). This suggests that eddies on the cyclonic boundary current are advected (or self-advect) roughly 100km before reaching finite amplitude.
Furthermore, buoyancy production implies growth by baroclinic instability, a mechanism that requires the \textit{relaxation} of tilting buoyancy/temperature surfaces in the across-stream direction and, thus, the presence of a cross-stream PV gradient. Lateral PV gradients can be approximated by cross-stream layer thickness gradients, and so we focus on an intermediate layer defined by two isotherms: $T_{c}=\{5.7, 6.2\}^\circ\;C$ (thick black contours in Fig. \ref{fig:two}a,b). A cross-stream thickness gradient is apparent when comparing the behavior of the isotherms in Fig. \ref{fig:two}a,b at $R=800$km along two spatially separated streamlines: $\psi=1.5$Sv (Fig. \ref{fig:two}b) and $\psi=0.5$Sv (Fig. \ref{fig:two}a). Zonal sections of the mean northward advective PV flux $\overline{\mathbf{J}}_{A}\cdot\mathbf{\hat{j}}$ further reveal the cross-stream behavior of the isotherms. We focus on 4 locations: a section along the ridge crest at $y=500$km, where the local maximum of $\overline{\mathbf{J}}_{A}$ is associated with the bottom-intensified anticyclonic boundary current (Fig.\ref{fig:three}a); a section at $y=550$km north of the ridge crest, where the local maximum of $\overline{\mathbf{J}}_{A}$ is associated with the bottom-intensified cyclonic boundary current (Fig. \ref{fig:three}b); another section upstream from the ridge at $y=200$km (Fig. \ref{fig:three}c); and lastly a section at $y=800$km, far downstream from the ridge (Fig. \ref{fig:three}d). At the section along $y=550$km, the isotherms $T_{c}=\{5.7^\circ\:C, 6.2^\circ\:C\}$ incrop towards the sloping bottom topography, with the advective PV going to zero within the area bounded by these isotherms ($z=600$m, $x=240$km in Fig. \ref{fig:three}b). The mean advective PV can be approximated as $\mathbf{\overline{J}_{A}}\approx\overline{v}\;\overline{q}\mathbf{\hat{j}}$, since baroclinic growth takes place farther downstream.
The vanishing of the mean advective PV is then associated with a vanishing mean (Ertel) potential vorticity $\overline{q}\approx0$, given the observed isothermal tilts (thus vanishing stratification) and $\overline{v}\neq0$, even within the bottom boundary layer (Fig. \ref{fig:one}a, c, e). The stratification associated with the incropping layer defined by the isotherms $T_{c}=\{5.7^\circ\:C, 6.2^\circ\:C\}$ (Fig. \ref{fig:three}b) resembles that of a mixed layer front associated with the (cross-slope) boundary current flow. This is a different configuration from the idealized quasi-2D bottom mixed layer fronts in the literature. Bottom mixed layer fronts within the bottom boundary layer with an orientation like the one observed in Fig. \ref{fig:three}b must then be associated with a net along-slope (westward) Ekman-driven transport of buoyancy/temperature anomalies, i.e. away from the eastern wall. Such Ekman transport of warm water anomalies then promotes a convectively unstable stratification, hence the formation of an incropping bottom mixed layer front. Downstream from the ridge, the layer defined by the isotherms $T_{c}=\{5.7^\circ\:C, 6.2^\circ\:C\}$ shows a vanishing advective PV flux (Fig. \ref{fig:three}d). Given the strongly barotropic nature of the basin-scale circulation downstream from the ridge, and that $\overline{J}_{A}$ does not change sign in the vertical, this implies a vanishing of the potential vorticity, $q\approx0$, within the layer defined by the $T_{c}$-isotherms. Our argument is supported by the spatial distribution of mean potential vorticity across the ridge at the zonal section $x=125$km (Fig. \ref{fig:four}a). Downstream from the ridge, the layer defined by the isotherms $T_{c}=\{5.7^\circ\:C, 6.2^\circ\:C\}$ shows a near-zero PV anomaly (when compared to the PV values above or below).
Given that PV cannot be changed within an isentropic surface \citep{haynes1987evolution,haynes1990conservation}, the observed PV distribution must then be associated with non-local advective transport. We now consider the isentropic PV (IPV) distribution along the $T_{0}=6.0^\circ\:C$ surface. The IPV shows a spread of low PV associated with the signature of the bottom mixed layer frontal watermass into the interior (beginning at x=240km, y=500km in Fig. \ref{fig:four}b), roughly in the direction of the mean flow (white arrow). A snapshot of IPV along the same $T_{0}=6^\circ\:C$ surface for a day near the end of our (7 year) simulation shows eddies (dipoles) advecting low PV anomalies within their anticyclonic core into the interior, away from their formation site (Fig. \ref{fig:five}, at $x=225$km, $y=680$km). These eddies appear to be advected downstream by the mean flow, as well as to rotate cyclonically, a sign of an unbalanced dipole with one core stronger than the other \citep{manucharyan2013generation}. As the dipoles reach finite amplitude, they appear to be dissipated, potentially by the strong shear of the mean (separating) boundary current. IPV maps also show the advection of anomalously high PV along the ridge crest (Fig. \ref{fig:four}, and Fig. \ref{fig:five}). Advection of high PV anomalies is associated with the bottom-intensified baroclinic eddies, predominantly cyclonic (e.g. Fig.\ref{fig:five} $x=90$km, $y=550$km) and along $f/H$ contours. \subsection{Topographic PV Fluxes} The sources of the PV anomalies shown in Figs. \ref{fig:four} and \ref{fig:five} are associated with the interaction of the bottom-intensified flow with the frictional bottom, which results in the local intersection of temperature/buoyancy surfaces with the sloping bottom in regions of strong curvature of the flow as it navigates the ridge.
In this section we complement the IPV and along-streamline analysis of the vertical structure of the mean and eddy fields by presenting an (Eulerian) volume-integrated PV budget (see Appendix \ref{IntegralPV} for a detailed description). Our previous analysis suggests that net topographic PV fluxes play an important role in balancing the budget, as these are associated with PV sources. The control volume considered here is delimited by one cross-channel section far upstream and another downstream from the ridge (at $y=100$km and $y=900$km). We calculate the volume-integrated balance of PV in (\ref{PV_eqn}) over a sufficiently long time that the LHS of (\ref{PV_eqn}) is vanishingly small (Fig. \ref{fig:six}), so that the evolution of PV is driven by boundary sources and interior redistribution (\textit{e.g.} mean flow or eddy advection). In all simulations the net northward PV flux is negative, $\Delta_{y}(\mathbf{\overline{J}}\cdot\mathbf{\hat{j}})<0$, implying that a larger (northward) PV flux enters the control volume upstream from the ridge than leaves it downstream, where $\Delta_{y}(\cdot)=(\cdot)_{y=900}-(\cdot)_{y=100}$ represents the difference between the integrated fluxes downstream and upstream. In addition, the dominant contribution to the net northward flux is advective, \textit{i.e.} $\Delta_{y}(\overline{\mathbf{J}}\cdot\mathbf{\hat{j}}) \approx \Delta_{y}(\mathbf{\overline{J}}_{A}\cdot\mathbf{\hat{j}}) < 0$ (Fig. \ref{fig:six}). The net loss of northward PV flux is approximately balanced by a net positive topographic PV flux ($\overline{J}_{bot}>0$) in all simulations (Fig. \ref{fig:six}) \footnote{ROMS does not explicitly solve the integral PV equation (\ref{IntConstrain1}) at every time-step. As a consequence, we do not expect our integral PV budget to close exactly, but we find relatively good agreement.}.
Thus \begin{equation}\label{approx_bal} \Delta_{y}\overline{J}_{A} \approx -\overline{J}_{bot} \end{equation} This approximate balance is consistent with our previous findings: the high PV anomalies remain atop the ridge, whereas the low PV anomalies are advected downstream and homogenized, reducing the value of the background PV downstream from the ridge, and so reducing the northward advective PV flux downstream, out of the control volume. Following (\ref{two_decomp}) and (\ref{sJbot2n}), we now decompose the topographic PV flux into its constituent terms. We find that $\overline{J}^t_{bot}$ makes no contribution to the net flux in all simulations, so that $\overline{J}_{bot}\approx\overline{J}^s_{bot}$ (Fig. \ref{fig:seven}). Furthermore, we use the decomposition in (\ref{sJbot4n}), where we separate the terms in (\ref{sJbot2n}) into those evaluated at the walls and those along $f/H$ contours away from the influence of the boundary currents (see Fig. \ref{fig:eight}, which shows the dominant PV terms in (\ref{sJbot2n})). This decomposition allows us to focus on the spatial distribution of the (local) fluxes in three regions: one associated with each lateral boundary current, and one associated with the flow along $f/H$ contours near the ridge crest. The dynamics associated with the mean flow along $f/H$ contours best resemble idealized quasi-2D studies of bottom boundary layers at a slope \citep{maccready1991buoyant, benthuysen2012friction}. In all simulations, we find that the greatest contribution to the net topographic PV fluxes takes place within the region of influence of the lateral boundary currents, where the mean flow departs greatly from $f/H$ contours (Fig. \ref{fig:eight} m-p). All variables except $\overline{\Gamma}_{B}dh/dy$ decay drastically away from the influence of the lateral boundary currents (Fig. \ref{fig:eight},l).
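The approximate balance (\ref{approx_bal}) can be checked with a toy closure of the section-integrated fluxes; all fields below are synthetic placeholders, not model output.

```python
import numpy as np

# Toy check of the volume-integrated closure Delta_y(J_A) ~ -J_bot.
# Section fluxes are area integrals of the time-mean advective flux v*q
# across zonal sections; the fields here are synthetic placeholders.

def section_flux(v, q, dx, dz):
    """Area-integrated northward advective PV flux over a zonal section."""
    return np.sum(v * q) * dx * dz

nz, nx = 20, 250
dx, dz = 1e3, 50.0                       # m, grid spacing (as in the setup)
rng = np.random.default_rng(0)
v = 0.012 * np.ones((nz, nx))            # m/s, mean inflow velocity scale
q_in = 1.25e-9 * np.ones((nz, nx))       # f * N0^2 upstream
q_out = q_in * (1.0 - 0.1 * rng.random((nz, nx)))  # PV reduced downstream

J_in = section_flux(v, q_in, dx, dz)
J_out = section_flux(v, q_out, dx, dz)
J_bot_implied = -(J_out - J_in)          # topographic source closing the budget
print(J_bot_implied > 0.0)  # True: a net positive bottom PV flux is implied
```

Because less PV exits downstream than enters upstream, the implied topographic source is positive, mirroring the sign of $\overline{J}_{bot}$ diagnosed from the simulations.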
This reflects the fact that temperature surfaces intersect the bottom mainly in the vicinity of the boundary currents, at the bottom fronts (with their high and low PV anomalies), whereas the along-slope flow experiences little of this behavior. In the region associated with flow along $f/H$ contours, the mean flow (and thus the bottom boundary layer) is located near the ridge crest, where the bottom slope approaches zero, resulting in a buoyancy shutdown timescale much greater than the stratified, frictional spindown timescale, \textit{i.e.} $\mathcal{T}_{shut}\gg\mathcal{T}_{spin}$ (see Table \ref{table_two}). This suggests that buoyancy effects within the bottom boundary layer are too slow to promote baroclinicity, and thus to induce Ekman arrest. Thus, while we do observe downward Ekman transport within the bottom boundary layer (Fig. \ref{fig:nine}b), the vanishingly small contribution to the PV flux is likely associated with the location of the mean flow being close to the ridge crest, where the topographic slope almost vanishes and eddy-driven variability dominates. Another important factor to consider is the strong stratification upstream, associated with the high PV front, and the baroclinic instability that ensues, which continuously act to relax the temperature/buoyancy surfaces towards depth levels. \section{Discussions and Conclusions}\label{d_and_c} In this study we have presented an analysis of the potential vorticity budget and the vertical structure of a rotating, stratified throughflow across finite-amplitude topography in a channel geometry. We found that bottom boundary layer dynamics associated with boundary currents help promote the formation of \textit{high} PV anomalies along an anticyclonic western boundary current, and a \textit{low} PV anomaly associated with an eastern (\textit{pseudo} westward) cyclonic boundary current.
The \textit{high} PV anomaly is advected along topographic contours atop the ridge. Meanwhile, the vanishingly \textit{low} PV anomaly is advected downstream, resulting in a mid-depth isentropic layer characterized by watermasses with an anomalously low PV signature. The observed PV distribution associated with the anticyclonic (net upslope transport) and the cyclonic boundary currents (net downslope transport) is broadly in accordance with the vertical stratification expected from basic PV principles of stretching and squashing associated with flow past topographic obstacles \citep{pedlosky2013geophysical, vallis2006atmospheric}. Such principles are based on linear, inviscid QG dynamics that ensure PV conservation. Our work shows that when boundary layer dynamics are important, dissipative processes irreversibly amplify the expected stratification, promoting strong bottom stratification (a front, and thus a high PV anomaly) as the flow moves onto the ridge crest, and low bottom stratification (the mixed layer front, with low PV). As eddies help redistribute PV anomalies non-locally, dissipative and nonlinear advective processes, not typically included in first principles of PV conservation, are responsible for the net decrease of the PV fluxes carried by the net northward transport. That is, in the absence of dissipative PV processes and eddies, one would expect a net zero northward PV flux as the flow navigates across finite-amplitude topography. Throughout our simulations, we found that buoyancy shutdown (and thus Ekman arrest) is not a dominant process within the bottom boundary layer. This can be attributed to the fact that the along-slope flow takes place very close to the ridge crest, where the slope Burger number approaches zero, effectively increasing the timescale required for buoyancy shutdown to take place.
The location of the along-slope flow with respect to the ridge crest is largely determined by the location of the reversal of the background PV gradient, in this case given entirely by the topographic slope. An increase in the background PV gradient ($\beta$-plane dynamics) would lead to an along-slope flow further downstream, and thus an increased topographic slope. However, adding $\beta$-plane dynamics would also result in a domain-scale western boundary current given our inflow-outflow conditions, like those in \citet{yang2000water}, largely altering the flow pattern. In reality, $\beta$-effects, baroclinic pressure forces (driven by contrasting density differences across topography), barotropic forcing (like the one considered here, due to the inflow and outflow) and surface forcing all play a role in determining the large-scale patterns as well as the local dynamical processes studied here. Our objective was to isolate the topographic effects on the throughflow. Our choice of lateral walls has an influence on the orientation of the equilibrated bottom mixed layer front, which is perpendicular to isobaths and aligned with the mean (interior) flow. Nonetheless, imposing a steep sloping bottom (coastal shelves) instead of a lateral wall would still lead to a net Ekman transport of warm water into the deep ocean in the direction perpendicular to a cyclonic boundary current. However, a steep continental shelf break instead of a lateral wall could lead to a significant modification of the mean flow, since such $f/H$ contours could provide topographic Rossby waveguides along the shelf, and thus a separate pathway for the exchange of water masses across basins, further complicating the model and analysis. The observed orientation of the bottom mixed layer was not explored by \citet{wenegrat2018submesoscale}, and the dynamics are likely very different due to the strong shear of the interior flow and the (rapid) changes it experiences as it navigates the depth changes.
Our observed middepth eddy variability, and the subsequent eddy advection of low PV anomalies away from the middepth front, resembles the middepth boundary current of low potential vorticity studied in \citet{spall2008western}. There, eddies that grow from the middepth boundary current advect low PV waters into the interior. The major differences in our model are the significant mean flow advection into the interior and the character of our flow: an equilibrated system. The equilibration of our throughflow allows us to quantify, through an Eulerian PV volume budget, the net advective fluxes across the ridge that originate from the injection of vanishingly low PV waters from the bottom boundary layer. We do not address the role of Kelvin waves and hydraulic control on PV sources, although the stratification pattern along the western wall (Fig. \ref{fig:nine}a) resembles that of a system experiencing hydraulic control. However, hydraulic control, and thus the role of Kelvin waves in advecting PV fronts, are largely transient problems in the absence of dissipation. Boundary currents require a viscous and frictional interaction with solid boundaries and, thus, it is unlikely that the observed stratification along the lateral walls is determined by inviscid wave processes. A staple of hydraulic control theory is that PV along the ridge can exert a control on the upstream circulation \citep{whitehead1995critical, helfrich2003rotating}. Thus, it is possible that Kelvin waves, away from the influence of strong boundary currents and bottom fronts, can propagate (advect) the PV anomalies found atop the ridge. In doing so, Kelvin waves would help drive the observed net northward PV flux by promoting a higher advective PV flux upstream from the ridge and a lower advective PV flux downstream. They would do so by advecting PV anomalies (strong/low stratification) along the walls, e.g.
advect a high PV front upstream along the western wall, and a low PV front downstream along the eastern wall. Under such a scenario, baroclinic eddies upstream and downstream from the ridge could potentially advect such anomalies from the walls into the interior, helping to raise the background PV value through a process of homogenization. Based on our results and their limitations, an interesting problem to which this work can be extended is that of an exchange in the presence of a destabilizing buoyancy forcing over a basin, resembling that of \citet{spall2010dynamics}, which drives a convective overturning circulation across the ridge crest, while also retaining a large-scale forcing (inflow/outflow) such that there is a barotropic (and baroclinic) pressure gradient across the ridge and thus a throughflow. Particularly interesting questions then arise in such a scenario, for example, how the boundary and interior distribution of PV anomalies may change by introducing such effects, and the presence of overflows. Such a problem represents the next step in generalizing our results. \acknowledgments This research was funded by NASA (NNX13AE28G, NNX13AH19G and NNX17AH56G) and the Mexican Council for Science and Technology (CONACyT, following its abbreviation in Spanish). We thank Charles Eriksen for helpful comments on an earlier draft, and Peter Rhines for many helpful discussions on the dynamics of flows across topography.
1,314,259,996,137
arxiv
\section{Introduction \label{sec:int}} Deep inelastic scattering (DIS) of electrons on protons, $ep$, at centre-of-mass energies of up to $\sqrt{s} \approx 320\,$GeV at HERA has been central to the exploration of proton structure and quark--gluon dynamics as described by perturbative Quantum Chromo Dynamics (pQCD). The combination of H1 and ZEUS data on inclusive $ep$ scattering and the subsequent pQCD analysis, introducing the family of parton density functions (PDFs) known as HERAPDF2.0~\cite{HERAPDF20}, was a milestone for the exploitation of the HERA data. The preliminary work presented here represents a completion of the HERAPDF2.0 family with a fit at NNLO to HERA inclusive and jet production data published separately by the ZEUS and H1 collaborations. This was not possible at the time of the original introduction of HERAPDF2.0 because a treatment at NNLO of jet production in $ep$ scattering was not available then. \section{Procedure and Data} The name HERAPDF stands for a pQCD analysis within the DGLAP formalism, where predictions from pQCD are fitted to data. These predictions are obtained by solving the DGLAP evolution equations at LO, NLO and NNLO in the \mbox{$\overline{\rm{MS}}$}\ scheme. The inclusive and dijet production data which were already used for HERAPDF2.0Jets NLO were again used for the analysis presented here. A new data set~\cite{h1lowq2newjets} published by the H1~collaboration on jet production in low~$Q^2$ events, where $Q^2$ is the four-momentum-transfer squared, was added as input to the fits. The fits presented here were done in the same way as for all other members of the HERAPDF2.0 family; for full details see~\cite{h1zeusprelim} and references therein. The fits were performed using the programme QCDNUM within the xFitter framework. Only cross sections for $Q^2$ starting at $Q^2_{min} = 3.5$\,GeV$^2$ were used in the analysis. All parameter settings were the same as for the HERAPDF2.0Jets NLO fit.
The analysis of uncertainties was also performed in exactly the same way. There were some modifications with respect to the analysis at NLO. They were driven by the usage of the newly available treatment of jet production at NNLO. The jet data were included in the fits at NNLO using predictions for the jet cross sections calculated using NNLOJET~\cite{nnlojet}, which was interfaced to the fast interpolation grid codes fastNLO and APPLgrid using the APPLfast framework~\cite{applfast} in order to achieve the speed required for the convolutions for use in an iterative PDF fit. The predictions were multiplied by corrections for hadronisation and $Z^0$ exchange before they were used in the fits. A running electromagnetic $\alpha$ as implemented in the 2012 version of the programme EPRC was used for the treatment of the jet cross sections. The new treatment of inclusive jet and dijet production at NNLO was only applicable to a slightly reduced phase space compared to HERAPDF2.0Jets NLO. All data points with $\sqrt{\langle p_T^2 \rangle +Q^2} \le 13.5$\,GeV were excluded, where $p_T$ is the transverse energy of the jets. In addition, six data points, the lowest $\langle p_T \rangle$ bin for each $Q^2$ region, were excluded from the ZEUS dijet data set because the NNLO predictions for these points were deemed unreliable. In addition, the trijet data which were used as input to HERAPDF2.0Jets NLO had to be excluded, as their treatment at NNLO was not available. The choice of scales was also adjusted for the NNLO analysis. At NLO, the factorisation scale was chosen as $\mu_{\rm f}^2 = Q^2$, while the renormalisation scale was linked to the transverse momenta, $p_T$, of the jets by $\mu_{\rm r}^2 = (Q^2 + p_{T}^2)/2$. For the NNLO analysis, $\mu_{\rm f}^2 =\mu_{\rm r}^2= Q^2 + p_{T}^2$ was chosen. \section{Determination of the strong coupling constant} \label{sec:as} Jet production data are essential for the determination of the strong coupling constant, $\alpha_s(M_Z^2)$.
In pQCD fits to inclusive DIS data alone, the gluon PDF is determined via the DGLAP equations only, using the observed scaling violations. This results in a strong correlation between the shape of the gluon distribution and the value of $\alpha_s(M_Z^2)$. Data on jet production cross sections provide an independent constraint on the gluon distribution. Jet and dijet production are also directly sensitive to $\alpha_s(M_Z^2)$ and thus such data allow for an accurate simultaneous determination of $\alpha_s(M_Z^2)$ and the gluon distribution. The HERAPDF2.0Jets NNLO (prel.) fit with free $\alpha_s(M_Z^2)$ gave a value of \begin{eqnarray} \nonumber \alpha_s(M_Z^2) =0.1150 \pm 0.0008{\rm (exp)} ^{+0.0002}_{-0.0005}{\rm (model/parameterisation)} \\ \nonumber ~~~~ \pm 0.0006{\rm (hadronisation)}~~ \pm 0.0027 {\rm (scale)}~~. \end{eqnarray} This result on $\alpha_s(M_Z^2)$ is compatible with the world average~\cite{PDG18} and it is competitive with other determinations at NNLO. The HERAPDF2.0Jets NNLO (prel.) fit with free $\alpha_s(M_Z^2)$ uses 1343 data points and has a $\chi^2/$d.o.f.\,$= 1599/1328 = 1.203$. This can be compared to the $\chi^2/$d.o.f.\,$= 1363/1131 = 1.205$ for HERAPDF2.0 NNLO based on inclusive data only. The similarity of the $\chi^2/$d.o.f. values indicates that the data on jet production do not introduce any tension. The experimental uncertainty was determined from the fit. The $\chi^2$ scan in $\alpha_s(M_Z^2)$ shown in Fig.~\ref{fig:alphasscan}a) confirmed the value of $\alpha_s(M_Z^2)$ and the experimental uncertainty. In addition to this the HERAPDF procedure considers model and parameterisation uncertainties and, for jet data, hadronisation uncertainties are also considered, see~\cite{HERAPDF20} for details. These additional uncertainties are also shown in Fig.~\ref{fig:alphasscan}a). A strong motivation to determine $\alpha_s(M_Z^2)$ at NNLO was the hope to substantially reduce scale uncertainties. 
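As an illustration of how an experimental uncertainty is read off such a $\Delta\chi^2$ scan, the $\Delta\chi^2 = 1$ prescription can be sketched numerically. The scan points below are synthetic, constructed from the quoted central value and experimental uncertainty; they are not the actual HERAPDF scan data.

```python
import numpy as np

# Synthetic Delta-chi^2 scan, built from the quoted result
# alpha_s(M_Z^2) = 0.1150 +- 0.0008 (exp); illustrative only.
alpha = np.linspace(0.110, 0.120, 21)
chi2 = 1599.0 + ((alpha - 0.1150) / 0.0008) ** 2

# Fit a parabola chi2 = p2*t^2 + p1*t + p0 around the scan centre.
t = alpha - alpha.mean()
p2, p1, p0 = np.polyfit(t, chi2, 2)

a_central = alpha.mean() - p1 / (2.0 * p2)  # location of the minimum
sigma_exp = 1.0 / np.sqrt(p2)               # half-width where Delta-chi^2 = 1

print(a_central, sigma_exp)  # ~0.1150, ~0.0008
```

For an exactly parabolic profile this reproduces the central value and the experimental uncertainty; for a real scan such as that in Fig.~\ref{fig:alphasscan}a), the parabola would be fitted only in the vicinity of the minimum.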
The scale uncertainty was evaluated by varying the renormalisation and factorisation scales by a factor of two, both separately and simultaneously, and taking the maximal positive and negative deviations. The uncertainties were assumed to be 50\,\% correlated and 50\,\% uncorrelated between bins and data sets. The result is also shown in Fig.~\ref{fig:alphasscan}a). The scale uncertainty still dominates the uncertainties. As the input data were changed for the NNLO analysis and the choice of scales was changed with respect to the NLO analysis, a detailed comparison of scale uncertainties will be published after the appropriate reanalysis of the data at NLO. However, the present scale uncertainty, of $\pm 0.0027$ for the NNLO analysis, is significantly lower than the $+0.0037,-0.0030$ previously observed for the HERAPDF2.0Jets NLO analysis. If the NNLO determination of $\alpha_s(M_Z^2)$ is performed with the old choice of scales, the value of $\alpha_s(M_Z^2)$ is further reduced to 0.1135, well within scale uncertainties. The question of whether data with relatively low $Q^2$ bias the determination of $\alpha_s(M_Z^2)$ arises in the context of the HERA data analysis, for which low $Q^2$ is also low $x$. A treatment beyond DGLAP may be necessary because of higher-twist terms, $\ln(1/x)$ terms or even parton saturation. To check for such a bias, Figure~\ref{fig:alphasscan}b) shows scans with $Q^2_{min}$ set to 3.5\,GeV$^2$, 10\,GeV$^2$ and 20\,GeV$^2$ for the inclusive data. Clear minima are visible which coincide within uncertainties. \begin{figure} \centering \setlength{\unitlength}{0.1\textwidth} \begin{picture} (9,8.5) \put(1.2,4.5){\includegraphics[width=0.60\textwidth]{figures/scan-prel-all-eps-converted-to.pdf}} \put(1.2,0.0){\includegraphics[width=0.65\textwidth]{figures/scan-q2-incl-eps-converted-to.pdf}} \put (0.6,5.4) {a)} \put (0.6,0.9) {b)} \end{picture} \caption {$\Delta \chi^2 = \chi^2 - \chi^2_{\rm min}$ vs.\ $\alpha_s(M_Z^2)$ for HERAPDF2.0Jets NNLO (prel.)
fits with fixed $\alpha_s(M_Z^2)$ with a) the standard $Q^2_{min}$ of 3.5\,GeV$^2$ and b) $Q^2_{min}$ set to 3.5\,GeV$^2$, 10\,GeV$^2$ and 20\,GeV$^2$ for the inclusive data. } \label{fig:alphasscan} \end{figure} \section{The PDFs of HERAPDF2.0Jets NNLO (prel.)} The PDFs resulting from the HERAPDF2.0Jets NNLO (prel.) fit with fixed $\alpha_s(M_Z^2) = 0.115$ and $\alpha_s(M_Z^2) = 0.118$ are shown and compared in Fig.~\ref{fig:as0-115vsas0-118} at a scale of $Q^2=10$\,GeV$^2$. Here, total uncertainties are shown, including experimental, model and parameterisation uncertainties as well as additional hadronisation uncertainties on the jet data. The former value of $\alpha_s(M_Z^2)$ is chosen because it is the preferred value of these data, and the latter value is chosen because it is the PDG value; it also allows direct comparison to the published PDFs of HERAPDF2.0 NNLO based on inclusive data only. These PDFs are very similar as shown in Fig.~\ref{fig:as0-118vsherapdf2}, indicating that the jet data do not change PDF shapes for fixed $\alpha_s(M_Z^2)$, but they have an impact on the extracted value of $\alpha_s(M_Z^2)$ when it is allowed to be free. The comparison of the new HERAPDF2.0Jets NNLO (prel.) fits with differing values of $\alpha_s(M_Z^2)$ shows a significant difference in the gluon distributions, as expected given the correlation between the gluon PDF shape and the value of $\alpha_s(M_Z^2)$.
\begin{figure}[tbp] \centering \setlength{\unitlength}{0.1\textwidth} \begin{picture} (11,8.5) \put(0.0,4.2){\includegraphics[width=0.55\textwidth]{figures/as0-115-as0-118-uv.pdf}} \put(5.0,4.2){\includegraphics[width=0.55\textwidth]{figures/as0-115-as0-118-dv.pdf}} \put(0.0,0.0){\includegraphics[width=0.55\textwidth]{figures/as0-115-as0-118-gluon.pdf}} \put(5.0,0.0){\includegraphics[width=0.55\textwidth]{figures/as0-115-as0-118-sea.pdf}} \put (0.6,4.7) {a)} \put (5.6,4.7) {b)} \put (0.6,0.5) {c)} \put (5.6,0.5) {d)} \end{picture} \vspace{-0.5cm} \caption { Comparison of the parton distribution functions a) $xu_v$, b) $xd_v$, c) $xg$ and d) $x\Sigma=x(\bar{U}+\bar{D})$ of HERAPDF2.0Jets NNLO (prel.) with fixed $\alpha_s(M_Z^2) = 0.115$ and $\alpha_s(M_Z^2) = 0.118$ at the scale $Q^{2} = 10\,$GeV$^{2}$. The total uncertainties are shown as differently hatched bands. } \label{fig:as0-115vsas0-118} \end{figure} \begin{figure}[tbp] \centering \setlength{\unitlength}{0.1\textwidth} \begin{picture} (11,8.5) \put(0.0,4.2){\includegraphics[width=0.55\textwidth]{figures/as0-118-herapdf2-uv.pdf}} \put(5.0,4.2){\includegraphics[width=0.55\textwidth]{figures/as0-118-herapdf2-dv.pdf}} \put(0.0,0.0){\includegraphics[width=0.55\textwidth]{figures/as0-118-herapdf2-gluon.pdf}} \put(5.0,0.0){\includegraphics[width=0.55\textwidth]{figures/as0-118-herapdf2-sea.pdf}} \put (0.6,4.7) {a)} \put (5.6,4.7) {b)} \put (0.6,0.5) {c)} \put (5.6,0.5) {d)} \end{picture} \vspace{-0.5cm} \caption { Comparison of the parton distribution functions a) $xu_v$, b) $xd_v$, c) $xg$ and d) $x\Sigma=x(\bar{U}+\bar{D})$ of HERAPDF2.0Jets NNLO (prel.) and HERAPDF2.0 NNLO based on inclusive data only, both with fixed $\alpha_s(M_Z^2) = 0.118$, at the scale $Q^{2} = 10\,$GeV$^{2}$. The total uncertainties are shown as differently hatched bands. 
} \label{fig:as0-118vsherapdf2} \end{figure} \section{Summary} The HERA data set on inclusive $ep$ scattering as introduced by the ZEUS and H1 collaborations, together with selected data on jet production, published separately by the two collaborations, was used as input to NNLO fits called HERAPDF2.0Jets NNLO (prel.). They complete the HERAPDF2.0 family. A fit with free $\alpha_s(M_Z^2)$ gave $\alpha_s(M_Z^2) = 0.1150 \pm 0.0008{\rm (exp)}\,^{+0.0002}_{-0.0005}{\rm (model/parameterisation)} \pm 0.0006{\rm (hadronisation)} \pm 0.0027{\rm (scale)}$. A preliminary set of PDFs with a full analysis of uncertainties was obtained from a HERAPDF2.0Jets NNLO (prel.) fit with fixed $\alpha_s(M_Z^2) = 0.115$. These PDFs were compared to PDFs from a similar fit with fixed $\alpha_s(M_Z^2) = 0.118$ and the PDFs from HERAPDF2.0 NNLO based on inclusive data only. All these PDFs are very similar.
\section{Introduction} Conventional treatment planning in intensity-modulated radiation therapy (IMRT) is often described as a time-consuming trial-and-error process, as it requires the repeated solution of successively fine-tuned treatment plan optimization problems before requirements on plan quality are met. The need for re-optimization partly stems from a methodological difference between the mathematical planning objectives and the clinical evaluation criteria. While conventional planning objectives minimize the violation of dose statistics thresholds using (quadratic) penalty functions, attention is rarely paid during plan quality assessment to the optimal level of deviation. What rather influences plan quality is the actual dose statistics of the dose distribution. The vast amount of resources spent on treatment planning has motivated the development of strategies for automated approaches that require limited user guidance. A particular interest has recently grown in knowledge-based planning, where achievable yet desirable dose statistics thresholds are predicted with machine-learning techniques ahead of optimization. The expected outcome is less need for fine-tuning. Several methods have been proposed in this direction \cite{Appenzoller2012,Shiraishi2016,SkarpmanMunter2015,Song2015,Wu2009,Wu2011} and their success in producing high-quality treatment plans while reducing user interaction has been investigated in subsequent evaluations \cite{Chang2016,Fogliata2015,Tol2015}. Another suggested approach is computerization of the typical trial-and-error scheme adopted by treatment planners. While some of these methods are site-specific \cite{Zhang2011}, others are developed with the ambition to handle a large class of cases \cite{Gintz2016,Hazell2016,Xhaferllari2013}. Still aiming for a less demanding planning process, we return to the underlying inconsistency between mathematical planning objectives and clinical evaluation criteria.
We abandon the usual penalty-function framework and suggest planning objectives that more explicitly relate to plan quality measures, in our case to dose-volume histogram (DVH) statistics. The proposed planning objectives are based on the well-known concept of mean-tail-dose, often referred to by the financial counterpart conditional value-at-risk (CVaR). The use of CVaR-based objectives to obtain qualified approximations of optimal value-at-risk (corresponding to DVH statistics) is a frequent tool in finance \cite{Alexander2006,Rockafellar1997}, since this leads to convex optimization. As for treatment planning, mean-tail-doses were originally incorporated as constraints of a penalty-function based formulation by Romeijn \emph{et al.} \cite{Romeijn2006}. The aim was to give a tractable alternative to bounds on the mathematically intractable DVH statistics. In subsequent studies, mean-tail-doses have been used in, e.g., lexicographic optimization of prostate plans \cite{Clark2008} and as constraints in optimization of brachytherapy \cite{Holm2013}. The present study investigates their merit as tools for optimizing the DVH statistics to be assessed in the plan evaluation process. Leaving the penalty-function framework has the attractive effect that dose statistics thresholds are no longer required, and neither, therefore, is the process of fine-tuning them. Nevertheless, a need remains to find the objective weights that give the desired tradeoff between conflicting evaluation criteria. Multicriteria optimization (MCO) techniques, by which the procedure of balancing planning objectives can be largely shortened, have been developed for the conventional case through rigorous research (see, e.g., \cite{Bokrantz2013a,Bokrantz2015,Craft2013,Fredriksson2013} for recent advances). Although beyond the scope of this study, we envisage that similar MCO techniques can be as advantageously combined with the proposed planning objectives.
The outcome would then be a planning process that has little need for parameter tweaking. The paper is organized as follows. In Section~\ref{sec:Formulation}, the conventional and proposed formulations of planning objectives are presented. Section~\ref{sec:CompStud} provides the set-up of and results from a computational study where the two formulations are compared. In Section~\ref{sec:Method}, a method is described to handle the large-scale dimensions of the proposed optimization formulation. Existence of such a method is of utmost importance for the clinical applicability of the proposed approach; however, this section can be omitted by the reader who is only interested in the outcome of the method. \section{Formulation of planning objectives}\label{sec:Formulation} Treatment plan MCO is generally formulated \begin{equation*}\label{eq:genMCO} \begin{aligned} & \minimize{x \in \mathcal{X}} && \big[\,f_1(x) \cdots f_K(x)\,\big]^T, \end{aligned} \end{equation*} where each of the $K$ planning objectives $f_1,\ldots,f_K$ somehow relates to a plan evaluation criterion, and where $\mathcal{X}$ is the set of feasible treatment variables. We limit this study to fluence map optimization (FMO), so that $\mathcal{X}$ equals the convex set $\{x \in \mathbb{R}^n: x \geq 0\}$ of physically realizable fluence maps of $n$ beamlets. However, the theory and methods applied permit the more general assumption that $\mathcal{X}$ be any convex polytope of treatment variables that relate to voxel dose through a linear mapping. In addition to FMO, that assumption is valid for, e.g., certain formulations of sliding window optimization and spot weight optimization in proton therapy. Treatment plan MCO formulations are usually solved under the notion of Pareto optimality. A Pareto optimal plan is produced, among other techniques, by solving a weighted-sum instance where the $K$ planning objectives have been aggregated into one using nonnegative objective weights. 
Each combination of weights results in a different Pareto optimal plan, and all possible Pareto optimal plans form the Pareto set. See \cite{Bokrantz2013b} for a thorough introduction to MCO with focus on treatment planning. This study concerns plan evaluation criteria relating to the cumulative dose-volume histogram (DVH). As a function of the fluence map $x$, the DVH statistics quantifies the least dose received by the hottest volume fraction $v^{\text{ref}}$ of the region-of-interest (ROI) $r$. A mathematical definition is \begin{equation*} D(x;\,r,v^{\text{ref}}) = \min \{ d \in \mathbb{R} : \sum_{\substack{j \in V_r:\\p_j^Tx \geq d}} \Delta^r_j \leq v^{\text{ref}}\}, \end{equation*} where $V_r$ collects the voxels of ROI $r$, $\Delta_j^r$ is the relative volume of $r$ located in voxel $j$, and $p_j^T$ is the $j$th row of the dose deposition coefficient matrix so that $p_j^Tx$ is the voxel dose received by voxel $j$. The compact notation $D(x)$ is used throughout, and $D(x)$ is referred to as a dose-at-volume. A mathematical drawback of the DVH statistics is its non-convexity and non-differentiability, making it intractable for optimization purposes. \subsection{Conventional planning objectives} Conventional planning objectives are designed to push the DVH curve of a ROI towards the point $(d^{\text{ref}},v^{\text{ref}})$, with $d^{\text{ref}}$ a DVH statistics threshold. Aiming to meet maximum or minimum DVH criteria at volume fraction $v^{\text{ref}}$, they quadratically penalize voxel overdose or underdose from $d^{\text{ref}}$ while omitting the hottest $v^{\text{ref}}$ or coldest $1-v^{\text{ref}}$ volume fraction. Thus, by construction, only the voxel doses that fall between $d^{\text{ref}}$ and the dose-at-volume $D(x)$ are penalized, as seen in Figure~\ref{fig:ConventionalObjectives}. 
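As a concrete illustration of the dose-at-volume $D(x)$ defined above, the statistic can be evaluated directly on a discrete dose distribution. The following sketch uses hypothetical voxel data, not data from this study.

```python
import numpy as np

def dose_at_volume(doses, vols, v_ref):
    # D(x): least dose received by the hottest volume fraction v_ref.
    # doses: voxel doses p_j^T x; vols: relative volumes Delta_j (sum to 1).
    order = np.argsort(doses)[::-1]               # hottest voxels first
    cum = np.cumsum(vols[order])                  # accumulated hot volume
    k = np.searchsorted(cum, v_ref * (1 - 1e-9))  # first voxel reaching v_ref
    return doses[order][k]

# 100 equally weighted voxels with doses 1..100 Gy
doses = np.arange(1.0, 101.0)
vols = np.full(100, 0.01)
print(dose_at_volume(doses, vols, 0.10))  # hottest 10%: least dose is 91.0
```

The sorting step makes the non-convexity concrete: $D(x)$ is an order statistic of the voxel doses, which is why it is unsuitable as a direct optimization objective.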
\begin{figure}[t]\centering \subfloat{\centering\includegraphics[width=160pt]{udvh.pdf}}\\%\hspace*{20pt} \subfloat{\centering\includegraphics[width=160pt]{ldvh.pdf}} \caption{Voxel overdoses (top) and underdoses (bottom) (shaded) from the threshold $d^{\text{ref}}$ illustrated in the DVH graph. A conventional planning objective $q^+(x)$ or $q^-(x)$ is the sum of the voxel overdoses or underdoses squared so that when minimized, the DVH curve is pushed towards the point $(d^{\text{ref}},v^{\text{ref}})$, aiming to meet a maximum or a minimum DVH criterion at volume fraction $v^{\text{ref}}$.\label{fig:ConventionalObjectives}} \end{figure} The mathematical definitions are \begin{equation*} q^+(x;\,r,v^{\text{ref}},d^{\text{ref}}) = \sum_{\substack{j \in V_r:\\ d^{\text{ref}} \leq p_j^Tx \leq D(x)}} \mkern-12mu \Delta_j^r\,\big(p_j^Tx - d^{\text{ref}}\big)^2 \end{equation*} for penalizing overdose, and \begin{equation*} q^-(x;\,r,v^{\text{ref}},d^{\text{ref}}) = \sum_{\substack{j \in V_r:\\ D(x) \leq p_j^Tx \leq d^{\text{ref}}}} \mkern-12mu \Delta_j^r\,\big(d^{\text{ref}} - p_j^Tx\big)^2 \end{equation*} for penalizing underdose. Compact notations $q^+(x)$ and $q^-(x)$ are used throughout. A conventional planning objective is both non-convex and non-differentiable, yet we want to draw attention to another problematic aspect: its limited ability to optimize or merely control the actual dose-at-volume $D(x)$. For instance, large gaps between $d^{\text{ref}}$ and $D(x)$ can be assigned relatively small penalties. The tool available for the user to reach satisfactory plan quality is to iteratively modify the thresholds and re-optimize the plan. This is a trial-and-error challenge, since the impact of $d^{\text{ref}}$ adjustments on the dose-at-volume is unclear.
Another strategy, now formalized by MCO techniques, is to choose a utopian $d^{\text{ref}}$ (such as 0~Gy for healthy tissue) and then fine-tune the objective weight to increase or decrease the impact of the penalty. Nevertheless, the inconsistency remains between a conventional planning objective and the pointwise dose-at-volume. \subsection{Proposed planning objectives} To the extent that plan quality is assessed by dose-at-volume, an idealistic planning objective is, naturally, equalling $D(x)$ itself. Treatment plan MCO would then amount to solving \begin{equation}\label{eq:desiredMCO} \begin{aligned} & \minimize{x \in \mathcal{X}} && \mkern-12mu \big[\,D_1(x) \cdots D_{k_+}(x) -\!\! D_{k_++1}(x) \cdots -\!\! D_K(x)\,\big]^T \\ & \mathop{\operator@font subject\ to} && D_k(x) \leq u_k, \quad k = 1,\ldots,k_+, \\ & && D_k(x) \geq l_k, \quad k = k_++1,\ldots,K, \end{aligned} \end{equation} where $D_k(x) = D(x; r_k, \vrefk{k})$, and where bounds $l_k$ and $u_k$ restrict $D_k(x)$ to relevant values. Here, the last $K-k_+$ doses-at-volume are subjected to maximization by minimizing the negatives. This idealized formulation is largely intractable: for instance, the inherent non-convexity of $D(x)$ renders \eqref{eq:desiredMCO} a non-convex problem whose global minimum is difficult to find. It forms, however, the basis for the proposed formulation, demonstrating how the usual penalty-function framework is abandoned. We arrive at the proposed formulation by using planning objectives that approximate dose-at-volume by upper or lower mean-tail-dose. The upper or lower version respectively equals the average dose received by the hottest $v^{\text{ref}}$ or coldest $1-v^{\text{ref}}$ volume fraction, i.e., the average of voxel doses greater or less than $D(x)$ as depicted in Figure~\ref{fig:MeanTailDose}. 
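A numerical sketch may be helpful here. For a discrete dose distribution, the minimization and maximization over $\alpha$ in the CVaR forms given below are attained at one of the voxel doses, since the objectives are piecewise linear in $\alpha$; enumerating the candidate values therefore suffices for a prototype. The dose data are hypothetical.

```python
import numpy as np

def upper_mtd(doses, vols, v_ref):
    # d+(x): average dose of the hottest v_ref volume fraction, via
    # min over alpha of  alpha + sum_j Delta_j (d_j - alpha)_+ / v_ref.
    vals = [a + (vols * np.clip(doses - a, 0.0, None)).sum() / v_ref
            for a in doses]
    return min(vals)

def lower_mtd(doses, vols, v_ref):
    # d-(x): average dose of the coldest 1 - v_ref volume fraction, via
    # max over alpha of  alpha - sum_j Delta_j (alpha - d_j)_+ / (1 - v_ref).
    vals = [a - (vols * np.clip(a - doses, 0.0, None)).sum() / (1.0 - v_ref)
            for a in doses]
    return max(vals)

# 100 equally weighted voxels, doses 1..100 Gy, v_ref = 10%
doses = np.arange(1.0, 101.0)
vols = np.full(100, 0.01)
print(upper_mtd(doses, vols, 0.10))  # mean of the 10 hottest doses: 95.5
print(lower_mtd(doses, vols, 0.10))  # mean of the 90 coldest doses: 45.5
```

The two values bracket the dose-at-volume ($D(x) = 91$~Gy in this example), consistent with the relationship $d^-(x) \leq D(x) \leq d^+(x)$ noted below.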
\begin{figure}[t]\centering \subfloat{\centering\includegraphics[width=160pt]{umtd.pdf}}\\%\hspace*{20pt} \subfloat{\centering\includegraphics[width=160pt]{lmtd.pdf}} \caption{Upper (top) and lower (bottom) mean-tail-doses in relation to dose-at-volume $D(x)$. The upper version $d^+(x)$ is the average of voxel doses greater than $D(x)$ and the lower version $d^-(x)$ is the average of voxel doses less than $D(x)$. When minimizing $d^+(x)$ or maximizing $d^-(x)$, $D(x)$ is pushed to the left or right, respectively.\label{fig:MeanTailDose}} \end{figure} Their mathematical definitions can be established independently of $D(x)$ (see Rockafellar and Uryasev \cite{Rockafellar1997} for the derivation for the financial counterpart CVaR) as \begin{equation*}\begin{split} d^+(x; r, v^{\text{ref}}) = & \\ &\mkern-40mu \min\{\alpha + \frac{1}{v^{\text{ref}}}\sum_{j \in V_r} \Delta^r_j \left(p_j^Tx - \alpha\right)_+ : \alpha \in \mathbb{R}\} \end{split}\end{equation*} for upper mean-tail-dose, and \begin{equation*}\begin{split} d^-(x; r, v^{\text{ref}}) = & \\ &\mkern-75mu \max\{\alpha - \frac{1}{1-v^{\text{ref}}}\sum_{j \in V_r} \Delta^r_j \left(\alpha - p_j^Tx\right)_+ : \alpha \in \mathbb{R}\} \end{split}\end{equation*} for lower mean-tail-dose, where $(\,\cdot\,)_+$ denotes the positive part function $\max\{\,\cdot\,, 0\}$. Compact notations $d^+(x)$ and $d^-(x)$ are used throughout. There are two favorable aspects of using mean-tail-doses as planning objectives: the convexity of $d^+(x)$ and concavity of $d^-(x)$ (i.e., convexity of the negative of $d^-(x)$) making them suitable for optimization, and the relationship $d^-(x) \leq D(x) \leq d^+(x)$ ensuring that the dose-at-volume is appropriately controlled. Treatment plan MCO with proposed planning objectives becomes \begin{equation*} \begin{aligned} & \minimize{x \in \mathcal{X}} && \mkern-12mu \big[\,d^+_1(x) \cdots d^+_{k_+}(x) -\!\! d^-_{k_++1}(x) \cdots -\!\! 
d^-_K(x)\,\big]^T \\ & \mathop{\operator@font subject\ to} && d^+_k(x) \leq u_k, \quad k = 1,\ldots,k_+, \\ & && d^-_k(x) \geq l_k, \quad k = k_++1,\ldots,K, \end{aligned} \end{equation*} a convex formulation whose Pareto optimal solutions provide pessimistic bounds on the doses-at-volume of the optimized treatment plan. The formulation expands into \begin{equation}\label{eq:convexFMO} \begin{aligned} & \minimizetwo{\alpha_k,d_k \in \mathbb{R},\,x \in \mathcal{X}}{\eta^k \in \mathbb{R}^{m_k}} &&\mkern-12mu \big[\,d_1 \cdots d_{k_+} -\!\! d_{k_++1} \cdots -\!\! d_K\,\big]^T \\ & \mathop{\operator@font subject\ to} && \alpha_k + \frac{1}{\vrefk{k}}\sum_{j \in V_{r_k}} \Delta^r_j \eta^k_j \leq d_k, \\ & && \eta^k_j \geq p_j^Tx - \alpha_k, \enskip \eta^k_j \geq 0, \enskip j \in V_{r_k}, \\[3pt] & && &&\mkern-120mu k = 1,\ldots,k_+, \\[8pt] & && \alpha_k - \frac{1}{1-\vrefk{k}}\sum_{j \in V_{r_k}} \Delta^r_j \eta^k_j \geq d_k, \\ & && \eta^k_j \geq \alpha_k - p_j^Tx, \enskip \eta^k_j \geq 0, \enskip j \in V_{r_k}, \\[3pt] & && &&\mkern-120mu k = k_++1,\ldots,K, \\[8pt] & && d_k \in \big[l_k,u_k\big], &&\mkern-120mu k = 1,\ldots,K, \end{aligned} \end{equation} with artificial $m_k$-dimensional variables $\eta^k$ introduced to linearly handle the positive part function, denoting by $m_k$ the number of voxels in ROI $r_k$; and with auxiliary variables $d_k$ introduced for clarity. It should be noted that we have slightly modified the formulation by accepting both upper and lower bounds on $d_k$. The desired (and obtained) effect is removed incentive to minimize or maximize the $k$th mean-tail-dose beyond these limits. In a Pareto optimal solution, $d_k$ takes the value of the $k$th mean-tail-dose or, if any bound on $d_k$ is active, gives a pessimistic bound. \subsection{A note on planning constraints} Planning constraints impose bounds on doses-at-volume without providing incentive to exceed the requirements. 
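To indicate how \eqref{eq:convexFMO} can be prototyped, the sketch below solves a single-criterion instance as a linear program: the upper mean-tail-dose of a healthy structure is minimized subject to a lower bound on the lower mean-tail-dose of the target. The dose deposition matrices, dimensions and bounds are synthetic stand-ins; this is not the implementation used in the study, which is described in Section~\ref{sec:Method}.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, mT, mO = 3, 4, 4                  # beamlets, target voxels, OAR voxels
PT = rng.uniform(0.5, 1.0, (mT, n))  # synthetic dose deposition, target
PO = rng.uniform(0.1, 0.4, (mO, n))  # synthetic dose deposition, OAR
vT, vO, L = 0.5, 0.3, 60.0           # volume fractions; target lower bound

# Variable layout: z = [x (n) | aO | etaO (mO) | aT | etaT (mT)],
# with uniform relative volumes Delta_j = 1/m in each structure.
N = n + 1 + mO + 1 + mT
iO, jO = n, n + 1
iT, jT = n + 1 + mO, n + 2 + mO

c = np.zeros(N)                      # minimize aO + (1/vO) sum_j Delta_j etaO_j
c[iO] = 1.0
c[jO:jO + mO] = 1.0 / (vO * mO)

A, b = [], []
for j in range(mO):                  # etaO_j >= PO_j . x - aO
    r = np.zeros(N); r[:n] = PO[j]; r[iO] = -1.0; r[jO + j] = -1.0
    A.append(r); b.append(0.0)
for j in range(mT):                  # etaT_j >= aT - PT_j . x
    r = np.zeros(N); r[:n] = -PT[j]; r[iT] = 1.0; r[jT + j] = -1.0
    A.append(r); b.append(0.0)
r = np.zeros(N)                      # aT - (1/(1-vT)) sum_j Delta_j etaT_j >= L
r[iT] = -1.0; r[jT:jT + mT] = 1.0 / ((1.0 - vT) * mT)
A.append(r); b.append(-L)

bounds = ([(0, None)] * n + [(None, None)] + [(0, None)] * mO
          + [(None, None)] + [(0, None)] * mT)
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds)
print(res.success, res.fun)          # optimal upper mean-tail-dose of the OAR
```

Since the artificial variables $\eta$ over-estimate the positive parts, the target constraint is conservative: any feasible point guarantees the required lower mean-tail-dose, mirroring the pessimistic bounds discussed above.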
An upper bound is formulated by conventional means as the constraint $q^+(x) \leq 0$, which corresponds to enforcing the DVH curve to reach the point $(d^{\text{ref}},v^{\text{ref}})$, so that $d^{\text{ref}}$ is the bound. Analogously, a lower bound is imposed by $q^-(x) \leq 0$. In the proposed framework, upper and lower bounds on doses-at-volume are formulated as upper or lower bounds on upper or lower mean-tail-dose, respectively. \subsection{A note on maximum and minimum dose} Maximum or minimum doses are conventionally controlled by quadratically penalizing all voxel dose deviations from the threshold $d^{\text{ref}}$. The mathematical definitions are \begin{equation*} q^{\text{max}}(x; r, d^{\text{ref}}) = \sum_{\substack{j \in V_r:\\ d^{\text{ref}} \leq p_j^Tx}} \Delta_j^r\,\big(p_j^Tx - d^{\text{ref}}\big)^2 \end{equation*} for a maximum dose objective, and \begin{equation*} q^{\text{min}}(x; r, d^{\text{ref}}) = \sum_{\substack{j \in V_r:\\ p_j^Tx \leq d^{\text{ref}}}} \Delta_j^r\,\big(d^{\text{ref}} - p_j^Tx\big)^2 \end{equation*} for a minimum dose objective, coinciding with $q^+(x)$ and $q^-(x)$ for $v^{\text{ref}} = 0$ and $v^{\text{ref}} = 1$, respectively. The functions are convex and differentiable, but the previous discussion regarding inconsistency with plan quality applies: the planning objectives are successful in pushing the DVH curve towards $d^{\text{ref}}$, but have limited ability to control the actual maximum or minimum dose. The proposed framework allows direct optimization of the maximum and minimum dose statistics. These are mathematically defined as \begin{equation*} d^{\text{max}}(x; r) = \min \{ d \in \mathbb{R} : p_j^Tx \leq d, \forall j \in V_r \} \end{equation*} for maximum dose, and \begin{equation*} d^{\text{min}}(x; r) = \max \{ d \in \mathbb{R} : p_j^Tx \geq d, \forall j \in V_r \} \end{equation*} for minimum dose.
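To make these relations concrete, the statistics above can be sketched for a plain dose vector with uniform relative voxel volumes $\Delta_j = 1/m$. The following is an illustrative stand-alone example, not the implementation used in this study; the function name and the data are hypothetical:

```python
import numpy as np

def dose_statistics(dose, v_ref):
    """Dose-at-volume, mean-tail-doses, and max/min dose for uniform
    relative voxel volumes (Delta_j = 1/m)."""
    d = np.sort(dose)[::-1]                   # voxel doses, decreasing
    m = len(d)
    k = int(np.ceil(v_ref * m))               # voxels in the hot tail
    dav = d[k - 1]                            # dose-at-volume D(x)
    d_up = d[:k].mean()                       # upper mean-tail-dose d+(x)
    d_low = d[k:].mean() if k < m else d[-1]  # lower mean-tail-dose d-(x)
    return dav, d_up, d_low, d[0], d[-1]      # plus max and min dose

rng = np.random.default_rng(0)
dose = rng.uniform(40.0, 80.0, size=1000)
v_ref = 0.2                                   # v_ref * m is an integer here
dav, d_up, d_low, d_max, d_min = dose_statistics(dose, v_ref)
assert d_min <= d_low <= dav <= d_up <= d_max

# the Rockafellar-Uryasev form, evaluated at alpha = D(x), recovers d+(x)
ru = dav + np.clip(dose - dav, 0.0, None).mean() / v_ref
assert np.isclose(ru, d_up)
```

The last check evaluates the Rockafellar--Uryasev objective at $\alpha = D(x)$; with $v^{\text{ref}} m$ an integer and no tied doses, it recovers $d^+(x)$ exactly, and the first assertion illustrates the ordering $d^-(x) \leq D(x) \leq d^+(x)$ used to bound the dose-at-volume.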
Convexity of $d^{\text{max}}$ and concavity of $d^{\text{min}}$ make them suitable for minimization and maximization, respectively, and they can be integrated into \eqref{eq:convexFMO} without changing its characteristics. The construction of planning constraints for maximum and minimum dose is analogous to the construction of dose-at-volume constraints. For instance, an upper bound on maximum dose is given by $q^{\text{max}}(x; r, d^{\text{ref}}) \leq 0$ or $d^{\text{max}}(x; r) \leq u$. It should be noted that these constraints are equivalent if $d^{\text{ref}} = u$. \section{Computational study}\label{sec:CompStud} The proposed and conventional frameworks are juxtaposed in a preliminary computational study comprising two patient cases. The aim is to get an indication of the relative merit of the planning objectives as tools for optimizing DVH statistics. To this end, we compare the distribution of doses-at-volume among treatment plans generated in either the proposed or the conventional framework. Both patient cases are limited to three planning objectives in order to allow comparison in three-dimensional plots; however, planning constraints are added to increase complexity and clinical relevance. Weighted-sum instances of the proposed formulation in \eqref{eq:convexFMO} are solved using a MATLAB (MathWorks, Natick, Massachusetts) implementation of the method presented in Section \ref{sec:Method}. The conventional formulation is managed using the SQP solver SNOPT \cite{Gill2005} (Stanford Business Software, Stanford, California). Patient geometries and other problem data, including dose deposition coefficient matrices, are exported from RayStation (RaySearch Laboratories, Stockholm, Sweden). \subsection{Patient cases} The cases involve a prostate and a lung cancer patient. The patient geometries are discretized into $74\,877$ and $106\,465$~voxels and the (five coplanar) beams into, in total, $1310$ and $3923$~beamlets, all using 5~mm resolution grids. 
Planning objectives are chosen to minimize doses-at-volume to respectively the bladder, rectum and entire healthy volume (referred to as the surrounding ROI), and the esophagus, lung and surrounding ROI. The following requirements are added as planning constraints. For the prostate case, a minimum dose of 68~Gy is required to the planning target volume (PTV) and the maximum dose to the entire patient volume (external ROI) must not exceed 72~Gy. For the lung case, a minimum of 66.5~Gy and a maximum of 75~Gy are required to the PTV, and the maximum dose to the surrounding ROI and spinal cord must not exceed 70~Gy and 50~Gy, respectively. All parameters are listed in Table~\ref{tab:ModelParams}. It should be noted that, by restricting to planning constraints for minimum and maximum dose, we ensure that the proposed and conventional formulations define identical feasible regions. \begin{table}[t]\centering \caption{Parameters for planning objectives (o) and constraints (c) used in the prostate and lung case.\label{tab:ModelParams}} \begin{tabularx}{\columnwidth}{l l l X X X l X X} \hline\hline \multicolumn{3}{l}{\bf Prostate case} \\ & & \multicolumn{4}{l}{Proposed} & \multicolumn{3}{l}{Conventional} \\ & ROI & & $\vrefk{k}$ & $l_k$ & $u_k$ & & $\vrefk{k}$ & $\drefk{k}$ \\\hline o & Surrounding & $d^+$ & 5\% & 0 & 70 & $q^+$ & 5\% & 0 \\ o & Bladder & $d^+$ & 50\% & 0 & 70 & $q^+$ & 50\% & 0 \\ o & Rectum & $d^+$ & 20\% & 0 & 70 & $q^+$ & 20\% & 0 \\[3pt] c & External & $d^{\text{max}}$ & --- & --- & 72 & $q^{\text{max}}$ & --- & 72 \\ c & PTV & $d^{\text{min}}$ & --- & 68 & --- & $q^{\text{min}}$ & --- & 68 \\[6pt] \multicolumn{3}{l}{\bf Lung case} \\ & & \multicolumn{4}{l}{Proposed} & \multicolumn{3}{l}{Conventional} \\ & ROI & & $\vrefk{k}$ & $l_k$ & $u_k$ & & $\vrefk{k}$ & $\drefk{k}$ \\\hline o & Surrounding & $d^+$ & 5\% & 0 & 70 & $q^+$ & 5\% & 0 \\ o & Lung & $d^+$ & 25\% & 0 & 70 & $q^+$ & 25\% & 0 \\ o & Esophagus & $d^+$ & 20\% & 0 & 70 & $q^+$ & 20\% & 0 
\\[3pt] c & Surrounding & $d^{\text{max}}$ & --- & --- & 70 & $q^{\text{max}}$ & --- & 70 \\ c & PTV & $d^{\text{min}}$ & --- & 66.5 & --- & $q^{\text{min}}$ & --- & 66.5 \\ c & PTV & $d^{\text{max}}$ & --- & --- & 75 & $q^{\text{max}}$ & --- & 75 \\ c & Spinal cord & $d^{\text{max}}$ & --- & --- & 50 & $q^{\text{max}}$ & --- & 50 \\ \hline\hline \end{tabularx} \end{table} \subsection{DVH statistics of treatment plan cohorts} A cohort of differently balanced treatment plans was generated for both patient cases and within both frameworks, resulting in four sets of plans; each treatment plan is the result of a weighted-sum instance. The underlying four sets of objective weight triplets $(w_1,w_2,w_3)$ all included the anchor (one $w_k$ equal to unity and others to zero) and the balanced (all $w_k$ equal) triplets. The remaining combinations varied among the four sets and were sampled from successively refined symmetric grids to have, as far as deemed possible, well-distributed plans. This strategy is primitive in its nature and only convenient for two- or three-dimensional MCO; for higher dimensions, it is recommended to apply techniques similar to those suggested in \cite{Bokrantz2013b}. A total of respectively 55 and 159 plans were generated for the prostate case using the proposed and conventional framework, and 34 and 130 for the lung case. Each treatment plan was characterized by the three doses-at-volume intended to be optimized. The distribution of such triplets in a three-dimensional coordinate system is illustrated in Figures~\ref{fig:Pareto:prostate}~and~\ref{fig:Pareto:lung}; the left, middle, and right subfigures show different angles to enhance spatial perception and the corner of all-lowest value within the axes limits has been marked with a circle. 
\begin{figure}[t]\centering \subfloat{\centering\includegraphics[height=90pt]{prostate3.pdf}}\hspace*{10pt} \subfloat{\centering\includegraphics[height=90pt]{prostate2.pdf}}\hspace*{10pt} \subfloat{\centering\includegraphics[height=90pt]{prostate1.pdf}} \caption{Dose-at-volume triplets [Gy] obtained in prostate plans generated using the proposed (black dots) and conventional (red dots) planning objectives, and the convex hull of the former. The left, middle, and right subfigures show different angles of the plot. The doses-at-volume concern the surrounding ROI (S), bladder (B), and rectum (R). \label{fig:Pareto:prostate}} \end{figure} \begin{figure}[t]\centering \subfloat{\centering\includegraphics[height=90pt]{lung3.pdf}}\hspace*{10pt} \subfloat{\centering\includegraphics[height=90pt]{lung2.pdf}}\hspace*{10pt} \subfloat{\centering\includegraphics[height=90pt]{lung1.pdf}} \caption{Dose-at-volume triplets [Gy] obtained in lung plans generated using the proposed (black dots) and conventional (red dots) planning objectives, and the convex hull of the former. The left, middle, and right subfigures show different angles of the plot. The doses-at-volume concern the esophagus (E), lung (L), and surrounding ROI (S).\label{fig:Pareto:lung}} \end{figure} The figures give two indications of particular interest. First, the plans optimized using the proposed planning objectives are superior to those generated within the conventional framework in the sense that their convex hull is located closer to the all-lowest point. This dominance is seen for both patient cases and suggests that the proposed framework, despite its approximative nature, provided a more efficient tool for optimizing the DVH statistics. Second, the proposed plans dominate each other to a remarkably smaller extent than the conventional plans. 
For the lung case in particular, the proposed plans span a wide region in the dose-at-volume domain, as opposed to the conventional plans, which give the impression of being randomly scattered. A comment is needed regarding the accuracy in terms of fulfilment of the planning constraints. While each generated proposed plan strictly satisfied all requirements on minimum and maximum dose, almost all conventional plans violated them. For instance, the median minimum PTV dose of the conventional plans corresponds to an underdose of 2.0 Gy in the prostate case and 3.6 Gy in the lung case. Violation within some tolerance is standard; however, SNOPT had difficulties in satisfying the specified tolerance in several tested instances. This property of conventional planning is known, and is briefly discussed and explained in \cite{Fredriksson2012}. \section{Numerical method}\label{sec:Method} In this section, the issue of solving weighted-sum instances of \eqref{eq:convexFMO} is discussed. The instances belong to the class of linear programming (LP) problems, for which there exist several numerical optimization methods, some commercially available in general-purpose implementations. However, the LP problems of this study require more careful handling, since their size is proportional to the number of voxels. With 5~mm resolution, the number of variables and constraints is easily brought to the order of $10^5$. Below, we describe how the specific structure of \eqref{eq:convexFMO} allows us to eliminate the number-of-voxels dependence in the systems of linear equations of an interior-point method. The size of the reduced system is proportional to the number of beamlets which, at least for fixed-gantry FMO, is usually orders of magnitude smaller than the number of voxels. In demonstrating how problem structure is accounted for, we give a brief description of a standard primal-dual interior-point method. The interested reader is referred to, e.g., \cite{Wright1997} for a more thorough treatment.
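To fix ideas before turning to the structure-exploiting method, a toy weighted-sum instance of the linearization in \eqref{eq:convexFMO} can be assembled and handed to a general-purpose LP solver. The sketch below is illustrative only: the dose deposition data, the dimensions, and the setup with a single upper mean-tail-dose objective and one min-dose constraint are hypothetical and far smaller than any clinical case, and it assumes that SciPy's \texttt{linprog} is available:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m_t, m_o = 4, 5, 6                    # beamlets, target / OAR voxels
P_t = rng.uniform(0.5, 1.5, (m_t, n))    # hypothetical dose deposition
P_o = rng.uniform(0.1, 0.8, (m_o, n))    # matrices (rows are p_j^T)
v_ref = 0.5                              # chosen so v_ref * m_o is integral

# variable layout: z = [x (n), alpha (1), eta (m_o), d (1)]
nz = n + 1 + m_o + 1
c = np.zeros(nz)
c[-1] = 1.0                              # minimize d = upper mean-tail-dose

A, b = [], []
for i in range(m_t):                     # min-dose constraint: P_t x >= 60
    row = np.zeros(nz); row[:n] = -P_t[i]
    A.append(row); b.append(-60.0)
for j in range(m_o):                     # eta_j >= p_j^T x - alpha
    row = np.zeros(nz); row[:n] = P_o[j]
    row[n] = -1.0; row[n + 1 + j] = -1.0
    A.append(row); b.append(0.0)
row = np.zeros(nz)                       # alpha + (1/v_ref) sum Delta_j eta_j <= d
row[n] = 1.0; row[n + 1:n + 1 + m_o] = 1.0 / (v_ref * m_o); row[-1] = -1.0
A.append(row); b.append(0.0)

bounds = [(0, None)] * n + [(None, None)] \
       + [(0, None)] * m_o + [(None, None)]
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b), bounds=bounds,
              method="highs")

oar_dose = P_o @ res.x[:n]
k = int(v_ref * m_o)                     # with v_ref * m_o integral, the
assert res.status == 0                   # optimum is the mean of the k
assert np.isclose(res.fun, np.sort(oar_dose)[-k:].mean())  # hottest voxels
```

The final assertion checks that the optimized objective coincides with the average of the hottest $v^{\text{ref}} m$ OAR voxel doses. The point of this section is that such black-box solves do not scale to problems with $10^5$ voxels, which motivates the dedicated method below.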
\subsection{Interior-point method for specific structure}\label{sec:Method:Specific} For convenience, we express a weighted-sum instance of \eqref{eq:convexFMO} in the more compact form \begin{equation}\label{eq:compactForm} \begin{aligned} &\minimize{z,\eta} && c_z^Tz + c_{\eta}^T\eta \\ &\mathop{\operator@font subject\ to} && A_{zz} z + A_{z\eta}\eta \geq b_z, \\ & && A_{\eta z}z + \eta \geq b_{\eta}, \\ & && z,\eta \geq 0, \end{aligned} \end{equation} where $\eta$ collects the vectors $\eta^k$ for all $k$ and $z$ collects the remaining variables. This compact form preserves the key characteristic of \eqref{eq:convexFMO}: the identity coefficient matrix of $\eta$ in the second constraint, which stems from the constraints $\eta_j^k \geq p_j^Tx-\alpha_k$ and $\eta_j^k \geq \alpha_k-p_j^Tx$. We should also note that the size of $A_{zz}$ is number-of-voxels independent, and that $A_{\eta z}$ is dense due to its rows containing $p_j^T$. Associated with the (primal) optimization formulation \eqref{eq:compactForm} is its dual \begin{equation*} \begin{aligned} &\maximize{y_z,y_{\eta}} && b_z^Ty_z + b_{\eta}^Ty_{\eta} \\ &\mathop{\operator@font subject\ to} && A_{zz}^T y_z + A_{\eta z}^Ty_{\eta} \leq c_z, \\ & && A_{z\eta}^Ty_z + y_{\eta} \leq c_{\eta}, \\ & && y_z,y_{\eta} \geq 0. \end{aligned} \end{equation*} An interior-point method applies Newton's method to find a solution to the complementary slackness conditions \begin{equation*} \begin{aligned} y_z^T\!\left(A_{zz}z + A_{z\eta}\eta - b_z\right) = 0, && \mkern-10mu y_{\eta}^T\!\left(A_{\eta z}z + \eta - b_{\eta}\right) = 0, \\ z^T\!\left(c_z - A_{zz}^Ty_z - A_{\eta z}^Ty_{\eta}\right) = 0, && \mkern-10mu \eta^T\!\left(c_{\eta} - A_{z\eta}^Ty_z - y_{\eta}\right) = 0, \end{aligned} \end{equation*} while ensuring strict primal and dual feasibility by appropriate step sizes; feasibility and complementarity together are necessary and sufficient for optimality.
To this end, the left-hand-sides of the conditions are perturbed by some $\mu > 0$ that is successively decreased as the method proceeds. Each iteration hence amounts to solving the system of linear equations \begin{equation}\label{eq:compactForm:KKTsys}\begin{split} \arraycolsep=2pt \left[\begin{array}{ c c | c c } D_1 & -ZA_{zz}^T & 0 & -ZA_{\eta z}^T \\ Y_zA_{zz} & D_2 & Y_zA_{z\eta} & 0 \\\hline 0 & -HA_{z\eta}^T & D_3 & -H \\ Y_{\eta}A_{\eta z} & 0 & Y_{\eta} & D_4 \end{array}\right] \left[\begin{array}{ l } \Delta z \\ \Delta y_z \\ \Delta \eta \\ \Delta y_{\eta} \end{array}\right] = & \\ &\mkern-140mu -\left[\begin{array}{ r } Z D_1 e - \mu e \\ Y_z D_2 e - \mu e \\ H D_3 e - \mu e \\ Y_{\eta} D_4 e - \mu e \end{array}\right] \end{split}\end{equation} for some $\mu$, where \begin{align*} & Z = \mathop{\operator@font diag}(z), && D_1 = \mathop{\operator@font diag}\left(c_z - A_{zz}^T y_z - A_{\eta z}^Ty_{\eta}\right), \\ & Y_z = \mathop{\operator@font diag}(y_z), && D_2 = \mathop{\operator@font diag}\left(A_{zz} z + A_{z\eta}\eta - b_z \right), \\ & H = \mathop{\operator@font diag}(\eta), && D_3 = \mathop{\operator@font diag}\left(c_{\eta} - A_{z\eta}^Ty_z - y_{\eta}\right), \\ & Y_{\eta} = \mathop{\operator@font diag}(y_{\eta}), && D_4 = \mathop{\operator@font diag}\left(A_{\eta z}z + \eta - b_{\eta}\right). \end{align*} The method converges to an optimal solution as the perturbation $\mu$ approaches zero. Solving \eqref{eq:compactForm:KKTsys} is a computational challenge since the size of the bottom right block is dependent on the number of voxels. However, accounting for the specific structure of the bottom block permits to solve \eqref{eq:compactForm:KKTsys} in two relatively inexpensive steps: by one solve with the substantially smaller Schur complement \begin{equation*}\begin{split} \arraycolsep=1pt \left[\begin{array}{ c c } D_1 & \!\!\! 
-ZA_{zz}^T \\ Y_zA_{zz} & D_2 \end{array}\right] - & \\ &\mkern-100mu \arraycolsep=1pt \left[\begin{array}{ c c } 0 & \!\!\! -ZA_{\eta z}^T \\ Y_zA_{z \eta} & 0 \end{array}\right] \arraycolsep=4pt \left[\begin{array}{ c c } D_3 & -H \\ Y_{\eta} & D_4 \end{array}\right]^{-1} \arraycolsep=1pt \left[\begin{array}{ c c } 0 & \!\!\! -HA_{z \eta}^T \\ Y_{\eta}A_{\eta z} & 0 \end{array}\right] \end{split}\end{equation*} whose size is of the same order as the number of beamlets; and by one solve with the bottom block. The computational gain comes from the fact that, as the composite of four diagonal matrices, \begin{equation*} \begin{bmatrix} D_3 & -H \\ Y_{\eta} & D_4 \end{bmatrix}^{-1} = \, \begin{bmatrix} M D_4 & M H \\ -MY_{\eta} & MD_3 \end{bmatrix} \end{equation*} with $M=\left(D_3 D_4 + Y_{\eta}H \right)^{-1}$ diagonal, both the bottom block and its inverse merely act as the inexpensive operation of scaling and adding two vectors. The result is a dimensionality reduction of several orders of magnitude. The main computational cost per iteration now lies in computing the dense matrix product $A_{\eta z}^T D_3 Y_{\eta} A_{\eta z}$ appearing in the Schur complement. A technique is suggested in \cite{Gondzio1996} to accelerate convergence at the expense of multiple solves of \eqref{eq:compactForm:KKTsys} with different right-hand-sides. Each additional solve is relatively inexpensive, since it reuses the Schur complement (and its factorization) of the first solve. A decrease in total computational cost is expected, since the accelerated convergence needs fewer iterations to meet a certain accuracy. It should be mentioned that interior-point methods require starting points that strictly fulfil inequality constraints.
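The claimed inverse of the bottom block is easy to verify numerically, since all four blocks are diagonal. A minimal stand-alone sketch (toy dimensions and random positive diagonals; not the implementation used in this study):

```python
import numpy as np

rng = np.random.default_rng(2)
m = 5                                     # toy number of voxels
d3, d4, h, y = (rng.uniform(0.5, 2.0, m) for _ in range(4))
D3, D4, H, Y = map(np.diag, (d3, d4, h, y))

# bottom block of the Newton system and its claimed inverse
B = np.block([[D3, -H], [Y, D4]])
m_inv = 1.0 / (d3 * d4 + y * h)           # diagonal of M = (D3 D4 + Y H)^{-1}
B_inv = np.block([[np.diag(m_inv * d4), np.diag(m_inv * h)],
                  [np.diag(-m_inv * y), np.diag(m_inv * d3)]])
assert np.allclose(B @ B_inv, np.eye(2 * m))

# applying B^{-1} reduces to elementwise scaling and adding two vectors
u, v = rng.standard_normal(m), rng.standard_normal(m)
top = m_inv * (d4 * u + h * v)
bot = m_inv * (-y * u + d3 * v)
assert np.allclose(B_inv @ np.concatenate([u, v]), np.concatenate([top, bot]))
```

The second assertion confirms that applying the inverse amounts to elementwise operations on two vectors, which is what removes the number-of-voxels dependence from each solve with the bottom block.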
It is therefore convenient to introduce nonnegative slack variables $s_z$, $s_{\eta}$, $\sigma_z$, and $\sigma_{\eta} \geq 0$ in order to turn the non-trivial primal and dual inequalities into equalities, \begin{equation*} \begin{aligned} A_{zz}z + A_{z\eta}\eta - s_z = b_z, && A_{zz}^Ty_z + A_{\eta z}^Ty_{\eta} + \sigma_z = c_z, \\ A_{\eta z}z + \eta - s_{\eta} = b_{\eta}, && A_{z\eta}^Ty_z + y_{\eta} + \sigma_{\eta} = c_{\eta}. \end{aligned} \end{equation*} However, Newton's method reduces to solving \eqref{eq:compactForm:KKTsys} with $D_1$, $D_2$, $D_3$, and $D_4$ in the coefficient matrix respectively replaced by $\Sigma_z = \mathop{\operator@font diag}(\sigma_z)$, $S_z = \mathop{\operator@font diag}(s_z)$, $\Sigma_{\eta} = \mathop{\operator@font diag}(\sigma_{\eta})$, and $S_{\eta} = \mathop{\operator@font diag}(s_{\eta})$. The computational effect on a single iteration is therefore insignificant. \subsection{Performance of method implementation} We finalize this section by presenting the results of applying a MATLAB R2015a implementation of the previously described interior-point method to the cases introduced in Section~\ref{sec:CompStud}. The progress of the method is shown in Table~\ref{tab:Performance}, with the results of using the commercial software CPLEX 12 (IBM, Armonk, New York) given as a reference. \begin{table}[t]\centering \caption{Iteration progress of the interior-point method (IP) described in Section~\ref{sec:Method:Specific} applied to the prostate and lung cases. 
\emph{Dim} is the size of the original system of linear equations, whereas \emph{reduced dim} and \emph{sparsity} concern the Schur complement.\label{tab:Performance}} \begin{tabularx}{\columnwidth}{l X X X X X} \hline\hline \multicolumn{6}{l}{\bf Prostate case} \\ \multicolumn{6}{l}{Dim 459\,638, reduced dim 3954, sparsity 11.16 \%} \\[3pt] Solver & Dual gap & Nb fact & Nb solv & Obj val & Residual \\\hline IP & 7.04e-01 & 29 & 107 & 57.684 & 1.13e-02 \\ & 3.45e-02 & 33 & 127 & 57.433 & 8.89e-04 \\ & 7.47e-03 & 35 & 133 & 57.424 & 2.28e-04 \\ & 7.76e-04 & 37 & 142 & 57.421 & 3.56e-05 \\ & 2.81e-06 & 39 & 156 & 57.421 & 1.27e-07 \\ & 4.72e-08 & 40 & 159 & 57.421 & 2.22e-09 \\[3pt] CPLEX & 4.75e-05 & 26 & --- & 57.421 & 3.77e-15 \\[6pt] \multicolumn{6}{l}{\bf Lung case} \\ \multicolumn{6}{l}{Dim 746\,649, reduced dim 11\,799, sparsity 11.13 \%} \\[3pt] Solver & Dual gap & Nb fact & Nb solv & Obj val & Residual \\\hline IP & 7.10e-01 & 34 & 142 & 47.410 & 1.81e-02 \\ & 7.36e-02 & 38 & 163 & 47.159 & 2.21e-03 \\ & 9.50e-03 & 41 & 173 & 47.134 & 3.16e-04 \\ & 1.36e-04 & 44 & 190 & 47.130 & 5.06e-06 \\ & 1.94e-05 & 45 & 195 & 47.130 & 1.06e-06 \\ & 3.29e-07 & 46 & 201 & 47.130 & 1.18e-08 \\[3pt] CPLEX & 1.10e-04 & 40 & --- & 47.130 & 6.02e-06 \\ \hline\hline \end{tabularx} \end{table} The total number of factorizations (one per iteration) and solves (up to ten per iteration) is indicated in columns \emph{Nb fact} and \emph{Nb solv}. \emph{Dual gap} gives an upper bound on the gap between the current objective function value (\emph{Obj val}) and the optimum, and can be used as a stopping criterion. \emph{Residual} is the current primal and dual feasibility. As seen in Table~\ref{tab:Performance}, the required number of iterations was 40 and 46 to meet the accuracy criterion. 
On an Intel Core i7 2.80 GHz computer, computing and factorizing the Schur complement took about 4.9 and 0.46 seconds for the prostate case, resulting in a total running time of approximately 5 minutes. For the lung case, 59.5 and 9.2 seconds translate into a running time of about 60 minutes. This should be contrasted to the several hours required by CPLEX to solve each of the cases, indicating the importance of accounting for problem structure when solving \eqref{eq:convexFMO}. \section{Discussion} Treatment planning by conventional means is known to be a complex process involving several re-optimizations with successively fine-adjusted parameters. In this study, we have dealt with one possible cause: inconsistency between the criteria used for optimizing and evaluating treatment plans. We propose planning objectives with an explicit relationship to commonly used plan quality measures, in our case the DVH statistics, and have thereby left the usual penalty-function framework used by both the conventional and other suggested planning objectives \cite{Bokrantz2013b,Kessler2005,Romeijn2006}. In an initial computational study involving fluence map optimization of two patient cases, we explored the potential of the proposed framework as a tool for optimizing the DVH statistics. We hypothesized that the doses-at-volume (i.e., individual points on the DVH curve) of treatment plans optimized using the proposed planning objectives would be ranked as better than the doses-at-volume of plans generated within the conventional framework. Dominance in this aspect was indeed observed in cohorts of differently balanced plans, and in addition, a larger variety of doses-at-volume was seen among plans optimized within the proposed framework. The indication is that the DVH statistics are better optimized and more efficiently balanced by the proposed planning objectives than by the conventional approach. 
Despite its trial-and-error nature, the conventional planning process undeniably produces treatment plans of high clinical relevance. Whether the same is true for treatment plans optimized using the proposed framework was beyond the scope of this preliminary study. Examination of the clinical acceptability of treatment plans is of high importance for drawing further conclusions regarding the proposed planning objectives and should be covered by future investigation, for instance by looking more closely at dose distributions and DVH curves. The analysis would be further strengthened if deliverability constraints are added, which relate to the physical limitations of treatment machines. Clinical relevance also depends on the availability of fast optimization methods. The method described in Section~\ref{sec:Method} indicates that the proposed formulation is solvable if its structure is accounted for. The method is not as efficient as most commercial treatment planning systems, and the computational cost is expected to increase further if deliverability constraints are added. However, given that the conventional framework requires a considerable amount of manual overhead, longer optimization running times could be acceptable if the planning process necessitates less user guidance. \section{Conclusion} We have formulated planning objectives for treatment plan multicriteria optimization with an explicit relationship to DVH statistics. This is in contrast to the planning objectives in conventional clinical use, by which the violation of DVH statistics thresholds is minimized, offering limited control of individual points on the DVH curve (doses-at-volume). The merit of the two planning approaches as tools for DVH statistics optimization was investigated by exploring sets of differently balanced treatment plans generated using each approach.
Dominance was observed, in the sense of better doses-at-volume, among the sets of plans optimized within the proposed framework. In addition, a larger variety of doses-at-volume was seen in these sets of plans, indicating that the DVH statistics are better optimized and more efficiently balanced using the proposed planning objectives. Treatment planning with the proposed planning objectives amounts to solving optimization problems whose dimensions are of the same order as the generally large number of voxels. Availability of a numerical method that can handle these large problems is of utmost importance for the validity of the proposed framework. We have demonstrated how the problem structure can be used to adapt a standard optimization method, and thereby obtain a reduction in computational cost by several orders of magnitude.
\section{introduction} \label{Intro} The bond-percolation model can be described by means of the partition sum \begin{equation} Z_{\rm bond}(p)=\prod_{\langle{ij}\rangle} \sum_{b_{ij}=0}^1 [(1-p) (1-b_{ij}) + p b_{ij}]=1 \label{zb} \end{equation} where the bond variables $b_{ij}$ are located on the edges of a lattice, and labeled with the site numbers at both ends. The `bonds', i.e., the nonzero bond variables, form a network of which one may study the percolation properties. Similarly, the site percolation problem is described by \begin{equation} Z_{\rm site}(p)=\prod_{\langle{i}\rangle} \sum_{s_{i}=0}^1 [(1-p) (1-s_{i}) + p s_{i}]=1 \label{zs} \end{equation} In this case, the percolation problem is formed by adding `bonds' between all pairs $(i,j)$ of neighboring sites that are both occupied ($s_i=s_j=1$). Eqs.~(\ref{zb}) and (\ref{zs}) specify that the bonds or sites are occupied with independent probabilities $p$. The values of these partition sums are trivial, but the percolation properties contained in these models are not. These properties, in particular for two-dimensional models, have been investigated by a considerable number of different approaches, see \cite{MFS1,E,DV,HK,MNSS,BN,RMZ1,RMZ2,RMZ3,DB1} and references therein. According to the universality hypothesis, some of these properties, such as the critical exponents, are the same for different two-dimensional lattices. Other properties, such as the percolation threshold, are naturally dependent on the type of the lattice, as well as on the number of neighbors to which a given site can form a bond. If not mentioned explicitly, we consider models with bonds between nearest-neighbor sites only. The present work reports some new findings, obtained by means of Monte Carlo simulation and transfer-matrix methods. Monte Carlo simulation was used in the cases of site percolation on the diced lattice, and of bond percolation on the square lattice with nearest- and next-nearest-neighbor bonds. 
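Equations~(\ref{zb}) and (\ref{zs}) merely state that bonds or sites are occupied independently with probability $p$; the nontrivial content lies in the connectivity of the resulting configurations. As a minimal stand-alone illustration (hypothetical simplified code, not the production programs used in this work), a bond configuration on an open $L \times L$ square lattice can be generated and decomposed into clusters with a union-find structure:

```python
import numpy as np

def find(parent, i):
    while parent[i] != i:                   # path halving
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def bond_clusters(L, p, rng):
    """Occupy each nearest-neighbor bond of an open L x L square lattice
    independently with probability p; return a cluster label per site."""
    parent = np.arange(L * L)
    for x in range(L):
        for y in range(L):
            i = x * L + y
            neighbors = ([i + L] if x + 1 < L else []) \
                      + ([i + 1] if y + 1 < L else [])
            for j in neighbors:
                if rng.random() < p:        # bond variable b_ij = 1
                    parent[find(parent, i)] = find(parent, j)
    return np.array([find(parent, i) for i in range(L * L)])

rng = np.random.default_rng(3)
labels = bond_clusters(40, 0.5, rng)
n_clusters = len(set(labels))
assert 1 <= n_clusters <= 40 * 40
```

At $p=0$ every site forms its own cluster and at $p=1$ a single cluster remains; on the square lattice the nearest-neighbor bond-percolation threshold is known exactly, $p_c = 1/2$.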
The outline of this paper is as follows. In Sec.~\ref{secnum} we sketch our transfer-matrix and Monte Carlo methods. Section \ref{secres} describes the analyses and lists our results. The conclusions are summarized and discussed in Sec.~\ref{seccon}. \section{Numerical methods} \label{secnum} Our numerical analyses employ both transfer-matrix and Monte Carlo techniques. Both approaches have their advantages. Transfer-matrix calculations yield finite-size results of high precision, typically with error margins of the order of $10^{-12}$, which allows the application of sensitive fitting procedures. However, the transfer-matrix results are restricted to rather small values of the finite-size parameter. In contrast, the Monte Carlo results can be obtained for much larger finite sizes, but they are also subject to significant statistical errors. Which of the two techniques was applied to specific models depended on considerations of effectiveness and complexity. The site percolation problem on the diced lattice was investigated by the Monte Carlo method, in view of the expected amount of work involved in writing the various sparse-matrix multiplication subprograms in a transfer-matrix algorithm for the diced lattice, which has inequivalent sites. While the transfer-matrix method usually yields relatively accurate results, this appeared not to be the case for the bond-percolation problem on the square lattice with crossing bonds. For this reason we also investigated this problem using the Monte Carlo method. \subsection{The transfer matrix} \label{sectm} The present transfer-matrix calculations apply to models wrapped on an infinitely long cylinder, with a finite circumference $L$. For the bond-percolation model, which can be considered as the special case $q=1$ of the random-cluster representation of the $q$-state Potts model \cite{KF}, we may conveniently use the numerical methods developed earlier for the random-cluster model.
Our basic approach \cite{BND} is close in spirit to that of Derrida and Vannimenus \cite{DV} for the percolation model. Ref.~\onlinecite{BNi82} describes in detail how the state of connectivity of the sites on the end row of the cylinder can be coded by means of an integer (1, 2, 3, $\cdots$) that serves as a transfer-matrix index. That work also describes the sparse-matrix decomposition of the transfer matrix. However, the various lattice structures investigated here require modifications of the sparse-matrix methods described there. In addition, the model with nearest- and next-nearest-neighbor bonds violates the condition of `well-nestedness' described in Ref.~\onlinecite{BNi82}, so that new coding and decoding algorithms had to be devised. The transfer matrices for the site-percolation models require different modifications that, again, depend on the lattice structure. It is, in general, necessary to store the occupation number (0 or 1) of the sites on the end-most row of the lattice, as well as the state of connectivity of the occupied sites. This combined information can also be coded as an integer that plays the role of a transfer-matrix index, using the methods described in Ref.~\onlinecite{QDB}. However, the state of connectedness of the occupied sites on the last row may, depending on the lattice geometry, be subject to an additional condition. If the sites on this row are nearest neighbors, such as for the square lattice with the transfer direction along one set of lattice edges, then two adjacent, occupied sites must belong to the same cluster. This reduces the number of possible connectivities. To take advantage of this reduction, the coding-decoding algorithms for the square-lattice model were modified. For the square-lattice site-percolation model with transfer in the diagonal direction, the situation is different, and the algorithms of Ref.~\onlinecite{QDB} had to be used. 
To save memory and computer time, sparse matrix decompositions were applied in all cases. A full description of all these algorithms is beyond the scope of this paper; we trust that it is sufficient to mention that all further relevant details are contained in or follow from the explanations given here and in Refs.~\onlinecite{BNi82,QDB,BWW,KBN,TI}. The connectivities used here include those of the `magnetic' type, i.e., they carry the information specifying which of the sites of the end row are connected by a percolating path to a far-away site, say on the first row of the lattice. Using a computer with 8 gigabytes of fast memory, our algorithms can perform transfer matrix calculations in connectivity spaces of linear dimension up to the order of $10^{8}$. Some details concerning the largest system sizes that could thus be handled, and the corresponding transfer-matrix sizes, appear in Tab.~\ref{tab:tmdat}. \begin{table} \caption{Some details about the transfer-matrix calculations on the various models. Transfer directions are given with respect to a set of lattice edges. Included are the largest system sizes and the linear size of the transfer matrix for that system, as well as the corresponding size of the largest sparse matrix.
The entry `8-nb square' refers to the square lattice with nearest- and next-nearest-neighbor bonds.} \label{tab:tmdat} \begin{center} \begin{tabular}{|l|c|l|c|r|r|} \hline Lattice & type & direction& $L_{\rm max}$ & max size & sparse size \\ \hline kagome & bond & perpendicular & 13 & 5943200 & 22732740 \\ square & bond & parallel & 15 & 87253605 & 87253605 \\ square & bond & diagonal & 14 & 22732740 & 87253605 \\ triangular & bond & perpendicular & 14 & 22732740 & 87253605 \\ 8-nb square& bond & parallel & 10 & 678570 & 27644437 \\ square & site & parallel & 16 & 6903561 & 57225573 \\ square & site & diagonal & 12 & 26423275 & 125481607 \\ honeycomb & site & parallel & 12 & 26423275 & 125481607 \\ triangular & site & perpendicular & 17 & 19848489 & 57225573 \\ \hline \end{tabular} \end{center} \end{table} The eigenvalue problem of the transfer matrix reduces in effect to separate calculations in the magnetic and nonmagnetic sectors. The calculation of the largest eigenvalue in the nonmagnetic sector trivially yields $\Lambda_0=1$. The transfer-matrix construction enables the numerical calculation of the magnetic eigenvalue $\Lambda_1$ as described earlier \cite{BNi82}. The analysis of these magnetic eigenvalues uses Cardy's mapping \cite{JCxi} which establishes an asymptotic relation between the magnetic eigenvalue and the exact magnetic scaling dimension $X_h$. Furthermore we employ knowledge of the exact critical exponents from the Coulomb gas theory \cite{BN} and the theory of conformal invariance \cite{JC}. These results establish that $X_h=5/48$ for percolation models, and that the first and second thermal dimensions are equal to $X_t=5/4$ and $X_{t2}=4$ respectively. \subsection{Monte Carlo calculations} \label{secmc} We employed Monte Carlo simulations for the site-percolation problem on the diced lattice, and for the bond-percolation model on the square lattice with nearest- and next-nearest-neighbor bonds. 
The finite systems were defined in an $L \times L $ periodic geometry, in the case of the diced lattice on the basis of a rhombus with an angle $2 \pi/3$ between the main axes, as illustrated in Fig.~\ref{fig1}. The system, including its periodic structure, displays a hexagonal symmetry. Thus, for the definition of the periodic box one has in fact the freedom to choose any two out of three main axes separated by angles $2 \pi/3$. \begin{figure} \begin{center} \leavevmode \epsfxsize 12.6cm \epsfbox{kadi.eps} \end{center} \caption{The diced lattice (full lines) and its dual, the kagome lattice (thin lines). The three main axes are labeled $x$, $y$ and $z$. } \label{fig1} \end{figure} For the square lattice we employed a periodic box with the usual square symmetry, with only two main axes. A Metropolis-like procedure was applied: one visits the sites or bonds sequentially, and randomly decides with probability $p$ whether it is occupied; clusters are then constructed on the basis of these occupied site or bond variables. We employed a random generator based on binary shift registers. To avoid errors resulting from the use of single short shift registers \cite{disp}, we used the modulo-2 addition of two independent shift registers with lengths chosen as the Mersenne exponents 127 and 9689. This type of random generator is well tested \cite{SB}. For a sufficiently long series of percolation configurations thus obtained, we sampled the wrapping probability $P$ that a configuration has {\it at least} one cluster that wraps across a periodic boundary and connects to itself along {\it any} of the aforementioned main axes. This is done for a range of values $p$ of the site- or bond probabilities. 
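The wrapping criterion can be sketched as follows: occupied bonds are merged with a union-find structure that tracks the displacement of each site relative to its cluster root, and a bond joining two sites already in the same cluster with an inconsistent displacement signals a wrapping cluster. This minimal Python sketch (in the spirit of the Newman-Ziff method; it is our own illustration, not the production code, which used the shift-register random generators described above) treats bond percolation on the periodic square lattice:

```python
import random

def has_wrapping_cluster(L, p, rng):
    """Occupy each bond of a periodic L x L square lattice with probability
    p and report whether some cluster connects to itself around a periodic
    boundary (along either main axis)."""
    n = L * L
    parent = list(range(n))
    dx = [0] * n  # displacement of each site relative to its parent,
    dy = [0] * n  # kept in unwrapped (covering-space) coordinates

    def find(i):
        # locate the root, then compress the path and the displacements
        path = []
        while parent[i] != i:
            path.append(i)
            i = parent[i]
        ox = oy = 0
        for j in reversed(path):
            ox, oy = ox + dx[j], oy + dy[j]
            parent[j], dx[j], dy[j] = i, ox, oy
        return i

    wrapped = False
    for x in range(L):
        for y in range(L):
            i = x * L + y
            # the two bonds leaving site (x, y) in the +x and +y directions
            for j, ex, ey in (((x + 1) % L * L + y, 1, 0),
                              (x * L + (y + 1) % L, 0, 1)):
                if rng.random() >= p:
                    continue  # bond vacant
                ri, rj = find(i), find(j)
                if ri == rj:
                    # same cluster: an inconsistent displacement means
                    # the cluster wraps around a periodic boundary
                    if (dx[j] - dx[i], dy[j] - dy[i]) != (ex, ey):
                        wrapped = True
                else:
                    parent[rj] = ri
                    dx[rj] = ex + dx[i] - dx[j]
                    dy[rj] = ey + dy[i] - dy[j]
    return wrapped
```

Averaging the outcome over many configurations at fixed $p$ then yields an estimate of the wrapping probability $P(p,L)$.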
For the analysis of the data for the model on the diced lattice, it is helpful that the value of the wrapping probability $P_c$ is exactly known for the periodic rhombus geometry of the critical triangular bond percolation model as $P_c=0.683946586 \cdots $~\cite{Ziff-99} (note that $\pi_{+}$ in the latter paper represents our $P_c$). It applies in the limit of large system size $L$ and is believed to be universal, i.e., it also applies to the diced lattice which also has a hexagonal symmetry. For the periodic square geometry, the universal value $P_c$ is exactly known as $P_c=0.690473725 \cdots $~\cite{HTPinson,Ziff-99}. The simulations were performed for $15$ system sizes in the range $ 4 \leq L \leq 256$; about $21 \times 10^9$ samples were taken for each $L$ for $L \leq 64$, and $6 \times 10^9$ samples for $L=128$ and 256. \section{Results} \label{secres} The analysis of the numerical finite-size data was done by means of well-documented finite-size scaling methods \cite{FSS}. We describe the procedures followed for the transfer-matrix and Monte Carlo data separately. \subsection{Percolation thresholds} \subsubsection{Transfer matrix results} \label{pthres} The data analysis was performed on the basis of the scaled gap \begin{equation} X_h(p,L) \equiv \frac{\zeta L \ln(\Lambda_0/\Lambda_1)}{2 \pi} \end{equation} where $\zeta$ is the geometric factor defined as the ratio between the lattice unit in which the finite size $L$ is expressed, and the thickness of the layer added to the lattice by a transfer-matrix operation. According to finite-size scaling, the scaled gap behaves, near the percolation threshold $p_c$, as \begin{equation} X_h(p,L) = X_h +a (p-p_c) L^{2-X_t} + b L^{2-X_{t2}} + \cdots \label{Xsc} \end{equation} where $a$ and $b$ are model-dependent parameters. 
It follows from the definition of $p_c(L)$ as the solution of the equation \begin{equation} X_h(p_c(L),L) = X_h \end{equation} and from Eq.~(\ref{Xsc}) that \begin{equation} p_c(L) \simeq p_c + c L^{X_t-X_{t2}} \end{equation} with $c=-b/a$. Since $X_t-X_{t2}=-11/4$, the finite-size estimates $p_c(L)$ should converge rapidly to $p_c$. In fact, the numerical data allow independent fitting of the exponent $X_t-X_{t2}$ and thus provide an independent confirmation of its value $-11/4$. On this basis one can, for instance, rule out a leading correction with exponent $-7/4$, such as would be generated by a hypothetical integer dimension $X=3$. Assuming $X_{t2}=4$, improved convergence of the $X_h$ estimates is obtained by iterated power-law fitting as described in Ref.~\onlinecite{BNi82}. After a first fitting step with exponent $-11/4$, the next iteration step yielded, in most cases, an exponent with approximately the same value, which suggests that Eq.~(\ref{Xsc}) should be replaced by \begin{equation} X_h(p,L) = X_h +a (p-p_c) L^{2-X_t} + (b+d \ln L ) L^{2-X_{t2}} + \cdots \label{Xsc1} \end{equation} The appearance of such logarithmic terms is consistent with renormalization theory for scaling relations involving integer exponents \cite{Wegner}. Final estimates for the percolation threshold were obtained from another power-law iteration step. These results are shown in Tab.~\ref{tab:tmres}, together with error estimates in the last decimal place. This error estimation of the extrapolated results requires considerable attention. While subsequent iteration steps eliminate successive corrections, the remaining corrections are, in principle, unknown. Fortunately, the apparent convergence of the fits indicates that they decay rapidly, i.e., with rather large and negative exponents of $L$. The error estimates can be based on the differences between the results from the last iteration step for a few of the largest available system sizes.
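A single elimination step of this iterated power-law fitting can be sketched as follows (a simplified reading of the procedure of Ref.~\onlinecite{BNi82}; the function name is ours). Given finite-size estimates $v(L) \simeq v + c L^{x}$ with a known correction exponent $x$, consecutive pairs of sizes are combined so that the leading correction cancels:

```python
def eliminate_correction(Ls, vals, expo):
    """One iteration step: finite-size estimates v(L) ~ v + c * L**expo
    are combined pairwise so that the leading power-law correction
    cancels exactly; returns a (shorter) list of improved estimates."""
    pairs = list(zip(Ls, vals))
    out = []
    for (L1, v1), (L2, v2) in zip(pairs, pairs[1:]):
        w1, w2 = L1 ** expo, L2 ** expo
        out.append((v2 * w1 - v1 * w2) / (w1 - w2))
    return out
```

Repeating the step on the output, with the exponent of the next subleading correction, gives the iterated fit; the differences between the final estimates for the largest sizes serve as the error estimate.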
The rapid decrease of these differences with increasing $L$ suggests that the error of the extrapolated result is of the same order as the differences for the largest $L$ values. However, it is obvious that a single estimate of this type is not very reliable, and one should search for additional evidence. First, one can vary the fitting procedure, for instance by fixing the correction exponent at $-11/4$ in the second or the third iteration step, or treating it as a free parameter. Another variation is to use, in the second iteration step, a fit of the form given by Eq.~(\ref{Xsc1}). These procedures yielded consistent results, and also provided independent data on the accuracy of the extrapolations. Furthermore, the amplitude of the correction term as evaluated in the last iteration step should behave sufficiently regularly as a function of $L$. If not, the differences in the last iteration step are not a reliable basis for the error estimation. When these conditions were satisfied, we took the error estimate equal to a few times the typical difference between the results for the largest two systems. To provide some actual information about the apparent convergence of the percolation thresholds, we list the largest-$L$ differences of the finite-size estimates of the original data, and of the first, second, and third iteration steps, for the case of the site-percolation problem on the square lattice, with transfer parallel to the edges. While these differences depend on the fit procedure, they typically amount to $2\times 10^{-5}$, $10^{-6}$, $10^{-7}$, and less than $10^{-8}$ respectively. Furthermore, the amplitude of the last power-law step appears to tend smoothly to a constant and gives no sign of, for instance, an extremum as a function of $L$. \begin{table} \caption{Summary of percolation thresholds of some two-dimensional lattices.
The symbol $z$ is the coordination number; $p_c^{\rm bond}$ and $p_c^{\rm site}$ represent the critical bond- and site-occupation probabilities, respectively. Errors in the last decimal place are given in parentheses. The value 0 is given in those cases where the percolation threshold is exactly known \cite{E}. The remaining entries were obtained from the literature as indicated by the reference listed, or by the present numerical analyses, which use Monte Carlo simulations (as indicated by MC in the source column) or transfer-matrix calculations (as indicated by TM). The bond percolation threshold for the diced lattice follows from a duality transformation of the kagome lattice model, and did therefore not require separate calculations. Similarly, the entry for the site-percolation threshold for the eight-neighbor square lattice follows from that for the matching lattice, i.e., the entry for the four-neighbor model.} \label{tab:tmres} \begin{center} \begin{tabular}{|l|c||l|c||l|c|} \hline Lattice &$z$&$p_c^{\rm bond}$& source & $p_c^{\rm site}$&source \\ \hline triangular& 6 &0.3472964... (0)& exact &0.5 (0) &exact \\ \hline honeycomb & 3 &0.6527036... (0)& exact &0.6970402 (1) & TM \\ \cline{5-6} & & & &0.697043 (3) &\cite{RMZ1} \\ \cline{5-6} & & & &0.69704024 (4) &\cite{WZhang}\\ \hline kagome & 4 &0.52440499 (2) & TM &0.6527036... 
(0) & exact \\ \cline{3-4} & &0.5244053 (3) & \cite{RMZ1} & & \\ \cline{3-4} & &0.52440503 (5) & MC & & \\ \hline diced &~~3,6~~&0.47559501 (2) & TM,d &0.58504627 (6) & MC \\ \hline square & 4 &0.5 (0) & exact &0.59274605 (3) & TM \\ \cline{5-6} & & & &0.59274603 (9) &\cite{ML} \\ \cline{5-6} & & & &0.59274606 (5) & MC \\ \hline square & 8 &0.250369 (3) & TM &0.40725395 (3) & TM,m \\ \cline{3-4} & &0.25036834 (6) & MC & & \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Monte Carlo results} The numerical results for the wrapping probability $P$ defined in Sec.~\ref{secmc} were fitted, using the least-squares criterion, by means of the finite-size-scaling formula \begin{equation} P (p,L)=P_c +a_1 (p-p_c) L^{y_t} + a_2 (p-p_c)^2 L^{2y_t} + b_1 L^{y_i} + b_2 L^{y_i-1} + c (p-p_c) L^{y_t+y_i} \; , \label{fit_q} \end{equation} where $y_t=2-X_t=3/4$ is the temperature exponent, and $y_i=2-X_{t2}=-2$ is the irrelevant exponent \cite{BN} describing the corrections to scaling. We simulated the bond-percolation model on the triangular lattice right at the exactly known critical point for $15$ values of $L$ in the range $4 \leq L \leq 512$. The number of samples is about $1 \times 10^{10}$ for system sizes $L \leq 256$, and $2 \times 10^9$ for $L=512$. The wrapping probability $P$ was fitted by Eq.~(\ref{fit_q}), excluding the $p$-dependent terms. We obtained $P_c=0.683947$ $(3)$, in good agreement with the exact result $0.683946586 \cdots $~\cite{Ziff-99}. For the diced lattice, the asymptotic critical wrapping probability was fixed at the exact value. The $P(p,L)$ values appear to be well described by Eq.~(\ref{fit_q}) for system sizes not smaller than the minimum size $L_{\rm min} = 16 $. Satisfactory fits (as judged from the $\chi^2$ criterion) could also be obtained for smaller values of $L_{\rm min}$ when additional corrections were included with exponents $y_i-2$ and $y_i-3$.
These fits are quite stable with respect to variation of $L_{\rm min}$, and yield the site-percolation threshold of the diced lattice as $p_c= 0.58504627$ $(6)$. Also for bond percolation on the square lattice with nearest- and next-nearest-neighbor bonds, we fitted the $P(p,L)$ data by Eq.~(\ref{fit_q}), but with the wrapping probability fixed at $P_c=0.690473725 \cdots $~\cite{HTPinson,Ziff-99}. Satisfactory fits were obtained for $L \geq 6 $, and yield the bond-percolation threshold as $p_c =0.25036834$ $(6)$. The estimates for $p_c$ are included in Tab.~\ref{tab:tmres}. The estimation of the uncertainty margin in $p_c$ is relatively straightforward. The Monte Carlo runs are divided into 2000 subruns, and the error in the average of a run follows from the standard deviation of the subrun averages. The multivariate analysis that determines $p_c$ thus also produces the statistical error in this quantity. However, the actual error is still subject to the effects of correction terms not included in Eq.~(\ref{fit_q}). Such additional correction terms decay rapidly with the system size, so that the finite-size cutoff parameter $L_{\rm min}$ is reasonably well determined by the $L_{\rm min}$-dependence of the residual $\chi^2$ of the fits. This finite-size cutoff parameter naturally depends on the number of finite-size corrections included. Thus many fits were made to determine each $p_c$, varying the number of correction terms in the fit formula, and varying the minimum system size below which the finite-size data were excluded. The errors quoted are such that the margins include all one-standard-deviation lower and upper bounds of several different fits, using different fit formulas as well as a range of different acceptable values of $L_{\rm min}$ for each fit formula.
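Since Eq.~(\ref{fit_q}) is linear in the amplitudes $a_1$, $a_2$, $b_1$, $b_2$, $c$ once $p_c$ is fixed, such a multivariate fit can be organized as a one-dimensional scan over trial values of $p_c$, with a linear least-squares solve at each trial value. The sketch below is our own simplified illustration of this structure (the actual analysis also varied the correction terms and $L_{\rm min}$):

```python
import numpy as np

Y_T, Y_I = 0.75, -2.0  # temperature exponent y_t and irrelevant exponent y_i

def design_matrix(p, L, pc):
    """Columns of Eq. (fit_q) once p_c is fixed; all amplitudes enter linearly."""
    d = p - pc
    return np.column_stack([d * L ** Y_T,
                            d ** 2 * L ** (2 * Y_T),
                            L ** Y_I,
                            L ** (Y_I - 1.0),
                            d * L ** (Y_T + Y_I)])

def fit_pc(p, L, P, Pc_exact, grid):
    """Scan trial values of p_c; for each, solve the linear least-squares
    problem for (a1, a2, b1, b2, c) and keep the p_c with the smallest
    residual.  Pc_exact is the universal critical wrapping probability."""
    y = P - Pc_exact
    best_r, best_pc = np.inf, None
    for pc in grid:
        A = design_matrix(p, L, pc)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        r = float(np.sum((A @ coef - y) ** 2))
        if r < best_r:
            best_r, best_pc = r, pc
    return best_pc
```

In practice the residual would be weighted by the statistical errors of the individual $P(p,L)$ values, and the scan refined around the minimum.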
\subsection{Corrections to scaling} The analysis of the finite-size data to determine the percolation thresholds in Sec.~\ref{pthres} indicated that there are corrections described by an irrelevant scaling dimension $X_{t2}$ close to the value $4$ predicted by the Coulomb gas analysis \cite{BN} and the Kac formula \cite{AAB,FQS,Kac}. However, the analysis also suggested that corrections governed by this exponent contain a logarithmic correction factor. The models for which the percolation threshold is exactly known, such as the bond-percolation model on the square lattice and the site-percolation model on the triangular lattice, allow a study of the finite-size dependence of the scaled gap $X(p_c,L)$ at the exact critical point. In that case the corrections are due only to the irrelevant fields, and additional errors due to the uncertainty of the percolation threshold are eliminated. Analysis of the scaled gap is thus expected to reveal the nature of the corrections associated with the leading irrelevant exponent. In order to focus more precisely on possible logarithmic terms, we defined the model-dependent quantity $C(L)$ as \begin{equation} C(L) \equiv (X_h(p_c,L) -X_h) L^{2} \label{cora} \end{equation} which serves as an estimate of the amplitude of the finite-size correction term in $X_h(p_c,L)$ if a logarithmic term is absent. For a few models with exactly known percolation thresholds, we calculated finite-size data for $C(L)$ and applied a fit of the form \begin{equation} C(L) \approx C + A \ln L +B L^{-r} \label{coraf} \end{equation} First, the parameters $C$ and $A$ were solved from two consecutive values of $C(L)$, with the amplitude $B$ set to zero. The third term, which is treated as a perturbation, is then taken into account in the second step by means of an iterated power-law fit as described in Ref.~\onlinecite{BNi82}. This approach led to a series of apparently well-convergent estimates of the constant $C$ and the amplitude $A$.
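The first step, solving $C$ and $A$ of Eq.~(\ref{coraf}) from two consecutive values of $C(L)$ with $B=0$, amounts to the following elementary computation (the function name is ours):

```python
import math

def solve_C_A(Ls, Cs):
    """Solve C(L) = C + A ln L through consecutive pairs of finite-size
    data; the power-law term B * L**(-r) of Eq. (coraf) is neglected here
    and is treated as a perturbation in a subsequent iteration step."""
    pairs = list(zip(Ls, Cs))
    out = []
    for (L1, c1), (L2, c2) in zip(pairs, pairs[1:]):
        A = (c2 - c1) / (math.log(L2) - math.log(L1))
        out.append((c1 - A * math.log(L1), A))
    return out
```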
These are shown in Tab.~\ref{tab:ab}. This analysis was unable to yield good estimates of the exponent $r$, which indicates that there exist further correction terms, in addition to those listed in Eq.~(\ref{coraf}). However, the data were insufficient to obtain more quantitative information. \begin{table} \caption{Results of the analysis of the corrections to scaling in the quantity $X_h(L)$ for a few exactly solved bond- or site-percolation models. Transfer directions are given with respect to a set of lattice edges and specify the orientation of the lattice with respect to the axis of the cylinder on which the model is wrapped. Results are shown for the amplitudes $C$ and $A$ of the $L^{-2}$ and the $L^{-2}\ln L$ terms respectively.} \label{tab:ab} \begin{center} \begin{tabular}{|l|c|l|r|r|} \hline Lattice & type & direction & $C$~~~~~ & $A$~~~~~ \\ \hline square & bond & parallel & 0.0306 (1) & $-0.0054$ (1) \\ square & bond & diagonal & $-0.0205$ (1) & $-0.0027$ (1) \\ triangular & bond & perpendicular & $-0.0037$ (1) & $-0.0036$ (1) \\ triangular & site & perpendicular & 0.0195 (1) & 0.0000 (1) \\ \hline \end{tabular} \end{center} \end{table} The ratio between the two entries for the amplitude $A$ for the square lattice is, at least approximately, equal to 2. This factor may be attributed to the difference of a factor $\sqrt 2$ in the length units of the finite size $L$ for the two cases (an edge or a diagonal of the square lattice). This would suggest that the amplitude $A$ of the logarithmic term is, unlike the amplitude $C$, independent of the orientation of the finite direction of the square lattice in the cylindrical geometry. \section{Conclusion} \label{seccon} We obtained new results for the percolation thresholds of several two-dimensional models. The results are, as far as they overlap with the literature, generally consistent with existing results, and the error margins are somewhat reduced.
Although our result for the bond-percolation threshold of the kagome lattice model lies remarkably close to an approximate result given by Scullard and Ziff \cite{RMZ3}, the difference is quite significant, in agreement with the conclusions of these authors \cite{RMZ3}. In our numerical analysis of the transfer-matrix data, we made use of the universality hypothesis, i.e., in all cases we assumed the validity of the exact results for the scaling dimensions of the percolation models in two dimensions \cite{BN}. However, we are able to provide considerable support for this assumption. In addition to Eq.~(\ref{Xsc}), we may use `phenomenological renormalization' \cite{MPN} to determine the critical points, so that we no longer make use of the prior knowledge that $X_h=5/48$. This approach yields the same critical points, within error margins that are a few times larger than those listed in Tab.~\ref{tab:tmres}. The estimates of the scaled gaps produced by the phenomenological renormalization approach match the value 5/48 up to several decimal places, and give no sign of deviations from universality. The analysis of the corrections to scaling is in agreement with the irrelevant scaling dimension $X_{t2}=4$, but shows the existence of a contribution with a logarithmic factor in the transfer-matrix data for the scaled magnetic gap. The amplitude of this contribution is strongly model-dependent, and possibly vanishes for the triangular site-percolation model. This raises the question of in what sense the latter model could be special. It may be argued that it is the model with the highest symmetry investigated here; it has a $\pi/3$ rotational symmetry as well as a form of self-dual symmetry because of the matching-lattice argument \cite{E}. A corresponding logarithmic term could possibly also be present in the quantity $P(p,L)$, but we were unable to confirm its existence from our Monte Carlo data.
Furthermore, we recall that the bond-percolation model with crossing bonds (8 neighbors) lives in an extended space of connectivities because the condition of well-nestedness \cite{BNi82} no longer applies. Accordingly one may postulate that these non-well-nested connectivities introduce another irrelevant field, and that additional corrections described by a new scaling dimension would appear. However, we did not find any clear sign of such new corrections. Finally we remark that the present logarithmic terms are unrelated to those reported by Adler and Privman \cite{AP}, which apply to some leading singularities. Logarithmic factors naturally appear in quantities involving the mean cluster size. The free energy of the random cluster model also serves as the generating function of percolation properties. The mean cluster size can be obtained by differentiation of the random-cluster partition sum or the free energy with respect to the number of Potts states $q$. The $q$-dependence of the critical exponents then provides a mechanism that introduces such logarithmic factors in some critical singularities. \acknowledgments We acknowledge discussions with Profs. J. C. Cardy and R. M. Ziff. YD thanks the Alexander von Humboldt Foundation (Germany) for support. HB thanks the Lorentz Fund (The Netherlands) for support. \newpage
\section{Introduction} \noindent Let $G$ be a (simple, undirected, finite) graph. The sets of vertices and edges of a graph $G$ are denoted by $V(G)$ and $E(G)$, respectively. A set $S\subseteq V(G)$ is independent if no edge of $G$ has both its endpoints in $S$. An independent set $S$ is maximal if no independent set of $G$ properly contains $S$. We write m.i.s for maximal independent sets, and let $i_{M}(G)$ denote the number of m.i.s in $G$. \begin{definition} We say that a graph is a complete t-partite graph if its vertex set $V(G)$ can be partitioned into disjoint subsets $V_{1},\ldots,V_{t}$ such that $|V_i|=n_{i}$ for all $1\leq i\leq t$ and $G$ contains every edge joining $V_i$ and $V_j$ for all $i\neq j$ and $1\leq i,j\leq t$. \end{definition} \begin{definition} We say that a graph $G$ is a t-partite graph if its vertex set $V(G)$ can be partitioned into disjoint subsets $V_1,\ldots,V_t$ such that every edge of $G$ joins a vertex of $V_i$ to a vertex of $V_j$ for some $i\neq j$, $1\leq i,j\leq t$. Moreover, in this paper we take $t$ to be the smallest number with the above property. \end{definition} Let $G$ be a simple undirected graph with the vertex set $V(G) =\{x_1,\ldots,x_n\}$ and the edge set $E(G)$. The edge ideal of $G$ is the ideal $I(G)$ of the polynomial ring $R=k[x_1,\ldots,x_n]$ over a field $k$ generated by all monomials $x_{i}x_{j}$ such that $x_{i}$ is adjacent to $x_{j}$ in $G$. We can associate to $G$ the simplicial complex $\Delta_G$ whose faces are the independent (stable) sets of $G$, so that $I_{\Delta_G} = I(G)$.
\begin{definition} A simplicial complex $\Delta$ is called shellable if the facets (maximal faces) of $\Delta$ can be ordered $F_1,\ldots,F_s$ such that for all $1\leq i<j\leq s$, there exist some $v\in F_j\setminus F_i$ and some $l\in \{1,\ldots,j-1\}$ with $F_j\setminus F_l=\{v\}$.\\ We call $F_1,\ldots,F_s$ a shelling of $\Delta$ when the facets have been ordered with respect to the definition of shellability. \end{definition} A graph $G$ is called shellable if the simplicial complex $\Delta _G$ is a shellable simplicial complex. \begin{definition} A k-coloring of a graph $G$ is a labeling $f: V (G)\rightarrow S$ where $|S|= k$. The labels are colors; the vertices of one color form a color class. A k-coloring is proper if adjacent vertices have different labels. A graph is k-colorable if it has a proper k-coloring. The chromatic number of a graph $G$, $\chi(G)$, is the least $k$ such that $G$ is k-colorable. \end{definition} The problem of finding $i_{M}(G)$, or even lower or upper bounds for it, has received considerable attention lately. $i_{M}(G)$ is often known as the Fibonacci number, or in mathematical chemistry as the Merrifield-Simmons index or the $\sigma$-index. The study was initiated by Prodinger and Tichy in [9]. In this paper, we first present a lower bound for $i_{M}(G)$ where $G$ is a graph with chromatic number $\chi(G)$, and next we conclude that $\chi(G)= i_{M}(G)=t$ in complete t-partite graphs. By using this, we obtain an equivalent condition for shellability in complete t-partite graphs as the following theorem:\\ \begin{theorem}\label{main} Let $G$ be a complete t-partite graph. $G$ is shellable if and only if $G$ is t-colorable such that exactly one of the color classes has arbitrarily many elements and the other classes have only one element.
\end{theorem} \begin{definition} A simplicial complex $\Delta$ is recursively defined to be vertex decomposable if it is either a simplex, or else has some vertex $v$ so that\\ i) Both $\Delta\setminus v$ and $link_{\Delta} ^{v}$ are vertex decomposable, and \\ ii) No face of $link_{\Delta}^{v}$ is a facet of $\Delta\setminus v$, where \\$$link_{\Delta}^{F}=\{G: G\cap F=\emptyset , G\cup F\in \Delta\}.$$ \end{definition} We call a graph $G$ vertex decomposable if the simplicial complex $\Delta_{G}$ is vertex decomposable. \\By using the above theorem, we present an equivalent condition for vertex decomposability in complete t-partite graphs as the following corollary:\\ \begin{corollary}\label{main} Let $G$ be a complete t-partite graph. $G$ is vertex decomposable if and only if $G$ is t-colorable such that exactly one of the color classes has arbitrarily many elements and the other classes have only one element. \end{corollary} \begin{definition} A local ring $(R, m)$ is called Cohen-Macaulay if $depth(R) = dim(R)$. If $R$ is not local and $R_p$ is a Cohen-Macaulay local ring for all $p \in Spec(R)$, then we say that $R$ is a Cohen-Macaulay ring. \end{definition} \begin{definition} The graph $G$ is said to be Cohen-Macaulay over the field $k$ if $R/ I ( G )$ is a Cohen-Macaulay ring. \end{definition} \begin{definition} Let $G$ be a graph with vertices $V(G)=\{x_1,\ldots,x_n\}$. A subset $A\subseteq V(G)$ is a vertex cover of $G$ if every edge in $G$ is incident to some vertex in $A$. A vertex cover $A$ is minimal if no proper subset of $A$ is a vertex cover. \end{definition} \begin{definition} A graph $G$ is called unmixed if all the minimal vertex covers of $G$ have the same number of elements. \end{definition} Herzog, Hibi, and Zheng [8] proved that a chordal graph is Cohen-Macaulay if and only if it is unmixed. By using this, we show when a complete t-partite graph is Cohen-Macaulay in the following corollary: \begin{corollary}\label{main} Let $G$ be a complete t-partite graph.
$G$ is a Cohen-Macaulay graph if and only if $G$ is t-colorable such that all color classes have exactly one element. \end{corollary} \section{Main results} \begin{lemma}\label{main} Let $G$ be a graph. If $\chi(G)=t$ then $i_{M}(G)\geq t$. \end{lemma} \begin{proof} Since $\chi(G)=t$, there exist $t$ color classes for $G$, hence we may consider $G$ as a t-partite graph. Let $V_{i}$ be the set of elements of the i-th color class. By definition, $V_{i}$ is an independent set of $G$, so there exists a maximal independent set $F_i$ with $V_i\subseteq F_i$, for all $1\leq i\leq t$. Hence, there exist at least $t$ maximal independent sets for $G$. \end{proof} \begin{remark}\label{main} By the definition of complete t-partite graphs we have that $i_{M}(G)= t$. \end{remark} The first author presented the sufficiency part of the following theorem in [11]. \begin{theorem}\label{main} Let $G$ be a complete t-partite graph. $G$ is shellable if and only if $G$ is t-colorable such that exactly one of the color classes has arbitrarily many elements and the other classes have only one element. \end{theorem} \begin{proof} $\Leftarrow)$ We have to find a shelling $F_1,\ldots,F_t$ for $\Delta_G$. Let $V(G)=\{x_1,\ldots,x_n\}$. Since a proper t-vertex coloring gives a partition of $V(G)$ into $t$ color classes, we suppose that the set of elements of the i-th color class is $V_i$. By assumption,\\$V_1=\{x_1,\ldots,x_m\}$, $m=n-t+1$, $V_i=\{x_{m+i-1}\}$ for all $2\leq i\leq t$. Any $V_i$, $1\leq i\leq t$, is an independent set. For any $2\leq i\leq t$, if $V_1\cup\{x_{m+i-1}\}$ were independent, then we could replace $V_1$ by $V_{1}\cup V_{i}$ and obtain a $(t-1)$-partition for $G$, contradicting the assumption that $t$ is the smallest number with this property. So $V_1$ is a maximal independent set of $G$ and a facet of $\Delta_G$. We put $F_1=V_1$. On the other hand, we have $V_i=\{x_{m+i-1}\}$ as an independent set of $G$, for all $2\leq i\leq t$. With the same argument, we have $F_i= V_i$.
Therefore, we find an ordering on the facets of $\Delta_G$ as follows:\\$F_1=\{x_1,\ldots,x_m\}$, $F_2=\{x_{m+1}\}$, $\ldots$, $F_t=\{x_{m+t-1}\}.$ \\Since $F_i\setminus F_1=\{x_{m+i-1}\}$ for all $2\leq i\leq t$, $F_1,\ldots,F_t$ is a shelling of $\Delta_G $.\\$\Rightarrow)$ Let $V_1,\ldots,V_t$ be a partition of $V(G)$ as in the definition of complete t-partite graphs. Since $i_{M}(G)= t$, $\Delta _G$ has exactly $t$ facets. Let $F_1,\ldots,F_t$ be a shelling of $\Delta_G$. Since the $V_i$'s are maximal independent sets, $\{V_1,\ldots,V_t\}=\{F_1,\ldots,F_t\}$. Without loss of generality, suppose that $F_i= V_i$ for all $2\leq i\leq t$. We know that $\{F_1,\ldots,F_t\}$ is a shelling, hence there exists $x_2\in F_2 \setminus F_1$ such that $F_2 \setminus F_1=\{x_2\}$. Therefore $F_2=(F_2 \setminus F_1)\cup(F_2\cap F_1)=\{x_2\}$.\\Now, suppose by induction that $F_1=\{x_1,\ldots,x_m\}$, $F_2=\{x_{m+1}\}$,\ldots,$F_{i}=\{x_{m+i-1}\}$. Since $\{F_1,\ldots,F_t\}$ is a shelling of $\Delta_G$, there exist $x_{i+1}\in F_{i+1} \setminus F_1$ and $l\in \{1,\ldots,i\}$ such that $F_{i+1} \setminus F_l=\{x_{i+1}\}$, so $F_{i+1}=(F_{i+1} \setminus F_l)\cup(F_{i+1}\cap F_l)=\{x_{i+1}\}$. Hence, one of the color classes has arbitrarily many elements and the other classes have exactly one element. \end{proof} \begin{corollary}\label{main} Let $G$ be a complete t-partite graph. $G$ is vertex decomposable if and only if $G$ is t-colorable such that exactly one of the color classes has arbitrarily many elements and the other classes have only one element. \end{corollary} \begin{proof} $\Rightarrow)$ We suppose that for every proper t-vertex coloring of $G$, there exist at least two classes with at least two elements. Then $G$ is not shellable by the previous theorem. Hence, $G$ is not vertex decomposable by [5, Corollary 7]. \\ $\Leftarrow)$ By assumption, we conclude that $G$ is a chordal graph. Hence, $G$ is vertex decomposable by [5, Corollary 7].
\end{proof} \begin{theorem}\label{main} Let $G$ be a complete t-partite graph. $G$ is unmixed if and only if $G$ is t-colorable such that all color classes have the same number of elements. \end{theorem} \begin{proof} Since any minimal vertex cover of a complete t-partite graph $G$ contains all the elements of $t-1$ classes, any $t-1$ classes have the same number of elements if and only if all the classes have the same cardinality. \end{proof} \begin{corollary}\label{main} Let $G$ be a complete t-partite graph. $G$ is a Cohen-Macaulay graph if and only if $G$ is t-colorable such that all color classes have exactly one element. \end{corollary} \begin{proof} $\Leftarrow)$ By assumption, we have $G=K_t$, so $G$ is a chordal graph. By [8, Theorem 2.1], $G$ is Cohen-Macaulay if and only if $G$ is unmixed. Hence, the conclusion follows from Theorem 2.5.\\$\Rightarrow)$ We suppose that one of the color classes has at least two elements. Therefore, $G$ is not unmixed and hence $G$ is not Cohen-Macaulay by [4, Proposition 6.1.21]. \end{proof} By the above corollary and Theorem 2.3, we have the following corollary: \begin{corollary}\label{main} Let $G$ be a complete t-partite graph. The property of being Cohen-Macaulay for $G$ is equivalent to being shellable if and only if $G$ is t-colorable such that all color classes have exactly one element. \end{corollary}
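The counting statement underlying these results, namely that a complete t-partite graph has exactly $t$ maximal independent sets, its parts $V_1,\ldots,V_t$ (Remark 2.2), can be checked by brute force for small examples; the sketch below is our own illustration:

```python
from itertools import combinations

def complete_multipartite(sizes):
    """Vertices 0..n-1 split into consecutive blocks of the given sizes;
    every pair of vertices in different blocks is joined by an edge."""
    blocks, v = [], 0
    for s in sizes:
        blocks.append(frozenset(range(v, v + s)))
        v += s
    edges = [(a, b) for i, Bi in enumerate(blocks)
             for Bj in blocks[i + 1:] for a in Bi for b in Bj]
    return v, edges, blocks

def maximal_independent_sets(n, edges):
    """Brute force over subsets by decreasing size.  Every independent set
    extends to a maximal one, so an independent set is maximal iff it is
    not contained in a larger (already found) maximal independent set."""
    found = []
    for r in range(n, 0, -1):
        for S in map(frozenset, combinations(range(n), r)):
            if any(a in S and b in S for a, b in edges):
                continue  # not independent
            if not any(S < T for T in found):
                found.append(S)
    return found
```

For the complete 3-partite graph with part sizes $(3,1,1)$, the maximal independent sets returned are exactly the three parts, in agreement with $i_M(G)=\chi(G)=t$.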
\section{Introduction}\label{sec:intro} The defining property of topological insulators (TIs)~\cite{footnote} is the band gap in the energy spectrum of the bulk material and gapless conducting boundary states. They are edge states in the case of two-dimensional (2D) systems or surface states in the case of three-dimensional ones~\cite{Haldane2017}. The paradigmatic cases which gave rise to the main concepts~\cite{Hasan2010,Qi2011} in this field are: ({\em i}) the quantum Hall insulator (QHI) in a 2D electron gas, which requires an external magnetic field to break the time-reversal invariance and whose edge states are {\em chiral} by allowing spin-unpolarized electrons to propagate in only one direction; and ({\em ii}) the quantum spin Hall insulator (QSHI)~\cite{Kane2005}, which is time-reversal invariant, thereby requiring strong spin-orbit coupling effects instead of an external magnetic field, and whose edge states appear in pairs with different chirality and spin polarization. The most recently discovered member of the 2D TI family is the quantum anomalous Hall insulator (QAHI), which requires both nonzero magnetization to break the time-reversal invariance and strong spin-orbit coupling effects, with its edge states allowing only one spin species to flow unidirectionally~\cite{Liu2016}. \begin{figure} \includegraphics[width=8.5cm]{fig1.pdf} \caption{Schematic view of a two-terminal device where an infinite ZGNR is attached to two macroscopic reservoirs with chemical potentials $\mu_\mathrm{L}$ and $\mu_\mathrm{R}$ on the left and right, respectively, so that $\mu_\mathrm{L}-\mu_\mathrm{R} = eV_b$ is the externally applied dc bias voltage. In panel (a), the scattering region (blue shaded) in the middle of {\em finite length} $L = 30\sqrt{3}a$ and width $W=29a$ is a Floquet TI generated by irradiating~\cite{Oka2009} a segment of ZGNR by circularly polarized light of intensity $z$ and frequency $\omega$ [Eq.~\eqref{eq:pphase}].
In panel (b), the scattering region (shaded green) is quantum Hall, quantum anomalous Hall or quantum spin Hall insulator with their parameters tuned to produce the same topologically nontrivial band gap $\Delta_g$ [Fig.~\ref{fig:fig2}(a)] in the bulk of all such conventional time-independent TIs.} \label{fig:fig1} \end{figure} In theoretical analysis, edge states are found as eigenfunctions $\Psi_{k_x}(x,y)=e^{i k_x x}\psi(y)$ of the Hamiltonian of an infinite wire (periodic along the $x$-axis, so that eigenfunctions are labeled by the wavevector $k_x$) made of 2D TIs. The corresponding eigenenergies $\varepsilon(k_x)$ form subbands crossing the band gap~\cite{Halperin1982,MacDonald1984}. The width of the edge states is defined by the spatial region where the probability density is nonzero, \mbox{$|\psi(y)|^2 \neq 0$}, while decaying exponentially fast towards the bulk of the wire. Interestingly, their width~\cite{Prada2013,Chang2014} can also depend on the arrangement of atoms along the edge, such as in the case of graphene wires where edge states of QHI and QAHI or QSHI are narrower in the case of zigzag arrangement of carbon atoms along the edge than in the case of their armchair arrangement~\cite{Prada2013,Chang2014,Sheng2017}. In paradigmatic three-dimensional TIs like Bi$_2$Se$_3$, surface states actually have spatial extent of about $\sim 2$ nm~\cite{Chang2015}. The zigzag edge, which is employed in devices in Fig.~\ref{fig:fig1}, can also introduce a kink in the subband of edge state~\cite{Sheng2017}, so that subband intersects with the Fermi energy $E_F$ at $N_\mathrm{R}$ points with positive velocity and $N_\mathrm{L}$ points with negative velocity. However, only the difference $N_\mathrm{R} - N_\mathrm{L}=|\mathcal{C}|$ is topologically protected according to the {\em bulk-boundary correspondence}~\cite{Hasan2010,Qi2011}. 
Here $\mathcal{C}$ is an integer topological invariant (like the Chern number in the case of QHI and QAHI) associated with band structure in the bulk. This makes electronic transport through edge states of infinite length {\em perfectly quantized in a robust way}~\cite{Buttiker1988}---the zero-temperature two-terminal conductance is $G(E_F)=G_\mathrm{Q} |\mathcal{C}|$ for $E_F$ swept through the bulk band gap and insensitive to both magnetic and nonmagnetic disorder in the case of QHI and QAHI~\cite{Sheng2017}, or only nonmagnetic disorder in the case of QSHI. Although infinite ballistic wires, including those with topologically trivial edge states~\cite{Allen2015,Zhu2017,Marmolejo-Tejada2018}, also exhibit integer $G(E_F)/ G_\mathrm{Q}$, this is easily disrupted by disorder introduced around the edges or even within the bulk~\cite{Marmolejo-Tejada2018}. Here $G_\mathrm{Q}=2e^2/h$ or $G_\mathrm{Q}=e^2/h$ is the conductance quantum for spin-degenerate or spin-polarized edge states, respectively. Thus, it has been considered that the key experimental signature of topology in 2D condensed matter is conductance quantization in transport through edge states, which persists even in the presence of disorder as long as it does not break underlying symmetries of the topological phase or generate energy scales larger than the bulk band gap. However, for QHI, QAHI and QSHI of finite length, the zero-temperature longitudinal conductance $G=I/V_b$, also denoted as `two-terminal' since current $I$ and small bias voltage $V_b$ are measured between the same normal metal (NM) leads, oscillates in Fig.~\ref{fig:fig2} just below the quantized plateau at $2e^2/h$ while remaining very close to it.
We use zigzag graphene nanoribbon (ZGNR) within which 2D TI of finite length [Fig.~\ref{fig:fig1}(b)] is established using sufficiently large external magnetic field~\cite{Cresti2016}, or additional terms of the Haldane~\cite{Haldane1988} or the Kane-Mele~\cite{Kane2005} models, to generate QHI, QAHI and QSHI, respectively. Their parameters are tuned so that all three examples of conventional time-independent 2D TIs in Fig.~\ref{fig:fig2}(a) have identical topologically nontrivial bulk band gap $\Delta_g$. Even though $G(E_F)$, for $E_F$ swept through bulk band gap $\Delta_g$, is not perfectly quantized in Fig.~\ref{fig:fig2}(a), its oscillations zoomed in Figs.~\ref{fig:fig2}(b)--(g) are insensitive to nonmagnetic edge disorder (ED) introduced in the form of edge vacancies [illustrated in Fig.~\ref{fig:fig4}]. It is worth mentioning that imperfectly quantized two-terminal $G(E_F)$ was observed in early experiments on QSHI~\cite{Roth2009}, provoking a lively search for exotic many-body inelastic effects~\cite{Maciejko2009,Tanaka2011,Budich2012,Vayrynen2013,Kainaris2014,Vayrynen2018,Novelli2019} which can circumvent band-topology constraints and introduce backscattering of electrons as they propagate through edge states. On the other hand, Fig.~\ref{fig:fig2} demonstrates that imperfectly quantized $G(E_F)$ can be due to a much simpler mechanism---backscattering at the NM-lead/TI-region interface. Lacking perfectly quantized two-terminal conductance $G(E_F)$ as the experimental signature of 2D TI phase of finite length, one can resort to direct imaging of spatial profiles of local current density that should confirm electronic flux confined to a narrow region defined by the edge states. Continuous experimental advances have made this possible, such as by using superconducting interferometry in Josephson junction setup~\cite{Allen2015,Zhu2017} or scanning superconducting quantum interference device (SQUID)~\cite{Nowack2013}. 
In the latter case, one images the magnetic field produced by the current, from which one can reconstruct the local current density with $\sim \mu$m spatial resolution~\cite{Nowack2013}. Even higher resolution, with reconstructed images having spatial resolution of $\sim 10$ nm, has been achieved by using a scanning tip based on the electronic spin of diamond nitrogen-vacancy (NV) centers~\cite{Tetienne2017,Chang2017,Ku2020}. A particularly {\em intriguing question} that such images can answer is how electron flux transitions from the topologically trivial NM lead present in every experimental device into the region of 2D TI of finite length where the flux is confined within narrow edge currents, as well as how processes at the NM-lead/2D-TI interface affect the total current and the corresponding conductance. \begin{figure*} \includegraphics[scale=1.1]{fig2.pdf} \caption{(a) The zero-temperature two-terminal conductance vs. the Fermi energy $E_F$ of the device in Fig.~\ref{fig:fig1}(b) where the central scattering region hosts {\em finite-length} conventional 2D time-independent TIs, such as QHI, QAHI and QSHI. The TIs are defined on pristine or edge-disordered (denoted by ED) ZGNR due to vacancies illustrated in Figs.~\ref{fig:fig4}(d), ~\ref{fig:fig4}(f) and ~\ref{fig:fig4}(h). The zoom in of conductance values within the rectangle in panel (a) is shown in: (b)--(d) for pristine ZGNR; and (e)--(g) for edge-disordered ZGNR. The two NM leads, from which electrons are injected into the topologically protected edge states of finite length with the corresponding local current density profiles shown in Fig.~\ref{fig:fig4}(c)--(h), are also made of ZGNR of the same width as the scattering region [Fig.~\ref{fig:fig1}(b)].
The gap in the bulk of all three 2D TIs is tuned to $\Delta_g = 0.54 \gamma$ and marked in panel (a).} \label{fig:fig2} \end{figure*} Imaging of local current density could also offer new avenue for resolving a {\em crucial issue} for recently conjectured new class of 2D TIs---the so-called Floquet TI~\cite{Oka2009,Lindner2011,Oka2019,Rudner2020}---which is the connection between the Floquet quasi-energy spectrum and experimentally measurable dc transport properties. The Floquet TI phase arises in 2D electron systems driven out of equilibrium by strong light-matter interaction. For example, graphene~\cite{Oka2009,Lindner2011,Oka2019,Rudner2020}, as well as other 2D materials with honeycomb lattice structure like transition-metal dichalcogenides~\cite{Huaman2019}, subject to a spatially uniform and circularly polarized light are predicted to transmute into Floquet TI with quasi-energy spectrum~\cite{Shirley1965,Sambe1973}. Its multiple gaps share~\cite{Lindner2011} the same topological properties as the band gap of QAHI described by the Haldane model~\cite{Haldane1988}. This means that the laser induced band gaps, such as $\Delta_0$ in Fig.~\ref{fig:fig3}(a) emerging at the charge neutral point (CNP) of graphene and $\Delta_1$ away from CNP, are crossed by subbands of chiral edge states~\cite{Perez-Piskunow2014,Perez-Piskunow2014a}. The eigenfunctions of these subbands decay exponentially towards the bulk with a decay length that depends only on the ratio of the laser frequency and its intensity. The $\Delta_1$ gaps are called dynamical gaps~\cite{Syzranov2008} and they occur at energy $\hbar \omega/2$ above/below the CNP. They can be reached using experimentally accessible parameters. 
For example, the very recent experiment~\cite{McIver2020} has been interpreted in terms of creation of a transient Floquet TI by driving graphene flake by \mbox{$500$~fs} laser pulse at a frequency of \mbox{$\omega=46$~THz}, so that the photon energy is \mbox{$\hbar \omega \approx 191$ meV} and its wavelength is \mbox{$\lambda \approx 6.5$ $\mu$m}. However, the experiment of Ref.~\cite{McIver2020} did not observe either quantized longitudinal or transverse (Hall) conductance. Instead, they found that at the peak laser pulse fluence the transverse conductance within $\Delta_0$ gap saturated at plateau around \mbox{$G_{xy}=(1.8 \pm 0.4)e^2/h$}, while no such plateau of $G_{xy}$ was observed within $\Delta_1$ gap. The calculations of two-terminal [as in Figs.~\ref{fig:fig2} and ~\ref{fig:fig3}(b)] or multi-terminal conductance typically employ the Landauer-B\"{u}ttiker setup~\cite{Buttiker1988,Imry1999} depicted in Fig.~\ref{fig:fig1} where finite-size scattering region---time-dependent due to light irradiation in Fig.~\ref{fig:fig1}(a) or conventional time-independent in Fig.~\ref{fig:fig1}(b)---is attached to semi-infinite NM leads terminating at infinity into the macroscopic particle reservoirs. This is highly appropriate for Floquet TI since time-dependent potential applied in experiments~\cite{McIver2020} is confined to a finite region, either because of a finite laser spot or the screening inside metallic contacts. On the conceptual side, such setup ensures well-defined asymptotic states and their occupation far away from the irradiated region, thereby evading technical difficulties when using time-dependent leads or reservoirs~\cite{Gaury2014}. 
It also ensures a {\em continuous energy spectrum} of the whole system, which plays a key role in both the Landauer-B\"{u}ttiker and Kubo~\cite{Sato2019} formulations of quantum transport because it effectively introduces dissipation at infinity and thereby steady-state transport~\cite{Perfetto2010}, while not requiring~\cite{Esin2018} explicit modeling of many-body inelastic scattering processes responsible for dissipation~\cite{Imry1999}. \begin{figure} \includegraphics[width=\linewidth]{fig3.pdf} \caption{(a) Quasi-energy spectrum $\xi_\mathrm{QE} (k_x)$ for an infinite ZGNR that is irradiated by circularly polarized monochromatic laser light of frequency \mbox{$\hbar \omega = 3 \gamma$} and intensity \mbox{$z=0.5$} over its whole length. The spectrum is obtained by diagonalizing the corresponding Floquet Hamiltonian [Eq.~\eqref{eq:floquet_ham}] truncated to $|n| \le N_\mathrm{ph}$ Floquet replicas where $N_\mathrm{ph}=7$ is chosen. The yellow shaded region marks the topological gap $\Delta_0$ around $\xi_\mathrm{QE} = 0$ corresponding to the CNP, while the red shaded region marks the dynamical topological gap $\Delta_1$ around $\xi_\mathrm{QE} = \pm\hbar\omega/2$. (b) The zero-temperature two-terminal conductance vs. $E_F$ (computed using $N_\mathrm{ph} = 7$) of the two-terminal device in Fig.~\ref{fig:fig1}(a) whose scattering region is a Floquet TI of finite length due to irradiation by circularly polarized light. The pristine irradiated ZGNR is marked by FTI and the irradiated edge-disordered ZGNR is marked by FTI-ED. The conductance of an infinite nonirradiated (NIR) pristine ZGNR is also shown as a reference. (c) Total DOS for the same device marked by FTI in panel (b). (d) Convergence of lead currents $I_\mathrm{L}$ and $I_\mathrm{R}$ vs. $N_\mathrm{ph}$ at \mbox{$E_F = \hbar\omega/2$ }.
} \label{fig:fig3} \end{figure} However, for the same two-terminal Landauer-B\"{u}ttiker setup with an irradiated scattering region, a plethora of conflicting theoretical conclusions have been reached~\cite{Rudner2020}. For example, Refs.~\cite{Kitagawa2011,Gu2011} predict quantization of longitudinal dc conductance within a few percent of $2e^2/h$, while Ref.~\cite{Kundu2014} finds its anomalous suppression. To recover the quantized value, Ref.~\cite{Kundu2013} proposed an {\em ad hoc} summation procedure over different energies in the lead. Without utilizing such a ``Floquet sum rule''~\cite{Farrell2016,Yap2017,Kundu2020}, both Refs.~\cite{FoaTorres2014,Farrell2016} confirm nonquantized $G<2e^2/h$ within the $\Delta_0$ gap and $G<4e^2/h$ within the $\Delta_1$ gap which, however, are largely insensitive to disorder like vacancies or on-site impurities. The precise quantization could be disrupted by the dc component of pumping current~\cite{Brouwer1998,Moskalets2002}, which appears~\cite{FoaTorres2014} even at zero bias voltage due to the time-dependent potential in the Hamiltonian whenever the left-right symmetry of the device is broken statically or dynamically~\cite{FoaTorres2005,Bajpai2019}. The absence of quantization is explained~\cite{Esin2018,Rudner2020,FoaTorres2014,Farrell2016} by the mismatch between nonirradiated electronic states in the NM leads and edge states within the gaps of the Floquet TI. The mismatch between states in topologically trivial NM leads and TI scattering region exists also in conventional time-independent TI devices, but without significant disruption of quantized conductance in Fig.~\ref{fig:fig2}. However, specific to Floquet TIs is the possibility of Floquet replicas coupling to bulk bands~\cite{FoaTorres2014,Fedorova2019}.
That is, although edge states within the gap $\Delta_0$ are primarily built from states near the CNP of nonirradiated graphene, they also contain harmonic components near $\pm n\hbar \omega$ which open the possibility for electronic photon-assisted tunneling into or out of states in the NM leads whose energies are far away from the CNP. Thus, engineering the density of states of the leads, in order to connect Floquet TI and macroscopic reservoirs through a narrow band of filter states, can recover longitudinal dc conductance within a few percent of $2e^2/h$~\cite{Esin2018}. The issue of experimentally detectable quantized conductance can be examined without resorting to time-independent Floquet formalism, that is, by performing direct time-dependent quantum transport simulations~\cite{Gaury2014}. Due to high computational demand, such calculations are rarely pursued, but some attempts yield longitudinal conductance reaching close to the quantized value after sufficiently long time~\cite{Fruchart2016}. This then poses a question on the accuracy of the truncation procedure that is inevitably done to reduce infinite matrices in the Floquet formalism, where artifacts~\cite{Mahfouzi2012} can be introduced. One such artifact is a dc current which is not conserved (i.e., different in the left and right lead)~\cite{Kitagawa2011,Wang2003}; another is retaining an insufficient number of Floquet replicas, which yields unconverged results. In this study, we employ the {\em charge-conserving} solution~\cite{Mahfouzi2012} for the Floquet-nonequilibrium Green functions (Floquet-NEGF)~\cite{Kitagawa2011,Wang2003,Mahfouzi2012} which ensures that dc currents in the left (L) and the right (R) lead are identical at each level of truncation of matrices in the Floquet formalism, i.e., for any number of ``photon'' excitations $N_\mathrm{ph}$ retained.
As an overture, Fig.~\ref{fig:fig3}(d) demonstrates $|I_\mathrm{L}| \equiv |I_\mathrm{R}|$ at each $N_\mathrm{ph}$, as well as that the dc component of current converges at $N_\mathrm{ph}=7$. Nevertheless, the conductance in Fig.~\ref{fig:fig3}(b) remains nonquantized in both $\Delta_0$ and $\Delta_1$ gaps. We then proceed to compare spatial profiles of local current density in conventional and Floquet TIs in Fig.~\ref{fig:fig4}, which offer detailed microscopic insight into how electrons propagate from one carbon atom to another as they transition from topologically trivial NM leads into the TI region, or within the TI region with possible edge or bulk vacancies introduced as disorder. The paper is organized as follows. Section~\ref{sec:mm_a} describes the Hamiltonian of the Floquet TI defined on ZGNR, as well as the charge-conserving Floquet-NEGF from Ref.~\cite{Mahfouzi2012}, which is extended here to nonzero bias voltage and to obtain local current density. The same ZGNR is used in Sec.~\ref{sec:mm_b} to define Hamiltonians for the conventional time-independent QHI, QAHI and QSHI, where we also provide steady-state NEGF expressions for local current density in these systems. Section~\ref{sec:results_a} presents results for two-terminal conductance of these four TIs, and Sec.~\ref{sec:results_b} compares spatial profiles of local current density as it flows from the NM leads into those four TIs. In Sec.~\ref{sec:exp} we discuss experimental schemes to quantify bulk vs. edge contributions to total current within Floquet TI using either a nanopore~\cite{Heerema2016,Chang2014,Chang2012} drilled in the interior of irradiated ZGNR, whose effect on the conductance is also explicitly calculated, or magnetic field imaging via diamond NV centers~\cite{Ku2020}. We conclude in Sec.~\ref{sec:conclusions}.
\section{Models and Methods}\label{sec:models} \subsection{Hamiltonian and quantum transport formalism for Floquet TI}\label{sec:mm_a} The semi-infinite leads and the scattering region in Fig.~\ref{fig:fig1} combined constitute, prior to introducing light or external magnetic field or spin-orbit coupling into the scattering region, an infinite homogeneous ZGNR described by the nearest-neighbor tight-binding Hamiltonian \begin{equation}\label{eq:gr_ham_nir} \hat{H}_\mathrm{ZGNR} = - \sum_{\braket{ij}} \gamma_{ij} \hat{c}^\dagger_i\hat{c}_j, \end{equation} Here $\braket{ij}$ indicates the sum over the nearest-neighbor sites; $\hat{c}_i^\dagger$($\hat{c}_j$) creates (annihilates) an electron on site $i$ of the honeycomb lattice hosting a single $p_z$-orbital $\langle \mathbf {r}| i \rangle=\pi(\mathbf{r}-\mathbf{R}_i)$; and \mbox{$\gamma_{ij} = \gamma = 2.7$ eV} is the nearest-neighbor hopping from site $i$ to $j$. The width of the ZGNR is chosen as $W= 29a$, where $a$ is the distance between two nearest-neighbor carbon atoms in graphene. It is well-known that, in general, TIs thinner than twice the width of their boundary states will experience hybridization of those states and opening of a topologically trivial mini gap~\cite{Hasan2010,Qi2011,Chang2015} at the crossing point. For Floquet TI studied in Fig.~\ref{fig:fig3}(a) this would happen if $W \le 14a$, so that our choice of $W$ evades such size artifacts. This is also ensured in the cases for QHI, QAHI and QSHI in Fig.~\ref{fig:fig4} where ZGNR is always wider than the width of edge currents. The ZGNR terminates at infinity into the macroscopic reservoirs of electrons whose chemical potentials are $\mu_\mathrm{L} = E_{F} + eV_b/2$ and $\mu_\mathrm{R} = E_{F} - eV_b/2$ for $E_F$ as the Fermi energy and $V_b$ as the applied dc bias voltage. 
Note that zero-temperature two-terminal conductance $G(E_F)$ of an infinite homogeneous ZGNR described by Hamiltonian in Eq.~\eqref{eq:gr_ham_nir} is plotted for comparison in Fig.~\ref{fig:fig3}(b) and labeled as nonirradiated (NIR). In the case of Floquet TI, circularly polarized monochromatic laser light irradiates the scattering region (shaded blue) of finite length $L = 30\sqrt{3}a$ in Fig.~\ref{fig:fig1}(a). The electromagnetic field of light is introduced into the Hamiltonian through the vector potential \mbox{$\mathbf{A}(t) = A_0(\bold{e}_x \cos\omega t + \bold{e}_y \sin\omega t )$}, where $\bold{e}_x$ ($\bold{e}_y$) is the unit vector along the $+x$-axis ($+y$-axis). The corresponding electric field generated by $\mathbf{A}(t)$ is \mbox{$\bold{E}(t) = -\partial \bold{A}(t)/\partial t$}. We neglect the relativistic magnetic field of light, so that electronic spin degree of freedom maintains its degeneracy and it is excluded from our analysis. The vector potential modifies the Hamiltonian in Eq.~\eqref{eq:gr_ham_nir} via the standard Peierls substitution~\cite{Li2020} \begin{equation}\label{eq:pphase} \hat{c}^\dagger_i \hat{c}_j \longmapsto \exp\bigg[ i2z(\bold{e}_x \cos\omega t + \bold{e}_y \sin\omega t)\cdot \bold{r}_{ij}\bigg]\hat{c}^\dagger_i \hat{c}_j, \end{equation} which is rigorously proven~\cite{Panati2003} to be sufficient to capture the leading order effects due to the presence of the vector potential $\mathbf{A}(t)$. Here $z = eaA_0/2\hbar$ is a dimensionless measure of intensity of the circularly polarized light; $\omega$ is the frequency and $\bold{r}_{ij}$ is the position vector connecting site $i$ with site $j$. The new Hamiltonian $\hat{H}(t)$ with time-dependent hopping between sites $i$ and $j$, $\gamma_{ij}(t) = \gamma \exp\bigg[ i2z(\bold{e}_x \cos\omega t + \bold{e}_y \sin\omega t)\cdot \bold{r}_{ij}\bigg]$, is time-periodic, $\hat{H}(t+T)=\hat{H}(t)$, with period $T = 2\pi/\omega$. 
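Since a bond of unit length at angle $\phi_{ij}$ gives $(\bold{e}_x \cos\omega t + \bold{e}_y \sin\omega t)\cdot \bold{r}_{ij} = \cos(\omega t - \phi_{ij})$, the Jacobi--Anger identity fixes the $n$-th Fourier coefficient of $\gamma_{ij}(t)/\gamma$ to $i^n J_n(2z)\, e^{-in\phi_{ij}}$. This can be verified numerically; the sketch below (our illustration, with an arbitrarily chosen bond angle and drive parameters) compares a direct Fourier integral against the Bessel-function result:

```python
import numpy as np
from scipy.special import jv  # Bessel functions J_n of the first kind

z, phi, omega = 0.5, np.pi / 3, 1.0   # arbitrary illustrative values
T = 2 * np.pi / omega

def hopping_phase(t):
    # Peierls factor gamma_ij(t)/gamma for a unit-length bond at angle phi
    arg = 2 * z * (np.cos(omega * t) * np.cos(phi)
                   + np.sin(omega * t) * np.sin(phi))  # = 2z cos(wt - phi)
    return np.exp(1j * arg)

t = np.linspace(0.0, T, 4096, endpoint=False)
for n in range(-3, 4):
    # numerical Fourier coefficient (1/T) \int_0^T gamma(t) e^{-i n w t} dt
    c_num = np.mean(hopping_phase(t) * np.exp(-1j * n * omega * t))
    # Jacobi-Anger prediction: i^n J_n(2z) e^{-i n phi}
    c_ana = (1j) ** n * jv(n, 2 * z) * np.exp(-1j * n * phi)
    assert abs(c_num - c_ana) < 1e-10
```

The rapid decay of $J_n(2z)$ with $|n|$ at moderate $z$ is what makes the truncation of the Floquet matrices below feasible.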
Any solution of the Schr\"{o}dinger equation, $i\hbar \partial \Psi(t)/\partial t = \hat{H}(t) \Psi(t)$, with time-periodic Hamiltonian $\hat{H}(t)=\hat{H}(t+T)$ can be expressed as a linear combination, $\Psi(t)=\sum_\alpha c_\alpha \phi_\alpha^\mathrm{F}(t)$, of the so-called Floquet functions~\cite{Shirley1965,Sambe1973} \begin{equation}\label{eq:eigenstates} \phi_\alpha^\mathrm{F}(t) = e^{-i \xi_\mathrm{QE}^\alpha t/\hbar} u_\alpha(t). \end{equation} Here $\xi_\mathrm{QE}^\alpha$ is the Floquet {\em quasi-energy} and \mbox{$u_\alpha(t+T)=u_\alpha(t)$} are periodic functions which can, therefore, be expanded into a Fourier series \begin{equation}\label{eq:ufunction} u_\alpha(\mathbf{r},t)=\sum_{n=-\infty}^{\infty} e^{i n \omega t} u_n^\alpha(\mathbf{r}). \end{equation} The time-periodic Hamiltonian $\hat{H}(t)=\hat{H}(t+T)$ can also be expanded into a Fourier series \begin{equation}\label{eq:ham_fourier} \hat{H}(t) = \sum_{n=-\infty}^{\infty} \hat{H}_n e^{i n \omega t}, \end{equation} where $\hat{H}_n$ is given in terms of the Bessel functions $J_{m}(z)$ of the first kind \begin{subequations}\label{eq:jmcs} \begin{eqnarray}\label{eq:jms} \exp(iz\sin x) = \sum_{m=-\infty}^{\infty} J_{m}(z)e^{i m x}, \end{eqnarray} \begin{eqnarray}\label{eq:jmc} \exp(iz\cos x) = \sum_{m=-\infty}^{\infty} i^m J_{m}(z)e^{i m x}. \end{eqnarray} \end{subequations} Using the matrix representation of the Fourier coefficients $\bold{H}_n$ in Eq.~\eqref{eq:ham_fourier} in the basis of orbitals $|i\rangle$, we construct the Floquet Hamiltonian~\cite{Shirley1965,Sambe1973} \begin{equation}\label{eq:floquet_ham} \check{\bold{H}}_\mathrm{F} = \begin{pmatrix} \ddots & \vdots & \vdots & \vdots & \iddots\\ \cdots & \bold{H}_0 & \bold{H}_1 & \bold{H}_2 & \cdots \\ \cdots & \bold{H}_{-1} & \bold{H}_0 & \bold{H}_1 & \cdots \\ \cdots & \bold{H}_{-2} & \bold{H}_{-1}& \bold{H}_0 & \cdots \\ \iddots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}. 
\end{equation} This matrix is time-independent but infinite-dimensional. The time-dependent NEGF formalism~\cite{Gaury2014} operates with two fundamental quantities~\cite{Stefanucci2013}---the retarded $\mathbf{G}^r(t,t')$ and the lesser $\mathbf{G}^<(t,t')$ Green functions (GF)---which describe the density of available quantum states and how electrons occupy those states in nonequilibrium, respectively. They depend on two times, but solutions can be sought in other representations, such as the double-time-Fourier-transformed~\cite{Mahfouzi2012,Wang2003} GFs, ${\bf G}^{r,<}(E,E')$. In the case of a time-periodic Hamiltonian, they must take the form~\cite{Martinez2003} \begin{equation} {\bf G}^{r,<}(E,E')={\bf G}^{r,<}(E,E+n \hbar \omega)={\bf G}^{r,<}_n(E), \end{equation} in accord with the Floquet theorem~\cite{Shirley1965,Sambe1973}. The coupling of energies $E$ and $E+ n\hbar\omega$ ($n$ an integer) indicates ``multiphoton'' exchange processes. In the absence of many-body (electron-electron or electron-boson) interactions, currents can be expressed using solely the Floquet-retarded-GF $\check{\bold{G}}^r(E)$ \begin{equation}\label{eq:floquet_GF} [E + \check{\bold{\Omega}} - \check{\bold{H}}_\mathrm{F} - \check{\bold{\Sigma}}^r(E) ]\check{\bold{G}}^r(E) = \check{\bold{1}}, \end{equation} which is composed of ${\bf G}^r_n(E)$ submatrices along the diagonal.
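The truncated Sambe-space construction can be sanity checked on a circularly driven two-level toy model (our illustration, not the ZGNR), for which the rotating-frame solution gives the exact quasi-energies $\xi = (n+\tfrac{1}{2})\hbar\omega \pm \sqrt{(\Delta-\hbar\omega)^2/4 + A^2}$; note that replica-shift sign conventions vary between references, and all parameter values below are arbitrary:

```python
import numpy as np

hbar = 1.0
Delta, A, omega = 1.0, 0.3, 2.0   # arbitrary toy-model parameters
Nph = 8                           # replicas retained, |n| <= Nph

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)

# Fourier blocks of H(t) = (Delta/2) sz + A (sx cos wt + sy sin wt)
H0 = 0.5 * Delta * sz
H1 = 0.5 * A * (sx - 1j * sy)     # coefficient of e^{+i w t}

reps = 2 * Nph + 1
K = np.zeros((2 * reps, 2 * reps), complex)
for i, n in enumerate(range(-Nph, Nph + 1)):
    K[2*i:2*i+2, 2*i:2*i+2] = H0 + n * hbar * omega * np.eye(2)
    if i < 2 * Nph:               # couple replica n to replica n+1
        K[2*i:2*i+2, 2*i+2:2*i+4] = H1.conj().T   # block H_{-1}
        K[2*i+2:2*i+4, 2*i:2*i+2] = H1            # block H_{+1}
quasi = np.linalg.eigvalsh(K)

# Interior quasi-energies must match the exact rotating-frame result
exact = omega / 2 + np.sqrt((Delta - omega) ** 2 / 4 + A ** 2)
assert np.min(np.abs(quasi - exact)) < 1e-8
```

Only the replicas near the truncation edges are affected by the cutoff; the interior quasi-energies reproduce the exact result, mirroring the convergence test applied to the currents below.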
Here we use notation \begin{equation}\label{eq:omega_mtx} \check{\bold{\Omega}} = \begin{pmatrix} \ddots & \vdots & \vdots & \vdots & \iddots\\ \cdots & -\hbar\omega\bold{1} & 0 & 0 & \cdots \\ \cdots & 0 & 0 & 0 & \cdots \\ \cdots & 0 & 0 & \hbar\omega\bold{1}& \cdots \\ \iddots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \end{equation} and $\check{\bold{\Sigma}}^r(E)$ is the retarded Floquet self-energy matrix \begin{equation}\label{eq:floquet_self} \check{\bold{\Sigma}}^r(E) = \begin{pmatrix} \ddots & \vdots & \vdots & \vdots & \iddots\\ \cdots & \bold{\Sigma}^r(E-\hbar\omega) & 0 & 0 & \cdots \\ \cdots & 0 & \bold{\Sigma}^r(E) & 0 & \cdots \\ \cdots & 0 & 0 & \bold{\Sigma}^r(E+\hbar\omega) & \cdots \\ \iddots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \end{equation} composed of the usual self-energies of the leads~\cite{Velev2004}, $\bold{\Sigma}^r(E) = \sum_{p=\mathrm{L,R}}\bold{\Sigma}_p^r(E)$, on the diagonal. All matrices labeled as $\check{\mathbf{O}}$ are representations of operators acting in the Floquet-Sambe~\cite{Sambe1973} space, $\mathcal{H}_\mathrm{F} = \mathcal{H}_T \otimes \mathcal{H}_e$, where $\mathcal{H}_e$ is the Hilbert space of electronic states spanned by localized orbitals $|i \rangle$ and $\mathcal{H}_T$ is the Hilbert space of periodic functions with period $T=2\pi/\omega$ spanned by orthonormal Fourier vectors $\langle t|n \rangle = \exp(i n \omega t)$. The charge current $I_{p}(t)$ in the lead $p=\mathrm{L,R}$ is time-dependent due to Eq.~\eqref{eq:pphase}, and it also has periodicity $T=2\pi/\omega$ like the Hamiltonian itself. The dc component of current, either due to pumping by time-dependent potential~\cite{Brouwer1998,Moskalets2002,FoaTorres2005,Bajpai2019} or due to applied bias voltage $V_b$ or both, is given by \begin{equation} I_{p} = \frac{1}{T}\int_{t}^{t+T} I_p(t') dt'. 
\end{equation} This dc component, or the current averaged over one period $T$, that is injected into the lead $p$ is obtained from the following NEGF expression~\cite{Mahfouzi2012} \begin{equation}\label{eq:floq_chj} I_{p} = \frac{e}{2N_\mathrm{ph}}\int\limits_{-\infty}^{+\infty}dE\, \mathrm{Tr}[\check{\bold{\Gamma}}_p\check{\bold{f}}_p\check{\bold{G}}^r\check{\bold{\Gamma}}\check{\bold{G}}^a - \sum_{\alpha=\mathrm{L,R}}\check{\bold{\Gamma}}_p\check{\bold{G}}^r\check{\bold{\Gamma}}_\alpha\check{\bold{f}}_\alpha\check{\bold{G}}^a ]. \end{equation} In our convention, $I_p>0$ indicates that charge current is flowing into the lead. Here $\check{\bold{f}}_p$ is the Floquet Fermi matrix \begin{equation}\label{eq:fermi_mtx} \check{\bold{f}}_p(E) = \begin{pmatrix} \ddots & \vdots & \vdots & \vdots & \iddots\\ \cdots & f_p(E-\hbar\omega)\bold{1} & 0 & 0 & \cdots \\ \cdots & 0 & f_p(E)\bold{1} & 0 & \cdots \\ \cdots & 0 & 0 & f_p(E+\hbar\omega)\bold{1}& \cdots \\ \iddots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \end{equation} where $f_p(E)$ is the Fermi function of the macroscopic particle reservoir attached to lead $p$; \mbox{$\check{\bold{\Gamma}}_p(E) = i [\check{\bold{\Sigma}}_p^r(E) - (\check{\bold{\Sigma}}_p^r(E))^\dagger]$} is the Floquet level broadening matrix; \mbox{$\check{\bold{\Gamma}}(E) = \sum_{p=\mathrm{L,R}} \check{\bold{\Gamma}}_p(E)$}; the Floquet-advanced-GF is defined as $\check{\bold{G}}^a(E) = [\check{\bold{G}}^r(E)]^\dagger$; and $\mathbf{1}$ is the unit matrix in $\mathcal{H}_e$ space. We note that Eq.~\eqref{eq:floq_chj} is a generalization of the expression for charge current in Ref.~\cite{Mahfouzi2012} to include the applied bias voltage $V_b$. The linear-response two-terminal conductance is then given by \begin{equation} \label{eq:conductance} G = \frac{I_\mathrm{R}}{V_b}, \end{equation} for small applied bias voltage $eV_b \ll E_F$.
While the space $\mathcal{H}_e$ is finite-dimensional, with dimension equal to the number of sites $N_e$ within the scattering region, the space $\mathcal{H}_T$ is infinite-dimensional and has to be truncated using $|n| \le N_\mathrm{ph}$. For truncation we employ the following convergence criterion \begin{equation}\label{eq:convergence} \bigg| \frac{I_p(N_\mathrm{ph}) - I_p(N_\mathrm{ph}-1)}{I_p(N_\mathrm{ph}-1)}\bigg|\times 100 < \delta, \end{equation} where $\delta$ is the convergence tolerance. Since the operators acting in $\mathcal{H}_e$ are represented by matrices of dimension $N_e \times N_e$, the operators $\check{\mathbf{O}}$ acting on the truncated Floquet-Sambe space $\mathcal{H}_\mathrm{F}$ are represented by matrices of dimension $(2N_\mathrm{ph}+1)N_e \times (2N_\mathrm{ph}+1)N_e$. Note that the trace in Eq.~\eqref{eq:floq_chj}, $\mathrm{Tr} \equiv \mathrm{Tr}_e \mathrm{Tr}_T$, is summing over contributions from different subspaces of $\mathcal{H}_T$ so that the denominator includes $2N_\mathrm{ph}$ to avoid double counting. The part of the trace operating in $\mathcal{H}_T$ space ensures that at each chosen truncation $N_\mathrm{ph}$ of Floquet replicas charge current is conserved, $I_\mathrm{L} = - I_\mathrm{R}$, unlike other types of solutions~\cite{Wang2003,Kitagawa2011} of the Floquet-NEGF equations where current conservation is ensured only in the limit $N_\mathrm{ph} \rightarrow \infty$. The bond current operator~\cite{Nikolic2006} between sites $i$ and $j$ is time-dependent due to Eq.~\eqref{eq:pphase} and it is given by~\cite{Gaury2014} \begin{equation}\label{eq:bc_oper} \begin{split} \bold{J}_{ij}(t) &= \frac{e}{i\hbar} [\gamma_{ij}(t)\hat{c}_i^\dagger\hat{c}_j - \gamma_{ji}(t)\hat{c}_j^\dagger\hat{c}_i] \\ &= \sum_{n=-\infty}^{\infty} \bold{J}_{n}^{ij}e^{in\omega t}. 
\end{split} \end{equation} We define the Floquet bond current matrix as \begin{equation}\label{eq:floquet_curr} \check{\bold{J}}_{ij} = \begin{pmatrix} \ddots & \vdots & \vdots & \vdots & \iddots\\ \cdots & \bold{J}_0^{ij} & \bold{J}_{-1}^{ij} & \bold{J}_{-2}^{ij} & \cdots \\ \cdots & \bold{J}_{1}^{ij} & \bold{J}_0^{ij} & \bold{J}_{-1}^{ij} & \cdots \\ \cdots & \bold{J}_{2}^{ij} & \bold{J}_{1}^{ij}& \bold{J}_0^{ij} & \cdots \\ \iddots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}, \end{equation} which yields the nonequilibrium part~\cite{Nikolic2006} of the dc bond (or local) charge current flowing between sites $i$ and $j$ as \begin{equation}\label{eq:floquet_bc} J_{ij}^\mathrm{neq} = \frac{1}{2\pi i}\sum_{n=-N_\mathrm{ph}}^{N_\mathrm{ph}} \int_{E_F+n\hbar\omega-eV_b/2}^{E_F+n\hbar\omega + eV_b/2} dE\, \mathrm{Tr}[\check{\bold{G}}^<(E)\check{\bold{J}}_{ij}], \end{equation} where $\check{\bold{G}}^<(E) = \sum_{p=\mathrm{L,R}} i\check{\bold{G}}^r(E)\check{\bold{\Gamma}}_p(E)\check{\bold{f}}_p(E)\check{\bold{G}}^a(E)$. \subsection{Hamiltonian and quantum transport formalism for QHI, QAHI and QSHI}\label{sec:mm_b} The two-terminal setup in Fig.~\ref{fig:fig1}(b) hosts one of the three conventional time-independent TIs as the scattering region (shaded green) of finite length $L = 30\sqrt{3}a$. The QHI is realized by applying an external time-independent magnetic field perpendicular to the ZGNR. The magnetic field is described by a static vector potential \mbox{$\bold{A} = (By,0,0)$} in the Landau gauge, which is included in the Hamiltonian in Eq.~\eqref{eq:gr_ham_nir} via the Peierls substitution~\cite{Li2020,Panati2003} \begin{equation}\label{eq:qh} \hat{c}^\dagger_i \hat{c}_j \longmapsto \exp\bigg[ i \frac{\beta}{a_0^2}(x_i-x_j)(y_i+y_j)\bigg]\hat{c}^\dagger_i \hat{c}_j. \end{equation} Here $(x_i,y_i)$ denotes the position of a carbon atom at site $i$, and $\beta = eBa_0^2/\sqrt{3}\hbar \approx 0.07$ is a dimensionless measure of the magnetic field strength.
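Returning to the truncation of $\mathcal{H}_T$, the criterion of Eq.~\eqref{eq:convergence} amounts to increasing $N_\mathrm{ph}$ until the dc current stops changing by more than $\delta$ percent. A minimal sketch of that search loop, where \texttt{current\_at} is a placeholder standing in for a full Floquet-NEGF solver that returns $I_p$ at a given truncation:

```python
def converged(I_new, I_old, delta=1.0):
    # Relative-change criterion of Eq. (convergence), in percent
    return abs((I_new - I_old) / I_old) * 100.0 < delta

def truncate_floquet(current_at, delta=1.0, N_max=20):
    # Increase N_ph until the dc current changes by less than delta percent;
    # current_at(N_ph) is a user-supplied solver returning I_p at truncation N_ph
    I_old = current_at(1)
    for N_ph in range(2, N_max + 1):
        I_new = current_at(N_ph)
        if converged(I_new, I_old, delta):
            return N_ph, I_new
        I_old = I_new
    raise RuntimeError("no convergence up to N_max")
```

For a toy current that approaches its limit geometrically, $I(N) = 1 + 2^{-N}$, this loop with $\delta = 1\%$ stops at $N_\mathrm{ph} = 7$, illustrating how the tolerance sets the truncation.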
The QAHI~\cite{Liu2016} is described by the Haldane model~\cite{Haldane2017,Haldane1988} on the honeycomb lattice \begin{eqnarray}\label{eq:qah} \hat{H}_\mathrm{QAHI} & = & \sum_{\braket{ij}}- \gamma_{ij} \hat{c}^\dagger_i\hat{c}_j + \sum_{\braket{\braket{ij}}} \widetilde{\gamma}_{ij}\hat{c}^\dagger_i\hat{c}_j \nonumber \\ && + \sum_{i \in A} m \hat{c}_i^\dagger\hat{c}_i + \sum_{i \in B} (-m) \hat{c}_i^\dagger\hat{c}_i. \end{eqnarray} Here $\braket{\braket{ij}}$ indicates the sum over the next-nearest-neighbor sites, and $\widetilde{\gamma}_{ij} = - \widetilde{\gamma}_{ji} = i \widetilde{\gamma}$, where we use $\widetilde{\gamma}=0.14 \gamma$. The last two terms on the right hand side have opposite signs on the two triangular sublattices A and B of the honeycomb lattice, where $m=0.2 \gamma$ specifies the ``mass'' term. Note that the circularly polarized light employed in Eq.~\eqref{eq:pphase} is essential for the Floquet TI to mimic the QAHI phase of the Haldane model. In contrast, linearly polarized light, which is an equal superposition of clockwise and anticlockwise circular polarizations, does not break time-reversal symmetry and cannot generate the Haldane ``mass'' term. Finally, the QSHI is described by the Kane-Mele model~\cite{Kane2005} \begin{equation}\label{eq:kane_mele_ham} \hat{H}_\mathrm{QSHI} = \sum_{\braket{ij}}-\gamma_{ij} \bold{c}_i^\dagger \bold{c}_j + \sum_{\braket{\braket{ij}}} it_\mathrm{SO} \bold{c}_i^\dagger \bm{\sigma} \cdot (\bold{d}_{kj}\times \bold{d}_{ik} )\bold{c}_j, \end{equation} whose edge states crossing the topologically nontrivial band gap are both chiral and spin-polarized~\cite{Hasan2010,Qi2011}.
Here $\bold{c}_i^\dagger = (\hat{c}_{i\uparrow}^\dagger,\hat{c}_{i\downarrow}^\dagger)$ is a row vector of creation operators $\hat{c}^\dagger_{i\sigma}$ that create an electron on site $i$ with spin $\sigma=\uparrow, \downarrow$; $\bold{c}_i$ is the corresponding column vector of annihilation operators; $\bold{d}_{ik}$ is the unit vector pointing from site $k$ to $i$; $\bm{\sigma} = (\hat{\sigma}_x, \hat{\sigma}_y, \hat{\sigma}_z)$ is the vector of the Pauli matrices; and $t_\mathrm{SO}$ is the strength of the intrinsic spin-orbit coupling~\cite{Kane2005,Chang2014}. The zero-temperature two-terminal conductance $G(E_F)=G_Q \mathcal{T}(E_F)$ of the setup in Fig.~\ref{fig:fig1}(b) is calculated using the Landauer transmission function~\cite{Imry1999,Stefanucci2013} \begin{equation} \mathcal{T}(E) = \mathrm{Tr}[\bm{\Gamma}_\mathrm{R}(E)\bold{G}^r(E)\bm{\Gamma}_\mathrm{L}(E)\bold{G}^a(E)], \end{equation} where the conductance quantum is $G_Q=2e^2/h$ for QHI and QAHI and $G_Q=e^2/h$ for QSHI. Here the retarded GF of the scattering region is given by \mbox{$\bold{G}^r(E) = [E - \bold{H} - \bm{\Sigma}^r(E)]^{-1}$}; the advanced GF is $\bold{G}^a(E) = [\bold{G}^r(E)]^\dagger$; and $\bold{\Gamma}_p(E) = i [\bm{\Sigma}^r_p(E) - \bm{\Sigma}^a_p(E)]$ are the level-broadening matrices. To compute the nonequilibrium bond current between sites $i$ and $j$ we use~\cite{Mahfouzi2013} \begin{equation}\label{eq:loc_j_ss} J_{ij}^\mathrm{neq} = \frac{eV_b}{2\pi} \mathrm{Tr}[\bold{G}^r(E_F)\bm{\Gamma}_\mathrm{L}(E_F)\bold{G}^a(E_F) \bold{J}_{ij}], \end{equation} where $\bold{J}_{ij}$ is the bond current operator in Eq.~\eqref{eq:bc_oper} but with the time-independent hopping $\gamma_{ij}$.
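The Landauer formula above is a small linear-algebra computation once the lead self-energies are known. A minimal sketch (not the code used for the figures), which recovers unit transmission for a single resonant level symmetrically coupled to two wide-band leads:

```python
import numpy as np

def transmission(E, H, Sigma_L, Sigma_R):
    # T(E) = Tr[Gamma_R G^r Gamma_L G^a] with
    # G^r = [E - H - Sigma_L - Sigma_R]^{-1} and G^a = (G^r)^dagger
    N = H.shape[0]
    Gr = np.linalg.inv(E * np.eye(N) - H - Sigma_L - Sigma_R)
    Ga = Gr.conj().T
    Gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
    Gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
    return np.trace(Gamma_R @ Gr @ Gamma_L @ Ga).real
```

For a single level at $\varepsilon=0$ with wide-band self-energies $\Sigma^r_{L,R} = -i\Gamma/2$, this yields the familiar Lorentzian $\mathcal{T}(E) = \Gamma_L\Gamma_R/[E^2 + (\Gamma_L+\Gamma_R)^2/4]$, equal to one on resonance.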
\begin{figure*} \includegraphics[width=\linewidth]{fig4.pdf} \caption{Spatial profiles of local current density in two-terminal devices of Fig.~\ref{fig:fig1} where the scattering region (dotted rectangle) of finite length is: (a) irradiated pristine ZGNR hosting Floquet TI; (b) irradiated edge-disordered ZGNR; (c) pristine QHI; (d) edge-disordered QHI; (e) pristine QAHI; (f) edge-disordered QAHI; (g) pristine QSHI; and (h) edge-disordered QSHI. In panels (a) and (b) we use $\hbar\omega = 3\gamma$, $z=0.5$, $N_\mathrm{ph} = 7$, and $E_F = \hbar \omega /2$ corresponding to the middle of the $\Delta_1$ gap in Fig.~\ref{fig:fig3}(a). In panels (c)--(h), \mbox{$E_F = 0.2 \gamma$}, and in (g) and (h) \mbox{$t_\mathrm{SO} = 0.1 \gamma$}. The black solid arrows are a guide to the eye, indicating the spatial regions with a large flux of local current density.} \label{fig:fig4} \end{figure*} \section{Results and Discussion}\label{sec:results} \subsection{Conductance within the topological gap: FTI vs. conventional TIs}\label{sec:results_a} By diagonalizing $\check{\mathbf{H}}_\mathrm{F}$ in Eq.~\eqref{eq:floquet_ham} for an infinite ZGNR that is periodic along the $x$-axis and irradiated by circularly polarized light over its {\em whole} length, we obtain the quasi-energy spectrum $\xi_\mathrm{QE}(k_x)$ shown in Fig.~\ref{fig:fig3}(a). The chiral edge states crossing the light-induced gap $\Delta_0$ at $\xi_\mathrm{QE} = 0$ (shaded yellow) and $\Delta_1$ at \mbox{$\xi_\mathrm{QE} = \pm \hbar\omega/2$} (shaded red) naively suggest that, upon applying a small bias voltage, the zero-temperature linear-response two-terminal conductance in Eq.~\eqref{eq:conductance} should be quantized: \mbox{$G(E_F)=2e^2/h$} for $E_F$ within the $\Delta_0$ gap; and \mbox{$G(E_F)=4e^2/h$} for $E_F$ within the $\Delta_1$ gap due to one or two spin-degenerate conduction channels provided by the edge states, respectively.
This is in analogy with the chiral edge states of conventional time-independent TIs and their quantized conductance in Fig.~\ref{fig:fig2}. In contrast, the average conductance in Fig.~\ref{fig:fig3}(b) is $G(E_F) \approx 0.73 \times 2e^2/h$ within the $\Delta_0$ gap and $G(E_F) \approx 1.87 \times 2e^2/h$ within the $\Delta_1$ gap. We emphasize that these results are not an artifact of the truncation of the Floquet Hamiltonian $\check{\mathbf{H}}_\mathrm{F}$ in Eq.~\eqref{eq:floquet_ham} since the currents in the L and R leads in Fig.~\ref{fig:fig3}(d) converge at $N_\mathrm{ph}=7$ using the $\delta = 1\%$ criterion in Eq.~\eqref{eq:convergence}. Also, our Floquet-NEGF formalism~\cite{Mahfouzi2012} ensures $|I_\mathrm{L}| \equiv | I_\mathrm{R}|$ in Fig.~\ref{fig:fig3}(d) at each chosen $N_\mathrm{ph}$. We additionally plot the total density of states (DOS) $D(E) = \sum_j D_j(E)$ in Fig.~\ref{fig:fig3}(c), which is nonzero within the gaps $\Delta_0$ and $\Delta_1$ due to contributions from the local DOS (LDOS) $D_j(E)$ originating [Fig.~\ref{fig:fig6}(a)] from both the edges and the bulk of ZGNR. The LDOS is extracted from the Floquet-retarded-GF in Eq.~\eqref{eq:floquet_GF} using \begin{equation}\label{eq:ldos} D_j (E) = \frac{i}{2\pi} \bra{j} \mathrm{Tr}_T [\check{\bold{G}}^r(E) - \check{\bold{G}}^a(E)] \ket{j}, \end{equation} where $\mathrm{Tr}_T$ is the partial trace over states in $\mathcal{H}_T$. The issue of positivity of the DOS and LDOS obtained from the Floquet-retarded-GF has been discussed extensively~\cite{Rudner2020,Uhrig2019}. Even though the Floquet TI in irradiated ZGNR does not exhibit a quantized conductance plateau in Fig.~\ref{fig:fig3}(b), its conductance is largely insensitive to edge disorder (ED). For example, $G(E_F)$ is reduced by $\sim 2$\% within the $\Delta_0$ gap and by $\sim 15$\% within the $\Delta_1$ gap upon introducing edge vacancies.
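The LDOS extraction in Eq.~\eqref{eq:ldos} reduces to the diagonal of the anti-Hermitian part of the retarded GF. A minimal sketch for a GF already traced over $\mathcal{H}_T$ (illustrative; for a single broadened level it recovers a Lorentzian of unit weight):

```python
import numpy as np

def ldos(Gr, j):
    # D_j(E) = (i / 2*pi) <j| G^r - G^a |j>, cf. Eq. (ldos),
    # with Gr the retarded GF matrix in site space (Tr_T already taken)
    Ga = Gr.conj().T
    return (1j / (2.0 * np.pi) * (Gr - Ga))[j, j].real
```

For a single level with broadening $\eta$, i.e., $G^r = 1/(E - \varepsilon + i\eta)$, this gives $D(E) = (\eta/\pi)/[(E-\varepsilon)^2 + \eta^2]$, which is manifestly positive.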
Nevertheless, this is still less robust than conventional time-independent TIs, whose conductance within the topologically nontrivial band gap is completely insensitive to ED, as shown in Figs.~\ref{fig:fig2}(e)--(g). The disorder is introduced by removing carbon atoms on the top and bottom edges of the scattering region, as illustrated in Fig.~\ref{fig:fig4}(b), while imposing the following conditions: (\emph{i}) the ED introduced into nonirradiated (NIR) ZGNR leads to complete conductance suppression, $G(E_F) \rightarrow 0$, within the same energy interval defined by the $\Delta_0$ gap; ({\em ii}) the ED preserves the left-right symmetry of the device, so that charge pumping is absent when the ED ZGNR is irradiated with circularly polarized light~\cite{FoaTorres2005,Bajpai2019,FoaTorres2014} in the absence of dc bias voltage, $V_b=0$. Note that in the case of vacancies at the edges of QSHI, our tight-binding Hamiltonian in Eq.~\eqref{eq:kane_mele_ham} does not capture the possible formation of a localized magnetic moment at the vacancy site, which requires first-principles Hamiltonians~\cite{Vannucci2020}. This opens the possibility of backscattering involving a spin flip, which would disrupt~\cite{Vannucci2020} the (nearly) quantized conductance in Fig.~\ref{fig:fig2}(g). \subsection{Spatial profiles of local current density: Floquet TI vs. QHI, QAHI and QSHI}\label{sec:results_b} The spatial profiles of local current density, i.e., the bond current $J_{ij}^\mathrm{neq}$ defined in Eqs.~\eqref{eq:floquet_bc} and ~\eqref{eq:loc_j_ss} for the Floquet TI and conventional time-independent TIs, respectively, allow us to visualize how electrons transition from topologically trivial NM leads into chiral edge states within the TI region. Figures~\ref{fig:fig4}(c), ~\ref{fig:fig4}(e) and ~\ref{fig:fig4}(g) show that bulk states contribute to the current density within the leads, but the current density becomes confined to a narrow flux near the edges of the QHI, QAHI and QSHI.
The width of the flux corresponds to the spatial extent of the edge state. As expected from the chirality of the edge states, current flows only along the top edge in QHI and QAHI, while in QSHI it flows on both the top and bottom edges~\cite{Nowack2013}. This is because the boundaries of the QSHI host a pair of spin-polarized edge states~\cite{Kane2005}, so that on the top edge electrons with spin $\sigma=\uparrow$ move from left to right while at the bottom edge electrons with spin $\sigma=\downarrow$ move from left to right. Upon introducing ED in Figs.~\ref{fig:fig4}(d), ~\ref{fig:fig4}(f) and ~\ref{fig:fig4}(h), topological protection and quantized transport through edge states manifest themselves in the local current density circumventing the disorder, since any backscattering would require crossing over to the other edge, which is forbidden due to the absence of bulk states~\cite{Buttiker1988}. In contrast, the local current density is nonzero within the whole Floquet TI in Fig.~\ref{fig:fig4}(a), with a larger flux near the edges [Fig.~\ref{fig:fig5}(a)]. Upon introducing ED, the edge flux circumvents the disorder but, due to the general nonlocality of quantum transport, the bulk flux is also reduced [Fig.~\ref{fig:fig5}(b)], which explains the slight reduction of the conductance in Fig.~\ref{fig:fig3}(b) within the gaps $\Delta_0$ and $\Delta_1$. \begin{figure} \includegraphics[width=\linewidth]{fig5.pdf} \caption{Spatial profile of local current density in Figs.~\ref{fig:fig4}(a) and ~\ref{fig:fig4}(b) over the transverse cross section within: (a) pristine Floquet TI; or (b) Floquet TI with edge disorder. The position of the transverse cross section is marked by the dashed vertical line in Figs.~\ref{fig:fig4}(a) and ~\ref{fig:fig4}(b), respectively.
The horizontal dashed line in both panels marks the extent of the edge state.} \label{fig:fig5} \end{figure} Interestingly, SQUID-based imaging of QSHI made from HgTe quantum wells has found that gate tuning of the bulk conductivity can lead to a transport regime where edge and bulk local current densities {\em coexist}~\cite{Nowack2013}. The local current density is imaged by detecting the magnetic field it produces according to the Biot-Savart law, which is possible even through the top gate employed to tune the carrier density. In this regime, experimental images were analyzed to quantify the contribution of edge and bulk local currents to the total current. We perform a similar analysis in Fig.~\ref{fig:fig5}, which shows that in the pristine Floquet TI from Fig.~\ref{fig:fig4}(a), the edge current contributes 44\% and the bulk current contributes 56\% to the total current over the transverse cross section [marked by the dashed line in Fig.~\ref{fig:fig4}(a)]. Conversely, in the presence of edge disorder in Fig.~\ref{fig:fig4}(b), the edge current contributes 52\% and the bulk current contributes 48\% to the total current over the same transverse cross section. \subsection{Proposed experimental schemes for probing edge vs. bulk transport within Floquet TI: Graphene nanopore and magnetic field imaging}\label{sec:exp} \begin{figure} \includegraphics[width=8.5cm]{fig6.pdf} \caption{(a) The LDOS [Eq.~\eqref{eq:ldos}] evaluated at $E = \hbar\omega/2$ in the center of the $\Delta_1$ gap in Fig.~\ref{fig:fig3}(a) for irradiated pristine ZGNR. (b) The LDOS evaluated at $E=\hbar\omega/2$ for irradiated ZGNR with a nanopore drilled in the interior of the nanoribbon. (c) Time-averaged local bond current [Eq.~\eqref{eq:floquet_bc}] in the same irradiated ZGNR with a nanopore as in (b). (d) Zero-temperature two-terminal conductance $G(E_F)$ of irradiated ZGNR with a nanopore (orange line) vs. conductance of irradiated pristine ZGNR (blue line) within the gap $\Delta_1$ in Fig.~\ref{fig:fig3}(a).
The former is reduced by $\sim 28\%$ with respect to the latter. In all panels we use $\hbar\omega = 3\gamma$, $z=0.5$ and $N_\mathrm{ph} = 7$.} \label{fig:fig6} \end{figure} The spatial profiles of local current density of conventional time-independent TIs in Figs.~\ref{fig:fig4}(c)--(h) indicate that any disorder introduced in the interior of ZGNR will have no effect on the two-terminal conductance in Fig.~\ref{fig:fig2}. This was explicitly demonstrated in Ref.~\cite{Chang2014} for the case of QSHI (based on graphene plus heavy adatoms). Therefore, we propose to employ a nanopore in the ZGNR interior as the simplest technique that can detect the presence of bulk current density in Figs.~\ref{fig:fig4}(a) and ~\ref{fig:fig4}(b) in the case of the Floquet TI. We introduce the nanopore in Figs.~\ref{fig:fig6}(b) and ~\ref{fig:fig6}(c) in such a way that it preserves the left-right symmetry of the device, in order to avoid any charge pumping by the time-dependent potential of light~\cite{FoaTorres2005,Bajpai2019,FoaTorres2014}. In experiments, nanopores are routinely drilled, without disrupting the surrounding honeycomb lattice of graphene, for applications like DNA sequencing~\cite{Heerema2016}, and they could also be deployed to block phonon transport in thermoelectric applications~\cite{Chang2014,Chang2012}. Figures~\ref{fig:fig6}(a) and ~\ref{fig:fig6}(b) confirm that the nanopore does not impair the high LDOS [Eq.~\eqref{eq:ldos}] near the edges of the Floquet TI, which corresponds to the chiral edge states from Fig.~\ref{fig:fig3}(a). Figure~\ref{fig:fig6}(c) shows that local transport in the presence of the nanopore utilizes {\em both} left-to-right moving chiral edge states and bulk states. Since electrons flowing through the bulk are backscattered by the nanopore, its presence reduces the conductance by $\simeq 28$\% in Fig.~\ref{fig:fig6}(d) within the gap $\Delta_1$. A more detailed probing of edge vs.
bulk transport in $\sim\mu$m-sized devices, such as those employed in recent experiments~\cite{McIver2020} to convert graphene into Floquet TI, could be achieved using diamond NV centers. The device can be fabricated on a diamond containing a high-density, near-surface NV ensemble~\cite{Tetienne2017,Ku2020}, along with a graphite top gate separated by hexagonal boron nitride to tune the carrier density~\cite{Ku2020}. The spin state of NV centers, which serve as the sensor of the magnetic field produced by the local current density, can be optically initialized and read out by imaging the NV photoluminescence onto a camera. Such a setup has the advantages of being able to operate over a wide range of temperatures, from cryogenic to room temperature (e.g., the experiment in Ref.~\cite{McIver2020} was done at 80 K); it can be readily integrated with an optical cryostat necessary for experiments involving THz radiation; and it has less stringent vibrational requirements compared to scanning setups. We note that THz radiation is far detuned from any of the NV orbital/spin transitions and hence it will not affect NV centers at all. In Ref.~\cite{McIver2020}, a constant DC bias generates a current \mbox{$I \simeq 125$ $\mu$A}, and THz pulses drive the system into the Floquet TI state for about \mbox{$3$ ps} at \mbox{$\sim 210$ kHz} repetition rate. Hence, the typical time-averaged current density is \mbox{$\bar{J}_{\mathrm{F}}\sim 80$ pA/$\mu$m} in a 1-$\mu$m-wide device. This corresponds to a typical stray magnetic field \mbox{$\mu_0 \bar{J}\sim 0.1$ nT}, where $\mu_0$ is the permeability of free space. While this is a small field, its measurement is attainable with existing NV sensing technologies. For example, a single NV can sense a $\sim$nT field with a Hahn-echo sequence over 100 seconds of signal averaging at room temperature~\cite{Maze2008}.
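The back-of-the-envelope numbers quoted above follow from a simple duty-cycle estimate, sketched here with the parameters of Ref.~\cite{McIver2020}:

```python
import math

# Duty-cycle estimate of the time-averaged current density and stray field:
# I = 125 uA dc bias; THz pulses drive the Floquet TI phase for ~3 ps
# at ~210 kHz repetition rate; device width 1 um.
I_dc = 125e-6             # A
duty = 3e-12 * 210e3      # fraction of time spent in the Floquet TI phase
width = 1e-6              # m
mu0 = 4 * math.pi * 1e-7  # vacuum permeability, T m / A

J_bar = I_dc * duty / width  # A/m, time-averaged sheet current density
B_stray = mu0 * J_bar        # T
# J_bar ~ 80 pA/um and B_stray ~ 0.1 nT, as quoted in the text
```

The duty cycle of $\sim 6\times 10^{-7}$ is what shrinks a $125~\mu$A drive down to the $\sim 80$~pA/$\mu$m scale that the NV ensemble must resolve.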
Detection of a \mbox{$\sim 0.1$ nT} field is attainable in combination with entanglement-assisted repetitive readout~\cite{Neumann2010,Lovchinsky2016}, which is available for NV ensembles, as well as with enhanced coherence at cryogenic temperatures and with dynamical decoupling sequences~\cite{BarGill2013}. One can measure the differential current density \mbox{$\Delta J(x,y)\equiv J_{\mathrm{FTI}}(x,y)-J_{\mathrm{NIR}}(x,y)$}, where $J_{\mathrm{FTI}}(x,y)$ [$J_{\mathrm{NIR}}(x,y)$] is the current density within the Floquet TI [nonirradiated normal phase], by pulsing on the THz radiation during one free precession time of the Hahn-echo and keeping the THz drive off during the other free precession. The current density $J_{\mathrm{NIR}}(x,y)$ can be measured separately in a Hahn-echo measurement without any THz pulses, enabling one to extract $J_{\mathrm{FTI}}(x,y)$. Diffraction-limited optical imaging of the magnetic field has a resolution of $\sim 400$ nm~\cite{Ku2020}, which is enough to resolve edge currents separated by a width of 1 $\mu$m. We anticipate that the spatial resolution can be further improved to $\sim 10$ nm by using Fourier gradient imaging~\cite{Arai2015}. \section{Conclusions}\label{sec:conclusions} In conclusion, using the steady-state NEGF formalism applied to the two-terminal Landauer-B\"{u}ttiker setup [Fig.~\ref{fig:fig1}(b)] with the scattering region consisting of conventional time-independent TIs---such as QHI, QAHI and QSHI defined on a graphene nanoribbon in order to generate chiral edge states of {\em finite} length---we demonstrate that their conductance is never perfectly quantized [Fig.~\ref{fig:fig2}]. This is due to backscattering at the NM-lead/2D-TI interface. Nevertheless, it remains very close to a perfect plateau at $2e^2/h$ within the topologically nontrivial band gap, and it is completely insensitive to edge disorder.
The spatial profiles of local current density visualize how electrons flow from bulk states within topologically trivial NM leads into the narrow flux defined by edge states within the TI region, while circumventing any edge disorder within the TI region. In contrast, when the scattering region is converted into the Floquet TI by irradiating the graphene nanoribbon with circularly polarized light, the conductance within the light-induced topologically nontrivial band gaps is not quantized, but it changes little with edge disorder. These results confirm previous findings in the literature~\cite{FoaTorres2014,Farrell2016} while ensuring proper convergence and charge current conservation in the solution of the Floquet-NEGF equations~\cite{Mahfouzi2012}. Furthermore, we employ such charge-conserving Floquet-NEGF formalism to compute spatial profiles of local current density. They are higher along the edges [Fig.~\ref{fig:fig5}(a)], following the high LDOS near the edges [Fig.~\ref{fig:fig6}(a)], but remain nonzero also in the interior of the Floquet TI [Fig.~\ref{fig:fig4}(a)]. Such spatial profiles also make it possible to refine previous qualitative estimates of the edge vs. bulk contribution to current through Floquet TI~\cite{Esin2018} with a precise measure from Figs.~\ref{fig:fig4}(a), ~\ref{fig:fig4}(b) and ~\ref{fig:fig5}, which show that edge currents and bulk currents contribute nearly equally to the total current. Thus, observing quantized transport in Floquet TI would require minimizing the coupling to bulk states. We propose a very simple experimental technique to detect the presence or absence of bulk states in quantum transport through Floquet TI---conductance measurements under laser irradiation should be performed using a uniform graphene flake, as in the very recent experiments~\cite{McIver2020}, as well as repeated after a nanopore~\cite{Heerema2016} is drilled in the interior of the flake.
If the local current density is nonzero in the bulk, it will be scattered by the nanopore, which leads to a $\simeq 28$\% reduction [Fig.~\ref{fig:fig6}(d)] of the two-terminal conductance when compared to the graphene nanoribbon without the nanopore. Finally, we delineate more sophisticated experimental schemes for direct imaging~\cite{Ku2020} of the magnetic field produced by edge and bulk local current densities based on diamond NV centers, whose orbital/spin transitions are far detuned from the THz radiation employed~\cite{McIver2020} in recent experiments to convert graphene into Floquet TI. \begin{acknowledgments} We thank L. E. F. Foa Torres, J. Gong and L. Vannucci for insightful discussions. This research was supported by the US National Science Foundation (NSF) under Grant No. ECCS 1922689. \end{acknowledgments}
\section{Introduction}\label{sec:introduction} As a successful geometric theory of gravitation, Einstein's theory of general relativity (GR) has been confirmed in all observations devoted to its testing to date \citep{Ashby02,Bertotti03}, in particular in famous experiments \citep{Dyson20,Pound60,Shapiro64,Taylor79}. However, the pursuit of testing gravity at much higher precision has continued over the past decades, including measurements of the Earth-Moon separation as a function of time through lunar laser ranging \citep{Williams04}. On the other hand, how to formulate and quantitatively interpret tests of gravity is another question, and an interesting proposal in this respect has been made within the parameterized post-Newtonian (PPN) framework \citep{Thorne71}. Beyond its original physical meaning \citep{Bertotti03}, the scale-independent post-Newtonian parameter $\gamma$, with $\gamma = 1$ representing GR, may serve as a test of the theory at large distances. This paper is focused on quantitative constraints on GR as a theory of gravity, using the recently released large sample of galaxy-scale strong gravitational lensing systems discovered and observed in the SLACS, BELLS, LSD, and SL2S surveys \citep{Cao15}. Up to now, most of the progress in strong gravitational lensing has been made in investigating cosmological parameters \citep{Zhu00a,Zhu00b,Cha03,Cha04,Mit05,Grillo08,Oguri08,Zhu08a,Zhu08b,Cao12a,Cao12b,Cao12c,Cao12d,Biesiada06,Biesiada10,Collett14,Cardone et al.2016, Bonvin16}, the distribution of matter in massive galaxies acting as lenses \citep{Zhu97,MS98,Jin00,Kee01,Ofek03,Treu06a}, and the photometric properties of background sources at cosmological distances \citep{Cao15b}. All the above-mentioned results have been obtained under the assumption that GR is valid.
Using strong lensing systems, \citet{Grillo08} reported the value of the present-day matter density $\Omega_m$ ranging from 0.2 to 0.3 at 99\% confidence level. This initial result, confirmed in later strong lensing studies (e.g. \citet{Cao12c, Cao15}), is consistent with most of the current data, including precision measurements of Type Ia supernovae \citep{Amanullah10} and of the anisotropies in the cosmic microwave background radiation \citep{Planck1}. Currently, the concordance $\Lambda$CDM model is in agreement with most of the available cosmological observations, in which the cosmological constant, contributing more than 70\% of the total energy of the universe, is playing the role of an exotic component called dark energy responsible for the accelerated expansion of the Universe. However, noticeable tensions between different cosmological probes have appeared. For example, regarding $H_0$ there is a tension between the CMB results from Planck \citep{Planck1} and the most recent Type Ia supernovae data \citep{Riess16}. Similarly, the $\sigma_8$ parameter derived from the CMB results from Planck \citep{Planck1} turned out to be in tension with the recent tomographic cosmic shear results both from the Canada France Hawaii Telescope Lensing Survey (CFHTLenS) \citep{Heymans12,MacCrann15} and the Kilo Degree Survey (KiDS) \citep{Hildebrandt16}. These tensions partly motivate the test of GR performed in the present paper. With reasonable prior assumptions and independent measurements concerning the background cosmology and the internal structure of lensing galaxies, one can use strong lensing systems as another tool to constrain the PPN parameters describing deviations from GR. This idea was first applied to 15 SLACS lenses by \citet{Bolton06}, who found the post-Newtonian parameter to be $\gamma=0.98 \pm 0.07$, based on priors on galaxy structure from local observations.
More recently, \citet{Schwab10} re-examined the expanded SLACS sample \citep{Bolton08a} and obtained a constraint on the PPN parameter $\gamma = 1.01 \pm 0.05$. Having available sizable catalogs of strong lenses, containing more than 100 systems with spectroscopic as well as astrometric data obtained under well-defined selection criteria \citep{Cao15}, the purpose of this work is to use a mass-selected sample of 80 early-type lenses compiled from SLACS, BELLS, LSD, and SL2S to provide independent constraints on the post-Newtonian parameter $\gamma$. Throughout this paper we assume a flat $\Lambda$CDM cosmology with parameters based on the recent \textit{Planck} observations \citep{Planck1}. \section{Method and data}\label{sec:data} Our goal will be to constrain deviations from General Relativity at the level of the post-Newtonian parameter $\gamma$. The PPN form of the Schwarzschild metric can be written as \begin{equation} d\tau^2 = c^2 dt^2 (1-2GM/c^2r) - dr^2 (1+2\gamma GM/c^2r) - r^2 d\Omega^2~~~. \end{equation} General Relativity corresponds to $\gamma=1$. From the theory of gravitational lensing \citep{Schneider92}, for a specific strong lensing system with the intervening galaxy acting as a lens, multiple images can form with angular separations close to the so-called Einstein radius $\theta_E$: \begin{equation} \theta_E = \sqrt{\frac{1+\gamma}{2}} \left(\frac{4G M_E}{c^2} \frac{D_{ls}}{D_s D_l} \right)^{1/2} ~~~, \end{equation} where $M_E$ is the mass enclosed within a cylinder of radius equal to the Einstein radius, $D_s$ is the distance to the source, $D_l$ is the distance to the lens, and $D_{ls}$ is the distance between the lens and the source. All the above-mentioned distances are angular-diameter distances.
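The scaling of the Einstein radius with $\gamma$ can be made explicit in a short numerical sketch (the mass and distances below are illustrative round numbers, not fitted values):

```python
from math import sqrt

G_N = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
C = 299792458.0   # speed of light, m/s

def einstein_radius(M_E, D_l, D_s, D_ls, gamma=1.0):
    # theta_E (radians) of the PPN lens equation:
    # sqrt((1 + gamma)/2) times the standard GR Einstein radius
    return sqrt((1.0 + gamma) / 2.0) * sqrt(4.0 * G_N * M_E / C ** 2 * D_ls / (D_s * D_l))
```

A deviation $\gamma = 1.1$ changes $\theta_E$ only by the factor $\sqrt{1.05} \approx 1.025$, which is why precise, independent mass estimates (here from stellar kinematics) are needed to constrain $\gamma$.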
Rearranging terms with $R_E = D_l \theta_E$ ($R$ is the cylindrical radius perpendicular to the line of sight -- the $\mathcal{Z}$-axis), we obtain a useful formula: \begin{equation} \frac{G M_E}{R_E} = \frac{2}{(1+\gamma)} \frac{c^2}{4} \frac{D_s}{D_{ls}} \theta_E~~~, \label{eq:einrad} \end{equation} which indicates that only the matter within the Einstein ring is important, according to Gauss's law. On the other hand, spectroscopic measurements of central velocity dispersions $\sigma$ in elliptical galaxies can provide a dynamical estimate of this mass, based on power-law profiles for the total mass density, $\rho$, and luminosity density, $\nu$ \citep{Koopmans06}: \begin{eqnarray} \label{eq:rhopl} \rho(r) &=& \rho_0 \left(\frac{r}{r_0}\right)^{-\alpha} \\ \nu(r) &=& \nu_0 \left(\frac{r}{r_0}\right)^{-\delta} \label{eq:nupl} \end{eqnarray} Here $r$ is the spherical radial coordinate from the lens center: $r^2 = R^2 + \mathcal{Z}^2$. In order to characterize the anisotropy of the three-dimensional velocity dispersion, one introduces \citep{Bolton06,Koopmans06} an anisotropy parameter $\beta$ \begin{equation} \beta(r) = 1 - {\sigma^2_t} / {\sigma^2_r} \label{eq:beta} \end{equation} where $\sigma^2_t$ and $\sigma^2_r$ are, respectively, the tangential and radial components of the velocity dispersion. In the current analysis we will allow for an anisotropic distribution, $\beta \neq 0$, and assume, as is almost always done, that $\beta$ is independent of $r$. Following the well-known spherical Jeans equation \citep{Binney80}, the radial velocity dispersion of the luminous matter $\sigma_r^2(r)$ in the early-type lens galaxies can be expressed as \begin{equation} \sigma^2_r(r) = \frac{G\int_r^\infty dr' \ \nu(r') M(r') (r')^{2 \beta - 2} }{r^{2\beta} \nu(r)}~~~, \label{eq:binney} \end{equation} where $\beta$ is a constant velocity anisotropy parameter.
Combining this with the mass density profile in Eq.~(\ref{eq:rhopl}), we obtain the relation between the mass enclosed within a spherical radius $r$ and $M_E$ as \begin{equation} M(r) = \frac{2}{\sqrt{\pi} \lambda(\alpha)} \left(\frac{r}{R_E}\right)^{3 - \alpha} M_E ~~~, \end{equation} where by $\lambda(x) = \Gamma \left(\tfrac{x-1}{2}\right) / \Gamma \left(\tfrac{x}{2}\right)$ we denoted the ratio of respective Euler's gamma functions. Simplifying the formulae with the notation $\xi = \delta + \alpha - 2$, taken after \citet{Koopmans06}, we arrive at a convenient form for the radial velocity dispersion by scaling the dynamical mass to the Einstein radius: \begin{equation} \sigma^2_r(r) = \left[\frac{G M_E}{R_E} \right] \frac{2}{\sqrt{\pi}\left(\xi- 2 \beta \right) \lambda(\alpha)} \left(\frac{r}{R_E}\right)^{2 - \alpha} \end{equation} In all strong lensing measurements we use, the {\em observed} velocity dispersion is reported, which is a projected, luminosity-weighted average of the radially-dependent velocity dispersion profile of the lensing galaxy. Its theoretical value can be calculated from Eq.~(\ref{eq:binney}) with the assumption that the relationship between stellar number density and stellar luminosity density is spatially constant. This assumption is unlikely to be violated appreciably within the effective radius of the early-type lens galaxies under consideration. Moreover, the actual observed velocity dispersion is measured over the effective spectrometer aperture $\theta_{ap}$ and effectively averaged by line-of-sight luminosity.
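The gamma-function ratio $\lambda(x)$ and the mass scaling $M(r)/M_E$ above are easy to evaluate numerically; as a sanity check, for the isothermal case $\alpha = 2$ one recovers the familiar sphere-to-cylinder mass ratio $2/\pi$ at $r = R_E$ (a minimal sketch):

```python
from math import gamma, pi, sqrt

def lam(x):
    # lambda(x) = Gamma((x - 1)/2) / Gamma(x/2)
    return gamma((x - 1) / 2.0) / gamma(x / 2.0)

def mass_ratio(r_over_RE, alpha):
    # M(r)/M_E = 2 / (sqrt(pi) * lambda(alpha)) * (r/R_E)^(3 - alpha)
    return 2.0 / (sqrt(pi) * lam(alpha)) * r_over_RE ** (3.0 - alpha)
```

Since $\lambda(2) = \Gamma(1/2)/\Gamma(1) = \sqrt{\pi}$, the prefactor reduces to $2/\pi$ for a singular isothermal profile, i.e., the spherical mass within $R_E$ is smaller than the cylindrical $M_E$ by that factor.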
Taking into account the effects of the aperture with atmospheric blurring and luminosity-weighted averaging, the aperture-averaged observed velocity dispersion takes the form \begin{eqnarray} \nonumber \bar {\sigma}_*^2 &=& \left[\frac{c^2}{4} \frac{D_s}{D_{ls}} \theta_E \right] \frac{2}{\sqrt{\pi}} \frac{(2 \tilde{\sigma}_{\rm atm}^2/\theta_E^2)^{1-\alpha/2}}{ (\xi - 2\beta)} \\ &&\times\left[\frac{\lambda(\xi) - \beta \lambda(\xi+2)} {\lambda(\alpha)\lambda(\delta)}\right] \frac{ \Gamma(\tfrac{3-\xi}{2}) }{\Gamma(\tfrac{3 - \delta}{2}) } ~~~. \label{eq:plsig} \end{eqnarray} where $\tilde{\sigma}_{\rm atm}\approx\sigma_{\rm atm} \sqrt{1 + \chi^2 / 4 + \chi^4 / 40}$ and $\chi = \theta_{\rm ap} / \sigma_{\rm atm}$ \citep{Schwab10}; $\sigma_{\rm atm}$ is the seeing recorded by the spectroscopic guide cameras during observing sessions \citep{Cao16}. The above equation tells us that we can constrain the PPN parameter $\gamma$ from a sample of lenses with known redshifts of the lens and of the source, with measured velocity dispersion and Einstein radius, provided we have reliable knowledge about the cosmological model and the parameters describing the mass distribution of lensing galaxies ($\alpha$, $\beta$, $\delta$). For the purpose of our analysis, the angular diameter distances $D_A(z)$ between redshifts $z_1$ and $z_2$ were calculated using the best-fit matter density parameter $\Omega_m$ given by the \textit{Planck} Collaboration assuming a flat FRW metric \citep{Planck1}. Moreover, we allow the luminosity density profile to be different from the total-mass density profile, i.e., $\alpha \neq \delta$, and allow for stellar velocity anisotropy, i.e., $\beta \neq 0$. Based on a well-studied sample of nearby elliptical galaxies from \citet{Gerhard01}, the anisotropy $\beta$ is characterized by a Gaussian distribution, $\beta=0.18\pm0.13$, which has also been extensively used in previous works \citep{Bolton06,Schwab10}. 
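Equation (\ref{eq:plsig}) is straightforward to implement. In the sketch below (our own function names and example inputs) the $2/(1+\gamma)$ factor from Eq.~(\ref{eq:einrad}) is written out explicitly so that the $\gamma$ dependence is visible; the Einstein radius, distance ratio, and seeing values are assumed example numbers.

```python
import math

def lam(x):
    return math.gamma((x - 1.0) / 2.0) / math.gamma(x / 2.0)

def sigma_star_sq(theta_E, Ds_over_Dls, alpha, beta, delta,
                  sigma_atm, theta_ap, gamma=1.0, c=2.998e5):
    """Luminosity-weighted, aperture-averaged velocity dispersion squared
    of Eq. (eq:plsig), with the 2/(1+gamma) factor of Eq. (eq:einrad)
    written explicitly.  Angles in radians, c in km/s, result in (km/s)^2."""
    xi = alpha + delta - 2.0
    chi = theta_ap / sigma_atm
    s_atm = sigma_atm * math.sqrt(1.0 + chi**2 / 4.0 + chi**4 / 40.0)
    pref = (2.0 / (1.0 + gamma)) * (c**2 / 4.0) * Ds_over_Dls * theta_E
    return (pref * (2.0 / math.sqrt(math.pi))
            * (2.0 * s_atm**2 / theta_E**2) ** (1.0 - alpha / 2.0)
            / (xi - 2.0 * beta)
            * (lam(xi) - beta * lam(xi + 2.0)) / (lam(alpha) * lam(delta))
            * math.gamma((3.0 - xi) / 2.0) / math.gamma((3.0 - delta) / 2.0))
```

Since $\gamma$ enters only through the prefactor, $\bar\sigma_*^2 \propto 2/(1+\gamma)$ at fixed lens parameters, which is why a measured velocity dispersion directly constrains $\gamma$.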
More recently, \citet{Xu16} measured the stellar velocity anisotropy parameter $\beta$ and its correlations with redshift and stellar velocity dispersion, based on Illustris-simulated early-type galaxies with spherically symmetric density distributions. It is worth noting from their results that $\beta$ markedly depends on the stellar velocity dispersion and its mean value varies from 0.10 to 0.30 for intermediate-mass galaxies ($200km/s< \sigma_{ap} \leq 300km/s$), which is consistent with the values used in our analysis. Following our previous analysis \citep{Cao16} concerning power-law mass and luminosity density profiles of elliptical galaxies, we used a mass-selected sample of strong lensing systems, taken from a comprehensive compilation of strong lensing systems observed by four surveys: SLACS, BELLS, LSD and SL2S. The sample has been defined by restricting the velocity dispersions of lensing galaxies to the intermediate range: $200km/s< \sigma_{ap} \leq 300km/s$. Lenses of this sub-sample are located at redshifts ranging from $z_l=0.08$ to $z_l=0.94$. Original data on these strong lenses were derived by \citet{Bolton08a,Auger09,Brownstein12,Koopmans02,Treu02,Treu04,Sonnenfeld13a,Sonnenfeld13}, and more comprehensive data concerning these systems can be found in Table 1 of \citet{Cao15}. Fig.~\ref{fig1} shows the scatter plot for this sample in the plane spanned by the redshift of the lens and its velocity dispersion. \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{fig1.eps} \end{center} \caption{ Characteristics of the strong lensing data sample of 80 intermediate-mass early-type galaxies. Observed velocity dispersion inside the aperture is plotted against redshift to the lens. 
\label{fig1}} \end{figure} \section{Main results}\label{sec:results} Because $\alpha$ and $\delta$ cannot be independently measured for each lensing galaxy, we first treated them as free parameters and inferred $\alpha$, $\delta$, and $\gamma$ simultaneously. Performing fits on the strong lensing data-set, the 68\% confidence level uncertainties on the three model parameters are \begin{eqnarray} && \alpha= 2.017^{+0.093}_{-0.082}, \nonumber\\ && \delta= 2.485^{+0.445}_{-1.393}, \nonumber\\ && \gamma= 1.010^{+1.925}_{-0.452}. \nonumber \end{eqnarray} Fig.~\ref{fig2} shows these constraints in the parameter space of $\alpha$, $\delta$, and $\gamma$. The fits on $\alpha$ and $\delta$ are consistent with the results of \citet{Bolton06,Grillo08,Schwab10}, which are characterized by Gaussian distributions: \begin{equation} \begin{array}{lclclcl} \avg{\alpha} &=& 2.00 & ; & \sigma_\alpha & = & 0.08 \\ \avg{\delta} & = & 2.40 & ; & \sigma_\delta & = & 0.11~~~. \end{array} \label{eq:alphabeta_pars} \end{equation} More importantly, the degeneracy between the two parameters $\gamma$ and $\delta$ is clearly indicated by the results presented in Fig.~\ref{fig2}, i.e., a steeper luminosity-density profile for the lensing early-type galaxies leads to a larger value of the parameterized post-Newtonian parameter. This tendency can also be seen in the sensitivity analysis shown below. Now the parameters characterizing the total mass-profile shape, velocity anisotropy, and light-profile shape of lenses are set at their best measured values. Performing fits on $\gamma$, we find the resulting posterior probability density shown in Fig.~\ref{fig3}. The result $\gamma = 0.995^{+0.037}_{-0.047}$ (1$\sigma$ confidence) is consistent with $\gamma = 1$ and also with the previous results of \citet{Bolton06} obtained with strong lensing systems. The scatter of galaxy structure parameters is an important source of systematic errors on the final result. 
Taking the best-fit values of the structure parameters as our fiducial model, we investigated how the PPN constraint is altered by introducing the uncertainties on $\alpha$, $\beta$, and $\delta$ listed in Eq.~(\ref{eq:alphabeta_pars}). First, we performed a sensitivity analysis, varying the parameter of interest while fixing the other parameters at their best-fit values. In general, one can see from Table~\ref{priorresult} and Fig.~\ref{fig7} that the constraint on $\gamma$ is quite sensitive to small systematic shifts in the adopted lens-galaxy parameters. By comparing the contribution of each of these systematic errors to the systematic error on $\gamma$, we find that the largest source of systematic error is the mass density slope $\alpha$, followed by the velocity anisotropy parameter $\beta$ and the luminosity density slope $\delta$. Second, taking the intrinsic scatter of $\alpha$, $\beta$, and $\delta$ into consideration, we found $\gamma$ varying from 0.845 to 1.240 at the 1$\sigma$ confidence level. This means that systematic errors might exceed $\sim 25\%$ of the final result. The large covariances of $\gamma$ with $\alpha$ and $\delta$ seen in Fig.~\ref{fig2} motivate the future use of auxiliary data to improve constraints on $\alpha$, $\beta$ and $\delta$. For example, $\alpha$ can be inferred for individual lenses from high-resolution imaging of arcs \citep{Suyu06,Vegetti10,Collett14,Wong15}, while constraints on $\beta$ and $\delta$ can be improved with integral field unit (IFU) data \citep{Barnabe13}, without the assumption of general relativity (GR). \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{fig2.eps} \end{center} \caption{ Constraints on the PPN $\gamma$ parameter, the total-mass and luminosity density parameters obtained from the sample of strong lensing systems. Blue crosses denote the best-fit values. 
\label{fig2}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{fig3.eps} \end{center} \caption{ Normalized posterior likelihood of the PPN $\gamma$ parameter obtained with rigid priors on the nuisance parameters ($\alpha$, $\beta$, $\delta$). \label{fig3}} \end{figure} Another issue which should be discussed is how much $\gamma$ is affected by the uncertainty of the cosmological parameters of the $\Lambda$CDM model used in our study. For this purpose, we also considered the WMAP9 result of $\Omega_m=0.279$ in order to make a comparison with the \textit{Planck} observations. Not surprisingly, the differences turned out to be negligible. This could have been expected because cosmology intervenes here only through the distance ratio $D_{ls}/D_s$, which is very weakly dependent on the value of $\Omega_m$ and in flat cosmology does not depend on $H_0$ at all. The next generation of wide and deep sky surveys with improved depth, area and resolution may, in the near future, increase the current galactic-scale lens sample sizes by orders of magnitude \citep{Kuhlen04,Marshall05}. Such a significant increase of the number of strong lensing systems will considerably improve the constraints on the PPN parameter. Now we will illustrate what kind of result one could get using future data from the forthcoming Large Synoptic Survey Telescope (LSST) survey, which may detect 120000 lenses in the most optimistic scenario \citep{Collett15}. In order to make a fair comparison with the results derived from current strong lensing systems (Fig.~2), we first turn to the simulated LSST population containing $\sim 40000$ lensing galaxies with intermediate velocity dispersions ($200km/s< \sigma_{ap} \leq 300km/s$)\footnote{Our simulated LSST sample is obtained with the simulation programs available at github.com/tcollett/LensPop.}. 
Performing fits on this simulated strong lensing data-set, we obtain the constraints in the parameter space of $\alpha$, $\delta$, and $\gamma$ shown in Fig.~\ref{fig8}. It is apparent that from the simulated LSST strong lensing data, we may expect the total-mass density parameter $\alpha$ to be estimated with $10^{-3}$ precision. However, the degeneracy between the PPN $\gamma$ parameter and the luminosity density parameter $\delta$ still needs to be investigated with future high-quality integral field unit (IFU) data \citep{Barnabe13}. In the next section, we will apply a cosmological-model-independent method \citep{Rasanen15} to study the degeneracy between cosmic curvature and the parameterized post-Newtonian parameter $\gamma$. \begin{table} \caption{\label{priorresult} Sensitivity of constraints on $\gamma$ with respect to the galaxy structure parameters. } \begin{center} \begin{tabular}{c|c}\hline\hline Systematics & \hspace{4mm} PPN parameter \hspace{4mm}\\ \hline $\alpha=2.00;\beta=0.18;\delta=2.40$ & $\gamma = 0.995^{+0.037}_{-0.047}$ \\ $\alpha=1.92;\beta=0.18;\delta=2.40$ & $\gamma= 0.860\pm0.040$ \\ $\alpha=2.08;\beta=0.18;\delta=2.40$ & $\gamma= 1.169\pm0.050$ \\ $\alpha=2.00;\beta=0.05;\delta=2.40$ & $\gamma= 0.914\pm0.043$ \\ $\alpha=2.00;\beta=0.31;\delta=2.40$ & $\gamma= 1.087\pm0.043$ \\ $\alpha=2.00;\beta=0.18;\delta=2.29$ & $\gamma= 1.111\pm0.044$ \\ $\alpha=2.00;\beta=0.18;\delta=2.51$ & $\gamma= 0.883\pm0.039$ \\ \hline\hline \end{tabular} \end{center} \end{table} \begin{figure*} \begin{center} \includegraphics[width=0.33\hsize]{fig7.1.eps} \includegraphics[width=0.33\hsize]{fig7.2.eps}\includegraphics[width=0.33\hsize]{fig7.3.eps} \end{center} \caption{Normalized likelihood plot for $\gamma$ by choosing different galaxy structure parameters. 
\label{fig7}} \end{figure*} \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{fig8.eps} \end{center} \caption{ Constraints on the PPN $\gamma$ parameter, the total-mass and luminosity density parameters obtained from the simulated LSST strong lensing data. \label{fig8}} \end{figure} \section{Cosmic curvature and parameterized post-Newtonian formalism} In a homogeneous and isotropic Universe, the dimensionless distance $d(z_1;z_s)=(H_0/c)(1+z_s)D_A(z_1;z_s)$ can be written as \begin{equation} \label{eq:r} d(z_1;z_s) = \frac{1}{ \sqrt{|\Omega_{k}|}} {\rm sinn}\left[\sqrt{|\Omega_{k}|} \int_{z_1}^{z_s}\frac{dz'}{E(z')}\right], \end{equation} where $E(z)=H(z)/H_0$ is the expansion rate and $\Omega_{k}$ is the spatial curvature density parameter; ${\rm sinn}(x)=\sinh(x)$ for $\Omega_k>0$, ${\rm sinn}(x)=x$ for $\Omega_k=0$, and ${\rm sinn}(x)=\sin(x)$ for $\Omega_k<0$. For a strong lensing system, with the notation $d(z)=d(0;z)$, $d_l=d(0; z_l)$, $d_s=d(0; z_s)$, and $d_{ls}=d(z_l; z_s)$, a simple sum rule is easily obtained: \begin{equation} \label{dls2}d_{ls}/d_s = \sqrt{1+\Omega_k d_l^2}-d_l/d_s\sqrt{1+\Omega_k d_s^2}. \end{equation} [The case of Eq.~(\ref{dls2}) is given in, e.g., \citet{Peebles03}, p.~336.] This fundamental formula provides a model-independent probe to test both the spatial curvature, in combination with weak lensing and baryon acoustic oscillation (BAO) measurements \citep{Bernstein06}, and the FLRW metric, in combination with strong lensing systems and SNe Ia observations \citep{Rasanen15}. For the purpose of our analysis, we determined the dimensionless distances $d_l$ and $d_s$ of all ``observed'' strong lensing systems (taken from the LSST simulation by \citet{Collett15}) by fitting a polynomial to the Union2.1 SN Ia data covering the redshift range $0<z\leq 1.414$ \citep{Amanullah10}. In this way we bypassed the need to assume any specific cosmological model. 
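The sum rule of Eq.~(\ref{dls2}) is an exact identity in any FRW model, which can be verified numerically. A minimal sketch follows; the $\Lambda$CDM expansion rate and the parameter values are illustrative assumptions used only to generate test distances.

```python
import math

def E_lcdm(z, om=0.3, ok=0.1):
    """Dimensionless expansion rate H(z)/H0 for LCDM (illustrative parameters)."""
    ol = 1.0 - om - ok
    return math.sqrt(om * (1 + z) ** 3 + ok * (1 + z) ** 2 + ol)

def comoving(z1, z2, E, n=4000):
    """Simpson integration of dz/E(z) between z1 and z2."""
    h = (z2 - z1) / n
    s = 1.0 / E(z1) + 1.0 / E(z2)
    for i in range(1, n):
        s += (4 if i % 2 else 2) / E(z1 + i * h)
    return s * h / 3.0

def d(z1, z2, E, ok):
    """Dimensionless distance of Eq. (eq:r); sinn is sinh, identity, or sin
    for open, flat, and closed geometries respectively."""
    x = comoving(z1, z2, E)
    if ok > 0:
        return math.sinh(math.sqrt(ok) * x) / math.sqrt(ok)
    if ok < 0:
        return math.sin(math.sqrt(-ok) * x) / math.sqrt(-ok)
    return x
```

For a flat universe the rule reduces to $d_{ls}/d_s = 1 - d_l/d_s$, i.e., simple additivity of the comoving distances.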
By using Eq.~(\ref{dls2}) we were able to calculate the distance ratio $d_{ls}/d_s$ depending only on the curvature density parameter $\Omega_k$. The reported statistical and systematic uncertainties of the distance modulus for individual SNe Ia are considered in the fitting procedure. In the Union2.1 SN Ia compilation, the light-curve fitting parameters used for distance estimation are constrained in a global fit. However, compared to the uncertainties in the modeling of the strong lensing systems, the model-dependence of the SNe Ia analysis is likely subdominant \citep{Rasanen15}. Then we assessed the distance ratios $d_{ls}/d_s$ from the strong lensing data (Einstein radius and velocity dispersion) according to Eq.~(\ref{eq:plsig}). For this purpose we used the simulated observations of the forthcoming photometric LSST survey \citep{Collett15}. Using the simulation programs available at github.com/tcollett/LensPop, we obtained 53000 strong lensing systems meeting the redshift criterion $0<z_l<z_s\leq 1.414$, consistent with the SN Ia data used in parallel. The simulated catalog is derived on the basis of realistic population models of elliptical galaxies acting as lenses, with the mass distribution approximated by singular isothermal ellipsoids. Following the assumptions underlying the simulation, we fixed $\alpha=\delta=2$ and $\beta=0$ in our analysis. We took the fractional uncertainty of the Einstein radius at the level of 1\% and of the observed velocity dispersion at the level of 10\%. The secondary lensing contribution from matter along the line of sight was neglected in our analysis \footnote{ The assumption of 1\% accuracy on the Einstein radius measurements from the future LSST survey is reasonable, although the line-of-sight effect might introduce $\sim 3\%$ uncertainties in the Einstein radii \citep{Hilbert09}. 
However, according to the recent analysis by \citet{Collett16}, the lines-of-sight for monitorable strong lenses (especially for quadruply imaged quasars) might be biased at the level of 1\%. Some attempts to account for the line-of-sight secondary lensing for quasars can also be found in \citet{Collett13,Greene13,Rusu16}.}. Fig.~\ref{fig4} displays the fitting results in the $\Omega_k-\gamma$ plane, thus illustrating the dependence between the cosmic curvature and the PPN $\gamma$ parameter. It is apparent that a flat universe together with the validity of GR ($\Omega_k=0$, $\gamma=1$) is strongly supported. More importantly, there exists a significant degeneracy between the spatial curvature of the Universe and the PPN parameter, which quantifies how much space curvature is produced by a unit rest mass of the objects along or near the path of the particles. A similar degeneracy between $\gamma$ and other cosmological parameters (the matter density fraction $\Omega_m$ and the dark energy equation of state $w$) can also be seen in Fig.~\ref{fig5}. \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{fig4.eps} \end{center} \caption{ Constraints on the PPN parameter and cosmic curvature from the simulated LSST strong lensing data. \label{fig4}} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{fig5.eps} \end{center} \caption{ Constraints on the PPN parameter and cosmological parameters from the simulated LSST strong lensing data. \label{fig5}} \end{figure} One can easily check that a reduction of the error on $\Omega_k$ would lead to more stringent constraints on $\gamma$, which encourages us to consider the possibility of testing PPN at much higher accuracy with future surveys of strong lensing systems. 
We now set a prior on the cosmic curvature of $-0.007<\Omega_k<0.006$, according to the latest CMB and baryon acoustic oscillation data \citep{Planck1}, and obtain a constraint on the PPN parameter: $\gamma=1.000^{+0.0023}_{-0.0025}$. When we changed the fractional uncertainty of the Einstein radius to the level of 1\% and of the observed velocity dispersion to the level of 5\%, the resulting constraint on the PPN parameter became $\gamma=1.000^{+0.0009}_{-0.0011}$. The posterior probability density for $\gamma$ is shown in Fig.~\ref{fig6}. One can see from this plot that much more severe constraints would be achieved, and one can expect $\gamma$ to be estimated with $10^{-3}\sim 10^{-4}$ precision. \begin{figure} \begin{center} \includegraphics[width=1.0\hsize]{fig6.eps} \end{center} \caption{ Constraints on the PPN parameter from simulated LSST strong lensing data, with a prior on the cosmic curvature $-0.007<\Omega_k<0.006$ from \textit{Planck}. \label{fig6}} \end{figure} \section{Conclusions} Based on a mass-selected sample of galaxy-scale strong gravitational lenses from the SLACS, BELLS, LSD and SL2S surveys and a well-motivated fiducial set of lens-galaxy parameters, we tested the weak-field metric on kiloparsec scales and found a constraint on the post-Newtonian parameter $\gamma = 0.995^{+0.037}_{-0.047}$ under the assumption of a flat universe from \textit{Planck} observations. This is in agreement with the general relativity value $\gamma=1$ at the $4\%$ accuracy level. Considering systematic uncertainties in the total mass-profile shape, velocity anisotropy, and light-profile shape, we estimate the systematic errors to be $\sim 25\%$. Furthermore, we illustrated what kind of result we could get using future data from the forthcoming Large Synoptic Survey Telescope (LSST) survey \citep{Collett15}. 
We applied a cosmological-model-independent method \citep{Rasanen15} to study the degeneracy between cosmic curvature and the parameterized post-Newtonian parameter $\gamma$. It is apparent that a spatially flat Universe with the validity of GR ($\Omega_k=0$, $\gamma=1$) is strongly supported. Moreover, a reduced uncertainty on $\Omega_k$ leads to more stringent constraints on $\gamma$. This opens up the possibility of testing PPN with much higher accuracy using strong lensing systems discovered in future surveys. By setting a prior on the cosmic curvature of $-0.007<\Omega_k<0.006$, assumed according to the latest CMB plus baryon acoustic oscillation data \citep{Planck1}, the accuracy of the $\gamma$ determination reached $10^{-3}\sim 10^{-4}$ precision. Therefore, we conclude that samples of strong lensing systems with measured stellar velocity dispersions, much larger than currently available, may serve as an important probe to test the validity of GR, provided that the mass-dynamical structure of lensing galaxies is better characterized and constrained in future surveys. \vspace{0.5cm} This work was supported by the Ministry of Science and Technology National Basic Science Program (Project 973) under Grants Nos. 2012CB821804 and 2014CB845806, the Strategic Priority Research Program ``The Emergence of Cosmological Structure" of the Chinese Academy of Sciences (No. XDB09000000), the National Natural Science Foundation of China under Grants Nos. 11503001, 11373014 and 11073005, the Fundamental Research Funds for the Central Universities and Scientific Research Foundation of Beijing Normal University, China Postdoctoral Science Foundation under grant No. 2015T80052, and the Opening Project of Key Laboratory of Computational Astrophysics, National Astronomical Observatories, Chinese Academy of Sciences. 
Part of the research was conducted within the scope of the HECOLS International Associated Laboratory, supported in part by the Polish NCN grant DEC-2013/08/M/ST9/00664 - M.B. gratefully acknowledges this support. M.B. obtained approval of foreign talent introducing project in China and gained special fund support of foreign knowledge introducing project.
\section{Adjoint QCD and PNJL Models} We extend PNJL models to the case of $SU(3)$ gauge theory with $N_f=2$ Dirac fermions in the adjoint representation \cite{Nishimura:2009me}. For gauge bosons, the one-loop free energy in a background Polyakov loop is given by \begin{equation} V_{GL}=2\, Tr_{A}[\frac{1}{L}\int\frac{d^{3}k}{(2\pi)^{3}}\ln(1-Pe^{-L\Omega_{k}})] \end{equation} where we have inserted a mass parameter in $\Omega_{k}=\sqrt{k^{2}+M^{2}}$ for purely phenomenological reasons \cite{Meisinger:2001cq}. A small-$L$ expansion gives the correct one-loop energy independent of the mass parameter $M$, and by setting $M=596\,MeV$, the next-order term proportional to $M^2/L^2$ yields the correct deconfinement temperature for the pure gauge theory, with a value of $T_d\approx270\,MeV$. The fermionic part of the Lagrangian of our PNJL model is similar to that of the NJL model. The $L^{-1}=0$ contribution to the effective potential, $V_{F0}\left(g_S,\Lambda,m\right)$, is unchanged. The potential depends on a four-fermion coupling constant $g_S$, a noncovariant three-dimensional cutoff $\Lambda$, and a constituent mass $m=-2g_S\langle \bar{\psi}\psi\rangle$ with zero current mass. In the PNJL model, the $L$-dependent part of the one-loop fermionic contribution to the effective potential is \begin{equation} V_{FL}=-2N_fTr_{A}[\frac{1}{L}\int\frac{d^{3}k}{(2\pi)^{3}}\ln(1- Pe^{-L\omega_k})+h.c.] \end{equation} where $\omega_k=\sqrt{k^2+m^2}$. This term offsets the gluonic potential, maintaining $Z(N)$ symmetry provided that $mL$ is sufficiently small. \begin{figure} \includegraphics[width=.356\textheight]{PhaseDiagram1} \caption{The phase diagram for two-flavor adjoint QCD with periodic boundary conditions in the $L^{-1}$-$\kappa$ plane. 
C, D and S refer to the confined phase, deconfined phase, and skewed phase, respectively.} \end{figure} For the case of fundamental fermions, our model reproduces the known crossover phase transitions, similar to the original PNJL model of Fukushima \cite{Fukushima:2003fw}. For the case of adjoint fermions, we calibrate our model with antiperiodic boundary conditions to fix the parameters in $V_{F0}$. We take the ratio $m\left(L^{-1}=0\right)/\Lambda$ to be $0.1$, which is small enough for the cutoff theory to make sense, and set $\Lambda=23.22\,GeV$ so that the ratio of the chiral symmetry restoration temperature $T_c$ to the deconfinement temperature $T_d$ becomes $T_c/T_d \approx 7.8$, which is known from lattice simulations \cite{Karsch:1998qj,Engels:2005te}. We use the same value of $\Lambda$ for the case of periodic boundary conditions, assuming that boundary conditions do not change the cutoff scale of the theory. We minimize the total effective potential with respect to the two order parameters, the Polyakov loop $P$ and the constituent mass $m$, as a function of the dimensionless coupling constant $g_S\Lambda^2\equiv\kappa$. As a result, we obtain a phase diagram in the $L^{-1}$-$\kappa$ plane for the $SU(3)$ gauge theory with two adjoint Dirac fermions, as shown in Figure 1. This phase diagram is compatible with the lattice simulation by Cossu and D'Elia \cite{Cossu:2009sq}. However, the PNJL model shows that the small-$L$ and large-$L$ confined regions are connected for lower values of $\kappa$, which correspond to lower values of the constituent mass at $L^{-1}=0$. Only for very small values of $\kappa$ does chiral symmetry seem to be restored while $L^{-1}\ll\Lambda$, but this is difficult to resolve in our calculations. \section{Double-Trace Deformation and Higgs Mechanism} We now consider an SU(2) adjoint Higgs theory on $R^{3}\times S^{1}$. 
The conventional part of the Euclidean action is given by \begin{equation} S_{c}=\int d^{4}x\left[\frac{1}{4}\left(F_{\mu\nu}^{a}\right)^{2}+\frac{1}{2}\left(D_{\mu}\phi\right)^{T}\cdot D_{\mu}\phi+V\left(\phi\right)\right] \end{equation} where $V\left(\phi\right)=\frac{1}{2}m^{2}\phi^{2}+\frac{1}{4}\lambda\left(\phi^{2}\right)^{2}$ and $D_{\mu}\phi=\partial_{\mu}\phi-igA_{\mu}\phi$. In addition, we need a double-trace deformation term $V_{dt}$ in order to realize the confined phase for small $L$. Many forms of $V_{dt}$ may be used, but here we choose the form \begin{equation} V_{dt}=\frac{a}{2\pi^{2}L^{4}}\sum_{n=1}^{\infty}\frac{\left|Tr_{F}P^{n}\right|^{2}}{n^{2}}. \end{equation} The infinite series can be summed exactly, leading to an analytically tractable expression for the effective potential. This deformation leads to a second-order deconfinement transition at some critical value $a_c$. The action has two global symmetries: a $Z(2)_H$ symmetry in which $\phi \rightarrow -\phi$ and a $Z(2)_C$ center symmetry in which $P \rightarrow -P$. There are three distinct gauge-invariant order parameters associated with the symmetries: $\left\langle Tr_{F}P\left(x\right)\right\rangle $, which transforms non-trivially under $Z(2)_C$; $\left\langle Tr_{F}\left[P^2\left(x\right)\phi(x)\right]\right\rangle $, which transforms non-trivially under $Z(2)_H$; and $\left\langle Tr_{F}\left[P\left(x\right)\phi(x)\right]\right\rangle $, which transforms non-trivially under both groups. \begin{figure} \includegraphics[height=.21\textheight]{PhaseDiagram2} \caption{The phase diagram for the double-trace deformation theory with adjoint scalars in the $a$--$m^2$ plane, with the topological objects identified in each phase. 
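For $SU(2)$ with $P=\exp(i\theta\sigma_3)$ one has $Tr_{F}P^{n}=2\cos(n\theta)$, and the deformation series indeed sums in closed form via the standard Fourier series $\sum_{n\ge1}\cos(nx)/n^{2}=\pi^{2}/6-\pi x/2+x^{2}/4$ on $0\le x\le 2\pi$. A quick numerical check of this (with the overall factor $a/2\pi^{2}L^{4}$ dropped):

```python
import math

def v_dt_partial(theta, n_terms=100000):
    """Partial sum of the deformation series for SU(2), where
    Tr_F P^n = 2 cos(n theta); the prefactor a/(2 pi^2 L^4) is dropped."""
    return sum((2.0 * math.cos(n * theta)) ** 2 / n**2
               for n in range(1, n_terms + 1))

def v_dt_closed(theta):
    """Closed form on 0 <= theta <= pi, from the Fourier series
    sum cos(n x)/n^2 = pi^2/6 - pi x/2 + x^2/4 with x = 2 theta:
    sum |2 cos(n theta)|^2 / n^2 = 2 pi^2/3 - 2 pi theta + 2 theta^2."""
    return 2.0 * math.pi**2 / 3.0 - 2.0 * math.pi * theta + 2.0 * theta**2
```

The closed form $2\pi^{2}/3-2\pi\theta+2\theta^{2}$ is minimized at $\theta=\pi/2$, i.e., at $Tr_{F}P=0$, which is why a sufficiently large deformation coefficient $a$ drives the system into the $Z(2)_C$-symmetric confined phase.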
The numbers are the values of the gauge-invariant order parameters, $\left( \left\langle Tr_{F}P\left(x\right)\right\rangle ,\left\langle Tr_{F}\left[P^{2}\left(x\right)\phi(x)\right]\right\rangle ,\left\langle Tr_{F}\left[P\left(x\right)\phi(x)\right]\right\rangle \right)$.} \end{figure} We use the following simple approximation to describe the overall phase structure: we include all tree-level contributions to the effective potential, including the term $V_{dt}$, and the leading $L^{-1}$ term from one loop. Thus our approximate form for the effective potential is simply \begin{equation} V_{eff}\simeq Tr\left[A_{4},\phi\right]^{2}+V\left(\phi\right)+\left[V_{GL}+V_{\phi L}+V_{dt}\right]\left(P\right) \end{equation} where $V_{\phi L}$ is the one-loop potential for $\phi$, which is half of $V_{GL}$ because the scalar is spinless. Note that the first term in $V_{eff}$ does nothing but force $A_4$ and $\phi$ to lie in the same direction in the $SU(2)$ Lie algebra. We choose a gauge where the Polyakov loop lies in the $3$-direction of $SU(2)$, independent of $\vec{x}$, so we can write $P=\exp\left(i\theta\sigma_3\right)$ and $\phi=v\sigma_{3}$, where $\sigma_i$ are the Pauli matrices. Minimization of the effective potential reduces to minimizing two separate functions of a single variable: $V\left(\phi\right)$ and $V_{GL}\left(P\right)+V_{\phi L}\left(P\right)+V_{dt}\left(P\right)$. The expectation values $\theta$ and $v$ are not themselves gauge-invariant, but they can be used reliably to calculate the gauge-invariant order parameters mentioned above for small $L$. In this simplified approximation, $v$ is zero for $m^{2}>0$ and non-zero for $m^{2}<0$. On the other hand, for $a>a_c$, $\theta$ is $\pi/2$ and the $Z(2)_C$-symmetric, confined phase is favored. For $a<a_c$, $\theta \neq \pi/2$, and $Z(2)_C$ is spontaneously broken. 
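As a cross-check of the one-loop ingredients $V_{GL}$ and $V_{\phi L}$ entering this minimization, the sketch below evaluates the one-loop free energy of a single massless bosonic degree of freedom with Polyakov-loop phase $\phi$ (units $L=1$) and compares it with the series obtained by expanding the logarithm. This is a simplified single-phase, $M=0$ check, not the full trace over adjoint eigenvalues and polarizations used in the text:

```python
import math

def v_boson_numeric(phi, L=1.0, kmax=60.0, n=20000):
    """(1/L) int d^3k/(2 pi)^3 [ln(1 - e^{i phi} e^{-L k}) + h.c.] at M = 0,
    computed by Simpson integration of
    k^2 ln(1 - 2 cos(phi) e^{-L k} + e^{-2 L k}) / (2 pi^2 L)."""
    h = kmax / n
    def f(k):
        x = math.exp(-L * k)
        return k * k * math.log(1.0 - 2.0 * math.cos(phi) * x + x * x)
    s = f(0.0) + f(kmax)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(i * h)
    return s * h / 3.0 / (2.0 * math.pi**2 * L)

def v_boson_series(phi, L=1.0, nmax=400):
    """Series form from expanding ln(1-x): -(2 / pi^2 L^4) sum cos(n phi)/n^4."""
    return -2.0 / (math.pi**2 * L**4) * sum(
        math.cos(k * phi) / k**4 for k in range(1, nmax + 1))
```

At the $Z(2)$-symmetric point $\phi=\pi$ both expressions give $7\pi^{2}/(360\,L^{4})$, which provides a convenient analytic anchor for the numerics.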
Neglected terms in the full one-loop potential couple $\phi$ and $P$ directly and shift the phase diagram somewhat from the predictions of this simple approximation. However, they do not change the overall nature of the phase diagram. We summarize our results in the phase diagram shown in Figure 2. There are four possible spontaneous symmetry breaking patterns of $Z(2)_C\times Z(2)_H$. Thus, with the confining phase corresponding to unbroken $Z(2)_C\times Z(2)_H$, there seem to be five possible phases. However, the order parameters do not allow a phase where both the Higgs mechanism and confinement hold, characterized by $\left( \left\langle Tr_{F}P\left(x\right)\right\rangle ,\left\langle Tr_{F}\left[P^{2}\left(x\right)\phi(x)\right]\right\rangle ,\left\langle Tr_{F}\left[P\left(x\right)\phi(x)\right]\right\rangle \right)=(0,\neq0,0)$ with only a $Z(2)_C$ symmetry unbroken. This is consistent with the general results of 't Hooft \cite{'tHooft:1979uj}. On the other hand, there is a phase where $Z(2)_C\times Z(2)_H$ spontaneously breaks to $Z(2)$. We call this phase a mixed confined phase because symmetry breaking is realized by a linear combination of $\phi$ and $A_4$. This phase contains topological objects: mixed BPS monopoles and mixed Kaluza-Klein (KK) monopoles, whose actions are $4\pi/g^{2}\sqrt{4\theta^{2}+g^{2}L^{2}v^{2}}$ and $4\pi/g^{2}\sqrt{4\left(\pi-\theta\right)^{2}+g^{2}L^{2}v^{2}}$, respectively. If $Z(2)_H$ is restored, then $v=0$, and they reduce to the actions of the usual BPS and KK monopoles, which are the constituents of calorons \cite{Lee:1997vp,Kraan:1998pm}. \section{Conclusions} We have presented some new results for the phase diagrams of QCD-like theories on $R^3\times S^1$. For $SU(3)$ adjoint QCD with two Dirac fermions with periodic boundary conditions, we have extended a Polyakov--Nambu--Jona-Lasinio model, which incorporates both chiral symmetry breaking and confinement, to the case of adjoint fermions. 
The phase diagram in this model is compatible with the lattice simulation by Cossu and D'Elia. The large-$L$ and small-$L$ confined regions are connected in the phase diagram for a sufficiently small constituent mass. For $SU(2)$ double-trace deformation theories with adjoint scalar fields, we have shown that, according to the gauge-invariant order parameters, there is no phase where small-$L$ confinement and the Higgs mechanism take place simultaneously. We have found a new mixed confined phase, where $Z(2)_C\times Z(2)_H \rightarrow Z(2)$ is realized by two Higgs fields, $\phi$ and $A_4$. We have also constructed monopole solutions in the mixed confined phase using a linear combination of $\phi$ and $A_4$. \begin{theacknowledgments} The authors thank the U.S. Department of Energy for financial support. \end{theacknowledgments} \bibliographystyle{aipproc}
\section{Introduction} Relation classification is the task of selecting the relation class that describes the relation between two nominals (e1, e2) in a given text. For instance, given the following sentence, ``The \textless e1\textgreater \textbf{phone}\textless /e1\textgreater{} went into the \textless e2\textgreater \textbf{washer}\textless /e2\textgreater{}.'', where \textless e1\textgreater{}, \textless /e1\textgreater{}, \textless e2\textgreater{}, \textless /e2\textgreater{} are position indicators that represent the starting and ending positions of nominals, the goal is to find the actual relation \textit{Entity-Destination} of \textbf{phone} and \textbf{washer}. The task is important because the results can be utilized in other Natural Language Processing (NLP) applications like question answering and information retrieval. Recently, Neural Network (NN) approaches to relation classification have been spotlighted since they do not need any handcrafted features and yet obtain better performance than traditional models. Such NNs can be simply classified into CNN-based and RNN-based models, and they capture slightly different features to predict a relation class. In general, CNN-based models can only capture local features while RNN-based models are expected to capture global features as well, yet CNN-based models outperform RNN-based ones. This is likely because most relation-related terms are not scattered but concentrated in short expressions within a sentence, and, although RNNs are expected to learn such information automatically, in practice they struggle to do so. To overcome this limitation of RNNs, most recent work using RNNs has used additional linguistic information such as the Shortest Dependency Path (SDP), which can reduce the effect of noise words when predicting a relation. 
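The observation that relation-related terms cluster in short expressions can be exploited directly by restricting which word ranges a recurrent encoder sees. Below is one plausible (hypothetical) segmentation scheme in Python: one restricted range around each nominal and one spanning the two nominals. The function and its window parameter are our illustrative assumptions, not the exact restriction standards of any particular model.

```python
def range_restricted_segments(tokens, e1_idx, e2_idx, window=3):
    """Hypothetical range restriction: one context window around each
    nominal, plus a relation segment spanning the two nominals
    (inclusive).  Actual restriction standards may differ; this only
    illustrates the mechanism of feeding different ranges to different
    encoders."""
    lo, hi = min(e1_idx, e2_idx), max(e1_idx, e2_idx)
    seg_e1 = tokens[max(0, e1_idx - window): e1_idx + window + 1]
    seg_e2 = tokens[max(0, e2_idx - window): e2_idx + window + 1]
    seg_rel = tokens[lo: hi + 1]
    return seg_e1, seg_e2, seg_rel

# Example from the introduction: nominals "phone" (index 1) and "washer" (index 5)
tokens = ["The", "phone", "went", "into", "the", "washer", "."]
s1, s2, rel = range_restricted_segments(tokens, 1, 5)
```

Here the relation segment covers exactly the span ``phone went into the washer'', i.e., the short expression that carries the \textit{Entity-Destination} evidence, while noise outside the restricted ranges is ignored.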
In this paper, we propose a simple RNN-based model that strongly attends to nominal-related and relation-related parts by means of multiple range-restricted RNN variants, namely Gated Recurrent Units (GRUs) \cite{Cho:14}, combined with attention. On the SemEval-2010 Task 8 dataset \cite{Hendrickx:09}, our model with only pretrained word embeddings achieved an F1 score of 84.3\%{}, which is comparable to the state-of-the-art CNN-based and RNN-based models that use additional linguistic resources such as Part-Of-Speech (POS) tags, WordNet and SDP. Our contributions are summarized as follows: \begin{itemize} \item For relation classification, without any additional linguistic information, we propose modeling the nominals and the relation in a sentence using RNNs with specified range-restriction standards and attention. \item We show how effective it is for relation classification to abstract the nominal parts, the relation part, and both separately under these restrictions. \end{itemize} \section{Related Work} Traditional approaches to relation classification find important features of relations with various linguistic processors and use them to train classifiers. For instance, \newcite{Rink:10} uses NLP tools to extract linguistic features and trains an SVM model with the features. Recently, many deep learning approaches have been proposed. \newcite{Zeng:14} proposes a model based on CNNs to automatically learn important N-gram features. \newcite{Santos:15} proposes a ranking loss function to better distinguish the real classes from the \textit{Other} class. To capture long-distance patterns, RNN-based approaches, usually built on Long Short-Term Memory (LSTM), have also appeared, one of which is \newcite{Zhang:15}. That model simply consumes all words in a sentence and then captures the important ones through a max-pooling operation. \newcite{Xu:15} and \newcite{Miwa:16} propose other RNN models that use the SDP to ignore noise words in a sentence.
In addition, \newcite{Liu:15} and \newcite{Cai:16} propose hybrid models of RNNs and CNNs. One of the works most closely related to ours is the attention-based bidirectional LSTM (att-BLSTM) \cite{Zhou:16}. The model uses bidirectional LSTM and attention techniques to abstract important parts. However, the att-BLSTM does not distinguish the roles of the parts within a sentence, which prevents more sensitive attention. Another closely related work is that of \newcite{Zheng:16}. They try to capture nominal-related and relation-related patterns with CNNs, but use neither range restrictions nor an attention mechanism. \section{The Proposed Model} Figure \ref{fig:architecture} shows the architecture of the proposed model, which is described in the following subsections. \subsection{Word Embeddings} Our model first takes word embeddings to represent a sentence at the word level. Given a sentence \(S\) consisting of \(N\) words, it can be represented as \(S = \{w_1, w_2, w_3, ..., w_N\}\). We convert each one-hot vector \(w_t\) by multiplying it with the word embedding matrix \( W_{e} \in \mathbb{R}^{d_{e} \times |V|}\): \begin{equation} e_t = W_{e} w_t . \end{equation} The sentence can then be represented as \(S_e = \{e_1, e_2,..., e_N\}\). \begin{figure*}[t] \centering \noindent \includegraphics[width=\linewidth]{Figure1_overall_architecture4.jpg} \caption{Multiple Range-Restricted Bidirectional GRUs with Attention (\(k = 3\))} \label{fig:architecture} \end{figure*} \subsection{Range-Restricted Bidirectional GRUs} To capture the information of the two nominals and the relation, our model consists of three bidirectional GRU layers with range restrictions. A GRU is an RNN variant that, like the LSTM, alleviates the gradient-vanishing problem, but it has fewer weights than the LSTM.
In a GRU, the \(t\)-th hidden state \(h_t\), with reset gate \(r_t\) and update gate \(z_t\), is computed as: \begin{flalign} & r_t = \sigma \big(W_r e_t + U_r h_{t-1}\big) , \\ & z_t = \sigma \big(W_z e_t + U_z h_{t-1}\big), \\ & \tilde{h}_t = \tanh \big(W e_t + U (r_t \odot h_{t-1})\big) , \\ & h_t = z_t \odot h_{t-1} + (1-z_t) \odot \tilde{h}_t , \end{flalign} where \(\sigma\) is the logistic sigmoid function. The range restrictions are implemented with masking techniques that restrict the input range of the three bidirectional GRUs. In principle they follow three separate standards, but because the standards for the two nominals are identical, we introduce two kinds of standards. First, to capture the information of each nominal, only the words at positions \( p_{en} \pm k \) are fed to the corresponding bidirectional GRU layer, where \( p_{en}\) is the position of nominal e1 or e2 and \( k \) is a hyperparameter controlling the window size. Second, for the relation GRU layer, the input range is set to \([p_{e1} , p_{e2}]\) or \([p_{e2} , p_{e1}]\) according to the relative order of the nominals in the sentence; that is, the range runs from the nominal that appears first to the nominal that appears second. After the word-level sentence representation \(S_e\) is fed into the six GRU layers (three GRU layers in two directions) under these restrictions, hidden units are generated by each layer. For convenience in the next subsection, we denote the hidden units of the GRU layers by \(\overrightarrow{H}_{e1}, \overleftarrow{H}_{e1}, \overrightarrow{H}_{e2}, \overleftarrow{H}_{e2}, \overrightarrow{H}_{rel}, \overleftarrow{H}_{rel}\). \subsection{Sentence-level Representation} Among the hidden units of the six range-restricted GRUs, the model selects important parts by direct selection from the hidden layers and by the attention mechanism.
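The GRU update and the two masking standards above can be sketched as follows (an illustrative NumPy sketch; the function names `gru_step` and `range_mask` are ours for exposition, not part of the actual Theano implementation):

```python
import numpy as np

def sigmoid(x):
    # Logistic sigmoid, the gate nonlinearity of the reset and update gates
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(e_t, h_prev, W_r, U_r, W_z, U_z, W, U):
    """One GRU step following the equations above (note this paper's
    convention h_t = z_t * h_{t-1} + (1 - z_t) * h~_t)."""
    r = sigmoid(W_r @ e_t + U_r @ h_prev)          # reset gate
    z = sigmoid(W_z @ e_t + U_z @ h_prev)          # update gate
    h_tilde = np.tanh(W @ e_t + U @ (r * h_prev))  # candidate state
    return z * h_prev + (1.0 - z) * h_tilde

def range_mask(n_words, p_e1, p_e2, k):
    """Boolean input masks for the three bidirectional GRU layers:
    nominal layers see positions p_en +/- k, the relation layer sees
    the span between the two nominals (inclusive)."""
    idx = np.arange(n_words)
    m_e1 = np.abs(idx - p_e1) <= k
    m_e2 = np.abs(idx - p_e2) <= k
    lo, hi = min(p_e1, p_e2), max(p_e1, p_e2)
    m_rel = (idx >= lo) & (idx <= hi)
    return m_e1, m_e2, m_rel
```

Masked positions are simply skipped (or zeroed) when the sentence is fed through each layer.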
To extract e1 and e2 information, we propose to directly select the hidden units at each nominal position in the e1 and e2 bidirectional GRUs, and to sum them to construct \(v_{e1}\), \(v_{e2} \in \mathbb{R}^{d_{h}}\), respectively, as: \begin{flalign} & v_{e1} = \overrightarrow{h}_{e1} + \overleftarrow{h}_{e1} , \\ & v_{e2} = \overrightarrow{h}_{e2} + \overleftarrow{h}_{e2} , \end{flalign} where each directional \( h_{en} \) denotes the hidden units at the \(en\) position in the directional \(H_{en}\). To abstract relation information, we adopt the attention mechanism, which has been widely used in many areas \cite{Bahdanau:14,Hermann:15,Chorowski:15,Xu:15att}. We use the attention mechanism of \cite{Zhou:16}, but apply it to each directional GRU layer independently to capture more informative parts with greater flexibility. The forward directional relation-abstracted vector \( \overrightarrow{v}_{rel} \) is computed as follows (\( \overleftarrow{v}_{rel} \) is computed in the same way): \begin{flalign} & \overrightarrow{M} = \tanh( \overrightarrow{H}_{rel} ) ,\\ & \overrightarrow{\alpha} = \mathrm{softmax}( \overrightarrow{w}_{att}^T \overrightarrow{M} ) ,\\ & \overrightarrow{v}_{rel} = \overrightarrow{H}_{rel} \overrightarrow{\alpha}^T , \end{flalign} where \(\overrightarrow{w}_{att}\) is a trained attention vector for the forward layer. We then sum \( \overrightarrow{v}_{rel} \) and \( \overleftarrow{v}_{rel} \) to form the relation-abstracted vector \(v_{rel} \in \mathbb{R}^{d_{h}}\): \begin{equation} v_{rel} = \overrightarrow{v}_{rel} + \overleftarrow{v}_{rel} . \end{equation} Lastly, the final representation \(v_{fin} \in \mathbb{R}^{3d_{h}}\) is constructed by concatenating them: \begin{equation} v_{fin} = v_{e1} \oplus v_{rel} \oplus v_{e2} , \end{equation} where \(\oplus\) is a concatenation operator.
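The per-direction attention pooling above can be sketched as follows (an illustrative NumPy sketch; `attentive_pool` is a hypothetical helper name):

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a vector of scores."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_pool(H, w_att):
    """Attention over one directional relation GRU layer: H has shape
    (d_h, T) with hidden units as columns, w_att is the trained attention
    vector of length d_h; returns a relation-abstracted vector of length d_h."""
    M = np.tanh(H)
    alpha = softmax(w_att @ M)   # length-T attention weights summing to 1
    return H @ alpha             # weighted sum of hidden units

# The bidirectional relation vector is then the sum of the two directions:
# v_rel = attentive_pool(H_fwd, w_fwd) + attentive_pool(H_bwd, w_bwd)
```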
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}m{#1}} \begin{table*}[t] \centering \begin{tabulary}{\textwidth}{ |C{5.6cm}|L{8.3cm}|C{0.7cm}| } \hline \textbf{Model} & \textbf{Additional Features (Except Word Embeddings)} & \textbf{F1} \\ \hline SDP-LSTM \newline \cite{Xu:15} & - POS, WordNet, dependency parse, grammar relation & 83.7 \\ \hline DepNN \newline \cite{Liu:15} & - NER, dependency parse & 83.6 \\ \hline SPTree \newline \cite{Miwa:16} & - POS, dependency parse & 84.4 \\ \hline MixCNN+CNN \newline \cite{Zheng:16} & - None & \textbf{84.8} \\ \hline att-BLSTM \newline \cite{Zhou:16} & - None & 84.0 \\ \hline Our Model (att-BGRU) \newline Our Model (Relation only) \newline Our Model (Nominals only) \newline Our Model (Nominals and Relation) & - None \newline - None \newline - None \newline - None & 82.9 \newline 83.0 \newline 81.4 \newline \textbf{84.3} \\ \hline \end{tabulary} \caption{Comparison with the results of the state-of-the-art models} \label{table:perf} \end{table*} \subsection{Classification} Our model uses scores of how similar the \(v_{fin}\) is to each class embedding to predict the actual relation \cite{Santos:15}. Concretely, we propose a feed-forward layer in which a weight matrix \(W_{c} \in \mathbb{R}^{classes \times 3d_{h}} \) and a bias vector \(b_c \in \mathbb{R}^{classes}\) can be regarded as a set of the class embeddings. In other words, the inner-product of each row vector in \(W_{c}\) with \(v_{fin}\) represents the similarity between them in vector space, so the class score vector \(s_c \in \mathbb{R}^{classes} \) is just computed as: \begin{equation} s_c = W_{c} v_{fin} + b_{c} . 
\end{equation} Then, the model chooses the max-valued index as the most probable class label \(\hat{c}\), unless every value in \(s_c\) is negative; in that exceptional case, \(\hat{c}\) is chosen as \textit{Other} \cite{Santos:15}. \subsection{Training Objectives} We adopt the ranking loss function \cite{Santos:15} to train the networks. Let \(s_{cy^+}\) be the score of the correct class \(y^+\), and \(s_{cc^-}\) the competitive score, i.e., the best score excluding \(s_{cy^+}\). Then, the loss is computed as: \begin{multline} L = \log(1 + \exp(\gamma(m^+ - s_{cy^+})))\\ + \log(1 + \exp(\gamma(m^- + s_{cc^-}))) , \end{multline} where \(m^+\) and \(m^-\) represent margins and \(\gamma\) is a factor that magnifies the gap between the score and the margin. \section{Experiments} For the experiments, we implement our model in Python using Theano \cite{theano:16} and use the model with the following settings. \subsection{Datasets and Settings} We conduct the experiments with the SemEval-2010 Task 8 dataset \cite{Hendrickx:09}, which contains 8,000 sentences as the training dataset, and 2,717 sentences as the test dataset. A sentence consists of two nominals (e1, e2), and a relation between them. Ten relation types are considered: Nine specific types (\textit{Cause-Effect, Component-Whole, Content-Container, Entity-Destination, Entity-Origin, Instrument-Agency, Member-Collection, Message-Topic and Product-Producer}), and the \textit{Other} class. The specific types have directionality, so a total of \(2\times9+1 = 19\) relation classes exist. We use 10-fold cross-validation to tune the hyperparameters. We adopt the 100-dimensional word vectors trained by \newcite{Pennington:14} as initial word embeddings and select a hidden layer dimension \(d_h\) of 100, a learning rate of 1.0 and a batch size of 10. AdaDelta \cite{Zeiler:12} is used as the learning optimizer.
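The prediction rule and the ranking loss above can be sketched as follows (an illustrative Python sketch; the margin and scaling values \(m^+ = 2.5\), \(m^- = 0.5\), \(\gamma = 2.0\) are those adopted in our settings):

```python
import math

def predict(scores):
    """Choose the max-scoring class label; if every score in s_c is
    negative, fall back to the Other class."""
    if max(scores) < 0:
        return "Other"
    return max(range(len(scores)), key=scores.__getitem__)

def ranking_loss(s_gold, s_best_competitor, m_pos=2.5, m_neg=0.5, gamma=2.0):
    """Ranking loss L: drives the gold-class score above m_pos and the
    best competing score below -m_neg."""
    return (math.log1p(math.exp(gamma * (m_pos - s_gold)))
            + math.log1p(math.exp(gamma * (m_neg + s_best_competitor))))
```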
Also, we apply dropout \cite{Hinton:12} to the word embeddings, the GRU hidden units, and the feed-forward layer with dropout rates of 0.3, 0.3 and 0.7, respectively, and use \(k = 3\). We adopt the position indicator scheme that regards \textless{}e1\textgreater, \textless{}/e1\textgreater, \textless{}e2\textgreater{} and \textless{}/e2\textgreater{} as single words \cite{Zhang:15}. We set \(m^+\), \(m^-\) and \(\gamma\) to 2.5, 0.5 and 2.0, respectively \cite{Santos:15}, and adopt L2 regularization with a factor of \(10^{-5}\). The official scorer is used to evaluate our model with the macro-averaged F1 (excluding \textit{Other}). \subsection{Results} In Table \ref{table:perf}, our results are compared with those of the other state-of-the-art models. Our model with only pretrained word embeddings achieved an F1 score of 84.3\%, which is comparable to the state-of-the-art models. Furthermore, we investigated the effects of extracting the relation, the nominals and both of them. Attention-based bidirectional GRUs with no restriction (att-BGRU) were also tested as a reimplementation of the att-BLSTM. Our finding here is that the restricted version of the att-BGRU (the relation-only model) is not significantly better, but by abstracting the nominals together, the model achieves a higher F1 score. This indicates that even if the ranges slightly overlap, they capture distinct features and improve the performance. \section{Conclusion} This paper proposed a novel model based on multiple range-restricted RNNs with attention. The proposed model achieved a performance comparable to the state-of-the-art models without any additional linguistic information.
\section{Introduction} Stochastic Resonance (SR) is a resonant phenomenon triggered by noise which can be described as a noise-enhanced signal transmission that occurs in certain non-linear systems~\cite{GHJM98}. It reveals a context where noise ceases to be a nuisance and is turned into a benefit. Loosely speaking one says that a (non-linear) system exhibits SR whenever noise {\it benefits} the system \cite{K06,PK09}. Qualitatively, the signature of an SR benefit is an inverted-U curve behaviour of the physical variable of interest as a function of the noise strength. It can take place in systems where the noise helps detecting faint signals. For example, consider a \emph{threshold detection} of a binary-encoded analog signal such that the threshold is set higher than the two signal values. If there is no noise, then the detector does not recover any information about the encoded signals since they are sub-threshold, and the same occurs if there is too much noise because it will wash out the signal. Thus, there is an optimal amount of noise that will result in maximum performance according to some measure such as signal-to-noise ratio, mutual information, or probability of success. Recently the idea that noise can sometimes play a constructive role like in SR has started to penetrate the quantum information field too. In quantum communication, this possibility has been put forward in Refs.~\cite{Ting,BM,RGK05} and more recently in Refs.~\cite{W09,WK09,QKD,CHP10}. In this setting it has been shown that information theoretic quantities may ``resonate'' at maximum value for a nonzero level of noise added intentionally. It is then important to determine general criteria for the occurrence of such phenomenon in quantum information protocols. Continuous variable quantum systems are usually confined to Gaussian states and processes, and SR effects are not expected in any linear processing of such systems. 
However, information is often available in digital (discrete) form, and therefore it must be subject to ``de-quantization'' at the input of a continuous Gaussian channel and ``quantization'' at the output~\cite{SJ67-GN98}. These processes are usually involved in the conversion of digital to analog signals and vice versa. Since these mappings are few-to-many and many-to-few, they are inherently non-linear, and similar to the threshold detection described above. We can thus expect the occurrence of the SR effect in this case. The simplest model representing such a situation is one in which a binary variable is encoded into the positive and negative parts of a real continuous alphabet and subsequently decoded by a threshold detection~\cite{KM03}. In some cases, one may not always have the freedom in choosing the threshold, and in such cases it becomes relevant to know that SR can take place. This may happen in {\it homodyne detection} if the square of the average signal times the overall detection efficiency (which accounts for the detector's efficiency, the fraction of the field being measured, etc.) is below the vacuum noise strength \cite{WM93}. It is also the case in discrimination between lossy channels, where the unknown transmissivities together with a faint signal make it unlikely to optimally choose the threshold value. In this paper, we consider a bit encoded into squeezed-coherent states with different amplitudes that are subsequently sent through a Gaussian quantum channel (specifically, a lossy bosonic channel \cite{EW07}). At the output, the states are subjected to threshold measurement of their amplitude. In addition to such a setting, we consider one involving entanglement shared by a sender and receiver as well as one involving quantum channel discrimination. Finally, we also consider the SR effect in quantum communication as well as in private communication. For all of these settings, we determine conditions for the occurrence of the SR effect. 
These appear as {\it forbidden intervals} for the threshold detection values. A ``forbidden interval'' (or region) is a range of threshold values for which the SR effect does not occur. We can illustrate this point by appealing again to the example of threshold detection of a binary-encoded analog signal. Suppose that the signal values are $A$ or $-A$ where $A>0$. Then if the threshold value $\theta$ is smaller in magnitude than the signal values, so that $ |\theta| \leq |A|$, the SR effect does not occur---adding noise to the signal will only decrease the performance of the system. In the other case where $ |\theta| > |A|$, adding noise can only increase performance because the signals are indistinguishable when no noise is present. As we said before, adding too much noise will wash out the signals, so that there must be some optimal noise level in this latter case. Our results extend those of Refs.~\cite{KM03,WK09} to other schemes. Remarkably, in the private communication scheme, the width of the forbidden interval can vanish depending on whether the sender or the receiver adds the noise. This means that in the former case the noise is always beneficial. \section{Stochastic Resonance in Classical Communication}\label{sec:unassisted-comm} Let us consider a lossy bosonic quantum channel with transmissivity $\eta\in(0,1)$ \cite{HW01,EW07}. Our aim is to evaluate the probability of successful decoding, considered as a performance measure, when sending classical information through such a channel. We consider an encoding of the following kind. Let us suppose that the sender uses as input a displaced and squeezed vacuum. 
Working in the Heisenberg picture, the input variable of the communication setup is expressed by the operator: \begin{equation}\label{eq:qmessage1} \hat{q}e^{-r}-\alpha_{q}(-1)^{X}, \end{equation} encoding a bit value $X\in\{0,1\}$, where $\hat{q}$ is the position quadrature operator, $\alpha_{q}\in\mathbbm{R}_{+}$ is the displacement amplitude, and $r \geqslant 0$ is the squeezing parameter \cite{G05}. Under the action of a lossy bosonic channel \cite{HW01} with transmissivity $\eta$ the input variable transforms as follows: \begin{equation}\label{lossych} \sqrt{\eta} \left( \hat{q}e^{-r} - \alpha_{q}(-1)^{X} \right) + \sqrt{1-\eta} \, \hat{q}_{E} \, , \end{equation} where $\hat{q}_{E}$ is the position quadrature operator of an environment mode (assumed to be in the vacuum state for the sake of simplicity). At the receiver's end, let us consider the possibility of adding a random, Gaussian-distributed displacement $\nu_{q}\in\mathbbm{R}$, with zero mean and variance $\sigma^2/2$, to the arriving state. Then, the output observable becomes as follows: \begin{equation}\label{noiseB} \sqrt{\eta} \left( \hat{q}e^{-r} - \alpha_{q}(-1)^{X} \right) + \sqrt{1-\eta} \, \hat{q}_{E} + \nu_{q} \, . \end{equation} Notice that we could just as well consider the addition of noise at the sender's end. In that case, the last term $\nu_q$ of (\ref{noiseB}) would appear with a factor $\sqrt{\eta}$ in front. Upon measurement of the position quadrature operator, the following signal value $S_{X}$ results \begin{equation}\label{signal1} S_{X}=\sqrt{\eta} \left( qe^{-r} - \alpha_{q}(-1)^{X} \right) + \sqrt{1-\eta} \, q_{E} + \nu_{q} \, . \end{equation} Following Ref.~\cite{WK09}, we define a random variable summing up all noise terms: \begin{equation}\label{eq:all-noise1} N \equiv \sqrt{\eta} \, q e^{-r} + \sqrt{1-\eta}\, q_{E} + \nu_{q} \, . 
\end{equation} Its probability density $P_N$ is the convolution of the probability densities of the random variables $qe^{-r}$, $\nu_{q}$ and $q_{E}$, these being independent of each other. Moreover, they are distributed according to Gaussian (normal) distribution, and so $P_N$ reads as \begin{equation}\label{eq:noise-density1} P_{N} = {\mathcal{N}}(0,\eta e^{-2r}/2) \circ {\mathcal{N}}(0,\left( 1-\eta\right)/2) \circ {\mathcal{N}}(0,\sigma^{2}/2) \, , \end{equation} where $\circ$ denotes convolution, and \[ {\mathcal{N}}\left( \mu,K^{2}\right) =\frac{1}{\sqrt{2\pi K^{2}}} \exp\left[ \frac{-\left( x-\mu\right)^2}{2K^{2}} \right] \] denotes the normal distribution (as function of $x$) with mean $\mu$ and variance $K^{2}$. Notice that the noise term (\ref{eq:all-noise1}) does not depend on the encoded value $X$ and neither does its probability density. From (\ref{eq:noise-density1}) we explicitly get \begin{equation}\label{pX1} P_{N} = {\mathcal{N}}(0,(1 - \eta + \eta e^{-2r} +\sigma^{2} )/2) \, . \end{equation} The output signal (\ref{signal1}) can now be written as \[ S_{X} = N - \sqrt{\eta} \, \alpha_{q}(-1)^{X} \, . \] The receiver then thresholds the measurement result with a threshold $\theta \in \mathbbm{R}$ to retrieve a random bit $Y$ where \begin{equation}\label{Y1} Y\equiv H\left( N - \sqrt{\eta} \, \alpha_{q}(-1)^{X} - \theta \right) \, , \end{equation} and $H$ is the Heaviside step function defined as $H\left( x\right) =1$ if $x \geqslant 0$ and $H\left( x\right) =0$ if $x<0$. To evaluate the probability of successful decoding, we compute the conditional probabilities \begin{eqnarray} P_{Y|X}(0|0) & = & \int_{-\infty}^{+\infty}\left[ 1 - H\left( n - \sqrt{\eta} \, \alpha_{q} - \theta \right) \right] P_{N}\left( n\right) \;dn\nonumber\\ & = & 1-P_{Y|X}(1|0) \, , \\ P_{Y|X}(1|1) & = & \int_{-\infty}^{+\infty}H\left( n + \sqrt{\eta} \, \alpha_{q} - \theta\right) P_{N}\left( n\right) \;dn\nonumber\\ & = & 1-P_{Y|X}(0|1) \, . 
\end{eqnarray} Using (\ref{pX1}), we find \begin{eqnarray} P_{Y|X}(0|0) & = & \frac{1}{2} + \frac{1}{2} \mathrm{erf}\left[ \frac{ \theta + \sqrt{\eta} \, \alpha_{q} }{\sqrt{ 1 - \eta + \eta e^{-2r} + \sigma^{2} }}\right] , \label{P00}\\ P_{Y|X}(1|1) & = & \frac{1}{2} - \frac{1}{2} \mathrm{erf}\left[ \frac{ \theta - \sqrt{\eta} \, \alpha_{q} }{\sqrt{ 1 - \eta + \eta e^{-2r} + \sigma^{2} }}\right] , \label{P11} \end{eqnarray} where $\mathrm{erf}\left(z\right)$ denotes the error function: \[ \mathrm{erf}\left(z\right) \equiv\frac{2}{\sqrt{\pi}}\int_{0}^{z} \exp\left\{ -x^{2}\right\} \ \mbox{d}x. \] This situation is identical to the one treated in Ref.~\cite{WK09}, and the forbidden interval can be determined in a simple way by looking at the probability of successful decoding (note that others have also considered the probability-of-success, or error, criterion~\cite{RAC06,PK09a}). The probability of success is defined as \begin{equation}\label{Pedef} {P}_{s}\equiv P_{X}(0)P_{Y|X}(0|0)+P_{X}(1)P_{Y|X}(1|1) \, . \end{equation} Setting $P_{X}(0)=\wp$ and $P_{X}(1)=(1-\wp)$, the probability of success is as follows: \begin{equation} \fl {P}_{s} = \frac{1}{2} + \frac{1}{2}\wp\ \mathrm{erf}\left( \frac{ \theta + \sqrt{\eta} \, \alpha_{q} }{\sqrt{ 1 - \eta + \eta e^{-2r} + \sigma^{2} }}\right) -\frac{1}{2}(1-\wp)\ \mathrm{erf}\left( \frac{ \theta - \sqrt{\eta} \, \alpha_{q} }{\sqrt{ 1 - \eta + \eta e^{-2r} + \sigma^{2} }}\right). \label{pe1} \end{equation} Our goal is to study the dependence of the success probability on the noise variance $\sigma^2/2$. 
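The form of $P_N$ in (\ref{pX1}) relies on the fact that the convolution of independent zero-mean Gaussians is again a Gaussian whose variances add. A quick Monte Carlo sanity check (an illustrative Python sketch with assumed parameter values $\eta=0.8$, $r=0$, $\sigma^{2}=0.5$):

```python
import numpy as np

rng = np.random.default_rng(0)
eta, r, sigma2 = 0.8, 0.0, 0.5   # illustrative values
n_samples = 200_000

# Sample the three independent terms of
# N = sqrt(eta) q e^{-r} + sqrt(1-eta) q_E + nu_q,
# each zero-mean Gaussian with the variances entering the convolution.
n = (rng.normal(0.0, np.sqrt(eta * np.exp(-2 * r) / 2), n_samples)
     + rng.normal(0.0, np.sqrt((1 - eta) / 2), n_samples)
     + rng.normal(0.0, np.sqrt(sigma2 / 2), n_samples))

# Variances add under convolution:
# Var(N) = (1 - eta + eta e^{-2r} + sigma^2)/2, as in the density above.
var_expected = (1 - eta + eta * np.exp(-2 * r) + sigma2) / 2
assert abs(n.var() - var_expected) < 0.02
```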
This leads us to the following proposition: \begin{proposition} [The forbidden interval]\label{prop:unassisted-comm} The probability of success $P_{s}$ shows a non-monotonic behavior versus $\sigma$ iff $\theta\notin\lbrack\theta_{-},\theta_{+}]$, where $\theta_{\pm}$ are the two roots of the following equation: \begin{equation}\label{Eq-P1} \frac{\wp ( \theta + \sqrt{\eta} \, \alpha_{q} )}{(1-\wp)( \theta - \sqrt{\eta} \, \alpha_{q} )} = \exp\left[ \frac{ 4 \sqrt{\eta} \, \alpha_{q} \theta }{1-\eta+e^{-2r}\eta }\right] \, , \end{equation} with $\theta_{-} \leqslant - \sqrt{\eta} \, \alpha_{q} < \sqrt{\eta} \, \alpha_{q} \leqslant \theta_{+}$. \end{proposition} \begin{proof} We consider ${P}_{s}$ as a function of $\sigma^{2}$. In order to have a non-monotonic behavior for ${P}_{s}(\sigma^{2})$, we must check for the presence of a local maximum for positive values of $\sigma$. By imposing \begin{equation}\label{derpe1} \frac{d{P}_{s}(\sigma^{2})}{d\sigma^{2}} = 0 \end{equation} we obtain the following expression for the critical value of $\sigma$: \begin{equation}\label{smin1} \sigma^{2}_\ast = - 1 + \eta - e^{-2r}\eta + \frac{ 4 \sqrt{\eta} \, \alpha_{q} \theta }{\ln\left[ \frac{ \wp ( \theta + \sqrt{\eta} \, \alpha_{q} ) } { (1-\wp) ( \theta - \sqrt{\eta} \, \alpha_{q} ) } \right] } \, . \end{equation} The probability of success is a non-monotonic function of $\sigma^2$ iff $\sigma^{2}_\ast > 0$. This inequality is verified for $\theta\notin\lbrack\theta_{-},\theta_{+}]$, where $\theta_\pm$ are the unique solutions of the equation $\sigma^{2}_\ast = 0$, i.e., equation (\ref{Eq-P1}). Finally, we notice that equation (\ref{Eq-P1}) implies $\frac{\theta + \sqrt{\eta} \, \alpha_{q}}{\theta - \sqrt{\eta} \, \alpha_{q}} > 0$, i.e., $\theta \not\in [ - \sqrt{\eta} \, \alpha_q , \sqrt{\eta} \, \alpha_q ]$, which implies $\theta_{-} \leqslant - \sqrt{\eta} \, \alpha_{q} < \sqrt{\eta} \, \alpha_{q} \leqslant \theta_{+}$. 
\end{proof} The above proposition improves upon the theorem from Ref.~\cite{WK09}\ in several important ways, thanks to the assumption that the noise is Gaussian, which allows a more careful analysis. First, (\ref{smin1}) gives the optimal value of the noise that leads to the maximum success probability if the threshold is outside of the forbidden interval (though, note that other works have algorithms to learn the optimal noise parameter \cite{MK98}). Second, there is no need to consider an infinite-squeezing limit, as was the case in Ref.~\cite{WK09}, in order to guarantee the non-monotonic signature of SR. As an example, figure \ref{Pevssig} shows the probability of success $P_s$ as a function of $\sigma$ for various values of the threshold $\theta$ around the high signal level $\sqrt{\eta} \, \alpha_q$. Identical behavior can be found for values of $\theta$ around the low signal level~$- \sqrt{\eta} \, \alpha_q$. \begin{figure} \centering\includegraphics[width=0.45\textwidth] {fig1} \caption{ The probability of success $P_{s}$, equation (\ref{pe1}), (corresponding to the choice $\wp=1/2$) versus $\sigma$, for the noise-assisted threshold detection. The values of the parameters are $\eta=0.8$, $\alpha_{q}=1$ and $r=0$, giving $\theta_{\pm}\approx\pm 0.96$ after applying Proposition~\ref{prop:unassisted-comm}. Curves from top to bottom correspond respectively to $\theta=0.85$, $0.95$ (inside the forbidden interval), $1.05$, $1.15$, $1.25$, $1.35$ (outside the forbidden interval). Due to symmetry, we also have the same plot for the values $\theta=-0.85$, $-0.95$, $-1.05$, $-1.15$, $-1.25$, $-1.35$.} \label{Pevssig} \end{figure} Figure~\ref{thevsr} plots the forbidden interval in the $\theta,r$ plane. We can see that increasing the squeezing level reduces the width of the forbidden interval down to $2\sqrt{\eta} \, \alpha_{q}$. Similarly, figure \ref{thevsal} plots the forbidden interval in the $\theta ,\alpha_{q}$ plane.
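The behavior shown in figure \ref{Pevssig} can be reproduced numerically from (\ref{pe1}) and (\ref{smin1}). The following is an illustrative Python sketch using the parameter values of figure \ref{Pevssig}:

```python
from math import erf, exp, log, sqrt

# Illustrative values matching figure 1: eta = 0.8, alpha_q = 1, r = 0, wp = 1/2.
eta, alpha_q, r, wp = 0.8, 1.0, 0.0, 0.5

def P_s(theta, sig2):
    """Success probability (pe1) as a function of the added-noise variance."""
    d = sqrt(1 - eta + eta * exp(-2 * r) + sig2)
    a = sqrt(eta) * alpha_q
    return 0.5 + 0.5 * wp * erf((theta + a) / d) - 0.5 * (1 - wp) * erf((theta - a) / d)

def sigma2_star(theta):
    """Critical noise variance (smin1); SR occurs iff it is positive."""
    a = sqrt(eta) * alpha_q
    ratio = wp * (theta + a) / ((1 - wp) * (theta - a))
    return -1 + eta - exp(-2 * r) * eta + 4 * a * theta / log(ratio)

theta = 1.15                # outside the forbidden interval [-0.96, 0.96]
s2 = sigma2_star(theta)
assert s2 > 0                              # a beneficial noise level exists
assert P_s(theta, s2) > P_s(theta, 0.0)    # adding noise helps ...
assert P_s(theta, s2) > P_s(theta, 10.0)   # ... but too much noise hurts
assert sigma2_star(0.92) < 0               # inside the interval: no SR
```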
Increasing the amplitude enlarges the width of the forbidden interval up to $2\sqrt{\eta} \, \alpha_{q}$. \begin{figure} \centering\includegraphics[width=0.45\textwidth] {fig2}\caption{Forbidden interval (area between upper and lower curves) in the $\theta,r$ plane, drawn according to Proposition \ref{prop:unassisted-comm}. The top (resp. bottom) solid line corresponds to $\theta_{+}$ (resp. $\theta_-$) while the top (resp. bottom) dashed line corresponds to $\sqrt{\eta} \, \alpha_q$ (resp. $-\sqrt{\eta} \, \alpha_q$). The values of the other parameters are $\wp=1/2$, $\eta=0.8$, and $\alpha_{q}=1$.} \label{thevsr} \end{figure} \begin{figure} \centering\includegraphics[width=0.45\textwidth] {fig3} \caption{Forbidden interval (area between upper and lower curves) in the $\theta,\alpha_q$ plane, drawn according to Proposition \ref{prop:unassisted-comm}. The top (resp. bottom) solid line corresponds to $\theta_{+}$ (resp. $\theta_-$), while the top (resp. bottom) dashed line corresponds to $\sqrt{\eta} \, \alpha_q$ (resp. $-\sqrt{\eta} \, \alpha_q$). The values of the other parameters are $\wp=1/2$, $\eta=0.8$, and $r=0$.} \label{thevsal} \end{figure} Finally, notice that Proposition \ref{prop:unassisted-comm} holds true even if the noise $\nu_q$ is introduced at the sender's end. \section{Stochastic Resonance in Entanglement-Assisted Classical Communication} Let us consider the same channel as in the previous section, but we now assume that the sender and receiver share an entangled state, namely, a two-mode squeezed vacuum~\cite{G05}, before communication begins. This situation is somewhat similar to the communication scenario in super-dense coding~\cite{BW92-BK00}, with the exception that we have continuous variable systems and thresholding at the receiver. Let mode~1 (resp.~2) belong to the sender (resp.~receiver).
The sender displaces her share of the entanglement by the complex number $-\alpha_{q}\left( -1\right) ^{X_{q}}-i\alpha_{p}\left( -1\right) ^{X_{p}}$ in order to transmit the two bits $X_{q}$ and $X_{p}$. The resulting displaced squeezed vacuum operators are as follows: \begin{eqnarray} & & (\hat{q}_{1}-\hat{q}_{2})e^{-r}-\alpha_{q}(-1)^{X_{q}} \, , \label{eq:qmes1}\\ & & (\hat{p}_{1}+\hat{p}_{2})e^{-r}-\alpha_{p}(-1)^{X_{p}} \, , \label{eq:qmes2} \end{eqnarray} where $\hat{q}$, $\hat{p}$ are the position, momentum quadrature operators, $r \geqslant 0$ is the squeezing strength, $\alpha_{q},\alpha_{p}\in\mathbbm{R}$ are the displacement amplitudes, and $X_{q},X_{p}\in\{0,1\}$ are binary random variables. Since $\hat{q}_{1}-\hat{q}_{2}$ commutes with $\hat{p}_{1}+\hat{p}_{2}$, it suffices to analyze the output for (\ref{eq:qmes1}). After the sender transmits her share of the entanglement through a lossy bosonic channel with transmissivity $\eta\in\left(0,1\right)$, the operator describing their state is as follows: \[ \left( \sqrt{\eta} \, \hat{q_{1}} - \hat{q}_{2} \right) e^{-r} - \sqrt{\eta} \, \alpha_{q}(-1)^{X_{q}} + \sqrt{1-\eta} \, \hat{q}_{E} \, , \] where $\hat{q}_{E}$ is the position quadrature operator of the environment mode (assumed to be in the vacuum state for the sake of simplicity). At the receiver's end, let us again consider the possibility of adding a random, Gaussian-distributed displacement $\nu_{q}\in\mathbbm{R}$ to the arriving state. Then the output observable becomes as follows: \[ \left( \sqrt{\eta} \, \hat{q_{1}} - \hat{q}_{2} \right) e^{-r} - \sqrt{\eta} \, \alpha_{q}(-1)^{X_{q}} + \sqrt{1-\eta} \, \hat{q}_{E} + \nu_{q} \, . \] Repeating the steps of Section~\ref{sec:unassisted-comm}, we have \[ S_{X_{q}} = N -\sqrt{\eta} \, \alpha_{q}(-1)^{X_{q}} \, , \] where now \begin{equation}\label{eq:all-noise2} N\equiv\left( \sqrt{\eta} \, \hat{q_{1}}-\hat{q}_{2}\right) e^{-r}+\sqrt{1-\eta} \, \hat{q}_{E} + \nu_{q} \, . 
\end{equation} and \begin{equation}\label{eq:noise-density2} P_{N} = {\mathcal{N}}(0,\eta e^{-2r}/2) \circ {\mathcal{N}}(0,e^{-2r}/2) \circ {\mathcal{N}}(0,\left( 1-\eta\right)/2) \circ {\mathcal{N}}(0,\sigma^{2}_q/2) \, , \end{equation} so that \begin{equation}\label{pX2} P_{N} =\mathcal{N}\left( 0 , ( 1 - \eta + (1+\eta)e^{-2r} + \sigma^{2}_q )/2\right) \, . \end{equation} The receiver then thresholds the measurement result with a threshold $\theta_{q}\in\mathbbm{R}$, and he retrieves a random bit $Y_{q}$ where \begin{equation}\label{Y2} Y_{q}\equiv H\left( N - \sqrt{\eta} \, \alpha_{q}(-1)^{X_{q}} - \theta_{q} \right) \, , \end{equation} and $H$ is the Heaviside step function defined above. Proceeding as in Section~\ref{sec:unassisted-comm}, we obtain the following input/output conditional probabilities \begin{eqnarray} P_{Y_{q}|X_{q}}(0|0) & = & \frac{1}{2}+\frac{1}{2}\mathrm{erf}\left[ \frac{\theta_{q} + \sqrt{\eta} \, \alpha_{q}}{\sqrt{ 1 - \eta + (1+\eta)e^{-2r} + \sigma^{2}_q}}\right] \, ,\\ P_{Y_{q}|X_{q}}(1|1) & = & \frac{1}{2}-\frac{1}{2}\mathrm{erf}\left[ \frac{\theta_{q} - \sqrt{\eta} \, \alpha_{q}}{\sqrt{ 1 - \eta + (1+\eta)e^{-2r} + \sigma^{2}_q }}\right] \, . \end{eqnarray} Then, writing $P_{X_{q}}(0)=\wp_{q}$ and $P_{X_{q}}(1)=1-\wp_{q}$, the probability of success reads \begin{eqnarray} {P}_{s,q} & = & \frac{1}{2}+\frac{1}{2}\wp_{q} \, \mathrm{erf}\left(\frac{\theta_{q}+\sqrt{\eta} \, \alpha_{q}}{\sqrt{ 1 - \eta + (1+\eta )e^{-2r} + \sigma^{2}_q }}\right) \nonumber \\ & & -\frac{1}{2}(1-\wp_{q}) \, \mathrm{erf}\left( \frac{\theta_{q}-\sqrt{\eta} \, \alpha_{q}}{\sqrt{ 1 - \eta + (1+\eta)e^{-2r} + \sigma^{2}_q }}\right) \, .
\label{pe2} \end{eqnarray} Analogously, for the quadrature $\hat{p}_{1}+\hat{p}_{2}$, we have \begin{eqnarray} {P}_{s,p} & = & \frac{1}{2}+\frac{1}{2}\wp_{p}\,\mathrm{erf}\left( \frac{\theta_{p}+\sqrt{\eta} \, \alpha_{p}}{\sqrt{ 1 - \eta + (1+\eta)e^{-2r} + \sigma^{2}_p }}\right) \nonumber \\ & & -\frac{1}{2}(1-\wp_{p})\,\mathrm{erf}\left( \frac{\theta_{p}-\sqrt{\eta} \, \alpha_{p}}{\sqrt{ 1 - \eta + (1+\eta)e^{-2r} + \sigma^{2}_p }}\right) \, . \label{pe3} \end{eqnarray} We have assumed that the noise terms added to the quadratures $\hat{q}_{1}-\hat{q}_{2}$ and $\hat{p}_{1}+\hat{p}_{2}$ have variances $\sigma_q^2/2$ and $\sigma_p^2/2$, respectively. We finally arrive at the following proposition: \begin{proposition}[The forbidden rectangle] \label{forbrect} The probability of success $P_s=P_{s,q} P_{s,p}$ shows a non-monotonic behavior versus $\sigma_q$, $\sigma_p$ iff $\theta_q\notin[\theta_{q-},\theta_{q+}]$ or $\theta_p\notin[\theta_{p-},\theta_{p+}]$, where $\theta_{\bullet\pm}$ are the roots of the following equation: \[ \frac{\wp_{\bullet}( \theta_{\bullet} + \sqrt{\eta} \, \alpha_{\bullet} )}{(1-\wp_{\bullet})( \theta_{\bullet} - \sqrt{\eta} \, \alpha_{\bullet} )}= \exp\left[\frac{4\sqrt{\eta} \, \alpha_{\bullet}\theta_{\bullet}}{1-\eta+(1+\eta)e^{-2r}}\right], \] with $\theta_{\bullet -}\le - \sqrt{\eta} \, \alpha_{\bullet} < + \sqrt{\eta} \, \alpha_{\bullet} \le \theta_{\bullet +}$ (here $\bullet$ stands for either $q$ or $p$). \end{proposition} \begin{proof} The proof can be obtained from that of Proposition \ref{prop:unassisted-comm} after replacing $-\eta + \eta e^{-2r}$ with $-\eta + (1+\eta) e^{-2r}$. \end{proof} It is worth remarking that in the above proposition either of the conditions $\theta_q\notin[\theta_{q-},\theta_{q+}]$ or $\theta_p\notin[\theta_{p-},\theta_{p+}]$ (or both) has to be satisfied in order to have a non-monotonic behavior for $P_s$.
This follows because $P_s$ is a function of the two variables $\sigma_q^2$ and $\sigma_p^2$; hence it suffices that $P_s$ has an interior maximum along one of them to obtain a non-monotonic behavior. As a consequence we have a \textquotedblleft forbidden rectangle\textquotedblright\ rather than \textquotedblleft forbidden stripes\textquotedblright\ in the $\theta_q,\theta_p$ plane. It is also worth noticing the difference between the equation in the above proposition and that in the proposition of the previous section. Here the squeezing factor $e^{-2r}$ is multiplied by $(1+\eta)$ rather than $\eta$. This is because we now have two squeezed modes, one of which is attenuated by the lossy channel. Finally, notice that Proposition \ref{forbrect} holds true even if the noise $\nu_q$ is introduced at the sender's end. \section{Stochastic Resonance in Channel Discrimination} Let us now consider two lossy quantum channels with transmissivities $\eta _{0},\eta_{1}\in(0,1)$ (suppose, without loss of generality, $\eta_{0}>\eta_{1}$). Our aim is to distinguish them. Differently from previous works, here we do not optimize over all possible decoding strategies \cite{lossD}, but concentrate on a given threshold-detection scheme. Then, suppose we use as a probe (input) the squeezed and displaced vacuum operator \begin{equation}\label{eq:qmessage3} \hat{q}e^{-r}+\alpha_{q} \,, \end{equation} where $\hat{q}$ is the position quadrature operator, $r$ is the squeezing parameter, and $\alpha_{q}\in\mathbbm{R}$ is the displacement amplitude. The transmission through the lossy channel with transmissivity $\eta_{X}$ can be considered as the encoding of a binary random variable $X\in\{0,1\}$ occurring with probability $P_{X}$.
The output observable after transmission is then \[ \sqrt{\eta_{X}} \left( \alpha_{q} + \hat{q}e^{-r} \right) + \sqrt{1-\eta_{X}} \, \hat{q}_{E} \, , \] where $\hat{q}_{E}$ is the position quadrature operator of the environment mode (assumed to be in the vacuum state for the sake of simplicity). At the receiver's end, we consider the addition of noise, modeled by a random, Gaussian-distributed displacement $\nu_{q}\in\mathbbm{R}$. Then the output observable becomes \[ \sqrt{\eta_{X}} \left( \alpha_{q}+\hat{q}e^{-r}\right) + \sqrt{1-\eta_{X}} \, \hat{q}_{E} + \nu_{q} \, . \] Upon measurement of the position quadrature operator, the signal value is \begin{equation}\label{signal3} S_{X} = \sqrt{\eta_{X}} \left( \alpha_{q}+qe^{-r}\right) + \sqrt{1-\eta_{X}} \, q_{E}+\nu_{q} \, . \end{equation} We define a conditional random variable $N|X$ summing all noise terms: \begin{equation}\label{eq:all-noise3} N|X \equiv \sqrt{\eta_{X}} \, q e^{-r} + \sqrt{1-\eta_{X}} \, q_{E} + \nu_{q} \, . \end{equation} The density $P_{N|X}\left( n|x\right)$ of the random variable $N|X$ is \begin{equation}\label{eq:noise-density3} P_{N|X} = {\mathcal{N}}(0,\eta_{x}e^{-2r}/2) \circ {\mathcal{N}}(0,\left( 1-\eta_{x}\right)/2) \circ {\mathcal{N}}(0,\sigma^{2}/2) \, . \end{equation} Notice that the noise term (\ref{eq:all-noise3}) explicitly depends on the encoded value $X$ and so does its probability density. From (\ref{eq:noise-density3}), we explicitly obtain \begin{equation}\label{pX3} P_{N|X} = \mathcal{N}\left( 0,( 1 - \eta_{x} + \eta_{x}e^{-2r} + \sigma^{2} )/2 \right) \, . \end{equation} The output signal (\ref{signal3}) can now be written as \[ S_{X} = \sqrt{\eta_{X}} \, \alpha_{q}+N|X \, . \] The receiver then thresholds the measurement result with a threshold $\theta\in\mathbbm{R}$ to retrieve a random bit $Y$ where \[ Y \equiv H\left( \theta - \sqrt{\eta_{X}} \, \alpha_{q} - N|X \right) \, , \] and $H$ is the unit Heaviside step function. 
In this case, the receiver assigns $Y=1$ if the output signal $S_{X}$ is smaller than the threshold, and assigns $Y=0$ otherwise. The final detected bit $Y$ should be the same as the encoded bit $X$. Hence, the probability of success reads as in (\ref{Pedef}), where now \begin{eqnarray} P_{Y|X}(0|0) & = & \int_{-\infty}^{+\infty} \hspace{-0.4cm} \left[ 1 - H\left( \theta -\sqrt{\eta_{0}} \, \alpha_{q}-n\right) \right] P_{N|X} \left( n|0\right) dn \, , \nonumber \\ P_{Y|X}(1|1) & = & \int_{-\infty}^{+\infty} \hspace{-0.4cm} H\left( \theta - \sqrt{\eta_{1}} \, \alpha_{q} - n \right) P_{N|X} \left( n|1 \right) dn \, . \nonumber \end{eqnarray} Using (\ref{pX3}) we obtain \begin{eqnarray} P_{Y|X}(0|0) & = & \frac{1}{2}\left[ 1-\mathrm{erf}\left( \frac{\theta - \sqrt{\eta_{0}} \, \alpha_{q}}{\sqrt{ 1 - \eta_{0} + \eta_{0}e^{-2r} + \sigma^{2} }}\right) \right] \, , \\ P_{Y|X}(1|1) & = & \frac{1}{2}\left[ 1+\mathrm{erf}\left( \frac{\theta - \sqrt{\eta_{1}} \, \alpha_{q}}{\sqrt{ 1 - \eta_{1} + \eta_{1}e^{-2r} + \sigma^{2} }}\right) \right] \, . \end{eqnarray} Then, writing $P_{X}\left( 0\right) =\wp$ and $P_{X}\left( 1\right) = 1-\wp$, we get \begin{eqnarray} {P}_{s} & = & \frac{1}{2}-\frac{1}{2}\wp\,\mathrm{erf}\left( \frac{ \theta - \sqrt{\eta_{0}} \, \alpha_{q} }{\sqrt{ 1 - \eta_{0} + \eta_{0}e^{-2r} + \sigma^{2} }}\right) \nonumber \\ & & +\frac{1}{2}(1-\wp)\,\mathrm{erf}\left( \frac{ \theta - \sqrt{\eta_{1}} \, \alpha_{q} }{\sqrt{ 1 - \eta_{1} + \eta_{1}e^{-2r} + \sigma^{2} }}\right) \, . \label{pe4} \end{eqnarray} Our aim is to analyze the probability of success as a function of the noise variance, for given values of the parameters $\alpha_{q}$, $r$, $\theta$, $\eta_{0}$, $\eta_{1}$.
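Before specializing, the behavior of equation (\ref{pe4}) can be probed numerically. The sketch below (the parameter values $\eta_0=0.8$, $\eta_1=0.4$, $\alpha_q=1$, $\theta=2$ are our own illustrative choices) scans $P_s$ over a grid of noise variances and locates its maximum:

```python
import math

def p_s(sigma2, theta, alpha_q=1.0, eta0=0.8, eta1=0.4, r=0.0, p=0.5):
    """Discrimination success probability, Eq. (pe4), vs the noise variance sigma^2."""
    d0 = math.sqrt(1 - eta0 + eta0 * math.exp(-2 * r) + sigma2)
    d1 = math.sqrt(1 - eta1 + eta1 * math.exp(-2 * r) + sigma2)
    return (0.5
            - 0.5 * p * math.erf((theta - math.sqrt(eta0) * alpha_q) / d0)
            + 0.5 * (1 - p) * math.erf((theta - math.sqrt(eta1) * alpha_q) / d1))

# Scan sigma^2: for a threshold far above sqrt(eta0)*alpha_q the success
# probability peaks at a strictly positive noise variance.
grid = [0.05 * k for k in range(400)]
vals = [p_s(s, theta=2.0) for s in grid]
best = max(range(len(vals)), key=vals.__getitem__)
print(grid[best], round(vals[best], 4))
```

For this threshold the maximum occurs at an interior value of $\sigma^2$, anticipating the forbidden-interval condition of the proposition below.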
In the simplest case of $r=0$, we have the following proposition: \begin{proposition}[The forbidden interval]\label{forbdiscr} The probability of success $P_{s}$ shows a non-monotonic behavior as a function of $\sigma$ iff $\theta\notin\lbrack\theta_{-},\theta_{+}]$, where $\theta_{\pm}$ are the two roots of the following equation: \begin{equation}\label{cond_cd} \frac{\wp( \theta - \sqrt{\eta_{0}} \, \alpha_{q})}{(1-\wp)( \theta - \sqrt{\eta_{1}} \, \alpha_{q} )}= \exp\left[ \alpha_{q}^{2}(\eta_{0}-\eta_{1})-2\alpha_{q}\theta(\sqrt{\eta_{0}}-\sqrt{\eta_{1}})\right] \, , \end{equation} such that $\theta_{-} \leqslant \sqrt{\eta_{1}} \, \alpha_{q} < \sqrt{\eta_{0}} \, \alpha_{q} \leqslant \theta_{+}$. \end{proposition} \begin{proof} We consider the probability of success as a function of $\sigma^{2}$. In order to have a non-monotonic behavior for ${P}_{s}(\sigma^{2})$, we must check for the presence of a local maximum. By solving \begin{equation}\label{derpe3} \frac{d{P}_{s}(\sigma^{2})}{d\sigma^{2}}=0 \, , \end{equation} we obtain the following expression for the critical value of $\sigma^2$: \begin{equation}\label{smin3} \sigma_{\ast}^{2}=-1+\frac{\alpha_{q}^{2}(\eta_{0}-\eta_{1})-2\alpha_{q} \theta(\sqrt{\eta_{0}}-\sqrt{\eta_{1}})}{\ln\left[ \frac{ \wp(\theta - \sqrt{\eta_{0}} \, \alpha_{q})}{(1-\wp)( \theta - \sqrt{\eta_{1}} \, \alpha_{q} )} \right]} \,. \end{equation} The condition for non-monotonicity of the probability of success as a function of $\sigma^2$, $\sigma_{\ast}^{2} > 0$, is verified iff $\theta \not\in [\theta_-, \theta_+]$, where $\theta_\pm$ are the roots of $\sigma_{\ast}^{2}=0$, i.e., equation (\ref{cond_cd}). Finally, equation (\ref{cond_cd}) implies $\frac{ \theta - \sqrt{\eta_{0}} \, \alpha_{q}}{\theta - \sqrt{\eta_{1}} \, \alpha_{q}} > 0$, which in turn yields $\theta_{-} \leqslant \sqrt{\eta_{1}} \, \alpha_{q} < \sqrt{\eta_{0}} \, \alpha_{q} \leqslant \theta_{+}$. 
\end{proof} If $r\neq0$, (\ref{derpe3}) is not algebraic---hence we did not succeed in providing an analytical expression for $\sigma_{\ast}^{2}$. However, numerical investigations show a qualitative behavior of the forbidden interval's boundaries identical to that shown in figures \ref{thevsr} and \ref{thevsal} (notice that here $-\sqrt{\eta} \, \alpha_{q}$ and $\sqrt{\eta} \, \alpha_{q}$ are replaced by $\sqrt{\eta_{1}} \, \alpha_{q}$ and $\sqrt{\eta_{0}} \, \alpha_{q}$, respectively). Finally, notice that Proposition \ref{forbdiscr} holds true even if the noise $\nu_q$ is introduced at the sender's end. \section{Stochastic Resonance in Quantum Communication}\label{Sec:quantum} Let us now consider a setting in which the SR effect can occur in the transmission of a qubit ($Q$). The aim is to first encode a qubit state into a bosonic mode ($B$) state, send it through the lossy channel, and finally coherently decode, with a threshold mechanism, the output bosonic mode state into a qubit system at the receiving end. We should qualify that it is unclear to us whether one would actually exploit the encodings and decodings given in this section, but regardless, the setting given here provides a novel scenario in which the SR effect can occur for a quantum system. We might consider the development in this section to be a coherent version of the settings in the previous sections. Also, it is in the spirit of a true ``quantum stochastic resonance'' effect hinted at in Ref.~\cite{B05}. We work in the Schr\"{o}dinger picture, and consider an initial state $|\varphi\rangle_Q\otimes |0\rangle_B$, where $|\varphi\rangle_Q=a|0\rangle_Q+b|1\rangle_Q$ is an arbitrary qubit state and $|0\rangle_B$ is the zero-eigenstate of the position-quadrature operator of the bosonic field. Here for the sake of simplicity we are going to work with infinite-energy position eigenstates rather than with squeezed-coherent states. 
Suppose that the encoding takes place through the following unitary controlled-operations: \begin{eqnarray} U_1^{QB} & = & \left\vert 0\right\rangle_Q \left\langle 0\right\vert\otimes e^{-i\hat{p}_B x_0}+\left\vert 1\right\rangle_Q \left\langle 1\right\vert \otimes e^{i\hat{p}_B x_0} \, ,\label{U1} \\ U_2^{QB} & = & I_{Q} \otimes \int_{x \geqslant 0}\left\vert x\right\rangle_B \left\langle x\right\vert dx + X_{Q}\otimes\int_{x<0}\left\vert x\right\rangle_B \left\langle x\right\vert dx \, , \label{U2} \end{eqnarray} where $I_{Q} = |0\rangle_Q\langle 0| + |1\rangle_Q\langle 1|$ and $X_{Q} = |0\rangle_Q\langle 1| + |1\rangle_Q\langle 0|$, $\hat{p}_B$ denotes the canonical momentum operator of the bosonic system, $|\pm x_0\rangle$ are the generalized eigenstates of the canonical position operator $\hat{q}_B$, and we assume, without loss of generality, $x_{0}\in\mathbbm{R}_+$. This encoding is a coherent version of encoding a binary number into an analog signal. The effect of such operations on the initial state is \begin{eqnarray} U_2^{QB}U_1^{QB}|\varphi\rangle_Q\otimes |0\rangle_B = |0\rangle_Q\otimes \left(a |x_0\rangle_B +b |-x_0\rangle_B\right). \end{eqnarray} Now, with the bosonic mode state factored out from the qubit state, it can be sent through the lossy channel. For the sake of analytical investigation, we consider a channel with unit transmissivity (an identity channel). Then, the output state simply reads \[ \rho_B= \left(a |x_0\rangle +b |-x_0\rangle\right)_B\left(\overline{a}\, \langle x_0| +\overline{b}\, \langle -x_0|\right) \, . \] At this point we consider the possibility of adding Gaussian noise before the (threshold) decoding stage. This is modeled as a Gaussian-modulated displacement of the quadrature $\hat{q}_B$.
The resulting state will be \begin{equation}\label{rhoBp} \rho_{B^\prime}=\int dq \, \, {\mathcal{N}}(0,\sigma^2) D(q,0) \rho_B D^{\dag}(q,0) \, , \end{equation} where $D(q,0)$ is the displacement operator (displacing only in the $\hat{q}_B$ direction), and ${\mathcal{N}}(0,\sigma^2)$ is a zero-mean, Gaussian distribution with variance $\sigma^2$. Now, the state $\rho_{B^\prime}$ is decoded into a qubit system $Q^{\prime}$ initially prepared in the state $|0\rangle_{Q^{\prime}}$ through the following controlled-unitary operations involving a coherent threshold mechanism \begin{eqnarray} V_1^{Q^{\prime}B^{\prime}} & = & I_{Q}\otimes\int_{x \geqslant \theta}\left\vert x\right\rangle_B \left\langle x\right\vert dx +X_{Q}\otimes\int_{x<\theta}\left\vert x\right\rangle_B \left\langle x\right\vert dx \, , \label{V1} \\ V_2^{Q^{\prime}B^{\prime}} & = & \left\vert 0\right\rangle_Q \left\langle 0\right\vert\otimes e^{i\hat{p}_B x_0}+\left\vert 1\right\rangle_Q \left\langle 1\right\vert \otimes e^{-i\hat{p}_B x_0} \, . \label{V2} \end{eqnarray} Clearly, if $\theta=0$ the decoding unitaries (\ref{V1}), (\ref{V2}) are the inverse of the encoding ones (\ref{U1}), (\ref{U2}). They hence allow unit fidelity encoding/decoding if $\sigma^2=0$. However, if $\theta\neq 0$, there could be a nonzero optimal value of $\sigma^2$. 
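As a sanity check, the inversion claim for $\theta=0$ can be illustrated on a toy, noiseless discrete model of the encode/decode chain (positions live on a finite set; the amplitudes and the value $x_0=0.3$ are illustrative, and the Gaussian displacement of (\ref{rhoBp}) is omitted):

```python
from collections import defaultdict

X0 = 0.3  # encoding displacement (illustrative)

def apply_op(state, rule):
    """Apply a (qubit, x) -> (qubit', x') update rule to a state dict."""
    out = defaultdict(complex)
    for (q, x), amp in state.items():
        q2, x2 = rule(q, x)
        out[(q2, round(x2, 9))] += amp
    return dict(out)

U1 = lambda q, x: (q, x + X0 if q == 0 else x - X0)   # qubit-conditioned shift, Eq. (U1)
U2 = lambda q, x: (q if x >= 0 else 1 - q, x)         # flip qubit if x < 0, Eq. (U2)
def V1(theta):                                        # threshold-controlled flip, Eq. (V1)
    return lambda q, x: (q if x >= theta else 1 - q, x)
V2 = lambda q, x: (q, x - X0 if q == 0 else x + X0)   # inverse conditioned shift, Eq. (V2)

def encode_decode(a, b, theta):
    state = {(0, 0.0): a, (1, 0.0): b}  # |phi>_Q (x) |0>_B
    for op in (U1, U2, V1(theta), V2):
        state = apply_op(state, op)
    return state

print(encode_decode(0.6, 0.8, theta=0.0))  # input amplitudes recovered
print(encode_decode(0.6, 0.8, theta=0.5))  # decoding fails for |theta| > X0
```

For $\theta=0$ the chain returns the input state exactly, while a threshold outside $[-x_0,x_0]$ already corrupts the state even without noise.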
The final qubit state is \begin{eqnarray} \rho_{Q^{\prime}} & = & \mathcal{E}(|\varphi\rangle_Q\langle\varphi|) \label{sqp} \\ & = & {\rm Tr}_{B^{\prime}} \left\{ V_2^{Q^{\prime}B^{\prime}} V_1^{Q^{\prime}B^{\prime}} \left(|0\rangle_{Q^{\prime}}\langle 0|\otimes \rho_{B^{\prime}}\right) \left( V_1^{Q^{\prime}B^{\prime}}\right)^{\dag} \left(V_2^{Q^{\prime}B^{\prime}}\right)^{\dag} \right\} \\ & = & \left[ |a|^2 (1-\Pi_<) + |b|^2 \Pi_> \right] |0\rangle_Q\langle 0| + \left[ |a|^2 \Pi_< + |b|^2 (1-\Pi_>) \right] |1\rangle_Q\langle 1| \nonumber\\ & & +(1-\Pi_<-\Pi_>) \left( a \bar{b}|0\rangle_Q\langle 1| + \bar{a}b |1\rangle_Q\langle 0| \right) \, , \label{sigmaout} \end{eqnarray} where \begin{eqnarray} \Pi_< & = & \frac{1}{2} + \frac{1}{2} {\rm erf}\left( \frac{\theta-x_0}{\sqrt{2\sigma^2}} \right) \, , \\ \Pi_> & = & \frac{1}{2} - \frac{1}{2} {\rm erf}\left( \frac{\theta+x_0}{\sqrt{2\sigma^2}} \right) \, . \end{eqnarray} Then we can calculate the average channel fidelity \begin{equation}\label{Pefid} \langle \mathcal{F} \rangle = \int d\varphi \,\, {}_Q\langle\varphi|\rho_{Q^{\prime}}|\varphi\rangle_Q \, , \end{equation} where $d\varphi$ is the uniform measure induced by the Haar measure on $\mathrm{SU}(2)$. Using (\ref{rhoBp}), (\ref{V1}), (\ref{V2}), (\ref{sqp}) and (\ref{Pefid}) we finally obtain \begin{equation}\label{fidqc} \langle \mathcal{F} \rangle = \frac{1}{2} + \frac{1}{4} \left[ {\rm erf}\left(\frac{\theta+x_0}{\sqrt{2\sigma^2}}\right) - {\rm erf}\left(\frac{\theta-x_0}{\sqrt{2\sigma^2}}\right) \right]. \end{equation} Then, we have the following proposition: \begin{proposition}[The forbidden interval]\label{forbquantum} The average channel fidelity $\langle \mathcal{F} \rangle$ shows a non-monotonic behavior as a function of $\sigma$ iff $\theta \notin [- x_0 , x_0 ]$. \end{proposition} \begin{proof} We consider $\langle \mathcal{F} \rangle$ as a function of $\sigma^2$.
In order to have a non-monotonic behavior, we must check for the presence of a local maximum for $\sigma^2 > 0$. The condition $\frac{d\langle \mathcal{F} \rangle}{d\sigma^2}=0$ yields the following expression for the critical value of $\sigma^2$: \begin{equation} \sigma^2_\ast = \frac{2\theta x_0}{\ln\left(\frac{\theta+x_0}{\theta-x_0}\right)} \, . \end{equation} This critical value exists and is positive iff $\theta x_0$ and $\ln\left(\frac{\theta+x_0}{\theta-x_0}\right)$ have the same sign, which happens iff $|\theta| > x_0$. We hence conclude that the average fidelity is a non-monotonic function of $\sigma$ iff $\theta \notin [ - x_0 , x_0 ]$. \end{proof} \begin{figure} \centering\includegraphics[width=0.45\textwidth]{fidelity} \caption{The average channel fidelity $\langle \mathcal{F} \rangle$, equation (\ref{fidqc}), vs $\sigma$ for $x_0=0.3$ and several values of $\theta$, inside and outside the forbidden interval. From top to bottom, $\theta=0.20$, $\theta=0.25$, $\theta=0.29$ (inside the forbidden interval), $\theta=0.31$, $\theta=0.35$, $\theta=0.40$ (outside the forbidden interval).} \label{fidelity} \end{figure} Figure~\ref{fidelity} shows $\langle \mathcal{F} \rangle$ as a function of $\sigma$ for a given value of $x_0$ and several values of $\theta$, both inside and outside the forbidden interval. A non-monotonic behavior is observed in the latter cases. It is worth noticing that the presence of noise can raise the average channel fidelity above the value of $2/3$, which is the maximum value achievable by measure-and-prepare protocols. Hence, in this sense, the presence of noise can lead to a transition from a classical to a quantum regime in the average communication fidelity.
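Equation (\ref{fidqc}) and the critical variance above can be cross-checked numerically: the sketch below compares the closed form with a Monte Carlo average over Haar-random input states built from the output state (\ref{sigmaout}) (the values $x_0=0.3$, $\theta=0.35$ match the figure; the sample size and seed are arbitrary):

```python
import numpy as np
from math import erf, sqrt, log

def fidelity_closed(sigma2, theta=0.35, x0=0.3):
    """Average channel fidelity, Eq. (fidqc)."""
    s = sqrt(2 * sigma2)
    return 0.5 + 0.25 * (erf((theta + x0) / s) - erf((theta - x0) / s))

def fidelity_mc(sigma2, theta=0.35, x0=0.3, n=200_000, seed=1):
    """Haar average of <phi|rho_Q'|phi>, with rho_Q' taken from Eq. (sigmaout)."""
    s = sqrt(2 * sigma2)
    pi_lt = 0.5 + 0.5 * erf((theta - x0) / s)   # Pi_<
    pi_gt = 0.5 - 0.5 * erf((theta + x0) / s)   # Pi_>
    rng = np.random.default_rng(seed)
    psi = rng.normal(size=(n, 2)) + 1j * rng.normal(size=(n, 2))
    psi /= np.linalg.norm(psi, axis=1, keepdims=True)   # Haar-random qubit states
    pa, pb = np.abs(psi[:, 0])**2, np.abs(psi[:, 1])**2
    A = pa * (1 - pi_lt) + pb * pi_gt           # population of |0><0|
    B = pa * pi_lt + pb * (1 - pi_gt)           # population of |1><1|
    C = 1 - pi_lt - pi_gt                       # coherence damping factor
    return float(np.mean(pa * A + pb * B + 2 * C * pa * pb))

sigma2_star = 2 * 0.35 * 0.3 / log((0.35 + 0.3) / (0.35 - 0.3))  # critical variance
print(round(sigma2_star, 4), round(fidelity_closed(sigma2_star), 4))
```

For these parameters the fidelity at $\sigma^2_\ast$ indeed exceeds the measure-and-prepare bound $2/3$.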
As shown in figure \ref{quantum}, an analogous SR-like effect is observed in the same parametric region for the logarithmic negativity \cite{P05} \begin{equation}\label{LogNeg} LN=\log_2\left\Vert\left[\mathcal{E}\otimes\mathcal{I}\left(|\Psi\rangle\langle\Psi|\right)\right]^{\Gamma}\right\Vert_{1} \, , \end{equation} where $\Gamma$ indicates the partial transpose operation, $\Vert\cdot\Vert_{1}$ the trace norm, $\mathcal{I}$ the identity map, and $|\Psi\rangle$ a maximally entangled two-qubit Bell state. This quantity is the logarithmic negativity of the Choi-Jamiolkowski state associated to the quantum channel \cite{J72} and gives an upper bound on its two-way distillable entanglement (this latter quantity in turn equals the quantum capacity of the channel assisted by unbounded two-way classical communication \cite{BDSW96}). \begin{figure} \centering\includegraphics[width=0.45\textwidth]{quantum} \caption{The logarithmic negativity, equation (\ref{LogNeg}), vs $\sigma$ for $x_0=0.3$ and several values of $\theta$ inside and outside the forbidden interval. From top to bottom, $\theta=0.20$, $\theta=0.25$, $\theta=0.29$ (inside the forbidden interval), $\theta=0.31$, $\theta=0.35$, $\theta=0.40$ (outside the forbidden interval).} \label{quantum} \end{figure} Finally, notice that Proposition \ref{forbquantum} holds true even if the noise $\nu_q$ is introduced at the sender's end. \section{Stochastic Resonance in Quantum Key Distribution} In this section we investigate stochastic resonance effects in quantum key distribution (see also \cite{RGK05,QKD}). An achievable rate for private classical communication over a quantum channel is given by the following formula \cite{Dev05,CWY04}: \begin{equation}\label{CP} C_P = I(A:B) - I(A:E) \, , \end{equation} where $I(A:B)$ and $I(A:E)$ are the maximal mutual information between the sender ($A$) and the receiver ($B$) and between the sender and the eavesdropper ($E$), respectively.
In the above formula, $A$ is a classical system, $B$ and $E$ are quantum systems, and the optimization is over all ensembles that Alice can prepare at the input. We consider the same encoding of a binary variable into a single bosonic mode expressed by Eq.\ (\ref{eq:qmessage1}). Also, we consider the case in which information is transmitted through a lossy bosonic channel characterized by the transmissivity parameter $\eta$ \cite{HW01}. The variable after transmission through the noisy channel is hence expressed by Eq.\ (\ref{lossych}). First, suppose that the noise is added at the receiver's end. In this case, the quantity $I(A:E)$ is not affected at all by the noise, and so the behavior of $C_P$ versus $\sigma$ is simply determined by $I(A:B)$. Since the latter relies on the probability of success given by equation (\ref{pe1}), we are in the same situation as in Proposition \ref{prop:unassisted-comm}. In particular, the private communication rate in Eq.\ (\ref{CP}) exhibits a non-monotonic behavior as a function of the noise variance if and only if the threshold value $\theta$ lies outside of the forbidden interval $\lbrack\theta_{-},\theta_{+}]$, where $\theta_{\pm}$ are the two roots of equation (\ref{Eq-P1}). Second, suppose that the noise is added at the sender's end. In this case, equation (\ref{noiseB}) changes as follows: \begin{equation} \sqrt{\eta}\left( \hat{q}e^{-r} - \alpha_{q}(-1)^{X} + \nu_{q}\right) + \sqrt{1-\eta} \, \hat{q}_{E} \, . \end{equation} Using a threshold decoding, we get the same expression as in equation (\ref{pe1}) for the success probability, upon replacing $\sigma^2 \to \eta\sigma^2$. From this, it is straightforward to calculate the mutual information $I(A:B)$.
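As an illustration of the sender-side case, the term $I(A:B)$ can be sketched as the mutual information of the binary asymmetric channel induced by threshold decoding (the conditional probabilities below mirror the erf expressions of the previous sections, with $\sigma^{2}\to\eta\sigma^{2}$; the parameter values are illustrative):

```python
from math import erf, sqrt, exp, log2

def cond_probs(sigma2, theta=0.5, alpha=1.0, eta=0.8, r=0.0, sender_noise=True):
    """Threshold-decoding conditional probabilities P(Y=0|X=0), P(Y=1|X=1);
    sender-side noise enters with sigma^2 scaled by eta."""
    s2 = eta * sigma2 if sender_noise else sigma2
    d = sqrt(1 - eta + eta * exp(-2 * r) + s2)
    a = sqrt(eta) * alpha
    return 0.5 + 0.5 * erf((theta + a) / d), 0.5 - 0.5 * erf((theta - a) / d)

def mutual_information(p00, p11, p=0.5):
    """I(A:B) in bits for the induced binary asymmetric channel."""
    def h(q):  # -q log2 q, with h(0) = 0
        return -q * log2(q) if q > 0 else 0.0
    py0 = p * p00 + (1 - p) * (1 - p11)                                 # P(Y=0)
    hy = h(py0) + h(1 - py0)                                            # H(Y)
    hyx = p * (h(p00) + h(1 - p00)) + (1 - p) * (h(p11) + h(1 - p11))   # H(Y|X)
    return hy - hyx

for s2 in (0.0, 1.0, 4.0):
    print(s2, round(mutual_information(*cond_probs(s2)), 4))
```

The eavesdropper's term $I(A:E)$, by contrast, requires the Holevo information of a non-Gaussian ensemble and is not captured by this simple binary-channel reduction.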
In turn, we assume that the eavesdropper has access to the conjugate mode at the output of the beam-splitter transformation, so that her variable is given by \begin{equation}\label{evemode} \sqrt{1-\eta}\left( \hat{q}e^{-r} - \alpha_{q}(-1)^{X} + \nu_{q} \right) + \sqrt{\eta} \, \hat{q}_{E} \, . \end{equation} The maximum mutual information between the sender and the eavesdropper is given in terms of the Holevo information \cite{Holevo}. Since the average state corresponding to the variable (\ref{evemode}) is non-Gaussian, the analytical evaluation of its Holevo information appears not to be possible. However, the monotonicity property of the Holevo information under composition of quantum channels ensures that it has to be a monotonically decreasing function of the noise variance $\sigma^2/2$. As a consequence, we expect that its contribution to the private communication rate will increase with increasing noise. Indeed, a numerical analysis suggests that the private communication rate can exhibit a non-monotonic behavior as a function of $\sigma$ for all values of $\theta$. Examples of this behavior are shown in figure \ref{Private}. \medskip We are then led to formulate the following conjecture: \begin{conjecture}[The forbidden interval]\label{forbprivate} The private communication rate $C_P$ shows a non-monotonic behavior as a function of $\sigma$ for all $\theta\in\mathbbm{R}$ if $\nu_q$ is added at the sender's end. \end{conjecture} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Private} \caption{The private communication rate (corresponding to the choice $\wp=1/2$), equation (\ref{CP}), versus $\sigma$, for the case of noise added by the sender. The values of the parameters are $\eta=0.8$, $\alpha_{q}=1$ and $r=0$.
Curves from top to bottom correspond respectively to $\theta=0$, $0.5$, $1$, $1.5$, $2$, $2.5$.} \label{Private} \end{figure} \section{Conclusion} In conclusion, we have determined necessary and sufficient conditions for observing SR when transmitting classical, private, and quantum information over a lossy bosonic channel, or when discriminating lossy channels. Nonlinear coding and decoding by threshold mechanisms have been exploited together with the addition of Gaussian noise. Specifically, we have considered a bit encoded into coherent states with different amplitudes that are subsequently sent through a lossy bosonic channel and decoded at the output by threshold measurement of their amplitudes (without and with the assistance of entanglement shared by sender and receiver). We have also considered the discrimination of lossy bosonic channels with different loss parameters. In all these cases, the performance is evaluated in terms of the success probability. Since the mutual information is a monotonic function of this probability, the same conclusions can be drawn in terms of mutual information. SR effects appear whenever the threshold lies outside of the different forbidden intervals that we have established. If it lies inside of a forbidden interval, then the SR effect does not occur. Actually, absolute maxima of the success probability are obtained when the threshold is set in the middle of the forbidden interval. Generally speaking, SR effects are known to improve analog-to-digital conversion performance \cite{Gamma}. In fact, if two distinct signals fall within the same interval under continuous-to-binary conversion, they can no longer be distinguished. In such a situation the addition of a moderate amount of noise turns out to be useful, as long as it shifts the signals apart and helps in distinguishing them.
Beyond confirming this possibility in the quantum framework, we have also shown that the same kind of effects may arise in a purely quantum setting. Indeed, we have also considered the transmission of quantum information, represented by a qubit which is encoded into the state of a bosonic mode and then decoded according to a threshold mechanism. The observed non-monotonicity of the average channel fidelity and of the output entanglement (quantified by the logarithmic negativity) outside the forbidden interval represents a clear signature of a purely quantum SR effect. In all the above-mentioned cases it does not matter whether the sender or the receiver adds the noise. The exception occurs when the goal is to transmit private information. In fact, by considering achievable rates for private transmission over the lossy channel, we have pointed out that the forbidden interval can change drastically depending on whether the receiver or the sender adds noise. In the former case, it is exactly the same as in the case of sending classical (non-private) information. In the latter case, we conjecture that it vanishes, i.e., that the noise addition always turns out to be beneficial. This feature of the private communication rate can be interpreted as a consequence of the asymmetry between the legitimate receiver of the private information and the eavesdropper. In fact, while the legitimate receiver is restricted to threshold detection, we have allowed the eavesdropper to use more general detection schemes. \section*{References}
\section{Introduction} \IEEEPARstart{F}{or} a long time, steganography and steganalysis have developed in constant struggle with each other. Steganography seeks to hide as much secret information as possible in a given cover while changing the cover as little as possible, so that the stego is close to the cover in terms of visual quality and statistical characteristics [1,2,3]. Meanwhile, steganalysis uses signal processing and machine learning theory to analyze the statistical differences between stego and cover, improving detection accuracy by increasing the number of features and enhancing classifier performance [4]. \par Existing steganalysis methods include specific and universal algorithms. Early steganalysis methods aimed at detecting specific steganography algorithms [5], while general-purpose steganalysis algorithms usually rely on statistical features and machine learning [6]. Commonly used statistical features include the binary similarity measure feature [7], DCT [8,9] and wavelet coefficient features [10], the co-occurrence matrix feature [11], and so on. In recent years, higher-order statistical features based on the correlation between neighboring pixels have become the mainstream in steganalysis. These features improve detection performance by capturing the complex statistical characteristics associated with image steganography; examples include SPAM [12], Rich Models [13], and several variants thereof [14,15]. However, these advanced methods are based on rich models that include tens of thousands of features. Dealing with such high-dimensional features inevitably increases the training time and leads to overfitting and other issues. Besides, the ability of a feature-based steganalyzer to detect the subtle changes of a stego largely depends on the feature construction, which requires a great deal of human intervention and expertise.
\par Benefiting from the development of deep learning, convolutional neural networks (CNNs) perform well in various steganalysis detectors [16,17,18,19]. A CNN can automatically extract complex statistical dependencies from images and improve the detection accuracy. Because of GPU memory limitations, existing steganalyzers are typically trained on relatively small images (usually $256\times 256$), whereas real-world images are of arbitrary size. This raises the question of how an arbitrarily sized image can be steganalyzed by a CNN-based detector with a fixed-size input. In traditional computer vision tasks, the input image is usually resized directly to the required size. However, this is not good practice for steganalysis, since the relations between pixels are very weak and independent; resizing before classification would compromise the detector accuracy. \par In this paper, we propose a new CNN structure named ``Zhu-Net'' to improve the accuracy of spatial-domain steganalysis. The proposed CNN performs well in both detection accuracy and compatibility, and shows some distinctive characteristics compared with other CNNs, which are summarized as follows: \par (1) In the preprocessing layer, we reduce the size of the convolution kernels and use the 30 basic filters of SRM [13] to initialize them, which reduces the number of parameters and optimizes local features. The kernels are then optimized by training to achieve better accuracy and to accelerate the convergence of the network. \par (2) We use two separable convolution blocks to replace the traditional convolution layer. Separable convolution can be used to extract the spatial correlation and the channel correlation of residuals, to increase the signal-to-noise ratio, and to markedly improve the accuracy. \par (3) We use spatial pyramid pooling [20] to deal with arbitrarily sized images in the proposed network.
Spatial pyramid pooling can map feature maps to a fixed length and extract features through multi-level pooling. \par We design experiments to compare the proposed CNN with Xu-Net [17], Ye-Net [19], and Yedroudj-Net [21]. The proposed CNN shows excellent detection accuracy, which even exceeds that of the most advanced hand-crafted feature sets, such as SRM [13]. \par The rest of the paper is organized as follows. In Section II, we present a brief review of popular image steganalysis methods based on convolutional neural networks (CNNs) in the spatial domain. The proposed CNN is described in Section III, which is followed by experimental results and analysis in Section IV. Finally, the concluding remarks are drawn in Section V. \section{Related Works} \par The usual ways to improve CNN structures for steganalysis include using truncated linear units, modifying the topology by mimicking the Rich Models extraction process, and using deeper networks such as ResNet [22], DenseNet [23], and others. \par Tan et al. used a CNN with four convolution layers for image steganalysis [24]. Their experiments showed that a CNN with randomly initialized weights usually cannot converge, and that initializing the first layer's weights with the KV kernel improves accuracy. Qian et al. [25] proposed a steganalysis model using a standard CNN architecture with a Gaussian activation function, and further showed that transfer learning helps a CNN model detect a steganography algorithm at low payloads. The performance of these schemes is comparable to or better than the SPAM scheme [12], but still worse than the SRM scheme [13]. Xu et al. [17] proposed a CNN structure with techniques commonly used for image classification, such as batch normalization (BN) [26], $1\times 1$ convolution, and global average pooling. They also pre-processed the input with a high-pass filter and used an absolute-value (ABS) activation layer. Their experiments showed better performance.
By improving Xu-CNN, they achieved a more stable performance[27]. In the JPEG domain, Xu et al.[18] proposed a network based on decompressed images and achieved better detection accuracy than traditional methods. By simulating the traditional steganalysis pipeline of hand-crafted features, Fridrich et al.[28] proposed a CNN structure with histogram layers formed by a set of Gaussian activation functions. Ye et al.[19] proposed a CNN structure with a group of high-pass filters for pre-processing and adopted a set of hybrid activation functions to better capture the embedding signals. With the help of selection-channel knowledge and data augmentation, their model obtained significant performance improvements over the classical SRM. Fridrich[29] proposed a different network architecture that deals with images of arbitrary size via manual feature extraction; their scheme feeds statistical elements of the feature maps to a fully-connected-network classifier. \par Generally, the existing networks have two disadvantages. \par (1) A CNN is composed of two parts: the convolution layers and the fully connected layers (ignoring the pooling layers, etc.). The convolution layers convolve the input and output the corresponding feature maps. A convolution layer does not require a fixed-size input, and its output feature maps can be of any size, whereas the fully connected layer requires a fixed-size input. Hence, it is the fully connected layer that imposes the fixed-size constraint on the network. The two existing solutions are as follows. \begin{itemize} \item Resizing the input image directly to the desired size. However, the relationship between image pixels is fragile and nearly independent in the steganalysis task, and detecting the presence of steganographic embedding changes really means detecting a very weak noise signal added to the cover image.
Therefore, resizing the image directly before feeding it to the CNN greatly degrades the detection performance of the network. \item Using a fully convolutional network (FCN), because convolutional layers do not require a fixed image size. \end{itemize} \par In this paper, we propose a third solution: mapping the feature maps to a fixed size before sending them to the fully connected layer, as in SPP-Net[20]. The proposed network maps feature maps to a fixed length with an SPP module, so that it can steganalyze images of arbitrary size. \par (2) The accuracy of CNN-based steganalysis relies heavily on the signal-to-noise ratio of the feature maps: a CNN needs a high signal-to-noise ratio to detect the small differences between stego and cover signals. Many steganalyzers therefore extract image residuals to increase the signal-to-noise ratio. However, some existing schemes directly convolve the extracted residuals without considering their cross-channel correlations, and thus do not make good use of the residuals. \par In this paper, we increase the signal-to-noise ratio in the following two ways. \begin{itemize} \item Optimizing the convolution kernels by reducing the kernel size and applying the proposed ``forward-backward-gradient descent'' method. \item Using group convolution to process the spatial correlation and channel correlation of the residuals separately. \end{itemize} \par We greatly improve the accuracy of steganalysis by combining these two ways. \section{Proposed Scheme} \begin{figure*} \includegraphics[scale=0.6]{fig1.eps} \caption{The architecture of the proposed CNN. For each block, $x_1 \rightarrow x_2;x_2 (a*a*x_1)$ denotes a block with kernel size $a*a$, $x_1$ input feature maps, and $x_2$ output feature maps. Batch normalization is abbreviated as BN.} \end{figure*} \subsection{Architecture} \par The framework of the proposed CNN-based steganalysis is shown in Fig. 1.
The CNN accepts an input image of size $256 \times 256$ and outputs one of two class labels (stego or cover). The proposed CNN is composed of a number of layers: one image preprocessing layer, two separable convolution (sepconv) blocks, four basic blocks for feature extraction, a spatial pyramid pooling (SPP) module, and two fully connected layers followed by a softmax. \par Four convolutional blocks, marked as `Basic Block 1' through `Basic Block 4', extract the spatial correlation between feature maps and finally pass the features to the fully connected layers for classification. Each Basic Block is made of the following components: \subsubsection{Convolution Layer} \par Unlike the existing networks that use large convolution kernels (e.g. $5\times 5$), we use small convolution kernels (e.g. $3\times 3$) to reduce the number of parameters. Small convolution kernels also increase the nonlinearity of the network, which significantly increases its capability of feature representation. Therefore, we set the size of the convolution kernels to $3\times 3$ for Basic Blocks 1-4, which have 32, 32, 64, and 128 channels, respectively. The number of channels in each Basic Block is chosen as a trade-off between computational complexity and network performance. Stride and padding sizes are shown in Fig. 1. \subsubsection{Batch Normalization (BN) layer} \par Batch normalization (BN)[26] normalizes the distribution of each mini-batch to zero mean and unit variance during training. A BN layer effectively prevents gradient vanishing/exploding and over-fitting in deep neural networks[26], and allows a relatively large learning rate to speed up convergence. In our experiments, we found that networks without BN, such as Ye-Net, are very sensitive to the initialization of parameters and may not converge with inappropriate initializations.
Therefore, we use BN in the proposed scheme. \subsubsection{Non-linear activation function} \par For all the blocks in Zhu-Net, we use the classical rectified linear unit (ReLU) as the activation function to prevent gradient vanishing/exploding, produce sparse features, and accelerate network convergence. Applying ReLU makes neurons respond selectively to useful signals among the inputs, resulting in more efficient features. The ReLU function is also convenient to differentiate, which benefits back-propagation gradient calculations. We do not use the truncated linear unit (TLU) in our network, because we find that the TLU decreases the non-linearity. To verify this, we compare TLU (threshold $T=3$) with ReLU. As Table \uppercase\expandafter{\romannumeral1} shows, Zhu-Net with ReLU has a lower error rate in detecting various steganography algorithms. ReLU also accelerates convergence and shows better performance than TLU, as shown in Fig. 2. \begin{table} \centering \caption{Steganalysis error rates comparison of Zhu-Net with TLU and Zhu-Net with ReLU against two algorithms WOW and S-UNIWARD at 0.2 bpp and 0.4 bpp. Both networks are trained and tested on BOSS dataset.} \begin{tabular}{ccc} \hline Algorithms& \tabincell{c}{Zhu-Net with \\TLU}& \tabincell{c}{Zhu-Net with \\ReLU}\\ \hline WOW(0.2bpp) & 0.257 & \textbf{0.233} \\ WOW(0.4bpp) & 0.138 & \textbf{0.118} \\ S-UNIWARD(0.2bpp) & 0.316 & \textbf{0.285} \\ S-UNIWARD(0.4bpp) & 0.188 & \textbf{0.153} \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{compare_TLU_ReLU.jpg} \caption{Comparing convergence performances of training Zhu-Net with TLU and Zhu-Net with ReLU against two algorithms WOW and S-UNIWARD at 0.2 bpp and 0.4 bpp. Both networks are trained and tested on BOSS dataset.} \end{figure} \subsubsection{Average pooling layer} \par Average pooling layers are used in Basic Block 1 to Basic Block 3.
Average pooling down-samples the feature maps, abstracts the image features, reduces the feature-map size, and enlarges the receptive fields. Owing to its invariance, average pooling also enhances the generalization ability of the network. \par Note that we design separable convolution blocks to enhance the SNR (the signal-to-noise ratio of the stego signal to the image content) and to remove the image content from the features effectively. In the last block, we use an SPP module to better extract features. The SPP module enriches the feature expression through multi-level pooling, so that our network can be trained and tested with multi-size images. Details are elaborated in Section III-D. \subsection{Improving Kernels} \par The embedding operation of steganography can be viewed as adding a small-amplitude noise signal to the cover signal. Therefore, it is a good idea to perform residual calculation prior to feature extraction in the network. In the preprocessing layer, we use a set of high-pass filters (the 30 basic high-pass filters of SRM[13], i.e. the ``spam'' filters and their rotated counterparts, similar to Ye-Net[19] and Yedroudj-Net[21]) to extract noise residual maps from the input image. In [24], the authors showed that without such a preliminary high-pass filter, CNN convergence would be very slow. Hence, using multiple filters can effectively improve network performance. \par We use the following strategy to initialize the weights of the preprocessing layer. \subsubsection{Small sized kernels} \par Small convolution kernels reduce the number of parameters and avoid modeling overly large regions, which effectively reduces computation. As some existing schemes have shown, the kernel size of $5\times 5$ is suitable for some filters in SRM, such as ``SQUARE $5\times 5$'' and ``EDGE $5\times 5$''. But for the remaining 25 filters, a $5\times 5$ convolution kernel would model residual elements over too large a local region.
So we keep ``SQUARE $5\times 5$'' and ``EDGE $5\times 5$'', and use the $3\times 3$ size for the remaining 25 high-pass filters. We initialize the central part of the convolution kernels with the SRM kernels and set the remaining elements to zero, as shown in Fig. 3. The two groups of residuals calculated by these filters are stacked together as the input of the next convolutional layer. \begin{figure} \centering \includegraphics[scale=0.6]{fig2.eps} \caption{An example of initializing convolution kernels with different sized high-pass filters. (a) SQUARE $5\times 5$ high-pass filter. (b) The convolution kernel corresponding to (a). (c) EDGE $3\times 3$ high-pass filter. (d) The convolution kernel corresponding to (c).} \end{figure} \subsubsection{Optimizing kernels} \par Modeling residuals instead of pixel values yields more robust features. In Yedroudj-Net and Xu-Net, the convolution kernels in the preprocessing layer are fixed during training. To optimize the SRM hand-crafted filters designed with domain knowledge, we design a method named ``forward-backward-gradient descent'' and use it in the preprocessing layer. We calculate the residual as follows: \par For each image $X = (X_{ij})$, the residual $R = (R_{ij})$ is \begin{equation} R_{ij} = X_{pred}(N_{ij}) - cX_{ij}, \end{equation} \par where $c$ is the residual order (a positive integer), $N_{ij}$ is the set of neighboring pixels of $X_{ij}$, and $X_{pred}(\cdot)$ is a predictor of $cX_{ij}$ defined on $N_{ij}$. In practice, we usually use high-pass filters to implement $X_{pred}(\cdot)$.
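As a concrete illustration of Eq. (1), the sketch below computes a residual map with the second-order horizontal high-pass kernel (predictor = sum of the two horizontal neighbours, so $c=2$). This is a minimal pure-Python sketch; the function and variable names are ours, not part of the scheme.

```python
# Minimal sketch of the residual extraction of Eq. (1).
# The kernel is the second-order horizontal high-pass filter
# (R = x_left + x_right - 2*x_centre, residual order c = 2).

def high_pass_residual(image, kernel):
    """Valid 2-D cross-correlation of `image` with `kernel`."""
    h, w = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(h - kh + 1):
        row = []
        for j in range(w - kw + 1):
            s = sum(image[i + m][j + n] * kernel[m][n]
                    for m in range(kh) for n in range(kw))
            row.append(s)
        out.append(row)
    return out

K = [[0, 0, 0],
     [1, -2, 1],      # predicts the centre pixel from its neighbours
     [0, 0, 0]]

cover = [[10, 10, 10, 10] for _ in range(3)]   # flat region
stego = [r[:] for r in cover]
stego[1][2] += 1                               # a single +1 embedding change
r_cover = high_pass_residual(cover, K)
r_stego = high_pass_residual(stego, K)
```

On the flat cover region the residual vanishes, while the single embedding change leaves a clear trace in the residual map; this is why the stego-to-content ratio is higher after this layer.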
\par The complete process of optimizing the kernels is given as follows: \emph{\textbf{The forward-backward-gradient descent method}} \emph{\textbf{Forward propagation:}} \par \textbf{Input:} An image $X = (X_{ij})$, the high-pass filter $K$ \par \textbf{Output:} The noise residual map $R = (R_{ij}).$ \par \textbf{Step 1:} Initialization: The convolution kernel of the preprocessing layer is initialized by a high-pass filter of the SRM, and the weights of the convolution kernel are denoted by $K$. \par \textbf{Step 2:} Calculate residuals: \begin{equation} R_{ij}=(X\ast K)_{ij} = \sum _{m,n}X_{i+m,j+n}\cdot K_{m,n}, \end{equation} \par where $\ast$ denotes the convolution operator, and $m, n$ are the indices of the kernel $K$. \emph{\textbf{Back propagation:}} \par \textbf{Input:} The gradient of the previous layer $\delta ^{l+1}$, the input image $X$. \par \textbf{Output:} The gradient of the preprocessing layer $\delta ^{l}.$ \par \textbf{1:} Let the backward gradient of the previous layer be $\delta ^{l+1}$. Then the gradient of the preprocessing layer is: \begin{equation} \delta ^{l} = \frac{\partial Loss}{\partial K} = \frac{\partial Loss}{\partial R}\frac{\partial R}{\partial K} = \delta ^{l+1} \ast X, \end{equation} \par \textbf{2:} Return the gradient of the preprocessing layer $\delta ^{l}$. \emph{\textbf{Gradient descent:}} \par \textbf{Input:} The gradient of the preprocessing layer $\delta ^{l}$, the high-pass filter $K$, the learning rate $lr$. \par \textbf{Output:} The optimized kernels $K^{'}.$ \par \textbf{1:} Optimize the weights of the preprocessing layer by: \begin{equation} K^{'} = K - lr \cdot \delta^{l}, \end{equation} \par \textbf{2:} Return the optimized kernels $K^{'}$. \par The corresponding experimental results are shown in Fig. 4 and Table \uppercase\expandafter{\romannumeral2}, where we compare Zhu-Net with fixed kernels against Zhu-Net with optimized kernels.
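To make the update of Eq. (4) concrete, the toy sketch below applies one gradient-descent step to an SRM-initialised kernel. The gradient values are invented purely for illustration, and the function names are ours.

```python
# One "gradient descent" step of the forward-backward-gradient descent
# method (Eq. (4)): K' = K - lr * delta_l, applied element-wise.
# The gradient below is a made-up placeholder, not a real backprop result.

def update_kernel(K, grad, lr):
    """Element-wise kernel update K' = K - lr * grad (Eq. (4))."""
    return [[k - lr * g for k, g in zip(k_row, g_row)]
            for k_row, g_row in zip(K, grad)]

K = [[0.0, 0.0, 0.0],
     [1.0, -2.0, 1.0],     # SRM-initialised weights
     [0.0, 0.0, 0.0]]
grad = [[0.0, 0.1, 0.0],   # hypothetical gradient delta_l
        [0.2, -0.1, 0.0],
        [0.0, 0.0, 0.1]]
K_new = update_kernel(K, grad, lr=0.5)
```

Unlike in Yedroudj-Net and Xu-Net, where $K$ would stay fixed, here the SRM initialisation only provides the starting point of the optimization.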
\begin{table} \centering \caption{Steganalysis error rates comparison between Zhu-Net with fixed kernels and Zhu-Net with optimized kernels against two steganography algorithms WOW and S-UNIWARD at 0.2 bpp and 0.4 bpp. Both networks are trained and tested on BOSS dataset. } \begin{tabular}{ccc} \hline Algorithms& \tabincell{c}{Zhu-Net with \\fixed kernels}& \tabincell{c}{Zhu-Net with \\optimized kernels}\\ \hline WOW(0.2bpp) & 0.243 & \textbf{0.233} \\ WOW(0.4bpp) & 0.130 & \textbf{0.118} \\ S-UNIWARD(0.2bpp) & 0.324 & \textbf{0.285} \\ S-UNIWARD(0.4bpp) & 0.169 & \textbf{0.153} \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig3.jpg} \caption{Comparing convergence performances of training Zhu-Net with fixed kernels and Zhu-Net with optimized kernels against two algorithms WOW and S-UNIWARD at 0.2 bpp and 0.4 bpp. Both networks are trained and tested on BOSS dataset.} \end{figure} \par From Table II, it is observed that with the ``forward-backward-gradient descent'' method, Zhu-Net achieves higher accuracy than the network with fixed kernels in detecting various steganography algorithms. According to Fig. 4, our network also converges more quickly and has a lower training loss than the network with fixed kernels. \subsection{Separable Convolution} \subsubsection{Problem Formulation} \par Existing steganalysis schemes directly learn filters in a 3D space without considering the cross-channel correlations of the residuals, so the residual information is not well utilized. To resolve this problem, we use two separable convolution blocks (i.e. the sepconv blocks), each consisting of a $1\times 1$ convolution and a $3\times 3$ convolution, after the preprocessing layer (as shown in Fig. 1). \par Separable convolution has recently made great progress in computer vision tasks, for example in Inception[30] and Xception[31]. Xception, a variant of the Inception module, is shown in Fig. 5(a).
This extreme version of Inception completely separates cross-channel from spatial correlations, reducing storage space and enhancing the expressiveness of the model. Therefore, we use the Xception structure to design the corresponding sepconv block to achieve group convolution of the residuals. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{test.eps} \caption{ (a) A variant version of the Inception module[31]; (b) structure of the sepconv blocks} \end{figure} \par In our scheme, we assume that the channel correlation and spatial correlation of the residuals are independent. The sepconv block performs group convolution on each feature map generated by a high-pass filter, which makes full use of the residual information and removes image content from the features to improve the signal-to-noise ratio. The design of the sepconv blocks is shown in Fig. 5(b). \par First, a $1\times 1$ pointwise convolution is performed in a sepconv block to extract the channel correlations of the residuals. Then a $3\times 3$ depthwise convolution with 30 groups is performed to extract the spatial correlations; each sepconv block thus consists of a $1\times 1$ pointwise convolution and a $3\times 3$ depthwise convolution. Note that there is no activation function in these two convolutions. After the $1\times 1$ convolutional layer of sepconv block 1, considering domain knowledge, we insert an absolute-value activation (ABS) layer to make our network learn the sign symmetry of the residual noise. We also use residual connections in the two sepconv blocks to accelerate network convergence, prevent gradient vanishing/exploding, and improve classification performance. \subsubsection{Experimental Verification and Analysis} \par To compare Zhu-Net with Yedroudj-Net, we visualize the feature maps of the first convolutional layer (feature maps of deeper layers are difficult to interpret and visualize). The feature maps give a good picture of the feature extraction process.
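A back-of-envelope parameter count illustrates why factorising channel and spatial correlations is also cheaper than a standard convolution. The sketch assumes 30 input and output channels (matching the 30 SRM residual maps) and ignores biases; the helper names are ours.

```python
# Parameter counts: standard 3x3 convolution vs an Xception-style
# separable block (1x1 pointwise followed by kxk depthwise,
# groups = number of channels). Biases are ignored.

def standard_conv_params(c_in, c_out, k):
    """Weights of a standard kxk convolution."""
    return k * k * c_in * c_out

def separable_conv_params(c_in, c_out, k):
    """Weights of a 1x1 pointwise + kxk depthwise factorisation."""
    pointwise = c_in * c_out          # 1x1 conv: mixes channels only
    depthwise = k * k * c_out         # one kxk filter per output channel
    return pointwise + depthwise

std = standard_conv_params(30, 30, 3)   # 8100 weights
sep = separable_conv_params(30, 30, 3)  # 900 + 270 = 1170 weights
```

The factorised block uses roughly one seventh of the weights while still modelling both kinds of correlation, one per stage.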
\par Both Yedroudj-Net and Zhu-Net are trained on WOW at a payload of 0.2 bpp. We visualize the feature maps of the first convolutional layer of Yedroudj-Net and the feature maps of sepconv block 2 of our network. The comparisons of the feature maps of stego and cover are shown in Fig. 6. \begin{figure*} \centering \includegraphics[width=0.5\textwidth]{fig7.eps} \caption{The comparison of feature maps between Zhu-Net and Yedroudj-Net. (a) Cover image. (b) Stego image. (c) The feature map of cover generated by Zhu-Net. (d) The feature map of cover generated by Yedroudj-Net. (e) The feature map of stego generated by Zhu-Net. (f) The feature map of stego generated by Yedroudj-Net.} \end{figure*} \par It is observed that the feature maps generated by the proposed scheme retain less image content and keep a higher signal-to-noise ratio between the stego signal and the image signal. For either a cover or a stego, the proposed scheme extracts features with strong expressive ability. Meanwhile, the similarity among the feature maps is relatively low, so the sepconv blocks facilitate subsequent convolution and classification. Comparatively speaking, the feature maps generated by Yedroudj-Net retain more image content, and the differences among the feature maps are not obvious. \begin{table} \centering \caption{Steganalysis error rates comparison between Yedroudj-Net and Zhu-Net against two steganography algorithms WOW and S-UNIWARD at 0.2 bpp and 0.4 bpp. Both networks are trained and tested on BOSS dataset.} \begin{tabular}{ccc} \hline Algorithms & Yedroudj-Net & Zhu-Net \\ \hline WOW(0.2bpp) & 0.278 & \textbf{0.233} \\ WOW(0.4bpp) & 0.141 & \textbf{0.118} \\ S-UNIWARD(0.2bpp) & 0.367 & \textbf{0.285} \\ S-UNIWARD(0.4bpp) & 0.228 & \textbf{0.153} \\ \hline \end{tabular} \end{table} \par In addition, we compare the detection error rates of Zhu-Net and Yedroudj-Net.
Table III shows the performance of these two CNNs against two steganography schemes, S-UNIWARD and WOW. Experimental results show that Zhu-Net clearly achieves better performance than Yedroudj-Net, reducing the detection error rate by 2.3\%-8.2\%. \par Table IV compares the detection error rates of Zhu-Net and Zhu-Net with all sepconv blocks against the algorithm WOW. The experimental results show that the detection accuracy goes down when all basic blocks are replaced by sepconv blocks, although the accuracy of Zhu-Net with all sepconv blocks is still better than Yedroudj-Net at a low embedding rate (e.g., 0.2 bpp). How to embed more sepconv blocks in a CNN requires follow-up studies; for now, we choose the network with two sepconv blocks in our implementation so as to achieve good detection performance. \begin{table} \centering \caption{Steganalysis error rates comparison using Zhu-Net with different numbers of sepconv blocks against WOW at 0.2 bpp and 0.4 bpp. Both networks are trained and tested on BOSS dataset.} \begin{tabular}{ccc} \hline Algorithms& \tabincell{c}{Zhu-Net with \\ full sepconv blocks}& \tabincell{c}{Zhu-Net with \\two sepconv blocks}\\ \hline WOW(0.2bpp) & 0.249 & \textbf{0.233} \\ WOW(0.4bpp) & 0.152 & \textbf{0.118} \\ \hline \end{tabular} \end{table} \subsection{Spatial pyramid pooling module} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig8.eps} \caption{A network structure with a spatial pyramid pooling layer} \end{figure} \par In some steganalysis networks[18,21], a global average pooling (GAP) layer is added after the last convolution layer for down-sampling, which greatly reduces the feature dimension. In image classification, GAP is generally used to replace the fully connected layer to prevent overfitting and reduce computational complexity. However, this global averaging operation amounts to modeling the entire feature map, which leads to a loss of local feature information.
However, for steganalysis networks, modeling local information is of key importance. \par In our network, we use spatial pyramid pooling (SPP) to model local feature information, as shown in Fig. 7. SPP has the following properties[20]: \par (1) SPP outputs a fixed-length feature for an input of any size. \par (2) SPP uses multi-level pooling to effectively detect object deformation. \par (3) Since the input can be of arbitrary size, SPP can aggregate features from images of any scale or size. \par Similar to [20], we divide the feature maps into several bins. In each spatial bin, we pool the responses of each feature map (we use average pooling hereinafter). The output of the spatial pyramid pooling is a fixed $k\times M$-dimensional vector, where $M$ is the number of bins and $k$ is the number of filters in the final convolution layer. The major steps of the SPP module mapping feature maps to a fixed-length vector are listed as follows: \emph{\textbf{The steps of the SPP module mapping feature maps to a fixed-length vector}} \par \textbf{Input:} The feature maps after Basic Block 4, of size $a\times a$ with $K$ channels; an $l$-level pyramid with $n\times n$ bins in each level. \par \textbf{Output:} The fixed-length feature of size $[1, K\times M]$, where $M$ is the number of bins. \par \textbf{Step 1:} For a pyramid level of $n \times n$ bins, implement this pooling level as a sliding-window pooling with window size win $= \left \lceil a/n \right \rceil$ and stride str $= \left \lfloor a/n \right \rfloor$, with $\left \lceil \cdot \right \rceil$ and $\left \lfloor \cdot \right \rfloor$ denoting the ceiling and floor operations. \par \textbf{Step 2:} Apply the window pooling to every feature map to obtain a feature of length $n\times n$. \par \textbf{Step 3:} Repeat Steps 1-2 for every level of the $l$-level pyramid. \par \textbf{Step 4:} Stack all generated feature vectors together (in PyTorch we use the torch.cat function).
Pre-compute the length contributed by each feature map as $M = \sum _{i=1}^{l} n_i \times n_i$, where $n_i\times n_i$ is the number of bins in level $i$; the total feature length is $K\times M$. \par \textbf{Step 5:} Resize the output feature to a size of $[1, K\times M]$. \par We use a 3-level pyramid pooling ($4\times4$, $2\times2$, $1\times1$), which means that the number of bins is 21 ($4\times 4 + 2\times 2 + 1\times 1$). For a given image size, we pre-calculate the size of the output fixed-length vector. Assume the feature maps after Basic Block 4 are of size $a\times a$ (for example, $32\times 32$). When the pooling level is $4\times 4$, we divide the $32\times 32$ feature map into 16 small blocks, each of size $8\times 8$. Then a GAP is performed on each $8\times 8$ block to obtain a 16-dimensional feature vector. In the PyTorch toolbox, we can use average pooling (stride: 8, kernel: 8) to achieve such a sliding-window pooling operation. The pooling levels of $2\times 2$ and $1\times 1$ are handled similarly. Finally, we get a ($4\times 4 + 2\times 2 + 1\times 1) \times k$-dimensional vector, where $k$ is the number of filters in the last convolutional layer. \par Interestingly, the $1\times 1$ pooling level is actually equal to the global average pooling layer used in many steganalysis networks. This shows that we gather information from the feature maps at different levels, which not only integrates features of different scales but also better models the local features. \par To verify the effectiveness of the SPP module in feature extraction, we compare Zhu-Net (with the SPP module) with Ye-Net and Yedroudj-Net (with a GAP module). All the networks are trained against WOW and S-UNIWARD at a payload of 0.2 bpp. The experimental results are shown in Table V. Considering GPU computing power and time limitations, we construct a training set with two predefined sizes: $224\times 224$ and $256\times 256$.
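The fixed-length mapping of Steps 1-5 above can be sketched in plain Python for a single feature map, with the window and stride computed as in Step 1. This is a simplified sketch using average pooling only; the names are ours.

```python
import math

# SPP mapping for one a x a feature map: each pyramid level of n x n bins
# is a sliding-window average pool with window ceil(a/n) and stride
# floor(a/n) (Step 1); the pooled values of all levels are concatenated.

def spp_fixed_vector(fmap, levels=(4, 2, 1)):
    """Map one square feature map to a fixed-length list of averages."""
    a = len(fmap)
    out = []
    for n in levels:
        win, stride = math.ceil(a / n), a // n
        for bi in range(n):
            for bj in range(n):
                i0, j0 = bi * stride, bj * stride
                vals = [fmap[i][j]
                        for i in range(i0, min(i0 + win, a))
                        for j in range(j0, min(j0 + win, a))]
                out.append(sum(vals) / len(vals))
    return out

# Whatever the input size, the output length is M = 4*4 + 2*2 + 1*1 = 21.
vec_32 = spp_fixed_vector([[1.0] * 32 for _ in range(32)])
vec_20 = spp_fixed_vector([[1.0] * 20 for _ in range(20)])
```

Both a $32\times 32$ and a $20\times 20$ map yield 21 values per channel, which is what allows the fully connected layers to accept images of arbitrary size.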
We resampled all the $512\times 512$ images to $256\times 256$ and $224\times 224$ images. For comparison with the existing networks, the size of the testing images is still $256\times 256$. \begin{table} \centering \caption{Steganalysis error rates comparison of Zhu-Net with different training schemes, Ye-Net and Yedroudj-Net against the two algorithms WOW and S-UNIWARD at 0.2 bpp. All networks are trained and tested on BOSS dataset. } \begin{tabular}{ccc} \hline Algorithms& \tabincell{c}{WOW\\(0.2bpp)}& \tabincell{c}{S-UNIWARD\\(0.2bpp)}\\ \hline \tabincell{c}{Ye-Net} & 0.331 & 0.400 \\ \tabincell{c}{Yedroudj-Net} & 0.278 & 0.248 \\ \tabincell{c}{Zhu-Net with 256-size testing} & \textbf{0.234} & \textbf{0.281} \\ \tabincell{c}{Zhu-Net with multi-size testing} & 0.241 & 0.289 \\ \hline \end{tabular} \end{table} \par The experimental results show that Zhu-Net has higher accuracy than Yedroudj-Net and Ye-Net, and that, compared with single-size training, multi-size training can slightly improve the accuracy. We believe that multi-size training relieves overfitting to a certain extent, and that the multi-size dataset enhances the generalization ability of the network. \par Besides, we create a random-sized test set whose image sizes range over [224, 256]. The error rates of Zhu-Net against WOW and S-UNIWARD at 0.2 bpp are 0.241 and 0.289, respectively. \par The experimental results show that detection with the SPP module is better than detection with GAP, and that the former network has better feature expression ability. Another advantage of the SPP module is that it can handle inputs of arbitrary size. \section{Experiments} \subsection{The environments} \par In our experiments, we use two well-known content-adaptive steganographic methods, S-UNIWARD [3] and WOW [2], using Matlab implementations with random embedding keys.
\par Our proposed CNN is compared with four popular detectors: Xu-Net[17], Ye-Net[19], Yedroudj-Net[21], and SRM+EC, i.e. the hand-crafted feature set Spatial Rich Model [13] with an Ensemble Classifier[32]. All five schemes are tested on the same datasets. All the experiments were run on an Nvidia GTX 1080Ti GPU card. \subsection{Datasets} \par In this paper, we use two standard datasets to test the performance of the proposed network: \begin{itemize} \item BOSSBase v1.01[33], consisting of 10,000 grey-level images of size $512\times 512$, never compressed, coming from 7 different cameras. \item BOWS2[34], consisting of 10,000 grey-level images of size $512\times 512$, never compressed, whose distribution is close to BOSSBase. \end{itemize} \par Due to our GPU computing power and time limitations, we do all the experiments on images of $256\times 256$ pixels. The specific training/test set division is detailed in Section IV-D. \subsection{Hyper-parameters} \par We apply mini-batch stochastic gradient descent (SGD) to train the CNNs. The momentum and the weight decay are set to 0.9 and 0.0005, respectively. Due to GPU memory limitations, the mini-batch size in training is set to 16 (8 cover/stego pairs). All layers are initialized using the Xavier method[35]. Based on the above settings, the networks are trained to minimize the cross-entropy loss. During training, the learning rate is initialized to 0.005 and divided by 5 whenever the epoch reaches one of the specified step values, namely epochs 50, 150 and 250. In later training, the smaller learning rate effectively reduces the training loss and improves accuracy. Training runs for up to 400 epochs; actually, we often stop training before 400 epochs to prevent over-fitting.
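The step schedule described above (initial rate 0.005, divided by 5 when the epoch reaches 50, 150 and 250) can be written compactly; the function name is ours.

```python
# Piecewise-constant learning-rate schedule used for training:
# lr starts at 0.005 and is divided by 5 at epochs 50, 150 and 250.

def learning_rate(epoch, base_lr=0.005, steps=(50, 150, 250), factor=5.0):
    """Learning rate in effect at a given (0-based) epoch."""
    drops = sum(1 for s in steps if epoch >= s)
    return base_lr / (factor ** drops)
```

In PyTorch the same behaviour is typically obtained with a multi-step scheduler; the plain function above just makes the schedule explicit.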
Specifically, we stop when the cross-entropy loss on the training set keeps decreasing but the detection accuracy on the validation set begins to decline, and we choose the model that performs best on the validation set. \subsection{Results} \subsubsection{Results without data augmentation} \begin{table} \centering \caption{Steganalysis error rates comparison using Zhu-Net, Yedroudj-Net, Xu-Net, Ye-Net, and SRM+EC against two steganography algorithms WOW and S-UNIWARD at 0.2 bpp and 0.4 bpp. All networks are trained and tested on BOSS dataset.} \begin{tabular}{ccccc} \hline Algorithms& \tabincell{c}{WOW\\(0.2bpp)}& \tabincell{c}{WOW\\(0.4bpp)}& \tabincell{c}{S-UNIWARD\\(0.2bpp)}& \tabincell{c}{S-UNIWARD\\(0.4bpp)}\\ \hline SRM+EC & 0.365 & 0.255 & 0.366 & 0.247 \\ Xu-Net & 0.324 & 0.207 & 0.391 & 0.272 \\ Ye-Net & 0.331 & 0.232 & 0.400 & 0.312 \\ Yedroudj-Net & 0.278 & 0.141 & 0.367 & 0.228 \\ Zhu-Net & \textbf{0.233} & \textbf{0.118} & \textbf{0.285} & \textbf{0.153} \\ \hline \end{tabular} \end{table} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{fig10.jpg} \caption{Steganalysis error rates comparison of the five steganalysis methods against two algorithms WOW and S-UNIWARD at 0.2 bpp and 0.4 bpp. All networks are trained and tested on BOSS dataset.} \end{figure} \par In Table VI, we report the performance comparison among steganalyzers without data augmentation. The BOSSBase images were randomly split into a training set with 4,000 cover/stego image pairs, a validation set with 1,000 image pairs, and a testing set containing 5,000 image pairs. For a fair comparison, we report the performance of Yedroudj-Net, Ye-Net, Xu-Net, and the Spatial Rich Model with the Ensemble Classifier (SRM+EC) against the embedding algorithms WOW and S-UNIWARD at payloads 0.2 bpp and 0.4 bpp. \par As Fig. 8 shows, the proposed network has significantly better performance than the other networks, regardless of the embedding method and payload.
Thanks to the feature extraction ability of CNNs, the proposed network reduces the error rate by 8.1\% to 13.7\% compared with the traditional scheme SRM+EC. The results also show that it is effective to use the proposed network to optimize feature extraction and classification in a unified framework. \par In addition, for S-UNIWARD and WOW at different payloads, the proposed network is 8.9\% to 11.9\% better than Xu-Net, 9.8\% to 15.9\% better than Ye-Net, and 2.3\% to 8.2\% better than Yedroudj-Net. This demonstrates that the proposed network effectively extracts the correlations of the residuals, and that its structure, including the multi-level pooling of the SPP module, improves the accuracy. Briefly, the experiments show that Zhu-Net outperforms the other networks against various steganography schemes at all tested payloads. Note that the above experiments were conducted without tricks such as transfer learning or virtual expansion of the databases. \subsubsection{Results with data augmentation} \begin{table} \centering \caption{Steganalysis error rates comparison using Yedroudj-Net, Ye-Net and Zhu-Net on WOW at 0.2 bpp with a learning base augmented with BOWS2, and Data Augmentation} \begin{tabular}{cccc} \hline Algorithms & \tabincell{c}{BOSS} & \tabincell{c}{BOSS+BOWS2} & \tabincell{c}{BOSS+BOWS2+DA} \\ \hline Ye-Net & 0.331 & 0.261 & 0.222 \\ Yedroudj-Net & 0.278 & 0.237 & 0.208 \\ Zhu-Net & \textbf{0.233} & \textbf{0.178} & \textbf{0.131} \\ \hline \end{tabular} \end{table} \begin{table} \centering \caption{Steganalysis error rates comparison using Yedroudj-Net, Ye-Net and Zhu-Net on S-UNIWARD at 0.2 bpp with a learning base augmented with BOWS2, and Data Augmentation} \begin{tabular}{cccc} \hline Algorithms & \tabincell{c}{BOSS} & \tabincell{c}{BOSS+BOWS2} & \tabincell{c}{BOSS+BOWS2+DA} \\ \hline Ye-Net & 0.400 & - & 0.335 \\ Yedroudj-Net & 0.366 & 0.344 & 0.311 \\ Zhu-Net & \textbf{0.285} & \textbf{0.243} & \textbf{0.171} \\ \hline \end{tabular} \end{table} \begin{table} \centering
\caption{Steganalysis error rates of Ye-Net, Yedroudj-Net and Zhu-Net on WOW and S-UNIWARD at different payloads with data augmentation} \begin{tabular}{ccccc} \hline Algorithms& \tabincell{c}{Payload\\(bpp)}& \tabincell{c}{Ye-Net[13]}& \tabincell{c}{Yedroudj-Net[31]} & \tabincell{c}{Zhu-Net}\\ \hline \multirow{4}*{WOW} & 0.1 & 0.348 & 0.330 & \textbf{0.233} \\ & 0.2 & 0.262 & 0.208 & \textbf{0.131} \\ & 0.3 & 0.225 & 0.189 & \textbf{0.084} \\ & 0.4 & 0.184 & 0.158 & \textbf{0.065} \\ \cline{1-5} \multirow{4}*{S-UNIWARD} & 0.1 & 0.400 & 0.383 & \textbf{0.268} \\ & 0.2 & 0.335 & 0.331 & \textbf{0.171} \\ & 0.3 & 0.256 & 0.221 & \textbf{0.125} \\ & 0.4 & 0.226 & 0.171 & \textbf{0.081} \\ \hline \end{tabular} \end{table} \begin{figure*} \centering \includegraphics[width=1\textwidth]{fig111.eps} \caption{Steganalysis error rates of Ye-Net, Yedroudj-Net and Zhu-Net on WOW and S-UNIWARD at different payloads. (a) WOW. (b) S-UNIWARD.} \end{figure*} \par Data augmentation can effectively improve network performance by increasing the size of the training database. A larger database improves accuracy and helps avoid overfitting. However, traditional augmentation operations such as cropping and resizing are poor choices for steganalysis, because they destroy the correlations between pixels and drastically reduce network performance. \par To study the effect of enlarging the dataset on performance, we use the following schemes. All images are resampled to $256\times 256$ pixels (using the ``imresize()'' function in Matlab with default settings): \par (1) training set BOSS: The BOSSBase images were randomly divided into a training set with 4,000 cover and stego image pairs, a validation set with 1,000 image pairs, and a testing set containing 5,000 image pairs.
\par (2) training set BOSS+BOWS2: Starting from training set BOSS, 10,000 additional cover/stego pairs (obtained by resampling BOWS2Base[36]) were added to the training set. The training database now contains 14,000 pairs of cover/stego images, and the validation set contains 1,000 pairs from BOSS. \par (3) training set BOSS+BOWS2+DA: The BOSS+BOWS2 training set is virtually augmented by applying label-preserving flips and rotations. Its size is thus increased by a factor of 8, which gives a final learning database of 112,000 pairs of cover/stego images. The validation set contains 1,000 pairs from BOSS. \par (4) testing set BOSS: it contains the remaining 5,000 BOSSBase images, i.e., those belonging to neither the training set nor the validation set. \par Tables VII and VIII show the comparison of Yedroudj-Net, Ye-Net and Zhu-Net trained on the different training sets, against the embedding algorithms WOW and S-UNIWARD at a payload of 0.2 bpp. The experimental results show that when the training set is enlarged, the detection performance of all the networks improves compared with training on the BOSS set only. For WOW at 0.2 bpp, training on BOSS+BOWS2 instead of BOSS alone reduces Zhu-Net's error rate by 5.5\%, the best result among all competitors; the Yedroudj-Net and Ye-Net error rates are reduced by 4.1\% and 7\%, respectively. Similarly, for S-UNIWARD at 0.2 bpp, the detection error rates of Yedroudj-Net and Zhu-Net decrease by 2.2\% and 3.6\%, respectively, compared with using the BOSS training set only, and Zhu-Net again achieves the best performance among all competitors. These results show that over-fitting is effectively mitigated by enlarging the training set. \par This prompted us to use larger datasets for training. We further train the three networks on BOSS+BOWS2+DA. The results show that all CNN-based methods improve their performance.
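The factor-of-8 virtual expansion in scheme (3) corresponds to the eight label-preserving symmetries of a square image: four rotations, each optionally mirrored. A minimal sketch in NumPy (the function name is ours, not from the paper, and the actual training pipeline may implement this differently):

```python
import numpy as np

def d4_augment(img):
    """Return the 8 label-preserving variants of a square image:
    rotations by 0, 90, 180, 270 degrees, each optionally mirrored.
    Unlike cropping or resizing, these symmetries preserve the
    pixel correlations that steganalysis features rely on."""
    variants = []
    for k in range(4):                   # the four rotations
        rot = np.rot90(img, k)
        variants.append(rot)             # rotated copy
        variants.append(np.fliplr(rot))  # rotated-and-mirrored copy
    return variants
```

Applied to each of the 14,000 BOSS+BOWS2 training pairs, this yields $8\times 14{,}000=112{,}000$ pairs, matching the figure quoted above; the same transform must be applied jointly to a cover and its stego version so that the pair stays consistent.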
Compared with training on BOSS only, Zhu-Net again achieves the best performance, with the detection error decreased by 10.2\% and 11.4\% against WOW and S-UNIWARD, respectively (Ye-Net by 10.9\% and 6.5\%, and Yedroudj-Net by 7\% and 5.5\%). \par In Table IX and Fig. 9, we further illustrate the detection errors of the three CNN-based steganalyzers against WOW and S-UNIWARD at different payloads. Zhu-Net achieves significant improvements and the best results compared with the other CNN-based networks on the different datasets and against the various steganography algorithms. Again, we attribute this improvement to the network structure of Zhu-Net, including the sepconv blocks and the SPP module. \par All the experiments show that, in order to perform feature extraction and classification effectively, a CNN requires enough training samples; even 112,000 image pairs may not be sufficient. How to further enlarge datasets to meet the requirements of steganalysis tasks needs further research. \section{Conclusion} \par For steganalysis, it is significantly better to use a CNN than the traditional approach of handcrafted features with an ensemble classifier trained on the Rich Model. In this paper, we focus on designing a new CNN structure for steganalysis. The proposed network achieves a large improvement over existing CNN-based networks. Its advantages are the following: (1) We improve the convolution kernels in the preprocessing layer to extract image residuals; the improved kernels reduce the number of parameters and better model local features. (2) We use separable convolutions to extract the channel and spatial correlations of the residuals, so that image content is suppressed in the features and the signal-to-noise ratio is improved; this makes the utilization of the residuals from the preprocessing layer more effective. (3) We use an SPP module instead of a global pooling layer.
By applying average pooling at several levels to obtain multi-level features, network performance is improved. Moreover, the SPP module is a flexible solution for handling inputs of different sizes: it maps the feature maps to a representation of fixed dimension, enabling the detection of arbitrarily sized images without loss of accuracy. Finally, the performance of the proposed CNN is further boosted by using larger datasets. Experimental results show that the proposed network achieves significantly better detection accuracy than the other networks. \ifCLASSOPTIONcaptionsoff \newpage \fi
\section{Introduction} Let $M$ be a symmetric $n\times n$ matrix with integer entries $m_{ij}\in\mathbb N\cup\{\infty\}$ where $m_{ii}=1$ and $m_{ij}\ge 2$ for $i\ne j$. The \emph{Artin group} of type $M$ is defined by the presentation $$ A(M)=\langle s_1,\ldots,s_n\mid \underbrace{s_is_js_i\cdots}_{m_{ij}} =\underbrace{s_js_is_j\cdots}_{m_{ij}} \quad\mbox{for all $i\ne j$, $m_{ij}\ne\infty$}\rangle. $$ The \emph{Coxeter group} $W(M)$ of type $M$ is the quotient of $A(M)$ by the relation $s_i^2=1$. We say that $A(M)$ is of \emph{finite type} if the associated Coxeter group $W(M)$ is finite, and that $A(M)$ is of \emph{affine (or Euclidean) type} if $W(M)$ acts as a proper, cocompact group of isometries on some Euclidean space with the generators $s_1,\ldots,s_n$ acting as affine reflections. It is convenient to define an Artin group by a \emph{Coxeter graph}, whose vertices are numbered $1,\ldots,n$ and which has an edge labelled $m_{ij}$ between the vertices $i$ and $j$ whenever $m_{ij}\ge 3$ or $m_{ij}=\infty$. The label 3 is usually suppressed. In this paper, we show the uniqueness of roots up to conjugacy for elements of the Artin groups of finite type $\mathbf A_n$, $\mathbf B_n=\mathbf C_n$ and affine type $\tilde \mathbf A_{n-1}$, $\tilde \mathbf C_{n-1}$. The Coxeter graphs associated to them are in Figure~\ref{fig:graph}. 
\begin{figure}[t] \begin{tabular}{ccccc} \raisebox{1em}{$\mathbf A_n$} & \includegraphics[scale=.5]{Artin-An.eps} &\mbox{}\qquad\mbox{}& \raisebox{1em}{$\tilde \mathbf A_{n-1}$} & \includegraphics[scale=.5]{Artin-A-tilde.eps}\\ \raisebox{1em}{$\mathbf B_n$} & \includegraphics[scale=.5]{Artin-Bn.eps} && \raisebox{1em}{$\tilde \mathbf C_{n-1}$} & \includegraphics[scale=.5]{Artin-C-tilde.eps} \end{tabular} \caption{Coxeter graphs} \label{fig:graph} \end{figure} \begin{theorem}\label{thm:artin} Let\/ $G$ denote one of the Artin groups of finite type $\mathbf A_n$, $\mathbf B_n=\mathbf C_n$ and affine type $\tilde \mathbf A_{n-1}$, $\tilde \mathbf C_{n-1}$. If\/ $\alpha,\beta\in G$ are such that $\alpha^k=\beta^k$ for some nonzero integer $k$, then $\alpha$ and $\beta$ are conjugate in $G$. \end{theorem} In fact, we prove a stronger theorem. Before stating it, let us explain the motivations. The Artin group $A(\mathbf A_n)$ is well-known as $B_{n+1}$, the braid group on $n+1$ strands. The generators of $B_{n+1}$ are usually written as $\sigma_i$, hence it has the presentation $$ B_{n+1}=\left\langle\sigma_1,\ldots,\sigma_n\biggm| \begin{array}{ll} \sigma_i\sigma_j=\sigma_j\sigma_i & \mbox{if } |i-j| > 1, \\ \sigma_i\sigma_{j}\sigma_i=\sigma_{j}\sigma_i\sigma_{j} & \mbox{if } |i-j| = 1. \end{array} \right\rangle. $$ The following are well-known theorems on uniqueness of roots of braids. \begin{theorem}[J.~Gonz\'alez-Meneses~\cite{Gon03}]\label{thm:gon} Let $\alpha$ and $\beta$ be $n$-braids such that $\alpha^k=\beta^k$ for some nonzero integer $k$. Then $\alpha$ and $\beta$ are conjugate in $B_n$. \end{theorem} \begin{theorem}[V.~G.~Bardakov~\cite{Bar}, D.~Kim and D.~Rolfsen~\cite{KR03}]\label{thm:KR03} Let $\alpha$ and $\beta$ be pure braids such that $\alpha^k=\beta^k$ for some nonzero integer $k$. Then $\alpha$ and $\beta$ are equal. 
\end{theorem} Theorem~\ref{thm:gon} was conjectured by G.~S.~Makanin~\cite{Mak71} in the early seventies, and proved recently by J.~Gonz\'alez-Meneses. Thus the new contribution of Theorem~\ref{thm:artin} is for the Artin groups of type $\mathbf B_n=\mathbf C_n$, $\tilde \mathbf A_{n-1}$ and $\tilde \mathbf C_{n-1}$. Theorem~\ref{thm:KR03} was first proved by V.~G.~Bardakov by combinatorial arguments, and it follows easily from the bi-orderability of pure braids by D.~Kim and D.~Rolfsen. (To see this, let $<$ be a bi-ordering of pure braids. If $\alpha>\beta$ (resp. $\alpha<\beta$), then $\alpha^k>\beta^k$ (resp. $\alpha^k<\beta^k$) for all $k\ge 1$. Therefore, $\alpha^k=\beta^k$ implies $\alpha=\beta$.) It is worth mentioning that D.~Bessis showed the uniqueness of roots up to conjugacy for periodic elements in the braid groups of irreducible well-generated complex reflection groups, and hence for periodic elements in finite type Artin groups: if $G$ is the braid group of an irreducible well-generated complex reflection group and if $\alpha,\beta\in G$ are such that $\alpha$ has a central power and $\alpha^k=\beta^k$ for some nonzero integer $k$, then $\alpha$ and $\beta$ are conjugate in $G$~\cite[Theorem 12.5~(ii)]{Bes06}. Comparing the above two theorems, we can see that one obtains a stronger result for pure braids. Motivated by the above observation, we study the case of ``partially pure'' braids---that is, braids some of whose strands are pure. Moreover, the Artin groups $A(\mathbf B_n)$, $A(\tilde \mathbf A_{n-1})$ and $A(\tilde \mathbf C_{n-1})$ are isomorphic to some subgroups of $B_{n+1}$, which can be described by pure strands and linking number of the first strand with the other strands. In order to deal with ``partially pure'' braids and elements of those Artin groups simultaneously, we introduce the following definitions. \begin{definition} For an $n$-braid $\alpha$, let $\pi_\alpha$ denote the induced permutation of $\alpha$. 
\begin{itemize} \item For an $n$-braid $\alpha$ and an integer $1\le i\le n$, we say that $\alpha$ is \emph{$i$-pure}, or the $i$-th strand of $\alpha$ is \emph{pure}, if $\pi_\alpha(i)=i$. \item Let $B_{n,1}$ denote the subgroup of $B_n$ consisting of 1-pure braids. \item Let $P\subset\{1,\ldots,n\}$. An $n$-braid $\alpha$ is said to be \emph{$P$-pure} if $\alpha$ is $i$-pure for each $i\in P$. Note that $\{1,\ldots,n\}$-pure braids are nothing more than pure braids in the usual sense. \item Let $P\subset\{1,\ldots,n\}$. An $n$-braid is said to be \emph{$P$-straight} if it is $P$-pure and it becomes trivial when we delete all the $i$-th strands for $i\not\in P$. Note the following: if $|P|=1$, then a braid is $P$-pure if and only if it is $P$-straight; if $|P|=n$ and $\alpha$ is a $P$-straight $n$-braid, then $\alpha$ is the identity; a braid $\alpha$ is called a \emph{Brunnian braid} if it is $P$-straight for all $P$ with $|P|=n-1$. \end{itemize} \end{definition} For example, the braid in Figure~\ref{fig:p-pure} is $\{1,4,5\}$-pure, $\{1,4\}$-straight and $\{1,5\}$-straight. \begin{definition} There is a homomorphism $\operatorname{lk}:B_{n,1}\to \mathbb Z$ which measures the linking number of the first strand with the other strands: let $\sigma_1,\ldots,\sigma_{n-1}$ be the Artin generators for $B_n$; then $B_{n,1}$ is generated by $\sigma_1^2,\sigma_2,\sigma_3,\ldots,\sigma_{n-1}$, and the homomorphism $\operatorname{lk}$ is defined by $\operatorname{lk}(\sigma_1^2)=1$ and $\operatorname{lk}(\sigma_i)=0$ for $i\ge 2$. Note that $\operatorname{lk}(\cdot)$ is a conjugacy invariant in $B_{n,1}$ because $\operatorname{lk}(\gamma\alpha\gamma^{-1})=\operatorname{lk}(\gamma)+\operatorname{lk}(\alpha)-\operatorname{lk}(\gamma)=\operatorname{lk}(\alpha)$ for any $\alpha,\gamma\in B_{n,1}$. A braid $\alpha$ is said to be \emph{1-unlinked} if it is 1-pure and $\operatorname{lk}(\alpha)=0$. For example, the braid in Figure~\ref{fig:p-pure} is 1-unlinked.
\end{definition} \begin{figure} $$ \includegraphics[scale=.9]{root-P-pure.eps} $$ \caption{This braid is $\{1,4,5\}$-pure, $\{1,4\}$-straight, $\{1,5\}$-straight and 1-unlinked.}\label{fig:p-pure} \end{figure} It is well known that the following isomorphisms hold~\cite{Cri99, All02, CC05, BM05}: \begin{eqnarray*} A(\mathbf B_n) &\simeq& B_{n+1,1};\\ A(\tilde \mathbf A_{n-1}) &\simeq& \{\alpha\in B_{n+1,1}\mid \mbox{$\alpha$ is 1-unlinked} \};\\ A(\tilde \mathbf C_{n-1}) &\simeq& \{\alpha\in B_{n+1,1}\mid \mbox{$\alpha$ is $\{1,n+1\}$-pure} \}. \end{eqnarray*} As we do not need to consider the Artin group $A(\mathbf A_n)$ due to Theorem~\ref{thm:gon}, it suffices to consider 1-pure braids. From now on, we restrict ourselves to $B_{n,1}$, the group of 1-pure braids on $n$ strands. Exploiting the Nielsen-Thurston classification of braids, we establish the following theorem. \begin{theorem}\label{thm:main} Let $P$ be a subset of\/ $\{1,\ldots,n\}$ with $1\in P$. Let $\alpha$ and $\beta$ be $P$-pure $n$-braids such that $\alpha^k=\beta^k$ for some nonzero integer $k$. Then there exists a $P$-straight, 1-unlinked $n$-braid $\gamma$ with $\beta=\gamma\alpha\gamma^{-1}$. \end{theorem} Applying Theorem~\ref{thm:main} to $\{1\}$-pure $(n+1)$-braids (resp. $\{1,n+1\}$-pure $(n+1)$-braids), we have Theorem~\ref{thm:artin} for $A(\mathbf B_n)$ and $A(\tilde\mathbf A_{n-1})$ (resp. for $A(\tilde\mathbf C_{n-1})$). \smallskip We close this section with some remarks. An easy consequence of Theorem~\ref{thm:artin} is the following. \begin{quote} Let $G$ denote one of the Artin groups of finite type $\mathbf A_n$, $\mathbf B_n=\mathbf C_n$ and affine type $\tilde \mathbf A_{n-1}$, $\tilde \mathbf C_{n-1}$. Let $\alpha,\beta\in G$ and let $k$ be a nonzero integer. Then $\alpha$ is conjugate to $\beta$ if and only if\/ $\alpha^k$ is conjugate to $\beta^k$. 
\end{quote} \smallskip Theorem~\ref{thm:KR03} follows easily from Theorem~\ref{thm:main}: Let $\alpha$ and $\beta$ be pure $n$-braids with $\alpha^k=\beta^k$. In our terminology, both $\alpha$ and $\beta$ are $\{1,\ldots,n\}$-pure, hence there exists a $\{1,\ldots,n\}$-straight $n$-braid $\gamma$ such that $\beta=\gamma\alpha\gamma^{-1}$. Because $\gamma$ is $\{1,\ldots,n\}$-straight, we have $\gamma=1$, hence $\alpha=\beta$. \smallskip Theorem~\ref{thm:artin}, even for $A(\mathbf B_n)$, does not follow easily from Theorem~\ref{thm:gon} because there are 1-pure braids that are conjugate in $B_n$, but not in the subgroup $B_{n,1}$, as in the following example. \begin{example} Consider the 1-pure 3-braids which are depicted in Figure~\ref{fig:B31}: $$ \left\{\begin{array}{l} \alpha_1=\sigma_1^2,\\ \beta_1=\sigma_2^2,\end{array}\right. \qquad\mbox{and}\qquad \left\{\begin{array}{l} \alpha_2=\sigma_1^2\sigma_2^4,\\ \beta_2=\sigma_2^2\sigma_1^4. \end{array}\right. $$ Because $\Delta\alpha_i\Delta^{-1}=\beta_i$ for $i=1,2$, where $\Delta=\sigma_1\sigma_2\sigma_1$, the braid $\alpha_i$ is conjugate to $\beta_i$ in $B_3$. However, $\alpha_i$ is not conjugate to $\beta_i$ in $B_{3,1}$ for $i=1, 2$ because $\operatorname{lk}(\alpha_1)=\operatorname{lk}(\alpha_2)=1$, $\operatorname{lk}(\beta_1)=0$ and $\operatorname{lk}(\beta_2)=2$. Note that $\alpha_1$ and $\beta_1$ are reducible, and that $\alpha_2$ and $\beta_2$ are pseudo-Anosov. 
\end{example} \begin{figure}\center \begin{tabular}{cccc} \mbox{}~\includegraphics[scale=1.2]{sym-11.eps}~\mbox{} & \mbox{}~\includegraphics[scale=1.2]{sym-22.eps}~\mbox{} & \mbox{}~\includegraphics[scale=1.2]{sym-11-2222.eps}~\mbox{} & \mbox{}~\includegraphics[scale=1.2]{sym-22-1111.eps}~\mbox{}\\ $\alpha_1=\sigma_1^2$ & $\beta_1=\sigma_2^2$ & $\alpha_2=\sigma_1^2\sigma_2^4$ & $\beta_2=\sigma_2^2\sigma_1^4$ \end{tabular} \caption{$\alpha_i, \beta_i\in B_{n,1}$ are conjugate in $B_n$ but not in $B_{n,1}$.} \label{fig:B31} \end{figure} \section{Preliminaries} Here, we review basic definitions and results on braids. See~\cite{Art25,Bir,Thu88,FLP79,GW04,LL08}. Let $D^2=\{z\in\mathbb{C}: |z|\le n+1\}$, and let $D_n$ be the $n$-punctured disk $D^2\setminus\{1,2,\ldots,n\}$. The Artin braid group $B_n$ is the group of automorphisms of $D_n$ that fix the boundary pointwise, modulo isotopy relative to the boundary. Geometrically, an $n$-braid can be interpreted as an isotopy class of collections $l=l_1\cup\cdots \cup l_n\subset D^2\times[0,1]$ of $n$ pairwise disjoint strands such that $l\cap (D^2\times\{t\})$ consists of $n$ points for each $t\in[0,1]$ and, in particular, equals $\{(1,t),\ldots,(n,t)\}$ for $t\in\{0,1\}$. The admissible isotopies lie in the interior of $D^2\times [0,1]$. The center of the $n$-braid group $B_n$ is infinite cyclic, generated by $\Delta^2$, where $\Delta=\sigma_1(\sigma_2\sigma_1)\cdots (\sigma_{n-1}\cdots\sigma_1)$. The well-known Nielsen-Thurston classification of mapping classes of punctured surfaces into periodic, reducible and pseudo-Anosov ones~\cite{Thu88,FLP79} yields an analogous classification of braids: an $n$-braid $\alpha$ is \emph{periodic} if some power of $\alpha$ is central; $\alpha$ is \emph{reducible} if there exists an essential curve system in $D_n$ which is invariant up to isotopy under the action of $\alpha$; $\alpha$ is \emph{pseudo-Anosov} if no non-trivial power of $\alpha$ is reducible.
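As a concrete complement to these conventions, the induced permutation of a braid and the linking numbers of its first strand with the other strands (the homomorphism $\operatorname{lk}$ of the introduction, refined strand by strand as the $\operatorname{lk}_i$ used later) can be computed from a word in the generators $\sigma_i$ by tracking which strand occupies each position and counting the signed crossings that involve strand 1, each contributing half its sign. A small computational sketch (the function name is ours; we assume the paper's convention that braids act on the left, so a word written left to right is processed right to left):

```python
from fractions import Fraction

def lk_profile(word, n):
    """word: list of pairs (i, e) encoding sigma_i^e with 1 <= i <= n-1
    and e = +1 or -1, written left to right; since braids act on the
    left, the word is processed right to left.  Returns (perm, lk):
    perm[j] is the ending position of the strand starting at j, and
    lk[j] is the linking number of strand 1 with strand j."""
    pos = list(range(n + 1))                 # pos[k] = strand currently at position k
    lk = {j: Fraction(0) for j in range(2, n + 1)}
    for i, e in reversed(word):
        a, b = pos[i], pos[i + 1]
        if 1 in (a, b):                      # this crossing involves strand 1
            lk[a + b - 1] += Fraction(e, 2)  # each crossing contributes sign/2
        pos[i], pos[i + 1] = b, a            # sigma_i and its inverse both swap positions i, i+1
    perm = {pos[k]: k for k in range(1, n + 1)}
    return perm, lk
```

For $\sigma_1^2\in B_2$ this gives $\operatorname{lk}_2=1$, and on the $4$-braid $\alpha$ of Figure~\ref{fig:lk} it reproduces $\operatorname{lk}_2(\alpha)=0$, $\operatorname{lk}_3(\alpha)=-1$, $\operatorname{lk}_4(\alpha)=1$, hence $\operatorname{lk}(\alpha)=0$; a braid is $i$-pure exactly when \texttt{perm[i] == i}.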
\begin{lemma}\label{lem:NT} Let $\alpha,\beta\in B_n$ be such that $\alpha^k=\beta^k$ for a nonzero integer $k$. Then \begin{itemize} \item[(i)] $\alpha$ and $\beta$ are of the same Nielsen-Thurston type; \item[(ii)] if $\alpha$ is pseudo-Anosov, then $\alpha=\beta$. \end{itemize} \end{lemma} \begin{proof} (i) is well known. (ii) was proved by Gonz\'alez-Meneses~\cite{Gon03}. \end{proof} \subsection{Periodic braids} Let $\delta=\sigma_{n-1}\cdots\sigma_1$ and $\epsilon=\delta\sigma_1$, then $\delta^n = \Delta^2 = \epsilon^{n-1}$. (If we need to specify the number of strands, we will write $\delta=\delta_{(n)}$, $\epsilon = \epsilon_{(n)}$ and $\Delta = \Delta_{(n)}$.) Note that $\delta$ and $\epsilon$ are represented by rigid rotations of the $n$-punctured disk as in Figure~\ref{fig:circ} when the punctures are at the center of the disk or on a round circle centered at the origin. By Brouwer, Ker\'ekj\'art\'o and Eilenberg, it is known that an $n$-braid $\alpha$ is periodic if and only if it is conjugate to a power of either $\delta$ or $\epsilon$~\cite{Bro19,Ker19,Eil34,BDM02}. \begin{figure} \begin{tabular}{ccc} \includegraphics[scale=.65]{sym-disc-c12-num.eps} & & \includegraphics[scale=.65]{sym-disc-c13-num.eps}\\ (a) $\delta_{(n)} = \sigma_{n-1}\sigma_{n-2}\cdots\sigma_1 \in B_{n}$ && (b) $\epsilon_{(n)} = \delta_{(n)}\sigma_1\in B_{n}$ \end{tabular} \caption{ The braid $\delta_{(n)}$ is represented by the $2\pi/n$-rotation of the $n$-punctured disk in a clockwise direction where the punctures lie on a round circle as in (a). The braid $\epsilon_{(n)}$ is represented by the $2\pi/(n-1)$-rotation of the $n$-punctured disk in a clockwise direction where one puncture is at the center and the other $n-1$ punctures lie on a round circle as in (b).} \label{fig:circ} \end{figure} \begin{lemma}\label{lem:per} An $n$-braid $\alpha$ is periodic if and only if $\alpha$ is conjugate to either $\delta^m$ or $\epsilon^m$ for some integer $m$. 
Further, if $\alpha$ is periodic and non-central, then exactly one of the following holds. \begin{itemize} \item[(i)] $\alpha$ is conjugate to $\delta^m$ for some $m\not\equiv 0\pmod n$. In this case, $\alpha$ has no pure strand. \item[(ii)] $\alpha$ is conjugate to $\epsilon^m$ for some $m\not\equiv 0\pmod{n-1}$. In this case, $\alpha$ has only one pure strand. \end{itemize} \end{lemma} \begin{corollary}\label{cor:per} Let $\alpha$ be a periodic $n$-braid whose first strand is pure. \begin{itemize} \item[(i)] If\/ $\alpha$ has at least two pure strands, then $\alpha$ is central. \item[(ii)] If\/ $\alpha$ is 1-unlinked, then $\alpha$ is the identity. \end{itemize} \end{corollary} \begin{proof} (i)\ \ It is immediate from Lemma~\ref{lem:per}. (ii)\ \ Let $\alpha$ be 1-unlinked and periodic. Because $\alpha$ is 1-pure, it is conjugate to $\epsilon^m$. Because $\operatorname{lk}(\epsilon)=1$ and $\alpha$ is 1-unlinked, $0=\operatorname{lk}(\alpha)=\operatorname{lk}(\epsilon^m)=m\operatorname{lk}(\epsilon)=m$, hence $\alpha$ is the identity. \end{proof} \subsection{Reducible braids} \begin{definition} A curve system $\mathcal C$ in $D_n$ means a finite collection of disjoint simple closed curves in $D_n$. It is said to be \emph{essential}\/ if each component is homotopic neither to a point nor to a puncture nor to the boundary. It is said to be \emph{unnested} if none of its components encloses another component as in Figure~\ref{fig:standard}~(b). \end{definition} \begin{definition} The $n$-braid group $B_n$ acts on the set of curve systems in $D_n$. Let $\alpha*\mathcal C$ denote the left action of $\alpha\in B_n$ on the curve system $\mathcal C$ in $D_n$. An $n$-braid $\alpha$ is said to be \emph{reducible} if $\alpha*\mathcal C=\mathcal C$ for some essential curve system $\mathcal C$ in $D_n$. Such a curve system $\mathcal C$ is called a \emph{reduction system} of $\alpha$. 
\end{definition} \subsubsection{Canonical reduction system} For a reduction system $\mathcal C$ of an $n$-braid $\alpha$, let $D_\mathcal C$ be the closure of $D_n\setminus N(\mathcal C)$ in $D_n$, where $N(\mathcal C)$ is a regular neighborhood of $\mathcal C$. The restriction of $\alpha$ induces an automorphism on $D_\mathcal C$ that is well defined up to isotopy. Due to Birman, Lubotzky and McCarthy~\cite{BLM83} and Ivanov~\cite{Iva92}, for any $n$-braid $\alpha$, there is a unique \emph{canonical reduction system} $\operatorname{\mathcal R}(\alpha)$ with the following properties. \begin{enumerate} \item[(i)] $\operatorname{\mathcal R}(\alpha^m)=\operatorname{\mathcal R}(\alpha)$ for all $m\ne 0$. \item[(ii)] $\operatorname{\mathcal R}(\beta\alpha\beta^{-1})=\beta*\operatorname{\mathcal R}(\alpha)$ for all $\beta\in B_n$. \item[(iii)] The restriction of $\alpha$ to each component of $D_{\operatorname{\mathcal R}(\alpha)}$ is either periodic or pseudo-Anosov. A reduction system with this property is said to be \emph{adequate}. \item[(iv)] If $\mathcal C$ is an adequate reduction system of $\alpha$, then $\operatorname{\mathcal R}(\alpha)\subset\mathcal C$. \end{enumerate} By the properties of canonical reduction systems, a braid $\alpha$ is reducible and non-periodic if and only if $\operatorname{\mathcal R}(\alpha)\ne\emptyset$. Let $\R_\ext(\alpha)$ denote the collection of the outermost components of $\operatorname{\mathcal R}(\alpha)$. Then $\R_\ext(\alpha)$ is an unnested curve system satisfying the properties (i) and (ii).
\subsubsection{Standard reduction system} In this paper we use a notation, introduced in~\cite{LL08}, for reducible braids with standard reduction system. \begin{definition} An essential curve system in $D_n$ is said to be \emph{standard}\/ if each component is isotopic to a round circle centered at the real axis as in Figure~\ref{fig:standard}~(a). \end{definition} The unnested standard curve systems in $D_n$ are in one-to-one correspondence with the $r$-compositions of $n$ for $2\le r\le n-1$. Recall that an ordered $r$-tuple $\mathbf n=(n_1,\ldots,n_r)$ is an \emph{$r$-composition} of $n$ if $n_i\ge 1$ for each $i$ and $n=n_1+\cdots+n_r$. \begin{definition}\label{def:CurSys} For a composition $\mathbf n=(n_1,\ldots,n_r)$ of $n$, let $\mathcal C_\mathbf n$ denote the unnested standard curve system $\cup_{n_i\ge 2}C_i$, where each $C_i$ is a round circle, centered at the real line, enclosing the punctures $\{m\mid \sum_{j=1}^{i-1}n_j< m\le \sum_{j=1}^{i}n_j\}$. For example, Figure~\ref{fig:standard}~(b) shows the unnested standard curve system $\mathcal C_\mathbf n$ for $\mathbf n=(1,1,2,1,2,3)$. \end{definition} \begin{figure} \begin{tabular}{ccc} \includegraphics[scale=.6]{sym-standard.eps} && \includegraphics[scale=.6]{sym-unnested.eps}\\ (a) &\mbox{}\quad\mbox{}& (b) \end{tabular} \vskip -2mm \caption{(a) shows a standard curve system in $D_{10}$. (b) shows the unnested standard curve system $\mathcal C_\mathbf n$ for $\mathbf n=(1,1,2,1,2,3)$} \label{fig:standard} \end{figure} The $r$-braid group $B_r$ acts on the set of $r$-compositions of $n$ via the induced permutations: for an $r$-composition $\mathbf n=(n_1,\cdots,n_r)$ of $n$ and $\alpha\in B_r$ with induced permutation $\theta$, $\alpha*\mathbf n=(n_{\theta^{-1}(1)},\ldots,n_{\theta^{-1}(r)})$. \begin{remark} Throughout this paper, braids and permutations act on the left. 
That is, if $\alpha$ and $\beta$ are $n$-braids, then $(\alpha\beta)*\mathcal C = \alpha*(\beta*\mathcal C)$ for a curve system $\mathcal C$ in $D_n$; if $\alpha$ and $\beta$ are $r$-braids, then $(\alpha\beta)*\mathbf n=\alpha*(\beta*\mathbf n)$ for an $r$-composition $\mathbf n$ of $n$; if $\pi_1$ and $\pi_2$ are $n$-permutations, then $(\pi_1\circ\pi_2)(i)=\pi_1(\pi_2(i))$ for $1\le i\le n$. \end{remark} \begin{definition} Let $\mathbf n=(n_1,\cdots,n_r)$ be a composition of $n$. \begin{itemize} \item Let $\alpha_0=l_1\cup\cdots\cup l_r$ be an $r$-braid with $l_i\cap(D^2\times\{1\})=\{(i, 1)\}$ for each $i$. We define $\myangle{\alpha_0}_\mathbf n$ as the $n$-braid obtained from $\alpha_0$ by taking $n_i$ parallel copies of $l_i$ for each $i$. See Figure~\ref{fig:copy}~(a). \item Let $\alpha_i\in B_{n_i}$ for $i=1,\ldots,r$. We define $(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n$ as the $n$-braid $\alpha_1'\alpha_2'\cdots\alpha_r'$, where each $\alpha_i'$ is the image of $\alpha_i$ under the homomorphism $B_{n_i}\to B_n$ defined by $\sigma_j\mapsto\sigma_{n_1+\cdots+n_{i-1}+j}$. See Figure~\ref{fig:copy}~(b). \end{itemize} \end{definition} We will use the notation $\alpha=\myangle{\alpha_0}_\mathbf n(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n$ throughout the paper. See Figure~\ref{fig:copy}~(c). The following lemma shows some elementary properties. 
\begin{figure} \begin{tabular}{ccccc} \includegraphics[scale=1]{sym-red1.eps} &\qquad& \includegraphics[scale=1]{sym-red2.eps} &\qquad& \includegraphics[scale=1]{sym-red3.eps} \\ \small (a) $\myangle{\sigma_1^{-1}\sigma_2}_\mathbf n$&& \small (b) $(\sigma_1^3\oplus\sigma_1^{-2}\sigma_2^3\oplus 1)_\mathbf n$&& \small (c) $\myangle{\sigma_1^{-1} \sigma_2}_\mathbf n(\sigma_1^3\oplus\sigma_1^{-2}\sigma_2^3\oplus 1)_\mathbf n$ \end{tabular} \caption{$\mathbf n=(2,3,1)$}\label{fig:copy} \end{figure} \begin{lemma}[{\cite[Lemmas 3.5 and 3.6]{LL08}}]\label{thm:decom} Let\/ $\mathbf n=(n_1,\ldots,n_r)$ be a composition of\/ $n$. \begin{enumerate} \item[(i)] The expression $\alpha=\myangle{\alpha_0}_\mathbf n(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n$ is unique, i.e. if\/ $\myangle{\alpha_0}_\mathbf n(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n =\myangle{\beta_0}_\mathbf n(\beta_1\oplus\cdots\oplus\beta_r)_\mathbf n$, then $\alpha_i=\beta_i$ for $i=0, 1,\ldots,r$. \item[(ii)] If $\alpha=\myangle{\alpha_0}_\mathbf n(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n$, then $\alpha*\mathcal C_\mathbf n$ is standard and, further, $\alpha*\mathcal C_\mathbf n=\mathcal C_{\alpha_0\ast\mathbf n}$. Conversely, if\/ $\alpha*\mathcal C_\mathbf n$ is standard, then $\alpha$ can be expressed as $\alpha=\myangle{\alpha_0}_\mathbf n(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n$. \item[(iii)] $\myangle{\alpha_0}_\mathbf n(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n = (\alpha_{\theta^{-1}(1)}\oplus\cdots\oplus\alpha_{\theta^{-1}(r)})_{\alpha_0\ast\mathbf n} \myangle{\alpha_0}_\mathbf n$, where $\theta$ is the induced permutation of\/ $\alpha_0$. \item[(iv)] $\myangle{\alpha_0 \beta_0}_\mathbf n =\myangle{\alpha_0}_{\beta_0*\mathbf n}\myangle{\beta_0}_\mathbf n$. \item[(v)] $(\myangle{\alpha_0}_\mathbf n)^{-1}=\myangle{\alpha_0^{-1}}_{\alpha_0*\mathbf n}$. 
\item[(vi)] $ (\alpha_1\beta_1\oplus\cdots\oplus\alpha_r\beta_r)_\mathbf n =(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n (\beta_1\oplus\cdots\oplus\beta_r)_\mathbf n $ \item[(vii)] $(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n^{-1} =(\alpha_1^{-1}\oplus\cdots\oplus\alpha_r^{-1})_\mathbf n$. \item[(viii)] Let $\alpha=\myangle{\alpha_0}_\mathbf n(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n$. Then $\alpha = \Delta_{(n)}$ if and only if\/ $\alpha_0=\Delta_{(r)}$ and $\alpha_i=\Delta_{(n_i)}$ for $1\le i\le r$. \end{enumerate} \end{lemma} \subsection{Basic properties of $P$-pure, $P$-straight or 1-unlinked braids} \begin{lemma}\label{lem:gamma} Let $\alpha$, $\beta$ and $\gamma$ be $n$-braids, and let $P$ be a subset of\/ $\{1,\ldots,n\}$. \begin{itemize} \item[(i)] If\/ $\alpha$ is $P$-pure, then $\gamma\alpha\gamma^{-1}$ is $\pi_\gamma(P)$-pure. \item[(ii)] If\/ $\alpha$ is $P$-straight, then $\gamma\alpha\gamma^{-1}$ is $\pi_\gamma(P)$-straight. \item[(iii)] If\/ $\alpha$ is 1-unlinked and $\gamma$ is 1-pure, then $\gamma\alpha\gamma^{-1}$ is 1-unlinked. \item[(iv)] If\/ both $\alpha$ and $\beta$ are $P$-pure (resp.{} $P$-straight, 1-unlinked), then $\alpha^p\beta^q$ is $P$-pure (resp.{} $P$-straight, 1-unlinked) for any integers $p$ and $q$. \end{itemize} \end{lemma} \begin{proof} (i) and (ii) are obvious.
(iii)\ \ It follows from $\operatorname{lk}(\gamma\alpha\gamma^{-1})=\operatorname{lk}(\gamma)+\operatorname{lk}(\alpha)-\operatorname{lk}(\gamma)$. (iv)\ \ It is obvious for $P$-pureness and $P$-straightness. The 1-unlinkedness follows from $\operatorname{lk}(\alpha^p\beta^q)=p\operatorname{lk}(\alpha)+q\operatorname{lk}(\beta)=0$. \end{proof} \begin{lemma}\label{lem:per2} If\/ $\alpha\in B_{n,1}$ is a periodic braid, then there exists a 1-unlinked $n$-braid $\gamma$ such that $\gamma\alpha\gamma^{-1}=\epsilon^m$ for some integer $m$. \end{lemma} \begin{proof} If $\alpha$ is central, then we can take the identity as the conjugating element $\gamma$. Therefore we may assume that $\alpha$ is non-central, hence $\alpha$ is conjugate to $\epsilon^m$ for some $m\not\equiv 0\bmod n-1$. There exists an $n$-braid $\gamma_1$ such that $\gamma_1\alpha\gamma_1^{-1}=\epsilon^m$. Because $\gamma_1\alpha\gamma_1^{-1}$ is $\pi_{\gamma_1}(1)$-pure and $\epsilon^m$ has the first strand as the only pure strand, we have $\pi_{\gamma_1}(1)=1$, that is, $\gamma_1$ is 1-pure. Let $q=\operatorname{lk}(\gamma_1)$ and $\gamma=\epsilon^{-q}\gamma_1$. Then $$ \gamma\alpha\gamma^{-1} =\epsilon^{-q}(\gamma_1\alpha\gamma_1^{-1})\epsilon^q =\epsilon^{-q}\epsilon^m\epsilon^q =\epsilon^m. $$ Since $\gamma_1$ and $\epsilon$ are 1-pure, so is $\gamma$. Since $\operatorname{lk}(\epsilon)=1$, we have $$ \operatorname{lk}(\gamma) =\operatorname{lk}(\gamma_1)+\operatorname{lk}(\epsilon^{-q}) =\operatorname{lk}(\gamma_1)-q=0. $$ Therefore $\gamma$ is a conjugating element from $\alpha$ to $\epsilon^m$, which is 1-unlinked. 
\end{proof} \begin{figure} \begin{tabular}{ccc} \includegraphics[scale=.9]{root-lk1.eps} &\mbox{}\qquad\mbox{} & \includegraphics[scale=.9]{root-lk2.eps} \\ (a) $\alpha=\sigma_2^{-1}\sigma_1^2\sigma_2^{-1}\sigma_1^{-2}\sigma_2^{-1} \sigma_1^{-2}\sigma_3\sigma_2\sigma_1^2\sigma_2\sigma_3$ && (b) $\beta=\myangle{\alpha}_\mathbf n$ for $\mathbf n=(3,1,1,2)$ \end{tabular} \caption{ For the above 4-braid $\alpha$, we have $\operatorname{lk}_2(\alpha)=0$, $\operatorname{lk}_3(\alpha)=-1$, $\operatorname{lk}_4(\alpha)=1$, hence $\operatorname{lk}(\alpha)=0+(-1)+1=0$. For the above 7-braid $\beta$, we have $\operatorname{lk}_2(\beta)=\operatorname{lk}_3(\alpha)=0$, $\operatorname{lk}_4(\beta)=0$, $\operatorname{lk}_5(\beta)=-1$, $\operatorname{lk}_6(\beta)=\operatorname{lk}_7(\alpha)=1$, hence $\operatorname{lk}(\beta)=1$.} \label{fig:lk} \end{figure} \begin{definition} For a braid $\alpha\in B_{n,1}$ and an integer $2\le i\le n$, we define the \emph{$i$-th linking number} $\operatorname{lk}_i(\alpha)$ of $\alpha$ as the linking number between the first and the $i$-th strands of $\alpha$. See Figure~\ref{fig:lk}. \end{definition} The following is an obvious relation between the linking number and the $i$-th linking number. \begin{lemma}\label{lem:LinkingNo} Let $\alpha=\myangle{\alpha_0}_\mathbf n (\alpha_1\oplus\alpha_2\oplus\cdots\oplus\alpha_r)_\mathbf n \in B_{n,1}$ for a composition $\mathbf n=(n_1,\ldots,n_r)$ of\/ $n$. Then $\operatorname{lk}(\alpha)=\operatorname{lk}(\alpha_1) +\sum_{i=2}^r n_i\operatorname{lk}_i(\alpha_0)$. \end{lemma} \begin{definition} For a set $P\subset\{1, 2,\ldots,n\}$ and a composition $\mathbf n=(n_1,\ldots,n_r)$ of $n$, define the sets $P_{\mathbf n,0}, P_{\mathbf n,1},\ldots,P_{\mathbf n,r}$ as follows: \begin{eqnarray*} P_{\mathbf n,i}&=&\{ 1\le j\le n_i \mid (n_1+\cdots+n_{i-1}) + j \in P \} \qquad\mbox{for $i=1,\ldots,r$};\\ P_{\mathbf n,0} &=&\{1\le i\le r \mid P_{\mathbf n,i}\neq \emptyset\}. 
\end{eqnarray*} \end{definition} Note that, using the above notation, $P=\bigcup_{i=1}^r ((n_1+\cdots+n_{i-1})+P_{\mathbf n,i})$. The following lemma is easy. \begin{lemma}\label{lem:indu} Let $P\subset\{1, 2, \ldots,n\}$ and $\alpha=\myangle{\alpha_0}_\mathbf n(\alpha_1\oplus\cdots\oplus\alpha_r )_\mathbf n$ for a composition $\mathbf n$ of\/ $n$. \begin{itemize} \item[(i)] $\alpha$ is $P$-pure if and only if\/ $\alpha_i$ is $P_{\mathbf n,i}$-pure for all $i=0, 1,\ldots,r$. \item[(ii)] $\alpha$ is $P$-straight if and only if\/ $\alpha_i$ is $P_{\mathbf n,i}$-straight for all $i=0, 1,\ldots,r$. \item[(iii)] If\/ $\alpha_1$ is 1-unlinked, then $(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n$ is 1-unlinked. \def\temp{ \item[(iv)] If\/ $\alpha_0$ is 1-unlinked and $n_2=\cdots=n_r$, then $\myangle{\alpha_0}_\mathbf n$ is 1-unlinked. \item[(v)] If\/ $\alpha_0$ and $\alpha_1$ are 1-unlinked and $n_2=\cdots=n_r$, then $\myangle{\alpha_0}_\mathbf n(\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n$ is 1-unlinked. } \end{itemize} \end{lemma} \section{Uniqueness of roots up to conjugacy} In this section, we prove Theorem~\ref{thm:main}. Let us explain our strategy for the proof. Suppose we are given $P$-pure braids $\alpha$ and $\beta$ such that $\alpha^k=\beta^k$ for some nonzero integer $k$. Note that $\alpha$ is either pseudo-Anosov, or periodic, or reducible and non-periodic. Lemma~\ref{lem:irred} deals with the case where $\alpha$ is pseudo-Anosov or periodic. Now, suppose $\alpha$ is reducible and non-periodic. There are three cases: $\alpha_{\operatorname{ext}}$ is pseudo-Anosov; $\alpha_{\operatorname{ext}}$ is central; $\alpha_{\operatorname{ext}}$ is periodic and non-central. (Here $\alpha_{\operatorname{ext}}$ is a particular tubular braid of $\alpha$. See Definition~\ref{def:ext-br}.)
If $\alpha_{\operatorname{ext}}$ is either pseudo-Anosov or central, we may assume $\alpha_{\operatorname{ext}}=\beta_{\operatorname{ext}}$, and this case is resolved in Lemma~\ref{lem:same-ext}. For the case where $\alpha_{\operatorname{ext}}$ is periodic and non-central, we construct a $P$-straight conjugating element from $\alpha$ to $\beta$, and then modify this conjugating element in order to make it 1-unlinked. Lemma~\ref{lem:per-ext} is useful in this modification. In the end we give the proof of Theorem~\ref{thm:main}. Due to the lemmas mentioned above, it suffices to construct a $P$-straight conjugating element from $\alpha$ to $\beta$ for the case where $\alpha_{\operatorname{ext}}$ is periodic and non-central. \medskip From now on, we will say that \emph{Theorem~\ref{thm:main} is true for $(\alpha,\beta,P,k)$} if $(\alpha,\beta,P,k)$ is given as in Theorem~\ref{thm:main} and there exists a $P$-straight, 1-unlinked braid $\gamma$ with $\beta=\gamma\alpha\gamma^{-1}$. \begin{lemma}\label{lem:conj} Let $(\alpha,\beta,P,k)$ be given as in Theorem~\ref{thm:main}. \begin{itemize} \item[(i)] Let $\chi$ be a 1-pure $n$-braid. If Theorem~\ref{thm:main} is true for $(\chi\alpha\chi^{-1}, \chi\beta\chi^{-1},\pi_\chi(P),k)$, then it is also true for $(\alpha,\beta,P,k)$. \item[(ii)] Let $\chi$ be a $P$-straight, 1-unlinked $n$-braid with $\chi\beta^k=\beta^k\chi$. If Theorem~\ref{thm:main} is true for $(\alpha, \chi\beta\chi^{-1},P,k)$, then it is also true for $(\alpha,\beta,P,k)$. \end{itemize} \end{lemma} \begin{proof} (i)\ \ Note that $(\chi\alpha\chi^{-1})^k=(\chi\beta\chi^{-1})^k$, that both $\chi\alpha\chi^{-1}$ and $\chi\beta\chi^{-1}$ are $\pi_\chi(P)$-pure by Lemma~\ref{lem:gamma}~(i), and that $1\in\pi_\chi(P)$ because $1\in P$ and $\chi$ is 1-pure. 
Suppose Theorem~\ref{thm:main} is true for $(\chi\alpha\chi^{-1}, \chi\beta\chi^{-1},\pi_\chi(P),k)$, that is, there exists a $\pi_\chi(P)$-straight, 1-unlinked $n$-braid $\gamma_1$ with $\chi\beta\chi^{-1}=\gamma_1(\chi\alpha\chi^{-1})\gamma_1^{-1}$. Let $\gamma=\chi^{-1}\gamma_1\chi$, then $\beta=\gamma\alpha\gamma^{-1}$. Since $\gamma_1$ is $\pi_\chi(P)$-straight, $\gamma$ is $P$-straight by Lemma~\ref{lem:gamma}~(ii). Since $\gamma_1$ is 1-unlinked and $\chi$ is 1-pure, $\gamma$ is 1-unlinked by Lemma~\ref{lem:gamma}~(iii). \smallskip(ii)\ \ Since $\chi$ commutes with $\beta^k$, $(\chi\beta\chi^{-1})^k=\chi\beta^k\chi^{-1}=\beta^k=\alpha^k$. Because both $\beta$ and $\chi$ are $P$-pure, $\chi\beta\chi^{-1}$ is $P$-pure by Lemma~\ref{lem:gamma}~(iv). Suppose Theorem~\ref{thm:main} is true for $(\alpha, \chi\beta\chi^{-1},P,k)$, that is, there exists a $P$-straight, 1-unlinked $n$-braid $\gamma_1$ such that $\chi\beta\chi^{-1}=\gamma_1\alpha\gamma_1^{-1}$. Let $\gamma=\chi^{-1}\gamma_1$, then $\beta=\gamma\alpha\gamma^{-1}$. Since both $\gamma_1$ and $\chi$ are $P$-straight and 1-unlinked, $\gamma$ is $P$-straight and 1-unlinked by Lemma~\ref{lem:gamma}~(iv). \end{proof} \begin{lemma}\label{lem:irred} Let $(\alpha,\beta,P,k)$ be given as in Theorem~\ref{thm:main}. If\/ $\alpha$ is either pseudo-Anosov or periodic, then Theorem~\ref{thm:main} is true for $(\alpha,\beta,P,k)$. \end{lemma} \begin{proof} If $\alpha$ is pseudo-Anosov, then $\alpha=\beta$ by Lemma~\ref{lem:NT}. If $\alpha$ is central, then $\alpha=\beta$ because $\beta$ is conjugate to $\alpha$. In these two cases, we can take the identity as the desired conjugating element $\gamma$. Suppose that $\alpha$ is periodic and non-central. Then both $\alpha$ and $\beta$ are conjugate to $\epsilon^m$ for some $m\not\equiv 0\bmod n-1$ by Lemma~\ref{lem:per} since they are 1-pure and non-central. 
By Lemma~\ref{lem:per2}, there exist 1-unlinked $n$-braids $\gamma_1$ and $\gamma_2$ such that $\gamma_1\alpha\gamma_1^{-1}=\epsilon^m=\gamma_2\beta\gamma_2^{-1}$. Let $\gamma=\gamma_2^{-1}\gamma_1$, then $\beta=\gamma\alpha\gamma^{-1}$. Because both $\gamma_1$ and $\gamma_2$ are 1-unlinked, so is $\gamma$. Because the first strand is the only pure strand of $\alpha$ and $1\in P$, we have $P=\{1\}$. Therefore $\gamma$ is $P$-straight. \end{proof} \begin{definition}\label{def:ext-br} Let $\alpha$ be an $n$-braid with $\R_\ext(\alpha)$ standard. Then there exists a composition $\mathbf n=(n_1,\ldots, n_r)$ of $n$ such that $\R_\ext(\alpha)=\mathcal C_\mathbf n$ and $\alpha$ can be expressed as $$\alpha = \myangle{\alpha_0}_\mathbf n (\alpha_1\oplus\cdots\oplus\alpha_r)_\mathbf n.$$ In this case, the tubular $r$-braid $\alpha_0$ of $\alpha$ is specially denoted by $\alpha_{\operatorname{ext}}$. \end{definition} Note that for non-periodic reducible braids $\alpha$ and $\beta$, if $\alpha^k =\beta^k$ for a nonzero integer $k$, then $$\emptyset\neq\R_\ext(\alpha)=\R_\ext(\alpha^k)=\R_\ext(\beta^k)=\R_\ext(\beta).$$ \begin{lemma}\label{lem:same-ext} Let $(\alpha,\beta,P,k)$ be given as in Theorem~\ref{thm:main}. If\/ $\R_\ext(\alpha)$ is standard and $\alpha_{\operatorname{ext}}=\beta_{\operatorname{ext}}$, then Theorem~\ref{thm:main} is true for $(\alpha,\beta,P,k)$. \end{lemma} \begin{proof} We will show this lemma by induction on the braid index $n$. If $n=2$, Theorem~\ref{thm:main} is obvious because $B_{2,1}$ is infinite cyclic generated by $\sigma_1^2$: if $\alpha=\sigma_1^{2p}$ and $\beta=\sigma_1^{2q}$, then $\alpha^k=\beta^k$ implies $p=q$, and hence $\alpha=\beta$ and the identity is a conjugating element from $\alpha$ to $\beta$. Suppose that $n>2$ and that the theorem is true for braids with less than $n$ strands. Let $\R_\ext(\alpha)=\mathcal C_\mathbf n$ for an $r$-composition $\mathbf n$ of $n$. Let $\alpha_0=\alpha_{\operatorname{ext}}\in B_r$. 
Since $\alpha$ is 1-pure, so is $\alpha_0$, that is, $\pi_{\alpha_0}(1)=1$. Let $\{ z_2, z_3, \ldots, z_m\}$ be the set of all points other than 1 each of which is fixed by $\pi_{\alpha_0}$. \begin{claim} Without loss of generality, we may assume that $\{ z_2, \ldots, z_m\} =\{2, \ldots, m\}$ (i.e. $\pi_{\alpha_0}(i)=i$ for all $1\le i\le m$) and each of the other cycles of $\pi_{\alpha_0}$ is of the form $(i+r_i,\ldots, i+2, i+1)$ for some $i\ge m$ and $r_i\ge 2$. \end{claim} \begin{proof}[Proof of Claim] Choose an $r$-permutation $\theta$ such that $\theta(1)=1$, $\theta(\{ z_2, \ldots, z_m\})=\{ 2, \ldots, m\}$ and each cycle (of length $\ge 2$) of $\theta\pi_{\alpha_0}\theta^{-1}$ is of the form $(i+r_i,\ldots, i+2, i+1)$. Note that $\theta\pi_{\alpha_0}\theta^{-1}$ fixes each point of $\{1,\ldots,m\}$. Let $\zeta_0$ be an $r$-braid whose induced permutation is $\theta$, and let $\zeta=\myangle{\zeta_0}_\mathbf n$. Since $\zeta_0$ is 1-pure, $\zeta$ is also 1-pure. Applying Lemma~\ref{lem:conj}~(i) to $\zeta$ and $(\alpha,\beta,P, k)$, it suffices to show that Theorem~\ref{thm:main} is true for $(\zeta\alpha\zeta^{-1}, \zeta\beta\zeta^{-1}, \pi_{\zeta}(P), k)$. Note that $\R_\ext(\zeta\alpha\zeta^{-1})=\zeta*\R_\ext(\alpha)=\mathcal C_{\zeta_0*\mathbf n}$ is standard and that $(\zeta\alpha\zeta^{-1})_{\operatorname{ext}} =\zeta_0\alpha_{\operatorname{ext}}\zeta_0^{-1} =\zeta_0\beta_{\operatorname{ext}}\zeta_0^{-1} =(\zeta\beta\zeta^{-1})_{\operatorname{ext}}$. \end{proof} Using the above claim, we assume that $\pi_{\alpha_0}(i)=i$ for all $1\le i\le m$ and each of the other cycles of $\pi_{\alpha_0}$ is of the form $(i+r_i,\ldots, i+2, i+1)$ for some $i\ge m$ and $r_i\ge 2$. 
Then $\mathbf n$, $\alpha$ and $\beta$ are as follows: \begin{eqnarray*} \mathbf n &=& ( n_1, \ldots, n_m, \underbrace{n_{m+1},\ldots, n_{m+1}}_{r_{m+1}}, \ldots, \underbrace{n_s,\ldots, n_s}_{r_s}),\\ \alpha & = & \myangle{\alpha_0}_{\mathbf n} (\alpha_1\oplus\cdots\oplus\alpha_m\oplus(\alpha_{m+1,1}\oplus\cdots\oplus\alpha_{m+1,r_{m+1}}) \oplus\cdots\oplus(\alpha_{s,1}\oplus\cdots\oplus\alpha_{s,r_s}))_\mathbf n, \\ \beta & = & \myangle{\alpha_0}_{\mathbf n} (\beta_1\oplus\cdots\oplus\beta_m\oplus(\beta_{m+1,1}\oplus\cdots\oplus\beta_{m+1,r_{m+1}}) \oplus\cdots\oplus(\beta_{s,1}\oplus\cdots\oplus\beta_{s,r_s}))_\mathbf n. \end{eqnarray*} \medskip Replacing $k$ by a sufficiently large multiple if necessary, we may assume that the lengths $r_i$ of the cycles of $\pi_{\alpha_0}$ are all divisors of $k$; this loses no generality because $\alpha^k=\beta^k$ implies $\alpha^{kN}=\beta^{kN}$ for every integer $N\ge 1$. Let $k=r_ip_i$ for $m< i\le s$. Then \begin{eqnarray*} \alpha^k&=& \myangle{\alpha_0^k}_{\mathbf n} (\alpha_1^k \oplus\cdots\oplus \alpha_m^k \oplus(\tilde\alpha_{m+1,1}^{p_{m+1}}\oplus\cdots\oplus\tilde\alpha_{m+1,r_{m+1}}^{p_{m+1}}) \oplus\cdots\oplus (\tilde\alpha_{s,1}^{p_s}\oplus\cdots\oplus\tilde\alpha_{s,r_s}^{p_s}))_\mathbf n,\\ \beta^k&=& \myangle{\alpha_0^k}_{\mathbf n} (\beta_1^k \oplus\cdots\oplus \beta_m^k \oplus(\tilde\beta_{m+1,1}^{p_{m+1}}\oplus\cdots\oplus\tilde\beta_{m+1,r_{m+1}}^{p_{m+1}}) \oplus\cdots\oplus (\tilde\beta_{s,1}^{p_s}\oplus\cdots\oplus\tilde\beta_{s,r_s}^{p_s}))_\mathbf n, \end{eqnarray*} where \begin{eqnarray*} \tilde\alpha_{i,j} &=& \alpha_{i,j-r_i+1}\alpha_{i,j-r_i+2}\cdots\alpha_{i,j-1}\alpha_{i,j},\\ \tilde\beta_{i,j} &=& \beta_{i,j-r_i+1}\beta_{i,j-r_i+2}\cdots\beta_{i,j-1}\beta_{i,j} \end{eqnarray*} for $m< i\le s$ and $1\le j\le r_i$. Hereafter we regard the second index $j$ of $(i,j)$ as being taken modulo $r_i$. Since $\alpha^k=\beta^k$, one has \begin{eqnarray*} & \alpha_i^k =\beta_i^k & \quad\mbox{for $1\le i\le m$}, \\ & \tilde\alpha_{i,j}^{p_i}=\tilde\beta_{i,j}^{p_i} & \quad\mbox{for $m < i\le s$ and $1\le j\le r_i$}.
\end{eqnarray*} Recall that $\alpha$ and $\beta$ are $P$-pure, hence $\alpha_i$ and $\beta_i$ are $P_{\mathbf n,i}$-pure for $1\le i\le m$ by Lemma~\ref{lem:indu}. Recall also that the induced permutation of $\alpha_0$ fixes no point $i>m$, hence $P_{\mathbf n,i}=\emptyset$ for $i>m$. \medskip From now on, we will construct an $n$-braid $\gamma$ such that $\beta=\gamma\alpha\gamma^{-1}$. It will be of the form $$ \gamma = (\gamma_1\oplus\cdots\oplus\gamma_m\oplus (\gamma_{m+1,1}\oplus\cdots\oplus\gamma_{m+1,r_{m+1}}) \oplus\cdots\oplus(\gamma_{s,1}\oplus\cdots\oplus\gamma_{s,r_s}))_\mathbf n, $$ where $\gamma_1$ is 1-unlinked and $P_{\mathbf n,1}$-straight, and $\gamma_i$ is $P_{\mathbf n,i}$-straight for $2\le i\le m$. Then $\gamma$ is 1-unlinked by Lemma~\ref{lem:indu}~(iii) because $\gamma_1$ is 1-unlinked. And $\gamma$ is $P$-straight by Lemma~\ref{lem:indu}~(ii) because $\gamma_i$ is $P_{\mathbf n,i}$-straight for $1\le i\le m$ and $P_{\mathbf n,i}=\emptyset$ for $i>m$. \smallskip Note that $\alpha_1^k=\beta_1^k$ and that $1\in P_{\mathbf n,1}$ because $1\in P$. By the induction hypothesis on the braid index, there exists a $P_{\mathbf n,1}$-straight, 1-unlinked $n_1$-braid $\gamma_1$ with $\beta_1=\gamma_1\alpha_1\gamma_1^{-1}$. \smallskip Let $2\le i\le m$. Note that $\alpha_i^k=\beta_i^k$. If $P_{\mathbf n,i} =\emptyset$, there is an $n_i$-braid $\gamma_i$ such that $\beta_i = \gamma_i\alpha_i\gamma_i^{-1}$ by~\cite{Gon03}. Suppose $P_{\mathbf n,i} \neq\emptyset$. Then there is an $n_i$-braid $\zeta_i$ with $1\in\pi_{\zeta_i}(P_{\mathbf n,i})$. Since $\alpha_i$ and $\beta_i$ are $P_{\mathbf n,i}$-pure, $\zeta_i\alpha_i\zeta_i^{-1}$ and $\zeta_i\beta_i\zeta_i^{-1}$ are $\pi_{\zeta_i}(P_{\mathbf n,i})$-pure $n_i$-braids with $(\zeta_i\alpha_i\zeta_i^{-1})^k = (\zeta_i\beta_i\zeta_i^{-1})^k$. 
By the induction hypothesis on the braid index, Theorem~\ref{thm:main} is true for $(\zeta_i\alpha_i\zeta_i^{-1}, \zeta_i\beta_i\zeta_i^{-1}, \pi_{\zeta_i}(P_{\mathbf n,i}), k)$, hence there exists a $\pi_{\zeta_i}(P_{\mathbf n,i})$-straight $n_i$-braid $\chi_i$ such that $\zeta_i\beta_i\zeta_i^{-1} = \chi_i (\zeta_i\alpha_i\zeta_i^{-1}) \chi_i^{-1}$. Let $\gamma_i = \zeta_i^{-1}\chi_i\zeta_i$. Then $\gamma_i$ is a $P_{\mathbf n,i}$-straight $n_i$-braid with $\beta_i = \gamma_i\alpha_i\gamma_i^{-1}$. \smallskip Recall that $\tilde\alpha_{i,r_i}^{p_i} =\tilde\beta_{i,r_i}^{p_i}$ for all $m< i\le s$, which implies by~\cite{Gon03} that there are $\zeta_i\in B_{n_i}$ with $$ \tilde\beta_{i,r_i}=\zeta_i\tilde\alpha_{i,r_i}\zeta_i^{-1}. $$ For $m < i\le s$ and $1\le j\le r_i$, define $\gamma_{i,j}$ by $$ \gamma_{i,j} = (\beta_{i,j}^{-1}\cdots\beta_{i,2}^{-1}\beta_{i,1}^{-1})\zeta_i (\alpha_{i,1}\alpha_{i,2}\cdots\alpha_{i,j}). $$ Then, for $m< i\le s$, \begin{eqnarray*} \gamma_{i,r_i}\alpha_{i,1}\gamma_{i,1}^{-1} &=& (\beta_{i,r_i}^{-1}\cdots\beta_{i,1}^{-1}\zeta_i \alpha_{i,1}\cdots\alpha_{i,r_i}) \alpha_{i,1} (\alpha_{i,1}^{-1}\zeta_i^{-1}\beta_{i,1}) = \tilde\beta_{i,r_i}^{-1}\zeta_i\tilde\alpha_{i,r_i}\zeta_i^{-1}\beta_{i,1} = \beta_{i,1}, \\ \gamma_{i,j-1}\alpha_{i,j}\gamma_{i,j}^{-1} &=& (\beta_{i,j-1}^{-1}\cdots\beta_{i,1}^{-1}\zeta_i \alpha_{i,1}\cdots\alpha_{i,j-1}) \alpha_{i,j} (\alpha_{i,j}^{-1}\cdots\alpha_{i,1}^{-1}\zeta_i^{-1} \beta_{i,1}\cdots\beta_{i,j}) = \beta_{i,j} \quad\mbox{for $1< j \le r_i$}. \end{eqnarray*} Therefore $$ \gamma_{i,j-1}\alpha_{i,j}\gamma_{i,j}^{-1} =\beta_{i,j} \qquad\mbox{for $m< i\le s$ and $1\le j \le r_i$}. $$ \smallskip So far, we have constructed the desired $P$-straight and 1-unlinked $n$-braid $$ \gamma = (\gamma_1\oplus\cdots\oplus\gamma_m\oplus (\gamma_{m+1,1}\oplus\cdots\oplus\gamma_{m+1,r_{m+1}}) \oplus\cdots\oplus(\gamma_{s,1}\oplus\cdots\oplus\gamma_{s,r_s}))_\mathbf n.
$$ It remains to show $\beta=\gamma\alpha\gamma^{-1}$, which will be done by a direct computation. In the following, $\bigoplus_{i=1}^\ell \chi_i$ means $\chi_1\oplus \chi_2\oplus\cdots\oplus \chi_\ell$. \begin{eqnarray*} \gamma\alpha &=& (\gamma_1\oplus\cdots\oplus\gamma_m \oplus \bigoplus_{i=m+1}^s \bigoplus_{j=1}^{r_i}\gamma_{i,j})_\mathbf n \cdot\myangle{\alpha_0}_{\mathbf n} (\alpha_1\oplus\cdots\oplus\alpha_m\oplus \bigoplus_{i=m+1}^s \bigoplus_{j=1}^{r_i}\alpha_{i,j})_\mathbf n \\ &=& \myangle{\alpha_0}_{\mathbf n}\cdot (\gamma_1\oplus\cdots\oplus\gamma_m \oplus \bigoplus_{i=m+1}^s \bigoplus_{j=1}^{r_i}\gamma_{i,j-1})_\mathbf n \cdot (\alpha_1\oplus\cdots\oplus\alpha_m\oplus \bigoplus_{i=m+1}^s \bigoplus_{j=1}^{r_i}\alpha_{i,j})_\mathbf n \\ & = & \myangle{\alpha_0}_{\mathbf n} (\gamma_1\alpha_1 \oplus\cdots\oplus\gamma_m\alpha_m \oplus \bigoplus_{i=m+1}^s\bigoplus_{j=1}^{r_i} \gamma_{i,j-1}\alpha_{i,j})_\mathbf n, \\ \beta\gamma &=& \myangle{\alpha_0}_{\mathbf n} (\beta_1\oplus\cdots\oplus\beta_m\oplus \bigoplus_{i=m+1}^s \bigoplus_{j=1}^{r_i}\beta_{i,j})_\mathbf n \cdot (\gamma_1\oplus\cdots\oplus\gamma_m \oplus \bigoplus_{i=m+1}^s \bigoplus_{j=1}^{r_i}\gamma_{i,j})_\mathbf n\\ &=& \myangle{\alpha_0}_{\mathbf n} (\beta_1\gamma_1\oplus\cdots\oplus\beta_m\gamma_m \oplus \bigoplus_{i=m+1}^s \bigoplus_{j=1}^{r_i}\beta_{i,j}\gamma_{i,j})_\mathbf n. \end{eqnarray*} Because $\gamma_i\alpha_i\gamma_i^{-1}=\beta_i$ for $1\le i\le m$ and $\gamma_{i,j-1}\alpha_{i,j}\gamma_{i,j}^{-1}=\beta_{i,j}$ for $m< i\le s$ and $1\le j \le r_i$, we have $\gamma\alpha=\beta\gamma$, and hence $\gamma\alpha\gamma^{-1}=\beta$. \end{proof} \begin{definition} Let $r$, $s$ and $d$ be integers with $s\ge 2$, $d\ge 1$ and $r=ds+1$. For $1\le j\le d$, define an $r$-braid $\mu_{s,j}$ as $$ \mu_{s,j}=(\sigma_{js}\sigma_{js-1}\cdots\sigma_2\sigma_1) (\sigma_1\sigma_2\cdots\sigma_{(j-1)s}\sigma_{(j-1)s+1}). $$ Define $\mu_s$ as $\mu_s =\mu_{s,1}\mu_{s,2}\cdots\mu_{s,d}$. 
See Figure~\ref{fig:mu} for the case $r=7$, $s=3$ and $d=2$. \end{definition} \begin{figure} \tabcolsep=1em \begin{tabular}{ccc} \includegraphics[scale=.8]{root-mu1.eps} & \includegraphics[scale=.8]{root-mu2.eps} & \includegraphics[scale=.8]{sym-mu2.eps} \\ (a) $\mu_{3,1}$ & (b) $\mu_{3,2}$ & (c) $\mu_{3}=\mu_{3,1}\mu_{3,2}$ \end{tabular} \caption{$\mu_{3,1}$, $\mu_{3,2}$ and $\mu_3$ when $r=7$} \label{fig:mu} \end{figure} It is easy to see the following. \begin{itemize} \item $\mu_s$ is conjugate to $\epsilon^d_{(r)}$ because $(\mu_s)^s=\Delta_{(r)}^2=(\epsilon^d_{(r)})^s$. \item For any $1\le i, j\le d$, $\mu_{s,i}$ and $\mu_{s,j}$ mutually commute. \item $\operatorname{lk}(\mu_{s,j})=1$ for $1\le j\le d$. \end{itemize} \begin{lemma}\label{lem:per-ext} Let $P$ be a subset of\/ $\{1,\ldots,n\}$ with $1\in P$. Let $\alpha$ be a $P$-pure $n$-braid with $\R_\ext(\alpha)$ standard, hence $\R_\ext(\alpha)=\mathcal C_\mathbf n$ for a composition $\mathbf n=(n_1,\ldots,n_r)$ of\/ $n$. Let $\alpha_{\operatorname{ext}}$ be periodic and non-central. \begin{itemize} \item[(i)] For each $2\le i\le r$, there exists a $P$-straight $n$-braid $\gamma$ such that $\gamma\alpha=\alpha\gamma$ and $\operatorname{lk}(\gamma)=n_i$. \item[(ii)] Let $\chi = \myangle{\chi_0 }_{\mathbf n} ( \chi_1\oplus\cdots\oplus\chi_r)_\mathbf n$ be $P$-straight such that $\chi_1$ is 1-unlinked. Then there exists a $P$-straight $n$-braid $\gamma$ such that $\gamma\alpha =\alpha\gamma$ and $\operatorname{lk}(\gamma)= -\operatorname{lk}(\chi)$. \end{itemize} \end{lemma} \begin{proof} (i) \ \ Note that $\alpha_{\operatorname{ext}}$ is 1-pure because $\alpha$ is 1-pure. In addition, $\alpha_{\operatorname{ext}}$ is periodic and non-central. Thus $\alpha_{\operatorname{ext}}$ is conjugate to $\epsilon_{(r)}^m$ for some $m\not\equiv 0 \bmod r-1$. Let $d=\gcd(m,r-1)$, $m=dt$ and $r-1=ds$. \begin{claim} Without loss of generality, we may assume $\alpha_{\operatorname{ext}}=\mu_s^t$. 
\end{claim} \begin{proof}[Proof of Claim] Assume that (i) holds for braids $\alpha'$ with $\alpha'_{\operatorname{ext}}=\mu_s^t$. Since $\alpha_{\operatorname{ext}}$ is conjugate to $\epsilon_{(r)}^{m} =\epsilon_{(r)}^{dt}$ and $\epsilon_{(r)}^{d}$ is conjugate to $\mu_s$, $\mu_s^t$ is conjugate to $\alpha_{\operatorname{ext}}$. Since both $\alpha_{\operatorname{ext}}$ and $\mu_{s}^{t}$ are 1-pure braids that are periodic and non-central, they have the first strand as the only pure strand. Thus there exists a 1-pure $r$-braid $\zeta_0$ such that $\mu_{s}^{t} = \zeta_0\alpha_{\operatorname{ext}}\zeta_0^{-1}$. Let $$ \zeta=\myangle{\zeta_0}_\mathbf n\quad\mbox{and}\quad \beta = \zeta\alpha\zeta^{-1}. $$ Since $\alpha$ is $P$-pure, $\beta$ is $\pi_{\zeta}(P)$-pure. Since $\zeta$ is 1-pure and $1\in P$, we have $1\in\pi_\zeta(P)$. Since $\R_\ext(\beta)=\zeta\ast\R_\ext(\alpha)=\zeta*\mathcal C_{\mathbf n}=\mathcal C_{\zeta_0\ast\mathbf n}$, $\R_\ext(\beta)$ is standard and $\beta_{\operatorname{ext}}=\zeta_0\alpha_{\operatorname{ext}}\zeta_0^{-1}=\mu_{s}^{t}$. Fix any $2\le i\le r$. Since $\zeta_0\ast\mathbf n=(n_1, n'_2,\ldots,n'_r)$, where $(n'_2,\ldots,n'_r)$ is a rearrangement of $(n_2,\ldots,n_r)$, there exists $2\le j\le r$ such that $n_i=n'_j$. By the assumption, there exists a $\pi_{\zeta}(P)$-straight $n$-braid $\chi$ such that $\chi\beta\chi^{-1}=\beta$ and $\operatorname{lk}(\chi)=n'_j$. Let $\gamma=\zeta^{-1}\chi\zeta$. Then $\gamma\alpha\gamma^{-1}=\alpha$. Because $\gamma$ is $P$-straight with $\operatorname{lk}(\gamma)=\operatorname{lk}(\chi)=n'_j=n_i$, we are done. \end{proof} Now, we assume $\alpha_{\operatorname{ext}}=\mu_s^t$. Then $\alpha$ can be expressed as $$ \alpha=\myangle{\mu_s^t}_\mathbf n(\alpha_1\oplus (\alpha_{1,1}\oplus\alpha_{1,2}\oplus\cdots\oplus\alpha_{1,s})\oplus\cdots\oplus (\alpha_{d,1}\oplus\alpha_{d,2}\oplus\cdots\oplus\alpha_{d,s}) )_\mathbf n. 
$$ For convenience, let $[k,\ell]$ denote the integer $(k-1)s+\ell+1$ for $1\le k\le d$ and $1\le \ell\le s$. Then $$ \mathbf n=(n_1,n_2,\ldots,n_r) =(n_1,\underbrace{n_{[1,1]},n_{[1,2]},\ldots,n_{[1,s]}}_s, \ldots,\underbrace{n_{[d,1]},n_{[d,2]},\ldots,n_{[d,s]}}_s). $$ Hereafter we regard the second index $\ell$ of $[k,\ell]$ as being taken modulo $s$. Notice the following. \begin{itemize} \item The induced permutation of $\mu_s^t$ fixes 1 and maps $[k,\ell]$ to $[k,\ell-t]$. Because $\gcd(s, t)=1$, the induced permutation of $\mu_s^t$ has a single fixed point and each of the other cycles has length $s$. Therefore $$P=P_{\mathbf n,1} \quad\mbox{and}\quad n_{[k,1]} = n_{[k,2]} =\cdots = n_{[k,s]} \quad\mbox{for $1\le k\le d$}. $$ \item The induced permutation of $\mu_{s,j}$ maps $[j,\ell]$ to $[j,\ell -1]$ for $1\le \ell\le s$, and it fixes the other points. \item $\operatorname{lk}_{[j,1]}(\mu_{s,j})=1$, and $\operatorname{lk}_{[k,\ell]}(\mu_{s,j})=0$ if $(k,\ell)\ne(j,1)$. \end{itemize} Because $\gcd(s, t)=1$, there exist integers $a > 0$ and $b$ such that $a t+ b s=1$. Fix any $2\le i\le r$. Then $n_i$ is equal to $n_{[j,1]}$ for some $1\le j\le d$. Define an $n$-braid $\gamma$ to be $$ \gamma=\myangle{\mu_{s,j}}_\mathbf n(\gamma_1\oplus (\gamma_{1,1}\oplus\gamma_{1,2}\oplus\cdots\oplus\gamma_{1,s})\oplus\cdots\oplus (\gamma_{d,1}\oplus\gamma_{d,2}\oplus\cdots\oplus\gamma_{d,s}) )_\mathbf n, $$ where $\gamma_1=1$, $\gamma_{k, \ell}=1$ for $k\ne j$ and $$ \gamma_{j,\ell} = \alpha_{j, \ell-(a-1)t}\alpha_{j, \ell-(a-2)t}\cdots \alpha_{j, \ell-2t} \alpha_{j, \ell-t} \alpha_{j, \ell} \quad\mbox{for $1\le \ell\le s$}. $$ Then $\gamma$ is $P$-straight because $P = P_{\mathbf n,1}$, $\mu_{s,j}$ is 1-pure, and $\gamma_1=1$. In addition, by Lemma~\ref{lem:LinkingNo}, $$ \operatorname{lk}(\gamma) =\operatorname{lk}(\gamma_1)+\sum_{k=1}^d\sum_{\ell=1}^s n_{[k,\ell]} \operatorname{lk}_{[k,\ell]}(\mu_{s,j}) =n_{[j,1]}\operatorname{lk}_{[j,1]}(\mu_{s,j})=n_{[j,1]}=n_i. 
$$ \smallskip Now, it remains to show $\alpha\gamma = \gamma\alpha$. We will do it by a straightforward computation together with the following claim. \begin{claim} For $1\le \ell\le s$, we have $\alpha_{j,\ell -1}\gamma_{j,\ell} = \gamma_{j,\ell-t} \alpha_{j,\ell}$. \end{claim} \begin{proof}[Proof of Claim] Recall that $\gamma_{j,\ell} = \alpha_{j, \ell-(a-1)t}\alpha_{j, \ell-(a-2)t}\cdots \alpha_{j, \ell-2t} \alpha_{j, \ell-t} \alpha_{j, \ell}$. Hence \begin{eqnarray*} \alpha_{j,\ell -1}\gamma_{j,\ell} &=& \alpha_{j,\ell -1} \alpha_{j, \ell-(a-1)t}\alpha_{j, \ell-(a-2)t}\cdots \alpha_{j, \ell-2t} \alpha_{j, \ell-t} \alpha_{j, \ell}, \\ \gamma_{j,\ell-t} \alpha_{j,\ell} &=& \alpha_{j, \ell-at}\alpha_{j, \ell-(a-1)t}\alpha_{j, \ell-(a-2)t}\cdots \alpha_{j, \ell-2t} \alpha_{j, \ell-t}\alpha_{j,\ell}. \end{eqnarray*} Notice that $\alpha_{j,\ell -1} = \alpha_{j, \ell-at}$ because $at\equiv 1 \bmod s$. Therefore $\alpha_{j,\ell -1}\gamma_{j,\ell} = \gamma_{j,\ell-t} \alpha_{j,\ell}$. \end{proof} For simplicity of notations, let $$ \begin{array}{ll} \tilde\alpha_k=(\alpha_{k,1}\oplus\cdots\oplus\alpha_{k,s})_{\mathbf n_k},\qquad &\tilde\alpha_k^{(p)}=(\alpha_{k,p+1}\oplus\cdots\oplus\alpha_{k,p+s})_{\mathbf n_k},\\ \tilde\gamma_k=(\gamma_{k,1}\oplus\cdots\oplus\gamma_{k,s})_{\mathbf n_k},\qquad &\tilde\gamma_k^{(p)}=(\gamma_{k,p+1}\oplus\cdots\oplus\gamma_{k,p+s})_{\mathbf n_k}, \end{array} $$ where $1\le k\le d$, $\mathbf n_k=(n_{[k,1]},\ldots,n_{[k,s]})$ and $p$ is an integer. Then $$ \alpha=\myangle{\mu_s^t}_\mathbf n(\alpha_1\oplus \tilde\alpha_1\oplus\cdots\oplus\tilde\alpha_d)_\mathbf n \quad\mbox{and}\quad \gamma=\myangle{\mu_{s,j}}_\mathbf n(\gamma_1\oplus \tilde\gamma_1\oplus\cdots\oplus\tilde\gamma_d)_\mathbf n. $$ Because $\gamma_1=1$ and $\tilde\gamma_k=1$ for $k\ne j$, we just write $\gamma=\myangle{\mu_{s,j}}_\mathbf n(\cdots\oplus 1 \oplus\tilde\gamma_j\oplus 1\oplus\cdots)_\mathbf n$. 
Then \begin{eqnarray*} \alpha\gamma &=& \myangle{\mu_s^t}_\mathbf n(\alpha_1\oplus\tilde\alpha_1 \oplus\cdots\oplus\tilde\alpha_d)_\mathbf n \cdot \myangle{\mu_{s,j}}_\mathbf n(\cdots\oplus 1 \oplus\tilde\gamma_j\oplus 1\oplus\cdots)_\mathbf n\\ &=& \myangle{\mu_s^t}_\mathbf n\cdot \myangle{\mu_{s,j}}_\mathbf n\cdot (\alpha_1\oplus \cdots\oplus\tilde\alpha_{j-1} \oplus\tilde\alpha_j^{(-1)}\oplus\tilde\alpha_{j+1} \oplus\cdots\oplus\tilde\alpha_d)_\mathbf n \cdot(\cdots\oplus1\oplus\tilde\gamma_j\oplus1\oplus\cdots)_\mathbf n \\ &=& \myangle{\mu_s^t\mu_{s,j}}_\mathbf n (\alpha_1\oplus \tilde\alpha_1\oplus\cdots\oplus\tilde\alpha_{j-1} \oplus\tilde\alpha_j^{(-1)}\tilde\gamma_j\oplus\tilde\alpha_{j+1} \oplus\cdots\oplus\tilde\alpha_d)_\mathbf n,\\ \gamma\alpha &=& \myangle{\mu_{s,j}}_\mathbf n(\cdots\oplus 1\oplus\tilde\gamma_j \oplus 1 \oplus\cdots)_\mathbf n \cdot \myangle{\mu_s^t}_\mathbf n(\alpha_1\oplus\tilde\alpha_1 \oplus\cdots\oplus\tilde\alpha_d)_\mathbf n\\ &=& \myangle{\mu_{s,j}}_\mathbf n\cdot\myangle{\mu_s^t}_\mathbf n\cdot (\cdots \oplus 1 \oplus\tilde\gamma_j^{(-t)} \oplus 1 \oplus\cdots)_\mathbf n\cdot (\alpha_1\oplus\tilde\alpha_1\oplus\cdots\oplus\tilde\alpha_d)_\mathbf n\\ &=& \myangle{\mu_{s,j}\mu_s^t}_\mathbf n (\alpha_1\oplus \tilde\alpha_1\oplus\cdots\oplus\tilde\alpha_{j-1} \oplus\tilde\gamma_j^{(-t)}\tilde\alpha_j\oplus\tilde\alpha_{j+1} \oplus\cdots\oplus\tilde\alpha_d)_\mathbf n. \end{eqnarray*} From the above equations, since $\mu_{s,j}\mu_s=\mu_s\mu_{s,j}$, we can see that $\alpha\gamma=\gamma\alpha$ if and only if $\tilde\alpha_j^{(-1)}\tilde\gamma_j=\tilde\gamma_j^{(-t)}\tilde\alpha_j$. 
On the other hand, \begin{eqnarray*} \tilde\alpha_j^{(-1)}\tilde\gamma_j &=& (\alpha_{j,s}\oplus\alpha_{j,1}\oplus\cdots\oplus\alpha_{j,s-1})_{\mathbf n_j} \cdot (\gamma_{j,1}\oplus\cdots\oplus\gamma_{j,s})_{\mathbf n_j} = \bigoplus_{\ell=1}^s \alpha_{j,\ell -1}\gamma_{j,\ell}, \\ \tilde\gamma_j^{(-t)}\tilde\alpha_j &=& (\gamma_{j,1-t}\oplus\cdots\oplus\gamma_{j,s-t})_{\mathbf n_j} \cdot(\alpha_{j,1}\oplus\cdots\oplus\alpha_{j,s})_{\mathbf n_j} = \bigoplus_{\ell=1}^s \gamma_{j,\ell-t}\alpha_{j,\ell}, \end{eqnarray*} where $\mathbf n_j = (n_{[j,1]}, n_{[j,2]},\ldots, n_{[j,s]})$. By the above claim, we are done. \medskip (ii) \ \ As $\operatorname{lk}(\chi_1)=0$, we have $\operatorname{lk}(\chi)=\sum_{i=2}^r n_i\operatorname{lk}_i(\chi_0)$. By (i), for each $2\le i\le r$, there exists a $P$-straight $n$-braid $\zeta_{i}$ such that $\operatorname{lk}(\zeta_{i})= n_i$ and $\zeta_{i}$ commutes with $\alpha$. Let $\gamma_{i} = \zeta_{i}^{-\operatorname{lk}_i(\chi_0)}$, then $\gamma_{i}$ is a $P$-straight $n$-braid such that it commutes with $\alpha$ and $\operatorname{lk}(\gamma_{i})= -n_i\operatorname{lk}_i(\chi_0)$. Let $\gamma=\gamma_{2}\gamma_{3}\cdots\gamma_{r}$. Then $\gamma$ is a $P$-straight $n$-braid such that $\gamma\alpha =\alpha\gamma$. Moreover, $\operatorname{lk}(\gamma)= - \sum_{i=2}^r n_i\operatorname{lk}_i(\chi_0)= -\operatorname{lk}(\chi)$. \end{proof} Now we are ready to prove Theorem~\ref{thm:main}. \begin{proof}[\textbf{Proof of Theorem~\ref{thm:main}}] We will show the theorem by induction on the braid index $n$. If $n=2$, Theorem~\ref{thm:main} is obvious as we have observed in the proof of Lemma~\ref{lem:same-ext}. Suppose that $n>2$ and that the theorem is true for braids with less than $n$ strands. Recall that $1\in P\subset\{1,\ldots,n\}$, and that $\alpha$ and $\beta$ are $P$-pure $n$-braids such that $\alpha^k=\beta^k$ for some $k\ne 0$. If $\alpha$ is either pseudo-Anosov or periodic, the theorem is true by Lemma~\ref{lem:irred}. 
Thus we assume that $\alpha$ is reducible and non-periodic. \smallskip If $\R_\ext(\alpha)$ is not standard, choose $\zeta\in B_{n,1}$ such that $\zeta*\R_\ext(\alpha)=\R_\ext(\zeta\alpha\zeta^{-1})$ is standard. By Lemma~\ref{lem:conj}, it suffices to prove the theorem for $(\zeta\alpha\zeta^{-1},\zeta\beta\zeta^{-1},\pi_{\zeta}(P), k)$. Therefore, without loss of generality, we assume that $\R_\ext(\alpha)$ is standard. \smallskip There exists a composition $\mathbf n=(n_1,\ldots,n_r)$ of $n$ such that $\R_\ext(\alpha)=\R_\ext(\beta)=\mathcal C_\mathbf n$, hence $\alpha$ and $\beta$ are expressed as $$ \alpha = \myangle{\alpha_0}_{\mathbf n}(\alpha_1\oplus\alpha_2\oplus\cdots\oplus\alpha_r)_{\mathbf n} \quad\mbox{and}\quad \beta = \myangle{\beta_0}_{\mathbf n} ( \beta_1\oplus\beta_2\oplus\cdots\oplus\beta_r)_{\mathbf n}. $$ Since $\alpha$ and $\beta$ are $P$-pure, $\alpha_i$ and $\beta_i$ are $P_{\mathbf n,i}$-pure for $i=0,1$ by Lemma~\ref{lem:indu}. In particular, because $\alpha$ and $\beta$ are 1-pure, the braids $\alpha_0$ and $\beta_0$ are 1-pure. Hence $$ \alpha^k = \myangle{\alpha_0^k}_{\mathbf n}(\alpha_1^k\oplus\cdots)_{\mathbf n} \quad\mbox{and}\quad \beta^k = \myangle{\beta_0^k}_{\mathbf n}(\beta_1^k\oplus\cdots)_{\mathbf n}. $$ (Here the second interior braid of $\alpha^k$ is not necessarily $\alpha_2^k$, unlike the first interior braid, which is $\alpha_1^k$ because $\alpha_0$ is 1-pure.) Since $\alpha^k = \beta^k$, we have $\alpha_0^k = \beta_0^k$ and $\alpha_1^k = \beta_1^k$. Note that $\alpha_0$ is periodic or pseudo-Anosov. If $\alpha_0$ is central, then it is obvious that $\alpha_0=\beta_0$. If $\alpha_0$ is pseudo-Anosov, then $\alpha_0=\beta_0$ by Lemma~\ref{lem:NT}. For these two cases, we are done by Lemma~\ref{lem:same-ext}. Therefore we assume that $\alpha_0$ is periodic and non-central. \medskip Since $\alpha^k = \beta^k$, there exists $\zeta\in B_n$ with $\beta=\zeta\alpha\zeta^{-1}$ by Theorem~\ref{thm:gon}.
Notice that $$ \R_\ext(\alpha) =\R_\ext(\alpha^k) =\R_\ext(\beta^k) = \R_\ext(\beta) = \R_\ext(\zeta\alpha\zeta^{-1}) = \zeta*\R_\ext(\alpha), $$ i.e. $\zeta$ preserves the curve system $\R_\ext(\alpha)=\mathcal C_\mathbf n$. Hence $\zeta$ can be expressed as $$ \zeta = \myangle{\zeta_0 }_{\mathbf n} ( \zeta_1\oplus\zeta_2\oplus\cdots\oplus\zeta_r)_\mathbf n. $$ We will replace $\zeta_1$ in the above expression of $\zeta$ with another braid $\xi_1$ in order to make it $P$-straight, and then will multiply it by another $n$-braid $\xi'$ in order to make it 1-unlinked. Because $\alpha_0$ and $\beta_0$ are 1-pure, periodic and non-central, they have the first strand as the only pure strand by Corollary~\ref{cor:per}. Hence the $r$-braid $\zeta_0$ must be 1-pure because $\beta_0= \zeta_0 \alpha_0 \zeta_0^{-1}$. In addition, $P_{\mathbf n,0} = \{ 1\}$ and $P=P_{\mathbf n,1}$. Recall that $\alpha_1$ and $\beta_1$ are $P_{\mathbf n,1}$-pure. Because $\alpha_1^k = \beta_1^k$ and $1\in P_{\mathbf n,1}$, there exists a $P_{\mathbf n,1}$-straight, 1-unlinked $n_1$-braid $\xi_1$ such that $\beta_1=\xi_1\alpha_1\xi_1^{-1}$, by the induction hypothesis on the braid index. Let $$ \xi= \myangle{\zeta_0}_{\mathbf n} (\xi_1\oplus\zeta_2\oplus\zeta_3\oplus\cdots\oplus\zeta_r)_\mathbf n. $$ Then $\xi$ is $P$-straight since $P=P_{\mathbf n,1}$, $\zeta_0$ is 1-pure and $\xi_1$ is $P_{\mathbf n,1}$-straight. Notice that $\zeta$ and $\xi$ are the same except for the first interior braids, $\zeta_1$ and $\xi_1$. Notice also that $\zeta_1\alpha_1\zeta_1^{-1}=\beta_1 = {\xi_1}\alpha_1{\xi_1}^{-1}$. Therefore $\xi\alpha\xi^{-1} = \zeta\alpha\zeta^{-1}=\beta$. By Lemma~\ref{lem:per-ext}~(ii), there exists a $P$-straight $n$-braid $\xi'$ such that $\xi'\alpha=\alpha\xi'$ and $\operatorname{lk}(\xi')=-\operatorname{lk}(\xi)$. Let $\gamma=\xi\xi'$. Then $\gamma$ is $P$-straight and 1-unlinked, and $\gamma\alpha \gamma^{-1} = \xi\alpha\xi^{-1} =\beta$. 
\end{proof} \subsection*{Acknowledgements} This work was done partially while the authors were visiting the Institute for Mathematical Sciences, National University of Singapore in 2007. We thank the institute for supporting the visit. This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MEST) (No.~2009-0063965).
\section*{Introduction.} In this paper we give a constructive procedure to check basicness of open (or closed) semialgebraic sets in a compact, non singular, real algebraic surface $X$. It is rather clear that if a semialgebraic set $S$ can be separated from each connected component of $X\setminus(S\cup\partial _{\rm z} S)$ (where $\partial _{\rm z} S$ stands for the Zariski closure of $(\overline S\setminus{\rm Int}(S))\cap{\rm Reg}(X)$), then $S$ is basic. This leads us to associate with $S$ a finite family of sign distributions on $X\setminus\partial _{\rm z} S$; we prove the equivalence between basicness and two properties of these distributions, which can be tested by an algorithm described in [ABF]. By this method we find for surfaces a general result of [AR2] about the ``ubiquity of Lojasiewicz's example'' of non basic semialgebraic sets (2.9). There is a close relation between these two properties and the behaviour of fans in the algebraic function field of $X$ associated to a real prime divisor (Lemmas 3.5 and 3.6). We use this fact to get an easy proof (Theorem 3.8), for a general surface $X$, of the well known 4-elements fan's criterion for basicness (see [Br] and [AR1]). Furthermore, if the criterion fails, using the description of fans in dimension 2 [Vz], we find an algorithmic method to exhibit the failure. Finally, exploiting these techniques of sign distributions we give an improvement of the 4-elements fan's criterion in [Br] to check if a semialgebraic set is principal. In fact, one goal of this paper is to give purely geometric proofs, in the case of surfaces, of the theory of fans currently used in Semialgebraic Geometry ([Br]). In particular, we only need the definitions of that theory. It is at least remarkable that while the notion of fan is highly geometric in nature, all known proofs of the main results rely on pure quadratic form theory. \section{Geometric review of basicness} Let $X\subset {\Bbb R}^n$ be an algebraic surface. 
Denote by ${\cal R}(X)$ the ring of regular functions on $X$. Let $S\subset X$ be a {\em semialgebraic set}, that is, $$S=\bigcup_{i=1}^p\{x\in X:f_{i1}(x)>0,\dots,f_{ir_i}(x)>0,g_i(x)=0\}$$ with $f_{i1},\dots,f_{ir_i},g_i\in {\cal R}(X)$, for $i=1,\dots,p$.\\ We will simply write: $S=\bigcup\{f_{i1}>0,\dots,f_{ir_i}>0,g_i=0\}$. \begin{defn} A semialgebraic set $S$ is {\em basic open} (resp. {\em basic closed}) if there exist $f_1,\dots,f_r\in {\cal R}(X)$ such that $$S=\{x\in X:f_1(x)>0,\dots,f_r(x)>0\}$$ $$({\it resp.}\; S=\{x\in X:f_1(x)\geq 0,\dots,f_r(x)\geq 0\})$$ \end{defn} \begin{defn} A semialgebraic set $S\subset X$ is {\em generically basic} if there exists a Zariski closed set $C\subset X$, with ${\rm dim}(C)\leq 1$, such that $S\setminus C$ is basic open. \end{defn} Denote by $\partial _{\rm z} S$ the Zariski closure of the set $\partial(S)=(\overline S\setminus{\rm Int}(S))\cap{\rm Reg}(X)$. It is known that in dimension 2 {\em basic} and {\em generically basic} have almost the same meaning (see [Br]). We give here a direct proof of this fact. \begin{lem} Let $S$ be an open semialgebraic set in $X$. If $S$ is generically basic, then $S\cap \partial _{\rm z} S$ is a finite set. \end{lem} \begin{pf} Suppose there exist $f_1,\dots,f_s\in {\cal R}(X)$ and an algebraic set $C\subset X$ such that ${\rm dim}(C)\leq 1$ and $$S\setminus C=\{f_1>0,\dots,f_s>0\}.$$ Suppose also, by contradiction, that there is an irreducible component $H$ of $\partial _{\rm z} S$ such that ${\rm dim}(H\cap S)=1$. Let ${\frak p}$ be the ideal of $H$ in ${\cal R}(X)$ and pick any $x_0\in {\rm Reg}(H)$. Then ${\cal R}(X)_{x_0}$ (the localization of ${\cal R}(X)$ at the maximal ideal ${\frak m}$ of $x_0$, so ${\frak p}\subset{\frak m}$) is a factorial ring and ${\rm ht}({\frak p}{\cal R}(X)_{x_0})=1$, hence ${\frak p}{\cal R}(X)_{x_0}=h{\cal R}(X)_{x_0}$ for some $h\in {\frak p}$. 
Take $g_1,\dots,g_r\in{\cal R}(X)$ such that ${\frak p}{\cal R}(X)=(g_1,\dots,g_r)$; then there exist $\lambda_i,s_i\in{\cal R}(X)$ with $s_i(x_0)\not= 0$ (in particular, $s_i\not\in{\frak p}$) such that $s_ig_i=\lambda_ih$, for $i=1,\dots,r$.\\ \indent Consider $U=X\setminus\{s_1\cdots s_r=0\}$, which is Zariski open in $X$. Then we have ${\frak p}{\cal R}(X)_x=(h){\cal R}(X)_x$ for all $x\in U_1=U\cap{\rm Reg}(H)$.\\ \indent Take now $x_0\in U_1$; we have $f_j=\rho_jh^{\alpha_j}$ with $\alpha_j\geq 0$ and $\rho_j\in {\cal R}(X)_{x_0}$ such that $h$ does not divide $\rho_j$ (in particular, $\rho_j\not\in {\frak p}{\cal R}(X)_{x_0}$). Then $\rho_j=p_j/q_j$, $j=1,\dots,s$, where $p_j,q_j\in{\cal R}(X)$ and $p_j(x_0)\not=0,q_j(x_0)\not=0$. Let $U_2$ be the Zariski open set $U_1\setminus\{q_1\cdots q_s=0\}\subset H$; then for all $x\in U_2$, $q_jf_j=p_jh^{\alpha_j}$, $p_j,q_j$ do not change sign in a neighbourhood of $x$, for $j=1,\dots,s$, and $h$ changes sign in a neighbourhood of $x$, since locally $h$ is a parameter of $H$ in $x$. Hence for any $j=1,...,s$, $f_j$ changes sign in a neighbourhood of $x$ if $\alpha_j$ is odd and does not change sign if $\alpha_j$ is even.\\ \indent Using the fact that ${\rm dim}(S\cap H)=1$, ${\rm dim}(C)\leq 1$ and $s_i,p_j,q_j\not\in{\frak p}$, there exists a Zariski dense open set ${\mit\Omega}$ in $U_2$ such that $f_j$ does not change sign through ${\mit\Omega}$, for all $j=1,\dots,s$; hence $\alpha_j$ is even for all $j=1,\dots,s$. But also there is a Zariski dense open set ${\mit\Omega}'$ in $U_2$ such that ${\mit\Omega}'\subset \overline S\setminus S$; then there is $l\in \{1,\dots,s\}$ such that $f_l$ changes sign through ${\mit\Omega}'$, and $\alpha_l$ is odd, which is impossible. \end{pf} \begin{prop} Let $S$ be a semialgebraic set in $X$.\\ \indent (1) Let $S$ be open. 
Then $S$ is generically basic if and only if there exist $p_1,\dots,p_l\in \partial _{\rm z} S$ such that $S\setminus\{p_1,\dots,p_l\}$ is basic open.\\ \indent (2) $S$ is basic open if and only if $S$ is generically basic and $S\cap\partial _{\rm z} S=\emptyset$.\\ \indent (3) Let $S$ be closed. Then $S$ is generically basic if and only if $S$ is basic closed. \end{prop} \begin{pf} First we prove {\it (1)}. The ``if part'' is trivial. Suppose $S$ to be open and generically basic, that is, there are $f_1,...,f_r\in{\cal R}(X)$ and an algebraic set $C\subset X$ with ${\rm dim}(C)\leq 1$ such that $$S\setminus C=\{f_1>0,\dots,f_r>0\}.$$ \noindent We suppose first that $C$ is a curve and we can also suppose that $\{f_1\cdots f_r=0\}\subset C$.\\ \indent $C\cap S$ is an open semialgebraic set in $C$; then, using [Rz, 2.2], there exists $g_1\in {\cal R}(C)$ such that $$C\cap S=\{x\in C:g_1(x)>0\}$$ $$\overline{C\cap S}=\{x\in C:g_1(x)\geq 0\}$$ \noindent Choose $g_1$ to be the restriction of a regular function $g\in {\cal R}(X)$.\\ \indent For each $i=1,\dots,r$ consider the open sets $B_i=\{x\in X:f_i(x)<0\}$ \noindent and the closed sets $T_i=(\overline S\cap\{g\leq 0\})\cup(\overline{B_i}\cap\{g\geq 0\}).$\\ \indent Applying [BCR, 7.7.10] to $T_i$, $f_i$ and $g$ for $i=1,...,r$, we can find $p_i,q_i\in {\cal R}(X)$, with $p_i>0$, $q_i\geq 0$, such that\\ \indent {\em (i)} $F_i=p_if_i+q_ig$ has the same sign as $f_i$ on $T_i$;\\ \indent {\em (ii)} The zero set $Z(q_i)$ of $q_i$ verifies $Z(q_i)={\rm Adh}_{\rm z}(Z(f_i)\cap T_i)$.\\ We remark the following:\\ \indent {\em a)} $F_i(S\setminus C)>0$ for $i=1,\dots,r$, since $F_i$ has the same sign as $f_i$ on $T_i\cap S$ and outside $T_i$ it is the sum of a strictly positive function and a nonnegative one.\\ \indent {\em b)} $F_i(B_i)<0$ for $i=1,\dots,r$, for the same reasons.\\ \indent {\em c)} $Z(q_i)\subset \partial _{\rm z} S$. 
In fact, denote $$Z_1^i=Z(f_i)\cap \overline S\cap\{g\leq 0\}$$ $$Z_2^i=Z(f_i)\cap \overline{B_i}\cap\{g\geq 0\}$$ \noindent then we have $Z(q_i)={\rm Adh}_{\rm z}(Z_1^i)\cup{\rm Adh}_{\rm z}(Z_2^i)\subset Z(f_i)\subset C.$ Since $g$ is positive on $C\cap S$ and $Z_1^i\subset C\cap \{g\leq 0\}$, we have $Z_1^i\cap S=\emptyset$, hence $Z_1^i\subset \partial(S)$. As for $Z_2^i$: since $B_i\cap S=\emptyset$ and $S$ is open we have $\overline{B_i}\cap S=\emptyset$. Moreover $C\cap \overline{B_i}\subset\{g\leq 0\}$, hence $Z_2^i\subset\{g=0\}\cap C\subset \overline S$, then $Z_2^i\subset\partial(S)$.\\ \indent From these remarks, denoting $Z=\bigcup_{i=1}^rZ(q_i)$, we have $$S\setminus Z=\{F_1>0,\dots,F_r>0\}.$$ \noindent In fact, if $x\in S\setminus C$ then $F_i(x)>0$ for $i=1,...,r$; if $x\in (C\cap S)\setminus Z$ then $f_i(x)\geq 0$, $q_i(x)\geq 0$ and $g(x)>0$, hence $F_i(x)>0$ for $i=1,...,r$. Conversely, suppose $x\not\in S\setminus Z$; then $x\in (X\setminus S)\cup(S\cap Z)$: if $x\in X\setminus S$ there is $l\in \{1,\dots,r\}$ such that $f_l(x)\leq 0$, and we can have $x\not\in C$ or $x\in C$; in the first case $f_i(x)\not= 0$ for all $i$, so $x\in B_l$ and $F_l(x)<0$; in the second case $g(x)\leq 0$, $q_l(x)\geq 0$, so $F_l(x)\leq 0$; if $x\in S\cap Z$ there is $l\in \{1,\dots,r\}$ such that $q_l(x)=0$, then $f_l(x)=0$ and $F_l(x)=0$. In any case, there is $l$ such that $F_l(x)\leq 0$ if $x\not\in S\setminus Z$.\\ \indent By 1.3 and remark {\em c)} above we have that there exist $p_1,\dots,p_l\in\partial _{\rm z} S$ such that $\bigcup_{i=1}^rZ(q_i)\cap S=\{p_1,\dots,p_l\}$, hence $$S\setminus\{p_1,\dots,p_l\}=\{F_1>0,\dots,F_r>0\}.$$ \indent If $C=\{a_1,\dots,a_m\}$ is a finite set we have to check that we can throw out from $C$ all the $a_i$ which do not lie in $\partial _{\rm z} S$. 
This can be done as before by taking the function 1 in place of $g$ and putting $T_i=\overline{B_i}$.\\ \indent From {\it (1)} we have immediately {\it (2)}, because if $S$ is generically basic and $\partial _{\rm z} S\cap S=\emptyset$, $S$ is basic open, since following the proof above we have $Z(q_i)\cap S=\emptyset$ for $i=1,...,r$. Conversely, if $S$ is basic open then it is generically basic and $\partial _{\rm z} S\cap S=\emptyset$ (because $\partial _{\rm z} S\subset\{f_1\cdots f_r=0\}$ if $S=\{f_1>0,\dots,f_r>0\}$).\\ \indent Finally we prove {\it (3)}. The ``if part'' is trivial. Then suppose $S$ to be closed and generically basic, i.e. $$S\setminus C=\{f_1>0,\dots,f_r>0\}$$ \noindent with $f_1,\dots,f_r\in {\cal R}(X)$ and ${\rm dim}(C)<2$. We can suppose $\{f_1\cdots f_r=0\}\subset C$.\\ \indent $C\cap S$ is a closed semialgebraic set in $C$; by [Rz, 2.2], there is $g_1\in {\cal R}(C)$ such that $$C\cap S=\{x\in C:g_1(x)\geq 0\}.$$ \noindent Take $g\in {\cal R}(X)$ as above, $f\in {\cal R}(X)$ a positive equation of $C$ and $T=S\cap \{g\leq 0\}$. Applying again [BCR, 7.7.10] to $T$, $f$ and $g$, we find $p,q\in{\cal R}(X)$ with $p>0$, $q\geq 0$ such that\\ \indent {\em (i)} $h=pf+qg$ has the same sign as $f$ on $T$;\\ \indent {\em (ii)} $Z(q)={\rm Adh}_{\rm z}(Z(f)\cap T)$.\\ Notice that $h(S)\geq 0$ and $Z(f)\cap T=C\cap S\cap\{g\leq 0\}\subset\{g=0\}\cap C$, because $C\cap S\subset\{g\geq 0\}$; but $\{g=0\}\cap C$ is a finite set contained in $S$, hence $Z(q)$ is a finite set contained in $S$.\\ \indent We will prove that $$S=\{f_1\geq 0,\dots,f_r\geq 0,h\geq 0\}.$$ \noindent In fact, if $x\in S\setminus C$ then $f_i(x)>0$ for all $i$ and $h(x)>0$; if $x\in S\cap C$ then $f_i(x)\geq 0$ for all $i$ (because ${\rm dim}(C)<2$) and $h(x)\geq 0$; hence $S\subset\{f_1\geq 0,\dots,f_r\geq 0,h\geq 0\}$. 
Otherwise suppose $x\not\in S$. If $x\not\in C$, then $x\not\in S\setminus C$, so there is $l\in \{1,...,r\}$ such that $f_l(x)\leq 0$; moreover $f_i(x)\not= 0$ for all $i=1,...,r$, hence $f_l(x)<0$. If $x\in C\setminus (C\cap S)$, then $f(x)=0$, $q(x)\not= 0$ and $g(x)<0$, hence $h(x)=q(x)g(x)<0$. \end{pf} \begin{rmks} {\em (1)} {\rm Let $S$ be a closed semialgebraic set. Then $S$ is basic closed if and only if $S\setminus\partial _{\rm z} S$ is basic open.\\ \indent In fact, if $S$ is basic closed then it is generically basic, hence $S\setminus\partial _{\rm z} S$ is generically basic and as $(S\setminus\partial _{\rm z} S)\cap\partial _{\rm z}(S\setminus\partial _{\rm z} S)=\emptyset$, $S\setminus\partial _{\rm z} S$ is basic; conversely, if $S\setminus\partial _{\rm z} S$ is basic open then $S$ is basic closed, since $S$ is closed.} \indent {\em (2)} {\rm Let $S$ be a semialgebraic set, and let $S^\ast$ denote the set ${\rm Int}(\overline S)$. Then, $S$ is basic open if and only if $S^\ast$ is generically basic and $S\cap\partial _{\rm z} S=\emptyset$.\\ \indent In fact, $S$ and $S^\ast$ are generically equal.} \end{rmks} \section{Basicness and sign distributions} We recall some definitions and results from [ABF]. Let $X$ be a compact, non singular, real algebraic surface and $Y\subset X$ an algebraic curve.\\ \indent Consider a {\em partial sign distribution} $\sigma$ on $X\setminus Y$, which gives the sign $1$ to some connected components of $X\setminus Y$ (whose union is denoted by $\sigma^{-1}(1)$) and the sign $-1$ to some others (whose union is denoted by $\sigma^{-1}(-1)$). So $\sigma^{-1}(1)$ and $\sigma^{-1}(-1)$ are disjoint open semialgebraic sets in $X$. \begin{defns} {\em (1)} A sign distribution $\sigma$ is {\em completable} if $\sigma^{-1}(1)$ and $\sigma^{-1}(-1)$ can be separated by a regular function, i.e. there is $f\in{\cal R}(X)$ such that $f(\sigma^{-1}(1))>0$, $f(\sigma^{-1}(-1))<0$ and $f^{-1}(0)\supset Y$. Briefly, we say that {\em $f$ induces $\sigma$}. 
{\em (2)} A sign distribution $\sigma$ is {\em locally completable} at a point $p\in Y$ if there is $f\in{\cal R}(X)$ such that $f$ induces $\sigma$ on a neighbourhood of $p$. {\em (3)} An irreducible component $Z$ of $Y$ is a {\em type changing component} with respect to $\sigma$ if there exist two nonempty open sets ${\mit\Omega}_1,{\mit\Omega}_2\subset Z\cap{\rm Reg}(Y)$ such that\\ \hspace{.2in} (a) ${\mit\Omega}_1\subset\overline{\sigma^{-1}(1)}\cap\overline{\sigma^{-1}(-1)}$,\\ \hspace{.2in} ($b_+$) ${\mit\Omega}_2\subset {\rm Int}(\overline{\sigma^{-1}(1)})$ \hspace{.2in} or \hspace{.2in} ($b_-$) ${\mit\Omega}_2\subset {\rm Int}(\overline{\sigma^{-1}(-1)})$.\\ If ($b_+$) (resp. ($b_-$)) holds we say that $Z$ is {\em positive type changing} (resp. {\em negative type changing}) with respect to $\sigma$. {\em (4)} An irreducible component $Z$ of $Y$ is a {\em change component} if there exists a nonempty open set ${\mit\Omega}\subset Z\cap{\rm Reg}(Y)$ verifying (a). \end{defns} Completable sign distributions are characterized by the following theorem. \begin{thm} {\rm (See [ABF, 1.4 and 1.7])} Denote by $Y^c$ the union of the change components of $Y$ with respect to $\sigma$. Then $\sigma$ is completable if and only if\\ \indent {\em (1)} $Y$ has no type changing components with respect to $\sigma$;\\ \indent {\em (2)} $\sigma$ is locally completable at any point $p\in{\rm Sing}(Y)$;\\ \indent {\em (3)} There exists an algebraic curve $Z\subset X$ such that $Z\cap(\sigma^{-1}(1)\cup\sigma^{-1}(-1))=\emptyset$ and $[Z]=[Y^c]$ in ${\rm H}_1(X,{\Bbb Z}_2)$.\\ \indent Moreover condition {\em (2)} becomes condition {\em (1)} after the blowings-up of the canonical desingularization of $Y$, namely each point where $\sigma$ is not locally completable corresponds to at least one type changing component of the exceptional divisor with respect to the lifted sign distribution. 
\end{thm} \begin{prop} {(\rm See [ABF, ``the procedure''])} Condition {\em (2)} of Theorem 2.2 can be tested, without performing the blowings-up, by an algorithm that only uses the Puiseux expansions of the branches of $Y$ at its singular points. \end{prop} In fact, there are two algorithms: \noindent \fbox{A1} (see [ABF, 2.4.19]) Given a branch $C$ of an algebraic curve through a point $p_0$ and an integer $\rho>0$, it is possible to find explicitly, in terms of the Puiseux expansion of $C$, the irreducible Puiseux parametrizations of all analytic arcs $\gamma$ through $p_0$ with the following properties:\\ \indent a) Denoting respectively by $\gamma_{\rho}$ and by $C_{\rho}$ the strict transforms of $\gamma$ and $C$ after $\rho$ blowings-up in the standard resolution of $C$ and by $D_{\rho}$ the exceptional divisor arising at the last blowing-up, then $\gamma_{\rho-1}$ is parametrized by \[ \left\{ \begin{array}{l} x=t\\ y=at+\cdots\end{array}\right. \] \noindent and $C_{\rho-1}\cap\gamma_{\rho-1}=0\in D_{\rho-1}$.\\ \indent b) $\gamma_{\rho-1}$ and $C_{\rho-1}$ have distinct tangents at 0. \noindent \fbox{A2} (see [ABF, solution to problem 2]) Given an analytic arc $\gamma$ and a region of ${\Bbb R}^2$ bounded by two analytic arcs $\gamma_1,\gamma_2$, with $\gamma,\gamma_1,\gamma_2$ through $0\in {\Bbb R}^2$, it is possible to decide, looking at the Puiseux parametrizations, whether $\gamma$ crosses the region or not. Using \fbox{A1} and \fbox{A2} one can decide whether $D_{\rho}$ is positive or negative type changing with respect to the lifted sign distribution $\sigma_{\rho}$ without performing the blowings-up, because for the family of arcs given by \fbox{A1}, whose strict transforms are transversal to $D_{\rho}$, we can decide by \fbox{A2} whether or not $\sigma$ changes sign along some elements of the family and whether or not $\sigma$ has constant positive or constant negative sign along some other ones.\\ Now let $S$ be an open semialgebraic set. 
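As a simple illustration of the notions in 2.1 (a standard example, in the spirit of Lojasiewicz's example recalled in the Introduction), work in an affine chart of $X$ and let $Y=\{y=0\}$. The distribution $\sigma$ with $\sigma^{-1}(1)=\{y>0\}$ and $\sigma^{-1}(-1)=\{y<0\}$ is completable, being induced by the regular function $f(x,y)=y$. On the other hand, the distribution $\sigma'$ with $$\sigma'^{-1}(1)=\{y>0\}\cup\{y<0,\,x>0\},\qquad \sigma'^{-1}(-1)=\{y<0,\,x<0\}$$ \noindent makes $Y$ a positive type changing component: condition (a) of 2.1 holds on ${\mit\Omega}_1=\{y=0,\,x<0\}$ and condition ($b_+$) on ${\mit\Omega}_2=\{y=0,\,x>0\}$. No regular function can induce $\sigma'$: arguing as in the proof of 1.3, such a function should vanish along $Y$ with odd multiplicity near ${\mit\Omega}_1$ and even multiplicity near ${\mit\Omega}_2$, which is impossible.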
\begin{nott} {\rm Let $A_1,\dots,A_t$ be the connected components of $X\setminus(S\cup\partial _{\rm z} S)$. For each $i=1,\dots,t$ we denote by $\sigma_i^S$ (or simply $\sigma_i$ when there is no risk of confusion) the following sign distribution on $X\setminus\partial _{\rm z} S$: \begin{eqnarray*} (\sigma_i^S)^{-1}(1)&=&S\setminus\partial _{\rm z} S\\ (\sigma_i^S)^{-1}(-1)&=&A_i \end{eqnarray*} } \end{nott} \begin{lem} Let $S$ be a semialgebraic set and $S^\ast={\rm Int}(\overline S)$. Then, ${\rm dim}(S^\ast\cap\partial _{\rm z} S^\ast)=1$ if and only if there exists $i\in\{1,\dots,t\}$ such that $\partial _{\rm z} S$ has a positive type changing component with respect to $\sigma_i^S$. \end{lem} \begin{pf} Suppose ${\rm dim}(S^\ast\cap\partial _{\rm z} S^\ast)=1$; then there is an irreducible component $H$ of $\partial _{\rm z} S^\ast$ such that ${\rm dim}(H\cap S^\ast)=1$. So we can find an open 1-dimensional set ${\mit\Omega}\subset H\cap S^\ast$, hence $${\mit\Omega}\subset{\rm Int}(\overline S)={\rm Int}(\overline{S\setminus\partial _{\rm z} S})={\rm Int}(\overline{\sigma_i^{-1}(1)})$$ \noindent for each $i=1,\dots,t$. And we can find another open 1-dimensional set ${\mit\Omega}'\subset\partial(S^\ast)\cap H$ such that $${\mit\Omega}'\subset\overline{S^\ast}=\overline S=\overline{S\setminus\partial _{\rm z} S}=\overline{\sigma_i^{-1}(1)}$$ \noindent for each $i=1,\dots,t$ and $${\mit\Omega}'\subset X\setminus S^\ast=\bigcup_{i=1}^t\overline{A_i}\; .$$ \noindent Then ${\mit\Omega}'\subset\bigcup_{i=1}^t(\overline{A_i}\setminus A_i)$, since $A_i\cap S=\emptyset$ and $S$ is open. Hence, there exist a 1-dimensional open set ${\mit\Omega}''\subset{\mit\Omega}'$ and $i_0\in\{1,...,t\}$ such that ${\mit\Omega}''\subset\overline{A_{i_0}}\setminus A_{i_0}$, because ${\rm dim}({\mit\Omega}')=1$ and we have a finite number of $A_i$. Hence ${\mit\Omega}''\subset\overline{\sigma_{i_0}^{-1}(1)}\cap\overline{\sigma_{i_0}^{-1}(-1)}$. 
But since $H$ is a 1-dimensional component of $\partial _{\rm z} S^\ast$ and $\partial _{\rm z} S^\ast \subset\partial _{\rm z} S$, $H$ is an irreducible component of $\partial _{\rm z} S$ of dimension 1; then if we take ${\mit\Omega}_1={\mit\Omega}''\cap{\rm Reg}(\partial _{\rm z} S)$ and ${\mit\Omega}_2={\mit\Omega}\cap{\rm Reg}(\partial _{\rm z} S)$, $H$ is a positive type changing component with respect to $\sigma_{i_0}$.\\ \indent Conversely, suppose that $H$ is an irreducible component of $\partial _{\rm z} S$ which is a positive type changing component with respect to some $\sigma_l$ ($l=1,...,t$). So there exist open sets ${\mit\Omega}_1,{\mit\Omega}_2\subset H$ of ${\rm Reg}(\partial _{\rm z} S)$ such that ${\mit\Omega}_1\subset\overline{\sigma_l^{-1}(1)}\cap\overline{\sigma_l^{-1}(-1)}$ and ${\mit\Omega}_2\subset{\rm Int}(\overline{\sigma_l^{-1}(1)})$. Then ${\mit\Omega}_2\subset S^\ast$ and ${\rm dim}(S^\ast\cap H)=1$. But ${\mit\Omega}_1\subset\overline{S\setminus\partial _{\rm z} S}=\overline {S^\ast}$ and ${\mit\Omega}_1\subset\overline{A_l}$, then ${\mit\Omega}_1\subset\overline{S^\ast}\setminus S^\ast$ because $X\setminus S^\ast=\bigcup\overline{A_i}$. So $H$ is an irreducible component of $\partial _{\rm z} S^\ast$, hence ${\rm dim}(S^\ast\cap\partial _{\rm z} S^\ast)=1$. \end{pf} \begin{prop} Let $S$ be a semialgebraic set in the sphere ${\Bbb S}^2$ such that $\partial _{\rm z} S\cap S=\emptyset$ (resp. a closed semialgebraic set in ${\Bbb S}^2$). Then $S$ is basic open (resp. basic closed) if and only if for each $i=1,\dots,t$, $\sigma_i^S$ is completable. \end{prop} \begin{pf} It suffices to prove the result for a semialgebraic set $S$ such that $\partial _{\rm z} S\cap S=\emptyset$, because if $S$ is closed, applying 1.5 we are done.\\ \indent Suppose $S$ to be basic open; then $S^\ast$ is generically basic and ${\rm dim}(S^\ast \cap\partial _{\rm z} S^\ast)<1$. 
By lemma 2.5 no irreducible component of $\partial _{\rm z} S$ is positive type changing with respect to $\sigma_i^S$ for each $i=1,...,t$. But also they cannot be negative type changing, because any curve in ${\Bbb S}^2$ divides ${\Bbb S}^2$ into connected components in such a way that none of them lies on both sides of a branch of the curve (because the curves in ${\Bbb S}^2$ have orientable neighbourhoods).\\ \indent If for some $i\in\{1,...,t\}$, $\sigma_i^S$ were not locally completable at some point $p\in\partial _{\rm z} S$, we could find (Theorem 2.2) a non singular surface $V$ together with a contraction $\pi:V\to{\Bbb S}^2$ of an algebraic curve $E\subset V$ to the point $p$, with $\pi^{-1}(\partial _{\rm z} S)$ normal crossing in $V$, such that an irreducible component $D$ of $E$ would be type changing with respect to $\sigma_i'=\sigma_i^S\cdot \pi$. But $p\not\in S\cup A_i$, so $\sigma_i'$ is defined on $V\setminus\pi^{-1}(\partial _{\rm z} S)$ by \begin{eqnarray*} (\sigma_i')^{-1}(1)=&\pi^{-1}(S)=&T\\ (\sigma_i')^{-1}(-1)=&\pi^{-1}(A_i)&; \end{eqnarray*} \noindent and as $\pi:V\setminus(\pi^{-1}(\partial _{\rm z} S))\to {\Bbb S}^2\setminus\partial _{\rm z} S$ is a biregular isomorphism, $\sigma_i'=\sigma_i^T$. So $T$ and $T^\ast$ are generically basic, being biregularly isomorphic to $S$, hence ${\rm dim}(\partial _{\rm z} T^\ast\cap T^\ast)<1$; so $D$ cannot be positive type changing by lemma 2.5. We have to exclude that $D$ is negative type changing. Suppose it is so; then we can find two open sets ${\mit\Omega},{\mit\Omega}'\subset D$ such that $\pi^{-1}(A_i)$ lies on both sides of ${\mit\Omega}$ and ${\mit\Omega}'$ divides $T$ from $\pi^{-1}(A_i)$. Then there exists an irreducible component $Z$ of $\partial _{\rm z} S$ such that its strict transform $Z'$ crosses $D$ between ${\mit\Omega}$ and ${\mit\Omega}'$. Then $Z'$ must cross $\pi^{-1}(A_i)$, because this open set is connected. This is impossible since $A_i$ does not lie on both sides of $Z$. 
Then no irreducible component of $\partial _{\rm z} S$ is type changing with respect to $\sigma_i^S$ and $\sigma_i^S$ is locally completable at any $p\in\partial _{\rm z} S$ for all $i$; so $\sigma_i^S$ is completable, since ${\rm H}_1({\Bbb S}^2,{\Bbb Z}_2)=0$ (2.2).\\ \indent Suppose now that $\sigma_i^S$ is completable for each $i=1,...,t$ and let $f_i\in{\cal R}({\Bbb S}^2)$ be a regular function inducing $\sigma_i^S$. Clearly $S\subset\{f_1>0,\dots,f_t>0\}.$ But if $x\not\in S$ then $x\in A_l$ for some $l=1,...,t$ or $x\in\partial _{\rm z} S$. If $x\in A_l$, then $f_l(x)<0$; if $x\in\partial _{\rm z} S$, then $f_i(x)=0$ for all $i=1,...,t$, since $f_i$ vanishes on $\partial _{\rm z} S$ by the very definition of completability. So $S=\{f_1>0,\dots,f_t>0\}$ and it is basic. \end{pf} Before proving an analogous result for a general surface we need a lemma. \begin{lem} Let $S\subset X$ be an open semialgebraic set such that $S=S^\ast$, $\partial _{\rm z} S\cap S=\emptyset$ and $\partial _{\rm z} S$ is normal crossing. Let $H$ be an irreducible component of $\partial _{\rm z} S$. Then there exists a non singular algebraic set $H'\subset X$ such that\\ \indent (a) $[H]=[H']$ in ${\rm H}_1(X,{\Bbb Z}_2)$,\\ \indent (b) $H$ and $H'$ are transversal,\\ \indent (c) $H'\cap S=\emptyset$. \end{lem} \begin{pf} $H$ is a smooth compact algebraic curve, composed of several ovals, and locally disconnects its neighbourhood; moreover $S$ is locally only on one side of $H$, because $S=S^\ast$ and $H\cap S^\ast=\emptyset$, at least outside a neighbourhood of the singular points lying in $H$, namely the points where $H$ crosses another component of $\partial _{\rm z} S$. At each of these points we have only two possible situations (see figures 2.7). 
From this it is clear how to construct a smooth differentiable curve $C_H$ with $[C_H]=[H]$ in ${\rm H}_1(X,{\Bbb Z}_2)$ such that $C_H$ is transversal to each irreducible component of $\partial _{\rm z} S$ and $C_H\cap S=\emptyset$, using a suitable tubular neighbourhood of $H$. \begin{center} \setlength{\unitlength}{1mm} \begin{picture}(100,40) \put(0,20){\line(1,0){40}} \put(60,20){\line(1,0){40}} \put(20,0){\line(0,1){40}} \put(80,0){\line(0,1){40}} \put(78,4){\makebox(0,0){$ _H$}} \put(18,4){\makebox(0,0){$ _H$}} \put(5,35){\makebox(0,0){$S$}} \put(65,35){\makebox(0,0){$S$}} \multiput(0,22)(0,2){5}{\line(1,0){19.5}} \multiput(10,32)(0,2){4}{\line(1,0){9.5}} \multiput(60,22)(0,2){5}{\line(1,0){19.5}} \multiput(70,32)(0,2){4}{\line(1,0){9.5}} \multiput(80.5,2)(0,2){9}{\line(1,0){19.5}} \put(23,2){\line(0,1){36}} \put(26,35){\makebox(0,0){$ _{C_H}$}} \put(89,35){\makebox(0,0){$ _{C_H}$}} \bezier{100}(85,39)(85,25)(80,20) \bezier{100}(80,20)(75,15)(75,1) \put(20,-4){\makebox(0,0){{\small Figure 2.7.a}}} \put(80,-4){\makebox(0,0){{\small Figure 2.7.b}}} \end{picture} \end{center} \vskip .2cm \indent Now we want to approximate $C_H$ by a nonsingular algebraic curve $H'$ with the same properties. To do this we use the fact that $[H]$ gives a strongly algebraic line bundle $\pi:E\rightarrow X$ on $X$ (see [BCR, 12.2.5]). For this line bundle, $H$ is the zero set of an algebraic section $h$ and $C_H$ is the zero set of a ${\cal C}^\infty$ section $c$. Let $Q_1,\dots,Q_k$ be the points in the set $H\cap C_H$. 
We can take a finite open covering $V_1,\dots,V_l$ of $X$ with the following properties:\\ \indent 1) $Q_i\in V_i$ for $i=1,...,k$ and $V_1,\dots,V_k$ are pairwise disjoint.\\ \indent 2) For each $j=1,...,l$ there exists an algebraic section $s_j$ of $E$ such that $s_j(x)\not=0$ for each $x\in V_j$ ($s_j$ generates $E_x$ for each $x\in V_j$).\\ \indent Take a ${\cal C}^\infty$ partition of unity $\{\varphi_1,\dots,\varphi_l\}$ associated to the covering, with the property that for $i=1,...,k$, $\varphi_i^{-1}(1)$ is a closed neighbourhood of $Q_i$ in $V_i$, $\varphi_i(Q_j)=0$ for $j\not= i$. For $j=1,...,l$ we can write $c\,\vline_{V_j}=\alpha_j s_j,$ \noindent with $\alpha_j\in {\cal C}^\infty(V_j)$ and $\alpha_j(Q_j)=0$ for $j=1,...,k$, because $s_j$ generates the fiber. Then we have $$c=\sum_{j=1}^l\varphi_jc=\sum_{j=1}^l(\varphi_j\alpha_j)s_j$$ \noindent and the smooth functions $\beta_j=\varphi_j\alpha_j\in{\cal C}^\infty(X)$ vanish at $Q_i$ for $i=1,...,k$.\\ \indent By a classical relative approximation theorem (see [BCR, 12.5.5]) we can approximate $\beta_j$ on $X$ by a regular function $f_j$ on $X$ vanishing at $\{Q_1,\dots,Q_k\}$. Then the algebraic section $$s=\sum_{j=1}^lf_js_j$$ \noindent has an algebraic zero set $H'$ passing through $Q_1,\dots,Q_k$. Moreover $H'\cap S=\emptyset$, because $H'$ is very close to $C_H$, and $[H']=[H]$ in ${\rm H}_1(X,{\Bbb Z}_2)$. \end{pf} Finally we have the following. \begin{thm} Let $X$ be a compact, non-singular, real algebraic surface and $S\subset X$ be a semialgebraic set with $\partial _{\rm z} S\cap S=\emptyset$ (resp. 
$S\subset X$ be a closed semialgebraic set).\\ \indent Then $S$ is basic if and only if for each $i=1,\dots,t$ the sign distribution $\sigma_i^S$ verifies the following two properties:\\ \indent (a) No irreducible component of $\partial _{\rm z} S$ is positive type changing with respect to $\sigma_i^S$.\\ \indent (b) No irreducible component of the exceptional divisor of a standard resolution \linebreak $\pi:V\to X$ of $\partial _{\rm z} S$ is positive type changing with respect to $\sigma_i'=\sigma_i^S\cdot\pi$. \end{thm} \begin{pf} It suffices to prove the theorem for $S$ with $S\cap\partial _{\rm z} S=\emptyset$, because this implies that $S$ is open and by remark 1.5.1 we are done for $S$ closed.\\ \indent The ``only if part'' is the same as for ${\Bbb S}^2$, without the argument proving there are no negative type changing components. For the ``if part'' we can reason as follows.\\ \indent Let $\pi:V\to X$ be the standard resolution of $\partial _{\rm z} S$. Denote by $Y$ the curve $\pi^{-1}(\partial _{\rm z} S)$ (see [EC] or [BK]); $Y$ is normal crossing in $V$. Each irreducible component of $Y$ is a non-singular curve consisting possibly of several ovals. Each of these ovals can have an orientable neighbourhood (in which case it is homologically trivial) or a non orientable neighbourhood isomorphic to the M\"obius band.\\ \indent Denote as usual by $\sigma_i'$ the sign distribution $\sigma_i^S\cdot \pi$ and consider the sets $T=\pi^{-1}(S)$ and $T^\ast={\rm Int}(\overline T)$. Conditions {\em (a)} and {\em (b)} for $\sigma_i^S$ imply that $Y$ has no positive type changing components with respect to $\sigma_i'$. 
So by 2.5, $\partial _{\rm z} T^\ast\cap T^\ast$ is a finite set, and as $\partial _{\rm z} T^\ast\subset Y$ has no isolated points and $T^\ast$ is open, we have $T^\ast\cap \partial _{\rm z} T^\ast=\emptyset$.\\ \indent Consider now the sign distributions $\sigma_j^{T^\ast}$ on $V\setminus \partial _{\rm z} T^\ast$ defined as before for $j=1,\dots,l$, where $l$ is the number of connected components of $V\setminus(T^\ast\cup\partial _{\rm z} T^\ast)$; clearly they do not have positive type changing components. Apply 2.7 to each irreducible component $H$ of $\partial _{\rm z} T^\ast$ being a change component for some $\sigma_j^{T^\ast}$. Then, we find a non-singular algebraic set $H'$ such that $H'\cap T^\ast=\emptyset$ and $[H]=[H']$ in ${\rm H}_1(V,{\Bbb Z}_2)$. The union of all $H'$ gives an algebraic set $Z\subset V$.\\ \indent Remark that if $H$ is a negative type changing component with respect to some $\sigma_j^{T^\ast}$ then this phenomenon occurs along an oval of $H$ whose neighbourhood is a M\"obius band, because in the other case the two sides of the oval would be in different connected components of $V\setminus\partial _{\rm z} T^\ast$. Take the sign distributions $\tau_k$ on $V\setminus(\partial _{\rm z} T^\ast\cup Z)$, $k=1,\dots,m$, defined by \begin{eqnarray*} (\tau_k)^{-1}(1)&=&T^\ast\\ (\tau_k)^{-1}(-1)&=&B_k, \end{eqnarray*} \noindent where $B_1,\dots,B_m$ are the connected components of $V\setminus (T^\ast\cup\partial _{\rm z} T^\ast\cup Z)$.\\ \indent We claim that $\tau_k$ is completable for each $k=1,...,m$. 
In fact, we prove that conditions (1), (2) and (3) of 2.2 are verified.\\ \indent (1) No irreducible component of $\partial _{\rm z} T^\ast\cup Z$ is neither positive nor negative type changing with respect to $\tau_k$: this is true because now the sign $-1$ can occur at most on one side of each irreducible component $H$ of $\partial _{\rm z} T^\ast\cup Z$ (a M\" obius band is divided by two transversal generators of its non-vanishing homology class into two connected components), so there are no negative type changing components with respect to $\tau_k$.\\ \indent (2) $\tau_k$ is completable at each point $p\in \partial _{\rm z} T^\ast\cup Z$. In fact, if $\partial _{\rm z} T^\ast\cup Z$ is normal crossing at $p$, since there are no type changing components, $\tau_k$ is locally completable at $p$. But, in general, $\partial _{\rm z} T^\ast\cup Z$ is not normal crossing. If $p_0$ is not normal crossing we have, by the construction in 2.7, two irreducible components $H,H_1$ of $\partial _{\rm z} T^\ast$ and one irreducible component $H'$ of $Z$, with $[H']=[H]$, meeting pairwise transversally at $p_0$. \begin{center}\setlength{\unitlength}{1mm}\begin{picture}(40,40) \put(0,20){\line(1,0){37}} \put(20,2){\line(0,1){35}} \put(35,5){\line(-1,1){30}} \put(40,20){\makebox(0,0){$ _{H_1}$}} \put(20,40){\makebox(0,0){$ _H$}} \put(38,2){\makebox(0,0){$ _{H'}$}} \put(10,10){\makebox(0,0){$+$}} \put(30,30){\makebox(0,0){$+$}} \put(30,15){\makebox(0,0){$-$}} \put(10,25){\makebox(0,0){$-$}} \put(20,-3){\makebox(0,0){\small Figure 2.8}} \end{picture} \end{center} \vskip .2cm \noindent Then by construction we have two signs $+1$ between $H$ and $H_1$ near $p_0$ (if not, $H'$ would not cross $H$) and at most two signs $-1$ between $H$ and $H'$ or between $H'$ and $H_1$ (see figure 2.8).
So $\tau_k$ is locally completable at $p_0$.\\ \indent (3) If $H$ is a change component for $\tau_k$, then $H\subset \partial _{\rm z} T^\ast$ and if $[H]\not= 0$ then for some irreducible component $Z_H$ of $Z$, $[H\cup Z_H]=0$ and $Z_H\cap T^\ast=\emptyset$, $Z_H\cap B_k=\emptyset$, by construction.\\ \indent So $\tau_k$ is completable for $k=1,...,m$. Let $P_k$ be a regular function inducing $\tau_k$. Then $$T^\ast\subset\{P_1>0,\dots,P_m>0\},$$ \noindent but if $x\not\in T^\ast$, then $x\in (\bigcup_{k=1}^mB_k)\cup\partial _{\rm z} T^\ast\cup Z$, hence at least one of the $P_k$ satisfies $P_k(x)\leq 0$. So $$T^\ast=\{P_1>0,\dots,P_m>0\}$$ \noindent then it is basic, hence $S$ is generically basic, but $S\cap\partial _{\rm z} S=\emptyset$, so, by 1.4, $S$ is basic. \end{pf} From 2.8 we find for surfaces a geometric proof of a general basicness characterization in [AR2]. We call {\em birational model} of a semialgebraic set $S$ any semialgebraic set obtained from $S$ by a birational morphism on $X$. \begin{cor} Let $X$ be a surface and $S\subset X$ a semialgebraic set. Then, $S$ is basic open if and only if $\partial _{\rm z} S\cap S=\emptyset$ and for each birational model $T$ of $S$ the set $\partial _{\rm z} T^\ast\cap T^\ast$ is finite. \end{cor} \begin{pf} By 1.4 we are done with the {\em only if} part. Suppose now that $\partial _{\rm z} S\cap S=\emptyset$ and that for each birational model $T$ of $S$ the set $\partial _{\rm z} T^\ast\cap T^\ast$ is finite. Then, $S$ is open and we will prove that it is basic open.\\ \indent Take a compactification of $X$ (for instance its closure in a projective space) and then take a non-singular birational model $X_1$ of $X$, obtained by a finite sequence of blowings-up along smooth centers. The strict transform $S_1$ of $S$ is a birational model of $S$.\\ \indent Consider now the standard resolution $\pi:X_2\to X_1$ of $\partial _{\rm z} S_1$ and take $S_2$ the strict transform of $S_1$ by $\pi$.
Then $\partial _{\rm z} S_2$ is normal crossing and $\partial _{\rm z} S_2^\ast\cap S_2^\ast$ is a finite set, because $S_2$ is a birational model of $S$. But $\partial _{\rm z} S_2^\ast\subset \partial _{\rm z} S_2$ has no isolated points (it is normal crossing) and $S_2^\ast$ is open, then $\partial _{\rm z} S_2^\ast\cap S_2^\ast=\emptyset$. So by 2.5 and 2.8, $S_2^\ast$ is basic open. Hence $S$ is generically basic and, as $\partial _{\rm z} S\cap S=\emptyset$, $S$ is basic open. \end{pf} \section{Geometric review of fans} For all notions of real algebra, real spectra, specialization, real valuation rings, etc., we refer to [BCR]. Only for the tilde operation we use a slightly different definition: for a semialgebraic set $S$ in an algebraic set $X$, $\tilde S$ is the constructible set of ${\rm Spec}_r ({\cal R}(X))$ (instead of ${\cal P}(X)$) defined by the same formula which defines $S$. The properties of this tilde operation are the same as the usual ones (see [BCR, chap.7]). Let $K$ be a real field. A subset $F=\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$ of ${\rm Spec}_r K$ is a {\em 4-element fan} (or simply a {\em fan}) if each $\alpha_i$ is the product of the other three, that is, for each $f\in K$ we have $$\alpha_i\alpha_j\alpha_k(f)=\alpha_l(f)$$ \noindent for all $\{i,j,k,l\}=\{1,2,3,4\}$, where $\alpha(f)$ denotes the sign ($1$ or $-1$) of $f$ in the ordering $\alpha\in {\rm Spec}_r(K)$.\\ \indent Given a fan $F$ we can find a valuation ring $V$ of $K$ such that\\ \indent a) Each $\alpha_i\in F$ is compatible with $V$; that is, the maximal ideal ${\frak m}_V$ of $V$ is $\alpha_i$-convex.\\ \indent b) $F$ induces at most two orderings in the residue field $k_V$ of $V$.\\ In this situation we say that $F$ trivializes along $V$ (see [BCR, chap.10] and [Br]). Let $X$ be a real algebraic set, and ${\cal K}(X)$ be the function field of $X$, that is a finitely generated real extension of ${\Bbb R}$. Denote $K={\cal K}(X)$.
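The defining identity of a fan can be checked mechanically: since each $\alpha(f)\in\{1,-1\}$, the condition $\alpha_i\alpha_j\alpha_k(f)=\alpha_l(f)$ for all $\{i,j,k,l\}=\{1,2,3,4\}$ is equivalent to asking that the product of the four signs be $+1$ for every $f$. The following Python sketch illustrates this, encoding an element abstractly by its valuation and the two residual signs it acquires after trivializing along a valuation ring; the encoding and the function names are an illustrative toy model, not notation from the paper.

```python
from itertools import permutations

def fan_signs(n, s1, s2):
    # Toy model: an element f = t^n * u (t a uniformizer) is encoded by
    # its valuation n and the signs s1, s2 (each +1 or -1) of the
    # residual unit u in the two induced residue orderings.  The four
    # orderings of the fan then assign f the signs below.
    return (s1, s2, (-1) ** n * s1, (-1) ** n * s2)

def satisfies_fan_identity(signs):
    # alpha_i * alpha_j * alpha_k(f) == alpha_l(f) for every choice of
    # {i, j, k, l} = {1, 2, 3, 4}; for signs in {+1, -1} this holds
    # exactly when the product of all four signs equals +1.
    return all(signs[i] * signs[j] * signs[k] == signs[l]
               for i, j, k, l in permutations(range(4)))
```

In this model every element satisfies the identity (its four signs multiply to $(-1)^{2n}=1$), whereas an arbitrary sign vector such as $(1,1,1,-1)$ does not.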
\begin{defn} A fan $F$ of $K$ is {\em associated to a real prime divisor $V$} if\\ \indent {\em (a)} $V$ is a discrete valuation ring such that $F$ trivializes along $V$.\\ \indent {\em (b)} The residue field $k_V$ of $V$ is a finitely generated real extension of ${\Bbb R}$ such that ${\rm dg.tr.}[K:{\Bbb R}]={\rm dg.tr.}[k_V:{\Bbb R}]+1$. \end{defn} \begin{rmk} {\rm Let $F$ be a non-trivial fan (i.e. the $\alpha_i$'s are distinct) associated to a real prime divisor $V$; then it induces two distinct orderings $\tau_1,\tau_2$ in $k_V$ ([BCR, 10.1.10]). If $F=\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$ we suppose that $\alpha_1,\alpha_3$ (resp. $\alpha_2,\alpha_4$) induce $\tau_1$ (resp. $\tau_2$) and we write this \[\begin{array}{ccccccccc} V& &\alpha_1& &\alpha_3& &\alpha_2& &\alpha_4\\ \downarrow& & &\searrow\swarrow& & & &\searrow\swarrow& \\ k_V& & &\tau_1& & & &\tau_2& \end{array}\] Conversely, let $\tau_1,\tau_2\in{\rm Spec}_r(k_V)$ be distinct, and let $t\in V$ be a uniformizer for $V$. Each nonzero $f\in K$ can be written as $f=t^nu$, where $n$ is the valuation of $f$ and $u$ is a unit in $V$. Denote by $\overline u$ the class of $u$ in $k_V$ and consider the orderings in $K$ defined as follows: \[\begin{array}{ccc} \alpha_1(f)=\tau_1(\overline u)&;&\alpha_3(f)=(-1)^n\tau_1(\overline u)\\ \alpha_2(f)=\tau_2(\overline u)&;&\alpha_4(f)=(-1)^n\tau_2(\overline u)\\ \end{array}\] \noindent They form a fan $F$ of $K$ associated to the real prime divisor $V$.\\ \indent We may consider $\tau_1,\tau_2\in{\rm Spec}_r(k_V)$ as elements of ${\rm Spec}_r(V)$ with ${\frak m}_V$ as support. Then we have that $\alpha_1,\alpha_3$ (resp. $\alpha_2,\alpha_4$) specialize to $\tau_1$ (resp.
$\tau_2$) in ${\rm Spec}_r(V)$.\\ \indent When $\alpha$ specializes to $\tau$, we write $\alpha\to \tau$.} \end{rmk} From now on we consider the field of rational functions ${\cal K}(X)$ of a compact non-singular real algebraic surface $X$, which is a finitely generated real extension of ${\Bbb R}$ with transcendence degree over ${\Bbb R}$ equal to 2. \begin{rmk} {\rm Let $F$ be a fan in ${\cal K}(X)$ associated to a real prime divisor $V$ of ${\cal K}(X)$. Then, ${\cal R}(X)\subset V$ (because $X$ is compact); consider the real prime ideal ${\frak p}={\cal R}(X)\cap{\frak m}_V$. We have that $V$ dominates ${\cal R}(X)_{\frak p}$ and there are two possibilities:\\ \indent 1) If the height of ${\frak p}$ is 1, it is the ideal of an irreducible algebraic curve $H\subset X$. Since $X$ is non-singular, ${\cal R}(X)_{\frak p}$ is a discrete valuation ring, which is dominated by $V$. Hence $V={\cal R}(X)_{\frak p}$ and $k_V$ is the function field ${\cal K}(H)$ of $H$; so $\tau_1,\tau_2\in{\rm Spec}_r({\cal K}(H))$.\\ \indent 2) If ${\frak p}$ is a maximal ideal, it is the ideal of a point $p\in X$, because $X$ is compact.} \end{rmk} \begin{defn} Let $F$ be a fan of ${\cal K}(X)$ associated to a real prime divisor $V$ and let ${\frak p}={\cal R}(X)\cap{\frak m}_V$. The {\em center of $F$} is the zero set $Z({\frak p})$ of ${\frak p}$.\\ \indent We say that {\em $F$ is centered at a curve (resp. a point)} if ${\frak p}$ has height 1 (resp. is maximal). \end{defn} \begin{lem} Let $S\subset X$ be an open semialgebraic set. Then the following facts are equivalent:\\ \indent {\em (i)} For each fan $F$ of ${\cal K}(X)$ centered at a curve, $\#(F\cap\tilde S)\not= 3$.\\ \indent {\em (ii)} $\partial _{\rm z} S$ has no positive type changing components with respect to the sign distributions $\sigma_i^S$ for $i=1,...,t$, defined in 2.4. \end{lem} \begin{pf} Suppose there is a fan $F$ centered at a curve $H\subset X$ such that $\#(F\cap\tilde S)=3$.
Then by remarks 3.2 and 3.3 we have:\\ \indent a) $F$ is associated to a real prime divisor $V={\cal R}(X)_{\frak p}$, where ${\frak p}$ is the ideal of $H$.\\ \indent b) If $F=\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$, then $\alpha_1,\alpha_3\to\tau_1\; ,\;\alpha_2,\alpha_4\to\tau_2$ in ${\rm Spec}_r(V)$, with $\tau_1\not=\tau_2$ and $\tau_1,\tau_2\in{\rm Spec}_r({\cal K}(H))$.\\ \indent Suppose $\alpha_1,\alpha_2,\alpha_3\in \tilde S$ and $\alpha_4\not\in\tilde S$. Remark that an element of ${\rm Spec}_r({\cal R}(X)_{\frak p})$ is a prime cone of ${\rm Spec}_r({\cal R}(X))$ whose support is contained in ${\frak p}$. So we can consider $\alpha_i,\tau_j\in{\rm Spec}_r({\cal R}(X))$ ($i=1,2,3,4$, $j=1,2$) with $\tau_1,\tau_2\in\tilde H$ and $\alpha_1,\alpha_3\to\tau_1\; ,\;\alpha_2,\alpha_4\to\tau_2$ in ${\rm Spec}_r({\cal R}(X))$.\\ \indent We have $\tau_1\in\overline{\tilde S}=\tilde{\overline S}$, because $\alpha_1,\alpha_3\in \tilde S$. But by [BCR, 10.2.8] there are precisely two prime cones different from $\tau_1$ in ${\rm Spec}_r({\cal R}(X))$ specializing to $\tau_1$, so they are $\alpha_1,\alpha_3$. And as $\alpha_1,\alpha_3,\tau_1\in\tilde{\overline S}$, we get that $\tau_1$ is an interior point of $\tilde{\overline S}$, so $\tau_1\in\tilde{S^\ast}$. This means $\tau_1\in\tilde H\cap\tilde{S^\ast}$, so ${\rm dim}(H\cap S^\ast)= 1$.\\ \indent Now $\tau_2\in\tilde{\overline S}$, because $\alpha_2\in\tilde S$ and $\alpha_2\to\tau_2$. Again by [BCR, 10.2.8], the prime cones specializing to $\tau_2$ and different from it are precisely $\alpha_2,\alpha_4$. Since $\alpha_4\not\in\tilde S$ and $\tilde S\cap{\rm Spec}_r({\cal K}(X))=\tilde{\overline S}\cap{\rm Spec}_r({\cal K}(X))$, we have that $\alpha_4\not\in\tilde{\overline S}$, so $\tau_2$ is not interior to $\tilde{\overline S}$, that is $\tau_2\not\in\tilde{S^\ast}$.
But $\overline S=\overline{S^\ast}$, then $$\tau_2\in\widetilde{\overline{S^\ast}\setminus S^\ast}\subset\widetilde{\partial _{\rm z} S^\ast} \,.$$ \noindent This implies that ${\rm dim}(H\cap\partial _{\rm z} S^\ast)=1$, then $H$ is an irreducible component of $\partial _{\rm z} S^\ast$. So ${\rm dim}(S^\ast\cap\partial _{\rm z} S^\ast)=1$ and by 2.5 $\partial _{\rm z} S$ has a positive type changing component with respect to $\sigma_i^S$ for some $i=1,...,t$.\\ \indent Conversely, let $H$ be an irreducible component of $\partial _{\rm z} S$ which is positive type changing with respect to $\sigma_i^S$ for some $i$. Then we can find open sets ${\mit\Omega}_1,{\mit\Omega}_2\subset H\cap{\rm Reg}(\partial _{\rm z} S)$ such that\\ \indent a) ${\mit\Omega}_1\subset\overline{\sigma_i^{-1}(1)}\cap\overline{\sigma_i^{-1}(-1)}=\overline S\cap\overline A_i$\\ \indent b) ${\mit\Omega}_2\subset{\rm Int}(\overline{\sigma_i^{-1}(1)})=S^\ast$\\ Let ${\frak p}$ be the ideal of $H$ in ${\cal R}(X)$ and $V$ be the discrete valuation ring ${\cal R}(X)_{\frak p}$. Consider two orderings $\tau_1,\tau_2\in{\rm Spec}_r({\cal K}(H))$, with $\tau_1\in \tilde{\mit\Omega}_1$, $\tau_2\in\tilde{\mit\Omega}_2$, and let $F$ be the fan defined by $\tau_1,\tau_2$ as in 3.2. So $\alpha_1,\alpha_3\to\tau_1\; ,\;\alpha_2,\alpha_4\to\tau_2$ in ${\rm Spec}_r({\cal R}(X))$, as before.\\ \indent We have $\alpha_2,\alpha_4\in\tilde S^\ast$, because $\tau_2\in\tilde S^\ast$ and $S^\ast$ is open. But $\alpha_2,\alpha_4\in{\rm Spec}_r({\cal K}(X))$ and $\tilde S\cap{\rm Spec}_r({\cal K}(X))=\tilde S^\ast\cap{\rm Spec}_r({\cal K}(X))$, then $\alpha_2,\alpha_4\in\tilde S$.\\ \indent On the other hand, $\tau_1\in \tilde{\overline S}\cap\tilde{\overline A_i}$. So there exist $\alpha\in\tilde S$ and $\beta\in\tilde A_i$ with $\alpha,\beta\to\tau_1$. Again by [BCR, 10.2.8] we must have $\alpha=\alpha_1$ and $\beta=\alpha_3$. So $\#(F\cap\tilde S)=3$.
\end{pf} \begin{lem} Let $S$ be an open semialgebraic set in $X$ such that $\partial _{\rm z} S^\ast\cap S^\ast$ is a finite set. Fix $p\in\partial S$. Then the following facts are equivalent:\\ \indent {\em (i)} For each fan $F$ centered at $p$, $\#(F\cap\tilde S)\not= 3$.\\ \indent {\em (ii)} For each contraction $\pi:X'\to X$ of a curve $E$ to the point $p$, no irreducible component of $E$ is positive type changing with respect to $\sigma_i'=\sigma_i^S\cdot \pi$, for $i=1,...,t$. \end{lem} \begin{pf} Suppose that there exists a contraction $\pi:X'\to X$ of a curve $E$ to the point $p$ and some $i\in\{1,...,t\}$ such that an irreducible component $H$ of $E$ is positive type changing with respect to $\sigma_i'$. But if $T=\pi^{-1}(S)$, then $(\sigma_i')^{-1}(1)=T$, $(\sigma_i')^{-1}(-1)=\pi^{-1}(A_i)$; moreover, as $p\not\in S$, the set $\{\pi^{-1}(A_i):i=1,...,t\}$ is precisely the set of connected components of $X'\setminus(T\cup\partial _{\rm z} T)$, then $\sigma_i'=\sigma_i^T$. Now by 3.5 there exists a fan $F=\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$ of ${\cal K}(X')$ with center the curve $H$ such that $\#(F\cap\tilde T)=3$. \\ \indent The contraction $\pi$ induces a field isomorphism $$\pi_*:{\cal K}(X)\to{\cal K}(X')$$ \noindent and an injective ring homomorphism $\pi_*|_{{\cal R}(X)}:{\cal R}(X)\to{\cal R}(X').$ Let $G$ be the fan of ${\cal K}(X)$, the inverse image of $F$ by $\pi_*$, namely $$G=\{\pi_*^{-1}(\alpha_1),\pi_*^{-1}(\alpha_2),\pi_*^{-1}(\alpha_3),\pi_*^{-1}(\alpha_4)\}\subset{\rm Spec}_r({\cal K}(X))$$ Then $\#(G\cap\tilde S)=3$ and we have to prove that $p$ is the center of $G$. Let $V$ be the real prime divisor associated to $F$, then $\pi_*^{-1}(V)=W$ is the real prime divisor associated to $G$, so $${\frak m}_W\cap{\cal R}(X)=\pi_*^{-1}({\cal J}(H))$$ \noindent where ${\cal J}(H)$ denotes the ideal of $H$ in ${\cal R}(X')$.
Hence, $${\frak m}_W\cap{\cal R}(X)={\cal J}(\pi(H))={\frak m}_p$$ \noindent with ${\frak m}_p$ the maximal ideal of $p$.\\ \indent Conversely, suppose that no irreducible component of $E$ is positive type changing with respect to $\sigma_i'=\sigma_i^S\cdot\pi$, for each contraction $\pi$ of a curve to $p$. Take a neighbourhood $U$ of $p$, homeomorphic to a disk in ${\Bbb R}^2$ ($X$ is non-singular), such that $U$ does not meet any irreducible component of $\partial _{\rm z} S$ unless it contains $p$. Consider the sign distributions $\delta_j$ in $X\setminus (\partial _{\rm z} S\cup \partial U)$ for $j=1,...,l$, defined by \begin{eqnarray*} \delta_j^{-1}(1)&=&U\cap S\\ \delta_j^{-1}(-1)&=&B_j \end{eqnarray*} \noindent where $B_1,\dots,B_l$ are the connected components of $U\setminus(S\cup\partial _{\rm z} S)$. As $\partial _{\rm z} S^\ast\cap S^\ast$ is a finite set, by 2.5 $\partial _{\rm z} S$ has no positive type changing components with respect to $\sigma_i^S$ for all $i=1,...,t$. We claim that $\partial _{\rm z}(U\cap S)$ has no type changing components at all. In fact, as $B_j\subset A_i$ for some $i$, there are no positive ones; $\partial U$ cannot be type changing because the signs may lie only on one side of it and no other component can be negative type changing, for the same reasons as in the proof of 2.6.\\ \indent Now if $\partial _{\rm z} S$ is normal crossing at $p$, by 2.2 $\delta_j$ is locally completable at $p$ for $j=1,...,l$. If not, consider the standard singularity resolution $\pi:X'\to X$ of $\partial _{\rm z} S$ at $p$ and the sign distributions \[\begin{array}{c} \sigma_i'=\sigma_i^S\cdot\pi,\;{\it for}\;i=1,...,t\\ \delta_j'=\delta_j\cdot\pi,\;{\it for}\;j=1,...,l \end{array} \] \noindent As no irreducible component of $\pi^{-1}(p)$ is positive type changing with respect to $\sigma_i'$ for all $i$, the same is true with respect to $\delta_j'$ ($j=1,...,l$).
Moreover, each such component has a M\"obius neighbourhood in $\pi^{-1}(U)$ where the sign minus can occur locally only on one side of the curve, so it cannot be negative type changing. Then, by 2.2 $\delta_j$ is locally completable at $p$ for each $j=1,...,l$.\\ \indent Hence, for each $j=1,...,l$, take $f_j\in{\cal R}(X)$ and an open set $U_j\ni p$, $U_j\subset U$, such that $f_j$ induces $\delta_j$ on $U_j$. Consider $A=\bigcap_{j=1}^lU_j$, then $$S\cap A=\{f_1>0,\dots,f_l>0\}\cap A.$$ \noindent In fact, by definition of locally completable we have $$S\cap A\subset \{f_1>0,\dots,f_l>0\};$$ \noindent but if $x\in A\setminus S$, then $x\in(\bigcup B_j)\cup\partial _{\rm z} S$, so there is a $j_0$ such that $f_{j_0}(x)\leq 0$.\\ \indent Let $F=\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$ be a fan centered at $p$. Clearly $F\subset\tilde A$, because each $\alpha_i$ specializes in ${\rm Spec}_r({\cal R}(X))$ to the prime cone having ${\frak m}_p$ as support, i.e. the unique prime cone giving to $f\in{\cal R}(X)$ the sign of $f(p)$ ([BCR, 10.2.3]). Suppose that $\alpha_1,\alpha_2,\alpha_3\in\tilde S$ and $\alpha_4\not\in\tilde S$; then $\alpha_1,\alpha_2,\alpha_3\in\tilde A\cap\tilde S$ and $\alpha_4\not\in\tilde A\cap\tilde S$. So for all $j=1,...,l$, $\alpha_i(f_j)>0$ for $i=1,2,3$ and there is $j_0$ such that $\alpha_4(f_{j_0})<0$. But this is impossible, because $F$ is a fan and we would have $\alpha_1\alpha_2\alpha_3(f_{j_0})\not=\alpha_4(f_{j_0})$. \end{pf} \begin{rmk} {\rm Let $S$ be an open semialgebraic set such that $\partial _{\rm z} S^\ast\cap S^\ast$ is a finite set. Let $p\in\partial _{\rm z} S$ be such that $p\not\in\partial S$ or $\partial _{\rm z} S$ is normal crossing at $p$. Then for each fan $F$ centered at $p$, $\#(F\cap\tilde S)\not= 3$.} \end{rmk} \begin{thm} {\rm (See [Br] and [AR1])} Let $X$ be a real irreducible algebraic surface. Let $S$ be a semialgebraic set such that $\partial _{\rm z} S\cap S=\emptyset$ (resp. a closed semialgebraic set).
Then, $S$ is basic open (resp. basic closed) if and only if for each fan $F$ of ${\cal K}(X)$ which is associated to a real prime divisor, $\#(F\cap\tilde S)\not=3$. \end{thm} \begin{pf} Suppose $S$ is basic open; then $$S=\{f_1>0,\dots,f_r>0\},\;{\rm with}\; f_1,\dots,f_r\in{\cal R}(X).$$ \noindent Let $F=\{\alpha_1,\alpha_2,\alpha_3,\alpha_4\}$ be a fan in ${\cal K}(X)$ and suppose that $\alpha_1,\alpha_2,\alpha_3\in \tilde S$ and $\alpha_4\not\in\tilde S$. Then for all $i=1,...,r$, $\alpha_j(f_i)>0$ ($j=1,2,3$) and there exists $i_0\in\{1,...,r\}$ such that $\alpha_4(f_{i_0})<0$; which is impossible because $F$ is a fan.\\ \indent Conversely, suppose that $S$ is not basic open. If $X$ is compact and non singular, by 2.8, 3.5 and 3.6 we are done. If not, take a birational model $X_1$ of $X$ obtained by compactifying and desingularizing $X$. Let $S_1$ be the strict transform of $S$ in $X_1$. Then $S_1$ is not basic open and $\partial _{\rm z}(S_1)\cap S_1=\emptyset$, because $S$ verifies these properties. So, by 2.8, 3.5 and 3.6, we can find a fan $F$ of ${\cal K}(X_1)$ associated to a real prime divisor such that $\#(F\cap\tilde S_1)=3$. Since ${\cal K}(X)$ and ${\cal K}(X_1)$ are isomorphic, $F$ gives a fan $G$ of ${\cal K}(X)$ such that $G$ is associated to a real prime divisor and $\#(G\cap\tilde S)=3$. \end{pf} \section{The algorithms} By 2.3 (see also [ABF]) there is an algorithmic method for checking properties {\em (a)} and {\em (b)} of 2.8; so we can decide algorithmically if a semialgebraic set $S$ with $\partial _{\rm z} S\cap S=\emptyset$ is basic open. This method works as follows:\\ \indent 1) It calculates ${\rm dim}(S^\ast\cap\partial _{\rm z} S^\ast)$ by techniques of cylindrical algebraic decomposition (C.A.D.), for instance (see [BCR]). If it is equal to 1, we know that $S$ is not basic.
If not, we continue with 2).\\ \indent 2) It decides if some irreducible component of the exceptional divisor of the standard resolution of $\partial _{\rm z} S$ is positive type changing using \framebox{A1} and \framebox{A2} at the points of $\partial S$ which are not normal crossing (remark that \framebox{A2} decides if the non-local completability at a point is due to a positive or a negative type changing component after some blowing-up). Moreover, from [Vz] we have a complete description of fans in ${\Bbb R}(x,y)$ associated to a real prime divisor.\\ \indent Consider the field ${\Bbb R}((u,v))$ of formal series in two variables over ${\Bbb R}$, with the ordering which extends $0^+$ in ${\Bbb R}((u))$ by $v>0$. Any ordering in ${\Bbb R}(x,y)$ is defined by an ordered ${\Bbb R}$-homomorphism $\psi:{\Bbb R}(x,y)\to{\Bbb R}((u,v))$ (see [AGR]). So a non-trivial fan $F$ is given by four homomorphisms $\psi_1,\psi_2,\psi_3,\psi_4$. More precisely: \begin{thm} {\rm (See [Vz])} Let $F$ be a fan in ${\Bbb R}(x,y)$. Then $F$ is described as follows:\\ \indent {\em 1)} If $F$ has as center an irreducible curve $H\subset{\Bbb S}^2$ and if $P(x,y)\in{\Bbb R}[x,y]$ is a polynomial generating the ideal ${\cal J}(H)\subset{\Bbb R}[x,y]$ of the image of $H$ by a suitable stereographic projection, then $$\psi_i:{\Bbb R}(x,y)\to {\Bbb R}((t,z)),\; for\; i=1,2,3,4$$ \noindent are defined (possibly interchanging $x$ and $y$) as follows: \[\left\{ \begin{array}{l} \psi_1(x)=a_1+\delta t^N\\ \psi_1(y)=a_2+\sum_{i\geq 1}c_it^{n_i}+z \end{array}\right. \qquad \left\{\begin{array}{l} \psi_2(x)=b_1+\delta't^M\\ \psi_2(y)=b_2+\sum_{i\geq 1}d_it^{m_i}+z \end{array}\right.\] \[\left\{ \begin{array}{l} \psi_3(x)=a_1+\delta t^N\\ \psi_3(y)=a_2+\sum_{i\geq 1}c_it^{n_i}-z \end{array}\right.
\qquad \left\{\begin{array}{l} \psi_4(x)=b_1+\delta't^M\\ \psi_4(y)=b_2+\sum_{i\geq 1}d_it^{m_i}-z \end{array}\right.\] \noindent where $(a_1+\delta t^N,\, a_2+\sum_{i\geq 1}c_it^{n_i})$ and $(b_1+\delta't^M,\, b_2+\sum_{i\geq 1}d_it^{m_i})$ are irreducible Puiseux parametrizations of two half-branches of $H$, centered respectively at $(a_1,a_2)$, $(b_1,b_2)$.\\ \indent {\em 2)} If $F$ is centered at a point $p\in{\Bbb S}^2$, we may suppose $p=(0,0)$ in a suitable stereographic projection, then $$\psi_i:{\Bbb R}(x,y)\to {\Bbb R}((z,t)),\; for\; i=1,2,3,4$$ \noindent are given by one of the following expressions:\\ \indent {\em a)} \[\left\{ \begin{array}{l} \psi_1(x)=t\\ \psi_1(y)=tw \phantom{-} \end{array}\right. \qquad \left\{\begin{array}{l} \psi_2(x)=tw'\phantom{-}\\ \psi_2(y)=t \end{array}\right.\] \[\left\{ \begin{array}{l} \psi_3(x)=-t\\ \psi_3(y)=-tw \end{array}\right. \qquad \left\{\begin{array}{l} \psi_4(x)=-tw'\\ \psi_4(y)=-t \end{array}\right.\] \noindent with $w\in\{z+a,-z+a:a\in{\Bbb R}\}$, $w'\in\{z,-z\}$.\\ \indent {\em b)} Up to interchanging $x$ and $y$, \[\left\{ \begin{array}{l} \psi_1(x)=\delta t^N\\ \psi_1(y)=\sum_{i=1}^sc_it^{n_i}+t^mw\phantom{(-1)^{n_i}(-1)^m} \end{array}\right. \qquad \left\{\begin{array}{l} \psi_2(x)=\delta t^N\\ \psi_2(y)=\sum_{i=1}^sc_it^{n_i}+t^mw'\phantom{(-1)^{n_i}(-1)^m} \end{array}\right.\] \[\left\{ \begin{array}{l} \psi_3(x)=(-1)^N\delta t^N\\ \psi_3(y)=\sum_{i=1}^s(-1)^{n_i}c_it^{n_i}+(-1)^mt^mw \end{array}\right. \qquad \left\{\begin{array}{l} \psi_4(x)=(-1)^N\delta t^N\\ \psi_4(y)=\sum_{i=1}^s(-1)^{n_i}c_it^{n_i}+(-1)^mt^mw' \end{array}\right.\] \noindent with $\delta\in\{1,-1\}$; $c_i\in{\Bbb R}$ for $i=1,...,s$; $N\leq n_1<n_2<\dots <n_s$, ${\rm g.c.d.}(N,n_1,...,n_s)=d$ and ${\rm g.c.d.}(d,m)=1$; and $w,w'\in\{z+a,-z+a,1/z,-1/z:a\in {\Bbb R}\}$, with $w\not= w'$ if $d$ is odd and $w\not= w',w\not= -w'$ if $d$ is even.
If $N=1$, $c_1=\dots=c_s=0$, then $w,w'\not\in\{1/z,-1/z\}$.\\ \indent {\em c)} Up to interchanging $x$ and $y$, \[\left\{ \begin{array}{l} \psi_1(x)=\delta t^N\\ \psi_1(y)=\sum_{i=1}^sc_it^{n_i}+t^mw \end{array}\right. \qquad \left\{\begin{array}{l} \psi_2(x)=(-1)^{N/d}\delta t^N\\ \psi_2(y)=\sum_{i=1}^s(-1)^{n_i/d}c_it^{n_i}+t^mw' \end{array}\right.\] \[\left\{ \begin{array}{l} \psi_3(x)=\delta t^N\\ \psi_3(y)=\sum_{i=1}^sc_it^{n_i}-t^mw \end{array}\right. \qquad \left\{\begin{array}{l} \psi_4(x)=(-1)^{N/d}\delta t^N\\ \psi_4(y)=\sum_{i=1}^s(-1)^{n_i/d}c_it^{n_i}-t^mw' \end{array}\right.\] \noindent with $\delta,c_i,N,n_i,w,w'$ as in {\em b)}, but $d$ always even and without supplementary conditions on $w,w'$. \end{thm} \begin{rmk} {\rm To each fan $F$ in ${\Bbb R}(x,y)$ centered at $(0,0)$ we can associate two families of arcs through $(0,0)$ which are parametrized by $z\in (0,\epsilon)$. In fact, for each fixed $z\in(0,\epsilon)$, $\psi_1$ and $\psi_3$ (resp. $\psi_2$ and $\psi_4$) define the two half-branches of the same curve germ $\gamma_1^z$ (resp. $\gamma_2^z$). These curves, $\gamma_1^z$ and $\gamma_2^z$, verify, one with respect to the other, conditions a) and b) of 2.3 (see [ABF, 2.19]).} \end{rmk} We want to know the relations between these two families of arcs associated to $F$ and the families of arcs of \framebox{A1} and \framebox{A2} (2.3). Let $C$ be a curve germ through $(0,0)\in{\Bbb R}^2$ and consider the standard resolution $\pi=\pi_N\cdots\pi_1$ of $C$ at $(0,0)$. For each $i=1,\dots,N$, denote by $C_i$ the curve $(\pi_i\cdots\pi_1)^{-1}(C)$, by $D_i$ the exceptional divisor arising during the $i^{th}$ blowing-up, and by $E_i$ the exceptional curve after $i$ blowings-up (i.e. $E_i=(\pi_i\cdots\pi_1)^{-1}(0,0)$). $D_i$ is an irreducible component of $E_i$.
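The signs that a fan of 4.1 assigns to a polynomial can be computed concretely: substitute each of the four parametrizations and read off the sign of the lowest-order coefficient in $t$ (the ordering extends $0^+$). The Python sketch below does this for case {\em 2-a)} with $w=w'=z$; replacing the infinitesimal $z$ by a small positive rational, as well as the helper names, are assumptions made only for this illustration, valid for the particular test polynomials used.

```python
from fractions import Fraction

def poly_mul(p, q):
    # Polynomials in t represented as {degree: coefficient} dicts.
    r = {}
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] = r.get(i + j, 0) + a * b
    return {d: c for d, c in r.items() if c != 0}

def poly_pow(p, n):
    r = {0: Fraction(1)}
    for _ in range(n):
        r = poly_mul(r, p)
    return r

def sign_at_0plus(f, x_t, y_t):
    # Substitute the arc (x(t), y(t)) into f = sum c_ij x^i y^j and
    # return the sign of the lowest-order nonzero coefficient in t,
    # i.e. the sign of f along the arc as t -> 0^+.
    total = {}
    for (i, j), c in f.items():
        for d, a in poly_mul(poly_pow(x_t, i), poly_pow(y_t, j)).items():
            total[d] = total.get(d, 0) + c * a
    total = {d: a for d, a in total.items() if a != 0}
    if not total:
        return 0
    return 1 if total[min(total)] > 0 else -1

# Case 2-a) with w = w' = z; the infinitesimal z is replaced by a
# small positive rational (assumed small enough for the tests below).
z = Fraction(1, 100)
one = Fraction(1)
arcs = [({1: one}, {1: z}),      # psi_1: x = t,    y = t*z
        ({1: z}, {1: one}),      # psi_2: x = t*z,  y = t
        ({1: -one}, {1: -z}),    # psi_3: x = -t,   y = -t*z
        ({1: -z}, {1: -one})]    # psi_4: x = -t*z, y = -t

def fan_signs_of(f):
    return [sign_at_0plus(f, xt, yt) for xt, yt in arcs]
```

For instance, $f=x-y$ receives the signs $(+,-,-,+)$, and the product of the four signs of any polynomial is $+1$, as the fan identity requires.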
\begin{defn} Let $F$ be a fan of ${\Bbb R}(x,y)$ with center $(0,0)\in{\Bbb R}^2$; for $i=1,...,N$ denote by $F_i$ the fan obtained from $F$ after $i$ blowings-up by lifting the orderings of $F$. We say that $F$ has the {\em property $\star(\rho)$} with respect to $C$ if it verifies:\\ \indent {\em a)} $F_{\rho-1}$ is centered at the point $0=C_{\rho-1}\cap D_{\rho-1}$.\\ \indent {\em b)} $F_{\rho-1}$ is described in the sense of 4.1 as in {\em 2-a)} or {\em 2-b)} with $N=1$, $c_1=\dots=c_s=0$.\\ \indent {\em c)} $C_{\rho-1}$ is not tangent to any curve of the two families associated to $F_{\rho-1}$. \end{defn} \begin{rmk} {\rm A fan $F$ verifies $\star(\rho)$ with respect to a curve $C$ if and only if for each $z\in(0,\epsilon)$ the curves $\gamma_1^z$ and $\gamma_2^z$ verify a) and b) of 2.3 with respect to $C$ (see also [ABF, 2.9]).} \end{rmk} Finally we have: \begin{thm} Let $S$ be an open semialgebraic set in $X$, such that $\partial _{\rm z} S\cap S=\emptyset$. Suppose that $\partial _{\rm z} S^\ast\cap S^\ast$ is a finite set and $S$ is not basic. Then, there exists an algorithmic method for finding a fan $F$ of ${\cal K}(X)$ with $\#(F\cap\tilde S)=3$. \end{thm} \begin{pf} By 2.5 $\partial _{\rm z} S$ has no positive type changing components with respect to $\sigma_i^S$; then by 2.8 there is at least one irreducible component $D_\rho$ of the exceptional divisor of a standard resolution $\pi$ of $\partial _{\rm z} S$ at a point $O=(0,0)$ which is positive type changing with respect to some $\sigma_i'=\sigma_i^S\cdot\pi$. By 2.3 we can find algorithmically two arcs $\gamma_1,\gamma_2$ with the properties a) and b) of 2.3 with respect to an irreducible component of $\partial _{\rm z} S$ through $p=\pi(D_\rho)$, for $\rho>0$, such that $\gamma_1$ joins two regions in $(\sigma_i^S)^{-1}(1)$, while $\gamma_2$ joins a region in $(\sigma_i^S)^{-1}(1)$ to a region in $(\sigma_i^S)^{-1}(-1)$.
Moreover, each $\gamma_i$ ($i=1,2$) is defined by open conditions.\\ \indent Then $(\gamma_1)_\rho$ and $(\gamma_2)_\rho$ are smooth arcs which meet $D_\rho$ transversally at different points $p,q\in D_\rho$ and $\rho$ is the first level in the resolution process at which $\gamma_1$ and $\gamma_2$ are separated. By [ABF, 2.19], $\gamma_1$ and $\gamma_2$ are parametrized by \[ \gamma_1:\left\{ \begin{array}{l} x=\delta t^N\\ y=\sum_{i=1}^sc_it^{n_i}+f(t) \end{array}\right. \qquad \gamma_2:\left\{\begin{array}{l} x=\delta' t^N\\ y=\sum_{i=1}^sd_it^{n_i}+g(t) \end{array}\right.\] \noindent where $\delta,\delta'\in\{1,-1\}$, $c_i,d_i\in{\Bbb R}$ are determined by [ABF, 2.19] as follows: $\delta'=\delta$ and $d_i=c_i$ or $\delta'=(-1)^{N/d}\delta$ and $d_i=(-1)^{n_i/d}c_i$, for $i=1,...,s$, with $d={\rm g.c.d.}(N,n_1,\dots,n_s)$; and $f(t)=at^m+...$, $g(t)=bt^m+...$ with $a\not= b$.\\ \indent We can construct four fans with the property $\star(\rho)$ with respect to $\gamma_1$ and $\gamma_2$ as follows:\\ \indent At the level $\rho$ of the resolution process we have four fans centered at $D_\rho$ in half-branches at $p$ and $q$ (Fig. 4.5), obtained by taking respectively a half-branch of $D_\rho$ at $p$ and another at $q$.
\begin{center}\setlength{\unitlength}{1mm}\begin{picture}(100,30) \multiput(20,2)(20,0){4}{\line(0,1){26}} \multiput(20,30)(20,0){4}{\makebox(0,0){$_{D_\rho}$}} \multiput(16,7)(0,1){3}{\line(4,1){4}} \multiput(56,7)(0,1){3}{\line(4,1){4}} \multiput(20,8)(0,1){3}{\line(4,-1){4}} \multiput(60,8)(0,1){3}{\line(4,-1){4}} \multiput(16,21)(0,1){3}{\line(4,-1){4}} \multiput(36,21)(0,1){3}{\line(4,-1){4}} \multiput(20,20)(0,1){3}{\line(4,1){4}} \multiput(40,20)(0,1){3}{\line(4,1){4}} \multiput(36,11)(0,1){3}{\line(4,-1){4}} \multiput(76,11)(0,1){3}{\line(4,-1){4}} \multiput(40,10)(0,1){3}{\line(4,1){4}} \multiput(80,10)(0,1){3}{\line(4,1){4}} \multiput(56,17)(0,1){3}{\line(4,1){4}} \multiput(76,17)(0,1){3}{\line(4,1){4}} \multiput(60,18)(0,1){3}{\line(4,-1){4}} \multiput(80,18)(0,1){3}{\line(4,-1){4}} \multiput(20,10)(20,0){4}{\circle*{1.5}} \multiput(20,20)(20,0){4}{\circle*{1.5}} \multiput(22,12)(40,0){2}{\makebox(0,0){$_p$}} \multiput(22,18)(20,0){2}{\makebox(0,0){$_q$}} \multiput(42,8)(40,0){2}{\makebox(0,0){$_p$}} \multiput(62,22)(20,0){2}{\makebox(0,0){$_q$}} \put(50,-1){\makebox(0,0){Figure 4.5}} \end{picture} \end{center} \indent Going back by the birational morphism $\pi_\rho\cdots\pi_1$ we obtain four fans centered at $O=(0,0)$ such that they all have the property $\star(\rho)$ with respect to $\gamma_1$ and $\gamma_2$. More precisely, applying again [ABF, 2.19] we can describe them in terms of 4.1: for every pair $\eta,\eta'\in\{1,-1\}$ we have one of these fans $F_{\eta,\eta'}$ and its associated arcs are \[ \gamma_1^z:\left\{ \begin{array}{l} x=\delta t^N\\ y=\sum_{i=1}^sc_it^{n_i}+(\eta z+a)t^m \end{array}\right. \qquad \gamma_2^z:\left\{\begin{array}{l} x=\delta' t^N\\ y=\sum_{i=1}^sd_it^{n_i}+(\eta'z+b)t^m \end{array}\right.\] \indent By the construction of $\gamma_1,\gamma_2$ it is easy to check that $\#(F_{\eta,\eta'}\cap\tilde S)=3$.
\end{pf} \begin{rmk} {\rm Under the hypotheses of the previous theorem there are in fact infinitely many fans $F$ of ${\cal K}(X)$ verifying $\#(F\cap\tilde S)=3$, because there are infinitely many pairs of arcs joining respectively two regions with sign $1$, and a region with sign $1$ with a region with sign $-1$.\\ \indent So we find only fans with $w,w'\in\{z+a,-z+a:a\in{\Bbb R}\}$, according to description 4.1.2. Otherwise, $(\gamma_1^z)_{\rho}$ or $(\gamma_2^z)_{\rho}$, for some $\rho$, would be tangent to $D_{\rho}$, but applying \framebox{A1} and \framebox{A2} as in [ABF] we take $\gamma_1$, $\gamma_2$ without this property. This means that if there exists a fan $F$ with $w\;{\rm or}\; w'\in\{1/z,-1/z\}$ and $\#(F\cap\tilde S)=3$, there is another fan $F'$ with $w,w'\not\in\{1/z,-1/z\}$ and $\#(F'\cap\tilde S)=3$.} \end{rmk} \section{Principal sets} Using the results of the previous sections, we obtain a simple characterization of principal open (resp. closed) sets. In order to preserve the unity of this paper we give all results about principal sets in dimension 2, but in fact they can be extended to arbitrary dimension following similar proofs (Remarks 5.9). Details can be found in [Vz].\\ \indent Let $X$ be a compact, non singular, real algebraic surface. \begin{defn} A semialgebraic set $S\subset X$ is {\em principal open} (resp. {\em principal closed}) if there exists $f\in{\cal R}(X)$ such that $$S=\{x\in X:f(x)>0\}$$ $$(resp.\; S=\{x\in X:f(x)\geq 0\})$$ \end{defn} \begin{defn} A semialgebraic set $S\subset X$ is {\em generically principal} if there exists a Zariski closed set $C\subset X$ with ${\rm dim} (C)\leq 1$ such that $S\setminus C$ is principal open. \end{defn} \begin{rmk} {\rm A semialgebraic set $S$ is principal closed if and only if $X\setminus S$ is principal open.\\ \indent Hence it suffices to work with principal open sets.} \end{rmk} \begin{nott} {\rm Let $S$ be an open semialgebraic set and $Y=\partial _{\rm z} S$.
We denote by $S^c$ the open semialgebraic set $X\setminus(S\cup Y)$ and by $A_1,\dots,A_t$ (resp. $B_1,\dots,B_l$) the connected components of $S^c$ (resp. $S\setminus Y$).\\ \indent Let $\sigma_i$ be the sign distributions $\sigma_i^S$ for $i=1,\dots,t$ and $\sigma_j^c$ be the sign distributions $\sigma_j^{S^c}$ for $j=1,\dots,l$ defined as in 2.4. Denote by $\delta$ the total sign distribution defined by \begin{eqnarray*} \delta^{-1}(1)&=&S\setminus Y\\ \delta^{-1}(-1)&=&S^c \end{eqnarray*} } \end{nott} \begin{rmk} {\rm A semialgebraic set $S$ such that $\partial _{\rm z} S\cap S=\emptyset$ is principal open if and only if the sign distribution $\delta$ is admissible (that is, there exists $f\in{\cal R}(X)$ such that $f$ induces $\delta$ on $X\setminus\partial _{\rm z} S$).} \end{rmk} \begin{thm} Let $S$ be a semialgebraic set such that $\partial _{\rm z} S\cap S=\emptyset$. Then $S$ is principal open if and only if $S^*\cap\partial _{\rm z} S^*$ and $(S^c)^*\cap\partial _{\rm z}(S^c)^*$ are finite sets. \end{thm} \begin{pf} Suppose that $S$ is principal; then $S$ and $S^c$ are basic, and by 2.8 no irreducible component of $\partial _{\rm z} S$ is positive type changing with respect to $\sigma_i$ and $\sigma_j^c$ for each $i=1,...,t$, $j=1,...,l$. Applying now 2.5 we have that $S^*\cap\partial _{\rm z} S^*$ and $(S^c)^*\cap\partial _{\rm z}(S^c)^*$ are finite sets.\\ \indent Conversely, suppose that $S^*\cap\partial _{\rm z} S^*$ and $(S^c)^*\cap\partial _{\rm z}(S^c)^*$ are finite sets; then by 2.5 again no irreducible component of $\partial _{\rm z} S$ is positive type changing with respect to $\sigma_i$ and $\sigma_j^c$ for $i=1,...,t$, $j=1,...,l$.\\ \indent Remark that an irreducible component $H$ of $\partial _{\rm z} S$ is positive (resp. negative) type changing with respect to $\delta$ if and only if $H$ is positive type changing with respect to $\sigma_i$ for some $i$ (resp.
$\sigma_j^c$ for some $j$).\\ \indent Hence no irreducible component of $\partial _{\rm z} S$ is type changing with respect to $\delta$. Denote by $Z^c$ the union of all the change components of $\partial _{\rm z} S$ and by $Z$ the set of points where $Z^c$ has dimension 1. So $[Z]=[Z^c]$ and $[Z]=0$ in ${\rm H}_1(M,{\Bbb Z}_2)$, because it bounds the open sets ${\rm Int}(\overline{\sigma^{-1}(1)})$ and ${\rm Int}(\overline{\sigma^{-1}(-1)})$. Then by [BCR, 12.4.6] the ideal ${\cal J}(Z^c)$ of $Z^c$ is principal. Let $f$ be a generator of ${\cal J}(Z^c)$. Again by [BCR, 12.4.6], for each irreducible component $H_k$ of $\partial _{\rm z} S$ not lying in $Z^c$ we can choose a generator $h_k$ of ${\cal J}(H_k)^2$ (which exists because $2[H_k]=0$). Then the regular function $f\cdot\prod h_k$ induces $\delta$ or $-\delta$. So $\delta$ is admissible and $S$ is principal. \end{pf} Remark that this proof is almost the same as the proof of [AB, Proposition 2]. \begin{thm} Let $S$ be a semialgebraic set in $X$.\\ \indent (1) $S$ is principal open if and only if $\partial _{\rm z} S\cap S=\emptyset$ and for each fan $F$ centered at a curve, $\#(F\cap\tilde S)\not= 1,3$.\\ \indent (2) $S$ is principal closed if and only if $\partial _{\rm z} S\cap(X\setminus S)=\emptyset$ and for each fan $F$ centered at a curve, $\#(F\cap\tilde S)\not= 1,3$. \end{thm} \begin{pf} It follows immediately from 5.6 and 3.5. \end{pf} \begin{rmks} {\rm (1) Results 5.6 and 5.7 can be generalized to an arbitrary surface by compactifying and desingularizing as in 2.9.\\ \indent (2) All this section can be generalized to a compact, non singular, real algebraic set $X$, because the results of the previous sections used here (specifically 1.3 and 2.5) can be generalized to arbitrary dimension.
Moreover, defining a fan centered at a hypersurface $H$ of $X$ as a fan $F$ associated to a real prime divisor $V$ such that the prime ideal ${\frak p}={\frak m}_V\cap{\cal R}(X)$ has height $1$ and ${\cal Z}({\frak p})=H$, we find an improvement of the 4-element fans criterion [Br, 5.3].\\ \indent (3) For a compact, non singular, real algebraic set we obtain:\\ {\em A semialgebraic set $S$ is principal open (resp. closed) if and only if $\partial _{\rm z} S\cap S=\emptyset$ (resp. $\partial _{\rm z} S\cap(X\setminus S)=\emptyset$) and $S$ is generically principal.}\\ } \end{rmks}
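For instance, the criterion in (3) can be illustrated on the standard sphere (a minimal worked sketch; the fan condition then holds automatically by 5.7):

```latex
Let $X=S^2=\{x^2+y^2+z^2=1\}\subset{\Bbb R}^3$ and let
$S=\{p\in X : z(p)>0\}$ be the open upper hemisphere. The regular
function $f=z\in{\cal R}(X)$ satisfies $S=\{p\in X : f(p)>0\}$, so $S$
is principal open. Consistently with the criterion, the Zariski
boundary $\partial _{\rm z} S$ is the equator $\{z=0\}$, which is
disjoint from $S$.
```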
\section{Introduction} In our recent experimental work aimed at a precision measurement of the Casimir force between pure germanium (Ge) plates \cite{1}, we have discovered that the electrostatic calibration for Ge, a semiconductor, is substantially different from what was expected, assuming Ge to be a good conductor. The problem is that a static electric field can propagate a finite distance into a semiconductor; this distance is determined by the combined consideration of diffusion and field-driven electric currents, leading to an effective field penetration length (Debye-H\"uckel length) \begin{equation} \lambda=\sqrt{\epsilon \epsilon_0 kT\over e^2 c_t} \end{equation} where $c_t=c_h+c_e$ is the total carrier concentration, and for an intrinsic semiconductor $c_e=c_h$. For intrinsic Ge $\lambda\approx 1\ \mu$m, while for a good conductor it is less than 1 nm. $\lambda$ is independent of the applied field so long as the applied field $E$ times $\lambda$ is less than the thermal energy $k_bT$, where $k_b$ is Boltzmann's constant. In this limit, and at sufficiently low frequencies and wavenumbers, thermal diffusion dominates the field penetration into the material. A sufficiently low frequency for Ge would be $v_c/\lambda\sim 10$ GHz, where $v_c$ is a typical thermal velocity of a carrier. An analysis of the electrostatic energy between parallel plates is given in the Appendix below, as well as the effect of field amplitude on $\lambda$. This analysis is crucial to our ongoing experimental efforts, especially for the electrostatic calibration. In light of this analysis, it has become clear that none of the recent papers describing finite temperature effects on the Casimir force have taken into account the thermal diffusion of carriers (electrons and/or holes) in the treatment of the boundary value problem.
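As a numerical sanity check of Eq. (1), the quoted $\lambda\approx 1\ \mu$m can be reproduced with standard room-temperature values for intrinsic Ge (relative permittivity $\approx 16$, intrinsic carrier concentration $\approx 2.4\times 10^{19}\ {\rm m}^{-3}$; these numerical inputs are assumed here, not taken from the text):

```python
import math

# Debye-Hueckel length, Eq. (1): lambda = sqrt(eps*eps0*k*T / (e^2 * c_t))
# Assumed values for intrinsic Ge at T = 300 K:
eps_r = 16.0          # relative permittivity of Ge
n_i = 2.4e19          # intrinsic carrier concentration, m^-3
c_t = 2.0 * n_i       # c_t = c_e + c_h, with c_e = c_h for intrinsic material
eps0 = 8.854e-12      # vacuum permittivity, F/m
k_B = 1.381e-23       # Boltzmann constant, J/K
q_e = 1.602e-19       # elementary charge, C
T = 300.0             # K

lam = math.sqrt(eps_r * eps0 * k_B * T / (q_e**2 * c_t))
print(lam)            # ~7e-7 m, i.e. of order 1 micron, as quoted
```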
As such, a comprehensive review of recent work will not be presented here; only the work by Bostr\"om and Sernelius, that led to the recent ``controversy," will be discussed\cite{2}. \section{Calculation of the Thermal Correction} To calculate the effects of finite temperature, the electromagnetic mode photon excitation number of $1/2$ due to zero point fluctuations is replaced by \begin{equation} n(\omega)=\coth\left[{\hbar \omega \over 2 k_bT}\right] \end{equation} which has simple poles at \begin{equation} \omega_n={2\pi i n k_bT\over \hbar} \end{equation} where in the following discussion we take only integers $n\geq 0$. Following \cite{2}, the integral over $\omega$ in determining the field energy between two flat plates is replaced by the sum over the poles that occur at the Matsubara frequencies, $\omega_n$, \begin{equation} {\cal E}=k_bT {\sum_{n=0}^\infty}' f(\omega_n) \end{equation} where the prime indicates a factor of $1/2$ for the $n=0$ term, and for the $TE$ modes (electric field parallel to the surface) \begin{equation} f(\omega_n)={1\over 2\pi}\int \ln \left[G^{TE}(q,i\omega_n)\right] q dq \end{equation} where $q$ represents the electromagnetic field wavenumber in the space between the plates, in the direction perpendicular to the plates. The function in the integral, for the $TE$ mode, is \begin{equation} G^{TE}(\omega_n,q)= 1-\left({\gamma_1-\gamma_0\over \gamma_1+\gamma_0}\right)^2 e^{-2\gamma_0 d} \end{equation} \begin{equation}\label{modeq} \gamma_i=\sqrt{q^2+\epsilon_i \omega ^2/c^2} \end{equation} where $i=0,1$ represents the space between the plates (0) or inside the plates (1), $\epsilon_i$ is the respective electric permittivity along the imaginary frequency axis, and $d$ is the plate separation. It is argued in \cite{2} that for realistic materials, $G^{TE}(0,q)=1$ and hence does not contribute to the energy that leads to the Casimir force between the plates.
For example, the low-frequency permittivity of a metal is given by the conductivity $\sigma$, \begin{equation} \epsilon_1(i \omega)= {4 \pi \sigma\over \omega} \end{equation} for which $\gamma_1=\gamma_0$ at $\omega=0$. For distances greater than a few $\mu$m, for $T=300$ K, the net force is reduced by a factor of two compared to what is expected for a near perfect conductor if both the $TE$ and $TM$ modes are included. This result is in contradiction to experimental results, particularly \cite{3}. There has been much discussion of this correction, but to now, the effect of diffusive field screening has not been taken into account in a satisfactory manner, or at all. \section{Inclusion of the Debye Screening Length} For realistic conducting materials (metal, semiconductor), Eq. (\ref{modeq}) cannot possibly be correct, for we know an electric field near the surface causes charges to move, and this tends to screen out the applied field. Electric fields applied either parallel or perpendicular to the surface will be screened, varying in the material, relative to the field $E_0$ at the surface, as \begin{equation} E(x)=E_0 e^{-|x|/\lambda} \end{equation} which is valid when $E_0\lambda<k_bT$ (or $E_0\lambda<k_bT/\epsilon_1$ for fields perpendicular to the surface). The fact that electric fields do not penetrate any appreciable distance into even very poor conductors is experimentally well-known. Eq. (\ref{modeq}) incorrectly describes the observed penetration of low frequency fields into conductors. The screening effect can be treated in a heuristic fashion. The exact solution to the combined electrodynamic and diffusion problem is beyond the scope of the present note, the intent of which is to illustrate an effect that has been overlooked. Noting that $\lambda$ is independent of frequency at low frequencies ($<10^{10}$ Hz), and that $E$ and its derivative are continuous across the boundary, we can rewrite Eq.
(\ref{modeq}) to include the spatial variation due to Debye screening: \begin{equation}\label{modeq2} \gamma_1=\sqrt{q^2+\lambda^{-2}+\epsilon_1 \omega ^2/c^2} \end{equation} which assures continuity of the derivative across the boundary. The point is that the wavenumber in the material cannot be less than $1/\lambda$. For this modified function, for $\omega\rightarrow 0$, $G^{TE}$ is equivalent to the perfect conductor result. Thus the large correction suggested in \cite{2} applies only to materials where charges are not free to move, and diffusive effects do not enter. We can question whether this condition can ever really be met, but for any realistic slightly conducting material at finite temperature, there will always be a finite screening length, and hence a full contribution from the $G^{TE}$, $n=0$ mode; for good conductors $\lambda$ remains unchanged up to frequencies of order $10^{13}$ Hz. A detailed calculation for Ge is required because $\lambda$ starts falling off for frequencies above 10 GHz, in the region where the thermally excited photons contribute most significantly \cite{4}. \section{Violation of the Third Law?} It has been suggested that the $\omega=0$ term in Eq. (4) shows a manifest violation of the Third Law of Thermodynamics (sometimes referred to as the Nernst Heat Theorem) because the system entropy (identifying $\cal E$ as the free energy $F$), given by \begin{equation} S=-{\partial F\over \partial T}\big\vert_V=-k_b f(0)/2 \end{equation} is not zero in the limit $T\rightarrow 0$. This conundrum has been addressed and clarified in \cite{5}, but there is a simpler argument that will be presented here. In particular, we can question what it means to convert the integral over $\omega$ into a contour integral and hence sum over the Matsubara frequencies. If we did not convert this integral into a sum, the separate identification of the zero frequency contribution would not be made.
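That the Matsubara sum and the original frequency integral agree as $T\rightarrow 0$ can be checked numerically for a simple test function (a sketch in units $\hbar=k_b=1$; the test function $f(\omega)=e^{-\omega}$ is arbitrary and not a physical mode density):

```python
import math

def matsubara_sum(T, f, nmax=1_000_000):
    """k_b*T * sum'_{n>=0} f(omega_n), with omega_n = 2*pi*n*k_b*T
    (hbar = k_b = 1); the prime gives the n = 0 term weight 1/2."""
    s = 0.5 * f(0.0)
    for n in range(1, nmax):
        term = f(2.0 * math.pi * n * T)
        s += term
        if term < 1e-18:          # tail is negligible beyond this point
            break
    return T * s

f = lambda w: math.exp(-w)        # arbitrary smooth test function
target = 1.0 / (2.0 * math.pi)    # (1/2pi) * integral_0^inf e^{-w} dw
for T in (0.1, 0.01, 0.001):
    print(T, matsubara_sum(T, f))
# as T -> 0 the sum approaches 1/(2*pi) ~ 0.1592
```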
Specifically, $T$ does not appear in the Casimir force calculation except through the mode photon number. Differentiating Eq. (2) with respect to $T$, we find \begin{equation} {d n(\omega)\over d T} =\left[{1\over e^{\hbar\omega/ k_bT}-1}\right]^2\left[{2\hbar \omega \over k_bT^2} e^{\hbar\omega/ k_bT}\right] \end{equation} which indeed goes to zero at $T=0$. Alternatively, if we consider the entire sum, Eq. (4), and let $T$ go to zero, we find \begin{equation} {\cal E}=k_bT {\sum_{n=0}^\infty} ' f(\omega_n)=k_b T\int f(2\pi n k_bT/ \hbar ) dn = {\hbar\over 2 \pi}\int f(\omega) d \omega \end{equation} where the simple substitution $2\pi n k_b T/\hbar =\omega$ was made. Hence, the temperature does not appear explicitly in the total free energy, and the entropy indeed goes to zero at zero temperature. The apparent violation of the Third Law is due simply to the isolation of a single term in the total free energy. Not considering the entire system in calculating the entropy is generally considered a sophomoric error. \section{Conclusion} By including the effect of charge movement and screening through the Debye length, it is shown that the large correction to the Casimir force predicted in \cite{2} is not applicable to realistic materials. It should be noted that these corrections apply to all conductors when the distance scale approaches the Debye length, which for a good conductor is 0.1 nm. The Debye length is constant in good conductors up to frequencies of order $10^{14}$ Hz, so we can expect the full perfectly conducting force for any metal, at large separations. The case of a semiconductor like Ge is slightly more complicated because $\lambda$ begins to increase for frequencies of order 10 GHz, so a detailed analysis of this case is required. However, it can be expected that there is a significant contribution from the $TE,\ n=0$ mode.
It is also shown that the apparent violation of the 3rd Law of Thermodynamics is due to the isolation of a single term in an expansion that becomes an integral in the limit of $T\rightarrow 0$. \section{Appendix} The potential in a plane semiconductor, if the potential is defined on a surface $x=0$, is \begin{equation} V(x)=V(0)e^{-|x|/\lambda} \end{equation} where $\lambda$ is the Debye-H\"uckel screening length, defined previously. We are interested in finding the energy between two thick Ge plates separated by a distance $d$, with voltages $+V/2$ and $-V/2$ applied to the backs of the plates. In this case, the field is normal to the surface. After we find the energy per unit area, we can use the Proximity Force Theorem to get the attractive force between a spherical and a flat plate. Let $x=0$ refer to the surface of plate 1, and $x=d$ refer to the surface of plate 2. By symmetry, the potential at the center position between the plates is zero. The potential in plate 1 can be written \begin{equation} V_1(x)=V/2-(V/2-V_s) e^{-|x|/\lambda} \end{equation} and in the space between the plates $$V_0(x)=-2 V_s x/d +V_s$$ where we assume the field is uniform. $V_s$, the surface potential, is to be determined. We need only consider the boundary conditions in plate 1, which are $$V_1(-\infty)=V/2$$ $$V_0(0)=V_1(0)$$ (which has already been used) $$\epsilon {d V_1(x)\over dx}\vert_{x=0}={d V_0(x)\over dx}\vert_{x=0}$$ where the last two imply that $D=\epsilon E$ is continuous across the boundary. The solution is \begin{equation} V_s={V\over 2}\left({1\over 1+2\lambda/\epsilon d}\right). \end{equation} With this result, it is straightforward to calculate the total field energy per unit area in both plates and in the space between the plates. The result is \begin{equation} {\cal E}={1\over 2} {\epsilon_0 V^2\over d}\left[{y+y^2\over (y+2)^2}\right] \end{equation} where the dimensionless length $y=\epsilon d/\lambda$ has been introduced.
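The closed form for $\cal E$ can be cross-checked by evaluating the field energy in the gap and in the two plates directly from the potentials above (a sketch; the values $\epsilon=16$, $\lambda=1\ \mu$m, $d=2\ \mu$m, $V=1$ V are illustrative assumptions only):

```python
import math

eps0 = 8.854e-12                                  # vacuum permittivity, F/m
eps, lam, d, V = 16.0, 1e-6, 2e-6, 1.0            # illustrative values only
y = eps * d / lam                                 # dimensionless length
Vs = (V / 2.0) / (1.0 + 2.0 * lam / (eps * d))    # surface potential

# energy per unit area in the gap: uniform field E = 2*Vs/d over thickness d
E_gap = 2.0 * Vs / d
U_gap = 0.5 * eps0 * E_gap**2 * d

# energy per unit area in the two plates: field (V/2 - Vs)/lam * e^{x/lam},
# and the integral of e^{2x/lam} over (-inf, 0] is lam/2
E_surf = (V / 2.0 - Vs) / lam
U_plates = 2.0 * 0.5 * eps0 * eps * E_surf**2 * (lam / 2.0)

# closed form: (1/2) (eps0 V^2 / d) (y + y^2)/(y + 2)^2
U_closed = 0.5 * eps0 * V**2 / d * (y + y**2) / (y + 2.0)**2
print(U_gap + U_plates, U_closed)                 # the two agree
```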
If $V-V_s$ is large compared to $k_bT$, the effective penetration depth increases because the charge density is modified in the vicinity of the surface. The potential in the plates is no longer a simple exponential; however, one can define an effective shielding length \cite{6} \begin{equation} {\lambda'\over \lambda}={|\Phi|\over \sqrt{e^{\Phi}+e^{-\Phi}-2}} \end{equation} where \begin{equation} \Phi={V/2-V_s\over k_bT} \end{equation} with the results plotted in Fig. 2. Given that $k_bT=30$ meV, this begins to be a large correction for Ge when voltages larger than 60 mV are applied between the plates at separations of order $1\ \mu$m.
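The small-potential limit of this expression is easy to verify directly: since $e^{\Phi}+e^{-\Phi}-2\approx\Phi^2$ for $|\Phi|\ll 1$, the ratio tends to 1, recovering the linear (Debye) regime (a sketch):

```python
import math

def shield_ratio(phi):
    # lambda'/lambda = |Phi| / sqrt(e^Phi + e^-Phi - 2), as in Eq. above
    return abs(phi) / math.sqrt(math.exp(phi) + math.exp(-phi) - 2.0)

print(shield_ratio(1e-4))   # ~1.0: linear (Debye) screening regime
print(shield_ratio(5.0))    # the ratio departs strongly from 1 for |Phi| >> 1
```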
\section{The ALICE Collaboration} \begingroup \small \begin{flushleft} J.~Adam\Irefn{org40}\And D.~Adamov\'{a}\Irefn{org83}\And M.M.~Aggarwal\Irefn{org87}\And G.~Aglieri Rinella\Irefn{org36}\And M.~Agnello\Irefn{org111}\And N.~Agrawal\Irefn{org48}\And Z.~Ahammed\Irefn{org132}\And S.U.~Ahn\Irefn{org68}\And I.~Aimo\Irefn{org94}\textsuperscript{,}\Irefn{org111}\And S.~Aiola\Irefn{org137}\And M.~Ajaz\Irefn{org16}\And A.~Akindinov\Irefn{org58}\And S.N.~Alam\Irefn{org132}\And D.~Aleksandrov\Irefn{org100}\And B.~Alessandro\Irefn{org111}\And D.~Alexandre\Irefn{org102}\And R.~Alfaro Molina\Irefn{org64}\And A.~Alici\Irefn{org105}\textsuperscript{,}\Irefn{org12}\And A.~Alkin\Irefn{org3}\And J.R.M.~Almaraz\Irefn{org119}\And J.~Alme\Irefn{org38}\And T.~Alt\Irefn{org43}\And S.~Altinpinar\Irefn{org18}\And I.~Altsybeev\Irefn{org131}\And C.~Alves Garcia Prado\Irefn{org120}\And C.~Andrei\Irefn{org78}\And A.~Andronic\Irefn{org97}\And V.~Anguelov\Irefn{org93}\And J.~Anielski\Irefn{org54}\And T.~Anti\v{c}i\'{c}\Irefn{org98}\And F.~Antinori\Irefn{org108}\And P.~Antonioli\Irefn{org105}\And L.~Aphecetche\Irefn{org113}\And H.~Appelsh\"{a}user\Irefn{org53}\And S.~Arcelli\Irefn{org28}\And N.~Armesto\Irefn{org17}\And R.~Arnaldi\Irefn{org111}\And I.C.~Arsene\Irefn{org22}\And M.~Arslandok\Irefn{org53}\And B.~Audurier\Irefn{org113}\And A.~Augustinus\Irefn{org36}\And R.~Averbeck\Irefn{org97}\And M.D.~Azmi\Irefn{org19}\And M.~Bach\Irefn{org43}\And A.~Badal\`{a}\Irefn{org107}\And Y.W.~Baek\Irefn{org44}\And S.~Bagnasco\Irefn{org111}\And R.~Bailhache\Irefn{org53}\And R.~Bala\Irefn{org90}\And A.~Baldisseri\Irefn{org15}\And F.~Baltasar Dos Santos Pedrosa\Irefn{org36}\And R.C.~Baral\Irefn{org61}\And A.M.~Barbano\Irefn{org111}\And R.~Barbera\Irefn{org29}\And F.~Barile\Irefn{org33}\And G.G.~Barnaf\"{o}ldi\Irefn{org136}\And L.S.~Barnby\Irefn{org102}\And V.~Barret\Irefn{org70}\And P.~Bartalini\Irefn{org7}\And K.~Barth\Irefn{org36}\And J.~Bartke\Irefn{org117}\And E.~Bartsch\Irefn{org53}\And 
M.~Basile\Irefn{org28}\And N.~Bastid\Irefn{org70}\And S.~Basu\Irefn{org132}\And B.~Bathen\Irefn{org54}\And G.~Batigne\Irefn{org113}\And A.~Batista Camejo\Irefn{org70}\And B.~Batyunya\Irefn{org66}\And P.C.~Batzing\Irefn{org22}\And I.G.~Bearden\Irefn{org80}\And H.~Beck\Irefn{org53}\And C.~Bedda\Irefn{org111}\And N.K.~Behera\Irefn{org48}\textsuperscript{,}\Irefn{org49}\And I.~Belikov\Irefn{org55}\And F.~Bellini\Irefn{org28}\And H.~Bello Martinez\Irefn{org2}\And R.~Bellwied\Irefn{org122}\And R.~Belmont\Irefn{org135}\And E.~Belmont-Moreno\Irefn{org64}\And V.~Belyaev\Irefn{org76}\And G.~Bencedi\Irefn{org136}\And S.~Beole\Irefn{org27}\And I.~Berceanu\Irefn{org78}\And A.~Bercuci\Irefn{org78}\And Y.~Berdnikov\Irefn{org85}\And D.~Berenyi\Irefn{org136}\And R.A.~Bertens\Irefn{org57}\And D.~Berzano\Irefn{org36}\textsuperscript{,}\Irefn{org27}\And L.~Betev\Irefn{org36}\And A.~Bhasin\Irefn{org90}\And I.R.~Bhat\Irefn{org90}\And A.K.~Bhati\Irefn{org87}\And B.~Bhattacharjee\Irefn{org45}\And J.~Bhom\Irefn{org128}\And L.~Bianchi\Irefn{org122}\And N.~Bianchi\Irefn{org72}\And C.~Bianchin\Irefn{org135}\textsuperscript{,}\Irefn{org57}\And J.~Biel\v{c}\'{\i}k\Irefn{org40}\And J.~Biel\v{c}\'{\i}kov\'{a}\Irefn{org83}\And A.~Bilandzic\Irefn{org80}\And R.~Biswas\Irefn{org4}\And S.~Biswas\Irefn{org79}\And S.~Bjelogrlic\Irefn{org57}\And F.~Blanco\Irefn{org10}\And D.~Blau\Irefn{org100}\And C.~Blume\Irefn{org53}\And F.~Bock\Irefn{org74}\textsuperscript{,}\Irefn{org93}\And A.~Bogdanov\Irefn{org76}\And H.~B{\o}ggild\Irefn{org80}\And L.~Boldizs\'{a}r\Irefn{org136}\And M.~Bombara\Irefn{org41}\And J.~Book\Irefn{org53}\And H.~Borel\Irefn{org15}\And A.~Borissov\Irefn{org96}\And M.~Borri\Irefn{org82}\And F.~Boss\'u\Irefn{org65}\And E.~Botta\Irefn{org27}\And S.~B\"{o}ttger\Irefn{org52}\And P.~Braun-Munzinger\Irefn{org97}\And M.~Bregant\Irefn{org120}\And T.~Breitner\Irefn{org52}\And T.A.~Broker\Irefn{org53}\And T.A.~Browning\Irefn{org95}\And M.~Broz\Irefn{org40}\And E.J.~Brucken\Irefn{org46}\And 
E.~Bruna\Irefn{org111}\And G.E.~Bruno\Irefn{org33}\And D.~Budnikov\Irefn{org99}\And H.~Buesching\Irefn{org53}\And S.~Bufalino\Irefn{org36}\textsuperscript{,}\Irefn{org111}\And P.~Buncic\Irefn{org36}\And O.~Busch\Irefn{org93}\textsuperscript{,}\Irefn{org128}\And Z.~Buthelezi\Irefn{org65}\And J.B.~Butt\Irefn{org16}\And J.T.~Buxton\Irefn{org20}\And D.~Caffarri\Irefn{org36}\And X.~Cai\Irefn{org7}\And H.~Caines\Irefn{org137}\And L.~Calero Diaz\Irefn{org72}\And A.~Caliva\Irefn{org57}\And E.~Calvo Villar\Irefn{org103}\And P.~Camerini\Irefn{org26}\And F.~Carena\Irefn{org36}\And W.~Carena\Irefn{org36}\And J.~Castillo Castellanos\Irefn{org15}\And A.J.~Castro\Irefn{org125}\And E.A.R.~Casula\Irefn{org25}\And C.~Cavicchioli\Irefn{org36}\And C.~Ceballos Sanchez\Irefn{org9}\And J.~Cepila\Irefn{org40}\And P.~Cerello\Irefn{org111}\And J.~Cerkala\Irefn{org115}\And B.~Chang\Irefn{org123}\And S.~Chapeland\Irefn{org36}\And M.~Chartier\Irefn{org124}\And J.L.~Charvet\Irefn{org15}\And S.~Chattopadhyay\Irefn{org132}\And S.~Chattopadhyay\Irefn{org101}\And V.~Chelnokov\Irefn{org3}\And M.~Cherney\Irefn{org86}\And C.~Cheshkov\Irefn{org130}\And B.~Cheynis\Irefn{org130}\And V.~Chibante Barroso\Irefn{org36}\And D.D.~Chinellato\Irefn{org121}\And P.~Chochula\Irefn{org36}\And K.~Choi\Irefn{org96}\And M.~Chojnacki\Irefn{org80}\And S.~Choudhury\Irefn{org132}\And P.~Christakoglou\Irefn{org81}\And C.H.~Christensen\Irefn{org80}\And P.~Christiansen\Irefn{org34}\And T.~Chujo\Irefn{org128}\And S.U.~Chung\Irefn{org96}\And Z.~Chunhui\Irefn{org57}\And C.~Cicalo\Irefn{org106}\And L.~Cifarelli\Irefn{org12}\textsuperscript{,}\Irefn{org28}\And F.~Cindolo\Irefn{org105}\And J.~Cleymans\Irefn{org89}\And F.~Colamaria\Irefn{org33}\And D.~Colella\Irefn{org36}\textsuperscript{,}\Irefn{org59}\textsuperscript{,}\Irefn{org33}\And A.~Collu\Irefn{org25}\And M.~Colocci\Irefn{org28}\And G.~Conesa Balbastre\Irefn{org71}\And Z.~Conesa del Valle\Irefn{org51}\And M.E.~Connors\Irefn{org137}\And 
J.G.~Contreras\Irefn{org11}\textsuperscript{,}\Irefn{org40}\And T.M.~Cormier\Irefn{org84}\And Y.~Corrales Morales\Irefn{org27}\And I.~Cort\'{e}s Maldonado\Irefn{org2}\And P.~Cortese\Irefn{org32}\And M.R.~Cosentino\Irefn{org120}\And F.~Costa\Irefn{org36}\And P.~Crochet\Irefn{org70}\And R.~Cruz Albino\Irefn{org11}\And E.~Cuautle\Irefn{org63}\And L.~Cunqueiro\Irefn{org36}\And T.~Dahms\Irefn{org92}\textsuperscript{,}\Irefn{org37}\And A.~Dainese\Irefn{org108}\And A.~Danu\Irefn{org62}\And D.~Das\Irefn{org101}\And I.~Das\Irefn{org51}\textsuperscript{,}\Irefn{org101}\And S.~Das\Irefn{org4}\And A.~Dash\Irefn{org121}\And S.~Dash\Irefn{org48}\And S.~De\Irefn{org120}\And A.~De Caro\Irefn{org31}\textsuperscript{,}\Irefn{org12}\And G.~de Cataldo\Irefn{org104}\And J.~de Cuveland\Irefn{org43}\And A.~De Falco\Irefn{org25}\And D.~De Gruttola\Irefn{org12}\textsuperscript{,}\Irefn{org31}\And N.~De Marco\Irefn{org111}\And S.~De Pasquale\Irefn{org31}\And A.~Deisting\Irefn{org97}\textsuperscript{,}\Irefn{org93}\And A.~Deloff\Irefn{org77}\And E.~D\'{e}nes\Irefn{org136}\And G.~D'Erasmo\Irefn{org33}\And D.~Di Bari\Irefn{org33}\And A.~Di Mauro\Irefn{org36}\And P.~Di Nezza\Irefn{org72}\And M.A.~Diaz Corchero\Irefn{org10}\And T.~Dietel\Irefn{org89}\And P.~Dillenseger\Irefn{org53}\And R.~Divi\`{a}\Irefn{org36}\And {\O}.~Djuvsland\Irefn{org18}\And A.~Dobrin\Irefn{org57}\textsuperscript{,}\Irefn{org81}\And T.~Dobrowolski\Irefn{org77}\Aref{0}\And D.~Domenicis Gimenez\Irefn{org120}\And B.~D\"{o}nigus\Irefn{org53}\And O.~Dordic\Irefn{org22}\And A.K.~Dubey\Irefn{org132}\And A.~Dubla\Irefn{org57}\And L.~Ducroux\Irefn{org130}\And P.~Dupieux\Irefn{org70}\And R.J.~Ehlers\Irefn{org137}\And D.~Elia\Irefn{org104}\And H.~Engel\Irefn{org52}\And B.~Erazmus\Irefn{org36}\textsuperscript{,}\Irefn{org113}\And I.~Erdemir\Irefn{org53}\And F.~Erhardt\Irefn{org129}\And D.~Eschweiler\Irefn{org43}\And B.~Espagnon\Irefn{org51}\And M.~Estienne\Irefn{org113}\And S.~Esumi\Irefn{org128}\And J.~Eum\Irefn{org96}\And 
D.~Evans\Irefn{org102}\And S.~Evdokimov\Irefn{org112}\And G.~Eyyubova\Irefn{org40}\And L.~Fabbietti\Irefn{org37}\textsuperscript{,}\Irefn{org92}\And D.~Fabris\Irefn{org108}\And J.~Faivre\Irefn{org71}\And A.~Fantoni\Irefn{org72}\And M.~Fasel\Irefn{org74}\And L.~Feldkamp\Irefn{org54}\And D.~Felea\Irefn{org62}\And A.~Feliciello\Irefn{org111}\And G.~Feofilov\Irefn{org131}\And J.~Ferencei\Irefn{org83}\And A.~Fern\'{a}ndez T\'{e}llez\Irefn{org2}\And E.G.~Ferreiro\Irefn{org17}\And A.~Ferretti\Irefn{org27}\And A.~Festanti\Irefn{org30}\And V.J.G.~Feuillard\Irefn{org15}\textsuperscript{,}\Irefn{org70}\And J.~Figiel\Irefn{org117}\And M.A.S.~Figueredo\Irefn{org124}\And S.~Filchagin\Irefn{org99}\And D.~Finogeev\Irefn{org56}\And E.M.~Fiore\Irefn{org33}\And M.G.~Fleck\Irefn{org93}\And M.~Floris\Irefn{org36}\And S.~Foertsch\Irefn{org65}\And P.~Foka\Irefn{org97}\And S.~Fokin\Irefn{org100}\And E.~Fragiacomo\Irefn{org110}\And A.~Francescon\Irefn{org30}\textsuperscript{,}\Irefn{org36}\And U.~Frankenfeld\Irefn{org97}\And U.~Fuchs\Irefn{org36}\And C.~Furget\Irefn{org71}\And A.~Furs\Irefn{org56}\And M.~Fusco Girard\Irefn{org31}\And J.J.~Gaardh{\o}je\Irefn{org80}\And M.~Gagliardi\Irefn{org27}\And A.M.~Gago\Irefn{org103}\And M.~Gallio\Irefn{org27}\And D.R.~Gangadharan\Irefn{org74}\And P.~Ganoti\Irefn{org88}\And C.~Gao\Irefn{org7}\And C.~Garabatos\Irefn{org97}\And E.~Garcia-Solis\Irefn{org13}\And C.~Gargiulo\Irefn{org36}\And P.~Gasik\Irefn{org92}\textsuperscript{,}\Irefn{org37}\And M.~Germain\Irefn{org113}\And A.~Gheata\Irefn{org36}\And M.~Gheata\Irefn{org62}\textsuperscript{,}\Irefn{org36}\And P.~Ghosh\Irefn{org132}\And S.K.~Ghosh\Irefn{org4}\And P.~Gianotti\Irefn{org72}\And P.~Giubellino\Irefn{org36}\textsuperscript{,}\Irefn{org111}\And P.~Giubilato\Irefn{org30}\And E.~Gladysz-Dziadus\Irefn{org117}\And P.~Gl\"{a}ssel\Irefn{org93}\And A.~Gomez Ramirez\Irefn{org52}\And P.~Gonz\'{a}lez-Zamora\Irefn{org10}\And S.~Gorbunov\Irefn{org43}\And L.~G\"{o}rlich\Irefn{org117}\And 
S.~Gotovac\Irefn{org116}\And V.~Grabski\Irefn{org64}\And L.K.~Graczykowski\Irefn{org134}\And K.L.~Graham\Irefn{org102}\And A.~Grelli\Irefn{org57}\And A.~Grigoras\Irefn{org36}\And C.~Grigoras\Irefn{org36}\And V.~Grigoriev\Irefn{org76}\And A.~Grigoryan\Irefn{org1}\And S.~Grigoryan\Irefn{org66}\And B.~Grinyov\Irefn{org3}\And N.~Grion\Irefn{org110}\And J.F.~Grosse-Oetringhaus\Irefn{org36}\And J.-Y.~Grossiord\Irefn{org130}\And R.~Grosso\Irefn{org36}\And F.~Guber\Irefn{org56}\And R.~Guernane\Irefn{org71}\And B.~Guerzoni\Irefn{org28}\And K.~Gulbrandsen\Irefn{org80}\And H.~Gulkanyan\Irefn{org1}\And T.~Gunji\Irefn{org127}\And A.~Gupta\Irefn{org90}\And R.~Gupta\Irefn{org90}\And R.~Haake\Irefn{org54}\And {\O}.~Haaland\Irefn{org18}\And C.~Hadjidakis\Irefn{org51}\And M.~Haiduc\Irefn{org62}\And H.~Hamagaki\Irefn{org127}\And G.~Hamar\Irefn{org136}\And A.~Hansen\Irefn{org80}\And J.W.~Harris\Irefn{org137}\And H.~Hartmann\Irefn{org43}\And A.~Harton\Irefn{org13}\And D.~Hatzifotiadou\Irefn{org105}\And S.~Hayashi\Irefn{org127}\And S.T.~Heckel\Irefn{org53}\And M.~Heide\Irefn{org54}\And H.~Helstrup\Irefn{org38}\And A.~Herghelegiu\Irefn{org78}\And G.~Herrera Corral\Irefn{org11}\And B.A.~Hess\Irefn{org35}\And K.F.~Hetland\Irefn{org38}\And T.E.~Hilden\Irefn{org46}\And H.~Hillemanns\Irefn{org36}\And B.~Hippolyte\Irefn{org55}\And R.~Hosokawa\Irefn{org128}\And P.~Hristov\Irefn{org36}\And M.~Huang\Irefn{org18}\And T.J.~Humanic\Irefn{org20}\And N.~Hussain\Irefn{org45}\And T.~Hussain\Irefn{org19}\And D.~Hutter\Irefn{org43}\And D.S.~Hwang\Irefn{org21}\And R.~Ilkaev\Irefn{org99}\And I.~Ilkiv\Irefn{org77}\And M.~Inaba\Irefn{org128}\And M.~Ippolitov\Irefn{org76}\textsuperscript{,}\Irefn{org100}\And M.~Irfan\Irefn{org19}\And M.~Ivanov\Irefn{org97}\And V.~Ivanov\Irefn{org85}\And V.~Izucheev\Irefn{org112}\And P.M.~Jacobs\Irefn{org74}\And S.~Jadlovska\Irefn{org115}\And C.~Jahnke\Irefn{org120}\And H.J.~Jang\Irefn{org68}\And M.A.~Janik\Irefn{org134}\And P.H.S.Y.~Jayarathna\Irefn{org122}\And 
C.~Jena\Irefn{org30}\And S.~Jena\Irefn{org122}\And R.T.~Jimenez Bustamante\Irefn{org97}\And P.G.~Jones\Irefn{org102}\And H.~Jung\Irefn{org44}\And A.~Jusko\Irefn{org102}\And P.~Kalinak\Irefn{org59}\And A.~Kalweit\Irefn{org36}\And J.~Kamin\Irefn{org53}\And J.H.~Kang\Irefn{org138}\And V.~Kaplin\Irefn{org76}\And S.~Kar\Irefn{org132}\And A.~Karasu Uysal\Irefn{org69}\And O.~Karavichev\Irefn{org56}\And T.~Karavicheva\Irefn{org56}\And L.~Karayan\Irefn{org97}\textsuperscript{,}\Irefn{org93}\And E.~Karpechev\Irefn{org56}\And U.~Kebschull\Irefn{org52}\And R.~Keidel\Irefn{org139}\And D.L.D.~Keijdener\Irefn{org57}\And M.~Keil\Irefn{org36}\And K.H.~Khan\Irefn{org16}\And M.M.~Khan\Irefn{org19}\And P.~Khan\Irefn{org101}\And S.A.~Khan\Irefn{org132}\And A.~Khanzadeev\Irefn{org85}\And Y.~Kharlov\Irefn{org112}\And B.~Kileng\Irefn{org38}\And B.~Kim\Irefn{org138}\And D.W.~Kim\Irefn{org44}\textsuperscript{,}\Irefn{org68}\And D.J.~Kim\Irefn{org123}\And H.~Kim\Irefn{org138}\And J.S.~Kim\Irefn{org44}\And M.~Kim\Irefn{org44}\And M.~Kim\Irefn{org138}\And S.~Kim\Irefn{org21}\And T.~Kim\Irefn{org138}\And S.~Kirsch\Irefn{org43}\And I.~Kisel\Irefn{org43}\And S.~Kiselev\Irefn{org58}\And A.~Kisiel\Irefn{org134}\And G.~Kiss\Irefn{org136}\And J.L.~Klay\Irefn{org6}\And C.~Klein\Irefn{org53}\And J.~Klein\Irefn{org36}\textsuperscript{,}\Irefn{org93}\And C.~Klein-B\"{o}sing\Irefn{org54}\And A.~Kluge\Irefn{org36}\And M.L.~Knichel\Irefn{org93}\And A.G.~Knospe\Irefn{org118}\And T.~Kobayashi\Irefn{org128}\And C.~Kobdaj\Irefn{org114}\And M.~Kofarago\Irefn{org36}\And T.~Kollegger\Irefn{org97}\textsuperscript{,}\Irefn{org43}\And A.~Kolojvari\Irefn{org131}\And V.~Kondratiev\Irefn{org131}\And N.~Kondratyeva\Irefn{org76}\And E.~Kondratyuk\Irefn{org112}\And A.~Konevskikh\Irefn{org56}\And M.~Kopcik\Irefn{org115}\And M.~Kour\Irefn{org90}\And C.~Kouzinopoulos\Irefn{org36}\And O.~Kovalenko\Irefn{org77}\And V.~Kovalenko\Irefn{org131}\And M.~Kowalski\Irefn{org117}\And G.~Koyithatta Meethaleveedu\Irefn{org48}\And 
J.~Kral\Irefn{org123}\And I.~Kr\'{a}lik\Irefn{org59}\And A.~Krav\v{c}\'{a}kov\'{a}\Irefn{org41}\And M.~Krelina\Irefn{org40}\And M.~Kretz\Irefn{org43}\And M.~Krivda\Irefn{org102}\textsuperscript{,}\Irefn{org59}\And F.~Krizek\Irefn{org83}\And E.~Kryshen\Irefn{org36}\And M.~Krzewicki\Irefn{org43}\And A.M.~Kubera\Irefn{org20}\And V.~Ku\v{c}era\Irefn{org83}\And T.~Kugathasan\Irefn{org36}\And C.~Kuhn\Irefn{org55}\And P.G.~Kuijer\Irefn{org81}\And I.~Kulakov\Irefn{org43}\And A.~Kumar\Irefn{org90}\And J.~Kumar\Irefn{org48}\And L.~Kumar\Irefn{org79}\textsuperscript{,}\Irefn{org87}\And P.~Kurashvili\Irefn{org77}\And A.~Kurepin\Irefn{org56}\And A.B.~Kurepin\Irefn{org56}\And A.~Kuryakin\Irefn{org99}\And S.~Kushpil\Irefn{org83}\And M.J.~Kweon\Irefn{org50}\And Y.~Kwon\Irefn{org138}\And S.L.~La Pointe\Irefn{org111}\And P.~La Rocca\Irefn{org29}\And C.~Lagana Fernandes\Irefn{org120}\And I.~Lakomov\Irefn{org36}\And R.~Langoy\Irefn{org42}\And C.~Lara\Irefn{org52}\And A.~Lardeux\Irefn{org15}\And A.~Lattuca\Irefn{org27}\And E.~Laudi\Irefn{org36}\And R.~Lea\Irefn{org26}\And L.~Leardini\Irefn{org93}\And G.R.~Lee\Irefn{org102}\And S.~Lee\Irefn{org138}\And I.~Legrand\Irefn{org36}\And F.~Lehas\Irefn{org81}\And R.C.~Lemmon\Irefn{org82}\And V.~Lenti\Irefn{org104}\And E.~Leogrande\Irefn{org57}\And I.~Le\'{o}n Monz\'{o}n\Irefn{org119}\And M.~Leoncino\Irefn{org27}\And P.~L\'{e}vai\Irefn{org136}\And S.~Li\Irefn{org7}\textsuperscript{,}\Irefn{org70}\And X.~Li\Irefn{org14}\And J.~Lien\Irefn{org42}\And R.~Lietava\Irefn{org102}\And S.~Lindal\Irefn{org22}\And V.~Lindenstruth\Irefn{org43}\And C.~Lippmann\Irefn{org97}\And M.A.~Lisa\Irefn{org20}\And H.M.~Ljunggren\Irefn{org34}\And D.F.~Lodato\Irefn{org57}\And P.I.~Loenne\Irefn{org18}\And V.~Loginov\Irefn{org76}\And C.~Loizides\Irefn{org74}\And X.~Lopez\Irefn{org70}\And E.~L\'{o}pez Torres\Irefn{org9}\And A.~Lowe\Irefn{org136}\And P.~Luettig\Irefn{org53}\And M.~Lunardon\Irefn{org30}\And G.~Luparello\Irefn{org26}\And P.H.F.N.D.~Luz\Irefn{org120}\And 
A.~Maevskaya\Irefn{org56}\And M.~Mager\Irefn{org36}\And S.~Mahajan\Irefn{org90}\And S.M.~Mahmood\Irefn{org22}\And A.~Maire\Irefn{org55}\And R.D.~Majka\Irefn{org137}\And M.~Malaev\Irefn{org85}\And I.~Maldonado Cervantes\Irefn{org63}\And L.~Malinina\Aref{idp3797616}\textsuperscript{,}\Irefn{org66}\And D.~Mal'Kevich\Irefn{org58}\And P.~Malzacher\Irefn{org97}\And A.~Mamonov\Irefn{org99}\And V.~Manko\Irefn{org100}\And F.~Manso\Irefn{org70}\And V.~Manzari\Irefn{org36}\textsuperscript{,}\Irefn{org104}\And M.~Marchisone\Irefn{org27}\And J.~Mare\v{s}\Irefn{org60}\And G.V.~Margagliotti\Irefn{org26}\And A.~Margotti\Irefn{org105}\And J.~Margutti\Irefn{org57}\And A.~Mar\'{\i}n\Irefn{org97}\And C.~Markert\Irefn{org118}\And M.~Marquard\Irefn{org53}\And N.A.~Martin\Irefn{org97}\And J.~Martin Blanco\Irefn{org113}\And P.~Martinengo\Irefn{org36}\And M.I.~Mart\'{\i}nez\Irefn{org2}\And G.~Mart\'{\i}nez Garc\'{\i}a\Irefn{org113}\And M.~Martinez Pedreira\Irefn{org36}\And Y.~Martynov\Irefn{org3}\And A.~Mas\Irefn{org120}\And S.~Masciocchi\Irefn{org97}\And M.~Masera\Irefn{org27}\And A.~Masoni\Irefn{org106}\And L.~Massacrier\Irefn{org113}\And A.~Mastroserio\Irefn{org33}\And H.~Masui\Irefn{org128}\And A.~Matyja\Irefn{org117}\And C.~Mayer\Irefn{org117}\And J.~Mazer\Irefn{org125}\And M.A.~Mazzoni\Irefn{org109}\And D.~Mcdonald\Irefn{org122}\And F.~Meddi\Irefn{org24}\And Y.~Melikyan\Irefn{org76}\And A.~Menchaca-Rocha\Irefn{org64}\And E.~Meninno\Irefn{org31}\And J.~Mercado P\'erez\Irefn{org93}\And M.~Meres\Irefn{org39}\And Y.~Miake\Irefn{org128}\And M.M.~Mieskolainen\Irefn{org46}\And K.~Mikhaylov\Irefn{org58}\textsuperscript{,}\Irefn{org66}\And L.~Milano\Irefn{org36}\And J.~Milosevic\Irefn{org22}\textsuperscript{,}\Irefn{org133}\And L.M.~Minervini\Irefn{org104}\textsuperscript{,}\Irefn{org23}\And A.~Mischke\Irefn{org57}\And A.N.~Mishra\Irefn{org49}\And D.~Mi\'{s}kowiec\Irefn{org97}\And J.~Mitra\Irefn{org132}\And C.M.~Mitu\Irefn{org62}\And N.~Mohammadi\Irefn{org57}\And 
B.~Mohanty\Irefn{org132}\textsuperscript{,}\Irefn{org79}\And L.~Molnar\Irefn{org55}\And L.~Monta\~{n}o Zetina\Irefn{org11}\And E.~Montes\Irefn{org10}\And M.~Morando\Irefn{org30}\And D.A.~Moreira De Godoy\Irefn{org113}\textsuperscript{,}\Irefn{org54}\And S.~Moretto\Irefn{org30}\And A.~Morreale\Irefn{org113}\And A.~Morsch\Irefn{org36}\And V.~Muccifora\Irefn{org72}\And E.~Mudnic\Irefn{org116}\And D.~M{\"u}hlheim\Irefn{org54}\And S.~Muhuri\Irefn{org132}\And M.~Mukherjee\Irefn{org132}\And J.D.~Mulligan\Irefn{org137}\And M.G.~Munhoz\Irefn{org120}\And S.~Murray\Irefn{org65}\And L.~Musa\Irefn{org36}\And J.~Musinsky\Irefn{org59}\And B.K.~Nandi\Irefn{org48}\And R.~Nania\Irefn{org105}\And E.~Nappi\Irefn{org104}\And M.U.~Naru\Irefn{org16}\And C.~Nattrass\Irefn{org125}\And K.~Nayak\Irefn{org79}\And T.K.~Nayak\Irefn{org132}\And S.~Nazarenko\Irefn{org99}\And A.~Nedosekin\Irefn{org58}\And L.~Nellen\Irefn{org63}\And F.~Ng\Irefn{org122}\And M.~Nicassio\Irefn{org97}\And M.~Niculescu\Irefn{org62}\textsuperscript{,}\Irefn{org36}\And J.~Niedziela\Irefn{org36}\And B.S.~Nielsen\Irefn{org80}\And S.~Nikolaev\Irefn{org100}\And S.~Nikulin\Irefn{org100}\And V.~Nikulin\Irefn{org85}\And F.~Noferini\Irefn{org105}\textsuperscript{,}\Irefn{org12}\And P.~Nomokonov\Irefn{org66}\And G.~Nooren\Irefn{org57}\And J.C.C.~Noris\Irefn{org2}\And J.~Norman\Irefn{org124}\And A.~Nyanin\Irefn{org100}\And J.~Nystrand\Irefn{org18}\And H.~Oeschler\Irefn{org93}\And S.~Oh\Irefn{org137}\And S.K.~Oh\Irefn{org67}\And A.~Ohlson\Irefn{org36}\And A.~Okatan\Irefn{org69}\And T.~Okubo\Irefn{org47}\And L.~Olah\Irefn{org136}\And J.~Oleniacz\Irefn{org134}\And A.C.~Oliveira Da Silva\Irefn{org120}\And M.H.~Oliver\Irefn{org137}\And J.~Onderwaater\Irefn{org97}\And C.~Oppedisano\Irefn{org111}\And R.~Orava\Irefn{org46}\And A.~Ortiz Velasquez\Irefn{org63}\And A.~Oskarsson\Irefn{org34}\And J.~Otwinowski\Irefn{org117}\And K.~Oyama\Irefn{org93}\And M.~Ozdemir\Irefn{org53}\And Y.~Pachmayer\Irefn{org93}\And P.~Pagano\Irefn{org31}\And 
G.~Pai\'{c}\Irefn{org63}\And C.~Pajares\Irefn{org17}\And S.K.~Pal\Irefn{org132}\And J.~Pan\Irefn{org135}\And A.K.~Pandey\Irefn{org48}\And D.~Pant\Irefn{org48}\And P.~Papcun\Irefn{org115}\And V.~Papikyan\Irefn{org1}\And G.S.~Pappalardo\Irefn{org107}\And P.~Pareek\Irefn{org49}\And W.J.~Park\Irefn{org97}\And S.~Parmar\Irefn{org87}\And A.~Passfeld\Irefn{org54}\And V.~Paticchio\Irefn{org104}\And R.N.~Patra\Irefn{org132}\And B.~Paul\Irefn{org101}\And T.~Peitzmann\Irefn{org57}\And H.~Pereira Da Costa\Irefn{org15}\And E.~Pereira De Oliveira Filho\Irefn{org120}\And D.~Peresunko\Irefn{org100}\textsuperscript{,}\Irefn{org76}\And C.E.~P\'erez Lara\Irefn{org81}\And E.~Perez Lezama\Irefn{org53}\And V.~Peskov\Irefn{org53}\And Y.~Pestov\Irefn{org5}\And V.~Petr\'{a}\v{c}ek\Irefn{org40}\And V.~Petrov\Irefn{org112}\And M.~Petrovici\Irefn{org78}\And C.~Petta\Irefn{org29}\And S.~Piano\Irefn{org110}\And M.~Pikna\Irefn{org39}\And P.~Pillot\Irefn{org113}\And O.~Pinazza\Irefn{org105}\textsuperscript{,}\Irefn{org36}\And L.~Pinsky\Irefn{org122}\And D.B.~Piyarathna\Irefn{org122}\And M.~P\l osko\'{n}\Irefn{org74}\And M.~Planinic\Irefn{org129}\And J.~Pluta\Irefn{org134}\And S.~Pochybova\Irefn{org136}\And P.L.M.~Podesta-Lerma\Irefn{org119}\And M.G.~Poghosyan\Irefn{org84}\textsuperscript{,}\Irefn{org86}\And B.~Polichtchouk\Irefn{org112}\And N.~Poljak\Irefn{org129}\And W.~Poonsawat\Irefn{org114}\And A.~Pop\Irefn{org78}\And S.~Porteboeuf-Houssais\Irefn{org70}\And J.~Porter\Irefn{org74}\And J.~Pospisil\Irefn{org83}\And S.K.~Prasad\Irefn{org4}\And R.~Preghenella\Irefn{org105}\textsuperscript{,}\Irefn{org36}\And F.~Prino\Irefn{org111}\And C.A.~Pruneau\Irefn{org135}\And I.~Pshenichnov\Irefn{org56}\And M.~Puccio\Irefn{org111}\And G.~Puddu\Irefn{org25}\And P.~Pujahari\Irefn{org135}\And V.~Punin\Irefn{org99}\And J.~Putschke\Irefn{org135}\And H.~Qvigstad\Irefn{org22}\And A.~Rachevski\Irefn{org110}\And S.~Raha\Irefn{org4}\And S.~Rajput\Irefn{org90}\And J.~Rak\Irefn{org123}\And 
A.~Rakotozafindrabe\Irefn{org15}\And L.~Ramello\Irefn{org32}\And R.~Raniwala\Irefn{org91}\And S.~Raniwala\Irefn{org91}\And S.S.~R\"{a}s\"{a}nen\Irefn{org46}\And B.T.~Rascanu\Irefn{org53}\And D.~Rathee\Irefn{org87}\And K.F.~Read\Irefn{org125}\And J.S.~Real\Irefn{org71}\And K.~Redlich\Irefn{org77}\And R.J.~Reed\Irefn{org135}\And A.~Rehman\Irefn{org18}\And P.~Reichelt\Irefn{org53}\And F.~Reidt\Irefn{org93}\textsuperscript{,}\Irefn{org36}\And X.~Ren\Irefn{org7}\And R.~Renfordt\Irefn{org53}\And A.R.~Reolon\Irefn{org72}\And A.~Reshetin\Irefn{org56}\And F.~Rettig\Irefn{org43}\And J.-P.~Revol\Irefn{org12}\And K.~Reygers\Irefn{org93}\And V.~Riabov\Irefn{org85}\And R.A.~Ricci\Irefn{org73}\And T.~Richert\Irefn{org34}\And M.~Richter\Irefn{org22}\And P.~Riedler\Irefn{org36}\And W.~Riegler\Irefn{org36}\And F.~Riggi\Irefn{org29}\And C.~Ristea\Irefn{org62}\And A.~Rivetti\Irefn{org111}\And E.~Rocco\Irefn{org57}\And M.~Rodr\'{i}guez Cahuantzi\Irefn{org2}\And A.~Rodriguez Manso\Irefn{org81}\And K.~R{\o}ed\Irefn{org22}\And E.~Rogochaya\Irefn{org66}\And D.~Rohr\Irefn{org43}\And D.~R\"ohrich\Irefn{org18}\And R.~Romita\Irefn{org124}\And F.~Ronchetti\Irefn{org72}\And L.~Ronflette\Irefn{org113}\And P.~Rosnet\Irefn{org70}\And A.~Rossi\Irefn{org30}\textsuperscript{,}\Irefn{org36}\And F.~Roukoutakis\Irefn{org88}\And A.~Roy\Irefn{org49}\And C.~Roy\Irefn{org55}\And P.~Roy\Irefn{org101}\And A.J.~Rubio Montero\Irefn{org10}\And R.~Rui\Irefn{org26}\And R.~Russo\Irefn{org27}\And E.~Ryabinkin\Irefn{org100}\And Y.~Ryabov\Irefn{org85}\And A.~Rybicki\Irefn{org117}\And S.~Sadovsky\Irefn{org112}\And K.~\v{S}afa\v{r}\'{\i}k\Irefn{org36}\And B.~Sahlmuller\Irefn{org53}\And P.~Sahoo\Irefn{org49}\And R.~Sahoo\Irefn{org49}\And S.~Sahoo\Irefn{org61}\And P.K.~Sahu\Irefn{org61}\And J.~Saini\Irefn{org132}\And S.~Sakai\Irefn{org72}\And M.A.~Saleh\Irefn{org135}\And C.A.~Salgado\Irefn{org17}\And J.~Salzwedel\Irefn{org20}\And S.~Sambyal\Irefn{org90}\And V.~Samsonov\Irefn{org85}\And X.~Sanchez Castro\Irefn{org55}\And 
L.~\v{S}\'{a}ndor\Irefn{org59}\And A.~Sandoval\Irefn{org64}\And M.~Sano\Irefn{org128}\And D.~Sarkar\Irefn{org132}\And E.~Scapparone\Irefn{org105}\And F.~Scarlassara\Irefn{org30}\And R.P.~Scharenberg\Irefn{org95}\And C.~Schiaua\Irefn{org78}\And R.~Schicker\Irefn{org93}\And C.~Schmidt\Irefn{org97}\And H.R.~Schmidt\Irefn{org35}\And S.~Schuchmann\Irefn{org53}\And J.~Schukraft\Irefn{org36}\And M.~Schulc\Irefn{org40}\And T.~Schuster\Irefn{org137}\And Y.~Schutz\Irefn{org113}\textsuperscript{,}\Irefn{org36}\And K.~Schwarz\Irefn{org97}\And K.~Schweda\Irefn{org97}\And G.~Scioli\Irefn{org28}\And E.~Scomparin\Irefn{org111}\And R.~Scott\Irefn{org125}\And K.S.~Seeder\Irefn{org120}\And J.E.~Seger\Irefn{org86}\And Y.~Sekiguchi\Irefn{org127}\And D.~Sekihata\Irefn{org47}\And I.~Selyuzhenkov\Irefn{org97}\And K.~Senosi\Irefn{org65}\And J.~Seo\Irefn{org96}\textsuperscript{,}\Irefn{org67}\And E.~Serradilla\Irefn{org64}\textsuperscript{,}\Irefn{org10}\And A.~Sevcenco\Irefn{org62}\And A.~Shabanov\Irefn{org56}\And A.~Shabetai\Irefn{org113}\And O.~Shadura\Irefn{org3}\And R.~Shahoyan\Irefn{org36}\And A.~Shangaraev\Irefn{org112}\And A.~Sharma\Irefn{org90}\And M.~Sharma\Irefn{org90}\And M.~Sharma\Irefn{org90}\And N.~Sharma\Irefn{org125}\textsuperscript{,}\Irefn{org61}\And K.~Shigaki\Irefn{org47}\And K.~Shtejer\Irefn{org9}\textsuperscript{,}\Irefn{org27}\And Y.~Sibiriak\Irefn{org100}\And S.~Siddhanta\Irefn{org106}\And K.M.~Sielewicz\Irefn{org36}\And T.~Siemiarczuk\Irefn{org77}\And D.~Silvermyr\Irefn{org84}\textsuperscript{,}\Irefn{org34}\And C.~Silvestre\Irefn{org71}\And G.~Simatovic\Irefn{org129}\And G.~Simonetti\Irefn{org36}\And R.~Singaraju\Irefn{org132}\And R.~Singh\Irefn{org79}\And S.~Singha\Irefn{org132}\textsuperscript{,}\Irefn{org79}\And V.~Singhal\Irefn{org132}\And B.C.~Sinha\Irefn{org132}\And T.~Sinha\Irefn{org101}\And B.~Sitar\Irefn{org39}\And M.~Sitta\Irefn{org32}\And T.B.~Skaali\Irefn{org22}\And M.~Slupecki\Irefn{org123}\And N.~Smirnov\Irefn{org137}\And 
R.J.M.~Snellings\Irefn{org57}\And T.W.~Snellman\Irefn{org123}\And C.~S{\o}gaard\Irefn{org34}\And R.~Soltz\Irefn{org75}\And J.~Song\Irefn{org96}\And M.~Song\Irefn{org138}\And Z.~Song\Irefn{org7}\And F.~Soramel\Irefn{org30}\And S.~Sorensen\Irefn{org125}\And M.~Spacek\Irefn{org40}\And E.~Spiriti\Irefn{org72}\And I.~Sputowska\Irefn{org117}\And M.~Spyropoulou-Stassinaki\Irefn{org88}\And B.K.~Srivastava\Irefn{org95}\And J.~Stachel\Irefn{org93}\And I.~Stan\Irefn{org62}\And G.~Stefanek\Irefn{org77}\And M.~Steinpreis\Irefn{org20}\And E.~Stenlund\Irefn{org34}\And G.~Steyn\Irefn{org65}\And J.H.~Stiller\Irefn{org93}\And D.~Stocco\Irefn{org113}\And P.~Strmen\Irefn{org39}\And A.A.P.~Suaide\Irefn{org120}\And T.~Sugitate\Irefn{org47}\And C.~Suire\Irefn{org51}\And M.~Suleymanov\Irefn{org16}\And R.~Sultanov\Irefn{org58}\And M.~\v{S}umbera\Irefn{org83}\And T.J.M.~Symons\Irefn{org74}\And A.~Szabo\Irefn{org39}\And A.~Szanto de Toledo\Irefn{org120}\Aref{0}\And I.~Szarka\Irefn{org39}\And A.~Szczepankiewicz\Irefn{org36}\And M.~Szymanski\Irefn{org134}\And J.~Takahashi\Irefn{org121}\And N.~Tanaka\Irefn{org128}\And M.A.~Tangaro\Irefn{org33}\And J.D.~Tapia Takaki\Aref{idp5933408}\textsuperscript{,}\Irefn{org51}\And A.~Tarantola Peloni\Irefn{org53}\And M.~Tarhini\Irefn{org51}\And M.~Tariq\Irefn{org19}\And M.G.~Tarzila\Irefn{org78}\And A.~Tauro\Irefn{org36}\And G.~Tejeda Mu\~{n}oz\Irefn{org2}\And A.~Telesca\Irefn{org36}\And K.~Terasaki\Irefn{org127}\And C.~Terrevoli\Irefn{org30}\textsuperscript{,}\Irefn{org25}\And B.~Teyssier\Irefn{org130}\And J.~Th\"{a}der\Irefn{org74}\textsuperscript{,}\Irefn{org97}\And D.~Thomas\Irefn{org118}\And R.~Tieulent\Irefn{org130}\And A.R.~Timmins\Irefn{org122}\And A.~Toia\Irefn{org53}\And S.~Trogolo\Irefn{org111}\And V.~Trubnikov\Irefn{org3}\And W.H.~Trzaska\Irefn{org123}\And T.~Tsuji\Irefn{org127}\And A.~Tumkin\Irefn{org99}\And R.~Turrisi\Irefn{org108}\And T.S.~Tveter\Irefn{org22}\And K.~Ullaland\Irefn{org18}\And A.~Uras\Irefn{org130}\And G.L.~Usai\Irefn{org25}\And 
A.~Utrobicic\Irefn{org129}\And M.~Vajzer\Irefn{org83}\And M.~Vala\Irefn{org59}\And L.~Valencia Palomo\Irefn{org70}\And S.~Vallero\Irefn{org27}\And J.~Van Der Maarel\Irefn{org57}\And J.W.~Van Hoorne\Irefn{org36}\And M.~van Leeuwen\Irefn{org57}\And T.~Vanat\Irefn{org83}\And P.~Vande Vyvre\Irefn{org36}\And D.~Varga\Irefn{org136}\And A.~Vargas\Irefn{org2}\And M.~Vargyas\Irefn{org123}\And R.~Varma\Irefn{org48}\And M.~Vasileiou\Irefn{org88}\And A.~Vasiliev\Irefn{org100}\And A.~Vauthier\Irefn{org71}\And V.~Vechernin\Irefn{org131}\And A.M.~Veen\Irefn{org57}\And M.~Veldhoen\Irefn{org57}\And A.~Velure\Irefn{org18}\And M.~Venaruzzo\Irefn{org73}\And E.~Vercellin\Irefn{org27}\And S.~Vergara Lim\'on\Irefn{org2}\And R.~Vernet\Irefn{org8}\And M.~Verweij\Irefn{org135}\textsuperscript{,}\Irefn{org36}\And L.~Vickovic\Irefn{org116}\And G.~Viesti\Irefn{org30}\Aref{0}\And J.~Viinikainen\Irefn{org123}\And Z.~Vilakazi\Irefn{org126}\And O.~Villalobos Baillie\Irefn{org102}\And A.~Vinogradov\Irefn{org100}\And L.~Vinogradov\Irefn{org131}\And Y.~Vinogradov\Irefn{org99}\Aref{0}\And T.~Virgili\Irefn{org31}\And V.~Vislavicius\Irefn{org34}\And Y.P.~Viyogi\Irefn{org132}\And A.~Vodopyanov\Irefn{org66}\And M.A.~V\"{o}lkl\Irefn{org93}\And K.~Voloshin\Irefn{org58}\And S.A.~Voloshin\Irefn{org135}\And G.~Volpe\Irefn{org136}\textsuperscript{,}\Irefn{org36}\And B.~von Haller\Irefn{org36}\And I.~Vorobyev\Irefn{org37}\textsuperscript{,}\Irefn{org92}\And D.~Vranic\Irefn{org36}\textsuperscript{,}\Irefn{org97}\And J.~Vrl\'{a}kov\'{a}\Irefn{org41}\And B.~Vulpescu\Irefn{org70}\And A.~Vyushin\Irefn{org99}\And B.~Wagner\Irefn{org18}\And J.~Wagner\Irefn{org97}\And H.~Wang\Irefn{org57}\And M.~Wang\Irefn{org7}\textsuperscript{,}\Irefn{org113}\And Y.~Wang\Irefn{org93}\And D.~Watanabe\Irefn{org128}\And Y.~Watanabe\Irefn{org127}\And M.~Weber\Irefn{org36}\And S.G.~Weber\Irefn{org97}\And J.P.~Wessels\Irefn{org54}\And U.~Westerhoff\Irefn{org54}\And J.~Wiechula\Irefn{org35}\And J.~Wikne\Irefn{org22}\And 
M.~Wilde\Irefn{org54}\And G.~Wilk\Irefn{org77}\And J.~Wilkinson\Irefn{org93}\And M.C.S.~Williams\Irefn{org105}\And B.~Windelband\Irefn{org93}\And M.~Winn\Irefn{org93}\And C.G.~Yaldo\Irefn{org135}\And H.~Yang\Irefn{org57}\And P.~Yang\Irefn{org7}\And S.~Yano\Irefn{org47}\And Z.~Yin\Irefn{org7}\And H.~Yokoyama\Irefn{org128}\And I.-K.~Yoo\Irefn{org96}\And V.~Yurchenko\Irefn{org3}\And I.~Yushmanov\Irefn{org100}\And A.~Zaborowska\Irefn{org134}\And V.~Zaccolo\Irefn{org80}\And A.~Zaman\Irefn{org16}\And C.~Zampolli\Irefn{org105}\And H.J.C.~Zanoli\Irefn{org120}\And S.~Zaporozhets\Irefn{org66}\And N.~Zardoshti\Irefn{org102}\And A.~Zarochentsev\Irefn{org131}\And P.~Z\'{a}vada\Irefn{org60}\And N.~Zaviyalov\Irefn{org99}\And H.~Zbroszczyk\Irefn{org134}\And I.S.~Zgura\Irefn{org62}\And M.~Zhalov\Irefn{org85}\And H.~Zhang\Irefn{org18}\textsuperscript{,}\Irefn{org7}\And X.~Zhang\Irefn{org74}\And Y.~Zhang\Irefn{org7}\And C.~Zhao\Irefn{org22}\And N.~Zhigareva\Irefn{org58}\And D.~Zhou\Irefn{org7}\And Y.~Zhou\Irefn{org80}\textsuperscript{,}\Irefn{org57}\And Z.~Zhou\Irefn{org18}\And H.~Zhu\Irefn{org18}\textsuperscript{,}\Irefn{org7}\And J.~Zhu\Irefn{org113}\textsuperscript{,}\Irefn{org7}\And X.~Zhu\Irefn{org7}\And A.~Zichichi\Irefn{org12}\textsuperscript{,}\Irefn{org28}\And A.~Zimmermann\Irefn{org93}\And M.B.~Zimmermann\Irefn{org54}\textsuperscript{,}\Irefn{org36}\And G.~Zinovjev\Irefn{org3}\And M.~Zyzak\Irefn{org43} \renewcommand\labelenumi{\textsuperscript{\theenumi}~} \section*{Affiliation notes} \renewcommand\theenumi{\roman{enumi}} \begin{Authlist} \item \Adef{0}Deceased \item \Adef{idp3797616}{Also at: M.V. Lomonosov Moscow State University, D.V. Skobeltsyn Institute of Nuclear, Physics, Moscow, Russia} \item \Adef{idp5933408}{Also at: University of Kansas, Lawrence, Kansas, United States} \end{Authlist} \section*{Collaboration Institutes} \renewcommand\theenumi{\arabic{enumi}~} \begin{Authlist} \item \Idef{org1}A.I. 
Alikhanyan National Science Laboratory (Yerevan Physics Institute) Foundation, Yerevan, Armenia \item \Idef{org2}Benem\'{e}rita Universidad Aut\'{o}noma de Puebla, Puebla, Mexico \item \Idef{org3}Bogolyubov Institute for Theoretical Physics, Kiev, Ukraine \item \Idef{org4}Bose Institute, Department of Physics and Centre for Astroparticle Physics and Space Science (CAPSS), Kolkata, India \item \Idef{org5}Budker Institute for Nuclear Physics, Novosibirsk, Russia \item \Idef{org6}California Polytechnic State University, San Luis Obispo, California, United States \item \Idef{org7}Central China Normal University, Wuhan, China \item \Idef{org8}Centre de Calcul de l'IN2P3, Villeurbanne, France \item \Idef{org9}Centro de Aplicaciones Tecnol\'{o}gicas y Desarrollo Nuclear (CEADEN), Havana, Cuba \item \Idef{org10}Centro de Investigaciones Energ\'{e}ticas Medioambientales y Tecnol\'{o}gicas (CIEMAT), Madrid, Spain \item \Idef{org11}Centro de Investigaci\'{o}n y de Estudios Avanzados (CINVESTAV), Mexico City and M\'{e}rida, Mexico \item \Idef{org12}Centro Fermi - Museo Storico della Fisica e Centro Studi e Ricerche ``Enrico Fermi'', Rome, Italy \item \Idef{org13}Chicago State University, Chicago, Illinois, USA \item \Idef{org14}China Institute of Atomic Energy, Beijing, China \item \Idef{org15}Commissariat \`{a} l'Energie Atomique, IRFU, Saclay, France \item \Idef{org16}COMSATS Institute of Information Technology (CIIT), Islamabad, Pakistan \item \Idef{org17}Departamento de F\'{\i}sica de Part\'{\i}culas and IGFAE, Universidad de Santiago de Compostela, Santiago de Compostela, Spain \item \Idef{org18}Department of Physics and Technology, University of Bergen, Bergen, Norway \item \Idef{org19}Department of Physics, Aligarh Muslim University, Aligarh, India \item \Idef{org20}Department of Physics, Ohio State University, Columbus, Ohio, United States \item \Idef{org21}Department of Physics, Sejong University, Seoul, South Korea \item \Idef{org22}Department of Physics, University 
of Oslo, Oslo, Norway \item \Idef{org23}Dipartimento di Elettrotecnica ed Elettronica del Politecnico, Bari, Italy \item \Idef{org24}Dipartimento di Fisica dell'Universit\`{a} 'La Sapienza' and Sezione INFN Rome, Italy \item \Idef{org25}Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Cagliari, Italy \item \Idef{org26}Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Trieste, Italy \item \Idef{org27}Dipartimento di Fisica dell'Universit\`{a} and Sezione INFN, Turin, Italy \item \Idef{org28}Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Bologna, Italy \item \Idef{org29}Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Catania, Italy \item \Idef{org30}Dipartimento di Fisica e Astronomia dell'Universit\`{a} and Sezione INFN, Padova, Italy \item \Idef{org31}Dipartimento di Fisica `E.R.~Caianiello' dell'Universit\`{a} and Gruppo Collegato INFN, Salerno, Italy \item \Idef{org32}Dipartimento di Scienze e Innovazione Tecnologica dell'Universit\`{a} del Piemonte Orientale and Gruppo Collegato INFN, Alessandria, Italy \item \Idef{org33}Dipartimento Interateneo di Fisica `M.~Merlin' and Sezione INFN, Bari, Italy \item \Idef{org34}Division of Experimental High Energy Physics, University of Lund, Lund, Sweden \item \Idef{org35}Eberhard Karls Universit\"{a}t T\"{u}bingen, T\"{u}bingen, Germany \item \Idef{org36}European Organization for Nuclear Research (CERN), Geneva, Switzerland \item \Idef{org37}Excellence Cluster Universe, Technische Universit\"{a}t M\"{u}nchen, Munich, Germany \item \Idef{org38}Faculty of Engineering, Bergen University College, Bergen, Norway \item \Idef{org39}Faculty of Mathematics, Physics and Informatics, Comenius University, Bratislava, Slovakia \item \Idef{org40}Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University in Prague, Prague, Czech Republic \item \Idef{org41}Faculty of Science, P.J.~\v{S}af\'{a}rik University, Ko\v{s}ice, Slovakia \item 
\Idef{org42}Faculty of Technology, Buskerud and Vestfold University College, Vestfold, Norway \item \Idef{org43}Frankfurt Institute for Advanced Studies, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany \item \Idef{org44}Gangneung-Wonju National University, Gangneung, South Korea \item \Idef{org45}Gauhati University, Department of Physics, Guwahati, India \item \Idef{org46}Helsinki Institute of Physics (HIP), Helsinki, Finland \item \Idef{org47}Hiroshima University, Hiroshima, Japan \item \Idef{org48}Indian Institute of Technology Bombay (IIT), Mumbai, India \item \Idef{org49}Indian Institute of Technology Indore, Indore (IITI), India \item \Idef{org50}Inha University, Incheon, South Korea \item \Idef{org51}Institut de Physique Nucl\'eaire d'Orsay (IPNO), Universit\'e Paris-Sud, CNRS-IN2P3, Orsay, France \item \Idef{org52}Institut f\"{u}r Informatik, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany \item \Idef{org53}Institut f\"{u}r Kernphysik, Johann Wolfgang Goethe-Universit\"{a}t Frankfurt, Frankfurt, Germany \item \Idef{org54}Institut f\"{u}r Kernphysik, Westf\"{a}lische Wilhelms-Universit\"{a}t M\"{u}nster, M\"{u}nster, Germany \item \Idef{org55}Institut Pluridisciplinaire Hubert Curien (IPHC), Universit\'{e} de Strasbourg, CNRS-IN2P3, Strasbourg, France \item \Idef{org56}Institute for Nuclear Research, Academy of Sciences, Moscow, Russia \item \Idef{org57}Institute for Subatomic Physics of Utrecht University, Utrecht, Netherlands \item \Idef{org58}Institute for Theoretical and Experimental Physics, Moscow, Russia \item \Idef{org59}Institute of Experimental Physics, Slovak Academy of Sciences, Ko\v{s}ice, Slovakia \item \Idef{org60}Institute of Physics, Academy of Sciences of the Czech Republic, Prague, Czech Republic \item \Idef{org61}Institute of Physics, Bhubaneswar, India \item \Idef{org62}Institute of Space Science (ISS), Bucharest, Romania \item \Idef{org63}Instituto de Ciencias Nucleares, Universidad Nacional 
Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico \item \Idef{org64}Instituto de F\'{\i}sica, Universidad Nacional Aut\'{o}noma de M\'{e}xico, Mexico City, Mexico \item \Idef{org65}iThemba LABS, National Research Foundation, Somerset West, South Africa \item \Idef{org66}Joint Institute for Nuclear Research (JINR), Dubna, Russia \item \Idef{org67}Konkuk University, Seoul, South Korea \item \Idef{org68}Korea Institute of Science and Technology Information, Daejeon, South Korea \item \Idef{org69}KTO Karatay University, Konya, Turkey \item \Idef{org70}Laboratoire de Physique Corpusculaire (LPC), Clermont Universit\'{e}, Universit\'{e} Blaise Pascal, CNRS--IN2P3, Clermont-Ferrand, France \item \Idef{org71}Laboratoire de Physique Subatomique et de Cosmologie, Universit\'{e} Grenoble-Alpes, CNRS-IN2P3, Grenoble, France \item \Idef{org72}Laboratori Nazionali di Frascati, INFN, Frascati, Italy \item \Idef{org73}Laboratori Nazionali di Legnaro, INFN, Legnaro, Italy \item \Idef{org74}Lawrence Berkeley National Laboratory, Berkeley, California, United States \item \Idef{org75}Lawrence Livermore National Laboratory, Livermore, California, United States \item \Idef{org76}Moscow Engineering Physics Institute, Moscow, Russia \item \Idef{org77}National Centre for Nuclear Studies, Warsaw, Poland \item \Idef{org78}National Institute for Physics and Nuclear Engineering, Bucharest, Romania \item \Idef{org79}National Institute of Science Education and Research, Bhubaneswar, India \item \Idef{org80}Niels Bohr Institute, University of Copenhagen, Copenhagen, Denmark \item \Idef{org81}Nikhef, Nationaal instituut voor subatomaire fysica, Amsterdam, Netherlands \item \Idef{org82}Nuclear Physics Group, STFC Daresbury Laboratory, Daresbury, United Kingdom \item \Idef{org83}Nuclear Physics Institute, Academy of Sciences of the Czech Republic, \v{R}e\v{z} u Prahy, Czech Republic \item \Idef{org84}Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States \item \Idef{org85}Petersburg 
Nuclear Physics Institute, Gatchina, Russia \item \Idef{org86}Physics Department, Creighton University, Omaha, Nebraska, United States \item \Idef{org87}Physics Department, Panjab University, Chandigarh, India \item \Idef{org88}Physics Department, University of Athens, Athens, Greece \item \Idef{org89}Physics Department, University of Cape Town, Cape Town, South Africa \item \Idef{org90}Physics Department, University of Jammu, Jammu, India \item \Idef{org91}Physics Department, University of Rajasthan, Jaipur, India \item \Idef{org92}Physik Department, Technische Universit\"{a}t M\"{u}nchen, Munich, Germany \item \Idef{org93}Physikalisches Institut, Ruprecht-Karls-Universit\"{a}t Heidelberg, Heidelberg, Germany \item \Idef{org94}Politecnico di Torino, Turin, Italy \item \Idef{org95}Purdue University, West Lafayette, Indiana, United States \item \Idef{org96}Pusan National University, Pusan, South Korea \item \Idef{org97}Research Division and ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum f\"ur Schwerionenforschung, Darmstadt, Germany \item \Idef{org98}Rudjer Bo\v{s}kovi\'{c} Institute, Zagreb, Croatia \item \Idef{org99}Russian Federal Nuclear Center (VNIIEF), Sarov, Russia \item \Idef{org100}Russian Research Centre Kurchatov Institute, Moscow, Russia \item \Idef{org101}Saha Institute of Nuclear Physics, Kolkata, India \item \Idef{org102}School of Physics and Astronomy, University of Birmingham, Birmingham, United Kingdom \item \Idef{org103}Secci\'{o}n F\'{\i}sica, Departamento de Ciencias, Pontificia Universidad Cat\'{o}lica del Per\'{u}, Lima, Peru \item \Idef{org104}Sezione INFN, Bari, Italy \item \Idef{org105}Sezione INFN, Bologna, Italy \item \Idef{org106}Sezione INFN, Cagliari, Italy \item \Idef{org107}Sezione INFN, Catania, Italy \item \Idef{org108}Sezione INFN, Padova, Italy \item \Idef{org109}Sezione INFN, Rome, Italy \item \Idef{org110}Sezione INFN, Trieste, Italy \item \Idef{org111}Sezione INFN, Turin, Italy \item \Idef{org112}SSC IHEP of NRC Kurchatov 
institute, Protvino, Russia \item \Idef{org113}SUBATECH, Ecole des Mines de Nantes, Universit\'{e} de Nantes, CNRS-IN2P3, Nantes, France \item \Idef{org114}Suranaree University of Technology, Nakhon Ratchasima, Thailand \item \Idef{org115}Technical University of Ko\v{s}ice, Ko\v{s}ice, Slovakia \item \Idef{org116}Technical University of Split FESB, Split, Croatia \item \Idef{org117}The Henryk Niewodniczanski Institute of Nuclear Physics, Polish Academy of Sciences, Cracow, Poland \item \Idef{org118}The University of Texas at Austin, Physics Department, Austin, Texas, USA \item \Idef{org119}Universidad Aut\'{o}noma de Sinaloa, Culiac\'{a}n, Mexico \item \Idef{org120}Universidade de S\~{a}o Paulo (USP), S\~{a}o Paulo, Brazil \item \Idef{org121}Universidade Estadual de Campinas (UNICAMP), Campinas, Brazil \item \Idef{org122}University of Houston, Houston, Texas, United States \item \Idef{org123}University of Jyv\"{a}skyl\"{a}, Jyv\"{a}skyl\"{a}, Finland \item \Idef{org124}University of Liverpool, Liverpool, United Kingdom \item \Idef{org125}University of Tennessee, Knoxville, Tennessee, United States \item \Idef{org126}University of the Witwatersrand, Johannesburg, South Africa \item \Idef{org127}University of Tokyo, Tokyo, Japan \item \Idef{org128}University of Tsukuba, Tsukuba, Japan \item \Idef{org129}University of Zagreb, Zagreb, Croatia \item \Idef{org130}Universit\'{e} de Lyon, Universit\'{e} Lyon 1, CNRS/IN2P3, IPN-Lyon, Villeurbanne, France \item \Idef{org131}V.~Fock Institute for Physics, St. Petersburg State University, St. 
Petersburg, Russia \item \Idef{org132}Variable Energy Cyclotron Centre, Kolkata, India \item \Idef{org133}Vin\v{c}a Institute of Nuclear Sciences, Belgrade, Serbia \item \Idef{org134}Warsaw University of Technology, Warsaw, Poland \item \Idef{org135}Wayne State University, Detroit, Michigan, United States \item \Idef{org136}Wigner Research Centre for Physics, Hungarian Academy of Sciences, Budapest, Hungary \item \Idef{org137}Yale University, New Haven, Connecticut, United States \item \Idef{org138}Yonsei University, Seoul, South Korea \item \Idef{org139}Zentrum f\"{u}r Technologietransfer und Telekommunikation (ZTT), Fachhochschule Worms, Worms, Germany \end{Authlist} \endgroup \section{Introduction} \label{sec:intro} \input{introduction.tex} \section{Experimental apparatus and data sample} \label{sec:detector} \input{detector.tex} \section{Data analysis} \label{sec:Dmesons} \input{RecoSelCorr.tex} \section{Results and discussion} \label{sec:DRAA} \input{D_Raa.tex} \input{models.tex} \section{Summary} \label{sec:conclusions} \input{conclusions.tex} \newenvironment{acknowledgement}{\relax}{\relax} \section*{Acknowledgements} \input{acknowledgements.tex} \bibliographystyle{utphys}
\section{Conclusion} Advances in GPU computing open new possibilities for accelerating high-performance parallel and large-scale distributed applications. GPUs allow us to consolidate huge compute power and memory bandwidth on one or a few machines, which may reduce the demand for big distributed clusters. This scale-up approach provides an alternative to scale-out systems for distributed applications. Evidently, cuMF on a single machine with GPUs solves matrix factorization faster and more cheaply than distributed CPU systems. CuMF achieves this by optimizing memory access, combining data and model parallelism, and applying topology-aware parallel reduction. In future work we plan to extend cuMF to other sparse problems such as graph algorithms \cite{hpdc2014gpugraph}, and to use it to accelerate the Hadoop/Spark framework~\cite{hpdc2015gpu}. \section{Experiments} \label{sec:exp} This section reports performance evaluations of cuMF. We compare cuMF with the multi-core solutions libMF \cite{libmf-13} and NOMAD \cite{nomad14}. We also compare with distributed solutions, including NOMAD (on multiple nodes), Factorbird \cite{factorbird14}, Spark ALS \cite{sparkals14}, and a Giraph-based solution from Facebook \cite{facebook15}. We select these solutions because they either perform better than earlier studies~\cite{DSGD-kdd11, DBLP:conf/icdm/TeflioudiMG12, sparkler13, fastccd, ccd++-icdm12} or are able to handle large data sets. Because none of the existing GPU-based solutions \cite{gpu-rbm, Zastrau:2012:SGD} can tackle big data sets, we do not compare with their results. The goals of our experiments are to provide key insights into the following questions: \begin{enumerate} \item how does cuMF on a single GPU compare with highly optimized multi-core methods, such as libMF and NOMAD, on medium-size problems? (Section \ref{sec:exp-one-gpu}) \item are the memory optimizations done by MO-ALS effective?
(Section \ref{sec:exp-mo}) \item is SU-ALS scalable with multiple GPUs? (Section \ref{sec:exp-scalability}) \item with four GPUs on one machine, how does cuMF compare with multi-node methods on large-size problems? (Section \ref{sec:exp-xlarge}) \end{enumerate} \subsection{Experiment setting} \textbf{Data Sets}. We use three public data sets, i.e., Netflix \cite{netflix08}, YahooMusic \cite{kdd-cup-yahoomusic-11} and Hugewiki~\cite{libmf-13}, to measure the convergence speed. For large-size problems, we synthesize the data sets used by SparkALS~\cite{sparkals14}, Factorbird~\cite{factorbird14} and Facebook~\cite{facebook15}. For these three systems, we compare the per-iteration latency because their convergence speeds are not reported. We also synthesize a data set whose size is beyond any previous attempt: we use the rating matrix of the Facebook data set, with $f$ enlarged from the original 16 to 100. Characteristics of these data sets are shown in Table \ref{tbl:dataset}. \textbf{Hardware}. Unless otherwise mentioned, we use one to four Nvidia Titan X GPUs, each with 3072 CUDA cores and 12 GB memory, on one machine. The machine has two Intel Xeon E5 CPUs, 256 GB RAM, and GPFS \cite{gpfs} as the file system. \textbf{Parameters}. The $f$ and $\lambda$ values for each data set are given in Table \ref{tbl:dataset}. Feature matrices are initialized with random numbers in $[0,1]$. We focus on the speed and scalability of cuMF, and therefore did not spend much effort on hyper-parameter tuning to achieve the best accuracy. \textbf{Evaluation}. For Netflix, YahooMusic and Hugewiki, we evaluate the root-mean-square error (RMSE) on the test set. Performance numbers for libMF and NOMAD are obtained from \cite{libmf-13, nomad14}. For SparkALS, Factorbird and Facebook, since the data is synthetic and no test RMSE is reported, we compare the per-iteration run time.
\begin{table} \centering \caption{Data sets} \label{tbl:dataset} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Data Set} & \textbf{$m$} & \textbf{$n$} & \textbf{$N_z$} & \textbf{$f$} &\textbf{$\lambda$} \\\hline Netflix &480,189&17,770&99M&100&0.05\\ \hline YahooMusic&1,000,990&624,961&252.8M&100&1.4\\ \hline Hugewiki&50,082,603&39,780&3.1B&100&0.05 \\ \hline \hline SparkALS&660M&2.4M&3.5B&10&0.05\\ \hline Factorbird&229M&195M&38.5B&5&0.05\\ \hline Facebook&1B&48M&112B&16&0.05\\ \hline \textbf{cuMF}&1B&48M&112B&100&0.05\\ \hline \end{tabular} \end{table} \subsection{MO-ALS on a single GPU} \label{sec:exp-one-gpu} We run cuMF on one GPU, measure the test RMSE w.r.t. training time, and compare with NOMAD and libMF on one machine with 30 cores \cite{nomad14}. We choose these two for comparison because they are among the fastest multi-core solutions. In Figure~\ref{fig:netflix-yahoo-perf}, on both Netflix and YahooMusic, cuMF performs slightly worse than NOMAD at the beginning but slightly better later, and is consistently faster than libMF. CuMF uses ALS, whose iterations take much longer than those of SGD-based methods. This makes it slower at the beginning. Nevertheless cuMF catches up quickly and outperforms them soon afterward. \begin{figure} % \centering \subfloat[Netflix]{% \includegraphics[width=0.25\textwidth]{figs/netflix-perf.png}} % % \subfloat[YahooMusic]{% \includegraphics[width=0.25\textwidth]{figs/yahoo-perf.png}} % \caption{% \label{fig:netflix-yahoo-perf} % Test RMSE convergence speed in terms of training time: cuMF (with one GPU), NOMAD and libMF (both with 30 CPU cores).} \end{figure} \subsection{Benefit of using register and texture memory in MO-ALS} We first measure the benefit of aggressively using registers in MO-ALS. Figure \ref{fig:netflix-yahoo-noReg} compares cuMF's performance, with or without using register memory to aggregate $A_u$, on one GPU. On Netflix data, cuMF converges 2.5 times slower (75 seconds vs.
30 seconds when RMSE reaches 0.92) without using registers. The result strongly supports the idea of aggressively using registers. Among all optimizations done in MO-ALS, using registers for $A_u$ brings the greatest performance gain. Without using registers, cuMF converges 1.7 times slower on YahooMusic. YahooMusic suffers a smaller degradation than Netflix without registers because its rating matrix is sparser. As a result, its \textsc{Get\_Hermitian\_X()} is less heavy-duty and occupies a smaller percentage of the overall run time. Figure \ref{fig:netflix-yahoo-noTex} compares cuMF's performance with or without using texture memory. With texture memory, convergence is 25\% to 35\% faster. The gain comes from the fact that Algorithm \ref{alg:mo-als} updates $\Theta$ and $X$ in an alternating manner, i.e., $\Theta$ is read-only when updating $X$, and $X$ is read-only when updating $\Theta$. This feature enables us to leverage the read-only texture memory in the GPU to speed up memory access. Since the YahooMusic data is sparser, the penalty of not using texture memory is also smaller.
\label{sec:exp-mo} \begin{figure} % \centering \subfloat[Netflix]{% \includegraphics[width=0.25\textwidth]{figs/netflix-noReg.png}} % % \subfloat[YahooMusic]{% \includegraphics[width=0.25\textwidth]{figs/yahoo-noReg.png}} % \caption{% \label{fig:netflix-yahoo-noReg} % The convergence speed of cuMF, with or without aggressively using registers on one GPU.} \end{figure} \begin{figure} % \centering \subfloat[Netflix]{% \includegraphics[width=0.25\textwidth]{figs/netflix-noTex.png}} % % \subfloat[YahooMusic]{% \includegraphics[width=0.25\textwidth]{figs/yahoo-noTex.png}} % \caption{% \label{fig:netflix-yahoo-noTex} % The convergence speed of cuMF, with or without texture memory on one GPU.} \end{figure} \subsection{Scalability of SU-ALS on multiple GPUs} \label{sec:exp-scalability} \begin{figure} % \centering \subfloat[Netflix]{% \includegraphics[width=0.25\textwidth]{figs/netflix-1-2-4.png}} % % \subfloat[YahooMusic]{% \includegraphics[width=0.25\textwidth]{figs/yahoo-1-2-4.png}} % \caption{% \label{fig:netflix-yahoo-124} % The convergence speed of cuMF on one, two, and four GPUs.} \end{figure} This section first studies how a problem with a fixed-size data set can be accelerated with multiple GPUs. For both Netflix and YahooMusic, $X$ and $\Theta$ fit on one GPU, so only model parallelism is needed. We run the Netflix and YahooMusic data on one, two and four GPUs on one machine. As seen from Figure~\ref{fig:netflix-yahoo-124}, close-to-linear speedup is achieved. For example, the speedup is 3.8x when using four GPUs, measured at RMSE 0.92. Detailed profiling shows that the very small overhead mainly comes from PCIe IO contention when multiple GPUs read from host memory simultaneously. In contrast, NOMAD observed a sub-linear speedup on certain data sets, due to cache locality effects and communication overhead \cite{nomad14}. CuMF achieves better scalability due to its optimized memory access and inter-GPU communication.
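The inter-GPU part of this scalability rests on a topology-aware, two-phase sum reduction: partial results are first combined between GPUs attached to the same socket (over fast, local PCIe links), and the two socket-level results are then combined across sockets. A minimal host-side sketch for a 2-sockets-by-2-GPUs layout (an illustration of the idea only; cuMF performs these transfers with device-to-device CUDA copies, not host loops):

```c
#include <assert.h>
#include <stddef.h>

/* Host-side sketch of a two-phase, topology-aware sum reduction over
   partial results from 4 GPUs laid out as 2 sockets x 2 GPUs each.
   Phase 1 reduces within each socket; phase 2 combines the two
   socket-level results into the output buffer. */
void two_phase_reduce(float *parts[4], size_t len, float *out)
{
    for (size_t i = 0; i < len; ++i) {
        parts[0][i] += parts[1][i];   /* socket 0: GPU 1 -> GPU 0 */
        parts[2][i] += parts[3][i];   /* socket 1: GPU 3 -> GPU 2 */
    }
    for (size_t i = 0; i < len; ++i)  /* phase 2: across sockets */
        out[i] = parts[0][i] + parts[2][i];
}
```

The point of the two phases is that the first one runs concurrently on both sockets and never crosses the slower inter-socket link; only the already-halved result does.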
An advantage of cuMF is that it consolidates massive computation on a single machine, so that it only uses PCIe connections, which are faster than any existing network. We also tested the Hugewiki data on four GPUs. We compare with multi-node NOMAD (on a 64-node HPC cluster and a 32-node AWS cluster) because it outperforms DSGD~\cite{DSGD-kdd11} and DSGD++~\cite{DBLP:conf/icdm/TeflioudiMG12}. Hugewiki is a relatively large data set where $m\approx50$M, $n\approx40$K, and $N_z\approx 3$B. When using $X$ to solve $\Theta$, $X$ is too big to fit on one GPU. According to Algorithm~\ref{alg:su-als} we partition $X$ evenly across four GPUs and apply data parallelism. We use the two-phase parallel reduction scheme shown in Figure \ref{fig:par-reduce} (b), because our machine has two sockets, each connecting to two GPUs. With all the intra- and inter-GPU optimizations, cuMF performs slightly better than NOMAD on a 64-node HPC cluster (again, with a slower start), and much better than NOMAD on a 32-node AWS cluster, as shown in Figure~\ref{fig:hugewiki}. This result is impressive, as a 64-node HPC cluster is outperformed by only one node plus four GPUs. This indicates that cuMF brings a big saving in infrastructure and management cost. \begin{figure} \center{\includegraphics[width=0.35\textwidth] {figs/hugewiki.png}} \caption{CuMF@4GPU, vs. NOMAD on a 64-node HPC cluster and a 32-node AWS cluster, with Hugewiki data. CuMF converges similarly to NOMAD with 64 nodes, and 10x as fast as NOMAD with 32 nodes.} \label{fig:hugewiki} \end{figure} \subsection{Solve extremely large-scale problems} \label{sec:exp-xlarge} \begin{figure} \center{\includegraphics[width=0.35\textwidth] {figs/verylarge.png}} \caption{CuMF@4GPU on three very large data sets, compared with their original implementations as baselines.} \label{fig:verylarge} \end{figure} We conduct experiments on three extremely large problems. In this experiment we use four Nvidia GK210 cards on one machine.
Each card has 2496 CUDA cores (slightly fewer than Titan X) and 12 GB memory, and every two cards are encapsulated as one K80 GPU. The results for the following experiments are shown in Figure~\ref{fig:verylarge}. SparkALS~\cite{sparkals14} is a benchmark of Spark MLlib ALS. Its rating matrix is from the 100-by-1 duplication of the \textit{Amazon Reviews}~\cite{amazonreviews} data. It uses 50$\times$m3.2xlarge AWS nodes with Spark MLlib 1.1, and takes 240 seconds per ALS iteration. We synthesize the data in the same way as~\cite{sparkals14}, apply model parallelism when solving $X$, and apply data parallelism when solving $\Theta$. CuMF with four GPUs completes one iteration in 24 seconds, which is \textbf{ten times as fast} as SparkALS. Factorbird~\cite{factorbird14} is a parameter server system for MF. It trains a data set ($m=229$M, $n=195$M, $f=5$, and $N_z=38.5$B) on a cluster of 50 nodes. We synthesize the data using the method described in~\cite{DBLP:conf/icdm/TeflioudiMG12}. We use only model parallelism in solving $X$ and $\Theta$ because they both fit into one GPU. CuMF with four GPUs completes one iteration in 92 seconds. Factorbird needs 563 seconds per iteration, and with SGD it may need more iterations than ALS. Facebook~\cite{facebook15} recently revealed that its MF system deals with 1 billion users, millions of items and over 100 billion ratings. Given this hint we did a 160-by-20 duplication of the Amazon Reviews data, yielding a data set with $m=1056$M, $n=48$M, $f=16$, and $N_z=112$B. We use data parallelism to solve both $X$ and $\Theta$. In particular, when solving $\Theta$, because $X$ is huge ($1056$M$\times16$ floats) and cannot fit on 4 GPUs, we change the \textbf{parfor} in Lines 9-18 of Algorithm~\ref{alg:su-als} into a \textbf{sequential for} with many batches. By doing this, cuMF completes one ALS iteration in 746 seconds.
\cite{facebook15} does not report its speed on 50 Giraph workers, but we believe cuMF is competitive given the size of the problem and the low cost of one machine with GPUs. We further try a larger $f=100$, and cuMF completes one iteration in 3.8 hours. To the best of our knowledge, this is by far the largest matrix factorization problem ever reported in the literature. In summary, on two extremely large data sets, cuMF with four GPUs significantly outperforms the original distributed implementations. CuMF is also able to factorize the largest collaborative filtering matrix ever reported. \section{Introduction} Sparse matrix factorization (SMF or MF) factors a sparse rating matrix $R$ ($m$ by $n$, with $N_z$ non-zero elements) into an $m$-by-$f$ matrix and an $f$-by-$n$ matrix, as shown in Figure~\ref{fig:mf}. MF is widely used for collaborative-filtering-based recommendations~\cite{mf-computer09} in e-commerce (e.g., Amazon) and digital content streaming (e.g., Netflix). Very recently, MF has also been applied in text mining, to derive hidden features of words~\cite{pennington2014glove}. Given the widespread use of MF, a scalable and speedy implementation is very important. In terms of \textbf{scale}, many parallel solutions~\cite{libmf-13,ccd++-icdm12,nomad14} target medium-sized problems such as the Netflix challenge \cite{netflix08}. However, industry-scale recommendation problems have grown two orders of magnitude larger. Figure \ref{fig:scale} shows the scale of MF problems, in terms of the number of ratings and the number of model parameters. As an example, Facebook's MF involves $100+$ billion ratings, 1 billion users, and millions of items~\cite{facebook15}. No existing system except \cite{facebook15} has tackled problems at this scale. In terms of \textbf{speed}, recommendations need to evolve promptly in online applications.
Current approaches including MPI \cite{nomad14}, Spark \cite{sparkals14} and parameter servers \cite{factorbird14} address large-scale MF problems. However, they require costly clusters (e.g., 50 nodes) and still suffer from long latency. \begin{figure} \center{\includegraphics[width=\linewidth] {figs/mf.png}} \caption{Matrix factorization.} \label{fig:mf} \end{figure} \begin{figure} \center{\includegraphics[width=0.95\columnwidth] {figs/scale.png}} \caption{The scale of MF data sets\protect\footnotemark. The y-axis is the $N_z$ of $R$, and the x-axis is $(m+n)\times f$. CuMF can tackle MF problems of greater size, compared with existing systems.} \label{fig:scale} \end{figure} \footnotetext{CCD++ \protect\cite{ccd++-icdm12}, DSGD \protect\cite{DSGD-kdd11}, DSGD++ \protect\cite{DBLP:conf/icdm/TeflioudiMG12}, Facebook \cite{facebook15}, Factorbird \protect\cite{factorbird14}, Flink \protect\cite{flink15}, Hugewiki \protect\cite{libmf-13}, Netflix \protect\cite{netflix08}, SparkALS \protect\cite{sparkals14}, and YahooMusic \protect\cite{kdd-cup-yahoomusic-11}.} Recently, the GPU has emerged as an accelerator for parallel algorithms~\cite{GeMTC2014,hpdc2015gpu}. It has big compute power (typically 10x the floating-point operations per second, or flops, of a CPU) and memory bandwidth (typically 5x that of a CPU) \cite{hennessy2011computer}, but a limited amount of control logic and memory capacity. In particular, the GPU's success in deep learning \cite{costshpc2013} inspires us to try it for MF. In deep learning, the computation is mainly dense matrix multiplication, which is \textbf{compute bound}. As a result, a GPU can train deep neural networks 10x as fast as a CPU by saturating its flops. However, unlike deep learning, an MF problem involves sparse matrix manipulation, which is usually \textbf{memory bound}. Given this, we want to explore an MF algorithm and a system that can still leverage the GPU's compute and memory capability.
We identified that the alternating least squares (ALS) algorithm \cite{mf-computer09} for MF is inherently parallel and can exploit thousands of GPU cores. Moreover, compared with stochastic gradient descent (SGD), ALS has an advantage when $R$ is made up of implicit ratings and therefore cannot be considered sparse \cite{mf-computer09}. Based on these observations, we design and implement \textbf{cuMF} (CUDA Matrix Factorization), a scalable ALS solution on one machine with one or more GPUs. CuMF achieves excellent scalability and performance by making \textbf{the following contributions}. (1) On a single GPU, MF is inherently sparse and memory bound, and thus it is difficult to utilize the GPU's compute power. We optimize memory access in ALS by various techniques, including reducing discontiguous memory access, retaining hotspot variables in faster memory, and aggressively using registers. By this means cuMF gets closer to the roofline performance of a single GPU. (2) On multiple GPUs, we add data parallelism to ALS's inherent model parallelism. Data parallelism needs a fast reduction operation among GPUs, leading to (3). (3) We develop an innovative topology-aware, parallel reduction method to fully leverage the bandwidth between GPUs. By this means cuMF ensures that multiple GPUs are efficiently utilized simultaneously.
\begin{table} \begin{threeparttable} \centering \caption{Speed and cost of cuMF on one machine with four GPUs, compared with three distributed CPU systems, on the cloud} \label{tbl:cumf} \begin{tabular}{|p{0.075\textwidth}|p{0.08\textwidth}|p{0.055\textwidth}|p{0.065\textwidth}|p{0.045\textwidth}|p{0.045\textwidth}|} \hline \textbf{Baseline}& baseline config &\#nodes&price\break/node/hr& cuMF speed & cuMF cost\\\hline NOMAD&m3.xlarge&32&\$0.27&\textbf{10x}&\textbf{3\%} \\ \hline SparkALS&m3.2xlarge&50&\$0.53&\textbf{10x} &\textbf{1\%}\\ \hline Factorbird&c3.2xlarge&50&\$0.42&\textbf{6x} & \textbf{2\%}\\ \hline \end{tabular} \begin{tablenotes}[para,flushleft] Note: Experiment details are in Section~\ref{sec:exp}. NOMAD~\cite{nomad14} uses Hugewiki data and AWS servers; it used m1.xlarge, which has since been superseded by m3.xlarge at Amazon. Factorbird's node is similar to AWS c3.2xlarge \cite{factorbird14}. \end{tablenotes} \end{threeparttable} \end{table} The resulting cuMF is competitive in both speed and monetary cost. Table~\ref{tbl:cumf} shows cuMF's speed and cost compared with three CPU systems: NOMAD (with Hugewiki data)~\cite{nomad14}, Spark ALS~\cite{sparkals14}, and Factorbird~\cite{factorbird14}. NOMAD and Spark ALS use Amazon AWS, and we pick an AWS node type similar to what Factorbird uses. The cost of the CPU and GPU systems is calculated as $(price~per~node~per~hr)*(\#nodes)*(execution~time)$, with unit prices taken when submitting this paper\footnote{AWS price: https://aws.amazon.com/ec2/pricing/; GPU machine price: http://www.softlayer.com/gpu}. CuMF runs on one machine with two Nvidia K80s (four GPU devices in total) from IBM Softlayer, with an amortized hourly cost of \$2.44. With faster speed and fewer machines required, cuMF's overall cost of running these benchmarks is merely $1\%$-$3\%$ of the baseline systems compared. That is, cuMF is 33-100x as cost-efficient.
In summary, this paper describes a novel implementation of MF on a machine with GPUs and a set of exemplary optimization techniques that leverage the GPU architectural characteristics. The experimental results demonstrate that with up to four Nvidia GPUs on one machine, cuMF is (1) competitive compared with multi-core methods on medium-sized problems; (2) much faster than vanilla GPU implementations without memory optimization; (3) 6-10 times as fast, and 33-100 times as cost-efficient, as distributed CPU systems on large-scale problems; and (4) more significantly, able to solve the largest matrix factorization problem ever reported. This paper is organized as follows. Section 2 formulates the problem of matrix factorization and explains the two challenges in large-scale ALS, i.e., memory access on one GPU and scalability on multiple GPUs. Section 3 introduces the memory-optimized ALS algorithm on a single GPU, to address challenge 1. Section 4 introduces the scale-up ALS algorithm to parallelize MF on multiple GPUs, to address challenge 2. Section 5 shows the experiment results and Section 6 reviews related work. Section 7 concludes the paper. \section{Memory-optimized ALS on one GPU} \label{sec:singlegpu} \subsection{The GPU memory hierarchy} To address \textbf{Challenge 1}, ``On a single GPU, how to optimize sparse, irregular and intensive memory access'', we need direct control of the GPU's memory hierarchy. We choose Nvidia GPUs because they provide a rich set of \textit{programmable memory} types with different characteristics, shown in Table \ref{tbl:gpu-mem}. \footnote{There is also \textit{non-programmable memory} such as the L1 and L2 caches. They also accelerate memory access but are not directly controllable by programmers. Therefore in cuMF we focus on optimizations using the programmable memory.} \begin{table}[h!]
\centering \caption{Programmable GPU memory}\label{tbl:gpu-mem} \begin{tabular}{|c|p{1cm}|p{1.2cm}|p{2.6cm}|} \hline \textbf{Memory type} & \textbf{Size} & \textbf{Latency} & \textbf{Scope} \\ \hline \textit{global} & large & high & application\\ \hline \textit{texture} & medium & medium & application, read-only \\ \hline \textit{shared} & small & low & thread block \\ \hline \textit{register} & small & lowest & thread; not indexable \\ \hline \end{tabular} \end{table} Although the principles of memory optimization are generally known, the specific implementation of ALS on a GPU is not trivial, for the following reasons: \begin{enumerate} \item A GPU has a lower clock frequency than a CPU (typically $<1$ GHz vs. 2-3 GHz). If the massive parallelism in the GPU is not fully utilized, cuMF is likely to be slower than the highly-optimized CPU implementations. \item Compared with a CPU, the GPU's global memory is smaller, e.g., 12 GB. In contrast, the GPU has a much larger register file, e.g., 4 MB, which is largely ignored nowadays. \item The control of register, shared, texture and global memory is complex. Global memory is large but slow, texture memory is read-only, and register and shared memory are not visible across GPU kernels (i.e., device functions). Moreover, registers are not \textit{dynamically indexable}, which prevents them from being used for large arrays. \end{enumerate} Due to these difficulties, without insight into both the GPU hardware and the algorithm specifics, an implementation can easily be bounded by memory capacity, latency or bandwidth, preventing us from harnessing the full power of the GPU. \subsection{The base ALS algorithm} Algorithm \ref{alg:base-als} (the base ALS) shows how to update $X$ using eq.~\eqref{eq:update-x}. The algorithm to update $\Theta$ is similar, with all variables symmetrically exchanged. Algorithm \ref{alg:base-als} consists of two procedures: \textsc{Get\_Hermitian\_X()} and \textsc{Batch\_Solve()}. \begin{algorithm}[h!]
\caption{Base ALS: Update $X$ \newline \textbf{Input} $R_{m \times n}$ \newline \textbf{Input} $\Theta^T: [\theta_1, \theta_2, ..., \theta_n]_{f \times n}$ \newline \textbf{Output} $X: [\textbf{x}_1^T; \textbf{x}_2^T; ...; \textbf{x}_m^T]_{m \times f}$ } \label{alg:base-als} \begin{algorithmic}[1] \Procedure{Get\_Hermitian\_X}{$R,\Theta^T$} \For{$u \gets 1, m$} \State $\Theta^T_u \gets$ sub-matrix of $\Theta^T$ with cols $\theta_v$ s.t. $r_{uv}\neq 0$\label{line:3} \State $A_u \gets 0$ \ForAll {columns $\theta_v$ in $\Theta^T_u$} \State $A_u \gets A_u+\theta_v\theta^T_v+\lambda I$ \label{line:6} \EndFor \State $B_u \gets \Theta^{T} \cdot R_{u*}^T$ \EndFor \State \textbf{return} $([A_1,A_2,...A_m], [B_1,B_2,...,B_m])$ \EndProcedure \Statex \Procedure{Batch\_Solve}{$[A_1,A_2,...A_m], [B_1,B_2,...,B_m]$} \For{$u \gets 1, m$} \State $\textbf{x}_u \gets$ solve $A_{u}\cdot \textbf{x}_u=B_u$ \EndFor \State \textbf{return} $[\textbf{x}_1,\textbf{x}_2,...\textbf{x}_m]^T$ \EndProcedure \Statex \State $(A, B) \gets$ \Call{Get\_Hermitian\_X}{$R, \Theta^T$} \State $X \gets$ \Call{Batch\_Solve}{$A, B$} \end{algorithmic} \end{algorithm} \subsection{The memory-optimized ALS algorithm MO-ALS} Table~\ref{tbl:cost} indicates that \textsc{Get\_Hermitian\_X()} in Algorithm \ref{alg:base-als} is memory intensive. We observed that Lines 3-7, i.e., computing $A_u$, take much of the overall execution time. To optimize the performance, we enhance Algorithm \ref{alg:base-als} by leveraging different types of GPU memory. We call this memory-optimized ALS algorithm \textbf{MO-ALS}, as described in Algorithm~\ref{alg:mo-als}. The following lines in Algorithm \ref{alg:base-als} are enhanced in MO-ALS: \begin{enumerate} \item{Reading from $\Theta^T$ in Line \ref{line:3}}. $\Theta^T$ with dimension $f \times n$ is stored in global memory.
When collecting the sub-matrix $\Theta_u^T$ from $\Theta^T$, we use \textbf{texture memory} as the cache because: (1) this collecting process enjoys spatial locality, (2) $\Theta^T$ is read-only when updating $X$, and (3) different $\Theta_u^T$s can potentially re-use the same $\theta_v$s cached in texture memory. As a result, this caching step reduces discontiguous memory access. This optimization is shown in Line 3 of Algorithm~\ref{alg:mo-als}. \item{Storage of $\Theta^T_u$ in Line \ref{line:3}}. We use one thread block with $f$ threads to calculate each $A_u$, and use the per-block \textbf{shared memory} to store $\Theta_u^T$, so as to speed up the subsequent read in Line \ref{line:6}. However, for each $A_u$, we are not able to copy the whole $\Theta_u^T$ into its shared memory space. This is because $\Theta_u^T$ is of size $f \times n_{x_u}$ and too big compared to the 48 or 96 KB per-SM\footnote{SM or SMX: stream multiprocessor. A GPU device usually consists of 10 to 15 SMs.} shared memory. If a single thread block consumes too much shared memory, other blocks are prohibited from launching, resulting in low parallelism. In order to achieve higher parallelism, we select a bin size $bin$, and for each ${\textbf{x}_u}$ only allocate a shared memory space $\Theta_u^T[bin]$ of size $f \times bin$. In practice we choose $bin$ between 10 and 30, while $n_{x_u}$ can be hundreds to thousands. We iteratively move a subset of $\Theta_u^T$ into $\Theta_u^T[bin]$ to be processed in the following step. This optimization is shown in Lines 5-10 of Algorithm~\ref{alg:mo-als}. \item{Update of $A_u$ in Line \ref{line:6}}. Here we need to read a $\theta_v$ from $\Theta_u^T[bin]$, calculate the $f \times f$ elements of $\theta_v \cdot \theta^T_v$, and add them to global variable $A_u$. Obviously $A_u$ is a memory hotspot.
In order to speed up the aggregation in $A_u$, we choose \textbf{register memory} to hold $\sum\limits_{\theta_v \in \Theta_u^T[bin]}\theta_v \theta_v^T$, and only update global memory $A_u$ after we iterate over all columns in $\Theta_u^T$. This reduces global memory access by a factor of $n_{x_u}$. This optimization is shown in Line 8 of Algorithm~\ref{alg:mo-als}. More details are discussed in Section \ref{sec:register}. \end{enumerate} Figure~\ref{fig:gpu-memory} illustrates the memory usage of MO-ALS. \begin{algorithm}[h!] \caption{MO-ALS: Memory-Optimized ALS; update $X$ on one GPU. \newline $\mathcal{G}\{ var \}$: $var$ in global memory \newline $\mathcal{T}\{ var \}$: $var$ in texture memory \newline $\mathcal{S}\{ var \}$: $var$ in shared memory \newline $\mathcal{R}\{ var \}$: $var$ in register memory \newline \textbf{Input} $R_{m \times n}$ \newline \textbf{Input} $\Theta^T: [\theta_1, \theta_2, ..., \theta_n]_{f \times n}$ \newline \textbf{Output} $X: [\textbf{x}_1^T; \textbf{x}_2^T; ...; \textbf{x}_m^T]_{m \times f}$ } \label{alg:mo-als} \begin{algorithmic}[1] \Procedure{Get\_Hermitian\_X\_MO}{$R,\Theta^T$} \For{$u \gets 1, m$} \State $\mathcal{T}\{\Theta^T_u\} \gets$ sub-matrix of $\mathcal{G}\{\Theta^T\}$ with cols $\theta_v$ s.t.
$r_{uv}\neq 0$ \label{line:2-1} \State $\mathcal{R}\{A_u\} \gets 0$ \While {$\mathcal{T}\{\Theta^T_u\}$ has more cols not processed} \State $\mathcal{S}\{\Theta_u^T[bin]\} \gets$ next $bin$ cols from $\mathcal{T}\{\Theta^T_u\}$ \ForAll {cols $\theta_v$ in $\mathcal{S}\{\Theta^T_u[bin]\}$} \State $\mathcal{R}\{A_u\} \gets \mathcal{R}\{A_u\}+\mathcal{S}\{\theta_v\}\mathcal{S}\{\theta^T_v\}+\lambda I$ \label{line:2-2} \EndFor \EndWhile \State $\mathcal{G}\{A_u\} \gets \mathcal{R}\{A_u\}$ \State $\mathcal{G}\{B_u\} \gets \mathcal{G}\{\Theta^{T}\} \cdot \mathcal{G}\{R_{u*}^T\}$ \EndFor \State \textbf{return} $\mathcal{G}([A_1,A_2,...A_m], [B_1,B_2,...,B_m])$ \EndProcedure \Statex \State $(A, B) \gets$ \Call{Get\_Hermitian\_X\_MO}{$R, \Theta^T$} \State $X \gets$ \Call{Batch\_Solve}{$A, B$} \end{algorithmic} \end{algorithm} \begin{figure*}[t] \center{\includegraphics[width=0.65\linewidth] {figs/mo-als.png}} \caption{Illustration of memory usage in MO-ALS. Line numbers correspond to those in Algorithm \ref{alg:mo-als}. For simplicity, we solve two rows of $X$, i.e., $\textbf{x}_{u1}$ and $\textbf{x}_{u2}$, in parallel. In reality we solve as many rows of $X$ as possible in parallel.} \label{fig:gpu-memory} \end{figure*} \subsection{Enhanced utilization of registers} \label{sec:register} We exploit the GPU register file, which is larger and has higher bandwidth than shared memory \cite{DBLP:conf/emnlp/CannyHK13}. For example, in the latest Nvidia Maxwell generation GPUs, each SM has a 256 KB register file and only 96 KB shared memory. However, while there is much focus on using shared memory \cite{optimizecuda08}, the use of registers is surprisingly ignored. This \textbf{under-utilization of registers} is mainly due to the fact that register variables cannot be dynamically indexed.
That is to say, one cannot declare and refer to an array in the register file\footnote{An exception is that the CUDA compiler may put very small ($\leq 5$) arrays on registers during loop unrolling.}. In Algorithm \ref{alg:mo-als}, $A_u$ is of size $f^2$; to put it in registers and access it, we have to declare $f^2$ variables instead of a single array. This makes the CUDA code hard to write. We use macro expansion in C to generate such a verbose paragraph of code. The snippet in Listing \ref{register-code} demonstrates what the expanded code looks like when $f=10$. \begin{lstlisting}[label=register-code, caption=CUDA kernel code to use registers when \textit{f} is 10, language=C]
get_Au_kernel() {
  ...
  //declare Au in registers
  float temp0 = 0, temp1 = 0, temp2 = 0, temp3 = 0,
    temp4 = 0, temp5 = 0, temp6 = 0, temp7 = 0,
    temp8 = 0, temp9 = 0;
  ...
  float temp90 = 0, temp91 = 0, temp92 = 0, temp93 = 0,
    temp94 = 0, temp95 = 0, temp96 = 0, temp97 = 0,
    temp98 = 0, temp99 = 0;
  //aggregate Au in registers
  for(k){
    temp0 += theta[k*f]*theta[k*f];
    temp1 += theta[k*f]*theta[k*f+1];
    ...
    temp98 += theta[k*f+9]*theta[k*f+8];
    temp99 += theta[k*f+9]*theta[k*f+9];
  }
  //copy registers to global memory
  Au[0] = temp0;
  Au[1] = temp1;
  ...
  Au[98] = temp98;
  Au[99] = temp99;
}
\end{lstlisting} \textbf{Limitation of MO-ALS.} Algorithm \ref{alg:mo-als} is able to deal with a big $X$ on one GPU, as long as $\Theta$ can fit into it. When $X$ is big and $\Theta$ is small, we first load the whole $\Theta$ to the GPU, then load $R$ and solve $X$ in batches. However, this batch-based approach does not work when $\Theta$ cannot fit into a single GPU. This motivates us to scale to multiple GPUs on a single machine, as presented in Section \ref{sec:mgpu}.
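As an aside on Listing~\ref{register-code}: the macro-expansion trick that generates those scalar accumulators can be sketched compactly with C token pasting. The following compilable sketch shows it for a toy $f=2$ (the names and the omission of binning are our own simplifications; the real generated code has $f^2$ accumulators for $f$ up to 100):

```c
#include <assert.h>

/* Generate f*f scalar accumulators (register candidates) via token
   pasting instead of a dynamically indexed array. Sketch for F = 2. */
#define F 2
#define DECL(i, j)  float temp##i##j = 0.0f;
#define ACC(i, j)   temp##i##j += theta[k * F + (i)] * theta[k * F + (j)];
#define STORE(i, j) Au[(i) * F + (j)] = temp##i##j;

/* theta holds ncols column vectors of length F, column-major;
   Au receives the F x F sum of outer products theta_v * theta_v^T. */
void get_Au_sketch(const float *theta, int ncols, float *Au)
{
    DECL(0, 0) DECL(0, 1) DECL(1, 0) DECL(1, 1)
    for (int k = 0; k < ncols; ++k) {
        ACC(0, 0) ACC(0, 1) ACC(1, 0) ACC(1, 1)
    }
    STORE(0, 0) STORE(0, 1) STORE(1, 0) STORE(1, 1)
}
```

Each `temp##i##j` is an ordinary scalar, so the compiler is free to keep it in a register for the whole loop, which is exactly what the dynamically indexed array would prevent.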
\section{Problem Definition} \label{sec:background} \subsection{ALS algorithm for matrix factorization} \begin{table} \begin{threeparttable} \centering \caption{Notations} \label{tbl:notations} \begin{tabular}{|c|p{5cm}|c|} \hline \textbf{Name} & \textbf{Meaning} & \textbf{Range} \\ \hline $R$ & sparse rating matrix: $m$ by $n$ & \\ \hline $X$ & low rank matrix: $m$ by $f$ & \\ \hline $\Theta$ & low rank matrix: $n$ by $f$ & \\ \hline $m$ & vertical dimension of $R$ & $10^3$ to $10^9$\\ \hline $n$ & horizontal dimension of $R$ & $10^3$ to $10^9$ \\ \hline $f$ & dimension of latent features & 5 to 100s \\ \hline $N_z$ & number of non-zero entries in $R$ & $10^8$ to $10^{11}$ \\ \hline $r_{uv}$ & $R$'s value at position $(u,v); 1\leq u\leq m, 1\leq v\leq n $ & \\ \hline $\textbf{x}_u^T$ & $X$'s $u$th row; $1\leq u\leq m$ & \\ \hline $\theta_v$ & $\Theta^T$'s $v$th column; $1\leq v\leq n$ & \\ \hline $R_{u*}$ & $R$'s $u$th row; $1\leq u\leq m$ & \\ \hline $R_{*v}$ & $R$'s $v$th column; $1\leq v\leq n$ & \\ \hline \end{tabular} \begin{tablenotes}[para,flushleft] Note: usually $N_z \gg m, n$ and $m, n \gg f$. \end{tablenotes} \end{threeparttable} \end{table} Referring to the notations listed in Table~\ref{tbl:notations}, matrix factorization factors a sparse matrix $R$ into two lower-rank, dense matrices $X$ and $\Theta$, such that: \begin{center} $R\approx X \cdot \Theta^{T}$ \end{center} As illustrated in Figure \ref{fig:mf}, with $r_{uv}$ denoting the non-zero element of $R$ at position $(u,v)$, we want to minimize the following cost function \eqref{eq-mf}. To avoid overfitting we use the weighted-$\lambda$-regularization proposed in \cite{netflix08}, where $n_{x_u}$ and $n_{\theta_v}$ denote the total number of ratings on user $u$ and item $v$, respectively.
\begin{equation} J = \sum_{u,v} (r_{uv} - \mathbf{x}_u^T\theta_v)^2 +\lambda (\sum_{u}n_{x_u}||\mathbf{x}_u||^2 +\sum_{v}n_{\theta_v}||\mathbf{\theta}_v||^2) \label{eq-mf} \end{equation} Many optimization methods, including ALS \cite{netflix08}, CGD \cite{ccd++-icdm12}, and SGD \cite{libmf-13}, have been applied to minimize $J$. We adopt the ALS approach, which first optimizes $X$ while fixing $\Theta$, and then optimizes $\Theta$ while fixing $X$. Setting \begin{align*} {\frac{\partial J}{\partial \textbf{x}_u}=0} \end{align*} and \begin{align*} {\frac{\partial J}{\partial \theta_v}=0} \end{align*} leads to the following equations: \begin{align} \label{eq:update-x} \sum\limits_{r_{uv}\neq0} (\theta_v \theta_v^T+\lambda I) \cdot \textbf{x}_u &= \Theta^T\cdot R_{u*}^T \end{align} together with: \begin{align} \label{eq:update-theta} \sum\limits_{r_{uv}\neq0} (\textbf{x}_u \textbf{x}_u^T+\lambda I) \cdot \theta_v &= X^T\cdot R_{*v} \end{align} By this means, ALS updates $X$ using eq.~\eqref{eq:update-x}, and updates $\Theta$ using eq.~\eqref{eq:update-theta}, in an alternating manner. Empirically, ALS often converges in 5-20 iterations, with each iteration consisting of both update-$X$ and update-$\Theta$. In the rest of this paper, we explain our method using update-$X$; the same method applies to update-$\Theta$. The formulation of ALS enables solving in parallel so as to harness the power of the GPU. Eqs.~\eqref{eq:update-x} and~\eqref{eq:update-theta} show that the updates of each $\textbf{x}_u$ and $\theta_v$ are independent of each other.
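To make the per-row update concrete, here is a minimal C sketch of eq.~\eqref{eq:update-x} for a single row $\textbf{x}_u$ with a toy $f=2$: it assembles $A_u=\sum_{r_{uv}\neq0}(\theta_v\theta_v^T+\lambda I)$ and $B_u=\sum_{r_{uv}\neq0}r_{uv}\theta_v$, then solves the $2\times2$ system by Cramer's rule. This is an illustration only; cuMF assembles and solves all $m$ systems in batch on the GPU, and the names here are ours.

```c
#include <assert.h>
#include <math.h>

#define F 2  /* latent dimension, kept tiny for illustration */

/* ALS update for one row x_u (eq. (2)): theta holds the nv column
   vectors theta_v of the items rated by user u, r their ratings. */
void als_update_xu(const float theta[][F], const float *r, int nv,
                   float lambda, float x[F])
{
    float A[F][F] = {{0, 0}, {0, 0}}, B[F] = {0, 0};
    for (int v = 0; v < nv; ++v) {
        for (int i = 0; i < F; ++i) {
            for (int j = 0; j < F; ++j)
                A[i][j] += theta[v][i] * theta[v][j];
            A[i][i] += lambda;          /* + lambda*I once per rating */
            B[i] += r[v] * theta[v][i];
        }
    }
    /* solve the 2x2 system A x = B by Cramer's rule */
    float det = A[0][0] * A[1][1] - A[0][1] * A[1][0];
    x[0] = (B[0] * A[1][1] - A[0][1] * B[1]) / det;
    x[1] = (A[0][0] * B[1] - B[0] * A[1][0]) / det;
}
```

With $\lambda>0$ the system is always non-singular, and each call touches only row $u$'s ratings, which is why all $m$ rows can be solved independently.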
This independent nature does not hold for SGD, which randomly selects a sample $r_{uv}$ and updates the parameters by: \begin{align}\label{eq-sgd} \textbf{x}_u & = \textbf{x}_u - \alpha [ (\textbf{x}_u^T \theta_v - r_{uv}) \theta_v + \lambda \textbf{x}_u] \nonumber\\ \theta_v & = \theta_v - \alpha [ (\textbf{x}_u^T \theta_v - r_{uv})\textbf{x}_u + \lambda \theta_v] \end{align} If two random samples $r_{uv}$ and $r_{uv'}$ share the same row index $u$, their updates to $\textbf{x}_u$ cannot be treated independently. Previous works on CPUs~\cite{libmf-13, nomad14, DSGD-kdd11, DBLP:conf/icdm/TeflioudiMG12} all partition $R$ into blocks with no overlapping rows and columns. Such a strategy works effectively on tens of CPU cores but is difficult to scale to a GPU with thousands of cores. As a result, we choose ALS instead of SGD for cuMF. \subsection{Challenges of speedy and scalable ALS} \label{sec:als-challenges} \begin{table*} \begin{threeparttable} \centering \caption{Compute cost and memory footprint of ALS: the update-$X$ step} \label{tbl:cost} \begin{tabular}{ |l|l|l|l|l|l| } \hline & & \multicolumn{2}{c}{\textbf{compute cost}} & \multicolumn{2}{|c|}{\textbf{memory footprint}} \\ \hline & & $A_u$ in~\eqref{eq:update-x} & $B_u$ in~\eqref{eq:update-x}& $A_u$ in~\eqref{eq:update-x}& $B_u$ in~\eqref{eq:update-x}\\ \hline \multirow{3}{*}{\textbf{get\_hermitian\_x}}& one item & $N_zf(f+1)/2m$ & $(N_z+N_zf)/m+2f$ & $f^2$ & $nf+f+(2N_z+m+1)/m$\\ & $m_b$ items & $m_bN_zf(f+1)/2m$ & $m_b(N_z+N_zf)/m+2m_bf$ & $m_bf^2$ & $nf+m_bf+m_b(2N_z+m+1)/m$\\ & all $m$ items & $N_zf(f+1)/2$ & $N_z+N_zf+2mf$ & $mf^2$ & $nf+mf+(2N_z+m+1)$\\ \hline \hline \multirow{3}{*}{\textbf{batch\_solve}}& one item &$f^3$&&&\\ & $m_b$ items & $m_bf^3$ & &&\\ & all $m$ items & $mf^3$ & &&\\ \hline \end{tabular} \begin{tablenotes}[para,flushleft] Note: here we omit some minor computations and auxiliary data structures needed in eq.~\eqref{eq:update-x}.
\end{tablenotes} \end{threeparttable} \end{table*} Table~\ref{tbl:cost} lists the compute cost and memory footprint of solving $X$ with eq.~\eqref{eq:update-x}, using single precision. The calculation is divided into two phases: \textbf{get\_hermitian\_x}, which obtains the left-hand Hermitian matrix $A_u=\sum\limits_{r_{uv}\neq0}(\theta_v \theta_v^T+\lambda I)$ and the right-hand side $B_u=\Theta^T\cdot R_{u*}^T$, and \textbf{batch\_solve}, which solves the many equations $A_u\textbf{x}_u=B_u$. Line 3 of Table~\ref{tbl:cost} (\textit{one item} in phase get\_hermitian\_x) shows that, to solve one row $\textbf{x}_u$, obtaining $A_u$ requires computing $N_z/m$\footnote{$N_z/m$ is the average number of non-zero entries per row.} products $\theta_v \theta_v^T$, each of which needs $f(f+1)/2$ multiplications. The cost of obtaining $B_u$ is $(N_z+N_zf)/m+2f$ \cite{cusparse}. In terms of memory, $A_u$ uses $f^2$ floats, $B_u$ uses $f$, $\Theta^T$ uses $nf$, and a row of $R$ in Compressed Sparse Row (CSR) format uses $(2N_z+m+1)/m$. In phase batch\_solve, solving the linear equation $A_u\textbf{x}_u=B_u$ needs no additional memory when an in-place solver is used, but incurs an $f^3$ computation cost. \noindent\textbf{Challenge 1. On a single GPU, how to optimize sparse, irregular and intensive memory access.} Table~\ref{tbl:cost} shows that computation is intensive in both phases: \textbf{get\_hermitian\_x} ($\mathcal{O}(N_zf^2)$) and \textbf{batch\_solve} ($\mathcal{O}(mf^3)$). The CUDA library cuBLAS~\cite{cublas} already provides dense solvers for phase batch\_solve, so we focus on the get\_hermitian\_x phase. This phase is very costly, especially when $N_z/m > f$ and therefore $N_zf^2>mf^3$. What is more troublesome is the \textit{sparse}, \textit{irregular} and \textit{intensive} memory access in this phase. Details are as follows: \begin{enumerate} \item Accessing many columns $\theta_v$ subject to $r_{uv}\neq 0$, for every $u$. This access is \textit{irregular} w.r.t.
$\Theta^T$, due to the sparseness of $R$. In each iteration to solve one $\textbf{x}_u$, we need to access $n_{x_u}$ columns ($N_z/m$ on average) spread \textbf{sparsely} and \textbf{discontiguously} across the $n$ columns of $\Theta^T$. For example, in the Netflix data set \cite{netflix08}, one user rates around 200 items on average, leading to a discontiguous access of 200 columns out of the total 17,770 in $\Theta^T$. \item Aggregating many $\theta_v \theta_v^T$s and $\textbf{x}_u \textbf{x}_u^T$s is memory \textit{intensive}, due to the large number of $\theta_v$s and $\textbf{x}_u$s to aggregate. According to eq.~\eqref{eq:update-x}, obtaining $A_u$ requires calculating many $\theta_v \theta_v^T$s and aggregating them. Therefore, each element in column vector $\theta_v$ is accessed frequently, and the partial aggregation result is updated frequently. To calculate $\theta_v\theta_v^T$ we need to read each element of $\theta_v$ $f$ times; after obtaining a $\theta_v\theta_v^T$, adding it to $\sum\limits_{r_{uv}\neq0}(\theta_v \theta_v^T+\lambda I)$ requires writing $f(f+1)/2$ elements, or $f^2$ elements if the downstream solver does not exploit symmetry. \end{enumerate} Section \ref{sec:singlegpu} presents how cuMF tackles Challenge 1, with experiment results shown in Sections \ref{sec:exp-one-gpu} and \ref{sec:exp-mo}. \noindent\textbf{Challenge 2. On multiple GPUs, how to scale and minimize communication overhead.} When $m$, $n$, $N_z$ and $f$ get larger, ALS is bounded by the memory capacity of a single GPU. For example, the update-$X$ iteration is bounded by the memory footprint of $m$ $A_u$s ($mf^2$ floats, without exploiting symmetry), $X^T$ ($mf$), $\Theta^T$ ($nf$) and $R$ ($2N_z+m+1$). The current Nvidia Maxwell and Kepler GPUs have 12 GB memory per device, so each device can only hold 3 billion ($3\times 10^9$) single-precision floats. However, even the smallest data set in Figure \ref{fig:scale}, Netflix, has $m=480$K.
When $f=100$, the $m$ Hermitian matrices alone occupy $mf^2=480$K$\times 100^2=4.8$ billion floats $>$ 3 billion. Previous CPU solutions have already encountered and partially addressed this memory capacity issue. PALS~\cite{netflix08} partitions $X$ and $R$ by rows, solving each partition in parallel by replicating $\Theta^T$. However, this \textbf{model parallelism} is only feasible when $\Theta^T$ is small. SparkALS~\cite{sparkals14}, the ALS implementation in Spark MLlib~\cite{sparkmllib15}, also partitions $X$ and $R$ by rows, and then solves each partition $X_i$ in parallel. Its improvement over PALS is that, instead of replicating $\Theta^T$, it splits $\Theta^T$ into overlapping partitions $\{\Theta^T_i\}$, where $\Theta^T_i$ contains only the $\theta_v$ columns needed by the $\textbf{x}_u$s in $X_i$. This improvement still has several deficiencies: \begin{enumerate} \item Generating ${\Theta^T_i}$ from ${X_i}$ is actually a graph partitioning task, and is time-consuming. \item Transferring each $\Theta^T_i$ to $X_i$ involves much network traffic, especially when $N_z \gg m$. \item $\Theta^T_i$ may still be too big to fit into a single GPU device, especially when $N_z \gg m$. \end{enumerate} Section \ref{sec:mgpu} presents how cuMF tackles Challenge 2, with experiment results shown in Sections \ref{sec:exp-scalability} and \ref{sec:exp-xlarge}. \section{Related Work} SGD, Coordinate Gradient Descent (CGD) and ALS are the three main algorithms for MF. This section first reviews the three algorithms and the methods to parallelize them, and then reviews GPU-based MF solutions. \subsection{MF algorithms} SGD-based algorithms \cite{mf-computer09} have often been applied to matrix factorization. SGD handles large-scale problems by splitting the rating matrix into blocks, along with sophisticated conflict-avoiding updates. CGD-based algorithms update along one coordinate direction in each iteration.
\cite{fastccd} improved the default cyclic CGD scheme by prioritizing the more important coordinates. ALS algorithms \cite{netflix08, als-10} have the advantages of being easy to parallelize, converging in fewer iterations, and handling non-sparse rating matrices \cite{mf-computer09}. CuMF is based on ALS. \subsection{Parallel computing paradigms} \textbf{Parallel SGD.} SGD has been parallelized in environments including multi-core \cite{libmf-13}, multi-node MPI \cite{DBLP:conf/icdm/TeflioudiMG12, nomad14}, MapReduce \cite{DSGD-kdd11, sparkler13} and parameter-server \cite{factorbird14, Xing2014-PS}. These studies are inspired by HOGWILD! \cite{hogwild-nips11}, which shows how to avoid expensive memory locking in memory-sharing systems for some optimization problems with sparse updates. These methods partition the rating matrix into blocks with no overlapping rows or columns, and work on these blocks in parallel. They also use asynchronous communication, overlapping of communication and computation, and shared memory to achieve further speedup. LibMF \cite{libmf-13} is a very efficient SGD-based library for matrix factorization on multi-cores. It has outperformed nearly all other approaches on a 12-core machine. However, our experimental results show that libMF stops scaling beyond 16 cores, similar to the observation of \cite{dcMF15}. Moreover, libMF is a single-machine implementation, which limits its capability to solve large-scale problems. NOMAD \cite{nomad14} extends the idea of block partitioning, adding the capability to release a portion of a block to another thread before its full completion. It performs similarly to libMF on a single machine, and can scale out to a 64-node HPC cluster. \textbf{Parameter Server with SGD.} More recently, the idea of a ``parameter server'' \cite{Smola2014-PS, Xing2014-PS} has emerged for extremely large-scale machine learning problems.
In this paradigm, the \textit{server nodes} store parameters, while the \textit{worker nodes} store training data and compute on them. The parameter-server framework manages asynchronous communication between nodes, flexible consistency models, elastic scalability, and fault tolerance. Following this idea, Petuum \cite{Xing2014-PS} runs the Netflix data on a 512-core cluster using SGD. Factorbird \cite{factorbird14} is a parameter server specifically implemented for matrix factorization, also based on SGD. \textbf{Parallel CGD.} CCD++ \cite{ccd++-icdm12} performs sequential updates on one row of the decomposed matrix while fixing other variables. Compared with ALS, CCD++ has lower time complexity but makes less progress per iteration. In practice, CCD++ behaves well in the early stage of optimization, but then becomes slower than libMF. \textbf{Parallel ALS.} As discussed in Section~\ref{sec:als-challenges}, PALS \cite{netflix08} and SparkALS \cite{sparkmllib15} parallelize ALS by feature matrix replication and partial replication, respectively. These approaches do not work when feature matrices get extremely large. Facebook \cite{facebook15} tackles this issue by feeding a feature matrix in parts to a node. For example, when solving $X$, $X$ is partitioned disjointly across nodes; $\Theta$ is also partitioned and rotated across the same set of nodes. When a $\Theta$ partition $\Theta^{(j)}$ meets an $X$ partition $X^{(i)}$, $X^{(i)}$ is updated by observing $\Theta^{(j)}$; $X^{(i)}$ completes an iteration of updates after it has met all $\Theta^{(j)}$s. This is somewhat similar to SU-ALS, except that SU-ALS does not use rotation, as GPUs do not have sufficient memory for it. GraphLab \cite{graphlab12} implements ALS in such a way that, when $\Theta$ is big, it is distributed among multiple machines. When updating an $\textbf{x}_u$ on a node, all needed $\theta_v$s are fetched on-the-fly from all nodes.
This involves a lot of cross-node traffic and puts a high requirement on network bandwidth. \subsection{GPU approaches} \cite{gpu-rbm} employs a GPU-based restricted Boltzmann machine for collaborative filtering, and reports its performance relative to a CPU implementation on the Netflix data. \cite{Zastrau:2012:SGD} implements both SGD and ALS on GPU to solve MF. It uses a mini-batch-based, sequential version of SGD, and a variant of ALS that adjusts (rather than re-calculates) the inverse of the Hermitian matrices in each iteration. These works neither optimize the memory access to fully utilize the GPU's compute power, nor scale to multiple GPUs to handle large-scale problems. Compared with CPU-based approaches, cuMF achieves better performance with a fraction of the hardware resources. Compared with GPU-based approaches, our optimization in memory access and parallelism yields higher performance and scalability. \section{Scale-up ALS on multiple GPUs} \label{sec:mgpu} Section~\ref{sec:singlegpu} addresses \textbf{Challenge 1} regarding memory optimization on a single GPU. As the problem size gets bigger, we need to address \textbf{Challenge 2}: ``On multiple GPUs, how to scale and minimize communication overhead.'' This section presents a scale-up algorithm called \textbf{SU-ALS}, which adds \textbf{data parallelism} and \textbf{parallel reduction} on top of MO-ALS. \subsection{The SU-ALS algorithm} In distributed machine learning, \textbf{model parallelism} and \textbf{data parallelism} are two common schemes~\cite{jeffdean-dnn-nips12}. Model parallelism partitions the \textbf{parameters} among multiple learners, with each learning a subset of the parameters. Data parallelism partitions the training \textbf{data} among multiple learners, with each learning all parameters from its partial observation. These two schemes can be combined when both model parameters and training data are large.
ALS is inherently suitable for model parallelism, as the updates of each $\textbf{x}_u$ and $\theta_v$ are independent. As discussed in Section~\ref{sec:als-challenges}, both PALS and SparkALS employ only model parallelism, without considering data parallelism. To solve $X$ in parallel, PALS and SparkALS partition $X$ among multiple nodes. PALS broadcasts the whole $\Theta^T$, while SparkALS transfers a subset of it to each $X$ partition. As pointed out by \cite{ccd++-icdm12}, both approaches are inefficient and may cause out-of-memory failures when $\Theta^T$ is big and ratings are skewed. To tackle large-scale problems, on top of the existing model parallelism, we design a data-parallel approach. A limitation of model parallelism is that it requires all $A_u$s in one partition $X^{(j)}$ ($1\leq j\leq q$) to be computed on the same GPU. Consequently, a subset of $\Theta^T$ has to be transferred to that GPU. In contrast, our data-parallel approach distributes the computation of any single Hermitian matrix $A_u$ to multiple GPUs. Instead of transferring all $\theta_v$s to one GPU, it calculates a local $A_u$ on each GPU with only the local $\theta_v$s, and reduces (i.e., aggregates) the local $A_u$s later. Assuming there are $p$ GPUs to parallelize on, we rewrite eq. \eqref{eq:update-x} in its data-parallel form as: \begin{align} A_u=\sum\limits_{r_{uv}\neq0} (\theta_v \theta_v^T+\lambda I) = \sum\limits_{i=1}^p\sum\limits_{r_{uv}\neq0}^{GPU_i} (\theta_v \theta_v^T+\lambda I) \end{align} This approach is described in Algorithm~\ref{alg:su-als} and illustrated in Figure~\ref{fig:su-als}. \begin{figure} \center{\includegraphics[width=\columnwidth] {figs/su-als.png}} \caption{SU-ALS. $\Theta^T$ is partitioned evenly and vertically, and stored on $p$ GPUs. $X$ is partitioned evenly and horizontally, and solved in batches, achieving model parallelism.
Each $X$ batch is solved in parallel on $p$ GPUs, each with $\Theta^T$'s partition on it, achieving data parallelism.} \label{fig:su-als} \end{figure} \begin{algorithm}[h!] \caption{SU-ALS: Scale-Up ALS; update $X$ on multiple GPUs. } \label{alg:su-als} \begin{algorithmic}[1] \State Given $p$ GPUs: GPU$_1$, GPU$_2$, ..., GPU$_p$. \State $\{\Theta^{T(1)}, \Theta^{T(2)},...,\Theta^{T(p)}\} \gets VerticalPartition(\Theta^T, p)$ \State $\{X^{(1)}, X^{(2)},...,X^{(q)}\} \gets HorizontalPartition(X, q)$ \State $\{R^{(11)}, R^{(12)},...,R^{(pq)}\} \gets GridPartition(R, p, q)$ \ParFor{$i \gets 1, p$} \Comment{parallel copy to each GPU$_i$} \State copy GPU$_i \gets \Theta^{T(i)}$ \EndParFor \For{$j \gets 1, q$} \Comment{model parallel} \ParFor{$i \gets 1, p$} \Comment{data parallel on GPU$_i$} \State copy GPU$_i \gets R^{(ij)}$ \State $(A^{(ij)}, B^{(ij)})\gets $ \Call{Get\_Hermitian\_X\_MO}{$R^{(ij)}, \Theta^{T(i)}$} \State \Call{synchronize\_threads}{$ $} \State $\{A^{(ij)}_1, A^{(ij)}_2, ..., A^{(ij)}_p\} \gets A^{(ij)}$ \State $\{B^{(ij)}_1, B^{(ij)}_2, ..., B^{(ij)}_p\} \gets B^{(ij)}$ \State $A^{(j)}_i \gets \sum\limits_{k=1}^p A^{(kj)}_i$ \State $B^{(j)}_i \gets \sum\limits_{k=1}^p B^{(kj)}_i$ \State $X^{(j)}_i \gets$ \Call{Batch\_Solve}{$A^{(j)}_i, B^{(j)}_i$} \EndParFor \State $X^{(j)} \gets \{X^{(j)}_1,X^{(j)}_2,...,X^{(j)}_p\}$ \EndFor \end{algorithmic} \end{algorithm} \textit{Lines 2-4}: partition the input data. $\Theta^T$ is evenly split by columns into $p$ partitions, $X$ is evenly split by rows into $q$ partitions, and $R$ is split by rows and columns following the partition schemes of $X$ and $\Theta^T$. \textit{Lines 5-7}: copy $\Theta^{T(i)}$ to GPU$_i$ ($1\leq i\leq p$), in parallel. \textit{Lines 8-20}: loop over $\{X^{(1)}, X^{(2)},...,X^{(q)}\}$ and solve each partition $X^{(j)}$ in sequence ($1\leq j\leq q$). Given more GPUs, this sequential loop can be further parallelized.
\textit{Lines 9-18}: parallel loop over $\{\Theta^{T(i)}\}$ ($1\leq i\leq p$) to solve $X^{(j)}$. Without a sufficient number of GPUs, this \textbf{parallel for} loop can degrade to a \textbf{sequential} one. \textit{Line 11}: on GPU$_i$ ($1\leq i\leq p$), for each row $\textbf{x}_u$ in $X^{(j)}$, calculate the $A_u$ local to GPU$_i$ by only observing $\Theta^{T(i)}$ and $R^{(ij)}$: \begin{align} A_u^{i}=\sum\limits_{r_{uv}\neq0}^{GPU_i}(\theta_v \theta_v^T+\lambda I) \end{align} Similarly, calculate the local $B_u$ matrix: \begin{align} B_u^{i}=\Theta^{T(i)}\cdot (R^{(ij)}_{u*})^T \end{align} The collections of all $A_u^{i}$s and $B_u^{i}$s on GPU$_i$ are denoted $(A^{(ij)}, B^{(ij)})$. \textit{Line 12}: a synchronization barrier to wait for all parfor threads to reach this step. \textit{Lines 13-14}: evenly partition $A^{(ij)}$ and $B^{(ij)}$ by the rows of $X^{(j)}$. That is, $A^{(ij)}$ on GPU$_i$ is evenly divided into $p$ portions: \begin{center} $A^{(ij)}_1$, $A^{(ij)}_2$, ..., $A^{(ij)}_p$ \end{center} $B^{(ij)}$ is partitioned in the same manner into: \begin{center} $B^{(ij)}_1$, $B^{(ij)}_2$, ..., $B^{(ij)}_p$ \end{center} \textit{Lines 15-16}: \textbf{parallel reduce} the $p$ $A^{(ij)}$s and $B^{(ij)}$s into the global $A^{(j)}$ and $B^{(j)}$, on $p$ GPUs. GPU$_i$ takes care of the reduction of partition $i$ of all $A^{(kj)}$s ($1\leq k\leq p$). See Figure~\ref{fig:par-reduce} (a) for an example where $j=1$ and $p=4$: GPU$_1$ reduces $\{A^{(11)}_1, A^{(21)}_1, A^{(31)}_1, A^{(41)}_1\}$, GPU$_2$ reduces $\{A^{(11)}_2, A^{(21)}_2, A^{(31)}_2, A^{(41)}_2\}$, and so on. The $B^{(ij)}$s are reduced in the same manner. \textit{Line 17}: solve the $p$ partitions concurrently on $p$ GPUs. GPU${_i}$ solves the local partition ($A^{(j)}_i,B^{(j)}_i$) it reduced in \textit{Lines 15-16}. \textit{Line 19}: obtain $X^{(j)}$ by collecting the $p$ partitions $\{X^{(j)}_1,X^{(j)}_2,...,X^{(j)}_p\}$ from the $p$ GPUs.
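The data-parallel computation of a single $A_u$ can be sketched in NumPy, with the $p$ GPUs simulated as vertical slices of $\Theta^T$. The function name and the slice bookkeeping are illustrative assumptions, not cuMF's CUDA code:

```python
import numpy as np

def data_parallel_A_u(r_row, theta_T_parts, lam):
    """Compute A_u as a sum of per-'GPU' local partial Hermitians.

    r_row         : the u-th row of R (dense here, for simplicity)
    theta_T_parts : list of f x n_i vertical slices of Theta^T, one per 'GPU'
    """
    f = theta_T_parts[0].shape[0]
    n_u = np.count_nonzero(r_row)          # number of ratings in row u
    A_u = np.zeros((f, f))
    off = 0
    for part in theta_T_parts:             # each iteration = one GPU's local work
        local = r_row[off:off + part.shape[1]]
        cols = np.nonzero(local)[0]        # local theta_v's with r_uv != 0
        Tu = part[:, cols]
        A_u += Tu @ Tu.T                   # local partial A_u; '+=' plays reduce
        off += part.shape[1]
    return A_u + lam * n_u * np.eye(f)     # add the regularisation term once
```

The result matches the single-GPU computation of $A_u$ over the full $\Theta^T$, which is exactly what the reduction step relies on.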
\subsection{Topology-aware parallel reduction to speed up SU-ALS} \begin{figure}[tb] \subfloat[One-phase parallel reduction.]{% \includegraphics[clip,width=0.9\columnwidth]{figs/par-reduce1.png} } \subfloat[Two-phase parallel reduction considering PCIe hierarchy: phase-1 (intra-socket) in dashed lines; phase-2 (inter-socket) in solid lines.]{ \includegraphics[clip,width=0.87\columnwidth]{figs/par-reduce2.png} } \caption{Parallel reduction of $A^{(ij)}$ in SU-ALS when $j=1$ and $p=4$. For $1\leq i\leq p$, on GPU$_i$, $A^{(ij)}$ is evenly partitioned into $p$ pieces: $A^{(ij)}_1, A^{(ij)}_2, ..., A^{(ij)}_p$. Afterward, GPU$_i$ reduces all $A^{(kj)}_i$ across the $p$ GPUs ($1\leq k\leq p$). This not only parallelizes get\_hermitian\_x and batch\_solve, but also leverages the cross-GPU bandwidth efficiently.} \label{fig:par-reduce} \end{figure} \noindent\textbf{Parallel reduction.} In Lines 13-17 of Algorithm~\ref{alg:su-als}, ($A^{(j)}, B^{(j)}$) could have been reduced on one GPU (say, GPU$_1$) and $X^{(j)}$ solved there. However, this simple approach parallelizes neither the data transfer nor the computation. Moreover, multiple GPUs on a machine are usually connected through a PCIe bus. PCIe channels are full-duplex, meaning that data transfers in both directions can happen simultaneously without affecting each other. To leverage the bandwidth in both directions, we develop a parallel reduction scheme that evenly utilizes both the incoming and outgoing channels of all GPUs, as shown in Figure~\ref{fig:par-reduce} (a). Experiments on the Hugewiki data set show that this optimization is 1.7x faster than the reduce-on-one-GPU approach. After this parallel reduction, \textsl{batch\_solve} begins on the $p$ GPUs in parallel. \noindent\textbf{Topology-aware parallel reduction.} Figure~\ref{fig:par-reduce} (a) assumes a flat interconnection where all GPUs directly connect to one PCIe root. This assumption does not always hold.
For example, in a two-socket machine with four GPUs, a typical configuration is that every two GPUs connect to one socket. Communication between the two GPUs on the same socket goes through the local PCIe bus, while communication between GPUs on different sockets goes through the inter-socket connection. In this case, intra-socket transfers enjoy zero-copy and the faster duplex PCIe channel, compared with inter-socket transfers. In such a topology, the scheme shown in Figure~\ref{fig:par-reduce} (a) is not optimal. Based on the GPU connection topology, we design a two-phase parallel reduction scheme, shown in Figure~\ref{fig:par-reduce} (b). In this scheme, each partition is first reduced intra-socket (the dashed lines). Afterward, the partial, intra-socket reduction results are moved across sockets to generate the final reduction result (the solid lines). Experiments show that this two-phase scheme enjoys an additional 1.5x speedup compared with the one-phase scheme shown in Figure~\ref{fig:par-reduce} (a). \subsection{How to partition?} Assume a single GPU's memory capacity is $C$. According to Algorithm~\ref{alg:su-als}, one GPU needs to hold $X^{(j)}$, $\Theta^{(i)}$, $R^{(ij)}$, $A^{(j)}$, and $B^{(j)}$. Therefore, the choices of $p$ and $q$ are subject to \eqref{eq:partition}: \begin{align} \label{eq:partition} \frac{m\times f}{q} + \frac{n\times f}{p} + |R^{(ij)}| + \frac{m}{q}\times f^2 + \frac{m}{q}\times f + \epsilon < C \end{align} where $\epsilon$ is headroom for miscellaneous small variables. In practice, when $C=12$ GB we choose $\epsilon=500$ MB. Here are some best practices in choosing $p$ and $q$: \begin{enumerate} \item If $p=1$ satisfies \eqref{eq:partition}, $X$ can be solved on a single GPU in sequential batches. In this case SU-ALS is equivalent to MO-ALS. \item Once \eqref{eq:partition} is satisfied while increasing $q$ with $p=1$, $q$ should not be increased any further, since there is no need to partition $X$ more finely.
\item We usually start from a $p$ such that $\displaystyle{\frac{n\times f}{p}} \approx \displaystyle{\frac{C}{2}}$, and then choose the smallest $q$ that satisfies \eqref{eq:partition}. \end{enumerate} \subsection{Implementation of cuMF} This section describes selected implementation details of cuMF. CuMF is implemented in C, using CUDA 7.0 and GCC OpenMP v3.0. It has circa 6,000 lines of code. \textbf{Out-of-core computation.} As seen in Figure~\ref{fig:scale} and Table~\ref{tbl:dataset}, the rating and feature matrices can both have 100 billion entries, far beyond the host and device memory limits. For such out-of-core problems, cuMF first generates a partition scheme, planning which partition to send to which GPU in what order. With this knowledge in advance, cuMF uses separate CPU threads to preload data from disk to host memory, and separate CUDA streams to preload from host memory to GPU memory. Through this proactive and asynchronous data loading, cuMF handles out-of-core problems with close-to-zero data loading time, except for the first load. \textbf{Elasticity to resources.} Algorithm~\ref{alg:su-als} is generic enough to cover deployment scenarios where the number of GPUs is fewer or greater than $p$ or $q$. With more GPUs, the sequential \textbf{for} at \textit{Line 8} can be parallelized; with fewer GPUs, the \textbf{parfor} at \textit{Line 9} can be turned into a sequential for. This is similar to how MapReduce deals with resource elasticity: when there are fewer/more parallel tasks than task slots, tasks are executed in fewer/more waves. By this design, cuMF is able to solve ALS problems of any size. \textbf{Fault tolerance.} Handling machine failure is straightforward in cuMF, which runs on a single machine. During ALS execution, we asynchronously checkpoint the $X$ and $\Theta$ generated by the latest iteration into a connected parallel file system. When the machine fails, the latest $X$ or $\Theta$ (whichever is more recent) is used to restart ALS.
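The partitioning heuristic of eq.~\eqref{eq:partition} can be sketched as follows. This is an illustration under our own assumptions (all sizes counted in single-precision floats, $|R^{(ij)}|$ approximated by an even CSR split across the $p\times q$ grid), not cuMF's actual planner:

```python
import math

def choose_partitions(m, n, nnz, f, capacity, headroom):
    """Pick (p, q) satisfying the memory constraint of eq. (12).

    Follows the best practices above: start from p with n*f/p ~ capacity/2,
    then take the smallest q that fits. 'capacity' is one GPU's memory,
    counted in single-precision floats; 'headroom' plays the role of eps.
    """
    def fits(p, q):
        r_part = (2 * nnz + m + 1) / (p * q)    # CSR slice of R (approximate)
        need = (m * f) / q + (n * f) / p + r_part \
             + (m / q) * f * f + (m / q) * f + headroom
        return need < capacity
    p = max(1, math.ceil(2 * n * f / capacity))
    while True:
        for q in range(1, m + 1):
            if fits(p, q):
                return p, q
        p += 1                                   # Theta^T slice still too big
```

For the Netflix-sized example above ($m=480$K, $f=100$, 12 GB devices), this heuristic partitions $X$ into two batches on a single GPU.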
\section{INTRODUCTION}\label{sec_introduction} \gls{DRL} has achieved impressive experimental results in video game playing, where DRL agents are deployed under a trial-error-replay model. However, safety-critical applications, such as autonomous vehicles, power systems, and humanoid robots, normally cannot trial and replay at will in the real world. \yi{ A recent trend in autonomous navigation, including ground and underwater vehicles, is to use end-to-end controllers trained by reinforcement learning methods \cite{kiran2021deep,chen2021interpretable,levine2016end,ye2021survey}. For these applications that require a high level of safety integrity, deploying unverified DRL policies can lead to catastrophic consequences. } In the meantime, the \gls{DNN}s in DRL algorithms are known to be non-robust to adversarial examples, i.e., the output of a DNN may change dramatically under a minor input disturbance. Such issues have motivated this work to formally analyse end-to-end DRL systems and assure their reliability before they are deployed. DRL verification and testing have emerged in recent years, including safe exploration, run-time monitoring, adversarial training, etc. Pattanaik \textit{et al.} first proposed adversarial attacks for DRL algorithms, training the DRL agent with engineered adversarial attacks to improve its performance and obtain more robust DRL models \cite{pattanaik2017robust}. However, these adversarial training methods cannot guarantee safety during the training process. To this end, Dalal \textit{et al.} designed a safety layer that solves an action correction formulation per state \cite{dalal2018safe}. Berkenkamp \textit{et al.} used the Lyapunov function to calculate the region of attraction for a specific policy, and applied statistical models to obtain high-performance DRL policies \cite{berkenkamp2019safe}.
These algorithms focus on the safety property during the training process, but the testing environment may differ from the training environment. Therefore, run-time monitors were proposed to assure safety during operation. The shield structure is created to prevent the agent from making unsafe decisions \cite{alshiekh2018safe}---it bans all unsafe actions in each state to achieve safe reinforcement learning. \yi{The aforementioned methods answer the \textit{binary/worst-case} question of whether there exists any safety violation in the presence of extreme perturbations/attacks. They do not, however, provide an \textit{overall} understanding of how safe the DRL policy is whenever a violation can be found locally (in line with the insight gained from evaluating Deep Learning (DL) classifiers \cite{webb2018statistical,wangUAI21oxford}). In this regard, we introduce a reliability metric based on the \textit{probabilistic} notion of the proportion of safety violations in the global input space representing the environment, formulated by the \gls{OP}. Furthermore, most DRL algorithms are explored and learned in simulation environments for, e.g., efficiency and cost considerations. However, the gap between simulated and real environments may lead to violations of the safety property established in simulation. The observation in the simulation environment is assumed to be an accurate and unbiased signal, which is an unacceptably strong assumption in a safety-critical context.
To this end, we propose a novel two-level verification framework to assess the reliability of DRL algorithms: i) at the local level of an initial state, reachability analysis tools are utilised to generate safety verification evidence, in the form of interval-based trajectories starting from that initial state; ii) at the global level, we borrow ideas from software reliability engineering to model the distribution of initial states as the \gls{OP} (representing all possible operational scenarios), and then aggregate the local safety evidence according to the OP to statistically estimate the overall reliability. } The main contributions of this paper include: \textit{a)} \yi{A two-level assessment framework is designed to assess the reliability of DRL algorithms. Reachability analysis formally verifies the safety of trajectories starting from an initial state at the local level, while overall reliability claims are supported statistically at the global level across initial states. } \textit{b)} \yi{At the local level, the state-of-the-art reachability verification tool POLAR \cite{huang2021polar} is integrated and optimised for faster computation and tighter bounded results. Meanwhile, the OP is introduced and approximated to support global-level reliability claims. } \textit{c)} A publicly accessible repository of our method, with all source code, datasets, experiments, and a real-world case study based on BlueRov2 unmanned underwater vehicles (UUVs). \yi{The rest of this paper is organised as follows. The mathematical preliminaries used in this paper are summarised in Section \ref{sec_preliminaries}. A detailed two-level verification algorithm is demonstrated in Section \ref{sec_alg}, including local-level reachability analysis and global-level statistical analysis. Simulation results and corresponding discussion are presented in Section \ref{sec_sim}.
Finally, Section \ref{sec_con} concludes this paper.} \section{PRELIMINARIES}\label{sec_preliminaries} \subsection{Deep Reinforcement Learning} \label{sec:DRLpreliminary} We use a discounted infinite-horizon \gls{MDP} to model the interaction of an agent with the environment $E$. An MDP is a 5-tuple ${\cal M}^E=(\mathcal{S},\mathcal{A}, \mathcal{P}, \mathcal{R}, \gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ is the action space, $\mathcal{P}(\textbf{x}'|\textbf{x},\textbf{a})$ is a probabilistic transition function, $\mathcal{R}(\textbf{x},\textbf{a})\in {\mathbb R}_{\ge 0}$ is a reward function, and $\gamma\in [0,1)$ is a discount factor. We use $\textbf{x}$ to range over the state space $\mathcal{S}$ because it is not only a state but will also later be used as an input to a policy neural network. We consider different DRL algorithms, such as DDPG \cite{lillicrap2015continuous}, TD3 \cite{fujimoto2018addressing}, and PPO \cite{schulman2017proximal}, which return an optimal policy $\pi^*$ that includes a mapping $\mu: \mathcal{S} \rightarrow \mathcal{A}$ from states to actions. Based on $\mathcal{M}^E$, a policy $\pi$ induces a trajectory distribution $\rho^{\pi,E}(\zeta)$, where $ \zeta=(\textbf{x}_0,\textbf{a}_0,\textbf{x}_1,\textbf{a}_1,...) $ denotes a random trajectory. The state-action value function of $\pi$ is defined as \begin{equation} \small Q^{\pi}(\textbf{x},\textbf{a})= \mathbb{E}_{\zeta\sim \rho^{\pi,E}}[\sum_{t=0}^{\infty}\gamma^t\mathcal{R}(\textbf{x}_t,\textbf{a}_t)] \end{equation} and the state value function of $\pi$ is $ V^\pi(\textbf{x})=Q^{\pi}(\textbf{x},\pi(\textbf{x})). $ \iffalse In terms of the target policy $\pi:\mathcal{S}\rightarrow\mathcal{A}$, a value function $V^\pi$ is designed as a description of total discounted reward $G_t$ for each state $s\in\mathcal{S}$: \begin{equation} V^\pi(s) = \mathbb{E}_\pi[G_t|\textbf{x}_t=s] \end{equation} where $t$ is the discrete time step and $G_t$ is the expected return after time step $t$.
With the Bellman equation \cite{bellman1966dynamic}, $V^\pi$ can be represented by a recursive form: \begin{equation} V^\pi(s) = \mathbb{E}_\pi[r_t+\gamma V^\pi(\textbf{x}_{t+1})|\textbf{x}_t=s] \end{equation} The action-value function $Q^\pi$ is formulated as follows: \begin{equation} Q^\pi(\textbf{x},\textbf{a}) = \mathbb{E}_\pi[r_t+\gamma Q^\pi(\textbf{x}_{t+1},\textbf{a}_{t+1})|\textbf{x}_t=s,\textbf{a}_t = a] \end{equation} The \gls{DRL} algorithms are trying to find an optimal action that maximises the action-value function or the value function: $\pi^*(s) = \arg\max_{a\in\mathcal{A}} Q^*(\textbf{x},\textbf{a})$, here $Q^*(\textbf{x},\textbf{a})$ is the value of taking action $\textbf{a}$ in state $s$ under the optimal policy $\pi^*$. \fi We consider a DRL-driven robot that navigates, and avoids collisions, in a complex environment containing static and dynamic objects (or obstacles). The interaction of the robot with the environment can be modelled as an \gls{MDP}. At each time $t$, the robot receives a sensor observation from the environment, namely the state $\textbf{x}_t$, i.e., $ \textbf{x}_t = (o^1_t,o^2_t,\cdots,o^n_t)^T $ where $o^1_t,o^2_t,\cdots,o^n_t$ are the observable sensor signals at time $t$. It is worth noting that the robot's own parameters are also formatted as observations, because these variables are relative to the environment, such as the pose in the global coordinate system and the velocity relative to the stationary ground. The sensors can only observe partial information about the environment, e.g., by scanning the environment within a certain distance. For example, the observation range of a distance sensor on the Turtlebot Waffle Pi \cite{name} is within 3.15 metres. An action $\textbf{a}_t\in\mathcal{A}$ consists of several decision variables. With the \gls{PID} controller on the robot, we consider two action variables, representing linear velocity and angular velocity, respectively, i.e., $ \textbf{a}_t = (v^{line}_t, v^{angle}_t)^T.
$ At each time $t$, the DRL policy outputs an action $\textbf{a}_t$ from the action set $\mathcal{A}$. A fundamental functionality of an autonomous robot is to avoid the obstacles and reach a goal area. On every state $\textbf{x}_t$, the sensory input $o^i_t$ can be used to, e.g., predict the distance to the obstacles when they are within the sensing range. To implement the functionality, the environment may impose a reward function $\mathcal{R}$ on the states, the actions, or both. A reward on the states can be, e.g., with respect to the distance to obstacles, and a reward on the actions can be, e.g., with respect to the acceleration in linear or angular speeds. \subsection{Reachability Analysis and Verification} Reachability analysis has been developed recently for the verification of safety and robustness of DNNs \cite{ruan2018reachability,huang2019reachnn,huang2020survey,althoff2010reachability}. Adapted to the context described in Section~\ref{sec:DRLpreliminary}, reachability analysis determines the set of states that a system can reach, starting from a set of initial states and considering the interaction between the DRL policy and the MDP. Safety verification, which is to determine whether a given DRL policy may lead to any unsafe state over an MDP, can be reduced to the reachability problem of whether an unsafe state is reachable. In this paper, we verify a safety property on a model-free DRL algorithm by computing its reachable set over a full trajectory. Similar to POLAR \cite{huang2021polar}, the following two mathematical tools are employed to calculate the reachable set: Taylor arithmetic and Bernstein polynomials. \subsubsection{Taylor Arithmetic} Following \cite{makino2003taylor,huang2021polar,ivanov2021verisig}, any interval can be represented as a Taylor model.
A Taylor model combines a polynomial approximation $p$ with an interval error bound $I$: \begin{equation} \small TM = p(\textbf{x}) + I, \textbf{x}\in D \end{equation} where $D$ is the input domain of the Taylor model and $I$ is its remainder. Given two Taylor models $TM_1 = (p_1,I_1)$ and $TM_2 = (p_2,I_2)$, addition and multiplication are computed as: \begin{align*} \small TM_1 + TM_2 = & (p_1+p_2,I_1+I_2)\\ TM_1 \times TM_2 = &(p_1\times p_2 - r_k,I_1\times I_2+Int(p_1)\times I_2\\&+Int(p_2)\times I_1+Int(r_k)) \end{align*} where $Int(\cdot)$ denotes an interval enclosure of a polynomial over $D$, $k$ is the maximum order of the Taylor models, and $r_k$ collects the truncated terms of order higher than $k$. \subsubsection{Bernstein Polynomial} There are several different activation functions in \gls{DNN}s, e.g., Sigmoid, ReLU, and Tanh. Taylor models can only be propagated through smooth activation functions, but DNNs may use piecewise activation functions such as ReLU. Inspired by \cite{huang2021polar}, the Bernstein polynomial can be applied to compute Taylor models for such non-smooth activation functions. To achieve a sound over-approximation, the Bernstein polynomial and a conservative remainder are computed: \begin{equation} \small \label{Bern} p_\sigma(y) = \sum_{i=0}^k\left(\sigma\left(a+\frac{b-a}{k}i\right)C^k_i\frac{(y-a)^i(b-y)^{k-i}}{(b-a)^k}\right) \end{equation} where $a, b$ are the infimum and supremum of the activation function input, $k$ is the maximum order of the Bernstein polynomial, and $y$ ranges over $[a,b]$.
\begin{align} \small \label{BernErr} \epsilon = \max_{i=0,\cdots,m}&\left(\left|p_\sigma\left(\frac{b-a}{m}(i+\frac{1}{2})+a\right)\right.\right.\nonumber\\&\left.\left.-\sigma\left(\frac{b-a}{m}(i+\frac{1}{2})+a\right)\right|+\frac{b-a}{m}\right) \end{align} where $m$ is the number of sampling steps and $\epsilon$ defines the conservative remainder interval of the Bernstein polynomial, $I_\sigma = [-\epsilon,+\epsilon]$. \subsection{Operational Profile based Reliability Assessment} \yi{In real-world scenarios, the agent can take different trajectories from different initial states under a given policy and operational environment. To support reliability claims for DRL in safety-critical applications, all possible trajectories need to be considered. In real applications, the initial state (and hence its trajectory) usually obeys some distribution that can be approximated from data. } The OP has been widely used in reliability assessment to represent the occurrence probabilities of function calls and the distributions of parameter values \cite{musa_operational_1993, koziolek2005operational}. In other words, the OP is a probability density function over the whole input domain $D$ that returns the probability of $\textbf{x}\in D$ being selected as the next random input. Later, we formally define the reliability metric of a DRL policy in a given environment, considering the OP. \iffalse \section{PROBLEM FORMULATION}\label{sec_veri} We formulate the studied problem in this section. \subsection{Kripke Structure} The application of an DRL policy on a robot running in the environment can be regarded as two interacting agents: an environment agent $Env$ and a DRL agent $Ag$.
% First of all, we assume an underlying dynamic model for the environment agent $Env$ as follows: \begin{equation} \textbf{e}_k = \textbf{F}_k (\textbf{e}_{k-1},\textbf{a}_k) + \textbf{w}_k \end{equation} where $\textbf{e}_k$ is the environment state at time $k$, $ \textbf{a}_k$ is the action taken by the DRL agent at time $k$, $\textbf{F}_k$ is the state transition model, and $\textbf{w}_k$ is the process and control combined noise which is often assumed to be drawn from a zero mean multivariate normal distribution with covariance $\textbf{Q}_k$, i.e., $\textbf{w}_k \sim \mathcal{N}(0,\textbf{Q}_k)$. Based on the dynamic model, we have an observation model \begin{equation} \textbf{x}_k = \textbf{H}_k \textbf{e}_{k} + \textbf{v}_k \end{equation} where $\textbf{H}_k$ is the observation model, which maps the state space into the observation space and $\textbf{v}_k$ is the observation noise, which is often assumed to be zero mean Gaussian white noise with covariance $\textbf{R}_k$, i.e., $\textbf{v}_k \sim \mathcal{N}(0,\textbf{R}_k)$. A DRL policy is a function $f(\textbf{x}_k)$ that maps observations into discrete actions\footnote{Some DRL policies do not return discrete actions directly. Standard discretisation can be applied to discretise the output of DRL policies into discrete actions. }. Moreover, some DRL algorithms such as DDPG also train a critic network $g$ that outputs the predictive expected reward $g(\textbf{x}_k,\textbf{a}_k)$. In the following, we construct a Kripke structure $M=(S,s_0,T,L)$ from the above dynamic model, observation model, and the DRL policy. \paragraph{State Space} Let $S$ be the set of states such that every state $s$ is a $p$-norm neighborhood of some observation $\textbf{x}$. Let a $p$-norm neighborhood of $\textbf{x}$ be \begin{equation} nh(\textbf{x},p,d)= \{ \textbf{x}' ~|~ ||\textbf{x} - \textbf{x}'||_p \leq d\}, \end{equation} where we use $||\cdot||_p$ to express the $L_p$ norm of a vector. 
The initial state $s_0= nh(\textbf{x}_0,p,d)$ is the neighborhood of an initial observation $\textbf{x}_0$ for given $p$ and $d$. Furthermore, given a set of observations $\textbf{X}$, we write $\eta(\textbf{X})$ for the minimum shape that contains the set $\textbf{X}$ such that $\textbf{X}\subseteq \eta(\textbf{X})$, where $\eta$ can be e.g., \textbf{Box} and \textbf{Zonotope}, as expressed with the above $p$-norm neighborhood. \paragraph{Transition Relation} Now we define the transition relation $T$. Given a state $s$, we define its associated set of actions as \begin{equation} \mathcal{A}(s) = \{f(\textbf{x})~|~\textbf{x} \in s\} \end{equation} Intuitively, $\mathcal{A}(s)$ includes all actions that are enabled by an observation in the state $s$. Given a state $s$ and an action $\textbf{a} \in \mathcal{A}(s)$, we have \begin{equation}\label{equ:transition} \begin{array}{rl} s' = & \eta(\{\textbf{H}_k \textbf{e} + \textbf{v}_k ~|~ \textbf{e}\in E\})\\ \text{s.t. } E = & \{\textbf{F}_k(\textbf{e}(\textbf{x}),\textbf{a})+\textbf{w}_k ~|~\textbf{x}\in s\}\\ \end{array} \end{equation} where $\textbf{e}(\textbf{x})$ returns the corresponding environment state of the observation $\textbf{x}$. We write $s'=R(s,\textbf{a})$ to represent the relation of $s'$, $s$ and $\textbf{a}$ that satisfies Equation (\ref{equ:transition}). Based on this, the transition relation $T$ includes all pairs $(s,s')$ such that $\textbf{a} \in \mathcal{A}(s)$ and $s'=R(s,\textbf{a})$. \paragraph{Labelling Function} \textcolor{red}{This subsection should be revised to a crash labelling function and Section B Properties.} \xiaowei{this subsection and Section B need to rewrite to fit the properties that will be verified. } First of all, on every state $s$, we define a set $Prop$ of atomic propositions. 
For example, we have an atomic proposition \begin{equation}\label{equ:ap1} \textbf{a} \not\in \mathcal{A}(s) \end{equation} that expresses that some action $\textbf{a}$ is not possible on the current state. Also, we may have the atomic proposition \begin{equation}\label{equ:ap2} r\leq c \end{equation} which suggests that the predictive expected reward (by the critic network $g$) will not be greater than $c$. Note that $r=g(\textbf{x},\textbf{a})$ where $\textbf{x}\in s$ and $\textbf{a}\in \{f(\textbf{x})~|~\textbf{x}\in s\}$. In addition, there can be other atomic propositions by considering e.g., the number of possible actions $|\mathcal{A}(s)|$, and the range of predictive expected reward. We remark that, all the propositions in $Prop$ can be evaluated locally on a state $s$, with knowledge of the neural networks $f$ and $g$. \subsection{Properties} There have been existing works aiming to estimate the safety properties of the induced discrete time Markov chain (DTMC) (i.e., the application of the DRL policy over the environment modelled as a Markov decision process), including model-based methods such as~\cite{DBLP:journals/corr/abs-2109-06523} and statistical methods such as~\cite{DBLP:conf/iclr/UesatoKSERADHK19,DBLP:journals/corr/abs-1912-05663}, within which \cite{DBLP:journals/corr/abs-1912-05663} considers statistical properties that are not only for the trained models but also for the training process. There are few works considering the extent to which such safety properties may be subject to external perturbations on the sensor readings. In particular, the following question is often discussed: \begin{quote} Given a trajectory which is known to be safe (i.e., no clash at all), how do we know if it will stay safe when the environment may have noise to compromise the sensor reading? \end{quote} This question is closely related to the Sim2Real challenge. Formally, for every trajectory $h$, we may require certain temporal evolution of atomic propositions. 
Below, we use a temporal logic LTL to express properties. For example, we may be interested in \begin{equation} G ~ p \end{equation} for all $p\in Prop$, which expresses a safety property that the atomic proposition $p$ will be maintained throughout the path. Also, we might be interested in \begin{equation} GF ~ p \end{equation} which express a liveness property that the atomic proposition $p$ will always hold eventually. Besides, we may have \begin{equation} G ~ (p_1 \Rightarrow X^{\leq k} \neg p_2) \end{equation} which says that whenever $p_1$ occurs, $p_2$ will not occur in the next $k$ steps. For example, by writing $G (r\leq 190)$, it is required that the predictive expected reward will remain below 180, by writing $GF (r\leq 100)$, it is required that the predictive expected reward will always drop back to below 100, although there can be some time in the trajectory where it is more than 100, and by writing $G ~ ((100\leq r\leq 120) \Rightarrow X^{\leq k} \neg (r>150))$, it is required that safety risk will not increase drastically in $k$ steps to over 150 when it was between 100 and 120. \paragraph{Why are these properties important?} We note that those papers \cite{DBLP:journals/corr/abs-2109-06523,DBLP:conf/iclr/UesatoKSERADHK19,DBLP:journals/corr/abs-1912-05663} studying DTMC properties only concern the interactions with the environment. Beyond the interactions, the above properties concern in addition the safety performance of the neural networks ($f$ and $g$), because the atomic propositions (e.g., in Equations (\ref{equ:ap1}) and (\ref{equ:ap2})) need to be evaluated by studying the neural networks. The latter is not possible without a verification tool that can output reachability range with provable guarantee, as in e.g., \cite{ruan2018reachability,huang2021polar}. 
We remark that, there is a recent work \cite{DBLP:journals/corr/abs-2108-13264} which proposes a few statistical metrics to estimate how environmental uncertainty may be considered for reliability estimation. The computation of the metrics are purely statistical, without any guarantee. Moreover, the metrics are ad-hoc, and may not be easily generalisable (as we do by adopting temporal logic formulas). \subsection{Reliability Assessment} \xiaowei{I expect this to be part of the formula} As can be seen from the above, the model $M$ has an initial state $s_0$, corresponding to an initial observation $\textbf{x}$. We write $M(\textbf{x})$ for convenient. Then, for a temporal property $\phi$, we write $M\models \phi$ to denote the property holds on $M$. For LTL properties, it requires $\phi$ holds on all paths starting from the initial state. Now, if given a set $\textbf{X}_0$ of initial observations, we can estimate the reliability by computing the percentage of initial states that can satisfy the property $\phi$. \begin{equation} \frac{|\{M(\textbf{x})\models \phi~|~\textbf{x}\in \textbf{X}_0\}|}{|\textbf{X}_0|} \end{equation} \fi \section{ALGORITHM DESIGN}\label{sec_alg} \yi{In this paper, we design a novel two-level framework for assessing the reliability of DRL-controlled \gls{RAS}, based on reachability verification tools and statistical analysis techniques. At the local level, safety verification is reduced to a reachability problem of checking whether an unsafe state is reachable from an initial state, while global-level claims on reliability can be obtained by \gls{OP}-based statistical methods.} \subsection{Local-level Safety Verification}\label{low_level} At the local level, we introduce an interval-based method to calculate the reachable set of a DRL policy. The DDPG algorithm is composed of an actor network and a critic network.
We reduce the verification problem to a reachability problem, which is to calculate the overall reachable set of the actor neural network and the environment, cf. Fig.~\ref{framework}. \begin{figure}[h] \centering \includegraphics[width=0.7\hsize]{Figs/Picture1.png} \caption{Reachability verification framework.} \label{framework} \end{figure} The observation from the environment is normally noisy due to, e.g., inaccurate sensor signals and external disturbances. Such noise can be ignored under safe conditions and in open spaces, but it can cause safety issues in corner cases, as illustrated in subfigures (A-D) of Fig. \ref{RAV}. The UUV is safe in case A because both the real paths and the observable path are safe. In cases B, C, and D, the UUV is unsafe since at least one signal shows that a crash occurs. If the sensor does not perceive the correct distance in a corner case, the decision of the current policy will lead to a crash with high probability. Consequently, these errors should also be considered in safety verification. Following the recommendation from \cite{agarwal2021deep}, interval methods are suitable for handling uncertain sensor noise. Based on the reachability verification tools, we use an interval to bound the real paths around the observable path. Subfigure (E) in Fig. \ref{RAV} shows that all the real paths are bounded in green intervals. \begin{figure}[h] \centering \includegraphics[width=0.85\hsize]{Figs/shiyitu.jpg} \caption{Different corner cases and illustration diagram.} \label{RAV} \end{figure} Specifically, the robot's initial states are over-approximated with a range $\eta$, and all possible true states are theoretically bounded in this green interval. Then we have \begin{equation} \small \textbf{a}_t\in NN_{actor}(\textbf{x}),\ \ \ \ \textbf{x}\in\eta \end{equation} where $NN_{actor}$ is the actor neural network of DDPG.
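To make the interval-based idea concrete, the following minimal Python sketch propagates an input box $\eta$ through a tiny ReLU actor network using plain interval arithmetic. It is only an illustration: the weights and box below are made up, and plain interval arithmetic is a coarser over-approximation than the Taylor-model method used in this paper.

```python
# Minimal sketch (not the paper's Taylor-model method): propagating an
# input box through a tiny ReLU "actor" network with interval arithmetic.
# Weights and the initial box are illustrative, not from the trained DDPG.

def affine_interval(lo, hi, W, b):
    """Interval image of x -> W x + b for x in the box [lo, hi]."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            if w >= 0:
                l += w * xl; h += w * xh
            else:
                l += w * xh; h += w * xl
        out_lo.append(l); out_hi.append(h)
    return out_lo, out_hi

def relu_interval(lo, hi):
    # Mirrors the three ReLU cases: an entirely non-positive interval maps
    # to 0, an entirely non-negative one is unchanged, otherwise clipped.
    return [max(l, 0.0) for l in lo], [max(h, 0.0) for h in hi]

# Two-layer network: 2 observations -> 3 hidden (ReLU) -> 1 action.
W1 = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.2]]; b1 = [0.0, -0.1, 0.05]
W2 = [[1.0, -0.5, 0.25]];                    b2 = [0.1]

lo, hi = [-0.1, 0.1], [0.1, 0.3]  # hypothetical initial state box eta
lo, hi = [-0.1, 0.2], [0.1, 0.3]
lo, hi = relu_interval(*affine_interval(lo, hi, W1, b1))
lo, hi = affine_interval(lo, hi, W2, b2)
print(lo, hi)  # sound lower/upper bounds on the action
```

Every concrete state in the box is guaranteed to yield an action inside the printed bounds, which is exactly the soundness property the verification relies on.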
We first transfer the initial input domain into a Taylor model: \begin{equation} \small TM^0_i = \left(p_i(x_i),I_i\right), \, \, x_i\in\eta_i \end{equation} where $x_i$ is the $i$th state value of the observation. Since there is no activation function on the first layer of the neural network, the Taylor model can be directly propagated to the next layer: \begin{equation} \small tm^j_i = \sum_{k=1}^N\boldsymbol{w}_k\times TM^{j-1}_k + \boldsymbol{b}_i \end{equation} where $tm^j_i$ is the temporary Taylor model that has not yet passed the activation functions, $\boldsymbol{w}, \boldsymbol{b}$ are the weights and biases of $NN_{actor}$, respectively, $j, k$ are the indices of neural network layers and neurons, and $N$ is the total number of neurons. The remaining layers of $NN_{actor}$ contain activation functions, and Taylor arithmetic cannot propagate interval inputs through non-smooth activation functions. Therefore, we employ the Bernstein polynomial to handle the activation functions. Although this transformation yields errors, we can absorb these errors into the remainder of the Taylor models by equations \eqref{Bern} and \eqref{BernErr}: \begin{equation} \small TM_i^j = p_\sigma(\sum_{k=1}^N\boldsymbol{w}_k\times TM^{j-1}_k + \boldsymbol{b}_i) + Int(r_k) + I_\sigma,\ \ \ \ j>0 \end{equation} Note that the maximum order of the Taylor models increases after the activation functions; therefore, we truncate the higher-order part $r_k$ of the output Taylor model to control the complexity of the Taylor models. After propagation through all layers, the output Taylor model is \begin{equation} \small TM^{out}_i = \sum_{k=1}^N\boldsymbol{w}_k\times TM^{out-1}_k + \boldsymbol{b}_i \end{equation} Finally, the output Taylor model can be converted back to an interval by calculating the upper and lower bounds of the Taylor model with respect to the input domain $D$.
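The layer-by-layer propagation above rests on the Taylor-model arithmetic introduced earlier. As a toy illustration of that arithmetic (univariate models only, with a deliberately crude term-wise interval enclosure; POLAR itself handles multivariate models with tighter bounds), it can be sketched as:

```python
# Toy univariate Taylor model TM = (p, I): p is a coefficient list
# (p[i] is the coefficient of x^i) and I = (lo, hi) is the remainder.

def mono_bounds(i, dl, dh):
    """Interval enclosure of x^i over the domain [dl, dh]."""
    cands = [dl ** i, dh ** i]
    if dl < 0.0 < dh:
        cands.append(0.0)  # even powers attain 0 inside the domain
    return min(cands), max(cands)

def poly_bounds(p, dl, dh):
    """Crude term-wise enclosure Int(p) over [dl, dh]."""
    lo = hi = 0.0
    for i, c in enumerate(p):
        ml, mh = mono_bounds(i, dl, dh)
        lo += min(c * ml, c * mh)
        hi += max(c * ml, c * mh)
    return lo, hi

def iadd(a, b):  # interval sum
    return a[0] + b[0], a[1] + b[1]

def imul(a, b):  # interval product
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return min(ps), max(ps)

def tm_add(tm1, tm2):
    """TM1 + TM2 = (p1 + p2, I1 + I2)."""
    (p1, I1), (p2, I2) = tm1, tm2
    n = max(len(p1), len(p2))
    p = [(p1[i] if i < len(p1) else 0.0) +
         (p2[i] if i < len(p2) else 0.0) for i in range(n)]
    return p, iadd(I1, I2)

def tm_mul(tm1, tm2, k, dl, dh):
    """Truncated product: terms above order k move into the remainder."""
    (p1, I1), (p2, I2) = tm1, tm2
    prod = [0.0] * (len(p1) + len(p2) - 1)
    for i, c1 in enumerate(p1):
        for j, c2 in enumerate(p2):
            prod[i + j] += c1 * c2
    p, r_k = prod[:k + 1], [0.0] * (k + 1) + prod[k + 1:]
    I = iadd(imul(I1, I2), imul(poly_bounds(p1, dl, dh), I2))
    I = iadd(I, imul(poly_bounds(p2, dl, dh), I1))
    I = iadd(I, poly_bounds(r_k, dl, dh))  # Int(r_k)
    return p, I

# Identity model x on D = [0, 1], then x + 0.5 and x * x with order k = 1.
x = ([0.0, 1.0], (0.0, 0.0))
lin = tm_add(x, ([0.5], (0.0, 0.0)))
sq = tm_mul(x, x, 1, 0.0, 1.0)  # x^2 is pushed entirely into the remainder
```

With $k=1$, squaring the identity yields the model $0 + [0,1]$: a sound, if loose, enclosure of $x^2$ on $[0,1]$, showing how truncation trades polynomial order for remainder width.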
\paragraph{POLAR Algorithm Optimisation} In this paper, the POLAR algorithm is selected to verify the DRL policy due to its advantage of tighter remainder bounds \cite{huang2021polar}. Different from other interval arithmetic methods, the POLAR algorithm delivers better performance in reachable set calculation, especially for DNNs. However, deeper neural networks cost POLAR more time for Bernstein polynomial sampling. Considering the characteristics of the ReLU activation function, the propagation law of the Taylor models is separated into three cases, depending on the interval range $TM\in[a,b]$. Furthermore, it is time-consuming to calculate the accurate bound of high-order Taylor models. Similar to \cite{huang2021polar,yang2019efficient}, we calculate a conservative bound for the Taylor models with the Minkowski addition \cite{gritzmann1993minkowski}: \begin{equation} \small P \oplus Q = \{\textbf{p} + \textbf{q}|\textbf{p}\in P,\textbf{q}\in Q\} \end{equation} Consequently, to accelerate the verification speed and further enhance the accuracy for ReLU activation functions, we define the following propagation law for the Taylor models: \begin{equation} \footnotesize \label{eq6} TM^{o}=\left\{ \begin{aligned} &0 , &if\ \ \ \ b\leq 0, \\ p_\sigma(TM^{i}) + &Int(r_k) + I_\sigma , &if\ \ a\leq0\ \ \&\ \ b\geq 0, \\ &TM^{i} , &if\ \ \ \ a\geq 0, \end{aligned} \right. \end{equation} where $TM^{i}$ and $TM^{o}$ are the Taylor models before and after the ReLU activation function, respectively. \yi{We enhance the original POLAR algorithm by first splitting the polynomial fit on each neuron into three piecewise functions based on the characteristics of the ReLU function, and then using the Bernstein polynomial to calculate the TMs if and only if the interval of a TM contains $0$.
The proposed approach \eqref{eq6} is much faster than the original POLAR algorithm since the Bernstein polynomial sampling steps are skipped whenever the lower and upper bounds of a TM are both positive or both negative. Moreover, the proposed method \eqref{eq6} obtains the results without additional error (when the interval is entirely non-negative) and even reduces the errors passed from the previous layer to zero (when the interval is entirely non-positive). Therefore, this optimisation step reduces the computation time and increases accuracy.} \begin{table*}[htbp] \centering \caption{Connections between reliability assessment for traditional software, DL classifiers and DRL controllers.} \label{table_compare} \begin{tabular}{|c|c|c|c|} \hline & Traditional safety critical software & DL classifiers & \textbf{DRL controllers} \\ \hline Metric & \begin{tabular}[c]{@{}c@{}}Probability of failure \\ per random demand ($\mathit{pfd}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Probability of misclassification\\ per random image ($pmi$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Probability of crash per}\\ \textbf{random initial state}\end{tabular} \\ \hline OP & \begin{tabular}[c]{@{}c@{}}Distribution of\\ independent demands\end{tabular} & \begin{tabular}[c]{@{}c@{}}Distribution of\\ independent images\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Distribution of}\\ \textbf{independent initialisations}\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Event of \\ interest\end{tabular} & \begin{tabular}[c]{@{}c@{}}Failure that has \\ safety impact\end{tabular} & \begin{tabular}[c]{@{}c@{}}Misclassification that\\ has safety impact\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Failure that leads to} \\ \textbf{ hazards, e.g., crash}\end{tabular} \\ \hline Partitions & \begin{tabular}[c]{@{}c@{}}Classes of input demands (``bins'') \\ based on functionalities\end{tabular} & \begin{tabular}[c]{@{}c@{}}Norm-balls in the input space\\ with
certain radius \\ (e.g., the $r$-separation distance \cite{yang2020closer})\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Norm-balls in the input space with certain}\\ \textbf{radius representing a set of initial states}\\ (e.g., a specified bound of errors)\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}V\&V in \\ each partition\end{tabular} & \begin{tabular}[c]{@{}c@{}}Estimation on conditional\\ $\mathit{pfd}$ of each bin (e.g., by stress\\ testing of a ``bin'')\end{tabular} & \begin{tabular}[c]{@{}c@{}}Local robustness estimation \\ within a given norm-ball\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Reachability verification}\\ (a ``strip'' of bounded trajectories in the input\\ space starting with a norm-ball)\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}}Oracle of \\ each partition\end{tabular} & \begin{tabular}[c]{@{}c@{}}By observation or given\\ by the specification\end{tabular} & \begin{tabular}[c]{@{}c@{}}Label of the central-point\\ (seed) of the norm-ball\end{tabular} & \begin{tabular}[c]{@{}c@{}}\textbf{Emptiness of the intersection of}\\ \textbf{the ``strip'' and predefined unsafe-areas}\end{tabular} \\ \hline \end{tabular} \end{table*} \subsection{Global-level Reliability Assessment}\label{high_level} The execution of a DRL-driven RAS in an environment leads to a trajectory distribution (modelled as a Discrete-Time Markov Chain, discussed later), where the uncertainty (modelled with probability distribution) is from the environment\footnote{For simplicity and without loss of generality, we assume the DRL policy is deterministic, while our method can be adapted for probabilistic policies.}. Formally, given an environment $E$, a policy $\pi$, and an initial state $\textbf{x}_0$, we can construct a model ${\cal M}^E(\pi,\textbf{x}_0)$ representing the probability distribution of a set of trajectories. Assume that we have a verification technique $g$, as discussed in Section~\ref{low_level}.
\begin{definition} The verification problem is, given a constructed model ${\cal M}^E(\pi,\textbf{x}_0)$ and a verification tool $g$, to determine whether the model is safe with respect to a certain property $\phi$, written as ${\cal M}^E(\pi,\textbf{x}_0)\models^{g} \phi$. We may omit the superscript $g$ and write ${\cal M}^E(\pi,\textbf{x}_0)\models \phi$ if it is clear from the context. We can also assume that $g$ returns a probability value -- a Boolean answer can be converted into a Dirac probability. Then, the verification problem is to compute $Pr({\cal M}^E(\pi,\textbf{x}_0), \phi)$, i.e., the probability of safety. \end{definition} In the following, we discuss how the above verification problem may contribute to the computation of the reliability. Similar to \cite{dong_reliability_2022,zhao_assessing_2021}, we partition the space of initial states into $m$ sets, each of which is represented as a constraint $\mathcal{C}_i$, for $i=1,\ldots,m$. Then, we can define the empirical distribution $p_{op}$ over the partitions and find a model $G_{\theta}$ that is as close as possible to $p_{op}$. Formally, assuming that $G_{\theta}$ is a generative model with parameters $\theta$, we have \begin{equation} \small \theta^* = \argmin_{\theta} \text{KL}(G_{\theta}, p_{op}) \end{equation} where $\text{KL}(\cdot,\cdot)$ is the KL divergence between two distributions.
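As a small sketch under stated assumptions (the partitions, samples, and per-partition safety probabilities below are hypothetical placeholders; the real values come from the sampled OP and the reachability tool), note that for a categorical $G_\theta$ over discrete partitions the KL-minimising fit is simply the empirical frequency, which then combines with per-partition verification results:

```python
# Illustrative sketch: fit the OP over partitions, then weight the
# per-partition failure rates. All numbers below are made up.
from collections import Counter

def fit_op(samples, partition_of):
    """Empirical categorical fit over partitions; for a categorical
    model G_theta this is exactly argmin_theta KL(G_theta, p_op)."""
    counts = Counter(partition_of(x) for x in samples)
    n = len(samples)
    return {c: k / n for c, k in counts.items()}

def failure_prob(op, pr_safe):
    """Sum_i G_theta(C_i) * (1 - Pr(M(pi, x_Ci) |= phi)); the paper
    defines 'reliability' as this probability of failure."""
    return sum(w * (1.0 - pr_safe.get(c, 0.0)) for c, w in op.items())

def partition_of(x):
    # Toy 1-D initial-state space [0, 1) split into three partitions.
    return min(int(x * 3), 2)

samples = [0.05, 0.1, 0.2, 0.4, 0.5, 0.55, 0.6, 0.7, 0.9, 0.95]
pr_safe = {0: 1.0, 1: 0.9, 2: 0.5}  # hypothetical verification results

op = fit_op(samples, partition_of)
print(op, failure_prob(op, pr_safe))
```

The same two steps apply unchanged when the partitions are norm-balls in the multivariate initial-state space and `pr_safe` comes from reachability verification of each representative point.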
Based on these, we can estimate the reliability (defined as the probability of failure in satisfying $\phi$ with the policy $\pi$ in the environment $E$) as \begin{equation}\label{pdfeq} \small \text{Reliability}(E,\pi,\phi) = \sum_{i=1}^m G_{\theta}(\mathcal{C}_i)(1-Pr({\cal M}^E(\pi,\textbf{x}_{\mathcal{C}_i}), \phi)) \end{equation} where $G_{\theta}(\mathcal{C}_i)$ returns the probability density of the partition $i$ that is represented as the constraint $\mathcal{C}_i$, $\textbf{x}_{\mathcal{C}_i}$ denotes the central point (i.e., a representative) of $\mathcal{C}_i$, and $1-Pr({\cal M}^E(\pi,\textbf{x}_{\mathcal{C}_i}), \phi)$ returns the failure rate of the DRL agent $\pi$ working on inputs satisfying the constraint $\mathcal{C}_i$ under the environment $E$. Note that $G_{\theta}$ can be estimated in the same way as the data distribution in \cite{dong_reliability_2022,zhao_assessing_2021}. \yi{To better position the scope of our work, we summarise our approach for DRL, compared with reliability assessment methods designed for traditional software and DL classifiers, in Table \ref{table_compare}, where the proposed method is written in bold font.} \section{EXPERIMENT RESULTS}\label{sec_sim} \yi{This section presents experimental results regarding the following research questions (RQs): \\ \noindent \textbf{RQ1}: How effective is the proposed local-level safety verification of a single trajectory (reachability verification)? \\ \noindent \textbf{RQ2}: How accurate is the proposed method compared to the original POLAR algorithm? \\ \noindent \textbf{RQ3}: How conservative is our interval-based method compared to the traditional point-based method?\\ In the following, we first introduce our experimental environment and then address the \textbf{RQ}s individually.} \subsection{Experiment Setup} The mission in this experiment is to automatically dock the UUV using a DRL policy.
We have both simulation and physical experiments for Bluerov2 UUVs\footnote{All source code, DRL models, datasets, and experiment results are available at the Solitude website \url{https://github.com/Solitude-SAMR}}. For the simulation environment, we use ROS \cite{quigley2009ros} and Gazebo \cite{koenig2004design}, as shown in the left side of Fig. \ref{ros+gazebo}. \begin{figure}[htbp] \centerline{\includegraphics[width=0.8\hsize]{Figs/Experiment.pdf}} \caption{Simulation and physical experiment environments.} \label{ros+gazebo} \end{figure} \yi{For the physical experiment environment, the training process is conducted in a real-world water tank with a docking cage, as shown in the right side of Fig. \ref{ros+gazebo}. In the real-world experiment environment, we randomly initialise the UUV from different start points, which naturally generates the OP in our reliability assessment framework. Theoretically, we could accurately estimate the model's safety if the verified initial intervals covered the whole initial space. Due to the limitation of computing power and the scalability problem, we instead use sampling to estimate the safety of the entire input space in this paper. As shown in Fig. \ref{Distribution}, the distribution of the approximated OP converges as the number of samples increases.} \begin{figure}[htbp] \centerline{\includegraphics[width=\hsize]{Figs/DRL1.jpg}} \caption{Approximating the OP as more data is sampled. Multivariate distribution is projected to a 2D space for visualisation.} \label{Distribution} \end{figure} \subsection{Verification of an Initial State} \yi{In this subsection, we show the local-level reachability verification process for an initial interval.} In this paper, we use POLAR \cite{huang2021polar} as the reachability verification tool to over-approximate the reachable set from an initial state. Here, we set a deadzone ($[-0.1, +0.1]$) around the target to speed up the UUV's stabilisation and improve the mission's success rate.
This significantly alleviates the scalability problem of the verification tool and the over-conservativeness caused by error accumulation. The verification results are shown in Fig. \ref{RSDSV}. \begin{figure}[htbp] \centerline{\includegraphics[width=\hsize]{Figs/DRL3.jpg}} \caption{The reachable set of six state variables (Top 3 figures: x, y, z velocity; Bottom 3 figures: roll, pitch, yaw velocity). Each plot shows a ``strip'' of bounded trajectories.} \label{RSDSV} \end{figure} As shown in the figure, the reachable sets of the robot converge to $0$, which means that the robot will reach its destination from any state in the initial interval. Due to the existence of the deadzone, if the variables tend to the target value, then all the variables will eventually converge to the deadzone; that is, the UUV will eventually be parallel to the dock frame. Note that state variables 3 (roll velocity) and 4 (pitch velocity) remain essentially unchanged since they are not controllable by the DRL policy in this experiment. \subsection{Comparison with Original POLAR Algorithm} \yi{Here, we compare the over-approximated ranges of our method and POLAR \cite{huang2021polar} to illustrate the advantages of the proposed algorithm. The experiments are performed using Python on the same computer equipped with an AMD EPYC 7452 CPU. The initial states, system dynamics, neural network models, and weights are the same. In the experiment, we found that the POLAR algorithm becomes too loose after 50 iteration steps. To better show the performance, we update the input state interval at every single step. The output Q ranges of each step in a trajectory are compared in Fig.
\ref{comparePOLAR1}.} \begin{figure}[htbp] \centering \includegraphics[width=0.65\hsize]{Figs/compare2.pdf} \caption{Comparisons between the original POLAR and our optimised POLAR---the latter yields tighter bounds.} \label{comparePOLAR1} \end{figure} As shown in Fig.~\ref{comparePOLAR1}, our optimisation of the original POLAR greatly improves its performance. Although both algorithms can obtain the reachable set of the DNNs, the proposed approach yields tighter and thus more accurate results. \subsection{Comparison with Point-based Assessment} \yi{After the local-level verification, we show the global-level reliability assessment to validate the effectiveness of the proposed framework in this subsection. For a given DRL policy and a set of initial spaces, we can estimate the reliability of the system based on equation \eqref{pdfeq} and reachability verification tools. In this paper, we collected data from the real-world water tank environment and trained a DRL model to accomplish the mission task. After sampling 500 initial states in the real-world environment, we calculate the reachable sets with the proposed method and compare the estimated reliability of the UUV system between our interval-based method and the traditional point-based method, using the same OP in the global-level assessment. The comparison results are summarised in Fig. \ref{comparePOLAR2}:} \begin{figure}[htbp] \centering \includegraphics[width=0.7\hsize]{Figs/compare3.pdf} \caption{Comparisons between point-based and our two-level interval-based assessments---the latter yields more conservative results.} \label{comparePOLAR2} \end{figure} \yi{As can be seen from Fig.~\ref{comparePOLAR2}, the proposed interval-based approach is more conservative than the point-based testing method. This is because the point-based method is equivalent to a single sample from our interval.
Therefore, the proposed local-level reachability verification algorithm considers the entire initial state interval and is an over-approximation of all possible sample points. The result of the original POLAR algorithm is not shown here because it is too conservative: its reachable set covers both the safe and unsafe areas.} \yi{It can also be seen from Fig.~\ref{comparePOLAR2} that, as the number of sampled initial states (representing trajectories starting from them) increases, the predicted reliability of the UUV system converges. We found that the reliability of the system tested here is not very high. This is mainly due to the fact that safety factors are not considered in the training of the DRL model. It is conceivable that, since the goal of the reinforcement learning model is to dock the UUV at the harbour, the UUV always attempts to approach the destination along a straight route to achieve maximum reward and speed. } \iffalse \subsection{Comparison results of different DRL Policies} \yi{In this subsection, we show the reliability assessment of different DRL policies to validate the effectiveness of the proposed framework. For a given DRL policy and a set of initial spaces, we can estimate the reliability of the system based on the $pdf$ equation \eqref{pdfeq} and reachability verification tools. In this paper, we collected the data from the real-world water tank environment, and trained different DRL models with different neural network structures. After we sampled 500 initial states in the real-world environment, we calculate the reachable set by optimised POLAR and estimate the reliability of the UUV system for each DRL policy, which is summarised in Fig. \ref{RRe}:} \begin{figure}[htbp] \centerline{\includegraphics[width=0.6\hsize]{Figs/DRL4.pdf}} \caption{Assessed reliability results with different policies.} \label{RRe} \end{figure} \yi{It can be seen from Fig.
\ref{RRe} that, as the number of samples increases, the predicted reliability of the UUV system converges. We found that the reliability of the system tested here is not very high. This is mainly due to the fact that safety factors are not considered in the training of the DRL model. It is conceivable that, since the goal of the reinforcement learning model is to dock the UUV at the harbour, the UUV always attempts to approach the destination along a straight route to achieve maximum reward and speed. } \fi \section{CONCLUSION}\label{sec_con} This paper studies the probability of failures that can cause hazards in DRL-controlled RASs. A two-level reliability assessment framework is proposed, using reachability verification at the local level and statistically supporting probabilistic reliability claims based on the OP at the global level. An optimisation of the local-level reachability analysis algorithm is applied to enhance the verification speed and accuracy. The results in both the simulation and the real world demonstrate the effectiveness of the proposed framework. \section*{ACKNOWLEDGMENT} We thank Vibhav Bharti for his support in the experiments. \bibliographystyle{IEEEtran}
\section{Introduction}\label{Sec:Intro} Physical unclonable functions (PUFs) provide hardware instance-specific outputs (known as responses) to queried inputs (known as challenges); thus, challenge-response pairs (CRPs) generally function as `fingerprints' of hardware devices~\cite{herder2014physical,gao2020physical,liu2019xor}. PUFs can be categorized based on the number of yielded CRPs into weak and strong PUFs~\cite{herder2014physical,gao2020physical}. Weak PUFs have a limited number of CRPs which must be protected, so their primary application is volatile key provision~\cite{gao2018lightweight,gao2021noisfre}. On the other hand, strong PUFs offer a very large number of CRPs, which can be used in many security applications ranging from identification and lightweight authentication to oblivious transfer~\cite{gao2020physical}. Among strong PUFs, the arbiter PUF (APUF)~\cite{gassend2002silicon,suh2007physical,zalivaka2018reliable} is the most studied design due to its compactness and compatibility with silicon fabrication processes. However, the APUF is vulnerable to modeling attacks due to its linear structure. To increase the complexity of modeling attacks, various non-linearity injection techniques have been used to construct APUF variants, including the representative $l$-XOR-APUF, ($x,y$)-Interpose PUF (iPUF)~\cite{nguyen2019interpose}, feed-forward APUF (FF-APUF)~\cite{lim2005extracting}, and ($x,y,z$)-OAX-APUF~\cite{yao2022design}. These APUF variants are largely resilient to modeling attacks provided that their scale is increased (i.e., 128 stages or/and a large number of underlying APUFs composed)~\cite{gao2022systematically}, at least when access to a high-performance computing platform, e.g., a server with a cluster of GPUs and abundant memory, is unavailable. \noindent{\bf State-of-the-Art of Modeling Attacks:} The majority of modeling attacks exploit only CRPs to train the model.
By using deep learning (DL) (i.e., a multi-layer perceptron network), purely CRP-based modeling attacks can break the $128$-stage $7$-XOR-APUF, $64$-stage ($11,11$)-iPUF, $128$-stage FF-APUF with $5$ loops, and $64$-stage $6$-XOR-FF-APUF with $5$ loops~\cite{wisiol2021neural}. In addition, it has been shown that side-channel information (SCI) including unreliability~\cite{delvaux2013side,becker2015gap}, power or timing~\cite{ruhrmair2014efficient}, and photonic emission~\cite{tajik2017photonic} can be utilized to model the APUF or its variants. However, merely relying on SCI is insufficient to break APUF variants once they are properly scaled, and the acquisition of some SCIs, e.g., photonic emission and timing, requires costly peripheral equipment. To date, little effort has been devoted to hybrid modeling attacks on strong PUFs, in particular APUF variants. The two existing hybrid attacks, the multi-class/single-label multi-SCI attack (SLMSA)~\cite{liu2022multiclass} and the gradient-based reliability hybrid attack (GRA)~\cite{tobisch2021combining}, use not only CRPs but also SCIs concurrently to break APUF variants at a larger scale. However, there are still limitations in these attempts. More specifically, the SLMSA suffers from a large output dimension, as its trained model's output dimension is the product of the dimensions of all SCIs (where the SCIs include the binary response information). In addition, this work mainly examines the efficiency of hybrid attacks with CRPs and the power SCI, but its efficacy when the easy-to-obtain unreliability SCI is available has not been explored in~\cite{liu2022multiclass}. The reason is that Liu \textit{et al.}~\cite{liu2022multiclass} recognized that the dimension of the reliability SCI could be much higher, potentially resulting in a curse of dimensionality (detailed in Section~\ref{sec:curse}).
The GRA, specifically devised to attack iPUFs, requires a differentiable mathematical model of the underlying APUFs, which is non-trivial to adopt without in-depth knowledge of the model of the given APUF variant. \mypara{Our Contributions} The primary contributions and results of this work are summarized as follows. Significantly, all reported results are achieved with \textit{a common personal computer}, and modeling attacks are completed \textit{within an hour} even for large-scale strong APUF variants. \begin{itemize}[noitemsep, topsep=2pt, partopsep=0pt,leftmargin=0.4cm] \item We are the first to introduce multi-label/head classification to facilitate multi-SCI DL modeling attacks, coined MLMSA, which eliminates the curse of dimensionality in the SLMSA. Specifically, the MLMSA model's output dimension is the sum of the dimensions of the SCIs, rather than their product as in the SLMSA. In contrast to the SLMSA, which requires a mapping from the predicted label to the response, the MLMSA directly outputs the response. \item We have successfully attacked the 128-stage $10$-XOR-APUF, ($2,2,8$)-OAX-APUF and ($5,5$)-iPUF with the MLMSA by simultaneously using the \textit{response and the easy-to-obtain reliability SCI}. Notably, the 128-stage $12$-XOR-APUF and ($2,2,9$)-OAX-APUF are also statistically breakable; that is, among five repetitions, one attempt succeeds in our experiments. For these attacks, the training size is no more than 600,000 and training completes within an hour. In contrast to the GRA, the MLMSA does not require a mathematical model of the underlying PUFs. As a comparison, purely CRP-based DL modeling attacks can break the 128-stage $7$-XOR-APUF but with a significantly increased training size of 30M~\cite{wisiol2021neural}. \item We have advanced the breakable APUF variants to an even larger scale, despite the concise design of the proposed MLMSA.
By simultaneously exploiting multiple SCIs including response, power and reliability, the MLMSA successfully breaks the $30$-XOR-APUF, ($2,2,30$)-OAX-APUF, ($9,9$)- and ($2,18$)-iPUFs, all with 128-stage underlying APUFs. \item Based on silicon measurements, we have further affirmed the merits of leveraging \textit{additional easy-to-obtain reliability information} to attack XOR-APUFs compared to the setting of merely using response information. In particular, the response and reliability based DL attack can successfully break the 128-stage 10-XOR-APUF with 1.5M challenges and their corresponding response and reliability pairs, whereas response-only DL can attack at most a 6-XOR-APUF with the same 128 stages and training-set size. \end{itemize} \mypara{Paper Organization} In Section~\ref{sec:background}, we introduce the APUF and its variants XOR-APUF, OAX-APUF, iPUF and FF-APUF that are evaluated in this study. In Section~\ref{sec:relatedwork}, we categorize modeling attacks on strong PUFs into three general classes: purely CRP-based attacks, purely SCI-based attacks, and CRP and SCI hybrid attacks. In Section~\ref{sec: multilabel classification}, we present our multi-label DL based attack model using multiple side-channel information (SCI), coined MLMSA. In Section~\ref{sec:experimental results}, we evaluate the MLMSA against the XOR-APUF, OAX-APUF, iPUF and FF-APUF and compare it with the SLMSA. Further discussions are made in Section~\ref{sec:discussion}, followed by conclusions in Section~\ref{sec:conclusion}. \section{Background}\label{sec:background} This section provides the necessary background on the APUF and its representative variants that this study examines. \subsection{Arbiter-based PUF} The APUF exploits manufacturing variability that results in random interconnect and transistor gate time delays~\cite{gassend2002silicon}. This structure is simple, compact, and capable of yielding a large CRP space.
In contrast to the optical PUF that lacks a mathematical model~\cite{pappu2002physical}, the APUF has a linear additive structure, leading to vulnerability to modeling attacks. In a modeling attack, an attacker utilizes observed CRPs to build a mathematical PUF model that can accurately predict responses for unseen challenges~\cite{ruhrmair2010modeling,ruhrmair2013puf,becker2015gap,becker2015pitfalls}. \mypara{$\textbf{Linear Additive Delay Model}$} A linear additive delay model of APUFs is formulated as~\cite{lim2005extracting}: \begin{equation} \Delta = \boldsymbol{w}^{\rm T}\boldsymbol{\Phi} \label{con:delta}, \end{equation} where $\boldsymbol{w}$ is the weight vector that characterizes the time delay segments in the APUF, and $\boldsymbol{\Phi}$ is the parity (or feature) vector that can be generally understood as a transformation of the challenge. The dimension of both $\boldsymbol{w}$ and $\boldsymbol{\Phi}$ is $n+1$ given an $n$-stage APUF, where \begin{equation} \boldsymbol{\Phi}[n]=1,\boldsymbol{\Phi}[i]=\prod_{j=i}^{n-1}{(1-2\boldsymbol{c}[j])},i=0,...,n-1. \label{con:feature vector} \end{equation} The response of an $n$-stage APUF is determined by the delay difference $\Delta$ between the top path and bottom path of the APUF. This delay difference is the sum of the delay differences of the $n$ individual stages. The delay difference of each stage depends on the corresponding challenge bit~\cite{becker2015gap}. Based on Eq.~\ref{con:delta}, the response $r$ to the challenge $\boldsymbol{c}$ is modeled as: \begin{equation} r=\begin{cases} 1, \text{ if } \Delta<0 \\ 0, \text{otherwise} . \end{cases} \label{con:responses} \end{equation} \begin{figure} \centering \includegraphics[trim=0 0 0 0,clip,width=0.4\textwidth]{./Fig/XOR_APUF.pdf} \caption{The $l$-XOR-APUF consists of $l$ APUFs; each APUF response in $\{r_1,...,r_l\}$ is XOR-ed at the end to form a 1-bit response $r$.
All APUFs share the same challenge $\boldsymbol{c}$.} \label{fig:xor} \end{figure} \subsection{XOR-APUF} As shown in Fig.~\ref{fig:xor}, the $l$-XOR-APUF has $l$ underlying APUFs in parallel. Each APUF shares the same challenge and produces a digital response. All $l$ responses are XOR-ed to form the final $l$-XOR-APUF response. Using a larger $l$ can nearly exponentially increase the modeling attack complexity when \textit{only CRPs are used}. However, the unreliability of the $l$-XOR-APUF increases as $l$ increases, which restricts the use of large $l$ to some extent. In addition, a large $l$ still leaves the $l$-XOR-APUF vulnerable to reliability-based modeling attacks (which use the reliability information of the CRPs), since the complexity of such attacks increases only linearly as a function of $l$. \subsection{OAX-APUF} As shown in Fig.~\ref{fig:oax}, the OAX-APUF~\cite{yao2022design} consists of OR, AND and XOR blocks. The responses of the $x$ APUFs are OR-ed to obtain $r_{\rm or}$; the $y$ APUFs belong to the AND block, in which the responses are AND-ed; and the XOR block contains $z$ APUFs, whose responses are XOR-ed to obtain $r_{\rm xor}$. The responses of the three blocks are XOR-ed to obtain the final response $r$. According to~\cite{yao2022design}, the OAX-APUF has higher reliability than the XOR-APUF; it can defeat Covariance Matrix Adaptation Evolution Strategy (CMA-ES) based reliability attacks and demonstrates comparable resilience to logistic regression attacks compared to the ($x+y+z$)-XOR-APUF. However, it has relatively lower resilience than the ($x+y+z$)-XOR-APUF when the DL attack is applied~\cite{yao2022design}. \begin{figure} \centering \includegraphics[trim=0 0 0 0,clip,width=0.4\textwidth]{./Fig/OAX_APUF.pdf} \caption{Overview of the ($x,y,z$)-OAX-PUF, which has three blocks: OR, AND, and XOR blocks. } \label{fig:oax} \end{figure} \subsection{iPUF} The iPUF contains two layers of XOR-APUFs~\cite{nguyen2019interpose,liu2022multiclass}.
As shown in Fig.~\ref{fig:ipuf}, the response of the $x$-XOR-APUF is inserted into the $i$-th position of the challenge to obtain an ($n+1$)-bit challenge. The new ($n+1$)-bit challenge is input into the $y$-XOR-APUF to get the final response $r$. In theory and experiment, the iPUF has been demonstrated to have desirable resistance to LR and reliability-based CMA-ES attacks~\cite{nguyen2019interpose,liu2022multiclass}. According to~\cite{wisiol2020splitting}, the modeling resilience of the ($x,y$)-iPUF is similar to that of a ($\frac{x}{2}+y$)-XOR-APUF when the logistic regression (LR)-based divide-and-conquer attack is applied. \begin{figure}[h] \centering \includegraphics[trim=0 0 0 0,clip,width=0.40\textwidth]{./Fig/IPUF.pdf} \caption{$n$-bit (x,y)-iPUF~\cite{nguyen2019interpose}.} \label{fig:ipuf} \end{figure} \subsection{Feed Forward Arbiter PUF} The feed-forward arbiter PUF (FF-APUF)~\cite{lim2005extracting} adds one or more intermediate arbiters within a basic APUF, and the output response of an intermediate arbiter replaces one or multiple bits of the challenge. This is a typical design for obfuscating APUF challenge bit(s). The structure of a FF-APUF with one loop is depicted in Fig.~\ref{fig:ff-puf with one loop}~\cite{gao2022systematically}. The FF-APUF can be combined with XOR or OAX operations when multiple underlying FF-APUFs are used. \begin{figure}[h] \centering \includegraphics[trim=0 0 0 0,clip,width=0.45\textwidth]{./Fig/FF-PUF.pdf} \caption{An exemplified FF-APUF with one loop.} \label{fig:ff-puf with one loop} \end{figure} \section{Related Works}\label{sec:relatedwork} Modeling attacks on strong PUFs normally rely on machine learning (ML) techniques. ML attacks against strong PUFs can be divided into three categories according to the type of training data used: CRP-based ML attacks, SCI-based attacks and SCI hybrid attacks, which use CRPs, SCI, and CRPs along with SCI(s) as training data, respectively.
\subsection{CRP-based ML Attacks} Logistic regression (LR), support vector machines (SVM), and evolution strategies were utilized by Rührmair~\textit{et al.}~\cite{ruhrmair2010modeling} to model the XOR-APUF, FF-APUF and LSPUF in 2010. A number of improvements have since been made to increase the attack accuracy~\cite{sahoo2015case}. It is commonly suggested to increase the scale of the APUF variants, in particular the XOR-APUF, to strengthen the modeling resilience against these CRP-based modeling attacks. In order to increase the complexity of these ML attacks, more APUF variants building upon various forms of recomposition have been proposed, such as the MPUF~\cite{sahoo2017multiplexer}, iPUF~\cite{nguyen2019interpose} and OAX-APUF~\cite{yao2022design}. According to~\cite{nguyen2019interpose}, LR is the most efficient attack against the $l$-XOR-APUF, but it cannot be used to attack the iPUF directly. Wisiol~\textit{et al.}~\cite{wisiol2021neural} made some improvements to the LR attack and reported that the improved LR attack can break the $64$-bit $8$-XOR-APUF with an accuracy of 96.4\% by using up to $150$M CRPs (i.e., a training time of 391 minutes using 4 threads). The LR-based divide-and-conquer attack~\cite{wisiol2020splitting} (LDA) was proposed to attack the iPUF, and can successfully break the $64$-stage ($1,7$)-iPUF with an accuracy of 97\%~\cite{liu2022multiclass}. As reported by Liu~\textit{et al.}~\cite{liu2022multiclass}, the LDA attack can successfully break the $128$-stage ($6,6$)-iPUF. According to Wisiol~\textit{et al.}~\cite{wisiol2021neural}, the MLP-based divide-and-conquer attack is able to attack the 64-stage ($11,11$)-iPUF with $650$M CRPs. More recently, DL has been shown to be a simple and effective way to attack strong PUFs \textit{without knowing the underlying mathematical strong PUF model}. Alkatheiri~\textit{et al.}~\cite{alkatheiri2017towards} showed that a $1$-hidden-layer MLP attack can successfully model the FF-APUF with $6$ loops in 2017.
In 2018, Aseeri~\textit{et al.}~\cite{aseeri2018machine} proposed a $3$-hidden-layer MLP attack, which can successfully model the $128$-bit $7$-XOR-APUF with $40$M CRPs according to Wisiol~\textit{et al.}~\cite{wisiol2021neural}. Santikellur~\textit{et al.}~\cite{santikellur2019deep} proposed DL attacks on the XOR-APUF, MPUF and iPUF in 2019, which can break the $128$-stage ($4,4$)-iPUF, ($128,5$)-rMPUF and $5$-XOR-APUF. Mursi~\textit{et al.}~\cite{mursi2020fast} proposed a 3-hidden-layer MLP attack, which mainly focuses on the XOR-APUF. According to Wisiol~\textit{et al.}~\cite{wisiol2021neural}, this 3-hidden-layer MLP~\cite{mursi2020fast} can successfully break the 128-stage $7$-XOR-APUF with $30$M fully reliable CRPs. \subsection{SCI-based Attacks} SCI-based attacks can be divided into pure side-channel analysis (SCA) attacks and SCA-based ML attacks. Pure SCA attacks can be conducted alone to attack a single APUF, such as reliability-based analysis~\cite{delvaux2013side} and the photonic emission attack~\cite{tajik2017photonic}. SCA-based ML attacks mainly utilize reliability and power SCIs. The reliability-based ML attack establishes a reliability model of the PUF that exploits the relationship between the response reliability and the internal parameters~\cite{becker2015gap}. The measured reliability data and challenges are provided to, e.g., a CMA-ES model, as training data to learn the internal parameters of, e.g., XOR-APUFs. Fault injection can be utilized to accelerate the reliability SCI collection~\cite{delvaux2014fault,liu2022multiclass}. The reliability-based CMA-ES attack~\cite{becker2015gap} can successfully break the XOR-APUF and LSPUF. The CMA-ES attack is based on the assumption that the unreliability contribution of each APUF in the $l$-XOR-APUF is equal, so that the CMA-ES converges to any of the $l$ APUFs with equal chance when the attack is repeated. Therefore, the complexity of breaking the $l$-XOR-APUF is linear in $l$.
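To make the reliability signal behind such attacks concrete, the following sketch (our own illustration, not the exact CMA-ES fitness of~\cite{becker2015gap}) shows that a candidate weight vector matching the true APUF yields $|\Delta|$ values that correlate strongly with the measured reliability, whereas an unrelated random candidate does not; the noise level \texttt{sigma\_noise} is exaggerated here for illustration (the experiments in this paper use $0.05$).

```python
import math
import random

random.seed(1)
n, N, m, sigma_noise = 64, 800, 10, 0.5  # noise exaggerated for illustration

def parity(c):
    # Feature vector Phi of a challenge c (length n + 1).
    f = [1.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        f[i] = f[i + 1] * (1 - 2 * c[i])
    return f

def delta(w, f):
    return sum(wi * fi for wi, fi in zip(w, f))

w_true = [random.gauss(0, 1) for _ in range(n + 1)]  # the attacked APUF
w_rand = [random.gauss(0, 1) for _ in range(n + 1)]  # an unrelated candidate

feats = [parity([random.randint(0, 1) for _ in range(n)]) for _ in range(N)]

def measured_reliability(f):
    # |#ones - m/2| over m noisy evaluations: large when the response is stable.
    ones = sum(1 for _ in range(m)
               if sum((wi + random.gauss(0, sigma_noise)) * fi
                      for wi, fi in zip(w_true, f)) < 0)
    return abs(ones - m / 2)

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

rel = [measured_reliability(f) for f in feats]
corr_true = pearson([abs(delta(w_true, f)) for f in feats], rel)
corr_rand = pearson([abs(delta(w_rand, f)) for f in feats], rel)
```

A candidate that tracks the measured reliability (high \texttt{corr\_true}) has likely converged to one of the underlying APUFs; this is exactly the signal a reliability-based fitness function rewards.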
The OAX-APUF~\cite{yao2022design} and iPUF~\cite{nguyen2019interpose} break such an assumption and thus can defeat CMA-ES based reliability modeling attacks. Different power-based ML attacks leverage differing methods for analyzing power leakages, e.g., simple power analysis (SPA) and correlation power analysis (CPA)~\cite{liu2022multiclass}. Becker~\textit{et al.}~\cite{becker2014active} proposed a CPA-based CMA-ES attack that uses power correlation coefficients as the fitness function to model controlled PUFs and LSPUFs. An SPA-based LR attack was proposed by Rührmair~\textit{et al.}~\cite{ruhrmair2014efficient}, which adopts a gradient-based algorithm similar to LR to learn the power side-channel model of the XOR-APUF. However, because the relationship between the power and response of other APUF variants is difficult to deduce, fewer power-based ML attacks have been used to model other APUF variants. Though there are a number of SCI sources that can be used to attack strong PUFs, the reliability SCI is the most easily obtainable one. To collect power SCI, physical access to the PUF device and some expertise are required. Photonic emission collection is costly and usually requires proficient expertise. \subsection{SCI Hybrid Attacks} The above two types of ML-based attacks use only CRPs or only SCI, and thus the knowledge carried by both is not jointly utilized. SCI hybrid ML attacks instead use them concurrently to be more efficient. There are two recent studies on SCI hybrid attacks, exhibiting greatly improved attack efficacy. One is the gradient-based reliability hybrid attack (GRA)~\cite{tobisch2021combining} and the other is the multi-class/single-label multi-SCI attack (SLMSA)~\cite{liu2022multiclass}. The GRA~\cite{tobisch2021combining} was mainly devised to attack the ($x,y$)-iPUF. In essence, it combines the CMA-ES reliability attack and the LR CRP attack.
For the CMA-ES reliability attack term, it learns multiple APUFs concurrently (i.e., the regular CMA-ES learns one APUF per run~\cite{becker2015gap}) and enforces that each learned APUF is dissimilar to the others, to prevent convergence to the easiest-to-learn APUFs through the reliability SCI. The GRA requires careful constraints on each attack term, which potentially requires manual trial-and-error settings in practice. Note that the GRA is less effective on the ($x,y$)-iPUF with $x>1$. Therefore, a multiple-pass attack similar to the iPUF splitting attack~\cite{wisiol2020splitting} has to be adopted. In this context, the $y$ APUFs are first learned, and then the $x$ APUFs are learned sequentially. In addition, the GRA requires constructing a differentiable model of the iPUF, which is non-trivial to adopt as it requires an in-depth understanding of the underlying PUFs under attack. The multi-class classification based side-channel hybrid attack (SLMSA) proposed by Liu \textit{et al.}~\cite{liu2022multiclass} is the state-of-the-art attack on the XOR-APUF and iPUF, which avoids the underlying PUF mathematical models by using DL techniques. The SLMSA combines the response and power SCI to validate its efficiency. To transform the hybrid information into multiple classes, where only one value is true and the rest are false (one-hot vector encoding, also referred to as single-label classification), the SLMSA has to first fuse the CRP information with the SCI to construct so-called challenge-synthetic-feature pairs (CSPs) via the feature crossing method of Liu {\it et al.}~\cite{liu2022multiclass}. More specifically, feature crossing uses the Cartesian product of the response \textbf{r} and side-channel information \textbf{p} to form a single label. Therefore, the number of categories of the multi-class classification increases substantially when the dimension of each \textbf{p} and the number of \textbf{p} increase, due to the use of the Cartesian product.
This could incur the curse of dimensionality, as recognized by Liu \textit{et al.}~\cite{liu2022multiclass}. For example, the response has two categories and the $l$-XOR-APUF has ($l+1$) power SCI categories, so that the dimension is $2\times (l+1)$. In this context, Liu \textit{et al.} take only the CRP and the power SCI into consideration. The reliability SCI is not used. Nonetheless, the SLMSA has been shown to successfully break the $128$-stage $16$-XOR-APUF and $(2,16)$-iPUF with $600,000$ training CSPs (response and power SCI). However, that study does not validate the efficacy of the reliability SCI, which is the most easily obtainable SCI. In addition, the predicted response is not immediately available; it must be recovered through a remapping according to the CSP construction process. \section{Multi-Label Multi-SCI based Deep Learning Attacks}\label{sec: multilabel classification} We propose a multi-label DL based attack, coined MLMSA, to efficiently and effectively take advantage of multiple SCIs. First, its output dimension is merely the sum of the dimensions of the used SCIs. Second, it can directly predict the response of the learned strong PUF without additional remapping. Third, it allows flexible loss-weight tuning per SCI and response to gain improved attack accuracy; we thus can break 128-stage 12-XOR-APUFs by using the response and the \textit{easy-to-obtain reliability SCI} that is not considered in~\cite{liu2022multiclass}. \subsection{Multi-Label Model for MLMSA} Liu \textit{et al.} used a single-label model~\cite{liu2022multiclass} to exploit multiple sources of information, e.g., CRP and power SCI. In the single-label model, a given input instance can belong to exactly one of several classes. This results in the inconvenient CSP synthesis, where the dimension of the output is greatly increased, especially when multiple SCIs are concurrently exploited---the {\it curse of dimensionality} recognized by Liu \textit{et al.}~\cite{liu2022multiclass}.
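The dimensionality gap between the two encodings can be made concrete with a short sketch (a minimal illustration of our own; the dimensions follow the examples above with a hypothetical $10$-XOR-APUF and $10$ repeated measurements):

```python
from itertools import product

l, m = 10, 10  # 10-XOR-APUF; 10 repeated measurements for reliability
dims = {"response": 2, "power": l + 1, "reliability": m + 1}

# SLMSA-style single-label encoding: one class per Cartesian-product combination.
single_label_classes = len(list(product(*(range(d) for d in dims.values()))))

# MLMSA-style multi-label encoding: one head per source; dimensions are summed.
multi_label_outputs = sum(dims.values())

print(single_label_classes, multi_label_outputs)  # 242 vs. 24
```

With three information sources, the single-label output grows to $2 \times 11 \times 11 = 242$ classes, while the multi-label output stays at $2 + 11 + 11 = 24$.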
We note that the single-label model can be circumvented via the multi-label model. In multi-label classification, there is no constraint on how many of the classes an instance can be assigned to. For instance, by analogy, a movie can carry the multiple labels comedy, romance, and action in a multi-label output. The multi-label DL model~\cite{tsoumakas2007multi,yeh2017learning} is also usually referred to as a multi-head/output DL model. Different from the single-label/head DL model, the output layer of the multi-head model has multiple outputs or heads, where each head corresponds to a label (i.e., response, power SCI, or reliability SCI). Supposing there are $k$ heads/outputs in the multi-head model, the loss of the model can be expressed as Eq.~\ref{eq:total loss}: \begin{equation} \label{eq:total loss} L=\sum_{i=1}^{k}\lambda_{i}L_{i}, \end{equation} where $L_{i}$ is the loss of the $i$-th head and $\lambda_i$ is the weight or regularization factor of $L_{i}$, which can be flexibly tuned to gain optimal performance. \begin{figure}[h] \centering \includegraphics[trim=0 0 0 0,clip,width=0.40\textwidth]{./Fig/Multihead.pdf} \caption{Exemplified three-head DL architecture with heads for the response, reliability SCI, and power SCI. The input layer and hidden layer(s) are shared by the multiple heads.} \label{fig:multi head} \end{figure} \subsection{MLMSA} As shown in Fig.~\ref{fig:multi head}, the multi-head model can use the CRP and a number of SCIs, thus learning the underlying PUFs better by leveraging more useful information sources. Not only the response but also the other SCIs observed for a given challenge are simultaneously used to train the model, providing more meaningful information for modeling the internal parameters of the underlying strong PUF. This work focuses on power or/and reliability SCIs. The power consumed by, e.g., the $l$-XOR-APUF is linearly proportional to the number of responses being `1' among the $l$ APUFs.
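The weighted aggregation of Eq.~\ref{eq:total loss} can be sketched numerically as follows (the prediction vectors and one-hot labels are toy values of our own choosing; the $\lambda$ values mirror a three-head setting with a heavier response weight):

```python
import math

def cross_entropy(pred, onehot):
    # Categorical cross-entropy for one prediction/label pair.
    return -sum(t * math.log(p) for p, t in zip(pred, onehot) if t > 0)

# Toy per-head predictions (already normalised) and one-hot ground-truth labels.
heads = {
    "response":    ([0.8, 0.2],      [1, 0]),
    "power":       ([0.1, 0.7, 0.2], [0, 1, 0]),
    "reliability": ([0.6, 0.3, 0.1], [1, 0, 0]),
}
lam = {"response": 10.0, "power": 2.0, "reliability": 2.0}  # head weights

# Weighted total loss L = sum_i lambda_i * L_i (Eq. above).
total_loss = sum(lam[h] * cross_entropy(p, t) for h, (p, t) in heads.items())
```

During training, gradients of this single scalar flow back through all heads into the shared hidden layers, which is how the response head benefits from the SCI heads.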
More specifically, the reliability SCI is obtained by counting the number of `1' responses over $m$ repeated measurements of the same challenge. Let us consider ten repeated measurements of a given challenge as an example: the number of `1' responses obtained from the $10$ repeated measurements ranges from $0$ to $10$, so there are $11$ possible values. The reliability SCI is accordingly divided into $11$ categories, with each integer standing for one category. The proposed MLMSA attack has three stages: \textbf{Pretreatment Stage:} Collecting CRPs and the exploited SCI(s). Note that the label of a given SCI needs to be converted to a one-hot vector. \textbf{Training Stage:} Using the multi-head model to train the targeted strong PUF model. The input is a challenge. One head predicts the response, and the other head(s) predict(s) the remaining SCI(s), respectively, of the given challenge. The difference between the predictions and the ground-truth labels is used to optimize the multi-head model, according to Eq.~\ref{eq:total loss}. \textbf{Prediction Stage:} Once the multi-head model is trained, the response to an unseen challenge can be \textit{directly predicted by the response head/output}. \section{Experimental Results and Analysis}\label{sec:experimental results} \subsection{Experimental Setup} According to the dynamic power analysis of PUFs, the amount of drawn charge is linearly proportional to the number of latches exhibiting a value of `1'~\cite{liu2022multiclass}. For PUF designs that employ more than one APUF in parallel, by measuring the amount of current drawn from the supply voltage during any latch transition, the cumulative number of APUFs that respond with `1' can be determined~\cite{mahmoud2013combined,liu2022multiclass}. Consequently, following~\cite{liu2022multiclass}, we use the number of APUFs with response `1' as the simulated power SCI.
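This simulated power SCI can be sketched in a few lines (a minimal pure-Python stand-in for the simulation described above; the parameter values are illustrative):

```python
import random

random.seed(0)
n, l = 128, 10  # stages per APUF; number of APUFs in the l-XOR-APUF

# One weight vector per APUF, drawn from N(0, 1) as in the experimental setup.
W = [[random.gauss(0, 1) for _ in range(n + 1)] for _ in range(l)]

def parity(c):
    # Feature vector Phi of challenge c (the parity-vector transform above).
    f = [1.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        f[i] = f[i + 1] * (1 - 2 * c[i])
    return f

def xor_apuf(c):
    f = parity(c)
    # Each APUF responds '1' iff its delay difference is negative.
    bits = [int(sum(wi * fi for wi, fi in zip(w, f)) < 0) for w in W]
    response = sum(bits) % 2   # XOR of the l single-bit responses
    power_sci = sum(bits)      # simulated power SCI: number of APUFs at '1'
    return response, power_sci

challenge = [random.randint(0, 1) for _ in range(n)]
r, p = xor_apuf(challenge)
```

Note that the power SCI leaks strictly more than the response here, since the XOR response is simply the parity of the power count.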
As for the reliability information, we apply the same challenge repeatedly many times and classify the reliability information according to the number of `1' responses obtained from the repeated measurements. More specifically, if there are 10 repeated measurements, the number of categories of the reliability SCI is 11 (i.e., from 0 to 10). Following~\cite{ruhrmair2010modeling,becker2015pitfalls,nguyen2019interpose,wisiol2020splitting,yao2022design}, we use MATLAB to numerically simulate the CRPs, power SCI and reliability SCI required by the following experiments---silicon-measurement validations of the reliability SCI are detailed in Section~\ref{sec:silicon}. Each APUF has $128$ stages (the majority of previous studies use 64-stage APUFs). For the response and power SCI, most experiments use noise-free simulation, in which $\mu= 0$, $\sigma = 1$ are used to generate the weights corresponding to the APUFs as in Eq.~\ref{con:delta}. In this context, we collect training/testing CRPs. The unreliability is produced by injecting Gaussian noise into the above generated weights, setting $\mu_{\rm noise} = 0$ and $\sigma_{\rm noise} = 0.05$ to obtain noisy weights for each repeated query of the same challenge. The unreliability of the APUFs ranges from $0.05$ to $0.08$ after noise injection. The reliability SCI can consequently be collected. For the power SCI, we count the number of `1's in the simulated APUFs as the SCI. The training set, validation set and test set are divided according to the ratio of 4:1:1. It should be noted that if the CRPs used for testing and those used for training are collected under different conditions (e.g., enrolled at 25\textcelsius~but regenerated at 50\textcelsius), the testing accuracy is expected to degrade. For the FF-APUF, we only consider the combination of response and reliability; the responses of the training and validation sets are obtained by majority voting, and the responses of the testing set are noise-free.
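The reliability-label collection described above can be sketched as follows (a minimal pure-Python stand-in for the MATLAB simulation, using the same per-weight Gaussian noise injection with $\sigma_{\rm noise} = 0.05$):

```python
import random

random.seed(42)
n, m, sigma_noise = 128, 10, 0.05  # stages, repeated measurements, noise std

w = [random.gauss(0, 1) for _ in range(n + 1)]  # nominal APUF weights

def parity(c):
    # Feature vector Phi of challenge c (length n + 1).
    f = [1.0] * (n + 1)
    for i in range(n - 1, -1, -1):
        f[i] = f[i + 1] * (1 - 2 * c[i])
    return f

def reliability_label(c):
    # Category = number of '1' responses over m repeated noisy measurements,
    # an integer in [0, m], hence m + 1 one-hot categories for the head.
    f = parity(c)
    ones = 0
    for _ in range(m):
        noisy_delta = sum((wi + random.gauss(0, sigma_noise)) * fi
                          for wi, fi in zip(w, f))
        ones += int(noisy_delta < 0)
    return ones

label = reliability_label([random.randint(0, 1) for _ in range(n)])
```

The integer label is then one-hot encoded into an ($m+1$)-dimensional vector before being fed to the reliability head during training.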
For the XOR-APUF, iPUF, and OAX-APUF, we consider hybrids of response, power, and reliability information. The number of hidden layers of the multi-head model, as exemplified in Fig.~\ref{fig:multi head}, is 3 or 4 in each experiment, and the activation function is \textsf{ReLU}. In~\cite{liu2022multiclass}, Liu \textit{et al.} used 2 or 3 hidden layers, which breaks a 16-XOR-APUF. Our reproduction of Liu \textit{et al.} successfully attacks a 30-XOR-APUF, which can potentially be attributed to the adopted DL architecture with more hidden layers. The response head uses the \textsf{binary\_crossentropy} loss function, while the other heads use \textsf{categorical\_crossentropy}. The \textsf{Adam} optimizer is used for all experiments. The head loss weight settings of MLMSA are summarized in Table~\ref{tab:Loss Weight}. All experiments are completed on a common personal computer with an Intel(R) Core(TM) i5-6200U CPU and 12~GB of memory. \subsection{Modeling Attacks and Results Analysis} \begin{table}[] \caption{MLMSA Multi-Head Loss Weight $\lambda$ Settings.} \label{tab:Loss Weight} \centering \resizebox{0.5\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{|c|c|c|c|c|} \hline \textbf{PUF} & \textbf{Multi-Head} & \textbf{Response Weight} & \textbf{Power Weight} & \textbf{Reliability Weight} \\ \hline \multirow{3}{*}{XOR-APUF} & Two-Head A & 10 & 2 & / \\ \cline{2-5} & Two-Head B & 1 & / & 0.8(1.8) \\ \cline{2-5} & Three-Head & 10 & 2 & 2(1) \\ \hline \multirow{3}{*}{OAX-APUF} & Two-Head A & 10 & 2 & / \\ \cline{2-5} & Two-Head B & 1 & / & 0.8(1.8) \\ \cline{2-5} & Three-Head & 10 & 2 & 2 \\ \hline \multirow{3}{*}{iPUF} & Two-Head A & 10(2) & 2(3) & / \\ \cline{2-5} & Two-Head B & 1 & / & 0.8 \\ \cline{2-5} & Three-Head & 10(2) & 2 & 2 \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item [*]Two-Head A of MLMSA uses response and power SCI. Two-Head B of MLMSA uses response and reliability SCI.
Three-Head of MLMSA uses response, power, and reliability SCIs. \item [**] In Two-Head B, for $l$-XOR-APUF, the reliability weight is $1.8$ when $l=10$ and $0.8$ in other cases. In Three-Head, the reliability weight is $1$ when $l=29, 30$ and $2$ in other cases. \item [***] In Two-Head B, for ($x,y,z$)-OAX-APUF, the reliability head loss weight is $1.8$ when $x+y+z=12$ and $0.8$ in other cases. \item [****] In Two-Head A, for ($8,8$)-iPUF, ($9,9$)-iPUF, ($2,16$)-iPUF and ($2,18$)-iPUF, the response weight is $2$ and the power weight is $3$; these two weights are $10$ and $2$, respectively, in other cases. In Three-Head, for ($8,8$)-iPUF, ($9,9$)-iPUF, ($2,16$)-iPUF and ($2,18$)-iPUF, the response weight is $2$; for other iPUFs, it is $10$. \item [*****] These head loss weight settings are based on limited empirical tuning. There may be better choices, which need to be analyzed on a case-by-case basis. \end{tablenotes} \end{threeparttable} } \end{table} The XOR-APUF, OAX-APUF, iPUF, and FF-APUF are used to validate the effectiveness of the proposed MLMSA. Note that the SLMSA is the most efficient existing attack using not only CRPs but also SCI (in particular, power SCI). We compare our results with SLMSA by reproducing it under the same experimental settings for a fair comparison. Table~\ref{tab: attack results} summarizes the main results of the MLMSA and SLMSA attacks on three strong APUF variants. Notably, Multi-Class A and Multi-Class B belong to the SLMSA attacks~\cite{liu2022multiclass}, where different SCIs are used: \begin{enumerate} \item \textbf{Multi-Class A:} The CSPs are formed with power SCI and CRPs; \item \textbf{Multi-Class B:} The CSPs are formed with reliability SCI and CRPs. \end{enumerate} Note that Multi-Class B was not experimentally evaluated in~\cite{liu2022multiclass}; we explore it here, for the first time, for comparison purposes.
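The weighted multi-head objective of Eq.~\ref{eq:total loss} with the per-head losses named above can be sketched as follows (a NumPy stand-in for the Keras loss functions; the helper names are ours):

```python
import numpy as np

def bce(y, p, eps=1e-12):
    """Binary cross-entropy for the response head."""
    y, p = np.asarray(y, float), np.asarray(p, float)
    return float(np.mean(-y * np.log(p + eps) - (1 - y) * np.log(1 - p + eps)))

def cce(y_onehot, p, eps=1e-12):
    """Categorical cross-entropy for a one-hot-encoded SCI head."""
    y, p = np.asarray(y_onehot, float), np.asarray(p, float)
    return float(np.mean(-np.sum(y * np.log(p + eps), axis=1)))

def total_loss(head_losses, head_weights):
    """Weighted sum over heads, mirroring the lambda settings in the table above."""
    return sum(w * l for w, l in zip(head_weights, head_losses))
```

With the Three-Head XOR-APUF setting (response weight 10, power weight 2, reliability weight 2), the total loss is simply $10 L_{\rm resp} + 2 L_{\rm pow} + 2 L_{\rm rel}$.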
\begin{table*}[] \caption{Comparisons of MLMSA with SLMSA (i.e., multi-class)~\cite{liu2022multiclass} and DL with pure CRP attack~\cite{santikellur2019deep} on complicated strong PUFs.} \label{tab: attack results} \centering \resizebox{\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \textbf{PUF} & \textbf{Training CRPs} & \textbf{Two-Head A} & \textbf{Three-Head} & \textbf{Multi-Class A~\cite{liu2022multiclass}} & \textbf{Two-Head B} & \textbf{Multi-Class B~\cite{liu2022multiclass}} & \textbf{DL2019~\cite{santikellur2019deep}} \\ \hline 5-XOR-APUF & 300,000 (600,000 / 655,000) & 96.97\% & 97.45\% & 98.25\% & 98.51\% & 98.27\% & 97.61\% \\ \hline 6-XOR-APUF & 300,000 (600,000 / 1,200,000) & 96.98\% & 97.38\% & 98.09\% & 98.24\% & 97.54\% & 49.96\% \\ \hline 10-XOR-APUF & 300,000 (600,000) & 95.43\% & 95.61\% & 97.12\% & 96.14\% & 96.32\% & / \\ \hline 30-XOR-APUF & 600,000 & 91.85\% & 89.77\% & 92.13\% & / & / & / \\ \hline (2,2,3)-OAX-APUF & 300,000 (600,000) & 97.49\% & 97.94\% & 97.40\% & 98.08\% & 97.87\% & 97.57\% \\ \hline (2,2,4)-OAX-APUF & 300,000 (600,000) & 97.18\% & 97.82\% & 97.37\% & 97.98\% & 96.81\% & 50.08\% \\ \hline (2,2,8)-OAX-APUF & 300,000 (600,000) & 96.17\% & 95.00\% & 96.31\% & 95.30\% & 96.84\% & / \\ \hline (2,2,30)-OAX-APUF & 300,000 (600,000) & 87.27\% & 84.23\% & 88.95\% & / & / & / \\ \hline (4,4)-iPUF & 300,000 (600,000 / 647,000) & 97.05\% & 97.58\% & 97.33\% & 96.82\% & 96.63\% & 74.73\% \\ \hline (5,5)-iPUF & 600,000 (1,200,000) & 96.94\% & 97.29\% & 97.09\% & 95.70\% & 95.36\% & / \\ \hline (8,8)-iPUF & 600,000 & 95.79\% & 94.35\% & 95.71\% & / & / & / \\ \hline (9,9)-iPUF & 600,000 & 95.29\% & 94.72\% & 95.47\% & / & / & / \\ \hline (2,16)-iPUF & 600,000 & 92.81\% & 90.33\% & 92.82\% & / & / & / \\ \hline (2,18)-iPUF & 600,000 & 89.26\% & 89.00\% & 90.49\% & / & / & / \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item [*]Two-Head A of MLMSA uses response and power SCI. 
Two-Head B of MLMSA uses response and reliability SCI. Three-Head of MLMSA uses response, power SCI, and reliability SCI. Multi-Class A of SLMSA uses response and power SCI. Multi-Class B of SLMSA uses response and reliability SCI. \item [**] For Two-Head A, Three-Head and Multi-Class A, the training size is $300,000$ or $600,000$. Taking the $5$-XOR-APUF as an example, the size of $300,000$ is used when the attacks use power SCI and $600,000$ when they use reliability SCI. The $655,000$ is the training size reported in~\cite{santikellur2019deep}. \end{tablenotes} \end{threeparttable} } \end{table*} \subsubsection{$l$-XOR-APUF} The loss weight settings of the multi-head model are described in Table~\ref{tab:Loss Weight}. For the Two-Head A model (response head and power head), the response loss weight is $10$ and the power loss weight is $2$. For the Two-Head B model (response head and reliability head), the response weight is $1$, and the reliability loss weight is $0.8$ when $l\leq 9$ and $1.8$ when $l=10$. For Three-Head, the response loss weight is $10$ and the power loss weight is $2$, while the reliability loss weight is $2$ when $l\leq 28$ and $1$ when $l=29,30$. If the attack uses \textit{power} SCI (i.e., Two-Head A, Three-Head, and Multi-Class A, the latter being the SLMSA), the training size is $300,000$ when $l\leq 12$ and $600,000$ when $l\geq16$. For Two-Head B, which only uses the easy-to-obtain reliability SCI, the training size is $600,000$ for all $l$ settings---the largest $l$ in this case is $10$. Fig.~\ref{fig: xor results} depicts the results of the multi-head (i.e., our MLMSA) and multi-class (in particular, the SLMSA) attacks on $l$-XOR-APUFs ($l\leq 30$). When the power SCI is used, both MLMSA and SLMSA can attack a 128-stage $30$-XOR-APUF with an accuracy of about 90\%, a scale that has not been achieved in any previous study.
Liu~\textit{et al.}~\cite{liu2022multiclass} only reported an accuracy of 97.8\% when modeling a $16$-XOR-APUF using Multi-Class A---as mentioned above, one potential reason is that Liu~\textit{et al.}~\cite{liu2022multiclass} used 2 or 3 hidden layers, which are less powerful than the 3 or 4 hidden layers we adopted in the reproduction. When using \textit{only response and reliability SCI}, the two attacks can reliably break a $10$-XOR-APUF with an accuracy of more than 95\%---later we show that a $12$-XOR-APUF is statistically breakable under multiple repeated attacks. The larger $l$ is, the harder it is to minimize the reliability loss during training optimization. The Three-Head attack using response, power, and reliability SCI exhibits an improvement over the Two-Head A model (response and power SCI) only when $l$ is small. This means the power SCI is more effective than the reliability SCI for attacks. The accuracies of Two-Head A, Three-Head and Multi-Class~A are similar, which further indicates the dominant contribution of the power SCI compared to the reliability SCI. Notably, \textit{when the power SCI is unavailable}, reliability SCI can indeed help to break larger-scale strong PUFs than CRP-based modeling attacks can achieve alone, as specifically validated in Section~\ref{sec:relattack}. \begin{figure}[h] \centering \includegraphics[trim=0 0 0 0,clip,width=0.45\textwidth]{./Fig/XORResultsNew.pdf} \caption{Comparisons of MLMSA (i.e., multi-head) and SLMSA (i.e., Multi-Class) attacks using CRP, or/and power or/and reliability SCI on $l$-XOR-APUFs.} \label{fig: xor results} \end{figure} \subsubsection{($x,y$)-iPUF} The loss weight settings for the ($x,y$)-iPUF are detailed in Table~\ref{tab:Loss Weight}. For most of the iPUF configurations with 128-stage $x$-APUFs and 129-stage $y$-APUFs, the response head loss weight is set to $10$, the power head loss weight to $2$, and the reliability head loss weight to $2$.
However, these settings are not successful in attacking the ($8,8$)-iPUF and ($2,16$)-iPUF, and it is necessary to tune the corresponding loss weight settings: the response head loss weight is set to $2$ and the power head loss weight to $3$ when the Two-Head A attack is used, while the response head loss weight is set to $2$ when the Three-Head attack is used. For Two-Head A, Three-Head and Multi-Class A, the training size is $600,000$. For Two-Head B and Multi-Class B, the training size is $600,000$ for the ($4,4$)-iPUF and $1,200,000$ for the ($5,5$)-iPUF, respectively. Though MLMSA is simple, it can break the ($2,16$)/($8,8$)-iPUF with accuracies of 90.33\%/94.35\%, comparable to the SLMSA, when response, power SCI and reliability SCI are exploited. When using response, power SCI and reliability SCI, the Three-Head MLMSA can also successfully model the ($2,18$)/($9,9$)-iPUF with accuracies of 89.00\%/94.72\%, which has not been reached by existing works, including~\cite{liu2022multiclass}. As for Two-Head B using response and reliability SCI, both the MLMSA and SLMSA can break the ($5,5$)-iPUF with an accuracy of more than 95\%. \begin{figure}[h] \centering \includegraphics[trim=0 0 0 0,clip,width=0.45\textwidth]{./Fig/iPUFResultsNew.pdf} \caption{Comparisons of MLMSA (i.e., multi-head) and SLMSA (i.e., Multi-Class) attacks on ($x,y$)-iPUFs.} \label{fig: ipuf results} \end{figure} \subsubsection{($x,y,z$)-OAX-APUF} For the ($x,y,z$)-OAX-APUF, we fix $x=2$, $y=2$, and vary $z$. The loss weight settings of the multi-head attacks are detailed in Table~\ref{tab:Loss Weight}. For the Two-Head A model (response head and power head), the response head loss weight is $10$ and the power head loss weight is $2$. For the Two-Head B model (response head and reliability head), the response head loss weight is $1$, and the reliability head loss weight is $0.8$ when $z\leq 7$ and $1.8$ when $z=8$.
For Three-Head, the response head loss weight is $10$, the power head loss weight is $2$, and the reliability head loss weight is $2$. If the attacks use power SCI (Two-Head A, Three-Head, and Multi-Class A), the training size is $300,000$ when $z\leq 10$ and $600,000$ when $z\geq12$. For Two-Head B, the training size is $600,000$. As shown in Fig.~\ref{fig: oax results}, the performance of MLMSA and SLMSA is similar, though the MLMSA is simpler. More specifically, when power SCI is leveraged, both MLMSA and SLMSA can break the ($2,2,30$)-OAX-APUF with an accuracy of about 88\%. Two-Head B with reliability SCI can reliably break the ($2,2,8$)-OAX-APUF with an accuracy of more than 95\%---later, in Section~\ref{sec:relattack}, we show that the ($2,2,9$)-OAX-APUF is statistically breakable. These experiments further assess the security of the OAX-APUF. Compared with the $l$-XOR-APUF, the ($x,y,z$)-OAX-APUF with $l=x+y+z$ is slightly easier to model with DL-based attacks, because the OR and AND operations are easier for DL to approximate than the XOR operation. This holds even though the OAX-APUF defeats CMA-based reliability modeling attacks and improves the modeling resilience against LR-based modeling attacks trained with only CRPs~\cite{yao2022design}. \begin{figure}[h] \centering \includegraphics[trim=0 0 0 0,clip,width=0.45\textwidth]{./Fig/OAXResultsNew.pdf} \caption{Comparisons of MLMSA and SLMSA using power and reliability SCIs on OAX-APUFs. } \label{fig: oax results} \end{figure} \subsubsection{FF-APUF} For the FF-APUF, we compare i) the Two-Head model of MLMSA with the multi-class SLMSA attack and ii) the pure CRP-based DL attack~\cite{alkatheiri2017towards}. Note that for the first two attacks, only the reliability SCI is utilized. In this experiment, the number of hidden layers is set to 2 for Two-Head B and Multi-Class B. The training size is $30,000$ when the loop number is less than $4$ and $600,000$ when the loop number is $4$, $5$, or $6$.
The response head loss weight is $10$, and the reliability head loss weight is $2$. There are three reliability SCI settings: $10$ repeated measurements with $11$ classes; $19$ repeated measurements with $4$ classes (e.g., counts 0-4 form one class and counts 5-9 another); and $19$ measurements with $20$ classes. The challenge feature vector extraction method is consistent with~\cite{alkatheiri2017towards}. As detailed in Table~\ref{tab: ffpuf results}, the multi-head MLMSA and multi-class SLMSA attacks can successfully model an FF-APUF with $6$ loops with an accuracy of about 90\%. Both hybrid attacks exhibit better accuracy than the purely CRP-based DL modeling attack~\cite{alkatheiri2017towards}. As for the number of repeated measurements for the reliability SCI, when the response is measured $19$ times and the results are divided into $20$ categories (i.e., more repetitions and finer-grained classes), the response accuracy obtained by the two attacks is the highest. This indicates that finer-grained reliability SCI is better. MLMSA can also be used to break the XOR-FF-APUF, where the FF-APUFs are further XOR-ed. As shown in Fig.~\ref{fig: xorffpuf results}, when using response and power SCI, MLMSA can successfully break a $10$-XOR-FF-APUF with a training size of $300,000$. By increasing the training size, a larger XOR-FF-APUF (i.e., a 20-XOR-FF-APUF) is also breakable. Note that here all FF-APUFs have one loop.
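The class binning of the reliability SCI described above, e.g., mapping counts from $19$ repeated measurements into $4$ or $20$ classes, can be sketched as follows (equal-width binning assumed from the example given; the exact grouping used in our experiments may differ slightly):

```python
def bin_reliability(count, m, cn):
    """Map a count of '1's from m repeats (m+1 possible values, 0..m)
    into cn equal-width classes, e.g., (m=19, cn=4): 0-4 -> 0, 5-9 -> 1, ..."""
    width = -(-(m + 1) // cn)          # ceil((m+1)/cn)
    return min(count // width, cn - 1)
```

With $cn=m+1$ (e.g., the $(10,11)$ and $(19,20)$ settings), the mapping is the identity on the raw counts.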
\begin{figure}[h] \centering \includegraphics[trim=0 0 0 0,clip,width=0.45\textwidth]{./Fig/XORFFPUFResults.pdf} \caption{Comparisons of Two-Head A Attack with different number of training CRPs and power SCI on $l$-XOR-FF-APUF in which FF-APUFs have $1$ loop (63→80).} \label{fig: xorffpuf results} \end{figure} \begin{table*} \caption{Comparisons of MLMSA with SLMSA (i.e., multi-class)~\cite{liu2022multiclass} and DL with pure CRP attack~\cite{alkatheiri2017towards} on FF-APUFs with varying number of loops.} \label{tab: ffpuf results} \centering \resizebox{\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline \textbf{loopNum} & \textbf{($m,cn$)} & \textbf{Start → End} & \textbf{Two-Head B} & \textbf{Multi-Class B} & \textbf{TowardsFast} & \textbf{Start → End} & \textbf{Two-Head B} & \textbf{Multi-Class B} & \textbf{TowardsFast} \\ \hline & (19,20) & & 99.02\% & 99.04\% & 95.87\% & & 98.94\% & 99.17\% & 97.77\% \\ \cline{2-2} \cline{4-6} \cline{8-10} & (19,4) & & 98.75\% & 99.03\% & 95.61\% & & 98.86\% & 99.14\% & 96.85\% \\ \cline{2-2} \cline{4-6} \cline{8-10} \multirow{-3}{*}{1} & (10,11) & \multirow{-3}{*}{15→80} & 98.83\% & 98.91\% & 95.25\% & \multirow{-3}{*}{63→80} & 98.95\% & 99.11\% & 98.06\% \\ \hline & (19,20) & & 94.90\% & 95.00\% & 95.09\% & & 94.88\% & 95.11\% & 95.17\% \\ \cline{2-2} \cline{4-6} \cline{8-10} & (19,4) & & 94.79\% & 94.90\% & 95.11\% & & 94.79\% & 94.83\% & 94.99\% \\ \cline{2-2} \cline{4-6} \cline{8-10} \multirow{-3}{*}{2} & (10,11) & \multirow{-3}{*}{15→80,85} & 94.82\% & 94.92\% & 94.99\% & \multirow{-3}{*}{63→80,85} & 94.83\% & 94.98\% & 95.07\% \\ \hline & (19,20) & & 94.33\% & 94.31\% & 91.84\% & & 94.34\% & 94.59\% & 93.36\% \\ \cline{2-2} \cline{4-6} \cline{8-10} & (19,4) & & 94.17\% & 94.24\% & 91.53\% & & 94.14\% & 94.39\% & 93.30\% \\ \cline{2-2} \cline{4-6} \cline{8-10} \multirow{-3}{*}{3} & (10,11) & \multirow{-3}{*}{15→80,85,90} & 94.26\% & 94.69\% & 91.83\% & \multirow{-3}{*}{63→80,85,90} & 
94.22\% & 94.43\% & 92.88\% \\ \hline & (19,20) & & 91.45\% & 91.53\% & 91.48\% & & 91.50\% & 91.51\% & 91.44\% \\ \cline{2-2} \cline{4-6} \cline{8-10} & (19,4) & & 91.41\% & 91.55\% & 91.40\% & & 91.43\% & 91.49\% & 91.59\% \\ \cline{2-2} \cline{4-6} \cline{8-10} \multirow{-3}{*}{4} & (10,11) & \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}15→80,85\\ 15→90,95\end{tabular}} & 91.43\% & 91.55\% & 91.32\% & \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}63→80,85\\ 63→90,95\end{tabular}} & 91.29\% & 91.38\% & 91.38\% \\ \hline & (19,20) & & 90.83\% & 90.73\% & 88.62\% & & 91.12\% & 91.04\% & 89.62\% \\ \cline{2-2} \cline{4-6} \cline{8-10} & (19,4) & & 90.66\% & 90.80\% & 88.69\% & & 90.93\% & 91.05\% & 89.39\% \\ \cline{2-2} \cline{4-6} \cline{8-10} \multirow{-3}{*}{5} & (10,11) & \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}15→80,85\\ 15→90,95,100\end{tabular}} & 91.00\% & 91.01\% & 88.21\% & \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}63→80,85\\ 63→90,95,100\end{tabular}} & 91.12\% & 90.91\% & 89.49\% \\ \hline & (19,20) & & 91.28\% & 91.38\% & 89.69\% & & 91.22\% & 91.30\% & 90.15\% \\ \cline{2-2} \cline{4-6} \cline{8-10} & (19,4) & & 91.33\% & 91.17\% & 89.52\% & & {\color[HTML]{080808} 90.92\%} & 91.21\% & 89.67\% \\ \cline{2-2} \cline{4-6} \cline{8-10} \multirow{-3}{*}{6} & (10,11) & \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}15→80,85,90\\ 15→95,100,105\end{tabular}} & 91.02\% & 91.21\% & 89.69\% & \multirow{-3}{*}{\begin{tabular}[c]{@{}c@{}}63→80,85,90\\ 63→95,100,105\end{tabular}} & 91.09\% & 91.37\% & 90.02\% \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item [*] Two-Head B of MLMSA uses response and reliability SCI. Multi-Class B of SLMSA uses response and reliability SCI. \item [**] ($m,cn$) means reliability SCI is obtained by repeating the measurements for $m$ times, which is divided into $cn$ categories/classes. \item [***] In the experiments of FF-APUFs, the responses of training set are obtained by a majority vote. 
The responses of the test set are noise-free. \end{tablenotes} \end{threeparttable} } \end{table*} \section{Discussion}\label{sec:discussion} \subsection{Different Loss Weights} We use the Two-Head B MLMSA attack to further explore the impact of different head loss weight settings on the performance of the MLMSA. In this experiment, the response head loss weight is set to $1$, while the reliability head loss weight ranges between $0.5$ and $2.0$. The attacked strong PUFs are the $10$-XOR-APUF, ($2,2,8$)-OAX-APUF, and ($5,5$)-iPUF. As shown in Fig.~\ref{fig: lossweight results}, when the reliability head loss weight is small, the chance of achieving a response accuracy greater than 90\% tends to be small---each weight setting per strong PUF is run once. The other observation concerns the stability of the attack accuracy: for the $10$-XOR-APUF and ($5,5$)-iPUF, when the reliability head loss weight is greater than $1.5$, the response prediction accuracy is high and stably maintained, e.g., above 90\% for the $10$-XOR-APUF. There are two general implications. Firstly, a slightly higher reliability head loss weight is necessary to enforce its contribution. Otherwise, if its weight is too small, the Two-Head B attack degrades to a CRP-only based DL attack, which is less effective, as exhibited by the lower chance of breaking large-scale APUF variants, e.g., the 128-stage $10$-XOR-APUF. Secondly, a properly set higher head loss weight can keep the attack accuracy stably high with smaller variance, as observed for the $10$-XOR-APUF and ($5,5$)-iPUF. According to our observations of the loss curves during training in all aforementioned experiments, the power output head converges the fastest, followed by the response output head, and finally the reliability output head.
Though we adopt fixed head loss weights throughout the training process in all our experiments, dynamically tuning these loss weights may be expected to achieve improved attack effects, e.g., better accuracy or faster convergence of the total loss. \begin{figure}[h] \centering \includegraphics[trim=0 0 0 0,clip,width=0.45\textwidth]{./Fig/ReliabilityWeightResultsNew.pdf} \caption{Response prediction accuracy under different loss settings. } \label{fig: lossweight results} \end{figure} \subsection{Reliability Hybrid Attack}\label{sec:relattack} The proposed MLMSA and the reproduced SLMSA attack~\cite{liu2022multiclass}, using the reliability SCI obtained from $10$ repeated measurements (i.e., $11$ categories), can reliably model the $10$-XOR-APUF with an accuracy of 96\%. As for the ($2,2,8$)-OAX-APUF, both attacks reliably achieve an accuracy of 95\%. In fact, when these two attacks are run multiple times, there is a certain probability that the response accuracy on even larger-scale $11$- and $12$-XOR-APUFs and the ($2,2,9$)-OAX-APUF can reach more than 90\% (i.e., they are successfully broken). To be more precise, out of 5 runs, one attempt reached about 94\%. As shown in Table~\ref{tab: attack results}, Two-Head B of MLMSA and Multi-Class B of SLMSA can successfully break the $10$-XOR-APUF, ($2,2,8$)-OAX-APUF and ($5,5$)-iPUF when the reliability SCI assists the model training. In comparison, when only CRPs are used, the DL attack~\cite{santikellur2019deep} can only break the $5$-XOR-APUF, ($2,2,3$)-OAX-APUF and ($4,4$)-iPUF, which is inferior to the hybrid attacks. For fairness, our implementations of these attacks use the same MLP structure except for the output layer. According to Liu \textit{et al.}~\cite{liu2022multiclass}, the GRA-based hybrid attack using reliability SCI~\cite{tobisch2021combining} can successfully crack a 128-stage 6-XOR-APUF, but it cannot crack a $12$-XOR-APUF. GRA can break the ($6,6$)-iPUF, which cannot be broken by the two attacks we have attempted.
The GRA exhibits improved performance on the iPUF due to its knowledge of the iPUF's differential model. In addition, the GRA relies on a multiple-pass attack when $x>1$ in the ($x,y$)-iPUF: the $y$ APUFs are learned first, and then the $x$ APUFs are learned sequentially with the learned $y$ APUFs fixed. In summary, by using the easily obtainable reliability SCI to assist hybrid modeling attacks, larger-scale strong APUF variants can be successfully broken, which \textit{cannot be achieved by using CRP-only based DL attacks}. \begin{figure*} \centering \subfigure[8-XOR-APUF]{ \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=0.8\linewidth]{Fig/8XORReliabilityNew.pdf} \end{minipage} }% \subfigure[9-XOR-APUF]{ \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=0.8\linewidth]{Fig/9XORReliabilityNew.pdf} \end{minipage} }% \subfigure[10-XOR-APUF]{ \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=0.8\linewidth]{Fig/10XORReliabilityNew.pdf} \end{minipage} }% \subfigure[(2,2,6)-OAX-APUF]{ \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=0.8\linewidth]{Fig/226OAXReliabilityNew.pdf} \end{minipage} }% \subfigure[(2,2,7)-OAX-APUF]{ \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=0.8\linewidth]{Fig/227OAXReliabilityNew.pdf} \end{minipage} }% \subfigure[(2,2,8)-OAX-APUF]{ \begin{minipage}[t]{0.33\linewidth} \centering \includegraphics[width=0.8\linewidth]{Fig/228OAXReliabilityNew.pdf} \end{minipage} }% \centering \caption{The response unreliability SCI category statistics of XOR-APUF (a-c) and OAX-APUF (d-f) based on silicon-measurement synthesized RO-APUF and MATLAB numerically simulated APUF.} \label{fig:reliability statistics} \end{figure*} \subsection{Silicon Measurement Validations}\label{sec:silicon} \begin{figure}[h] \centering \includegraphics[trim=0 0 0 0,clip,width=0.40\textwidth]{./Fig/ROAPUFstage.pdf} \caption{Example of $i$-th stage of an
APUF with two signal paths (i.e., top and bottom paths). } \label{fig:roapuf stage} \end{figure} Following~\cite{gao2022treverse,gao2014highly}, we use the public ROPUF dataset HOST2018~\cite{hesselbarth2018large} to synthesize APUFs, coined RO-APUFs. The key to this method is to use the reciprocals of four RO frequencies as the four path-segment delays of each stage of the RO-APUF (see the illustration in Fig.~\ref{fig:roapuf stage}). To synthesize a $128$-stage RO-APUF, $512$ RO frequencies are utilized, which mainly involves the following three steps. \begin{enumerate} \item Obtain the reciprocals of RO frequencies to serve as the path segment delays of the APUF: the reciprocals of four RO frequencies are used as the segment time delays t$_{13}^{i}$, t$_{14}^{i}$, t$_{23}^{i}$, t$_{24}^{i}$ of the $i$-th stage of the APUF, as illustrated in Fig.~\ref{fig:roapuf stage}. \item The $\rm delay\_cross^{i}=t_{14}^{i}-t_{23}^{i}$ and $\rm delay\_uncross^{i}=t_{13}^{i}-t_{24}^{i}$ are computed to represent the crossed-path and uncrossed-path delay differences of the $i$-th stage. The $\boldsymbol{w}[i]$ is obtained through Eq.~\ref{wform}. \begin{equation} \begin{split} \label{wform} &\boldsymbol{w}[0]=(\rm delay\_uncross^{0}-delay\_cross^{0})/2, \\ &\boldsymbol{w}[128]=(\rm delay\_uncross^{127}+delay\_cross^{127})/2,\\ &\boldsymbol{w}[i]=(\rm delay\_uncross^{i-1}+delay\_cross^{i-1}\\ &+\rm delay\_uncross^{i}-delay\_cross^{i})/2,\\ &i=1,2,...,127. \end{split} \end{equation} \item Compute the response to a given challenge according to Eq.~\ref{con:delta}, Eq.~\ref{con:feature vector}, and Eq.~\ref{con:responses}. \end{enumerate} The HOST2018 ROPUF dataset provides raw data for 217 Xilinx Artix-7 XC7A35T FPGAs, each containing a total of 6,592 ROs, comprising six different routing paths with 550 to 1,696 instances per type~\cite{hesselbarth2018large}.
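The synthesis steps above, in particular Eq.~\ref{wform}, can be sketched as follows (the function and variable names are ours; per-stage delays are the reciprocals of measured RO frequencies):

```python
import numpy as np

def synthesize_ro_apuf_weights(t13, t14, t23, t24):
    """Build an (n+1)-dim APUF weight vector from per-stage RO-derived delays,
    following Eq. (wform); n = 128 for the RO-APUFs in this work."""
    t13, t14, t23, t24 = map(np.asarray, (t13, t14, t23, t24))
    n = len(t13)                       # number of stages
    cross = t14 - t23                  # crossed-path delay difference
    uncross = t13 - t24                # uncrossed-path delay difference
    w = np.empty(n + 1)
    w[0] = (uncross[0] - cross[0]) / 2
    w[n] = (uncross[n - 1] + cross[n - 1]) / 2
    for i in range(1, n):
        w[i] = (uncross[i - 1] + cross[i - 1] + uncross[i] - cross[i]) / 2
    return w
```

The resulting weight vector is then used with the parity feature vector to compute responses, exactly as for a numerically simulated APUF.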
Each RO frequency is evaluated 100 times at 5\textcelsius, 15\textcelsius, 25\textcelsius, 35\textcelsius, 45\textcelsius, and 55\textcelsius. After synthesis, the RO-APUF, XOR-RO-APUF and OAX-RO-APUF are used to validate the efficiency of the proposed MLMSA attack with silicon measurements. The reference response is measured at 25\textcelsius, and the reliability SCI is measured $10$ times at 55\textcelsius. For the numerically simulated CRPs and the corresponding reliability SCI obtained through the MATLAB simulator, the training size is $600,000$; both MLMSA and SLMSA can successfully break the $10$-XOR-APUF and ($2,2,8$)-OAX-APUF. However, the same training size and head loss weight settings are not directly applicable to the XOR-RO-APUF and OAX-RO-APUF with silicon measurements. In order to achieve the same attack effect as with the MATLAB numerical simulation, the training size and head loss weight settings are adjusted in some cases. When modeling the $l$-XOR-RO-APUF, the training size is adjusted to $600,000$ when $l\leq 7$; $1,200,000$ when $l=8,9$; and $1,500,000$ when $l=10$. For the ($x=2,y=2,z$)-OAX-RO-APUF, the training size is adjusted to $600,000$ when $x+y+z \leq 9$; $1,200,000$ when $x+y+z=10,11$; and $1,500,000$ when $x+y+z=12$. The response weight of all Two-Head B attacks on the XOR-RO-APUF and OAX-RO-APUF is $1$. For the $l$-XOR-RO-APUF, the reliability loss weight is $0.8$ when $l=5,6,8,9$; $1.8$ when $l=7$; and $1$ when $l=10$. The reliability loss weight for the OAX-RO-APUF is always $0.8$. Both Two-Head B and Multi-Class B can model the $10$-XOR-RO-APUF with an accuracy of about 95\%. While Multi-Class B of SLMSA cannot break the ($2,2,8$)-OAX-RO-APUF, Two-Head B of MLMSA can successfully break it with an accuracy of 97.35\%. Generally, the number of training CRPs and corresponding reliability SCI required for silicon-measurement-based attacks is larger than that required for numerically simulated strong PUFs.
The potential reason is that the unreliability of the RO-APUF is lower than that of the numerical simulation. The bit error rate or unreliability of the RO-APUF is about 3\% to 5\%, while the unreliability of the numerically simulated APUF is about 5\% to 8\%. More precisely, as we repeatedly measure the response to the same challenge $10$ times to obtain $11$ reliability SCI categories (categories $0$ and $10$ correspond to reliable responses, while the remaining categories $1$ to $9$ correspond to unreliable responses), the RO-APUF produces fewer unreliable responses than the numerical simulation. Therefore, the contribution from the reliability SCI is reduced, which requires a larger training size. Fig.~\ref{fig:reliability statistics} supports this conjecture: many fewer responses fall into the unreliable categories ($1$--$9$) for the silicon-measurement-based strong PUFs than for the numerical simulations. Nonetheless, MLMSA and SLMSA can still successfully model the XOR-RO-APUF and OAX-RO-APUF once the number of unreliable responses (i.e., those in categories $1$--$9$) increases. \subsection{Curse of Dimensionality}\label{sec:curse} We see that SLMSA faces a high-dimensional output once multiple SCIs are used together, especially when the dimensionality per SCI is relatively high. In all previous experiments, we combined the SCI with the response information for the SLMSA in a way that keeps its output dimensionality within a relatively small range. For instance, the output dimensionality of SLMSA is $62$ when attacking the $30$-XOR-APUF with power SCI and response. We further evaluate the SLMSA performance when power SCI, reliability SCI and response are all used and compare it with the Three-Head MLMSA. As detailed in Table~\ref{tab:two sci}, the accuracy of SLMSA becomes inferior to MLMSA when the $l$ of the $l$-XOR-APUF or the number $m$ of repeated measurements for the reliability SCI increases. In addition, SLMSA always needs more time to complete the attack on the same-scale $l$-XOR-APUF.
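The output-width arithmetic behind this curse of dimensionality can be sketched as follows (SLMSA forms one class per combination of labels, so widths multiply, while MLMSA keeps one head per channel, so widths add):

```python
def slmsa_output_width(response_classes, sci_classes):
    """Single-output multi-class attack: one class per label combination."""
    width = response_classes
    for c in sci_classes:
        width *= c
    return width

def mlmsa_output_widths(response_classes, sci_classes):
    """Multi-head attack: one output head per information channel."""
    return [response_classes] + list(sci_classes)
```

For a $10$-XOR-APUF with power SCI ($11$ classes) and $(10,11)$ reliability SCI, SLMSA needs $2 \times 11 \times 11 = 242$ output classes, whereas MLMSA needs heads of widths $(2, 11, 11)$, i.e., $24$ output neurons in total.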
Notably, our evaluations stop at $l=20$; the discrepancy between the MLMSA and SLMSA will become larger as $l$ increases further. The inferior accuracy is potentially because the SLMSA always treats the contributions of the response, power SCI, and reliability SCI as equal. In other words, it cannot flexibly tune the contribution weight per SCI to better reflect their impacts, while our MLMSA fundamentally obviates such a limitation. As for the longer SLMSA training time, this is due to the output layer width (directly related to the number of classes) being larger, which increases the number of parameters to be updated during the training phase and therefore lengthens the training time. \begin{table}[] \caption{Comparison between MLMSA and SLMSA when all power and reliability SCIs are used.} \label{tab:two sci} \centering \resizebox{0.45\textwidth}{!}{ \begin{threeparttable} \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Attack} & \textbf{XORnum} & \textbf{(m,cn)} & \textbf{Num.Classes} & \textbf{Response Acc} & \textbf{Time} \\ \hline \multirow{4}{*}{Three-Head} & 10 & (10,11) & (2,11,11) & 95.61\% & 8 min 36 s \\ \cline{2-6} & 12 & (10,11) & (2,13,11) & 94.87\% & 13 min 11 s \\ \cline{2-6} & 16 & (10,11) & (2,17,11) & 96.19\% & 33 min 2 s \\ \cline{2-6} & 20 & (10,11) & (2,21,11) & 95.08\% & 36 min 44 s \\ \hline \multirow{6}{*}{Multi-Class C} & 10 & (10,11) & 242 & 95.57\% & 24 min 24 s \\ \cline{2-6} & 10 & (19,20) & 440 & 96.61\% & 47 min 23 s \\ \cline{2-6} & 12 & (10,11) & 286 & 94.93\% & 48 min 30 s \\ \cline{2-6} & 12 & (19,20) & 520 & 94.68\% & 1 h 17 min \\ \cline{2-6} & 16 & (19,20) & 680 & 94.92\% & 3 h 54 min \\ \cline{2-6} & 20 & (19,20) & 840 & 93.59\% & 4 h 34 min \\ \hline \end{tabular} \begin{tablenotes} \footnotesize \item [*] ($m,cn$) means reliability side-channel information is obtained by repeating the measurement for $m$ times, which is divided into $cn$ categories/classes.
\item[**] Multi-Class C represents SLMSA that uses response, power SCI and reliability SCI to build CSPs. \item[***] MLMSA and SLMSA have the same number of hidden layers: $3$ when $l=10,12,16$, and $4$ when $l=20$. \item[****] The Num.Classes for Three-Head means the number of classes of (response, power, reliability). The Num.Classes for Multi-Class C means $2\times power\times reliability$. \end{tablenotes} \end{threeparttable} } \end{table} \subsection{Lightweight Cryptographic Module Incorporation} The general take-away of this study is that merely increasing the scale of a strong PUF, in particular an APUF variant, is no longer sufficient when it is confronted with rapidly evolving DL techniques, especially those combining response information and multiple SCIs. Therefore, the practical way of deploying strong PUFs appears to be incorporating lightweight cryptographic modules such as the Lockdown-PUF~\cite{yu2016lockdown}, TREVERSE constructions~\cite{gao2022treverse}, and RSO-APUF~\cite{zhang2020set} to protect the CRP interface. The overhead caused by such lightweight cryptographic modules can be lower than or comparable to the overhead incurred by scaling the APUF variants up, given that, e.g., even a 128-stage 30-XOR-APUF is breakable. \subsection{Limitations} We have evaluated the MLMSA efficacy through silicon measurements of the synthesized RO-APUFs---the response and reliability SCI are collected in this context. The attack characteristics match those of the numerical-simulation-based evaluations, with a slight difference in the training size used due to the small unreliability discrepancy between numerical simulation and silicon measurement. However, as in the case of SLMSA~\cite{liu2022multiclass}, the power SCI is not collected from silicon measurements; in our MLMSA evaluations, we use numerical simulation to generate this SCI.
In~\cite{liu2022multiclass}, the power information is collected from simulation models built with PSPICE. Silicon-measurement-based validations are preferable and have been conducted in other modeling resilience studies~\cite{ruhrmair2013puf,ruhrmair2014efficient,nguyen2019interpose}. It is worth validating MLMSA and SLMSA with silicon measurements, especially for the power SCI, in future work. \section{Conclusion} \label{sec:conclusion} This work proposes the MLMSA attack that constructively leverages multi-head DL to concurrently exploit useful multi-channel information to attack strong PUFs, particularly APUF variants. With this simple and efficient MLMSA attack, we have successfully attacked a 128-stage 30-XOR-APUF, a (9, 9)- and a (2, 18)-iPUF, and a (2, 2, 30)-OAX-APUF when CRPs, power SCI and reliability SCI are simultaneously used. With access to only easy-to-obtain reliability SCI and CRPs, MLMSA can stably break a 128-stage $10$-XOR-APUF, a ($2,2,8$)-OAX-APUF, and a $6$-loop FF-APUF, and statistically break a $12$-XOR-APUF and a ($2,2,9$)-OAX-APUF. None of these large-scale strong APUF variants had been broken by state-of-the-art attacks. We conclude that MLMSA can serve as an efficient technique for examining the modeling resilience of other existing or emerging strong PUFs, owing to its simplicity, efficacy and avoidance of an underlying mathematical model. \input{arxiv.bbl} \bibliographystyle{IEEEtran} \end{document}
\section{Introduction} \label{introduction} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \indent Let $M^n$ be a smooth and compact manifold of dimension $n\geq 2$ without boundary, and $X_0:M^n\longrightarrow {\mathbb R}^{n+1}$ be a smooth hypersurface immersion of $M^n$ which is strictly convex. We consider a smooth family of maps $X_t=X(\cdot, t)$ evolving according to \begin{eqnarray}\left\{\begin{array}{lll} \frac {\partial }{\partial t} X(x,t) &=& \{h(t)-H(x, t)\}{\bf v}(x,t), \quad x\in M^n,\\[2mm] X(\cdot, 0)&=& X_0, \end{array}\right. \end{eqnarray} where $H$ is the mean curvature of $M_t=X_t(M^n)$, ${\bf v}$ the outer unit normal vector field, and $h(t)$ a nonnegative continuous function. The curvature flow (1.1) is a strictly parabolic equation and short time existence easily follows from \cite {hp}. Therefore we suppose that the evolution equation (1.1) has a smooth solution on a maximal time interval $[0, T_{\max})$ for some $T_{\max}>0$. Different forcing terms often lead to different maximal time intervals. We always assume that $h(t)$ is continuous in $[0, T_{\max})$. If $h(t)=0$, (1.1) is just the well-known mean curvature flow \cite {h1}. In this case, (1.1) is contracting and $T_{\max}$ is finite. If $h(t)$ is the average of the mean curvature on $M_t$, i.e. $h(t)= {\int _{M_t}H_td\mu_t}/{\int _{M_t}d\mu_t}$, where $d\mu_t$ is the area element of $M_t$, then (1.1) is the volume preserving mean curvature flow \cite {h2}, which exists for all time $[0, \infty)$, and the solution converges to a round sphere. The surface area preserving mean curvature flow, for which $h(t)=\int_{M_t}H_t^2d\mu_t/\int_{M_t}H_td\mu_t$, also exists for all time and converges to a round sphere \cite {m1}.
The mixed volume preserving mean curvature flow \cite {m2}, for which $h(t)={\int _{M_t}HE_{k+1}d\mu_t }/{\int _{M_t}E_{k+1}d\mu_t}$, $k=-1, 0, 1, \cdots, n-1$, where $E_l$ is the $l$-th elementary symmetric function of the principal curvatures of $M_t$, generalizes the results of the volume preserving mean curvature flow \cite {h2} and the surface area preserving mean curvature flow \cite {m1}, and exists for all time and converges to a round sphere. In fact, it can be checked that if the forcing term $h$ is a small constant, the solution to (1.1) is still contracting, while if $h$ is large enough, the curvature flow (1.1) expands and the solution exists for all time. From the above, we see that different forcing terms $h(t)$ lead to different existence and convergence behaviour. A natural question is how to unify all these cases. In this paper, we study the curvature flow (1.1) with a general forcing term $h(t)$ such that the limit $\lim_{t\rightarrow T_{\max}} h(t)$ exists. We want to show that if the initial hypersurface is convex and compact, the shape of $M_t$ approaches that of a round sphere as $t\rightarrow T_{\max}$. In order to describe the shape of the limiting hypersurface, we carry out a normalization as in \cite {h1}. For any time $t$ at which the solution $X(\cdot, t)$ of (1.1) exists, let $\psi (t)$ be a positive factor such that the hypersurface $\widetilde{M}_t$ given by $$\widetilde{X}(x, t)=\psi (t)X(x,t)$$ has total area equal to $|M_0|$, the area of $M_0$: $$\int_{\widetilde{M}_t}d\widetilde{\mu}_t=|M_0|, \qquad \mbox {for all } t\in [0, T_{\max}).$$ After choosing the new time variable $\tilde{t}(t)=\int _0^t\psi^2(\tau)d\tau$, we will see that $\widetilde{X}$ satisfies the following evolution equation \begin{eqnarray} \left\{\begin{array}{l}\frac{\partial } {\partial \tilde{ t}} \widetilde{X} =\{\widetilde{h}-\widetilde{H}\}{\bf \widetilde{v}}+{\frac 1n} \widetilde{\theta}\widetilde{X}, \\[2mm] \widetilde{X}(\cdot, 0)= X_0, \end{array}\right.
\end{eqnarray} where $\widetilde{h}=\psi ^{-1}h$, $\widetilde{\theta}=\psi ^{-2}\theta$ and $\theta$ is given by $$\theta=-\frac {\int _{M}(h-H)Hd\mu}{\int _Md\mu}.$$ In section 3, we will construct a time sequence $\{T_i\}$ such that $T_i\rightarrow T_{\max}$ as $i\rightarrow \infty$ and the limit $$\lim_{T_{i}\rightarrow T_{\max}}\psi (T_i)=\Lambda$$ exists. We now state our main theorem: \begin{theorem}Let $n\geq 2$ and let $M_0$ be an $n$-dimensional smooth, compact and strictly convex hypersurface immersed in ${\mathbb R}^{n+1}$. Then for any nonnegative continuous function $h(t)$, there exists a unique, smooth solution to the evolution equation $(1.1)$ on a maximal time interval $[0, T_{\max})$. If additionally the following limit exists and satisfies \begin{equation} \lim_{t\rightarrow T_{\max}}h(t)=\overline h< +\infty, \end{equation} then we have:\\[-3mm] $(I)$ If $\Lambda =\infty$, then $T_{\max}<\infty$ and the curvature flow $(1.1)$ converges uniformly to a point as $t\rightarrow T_{\max}$. Moreover the normalized equation $(1.2)$ has a solution $\widetilde{X}(x,\tilde{t})$ for all times $0\leq \tilde{t}< \infty$, and the hypersurfaces $\widetilde{M}(x, \tilde{t})$ converge to a round sphere of area $|M_0|$ in the $C^{\infty}$-topology as $\tilde{t}\rightarrow \infty$.\\[-4mm] $(II)$ If $0<\Lambda<\infty$, then $T_{\max}=\infty$, and the solutions to $(1.1)$ converge uniformly to a round sphere in the $C^{\infty}$-topology as $t\rightarrow \infty$.\\[-4mm] $(III)$ If $\Lambda =0$, then $T_{\max}=\infty$. Moreover, if $\overline h\neq 0$, the solutions to $(1.1)$ expand uniformly to $\infty$ as $t\rightarrow \infty$, and if the rescaled solutions to $(1.2)$ converge to a smooth hypersurface, then the limit must be a round sphere of total area $|M_0|$.
\end{theorem} \begin{remark} $(i)$ One can check that Theorem 1 includes Huisken's mean curvature flow \cite {h1} and volume preserving mean curvature flow \cite {h2}, McCoy's surface area preserving mean curvature flow \cite {m1} and mixed volume preserving mean curvature flow \cite {m2}.\\[1mm] $(ii)$ The assumption $(1.3)$ may seem unnatural, since the maximal existence time $T_{\max}$ of $(1.1)$ often depends on $h(t)$. In fact we can use the stronger assumption that $h(t)$ is a nonnegative continuous function on $[0, \infty)$ satisfying $\lim_{t\rightarrow \infty}h(t)< +\infty$. Our result still includes all the cases in $(i)$. \end{remark} The extreme cases of Theorem 1 can also be considered. \begin{remark} $(i)$ For case $(I)$, when $\overline h=\infty$, $T_{\max}$ may not be finite, even though $M_t$ is contracting (see Remark 3 $(ii)$ in section 4). A sphere with $r(t)=\frac{1}{t+1}$, $h(t)=n(t+1)-\frac{1}{(t+1)^2}$, is such an example, whose maximal existence time is $T_{\max}=\infty$.\\[1mm] $(ii)$ For case $(III)$, if $\overline h=0$, $T_{\max}$ is also infinite (see section 6). We do not know whether the solutions to $(1.1)$ expand uniformly to $\infty$ as $t\rightarrow \infty$, but we can find special solutions satisfying that condition. In fact, a sphere with $r(t)=\sqrt{t+1}$, $h(t)=\frac{2n+1}{2\sqrt{t+1}}$, is such a particular example, for which $M_t$ expands to infinity. If $\overline h=\infty$, by a discussion similar to that in section 6, we can show that $M_t$ expands to infinity, but $T_{\max}$ may not be $\infty$. For example, the sphere $r(t)=\frac{1}{1-t}$, $h(t)=n(1-t)+\frac{1}{(1-t)^2}$ is a solution to (1.1), for which $T_{\max}=1$, and $r\rightarrow \infty$ as $t\rightarrow 1$. \end{remark} We remark that curvature flows in Euclidean space with various forcing terms $h(t)$ were also studied by Schn\"urer-Smoczyk \cite {ss} and Liu-Jian \cite {lj1}.
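The explicit sphere solutions in the remark above can be checked symbolically: on a round sphere of radius $r(t)$ in ${\mathbb R}^{n+1}$ one has $H=n/r$, so (1.1) reduces to the ODE $\dot r=h(t)-n/r$. A sketch of the verification (our own illustration, using sympy):

```python
import sympy as sp

t = sp.symbols('t', positive=True)
n = sp.symbols('n', positive=True)

def flow_residual(r, h):
    # residual of dr/dt = h(t) - n/r, i.e. (1.1) restricted to round spheres
    return sp.simplify(sp.diff(r, t) - (h - n / r))

# Remark (i): contracting sphere whose maximal existence time is infinite
assert flow_residual(1/(t + 1), n*(t + 1) - 1/(t + 1)**2) == 0
# Remark (ii): expanding sphere with T_max = infinity
assert flow_residual(sp.sqrt(t + 1), (2*n + 1)/(2*sp.sqrt(t + 1))) == 0
# Remark (ii): expanding sphere with finite T_max = 1
assert flow_residual(1/(1 - t), n*(1 - t) + 1/(1 - t)**2) == 0
```

Each vanishing residual confirms that the stated pair $(r(t), h(t))$ solves (1.1).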
When the ambient space is a Minkowski space, Aarons \cite {aa} studied the forced mean curvature flow of graphs and obtained long time existence and convergence under suitable assumptions on $h(t)$. A kind of trichotomy of the initial hypersurface was used by Chou-Wang \cite {cw} in the logarithmic Gauss curvature flow. This paper is organized as follows: Section 2 introduces some known results on the curvature flow (1.1) and some preliminary facts about convex hypersurfaces, which will be used later. In section 3, we carry out the normalization of (1.1) and estimate the inner and outer radii of the rescaled convex hypersurfaces. Depending on the limiting behaviour of the scaling factor $\psi (t)$ as $t\rightarrow T_{\max}$, long time existence and convergence of solutions to (1.1) or (1.2) are proved in sections 4, 5 and 6 respectively, which completes the proof of Theorem 1. \section{Preliminaries} \label{section:2} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \indent Let $M$ be a smooth hypersurface immersion in ${\mathbb R}^{n+1}$. We will use the same notation as in \cite {h2}. In particular, for a local coordinate system $\{x^1, \cdots, $ $x^n\}$ of $M$, $g=g_{ij}$ and $A=h_{ij}$ denote respectively the metric and second fundamental form of $M$. Then the mean curvature and the square of the second fundamental form are given by $$H=g^{ij}h_{ij}, \qquad |A|^2 =g^{ij}g^{lm}h_{il}h_{jm}, $$ where $g^{ij}$ is the $(i,j)$-entry of the inverse of the matrix $(g_{ij})$. In the sequel we will use $\lambda _i$ to denote the $i$-th principal curvature of the hypersurface. Throughout this paper we sum over repeated indices from $1$ to $n$ unless otherwise indicated. The system (1.1) is a strictly parabolic equation for which short time existence is well known. The gradient and the Laplace-Beltrami operator on $M_t$ are denoted by $\nabla $ and $\triangle$ respectively.
As in \cite {h2,m2}, we have the following evolution equations for various geometric quantities under the flow (1.1) \begin{lemma} The following evolution equations hold for any solution to equation (1.1)\\[-2mm] (i) ~~~$\frac {\partial }{\partial t}g_{ij}=2(h-H)h_{ij}$.\\[-3mm] (ii) ~~$\frac {\partial }{\partial t}d\mu _t=H(h-H)d\mu _t$.\\[-3mm] (iii) ~ $\frac {\partial }{\partial t}{\bf v}=\nabla H$.\\[-3mm] (iv) ~~$\frac {\partial }{\partial t}h_{ij}=\triangle h_{ij}+(h-2H)h_{ik}h_j^k+|A|^2h_{ij}$.\\[-3mm] (v) ~~~$\frac {\partial }{\partial t}H=\triangle H-(h-H)|A|^2$.\\[-3mm] (vi) ~~ $\frac {\partial }{\partial t}|A|^2=\triangle |A|^2-2|\nabla A|^2+2|A|^4-2h\emph{tr}(A^3) $.\\[2mm] Here $d\mu _t$ is the area element of $M_t$, and $h_i^j=h_{ik}g^{kj}$. \end{lemma} Since $M_0$ is strictly convex, the curvature flow (1.1) preserves the convexity of all $M_t$ as long as the solution exists \cite {h2,m2}. \begin{lemma} (i) If $h_{ij}\geq 0$ at $t=0$, then it remains so on $[0, T_{\max})$.\\[-3mm] (ii) If initially $H>0$ and $h_{ij}\geq \varepsilon Hg_{ij}$ for some $\varepsilon \in (0, \frac 1n]$, then $h_{ij}\geq \varepsilon Hg_{ij}$ remains true, with the same $\varepsilon $ on $[0, T_{\max})$. \end{lemma} This leads to the following consequence of convexity \cite {h1} \begin{lemma} If initially $H>0$ and $h_{ij}\geq \varepsilon Hg_{ij}$ for some $\varepsilon \in (0, \frac 1n]$ then\\[-2mm] (i) $H\emph{tr}(A^3)-|A|^4 \geq n\varepsilon ^2H^2 (|A|^2-\frac 1nH^2)$.\\[-3mm] (ii) $|H\nabla _ih_{kl}-h_{kl}\nabla _iH|^2\geq \frac 12 \varepsilon ^2H^2|\nabla H|^2$. \end{lemma} Let $|M|$ be the area of $M$, and $|V|$ the volume of the region $V$ contained inside $M$. 
Lemma 2 implies that every solution of (1.1) is a compact, convex hypersurface, therefore we have the following relations between $|V|$ and $|M|$ by Aleksandrov-Fenchel inequality and divergence theorem (see Theorem 2.3 in \cite {m2}) \begin{lemma} Let $M$ be a compact and convex hypersurface embedded into ${\mathbb R}^{n+1}$ satisfying $H>0$ and $h_{ij}\geq \varepsilon Hg_{ij}$, for some $\varepsilon \in (0, \frac 1n]$. Then there exists a constant $c_1$ depending on $n$ and $\varepsilon$ such that $$c_1^{-1}|M|^{\frac {n+1}{n}}\leq |V|\leq c_1|M|^{\frac {n+1}{n}}.$$ \end{lemma} In order to study (1.1), the following facts of convex hypersurfaces will be used. Recall that the second fundamental form of a convex hypersurface $X:M^n\longrightarrow {\mathbb R}^{n+1}$ is positive definite, and the outer unit normal vector field ${\bf v}$ to the hypersurface defines the Gauss map ${\bf v}: M^n\longrightarrow {\mathbb S}^n$. Since the hypersurface is convex and compact, i.e. the Gauss map is everywhere non-degenerate, we use the Gauss map to reparametrize the convex hypersurface (see \cite {an,u,z}) $$X=X({\bf v}^{-1}(z)), \quad z\in {\mathbb S}^n.$$ Then the support function is defined as $$\mathcal{Z}(z)=\langle z, X({\bf v}^{-1}(z))\rangle, \quad z\in {\mathbb S}^n.$$ If we denote by $\overline{\nabla}$ and $\overline{g}$ the covariant derivative and standard metric on ${\mathbb S}^n$, the hypersurface can be represented by the support function $$X(z)=\mathcal{Z}(z)z+\overline {\nabla}\mathcal{Z}(z).$$ The second fundamental form now can be calculated directly from the support function as follows \begin{equation} h_{ij}=\overline {\nabla}_i\overline {\nabla }_j\mathcal{Z}+\mathcal{Z}\overline g_{ij} \quad \mbox{ on } {\mathbb S}^n, \end{equation} and the metric is given by \begin{equation} g_{ij}=h_{ik}\overline g^{kl}h_{lj}. 
\end{equation} The width function of the hypersurface $X$ is defined by $$w(z)=\mathcal{Z}(z)+\mathcal{Z}(-z), \quad z\in {\mathbb S}^n.$$ In order to control the width of a convex hypersurface, we cite a theorem of Andrews \cite {an} \begin{lemma} Let $M$ be a smooth, compact and convex hypersurface in ${\mathbb R}^{n+1}$. Suppose that there exists a positive constant $c_2$ such that $M$ satisfies the pointwise pinching estimate $\lambda _{\max}(x)\leq c_2\lambda _{\min}(x)$, for every $x\in M$. Then the following estimate holds $$w_{\max}\leq c_2w_{\min},$$ where $\lambda _{\max}(x)$ and $\lambda _{\min}(x)$ are the largest and smallest principal curvatures of $M$ at $x$ respectively, and $w_{\max}=\max_{z\in {\mathbb S}^n}w(z)$ and $w_{\min}=\min_{z\in {\mathbb S}^n}w(z)$. \end{lemma} By this lemma, a pinching estimate on the inner radius $r_{in}$ and outer radius $r_{out}$ immediately follows \cite {an} \begin{corollary} Let $M$ be a smooth, compact and convex hypersurface in ${\mathbb R}^{n+1}$. Suppose that there exists a positive constant $c_2$ such that $M$ satisfies the pointwise pinching estimate $\lambda _{\max}(x)\leq c_2\lambda _{\min}(x)$, for every $x\in M$. Then there exists a constant $c_3$ such that $$r_{out}\leq c_3r_{in}.$$ \end{corollary} For a convex hypersurface $M^n$, we can also parametrize it as a graph over the unit sphere ${\mathbb S}^n$ (cf. \cite {an,g}, see also \cite {z}). Let $$\pi (x)=\frac {X(x)}{|X(x)|}:M^n\longrightarrow {\mathbb S}^n,$$ then we write the solution $M_t$ to equation (1.1) as a radial graph \begin{equation} X(x,t)=r(z,t)z: {\mathbb S}^n\longrightarrow {\mathbb R}^{n+1}, \end{equation} where $r(z,t)=|X(\pi ^{-1}(z),t)|$. We calculate the metric of $M_t$ in terms of $r$ as $$ g_{ij}=r^2\overline{g}_{ij}+\overline{\nabla }_ir\overline{\nabla }_jr, $$ and its inverse is \begin{equation} g^{ij}=r^{-2}\left(\overline{g}^{ij}-\frac {\overline{\nabla }^ir\overline{\nabla }^jr}{r^2+|\overline{\nabla}r|^2 }\right). 
\end{equation} The outer unit normal vector and the second fundamental form of $M_t$ in terms of $r$ are given respectively by \begin{equation} {\bf v}=\frac 1{\sqrt{r^2+|\overline{\nabla }r|^2}}(rz-\overline {\nabla }r), \end{equation} and \begin{equation} h_{ij}=\frac 1{\sqrt{r^2+|\overline{\nabla }r|^2}}(-r\overline{\nabla }_i\overline{\nabla }_jr+2\overline{\nabla }_ir\overline{\nabla }_jr+r^2\overline{g}_{ij}). \end{equation} \section{The Normalized Equation} \label{section:3} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \indent The solution of the curvature flow (1.1) may shrink to a point if $h$ is small enough (e.g. $h=0$ \cite {h1}), or expand to infinity if $h$ is large enough (e.g. $h$ is a constant and $h>\sup_{x\in M^n}H(x,0)$). The solution can also converge to a smooth hypersurface for some special initial hypersurfaces and $h$ (e.g. the volume preserving mean curvature flow \cite {h2}, the surface area preserving mean curvature flow \cite {m1}). In order to see this, we normalize the equation (1.1) by keeping some geometric quantity fixed, for example, as in \cite {h1}, the total area of the hypersurfaces $M_t$. As mentioned in section 1, we multiply the solution $X$ of (1.1) at each time $0\leq t<T_{\max}$ by a positive constant $\psi (t)$ such that the hypersurface $\widetilde{M}_t$ given by $$\widetilde{X}(x, t)=\psi (t)X(x,t)$$ has total area equal to $|M_0|$, the area of $M_0$: \begin{equation}\int_{\widetilde{M}_t}d\widetilde{\mu}_t=|M_0|, \qquad 0\leq t<T_{\max}. \end{equation} Then we introduce a new time variable $\tilde{t}(t)=\int _0^t\psi^2(\tau)d\tau$, so that $\frac {\partial \tilde{t}}{\partial t}=\psi^2$. As in \cite {h1,an}, for a geometric quantity $P$ on $M_t$, we denote by $\widetilde{P}$ the corresponding quantity on the rescaled hypersurface $\widetilde{M}_{\tilde{t}}$.
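Before deriving the normalized equation in general, the normalization can be made explicit on a round sphere: fixing the total area forces $\psi(t)=r(0)/r(t)$, and one can check symbolically that $\psi^{-1}\psi'=\theta/n$ with $\theta=-(h-H)H$ (on a sphere the integrands in the definition of $\theta$ are constant). A sketch using sympy (our own illustration):

```python
import sympy as sp

t = sp.symbols('t')
n, r0 = sp.symbols('n r_0', positive=True)
h = sp.Function('h')(t)
r = sp.Function('r')(t)

H = n / r                      # mean curvature of a round sphere of radius r(t)
psi = r0 / r                   # area-fixing factor: psi(t)^n * |M_t| = |M_0|
theta = -(h - H) * H           # theta on the sphere (integrands are constant)

# substitute the flow (1.1) on spheres, dr/dt = h - n/r
lhs = (sp.diff(psi, t) / psi).subs(sp.Derivative(r, t), h - n / r)
assert sp.simplify(lhs - theta / n) == 0
```

The vanishing difference confirms the relation $\psi^{-1}\partial_t\psi=\theta/n$ in this special case.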
By direct calculation we have \begin{eqnarray*} \widetilde{g}_{ij}=\psi^2g_{ij},& \widetilde{h}_{ij}=\psi h_{ij},\\ \widetilde{H}=\psi^{-1}H,& \quad |\widetilde{A}|^2=\psi ^{-2}|A|^2,\\ d\widetilde{\mu}=\psi ^nd\mu,&\widetilde{w}=\psi w, \end{eqnarray*} and so on. If we differentiate (3.1) with respect to time $t$, we obtain \begin{eqnarray*}\psi ^{-1}\frac {\partial \psi}{\partial t}=\frac 1n \frac {\int _M(H-h)Hd\mu}{\int _{M}d\mu}=\frac 1n \theta. \end{eqnarray*} Now by differentiating $\widetilde{X}$ with respect to $\tilde{t}$, we derive the normalized evolution equation on a new maximal time interval $0\leq \tilde{t}<\widetilde{T}_{\max}$ \begin{eqnarray} \left\{\begin{array}{l} \frac{\partial }{\partial \tilde{t}} \widetilde{X}(x,\tilde{t}) =\{\widetilde{h}(\tilde{t})-\widetilde{H}(x, \tilde{t})\}{\bf \widetilde{v}}(x,\tilde{t})+{\frac 1n} \widetilde{\theta}(\tilde{t})\widetilde{X}(x, \tilde{t}), \\[2mm] \widetilde{X}(\cdot, 0)= X_0, \end{array}\right. \end{eqnarray} where $\widetilde{h}=\psi ^{-1}h$, $\widetilde{\theta}=\psi ^{-2}\theta$ and $\theta$ is given by $$\theta=-\frac {\int _{M}(h-H)Hd\mu}{\int _Md\mu}.$$ Since $\widetilde{M}_{\tilde{t}}$ is just a rescaling of the convex hypersurface $M_t$, it is also convex, and we can write $M_t$ or $\widetilde{M}_{\tilde{t}}$ as a graph over the unit sphere as in (2.3). By (1.1) and (2.4)$\sim$(2.6) we have the evolution equation for $r(t)$ \begin{equation} \frac {\partial r}{\partial t}=\frac hr\sqrt{r^2+|\overline{\nabla}r|^2}+ r^{-3}\left(\overline{g}^{ij}-\frac {\overline{\nabla }^ir\overline{\nabla }^jr}{ r^2+|\overline{\nabla}r|^2 }\right)\left(r\overline{\nabla }_i\overline{\nabla }_jr-2\overline{\nabla }_ir\overline{\nabla }_jr-r^2\overline{g}_{ij}\right).
\end{equation} Then $\tilde{r}=\psi r$ satisfies the evolution equation \begin{eqnarray} \frac {\partial \tilde{r}}{\partial \tilde{t}}&=&\frac {\widetilde{\theta}}{n} \tilde{r}+\frac {\widetilde{h}}{\tilde{r}}\sqrt{\tilde{r}^2 +|\overline{\nabla}\tilde{r}|^2}\nonumber\\ &&+ \tilde{r}^{-3}\left(\overline{g}^{ij}-\frac {\overline{\nabla }^i\tilde{r}\overline{\nabla }^j\tilde{r}}{ \tilde{r}^2+|\overline{\nabla}\tilde{r}|^2 }\right) \left(\tilde{r}\overline{\nabla }_i\overline{\nabla }_j\tilde{r}-2\overline{\nabla }_i\tilde{r}\overline{\nabla }_j\tilde{r}-\tilde{r}^2\overline{g}_{ij}\right). \end{eqnarray} In the remainder of this section, we estimate the outer and inner radii of the normalized hypersurfaces $\widetilde{M}$. First, since at each time the whole configuration of $\widetilde{M}$ is only dilated by a constant factor $\psi $, the solutions to (3.2) are compact and convex hypersurfaces, and Lemma 2 still holds. This means that $$\widetilde{h}_{ij}\geq \varepsilon \widetilde{H}\widetilde{g}_{ij},$$ for some $\varepsilon \in (0, \frac 1n]$. The hypersurface $\widetilde{M}$ encloses a region $\widetilde{V}$ of volume $|\widetilde{V}|$. Then by Lemma 4 \begin{equation}c_1^{-1}|\widetilde{M}|^{\frac {n+1}{n}}\leq |\widetilde{V}|\leq c_1|\widetilde{M}|^{\frac {n+1}{n}}. \end{equation} Since $|\widetilde{V}|$ is controlled by the volumes of the inner and outer balls, $$c_4\tilde{r}_{in}^{n+1}\leq |\widetilde{V}|\leq c_4\tilde{r}_{out}^{n+1},$$ for a constant $c_4$, and the total area of $\widetilde{M}$ is fixed, (3.5) yields the estimate \begin{equation} \tilde{r}_{out}\geq c_5 \mbox { and } \tilde{r}_{in}\leq c_6, \end{equation} for two positive constants $c_5$ and $c_6$. By Corollary 1 and (3.6) we have \begin{proposition} The inner and outer radii of $\widetilde{M}_{\tilde{t}}$ are uniformly bounded from below and above, i.e.
$$c_7^{-1}\leq \tilde{r}_{in}\leq \tilde{r}_{out}\leq c_7$$ for some constant $c_7$. \end{proposition} Now for any given time sequence $\{T_i\}$, $T_i\in [0, T_{\max})$, such that $T_i\rightarrow T_{\max}$ as $i\rightarrow \infty$, there corresponds a sequence $\{\psi_i=\psi(T_i)\}$, which has at least one accumulation point in $[0, \infty]$. Denote by $\Lambda _i$ the minimal accumulation point of the sequence $\{\psi_i=\psi(T_i)\}$. We define $\Lambda $ to be the infimum of $\Lambda _i$ over all possible sequences $\{\psi_i=\psi(T_i)\}$, i.e. \begin{eqnarray*} \Lambda =\inf\left\{\Lambda _i|\Lambda _i \mbox { is the minimal accumulation point of a sequence }\{\psi_i=\psi(T_i)\}\right.,\\\left.\mbox{ where }\{T_i\} \mbox{ is any sequence in }[0, T_{\max}) \mbox { such that } T_i\rightarrow T_{\max} \mbox { as } i\rightarrow \infty\right\}. \end{eqnarray*} Therefore, by a diagonal argument, we obtain a subsequence, still denoted by $\{\psi_i=\psi(T_i)\}$, which converges to $\Lambda $ as $T_i\rightarrow T_{\max}$ (or $i\rightarrow \infty$), that is, we have the limit \begin{equation} \lim_{i\rightarrow \infty}\psi _i=\Lambda. \end{equation} There are three cases in terms of the limit $\Lambda$: $\Lambda =\infty$, $0<\Lambda <\infty$ and $\Lambda =0$. We will consider the three cases separately in the sequel. \section{Case (I) $\Lambda =\infty$} \label{section:4} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \indent In this section we consider the case $\Lambda =\infty$ and prove Theorem 1(I). Since $\tilde{r}_{out}=r_{out}\psi$, we have by Proposition 1 $$\frac {c_7^{-1}}{\psi}\leq r_{out}\leq \frac {c_7}{\psi}, $$ which implies that for the sequence $\{T_i\}$ in the last section (see (3.7)), we have the limit \begin{equation} \lim_{T_i\rightarrow T_{\max}} r_{out}(T_i)=0.
\end{equation} Hence, for any given positive number $r^*$, there exists a time $T^*<T_{\max}$ such that $r_{out}(T_i)<r^*$ for all $T_i\geq T^*$. By assumption (1.3), $h(t)$ has a uniform upper bound $h^+$ on $[0, T_{\max})$ (we can always assume $h^+>0$, even in the case of the mean curvature flow, i.e. $h(t)=0$). We now choose $r^*$ less than $ n/{h^+}$. We follow an idea in \cite {an,z} to prove the following lemma, which implies that when $t$ is very near $T_{\max}$, $M_t$ is in fact contracting. \begin{lemma} When $t\geq T^*$, the regions enclosed by the hypersurfaces $M_t$ are decreasing. Furthermore $T_{\max}<\infty$, and the solutions to (1.1) converge uniformly to a point in ${\mathbb R}^{n+1}$ as $t\rightarrow T_{\max}$. \end{lemma} \begin{proof} Let $\partial B_{r^*}(O)$ be a sphere in ${\mathbb R}^{n+1}$ centered at the origin $O$, with radius $r^*$. Since the outer radius of $M_{T^*}$ is less than $r^*$, without loss of generality we may assume that the hypersurface $M_{T^*}$ is enclosed by $\partial B_{r^*}(O)$. Now we evolve the sphere $\partial B_{r^*}(O)$ by (1.1); its radius $r_B(t)$ satisfies \begin{eqnarray}\left\{\begin{array}{l}\frac {dr_B(t)}{dt} =h-\frac n{r_B(t)}\leq h^+-\frac n{r_B(t)},\quad t\geq T^*,\\[2mm] r_B(T^*)=r^*, \end{array} \right. \end{eqnarray} which yields that $r_B(t)$ is decreasing because $r^*<n/{h^+}$. Then by the containment principle, which can easily be derived from (3.3), we see that the regions enclosed by $M_t$ are decreasing for $t\geq T^*$. Furthermore, it can be checked that any solution of the differential inequality (4.2) satisfies \begin{equation} r_B(t)+\frac n{h^+} \log (n-{h^+}r_B(t))\geq h^+(t-T^*)+r^*+\frac n{h^+}\log(n-h^+r^*), \end{equation} which yields the finiteness of $T_{\max}$, since the left hand side of (4.3) is uniformly bounded for $t\geq T^*$.
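As a sanity check on $(4.3)$, one can verify symbolically that along the comparison ODE $\dot r_B=h^+-n/r_B$ the quantity $r_B+\frac{n}{h^+}\log(n-h^+r_B)$ grows at exactly the rate $h^+$; integrating from $T^*$ then gives $(4.3)$ with equality, and the inequality case follows from the sign of the factor $-h^+r_B/(n-h^+r_B)$. A sketch using sympy (our own illustration):

```python
import sympy as sp

r, n, hp = sp.symbols('r n h_plus', positive=True)

# left-hand side of (4.3), viewed as a function of the radius r_B
F = r + (n / hp) * sp.log(n - hp * r)

# chain rule along dr/dt = h^+ - n/r gives dF/dt = F'(r) * (h^+ - n/r)
dF_dt = sp.simplify(sp.diff(F, r) * (hp - n / r))
assert sp.simplify(dF_dt - hp) == 0
```

Since $F$ is bounded above for $r_B\in(0, n/h^+)$ while growing at rate at least $h^+$, the finiteness of $T_{\max}$ follows.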
By the convexity in Lemma 2, the pinching estimate in Corollary 1 will imply the uniform convergence of solutions to (1.1) to a point if we can show that the enclosed area of $M_t$ tends to $0$ as $t\rightarrow T_{\max}$. If this were not true, we could place a small ball $B_{r_0}(x_0)$ in the region enclosed by $M_t$ for all $t\in [T^*, T_{\max})$. Again, without loss of generality, we assume $x_0$ is the origin. Then the diameter of $M_t$ is uniformly bounded from below, and $|\overline{\nabla}r|$ is also uniformly bounded by convexity. Therefore equation (3.3) is a uniformly parabolic equation with bounded coefficients. Hence we can apply the standard regularity theory of uniformly parabolic equations (cf. \cite {k} or \cite {an,z}) to conclude that the solution to (3.3) cannot become singular at $t=T_{\max}$, which is a contradiction. Therefore $X(\cdot, t)$ must converge to a point as $t\rightarrow T_{\max}$. This completes the proof of the lemma. \end{proof} \begin{remark} (i) From the proof of Lemma 6, we see that the containment principle implies that $r_{out}$ tends to zero as $t\rightarrow T_{\max}$. Therefore, by Proposition 1 again, the function $\psi(t)$ must tend to infinity as $t\rightarrow T_{\max}$, i.e. \begin{equation}\lim_{t\rightarrow T_{\max}}\psi(t) =\infty.\end{equation} (ii) We can see that for $\overline h=\infty$, (1.1) still contracts to a point. In fact, from the limit of $\psi(T_i)$ in section 3, we see that $\Lambda $ is the smallest limit of $\psi $. That is to say, if $\Lambda =\infty$, then for any sequence $\{T_j\}\subset [0, T_{\max})$ satisfying $T_j\rightarrow T_{\max}$ as $j\rightarrow \infty$, $\lim _{j\rightarrow \infty}\psi(T_j)=\infty$. Therefore, similarly by Proposition 1, the inner and outer radii of the evolving hypersurfaces all tend to zero as $t\rightarrow T_{\max}$. Then the containment principle implies that the solutions to $(1.1)$ converge to a point as $t\rightarrow T_{\max}$ for all possible limits of $h(t)$.
\end{remark} To understand the solution $X(\cdot, t)$ near the maximal time $T_{\max}$, we consider the solution of the rescaled equation (3.2). We want to bound the curvature $\widetilde{H}$ of $\widetilde{M}_{\tilde{t}}$; for this purpose, we will use a trick of Chow (Tso) \cite {t} (see also \cite {an,m2,z}) and consider the function \begin{equation}\Phi=\frac {H}{\mathcal{Z}-\alpha}, \end{equation} for a constant $\alpha $ to be chosen later. First we compute the evolution equation of $\Phi$. \begin{lemma} For $t\in[0, T_{\max})$ and any constant $\alpha$ we have \begin{eqnarray} \frac {\partial }{\partial t}\Phi &=& g^{ij}\overline{\nabla}_i\overline{\nabla}_j\Phi+\frac 2{\mathcal{Z}-\alpha}g^{ij}\overline{\nabla }_i\Phi \overline{\nabla }_j\mathcal{Z}\nonumber\\ &&+\frac 1{(\mathcal{Z}-\alpha)^2}\left\{2H^2-hH-\alpha H|A|^2-h(\mathcal{Z}-\alpha)|A|^2\right\}. \end{eqnarray} \end{lemma} \begin{proof} The proof is just the one in \cite {m2}. Since we shall consider the evolution equations of similar functions in sections 5 and 6, we outline it here.
We first have $$\overline {\nabla }_i\Phi=\frac{\overline{\nabla}_iH}{\mathcal{Z}-\alpha}-\frac{H\overline {\nabla }_i\mathcal{Z}}{(\mathcal{Z}-\alpha)^2},$$ and \begin{eqnarray*} \overline {\nabla }_i\overline {\nabla }_j\Phi=\frac{\overline {\nabla }_i\overline{\nabla}_jH}{\mathcal{Z}-\alpha}-\frac{\overline {\nabla }_iH\overline {\nabla }_j\mathcal{Z}+\overline {\nabla }_i\mathcal{Z}\overline {\nabla }_jH}{(\mathcal{Z}-\alpha)^2}-\frac{H\overline {\nabla }_i\overline{\nabla}_j\mathcal{Z}}{(\mathcal{Z}-\alpha)^2}+\frac{2H\overline {\nabla }_i\mathcal{Z}\overline{\nabla}_j\mathcal{Z}}{(\mathcal{Z}-\alpha)^3}, \end{eqnarray*} which yields \begin{equation}g^{ij}\overline {\nabla }_i\overline {\nabla }_j\Phi=\frac{g^{ij}\overline {\nabla }_i\overline{\nabla}_jH}{\mathcal{Z}-\alpha}-\frac {2g^{ij}\overline {\nabla }_i\Phi\overline{\nabla}_j\mathcal{Z}}{\mathcal{Z}-\alpha}-\frac {Hg^{ij}\overline {\nabla }_i\overline{\nabla}_j\mathcal{Z}}{(\mathcal{Z}-\alpha)^2}. \end{equation} By differentiating the support function with respect to time $t$ we have $$\frac {\partial \mathcal{Z}}{\partial t }=h-H.$$ By using (2.2), one has \begin{eqnarray*} H=g^{ij}h_{ij}=\overline{g}_{ij}(h^{-1})^{ij}, \end{eqnarray*} where $(h^{-1})^{ij}$ is the inverse of $h_{ij}$. Thus by (2.1) we have the evolution equation of $H$ in terms of the connection on ${\mathbb{S}}^n$ \begin{eqnarray*} \frac {\partial H}{\partial t}=g^{ij}\left[\overline {\nabla }_i\overline{\nabla}_jH+(H-h)\overline{g}_{ij}\right]. \end{eqnarray*} Then the time derivative of $\Phi$ is given by \begin{equation}\frac{\partial \Phi }{\partial t}= \frac {g^{ij}}{\mathcal{Z}-\alpha}\left[\overline {\nabla }_i\overline{\nabla}_jH+(H-h)\overline{g}_{ij}\right]-\frac {H(h-H)}{(\mathcal{Z}-\alpha)^2}. \end{equation} Now by (2.2) again, we have the identity $g^{ij}\overline{g}_{ij}=|A|^2$. 
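The two identities used here, $H=\overline{g}_{ij}(h^{-1})^{ij}$ and $g^{ij}\overline{g}_{ij}=|A|^2$, can be checked numerically in an orthonormal frame where $\overline g_{ij}=\delta_{ij}$: by (2.2) one has $g=h\,\overline g^{-1}h$, and in the support-function parametrization the principal curvatures are the reciprocals of the eigenvalues of $(h_{ij})$. A sketch using numpy (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4                                   # dimension of the sphere S^m

# random positive definite h_ij in an orthonormal frame (gbar = identity)
B = rng.standard_normal((m, m))
W = B @ B.T + m * np.eye(m)

g = W @ W                               # (2.2): g = h gbar^{-1} h with gbar = I
kappa = 1.0 / np.linalg.eigvalsh(W)     # principal curvatures: reciprocals of
                                        # the eigenvalues of h_ij

H = np.trace(np.linalg.inv(W))          # H = gbar_ij (h^{-1})^{ij}
A2 = np.trace(np.linalg.inv(g))         # |A|^2 = g^{ij} gbar_{ij}

assert np.isclose(H, kappa.sum())
assert np.isclose(A2, (kappa**2).sum())
```

Both traces agree with the elementary expressions $H=\sum_i\lambda_i$ and $|A|^2=\sum_i\lambda_i^2$ in the principal curvatures.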
Therefore by combining (4.7) and (4.8), we obtain the expression \begin{eqnarray*} \frac{\partial \Phi }{\partial t}&=&g^{ij}\overline{\nabla}_i\overline{\nabla}_j\Phi+\frac 2{\mathcal{Z}-\alpha}g^{ij}\overline{\nabla }_i\Phi \overline{\nabla }_j\mathcal{Z}\\ &&+\frac {Hg^{ij}\overline {\nabla }_i\overline{\nabla}_j\mathcal{Z}}{(\mathcal{Z}-\alpha)^2}-\frac {h-H}{\mathcal{Z}-\alpha}|A|^2-\frac {H(h-H)}{(\mathcal{Z}-\alpha)^2}\\ &=&g^{ij}\overline{\nabla}_i\overline{\nabla}_j\Phi+\frac 2{\mathcal{Z}-\alpha}g^{ij}\overline{\nabla }_i\Phi \overline{\nabla }_j\mathcal{Z}\\ &&+\frac{1}{(\mathcal{Z}-\alpha)^2}\left\{2H^2-hH-\alpha H|A|^2-h(\mathcal{Z}-\alpha)|A|^2\right\}, \end{eqnarray*} which establishes the lemma. \end{proof} For $t\in [0, T^*]$, $M_t$ is smooth, compact and convex, and therefore the mean curvature $H$ is uniformly bounded in this time interval. Similarly, the mean curvature of $\widetilde{M}$ is also bounded in the corresponding time interval. Moreover we can prove the following \begin{lemma} There exists a positive constant $c_8$ such that for any $\tilde{t}\in [0, \widetilde{T}_{\max})$, $$\widetilde{H}(x, \tilde{t})\leq c_8, \quad \forall x\in M^n.$$ \end{lemma} \begin{proof} Let $\widetilde{T}^*=\int _0^{T^*}\psi^2 (t)dt$. For any $\tilde{t}\in[0, \widetilde{T}^*]$, $\widetilde{M}_{\tilde{t}}$ is a smooth, compact and convex hypersurface, so the mean curvature $\widetilde{H}$ is uniformly bounded in $[0, \widetilde{T}^*]$. Consider any time $t_0\in [T^*, T_{\max})$, and choose the origin of ${\mathbb R}^{n+1}$ to be the center of the sphere of radius $r_{in}(t_0)$, which is enclosed by $X(\cdot, t_0)$. By Lemma 6, on the time interval $[T^*, t_0]$, the support function satisfies $$\mathcal{Z}=\langle X, {\bf v}\rangle \geq r_{in}(t_0).$$ Let $\alpha =\frac 12r_{in}(t_0)$, and consider the function $\Phi(z,t)$ defined in (4.5) for any $(z, t)\in {\mathbb{S} }^n\times [T^*, t_0]$.
Let $(z_1, t_1)\in {\mathbb{S}}^n\times [T^*, t_0]$ be such that $\Phi$ achieves the maximum $\sup\{\Phi (z,t)|(z,t)\in {\mathbb{S}}^n\times [T^*, t_0]\}$. If $t_1=T^*$, we are done, since in this case $H(z, t_0)$ is bounded by a constant. Thus we may assume $t_1>T^*$; then by Lemma 7, at $(z_1, t_1)$, $$2H^2-hH-\alpha H|A|^2-h(\mathcal{Z}-\alpha)|A|^2\geq 0. $$ Since $\mathcal{Z}\geq 2\alpha$, the last term is nonnegative; dropping it together with $hH$ and using $|A|^2\geq \frac 1nH^2$, we obtain $2H^2\geq \alpha H|A|^2\geq \frac {\alpha}{n} H^3$, and hence $$H(z_1, t_1)\leq \frac {2n}{\alpha}.$$ Therefore for any $z\in {\mathbb{S}}^n$, $$\Phi(z, t_0)=\frac {H(z, t_0)}{\mathcal{Z}(z, t_0)- \alpha}\leq \Phi (z_1, t_1),$$ which implies $$H(z, t_0)\leq \frac {c_9}{r_{in}(t_0)},$$ for a constant $c_9$, where we have used Corollary 1. By combining with Proposition 1, we have $$\widetilde{H}(z, \tilde{t}_0)\leq c_{10},$$ for all $z\in {\mathbb{S}}^n$. Here $\tilde{t}_0=\int _0^{t_0}\psi^2 (t)dt$. Since $t_0\in [T^*, T_{\max})$ is arbitrary, $\tilde{t}_0\in [\widetilde{T}^*, \widetilde{T}_{\max})$ is also arbitrary, and we thus have the uniform bound on $\widetilde{H}$ in $[\widetilde{T}^*, \widetilde{T}_{\max})$. Combining this with the bound in $[0, \widetilde{T}^*]$, we arrive at the inequality $\widetilde{H}(x, \tilde{t})\leq c_8$, for a constant $c_8$. \end{proof} We can now prove the long time existence for (3.2). In section 3, we have bounded the inner radius and the outer radius for $\widetilde{X}(\cdot, \tilde{t})$, and above, we have bounded the speed of the equation (3.2). Thus there is a positive constant $\delta >0$ such that for each $\tilde{t}_0\in [0,\widetilde{T}_{\max} )$, we can write the solution $\widetilde{X}(\cdot, \tilde{t})$ to (3.2) on the time interval $[\tilde{t}_0,\tilde{ t}_0+\delta]$ as a graph $$\widetilde{X}(z, \tilde{t})=\tilde{r}(z,\tilde{t})z, \quad z\in {\mathbb S}^n$$ for some chosen origin, which satisfies $0<c_7^{-1}\leq \tilde{r}(z,\tilde{t})\leq c_7$ on ${\mathbb{S}}^n\times [\tilde{t}_0, \tilde{t}_0+\delta]$.
By the convexity of all evolving hypersurfaces, we know that $\overline{\nabla }\tilde{r}$ is also uniformly bounded. Writing down the evolution equation of $\tilde{r}$, similar to (3.4), we see that it is uniformly parabolic. So we can use the standard regularity theory of uniformly parabolic equations to bound the derivatives of $\tilde{r}$ of all orders (see \cite {k} or \cite {an,z}). Hence we have proved \begin{lemma} $\widetilde{T}_{\max}=\infty$, and $\widetilde{M}_{\tilde{t}}$ converges to a smooth hypersurface $\widetilde{M}_{\infty}$, as $\tilde{t}\rightarrow \infty$. \end{lemma} \begin{remark} By convexity the zero order estimate of $\widetilde{A}$ follows from Lemma 8; one can then use the induction argument as in \cite {ha} and \cite {h1,h2} to show that the curvature derivatives $|\widetilde{\nabla} ^m\widetilde{A}|^2$ are each bounded by a corresponding constant $C_m(n, M_0)$ for any $m\geq 1$, since the terms containing $\widetilde{h}$ in the evolution equation can be easily controlled. This in turn also implies the long time existence of (3.2). \end{remark} It remains to show that the limiting hypersurface $\widetilde{M}_{\infty}$ is a round sphere. For this purpose, we define the function $$\tilde{f}=\frac{|\widetilde{A}|^2}{\widetilde{H}^2}.$$ It is easy to see that $\tilde{f}$ is scale invariant, and we have the following lemma, similar to that in \cite{m2}. \begin{lemma}We have the following evolution equation \begin{eqnarray} \frac{\partial}{\partial\tilde{t}}\tilde{f} &=&\widetilde{\triangle}\tilde{f} +\frac{2}{\widetilde{H}}\langle\widetilde\nabla_l\tilde{f}, \widetilde{\nabla}_l\widetilde{H}\rangle\nonumber\\ &&-\frac{2}{\widetilde{H}^{4}}|\widetilde{H}\widetilde{\nabla}_{l} \widetilde{h}_{ij} -\widetilde{h}_{ij}\widetilde{\nabla}_{l}\widetilde{H}|^{2} -\frac{2\widetilde{h}}{\widetilde{H}^{3}}(\widetilde{H}\textrm{tr} (\widetilde{A}^{3})-|\widetilde{A}|^{4}).
\end{eqnarray} \end{lemma} \begin{proof} First we have the evolution equation of $f=\frac {|A|^2}{H^2}$ (cf. \cite {m2}) \begin{equation}\frac{\partial}{\partial t}f =\triangle f +\frac2H\langle\nabla_lf, \nabla_lH\rangle-\frac{2}{H^4}|H\nabla_lh_{ij}-h_{ij}\nabla_lH|^2 -\frac{2h}{H^3}(H\textrm{tr}(A^3)-|A|^4).\end{equation} Therefore we have \begin{eqnarray*} \frac{\partial}{\partial\tilde{t}}\tilde{f} &=&\frac{\partial}{\partial t}(\frac{|A|^2}{H^2}) \cdot\frac{\partial t}{\partial\tilde{t}}\\ &=&\left\{\triangle(\frac{|A|^2}{H^2}) +\frac2H\langle\nabla_l(\frac{|A|^2}{H^2}), \nabla_lH\rangle\right.\\ &&\left.-\frac{2}{H^4}|H\nabla_lh_{ij}-h_{ij}\nabla_lH|^2 -\frac{2h}{H^3}(H\textrm{tr}(A^3)-|A|^4)\right\}\cdot\psi^{-2}, \end{eqnarray*} which implies the desired equality. \end{proof} We can now prove the first part of Theorem 1. \begin{proof} Recalling Lemma 3, we have by Lemma 10 $$(\frac{\partial}{\partial\tilde{t}}-\widetilde{\triangle})\tilde{f} \leq\frac{2}{\widetilde{H}}\langle\widetilde\nabla_l\tilde{f}, \widetilde{\nabla}_l\widetilde{H}\rangle.$$ By the weak maximum principle, $$\max_{\widetilde{M}_{\tilde{t}}}\tilde{f}\leq \max_{\widetilde{M}_0}\tilde{f}.$$ Furthermore, by the strong maximum principle, if the maximum is attained at some $(x,\tilde{t}_{0})$ with $\tilde{t}_{0}>0$, then $\tilde{f}$ is identically constant. Substituting into (4.9) yields $$\frac{2}{\widetilde{H}^{4}}|\widetilde{H}\widetilde{\nabla}_{l} \widetilde{h}_{ij} -\widetilde{h}_{ij}\widetilde{\nabla}_{l}\widetilde{H}|^{2} +\frac{2\widetilde{h}}{\widetilde{H}^{3}}(\widetilde{H}\textrm{tr} (\widetilde{A}^{3})-|\widetilde{A}|^{4})\equiv0.$$ Now, $\widetilde{H}\textrm{tr} (\widetilde{A}^{3})-|\widetilde{A}|^{4}\equiv0$ implies by Lemma 3 that $$|\widetilde{A}|^{2}-\frac{1}{n}\widetilde{H}^{2}\equiv0,$$ i.e. $$\sum\limits_{i<j}(\widetilde{\lambda}_{i}- \widetilde{\lambda}_{j})^2\equiv0,$$ so at any point of $\widetilde{M}_{\tilde t}$, all the principal curvatures are equal.
Also $|\widetilde{H}\widetilde{\nabla}_{l}\widetilde{h}_{ij} -\widetilde{h}_{ij}\widetilde{\nabla}_{l}\widetilde{H}|^{2}\equiv0$ implies $\widetilde{\nabla}\widetilde{H}\equiv0$ by Lemma 3 (ii), which then implies $\widetilde{\nabla}\widetilde{A}\equiv0$, so $\widetilde{M}_{\tilde t_0}$ is a sphere. Therefore we have shown that the function $\max\limits_{\widetilde{M}_{\tilde t}}\tilde{f}$ is strictly decreasing unless $\widetilde{M}_{\tilde{t}}$ is a sphere. This implies that $\widetilde{M}_{\tilde{t}}$ approaches a sphere as $\tilde t\rightarrow \infty$. Of course $\widetilde{M}_\infty$ has the same total area $|M_0|$. Therefore the proof of Theorem 1(I) is complete. \end{proof} \begin{remark} (i) One can use a similar method as in \cite {an,h1} to prove that $\widetilde{M}_{\tilde{t}}$ converges to a sphere exponentially.\\[1mm] (ii) It is easy to check that the case $0\leq h<\inf_{x\in M^n} H(x, 0)$ falls into this category, and the time $T^*$ defined below $(4.1)$ is equal to zero. \end{remark} \section{Case (II) $0<\Lambda <\infty$} \label{section:5} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \indent In this section we consider the case $0<\Lambda <\infty$ and prove the main Theorem 1(II). Since $\tilde{r}_{out}=r_{out}\psi$ and $\tilde{r}_{in}=r_{in}\psi$, we have by Proposition 1 $$\frac {c_7^{-1}}{\psi}\leq r_{in}\leq r_{out}\leq \frac {c_7}{\psi}, $$ which implies that for the sequence $\{T_i\}$ in section 3, there exists a time $T^*<T_{\max}$ such that for any $T_i\geq T^*$, \begin{equation} c_{12}^{-1}\leq r_{in}(T_i)\leq r_{out}(T_i)\leq c_{12} \end{equation} for some constant $c_{12}$. The following lemma shows that the inner and outer radii of all evolving hypersurfaces $M_t$ are uniformly bounded from below and above. \begin{lemma} There exists a constant $c_{13}$ such that $$c_{13}^{-1}\leq r_{in}(t)\leq r_{out}(t)\leq c_{13}, \qquad \mbox {for any } t\in [0, T_{\max}).
$$ \end{lemma} \begin{proof} We only prove the upper bound; the lower bound is similar. First we claim that $\overline h>0$ in this case, where $\overline h$ is the limit in (1.3). Suppose not; then for any $h^+ >0$ there exists a time $T'<T_{\max}$ such that $h(t)<h^+$ for any $t\in [T', T_{\max})$. Then by an argument similar to the proof of Lemma 6, $M_t$ is contracting for $t\geq T'$. Therefore $r_{out}(T_i)\rightarrow 0$ as $T_i\rightarrow T_{\max}$, which contradicts (5.1). The claim follows. From the claim we know that there must exist a time $T'\in (T^*, T_{\max})$ such that for any $t\in [T', T_{\max})$, $h(t)$ has a positive lower bound $h^-> 0$. Since $M_t$ for any $t\in [0, T']$ is smooth, compact and convex, the corresponding outer radius is uniformly bounded from above in this time interval. Suppose there is a time $T''>T'$ such that $r_{out}(T'')>c_{13}$. By Corollary 1 we can assume $c_{13}$ is large enough so that $r_{in}(T'')>\frac n{h^-}$. Again, we evolve a sphere $\partial B_{r_{in}(T'')}(O)$ under (1.1). The solution $r_{B}(t)$ to the differential inequality \begin{eqnarray*}\left\{\begin{array}{l} \frac {dr_B(t)}{dt}=h-\frac n{r_B(t)}\geq h^- -\frac n{r_B(t)},\quad t\geq T'',\\ [3mm]r_B(T'')=r_{in}(T'')>\frac n{h^-}, \end{array} \right. \end{eqnarray*} satisfies \begin{eqnarray*} r_B(t)+\frac n{h^-} \log ({h^-}r_B(t)-n)&\geq& h^-(t-T'')+r_{in}(T'')\\ &&+\frac n{h^-}\log(h^-r_{in}(T'')-n). \end{eqnarray*} Clearly $r_{B}(t)\rightarrow \infty$ as $t\rightarrow \infty$. On the other hand, by the containment principle, $\partial B_{r_B(t)}(O)$ is enclosed by $M_t$ for any $t\geq T''$, since $M_{T''}$ encloses $\partial B_{r_B(T'')}(O)$. Therefore there exists some $T_i>T''$ such that $r_{out}(T_i)\geq r_{B}(T_i)>c_{12}$, which contradicts (5.1). Combining this with the case $t\in [0, T']$, we finish the proof of the lemma.
\end{proof} \begin{remark} As in Remark 3, by Lemma 11 and the fact that the hypersurfaces $M_t$ converge uniformly to a round sphere (see below for the proof), we have the limit \begin{equation} \lim_{t\rightarrow T_{\max}}\psi (t)=\Lambda. \end{equation} \end{remark} Based on a theorem of Chow and Gulliver \cite {cg}, we have, as in \cite{m1,m2}, by Lemmas 11 and 4, \begin{lemma} There is a $d=d(M_0)$ such that $M_t\subset B_d(O)$ for all $t\in [0, T_{\max})$. \end{lemma} The following lemma also follows from McCoy \cite {m2}. \begin{lemma} If $B_{4\alpha}(p_0)\subset V_{t_0}$ for some $t_0\in [0, T_{\max})$ and a point $p_0\in {\mathbb R}^{n+1}$, then $B_{2\alpha}(p_0)\subset V_{t}$ for any $t\in [t_0, t_0+\min(\frac {6\alpha^2}{n}, T_{\max}))$. \end{lemma} As in section 4, we consider the function $\Phi$ defined in (4.5) for $t\in [t_0, t_0+\min(\frac {6\alpha^2}{n}, T_{\max}))$, and $\alpha =\frac 14c_{13}^{-1}$, where $c_{13}$ is given in Lemma 11. By using the same method as in \cite {m2}, we obtain the uniform upper bound of the evolving mean curvature $H$. \begin{lemma} There exists a constant $c_{14}$ such that for any $t\in [0, T_{\max})$ $$H(x,t)\leq c_{14}, \quad \forall x\in M^n.$$ \end{lemma} Again by the standard regularity theory of parabolic equations as in section 4, or the argument as in \cite {h2,m1,m2}, we have \begin{lemma} $T_{\max}=\infty$, and $M_t$ converges to a smooth hypersurface $M_{\infty}$, as $t\rightarrow \infty$. \end{lemma} Now we can prove the second part of Theorem 1. \begin{proof} We again consider the function $f=\frac {|A|^2}{H^2}$. By the evolution equation of $f$ in (4.10) and Lemma 3, similar to the proof of Theorem 1(I), we have that $\max\limits_{M_t}f$ is strictly decreasing unless $M_t$ is a sphere. This finishes the proof of Theorem 1(II).
\end{proof} \begin{remark} (i) One can also prove that $M_t$ converges to a sphere exponentially as in \cite {h2,m1}.\\[1mm] (ii) By the limit (5.2), we easily see that $\widetilde{M}_{\tilde{t}}$ converges to a sphere of total area $|M_0|$. \end{remark} \section{Case (III) $\Lambda =0$} \label{section:6} \renewcommand{\thesection}{\arabic{section}} \renewcommand{\theequation}{\thesection.\arabic{equation}} \setcounter{equation}{0} \indent This section is devoted to the case $\Lambda =0$ and the proof of the main Theorem 1(III). Similar to section 4, we have the limit \begin{eqnarray} \lim_{T_i\rightarrow T_{\max}} r_{in}(T_i)=\infty. \end{eqnarray} Then for any given positive number $N$, there exists a time $T^*<T_{\max}$ such that $r_{in}(T_i)>N$ for any $T_i\geq T^*$. As before we evolve $\partial B_{r_{in}(T^*)}(O)$ and $\partial B_{r_{out}(T^*)}(O)$ under (1.1), respectively. That is to say, they satisfy the following equation \begin{eqnarray}\frac {dr_B(t)}{dt}=h(t)-\frac {n}{r_B(t)},\quad t\geq T^*, \end{eqnarray} with initial data $r_{in}(T^*)$ and $r_{out}(T^*)$, respectively. First we consider the case $\overline h=0$. Integrating (6.2) from $T^*$ to $T_i$ and using the integral mean value theorem, we see that the outer radius $r^+_B(t)$ satisfies \begin{equation} r^+_B(T_i)-r_{out}(T^*)=\left[h(t_i)-\frac{n}{r^+_B(t_i)}\right] (T_i-T^*), \end{equation} where $t_i\in [T^*, T_i]$. If we suppose $T_{\max}<\infty$ and take limits of both sides in (6.3), we have $\lim_{t\rightarrow T_{\max}}h(t)=\infty$, which contradicts $\overline h=0$. So $T_{\max}=\infty$. Next we consider the case $0<\overline h<\infty$. In this case, we choose $N$ greater than $\frac{n}{h^-}$ (here $h^-$ is the uniform positive lower bound of $h(t)$ in $[T^*, T_{\max})$).
Therefore by (6.2), the inner radius $r^{-}_{B}(t)$ and outer radius $r^+_{B}(t)$ of $M_t$ satisfy the following inequalities, respectively \begin{eqnarray} r^-_B(t)+\frac n{h^-} \log (h^-r^-_B(t)-n)&\geq& h^-(t-T^*)+r_{in}(T^*)\nonumber \\ &&+\frac n{h^-}\log(h^-r_{in}(T^*)-n), \end{eqnarray} and \begin{eqnarray} r^+_B(t)+\frac n{h^-} \log (h^-r^+_B(t)-n)&\geq& h^-(t-T^*)+r_{out}(T^*)\nonumber\\ &&+\frac n{h^-}\log(h^-r_{out}(T^*)-n). \end{eqnarray} \begin{lemma} When $t\geq T^*$, the regions enclosed by the hypersurfaces $M_t$ are increasing. Furthermore $T_{\max}=\infty$, and the solutions to (1.1) expand uniformly to $\infty$ as $t\rightarrow \infty$. \end{lemma} \begin{proof} For $t\geq T^*$, (6.2) implies that $r_B(t)$ is increasing since $r_B(t)>\frac n{h^-}$ initially. By the containment principle again, the enclosed regions of $M_t$ are increasing. Moreover, all the $M_t$ are contained in the region between $\partial B_{r^-_{B}(t)}(O)$ and $\partial B_{r^+_{B}(t)}(O)$ for every $t\in [T^*, T_{\max})$. Suppose $T_{\max}$ is finite. Integrating equation (6.2) from $T^*$ to $t$, we have \begin{eqnarray*} r^+_B(t)-r_{out}(T^*)=\int^t_{T^*}h(\tau)d\tau- \int^t_{T^*}\frac{n}{r^+_B(\tau)}d\tau, \end{eqnarray*} which implies that $r^+_B(t)$ is uniformly bounded from above in $[T^*, T_{\max})$. This contradicts (6.1). Therefore $T_{\max}=\infty$. Obviously $r(z,t)\rightarrow \infty$ for any $z\in {\mathbb S}^n$ as $t\rightarrow \infty$ by (6.4), (6.5) and the containment principle, which implies that $M_t$ expands to $\infty$ in this case. The lemma follows. \end{proof} \begin{remark} Lemma 16 and Proposition 1 imply the limit \begin{eqnarray*} \lim_{t\rightarrow \infty}\psi(t)=0. \end{eqnarray*} \end{remark} We do not know whether the rescaled mean curvature $\widetilde{H}$ is uniformly bounded from above, but we can prove that if the rescaled hypersurface $\widetilde{M}_{\tilde{t}}$ converges to a smooth hypersurface, it must be a sphere.
To this end, we need to estimate the lower bound of the rescaled mean curvature. Again we consider the function $$\Phi=\frac {H}{\beta-\mathcal{Z}}$$ for some constant $\beta$. As in Lemma 7 we have the evolution equation of $\Phi$ \begin{lemma} For $t\in [0, \infty)$ and $z\in {\mathbb S}^n$, \begin{eqnarray*} \frac {\partial }{\partial t}\Phi &=& g^{ij}\overline{\nabla}_i\overline{\nabla}_j\Phi-\frac 2{\beta-\mathcal{Z}}g^{ij}\overline{\nabla}_i\Phi \overline{\nabla}_j\mathcal{Z}\nonumber\\ &&+\frac 1{(\beta-\mathcal{Z})^2}\left\{(\beta|A|^2+h)H- [2H^2+h(\beta-\mathcal{Z})|A|^2]\right\}. \end{eqnarray*} \end{lemma} For any $t_0\in [T^*, \infty)$, let $\beta =2r_{out}(t_0)$ in Lemma 17. Then by Lemma 16, for any $t\in [T^*, t_0]$, $$\mathcal{Z}=\langle X, {\bf v}\rangle \leq r_{out}(t_0).$$ Applying the maximum principle to the evolution equation of $\Phi$, by the same approach as in the proof of Lemma 8 we have \begin{lemma} There is a positive constant $c_{15}$ such that for any $(x, \tilde{t})\in M^n\times [0, \infty)$ $$\widetilde{H}(x, \tilde{t})\geq c_{15}.$$ \end{lemma} At last we show that the eigenvalues of the second fundamental form approach each other as $\tilde{t}\rightarrow \widetilde{T}_{\max}$. As before we consider the function defined in section 4 $$f=\frac {|A|^2}{H^{2}}.$$ It is easy to see that $f$ is scale invariant. We also have the evolution equation of $\tilde f$ as in (4.9). By a discussion similar to the proof of Theorem 1(I), the rescaled evolving hypersurfaces $\widetilde{M}_{\tilde t}$ tend to a sphere as $\tilde t\rightarrow \infty$. This finishes the proof of Theorem 1(III).
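For completeness, the scale invariance of $f$ used above can be checked directly (an elementary verification, not part of the original argument): under the rescaling $\widetilde{X}=\psi X$, the principal curvatures scale as $\widetilde{\lambda}_i=\psi^{-1}\lambda_i$, so
\[
\tilde f=\frac{|\widetilde A|^2}{\widetilde H^2}
=\frac{\psi^{-2}\sum_i \lambda_i^2}{\psi^{-2}\bigl(\sum_i \lambda_i\bigr)^2}
=\frac{|A|^2}{H^2}=f.
\]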
\section{Analysis} \label{sec:Analysis} In this section, we prove under mild assumptions that an $\epsilon$-coreset of size $\BigO_{\epsilon,\delta}\left(\log n (d\log \log n + \log^2 n) \right)$ can be constructed with probability at least $1-\delta$, in $\BigO(nd)$ time. At a high level, our algorithm first efficiently approximates the importance of each point $p_i \in \PP$, which we refer to as the point's \emph{sensitivity} $s(p_i)$. The number of sample points required for an $(\epsilon, \delta)$-approximation is then computed as a function of the points' sensitivities using an analogue of the Estimator Theorem covered in class (and by Motwani et al. \cite{motwani2010randomized}), i.e., Theorem~\ref{thm:sampling} by Braverman et al. \cite{braverman2016new}. The outline of our proof is as follows. We begin by enumerating the preliminary material as well as the assumptions we impose on the problem. We then bound the sensitivity of each point by computing a tight upper bound that can be efficiently computed for all points. We then sum the upper bounds on the sensitivities of the points and show that this sum is polylogarithmic in the number of points $n$. We then invoke Theorem~\ref{thm:sampling} with the computed sum of sensitivities, and a straightforward application of the theorem's expression for the number of points required yields the existence of an $\epsilon$-coreset of size polylogarithmic in the number of points $n$ and polynomial in the number of features $d$. Combining these procedures, we finally arrive at the $(\epsilon, \delta)$-FPRAS, shown as Alg.~\ref{algorithm}. \subsection{Preliminaries} \label{sec:Preliminaries} In the following, we state some assumptions and results upon which we base our analysis. \begin{assumption}[Normalized Input] \label{asm:normalizedInput} The training data is normalized such that for any $p_i = (x_i, y_i)$, $i \in [n]$ we have $||x_i||_2 \leq 1$.
\end{assumption} \begin{assumption}[Centered Input] \label{asm:centeredInput} The training data is centered around its mean $\mu = \frac{1}{n} \sum_{i=1}^n x_i$, such that $\mu = 0$. \end{assumption} Assumptions~\ref{asm:normalizedInput} and~\ref{asm:centeredInput} are very commonly fulfilled in practical settings, since both normalization and mean-centering of the input points are desirable before the training procedure for more robust results. \begin{assumption}[Bounded Query Space] \label{asm:boundedQuerySpace} Let $\mathcal{W} = \{w \in \mathbb{R}^d: ||w||_2 \leq \log n\}$. The query set $Q(\PP)$ is then defined to be the set \begin{equation} Q(\PP) = \big\{w \in \mathcal{W} : \sum_{i \in [n]} \max \{0, 1 - w^T x_i y_i\} \geq n/\log n\big\}. \end{equation} \end{assumption} In other words, we consider the set of candidate margins that do not entirely separate the labeled data, as is usually the case in the coreset applications we target, which involve an extremely large number of data points\footnote{In future work, we intend to relax this assumption in our sensitivity analysis by leveraging a probabilistic argument in conjunction with the fact that data points are centered.}. Assumption~\ref{asm:boundedQuerySpace} of having a bounded query space is justified by the fact that the points lie within a unit ball, and therefore margins of comparable scale are reasonable. Moreover, we note that in many coreset applications a bounded space is a necessary condition for having coresets of sublinear size, as shown for the case of Logistic Regression \cite{huggins2016coresets}. Our analysis further relies on Theorem~\ref{thm:sampling} given by Braverman et al.~\cite{braverman2016new}, which is stated below. This result states that for any given overapproximation $\gamma(p_i)$ of the sensitivity $s(p_i)$, a random sample of sufficiently large size, where the size depends on the tightness of the overapproximation, yields an $\eps$-coreset with probability at least $1-\delta$.
This theorem, together with our subsequent analysis, will allow us to establish the aforementioned $(\eps,\delta)$-FPRAS for computing the margin of an SVM classifier. \begin{theorem}[Braverman et al. \cite{braverman2016new}] \label{thm:sampling} Let $\gamma: \PP \to \Reals_+$ be a function such that $$ \forall{i \in [n]} \tab \gamma(p_i) \ge s(p_i), $$ and let $t = \sum_{i \in [n]} \gamma(p_i)$. Further, let $\mathcal{F} = \left(\PP, Q(\PP), f \right)$ denote the query space and let $\text{dim}(\mathcal{F})$ be the corresponding VC dimension \cite{vapnik1998statistical}. Then, for all $\epsilon \in (0, 1)$, there exists some sufficiently large constant $c \ge 1$ such that for a random sample $\SS \subset \PP$ of size \begin{equation} \label{eqn:samplesNeeded} |\SS| \ge \frac{ct}{\epsilon^2}\left(dim(\mathcal{F})\log(t) + t\log(\frac{1}{\delta})\right) \end{equation} we have that $q = p$ with probability $\gamma(p)/t$ for every $p \in \PP$ and $q \in \SS$. Let \begin{equation} u(p) = \frac{t K_p}{\gamma(p) |\SS|} \end{equation} be the weight for every $p \in \SS$, where $K_p$ is the number of times point $p$ is sampled. Then, with probability at least $1 - \delta$, $(\SS,u)$ is an $\epsilon$-coreset for $\PP$. \end{theorem} \subsection{Sensitivity Upper Bound} \label{sec:sensitivityUpperBounds} To derive an $(\eps,\delta)$-FPRAS according to Theorem~\ref{thm:sampling}, we need to efficiently and tightly upper bound the sensitivity $s(p_i)$ of each point. In particular, we start out from the sensitivity description in its most basic form and use several insights from the structure of the SVM problem. \begin{lemma}[Sensitivity Bounds] \label{lem:sensitivityBounds} The sensitivity of any arbitrary point $p_i \in \PP$ is bounded above by \begin{equation} s(p_i) \leq \gamma(p_i) = \frac{1}{n} + \frac{\log n + ||x_i||_2 \log^2 n}{n}.
\end{equation} \end{lemma} \begin{proof} Consider the sensitivity of a particular point $p_i \in \PP$: \begin{align} s(p_i) &= \sup_{w \in Q(\PP)} \,\, \frac{\tilde{f}(p_i, w)}{\sum_{j \in [n]} \tilde{f}(p_j, w)} \\ &\leq\sup_{w \in Q(\PP)} \,\, \frac{1}{n} + \frac{\max \{0, 1 - w^T x_i y_i\}}{ \sum_{j \in [n]} \max \{0, 1 - w^T x_j y_j\}} && \left(\frac{a + b}{c + d} \leq \frac{a}{c} + \frac{b}{d}, \forall{a,b,c,d \in \mathbb{R}_+}\right) \\ &\leq\sup_{w \in Q(\PP)} \,\, \frac{1}{n} + \frac{1 + ||w||_2 ||x_i||_2}{ \sum_{j \in [n]} \max \{0, 1 - w^T x_j y_j\}} && \text{(Cauchy-Schwarz Inequality)} \\ &\leq \frac{1}{n} + \frac{1 + ||x_i||_2 \log n}{ (n/\log n)} &&\text{(By Assumption~\ref{asm:boundedQuerySpace})} \\ &= \gamma(p_i) \end{align} \end{proof} From Lemma~\ref{lem:sensitivityBounds}, we can now derive an upper bound on the total sensitivity $S(\PP)$: \begin{corollary} \label{cor:SumSensitivities} The sum of sensitivities over all points, $S(\PP) = \sum\limits_{i \in [n]} s(p_i)$, is bounded above by \begin{align} S(\PP) &\leq \sum_{i \in [n]} \gamma (p_i) \leq t = 1 + \log n + \log^2 n \\ &= \BigO(\log^2 n). \end{align} \end{corollary} \begin{proof} Leveraging the inequality established by Lemma~\ref{lem:sensitivityBounds}, we have the following upper bound for the sum of sensitivities, i.e., for $S(\PP)$: \begin{align*} S(\PP) &\leq \sum_{i \in [n]} \gamma(p_i) \\ &= \sum_{i \in [n]} \left(\frac{1}{n} + \frac{\log n + ||x_i||_2 \log^2 n}{n} \right) \\ &\leq 1 + \log n + \log^2 n \\ &= \BigO(\log^2 n) \end{align*} \end{proof} Finally, by combining the results above, we derive the $(\eps, \delta)$-FPRAS.
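To make the bounds concrete, the quantities $\gamma(p_i)$ and their sum can be computed for an entire data set in a single vectorized pass; the snippet below is an illustrative sketch (not part of the paper's implementation), assuming the input is normalized as in Assumption~\ref{asm:normalizedInput}.

```python
import numpy as np

def sensitivity_upper_bounds(X):
    """Compute gamma(p_i) = 1/n + (log n + ||x_i||_2 log^2 n)/n for all points.

    X is an (n, d) array with ||x_i||_2 <= 1 (Assumption 1). Returns the
    per-point upper bounds gamma and their sum t <= 1 + log n + log^2 n
    (Corollary 1).
    """
    n = X.shape[0]
    log_n = np.log(n)
    norms = np.linalg.norm(X, axis=1)               # ||x_i||_2 per point
    gamma = 1.0 / n + (log_n + norms * log_n ** 2) / n
    return gamma, gamma.sum()

# example: 1000 random points, rescaled into the unit ball (Assumption 1)
X = np.random.randn(1000, 5)
X /= np.maximum(1.0, np.linalg.norm(X, axis=1))[:, None]
gamma, t = sensitivity_upper_bounds(X)
```

Each bound costs $\mathcal{O}(d)$ time per point, so the whole pass is $\mathcal{O}(nd)$, matching the running time claimed below.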
\begin{theorem}[$(\epsilon,\delta)$-Coreset FPRAS] \label{thm:epsilonCoreset} Given any $\epsilon, \delta \in (0,1)$ and a data set $\PP$, Algorithm~\ref{algorithm} generates an $\epsilon$-coreset, $(S, u)$, of size $$ |\SS| = \BigO\left(\frac{\log n}{\epsilon^2} \left(d\log \log n + \log^2 n \log \frac{1}{\delta}\right)\right) = \BigO_{\epsilon,\delta}\left(\log n (d\log \log n + \log^2 n) \right), $$ for the L2-norm SVM problem with probability at least $1 - \delta$. Moreover, our algorithm runs in $\BigO(nd)$ time. \end{theorem} \begin{proof} First, we leverage the seminal result by Vapnik et al. \cite{vapnik1998statistical} stating that the VC dimension of a separating hyperplane with a margin $w$, i.e., $\text{dim}(\mathcal{F})$, is bounded above by $$ \text{dim}(\mathcal{F}) \leq d + 1 = \BigO(d). $$ Now, by Theorem~\ref{thm:sampling}, we have that the coreset constructed by our algorithm is an $\epsilon$-coreset for the SVM problem with probability at least $1 - \delta$, and the size of the coreset is established by plugging the bound for the sum of sensitivities from Corollary~\ref{cor:SumSensitivities} into Equation \eqref{eqn:samplesNeeded}: $$ |\SS| \ge \frac{ct}{\epsilon^2}\left(d\log(t) + t\log(\frac{1}{\delta})\right). $$ Moreover, note that the computation time of our algorithm is dominated by computing the upper bounds on the sensitivity, i.e., Line~\ref{E2} of our Algorithm, which is in turn an $\mathcal{O}(d)$ time operation per point, yielding a total running time of $\mathcal{O}(nd)$. \end{proof} Thus, our coresets are of size polylogarithmic in the number of points $n$ and polynomial in the dimension of the points $d$. Note that since $d \ll n$ in the applications we are considering, the theorem above proves that the coreset generated by our approximation scheme is capable of generating a $(1 \pm \epsilon)$ approximation to the SVM problem with probability $1-\delta$, even when the coreset size is significantly smaller than the number of original input points $n$.
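As a rough numerical illustration of the sample-size expression appearing in the proof above, the bound can be evaluated directly; the constant $c$ is left unspecified by Theorem~\ref{thm:sampling}, so the value $c = 1$ below is an arbitrary placeholder, not a claim about the true constant.

```python
import math

def sample_size(t, d, eps, delta, c=1.0):
    """Evaluate |S| >= (c*t/eps^2) * (d*log(t) + t*log(1/delta)).

    c stands in for the unspecified absolute constant of Theorem 2.
    """
    return math.ceil((c * t / eps ** 2)
                     * (d * math.log(t) + t * math.log(1.0 / delta)))

# with t <= 1 + log n + log^2 n, the bound depends on n only polylogarithmically
n, d = 10 ** 6, 10
t = 1 + math.log(n) + math.log(n) ** 2
m = sample_size(t, d, eps=0.1, delta=0.05)
```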
\section{Conclusion} \label{sec:conclusion} We presented an efficient coreset algorithm for obtaining substantial speedups in SVM training at the cost of small, provably-bounded approximation error. Our approach relies on the intuitive fact that data is often redundant and that some data points are more important than others. We showed that by obtaining tight bounds on the importance, i.e., the sensitivity, of each point, coresets of size polylogarithmic in the number of points and polynomial in the dimension of the points can be efficiently constructed. To the best of our knowledge, this paper presents the first method for constructing coresets of this size that is also applicable to streaming settings, using the merge-and-reduce approach commonly used with coresets \cite{braverman2016new}. Our favorable empirical results demonstrate the effectiveness of our algorithm in accelerating the training time of SVMs on real-world data sets. We conjecture that our coreset construction method can be extended to significantly speed up SVM training for nonlinear kernels as well as other popular machine learning algorithms, such as deep learning. \section{Introduction} \label{sec:Introduction} Popular machine learning algorithms are computationally expensive, or worse yet, intractable to train on Big Data. Recently, the notion of using \textit{coresets} \cite{agarwal2005geometric,feldman2011unified,braverman2016new}, small weighted subsets of the input points that approximately represent the original data set, has shown promise in accelerating machine learning algorithms, such as $k$-means clustering \cite{feldman2011unified}, training mixture models \cite{feldman2011scalable}, and logistic regression \cite{huggins2016coresets}. Support Vector Machines (SVMs) are one of the most popular algorithms for classification and regression analysis. However, with the rising availability of Big Data, training SVMs on massive data sets has proven to be computationally expensive.
In this paper, we present a coreset construction algorithm for efficiently training Support Vector Machines. Our approach entails a randomized coreset construction that is based on the insight that data is often redundant and that some input points are more important than others for large-margin classification. Using importance sampling, our algorithm can be considered an $(\epsilon,\delta)$-FPRAS that generates a coreset which can be used for training instead of the original (massive) set of input points, yet still provides, with probability at least $1 - \delta$, an $\epsilon$-approximation to the classifier that would be obtained if all the points were used. In this paper, we prove that such coresets of size $\BigO_{\epsilon,\delta}\left(\log n (d\log \log n + \log^2 n) \right)$ can be efficiently constructed for a set of $n$ points with $d$ features and present an intuitive, importance sampling-based approach for constructing them. \section{Acknowledgments} \bibliographystyle{plain} \section{Method} \label{sec:method} \begin{algorithm}[!htb] \label{algorithm} \caption{$\textsc{Coreset}(\PP,\eps,\delta)$\label{one}} {\begin{tabbing} \textbf{Input:} \quad\quad\= A set of training points $\PP \subseteq \REAL^d$ containing $n$ points, \\ \>an error parameter $\eps\in(0,1)$, and failure probability $\delta\in(0,1)$.\\ \textbf{Output:} \>An $\epsilon$-coreset $(\SS,u)$ for the query space $\mathcal{F}$ with probability at least $1-\delta$.
\end{tabbing}} \label{E1} \For{$i \in [n]$} { $\gamma(p_i) \gets \frac{1}{n} + \frac{\log n + ||x_i||_2 \log^2 n}{n}$ \label{E2} \\ } $t \gets \sum_{i \in [n]} \gamma(p_i)$ \label{E3} \\ Let \[ m \gets \Omega \left(\frac{t}{\eps^2}\left(d\log t+\log\left(\frac{1}{\delta}\right)\right) \right), \]\\ \label{E4} $(K_1,\ldots,K_n) \sim \text{Multinomial}\left(m, \pi_i = \gamma(p_i)/t \, \, \, \forall{i \in [n]}\right)$ \label{E5} \\ $\SS \gets \left\{ p_i \in \PP \, : \, K_i > 0 \right\}$ \\ \label{E9} //\textit{Compute the weights $u:\PP \to \mathbb{R}_{\geq 0}$ for every point $p_i \in \SS$.} \\ \label{E6} \For{\label{E7} $i \in [n]$} { $u(p_i) \gets \frac{t K_i}{\gamma(p_i) |\SS|}$ \\ \label{E8} } \Return $(\SS,u)$ \label{E10} \end{algorithm} In this section, we give an overview of the algorithm that computes the coreset $\SS$, which can then be used to train the SVM classifier. We highlight the important insights of the method before explaining each step closely. Our algorithm is based on the idea that for any given dataset $\PP$, we assign an importance $\gamma(p_i)$ to each data point $p_i$ and then sample from the dataset according to the multinomial distribution emerging from this procedure. The crucial insight of this method is how we assign the importances $\gamma(p_i)$. In particular, we use an overapproximation of the sensitivity $s(p_i)$ of each point, i.e., $\gamma(p_i)$, to assign importances, which are obtained from the analysis in the previous section. Following the sampling of points, we further assign a weight $u(p_i)$ to each sampled point, proportional to the number of times the point has been sampled. We then train an SVM classifier on the weighted coreset $(\SS,u)$ using any standard SVM library. The resulting algorithm is an $(\eps,\delta)$-FPRAS for approximating the trained classifier. The overall method to compute the desired coreset is outlined in Algorithm~\ref{algorithm}.
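For concreteness, the procedure can be sketched in a few lines of numpy; this is an illustrative reimplementation, not the authors' code, and the constant in the sample size $m$ is set to $1$ since Theorem~\ref{thm:sampling} leaves it unspecified.

```python
import numpy as np

def coreset(X, y, eps, delta, c=1.0):
    """Sketch of Algorithm 1: sensitivity-based importance sampling.

    X: (n, d) normalized features; y: (n,) labels in {-1, +1}.
    Returns indices of the sampled points and their weights u.
    """
    n, d = X.shape
    log_n = np.log(n)
    # Line 2: per-point sensitivity upper bounds gamma(p_i)
    gamma = 1.0 / n + (log_n + np.linalg.norm(X, axis=1) * log_n ** 2) / n
    t = gamma.sum()                                               # Line 3
    # Line 4: sample count (c stands in for the Omega(.) constant)
    m = int(np.ceil(c * t / eps ** 2 * (d * np.log(t) + np.log(1.0 / delta))))
    counts = np.random.multinomial(m, gamma / t)                  # Line 5
    idx = np.flatnonzero(counts)                                  # S = {p_i : K_i > 0}
    u = t * counts[idx] / (gamma[idx] * idx.size)                 # Line 8: weights
    return idx, u

# the weighted subset (X[idx], y[idx], u) is then passed to any SVM solver
# that accepts per-point weights in place of the full data set
```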
Given a set of input data $\PP$, an error parameter $\eps$, and the desired failure probability $\delta$, the algorithm returns an $\eps$-coreset $(\SS,u)$ for the query space $\mathcal{F}$ with probability at least $1-\delta$. In Line~\ref{E2} we compute the importance of a point, i.e., the upper bound on the sensitivity $s(p_i)$ of a point $p_i$; see Lemma~\ref{lem:sensitivityBounds} for more details. In Line~\ref{E4}, we compute the necessary number of samples to include in $(\SS,u)$, according to Theorem~\ref{thm:sampling}, and we then sample from the resulting multinomial distribution; see Line~\ref{E5}. Note that the samples in Line~\ref{E8} are weighted according to Theorem~\ref{thm:sampling}. \section{Problem Definition} \label{sec:problem-definition} Given a set of training points $\PP = \{(x_1, y_1), \ldots, (x_n, y_n) \}$ where $x_i \in \REAL^d$ and $y_i \in \{-1,1\}$ for all $i \in [n]$, the soft-margin two-class L2-SVM problem is the following quadratic program: \begin{align} \min_{w \in Q(\PP)} &\tab f(\PP, w), \end{align} where \begin{equation} \label{eqn:svm} f(\PP, w) = ||w||_2^2 + C \sum_{i \in [n]} \max\{0, 1 - y_iw^Tx_i\}, \end{equation} for some regularization parameter $C \in \REAL_+$ and query space $Q(\PP)$, which is defined to be the set of all candidate margins. We note that we ignore the bias term $b$ in the problem formulation for simplicity, since this term can always be embedded into the feature space by expanding the dimension to $d+1$. Instead of establishing new algorithms for the SVM problem, we focus on reducing the size of the training data by sampling \emph{informative points}, while assigning each point a proper weight. \subsection{Soft-margin SVM} \begin{figure}[h!]
\centering \includegraphics[width=0.5\textwidth]{figures/softmargin2} \caption{Soft-margin SVM} \label{fig:softmargin} \end{figure} If the training data are linearly separable, we can select two parallel hyperplanes that separate the two classes of data such that the distance $\rho = 2/||w||$ between them is as large as possible. The region bounded by these two hyperplanes is called the margin. To extend to cases in which the data are not linearly separable, a hinge loss function is introduced that penalizes violations of the margin constraints. This approach is known as soft-margin SVM. Each given data point $x_i$ falls into one of three categories. It either lies beyond the margin ($y_i w^Tx_i>1$), in which case it does not contribute to the SVM loss \eqref{eqn:svm}; it lies directly on the margin ($y_i w^Tx_i=1$), in which case it is a support vector that directly affects the solution but does not add to the cost; or it lies within the margin ($y_i w^Tx_i<1$), in which case it adds a cost to \eqref{eqn:svm} proportional to the amount of constraint violation. Note that the distance between the separating hyperplane and the points on the margin is exactly $1/||w||$; this distance is commonly referred to as the \emph{margin} as well. The regularization parameter $C$ weights the relative importance of maximizing the margin against margin constraint satisfaction for each data point $x_i$ and is used to increase robustness against outliers. Accordingly, if $C$ is very small, margin constraint violation is only penalized weakly, $||w||$ is small, and the safety margin around the decision boundary will be large. Conversely, if $C$ is very large, violation of the margin constraint is penalized heavily and the formulation approaches the hard-margin SVM case, which is sensitive to outliers in the training set.
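To make the objective \eqref{eqn:svm} and the three point categories concrete, here is a small Python sketch (function names are ours, not part of the paper):

```python
import numpy as np

def svm_objective(X, y, w, C=1.0):
    """f(P, w) = ||w||^2 + C * sum_i max(0, 1 - y_i w^T x_i), as in Eq. (2)."""
    margins = y * (X @ w)
    hinge = np.maximum(0.0, 1.0 - margins)
    return w @ w + C * hinge.sum()

def point_categories(X, y, w):
    """Classify each point relative to the margin y_i w^T x_i = 1:
    'beyond' (no loss), 'on' (support vector, no loss), 'inside' (loss)."""
    m = y * (X @ w)
    return np.where(m > 1, "beyond", np.where(m == 1, "on", "inside"))
```

For example, with $w=(1,0)$ and points $(2,0)$, $(0.5,0)$ with label $+1$ and $(-3,0)$ with label $-1$, only the middle point violates the margin and contributes $0.5$ to the hinge term.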
\subsection{Coresets} Instead of introducing an entirely new algorithm for solving the SVM problem itself, we train on small weighted subsets of the input data, i.e., coresets. The main benefit of using coresets is that we can reduce the runtime by reducing the number of training data points, while maintaining a close approximation to the optimal solution of the problem. The following definitions for coresets are used throughout the paper: \begin{definition}[Query Space] \label{querySpace} Let $\PP$ be the set of input points, let $Q(\PP)$ denote the set of possible margins over which the SVM optimization is performed, and let $f$ be the SVM objective function given by \eqref{eqn:svm}. Then, the tuple $\mathcal{F} = (\PP, Q(\PP), f)$ is called the \emph{query space}. \end{definition} \begin{definition}[$\eps$-coreset] \label{coreset} Let $(\SS,u)$ be a weighted subset of the input points $\PP$ such that the function $u: \PP \to \mathbb{R}_{\ge 0}$ maps each point to its corresponding weight. The pair $(\SS,u)$ is called a \emph{coreset} with respect to the input points $\PP$. Let $Q(\PP)$ be the \emph{query set}, i.e., the set of candidate margins, and let $f$ be the SVM objective function given by \eqref{eqn:svm}. Then $(\SS,u)$ is an $\eps$-coreset if for every margin $w \in Q(\PP)$ we have $$ |f(\PP,w)-f((\SS,u), w)|\leq \eps f(\PP,w). $$ \end{definition} In other words, $(\SS,u)$ is an $\eps$-coreset if $f((\SS,u), w)$ is a $1 \pm \eps$ approximation to the objective function value obtained when all of the training points are used, $f(\PP, w)$. Our overarching goal is to construct an $\eps$-coreset $\SS \subset \PP$ such that the cardinality of $\SS$ is sublinear in $n$, the number of data points.
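The coreset property above can be checked empirically for any fixed margin $w$. The following sketch assumes that the weights enter the objective by multiplying the per-point hinge terms, with the regularizer $||w||_2^2$ left unweighted; this is our reading of $f((\SS,u),w)$, and the function name is ours:

```python
import numpy as np

def coreset_relative_error(X, y, S, u, w, C=1.0):
    """Relative error |f(P, w) - f((S, u), w)| / f(P, w) for a fixed w.

    Assumption: the weighted objective keeps ||w||^2 and multiplies each
    sampled point's hinge term by its weight u_i.
    """
    def obj(Xs, ys, weights):
        hinge = np.maximum(0.0, 1.0 - ys * (Xs @ w))
        return w @ w + C * (weights * hinge).sum()

    full = obj(X, y, np.ones(len(X)))   # f(P, w): all points, unit weight
    sub = obj(X[S], y[S], u)            # f((S, u), w): weighted subset
    return abs(full - sub) / full
```

Taking $\SS=\PP$ with unit weights recovers the full objective exactly, so the relative error is zero in that degenerate case.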
In our analysis, we will also rely on the concept of the sensitivity $s(p_i)$ of a point $p_i$ (see the definition below), which was previously introduced in~\cite{braverman2016new}: \begin{definition}[Sensitivity] \label{def:sensitivity} The sensitivity of an arbitrary point $p_i = (x_i,y_i)$ is defined as \begin{equation} \forall{i \in [n]} \tab s(p_i) = \sup_{w \in Q(\PP)} \,\, \frac{\tilde{f}(p_i, w)}{\sum_{j \in [n]} \tilde{f}(p_j, w)}, \end{equation} where $$ \tilde{f}(p_i, w) = \frac{||w||_2^2}{n} + C \max\{0, 1-y_i w^T x_i\}. $$ \end{definition} Note that $\tilde{f}(p_i, w)$ represents the \textit{contribution} of point $p_i$ to the objective function of the SVM and that \begin{align*} \sum_{j \in [n]} \tilde{f}(p_j, w) = ||w||_2^2 + C\sum_{j \in [n]} \max\{0, 1 - w^T x_j y_j\} = f(\PP, w) \end{align*} where $f(\PP, w)$ is the objective function of the two-class L2-norm SVM as in \eqref{eqn:svm}. \section{Related Work} \label{sec:RelatedWork} Training a canonical Support Vector Machine (SVM) requires $\BigO(n^3)$ time and $\BigO(n^2)$ space \cite{tsang2005core}, where $n$ is the number of training points, which may be impractical for certain applications. Work by Tsang et al.~\cite{tsang2005core} investigated computationally-efficient coreset approximations to the SVM problem, termed Core Vector Machines (CVMs), and leveraged existing coreset methods for the Minimum Enclosing Ball (MEB) problem \cite{agarwal2005geometric,badoiu2003smaller}. The authors propose a method that reduces the training time required for the two-class L2-SVM to $\BigO(n)$ and the space to an expression that is (surprisingly) independent of $n$. Similar geometric approaches based on convex hulls and extreme points were investigated by \cite{nandan2014fast}.
Since the SVM problem is inherently a quadratic optimization problem, prior work has investigated approximations to the quadratic programming problem using the Frank-Wolfe algorithm or Gilbert's algorithm \cite{clarkson2010coresets}. Another line of research reduces the SVM problem to the polytope distance problem \cite{gartner2009coresets}; the authors establish lower and upper bounds for the polytope distance problem and use Gilbert's algorithm to train an SVM in linear time. Har-Peled et al.\ constructed coresets to approximate the maximum margin separation, i.e., a hyperplane that separates all of the input data with margin larger than $(1-\epsilon)\rho^*$, where $\rho^*$ is the best achievable margin \cite{har2007maximum}. They study the running time of a simple coreset algorithm for binary and ``structured-output'' classification and the use of coresets for active learning and noise-tolerant learning in the agnostic setting. There have also been probabilistic approaches to the SVM problem, most notably the works of Clarkson et al.\ \cite{clarkson2012sublinear} and Hazan et al.~\cite{hazan2011beating}. Hazan et al.\ used a primal-dual approach combined with Stochastic Gradient Descent (SGD) in order to learn linear SVMs in sublinear time. They propose the SVM-SIMBA approach, which returns an $\epsilon$-approximate solution to the SVM problem with hinge loss as the objective function with probability at least $1/2$. The key idea in their method is to access single features of the training vectors rather than the entire vectors themselves. However, their method is nondeterministic and returns the correct $\epsilon$-approximation only with probability greater than a constant ($1/2$). Clarkson et al.~\cite{clarkson2012sublinear} present sublinear-time (in the size of the input) approximation algorithms for some optimization problems such as training linear classifiers (e.g., perceptron) and finding the MEB.
They introduce a technique originally applied to the perceptron algorithm and extend it to the related problems of MEB and SVM in the hard-margin or L2-SVM formulations. The drawback of their method is that the approximation can be successfully computed only with high probability. Pegasos~\cite{shalev2007pegasos} employs primal estimated sub-gradient descent with runtime independent of the input size. Joachims presents an alternative approach to training SVMs in linear time based on the cutting-plane method, which hinges on an alternative formulation of the SVM optimization problem \cite{joachims2006training}. He shows that the cutting-plane algorithm can be leveraged to train SVMs in $\BigO(sn)$ time for classification and $\BigO(sn \log n)$ time for ordinal regression, where $s$ is the average number of non-zero features. However, this approach does not trivially extend to SVMs with kernels and is not sublinear with respect to the number of points $n$. \section{Results} \label{sec:results} We evaluate the performance of our coreset construction algorithm on two real-world, publicly available datasets \cite{Lichman:2013} and compare its effectiveness to Core Vector Machines (CVMs), another approximation algorithm for SVM \cite{tsang2005core}, and to categorical uniform sampling, i.e., sampling uniformly a number of data points from each label $y_i$. In particular, we selected a set of $M = 10$ subsample sizes $S_1,\ldots,S_\text{M} \subset [n]$ for each dataset of size $n$ and ran all three of the aforementioned coreset construction algorithms to construct and evaluate subsamples of sizes $S_1,\ldots,S_\text{M}$. The results were averaged across $100$ trials for each subsample size. Our experiments were implemented in Python and performed on a 3.2GHz i7-6900K (8 cores total) machine.
\subsection{Credit Card Dataset} The Credit Card dataset\footnote{\url{https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients}} contains $n = 30,000$ entries, each consisting of $d = 24$ features of a client, such as education and age. The goal of the classification task is to predict whether a client will default on a payment. Figure~\ref{fig:CreditCard} depicts the accuracy of the subsamples generated by each algorithm and the computation time required to construct the subsamples. Our results show that for a given subsample size, our coreset construction algorithm runs more efficiently and achieves significantly higher approximation accuracy with respect to the SVM objective function in comparison to uniform sampling and CVM. Note that the relative error is still relatively high since the dataset is not very large; the benefits of coresets become even more visible for significantly larger datasets. As can be seen in Fig.~\ref{fig:CreditCardSensitivity}, our coreset sampling process noticeably differs from uniform sampling, and some data points are sampled with much higher probability than others. This is in line with the idea of sampling more important points with higher probability. \begin{figure*}[!htb] \centering \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1\textwidth]{figures/CreditCardCoresetCVX.png} a) \end{minipage} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1\textwidth]{figures/CreditCardCoresetCVXtime.png} b) \end{minipage} \caption{a) Accuracy of each algorithm with respect to the SVM objective function \eqref{eqn:svm}.
b) Computation time required to train the SVM using subsamples relative to using all of the input data.} \label{fig:CreditCard} \end{figure*} \begin{figure*}[!htb] \centering \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=1\textwidth]{figures/sens-1-CreditCardCoreset.png} a) \end{minipage} \begin{minipage}{0.48\textwidth} \centering \includegraphics[width=1\textwidth]{figures/sens-2-CreditCardCoreset.png} b) \end{minipage} \caption{Sampling distribution of points based on the computed upper bounds on sensitivity.} \label{fig:CreditCardSensitivity} \end{figure*} \subsection{Skin Dataset} The Skin dataset\footnote{\url{https://archive.ics.uci.edu/ml/datasets/Skin+Segmentation/}} consists of $n = 245,057$ data points with $d = 4$ attributes per point. The attributes include random samples of B,G,R values from face images, and the goal of the classification task is to determine whether these samples are skin or non-skin samples. Our coreset outperforms uniform sampling for all coreset sizes (cf.~Fig.~\ref{fig:SkinData}), while the computation time of coreset generation and SVM training remains a fraction of that for the original dataset. Due to its poor performance, CVM is omitted from the reported results. Note that this dataset is significantly larger than the Credit Card dataset, and thus the advantages in error and, most significantly, runtime are more prominent. \begin{figure*}[!htb] \centering \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1\textwidth]{figures/SkinCoreset.png} a) \end{minipage} \begin{minipage}{0.49\textwidth} \centering \includegraphics[width=1\textwidth]{figures/SkinCoresettime.png} b) \end{minipage} \caption{a) Accuracy of each algorithm with respect to the SVM objective function \eqref{eqn:svm}. b) Computation time required to train the SVM using subsamples relative to using all of the input data. CVM is omitted due to poor performance.} \label{fig:SkinData} \end{figure*}
\section{Introduction} We consider estimation of a pathwise differentiable real valued target parameter based on observing $n$ independent and identically distributed observations $O_1,\ldots,O_n$ with a data distribution $P_0$ known to belong to a highly nonparametric statistical model ${\cal M}$. A target parameter $\Psi:{\cal M}\rightarrow \hbox{${\rm I\kern-.2em R}$}$ is a mapping that maps a possible data distribution $P\in {\cal M}$ into a real number, while $\psi_0=\Psi(P_0)$ represents the answer to the question of interest about the data experiment. The canonical gradient $D^*(P)$ of the pathwise derivative of the target parameter at $P$ characterizes an asymptotically efficient estimator among the class of regular estimators \citep{Bickeletal97}: an estimator $\psi_n$ is asymptotically efficient at $P_0$ if and only if it is asymptotically linear at $P_0$ with influence curve $D^*(P_0)$: \[ \psi_n-\psi_0=\frac{1}{n}\sum_{i=1}^n D^*(P_0)(O_i)+o_P(n^{-1/2}).\] The target parameter depends on the data distribution $P$ through a parameter $Q=Q(P)$, while the canonical gradient $D^*(P)$ possibly also depends on another nuisance parameter $G(P)$: $D^*(P)=D^*(Q(P),G(P))$. Both of these nuisance parameters are chosen so that they can be defined as minimizers of the expectation of a specific loss function: $P L_1(Q(P))=\min_{P_1\in {\cal M}}PL_1(Q(P_1))$ and $PL_2(G(P))=\min_{P_1\in {\cal M}}P L_2(G(P_1))$, where we use the notation $Pf\equiv \int f(o)dP(o)$. We assume that the parameter spaces $Q({\cal M})=\{Q(P):P\in {\cal M}\}$ and $G({\cal M})=\{G(P):P\in {\cal M}\}$ for these nuisance parameters $Q$ and $G$ are contained in the set of multivariate cadlag functions with sectional variation norm $\parallel \cdot\parallel_v^*$ \citep{Gill&vanderLaan&Wellner95} bounded by a constant (this norm will be defined in the next section).
We consider a targeted minimum loss-based (substitution) estimator $\Psi(Q_n^*)$ \citep{vanderLaan&Rubin06,vanderLaan08, vanderLaan&Rose11} of the target parameter that uses as initial estimators of the nuisance parameters $(Q_0,G_0)$ the highly adaptive lasso minimum loss-based estimators (HAL-MLE) $(Q_n,G_n)$ defined by minimizing the empirical mean of the loss over the parameter space \citep{Benkeser&vanderLaan16}. Since the HAL-MLEs converge at a rate faster than $n^{-1/2}$ w.r.t.\ the loss-based quadratic dissimilarities (which corresponds to a rate faster than $n^{-1/4}$ for estimation of $Q_0$ and $G_0$), this HAL-TMLE has been shown to be asymptotically efficient under weak regularity conditions \citep{vanderLaan15}. Statistical inference could therefore be based on the normal limit distribution, with the asymptotic variance estimated by an estimator of the variance of the canonical gradient. In that case, however, inference ignores the potentially very large contribution of the higher order remainder, which in finite samples can easily dominate the first order term (the empirical mean of the efficient influence curve) when the nuisance parameter spaces are large (e.g., the dimension of the data is large and the model is nonparametric). In this article we present four methods for inference that use the nonparametric bootstrap to estimate the finite sample distribution of the HAL-TMLE, or a conservative distribution dominating its true finite sample distribution. \subsection{Organization} Firstly, in Section \ref{sectformulation} we formulate the estimation problem and motivate the challenge for statistical inference. We also provide an easy to implement finite sample highly conservative confidence interval whose width converges to zero at the usual square-root sample size rate, but is not asymptotically sharp.
We use this result to demonstrate the potential impact of the dimension of the data and the sectional variation norm bound on the width of a finite sample confidence interval. In Section \ref{sectnp} we present the nonparametric bootstrap estimator of the actual sampling distribution of the HAL-TMLE, which thus incorporates estimation of its higher order stochastic behavior and can thereby be expected to outperform Wald-type confidence intervals. We prove that this nonparametric bootstrap is asymptotically consistent for the optimal normal limit distribution. Our results also prove that the nonparametric bootstrap preserves the asymptotic behavior of the HAL-MLEs of our nuisance parameters $Q$ and $G$, providing further evidence for good performance of the nonparametric bootstrap. In the second subsection of Section \ref{sectnp} we propose to bootstrap the exact second-order expansion of the HAL-TMLE. This results in a very direct estimator of the exact sampling distribution of the HAL-TMLE, although it comes at the cost of not respecting that the HAL-TMLE is a substitution estimator. Importantly, our results demonstrate that the approximation error of the two nonparametric bootstrap estimates of the true finite sample distribution of the HAL-TMLE is mainly driven by the approximation error of the nonparametric bootstrap for estimating the finite sample distribution of a well behaved empirical process. We suggest that these two nonparametric bootstrap methods are, among our proposals, the preferred methods for {\em accurate} inference, since they are not designed to be conservative. In Section \ref{sectupperb} we upper-bound the absolute value of the exact remainder for the second-order expansion of the HAL-TMLE in terms of a specified function of the loss-based dissimilarities for the HAL-MLEs of the nuisance parameters $Q$ and $G$.
The resulting finite sample second-order expansion is highly conservative but still asymptotically sharp, converging to the actual normal limit distribution of the HAL-TMLE (from above). We then propose to use the nonparametric bootstrap to estimate this conservative finite sample distribution. In the Appendix Section \ref{sectsupupperb} we further upper bound the previously obtained conservative finite sample expansion by taking a supremum over a set of possible realizations of the HAL-MLEs that will contain the true $Q_0$ and $G_0$ with probability tending to 1, where this probability is controlled/set by the user. We also propose a simplified conservative approximation of this supremum which is easy to implement. Even though these two sampling distributions are even more conservative, they are still asymptotically sharp, so that the corresponding nonparametric bootstrap method also converges to the optimal normal limit distribution. In Section \ref{sectexample} we demonstrate our methods for two examples involving a nonparametric model and a specified target parameter (the average treatment effect and the integral of the square of the data density). We conclude with a discussion in Section \ref{sectdisc}. Some of the technical results and proofs have been deferred to the Appendix, while the overall proofs are presented in the main part of the article. \subsection{Why does it work, and how it applies to adaptive TMLE} The key behind the validity of the nonparametric bootstrap for estimation of the sampling distribution of the HAL-MLE and HAL-TMLE is that the HAL-MLE is an actual MLE, thereby avoiding data adaptive trade-off of bias and variance as naturally achieved with cross-validation. However, even though the inference is based on such a non-adaptive HAL-TMLE, one can still use a highly adaptive HAL-TMLE as the point estimate in our reported confidence intervals.
Specifically, one can use our confidence intervals with the point estimate defined as a TMLE using a super-learner \citep{vanderLaan&Dudoit03,vanderVaart&Dudoit&vanderLaan06,vanderLaan&Dudoit&vanderVaart06,vanderLaan&Polley&Hubbard07,Chpt3} that includes the HAL-MLE as one of the candidate estimators in its library. By the oracle inequality for the cross-validation selector, such a super-learner will improve on the HAL-MLE, so that the proposed inference based on the non-adaptive HAL-TMLE will be more conservative. In addition, our confidence intervals can be used with the point estimate defined by adaptive TMLEs incorporating additional refinements such as collaborative TMLE \citep{vanderLaan:Gruber10,Gruber:vanderLaan10a,Stitelman:vanderLaan10,vanderLaan:Rose11,Wang:Rose:vanderLaan11,gruber2012ijb}; cross-validated TMLE \citep{Zheng&vanderLaan12,vanderLaan&Rose11}; higher order TMLE \citep{Carone2014techreport,Caroneetal17,diaz2016ijb}; and double robust inference TMLE \citep{vanderLaan14a,Benkeseretal17}. Again, such refinements generally improve the finite sample accuracy of the estimator, so that they will improve the coverage of the confidence intervals based on the non-adaptive HAL-TMLE. Our confidence intervals can also be used if the statistical model ${\cal M}$ has no known bound on the sectional variation norm of the nuisance parameters. In that case, we recommend selecting such a bound with cross-validation (just as one selects the $L_1$-norm penalty in Lasso regression with cross-validation), which, by the oracle inequality for the cross-validation selector \citep{vanderLaan&Dudoit03,vanderVaart&Dudoit&vanderLaan06,vanderLaan&Dudoit&vanderVaart06}, is guaranteed to be larger than the sectional variation norm of the true nuisance parameters $(Q_0,G_0)$ with probability tending to 1.
In that case, the confidence intervals will still be asymptotically correct and incorporate most of the higher order variability, but ignore the potential finite sample underestimation of the true sectional variation norm. In addition, in that case the inference adapts to the underlying unknown sectional variation norm of the true nuisance parameters $(Q_0,G_0)$. We plan to evaluate the practical performance of our methods in the near future. \subsection{Relation to literature on higher order influence functions} J. Pfanzagl \citep{pfanzagl1985} introduced the notion of higher order pathwise differentiability of finite dimensional target parameters and corresponding higher order gradients. He used these higher order expansions of the target parameter to define higher order one-step estimators that might result in asymptotically linear estimators where regular one-step estimators \citep{levit1975,ibragimov1981,pfanzagl1982,bickel1982annals} might fail to behave well due to a too large second-order remainder. This is the perspective that inspired the seminal contributions of J. Robins, L. Li, E. Tchetgen \& A. van der Vaart (e.g., \citealp{robins2008imsnotes,robins2009metrika,li2011statsprobletters,vandervaart2014statscience}). They develop a rigorous theory for (e.g.) second-order one-step estimators, including the typical case that the parameter is not second-order pathwise differentiable. They allow the case that the second-order remainder asymptotically dominates the first order term, resulting in estimators and confidence intervals whose widths converge to zero at a slower rate than $n^{-1/2}$. Their second-order expansion uses approximations of ``would be'' second-order gradients, where the approximation results in a bias term they termed the representation error.
Unfortunately, this representation error, due to the lack of second order pathwise differentiability, obstructs the construction of estimators with a third order remainder (and thereby asymptotic linearity under the condition that a third order term is $o_P(n^{-1/2})$). These second-order one-step estimators involve careful selection of tuning/smoothing parameters for approximating the ``would be'' second-order gradient in order to obtain an optimal bias-variance trade-off. These authors applied their theory to nonparametric estimation of a mean with missing data and of the integral of the square of the density. The higher-order expansions that come with the construction of higher order one-step estimators can be directly incorporated in the construction of confidence intervals, thereby possibly leading to improved finite sample coverage. These higher order expansions rely on hard to estimate objects such as a multivariate density in a denominator, giving rise to enormous practical challenges in constructing robust higher order confidence intervals, as noted in the above articles. \cite{pfanzagl1982} already pointed out that one-step estimators, and to an even larger degree higher order one-step estimators, fail to respect global known bounds implied by the model and target parameter mapping, by adding to an initial estimator an empirical mean of a first order influence function and higher order U-statistics (i.e., higher order empirical averages) of higher order influence functions. He suggested that to circumvent this problem one would have to carry out the updating process in the model space instead of in the parameter space. This is precisely what is carried out by the general TMLE framework \citep{vanderLaan&Rubin06,vanderLaan&Rose11}, and higher order TMLEs based on approximate higher order influence functions were developed in \citep{Carone2014techreport,diaz2016ijb,Caroneetal17}.
These higher order TMLEs represent the TMLE-analogue of higher order one-step estimators, just as the regular TMLE is an analogue of the regular one-step estimator. These TMLEs automatically satisfy the known bounds and thus never produce nonsensical output such as a negative number for a probability. The higher order TMLE is just another TMLE, but one using a least favorable submodel with an extra parameter, thereby providing a crucial safeguard against erratic behavior due to estimation of the higher order influence functions, while also being able to utilize the C-TMLE framework to select the tuning parameters for approximating these higher order influence functions. The approach in this article for the construction of higher order confidence intervals is quite different from the construction of higher order one-step estimators or higher order TMLE and using the corresponding higher order expansion for inference. To start with, we use an asymptotically efficient HAL-TMLE so that we preserve the $n^{-1/2}$-rate of convergence, asymptotic normality and efficiency, even in nonparametric models that only assume that the true nuisance parameters have finite sectional variation norm. As point estimate we can still use an adaptive HAL-TMLE which can, for example, include the higher-order HAL-TMLE refinement, beyond the refinements mentioned above. However, for inference, we avoid the delicate higher order expansions based on approximate higher order gradients, and instead use the exact second-order expansion $\Psi(Q_n^*)-\Psi(Q_0)=(P_n-P_0)D^*(Q_n^*,G_n)+R_{20}(Q_n^*,G_n,Q_0,G_0)$ implied by the definition of the exact second-order remainder $R_{20}()$ (\ref{exactremainder}), which thus incorporates any higher order term. In addition, by using the robust HAL-MLEs as estimators of $Q_0,G_0$, the HAL-TMLE is not only efficient but one can also use the nonparametric bootstrap to estimate its sampling distribution.
We then use the nonparametric bootstrap to estimate the sampling distribution of the HAL-TMLE itself, or of its exact expansion, or of an exact conservative expansion in which $R_{20}()$ is replaced by a robust upper bound which only depends on well behaved empirical processes for which the nonparametric bootstrap works (again, due to using the HAL-MLE). Our confidence intervals have width of order $n^{-1/2}$ and are asymptotically sharp, converging to the optimal normal distribution based confidence interval as sample size increases. In addition, they are easy to implement as a by-product of the computation of the HAL-TMLE itself. \section{General formulation of statistical estimation problem and motivation for finite sample inference}\label{sectformulation} \subsection{Statistical model and target parameter} Let $O_1,\ldots,O_n$ be $n$ i.i.d. copies of a random variable $O\sim P_0\in {\cal M}$. Let $P_n$ be the empirical probability measure of $O_1,\ldots,O_n$. Let $\Psi:{\cal M}\rightarrow\hbox{${\rm I\kern-.2em R}$}$ be a real valued parameter that is pathwise differentiable at each $P\in {\cal M}$ with canonical gradient $D^*(P)$. That is, given a collection of one dimensional submodels $\{P_{\epsilon}^S:\epsilon\}\subset {\cal M}$ through $P$ at $\epsilon =0$ with score $S$, for each of these submodels the derivative $\left . \frac{d}{d\epsilon}\Psi(P_{\epsilon}^S)\right |_{\epsilon =0}$ can be represented as $ E_P D(P)(O)S(O)$. The latter is an inner product of a gradient $D(P)\in L^2_0(P)$ with the score $S$ in the Hilbert space $L^2_0(P)$ of functions of $O$ with mean zero (under $P$) endowed with inner product $\langle S_1,S_2\rangle_P=P S_1S_2$. Let $\parallel f\parallel_P\equiv \sqrt{\int f(o)^2dP(o)}$ be the Hilbert space norm. Such an element $D(P)\in L^2_0(P)$ is called a gradient of the pathwise derivative of $\Psi$ at $P$.
The canonical gradient $D^*(P)$ is the unique gradient that is an element of the tangent space defined as the closure of the linear span of the collection of scores generated by this family of submodels. Define the exact second-order remainder \begin{equation}\label{exactremainder} R_2(P,P_0)=\Psi(P)-\Psi(P_0)+P_0D^*(P),\end{equation} where $P_0D^*(P)=(P_0-P)D^*(P)$ since $D^*(P)$ has mean zero under $P$. Let $Q:{\cal M}\rightarrow Q({\cal M})$ be a function valued parameter so that $\Psi(P)=\Psi_1(Q(P))$ for some $\Psi_1$. For notational convenience, we will abuse notation by referring to the target parameter with $\Psi(Q)$ and $\Psi(P)$ interchangeably. Let $G:{\cal M}\rightarrow G({\cal M})$ be a function valued parameter so that $D^*(P)=D^*_1(Q(P),G(P))$ for some $D^*_1$. Again, we will use the notation $D^*(P)$ and $D^*(Q,G)$ interchangeably. Suppose that $O\in [0,\tau]\subset\hbox{${\rm I\kern-.2em R}$}^d_{\geq 0}$ is a $d$-variate random variable with support contained in a $d$-dimensional cube $[0,\tau]$. Let $D_d[0,\tau]$ be the Banach space of $d$-variate real valued cadlag functions endowed with a supremum norm $\parallel\cdot\parallel_{\infty}$ \citep{Neuhaus71}. Let $L_1:Q({\cal M})\rightarrow D_d[0,\tau]$ and $L_2:G({\cal M})\rightarrow D_d[0,\tau]$ be loss functions that identify the true $Q_0$ and $G_0$ in the sense that $P_0L_1(Q_0)=\min_{Q\in Q({\cal M})}P_0L_1(Q)$ and $P_0L_2(G_0)=\min_{G\in G({\cal M})} P_0L_2(G)$. Let $d_{01}(Q,Q_0)=P_0L_1(Q)-P_0L_1(Q_0)$ and $d_{02}(G,G_0)=P_0L_2(G)-P_0L_2(G_0)$ be the loss-based dissimilarities for these two nuisance parameters.
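As a concrete sanity check (ours, not part of the original text), using the convention $R_2(P,P_0)=\Psi(P)-\Psi(P_0)+P_0D^*(P)$, which matches the expansion $\Psi(Q_n^*)-\Psi(Q_0)=(P_n-P_0)D^*(Q_n^*,G_n)+R_{20}$ quoted earlier, one can work out the remainder for two standard target parameters:

```latex
% Mean parameter: \Psi(P) = E_P O with canonical gradient
% D^*(P)(O) = O - \Psi(P) in the nonparametric model. Then
% P_0 D^*(P) = \Psi(P_0) - \Psi(P), so the remainder vanishes exactly:
\begin{align*}
R_2(P,P_0) = \Psi(P)-\Psi(P_0) + P_0 D^*(P) = 0.
\end{align*}
% Integral of the squared density: \Psi(P) = \int p^2(x)\,dx with
% canonical gradient D^*(P)(O) = 2\{p(O) - \Psi(P)\}. Then
\begin{align*}
R_2(P,P_0) &= \int p^2\,dx - \int p_0^2\,dx
              + 2\int \{p(x)-\Psi(P)\}\,p_0(x)\,dx \\
           &= -\int \{p(x)-p_0(x)\}^2\,dx ,
\end{align*}
% a genuinely second-order (quadratic in p - p_0) remainder.
```

The second example is one of the two target parameters treated in Section \ref{sectexample}, and illustrates why $R_2$ behaves as a product of estimation errors.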
{\bf Loss functions and canonical gradient have a uniformly bounded sectional variation norm:} We assume that these loss functions and the canonical gradient map into functions in $D_d[0,\tau]$ with a sectional variation norm bounded by some universal finite constant: \begin{eqnarray} M_1\equiv \sup_{P\in {\cal M}}\parallel L_1(Q(P))\parallel_v^* &<&\infty\nonumber \\ M_2\equiv \sup_{P\in {\cal M}}\parallel L_2(G(P))\parallel_v^*&<& \infty\nonumber \\ M_3\equiv \sup_{P\in {\cal M}}\parallel D^*(P)\parallel_v^*&<& \infty. \label{sectionalvarbound} \end{eqnarray} For a given function $F\in D_d[0,\tau]$, we define the sectional variation norm as follows. For a given subset $s\subset\{1,\ldots,d\}$, let $F_s(x_s)=F(x_s,0_{-s})$ be the $s$-specific section of $F$ that sets the coordinates outside the subset $s$ equal to 0, where we used the notation $(x_s,0_{-s})$ for the vector whose $j$-th component equals $x_j$ if $j\in s$ and $0$ otherwise. The sectional variation norm is now defined by \[ \parallel F\parallel_v^*=\mid F(0)\mid +\sum_{s\subset\{1,\ldots,d\}}\int_{(0_s,\tau_s]}\mid dF_s(u_s)\mid ,\] where the sum is over all subsets $s$ of $\{1,\ldots,d\}$. Note that $\int_{(0_s,\tau_s]}\mid dF_s(u_s)\mid $ is the standard variation norm of the measure $dF_s$ generated by its $s$-specific section $F_s$ on the $\mid s\mid$-dimensional edge $(0_s,\tau_s]\times \{0_{-s}\}$ of the $d$-dimensional cube $[0,\tau]$. Thus, the sectional variation norm is the sum of the variation of $F$ itself and of all its $s$-specific sections, plus $F(0)$. 
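For intuition, the sectional variation norm can be approximated on a grid by replacing each measure $dF_s$ by finite increments; the sketch below (an illustration, not code from the paper) does this for $d=2$:

```python
import numpy as np

def sectional_variation_norm_2d(F):
    """Approximate || F ||_v^* for a bivariate F given on a grid with
    origin (0, 0): |F(0)| plus the variation of each section dF_s."""
    norm = abs(F[0, 0])                                # |F(0)|
    norm += np.abs(np.diff(F[:, 0])).sum()             # section s = {1}
    norm += np.abs(np.diff(F[0, :])).sum()             # section s = {2}
    d2 = np.diff(np.diff(F, axis=0), axis=1)           # increments of dF_{12}
    norm += np.abs(d2).sum()                           # section s = {1, 2}
    return norm

x = np.linspace(0, 1, 21)
F = x[:, None] + x[None, :]        # F(x1, x2) = x1 + x2 on [0, 1]^2
```

For this $F$ each one-dimensional section has variation $1$ and the two-way increments $dF_{12}$ vanish, so the sectional variation norm is $2$.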
We also note that any function $F\in D_d[0,\tau]$ with finite sectional variation norm (i.e., $\parallel F\parallel_v^*<\infty$) can be represented as follows \citep{Gill&vanderLaan&Wellner95}: \begin{equation}\label{Frepresentation} F(x)=F(0)+\sum_{s\subset\{1,\ldots,d\}}\int_{(0_s,x_s]} dF_s(u_s) .\end{equation} As utilized in \citep{vanderLaan15} to define the HAL-MLE, since $\int_{(0_s,x_s]}dF_s(u_s)=\int I_{u_s\leq x_s}dF_s(u_s)$, this representation shows that $F$ can be written as an infinite linear combination of $s$-specific indicator basis functions $x\rightarrow I_{u_s\leq x_s}$ indexed by a cut-off $u_s$, across all subsets $s$, where the coefficients in front of the indicators are equal to the infinitesimal increments $dF_s(u_s)$ of $F_s$ at $u_s$. For discrete measures $F_s$ this integral becomes a finite linear combination of such $\mid s\mid$-way indicators. One could think of this representation as a saturated model of a function $F$ in terms of single way indicators, two-way indicators, etc, till the final $d$-way indicator basis functions. For a function $f\in D_d[0,\tau]$, we also define the supremum norm $\parallel f\parallel_{\infty}=\sup_{x\in [0,\tau]}\mid f(x)\mid$. {\bf Assuming that parameter spaces for $Q$ and $G$ are cartesian products of sets of cadlag functions with bounds on sectional variation norm:} Although the above bounds $M_1,M_2,M_3$ are the only relevant bounds for the asymptotic performance of the HAL-MLE and HAL-TMLE, for practical formulation of a model ${\cal M}$ one might prefer to state the sectional variation norm restrictions on the parameters $Q$ and $G$ themselves. 
For that purpose, let us assume that $Q=(Q_1,\ldots,Q_{K_1})$ for variation independent parameters $Q_k$ that are themselves $m_{1k}$-dimensional cadlag functions on $[0,\tau_{1k}]\subset \hbox{${\rm I\kern-.2em R}$}^{m_{1k}}_{\geq 0}$ with sectional variation norm bounded by some upper bound $C_{1k}^u$ and lower bound $C_{1k}^l$, $k=1,\ldots,K_1$, and similarly for $G=(G_1,\ldots,G_{K_2})$ with sectional variation norm bounds $C_{2k}^u$ and $C_{2k}^l$, $k=1,\ldots,K_2$. Typically, we have $C_{1k}^l=0$. Specifically, let \begin{eqnarray*} {\cal F}_{1k}\equiv Q_k({\cal M})\\ {\cal F}_{2k}\equiv G_k({\cal M}), \end{eqnarray*} denote the parameter spaces for $Q_k$ and $G_k$, and assume that these parameter spaces ${\cal F}_{jk}$ are contained in the class ${\cal F}_{jk}^{np}$ of $m_{jk}$-variate cadlag functions with sectional variation norm bounded from above by $C_{jk}^u$ and from below by $C_{jk}^l$, $k=1,\ldots,K_j$, $j=1,2$. These bounds $C_1^u=(C_{1k}^u:k)$ and $C_2^u=(C_{2k}^u:k)$ will then imply bounds $M_1,M_2,M_3$. In such a setting, $L_1(Q)$ would be defined as a sum loss function $L_1(Q)=\sum_{k=1}^{K_1}L_{1k}(Q_k)$ and $L_2(G)=\sum_{k=1}^{K_2}L_{2k}(G_k)$. We also define the vector losses ${\bf L}_1(Q)=(L_{1k}(Q_k):k=1,\ldots,K_1)$, ${\bf L}_2(G)=(L_{2k}(G_k):k=1,\ldots,K_2)$, and corresponding vector dissimilarities ${\bf d}_{01}(Q,Q_0)=(d_{01,k}(Q_k,Q_{k0}):k=1,\ldots,K_1)$ and ${\bf d}_{02}(G,G_0)=(d_{02,k}(G_k,G_{k0}):k=1,\ldots,K_2)$. In a typical case we would have that the parameter space ${\cal F}_{jk}$ of $Q_k$ ($j=1$) or $G_k$ ($j=2$) would be equal to \begin{equation}\label{calFmodel} {\cal F}_{jk,A_{jk}}^{np}\equiv \{F\in {\cal F}_{jk}^{np}: dF_s(u_s)=I_{(s,u_s)\in A_{jk}}dF_s(u_s)\}, \end{equation} for some set $A_{jk}$ of possible values for $(s,u_s)$, $k=1,\ldots,K_j$, $j=1,2$, where one evaluates this restriction on $F$ in terms of the representation (\ref{Frepresentation}).
Note that we used the shorthand notation $g(x)=I_{x\in A} g(x)$ for $g$ being zero for $x\not \in A$. We will make the convention that if $A$ excludes $0$, then it corresponds with assuming $F(0)=0$. This subset ${\cal F}_{1k,A_{1k}}^{np}$ of all cadlag functions ${\cal F}_{1k}^{np}$ with sectional variation norm smaller than $C_{1k}^u$ further restricts the support of these functions to a set $A_{1k}$. For example, $A_{1k}$ might set $dF_s=0$ for subsets $s$ of size larger than $3$ for all values $u_s\in (0_s,\tau_s]$, in which case one assumes that the nuisance parameter $Q_k$ can be represented as a sum over all subsets $s$ of size $1,2$ and $3$ of a function of the variables indicated by $s$. In order to allow modeling of monotonicity (e.g., the nuisance parameter $Q_k$ is an actual cumulative distribution function), we also allow that this set restricts $dF_s(u_s)\geq 0$ for all $(s,u_s)\in A_{jk}$. We will denote the latter parameter space with \begin{equation}\label{calFmodelplus} {\cal F}_{jk,A_{jk}}^{np,+}=\{F\in {\cal F}_{jk}^{np}: dF_s(u_s)=I_{(s,u_s)\in A_{jk}}dF_s(u_s), dF_s\geq 0, F(0)\geq 0\}. \end{equation} For the parameter space (\ref{calFmodelplus}) of monotone functions we allow that the sectional variation norm is known by setting $C_{jk}^u=C_{jk}^l$ (e.g., for the class of cumulative distribution functions we would have $C_{jk}^u=C_{jk}^l=1$), while for the parameter space (\ref{calFmodel}) of cadlag functions with sectional variation norm between $C_{jk}^l$ and $C_{jk}^u$ we assume $C_{jk}^l<C_{jk}^u$. Although not at all necessary, for the analysis of our proposed nonparametric bootstrap sampling distributions {\em we assume this extra structure} that ${\cal F}_{jk}={\cal F}_{jk,A_{jk}}^{np}$ or ${\cal F}_{jk}={\cal F}_{jk,A_{jk}}^{np,+}$ for some set $A_{jk}$, $k=1,\ldots,K_j$, $j=1,2$.
This extra structure allows us to obtain concrete results for the validity of the nonparametric bootstrap for the HAL-MLEs $Q_n$ and $G_n$ defined below, and thereby the HAL-TMLE (see Appendix \ref{AppendixB}). In addition, the implementation of the HAL-MLE for such a parameter space ${\cal F}_{jk,A_{jk}}^{np}$ still corresponds with fitting a linear combination of indicator basis functions $I_{u_s\leq x_s}$ under the sole constraint that the sum of the absolute value of the coefficients is bounded by $C_{jk}^u$ (and possibly from below by $C_{jk}^l$), and possibly that the coefficients are non-negative, where the set $A_{jk}$ implies the set of indicator basis functions that are included. Specifically, in the case that the nuisance parameter is a conditional mean we can compute the HAL-MLE with standard lasso regression software \citep{Benkeser&vanderLaan16}. Therefore, this restriction on our set of models allows straightforward computation of its HAL-MLEs and corresponding HAL-TMLE. Thus, a typical statistical model would be of the form ${\cal M}=\{P: Q_{k_1}(P)\in {\cal F}_{1k_1,A_{1k_1}}^{np},G_{k_2}(P)\in {\cal F}_{2k_2,A_{2k_2}}^{np},k_1,k_2\}$ for sets $A_{1k_1},A_{2k_2}$, but the model might include additional restrictions on $P$ beyond restricting the variation independent components of $Q(P)$ and $G(P)$ to be elements of these sets ${\cal F}_{jk_j,A_{jk_j}}^{np}$, as long as their parameter spaces equal these sets ${\cal F}_{jk_j,A_{jk_j}}^{np}$ or ${\cal F}_{jk_j,A_{jk_j}}^{np,+}$. \paragraph{Remark regarding creating nuisance parameters with parameter space of type (\ref{calFmodel}) or (\ref{calFmodelplus}):} In our first example we have a nuisance parameter ${G}(W)=E_P(A\mid W)$ that is not just assumed to be cadlag and have bounded sectional variation norm but is also bounded between $\delta$ and $1-\delta$ for some $\delta>0$. This means that the parameter space for this ${G}$ is not exactly of type (\ref{calFmodel}). 
This is easily resolved by reparameterizing ${G}=\delta +(1-2\delta)\mbox{expit}(f(W))$ where $f$ can be any cadlag function with sectional variation norm bounded by some constant. One now defines the nuisance parameter as $f({G})$ instead of ${G}$ itself. Similarly, in our second example, $Q$ is the data density $p$ itself, which is assumed to be bounded from below by a $\delta\geq 0$ and from above by an $M<\infty$, beyond being cadlag and having a bound on the sectional variation norm. In this case, we could parameterize $p$ as $p(o)=c(f)\{ \delta+(M-\delta)\mbox{expit}(f(o))\}$, where $c(f)$ is the normalizing constant guaranteeing that $\int p(o)d\mu(o)=1$. One now defines the nuisance parameter as $f(Q)$ instead of $Q$ itself. These just represent a few examples showcasing that one can reparametrize the natural nuisance parameters $Q$ and $G$ in terms of nuisance parameters that have a parameter space of the form (\ref{calFmodel}) or (\ref{calFmodelplus}). These representations are actually natural steps for the implementation of the HAL-MLE since they allow us now to minimize the empirical risk over a linear model with the sole constraint that the sum of absolute value of coefficients is bounded (and possibly coefficients are non-negative). {\bf Bounding the exact second-order remainder in terms of loss-based dissimilarities:} Let \[ R_2(P,P_0)=R_{20}(Q,G,Q_0,G_0)\] for some mapping $R_{20}()=R_{2P_0}()$ possibly indexed by $P_0$. We often have that $R_{20}(Q,G,Q_0,G_0)$ is a sum of second-order terms of the types $\int (H_1(Q)-H_1(Q_0))^2 f(P,P_0)dP_0$, $\int (H_2(G)-H_2(G_0))^2 f(P,P_0)dP_0$ and $\int (H_1(Q)-H_1(Q_0))(H_2(G)-H_2(G_0))f(P,P_0) dP_0$ for certain specifications of $H_1, H_2 $ and $f()$. Specifically, in all our applications it has the form $\int R_2(Q,G,Q_0,G_0) dP_0$ for some quadratic function $R_2(Q,G,Q_0,G_0)$. 
If it only involves terms of the third type, then $R_2(P,P_0)$ has a double robust structure allowing the construction of double robust estimators whose consistency relies on consistent estimation of either $Q$ or $G$. In particular, in that case the HAL-TMLE is double robust as well. We assume the following upper bound: \begin{equation}\label{boundingR2}\mid R_2(P,P_0)\mid =\mid R_{20}(Q,G,Q_0,G_0)\mid \leq f({\bf d}_{01}^{1/2}(Q,Q_0),{\bf d}_{02}^{1/2}(G,G_0))\end{equation} for some function $f:\hbox{${\rm I\kern-.2em R}$}^K_{\geq 0}\rightarrow\hbox{${\rm I\kern-.2em R}$}_{\geq 0}$, $K=K_1+K_2$, of the form $f(x)=\sum_{i,j} a_{ij} x_ix_j$, a quadratic polynomial with positive coefficients $a_{ij}\geq 0$. In all our examples, one simply uses the Cauchy-Schwarz inequality to bound $R_{20}(P,P_0)$ in terms of $L^2(P_0)$-norms of $Q_{k_1}-Q_{k_10}$ and $G_{k_2}-G_{k_20}$, and subsequently one relates these $L^2(P_0)$-norms to its loss-based dissimilarities $d_{01,k_1}(Q_{k_1},Q_{k_10})$ and $d_{02,k_2}(G_{k_2},G_{k_20})$, respectively. This bounding step will also rely on a positivity assumption so that denominators in $R_{20}(P,P_0)$ are uniformly bounded away from zero. {\bf Continuity of efficient influence curve as function of $P$:} We also assume a basic uniform continuity condition on the efficient influence curve: \begin{equation}\label{contDstar} \sup_{P\in {\cal M} }\frac{P_0\{D^*(P)-D^*(P_0)\}^2}{ d_{01}(Q(P),Q_0)+d_{02}(G(P),G_0)}<\infty .\end{equation} The above two uniform bounds (\ref{boundingR2}) and (\ref{contDstar}) on the model ${\cal M}$ will generally hold under a strong positivity assumption that guarantees that there are no nuisance parameters (e.g., a parameter of $G$) in the denominator of $D^*(P)$ and $R_2(P,P_0)$ that can be arbitrarily close to 0 on the support of $P_0$. 
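As a standard illustration of the bound (\ref{boundingR2}) (of the double robust type; the precise examples are developed elsewhere in the paper), consider the treatment-specific mean with outcome regression $\bar{Q}(W)=E_P(Y\mid A=1,W)$ and treatment mechanism $g(W)=E_P(A\mid W)\geq\delta>0$:

```latex
R_{20}(Q,G,Q_0,G_0)
  = \int \frac{(g-g_0)(w)}{g(w)}\,\left(\bar{Q}-\bar{Q}_0\right)(w)\, dP_0(w)
  \leq \frac{1}{\delta}\,
  \parallel g-g_0\parallel_{P_0}\parallel \bar{Q}-\bar{Q}_0\parallel_{P_0},
```

by the Cauchy-Schwarz inequality; relating the two $L^2(P_0)$-norms to $d_{01}^{1/2}$ and $d_{02}^{1/2}$ then yields a bound of the quadratic form $f({\bf d}_{01}^{1/2},{\bf d}_{02}^{1/2})$, with the positivity bound $\delta$ entering the coefficients.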
\subsection{HAL-MLEs of nuisance parameters} We estimate $Q_0,G_0$ with HAL-MLEs $Q_n,G_n$ satisfying \begin{eqnarray*} P_n L_1(Q_n)&=&\min_{Q\in Q({\cal M})} P_n L_1(Q)\\ P_n L_2(G_n)&=& \min_{G\in G({\cal M})}P_n L_2(G). \end{eqnarray*} Due to the sum-loss and variation independence of the components of $Q$ and $G$, these HAL-MLEs correspond with separate HAL-MLEs for each component. We have the following previously established result \citep{vanderLaan15} for these HAL-MLEs. We represent estimators as mappings on the nonparametric model ${\cal M}_{np}$ containing all possible realizations of the empirical measure $P_n$. \begin{lemma}\label{lemma2} Let $O\sim P_0\in {\cal M}$. Let $Q:{\cal M}\rightarrow Q({\cal M})$ be a function valued parameter and let $L:Q({\cal M})\rightarrow D_d[0,\tau]$ be a loss function so that $Q_0\equiv Q(P_0)=\arg\min_{Q\in Q({\cal M})}P_0L(Q)=\arg\min_{Q\in Q({\cal M})} \int L(Q)(o)dP_0(o)$. Let $\hat{Q}:{\cal M}_{np}\rightarrow Q({\cal M})$ be an estimator $Q_n\equiv \hat{Q}(P_n)$ so that $P_n L(Q_n)=\min_{Q\in Q({\cal M})}P_n L(Q)$. Let $d_0(Q,Q_0)=P_0L(Q)-P_0L(Q_0)$ be the loss-based dissimilarity. Then, \[ d_0(Q_n,Q_0)\leq -(P_n-P_0)\{L(Q_n)-L(Q_0)\}. \] If $\sup_{Q\in Q({\cal M})}\parallel L(Q)\parallel_v^*<\infty$, then \[ E_0 d_0(Q_n,Q_0)=O(n^{-1/2-\alpha(d)}),\] where $\alpha(d)=1/(2d+4)$. \end{lemma} Application of this general lemma proves that $d_{01}(Q_n,Q_0)=O_P(n^{-1/2-\alpha(d)})$ and $d_{02}(G_n,G_0)=O_P(n^{-1/2-\alpha(d)})$. It also shows that we have the following actual empirical process upper bounds:\begin{eqnarray*} d_{01}(Q_n,Q_0)&\leq&-(P_n-P_0)L_1(Q_n,Q_0)\\ d_{02}(G_n,G_0)&\leq&-(P_n-P_0)L_2(G_n,G_0), \end{eqnarray*} where we defined $L_1(Q,Q_0)\equiv L_1(Q)-L_1(Q_0)$ and $L_2(G,G_0)\equiv L_2(G)-L_2(G_0)$. These upper bounds will be utilized in our proposed conservative sampling distributions of the HAL-TMLE in Appendix \ref{sectsupupperb}.
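To fix ideas on how an HAL-MLE is computed in practice, the sketch below (an illustration, not the paper's software) builds the zero-order design matrix of indicator basis functions $x\rightarrow I_{u_s\leq x_s}$ with one knot per observation for every nonempty subset $s$; the HAL-MLE then minimizes the empirical risk over linear combinations of these columns subject to the L1 bound $C^u$ on the coefficients (a lasso fit):

```python
import numpy as np
from itertools import combinations

def hal_design(X):
    """Columns are the indicators I(u_s <= x_s), with knots u_s at the
    observed values, for every nonempty subset s of {1, ..., d}."""
    n, d = X.shape
    cols = []
    for r in range(1, d + 1):
        for s in combinations(range(d), r):
            Xs = X[:, s]
            for i in range(n):
                u = Xs[i]                            # knot at observation i
                cols.append(np.all(Xs >= u, axis=1).astype(float))
    return np.column_stack(cols)

X = np.array([[0.1, 0.7], [0.5, 0.2], [0.9, 0.9]])
Phi = hal_design(X)    # here: (2^2 - 1) * 3 = 9 indicator columns
```

When the nuisance parameter is a conditional mean, the constrained risk minimization over this design corresponds to a lasso regression, as noted in the text.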
\paragraph{Super learner including HAL-MLE outperforms HAL-MLE} Suppose that we estimate $Q_0$ and $G_0$ instead with super-learners $\tilde{Q}_n,\tilde{G}_n$ in which the library of the super-learners contains these HAL-MLEs $Q_n$ and $G_n$. Then, by the oracle inequality for the super-learner, we know that $d_{01}(\tilde{Q}_n,Q_0)$ and $d_{02}(\tilde{G}_n,G_0)$ will be asymptotically equivalent with the oracle selected estimator, so that $d_{01}(Q_n,Q_0)$ and $d_{02}(G_n,G_0)$ represent asymptotic upper bounds for $d_{01}(\tilde{Q}_n,Q_0)$ and $d_{02}(\tilde{G}_n,G_0)$ \citep{vanderLaan15}. In addition, practical experience has demonstrated that the super-learner outperforms its library candidates in finite samples. Therefore, assuming that each estimator in the library of the super-learners for $Q_0$ and $G_0$ falls in the parameter spaces ${\cal F}_1$ and ${\cal F}_2$ of $Q$ and $G$, respectively, our proposed estimators of the sampling distribution of the HAL-TMLE can also be used to construct a confidence interval around the super-learner based TMLE. The widths of these confidence intervals do not adapt to the possibly superior performance of the super-learner and could thus be overly conservative in case the super-learner outperforms the HAL-MLE. \subsection{HAL-TMLE} Consider a finite dimensional local least favorable model $\{Q_{n,\epsilon}:\epsilon\}\subset Q({\cal M})$ through $Q_n$ at $\epsilon =0$ so that the linear span of the components of $\frac{d}{d\epsilon}L_1(Q_{n,\epsilon})$ at $\epsilon =0$ includes $D^*(Q_n,G_n)$. Let $Q_n^*=Q_{n,\epsilon_n}$ for $\epsilon_n=\arg\min_{\epsilon}P_n L_1(Q_{n,\epsilon})$. We assume that this one-step TMLE $Q_n^*$ already satisfies \begin{equation}\label{efficeqn} r_n\equiv \mid P_n D^*(Q_n^*,G_n)\mid=o_P(n^{-1/2}).\end{equation} As shown in \citep{vanderLaan15} this holds for the one-step HAL-TMLE under regularity conditions.
Alternatively, one could use the one-dimensional canonical universal least favorable model satisfying $\frac{d}{d\epsilon}L_1(Q_{n,\epsilon})=D^*(Q_{n,\epsilon},G_n)$ at each $\epsilon$ (see our second example in Section \ref{sectexample}). In that case, the efficient influence curve equation (\ref{efficeqn}) is solved exactly with the one-step TMLE: i.e., $r_n=0$ \citep{vanderLaan&Gruber15}. The HAL-TMLE of $\psi_0$ is now the plug-in estimator $\psi_n^*=\Psi(Q_n^*)$. Sometimes, we will refer to this estimator as the HAL-TMLE$(C^u)$ to indicate its dependence on the specification of $C^u=(C_1^u,C_2^u)$. In Appendix \ref{AppendixA} we show that under a smoothness condition on the least favorable submodel (as a function of $\epsilon$) $d_{01}(Q_{n,\epsilon_n},Q_0)$ converges at the same rate as $d_{01}(Q_n,Q_0)=O_P(n^{-1/2-\alpha(d)})$ (see (\ref{thalmle})). This also implies this result for any $K$-th step TMLE with $K$ fixed. The advantage of a one-step or $K$-th step TMLE is that it is always well defined, and it easily follows that it converges at the same rate as the initial $Q_n$ to $Q_0$. Even though we derive some more explicit results for the one-step TMLE (and thereby $K$-th step TMLE), our results are presented so that they can be applied to any TMLE $Q_n^*$, including iterative TMLE, but we then simply assume that it has been shown that $d_{01}(Q_n^*,Q_0)$ converges to zero at the same rate as $d_{01}(Q_n,Q_0)$. It is assumed that for any $Q$ in its parameter space $\sup_{\epsilon}\parallel Q_{\epsilon}\parallel_v^*< C \parallel Q\parallel_v^*$ for some $C<\infty$ so that the least favorable model preserves the bound on the sectional variation norm. Since the HAL-MLE $Q_n$ has the maximal allowed uniform sectional variation norm $C_1^u$, it is likely that $Q_n^*$ has a slightly larger variation norm than this bound.
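The TMLE targeting step can be illustrated with a deliberately simplified sketch (not the paper's general least favorable submodel): a one-step TMLE of $\psi_0=E[\bar{Q}_0(W)]$ when $Y$ is observed only if $A=1$ and $g(W)=P(A=1\mid W)$ is treated as known, using squared-error loss and the linear fluctuation $\bar{Q}_{\epsilon}=\bar{Q}+\epsilon/g$, for which the one-step update solves the efficient influence curve equation exactly, i.e. $r_n=0$:

```python
import numpy as np

def tmle_one_step(A, Y, Qbar, g):
    H = A / g                                   # clever covariate
    eps = np.sum(H * (Y - Qbar)) / np.sum(H / g)
    Qstar = Qbar + eps / g                      # targeted fit Qbar*(W)
    psi = Qstar.mean()                          # plug-in estimate
    eif = H * (Y - Qstar) + Qstar - psi         # estimated EIF at the update
    return psi, eif

rng = np.random.default_rng(2)
n = 1000
W = rng.uniform(size=n)
g = np.full(n, 0.5)
A = rng.binomial(1, g)
Y = np.where(A == 1, W + rng.normal(size=n), 0.0)   # Y only enters when A = 1
Qbar = np.full(n, 0.3)                               # crude initial fit
psi_n, eif = tmle_one_step(A, Y, Qbar, g)
# P_n D*(Q_n^*, g) = 0 by construction for this fluctuation
```

The empirical mean of the estimated efficient influence curve at the update is zero up to floating point error, mirroring the role of (\ref{efficeqn}).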
\subsection{Asymptotic efficiency theorem for HAL-TMLE and CV-HAL-TMLE} The bound ${\bf d}_{01}(Q_n^*,Q_0)=O_P({\bf d}_{01}(Q_n,Q_0))$ and the rate results for ${\bf d}_{01}(Q_n,Q_0)$ and ${\bf d}_{02}(G_n,G_0)$ implied by Lemma \ref{lemma2}, combined with (\ref{boundingR2}), now show that the second-order term $R_{20}(Q_n^*,G_n,Q_0,G_0)=O_P(n^{-1/2-\alpha(d)})$. We have the following identity for the HAL-TMLE: \begin{eqnarray}\nonumber \Psi(Q_n^*)-\Psi(Q_0)&=&(P_n-P_0)D^*(Q_n^*,G_n)+R_{20}(Q_n^*,G_n,Q_0,G_0)+r_n\\ &=&(P_n-P_0)D^*(Q_0,G_0)+(P_n-P_0)\{D^*(Q_n^*,G_n)-D^*(Q_0,G_0)\}\nonumber \\ &&+R_{20}(Q_n^*,G_n,Q_0,G_0)+r_n.\label{exactexpansiontmle} \end{eqnarray} The second term on the right-hand side is $O_P(n^{-1/2-\alpha(d)})$ by empirical process theory and the continuity condition (\ref{contDstar}) on $D^*$. Thus, this proves the following asymptotic efficiency theorem. \begin{theorem}\label{thefftmle} Consider the statistical model ${\cal M}$, target parameter $\Psi:{\cal M}\rightarrow\hbox{${\rm I\kern-.2em R}$}$ and the model assumptions (\ref{sectionalvarbound}), (\ref{boundingR2}), (\ref{contDstar}). In addition, assume that the HAL-TMLE $Q_n^*$ is such that it solves the efficient influence curve equation (\ref{efficeqn}) up to $r_n=o_P(n^{-1/2})$; it preserves the sectional variation norm in the sense that $\parallel Q_n^*\parallel_v^*<C \parallel Q_n\parallel_v^*$ for some $C<\infty$; and $d_{01}(Q_n^*,Q_0)=O_P(d_{01}(Q_n,Q_0))$. Then the HAL-TMLE $\Psi(Q_n^*)$ of $\psi_0$ is asymptotically efficient: \[ \Psi(Q_n^*)-\Psi(Q_0)=(P_n-P_0)D^*(Q_0,G_0)+O_P(n^{-1/2-\alpha(d)}).\] \end{theorem} {\bf Wald type confidence interval:} A first order asymptotic 0.95-confidence interval is given by $\psi_n^*\pm 1.96 \sigma_n/n^{1/2}$ where $\sigma_n^2=P_n \{D^*(Q_n^*,G_n)\}^2$ is a consistent estimator of $\sigma^2_0=P_0\{D^*(Q_0,G_0)\}^2$.
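The Wald-type interval is a one-liner once the estimate and the values $D^*(Q_n^*,G_n)(O_i)$ of the estimated efficient influence curve are available; a minimal sketch:

```python
import numpy as np

def wald_ci(psi_n, eif_values, z=1.96):
    """First-order 0.95-interval psi_n +/- z * sigma_n / sqrt(n), with
    sigma_n^2 = P_n {D*(Q_n^*, G_n)}^2 the empirical second moment of the
    estimated efficient influence curve."""
    n = len(eif_values)
    sigma_n = np.sqrt(np.mean(eif_values ** 2))
    half = z * sigma_n / np.sqrt(n)
    return psi_n - half, psi_n + half
```

For example, `wald_ci(1.0, np.full(100, 2.0))` returns an interval centered at 1.0 of half-width $1.96\cdot 2/10$.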
Clearly, this first order confidence interval ignores the exact remainder $\tilde{R}_{2n}$ in the exact expansion $\Psi(Q_n^*)-\Psi(Q_0)=(P_n-P_0)D^*(Q_0,G_0)+\tilde{R}_{2n}$ as presented in (\ref{exactexpansiontmle}): \begin{equation}\label{exactremainder} \tilde{R}_{2n}\equiv R_{20}(Q_n^*,G_n,Q_0,G_0)+(P_n-P_0)\{D^*(Q_n^*,G_n)-D^*(Q_0,G_0)\}+r_n.\end{equation} The asymptotic efficiency proof above of the HAL-TMLE$(C^u)$ relies on the HAL-MLEs $(Q_{n,C_1^u},G_{n,C_2^u})$ converging to the true $(Q_0,G_0)$ at a rate faster than $n^{-1/4}$, and on their sectional variation norms being uniformly bounded from above by $C^u=(C_1^u,C_2^u)$. Both of these conditions are still known to hold for the CV-HAL-MLE $(Q_{n,C_{1n}},G_{n,C_{2n}})$ in which the constants $(C_1,C_2)$ are selected with the cross-validation selector $(C_{1n},C_{2n})$ \citep{vanderLaan15}. This follows since the cross-validation selector is asymptotically equivalent with the oracle selector, thereby guaranteeing that $C_n$ will exceed the sectional variation norm of the true $(Q_0,G_0)$ with probability tending to 1. Therefore, we have that this CV-HAL-TMLE is also asymptotically efficient. Of course, this CV-HAL-TMLE is more practical and powerful than the HAL-TMLE at an a priori specified $(C_1^u,C_2^u)$ since it adapts the choice of bounds $(C_1,C_2)$ to the true sectional variation norms $C_0=(C_{10},C_{20})$ for $(Q_0,G_0)$. \begin{theorem}\label{thefftmlecv} Let $C_{10}=\parallel Q_0\parallel_v^*$, $C_{20}=\parallel G_0\parallel_v^*$.
Suppose that $C_1^u$ and $C_2^u$ that define the HAL-MLEs $Q_n=Q_{n,C_1^u}$ and $G_n=G_{n,C_2^u}$ are replaced by data adaptive selectors $C_{1n}$ and $C_{2n}$ for which \begin{equation} P(C_{10}\leq C_{1n}<C_1^u,C_{20}\leq C_{2n}<C_2^u)\rightarrow 1,\mbox{ as $n\rightarrow\infty$.}\label{Cn} \end{equation} Then, under the same assumptions as in Theorem \ref{thefftmle}, the TMLE $\Psi(Q_n^*)$, using $Q_n=Q_{n,C_{1n}}$ and $G_n=G_{n,C_{2n}}$ as initial estimators, is asymptotically efficient. \end{theorem} In general, when the model is defined by global constraints, one should use cross-validation to select these constraints, which will only improve the performance of the initial estimators and corresponding TMLE, due to its asymptotic equivalence with the oracle selector. So our model might have more global constraints beyond $(C_1^u,C_2^u)$ and these could then also be selected with cross-validation resulting in a CV-HAL-MLE and corresponding HAL-TMLE (see also our two examples). \subsection{Motivation for finite sample inference} In order to understand how large the exact remainder $\tilde{R}_{2n}$ could be relative to the leading first order term, we need to understand the size of $d_{01}(Q_n,Q_0)$ and $d_{02}(G_n,G_0)$. This will then motivate us to propose methods that estimate the finite sample distribution of the HAL-TMLE or conservative versions thereof. To establish this behavior of $d_{01}(Q_n,Q_0)$ we will use the following general integration by parts formula, and a resulting bound. \begin{lemma}\label{lemmaip} Let $F,Z\in D_d[0,\tau]$. For a given function $Z\in D_d[0,\tau]$ we define \[ \bar{Z}(u)=Z([u,\tau])=\int_{[u,\tau]} dZ(s),\] the mass that the measure $dZ$ assigns to the cube $[u,\tau]$, which is a generalized difference across the $2^d$-corners of $[u,\tau]$ \citep{Gill&vanderLaan&Wellner95}.
For any two functions $F,Z\in D_d[0,\tau]$ with $\parallel F\parallel_v^*<\infty$ and $\parallel Z\parallel_v^*<\infty$, we have the following integration by parts formula: \[ \int_{[0,\tau]} F(x)dZ(x)=F(0)\bar{Z}(0) +\sum_s \int_{u_s} \bar{Z}(u_s,0_{-s})dF_s(u_s).\] This implies \[ \int_{[0,\tau]} F(x)dZ(x)\leq \parallel \bar{Z}\parallel_{\infty}\parallel F\parallel_v^* .\] \end{lemma} {\bf Proof:} The representation of $F(x)$ is presented in \citep{Gill&vanderLaan&Wellner95,vanderLaan15}. Using this representation yields the presented integration by parts formula as follows: \begin{eqnarray*} \int F dZ&=&\int \{F(0)+\sum_s \int_{(0_s,x_s]} dF_s(u)\} dZ(x)\\ &=&F(0) Z([0,\tau])+\sum_s \int_x\int_{u_s} I_{x_s\geq u_s} dF_s(u_s)dZ(x)\\ &=& F(0)\bar{Z}(0)+\sum_s \int_{u_s} \bar{Z}(u_s,0_{-s}) dF_s(u_s)\\ &\leq & \parallel \bar{Z}\parallel_{\infty}\left\{ \mid F(0)\mid +\sum_s \int_{u_s}\mid dF_s(u_s)\mid \right\}\\ &=& \parallel \bar{Z}\parallel_{\infty} \parallel F\parallel_v^*,\Box \end{eqnarray*} where we used that $\int_x I_{x_s\geq u_s}dZ(x)=Z([u_s,\tau_s]\times [0_{-s},\tau_{-s}])=\bar{Z}(u_s,0_{-s})$ and $Z([0,\tau])=\bar{Z}(0)$. For a $P\in {\cal M}$ and $P=P_n$ we define \[ \bar{P}(u)=P([u,\tau])=\int_{[u,\tau]} dP(s).\] By Lemma \ref{lemmaip}, we have \begin{eqnarray*} d_{01}(Q_n,Q_0)&\leq & \parallel \bar{P}_n-\bar{P}_0\parallel_{\infty}\parallel L_1(Q_n,Q_0)\parallel_v^* \\ d_{02}(G_n,G_0)&\leq &\parallel \bar{P}_n-\bar{P}_0\parallel_{\infty}\parallel L_2(G_n,G_0)\parallel_v^* . \end{eqnarray*} By \citep{vanderVaart&Wellner11}, we can bound the expectation of the supremum norm of an empirical process over a class of functions with uniformly bounded envelope by the entropy integral: \[ E\parallel \sqrt{n}(\bar{P}_n-\bar{P}_0)\parallel_{\infty}\lesssim J(1,{\cal F}_I)\equiv \sup_Q \int_{0}^1 \sqrt{\log N(\epsilon,L^2(Q),{\cal F}_I)} d\epsilon .\] The covering number $N(\epsilon,L^2(Q),{\cal F}_I)$ for the class of indicators ${\cal F}_I=\{I_{[u,\tau]}:u\in [0,\tau]\}$ behaves as $\epsilon^{-d}$.
This proves that $E\parallel \sqrt{n}(\bar{P}_n-\bar{P}_0)\parallel_{\infty}=O(d^{1/2})$, and thus \begin{eqnarray*} Ed_{01}(Q_n,Q_0)&=& O(d^{1/2}n^{-1/2}M_1)\\ Ed_{02}(G_n,G_0)&=& O(d^{1/2}n^{-1/2}M_2). \end{eqnarray*} In particular, this shows that the exact second-order remainder (\ref{exactremainder}) can be bounded in expectation as follows: \[ E\mid \tilde{R}_{2n} \mid =O(n^{-1/2}d^{1/2} \sqrt{M_1M_2}).\] Even though these bounds are overly conservative, they provide a clear indication of how the size of $d_{01}(Q_n,Q_0)$ and $d_{02}(G_n,G_0)$, and thereby the second-order remainder, is potentially affected by the dimension $d$ (i.e., for nonparametric models) and the allowed complexity of the model as measured by the bounds $M_1,M_2$. One can thus conclude that there are many settings in which the exact second-order remainder $\tilde{R}_{2n}$ will dominate the leading linear term $(P_n-P_0)D^*(P_0)$ in finite samples. Therefore, for the sake of accurate inference we will need methods that estimate the actual finite sample sampling distribution of the HAL-TMLE. \subsection*{A very conservative finite sample confidence interval} Consider the case that $r_n=0$. Let ${\bf M}_1^*=\sup_{P\in {\cal M}}\max_{\epsilon}\parallel {\bf L}_1(Q(P)_{\epsilon})\parallel_v^*$ and ${\bf M}_2=\sup_{P\in {\cal M}}\parallel {\bf L}_2(G(P))\parallel_v^*$ be the deterministic upper bounds on the sectional variation norms of ${\bf L}_1(Q_n^*)$ and ${\bf L}_2(G_n)$. Let $\bar{Z}_n=n^{1/2}(\bar{P}_n-\bar{P}_0)$.
The integration by parts bound applied to (\ref{exactexpansiontmle}) yields the following bound: \begin{eqnarray*} \mid n^{1/2}(\Psi(Q_n^*)-\Psi(Q_0))\mid&\leq& \parallel D^*(Q_n^*,G_n)\parallel_v^*\parallel \bar{Z}_n\parallel_{\infty}\\ &&\hspace*{-2cm} +f(\parallel {\bf L}_1(Q_n^*,Q_0)\parallel_v^{*1/2}\parallel \bar{Z}_n\parallel_{\infty}^{1/2},\parallel {\bf L}_2(G_n,G_0)\parallel_v^{*1/2}\parallel\bar{Z}_n\parallel_{\infty}^{1/2}) .\end{eqnarray*} Let $M_3^*\equiv \sup_{P\in {\cal M}}\max_{\epsilon}\parallel D^*(Q(P)_{\epsilon},G)\parallel_v^*$. Then, we obtain the following bound: \begin{equation}\mid n^{1/2}(\Psi(Q_n^*)-\Psi(Q_0))\mid\leq \left\{ M_3^*+f({\bf M}_1^{*1/2},{\bf M}_2^{1/2})\right\} \parallel \bar{Z}_n\parallel_{\infty}.\label{consbound} \end{equation} Let $q_{n,0.95}$ be the $0.95$-quantile of $\parallel \bar{Z}_n\parallel_{\infty}$. A conservative finite sample $0.95$-confidence interval is then given by: \[ \Psi(Q_n^*)\pm C({\bf M}_1^*,{\bf M}_2,M_3^*)q_{n,0.95}/n^{1/2},\] where $C({\bf M}_1,{\bf M}_2,M_3)= M_3+f({\bf M}_1^{1/2},{\bf M}_2^{1/2})$. One could estimate the distribution of $\bar{Z}_n$ with the nonparametric bootstrap and thereby obtain a bootstrap estimate $q_{n,0.95}^{\#}$ of $q_{n,0.95}$. One could push the conservative nature of this confidence interval further by using theoretical bounds for the tail-probability $P(\parallel \bar{Z}_n\parallel_{\infty}>x)$ and defining the quantile $q_{n,0.95}$ in terms of this theoretical upper bound (such exponential bounds are available, e.g., in \citep{vanderVaart&Wellner96}, but the constants in these exponential bounds appear to not be concretely specified). The bound (\ref{consbound}) simplifies if we focus on the sampling distribution of the one-step estimator $\psi_n^1=\Psi(Q_n)+P_n D^*(Q_n,G_n)$ by being able to replace the targeted version $Q_n^*$ by $Q_n$.
For the one-step estimator we have \[ \psi_n^1-\psi_0=(P_n-P_0)D^*(Q_n,G_n)+R_{20}(Q_n,G_n,Q_0,G_0) .\] Let ${\bf M}_1=\sup_{P\in {\cal M}}\parallel {\bf L}_1(Q(P))\parallel_v^*$ and ${\bf M}_2=\sup_{P\in {\cal M}}\parallel {\bf L}_2(G(P))\parallel_v^*$ be the upper bounds on the sectional variation norms of ${\bf L}_1(Q_n)$ and ${\bf L}_2(G_n)$. Analogously to the above, we obtain \begin{eqnarray*} \mid n^{1/2}(\psi_n^1-\Psi(Q_0))\mid&\leq& \parallel D^*(Q_n,G_n)\parallel_v^*\parallel \bar{Z}_n\parallel_{\infty}\\ &&\hspace*{-2cm} +f(\parallel {\bf L}_1(Q_n,Q_0)\parallel_v^{*1/2}\parallel \bar{Z}_n\parallel_{\infty}^{1/2},\parallel {\bf L}_2(G_n,G_0)\parallel_v^{*1/2}\parallel\bar{Z}_n\parallel_{\infty}^{1/2}) .\end{eqnarray*} Recall $M_3\equiv \sup_{P\in {\cal M}}\parallel D^*(P)\parallel_v^*$. Then, we obtain the following conservative sampling distribution: \begin{equation} Z_n^+\equiv \mid n^{1/2}(\psi_n^1-\Psi(Q_0))\mid\leq \left\{ M_3+f({\bf M}_1^{1/2},{\bf M}_2^{1/2})\right\} \parallel \bar{Z}_n\parallel_{\infty},\label{consbound1} \end{equation} and conservative finite sample $0.95$-confidence interval \[ \psi_n^1\pm C({\bf M}_1,{\bf M}_2,M_3)q_{n,0.95}/n^{1/2},\] where $C({\bf M}_1,{\bf M}_2,M_3)= M_3+f({\bf M}_1^{1/2},{\bf M}_2^{1/2})$. Clearly, this same confidence interval can be applied to the TMLE since the TMLE is asymptotically equivalent with the one-step estimator and generally performs better in finite samples by being a substitution estimator. Above, we pointed out that $\parallel \bar{Z}_n\parallel_{\infty}=O_P((n/d)^{-1/2})$, which shows that this confidence interval has a width of order $(n/d)^{-1/2}$. This confidence interval is not only finite sample conservative but is also not asymptotically sharp. Nonetheless, this formula appears to demonstrate that the dimension $d$ of the data $O$ enters directly into the rate of convergence as $(n/d)^{1/2}$.
In addition, it shows that the actual bounds $(C_1^u,C_2^u)$ on the sectional variation norms of $Q$ and $G$ directly affect the width of the confidence interval (essentially linearly). In addition, the dimension $d$ itself naturally affects the chosen upper bounds $(C_1^u,C_2^u)$ and thereby ${\bf M}_1,{\bf M}_2,M_3$, so that the dimension $d$ may also affect the width of the finite sample confidence interval through the constant $C({\bf M}_1,{\bf M}_2,M_3)$. Though interesting, we suggest that in most applications this bound is much too conservative for practical use. This motivates us to construct much more accurate estimators of the actual sampling distribution of the HAL-TMLE. The above finite sample bound could also be applied to choices $M_{1n},M_{2n},M_{3n}$ implied by the cross-validation selector $(C_{1n},C_{2n})$ of $(C_1,C_2)$. We suggest that this would make the resulting confidence interval more reasonable, since one is then not forced to select conservative upper bounds $C^u=(C_1^u,C_2^u)$. \section{The nonparametric bootstrap for the HAL-TMLE}\label{sectnp} Let $O_1^{\#},\ldots,O_n^{\#}$ be $n$ i.i.d. draws from the empirical measure $P_n$. Let $P_n^{\#}$ be the empirical measure of this bootstrap sample. In the following we give a generalized definition of $Q$ being absolutely continuous w.r.t. $Q_n$: $Q\ll Q_n$. \begin{definition}\label{defabscont} Recall the representation (\ref{Frepresentation}) for a multivariate real valued cadlag function $F$ in terms of its sections $F_s$. We will say that $Q_k$ is absolutely continuous w.r.t. $Q_{k,n}$ if for each subset $s\subset\{1,\ldots,m_{1k}\}$, its $s$-specific section $Q_{k,s}$ defined by $u_s\rightarrow Q_{k}(u_s,0_{-s})$ is absolutely continuous w.r.t. $Q_{n,k,s}$ defined by $u_s\rightarrow Q_{n,k}(u_s,0_{-s})$. We use the notation $Q_k\ll Q_{n,k}$. In addition, we use the notation $Q\ll Q_n$ if $Q_k\ll Q_{n,k}$ for each component $k\in \{1,\ldots,K_1\}$.
Similarly, we use the notation $G\ll G_n$ if $G_k\ll G_{n,k}$ for each component $k\in \{1,\ldots,K_2\}$. \end{definition} In practice, the HAL-MLE $Q_n=\arg\min_{Q\in Q({\cal M})}P_n {\bf L}_1(Q)$ is attained by a discrete measure $Q_n$, so that it can be computed by minimizing the empirical risk over a large linear combination of indicator basis functions (e.g., $2^{m_{1k}} n$ for $Q_{nk}$) under the constraint that the sum of the absolute values of the coefficients is bounded by the specified constant $C_1$ \citep{Benkeser&vanderLaan16}. However, $Q_n$ will only have around $n$ non-zero coefficients. In that case, the constraint $Q\ll Q_n$ states that $Q$ is a linear combination of the indicator basis functions that had a non-zero coefficient in $Q_n$. Let $Q_n^{\#}=\arg\min_{Q\in Q({\cal M}),Q\ll Q_n}P_n^{\#}L_1(Q)$ and $G_n^{\#}=\arg\min_{G\in G({\cal M}),G\ll G_n}P_n^{\#}L_2(G)$ be the corresponding HAL-MLEs of $Q_n=\arg\min_{Q\in Q({\cal M})}P_nL_1(Q)$ and $G_n=\arg\min_{G\in G({\cal M})}P_nL_2(G)$ based on these bootstrap samples. Since the empirical measure $P_n^{\#}$ has support contained in the support of $P_n$, we expect that in many problems $Q_n^{\#}=\arg\min_{Q\in Q({\cal M})}P_n^{\#}L_1(Q)$ satisfies $Q_n^{\#}\ll Q_n$, and similarly for $G_n^{\#}$. Either way, the extra restriction $Q\ll Q_n$ makes the computation of the HAL-MLE on the bootstrap sample much faster than that of the HAL-MLE $Q_n$ based on the original sample, so that enforcing this extra constraint is only beneficial from a computational point of view. That is, the computation of $Q_n^{\#}$ only involves minimizing the empirical risk w.r.t. $P_n^{\#}$ over at most $n$ non-zero coefficients, making the calculation of $Q_n^{\#}$ relatively trivial.
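The support restriction $Q\ll Q_n$ amounts to refitting on the bootstrap sample using only the indicator basis functions that had a non-zero coefficient in $Q_n$. A minimal sketch, with plain least squares standing in for the $L_1$-constrained MLE and all names hypothetical:

```python
import numpy as np

def refit_on_support(X, y, coef_full, boot_idx):
    """Refit on a bootstrap sample using only the indicator basis functions that
    were active (non-zero coefficient) in the original fit -- the Q << Q_n
    restriction. Plain least squares stands in for the L1-constrained MLE;
    all names are hypothetical."""
    support = np.flatnonzero(coef_full)            # active basis functions of Q_n
    Xb, yb = X[boot_idx][:, support], y[boot_idx]  # bootstrap design restricted to support
    coef_support, *_ = np.linalg.lstsq(Xb, yb, rcond=None)
    coef = np.zeros_like(coef_full)
    coef[support] = coef_support                   # coefficients outside the support stay zero
    return coef
```

The refit only optimizes over the (at most $n$) active columns, which is what makes the bootstrap fits cheap.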
Let $\epsilon_n^{\#}=\arg\min_{\epsilon}P_n^{\#}L_1(Q_{n,\epsilon}^{\#})$ be the one-step TMLE update of $Q_n^{\#}$ based on the least favorable submodel $\{Q_{n,\epsilon}^{\#}:\epsilon\}$ through $Q_n^{\#}$ at $\epsilon =0$ with score $D^*(Q_n^{\#},G_n^{\#})$ at $\epsilon =0$. Let $Q_n^{\#*}=Q_{n,\epsilon_n^{\#}}^{\#}$ be the TMLE update, which is assumed to solve $r_n^{\#}\equiv \mid P_n^{\#}D^*(Q_n^{\#*},G_n^{\#})\mid =o_{P_n}(n^{-1/2})$, conditional on $(P_n:n\geq 1)$ (just like $r_n=o_P(n^{-1/2})$). Finally, let $\Psi(Q_n^{\#*})$ be the TMLE of $\Psi(Q_n^*)$ based on this nonparametric bootstrap sample. We estimate the finite sample distribution of $n^{1/2}(\Psi(Q_n^*)-\Psi(Q_0))$ with the sampling distribution of $Z_n^{1,\#}\equiv n^{1/2}(\Psi(Q_n^{\#*})-\Psi(Q_n^*))$, conditional on $P_n$. Let $\Phi_n^{\#}(x)=P(n^{1/2}(\Psi(Q_n^{\#*})-\Psi(Q_n^*))\leq x\mid P_n)$ be the cumulative distribution of this bootstrap sampling distribution. Thus a bootstrap-based 0.95-confidence interval for $\psi_0$ is given by \[ [\psi_n^{*}+q_{0.025,n}^{\#}/n^{1/2},\psi_n^*+q_{0.975,n}^{\#}/n^{1/2} ],\] where $q_{\alpha,n}^{\#}=\Phi_n^{\#-1}(\alpha)$ is the $\alpha$-quantile of this bootstrap distribution. One could also apply this nonparametric bootstrap to $n^{1/2}\mid \Psi(Q_n^*)-\Psi(Q_0)\mid/\sigma_n$, where $\sigma_n^2$ is an estimator of the variance of $D^*(Q_0,G_0)$. It is not clear if this has any advantage, beyond that the confidence interval is now of the form $[\psi_n^{*}+q_{0.025,n}^{\#}\sigma_n/n^{1/2},\psi_n^*+q_{0.975,n}^{\#}\sigma_n/n^{1/2} ]$, where $q_{\alpha,n}^{\#}$ is the $\alpha$-quantile of the cumulative distribution function of $n^{1/2}(\Psi(Q_n^{\#*})-\Psi(Q_n^*))/\sigma_n^{\#}$, conditional on $P_n$, imitating the Wald-type confidence interval. We now want to prove that $\Phi_n^{\#}$ converges to the cumulative distribution function of the limit distribution $N(0,\sigma^2_0)$, so that we are consistently estimating the limit distribution of the TMLE.
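Given bootstrap replicates $\Psi(Q_n^{\#*})$ of the TMLE, the interval $[\psi_n^{*}+q_{0.025,n}^{\#}/n^{1/2},\psi_n^*+q_{0.975,n}^{\#}/n^{1/2}]$ can be computed directly from the empirical quantiles of $Z_n^{1,\#}$. A sketch with hypothetical names:

```python
import numpy as np

def bootstrap_ci(psi_star, boot_psis, n, alpha=0.05):
    """Bootstrap interval [psi* + q_{alpha/2}/sqrt(n), psi* + q_{1-alpha/2}/sqrt(n)],
    where the q's are empirical quantiles of Z = sqrt(n) (Psi(Q^{#*}) - Psi(Q^*))
    across bootstrap replicates. Names are hypothetical."""
    z = np.sqrt(n) * (np.asarray(boot_psis) - psi_star)   # bootstrap draws of Z_n^{1,#}
    q_lo, q_hi = np.quantile(z, [alpha / 2, 1 - alpha / 2])
    return psi_star + q_lo / np.sqrt(n), psi_star + q_hi / np.sqrt(n)
```

Here `boot_psis` holds the bootstrap TMLE estimates $\Psi(Q_n^{\#*})$ over repeated resamples from $P_n$.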
Importantly, this nonparametric bootstrap confidence interval could potentially dramatically improve the coverage relative to using the first order Wald-type confidence interval, since this bootstrap distribution estimates the variability of the full expansion of the TMLE, including the exact remainder $\tilde{R}_{2n}$. In the next subsection we show that the nonparametric bootstrap works for the HAL-MLEs $Q_n$ and $G_n$. Subsequently, not surprisingly, we can show that this also establishes that the bootstrap works for the one-step TMLE $Q_n^*$ ($K$-th step TMLE for fixed $K$). This then provides the basis for proving that the nonparametric bootstrap is consistent for the HAL-TMLE. \subsection{Nonparametric bootstrap for HAL-MLE} The following theorem establishes that the bootstrap HAL-MLE $Q_n^{\#}$ estimates $Q_n$ as well w.r.t. the empirical loss-based dissimilarity $d_{n1}(Q_n^{\#},Q_n)=P_n L_1(Q_n^{\#})-P_nL_1(Q_n)$ as $Q_n$ estimates $Q_0$ with respect to $d_{01}(Q_n,Q_0)=P_0L_1(Q_n,Q_0)$. Moreover, it proves that $d_{n1}(Q_n^{\#},Q_n)$ is at least equivalent to the square of an $L^2(P_n)$-norm defined by the exact second-order remainder in a first order Taylor expansion of $P_n L_1(Q)$ at $Q_n$. Analogous results apply to $G_n^{\#}$. We state the theorem for the sum loss function $L_1$, but it can also be applied to each separate HAL-MLE of $Q_{0,k_1}$ with its loss $L_{1k_1}(Q_{k_1})$ and $G_{0,k_2}$ with its loss $L_{2k_2}(G_{k_2})$ to provide a separate result for each HAL-MLE. In fact, we could simply replace $L_1$ by ${\bf L}_1$ and $L_2$ by ${\bf L}_2$ to obtain the theorem for all components of $Q$. {\bf Sectional variation norm of HAL-MLE dominates sectional variation norm of bootstrapped HAL-MLE:} We either assume that $\parallel Q_n\parallel_v^*=C_1^u$ achieves the maximal allowed value $C_1^u$ or the weaker assumption that $\parallel Q_n^{\#}\parallel_v^*\leq \parallel Q_n\parallel_v^*$, conditional on $P_n$.
Of course, for the sake of asymptotics we would only need this to hold with probability tending to 1. The same assumption is used for $G_n$ and $G_n^{\#}$. If $C_1^u$ is chosen so large that the sectional variation norm of an MLE $Q_n$ is smaller than $C_1^u$ even though it is a perfect fit of the data in the sense that $P_n L_1(Q_n)=0$ (i.e., the smallest possible value), then $\parallel Q_n\parallel_v^*=C_1^u$ would not be satisfied. Therefore, $\parallel Q_n\parallel_v^*=C_1^u$ requires making sure that $C_1^u$ is selected small enough relative to sample size so that the MLE is not a complete overfit of the data. If $C_1^u$ is replaced by the cross-validation selector $C_{1n}$, our experience is that the HAL-MLE (i.e., the Lasso) achieves its maximal allowed value for the sum of the absolute values of its coefficients: i.e., $\parallel Q_n\parallel_v^*=C_{1n}$. In fact, all we need is that $\parallel Q_n^{\#}\parallel_v^*\leq \parallel Q_n\parallel_v^*$, which is a weaker assumption and could easily be true for all choices of $C_1^u$: for example, Lasso regression applied to a bootstrap sample (i.e., a subset of the original data, but with weights) might select an $L_1$-norm of its coefficient vector smaller than the $L_1$-norm when applied to the original sample, whatever $C_1^u$ is selected. \begin{theorem}\label{thnpbootmle} Recall our assumption (\ref{calFmodel}) or (\ref{calFmodelplus}) on the parameter spaces of $Q$ and $G$.\newline {\bf Definitions:} Let $d_{n1}(Q,Q_n)=P_n \{L_1(Q)-L_1(Q_n)\}$ be the loss-based dissimilarity at the empirical measure, where $Q_n=\arg\min_{Q\in Q({\cal M})}P_n L_1(Q)$. Similarly, let $d_{n2}(G,G_n)=P_n \{L_2(G)-L_2(G_n)\}$ be the loss-based dissimilarity at the empirical measure, where $G_n=\arg\min_{G\in G({\cal M})}P_n L_2(G)$.
Let $P_n R_{2L_1,n}(Q_n^{\#},Q_n)$ be defined as the exact second-order remainder of a first order Taylor expansion of $P_nL_1(Q)$ at $Q_n$: \[ P_n \{L_1(Q_n^{\#})-L_1(Q_n)\}=P_n \frac{d}{dQ_n}L_1(Q_n)(Q_n^{\#}-Q_n)+P_n R_{2L_1,n}(Q_n^{\#},Q_n),\] where $\frac{d}{dQ_n}L_1(Q_n)(h)=\left . \frac{d}{d\epsilon} L_1(Q_n+\epsilon h)\right |_{\epsilon =0}$ is the directional derivative in direction $h$. Similarly, we define $P_0R_{2L_1,0}(Q_n,Q_0)$ as the exact second-order remainder of a first order Taylor expansion of $P_0L_1(Q)$ at $Q_0$: \[ P_0\{L_1(Q_n)-L_1(Q_0)\}=P_0\frac{d}{dQ_0}L_1(Q_0)(Q_n-Q_0)+P_0R_{2L_1,0}(Q_n,Q_0).\] Similarly, we define $P_0R_{2L_2,0}(G_n,G_0)$ and $P_n R_{2L_2,n}(G_n^{\#},G_n)$. \newline {\bf Assumption:} Assume $\parallel Q_n\parallel_v^*=C_1^u$, $\parallel G_n\parallel_v^*=C_2^u$ (i.e., they attain the maximal allowed value) or assume that $\parallel Q_n^{\#}\parallel_v^*\leq \parallel Q_n\parallel_v^*$ and $\parallel G_n^{\#}\parallel_v^*\leq \parallel G_n\parallel_v^*$ with probability 1, conditional on $(P_n:n\geq 1)$. Suppose that \begin{eqnarray} P_0\{L_1(Q_n)-L_1(Q_0)\}^2&\lesssim& P_0 R_{2L_1,0}(Q_n,Q_0)\nonumber \\ P_n \{L_1(Q_n^{\#})-L_1(Q_n)\}^2&\lesssim& P_nR_{2L_1,n}(Q_n^{\#},Q_n)\nonumber\\ P_0\{L_2(G_n)-L_2(G_0)\}^2&\lesssim& P_0 R_{2L_2,0}(G_n,G_0)\nonumber \\ P_n \{L_2(G_n^{\#})-L_2(G_n)\}^2&\lesssim& P_n R_{2L_2,n}(G_n^{\#},G_n).\label{boundb} \end{eqnarray} {\bf Conclusion:} Then, \[ d_{n1}(Q_n^{\#},Q_n)=O_P(n^{-1/2-\alpha(d)})\mbox{ and } d_{n2}(G_n^{\#},G_n)=O_P(n^{-1/2-\alpha(d)}).\] In addition, we have $P_n\frac{d}{dQ_n}L_1(Q_n)(Q_n^{\#}-Q_n)\geq 0$ so that $d_{n1}(Q_n^{\#},Q_n)$ is a more powerful dissimilarity than the quadratic $P_n R_{2L_1,n}(Q_n^{\#},Q_n)$ (i.e., convergence w.r.t. $d_{n1}$ implies convergence w.r.t.
the latter): \[ P_n\{L_1(Q_n^{\#})-L_1(Q_n)\}\geq P_n R_{2L_1,n}(Q_n^{\#},Q_n).\] Similarly, we have $P_0\frac{d}{dQ_0}L_1(Q_0)(Q_n-Q_0)\geq 0$ so that $d_{01}(Q_n,Q_0)$ dominates $P_0R_{2L_1,0}(Q_n,Q_0)$: \[ P_0\{L_1(Q_n)-L_1(Q_0)\}\geq P_0 R_{2L_1,0}(Q_n,Q_0).\] As a consequence, we also have \[ P_n R_{2L_1,n}(Q_n^{\#},Q_n)=O_P(n^{-1/2-\alpha(d)})\mbox{ and } P_n R_{2L_2,n}(G_n^{\#},G_n)=O_P(n^{-1/2-\alpha(d)}).\] {\bf Bootstrapping HAL-MLE$(C)$ at $C=C_n$:} This theorem also applies to the case in which $C^u=(C_1^u,C_2^u)$ is replaced by a data adaptive choice $C_n=(C_{1n},C_{2n})$ (i.e., depending on $P_n$) satisfying (\ref{Cn}). \end{theorem} Note that if $C^u=C_n$, then, conditional on $P_n$, $C_n$ is still fixed, so that establishing the latter result only requires checking that the convergence of the bootstrapped HAL-MLE $(Q_{n,C_1}^{\#},G_{n,C_2}^{\#})$ to the HAL-MLE $(Q_{n,C_1},G_{n,C_2})$ at a fixed $C$ w.r.t. the loss-based dissimilarities $d_{n1}$ and $d_{n2}$ holds uniformly in $C$ between the true sectional variation norm $C_0$ and the model upper bound $C^u$. The validity of this theorem does not rely on $C_n$ exceeding $C_0$, but the latter is needed for establishing that the HAL-MLE $Q_{n,C_n}$ is consistent for $Q_0$ and thus the efficiency of the HAL-TMLE $\Psi(Q_n^*)$. The proof of Theorem \ref{thnpbootmle} is presented in Appendix \ref{AppendixB}. Clearly, $P_n R_{2L_1,n}(Q_n^{\#},Q_n)$ will behave as the square of a difference between $Q_n^{\#}$ and $Q_n$. In our proof below of the validity of the nonparametric bootstrap method for the HAL-TMLE we will need that convergence of $d_{n1}(Q_n^{\#},Q_n)$ and $d_{01}(Q_n,Q_0)$ implies convergence of $d_{01}(Q_n^{\#},Q_0)$ as well. This requires showing that convergence w.r.t. an $L^2(P_n)$-norm implies convergence at the same rate w.r.t. the $L^2(P_0)$-norm. For that purpose we note the following lemma, which is also proved in Appendix \ref{AppendixB}.
\begin{lemma}\label{lemmadndo} If $P_n f_n^2=O_P(n^{-1/2-\alpha(d)})$ for some $f_n$ with $\parallel f_n\parallel_v^*<M$ for some $M<\infty$ with probability 1, then we also have $P_0 f_n^2=O_P(n^{-1/2-\alpha(d)})$. \end{lemma} \subsection{Preservation of rate of convergence for the targeted bootstrap estimator} It is no surprise that, under a weak regularity condition, we have that $\epsilon_n^{\#}=O_P(n^{-1/4-\alpha(d)/2})$ converges at the same rate as $Q_n^{\#}$ to $Q_n$. As a result, the TMLE-update $Q_{n,\epsilon_n^{\#}}^{\#*}$ of $Q_n^{\#}$ converges at the same rate to $Q_n$ as $Q_n^{\#}$. A general proof of this result is presented in Appendix \ref{AppendixC} under weak regularity conditions on the least favorable submodel. \subsection{The nonparametric bootstrap for the HAL-TMLE} We can now imitate the efficiency proof for the HAL-TMLE to obtain the desired result for the bootstrapped HAL-TMLE of $\Psi(Q_n^*)$. In addition to the model assumptions of Theorem \ref{thefftmle} for asymptotic efficiency of the TMLE, we assume the conditions (\ref{boundb}) of Theorem \ref{thnpbootmle} for validity of the nonparametric bootstrap for the HAL-MLE. In addition, we assume the very weak condition that convergence of $d_{n1}(Q_n^{\#},Q_n)$ and $d_{01}(Q_n,Q_0)$ implies the same convergence of $d_{01}(Q_n^{\#},Q_0)$: \begin{equation}\label{dn1bounding} \max(d_{n1}(Q_n^{\#},Q_n),d_{01}(Q_n,Q_0))=O_P(r(n)) \mbox{ implies $d_{01}(Q_n^{\#},Q_0)=O_P(r(n))$,}\end{equation} and similarly $\max(d_{n2}(G_n^{\#},G_n),d_{02}(G_n,G_0))=O_P(r(n))$ implies $d_{02}(G_n^{\#},G_0)=O_P(r(n))$. To verify this assumption, one can use that $P_n R_{2L_1,n}(Q_n^{\#},Q_n)\leq d_{n1}(Q_n^{\#},Q_n)$, $P_n R_{2L_2,n}(G_n^{\#},G_n)\leq d_{n2}(G_n^{\#},G_n)$, $P_0R_{2L_1,0}(Q_n,Q_0)\leq d_{01}(Q_n,Q_0)$, and $P_0R_{2L_2,0}(G_n,G_0)\leq d_{02}(G_n,G_0)$, and Lemma \ref{lemmadndo} to show that $\int f_n^2dP_n=O_P(r(n))$ implies $\int f_n^2 dP_0=O_P(r(n))$.
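As a concrete illustration of how these empirical remainders behave (a hedged example; the actual loss functions of a given model may differ), consider the squared-error loss $L_1(Q)(O)=(Y-Q(W))^2$. Then
\[ P_n\{L_1(Q_n^{\#})-L_1(Q_n)\}=-2P_n (Y-Q_n)(Q_n^{\#}-Q_n)+P_n (Q_n^{\#}-Q_n)^2 ,\]
so that $P_n R_{2L_1,n}(Q_n^{\#},Q_n)=P_n (Q_n^{\#}-Q_n)^2$ is exactly the square of the $L^2(P_n)$-norm of $f_n=Q_n^{\#}-Q_n$, and Lemma \ref{lemmadndo} then translates the empirical rate into the corresponding $L^2(P_0)$-rate.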
Finally, we assume the empirical analogue of the uniform continuity condition (\ref{contDstar}) on the efficient influence curve: \begin{equation}\label{contDstaremp} P_n\{D^*(Q_n^{\#},G_n^{\#})-D^*(Q_n,G_n)\}^2\lesssim d_{n1}(Q_n^{\#},Q_n)+d_{n2}(G_n^{\#},G_n).\end{equation} Again, to verify this we can use $P_n R_{2L_1,n}(Q_n^{\#},Q_n)\leq d_{n1}(Q_n^{\#},Q_n)$ and $P_n R_{2L_2,n}(G_n^{\#},G_n)\leq d_{n2}(G_n^{\#},G_n)$, so that it suffices to verify \[ P_n\{D^*(Q_n^{\#},G_n^{\#})-D^*(Q_n,G_n)\}^2\lesssim P_n R_{2L_1,n}(Q_n^{\#},Q_n)+P_n R_{2L_2,n}(G_n^{\#},G_n).\] \begin{theorem}\label{thnpboothaltmle}\ \newline {\bf Assumptions:} Assume the conditions of Theorem \ref{thefftmle} providing asymptotic efficiency of $\Psi(Q_n^*)$; $\parallel Q_n\parallel_v^*=C_1^u$, $\parallel G_n\parallel_v^*=C_2^u$ (i.e., they attain the maximal allowed value) or that $\parallel Q_n^{\#}\parallel_v^*\leq \parallel Q_n\parallel_v^*$ and $\parallel G_n^{\#}\parallel_v^*\leq \parallel G_n\parallel_v^*$ with probability 1, conditional on $(P_n:n\geq 1)$; (\ref{boundb}); (\ref{dn1bounding}); (\ref{contDstaremp}) on loss functions $L_1(Q)$ and $L_2(G)$; $r_n^{\#}=P_n^{\#}D^*(Q_n^{\#*},G_n^{\#})=o_P(n^{-1/2})$, conditional on $(P_n:n\geq 1)$; and that $Q_n^{\#*}$ preserves the rate of convergence of $Q_n^{\#}$ in the sense that the following three conditions hold: 1) $\parallel Q_n^{\#*}\parallel_v^*<C\parallel Q_n^{\#}\parallel_v^*$ for some $C<\infty$; 2) $P_n \{D^*(Q_n^{\#*})-D^*(Q_n^{\#})\}^2\rightarrow_p 0$, conditional on $(P_n:n\geq 1)$; 3) $P_0\{D^*(Q_n^{\#*})-D^*(Q_n^{\#})\}^2\rightarrow_p 0$. If we use the one-step TMLE $Q_n^*=Q_{n,\epsilon_n}$, then the last three conditions can be replaced by $\epsilon_n^{\#}=O_P(d_{n1}^{1/2}(Q_n^{\#},Q_n))$. {\bf Conclusion:} Then, $d_{n1}(Q_n^{\#},Q_n)=O_P(n^{-1/2-\alpha(d)})$, $d_{n2}(G_n^{\#},G_n)=O_P(n^{-1/2-\alpha(d)})$, $P_n R_{2L_1,n}(Q_n^{\#*},Q_n)=O_P(n^{-1/2-\alpha(d)})$ and $P_n R_{2L_2,n}(G_n^{\#},G_n)=O_P(n^{-1/2-\alpha(d)})$.
In addition, \[ \Psi(Q_n^{\#*})-\Psi(Q_n)=(P_n^{\#}-P_n)D^*(Q_n,G_n)+O_P(n^{-1/2-\alpha(d)}),\] and thus $Z_n^{1,\#}\equiv n^{1/2}(\Psi(Q_n^{\#*})-\Psi(Q_n^*))\Rightarrow_d N(0,\sigma^2_0)$, conditional on $(P_n:n\geq 1)$. \newline {\bf Consistency of the nonparametric bootstrap for HAL-TMLE at cross-validation selector $C_n$:} This theorem can be applied to $C^u=C_n$ satisfying (\ref{Cn}). \end{theorem} {\bf Proof:} We provide the proof for the one-step TMLE using the condition $\epsilon_n^{\#}=O_P(d_{n1}^{1/2}(Q_n^{\#},Q_n))$. The proof for the general TMLE $Q_n^*$ using conditions 1-3 follows immediately from the proof below as well; we point out below how the proof generalizes to this case. Firstly, by definition of the remainder $R_{20}()$ we have the following two expansions: \begin{eqnarray*} \Psi(Q_n^{\#*})-\Psi(Q_0)&=&(P_n^{\#}-P_0) D^*(Q_n^{\#*},G_n^{\#})+R_{20}(Q_n^{\#*},G_n^{\#},Q_0,G_0)\\ &=& (P_n^{\#}-P_n)D^*(Q_n^{\#*},G_n^{\#})+(P_n-P_0)D^*(Q_n^{\#*},G_n^{\#})\\ && +R_{20}(Q_n^{\#*},G_n^{\#},Q_0,G_0)\\ \Psi(Q_n^*)-\Psi(Q_0)&=&(P_n-P_0) D^*(Q_n^*,G_n)+R_{20}(Q_n^*,G_n,Q_0,G_0), \end{eqnarray*} where we ignored $r_n=P_nD^*(Q_n^*,G_n)$ and its bootstrap analogue $r_n^{\#}=P_n^{\#}D^*(Q_n^{\#*},G_n^{\#})$ in these two expressions (both of which were assumed to be $o_P(n^{-1/2})$). Subtracting the second equality from the first yields: \begin{eqnarray} \Psi(Q_n^{\#*})-\Psi(Q_n^*)&=&(P_n^{\#}-P_n)D^*(Q_n^{\#*},G_n^{\#})+(P_n-P_0)\{D^*(Q_n^{\#*},G_n^{\#})-D^*(Q_n^*,G_n)\}\nonumber \\ &&+ R_{20}(Q_n^{\#*},G_n^{\#},Q_0,G_0)-R_{20}(Q_n^*,G_n,Q_0,G_0). \label{helpa} \end{eqnarray} Under the conditions of Theorem \ref{thefftmle}, we already established that $R_{20}(Q_n^*,G_n,Q_0,G_0)=O_P(n^{-1/2-\alpha(d)})$. By assumption (\ref{boundingR2}), we can bound the first remainder $R_{20}(Q_n^{\#*},G_n^{\#},Q_0,G_0)$ by $f({\bf d}_{01}^{1/2}(Q_n^{\#*},Q_0),{\bf d}_{02}^{1/2}(G_n^{\#},G_0))$.
Theorem \ref{thnpbootmle} established that $d_{n1}(Q_n^{\#},Q_n)=O_P(n^{-1/2-\alpha(d)})$ and $d_{n2}(G_n^{\#},G_n)=O_P(n^{-1/2-\alpha(d)})$. By assumption (\ref{dn1bounding}), this also implies that $d_{01}(Q_n^{\#},Q_0)$ and $d_{02}(G_n^{\#},G_0)$ are $O_P(n^{-1/2-\alpha(d)})$. Again, by assumption (\ref{boundingR2}) this yields that $R_{20}(Q_n^{\#},G_n^{\#},Q_0,G_0)=O_P(n^{-1/2-\alpha(d)})$. By assumption, $\epsilon_n^{\#}=O_P(n^{-1/4-\alpha(d)/2})$; using the fact that $f$ is a quadratic polynomial, this now also establishes that $R_{20}(Q_n^{\#*},G_n^{\#},Q_0,G_0)=O_P(n^{-1/2-\alpha(d)})$. Similarly, if we work with a general TMLE $Q_n^*$ satisfying conditions 1-3, then this result follows as well. It remains to analyze the two leading empirical process terms in (\ref{helpa}). Firstly, replace $Q_n^{\#*}$ and $Q_n^*$ by $Q_n^{\#}$ and $Q_n$, respectively, in these two terms. This generates three additional remainder terms: \[ \begin{array}{l} (P_n^{\#}-P_n)\{D^*(Q_n^{\#*},G_n^{\#})-D^*(Q_n^{\#},G_n^{\#})\}\\ (P_n-P_0)\{D^*(Q_n^{\#*},G_n^{\#})-D^*(Q_n^{\#},G_n^{\#})\}\\ (P_n-P_0)\{D^*(Q_n^*,G_n)-D^*(Q_n,G_n)\}. \end{array} \] Since $Q_n^{\#*}=Q_{n,\epsilon_n^{\#}}^{\#}$ and $Q_n^*=Q_{n,\epsilon_n}$, each of these terms can be written as $f_n(\epsilon_n^{\#})-f_n(0)$ or $f_n(\epsilon_n)-f_n(0)$ for certain specified $f_n$. We can carry out an exact first order Taylor expansion of this $f_n(\epsilon)$ at $\epsilon =0$ to represent these three terms as $\epsilon_n^{\#}(P_n^{\#}-P_n)f_{1n} $, $\epsilon_n^{\#}(P_n-P_0)f_{2n}$ and $\epsilon_n (P_n-P_0)f_{3n}$, respectively, for certain functions $f_{1n},f_{2n},f_{3n}$. By assumption (\ref{sectionalvarbound}) these functions $f_{1n},f_{2n},f_{3n}$ have a uniformly bounded sectional variation norm. Thus $(P_n^{\#}-P_n)f_{1n}$ and $(P_n-P_0)f_{jn}$, $j=2,3$, are $O_P(n^{-1/2})$, so that these three terms can be bounded by $O_P(n^{-1/2})$ times $\max(\epsilon_n,\epsilon_n^{\#})$, which is $O_P(n^{-3/4-\alpha(d)/2})$.
The above three terms are also $o_P(n^{-1/2})$ if we work with a general TMLE $Q_n^*$ and the three conditions 1-3 on $Q_n^{\#*}$ apply. The remainder of the proof now only involves the non-targeted $Q_n^{\#}$ and $Q_n$, so that it applies to a general TMLE $Q_n^*$ as well. Let us now return to the two leading terms in (\ref{helpa}), but with $Q_n^{\#*}$ and $Q_n^*$ replaced by $Q_n^{\#}$ and $Q_n$, respectively. By our continuity assumption (\ref{contDstaremp}) on the efficient influence curve as function of $(Q,G)$, we have that convergence of $d_{n1}(Q_n^{\#},Q_n)+d_{n2}(G_n^{\#},G_n)$ to zero implies convergence of the square of the $L^2(P_n)$-norm of $D^*(Q_n^{\#},G_n^{\#})-D^*(Q_n,G_n)$ at the same rate. By empirical process theory \citep{vanderVaart&Wellner11}, this teaches us that $(P_n^{\#}-P_n)D^*(Q_n^{\#},G_n^{\#})=(P_n^{\#}-P_n)D^*(Q_n,G_n)+O_P(n^{-1/2-\alpha(d)})$. This deals with the first leading term in (\ref{helpa}). By our continuity condition (\ref{contDstar}) we also have that $P_0\{D^*(Q_n^{\#},G_n^{\#})-D^*(Q_n,G_n)\}^2\rightarrow_p 0$ at this rate. Again, by \citep{vanderVaart&Wellner11} this shows $(P_n-P_0)\{D^*(Q_n^{\#},G_n^{\#})-D^*(Q_n,G_n)\}=O_P(n^{-1/2-\alpha(d)})$. Thus we have shown that \[ \begin{array}{l} (P_n^{\#}-P_n)D^*(Q_n^{\#},G_n^{\#})+(P_n-P_0)\{D^*(Q_n^{\#},G_n^{\#})-D^*(Q_n,G_n)\}\\ =(P_n^{\#}-P_n)D^*(Q_n^{\#},G_n^{\#}) +O_P(n^{-1/2-\alpha(d)}).\end{array} \] The latter term can be written as $(P_n^{\#}-P_n)D^*(Q_n,G_n)+(P_n^{\#}-P_n)\{D^*(Q_n^{\#},G_n^{\#})-D^*(Q_n,G_n)\}$. The second term can be analyzed with empirical process theory as above, using (\ref{contDstaremp}), to establish that it is $O_P(n^{-1/2-\alpha(d)})$. Thus, we have now shown \[ n^{1/2}(\Psi(Q_n^{\#*})-\Psi(Q_n^*))=n^{1/2}(P_n^{\#}-P_n)D^*(Q_n,G_n)+o_P(1)\Rightarrow_d N(0,\sigma^2_0). \] This completes the proof of the theorem for the HAL-TMLE at the fixed $C^u$.
As remarked earlier, it follows straightforwardly that this proof applies uniformly to any $C$ between $C_0$ and $C^u$, and thereby to a selector $C_n$ satisfying (\ref{Cn}). $\Box$ \paragraph{Remark regarding robustness in underlying data distribution} Consider the exact second-order remainder $R_{20}$ for the HAL-TMLE $\Psi(Q_n^*)-\Psi(Q_0)=(P_n-P_0)D^*(Q_0,G_0)+R_{20}(Q_n^*,G_n,Q_0,G_0)$ and the exact second-order remainder $R_{2n}^{\#}$ for the bootstrapped HAL-TMLE $\Psi(Q_n^{\#*})-\Psi(Q_n^*)=(P_n^{\#}-P_n)D^*(Q_n,G_n)+R_{2n}^{\#}$, as specified in the above proof. Under our model assumptions, the bounding of these two remainders only concerns empirical processes $(P_n-P_0)$ and $(P_n^{\#}-P_n)$ indexed by a uniform Donsker class. As shown in \citep{vanderVaart&Wellner96}, such empirical processes converge and satisfy exact finite sample bounds that apply uniformly in all possible data distributions. Therefore, it follows that we can also establish that the nonparametric bootstrap is consistent for the normal limit distribution of the HAL-TMLE, {\em uniformly in all $P_0\in {\cal M}$}. This means that there exist sufficient sample sizes to obtain a particular level of precision in approximating the normal limit distribution, uniformly in all $P_0\in {\cal M}$. This further demonstrates the robustness of the HAL-MLE, the HAL-TMLE, and its bootstrap distribution in statistical models that have uniform model bounds $(M_1,M_2,M_3)$. \subsection{The nonparametric bootstrap for the exact second-order expansion of the HAL-TMLE} Recall the exact second-order expansion of the HAL-TMLE: \begin{equation}\label{displ} n^{1/2}(\Psi(Q_n^*)-\Psi(Q_0))=n^{1/2}(P_n-P_0)D^*(Q_n^*,G_n)+n^{1/2}R_{20}(Q_n^*,G_n,Q_0,G_0).\end{equation} Recall that $R_{20}(Q,G,Q_0,G_0)=R_{2P_0}(Q,G,Q_0,G_0)$ potentially depends on $P_0$ beyond $(Q_0,G_0)$.
Typically, we have \begin{equation} R_{2P_0}(Q,G,Q_0,G_0)=P_0 R_2(Q,G,Q_0,G_0)\mbox{ for some $R_2(Q,G,Q_0,G_0)$.}\label{P0repR2} \end{equation} Let $R_{2n}()=R_{2P_n}()$ be obtained by replacing $P_0$ by the empirical measure $P_n$. Thus, if we have (\ref{P0repR2}), then $R_{2n}(Q,G,Q_n,G_n)=P_n R_2(Q,G,Q_n,G_n)$. We assume the analogue of the bound (\ref{boundingR2}) on $R_{20}$ for $R_{2n}$: \begin{equation}\label{boundingR2n} \mid R_{2n}(Q,G,Q_n,G_n)\mid \leq f({\bf d}_{n1}^{1/2}(Q,Q_n),{\bf d}_{n2}^{1/2}(G,G_n))\end{equation} for some function $f:\hbox{${\rm I\kern-.2em R}$}^K_{\geq 0}\rightarrow\hbox{${\rm I\kern-.2em R}$}_{\geq 0}$, $K=K_1+K_2$, of the form $f(x)=\sum_{i,j} a_{ij} x_ix_j$, a quadratic polynomial with positive coefficients $a_{ij}\geq 0$. Consider the nonparametric bootstrap analogue of the right-hand side of (\ref{displ}): \[ Z_n^{2,\#}=n^{1/2}(P_n^{\#}-P_n)D^*(Q_n^{\#*},G_n^{\#})+n^{1/2}R_{2n}(Q_n^{\#*},G_n^{\#},Q_n^*,G_n).\] This bootstrap sampling distribution $Z_n^{2,\#}$ provides a very direct estimate of the sampling distribution of $n^{1/2}(\Psi(Q_n^*)-\Psi(Q_0))$. Let $\Phi_n^{\#}(x)=P(Z_n^{2,\#}\leq x\mid P_n)$ be the cumulative distribution of this bootstrap sampling distribution. Thus a bootstrap-based 0.95-confidence interval for $\psi_0$ is given by \begin{equation}\label{ciexactexp} [\psi_n^{*}+q_{0.025,n}^{\#}/n^{1/2},\psi_n^*+q_{0.975,n}^{\#}/n^{1/2} ],\end{equation} where $q_{\alpha,n}^{\#}=\Phi_n^{\#-1}(\alpha)$ is the $\alpha$-quantile of this bootstrap distribution. \begin{theorem}\label{thnpbootexactexp} Under the same conditions as Theorem \ref{thnpboothaltmle} and condition (\ref{boundingR2n}), we have $Z_n^{2,\#}=n^{1/2}(P_n^{\#}-P_n)D^*(Q_n,G_n)+O_P(n^{-1/2-\alpha(d)})$, and thereby \[ Z_n^{2,\#}\Rightarrow_d N(0,\sigma^2_0)\mbox{ conditional on $(P_n:n\geq 1)$}.\] In particular, the above confidence interval (\ref{ciexactexp}) contains $\psi_0$ with probability tending to 0.95 as $n\rightarrow\infty$.
\end{theorem} One might simplify $Z_n^{2,\#}$ by replacing the targeted versions by their initial estimators: \begin{equation}\label{simplified} Z_n^{2a,\#}=n^{1/2}(P_n^{\#}-P_n)D^*(Q_n^{\#},G_n^{\#})+n^{1/2}R_{2n}(Q_n^{\#},G_n^{\#},Q_n,G_n).\end{equation} In this case $Z_n^{2a,\#}$ is the bootstrap sampling distribution of the exact second-order expansion \[ n^{1/2}(\psi_n^1-\Psi(Q_0))=n^{1/2}(P_n-P_0)D^*(Q_n,G_n)+n^{1/2}R_{20}(Q_n,G_n,Q_0,G_0)\] of the HAL-one-step estimator $\psi_n^1=\Psi(Q_n)+P_n D^*(Q_n,G_n)$. The latter bootstrap sampling distribution can also be used for the HAL-TMLE. As above, let $\Phi_n^{a\#}(x)=P(Z_n^{2a,\#}\leq x\mid P_n)$ be the cumulative distribution of $Z_n^{2a,\#}$, conditional on $P_n$. A corresponding bootstrap-based 0.95-confidence interval for $\psi_0$ is given by \begin{equation}\label{ciexactexpa} [\psi_n^1+q_{0.025,n}^{a\#}/n^{1/2},\psi_n^1+q_{0.975,n}^{a\#}/n^{1/2} ],\end{equation} where $q_{\alpha,n}^{a\#}=\Phi_n^{a\#-1}(\alpha)$. We have the analogue of the above theorem for the bootstrap distribution $Z_n^{2a,\#}$ of the one-step estimator, where we can remove the specific conditions needed for the TMLE $Q_n^*$. Since the proof is remarkably simple and demonstrates that $Z_n^{2,\#}$ and $Z_n^{2a,\#}$ provide very direct approximations of the sampling distributions of the HAL-TMLE and the HAL-one-step estimator, we present its proof here. \begin{theorem}\label{thnpbootexactexpa} Assume the conditions of Theorem \ref{thefftmle}; (\ref{boundb}); (\ref{dn1bounding}); and (\ref{contDstaremp}). Then, $Z_n^{2a,\#}=n^{1/2}(P_n^{\#}-P_n)D^*(Q_n,G_n)+O_P(n^{-1/2-\alpha(d)})$, and thereby \[ Z_n^{2a,\#}\Rightarrow_d N(0,\sigma^2_0)\mbox{ conditional on $(P_n:n\geq 1)$}.\] In particular, the above confidence interval (\ref{ciexactexpa}) contains $\psi_0$ with probability tending to 0.95 as $n\rightarrow\infty$. \end{theorem} {\bf Proof:} Consider (\ref{simplified}).
By Theorem \ref{thnpbootmle} we have that $d_{n1}(Q_n^{\#},Q_n)$ and $d_{n2}(G_n^{\#},G_n)$ are $O_P(n^{-1/2-\alpha(d)})$. Using the bound (\ref{boundingR2n}) now implies that $R_{2n}(Q_n^{\#},G_n^{\#},Q_n,G_n)=O_P(n^{-1/2-\alpha(d)})$. Regarding the leading term in $Z_n^{2a,\#}$, we write it as \[ n^{1/2}(P_n^{\#}-P_n)\{D^*(Q_n^{\#},G_n^{\#})-D^*(Q_n,G_n)\}+n^{1/2}(P_n^{\#}-P_n)D^*(Q_n,G_n).\] In the proof of Theorem \ref{thnpboothaltmle} we showed that the first term is $O_P(n^{-1/2-\alpha(d)})$. This completes the proof of Theorem \ref{thnpbootexactexpa}. $\Box$ The above bootstrap distribution $Z_n^{2,\#}$ is different from the sampling distribution of $Z_n^{1,\#}=n^{1/2}(\Psi(Q_n^{\#*})-\Psi(Q_n^*))$ used in the previous subsection (and similarly, $Z_n^{2a,\#}$ is different from the bootstrap distribution of the standardized one-step estimator). The advantage of $Z_n^{1,\#}$ is that it is an actual sampling distribution of our HAL-TMLE and thereby fully respects that our estimator is a substitution estimator. On the other hand, its asymptotic expansion as analyzed in the proof of Theorem \ref{thnpboothaltmle}, together with the remarkably direct and simple proof of Theorem \ref{thnpbootexactexpa}, suggests that the sampling distribution $Z_n^{1,\#}$ deviates more from the desired sampling distribution of $n^{1/2}(\psi_n^*-\psi_0)$ than $Z_n^{2,\#}$ does. Therefore, it will be of interest to compare both bootstrap methods through a simulation study. As in our previous theorems, the above theorems also apply to the setting in which we replace $C^u$ by the cross-validation selector $C_n$.
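To make the construction of $Z_n^{2a,\#}$ in (\ref{simplified}) concrete, one replicate can be computed from the efficient influence curve values at the original observations plus the (estimated) empirical second-order remainder. A sketch with hypothetical hooks standing in for the problem-specific fits:

```python
import numpy as np

def z2a_replicate(Dstar, data, boot_idx, R2n, n):
    """One draw of Z_n^{2a,#} = sqrt(n) (P_n^# - P_n) D*(Q_n^#, G_n^#) + sqrt(n) R_{2n}.

    Dstar: callable returning D*(Q_n^#, G_n^#)(O_i) for each observation; R2n:
    the empirical second-order remainder -- both hypothetical hooks standing in
    for the problem-specific fits."""
    d = Dstar(data)                            # D* evaluated at all original observations
    emp_diff = d[boot_idx].mean() - d.mean()   # (P_n^# - P_n) D*
    return np.sqrt(n) * (emp_diff + R2n)
```

Repeating this over many resamples `boot_idx` (with refitted $Q_n^{\#},G_n^{\#}$ per resample) yields the draws whose quantiles define the interval (\ref{ciexactexpa}).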
\section{The nonparametric bootstrap for a conservative finite sample bound of the exact second-order expansion of the HAL-TMLE or HAL-one-step estimator}\label{sectupperb} We have the following finite sample upper bound for our HAL-TMLE relative to its target $\Psi(Q_0)$: \begin{eqnarray*} n^{1/2}\mid \Psi(Q_n^*)-\Psi(Q_0)\mid&\leq & \mid n^{1/2}(P_n-P_0)D^*(Q_n^*,G_n) \mid \\ &&+ n^{1/2}f({\bf d}_{01}^{1/2}(Q_n^*,Q_0),{\bf d}_{02}^{1/2}(G_n,G_0)) +n^{1/2}r_n\\ &\equiv&X_n(Q_n^*,G_n)+n^{1/2}r_n, \end{eqnarray*} where we defined a process $X_n(Q,G)$ (suppressing its dependence on $P_0$). Similarly, we have this upper bound for the HAL one-step estimator $\psi_n^1=\Psi(Q_n)+P_n D^*(Q_n,G_n)$: \[ n^{1/2}\mid \psi_n^1-\Psi(Q_0)\mid \leq X_n(Q_n,G_n).\] Let us focus on the latter, which could just as well be used for the sampling distribution of the HAL-TMLE. {\bf How is this upper bound conservative?} This upper bound is conservative from various points of view. Firstly, the true second-order remainder $R_{20}(Q_n,G_n,Q_0,G_0)$ could have both negative and positive values that could cancel out a positive or negative value of $(P_n-P_0)D^*(Q_n,G_n)$. For example, in many models $R_{20}$ has a double robust structure $\int (H_1(Q_n)-H_1(Q_0))(H_2(G_n)-H_2(G_0)) H_3(P_0,P_n) dP_0$. In these double robust problems $Q_n$ and $G_n$ are based on different factors of the likelihood, so that $(H_1(Q_n)-H_1(Q_0))$ is generally almost uncorrelated with $(H_2(G_n)-H_2(G_0))$, so that such a second-order term could be reasonably symmetrically distributed around zero. The above upper bound does not allow any cancellation, making it particularly conservative for double robust estimation problems. Secondly, the actual size of $R_{20}(Q_n,G_n,Q_0,G_0)$ could be significantly smaller than our upper bound. For example, if we have the double robust structure, then the upper bound bounds a term $\int (H_1(Q_n)-H_1(Q_0))(H_2(G_n)-H_2(G_0)) H_3(P_0,P_n)dP_0$ by Cauchy-Schwarz while bounding $H_3$ by its supremum norm.
Since $Q_0$ and $G_0$ are very different functions, the Cauchy-Schwarz bound is very conservative itself, and the supremum norm bound on $H_3$ will involve replacing a denominator by its smallest value. Therefore, this bound is highly conservative for double robust estimation problems. If the second-order remainder has the form $\int (H_1(Q_n)-H_1(Q_0))^2 H_3(P_n,P_0)dP_0$, then this bound is more reasonable, by only being conservative due to the bounding of $H_3$ by its supremum norm and because we do not allow cancellation of the mean zero centered $n^{1/2}(P_n-P_0)D^*(Q_n,G_n)$ with $R_{20}(P_n^*,P_0)$. Finally, the sampling distribution of this upper bound does not incorporate the known bounds on $n^{1/2}(\Psi(Q_n^*)-\Psi(Q_0))$ such as, for example, that $\Psi(P_0)$ is a probability. The first nonparametric bootstrap method of the previous section is a sampling distribution of a substitution estimator and thereby respects all the global bounds of the model and target parameter (e.g., $\Psi(P)$ is a probability). Respecting global constraints is particularly important when the target parameter is weakly supported by the data and asymptotics has not kicked in for the given sample size. \ \newline We estimate the distribution of this upper bound with the nonparametric bootstrap. That is, we (conservatively) approximate the sampling distribution of $Z_n=\mid n^{1/2}(\Psi(Q_n^*)-\Psi(Q_0))\mid$ or $\mid n^{1/2}(\psi_n^1-\Psi(Q_0))\mid$ with \[ Z_n^{3,\#}=\mid n^{1/2}(P_n^{\#}-P_n)D^*(Q_n^{\#},G_n^{\#})\mid + n^{1/2}f({\bf d}_{n1}^{1/2}(Q_n^{\#},Q_n),{\bf d}_{n2}^{1/2}(G_n^{\#},G_n)), \] conditional on $(P_n:n\geq 1)$. This distribution can now be used to construct a $0.95$-confidence interval. Let $F_n^{\#}(x)=P(Z_n^{3,\#}\leq x\mid (P_n:n\geq 1))$ and $q_{n,0.95}^{\#}=F_n^{\# -1}(0.95)$ be its $0.95$-quantile. Then, $\Psi(Q_n^*)\pm q_{n,0.95}^{\#}/n^{1/2}$ is the resulting $0.95$-confidence interval.
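One replicate of the conservative bound $Z_n^{3,\#}$ combines the absolute bootstrapped empirical process term with the remainder bound built from the empirical dissimilarities. In this sketch the remainder bound is kept on the $n^{1/2}$-scale of the left-hand side, and the single-term quadratic $f(x,y)=xy$ is an illustrative assumption; all names are hypothetical:

```python
import numpy as np

def z3_replicate(dstar_vals, boot_idx, dn1, dn2, n, f=lambda x, y: x * y):
    """One draw of the conservative bound Z_n^{3,#}:
    |sqrt(n) (P_n^# - P_n) D*| + sqrt(n) f(d_{n1}^{1/2}, d_{n2}^{1/2}).

    dstar_vals: D*(Q_n^#, G_n^#) at the original observations; dn1, dn2: the
    empirical loss-based dissimilarities; f(x, y) = x * y is an illustrative
    single-term quadratic. All names are hypothetical."""
    emp = abs(dstar_vals[boot_idx].mean() - dstar_vals.mean())
    return np.sqrt(n) * (emp + f(np.sqrt(dn1), np.sqrt(dn2)))
```

The $0.95$-quantile of these draws then gives the half-width $q_{n,0.95}^{\#}/n^{1/2}$ of the conservative interval.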
{\bf Alternative reasonable upper bound:} It also appears reasonable to use the upper bound $X_{n1}(Q_n,G_n)\equiv \mid n^{1/2}(P_n-P_0)D^*(Q_n^*,G_n) + f(d_{01}(Q_n^*,Q_0),d_{02}(G_n,G_0))\mid $. Note that $X_{n1}(Q_n,G_n)$ differs from $X_n(Q_n,G_n)$ by putting the absolute value outside the sum of the two terms. This is not a deterministic upper bound, in the sense that $\mid n^{1/2}(\psi_n^1-\Psi(Q_0))\mid $ is not guaranteed to be smaller than this bound with probability 1, but we certainly expect this to be a conservative distribution since we are adding a positive bias to a mean zero centered symmetric empirical process at $D^*(Q_n,G_n)$. In spite of the conservative nature of our upper bound, it is still asymptotically sharp. Again, not surprisingly, asymptotic consistency of the above conservative sampling distribution is an immediate corollary of our analysis of the nonparametric bootstrap for the HAL-TMLE. \begin{theorem}\label{thupperbound} Under the same conditions as Theorem \ref{thnpboothaltmle}, $Z_n^{3,\#}=\mid n^{1/2}(P_n^{\#}-P_n)D^*(Q_n,G_n)\mid +O_P(n^{-1/2-\alpha(d)})$, and thereby \[ Z_n^{3,\#}\Rightarrow_d \mid N(0,\sigma^2_0)\mid \mbox{ conditional on $(P_n:n\geq 1)$}.\] In particular, the above confidence interval (\ref{ciexactexp}) contains $\psi_0$ with probability tending to 0.95 as $n\rightarrow\infty$.
\end{theorem} \paragraph{More conservative asymptotically sharp sampling distribution and corresponding bootstrap method:} In Appendix \ref{sectsupupperb} we propose an even more conservative sampling distribution for $n^{1/2}(\psi_n^1-\psi_0)$ (or $n^{1/2}(\psi_n^*-\psi_0)$) by replacing $X_n(Q_n,G_n)$ by the supremum of $X_n(Q,G)$ over all $(Q,G)$ in the parameter space for which $d_{01}(Q,Q_0)$ and $d_{02}(G,G_0)$ are smaller than specified constants $x_{1n} $ and $x_{2n}$, chosen so that the probability that $d_{01}(Q_n,Q_0)$ and $d_{02}(G_n,G_0)$ are smaller than these constants is known to be larger than $1-\bar{\alpha}_{n}$ for some small number $\bar{\alpha}_n\rightarrow 0$. We propose a concrete method that expresses $(x_{1n},x_{2n})$ in terms of a quantile of the supremum norm of a standard empirical cumulative survival function process $n^{1/2}(\bar{P}_n-\bar{P}_0)$. Bootstrapping this latter process then provides an estimator $(x_{1n}^{\#},x_{2n}^{\#})$ of $(x_{1n},x_{2n})$. The distribution of the corresponding supremum of $X_n(Q,G)$ is then estimated with the nonparametric bootstrap as well. The same method could be applied to $X_{n1}(Q,G)$. Since this supremum might be cumbersome to compute in practice, in Appendix \ref{sectsupupperb} we propose a simplified conservative approximation of this supremum in which the second-order remainder is separately maximized by plugging in the values $(x_{1n},x_{2n})$, resulting in an easy-to-compute sampling distribution. Again, we show that both of these methods are still asymptotically sharp. \section{Examples}\label{sectexample} \subsection{Nonparametric estimation of the average treatment effect} Let $O=(W,A,Y)\sim P_0$, where $W\in [0,\tau_1]\subset \hbox{${\rm I\kern-.2em R}$}^{m_1}_{\geq 0}$ is an $m_1$-dimensional vector of baseline covariates, $A\in \{0,1\}$ is a binary treatment, and $Y\in \{0,1\}$ is a binary outcome.
For a possible data distribution $P$, let $\bar{Q}(P)=E_P(Y\mid A,W)$, $G(P)=P(A=1\mid W)$, and let $Q_W(P)$ be the cumulative probability distribution of $W$. Let $Q=(Q_W,\bar{Q})$. Let $g(a\mid W)=P(A=a\mid W)=G(W)^a(1-G(W))^{1-a}$. Thus $Q_1=Q_W$, $Q_2=\bar{Q}$, $m_{11}=m_1$ and $m_{12}=m_1+1$, in terms of our general notation. Suppose that our model assumes that $G(W)$ depends on a possibly strict subvector of $W$, and let $m_2 $ be the dimension of this subvector. {\bf Statistical model:} Since $Q_W$ is a cumulative distribution function, it is a monotone $m_1$-variate cadlag function and its sectional variation norm equals its total variation, which thus equals 1. Let $\delta>0$ be given. We assume $\bar{Q}\in (\delta,1-\delta)$ and that it is an element of the class of $m_{12}$-dimensional cadlag functions with sectional variation norm bounded by some $C_{12}^u$. (Here one can treat $A$ as continuous on $[0,1]$ and assume that $\bar{Q}$ is a step-function in $A$ with a single jump at 1, allowing us to embed functions of continuous and discrete covariates in a cadlag function space.) Similarly, we assume $G\in (\delta,1-\delta)$ and that it is an element of the class of $m_2$-dimensional cadlag functions with sectional variation norm bounded by some $C_2^u$. Let's denote these parameter spaces for $Q_W,\bar{Q}$ and $G$ with ${\cal F}_{11}$, ${\cal F}_{12}$ and ${\cal F}_2$, respectively. Let ${\cal F}_1={\cal F}_{11}\times {\cal F}_{12}$ be the parameter space of $Q=(Q_W,\bar{Q})$.
For a given $C_1^u=(C_{11}^u=1,C_{12}^u),C_2^u<\infty$ and $\delta>0$, consider the statistical model \[ {\cal M}=\{P: Q_W\in {\cal F}_{11}, \bar{Q}\in {\cal F}_{12}, G\in {\cal F}_2\}.\] Thus, ${\cal M}$ is defined as the set of all possible probability distributions for which the conditional means of $Y$ and $A$ are cadlag functions with sectional variation norm bounded by $C_{12}^u$ and $C_2^u$, respectively, and the conditional mean of $Y$ and the conditional density of $A$, given $W$, are bounded away from $0$ and $1$, $P_W$-a.e., while we make no assumptions on the probability distribution of $W$. {\bf Parameter space of type (\ref{calFmodel}) or (\ref{calFmodelplus})}: As shown in Section 2, we can reparametrize $G=\delta+(1-2\delta)\mbox{expit}(f_2(G))$ and $\bar{Q}=\delta+(1-2\delta)\mbox{expit}(f_1(\bar{Q}))$, where now $f_1$ and $f_2$ can be any cadlag functions that are only restricted by upper bounds on their sectional variation norm implied by $C_{12}^u$ and $C_2^u$, while $C_{12}^l=C_2^l=0$, so that the parameter space for $f_1$ and $f_2$ is indeed of type (\ref{calFmodel}). Obviously, $Q_W$ is of the type (\ref{calFmodelplus}) with $C_{11}^l=C_{11}^u=1$. This demonstrates that our model ${\cal M}$ can be represented as a model as defined in Section 2. {\bf Target parameter:} Let $\Psi:{\cal M}\rightarrow\hbox{${\rm I\kern-.2em R}$}$ be defined by $\Psi(P)=\Psi_1(P)-\Psi_0(P)$, where $\Psi_a(P)=E_PE_P(Y\mid A=a,W)$. Note that $\Psi(P)$ only depends on $P$ through $Q(P)$, so that we will also use the notation $\Psi(Q)$ instead of $\Psi(P)$. Let's focus on $\Psi_1(P)$, which will also imply the formulas for $\Psi_0(P)$ and thereby $\Psi(P)$. {\bf Loss functions for $Q$ and $G$:} Let $L_{11}(Q_W)=\int_x(I(W\leq x)-Q_W(x))^2r(x) dx$ for some weight function $r>0$ be the loss function for $Q_{W,0}$. Let $d_{011}(Q_W,Q_{W,0})=P_0L_{11}(Q_W)-P_0L_{11}(Q_{W,0})$ be the corresponding loss-based dissimilarity.
Let $L_{12}(\bar{Q})=-\{Y\log\bar{Q}(A,W)+(1-Y)\log(1-\bar{Q}(A,W))\}$ be the log-likelihood loss function for the conditional mean $\bar{Q}_0$, and let $d_{012}(\bar{Q},\bar{Q}_0)=P_0 L_{12}(\bar{Q})-P_0L_{12}(\bar{Q}_0)$ be the corresponding Kullback-Leibler dissimilarity. We can then define the sum-loss $L_1(Q)=L_{11}(Q_W)+L_{12}(\bar{Q})$ for $Q_0$, and its loss-based dissimilarity $d_{01}(Q,Q_0)=P_0L_1(Q)-P_0L_1(Q_0)$, which equals the sum of the following two dissimilarities: \begin{eqnarray*} d_{011}(Q_W,Q_{W,0})&=&\int_x (Q_{W}(x)-Q_{W,0}(x))^2 r(x) dx\\ d_{012}(\bar{Q},\bar{Q}_0)&=&\int \log \left(\frac{\bar{Q}_0}{\bar{Q}}\right)^y \left( \frac{1-\bar{Q}_0}{1-\bar{Q}}\right)^{1-y}(a,w) dP_0(w,a,y) .\end{eqnarray*} Let $L_2(G)=-\{A\log G(W)+(1-A)\log(1-G(W))\}$ be the loss function for $G_0=P_0(A=1\mid W)$, and let $d_{02}(G,G_0)=P_0L_2(G)-P_0L_2(G_0)$ be the Kullback-Leibler dissimilarity between $G$ and $G_0$. {\bf Canonical gradient and corresponding exact second-order expansion:} The canonical gradient of $\Psi_a$ at $P$ is given by: \[ D^*_a(Q,G)=\frac{I(A=a)}{g(A\mid W)}(Y-\bar{Q}(A,W))+\bar{Q}(a,W)-\Psi_a(Q).\] The exact second-order remainder $R_{20}^a(P,P_0)\equiv \Psi_a(P)-\Psi_a(P_0)+P_0 D^*_a(P)$ is given by: \[ R_{20}^a(\bar{Q},G,\bar{Q}_0,G_0)=\int \frac{(g-g_0)(a\mid w)}{g(a\mid w)}(\bar{Q}-\bar{Q}_0)(a,w) dP_0(w).\] {\bf Bounding the second-order remainder:} By the Cauchy-Schwarz inequality, we obtain the following bound on $R_{20}^a(P,P_0)$: \[ \mid R_{20}^a(P,P_0)\mid \leq \delta^{-1}\parallel \bar{Q}_a-\bar{Q}_{a0}\parallel_{P_0}\parallel G-G_0\parallel_{P_0} ,\] where $\bar{Q}_a(W)=\bar{Q}(a,W)$, $a\in \{0,1\}$. Thus, $D^*(P)=D^*_1(P)-D^*_0(P)$, $R_{20}(P,P_0)=R_{20}^1(P,P_0)-R_{20}^0(P,P_0)$, and the upper bound for $R_{20}(P,P_0)$ can be defined as the sum of the two upper bounds for $R_{20}^a(P,P_0)$ in the above inequality, $a\in \{0,1\}$. By \citep{vanderVaart98} we have $\parallel p^{1/2}-p_0^{1/2}\parallel_{P_0}^2\leq P_0 \log p_0/p $.
For Bernoulli distributions, we have $\parallel p-p_0\parallel^2_{P_0}\leq 4 \parallel p^{1/2}-p_0^{1/2}\parallel^2_{P_0}\leq 4P_0\log p_0/p$. From this it follows that $\int (\bar{Q}-\bar{Q}_0)^2(a,w)dP_0(a,w)\leq 4 d_{012}(\bar{Q},\bar{Q}_0)$ and thus $\parallel \bar{Q}_a-\bar{Q}_{a0}\parallel^2_{P_0}\leq 4\delta^{-1}d_{012}(\bar{Q},\bar{Q}_0)$. Therefore, $\parallel \bar{Q}_a-\bar{Q}_{a0}\parallel_{P_0}\leq 2\delta^{-1/2} d_{012}^{1/2}(\bar{Q},\bar{Q}_0)$. Similarly, it follows that $\parallel G-G_0\parallel_{P_0}\leq 2 d_{02}^{1/2}(G,G_0)$. This thus shows the following bound on $R_{20}^a(P,P_0)$: \[ \mid R_{20}^a(P,P_0)\mid \leq 4\delta^{-1.5} d_{012}^{1/2}(\bar{Q},\bar{Q}_0) d_{02}^{1/2}(G,G_0).\] The right-hand side represents the function $f({\bf d}_{01}^{1/2}(Q,Q_0),{\bf d}_{02}^{1/2}(G,G_0))$ for the parameter $\Psi_a$ in our general notation: $f(x=(x_1,x_2),y)= 4 \delta^{-1.5} x_2 y$. The sum of these two bounds for $a\in \{0,1\}$ (i.e., $2f$) now provides a conservative bound for $R_{20}=R_{20}^1-R_{20}^0$: \begin{equation}\label{r2upperboundexample1} \mid R_{20}(P,P_0)\mid \leq f(d_{012}^{1/2}(\bar{Q},\bar{Q}_0),d_{02}^{1/2}(G,G_0))\equiv 8\delta^{-1.5} d_{012}^{1/2}(\bar{Q},\bar{Q}_0) d_{02}^{1/2}(G,G_0).\end{equation} This verifies (\ref{boundingR2}). We note that this bound is very conservative due to the arguments we provided in general in the previous section for double robust estimation problems. {\bf Continuity of canonical gradient:} Regarding the continuity assumption (\ref{contDstar}), we note that $P_0\{D^*_a(P)-D^*_a(P_0)\}^2$ can be bounded by a constant times $\parallel G-G_0\parallel_{P_0}^2+\parallel \bar{Q}_a-\bar{Q}_{a0}\parallel^2_{P_0}+(\Psi_a(Q)-\Psi_a(Q_0))^2$, where the constant depends on $\delta$.
The latter square difference can be bounded in terms of $\parallel \bar{Q}_a-\bar{Q}_{a0}\parallel^2_{P_0}$ and, by applying our integration by parts formula to $\int \bar{Q}_a(w) d(Q_W-Q_{W0})(w)$, in terms of $d_{011}(Q_W,Q_{W0})$, where the constant depends on $C_1^u$. Thus this proves (\ref{contDstar}) for $D^*=D^*_1-D^*_0$. {\bf Uniform model bounds on sectional variation norm:} It also follows immediately that the sectional variation norm model bounds $M_1,M_2,M_3$ (\ref{sectionalvarbound}) of $L_1(Q)$, $L_2(G)$ and $D^*(P)$ are all finite, and can be expressed in terms of $(C_1^u,C_2^u,\delta)$. This verifies the model assumptions of Section 2. {\bf HAL-MLEs:} Let $Q_n=\arg\min_{Q\in {\cal F}_1}P_n {\bf L}_1(Q)$ and $G_n=\arg\min_{G\in {\cal F}_2}P_n L_2(G)$ be the HAL-MLEs. Here we can use the above mentioned reparameterizations of $Q$ and $G$ in terms of $f_1$ and $f_2$, respectively, which vary over a parameter space of type $(\ref{calFmodel})$. As shown in \citep{vanderLaan15,Benkeser&vanderLaan16}, if one simply sets $\delta=0$, then $\bar{Q}_n$ and $G_n$ can be computed with standard Lasso logistic regression software using a linear logistic regression model with around $n 2^{m_1}$ indicator basis functions, where $m_1$ is the dimension of $W$. The reparameterization would now enforce the bounds $\delta$ and $1-\delta$ for these HAL-MLEs. Note that $Q_{W,n}$ is just an unrestricted MLE and thus equals the empirical cumulative distribution function. Therefore, we actually have that $\parallel Q_{W,n}-Q_{W,0}\parallel_{\infty}=O_P(n^{-1/2})$ in supremum norm, while $d_{012}(\bar{Q}_n,\bar{Q}_0)$ and $d_{02}(G_n,G_0)$ are both $O_P(n^{-1/2-\alpha(d)})$, where $d$ is the dimension of $O$. If $m_2<d-2$, then one should be able to improve the latter bound to $n^{-1/2-\alpha(m_2)}$. {\bf CV-HAL-MLEs:} The above HAL-MLEs are determined by $(C_1^u=(1,C_{12}^u),C_2^u)$ and could thus be denoted with $Q_{n,C_1^u}=\hat{Q}_{C_1^u}(P_n)$ and $G_{n,C_2^u}=\hat{G}_{C_2^u}(P_n)$.
Let $C_{10}=\parallel Q_0\parallel_v^*=(1,\parallel \bar{Q}_0\parallel_v^*)$ and $C_{20}=\parallel G_0\parallel_v^*$, respectively, which are thus smaller than $C_1^u$ and $C_2^u$, respectively. We can now define the cross-validation selector that selects the best HAL-MLE over all $C_{1}$ and $C_2 $ smaller than these upper bounds: \begin{eqnarray*} C_{1n}&=&\arg\min_{C_{11}=1,C_{12}<C_{12}^u}E_{B_n}P_{n,B_n}^1L_1(\hat{Q}_{C_1}(P_{n,B_n}^0)) \\ C_{2n}&=&\arg\min_{C_2<C_2^u}E_{B_n}P_{n,B_n}^1L_2(\hat{G}_{C_2}(P_{n,B_n}^0)), \end{eqnarray*} where $B_n\in \{0,1\}^n$ is a random split in a training sample $\{O_i:B_n(i)=0\}$ with empirical measure $P_{n,B_n}^0$ and a validation sample $\{O_i:B_n(i)=1\}$ with empirical measure $P_{n,B_n}^1$. This now defines the CV-HAL-MLEs $Q_n=Q_{n,C_{1n}}$ and $G_n=G_{n,C_{2n}}$ as well. Thus, by setting $C_1^u=C_{1n}$ and $C_2^u=C_{2n}$, our HAL-MLEs equal the CV-HAL-MLEs. {\bf HAL-TMLE:} Let $\mbox{Logit}\bar{Q}_{n,\epsilon}=\mbox{Logit}\bar{Q}_n+\epsilon C(G_n)$, where $C(G_n)(A,W)=(2A-1)/g_n(A\mid W)$. Let $\epsilon_n=\arg\min_{\epsilon}P_n L_{12}(\bar{Q}_{n,\epsilon})$. This defines the TMLE $\bar{Q}_n^*=\bar{Q}_{n,\epsilon_n}$ of $\bar{Q}_0$. We can also define a local least favorable submodel $\{Q_{W,n,\epsilon_2}:\epsilon_2\}$ for $Q_{W,n}$, but since $Q_{W,n}$ is an NPMLE one will have that $\epsilon_{2n}=\arg\min_{\epsilon_2}P_n L_{11}(Q_{W,n,\epsilon_2})=0$, and thereby that the TMLE of $Q_0$ for any such 2-dimensional least favorable submodel is given by $Q_n^*=(Q_{W,n},\bar{Q}_n^*)$. It follows that $P_n D^*(Q_n^*,G_n)=0$. {\bf Preservation of rate for HAL-TMLE:} The proof in Appendix \ref{AppendixA} for $\epsilon_n=O_P(d_{012}^{1/2}(\bar{Q}_n,\bar{Q}_0))$ applies to this submodel, so that indeed $d_{01}(Q_n^*,Q_0)$ converges at the same rate as $d_{01}(Q_n,Q_0)$.
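The targeting step described above (fluctuating $\mbox{Logit}\,\bar{Q}_n$ with the clever covariate $C(G_n)(A,W)=(2A-1)/g_n(A\mid W)$, fitting $\epsilon_n$ by MLE, and plugging into $\Psi$) can be sketched in a few lines. The sketch below is illustrative only: $\bar{Q}_n$ and $G_n$ are taken as given arrays of fitted values rather than actual HAL fits, and the one-dimensional MLE for $\epsilon$ is solved by a simple Newton search.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def logit(p):
    return np.log(p / (1.0 - p))

def tmle_ate(A, Y, Qbar_A, Qbar_1, Qbar_0, g1):
    """One TMLE update for the ATE. Qbar_A = Qbar_n(A_i, W_i);
    Qbar_1 / Qbar_0 = Qbar_n(a, W_i) for a = 1, 0; g1 = g_n(1 | W_i).
    Fits epsilon by maximizing the log-likelihood (loss L_12) along the
    logistic fluctuation, then returns the plug-in ATE."""
    gA = np.where(A == 1, g1, 1.0 - g1)
    H_A = (2 * A - 1) / gA                 # clever covariate at observed A
    H_1, H_0 = 1.0 / g1, -1.0 / (1.0 - g1)
    eps = 0.0
    for _ in range(50):                    # Newton iterations for eps_n
        Qeps = expit(logit(Qbar_A) + eps * H_A)
        score = np.mean(H_A * (Y - Qeps))
        hess = -np.mean(H_A**2 * Qeps * (1 - Qeps))
        step = score / hess
        eps -= step
        if abs(step) < 1e-10:
            break
    Q1_star = expit(logit(Qbar_1) + eps * H_1)
    Q0_star = expit(logit(Qbar_0) + eps * H_0)
    return np.mean(Q1_star - Q0_star)
```

When the initial $\bar{Q}_n$ already satisfies the score equation, the fitted $\epsilon_n$ is (numerically) zero and the TMLE reduces to the plug-in estimator.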
{\bf Asymptotic efficiency of HAL-TMLE and CV-HAL-TMLE:} Application of Theorem \ref{thefftmle} shows that $\Psi(Q_n^*)$ is asymptotically efficient, where one can either choose $Q_n$ as a fixed HAL-MLE using $C_1=C_1^u$ or as the CV-HAL-MLE using $C_1=C_{1n}$, and similarly for $G_n$. The preferred estimator would be the CV-HAL-TMLE. {\bf Finite sample conservative confidence interval:} Let's first consider the exact finite sample conservative confidence interval presented in (\ref{consbound1}). For this we need bounds $M_3$, $M_{12}$ and $M_2$ on the sectional variation norm of $D^*(Q,G)$, $L_{12}(\bar{Q})$ and $L_2(G)$, respectively. These can be expressed in terms of the sectional variation norm bounds $(C_{12}^u,C_2^u)$ on $(\bar{Q}, G)$ and the lower bound $\delta$ of $\min_a g(a\mid W)$ and $\bar{Q}$. Here one can use that the sectional variation norm of $1/g(a\mid W)$ can be bounded in terms of $\delta$ and $\parallel w\rightarrow g(a\mid w)\parallel_v^*$. Inequality (\ref{consbound1}) tells us that $\mid n^{1/2}(\psi_n^1-\Psi(Q_0))\mid $ is dominated by the distribution of $Z_n^+=(M_3+f(M_{12}^{1/2},M_2^{1/2}))\parallel n^{1/2}(\bar{P}_n-\bar{P}_0)\parallel_{\infty}$, where $f$ is defined by (\ref{r2upperboundexample1}), and $\bar{P}(u)=P([u,\tau])=\int_{[u,\tau]} dP(s)$ is the probability that $O\in [u,\tau]$ under $P$. Estimation of the sampling distribution of $n^{1/2}(\bar{P}_n-\bar{P}_0)$ with $n^{1/2}(\bar{P}_n^{\#}-\bar{P}_n)$ then results in the estimate $Z_n^{+,\#}$ of $Z_n^+$ and a corresponding finite sample conservative $0.95$-confidence interval. (Similarly, we have this bound for the TMLE $\Psi(Q_n^*)$ with $M_3$ and $M_{12}$ replaced by $M_3^*$ and $M_{12}^*$, respectively.) However, as we argued in general, this conservative confidence interval will generally be of little practical use by being much too conservative, although these confidence intervals will still have a width of order $n^{-1/2}$.
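For intuition, the sup-norm statistic $\parallel n^{1/2}(\bar{P}_n^{\#}-\bar{P}_n)\parallel_{\infty}$ driving $Z_n^{+,\#}$ is straightforward to bootstrap. The sketch below (illustrative names; shown for univariate $O$, where the survival function only needs to be evaluated at the observed support points) generates the bootstrap draws:

```python
import numpy as np

def sup_norm_survival_boot(x, n_boot=200, seed=0):
    """Bootstrap draws of sup_u | sqrt(n) (Pbar_n^# - Pbar_n)(u) | for
    univariate data, where Pbar(u) = P(O >= u) is the empirical survival
    function evaluated at the observed points."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = x.size
    # Pbar_n evaluated at each observed point: entry i is mean_j 1{x_j >= x_i}
    surv_n = (x[None, :] >= x[:, None]).mean(axis=1)
    draws = np.empty(n_boot)
    for b in range(n_boot):
        xb = rng.choice(x, size=n, replace=True)
        surv_b = (xb[None, :] >= x[:, None]).mean(axis=1)
        draws[b] = np.sqrt(n) * np.max(np.abs(surv_b - surv_n))
    return draws
```

Multiplying the $0.95$-quantile of these draws by the constant $(M_3+f(M_{12}^{1/2},M_2^{1/2}))/n^{1/2}$ would give the half-width of the conservative interval.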
{\bf Asymptotic validity of the nonparametric bootstrap for the HAL-MLEs:} Firstly, note that the bootstrapped HAL-MLEs \[ \bar{Q}_n^{\#}=\arg\min_{\parallel \bar{Q}\parallel_v^*< C_{12}^u,\bar{Q}\ll \bar{Q}_n}P_n^{\#}L_{12}(\bar{Q}),\] and $G_n^{\#}=\arg\min_{\parallel G\parallel_v^*<C_2^u,G\ll G_n}P_n^{\#}L_2(G)$ are easily computed as a standard Lasso regression using $L_1$-penalty $C_{12}^u$ and $C_2^u$ and including the at most $n$ indicator basis functions with the non-zero coefficients selected by $Q_n$ and $G_n$, respectively. This makes the actual computation of the nonparametric bootstrap distribution a very doable computational problem, even though the single computation of $Q_n$ and $G_n$ is highly demanding for large dimension of $W$. {\bf Verification of conditions for validity of bootstrap for HAL-MLE Theorem \ref{thnpbootmle}:} We now want to verify the conditions for our asymptotic consistency of the nonparametric bootstrap of Theorem \ref{thnpbootmle}. This requires us to establish a second-order expansion for $d_{012}(\bar{Q}_n^*,\bar{Q}_0)$ and $d_{n12}(\bar{Q}_n^{\#*},\bar{Q}_n)$, so that we can specify the second-order terms $P_0 R_{20,L_{12}}(\bar{Q}_n^*,\bar{Q}_0)$ and $P_n R_{2n,L_{12}}(\bar{Q}_n^{\#*},\bar{Q}_n)$. Subsequently, we have to bound the square of the $L^2(P_0)$-norm and $L^2(P_n)$-norm of the loss differences $L_{12}(\bar{Q}_n^*)-L_{12}(\bar{Q}_0)$ and $L_{12}(\bar{Q}_n^{\#*})-L_{12}(\bar{Q}_n)$ in terms of these second-order terms. Since this concerns a log-likelihood loss, we consider this problem in general. Let $L(p)=-\log p$ be the log-likelihood loss. Firstly, we consider the exact second-order Taylor expansion of $P_0L(p)-P_0L(p_0)$ at $p_0$: \[ P_0\log p-P_0\log p_0=\int p_0^{-1}(p-p_0) dP_0-P_0 R_{20,L}(P,P_0).\] Since the first-order linear term equals zero, it follows that $P_0R_{20,L}(P,P_0)=-P_0\log p/p_0$.
Thus, indeed $P_0 \{L(p)-L(p_0)\}^2$ can be bounded by $P_0 R_{20,L}(P,P_0)$, due to the known result that the Kullback-Leibler dissimilarity is equivalent to $\int (p-p_0)^2 d\mu$ if the densities are bounded away from 0. Similarly, the exact second-order Taylor expansion of $P_n L(p)-P_n L(p_n)$ at $p_n$ is given by: \[ P_n \log p-P_n \log p_n=P_n p_n^{-1}(p-p_n) -P_n R_{2n,L}(p,p_n) \] for an exact second-order remainder $P_n R_{2n,L}(p,p_n)$. By the exact second-order Taylor expansion of the function $\log x$ at $x=p_n(o)$, we obtain \[ \log p(o)-\log p_n(o)=p_n^{-1}(p-p_n)(o) -\xi(p_n(o),p(o))^{-2} (p-p_n)^2(o),\] where $\xi(p_n(o),p(o))$ is a value in between $p_n(o)$ and $p(o)$. Thus, $P_n R_{2n,L}(p,p_n)=P_n \xi(p_n,p)^{-2}(p-p_n)^2$. If $\min(p_n,p)>\delta$ for some $\delta>0$, then $P_n (p-p_n)^2\lesssim P_n R_{2n,L}(p,p_n)$. It follows trivially that $P_n (L(p_n^{\#})-L(p_n))^2\lesssim P_n (p_n^{\#}-p_n)^2$, and thus also that $P_n (L(p_n^{\#})-L(p_n))^2\lesssim P_n R_{2n,L}(p_n^{\#},p_n)$. This verifies the conditions on the loss function for the bootstrap Theorem \ref{thnpbootmle}. {\bf Behavior of HAL-MLE under sampling from $P_n$:} This shows that $d_{n12}(\bar{Q}_n^{\#},\bar{Q}_n)=O_P(n^{-1/2-\alpha(d)})$, and that this dissimilarity is equivalent with the squared $L^2(P_n)$-norm $P_n R_{2n,L_{12}}(\bar{Q}_n^{\#},\bar{Q}_n)$, which is equivalent with $\sum_a \int (\bar{Q}_{na}^{\#}-\bar{Q}_{na})^2 dP_n$. Theorem \ref{thnpbootmle} also shows that $d_{n2}(G_n^{\#},G_n)=O_P(n^{-1/2-\alpha(d)})$ and that this loss-based dissimilarity is equivalent with $P_n (G_n^{\#}-G_n)^2$. {\bf Preservation of rate of TMLE under sampling from $P_n$:} The proof in Appendix \ref{AppendixC} for $\epsilon_n^{\#}=O_P(d_{n12}^{1/2}(\bar{Q}_n^{\#},\bar{Q}_n))$ applies to our smooth submodel, so that indeed $d_{n1}(Q_n^{\#*},Q_n)$ converges at the same rate as $d_{n1}(Q_n^{\#},Q_n)$.
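The overall bootstrap procedure (resample, refit on the resample, and record $Z_n^{1,\#}=n^{1/2}(\Psi(Q_n^{\#*})-\Psi(Q_n^*))$) has the following generic shape. Here `fit_and_evaluate` is a hypothetical callable standing in for the full HAL-MLE plus TMLE pipeline; the sketch only illustrates the resampling loop, not the actual estimators.

```python
import numpy as np

def bootstrap_Z1(data, fit_and_evaluate, n_boot=200, seed=0):
    """Nonparametric bootstrap draws of Z_n^{1,#} = sqrt(n)(psi^{#*} - psi^*).
    `fit_and_evaluate(sample)` must return the (targeted) plug-in estimate
    psi for that sample; it is refit on each bootstrap resample."""
    rng = np.random.default_rng(seed)
    n = len(data)
    psi_star = fit_and_evaluate(data)          # estimate on the original sample
    draws = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)       # draw n indices with replacement
        draws[b] = np.sqrt(n) * (fit_and_evaluate(data[idx]) - psi_star)
    return psi_star, draws

# e.g. with the sample mean as a stand-in estimator:
psi, z1 = bootstrap_Z1(np.arange(10.0), lambda s: s.mean(), n_boot=50)
```

In the HAL-TMLE application, the refit inside the loop is cheap because it only searches over the basis functions with non-zero coefficients in the original fit, as described above.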
{\bf Consistency of nonparametric bootstrap for HAL-TMLE:} This verifies all conditions of Theorem \ref{thnpboothaltmle}, which establishes the asymptotic efficiency and asymptotic consistency of the nonparametric bootstrap. \begin{theorem} We have that $\Psi(Q_n^*)$ is asymptotically efficient, i.e. $n^{1/2}(\Psi(Q_n^*)-\Psi(Q_0))\Rightarrow_d N(0,\sigma^2_0)$, where $\sigma^2_0=P_0 \{D^*(P_0)\}^2$. In addition, conditional on $(P_n:n\geq 1)$, $Z_n^{1,\#}=n^{1/2}(\Psi(Q_n^{\#*})-\Psi(Q_n^*))\Rightarrow_d N(0,\sigma^2_0)$. This can also be applied to the setting in which $C^u$ is replaced by the cross-validation selector $C_n^u$ defined above. \end{theorem} {\bf Consistency of nonparametric bootstrap for exact expansion of HAL-TMLE/HAL-one-step:} Recall the exact second-order remainder of $\Psi(Q_n^*)-\Psi(Q_0)$ defined by $R_{20}(Q_n^*,G_n,Q_0,G_0)=(R_{20}^1-R_{20}^0)(Q_n^*,G_n,Q_0,G_0)$. Let $R_{2n}(Q_n^{\#*},G_n^{\#},Q_n^*,G_n)$ be the nonparametric bootstrap analogue. Thus, \[ R_{2n}^a(Q_n^{\#*},G_n^{\#},Q_n^*,G_n)=\int \frac{(g_n^{\#}-g_n)(a\mid w)}{g_n^{\#}(a\mid w)}(\bar{Q}_n^{\#*}-\bar{Q}_n^*)(a,w) dP_n(w).\] Let \[ Z_n^{2,\#}=n^{1/2}(P_n^{\#}-P_n)D^*(Q_n^{\#*},G_n^{\#})+n^{1/2}R_{2n}(Q_n^{\#*},G_n^{\#},Q_n^*,G_n).\] Consider also the conservative version \[ Z_n^{3,\#}=\mid n^{1/2}(P_n^{\#}-P_n)D^*(Q_n^{\#*},G_n^{\#})\mid + f(d_{n12}^{1/2}(\bar{Q}_n^{\#*},\bar{Q}_n^*),d_{n2}^{1/2}(G_n^{\#},G_n)),\] where the upper bound $f()$ for the remainder is defined in (\ref{r2upperboundexample1}). Application of Theorems \ref{thnpbootexactexp} and \ref{thupperbound} proves the asymptotic consistency of these two nonparametric bootstrap distributions. \begin{theorem} We have $Z_n^{2,\#}\Rightarrow_d N(0,\sigma^2_0)$ and $Z_n^{3,\#}\Rightarrow_d \mid N(0,\sigma^2_0)\mid$, conditional on $(P_n:n\geq 1)$. As a consequence, the $0.95$-confidence intervals for $\psi_0$ based on $Z_n^{2,\#}$ and $Z_n^{3,\#}$ have asymptotic coverage 0.95 of $\psi_0$.
This also applies to the setting in which $C^u$ is replaced by the cross-validation selector $C_n$. \end{theorem} Finally, we remark that our HAL-MLE is really indexed by the model bounds $(C_{12}^u,C_2^u,\delta)$ and these might all three be unknown to the user. So in that case, we recommend to select all three with the cross-validation selector $(C_{12n},C_{2n},\delta_n)$ and define the HAL-TMLE and the bootstrap of the HAL-TMLE at this fixed choice $(C_{12n},C_{2n},\delta_n)$. \subsection{Nonparametric estimation of the integral of the square of the density} {\bf Statistical model, target parameter, canonical gradient:} Let $O\in \hbox{${\rm I\kern-.2em R}$}^d$ be a multivariate random variable with probability distribution $P_0$ with support $[0,\tau]$. Let ${\cal M}$ be a nonparametric model dominated by Lebesgue measure $\mu$, where we assume that for each $P\in {\cal M}$ its density $p=dP/d\mu$ is bounded from below by $\delta>0$ and from above by $M<\infty$. In addition, we assume that all densities are cadlag functions and have sectional variation norm bounded by $C^u<\infty$. As shown in the remark in Section 2, we can reparametrize $p=c(f)\{\delta+(M-\delta)\mbox{expit}(f) \}$, where $f$ can be any cadlag function with sectional variation norm bounded from above by some finite constant implied by $C^u$ (while $C^l=0$), in which case our model is of the type (\ref{calFmodel}). The target parameter $\Psi:{\cal M}\rightarrow\hbox{${\rm I\kern-.2em R}$}$ is defined as $\Psi(P)=E_Pp(O)=\int p^2(o)d\mu(o)$. This target parameter is pathwise differentiable at $P$ with canonical gradient \[ D^*(P)(O)=2 (p(O)-\Psi(P)).\] {\bf Exact second-order remainder:} It implies the following exact second-order expansion: \[ \Psi(P)-\Psi(P_0)=(P-P_0)D^*(P)+R_{20}(P,P_0),\] where \[ R_{20}(P,P_0)\equiv -\int (p-p_0)^2 d\mu .\] {\bf Loss function:} As loss function for $p$ we could consider the log-likelihood loss $L(p)(O)=-\log p(O)$ with $d_0(p,p_0)=P_0\log p_0/p$.
We have $\parallel p^{1/2}-p_0^{1/2}\parallel_{P_0}^2\leq P_0 \log p_0/p $ so that \begin{eqnarray*} \mid R_{20}(P,P_0)\mid &=&\int (p-p_0)^2 d\mu \\ &\leq&\sup_x\frac{(p^{1/2}+p_0^{1/2})^{2}(x)}{p_0(x)} \int (p^{1/2}-p_0^{1/2})^2 dP_0\\ &\leq& 4M/\delta\, P_0\log p_0/p =4M/\delta\, d_0(p,p_0). \end{eqnarray*} Alternatively, we could consider the loss function \[ L(p)(O)=-2p(O)+\int p^2(o)d\mu(o) .\] Note that this is indeed a valid loss function with loss-based dissimilarity given by \begin{eqnarray*} d_0(p,p_0)&=&P_0 L(p)-P_0L(p_0)\\ &=&-2\int p(o)p_0(o)d\mu(o)+\int p^2d\mu+2\int p_0^2 d\mu-\int p^2_0 d\mu\\ &=& \int (p-p_0)^2 d\mu .\end{eqnarray*} {\bf Bounding the second-order remainder:} Thus, if we select this loss function, then we have \[ \mid R_{20}(P,P_0)\mid =d_0(p,p_0) .\] In terms of our general notation, we now have $f(x)=x^2$ for the upper bound on $R_{20}$, so that $\mid R_{20}(P,P_0)\mid =f(d_0^{1/2}(p,p_0))$. We will proceed with the latter loss function so that our bound on $R_{20}(P,P_0)$ is sharp. In addition, if we use this loss function, we do not need a lower bound $\delta $ for our densities, so that we can set $\delta=0$ in our definition of the model ${\cal M}$. The canonical gradient is indeed continuous in $P$ as stated in (\ref{contDstar}) and the bounds $M_1,M_2,M_3$ (\ref{sectionalvarbound}) are obviously finite and can be expressed in terms of $(C^u,M,\delta)$. This verifies the assumptions on our model as stated in Section 2. {\bf HAL-MLE and CV-HAL-MLE:} Let $p_n=\arg\min_{P\in {\cal M}}P_n L(p)$ be the MLE. Using our reparameterization, this can be computed as \[ f_n=\arg\min_{ f}P_nL(c(f)\{\delta+(M-\delta)\mbox{expit}f\}),\] where $f$ can be represented by our general representation (\ref{Frepresentation}), $f(o)=f(0)+\sum_{s\subset \{1,\ldots,d\}}\int_{(0_s,o_s]} df_s(u_s)$, and constrained to satisfy $\mid f(0)\mid+\sum_{s\subset\{1,\ldots,d\}}\int_{(0_s,\tau_s]} \mid df_s(u_s)\mid \leq C$ for a $C$ implied by $C^u$.
Let's denote this $f_n$ with $f_{n,C}$. Thus, for a given $C$, computation of $f_{n,C}$ can be done with a Lasso-type algorithm. Let $C_n=\arg\min_CE_{B_n}P_{n,B_n}^1L(\hat{p}_C(P_{n,B_n}^0))$ be the cross-validation selector of $C$, as defined in the previous example. If we set $C=C_n$, then we obtain the CV-HAL-MLE $f_n=f_{n,C_n}$. We have $d_0(p_n,p_0)=O_P(n^{-1/2-\alpha(d)})$. {\bf Universal least favorable submodel:} We now define the HAL-TMLE. Consider the universal least favorable submodel $\{p_{n,\epsilon}:\epsilon\}$ through the HAL-MLE $p_n$: for $\epsilon\geq 0$ \[ p_{n,\epsilon}=p_n\exp\left(\int_0^{\epsilon} D^*(p_{n,x}) dx\right).\] This submodel recursively defines $p_{n,\epsilon}$, where one starts calculating $p_{n,dx}$ for an infinitesimal $dx>0$ from $p_n$, and then $p_{n,2dx}$ from $p_{n,dx}$ and $p_n$, etc. This recursive definition generates $\{p_{n,\epsilon}:\epsilon \geq 0\}$. Similarly, one computes $p_{n,-dx}$ from $p_n$, and $p_{n,-2dx}$ from $p_n,p_{n,-dx}$, etc., where for $\epsilon<0$ we define $\int_0^{\epsilon}=-\int_{\epsilon}^0$. One can also define this universal least favorable submodel by recursively applying a local least favorable submodel: \[ p_{n,\epsilon+d\epsilon}=p_{n,\epsilon,d\epsilon}^{lfm},\] where $p_{x}^{lfm}$ is a local least favorable submodel through $p$ at parameter value $x$, so that $p_{n,\epsilon,d\epsilon}^{lfm}$ is the local least favorable submodel through $p_{n,\epsilon}$ at parameter value $d\epsilon$. A possible local least favorable submodel choice is $p_{x}^{lfm}=(1+x D^*(p)) p$ for $x$ in a small neighborhood around $0$. {\bf HAL-TMLE:} Let $\epsilon_n=\arg\min_{\epsilon}P_n L(p_{n,\epsilon})$ be the MLE, and let $p_n^*=p_{n,\epsilon_n}$ be the TMLE. The TMLE of $\Psi(P_0)$ is the plug-in estimator $\psi_n^*=\Psi(P_n^*)=\int p_n^{*2}d\mu$. It is easily verified that the universal least favorable submodel $p_{\epsilon}=p\exp(\int_0^{\epsilon}D^*(p_x) dx)$ is such that $\log p_{\epsilon}$ is twice differentiable in $\epsilon$.
Therefore we can carry out the general proof in Appendix \ref{AppendixA} establishing that $\epsilon_n=O_P(n^{-1/4-\alpha(d)/2})$. {\bf Efficiency of HAL-TMLE and CV-HAL-TMLE:} Application of Theorem \ref{thefftmle} shows that $\Psi(P_n^*)$ is asymptotically efficient, where one can either choose the HAL-MLE with fixed index $C$ implied by $C^u$ or one can set $C=C_n$ equal to the cross-validation selector defined above. {\bf Finite sample conservative confidence interval:} Let's first consider the exact finite sample conservative confidence interval presented in (\ref{consbound1}). For this we need bounds $M_3$ and $M_{1}$ on the sectional variation norm of $D^*(P)=2(p-\Psi(p))$ and $L(p)=-2p+\Psi(p)$, respectively. These bounds will thus be identical: $M_3=M_1$. We have that $M_1=\sup_{p\in {\cal M}} \parallel L(p)\parallel_v^*=2 C^u$. Inequality (\ref{consbound1}) tells us that $\mid n^{1/2}(\psi_n^1-\Psi(P_0))\mid $ is dominated by the distribution of $Z_n^+=4C^u\parallel n^{1/2}(\bar{P}_n-\bar{P}_0)\parallel_{\infty}$, where $\bar{P}(u)=P([u,\tau])=\int_{[u,\tau]} dP(s)$ is the probability that $O\in [u,\tau]$ under $P$. Estimation of the sampling distribution of $n^{1/2}(\bar{P}_n-\bar{P}_0)$ with the bootstrap distribution $n^{1/2}(\bar{P}_n^{\#}-\bar{P}_n)$ then results in $Z_n^{+,\#}$ and a corresponding finite sample conservative $0.95$-confidence interval, which can also be used for $\Psi(P_n^*)$. In the previous example, this confidence interval appeared to be much too conservative to be practically useful, but in this example, since our bound on $R_{20}(P,P_0)$ is sharp and the sectional variation norm bounds $M_1=M_3=2C^u$ are easily determined in terms of the sectional variation norm bound $C^u$ of the model, this appears to be an interesting finite sample conservative confidence interval. We propose to apply this confidence interval with $C^u=C_n$ and the corresponding bounds $M_{1n}=M_{3n}=2C_n$.
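For intuition, the recursive construction of the universal least favorable submodel via the local submodel $p_x^{lfm}=(1+xD^*(p))p$ is easy to implement on a grid. The sketch below (illustrative names and step size, not the actual HAL-TMLE implementation) uses that $\int D^*(p)\,p\,d\mu=2(\Psi(p)-\Psi(p)\int p\,d\mu)=0$ for a density $p$, so each local step preserves total mass:

```python
import numpy as np

def canonical_gradient(p, mesh):
    """D*(p)(o) = 2(p(o) - Psi(p)), with Psi(p) = int p^2 dmu (Riemann sum)."""
    psi = np.sum(p**2) * mesh
    return 2.0 * (p - psi)

def universal_lfm(p0_grid, mesh, eps, d_eps=1e-3):
    """Approximate p_{n,eps} by recursively applying the local least
    favorable submodel p <- (1 + d_eps * D*(p)) p."""
    p = p0_grid.copy()
    for _ in range(int(round(abs(eps) / d_eps))):
        p = (1.0 + np.sign(eps) * d_eps * canonical_gradient(p, mesh)) * p
    return p

# a non-uniform density on [0, 1], discretized on 1000 midpoint cells
m = 1000
mesh = 1.0 / m
x = (np.arange(m) + 0.5) / m
p0 = (1.0 + x) / 1.5
p_eps = universal_lfm(p0, mesh, eps=0.2)
mass = np.sum(p_eps) * mesh   # remains 1 up to rounding
```

In practice the TMLE would then select $\epsilon_n$ by minimizing the empirical risk $P_nL(p_{n,\epsilon})$ over a grid of $\epsilon$ values.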
{\bf Asymptotic validity of the nonparametric bootstrap for the HAL-MLE Theorem \ref{thnpbootmle}:} As remarked in the previous example, computation of the HAL-MLE $p_n^{\#}=\arg\min_{\parallel p\parallel_v^*\leq C,p\ll p_n}P_n^{\#}L(p)$ is much faster than the computation of $p_n=\arg\min_{\parallel p\parallel_v^*\leq C^u}P_nL(p)$, due to only having to minimize the empirical risk over the bootstrap sample over the linear combinations of indicator functions that had non-zero coefficients in $p_n$. The conditions for our asymptotic consistency of the nonparametric bootstrap of Theorem \ref{thnpbootmle} hold, as we show now. We have $P_0 L(p)-P_0L(p_0)=\int (p-p_0)^2 d\mu$ and thus $P_0 R_{2,L}(P,P_0)=\int (p-p_0)^2 d\mu$. It easily follows that $P_0(L(p)-L(p_0))^2$ can be bounded by $P_0 R_{2,L}(P,P_0)$. We now have to establish the second-order exact expansion for $d_n(P_n^{\#*},P_n^*)=P_n L(P_n^{\#*})-P_n L(P_n^*)$. We have \begin{eqnarray*} d_n(P_n^{\#*},P_n^*)&=&P_n\{ L(p_n^{\#*})-L(p_n^*)\}\\ &=&-2P_n(p_n^{\#*}-p_n^*)+\int p_n^{\#*2} d\mu-\int p_n^{*2} d\mu\\ &=&-2P_n(p_n^{\#*}-p_n^*)+\int (p_n^{\#*}-p_n^*)(p_n^{\#*}+p_n^*) d\mu \\ &=&-2P_n(p_n^{\#*}-p_n^*)+2\int p_n^*(p_n^{\#*}-p_n^*) d\mu +\int (p_n^{\#*}-p_n^*)^2 d\mu \\ &=&\int (p_n^{\#*}-p_n^*)^2 d\mu, \end{eqnarray*} where the last equality uses $P_n(p_n^{\#*}-p_n^*)=\int p_n^*(p_n^{\#*}-p_n^*) d\mu$. Thus, $P_n R_{2,L}(P_n^{\#*},P_n^*)=\int (p_n^{\#*}-p_n^*)^2 d\mu$. Clearly, $P_n (L(P_n^{\#*})-L(P_n^*))^2$ can be bounded by $P_n R_{2,L}(P_n^{\#*},P_n^*)$. Application of Theorem \ref{thnpbootmle} now shows that $\int (p_n^{\#}-p_n)^2 d\mu=O_P(n^{-1/2-\alpha(d)})$. {\bf Preservation of rate for HAL-TMLE under sampling from $P_n$:} We can carry out the general proof in Appendix \ref{AppendixC} establishing that $\epsilon_n^{\#}=O_P(n^{-1/4-\alpha(d)/2})$. {\bf Asymptotic consistency of the bootstrap for the HAL-TMLE:} This verifies all conditions of Theorem \ref{thnpboothaltmle}, which establishes the asymptotic efficiency and asymptotic consistency of the nonparametric bootstrap.
\begin{theorem} Consider the model ${\cal M}$ defined by the upper and lower bounds $M<\infty$ and $\delta\geq 0$ on the densities on the support $[0,\tau]$, and the assumption that the sectional variation norm of the densities over $[0,\tau]$ is bounded by $C^u<\infty$. We have that $\Psi(P_n^*)$ is asymptotically efficient, i.e. $n^{1/2}(\Psi(P_n^*)-\Psi(P_0))\Rightarrow_d N(0,\sigma^2_0)$, where $\sigma^2_0=P_0 \{D^*(P_0)\}^2$. In addition, conditional on $(P_n:n\geq 1)$, $Z_n^{1,\#}=n^{1/2}(\Psi(P_n^{\#*})-\Psi(P_n^*))\Rightarrow_d N(0,\sigma^2_0)$. This theorem can also be applied to the setting in which $C^u=C_n$. \end{theorem} {\bf Asymptotic consistency of the bootstrap for the exact second-order expansion of the HAL-TMLE:} We have $n^{1/2}(\Psi(P_n^*)-\Psi(P_0))=n^{1/2}(P_n-P_0)D^*(P_n^*)-n^{1/2}\int (p_n^*-p_0)^2 d\mu$. Let $Z_n^{2,\#}$ be the nonparametric bootstrap estimator of this exact second-order expansion: \[ Z_n^{2,\#}=n^{1/2}(P_n^{\#}-P_n)D^*(P_n^{\#*})-n^{1/2}\int (p_n^{\#*}-p_n^*)^2 d\mu.\] In addition, consider the upper bound: \[ \mid n^{1/2}(\Psi(P_n^*)-\Psi(P_0))\mid\leq \mid n^{1/2}(P_n-P_0)D^*(P_n^*)\mid +n^{1/2}\int (p_n^*-p_0)^2 d\mu .\] Let $Z_n^{3,\#}$ be the nonparametric bootstrap estimator of this conservative sampling distribution: \[ Z_n^{3,\#}\equiv \mid n^{1/2}(P_n^{\#}-P_n)D^*(P_n^{\#*})\mid +n^{1/2}\int (p_n^{\#*}-p_n^*)^2 d\mu .\] Application of Theorems \ref{thnpbootexactexp} and \ref{thupperbound} proves the asymptotic consistency of these two nonparametric bootstrap distributions. \begin{theorem} We have that, conditional on $(P_n:n\geq 1)$, $Z_n^{2,\#}\Rightarrow_d N(0,\sigma^2_0)$ and $Z_n^{3,\#}\Rightarrow_d \mid N(0,\sigma^2_0)\mid$ as $n\rightarrow\infty$. As a consequence, the $0.95$-confidence intervals for $\psi_0$ based on $Z_n^{2,\#}$ and $Z_n^{3,\#}$ have asymptotic coverage 0.95 of $\psi_0$. This theorem can also be applied to the setting in which $C^u=C_n$.
\end{theorem} Finally, we remark that our HAL-MLE is really indexed by the hypothesized model bounds $(C^u,\delta,M)$, all three of which might be unknown to the user. In that case, we recommend selecting all three with the cross-validation selector $(C_n,\delta_n,M_n)$ and defining the HAL-TMLE and the bootstrap of the HAL-TMLE at this fixed choice $(C_n,\delta_n,M_n)$. \section{Discussion}\label{sectdisc} In parametric models and, more generally, in models small enough that the MLE is still well behaved, one can use the nonparametric bootstrap to estimate the sampling distribution of the MLE. It is generally understood that in these small models the nonparametric bootstrap outperforms estimating the sampling distribution with a normal distribution (e.g., with variance estimated as the sample variance of the influence curve of the MLE), by picking up the higher order behavior of the MLE, {\em if asymptotics has not set in yet}. In such small models, reasonable sample sizes already achieve the normal approximation, in which case Wald-type confidence intervals will perform well. Generally speaking, the nonparametric bootstrap is a valid method when the estimator is a compactly differentiable function of the empirical measure, such as the Kaplan-Meier estimator (i.e., one can apply the functional delta-method to analyze such estimators) \citep{Gill89,vanderVaart&Wellner96}. These are estimators that essentially do not use smoothing of any sort. On the other hand, efficient estimation of a pathwise differentiable target parameter in large realistic models generally requires estimation of the data density, and thereby machine learning such as super-learning to estimate the relevant parts of the data distribution. Therefore, efficient one-step estimators or TMLEs are not compactly differentiable functions of the data distribution. 
For this reason, we moved away from using the nonparametric bootstrap to estimate the sampling distribution of such estimators, since it is a generally inconsistent method (e.g., a cross-validation selector behaves very differently under sampling from the empirical distribution than under sampling from the true data distribution). Instead we estimated the normal limit distribution by estimating the variance of the influence curve of the estimator. Such an influence curve based method is asymptotically consistent and therefore results in asymptotically valid $0.95$-confidence intervals. However, in such large models the nuisance parameter estimators will converge at slow rates (like $n^{-1/4}$ or slower) with large constants depending on the size of the model, so that at practical sample sizes the exact second-order remainder could easily be larger than the leading empirical process term with its normal limit distribution. So one pays a significant price for using the computationally attractive influence curve based confidence intervals, by generally reporting overly optimistic confidence intervals. That is, for small models the bootstrap is available but not that important, since estimators will quickly achieve asymptotics, while in large models it appears not to be available even though it is crucial, since estimators generally achieve asymptotics only at very large sample sizes. One might argue that one should use a smooth bootstrap instead, by sampling from an estimator of the density of the data distribution. General results show that such a smooth bootstrap method will be asymptotically valid as long as the density estimator is consistent. This is like carrying out a simulation study for the estimator in question using an estimator of the true data distribution as sampling distribution. 
However, estimation of the actual density of the data distribution is itself a very hard problem, with bias heavily affected by the curse of dimensionality, and, in addition, it can be immensely burdensome to construct such a density estimator and sample from it when the data is complex and high dimensional. As demonstrated in this article, the HAL-MLE provided a solution to this bottleneck. The HAL-MLE($C^u$) of the nuisance parameter is an actual MLE minimizing the empirical risk over a highly nonparametric parameter space (depending on the model ${\cal M}$) in which it is assumed that the sectional variation norm of the nuisance parameter is bounded by universal constant $C^u$. This MLE is still well behaved by being consistent at a rate that is in the worst case still faster than $n^{-1/4}$. However, this MLE is not an interior MLE, but will be on the edge of its parameter space: the MLE will itself have sectional variation norm equal to the maximal allowed value $C^u$. Nonetheless, our analysis shows that it is still a smooth enough function of the data (while not being compactly differentiable at all) that it is equally well behaved under sampling from the empirical distribution. As a consequence of this robust behavior of the HAL-MLE, for models in which the nuisance parameters of interest are cadlag functions with a universally bounded sectional variation norm (beyond possible other assumptions), we presented asymptotically consistent estimators of the sampling distribution of the HAL-TMLE and HAL-one step estimator of the target parameter of interest using the nonparametric bootstrap. Our proposals range from a bootstrap estimator of the HAL-TMLE itself, a bootstrap estimator of the exact second-order expansion of the HAL-TMLE, and two bootstrap estimators of conservative upper bounds on the exact second-order expansion of the HAL-TMLE. 
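To make the mechanics concrete, here is a minimal sketch (not the paper's implementation) of turning $B$ bootstrap draws of the signed statistic $Z_n^{2,\#}$ or the conservative statistic $Z_n^{3,\#}$ into a $0.95$-confidence interval. The draws below are simulated $N(0,\sigma_0^2)$ stand-ins; in practice each draw requires refitting the HAL-MLE/TMLE on a bootstrap sample, and $\psi_n$, $\sigma_0$, $n$, $B$ are hypothetical numbers:

```python
import numpy as np

# Stand-in bootstrap draws: in reality each draw comes from refitting the
# HAL-MLE/TMLE on a nonparametric bootstrap sample. All numbers are assumptions.
rng = np.random.default_rng(0)
n, B, sigma0 = 1000, 2000, 2.0
psi_n = 1.5                                    # hypothetical HAL-TMLE estimate
z2 = rng.normal(0.0, sigma0, size=B)           # stand-in draws of Z_n^{2,#} (signed)
z3 = np.abs(z2)                                # stand-in draws of Z_n^{3,#} (nonnegative)

# Signed statistic: invert the 2.5% and 97.5% bootstrap quantiles.
lo2 = psi_n - np.quantile(z2, 0.975) / np.sqrt(n)
hi2 = psi_n - np.quantile(z2, 0.025) / np.sqrt(n)

# Conservative statistic: symmetric interval from the 95% quantile of Z_n^{3,#}.
half3 = np.quantile(z3, 0.95) / np.sqrt(n)
ci3 = (psi_n - half3, psi_n + half3)

print((round(lo2, 3), round(hi2, 3)), (round(ci3[0], 3), round(ci3[1], 3)))
```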
In addition, we presented a highly conservative finite sample sampling distribution based on applying general integration by parts formulas to the leading empirical process term and a conservative bound on the exact second-order remainder. We also provided slight variations of these proposals, corresponding with the HAL-one step estimator. Our estimators of the sampling distribution are highly sensitive to the curse of dimensionality, just like the sampling distribution of the HAL-TMLE itself: specifically, the HAL-MLE on a bootstrap sample will converge just as slowly to its truth as under sampling from the true distribution. Therefore, in high dimensional estimation problems, we expect highly significant gains in valid inference relative to Wald-type confidence intervals that are purely based on the normal limit distribution of the HAL-TMLE. In general, the user will typically not know how to select the upper bound $C^u$ on the sectional variation norm of the nuisance parameters (except if the nuisance parameters are cumulative distribution functions). Therefore, we recommend selecting this bound with cross-validation, just as we use cross-validation to select the sectional variation norm bound in the HAL-MLE. Due to the oracle inequality for the cross-validation selector $C_n$ (which only relies on a bound on the supremum norm of the loss function), the data adaptively selected upper bound will be selected larger than the true sectional variation norm $C_0$ of the nuisance parameters $(Q_0,G_0)$ as sample size increases. Therefore, our bootstrap estimators will still be guaranteed to be consistent for its normal limit distribution while incorporating its higher order behavior. 
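A minimal sketch of such a cross-validation selector $C_n$ follows, using a toy one-dimensional HAL-style regression: least squares over linear combinations of indicator basis functions $1\{x \geq \mathrm{knot}\}$, with the $L_1$ norm of the coefficients (a stand-in for the sectional variation norm) constrained to be at most $C$. All tuning choices (knots, grid of bounds, iteration count) are illustrative assumptions:

```python
import numpy as np

# Toy data: a step function with true variation norm 1.5, plus noise.
rng = np.random.default_rng(1)
n = 300
x = rng.uniform(0.0, 1.0, n)
f0 = (x >= 0.3).astype(float) + 0.5 * (x >= 0.7)
y = f0 + rng.normal(0.0, 0.1, n)

knots = np.quantile(x, np.linspace(0.0, 1.0, 40))
def design(xs):                                  # indicator basis 1{x >= knot}
    return (xs[:, None] >= knots[None, :]).astype(float)

def project_l1(v, C):                            # Euclidean projection onto the L1 ball
    if np.abs(v).sum() <= C:
        return v
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * (np.arange(len(u)) + 1) > css - C)[0][-1]
    theta = (css[rho] - C) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def fit(X, yv, C, iters=400):                    # projected gradient descent
    beta = np.zeros(X.shape[1])
    lr = 1.0 / (np.linalg.norm(X, 2) ** 2 / X.shape[0])
    for _ in range(iters):
        beta = project_l1(beta - lr * X.T @ (X @ beta - yv) / X.shape[0], C)
    return beta

# 5-fold cross-validation over a grid of candidate bounds C.
C_grid = [0.25, 0.5, 1.0, 1.5, 2.0, 4.0]
K = 5
folds = np.arange(n) % K
cv_risk = []
for C in C_grid:
    risks = []
    for k in range(K):
        tr, va = folds != k, folds == k
        beta = fit(design(x[tr]), y[tr], C)
        risks.append(np.mean((y[va] - design(x[va]) @ beta) ** 2))
    cv_risk.append(np.mean(risks))

C_n = C_grid[int(np.argmin(cv_risk))]            # cross-validation selector
print(C_n, np.round(cv_risk, 4))
```

Since the true function has variation norm $1.5$, bounds far below it underfit badly, so the selector settles near or above the truth, consistent with the oracle-inequality behavior described above.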
For small sample sizes, one would most likely select a bound smaller than the sectional variation norm of the true nuisance parameter (optimally trading off bias and variance of the HAL-MLE), and as sample size increases it will get larger and larger until at some large enough sample size it plateaus, having reached the sectional variation norm of the true nuisance parameter. The advantage of this data adaptive choice of the bound is that our resulting bootstrap inference will have adapted to the true underlying sectional variation norm once it has reached that plateau. The disadvantage is that for small sample sizes the selected model (implied by the selected bound) might be smaller than a model containing the true data distribution, so that our bootstrap methods might still be optimistic for such small sample sizes. Nonetheless, the bootstrap will evaluate the finite sample sampling distribution of the HAL-TMLE (relative to its truth under $P_n$ satisfying this same bound) in a correctly specified, just too small, model. In particular, it still has the same first order behavior as the sampling distribution of the actual HAL-TMLE (using cross-validation to select the bound), but it may underestimate its higher order behavior. In these settings there is still use for our conservative sampling distributions, whose conservative nature might outweigh the potential underestimation of uncertainty due to selecting a bound smaller than the true sectional variation norm of the nuisance parameter. Simulations will likely shed light on this. The sectional variation norm plays a fundamental role in this work. The sectional variation norm of a function can be interpreted as a measure of complexity or smoothness of the function: it represents the sum of the absolute values of the coefficients in our integral representation (\ref{Frepresentation}) of the function as an infinite linear combination of indicator basis functions. 
The sectional variation norm of the true nuisance parameter, such as a regression function, can be viewed as a general measure of the degree of sparsity. For example, a regression function of $d$ variables that is a sum of functions of at most three variables has a sectional variation norm that behaves as $d^3$ instead of the worst case behavior $2^d$. This definition of the degree of sparsity (i.e., the true function has a certain sectional variation norm) does not depend on a choice of a main term regression model, as in the typical Lasso literature (far from a saturated model). The HAL-MLE and HAL-TMLE using cross-validation to select the sectional variation norm bounds will adapt to this underlying sparsity, and so will our inference for the target parameter using this selected bound as fixed in the bootstrap of the corresponding HAL-TMLE. This demonstrates the enormous importance of this measure of sparsity for the behavior of the cross-validated HAL-TMLE (and HAL-MLE). Presumably, by selecting another basis and corresponding function representation, one could also define sparsity as the sum of the absolute values of the coefficients of these basis functions in its representation. Possible advantages of the indicator basis and its representation (\ref{Frepresentation}) are that it allows approximation of discontinuous functions; that it is easy to determine the subset of indicator basis functions that are relevant for the given sample, making the implementation of the HAL-MLE doable; and that the HAL-MLE has good convergence properties and is highly robust (as shown by our bootstrap results), which might not be available for many other basis choices. The latter appears to be due to the feature that the collection of indicator basis functions forms a Donsker class, while many other choices of basis functions cannot be embedded in a Donsker class (e.g., Fourier series). 
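The $d^3$-versus-$2^d$ remark can be made concrete by counting sections: if only subsets of at most three of the $d$ variables carry variation, the number of contributing sections is $\sum_{k\leq 3}\binom{d}{k}$ rather than all $2^d-1$ nonempty subsets:

```python
from math import comb

# Number of sections (nonempty variable subsets) contributing variation when
# the true function is a sum of functions of at most 3 of the d variables,
# versus the worst case of all 2^d - 1 nonempty subsets.
for d in (5, 10, 20):
    sections_le3 = sum(comb(d, k) for k in range(1, 4))  # grows like d^3 / 6
    all_sections = 2 ** d - 1                            # exponential worst case
    print(d, sections_le3, all_sections)
```

Already at $d=20$ the polynomial count (1350) is three orders of magnitude below the exponential one ($2^{20}-1$).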
Therefore, we wonder if the indicator basis and its function representation (\ref{Frepresentation}) is a particularly powerful (and possibly unique) choice for defining a measure of complexity of a true parameter, bounding the model accordingly, defining an MLE of the nuisance parameters for such a model, selecting the bound with cross-validation, and using the nonparametric bootstrap to estimate the sampling distribution of its corresponding TMLE treating the selected bound as fixed, for the sake of inference as carried out in this article. This article focused on a HAL-TMLE that represents the statistical target parameter $\Psi(P)$ as a function $\Psi(Q_1(P),\ldots,Q_{K_1}(P))$ of variation independent nuisance parameters $(Q_1,\ldots,Q_{K_1})$. In some examples it has important advantages to represent $\Psi(P)$ in terms of recursively defined nuisance parameters. For example, the longitudinal one-step TMLE of causal effects of multiple time point interventions in \citep{Gruber&vanderLaan12,Petersen&Schwab&vanderLaan13} relies on a sequential regression representation of the target parameter \citep{Bang&Robins05}. In this case, the next regression is defined as the regression of the previous regression on a shrinking history, across a number of regressions, one for each time point at which an intervention takes place. By fitting each of these sequential regressions with an HAL-regression, we obtain the analogue of the HAL-TMLE for this sequential regression type TMLE. Our convergence results for the HAL-MLE and the bootstrapped HAL-MLE can be applied to these HAL-MLEs of each regression, in which case the outcome is the HAL-MLE fit of the previous regression. Some additional work will be needed to deal with the dependence on the previous regression to analyze this type of sequential HAL-TMLE, but we conjecture that the nonparametric bootstrap will be valid for this type of non-variation independent HAL-TMLE as well. 
\subsection*{Acknowledgement.} This research is funded by NIH-grant 5R01AI074345-07.
\section{Introduction}\label{sec:radar.intro} Planetary radar observations have been used to probe the surfaces of all of the planets with solid surfaces and many smaller bodies in the solar system \citep{o93,o03}, delivering information on their spins, orbital states, and surface and subsurface electrical properties and textures. Notable findings include characterizing the distribution of water at the south pole of the Moon \citep{1997Sci...276.1527S,2003Natur.426..137C}, the first indications of water ice in the permanently shadowed regions at the poles of Mercury \citep{1992Sci...258..635S,1994Natur.369..213H}, polar ice and anomalous surface features on Mars \citep{1991Sci...253.1508M}, establishing the icy nature of the Jovian satellites \citep{1978Icar...34..268O}, and the initial characterizations of Titan's surface \citep{1990Sci...248..975M,2003Sci...302..431C}. In multiple cases, the ground-based radar observations have served as the foundation for a subsequent space-based mission. Radar observations are currently conducted in the S~band ($\approx 2.3$~GHz, Arecibo Observatory) and~X~band ($\approx 8.5$~GHz, the Deep Space Network's Goldstone Solar System Radar [GSSR]), and future radar observations may also be conducted in the Ka~band ($\sim 30$~GHz). All of the planetary radar bands could be within the frequency coverage of the next-generation Very Large Array ({ngVLA}). As we discuss in more detail below (\S\ref{sec:radar.sensitivity}), the ngVLA need not be equipped with a transmitter to provide a powerful enhancement to planetary radar capabilities. Indeed, many of the results summarized previously involved \emph{bistatic} observations in which the radar transmissions originated from one antenna and were received by a separate antenna. \cite{bcdg04} previously considered the use of the Square Kilometre Array (SKA) as a receiver for a bistatic system. 
Since the time of their paper, there have been a number of developments, including multiple radar instruments on Mars orbiters, the \textit{Cassini} radar instrument's observations of Titan, and the MESSENGER studies that confirmed earlier radar indications of polar ice at Mercury. This consideration of the ngVLA capabilities is similar to the earlier SKA consideration, but takes many of these subsequent spacecraft-based radar results into account. We begin by motivating the scientific measurements that could be obtained from various target bodies by bistatic radar (\S\ref{sec:radar.targets}), then turn to the specific benefit of the ngVLA in the context of the radar equation (\S\ref{sec:radar.sensitivity}), and conclude with a discussion on radar imaging (\S\ref{sec:radar.imaging}). \section{Target Bodies}\label{sec:radar.targets} In this section, we review target bodies and the science motivations for which future bistatic planetary radar observations could be relevant. \subsection{Venus}\label{sec:radar.venus} Venus is Earth's closest analog in the solar system in terms of its bulk properties, yet Venus and Earth have clearly had different evolutionary paths. There are potentially billions of Venus analogs in the Galaxy, and characterizing and understanding the differences between Venus and Earth has been given additional impetus for understanding the habitability of terrestrial-mass planets. Venus remains enigmatic on a variety of fundamental levels: The size of its core is unknown; whether the core is solid or liquid is uncertain; its atmospheric superrotation, 60$\times$ faster than the solid body, is not understood; and the atmosphere exhibits distinctive planetary-scale features that are stationary with respect to the solid body. High-precision measurements of the spin state of Venus with radar have the potential of providing key advances in all of these areas. 
First, a measurement of the spin precession rate ($\approx 2^{\prime\prime}\,\mathrm{yr}^{-1}$) will yield a direct measurement of the polar moment of inertia, which is unknown. The moment of inertia provides an integral constraint on the distribution of mass in a planetary interior. Apart from bulk density, it is arguably the most important quantity needed to determine reliable models of the interior structure of Venus, including the size of its core. Second, a time history of length of day (LOD) variations at the 10~ppm level will identify the geophysical forcings responsible for spin variations, which are primarily related to transfer of angular momentum between the atmosphere and the solid planet. They will provide a crucial input to general circulation models and the key to elucidate poorly understood phenomena such as superrotation and stationary planetary-scale structures in the atmosphere. Planetary radar provides a powerful tool for monitoring planetary spin states via observations of the ``speckle displacement effect'' or \emph{radar speckle tracking} \citep{mpjsh07}. Radar echoes from solid bodies exhibit spatial irregularities in the wavefront caused by the constructive and destructive interference of waves scattered by the irregular surface. The corrugations in the wavefront, i.e., speckles, are tied to the rotation of the target body. When the trajectory of the wavefront corrugations is parallel to a roughly east-west antenna baseline, echoes received at two receiving stations display a high degree of correlation. The time of day and value of the time delay at the correlation peak are directly related to the orientation and magnitude of the spin vector of the body. For typical solar system observations, the speckle size ($\sim R\lambda/D$, for a target at range~$R$, observing wavelength~$\lambda$, and diameter~$D$) is on the order of~1~km and the high-correlation condition lasts for approximately 30~s. 
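A back-of-the-envelope evaluation of the speckle scale $\sim R\lambda/D$ at X band illustrates the quoted sub-kilometer to kilometer scale; the ranges and diameters below are approximate public values chosen for illustration, not observation parameters:

```python
# Speckle scale ~ R * lambda / D for X-band (~8.56 GHz) speckle tracking.
# Target ranges/diameters are rough, illustrative values near favorable geometry.
C_LIGHT = 2.998e8                      # speed of light, m/s
AU = 1.496e11                          # astronomical unit, m
lam = C_LIGHT / 8.56e9                 # X-band wavelength, ~3.5 cm

targets = {                            # name: (range in au, diameter in m)
    "Mercury": (0.61, 4.879e6),        # near inferior conjunction
    "Venus": (0.28, 1.2104e7),         # near inferior conjunction
}

results = {}
for name, (r_au, diam) in targets.items():
    results[name] = (r_au * AU) * lam / diam   # speckle scale in meters
    print(name, round(results[name]), "m")
```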
The current approach using Goldstone and the Green Bank Telescope (GBT) yields instantaneous spin rate measurements at the 10~ppm level with X-band transmission from the GSSR and reception at Goldstone and the \hbox{GBT}. For example, with observations obtained between~2002 and~2012, the orientation of Mercury's spin axis has been measured with~$5^{\prime\prime}$ precision, and measurements of the amplitude of longitude librations have revealed Mercury has a molten core \citep{mpjsh07,mps+12}. The accuracy of these measurements has been validated at the 1\% level by independent measurements obtained by the MESSENGER spacecraft during a four-year duration \citep{mhmpp18}. On-going observations include Venus, Europa, and Ganymede. With a single pair of antennas, it is possible to obtain one measurement per day when the stringent geometry and signal-to-noise (S/N) requirements are satisfied. However, measurements accumulate at a slow rate because each measurement requires simultaneous scheduling on two large radio antennas, successful transmission during the appropriate 30-second window, and successful reception at both antennas during the relevant 30-second windows. In order to fully constrain the spin axis orientation, it is imperative to secure observations at a variety of baseline orientations, which typically takes several years. An instrument such as the ngVLA would open up the possibility of securing up to 168 independent measurements in a single 20-minute session with the antennas located in the plains of San Agustin. At conjunction, the individual antenna S/N ratio and the correlation S/N ratio would exceed 100. Although the range of baseline orientations between array elements and Goldstone would remain small at any given observation epoch, the number of independent estimates of the correlation properties would improve the quality of the spin state determination by a factor of $\sqrt{N}$. 
Because the measurements are instantaneous, LOD variations that occur on 30-minute timescales would be detectable, which would place strong constraints on the mechanisms of angular momentum transfer. Consequently, the ngVLA would enable (1)~improved determination of the spin axis precession and therefore moment of inertia and core size; and (2)~improved quantification of the amplitude of LOD variations on daily, seasonal, and secular timescales, providing strong constraints on the dynamics of the atmosphere and its interactions with the solid planet, and exploring a regime that may be common on exoplanets. \subsection{Asteroids}\label{sec:radar.asteroids} Radar observations of asteroids provide information on their sizes, shapes, spin states, surface properties, masses, bulk densities, orbits, and the presence of satellites (Figure~\ref{fig:radar.2017bq6}). Recent improvements in transmitter capabilities have resulted in obtaining meter-scale spatial resolutions on various near-Earth asteroids, resolutions comparable to those obtained by spacecraft (either for fly-bys or orbiting). Thus, radar observations complement the fewer, but often more comprehensive, spacecraft measurements. \begin{figure}[tbh] \centering \includegraphics[width=0.95\textwidth]{2017BQ6.Feb6.small.collage.eps} \vspace*{-1ex} \caption{Radar observations of the near-Earth asteroid 2017~BQ6. Its rotation is apparent, as are the sharp, angular sides. A bright spot particularly apparent in the lower, middle panel may be a few-meter-scale boulder on the surface. The sharpness of this asteroid's structure is currently unexplained. The time delay (range) increases from top to bottom, and Doppler frequency increases from left to right. (I.e., the top of the figure is closest to Earth, and the image can be considered to be a ``top-down'' view.) 
The color scale shows the echo power strength in units of standard deviation.} \label{fig:radar.2017bq6} \end{figure} There have been a series of comprehensive reviews on radar observations of both near-Earth and Main Belt asteroids \citep{mor+99,ohb+02,o03,bcdg04,bbgtm15,nbmbt16}. We do not repeat that material here, but focus on specific aspects relative to the \hbox{ngVLA}. The motivation for radar observations of asteroids is three-fold. First, asteroids represent primitive remnants of the early solar system, and their properties and orbits provide constraints on the formation and evolution of the solar system. Second, they represent targets for spacecraft \citep[e.g.,][]{2014Icar..235....5C}, for which orbital information and the presence of satellites are essential for mission planning and, for sample return missions, characterization of the surfaces is valuable. Finally, precise knowledge of their orbits is essential to assess the extent to which they might represent impact hazards to the Earth \citep{NEO_hazard}, a topic that has increased in visibility over the past decade, and for which a ``National Near-Earth Object Preparedness Strategy and Action Plan'' has been issued \citep{DAMIEN2018}. In particular, the orbits determined from radar observations are sufficiently precise that they can be used to assess whether a near-Earth asteroid presents any risk of colliding with the Earth over the next several decades to a century \citep[e.g.,][]{og04}. Specifically for near-Earth asteroids, bistatic radar observations can be valuable in two respects. First, for objects with close approaches to the Earth (short round-trip light travel times), it can be difficult or impossible to switch a radar facility from transmitting to receiving rapidly enough. Bistatic radar observations either simplify the observations or enable them for objects on extremely close approaches. 
Second, the increased sensitivity of the ngVLA would increase the range to which near-Earth asteroids could be targeted for radar observations, particularly for targets that are outside of the declination range of the Arecibo Observatory. Particularly from the perspective of planetary defense, obtaining orbits for as many near-Earth asteroids as possible, and especially those classified as ``potentially hazardous,'' is valuable, and the larger the accessible volume, the more asteroids that can be targeted. We return to this topic, in quantitative detail, in Section~\ref{sec:radar.sensitivity}. As quantitative estimates, we consider the improvement in range that the ngVLA might offer over the Green Bank Telescope, which is also used as the receiving element for bistatic radar. (See also \citealt*{nbmbt16}.) If a subset of the ngVLA can be used for bistatic radar reception such that a sensitivity of \textbf{three} times that of the current \hbox{GBT} is obtained, it would more than \textbf{double} the accessible volume (increase the range by approximately 30\%) for near-Earth asteroid observations; if the sensitivity is \textbf{five} times that of the current \hbox{GBT}, it would more than \textbf{triple} the accessible volume (increase the range by approximately 50\%). Not only could Goldstone-ngVLA bistatic observations rival those of Arecibo, they would provide access to a much larger fraction of the sky. Beyond the simple improvement in sensitivity offered by the ngVLA (\S\ref{sec:radar.sensitivity}), its antenna distribution offers the promise of improved shape modeling and spin state determinations via radar speckle tracking \citep{bkb+10}. Speckle tracking of asteroids operates in a fundamentally different regime than the case of Mercury or Venus \citep{mpjsh07}. 
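The quoted factors follow directly from the $R^{-4}$ dependence of the echo strength: a receiving-sensitivity gain $s$ extends the maximum detection range by $s^{1/4}$ and the accessible (range-cubed) volume by $s^{3/4}$:

```python
# SNR ∝ R^-4 at fixed transmitter power, so a sensitivity gain s gives a
# range factor s^(1/4) and a volume factor s^(3/4).
for s in (3.0, 5.0):
    range_factor = s ** 0.25           # s=3 -> ~1.32 (a ~30% range increase)
    volume_factor = s ** 0.75          # s=3 -> ~2.28 (more than double the volume)
    print(s, round(range_factor, 2), round(volume_factor, 2))
```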
The general inability to predict the speckle trajectory in the asteroid case requires an observing configuration in which the speckle size is larger than the antenna baseline, otherwise speckles observed at different stations would not correlate. As a result, the ratio of speckle size to antenna baseline, which determines the fractional precision of the estimates, is three orders of magnitude larger for asteroids than it is for Mercury or Venus. The VLA has a dense set of antennas, but few asteroids approach the Earth sufficiently close that the VLA can be used. For example, a 100~m object must be within about 10\% of the lunar distance for the resulting speckle pattern to be comparable in scale to the \hbox{VLA}. Conversely, the VLBA has much longer baselines, allowing use of the technique at larger distances, but it has few antennas, so the number of speckle measurements that could be made is small. With a relatively dense network of antennas and antenna separations of order 100~km, the ngVLA could be used for objects approaching to within one lunar distance ($10^{-3}$~au), for which the number of objects is higher. Finally, the dynamics of the Sun-Earth-Moon system allow for small asteroids to be captured into meta-stable geocentric orbits. Various predictions are that there should be a population of meter-scale ``temporarily-captured orbiters'' or ``mini-moons,'' and at least one such mini-moon, 2006~RH120, has been detected \citep{kkp+09,bjg+14,jbb+18}. The advent of future large-scale surveys, such as the Large Synoptic Survey Telescope (LSST), may result in several more being found. Due to their small size, such observations are extremely challenging, but feasible, as 2006~RH120 has been detected from Goldstone \citep{bbbgsl16}. Not only do mini-moons have small radar cross sections, they are relatively close ($\approx 3$~s round-trip light travel time). 
As noted above, with such short light travel times, bistatic observations would be required to avoid subjecting the transmitter to frequent power fluctuations. \subsection{Icy Satellites/Ocean Worlds}\label{sec:radar.oceanworlds} Ground-based radar observations provided some of the first clear evidence for the icy surfaces of the Galilean satellites and subsequent characterization \citep[e.g.,][]{1977Sci...196..650C, 1978Icar...34..268O,1992JGR....9718227O,2001Icar..151..167B}, demonstrating the capability to probe several meters into the surface. There have been radar observations of Saturnian satellites as well, though these are even more challenging due to Saturn's greater distance \citep{1990Sci...248..975M,2003Sci...302..431C,2007Icar..191..702B}. Subsequent spacecraft investigations have provided clear evidence that at least some of the moons of Jupiter (Europa, Ganymede, Callisto) and Saturn (Enceladus, Titan), and potentially moons of Uranus (Ariel), harbor sub-surface oceans \citep{2016JGRE..121.1378N}. As a consequence, the planned Europa Clipper mission would carry a radar designed to probe and constrain the thickness of the icy shell of Europa. However, spacecraft missions are infrequent (plausibly only two missions might fly to the outer solar system in a decade), and a radar instrument might not be part of the spacecraft's payload. By contrast, ground-based radar observations can potentially occur essentially annually, near an outer planet's opposition, when the distance to the icy satellite is minimized. The orbits of the Galilean satellites of Jupiter are affected by tidal interactions with Jupiter, due to their relatively small semi-major axes. 
Of particular interest are the tidal responses of Io and Europa: the tidal response of Io is related to the heat dissipation responsible for its active volcanism, and the tidal response of Europa is related to the depth of its sub-surface ocean. Tidal dissipation is parameterized by $k_2/Q$, where $k_2$ is the Love number, which is a measure of the amplitude of tidal response in the body, and~$Q$ is the quality factor or a measure of the viscous damping in the body. Both quantities are related to the properties of Jupiter's or the satellites' interiors, and the ratio~$k_2/Q$ quantifies how the bulge raised by the satellite, or on the satellite, leads or trails its ``precursor.'' The Juno mission will provide estimates of $k_2$, which can be combined with the radar ranging to estimate the ratio~$k_2/Q$. Moreover, radar ranging measurements have the advantage of being able to be carried out indefinitely, while the Juno mission is of a limited duration. (At the time of writing, the Juno prime science mission terminates in~2021 June.) \cite{lakh09} used astrometric data to suggest that orbits of Io, Europa, and Ganymede have shifted due to tidal acceleration by~55~km, $-125$~km, and~$-365$~km, respectively, over a period of~116~years. The highest precision astrometric measurements have a resolution of~75~km and originate from mutual occultations and eclipses. The Arecibo planetary radar can measure a line-of-sight distance (or range) to Io with~10~km precision and distances to Europa, Ganymede, and Callisto with~1.5~km precision. However, Jupiter is only observable from Arecibo six out of every 12~years because of the constraints of Arecibo's antenna pointing (declination range $-1^\circ$ to~$+38^\circ$). In~2015, GSSR-GBT bistatic radar demonstrated the capability to obtain ranging measurements of the Galilean moons. Both antennas are fully steerable and allow observations on a yearly basis. 
A range to Europa was measured with~75~km precision (Figure~\ref{fig:radar.europa}), comparable to the highest precision optical astrometry. However, ranging to Io was not possible in this bistatic configuration due to low echo strength. \begin{figure}[bth] \centering \includegraphics[width=0.95\textwidth]{Brozovic-Europa-ranging.eps} \vspace*{-1ex} \caption{Goldstone Solar System Radar-Green Bank Telescope delay-Doppler image of Europa. The time delay (range) increases from top to bottom, and Doppler frequency increases from left to right. (I.e., the top of the figure is closest to Earth, and the image can be considered to be a ``top-down'' view.) The range resolution is 500~$\mu$s or~75~km. The color scale shows the echo power strength in units of standard deviation. The scale has been saturated at~3 units in order to enhance the echo outline. With its higher sensitivity, the ngVLA would offer higher signal-to-noise ratios and higher ranging precision.} \label{fig:radar.europa} \end{figure} If the ngVLA achieves three times the sensitivity of the \hbox{GBT}, it would likely be able to achieve yearly ranging measurements of Europa with sub-10~km precision. Furthermore, it would be possible to measure a distance to Io with~100~km precision. If a five-fold sensitivity of the ngVLA materializes, ranging precisions of~15~km--30~km for Io could be obtained. These measurements will contribute to the maintenance of highly accurate ephemerides of the Galilean satellites that would lead to continued improvements in the constraints on tidal dissipation ($k_2/Q$) and that could enhance, and potentially enable, future missions to these bodies (e.g., monitor Io's volcanism, explore Europa's ice shell for biosignatures). \subsection{Comets and Interstellar Objects}\label{sec:radar.comets} Much like the case for asteroids, radar observations of comets can provide information on the size, shape, and spin state of comet nuclei. 
For instance, the Arecibo radar observed comet 103P/Hartley~2 shortly before NASA's EPOXI spacecraft encountered it in~2010 November. The radar observations determined that the comet nucleus has a bi-lobed shape, a result confirmed by the spacecraft \citep{hnhgt11}. Also, much like the case for asteroids, the radar observations of cometary nuclei complement the fewer, but more incisive spacecraft measurements---approximately five times as many comets have been detected by radar observations as have been visited by spacecraft. The recent recognition of the first interstellar object, 1I/2017 U1 `Oumuamua, suggested that such objects might have extremely low optical albedos and at least this first object appeared to have a large aspect ratio, potentially in excess of 5:1 \citep{mwm+17}. The number of identified interstellar objects may increase in the future as additional wide-field surveys occur, particularly if the survey strategies explicitly account for the potential trajectories of interstellar objects. Notably, \cite{jlr+17} predict that there may be as many as $10^4$ such objects within the orbit of Neptune at any given time. If an interstellar object did have a trajectory that took it sufficiently close to Earth to warrant radar observations, constraints on its properties would be invaluable. \section{The ngVLA and the Radar Equation}\label{sec:radar.sensitivity} The increased sensitivity of the ngVLA would expand the set of targets for traditional bistatic delay-Doppler planetary radar. 
The classic radar equation is that the received power~$P_R$ is \citep[e.g.,][]{bho+99} \begin{equation} P_R = \frac{P_T G_T G_R \lambda^2 \sigma}{(4\pi)^3 R^4}, \label{eqn:radar} \end{equation} where $P_T$ is the power of the transmitter; the gains of the transmitting and receiving antennas are $G_T$ and~$G_R$, respectively; $\lambda$ is the operational wavelength; $\sigma$ is a measure of the radar cross section of the target body; and the range (distance) to the target is $R$. More concisely, the signal-to-noise ratio of radar observations scales as $R^{-4}$. This $R^{-4}$ dependence can be understood as the product of two inverse square laws. The transmissions from the transmitting antenna to the target body suffer an $R^{-2}$ loss by the inverse square law. By Huygens' principle, the target body re-radiates, and these emissions suffer an additional $R^{-2}$ loss by the inverse square law. With a fixed radar transmitter power (and antenna gain), the signal-to-noise ratio can only be improved by increasing the gain (i.e., sensitivity) of the (bistatic) receiving element. For the \hbox{ngVLA} to participate in this kind of radar observation, it would have to have a ``phased array'' mode, in which voltages from the individual antennas are summed, after applying the appropriate time delays, so that the array appears as an effective single aperture. A phased array mode would also be valuable for observations of pulsars and for very long baseline interferometry (VLBI) imaging. \section{Radar Imaging}\label{sec:radar.imaging} Existing planetary radar observations with the VLA have been used to image the radar return and make plane-of-the-sky astrometry measurements. Such \emph{bistatic radar aperture synthesis} has produced spectacular images that convincingly demonstrated the presence of water ice at the poles of Mercury and Mars.
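The steep $R^{-4}$ dependence in Equation~(\ref{eqn:radar}) can be checked with a short numerical sketch; every parameter value below is an illustrative assumption chosen only to exercise the formula, not a description of any actual radar system.

```python
import math

def received_power(P_T, G_T, G_R, lam, sigma, R):
    """Classic radar equation: echo power falls off as R**-4
    (inverse-square loss out to the target, then inverse-square loss back)."""
    return P_T * G_T * G_R * lam**2 * sigma / ((4 * math.pi)**3 * R**4)

# Assumed values: 1 MW transmitter, unit gains, 3.5 cm wavelength,
# 1 km^2 radar cross section, roughly Jupiter's distance.
R_jup = 6.3e11  # meters (assumed)
P1 = received_power(1e6, 1.0, 1.0, 0.035, 1e6, R_jup)
P2 = received_power(1e6, 1.0, 1.0, 0.035, 1e6, 2 * R_jup)

# Doubling the range costs a factor of 2**4 = 16 in echo power.
assert abs(P1 / P2 - 16.0) < 1e-9
```

The factor-of-16 loss per doubling of range is what makes receiver gain, the only free parameter in a bistatic configuration with a fixed transmitter, so valuable.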
With a factor of \textbf{five} better angular resolution, the ngVLA could produce a correspondingly better linear resolution on the surface of target bodies. Alternatively, an improved angular resolution opens the possibility of producing resolved images of smaller objects or higher spatial resolution bistatic radar aperture synthesis on planets. For example, \cite{1994Icar..111..489D} imaged 4179 Toutatis with the VLA in its A configuration. The asteroid was at a distance of~0.063~au, and the VLA's angular resolution corresponded to a linear resolution of approximately 10~km on the asteroid. They found that the asteroid showed clearly distinct residual radar features, suggestive of a bi-lobed structure, but their estimate of the separation of these features was limited by the VLA's beam size. With the improved angular resolution of the \hbox{ngVLA}, features with scales of about~2~km would have been distinguishable. \bigskip {\small We thank L.~Benner, M.~Busch, and P.~Taylor for helpful comments. This work made use of NASA's Astrophysics Data System Abstract Service. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc.}
\section{Introduction} \label{sect1} Orthogonal frequency division multiplexing (OFDM) is a viable air interface for providing ubiquitous communication services and high spectral efficiency, due to its ability to combat frequency selective multipath fading and its flexibility in resource allocation. However, power-hungry circuitries and the limited energy supply in portable devices remain the bottlenecks in prolonging the lifetime of networks and guaranteeing quality of service. As a result, energy-efficient mobile communication has received considerable interest from both industry and academia \cite{JR:Mag_green}-\nocite{CN:static_power,JR:TCOM_harvesting}\cite{JR:TWC_large_antennas}. Specifically, a considerable number of technologies/methods such as energy harvesting and power optimization have been proposed in the literature for maximizing the energy efficiency (bit-per-Joule) of wireless communication systems. Energy harvesting is particularly appealing as it is envisioned to be a perpetual energy source which provides self-sustainability to systems. Traditionally, energy has been harvested from natural renewable energy sources such as solar, wind, and geothermal heat, thereby reducing substantially the reliance on the energy supply from conventional energy sources. On the other hand, background radio frequency (RF) electromagnetic (EM) waves from ambient transmitters are also an abundant source of energy for energy harvesting. Indeed, EM waves can not only serve as a vehicle for carrying information, but also for carrying energy (power) simultaneously \cite{CN:WIPT_fundamental}-\nocite{CN:Shannon_meets_tesla,CN:MIMO_WIPT}\cite{CN:WIP_receiver}. The utilization of this dual characteristic of EM waves leads to a paradigm shift for both receiver design and resource allocation algorithm design.
In \cite{CN:WIPT_fundamental} and \cite{CN:Shannon_meets_tesla}, the signal input distribution and the power allocation were used for achieving a trade-off between information and power transfer for different system settings, respectively. However, in \cite{CN:WIPT_fundamental} and \cite{CN:Shannon_meets_tesla} it was assumed that the receiver is able to decode information and extract power from the same received signal which is not yet possible in practice. As a compromise solution, the concept of a power splitting receiver was introduced in \cite{CN:MIMO_WIPT} and \cite{CN:WIP_receiver} for facilitating simultaneous energy harvesting and information decoding. The authors of \cite{CN:MIMO_WIPT} and \cite{CN:WIP_receiver} investigated the rate-energy regions for multiple antenna and single antenna narrowband systems with power splitting receivers, respectively. Nevertheless, the possibly high power consumption of both electronic circuitries and RF transmission was not taken into account in \cite{CN:WIPT_fundamental}-\nocite{CN:Shannon_meets_tesla,CN:MIMO_WIPT}\cite{CN:WIP_receiver} but may play an important role in designing energy efficient communication systems. In this paper, we address the above issues. To this end, we formulate the power allocation algorithm design for energy efficient communication in OFDM systems with concurrent wireless information and power transfer as an optimization problem. The resulting high-dimensional non-convex optimization problem is solved by using an iterative algorithm whose components include nonlinear fractional programming, dual decomposition, and a one-dimensional search. Simulation results illustrate an interesting trade-off between energy efficiency, system capacity, and wireless power transfer. 
\begin{figure*}\vspace*{-4mm} \centering \includegraphics[width=5in]{system_model.eps}\vspace*{-1mm} \caption{OFDM transceiver model for downlink wireless information and power transfer.} \label{fig:system_model}\vspace*{-4mm} \end{figure*} \section{System Model} \label{sect:OFDMA_AF_network_model} In this section, we present the adopted system model. \subsection{OFDM Channel Model} We consider an OFDM system which comprises one transmitter and one receiver. The receiver is able to decode information and harvest energy from noise and radio signals (desired signal and interference signal). All transceivers are equipped with a single antenna, cf. Figure \ref{fig:system_model}. The total bandwidth of the system is $\cal B$ Hertz and there are $n_F$ subcarriers. Each subcarrier has a bandwidth $W={\cal B}/n_F$ Hertz. We assume a frequency division duplexing (FDD) system and the downlink channel gains can be accurately obtained by feedback from the receiver. The channel impulse response is assumed to be time invariant (slow fading). The downlink received symbol at the receiver on subcarrier $i\in\{1,\,\ldots,\,n_F\}$ is given by \begin{eqnarray} Y_{i}=\sqrt{P_{i}l g}H_{i}X_{i}+I_i+Z_{i}^s +Z_{i}^a, \end{eqnarray} where $X_{i}$, $P_{i}$, and ${H}_{i}$ are the transmitted symbol, transmitted power, and the small-scale fading coefficient for the link from the transmitter to the receiver on subcarrier $i$, respectively. $l$ and $g$ represent the path loss and shadowing between the transmitter and receiver, respectively. $Z_{i}^s$ and $Z_{i}^a$ represent the signal processing and the antenna noises on subcarrier $i$, respectively. $Z_{i}^s$ and $Z_{i}^a$ are modeled as additive white Gaussian noise (AWGN) with zero mean and variances $\sigma_{z^s}^2$ and $\sigma_{z^a}^2$, respectively, cf. Figure \ref{fig:system_model}. 
$I_i$ is the received co-channel interference signal on subcarrier $i$ with zero mean and variance $\sigma_{I_i}^2$, which is emitted by an unintended transmitter in the same channel. \subsection{Hybrid Information and Energy Harvesting Receiver} \label{sect:receiver} In practice, the energy harvesting receiver model depends on the specific implementation. For instance, both electromagnetic induction and electromagnetic radiation are able to transfer wireless power and information \cite{CN:Shannon_meets_tesla,CN:WIP_receiver}. However, the associated hardware circuitries can vary significantly. Besides, most energy harvesting circuits suffer from the half-duplex constraint in energy harvesting. Specifically, the signal used for harvesting energy cannot be used for decoding of the modulated information \cite{CN:WIP_receiver}. In order to provide a general model for a receiver which can harvest energy and decode information, we do not assume a particular type of energy harvesting receiver. Instead, we follow a similar approach as in \cite{CN:WIP_receiver} and focus on a receiver which splits the received signal into two power streams carrying a proportion of $\rho$ and $1-\rho$ of the total received signal power before any active analog/digital signal processing is performed, cf. Figure \ref{fig:system_model}. Subsequently, the two streams carrying a fraction of $\rho$ and $1-\rho$ of the total received signal power are used for energy harvesting and decoding the information in the signal, respectively. In this paper, we assume a perfect passive power splitter unit which does not consume any power nor introduce any power loss or noise. Besides, we assume that the receiver is equipped with a battery with finite capacity for storing the harvested energy. In other words, there is a finite maximum amount of power which can be harvested by the receiver.
We note that in practice, the receiver may be powered by more than one energy source and the harvested energy can be used as a supplement for supporting the energy consumption\footnote{In this paper, we use a normalized energy unit, i.e., Joule-per-second. Thus, the terms ``power" and ``energy" are interchangeable in this context. } of the receiver. \section{Resource Allocation}\label{sect:forumlation} In this section, we introduce the adopted system performance metric and formulate the corresponding power allocation problem. \subsection{Instantaneous Channel Capacity} \label{subsect:Instaneous_Mutual_information} In this subsection, we define the adopted system performance measure. Given perfect channel state information (CSI) at the receiver, the channel capacity\footnote{Note that the received interference signal $I_i$ on each subcarrier is treated as AWGN, which results in a lower bound on the channel capacity, as is commonly done in the literature. } between the transmitter and the receiver on subcarrier $i$ with channel bandwidth $W$ is given by \begin{eqnarray}\label{eqn:cap} C_{i}&=&W\log_2\Big(1+P_i\Gamma_{i}\Big)\,\,\,\, \mbox{and}\,\,\\ \Gamma_{i}&=&\frac{(1-\rho)l g\abs{H_i}^2}{(1-\rho)(\sigma_{z^a}^2+\sigma_{I_i}^2)+\sigma_{z^s}^2}, \end{eqnarray} where $P_i \Gamma_i$ is the received signal-to-interference-plus-noise ratio (SINR) on subcarrier $i$. The \emph{system capacity} is defined as the total average number of bits successfully delivered to the receiver and is given by \begin{eqnarray} \label{eqn:avg-sys-goodput} && \hspace*{-5mm} U({\cal P}, { \rho})=\sum_{i=1}^{n_F} C_{i}, \end{eqnarray} where ${\cal P}=\{ P_i \ge 0, \forall i\}$ is the power allocation policy and $\rho$ is the power splitting ratio introduced in Section \ref{sect:receiver}. On the other hand, we take into account the total power consumption of the system in the objective function for designing an energy efficient power allocation algorithm.
To this end, we model the power dissipation in the system as: \begin{eqnarray} \label{eqn:power_consumption} U_{TP}({\cal P},\rho)\hspace*{-2mm}&=&\hspace*{-2mm}P_C+ \sum_{i=1}^{n_F}\varepsilon P_i- P_D -P_I \\ \mbox{where}\label{eqn:Power_harvested_d}\,\,P_D &=& \hspace*{-3mm}\underbrace{\eta\sum_{i=1}^{n_F} P_i l g \abs{H_i}^2\rho}_{\mbox{Power harvested from desired signal}}\\ \mbox{and}\,\,\label{eqn:Power_harvested_s} P_I &=&\hspace*{-2.65cm}\underbrace{\eta\sum_{i=1}^{n_F}(\sigma_{z^a}^2+\sigma_{I_i}^2)\rho}_{\mbox{Power harvested from interference signal and antenna noise}}\hspace*{-0.65cm}. \end{eqnarray} $P_C>0$ is the constant \emph{circuit signal processing power consumption} in both transmitter and receiver which includes the power dissipation in the digital-to-analog (analog-to-digital) converter, digital/analog filters, mixer, and frequency synthesizer, and is independent of the actual transmitted or harvested power. $\varepsilon\ge 1$ is a constant which accounts for the inefficiency of the power amplifier. For instance, $5$ Watts is consumed in the power amplifier for every 1 Watt of power radiated in the radio frequency (RF) if $\varepsilon=5$; the power efficiency is $\frac{1}{\varepsilon}=\frac{1}{5}=20\%$. On the other hand, the minus sign in front of $P_D$ in (\ref{eqn:power_consumption}) indicates that a portion of the power radiated by the transmitter can be harvested by the receiver. $0\le\eta\le1$ is a constant which denotes the efficiency of the energy harvesting unit for converting the radio signal to electrical energy for storage. Specifically, the term $\eta l g \abs{H_i}^2\rho$ in (\ref{eqn:Power_harvested_d}) can be interpreted as a \emph{frequency selective power transfer efficiency} for transferring power from the transmitter to receiver on subcarrier $i$. Similarly, the minus sign in front of $P_I$ in (\ref{eqn:power_consumption}) accounts for the ability of the receiver to harvest power from interference signals and antenna noise.
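As a numerical sanity check of the model in (\ref{eqn:cap})-(\ref{eqn:Power_harvested_s}), the following sketch evaluates the system capacity and the total power consumption for a toy three-subcarrier system; every parameter value is an illustrative assumption rather than a system specification.

```python
import math

# Assumed parameter values (illustrative only).
W = 78e3                       # subcarrier bandwidth [Hz]
rho = 0.3                      # power splitting ratio
lg = 1e-6                      # combined path loss * shadowing, l*g
eta, eps, P_C = 0.8, 2.6316, 10.0
s_za, s_zs = 1e-16, 2e-16      # antenna / signal processing noise powers
H2 = [1.2, 0.4, 2.0]           # |H_i|^2 per subcarrier (assumed)
s_I = [1e-16, 1e-16, 1e-16]    # interference power per subcarrier (assumed)
P = [0.5, 0.5, 0.5]            # transmit powers [W]

def gamma(i):
    # Effective SINR slope Gamma_i on subcarrier i
    return (1 - rho) * lg * H2[i] / ((1 - rho) * (s_za + s_I[i]) + s_zs)

U = sum(W * math.log2(1 + P[i] * gamma(i)) for i in range(len(P)))  # capacity
P_D = eta * sum(P[i] * lg * H2[i] * rho for i in range(len(P)))     # from signal
P_I = eta * sum((s_za + s_I[i]) * rho for i in range(len(P)))       # from noise/interf.
U_TP = P_C + eps * sum(P) - P_D - P_I                               # total power

# U_TP stays positive: eps*sum(P) dwarfs P_D, and P_C dwarfs P_I.
assert U > 0 and U_TP > 0
```

With these assumed values the harvested terms are orders of magnitude below the amplifier and circuit consumption, which previews the positivity argument for $U_{TP}$ made next.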
We note that $U_{TP}({\cal P},\rho)>0$ always holds in practical communication systems for the following reasons. First, it can be observed that $\sum_{i=1}^{n_F}\varepsilon P_i\ge \sum_{i=1}^{n_F} P_i> P_D$ due to path loss and the limited energy harvesting efficiency ($\eta\le 1$). Second, to achieve acceptable communication performance, the co-channel interference level has to be controlled (via regulation) to a reasonable level. Therefore, for a typical value of $P_C$, $P_C \gg P_I$ is always valid in practice. The \emph{energy efficiency} of the considered system is defined as the total average number of bits/Joule which is given by \begin{eqnarray} \label{eqn:avg-sys-eff} \hspace*{-8mm}U_{eff}({\cal P},\rho)&=&\frac{U_{}({\cal P},\rho)}{U_{TP}({\cal P},\rho)}. \end{eqnarray} \subsection{Optimization Problem Formulation} \label{sect:cross-Layer_formulation} The optimal power allocation policy, ${\cal P}^*$, ${\rho}^*$, can be obtained by solving \begin{eqnarray} \label{eqn:cross-layer}&&\hspace*{10mm} \max_{{\cal P}, \rho }\,\, U_{eff}({\cal P},\rho) \nonumber\\ \notag \hspace*{-5mm}\mbox{s.t.} &&\hspace*{-5mm}\mbox{C1: } P_{\max}^{req}\ge P_D +P_I\ge P_{\min}^{req},\notag\\ &&\hspace*{-5mm}\mbox{C2:}\notag\sum_{i=1}^{n_F}P_i\le P_{\max}, \\ &&\hspace*{-5mm}\notag \mbox{C3:}\,\, P_C+\sum_{i=1}^{n_F}\varepsilon P_i\le P_{PG}, \hspace*{3.2mm} \mbox{C4: }\sum_{i=1}^{n_F} C_i\ge R_{\min},\\ &&\hspace*{-5mm}\mbox{C5:}\,\, P_i\ge 0, \,\, \forall i,\hspace*{18.8mm} \mbox{C6:}\,\, 0\le\rho\le 1. \end{eqnarray} Variable $P_{\min}^{req}$ in C1 specifies the minimum required power transfer to the receiver. $P_{\max}^{req}$ in C1 limits the maximum amount of harvested power because of the finite capacity of the battery. The value of $P_{\max}$ in C2 limits the transmit power in accordance with the transmit spectrum mask to reduce the amount of out-of-cell interference.
C3 is imposed to guarantee that the total power consumption of the system is less than the maximum power supply from the power grid $P_{PG}$, cf. Figure \ref{fig:system_model}. C4 guarantees the minimum required data rate $R_{\min}$, whose value is provided by the application layer. \section{Solution of the Optimization Problem} \label{sect:solution} The first step in solving the non-convex problem in (\ref{eqn:cross-layer}) is to handle the objective function which comprises the ratio of two functions. We note that there is no standard approach for solving non-convex optimization problems in general. However, in order to derive an efficient power allocation algorithm for the considered problem, we transform the objective function using techniques from nonlinear fractional programming. \subsection{Transformation of the Objective Function} \label{sect:solution_dual_decomposition} For the sake of notational simplicity, we first define $\mathcal{F}$ as the set of feasible solutions of the optimization problem in (\ref{eqn:cross-layer}) and $\{{\cal P},{\cal \rho}\}\in\mathcal{F}$. Without loss of generality, we denote $q^*$ as the maximum energy efficiency of the considered system which is given by \begin{eqnarray} q^*=\frac{U({\cal P^*},{\cal \rho^*})}{U_{TP}({\cal P^*},{\cal \rho^*})}=\max_{{\cal P}, {\cal \rho}}\,\frac{U({\cal P},{\cal \rho})}{U_{TP}({\cal P},{\cal \rho})}. \end{eqnarray} We are now ready to introduce the following Theorem which is borrowed from nonlinear fractional programming \cite{JR:fractional}. \begin{Thm}\label{Thm:1} The maximum energy efficiency $q^*$ is achieved if and only if \begin{eqnarray}\notag \max_{{\cal P}, {\cal \rho}}&& \hspace*{-2mm}\,U({\cal P},{\cal \rho})-q^*U_{TP}({\cal P},{\cal \rho})\\ =&& \hspace*{-2mm}U({\cal P^*},{\cal \rho^*})-q^*U_{TP}({\cal P^*}, {\cal \rho^*})=0, \end{eqnarray} for $U({\cal P},{\cal \rho})\ge0$ and $U_{TP}({\cal P},{\cal \rho})>0$. \end{Thm}
\emph{\,Proof:} Please refer to \cite[Appendix A]{JR:TWC_large_antennas} for a proof similar to the one required for Theorem 1. By Theorem \ref{Thm:1}, for any optimization problem with an objective function in fractional form, there exists an equivalent optimization problem with an objective function in subtractive form, e.g., $U({\cal P},{\cal \rho})-q^*U_{TP}({\cal P}, {\cal \rho})$ in the considered case, such that both problem formulations lead to the same optimal power allocation solution. As a result, we can focus on the equivalent objective function in the rest of the paper. \begin{table}[t]\caption{Iterative Power Allocation Algorithm.}\label{table:algorithm} \vspace*{-5mm} \begin{algorithm} [H] \caption{Iterative Power Allocation Algorithm } \label{alg1} \begin{algorithmic} [1] \normalsize \STATE Initialize the maximum number of iterations $L_{max}$ and the maximum tolerance $\epsilon$ \STATE Set maximum energy efficiency $q=0$ and iteration index $n=0$ \REPEAT [Main Loop] \STATE Solve the inner loop problem in ($\ref{eqn:inner_loop}$) for a given $q$ and obtain power allocation policy $\{{\cal P'}, {\cal \rho'}\}$ \IF {$U({\cal P'}, {\cal \rho'})-q U_{TP}({\cal P'},{\cal \rho'})<\epsilon$} \STATE $\mbox{Convergence}=\,$\TRUE \RETURN $\{{\cal P^*,\rho^*}\}=\{{\cal P',\rho'}\}$ and $q^*=\frac{U({\cal P'},{\cal \rho'})}{ U_{TP}({\cal P'}, {\rho'})}$ \ELSE \STATE Set $q=\frac{U({\cal P'}, {\cal \rho'})}{ U_{TP}({\cal P'},{\cal \rho'})}$ and $n=n+1$ \STATE Convergence $=$ \FALSE \ENDIF \UNTIL{Convergence $=$ \TRUE $\,$or $n=L_{max}$} \end{algorithmic} \end{algorithm} \vspace*{-8mm} \end{table} \subsection{Iterative Algorithm for Energy Efficiency Maximization} In this section, an iterative algorithm (known as the Dinkelbach method \cite{JR:fractional}) is proposed for solving (\ref{eqn:cross-layer}) with an equivalent objective function in subtractive form such that the obtained solution satisfies the conditions stated in Theorem 1.
The proposed algorithm is summarized in Table \ref{table:algorithm} and the convergence to the optimal energy efficiency is guaranteed if the inner problem (\ref{eqn:inner_loop}) can be solved in each iteration. \emph{Proof: }Please refer to \cite[Appendix B]{JR:TWC_large_antennas} for a proof of convergence. As shown in Table \ref{table:algorithm}, in each iteration in the main loop, i.e., lines 3--12, we solve the following optimization problem for a given parameter $q$: \begin{eqnarray}\label{eqn:inner_loop} &&\hspace*{-18mm}\max_{{\cal P}, {\cal \rho}} \quad\,{U}({\cal P},{\cal \rho})-q{U}_{TP}({\cal P},{\cal \rho} )\nonumber\\ &&\hspace*{-15mm}\mbox{s.t.} \,\,\mbox{C1, C2, C3, C4, C5, C6}. \end{eqnarray} We note that ${U}({\cal P},{\cal \rho})-q{U}_{TP}({\cal P},{\cal \rho} )\ge 0$ holds for any value of $q$ generated by Algorithm 1. Please refer to \cite[Proposition 3]{JR:TWC_large_antennas} for a proof. \subsubsection*{Solution of the Main Loop Problem} The transformed problem now has an objective function in subtractive form which is easier to handle than the original formulation. However, there is still an obstacle in tackling the problem. The power splitting ratio $\rho$ appears in the capacity equation of each subcarrier, which couples the power allocation variables and results in a non-convex function, cf. (\ref{eqn:cap}). In order to derive a tractable power allocation algorithm, we have to overcome this problem. To this end, we perform a full search with respect to (w.r.t.) $\rho$. In particular, for a given value of $\rho$, we optimize the transmit power for energy efficiency maximization. We repeat the procedure for all possible values\footnote{In practice, we discretize the range of $\rho$, i.e., $[0,1]$, into $M\gg 1$ equally spaced intervals with an interval width of $\frac{1}{M}$ for facilitating the full search.} of $\rho$ and record the corresponding achieved energy efficiencies.
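For concreteness, the interplay between the Dinkelbach update of Algorithm 1 and the one-dimensional search over $\rho$ can be sketched as follows. The inner problem solver is replaced here by an illustrative toy model, so all numbers are assumptions rather than outputs of the actual dual-decomposition solver described below.

```python
# Sketch of Algorithm 1's outer structure: a Dinkelbach update on q combined
# with a grid search over the power splitting ratio rho. toy_inner() is a
# stand-in for the inner problem solver (NOT the paper's actual solver).

def toy_inner(q, rho):
    """Return (U, U_TP) at fixed rho. Toy model: capacity falls off
    quadratically, total power falls off linearly, as received power
    is diverted to harvesting."""
    U = 10.0 * (1.0 - 0.5 * rho**2)   # system capacity (illustrative)
    U_TP = 4.0 - rho                  # total power consumption (illustrative)
    return U, U_TP

def dinkelbach(rho, tol=1e-9, max_iter=50):
    q = 0.0
    for _ in range(max_iter):
        U, U_TP = toy_inner(q, rho)
        if U - q * U_TP < tol:        # optimality condition of Theorem 1
            return q
        q = U / U_TP                  # Dinkelbach update of q
    return q

M = 100                               # grid intervals for the search over rho
best = max(dinkelbach(m / M) for m in range(M + 1))
assert 2.58 < best < 2.59             # here an interior rho beats rho = 0
```

In this toy example the grid search finds a strictly interior optimal $\rho$, mirroring the trade-off between capacity and harvested power in the actual problem.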
Finally, we select the $\rho$ among all trials which provides the maximum system energy efficiency. Note that for a fixed $\rho$, the transformed problem in (\ref{eqn:inner_loop}) is concave w.r.t. the power allocation variables and (\ref{eqn:inner_loop}) satisfies Slater's constraint qualification. As a result, the search space of the solution set can be reduced from $n_F+1$ dimensions (in problem (\ref{eqn:cross-layer})) to a one-dimensional search w.r.t. $\rho$ due to the proposed transformation in Theorem \ref{Thm:1} and dual decomposition which will be introduced in the next section. Now, we solve the transformed problem for a given value of $\rho$ by exploiting the concavity of the problem. It can be seen that strong duality holds for the transformed problem for a given value of $\rho$; hence, solving the dual problem is equivalent to solving the primal problem \cite{book:convex}. \subsection{Dual Problem Formulation} In this subsection, for a given value of $\rho$, we solve the power allocation optimization problem by solving its dual. For this purpose, we first need the Lagrangian function of the primal problem. The Lagrangian of (\ref{eqn:inner_loop}) is given by \begin{eqnarray}\hspace*{-2mm}&&\notag{\cal L}( \alpha, \beta,\gamma,\lambda,\theta,{\cal P},{\cal \rho})\\ \hspace*{-5mm}&=&\hspace*{-3mm}\sum_{i=1}^{n_F} (1+\gamma)C_i\hspace*{-0.5mm}-\hspace*{-0.5mm}q\Big(U_{TP}({\cal P},\rho)\Big)\hspace*{-0.5mm}-\hspace*{-0.5mm}\lambda\Big( P_C+\sum_{i=1}^{n_F}\varepsilon P_i- P_{PG}\hspace*{-1mm}\Big)\notag\\ \hspace*{-2mm}&-&\hspace*{-3mm}\beta\Big(\sum_{i=1}^{n_F}P_i- P_{\max}\Big)-\gamma R_{\min}-\alpha\Big(P_{\min}^{req}-P_D-P_I \Big)\notag\\ \hspace*{-2mm}&+&\hspace*{-3mm}\theta\Big(P_{\max}^{req}-P_D-P_I \Big). \label{eqn:Lagrangian} \end{eqnarray} Here, $\lambda\ge0$ is the Lagrange multiplier connected to C3 accounting for the power usage from the power grid.
$\beta\ge0$ is the Lagrange multiplier corresponding to the maximum transmit power limit in C2. $\alpha\ge 0$ and $\gamma\ge 0$ are the Lagrange multipliers associated with the minimum required power transfer and the minimum data rate requirement in C1 and C4, respectively. $\theta\ge 0$ is the Lagrange multiplier which accounts for the maximum allowed power transfer in C1. On the other hand, boundary constraints C5 and C6 will be absorbed into the Karush-Kuhn-Tucker (KKT) conditions when deriving the optimal power allocation solution in the following. The dual problem is given by \begin{eqnarray} \underset{ \alpha, \beta,\gamma,\lambda,\theta \ge 0}{\min}\ \underset{{\cal P,\rho}}{\max}\quad{\cal L}( \alpha, \beta,\gamma,\lambda,\theta,{\cal P},{\cal \rho}).\label{eqn:master_problem} \end{eqnarray} \subsection{Dual Decomposition Solution } \label{sect:sub_problem_solution} By Lagrange dual decomposition, the dual problem can be decomposed into two layers: Layer 1 consists of $n_F$ subproblems with identical structure which can be solved in parallel; Layer 2 is the master problem. The dual problem can be solved iteratively, where in each iteration the transmitter solves the subproblems by using the KKT conditions for a fixed set of Lagrange multipliers, and the master problem is solved using the gradient method. \subsubsection*{Layer 1 (Subproblem Solution)} Using standard optimization techniques and KKT conditions, the optimal power allocation on subcarrier $i$ for a given $q$ is obtained as \begin{eqnarray}\label{eqn:power1} \hspace*{-3mm}P_{i}^*\hspace*{-2mm}&=&\hspace*{-2mm}\Bigg[\frac{W(1+\gamma)}{\ln(2)\Lambda_i}-\frac{1}{\Gamma_i}\Bigg]^+, \,\forall i,\quad\mbox{where}\\ \hspace*{-3mm}\Lambda_i\hspace*{-2mm}&=&\hspace*{-2mm}q\Big(\varepsilon-\eta\rho l g\abs{H_i}^2\Big)+\lambda\varepsilon\hspace*{-0.5mm}+\hspace*{-0.5mm}\beta\hspace*{-0.5mm}+\hspace*{-0.5mm}(\theta-\alpha)\eta\rho l g\abs{H_i}^2 \end{eqnarray} and $\big[x\big]^+=\max\{0,x\}$.
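A minimal numerical sketch of the solution in (\ref{eqn:power1}) follows; the channel gains and Lagrange multiplier values below are illustrative assumptions, not values produced by the actual dual iteration.

```python
import math

# Assumed system and dual-variable values (illustrative only).
W = 78e3                                 # subcarrier bandwidth [Hz]
q, eps, eta, rho, lg = 1e5, 2.6316, 0.8, 0.3, 1e-6
lam, beta, gam, alpha, theta = 0.1, 0.05, 0.2, 0.01, 0.0

def optimal_power(H2_i, Gamma_i):
    """P_i* = [ W(1+gamma) / (ln(2) * Lambda_i) - 1/Gamma_i ]^+"""
    c = eta * rho * lg * H2_i            # harvesting coupling term
    Lambda_i = q * (eps - c) + lam * eps + beta + (theta - alpha) * c
    return max(0.0, W * (1 + gam) / (math.log(2) * Lambda_i) - 1.0 / Gamma_i)

p_strong = optimal_power(H2_i=2.0, Gamma_i=1e9)   # good channel
p_weak = optimal_power(H2_i=0.1, Gamma_i=1e-3)    # poor channel

# A subcarrier whose inverse SINR exceeds the water level is shut off.
assert p_strong > 0.0 and p_weak == 0.0
```

The $[\cdot]^+$ projection is what produces the familiar on/off behavior: power flows only to subcarriers whose inverse SINR lies below the (subcarrier-dependent) water level.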
The power allocation solution in (\ref{eqn:power1}) is in the form of water-filling. Interestingly, the water-level in (\ref{eqn:power1}), i.e., $\frac{W(1+\gamma)}{\ln(2)\Lambda_i}$, is different across different subcarriers due to the \emph{frequency selective power transfer efficiency} described after (\ref{eqn:Power_harvested_d}). On the other hand, Lagrange multipliers $\gamma$ and $\alpha$ force the transmitter to allocate more power for transmission to fulfill the data rate requirement $R_{\min}$ and the minimum power transfer requirement $P_{\min}^{req}$, respectively. \subsubsection*{Layer 2 (Master Problem Solution)} The dual function is differentiable and, hence, the gradient method can be used to solve the Layer 2 master problem in (\ref{eqn:master_problem}) which leads to \begin{eqnarray}\label{eqn:multipler1} \hspace*{-5.5mm}\alpha(m+1)\hspace*{-3mm}&=&\hspace*{-3mm}\Big[\alpha(m)-\xi_1(m)\times \Big( P_D+P_I- P_{\min}^{req}\Big)\Big]^+\hspace*{-1.5mm},\\ \hspace*{-5.5mm}\beta(m+1)\hspace*{-3mm}&=&\hspace*{-3mm}\Big[\beta(m)-\xi_2(m)\times \Big(P_{\max}-\sum_{i=1}^{n_F} P_i\Big)\Big]^+\hspace*{-1.5mm}, \label{eqn:multipler2}\\ \hspace*{-5.5mm}\gamma(m+1)\hspace*{-3mm}&=&\hspace*{-3mm}\Big[\gamma(m)-\xi_3(m)\times \Big(\sum_{i=1}^{n_F} C_i -R_{\min}\Big )\Big]^+\hspace*{-1.5mm},\label{eqn:multipler3}\\ \hspace*{-5.5mm}\lambda(m+1)\hspace*{-3mm}&=&\hspace*{-3mm}\Big[\lambda(m)-\xi_4(m)\times \Big(P_{PG} -P_C-\sum_{i=1}^{n_F}\varepsilon P_i \Big)\Big]^+\hspace*{-2.2mm}, \label{eqn:multipler4}\\ \hspace*{-5.5mm}\theta(m+1)\hspace*{-3mm}&=&\hspace*{-3mm}\Big[\theta(m)-\xi_5(m)\times \Big(P_{\max}^{req}-P_D-P_I\Big)\Big]^+\hspace*{-1.5mm}, \label{eqn:multipler5} \end{eqnarray} where index $m\ge 0$ is the iteration index and $\xi_u(m)$, $u\in\{1,2,3,4,5\}$, are positive step sizes. 
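Each update in (\ref{eqn:multipler1})-(\ref{eqn:multipler5}) is a projected subgradient step: the multiplier moves against the constraint slack and is then projected onto the nonnegative orthant. A minimal sketch with assumed slack values follows.

```python
# Projected-subgradient step shared by all five multiplier updates.
# All slack and step-size values below are assumed, purely for illustration.

def subgradient_step(mult, slack, step):
    """mult(m+1) = [ mult(m) - step * slack ]^+"""
    return max(0.0, mult - step * slack)

# Violated minimum-rate constraint C4: sum(C_i) - R_min < 0, so the
# multiplier gamma grows, pushing more power into the next allocation.
gamma_next = subgradient_step(0.2, slack=9.0 - 10.0, step=0.1)
assert gamma_next > 0.2

# A satisfied constraint (positive slack) drives its multiplier toward zero.
assert subgradient_step(0.05, slack=2.0, step=0.1) == 0.0
```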
Then, the updated Lagrange multipliers in (\ref{eqn:multipler1})-(\ref{eqn:multipler5}) are used for solving the subproblems in (\ref{eqn:master_problem}) via updating the power allocation solution in (\ref{eqn:power1}). Since the transformed problem is concave for given parameters $q$ and $\rho$, it is guaranteed that the iteration between the Layer 2 master problem and the Layer 1 subproblems converges to the primal optimal solution of (\ref{eqn:inner_loop}) in the main loop, if the chosen step sizes satisfy the infinite travel condition \cite{book:convex,Notes:Sub_gradient}. After obtaining the solution of (\ref{eqn:inner_loop}) with the above algorithm for a fixed $\rho$, we solve (\ref{eqn:inner_loop}) again for another value of $\rho$ until we obtain the energy efficiency for all considered values of $\rho$. \section{Results} \label{sect:result-discussion} In this section, we evaluate the performance of the proposed power allocation algorithm using simulations. The TGn path loss model \cite{report:tgn} for indoor communication is adopted with 20 dB directional transmit and receive antenna gains. The distance between the transmitter and receiver is 10 meters. The system bandwidth is ${\cal B}=1$ MHz and the number of subcarriers is $n_F=128$. We assume a carrier center frequency of $470$ MHz which will be used by IEEE 802.11 for the next generation of Wi-Fi systems \cite{report:80211af}. Each subcarrier for RF transmission has a bandwidth of $W=78$ kHz with antenna noise and signal processing noise powers of $\sigma_{z^a}^2=-128$ dBm and $\sigma_{z^s}^2=-125$ dBm \cite{book:microwave}, respectively. The small-scale fading coefficients of the transmitter and receiver links are generated as independent and identically distributed (i.i.d.) Rician random variables with Rician factor equal to 6 dB. Besides, the received interference at the receiver on each subcarrier is generated as i.i.d. Rayleigh random variables with variance specified in each case study.
The shadowing of both the desired and interference communication links is set to $0$ dB, i.e., $g=1$ for the desired link. Unless specified otherwise, we assume a static signal processing power consumption of $P_C=$ 40 dBm, a minimum data rate requirement of $R_{\min}=10$ Megabits/s, a minimum required power transfer of $P_{\min}^{req}=0$ dBm, a maximum allowed power transfer of $P_{\max}^{req}=20$ dBm, and an energy harvesting efficiency of $\eta=0.8$. We set $M=1000$ for discretizing the range of $\rho$ into 1000 equally spaced intervals for performing the full search\footnote{In practice, much smaller values for $M$ (e.g., $M=100$) can be used to reduce complexity at the expense of a small loss in performance. }. On the other hand, we assume a power efficiency of $38\%$ for the power amplifier used at the transmitter, i.e., $\varepsilon=\frac{1}{0.38}=2.6316$. The average system energy efficiency is obtained by dividing the number of bits which are successfully decoded by the receiver by the total power consumption, averaged over multipath fading. Note that if the transmitter is unable to guarantee the minimum required data rate $R_{\min}$ or the minimum required power transfer $P_{\min}^{req}$, we set the energy efficiency and the system capacity for that channel realization to zero to account for the corresponding failure. For the sake of illustration, we define the interference-to-signal processing noise ratio (INR) as $\frac{ \sigma_{I_{i}}^2}{\sigma_{z^s}^2}$. In the following results, the ``number of iterations'' refers to the number of outer loop iterations of Algorithm 1 in Table I. \vspace*{-0.1cm} \subsection{Convergence of Iterative Algorithm 1 } Figure \ref{fig:convergence} illustrates the evolution of the average system energy efficiency of the proposed iterative algorithm for different levels of average received interference. In particular, we focus on the convergence speed of the proposed algorithm for the optimal value of $\rho$.
The results in Figure \ref{fig:convergence} were averaged over 100000 independent realizations of multipath fading. The dashed lines denote the average maximum energy efficiency for each case study. It can be observed that the iterative algorithm converges to the optimal value within 5 iterations for all considered scenarios. Moreover, variations in the INR level $\frac{ \sigma_{I_{i}}^2}{\sigma_{z^s}^2}$ and the maximum transmit power allowance $P_{\max}$ have a negligible impact on the convergence speed of the proposed algorithm. In the sequel, we set the number of iterations to 5 for illustrating the performance of the proposed algorithm. \begin{figure}[t]\vspace*{-5mm} \centering \includegraphics[width=3.5 in]{q_convergence.eps}\vspace*{-4mm} \caption{Average system energy efficiency (bit/joule) versus number of iterations for different levels of INR, $\frac{\sigma_{I_{i}}^2}{\sigma_{z^s}^2}$, and different values of maximum transmit power allowance, $P_{\max}$. The dashed lines represent the maximum energy efficiency for the different cases. } \label{fig:convergence}\vspace*{-4mm} \end{figure} \subsection{Average Energy Efficiency} Figure \ref{fig:EE_PT} depicts the average system energy efficiency versus the maximum transmit power allowance, $P_{\max}$, for different levels of received interference. It can be seen that for $P_{\max}<10$ dBm, the system energy efficiency is zero since the optimization problem in (\ref{eqn:cross-layer}) is infeasible due to insufficient RF transmit power for satisfying the constraints on $R_{\min}$ and $P_{\min}^{req}$. However, for a large enough $P_{\max}$, the energy efficiency of the proposed algorithm first increases with increasing $P_{\max}$ and then approaches a constant as the energy efficiency gain due to a higher transmit power allowance saturates.
This is because the transmitter is not willing to consume an exceedingly large amount of power for RF transmission once the maximum system energy efficiency is achieved. Furthermore, the energy efficiency of the system is impaired by an increasing amount of interference, despite the potential energy efficiency gain due to energy harvesting from interference signals, cf. (\ref{eqn:Power_harvested_d}) and (\ref{eqn:avg-sys-eff}). For comparison, Figure \ref{fig:EE_PT} also contains the energy efficiency of a baseline power allocation scheme in which the system capacity (bit/s) with constraints C1--C6 in (\ref{eqn:cross-layer}) is maximized. It can be observed that in the low-to-moderate maximum transmit power allowance regime, i.e., $P_{\max}<24$ dBm, the baseline scheme achieves the same performance as the proposed algorithm in terms of energy efficiency. This result indicates that in the low transmit power allowance regime, an algorithm which achieves the maximum system capacity may also achieve the maximum energy efficiency and vice versa. However, the energy efficiency of the baseline scheme decreases dramatically in the high transmit power allowance regime. This is because the baseline scheme employs a large transmit power for capacity maximization, which is detrimental to energy efficiency maximization. \begin{figure}[t] \centering\vspace*{-5mm} \includegraphics[width=3.5 in]{ee_pt.eps}\vspace*{-4mm} \caption{Average system energy efficiency (bit-per-Joule) versus maximum transmit power allowance, $P_{\max}$, for different levels of INR, $\frac{\sigma_{I_{i}}^2}{\sigma_{z^s}^2}$.} \label{fig:EE_PT}\vspace*{-4mm} \end{figure} \subsection{Average System Capacity } Figure \ref{fig:CAP_PT} shows the average system capacity versus maximum transmit power allowance $P_{\max}$ for different levels of INR, $\frac{\sigma_{I_{i}}^2}{\sigma_{z^s}^2}$. We compare the proposed algorithm again with the baseline scheme described in the last section.
The average system capacities of both algorithms are zero for $P_{\max}<10$ dBm due to the infeasibility of the problem. For $P_{\max}>10$ dBm, it can be observed that the average system capacity of the proposed algorithm approaches a constant in the high transmit power allowance regime. This is because the proposed algorithm stops consuming additional power for radio transmission in order to maximize the system energy efficiency. We note that, as expected, the baseline scheme achieves a higher average system capacity than the proposed algorithm in the high transmit power allowance regime. This is due to the fact that the baseline scheme consumes a larger amount of transmit power compared to the proposed algorithm. However, the baseline scheme achieves the maximum system capacity by sacrificing the system energy efficiency. \vspace*{-0.05cm} \subsection{Average Harvested Power and Power Splitting Ratio} Figures \ref{fig:power_harvested} and \ref{fig:rho} show, respectively, the average harvested power and the average optimal power splitting ratio, $\rho$, of the proposed algorithm versus maximum allowed transmit power, $P_{\max}$, for different levels of INR, $\frac{\sigma_{I_{i}}^2}{\sigma_{z^s}^2}$. It can be observed in Figure \ref{fig:power_harvested} that for small values of INR, i.e., $\mbox{INR}\le10$ dB, only a small amount of power is harvested by the receiver for energy efficiency maximization. In other words, a small portion of the received power is assigned to the energy harvesting unit, cf. Figure \ref{fig:rho}. In fact, for small values of INR, assigning a larger amount of the received power for information decoding provides a higher capacity gain to the system, which results in an improvement in energy efficiency. On the contrary, as shown in Figure \ref{fig:rho}, the receiver has a higher tendency to assign a larger portion of the received power to the energy harvester in the interference limited regime, i.e., $\mbox{INR}\gg 10$ dB.
Indeed, the SINR on each subcarrier approaches a constant in the interference limited regime and is independent of $\rho$, i.e., $\frac{(1-\rho)l g\abs{H_i}^2P_i}{(1-\rho)(\sigma_{z^a}^2+\sigma_{I_i}^2)+\sigma_{z^s}^2}\rightarrow \frac{P_i l g\abs{H_i}^2}{\sigma_{I_i}^2+\sigma_{z^a}^2}$. Thus, assigning more received power for information decoding does not provide a significant gain in channel capacity. On the other hand, the total power consumption decreases linearly with increasing $\rho$. As a result, assigning a larger portion of the received power to energy harvesting can enhance the system energy efficiency when the capacity gain is saturated in the interference limited regime. \begin{figure}[t]\vspace*{-5mm} \centering \includegraphics[width=3.5 in]{cap_pt.eps}\vspace*{-4mm} \caption{Average system capacity (bit-per-second) versus maximum transmit power allowance, $P_{\max}$, for different levels of INR, $\frac{\sigma_{I_{i}}^2}{\sigma_{z^s}^2}$.} \label{fig:CAP_PT}\vspace*{-6mm} \end{figure} \begin{figure}[t]\vspace*{-5mm} \centering \includegraphics[width=3.5 in]{HP_TP.eps}\vspace*{-4mm} \caption{Average harvested power (dBm) versus maximum transmit power allowance, $P_{\max}$, for different levels of INR, $\frac{\sigma_{I_{i}}^2}{\sigma_{z^s}^2}$. The double-sided arrow indicates the power harvesting gain due to an increasing $\rho$ in the interference limited regime, cf. Figure \ref{fig:rho}. } \label{fig:power_harvested}\vspace*{-3mm} \end{figure} \vspace*{-0.1cm} \section{Conclusions}\label{sect:conclusion} In this paper, we formulated the power allocation algorithm design for simultaneous wireless information and power transfer in OFDM systems as a non-convex optimization problem. In the problem formulation, we took into account a minimum data rate requirement, a minimum required power transfer, and the circuit power dissipation.
The multi-dimensional optimization problem was solved by using non-linear fractional programming, dual decomposition, and a one-dimensional full search. The simulation results reveal an interesting trade-off between energy efficiency, system capacity, and wireless power transfer. \begin{figure}[t] \centering\vspace*{-1mm} \includegraphics[width=3.5 in]{rho_TP.eps}\vspace*{-4mm} \caption{Average optimal power splitting ratio, $\rho$, versus maximum transmit power allowance, $P_{\max}$, for different levels of INR, $\frac{\sigma_{I_{i}}^2}{\sigma_{z^s}^2}$.} \label{fig:rho}\vspace*{-6mm} \end{figure} \vspace*{-0.05cm} \bibliographystyle{IEEEtran}
\section{INTRODUCTION} The phenomenon of wind-generated gravity waves on the sea surface is a very interesting subject not only for oceanographers, coastal engineers and naval architects. The ocean is a great natural laboratory, and many phenomena taking place there are interesting for a broad community of physicists. Gravity and capillary surface waves on deep water represent the most conspicuous natural example of nonlinear waves in a strongly dispersive medium. The statistical theory of such waves is called the theory of weak turbulence; it is an important part of general classical physics. This theory has been developing for more than forty years, and its basic concepts are now understood very well. However, experimental data supporting this theory are scarce. The situation here is opposite to that of strong hydrodynamic turbulence in an incompressible fluid. There we have a lot of experimental data, but the theory is poor and inconsistent. The theory of weak turbulence is naturally applicable to the description of surface gravity and capillary waves on deep and shallow water. It can also be applied to acoustic waves, to Alfv\'en waves, to many types of waves in plasmas, to waves in liquid helium and spin waves in ferromagnetics, as well as to Rossby waves in the atmosphere and to internal waves; however, the experimental data collected so far in these areas of physics are too scarce to allow a convincing comparison with the theory. Only the oceanographers, who have collected data on wave spectra measured in the ocean, in lakes and in wave tanks for almost half a century, have accumulated vast quantities of very valuable experimental material which could and should be compared with the predictions of the theory. (The contribution of Professor M. Donelan to this process is truly seminal; see for instance [Donelan, 1985].) This is an ambitious task; a lot of work must be done to perform it, and only the first results have been obtained in this direction.
It was shown that the fetch dependence of wave energy and wave frequency, obtained in basic fetch-limited experiments, can be naturally interpreted in terms of the weak turbulent theory [Zakharov, 2002]. All experimenters agree that the observed spectra of wind-driven sea waves just behind the spectral peak have a universal powerlike form close to $\omega^{-4}$. We will show that this dependence can be easily explained in terms of the weak-turbulent theory. Once more, weak turbulence is the statistical theory of nonlinear waves in dispersive media. A central point of this theory is the following: a wave ensemble is described by the kinetic equation for square wave amplitudes. This equation has different names; for instance, the Boltzmann equation or the Hasselmann equation. Also, this equation could be called the Peierls equation, because it is nothing but the classical limit of the quantum kinetic equation for phonons, derived by Peierls and others in the late twenties. In this article we will use the term "kinetic equation". The kinetic equation for gravity waves was derived by K. Hasselmann in 1962-1963 [Hasselmann, 1962; Hasselmann 1963]; now this equation is accepted as a basic model for the description of wave spectra evolution by the majority of oceanographers. The kinetic equation in a truncated form (known as the DIA, or Discrete Interaction Approximation) is widely used in the third generation models of wave prediction. However, the physical effects described by the kinetic equation deserve some comments. The main function of the nonlinear interaction term $S_{nl}$ in the kinetic equation is the very intensive redistribution of energy, momentum and wave action along the spectrum. Due to $S_{nl}$, the direct cascades of energy and momentum as well as the inverse cascade of wave action are formed. These processes govern the evolution of the spectral peak and play a central role in the formation of the universal powerlike spectrum behind the spectral peak.
In the simplest idealized cases, these spectra are Kolmogorov-type weak turbulent spectra, corresponding to constant fluxes of energy and wave action. Strictly speaking, such spectra are realized in physical systems where domains of forcing and damping are essentially separated in $K$-space. In the wind-driven sea the source of wave action is concentrated near the spectral peak, while the source of energy is distributed along the spectrum. At first glance this fact is an impediment to the application of the Kolmogorov-type theory; actually it is not a serious difficulty. In a realistic situation the fluxes of energy and wave action are functions of frequency. However, this does not essentially affect the spectral shape; it remains close to $\omega^{-4}$. One more point is important. If the wind-driven sea is well-developed, then the main part of the momentum flux from the wind is concentrated in short waves. This fact, experimentally established and strongly stressed by M. Donelan, can be naturally explained in terms of the weak turbulent theory. There is no reliable parametrization for the white capping dissipation. However, it seems very probable that this fundamental process generates dissipation only in the short-wave region. Indeed, wave breaking makes the surface more smooth, acting like an efficient viscosity or even super-viscosity. Moreover, the wave breaking generates turbulence in the boundary layer with a thickness comparable to the length of the most dissipative waves. It is known that this layer is much thinner than the wave length of the spectral peak. Hence it is possible to suggest that the white cap dissipation in the area of the spectral peak can be neglected for the developed sea. In the first approximation the evolution of the spectral peak is described by the "conservative" kinetic equation which includes only the time derivative, the advective term, and $S_{nl}$.
In the range of high frequencies the influence of wind forcing and white capping can be taken into account as a "boundary condition". This condition defines the flux of wave action to the long wave region. The "conservative" kinetic equation has a family of self-similar solutions depending on two free parameters. It was shown recently that by choosing the parameters in a proper way one can explain the results of major fetch-limited experiments made during the last three decades, including the JONSWAP experiment [Zakharov, 2002]. We will present a detailed description of this study, supported by a massive numerical experiment, in another article. In this paper we present the basic ideas of the weak turbulent theory using the simplest theoretical model of $S_{nl}$ known as the "local" or "diffusive" approximation. \section{Basic Theory} We assume that the flow in the wave motion is potential, $v=\nabla \Phi$. The condition of incompressibility imposes on the potential $\Phi$ the Laplace equation: \begin{equation} \Delta \Phi=0. \end{equation} If $\eta$ is the shape of the surface, then equation (1) should be solved in the domain $z<\eta$ under the boundary condition \begin{eqnarray} &&\Phi \vert_{z=\eta}=\psi(\vec r, t),\nonumber\\ &&\left.\frac{\partial \Phi}{\partial z} \right |_{z \to - \infty} \rightarrow 0 .\nonumber \end{eqnarray} In the linear approximation we should solve equation (1) in the half-space $z<0$. The shape of the surface and the potential on the surface, $\eta$ and $\psi$, are canonically conjugate variables; then the Euler equation for the potential flow of an ideal fluid with a free surface can be written \begin{eqnarray} &&\eta_{t}=\frac{\delta H}{\delta \psi},\nonumber\\ &&\psi_{t}=-\frac{\delta H}{\delta \eta}.
\end{eqnarray} The solution of the linearized motion equation (2) is the propagating monochromatic wave \begin{eqnarray} &&\eta = \sqrt{\frac{2 \omega_{\vec k}}{g}}\,A_{0}\cos(\vec k \vec r-\omega_{k}t-\phi),\nonumber\\ &&\psi = \sqrt{\frac{2 \omega_{\vec k}}{|\vec k|}}\,A_{0}\sin(\vec k \vec r-\omega_{k}t-\phi),\nonumber \end{eqnarray} where $\omega_{\vec k}=\sqrt{g|\vec k|}$ and $\vec k$ is the wave vector. We can call $$ A=A_{0}\,e^{i\phi} $$ the complex amplitude of the propagating wave. The normalization of $A$ is taken in such a way that the energy density is $$ E=\omega A_0 ^{2}. $$ By definition, $A_0 ^{2}=E/\omega$ is the density of "wave action"; then the sea surface is a composition of propagating waves \begin{eqnarray} &&\eta=\int\sqrt{\frac{\omega_{\vec k}}{2g}}\,(A_{\vec k}+A^{*}_{-\vec k})\, e^{i(\vec k \vec r-\omega_{\vec k}t)}\, d\vec k ,\nonumber\\ &&\psi=-i\int\sqrt{\frac{\omega_{\vec k}}{2|\vec k|}}\,(A_{\vec k} - A^{*}_{-\vec k})\, e^{i(\vec k \vec r - \omega _{\vec k} t)}\, d\vec k .\nonumber \end{eqnarray} A real sea should be described statistically; to do this let us introduce the spectral density of wave action, assuming that $$ \left\langle A_{\vec k}\, A^{*}_{\vec k^{'}} \right\rangle = g\, N_{\vec k}\,\,\,\delta(\vec k-\vec k^{'}). $$ Then we can express the spatial correlation function $$ F(\vec R)= \left \langle \eta(\vec r)\, \eta(\vec r+\vec R) \right \rangle $$ in the form $$ F(\vec R)=\int \omega_{\vec k}\,\, N_{\vec k}\, \cos \vec k \vec R\, d\vec k . $$ In this case, the mean square deviation $\sigma$ is given by the formula $$ \sigma=\left\langle \eta^{2}\right\rangle = \int\omega_{\vec k}\, N_{\vec k}\, d\vec k . $$ Further, let us denote $E_{\vec k}=\omega_{\vec k}\, N_{\vec k}$; this is the energy density in $K$-space divided by $g$. It has dimension $\left[L^{4}\right]$.
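For completeness, the dispersion law $\omega_{\vec k}=\sqrt{g|\vec k|}$ quoted above follows from the quadratic part of the Hamiltonian; the following standard sketch (in slightly simplified notation, with $\hat k$ denoting the Fourier multiplier $|\vec k|$) fills in this step:

```latex
% Quadratic Hamiltonian of the linearized deep-water problem,
% \hat k being the Dirichlet--Neumann operator (multiplication by |\vec k|):
H_0 = \frac{1}{2}\int \left( g\,\eta^{2} + \psi\,\hat k\,\psi \right) d\vec r ,
\qquad
\eta_t = \frac{\delta H_0}{\delta \psi} = \hat k\,\psi ,
\quad
\psi_t = -\frac{\delta H_0}{\delta \eta} = -g\,\eta ,
% eliminating \psi:
\eta_{tt} = -g\,\hat k\,\eta
\;\Longrightarrow\;
\omega^{2} = g\,|\vec k|
\quad\text{for}\quad
\eta \propto e^{i(\vec k \vec r - \omega t)} .
```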
Now we can express $\eta(\vec r)$ through its Fourier transform $\eta_{\vec k}$ $$ \eta(\vec r)=\int\eta_{\vec k}\,\,e^{i\vec k \vec r}\,d\vec k $$ and define the spatial spectrum as follows \begin{eqnarray} &&\left\langle\eta_{\vec k}\,\eta_{\vec k^{'}}\right\rangle = I_{\vec k}\,\,\delta(\vec k+\vec k^{'}),\nonumber\\ &&F(\vec R)=\int I_{\vec k}\,\, e^{-i \vec k \vec R}\, d\vec k .\nonumber \end{eqnarray} Comparing with (3) we obtain \begin{eqnarray} &&I_{\vec k}=\frac{1}{2}\,\omega_{\vec k}\,(N_{\vec k}+N_{-\vec k}),\nonumber\\ &&\sigma=\int I_{\vec k}\,\,d\vec k . \end{eqnarray} Further, it is convenient to introduce the complex amplitudes $$ a_{\vec k}=2\pi\, A_{\vec k} $$ and write the motion equation (2) in the form \begin{equation} \frac{\partial a_{\vec k}}{\partial t}+i\, \frac{\delta H}{\delta a^{*}_{\vec k}} =0, \end{equation} where the Hamiltonian $H$ can be expanded in powers of $a_{\vec k}$ \begin{eqnarray} H&=&H_{0}+H_{1}+H_{2}+\dots ,\nonumber\\ H_{0}&=&\int \omega_{\vec k} \left|a_{\vec k}\right|^2 \, d\vec k ,\nonumber\\ H_{1}&=&\frac{1}{2}\int V_{\vec k \vec k_{1} \vec k_{2}}\left(a^{*}_{\vec k} a_{\vec k_{1}} a_{\vec k_{2}}+ a_{\vec k} a^*_{\vec k_{1}} a^*_{\vec k_{2}} \right)\nonumber\\ && \hspace{1in}\times \delta(\vec k - \vec k_{1} - \vec k_{2})\,d\vec k\, d\vec k_{1}\, d\vec k_{2}\nonumber\\ &+&\frac{1}{3}\int U_{\vec k \vec k_{1} \vec k_{2}} \left(a_{\vec k}\, a_{\vec k_{1}}\,a_{\vec k_{2}}+a^{*}_{\vec k}\,a^{*}_{\vec k_{1}}\,a^{*}_{\vec k_{2}}\right)\nonumber\\ &&\hspace{1in}\times\delta(\vec k+\vec k_{1}+\vec k_{2})\,d\vec k\, d\vec k_{1}\,d\vec k_{2} .\nonumber \end{eqnarray} The Hamiltonian $H_{2}$ contains terms quartic in $a^{*}_{\vec k}$, $a_{\vec k}$. It is not very convenient to use equation (4). The cubic Hamiltonian $H_{1}$ leads to the formation of "slave" waves; wave numbers and frequencies of "slave" waves are not connected by the dispersion relation.
To separate "slave" and "free" waves one should perform a canonical transformation to new variables $b_{\vec k}$, eliminating the cubic term $H_{1}$. In the new variables the Hamiltonian takes the form [Zakharov, 1999]: \begin{eqnarray} H&=&H_{0}+H_{2} ,\nonumber\\ H_{0}&=&\int \omega_{\vec k}\, b_{\vec k}\, b^{*}_{\vec k}\, d\vec k ,\nonumber\\ H_{2}&=&\frac{1}{4}\int T_{\vec k \vec k_{1} \vec k_{2} \vec k_{3}}\,\,b^{*}_{\vec k}\, b^{*}_{\vec k_{1}}\, b_{\vec k_{2}}\, b_{\vec k_{3}}\nonumber\\ &&\hspace{0.5in}\times \delta(\vec k + \vec k_{1} - \vec k_{2}-\vec k_{3})\,d\vec k\, d\vec k_{1}\, d\vec k_{2}\, d\vec k_{3} ,\nonumber \end{eqnarray} and the dynamic equation \begin{equation} \frac{\partial b_{\vec k}}{\partial t}+i\frac{\delta H}{\delta b^{*}_{\vec k}}=0 \end{equation} in the new variables is known as the "Zakharov equation" [Zakharov, 1968] \begin{eqnarray} &&\frac{\partial b_{\vec k}}{\partial t}+i\,\omega_{\vec k}\, b_{\vec k}+\frac{i}{2}\int T_{\vec k \vec k_{1} \vec k_{2} \vec k_{3}}\,\,b^{*}_{\vec k_{1}}\, b_{\vec k_{2}}\, b_{\vec k_{3}}\nonumber\\ &&\hspace{0.5in}\times \delta(\vec k + \vec k_{1} -\vec k_{2}- \vec k_{3})\,d\vec k_{1}\, d\vec k_{2}\,d\vec k_{3}=0.\nonumber \end{eqnarray} Explicit expressions for the coefficients of the Hamiltonian, the coupling coefficient $T_{\vec k \vec k_{1} \vec k_{2} \vec k_{3}}$, and the canonical transformation can be found in [Zakharov, 1999]. Equation (5), being approximate, has a very important feature: it conserves the total wave action $$ N=\int\left|b_{\vec k}\right|^2\,d\vec k , $$ which is an adiabatic invariant. The kinetic wave equation is written for the correlation function of the $b$-variables: $$ \left\langle b_{\vec k}\,b^{*}_{\vec k^{'}}\right\rangle = n_{\vec k}\,\,\delta(\vec k-\vec k^{'}). $$ On deep water we can put approximately $$ n_{\vec k} \simeq 4 \pi^{2} g\, N_{\vec k} .
$$ It is important to stress that the kinetic equation describes not the real sea studied by experimenters but an idealized object: the ensemble of "free" waves filtered from the slave harmonics. On deep water the difference between these two ensembles is small $(1-2\%)$, while on shallow water the difference can be much more essential. The kinetic equation in terms of $N$ reads $$ \frac{\partial N}{\partial t} + \frac{\partial \omega}{\partial \vec k}\cdot \nabla N = S_{nl}+S_{in}+S_{ds}. $$ Here $S_{in}$ and $S_{ds}$ are the wind input and the dissipation due to white capping, and $S_{nl}$ has the form \begin{eqnarray} S_{nl}&=&\int S\left(\vec k, \vec k_{1}, \vec k_{2}, \vec k_{3}\right) \left(N_{\vec k_{1}} N_{\vec k_{2}} N_{\vec k_{3}} + N_{\vec k} N_{\vec k_{2}} N_{\vec k_{3}}\right.\nonumber\\ && -\left. N_{\vec k} N_{\vec k_{1}} N_{\vec k_{2}} - N_{\vec k} N_{\vec k_{1}} N_{\vec k_{3}}\right)\,\, \delta\left(\vec k + \vec k_{1}-\vec k_{2} - \vec k_{3}\right)\nonumber\\ &&\hspace{0.5in}\times\delta \left(\omega _{\vec k}+\omega _{\vec k_{1}}-\omega _{\vec k_{2}} - \omega _{\vec k_{3}}\right)\, d\vec k_{1}\, d\vec k_{2}\, d\vec k_{3} ,\nonumber \end{eqnarray} where $$ S\left(\vec k, \vec k_{1}, \vec k_{2}, \vec k_{3}\right) = (2\pi)^{4}\pi g^{2} \left|T_{\vec k \vec k_{1} \vec k_{2} \vec k_{3}}\right|^{2} $$ can be found in [Hasselmann, 1963], [Webb, 1978], [Zakharov, 1999]. The explicit expression for $S$ is pretty complicated. The most important fact is the following: $S\left(\vec k, \vec k_{1}, \vec k_{2}, \vec k_{3}\right)$ is a homogeneous function of sixth order $$ S\left(\epsilon \vec k, \epsilon \vec k_{1}, \epsilon \vec k_{2}, \epsilon \vec k_{3}\right)=\epsilon^{6} S\left(\vec k, \vec k_{1}, \vec k_{2}, \vec k_{3}\right). $$ For a rough estimate we can put $$ S\simeq k^{6}\simeq \omega^{12}. $$ This is a very rapidly growing function of frequency. This fact is of crucial importance.
Most authors agree that $S_{in}$ can be presented in the form $$ S_{in}=\beta(\omega, \theta)\,N(k), $$ where $$ \beta(\omega, \theta)=\mu\,F(\xi)\,\omega,\quad \xi=\frac{\omega \cos\theta}{\omega_0},\quad \omega_0=\frac{g}{u_{10}}. $$ Here $u_{10}$ is the wind velocity at 10 meters height; $\mu=0.1\sim 0.3$; $\rho_a/\rho_w \simeq 10^{-3}$, where $\rho_a$ and $\rho_w$ are the densities of air and water. There is no agreement about the exact form of the function $F(\xi)$. According to Hsiao and Shemdin [Hsiao, 1983] $$ \mu=0.12,\quad F(\xi)=(0.85\xi -1)^2, $$ according to Donelan [Donelan, 1985] $$ \mu\simeq 0.194,\quad F(\xi)=(\xi -1)^2, $$ while Tolman and Chalikov [Tolman, 1996] proposed a complicated form of $F(\xi)$. In all these models $\beta(\omega)\simeq \omega^3$ as $\omega\to\infty$. According to Snyder [Snyder, 1981] $$ \mu=0.25,\quad F(\xi)=\xi -1 . $$ In this case, $\beta(\omega)\simeq \omega^2$ at large $\omega$. An analytical expression for $S_{ds}$ is much less certain. Komen et al (1984) proposed the form \begin{equation} S_{ds}=-C_{dis}\,\left(\frac{\hat\alpha}{\alpha_{pm}}\right)^4\,\left(\frac{\omega} {\bar\omega}\right)^n\,\bar\omega\,N. \end{equation} Here $\hat\alpha$ is the dimensionless steepness and $C_{dis}$ is a dimensionless parameter. This formula is entirely speculative. It has no theoretical foundation and is not derived from any real experiment in the ocean, in a lake or in a wave tank. Anyway, this formula is widely used in operational models (WAM, SWAN). It is supposed in most cases that $$ n=2, \quad C_{dis}=3.33\times 10^{-5},\quad \alpha_{pm}=4.5\times 10^{-3}. $$ In our opinion, expression (6) overestimates the dissipation due to white capping at low frequencies. It can be used in the absence of a better option; however, in our opinion the parameter $n$ should be essentially larger. If $n\geq 3$, the whole picture of ocean wave turbulence does not depend on the particular value of $n$.
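For a quick numerical sense of these parametrizations, the two growth rates with simple closed forms can be evaluated directly. Note that an overall density-ratio factor $\rho_a/\rho_w$ multiplying $\beta$ is left out here, exactly as in the formulas quoted above; this is a sketch of the quoted expressions, not a tuned wind-input implementation:

```python
import math

def beta_donelan(omega, theta, u10, g=9.81):
    """Donelan (1985) growth rate as quoted in the text:
    mu = 0.194, F(xi) = (xi - 1)^2, xi = omega*cos(theta)/omega0, omega0 = g/u10,
    so beta ~ omega^3 at large omega."""
    xi = omega * math.cos(theta) * u10 / g
    return 0.194 * (xi - 1.0) ** 2 * omega

def beta_snyder(omega, theta, u10, g=9.81):
    """Snyder (1981): mu = 0.25, F(xi) = xi - 1, so beta ~ omega^2 at large omega."""
    xi = omega * math.cos(theta) * u10 / g
    return 0.25 * (xi - 1.0) * omega
```

Doubling the frequency at large $\omega$ multiplies the Donelan rate by roughly $2^3=8$ and the Snyder rate by roughly $2^2=4$, which is the asymptotic distinction stressed in the text.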
\section{Constants of motion and their fluxes} In this section we study the conservative homogeneous equation \begin{equation} \frac{\partial N}{\partial t}=S_{nl} \end{equation} in the absence of wind forcing and dissipation. It is considered that this equation has the following constants of motion: wave action, energy and momentum, \begin{eqnarray} &&N=\int N_{\vec k}\,d\vec k ,\nonumber\\ &&E=\int \omega_{\vec k} N_{\vec k}\, d\vec k ,\nonumber\\ &&\vec M=\int \vec k N_{\vec k}\, d\vec k .\nonumber \end{eqnarray} Formally speaking, this statement is correct; but the reality is much more complicated. Let us introduce polar coordinates $|k|$ and $\phi$: \begin{eqnarray} &&|k|=\frac{\omega^{2}}{g} ,\nonumber\\ &&k\,dk\,d\phi = \frac{2\omega^{3}}{g^{2}}\,d\omega\, d\phi ,\nonumber \end{eqnarray} and denote \begin{eqnarray} &&N_{\omega}\,d\omega=N_k \, d\vec k ,\nonumber\\ &&N(\omega, \phi)=\frac{2\omega^{3}}{g^{2}}\,N\left(\frac{\omega^{2}}{g}, \phi\right). \end{eqnarray} In what follows we understand $N(\omega, \phi)$ according to (8). We introduce also the angle-independent spectra \begin{eqnarray} &&N(\omega)=\int^{2\pi}_{0}N(\omega, \phi)\,d\phi ,\nonumber\\ &&E(\omega)=\omega \,\frac{N(\omega)}{2\pi},\nonumber\\ &&M_{x}(\omega)=\frac{\omega^{2}}{g}\int^{2\pi}_{0}N(\omega, \phi)\cos(\phi)\,d\phi .
\nonumber \end{eqnarray} The conservative quantities in the new variables take the form \begin{eqnarray} &&N=\int^{\infty}_{0} N(\omega)\, d\omega ,\nonumber\\ &&E=\int^{\infty}_{0} E(\omega)\, d\omega , \nonumber\\ &&M_{x}=\int^{\infty}_{0}M_{x}(\omega)\,d \omega , \nonumber \end{eqnarray} and the conservation laws of these quantities can be written in the differential form \begin{eqnarray} &&\frac{\partial N(\omega)}{\partial t}=\frac{\partial Q}{\partial \omega},\nonumber\\ &&\frac{\partial E(\omega)}{\partial t}=-\frac{\partial P}{\partial \omega},\nonumber\\ &&\frac{\partial M_{x}(\omega)}{\partial t}=-\frac{\partial K}{\partial \omega}.\nonumber \end{eqnarray} Here $Q$ is the flux of wave action to small wave numbers, while $P$ and $K$ are the fluxes of energy and momentum directed to high wave numbers. A constant of motion is "real" if the corresponding flux is zero both at zero and infinite frequencies. Otherwise it is just a "formal" constant of motion [Zakharov, Pushkarev, 2000]. Now let us introduce the differential operator $$ L=\frac{1}{2}\frac{\partial^{2}}{\partial \omega^{2}}+\frac{1}{\omega^{2}}\frac{\partial^{2}}{\partial \phi^{2}} $$ and present kinetic equation (7) in the form \begin{equation} \frac{\partial N(\omega, \phi)}{\partial t}=LA . \end{equation} Here $$ A(\omega, \phi)=L^{-1}\,S_{nl}, $$ and $A$ is the result of the action of a nonlinear integral operator on $N(\omega, \phi)$ \begin{eqnarray} A(\omega, \phi)&=&\int F(\omega, \omega_{1}, \omega_{2}, \omega_{3}, \phi-\phi_{1},\phi-\phi_{2},\phi-\phi_{3})\nonumber\\ &&\times N(\omega_{1},\phi_{1})\,N(\omega_{2},\phi_{2})\,N(\omega_{3},\phi_{3})\nonumber\\ &&\hspace{0.5in}\times d\omega_{1}\, d\omega_{2}\, d\omega_{3}\, d\phi_{1}\, d\phi_{2}\, d\phi_{3} . \end{eqnarray} The explicit expression for $F$ is given in the Appendix.
$F$ is a homogeneous function of order 12: $$ F(\epsilon \omega, \epsilon \omega_{1},\epsilon \omega_{2}, \epsilon \omega_{3})=\epsilon^{12}F(\omega, \omega_{1}, \omega_{2}, \omega_{3})\sim g^{-4}\omega^{12} . $$ Further, if we denote \begin{eqnarray} &&A(\omega)=\frac{1}{2\pi}\int^{2\pi}_{0}A(\omega, \phi)\,d\phi ,\nonumber\\ &&B(\omega)=\frac{1}{2\pi}\int^{2\pi}_{0}A(\omega, \phi)\cos\phi \, d\phi ,\nonumber \end{eqnarray} then the fluxes $Q$, $P$, $K$ can be expressed in terms of $A$, $B$ in the following form \begin{eqnarray} &&Q=\frac{\partial A}{\partial \omega},\\ &&P=A-\omega\,\frac{\partial A}{\partial \omega},\\ &&K=\frac{\omega}{g}\left(2B-\omega\frac{\partial B}{\partial \omega}\right). \end{eqnarray} Formulae (11-13) are of key importance for the theory of weak-turbulent spectra. \section{Kolmogorov spectra} In this section we study solutions of the stationary equation \begin{equation} S_{nl}=0 , \end{equation} which is equivalent to the equation \begin{equation} LA=0 . \end{equation} One class of solutions of (15) is given by solutions of the equation $$ A=0 , $$ and if these solutions exist, they are the thermodynamic Rayleigh-Jeans spectra $$ n_{\vec k}=\frac{T}{\omega_{\vec k}+\mu}. $$ In the case of surface gravity waves these solutions do not exist because of the divergence of the integrals in the operator $A$. To get physically significant solutions we can partially integrate equation (15) and put \begin{equation} A(\omega, \phi)=\omega Q+P+\frac{2Kg\cos\phi}{\omega} . \end{equation} In this case, \begin{eqnarray} &&A(\omega)=\omega Q+P,\nonumber\\ &&B(\omega)=\frac{Kg}{\omega}. \end{eqnarray} By substituting (17) into (11-13) we see that the constants $Q$, $P$, and $K$ in both cases are the same.
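The consistency of the stationary ansatz (16) with its angular averages (17) can be checked numerically; the values $Q=2$, $P=3$, $K=0.5$ below are arbitrary illustrative numbers, not data from the text:

```python
import math

def A16(omega, phi, Q=2.0, P=3.0, K=0.5, g=9.81):
    """Stationary ansatz (16): A = omega*Q + P + 2*K*g*cos(phi)/omega."""
    return omega * Q + P + 2.0 * K * g * math.cos(phi) / omega

def angle_averages(omega, n=4096, **kw):
    """Zeroth and cos-weighted angular averages of A(omega, phi):
    (1/2pi) * integral of A dphi  and  (1/2pi) * integral of A*cos(phi) dphi,
    approximated by uniform Riemann sums (exact for trigonometric polynomials).
    """
    A0 = B0 = 0.0
    for i in range(n):
        phi = 2.0 * math.pi * i / n
        val = A16(omega, phi, **kw)
        A0 += val
        B0 += val * math.cos(phi)
    return A0 / n, B0 / n
```

Since the angular mean of $\cos\phi$ vanishes and that of $\cos^2\phi$ equals $1/2$, the averages reproduce $A(\omega)=\omega Q+P$ and $B(\omega)=Kg/\omega$, i.e., relations (17).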
Equation (16) defines the most general Kolmogorov-type solution of the stationary kinetic equation; due to the homogeneity of the operator $A$ this solution can be written in the form \begin{equation} N(\omega, \phi)=\frac{g^{\frac{4}{3}}P^{\frac{1}{3}}}{\omega^{5}}R\left(\frac{\omega Q}{P}, \frac{2Kg}{\omega P}, \phi\right) \end{equation} with the energy spectrum \begin{equation} E(\omega, \phi)=\frac{g^{\frac{4}{3}}P^{\frac{1}{3}}}{\omega^{4}}R\left(\frac{\omega Q}{P}, \frac{2Kg}{\omega P}, \phi\right) . \end{equation} Let us study the most important special cases. If $Q=0$, $K=0$, formulae (18), (19) give the Zakharov-Filonenko Kolmogorov spectrum of the direct cascade \begin{eqnarray} N(\omega, \phi)&=&\frac{C_{p}\,g^{\frac{4}{3}}\,P^{\frac{1}{3}}}{\omega^5},\nonumber\\ E_{\omega}&=&\frac{C_{p}\,g^{\frac{4}{3}}\,P^{\frac{1}{3}}}{\omega^4}. \end{eqnarray} Here $$ C_{p}=R(0,0,0) $$ is the Kolmogorov constant of the direct cascade (the first Kolmogorov constant). We can offer another definition of $C_{p}$. Suppose that $N(\omega,\phi)$ is an isotropic powerlike function of $\omega$, \begin{equation} N(\omega)=\omega^{-x}. \end{equation} A special consideration (which is beyond the scope of this article) shows that the integrals in $A$ converge if $$ 0<x<\frac{19}{4}. $$ Plugging (21) into (10) we obtain \begin{equation} A(\omega)=f(x)\,\omega^{15-3x} . \end{equation} Evidently, $$ \left.f\right|_{x=5}=\frac{1}{C_{p}^{3}} . $$ Spectrum (20) has a clear physical interpretation; this spectrum is a direct analog of the classical Kolmogorov spectrum of turbulence in an incompressible fluid. This spectrum is realized if there is a source of energy at small wave numbers and a sink of energy in the high frequency region. The most general isotropic solution appears if $K=0$; then the spectrum is \begin{equation} E_{\omega}=\frac{g^{\frac{4}{3}}P^{\frac{1}{3}}}{\omega^{4}}F\left(\frac{\omega Q}{P}\right) . \end{equation} The function $F(\xi)$ depends on one variable; obviously $F(0)=C_{p}$.
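The cascade exponents can be checked by finite differences in the local (diffusion) approximation used later in the text, where $A = a\,\omega^{15}N^{3}/g^{4}$: the exponent $x=5$ (energy spectrum $E\sim\omega^{-4}$) carries a pure constant energy flux, while $x=14/3$ ($E\sim\omega^{-11/3}$) carries a pure constant action flux. A small sketch, with $a=1$ taken for illustration:

```python
def A_local(omega, x=5.0, a=1.0, g=9.81):
    """Local approximation A = a*omega^15*N^3/g^4 for the power law N = omega^{-x}."""
    return a * omega**15 * omega**(-3.0 * x) / g**4

def fluxes(omega, h=1e-4, x=5.0):
    """Action flux Q = dA/domega and energy flux P = A - omega*dA/domega,
    cf. formulas (11)-(12), via a central finite difference."""
    dA = (A_local(omega + h, x) - A_local(omega - h, x)) / (2.0 * h)
    return dA, A_local(omega, x) - omega * dA
```

For $x=5$ the computed $Q$ vanishes and $P$ is independent of $\omega$ (direct, Zakharov-Filonenko cascade); for $x=14/3$ the roles are reversed (inverse cascade).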
If $\xi\to\infty$, the spectrum should not depend on $P$. Hence $F(\xi)\to C_{q}\,\xi^{1/3}$ as $\xi\rightarrow\infty$, and equation (23) goes over to the Zakharov-Zaslavskii spectrum of the inverse cascade \begin{equation} E(\omega)=\frac{g^{\frac{4}{3}}C_{q}Q^{\frac{1}{3}}}{\omega^{\frac{11}{3}}} , \end{equation} where $C_{q}$ is the Kolmogorov constant of the inverse cascade. Spectrum (24) presumes that there is a source of wave action $Q$ at high frequencies and a sink of wave action at small frequencies. The general isotropic spectrum (23) is realized if there exist simultaneously a source of energy and a sink of wave action at small frequencies together with an energy sink and a wave action source at high frequencies. If we study the most anisotropic case $Q=0$, $P=0$, then equation (16) has the following solution \begin{equation} N(\omega, \phi)=\frac{g^{\frac{4}{3}}\,h(\phi)\,(Kg)^{\frac{1}{3}}}{\omega^{\frac{13}{3}}}. \end{equation} From symmetry considerations we can conclude that $$ h(\phi)=-h\,(\pi-\phi). $$ Hence solution (25) is not positive for all angles. This is a reason to doubt that the general solution (19) is essentially positive and can be realized in the whole $(\omega,\phi)$ plane. Anyway, it can be used for approximating real spectra in some finite part of the wave-vector plane. In the important case of $Q=0$, solution (19) takes the form \begin{equation} E(\omega,\phi)=\frac{g^{\frac{4}{3}}P^{\frac{1}{3}}}{\omega^4}H\left(\frac{gK}{\omega P},\phi\right). \end{equation} Spectrum (26) can be studied at small values of $g K/\omega P$. Expanding in a Taylor series in this parameter, we obtain $$ E(\omega,\phi)=\frac{g^{\frac{4}{3}}\,P^{\frac{1}{3}}}{\omega^4}\left(C_{p}+\frac{\alpha(\phi)\, gK}{\omega P}+ \cdots\right) . $$ The correction to the isotropic spectrum satisfies the linearized equation (16). Since this situation is invariant with respect to rotations in the $(\omega,\phi)$ plane, $$ \alpha(\phi)=C_{2}\cos\phi .
$$ Coefficient $C_{2}$ is known as the second Kolmogorov constant. If $\omega\rightarrow\infty$, then the right-hand side of (16) becomes independent of angle. This means that the Kolmogorov solution becomes isotropic at large $\omega$. However, the real observed spectra remain anisotropic for arbitrarily large $\omega$. The explanation is the following: in a real situation the momentum flux $K$ is not constant but is approximately proportional to frequency. \section{Local diffusion approximation} Many important features of the weak-turbulent theory can be understood within the framework of a very simple model. Let us accept the following approximation for the operator $A$ [Pushkarev, Zakharov, 1999]: \begin{equation} A(\omega,\phi)=\frac{a\,\omega^{15}}{g^{4}}\,N^{3}. \end{equation} Here $a$ is a dimensionless constant, which should be found by comparison with experiment. In this case the kinetic equation turns into the nonlinear diffusion equation \begin{equation} \frac{\partial N}{\partial t}=\frac{a}{g^4} \left(\frac{1}{2}\,\frac{\partial^2} {\partial\omega^2}+\frac{1}{\omega^2}\,\frac{\partial^2}{\partial\phi^2}\right)\omega^{15}\,N^{3}. \end{equation} Function $A$ has a very simple form and the general Kolmogorov solution is \begin{equation} E(\omega,\phi)=\frac{g^{\frac{4}{3}}\,P^{\frac{1}{3}}}{a^{\frac{1}{3}}\,\omega^{4}}\left(1+\frac{\omega\, Q}{P}+\frac{2K\,g\cos\phi}{\omega P}\right)^{\frac{1}{3}}. \end{equation} Now \begin{eqnarray} &&C_{p}=C_{q}=a^{-\frac{1}{3}},\nonumber\\ &&h(\phi)=\left(\frac{2\cos\phi}{a}\right)^{\frac{1}{3}}.\nonumber \end{eqnarray} Solution (29) is positive for all angles only at large enough frequencies, satisfying the inequality $$ \frac{\omega P}{2Kg}\left(1+\frac{\omega Q}{P}\right)>1 .
$$ Comparing (26) with (9) we find that in this case \begin{eqnarray} A(\omega, \phi)&=& \frac{a}{g^4}\,\omega^{15}\,N^3,\nonumber\\ A(\omega)&=&\frac{a}{g^4}\,\omega^{15}\,\frac{1}{2\pi}\int_{0}^{2\pi}N^3\,d\phi,\nonumber\\ B(\omega)&=&\frac{a}{g^4}\,\omega^{15}\,\frac{1}{2\pi}\int_{0}^{2\pi}N^3\cos\phi\,d\phi,\nonumber \end{eqnarray} and for the general Kolmogorov solution we obtain \begin{eqnarray} A(\omega,\phi)&=&P+\omega\,Q+\frac{2Kg\cos\phi}{\omega},\nonumber\\ A(\omega)&=&P+\omega\,Q,\nonumber\\ B(\omega)&=&\frac{K g}{\omega}. \nonumber \end{eqnarray} Both $A(\omega)$ and $B(\omega)$ are essentially positive. This is correct for the general nonlocal case (12). The formula for $A(\omega,\phi)$ presumes that a solution has sources of energy and momentum, $P$ and $K$, as well as a sink of wave action $Q$ at the point $\omega=0$. Within the framework of the local approximation we can efficiently study the forced stationary equation \begin{equation} S_{nl}+S_{in}+S_{ds}=0. \end{equation} We can assume that $$ S_{in}+S_{ds}=\beta(\omega,\theta)\,N(\omega,\theta), $$ and restrict our consideration to the case of angular symmetry, $\beta=\beta(\omega)$, only. Then equation (30) reads \begin{equation} \frac{a}{2g^4}\,\frac{\partial^2}{\partial\omega^2}\,\omega^{15}\,N^3 +\beta(\omega)\,N=0. \end{equation} Another form of this equation is the following: \begin{equation} \frac{\partial^2}{\partial\omega^2}A+\frac{g^{4/3}}{a^{1/3}}\,\frac{\beta(\omega)}{\omega^5}\,A^{1/3}=0. \end{equation} In a real situation the solution $N(\omega)$ is concentrated in a finite frequency band \begin{eqnarray} &&N>0, \quad \omega_1<\omega<\omega_2 ,\nonumber\\ &&N=0, \quad\omega<\omega_1 ,\,\,\omega>\omega_2 . \end{eqnarray} From continuity of $N$ and $\partial N/\partial\omega$ we obtain \begin{equation} N\left.\right|_{\omega=\omega_1}=0,\,\,\, \left.
\frac{\partial N}{\partial\omega} \right |_{\omega=\omega_1}=0,\,\,\, N|_{\omega=\omega_2}=0,\,\,\,\left.\frac{\partial N}{\partial\omega}\right |_{\omega=\omega_2}=0. \end{equation} Conditions (33) define the boundary value problem for equations (31), (32). Since in the neighborhood of the ends of the interval (33) there exist the asymptotics \begin{eqnarray} A&\simeq &\frac{1}{6} P_1 (\omega -\omega_1)^3 , \quad P_1>0 ,\nonumber\\ A&\simeq&\frac{1}{6} P_2(\omega_2 -\omega)^3 , \quad P_2>0 , \end{eqnarray} we obtain from (32) the following expressions for $\beta$: \begin{eqnarray} &&\beta(\omega_1)=-P_1^{2/3}\,\omega_1^5\,\left(\frac{6a}{g^4}\right)^{1/3},\nonumber\\ &&\beta(\omega_2)=-P_2^{2/3}\,\omega_2^5\left(\frac{6a}{g^4}\right)^{1/3}. \end{eqnarray} We can see now that the boundary problem has nontrivial solutions only if $\beta(\omega)$ is negative at both ends of the interval $\omega_1<\omega<\omega_2$. This conclusion is very general. To get a stationary solution of equation (30) we should have sinks both in the low and the high frequency regions. This statement, without a proof, can be found in the paper [Komen, Hasselmann, Hasselmann, 1984]. Conditions (34) impose four restrictions on the function $N(\omega)$, which satisfies a second-order ODE. This is not an overdetermination, because the ends of the interval $\omega_1<\omega<\omega_2$ are unknown. They can be found from the following conditions of wave action and energy balance: \begin{eqnarray} &&\int_{\omega_1}^{\omega_2}\beta(\omega)\,N(\omega)\,d\omega=0,\nonumber\\ &&\int_{\omega_1}^{\omega_2}\omega\,\beta(\omega)\,N(\omega)\,d\omega=0.\nonumber \end{eqnarray} To satisfy the balance conditions, we should have at least one domain of instability, where $\beta(\omega)>0$, inside the interval $\omega_1<\omega<\omega_2$. In a typical situation there is one such area. In this case $A(\omega)$ has only one maximum, at a point $\omega_3$ ($\omega_1<\omega_3<\omega_2$), and the whole interval can be divided into three domains: 1. The area where $A(\omega)$ grows.
Suppose that in some interval $A(\omega)$ can be approximated by a linear function $$ A(\omega)=Q(\omega -\omega_0). $$ In this area, \begin{eqnarray} &&Q=\frac{\partial A}{\partial\omega}=const,\nonumber\\ &&P=A-\omega\frac{\partial A}{\partial\omega}\sim -\omega_0\,Q<0 .\nonumber \end{eqnarray} This is the area of \underline{inverse cascade}. A margin of this area is the frequency $\omega^*$, where the flux of energy $P$ changes sign: $P(\omega^*)=0$. 2. The area near $\omega\simeq\omega^*$, where $A(\omega)$ is almost constant. Here $Q$ is small, while $P=A(\omega^*)$ is large and positive. This is the area of \underline{direct cascade}. 3. The area of dissipation, where $\partial A/\partial\omega<0$. In this area $P>0,\,Q<0$. Both the energy and the wave action are carried out to the zone of high frequencies. In the area of direct cascade, equation (32) can be approximately integrated. We can rewrite this equation as \begin{equation} \frac{\partial}{\partial\omega}\left(\omega\,\frac{\partial A}{\partial\omega}-A\right)+\frac{g^{4/3}}{a^{1/3}}\,\frac{\beta(\omega)}{\omega^4}\,A^{1/3}=0, \end{equation} and put $$ \omega\,\frac{\partial A}{\partial\omega}\ll A. $$ This makes it possible to integrate (35); the integration yields \begin{eqnarray} &&\frac{\partial A}{\partial\omega}=\frac{g^{4/3}}{a^{1/3}}\,\frac{\beta(\omega)}{\omega^4}\,A^{1/3},\nonumber\\ &&A^{2/3}=\frac{2}{3}\,\frac{g^{4/3}}{a^{1/3}}\,\int_{\omega^*}^{\omega}\frac{\beta(\omega)}{\omega^4}\,d\omega .\nonumber \end{eqnarray} In this approximation \begin{eqnarray} &&P=A+P_0\,\ln\left(\frac{\omega}{\omega^*}\right)^{3/2},\nonumber\\ &&P_0=\frac{g^2}{a^{1/2}}\left(\frac{2}{3}c\right)^{3/2},\nonumber \end{eqnarray} and $P_0$ is a slow function of $\omega$. For the spectrum in the direct cascade area we have \begin{equation} E(\omega)\simeq P_0^{1/3}\frac{\left(\ln \frac{\omega}{\omega^*}\right)^{1/2}}{\omega^4}.
\end{equation} Since in experiments $\omega^* \leq \omega_p$ ($\omega_p$ is the frequency of the spectral peak), at the current level of experimental accuracy it is not easy to distinguish formula (36) from the ZF spectrum $\omega^{-4}$. We should stress again that the forced stationary equation (30) has a regular solution if and only if there are regions of intensive damping both at small and at high wave numbers. What happens in other cases? Suppose there is no damping at all. In other words, \begin{eqnarray} &&\beta(\omega)>0,\quad \omega_1<\omega<\omega_2 ,\nonumber\\ &&\beta(\omega)=0, \quad \omega<\omega_1 ,\,\,\,\,\omega>\omega_2 .\nonumber \end{eqnarray} Then in the isotropic case, outside of the forcing areas the spectra turn into Kolmogorov-type spectra \begin{eqnarray} &&\epsilon(\omega)= \frac{g^{4/3}\,P^{1/3}}{a^{1/3}\,\omega^4}, \quad \omega>\omega_2 ,\nonumber\\ &&\epsilon(\omega)= \frac{g^{4/3}\,Q^{1/3}}{a^{1/3}\,\omega^{11/3}}, \quad \omega<\omega_1 , \end{eqnarray} and the fluxes of energy and wave action have the form \begin{eqnarray} &&P=\int_{\omega_1}^{\omega_2} \beta(\omega)\,\omega \,\epsilon(\omega)\,d\omega ,\nonumber\\ &&Q=\int_{\omega_1}^{\omega_2} \beta(\omega) \,\epsilon(\omega)\,d\omega . \end{eqnarray} There is a difference of principal importance between the area of direct cascade $\omega>\omega_2$ and the area of inverse cascade $\omega<\omega_1$. In the area of direct cascade, the integrals of motion, energy and wave action, are finite. On the contrary, in the inverse cascade area the energy and the wave action diverge. We can say that the direct cascade has finite capacity, while the inverse cascade has infinite capacity. A situation with no sink at high wave numbers is purely theoretical. Such a sink always exists due to a plethora of physical reasons: viscosity, transformation of gravity waves into capillary waves, and finally wave breaking.
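Returning for a moment to the local diffusion model: the claim that the combination $A = P + \omega Q + 2Kg\cos\phi/\omega$ of (16) solves the stationary problem can be checked directly, since the operator of (28) acts on $A$ as $L = \tfrac{1}{2}\partial^2_\omega + \omega^{-2}\partial^2_\phi$. A finite-difference sketch (the numerical values of $P$, $Q$, $K$, $g$ are arbitrary test values):

```python
# Finite-difference check that A = P + w*Q + 2*K*g*cos(phi)/w is annihilated
# by L = (1/2) d^2/dw^2 + (1/w^2) d^2/dphi^2. P, Q, K values are arbitrary.
import math

P, Q, K, g = 1.3, 0.7, 0.4, 9.8

def A(w, phi):
    return P + w * Q + 2 * K * g * math.cos(phi) / w

def LA(w, phi, h=1e-4):
    d2w = (A(w + h, phi) - 2 * A(w, phi) + A(w - h, phi)) / h**2
    d2phi = (A(w, phi + h) - 2 * A(w, phi) + A(w, phi - h)) / h**2
    return 0.5 * d2w + d2phi / w**2

print(abs(LA(2.0, 0.9)) < 1e-5)  # True: the general Kolmogorov solution solves LA = 0
```

The two $\cos\phi/\omega$ contributions cancel exactly ($\tfrac{1}{2}\cdot 2/\omega^3$ against $-1/\omega^3$), which is why the angular term in (16) carries the momentum flux.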
A scrupulous consideration of these processes is not necessary for understanding what happens near the spectral peak. Moreover, according to our preliminary study, the Kolmogorov spectrum of the direct cascade can be formed in a finite time [Pushkarev, Resio, Zakharov, 2000]. On the contrary, the Kolmogorov spectrum of the inverse cascade, due to its infinite capacity, cannot be formed in a finite time. If there is no intensive enough damping at small wave numbers, the inverse cascade cannot be arrested. The downshift of the spectral peak to the small-frequency area will continue indefinitely until it is stopped by topographical (better to say, geographical) factors. \section{Discussion} The first weak-turbulent Kolmogorov spectrum for gravity waves, $\epsilon_{\omega}\simeq \omega^{-4}$, was derived analytically as an exact solution of the kinetic equation in 1966 [Zakharov, Filonenko, 1966]. The second Kolmogorov spectrum $\epsilon_{\omega}\simeq \omega^{-11/3}$ was obtained in the same year in my PhD thesis in Novosibirsk [Zakharov, 1966]; in a regular journal the spectrum was published in 1982 [Zakharov, Zaslavskii, 1982]. It is now called the Zakharov-Zaslavskii (ZZ) spectrum. The spectrum $\epsilon_{\omega}\simeq \omega^{-4}$ was first observed experimentally by Toba in 1972. Since then this spectrum has been observed by many researchers [Forrestal, 1981; Kahma, 1981; Kawai et al, 1977; Donelan et al, 1985]. In 1987 Battjes et al found that the spectrum $\omega^{-4}$ fits the JONSWAP experiment much better than $\omega^{-5}$. In 1985 O. Phillips published a well-known article, where he admitted that the $\omega^{-4}$ spectrum fits the experiment better than the "Phillips spectrum" $\omega^{-5}$. However, he did not offer a proper theoretical explanation of this fact. In 1982-83 Zakharov and Zaslavskii published four articles in Russian journals on the application of the weak turbulence theory to the wind-driven sea [Zakharov, Zaslavskii, 1982; 1983]; soon after, S.
Kitaigorodskii successfully used these results for the interpretation of experimental data [Kitaigorodskii, 1983]. Since that time the weak-turbulent theory has been known to the world community of oceanographers; however, even now this theory is not completely accepted. For almost thirty years the obvious facts: 1. $\epsilon_{\omega}\sim \omega^{-4}$ is the exact solution of the stationary kinetic equation, 2. $\epsilon_{\omega}\sim \omega^{-4}$ is the spectrum persistently observed in all experiments, have coexisted separately in the collective consciousness, almost without interacting with each other. I believe this is a unique situation in the history of science. The standard arguments against the weak-turbulent theory are the following (see, for instance, Komen, Cavaliery et al, 1994). 1. The Zakharov-Filonenko spectrum is isotropic, while the real spectra are anisotropic. This argument is not a serious one. The isotropic spectrum $\omega^{-4}$ is the simplest example of weak-turbulent spectra. We showed in this paper that more general Kolmogorov spectra, which carry momentum to the high-frequency region, are anisotropic. Anyway, they are very close to $\omega^{-4}$. 2. In the "classical" theory of turbulence in an incompressible fluid, the source of energy is concentrated at small wave numbers, while in the case of gravity waves the source of energy is distributed along the whole spectrum. One can add that the source of momentum is concentrated mostly in short waves. The answer is the following: in a real situation the flux of energy is not constant inside the universal interval; this flux is a slowly growing function of frequency. This leads to a nonessential modification: the appearance of a slowly growing pre-factor proportional to $(\ln \omega/\omega^*)^{1/2}$. We should stress again that the forced stationary kinetic equation (30) has a solution that contains a finite amount of energy if and only if there is intensive dissipation in the low-frequency region.
This dissipation arrests the inverse cascade and is essential in the spectral peak area. Thus in this area equation (30) can be reduced to the form $$ S_{nl}+S_{ds}=0. $$ We would like to stress that the physical origin of this low-frequency dissipation is unclear; the very fact of the existence of this dissipation, and the whole concept of the "fully-developed" or "mature" sea, is questionable. We will discuss this subject in detail in another article. In the universal region behind the spectral peak, the solutions of the full equation (30) can be treated as solutions of the simple stationary equation $$ S_{nl}=0 $$ with frequency-dependent values of the energy, wave action, and momentum fluxes. This is the central point of the theory of weak turbulence. We can add that this point is now supported by massive numerical experiments (see for instance [Badulin et al, 2002], [Lavrenov et al, 2002], [Pushkarev et al, to be published]). \medskip The research presented in this paper was supported by NSF grant NDM0072803 and by the Army Corps of Engineers, RTDIE program, grant DACA 42-00-C0044.
\section*{Appendix} The solution of the equation \begin{equation} LA=\left(\frac{1}{2}\,\frac{\partial^2}{\partial \omega^2}+\frac{1}{\omega^2}\,\frac{\partial^2}{\partial \phi^2}\right)\,A=f(\omega,\,\phi) \end{equation} is given in terms of the Green function $$ A(\omega,\,\phi)=\int_0^{\infty}\int_0^{2\pi}\,G(\omega,\,\omega',\,\phi -\phi')\,f(\omega',\,\phi')\,d\omega'\,d\phi', $$ where \begin{eqnarray} &&G(\omega,\,\omega',\,\phi-\phi')=-\frac{1}{2\pi}\sum_{n=-\infty}^{\infty} \frac{\sqrt{\omega\,\omega'}}{\Delta_n} \,e^{in(\phi -\phi')} \nonumber\\ &&\times\left[\left(\frac{\omega'}{\omega}\right)^{\Delta_n}\,\Theta\left(1-\frac{\omega'} {\omega}\right)+ \left(\frac{\omega}{\omega'}\right)^{\Delta_n}\,\Theta \left(\frac{\omega'} {\omega}-1\right)\right] .\nonumber \end{eqnarray} Here $$ \Delta_n=\sqrt{\frac{1}{4}+2n^2} $$ and $$\Theta(\xi)=\left\{ \begin{array}{cc} 1&\xi>0\\ 0&\xi<0 \end{array} \right\} . $$ Equation (10) appears after substituting $S_{nl}(\omega',\,\phi')$ for $f(\omega',\,\phi')$ in formula (40).
\section{Introduction} It is experimentally well known that the energy dependence of the charged particle multiplicities in $e^+ e^-$ and $pp/{\bar p}p$ processes exhibits a quite similar behavior. In the late 70's, experiments analysing $p p$ collisions at the CERN ISR Collider \cite{basile} showed that not all of the total center-of-mass energy $\sqrt s$ is used for particle production; instead, a considerable fraction of the available energy is carried away by the leading proton. These experiments have shown that a more adequate way of comparing average multiplicities from different reactions is in terms of the amount of energy effectively used for multiparticle production. The problem is how to determine this quantity. Observations like these have inspired several attempts to describe ${\langle n_{ch} \rangle}_{e^+e^-}$ and ${\langle n_{ch} \rangle}_{pp}$ by a universal function. In ref.\cite{pol}, for instance, two corrections are made to compare these quantities: the energy variable for ${\langle n_{ch} \rangle}_{pp}$ is corrected by removing the portion referring to the elasticity (the fraction of the energy taken by the leading particle), and then the average leading proton multiplicity is subtracted. A similar idea is followed in ref.\cite{ref1}, where attempts are made to establish this universal behavior by fitting. In the present paper, we analyze the same subject by following an analogous point of view, but rephrasing the procedure in the following way.
It is assumed that, if in $e^+ e^-$ collisions the average charged particle multiplicity is given by \begin{equation} {\langle n_{ch} \rangle}_{e^+e^-}=N(\sqrt{s}), \label{mult1} \end{equation} then in ${pp}/{\bar p}p$ collisions we have \begin{equation} {\langle n_{ch} \rangle}_{pp}=\langle n_0 \rangle + N(\langle k_p\rangle\sqrt{s}), \label{mult2} \end{equation} where $N(W)$ is a universal function of the energy available for multiparticle production, $W$, $\langle n_0 \rangle$ is the average leading particle multiplicity, and $\langle k_p\rangle$ is the average inelasticity. In \cite{pol} and \cite{ref1}, the quantities related to $\langle n_0 \rangle$ and $\langle k_p\rangle$ are supposed to be constant. In particular, in \cite{ref1} they are determined by a simultaneous fit of ${\langle n_{ch} \rangle}_{e^+e^-}$ and ${\langle n_{ch} \rangle}_{pp}$ data. Our procedure, instead, consists in obtaining these quantities ($\langle n_0 \rangle$ and $\langle k_p\rangle$) not from fitting ${\langle n_{ch} \rangle}_{e^+e^-}$ and ${\langle n_{ch} \rangle}_{pp}$, but in a totally independent way, from the inclusive reaction $p p \rightarrow p X$, paying particular attention to their energy dependence. After doing that, the obtained $\langle n_0 \rangle$ and $\langle k_p\rangle$ are applied to (\ref{mult2}) via a parametrization of (\ref{mult1}), and the result is compared to data in order to verify to what extent such a hypothesis is acceptable. This procedure seems to be very well defined and straightforward, but it should be noticed that it leads to some difficult problems. The point is that it requires previous knowledge about the energy dependence of the inelasticity and about the behavior of the average leading particle multiplicity, which constitute problematic subjects themselves.
In particular, the energy dependence of the average inelasticity is a much disputed question, since there are opposite claims that this quantity increases \cite{nos,inc,nik,gaisser} or that it decreases \cite{igm,dec,he} with increasing energy, at quite different rates. In spite of models predicting extreme behaviors, {\it i.e.} very fast increase of the inelasticity (like in \cite{nik}) or very fast decrease (like in \cite{igm}), most of the analyses referred to here report the average inelasticity as having a smooth and slowly changing behavior. \footnote{For a recent account on this subject from the viewpoint of cosmic-ray data, see ref.\cite{bellandi_novo}.} This is once again verified here in a different way. The idea of discussing the energy behavior of the average inelasticity in connection with the energy dependence of $\langle n_0 \rangle$ and $\langle k_p\rangle$ is not new. Of particular interest to the present work is an analysis with this purpose performed by He \cite{he}. He has extracted values of the average inelasticity by using arguments similar to those given above and obtained results pretty much in agreement with the predictions of ref.\cite{igm}. We shall argue below that such an agreement is probably due to the fact that two important effects are missing in his analysis. Another controversial question involved in the present analysis (but treated here just {\it en passant}) is that of unitarity violation in diffractive dissociation processes. This is a long-standing problem that has come back to the scene due to the fact that recent measurements on hard diffractive production of jets and W's revealed a large discrepancy between data and theoretical predictions. In ref. \cite{dino}, it is proposed that such a discrepancy in hard diffraction has to do with unitarity violation in single diffractive processes.
Since we are going to describe the inclusive reaction $p p \rightarrow p X$, we have to face this problem in the region of the spectrum where diffractive processes are dominant. A by-product of the present analysis is a complete parametrization of the reaction $p p \rightarrow p X$ in the whole phase space. This is obtained basically within the Regge-Mueller approach \cite{collins}, but including the modifications suggested in \cite{dino} for the diffractive contribution. The paper is organized as follows. In Section 2 we present the theoretical framework used to describe the leading particle spectrum. Section 3 is devoted to showing how this formalism is applied to describe the experimental data. In Section 4 we discuss the connection between ${\langle n_{ch} \rangle}_{e^+e^-}$ and ${\langle n_{ch} \rangle}_{pp}$. Our main conclusions are summarized in Section 5. \section{\bf Leading particle spectrum} In order to perform our analysis, we need to calculate the quantities \begin{equation} \langle n_0 \rangle=\frac{1}{\sigma_{inel}} \int{\frac{d\sigma} {dx_F}\ dx_F} \label{n0} \end{equation} and \begin{equation} \langle k_p \rangle = 1 - \langle x_F \rangle = 1-\frac{1}{\sigma_{incl}}\ \int{x_F} \ \frac{d\sigma} {dx_F}\ dx_F \label{kp} \end{equation} as a function of energy. We apply the Landshoff parametrization \cite{mul4} $\sigma_{inel}(s) = 56\ s^{-0.56} + 18.16\ s^{0.08}\ [mb]$ to represent the inelastic cross section within the energy range where the multiplicity data are included, and the inclusive cross section is simply given by $\sigma_{incl}\equiv \int{\frac{d\sigma} {dx_F}}\ dx_F$. Thus, the whole analysis depends on the knowledge of the leading particle spectrum ${d\sigma}/{dx_F}$ and its evolution with energy. The determination of this spectrum is detailed in the discussion that follows.
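Once $d\sigma/dx_F$ is known, the two averages (\ref{n0}) and (\ref{kp}) are simple one-dimensional integrals. A minimal numerical sketch, using a toy flat spectrum and toy cross-section values purely for illustration (these are assumptions, not the parametrizations fitted in this paper):

```python
# Toy evaluation of <n_0> and <k_p> for a flat leading-proton spectrum
# dsigma/dx_F = const on [0, 1]. The flat shape and the numerical values of
# the cross sections are placeholder assumptions for illustration only.

def dsigma_dxf(xf, c=20.0):   # toy flat spectrum, in mb
    return c

def simpson(f, a, b, n=1000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

sigma_inel = 35.0                                  # toy value, mb
sigma_incl = simpson(dsigma_dxf, 0.0, 1.0)
n0 = sigma_incl / sigma_inel                       # average leading multiplicity
mean_xf = simpson(lambda x: x * dsigma_dxf(x), 0.0, 1.0) / sigma_incl
kp = 1.0 - mean_xf                                 # average inelasticity

print(round(n0, 3), round(kp, 3))  # a flat spectrum gives <k_p> = 0.5
```

A flat spectrum yields $\langle k_p\rangle = 1/2$, which is why the shape of the diffractive peak and of the tails, not the normalization alone, controls the energy dependence extracted below.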
The invariant cross section for the inclusive reaction $a b \rightarrow c X$ is given by \begin{equation} E\frac{d^3\sigma}{d{\bf p}^3} = \frac{2 E}{\pi {\sqrt s}}\frac{d^{3}\sigma} {dx_F\ dp_{T}^{2}} \end{equation} where $x_F = 2 p_L /{\sqrt s}$ is the Feynman variable for the produced particle $c$ and $E,\ p_L,\ p_T$ are respectively its energy, longitudinal and transversal momenta. Particularly in the diffractive region ($x_F \approx 1$) such a quantity is usually expressed in terms of \begin{equation} E\frac{d^3\sigma}{d{\bf p}^3} = \frac{s}{\pi} \frac{d^2\sigma}{dt\ dM^2} = \frac{x_E}{\pi x_F} \frac{d^2\sigma}{dt\ d\xi}, \end{equation} with $x_E = 2 E /{\sqrt s}$, $\xi = M^2/s = 1-x_F$ and $-t = m_c^2\ {(1-x_F)^2}/{x_F} + {p_T^2}/{x_F}$. Variable $M^2$ is the missing mass squared, defined as $M^2 \equiv (p_a + p_b - p_c)^2$. The procedure to calculate the invariant cross section employed here comes from the Regge-Mueller formalism, which consists basically of the application of the Regge theory for hadron interactions to Mueller's generalized optical theorem. This theorem establishes that the inclusive reaction $a b \rightarrow c X$ is connected to the elastic three-body amplitude $A (a b {\bar c} \rightarrow a b {\bar c})$ via \begin{equation} E\frac{d^3\sigma}{d{\bf p}^3} (a b \rightarrow c X) \sim \frac{1}{s} Disc_{M^2} \ A (a b {\bar c} \rightarrow a b {\bar c}), \label{disc} \end{equation} where the discontinuity is taken across the $M^2$ cut of the elastic amplitude. It is assumed that this amplitude in turn is given by the Regge pole approach. Different kinematical limits imply specific formulations for the invariant cross section in the fragmentation and central regions. In the following, we specify the concrete expressions that these formulations assume in such regions (details can be found in \cite{collins}). \centerline{\bf A.
Fragmentation Region} In our description, the invariant cross section for the reaction $p p \rightarrow p X$ in the fragmentation region is composed of three predominant contributions, which are determined within the Triple Reggeon Model (this is the particular formulation that (\ref{disc}) assumes in the beam fragmentation region with the limits $M^2 \rightarrow \infty$ and $s/M^2 \rightarrow \infty$ \cite{collins}). These contributions, depicted in Fig.2, correspond to pomeron, pion and reggeon exchanges and are referred to as $\rm I\!P \rm I\!P \rm I\!P$, $\pi \pi \rm I\!P$, $\rm I\!R \rm I\!R \rm I\!P$, respectively. In the diffractive region, the $\rm I\!P \rm I\!P \rm I\!P$ contribution is dominant and (we assume, for the reasons given below) is given by \begin{equation} \left (\frac{d^2\sigma}{dtd\xi}\right )_{\rm I\!P \rm I\!P \rm I\!P} =f_{\rm I\!P, Ren}(\xi,t)\times \sigma_{\rm I\!P p} (s\xi) \label{mult6} \end{equation} where $f_{\rm I\!P, Ren}(\xi,t)$ is the {\it renormalized} pomeron flux factor proposed in \cite{dino} with the parameters defined in \cite{covolan}, that is \begin{equation} f_{\rm I\!P, Ren}(\xi,t) = \frac{f_{\rm I\!P} (\xi,t)}{N(s)} \label{renf} \end{equation} with the Donnachie-Landshoff flux factor \cite{dl} \begin{equation} f_{\rm I\!P}(\xi,t)=\frac{{\beta}_{0}^{2}}{16\pi} F_{1}^2(t)\ \xi^{1-2{\alpha}_{\rm I\!P}(t)} \label{dlf} \end{equation} and \begin{equation} N(s)=\int_{1.5/s}^{1} \int^{t=0}_{-\infty} f_{\rm I\!P}(\xi,t)\ dt\ d\xi. \end{equation} In the above expressions, $F_1(t)$ is the Dirac form factor, \begin{equation} F_1(t) = \frac{(4m^2-2.79t)}{(4m^2-t)}\ \frac{1} {(1-\frac{t}{0.71})^2}, \end{equation} the pomeron trajectory is $\alpha_{\rm I\!P}(t)= 1+\epsilon +\alpha^{'}t$ with $\epsilon=0.104$, $\alpha^{'}=0.25\ GeV^{-2}$ and $\beta_0=6.56\ GeV^{-1}$, determined from \cite{covolan2}.
In Eq.(\ref{mult6}), the pomeron-proton cross section is given by \begin{equation} \sigma_{\rm I\!P p} (M^2) = \beta_{0}\ g_{\rm I\!P}\ (s\xi )^{\epsilon} \end{equation} with the triple pomeron coupling determined from data as $g_{\rm I\!P}=1.21\ GeV^{-1}$. Since this scheme to calculate the diffractive contribution is not the usual one, some comments are in order. The usual derivation of the Triple Pomeron Model gives (\ref{dlf}), the {\it standard flux factor}, instead of (\ref{renf}), the renormalized one. The problem is that the standard flux factor leads to strong unitarity violation, and the {\it renormalization} procedure was conceived \cite{dino} as an {\it ad hoc} way to overcome this difficulty. Although a rigorous demonstration of the renormalized scheme is still missing, it is acceptable in the sense that it provides a good description of the experimental data in the diffractive region (see a detailed discussion in \cite{dino_novo}). The pion contribution ($\pi \pi \rm I\!P$) is given by \cite{field} \begin{equation} \left (\frac{d^2\sigma}{dtd\xi}\right )_{\pi \pi \rm I\!P } = f_{\pi}(\xi,t) \times \sigma_{\pi p}(s\xi) \label{mult7} \end{equation} where \begin{equation} f_{\pi}(\xi,t) = \frac{1}{4\pi}\frac{g^2}{4\pi}\ \frac{|t|}{(t-\rho^2)^2}\ e^{b_{\pi} (t-\rho^2)} \xi^{1-2{\alpha}_{\pi}(t)} \end{equation} and $\alpha_{\pi}(t)=0.9(t-\rho^2)$ with $\rho^{2}=m_{\pi}^{2}=0.02\ GeV^2$. We follow \cite{robinson} in fixing the coupling constant at $g^2/4\pi=15.0$ and putting $b_{\pi}=0$ (see also \cite{field}). The pion-proton cross section $\sigma_{\pi p}(s\xi)=10.83\ (s\xi)^{0.104}\ +\ 27.13\ (s\xi)^{-0.32}\ [mb]$ is taken from \cite{covolan2}. If one considers only the diffractive and near-to-diffractive regions and low $p_T$ ($-t \sim 0.0 - 0.1\ GeV^2$), the contributions outlined above are enough to provide a good description of the available data (see \cite{dino_novo}).
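The renormalization factor $N(s)$ of Eq.(\ref{renf}) is a double integral of the Donnachie-Landshoff flux. A numerical sketch, using the parameter values quoted above; the truncation of the $t$-integration at $-10\ GeV^2$ (where the form factor has made the integrand negligible) and the simple midpoint grid in $\ln\xi$ and $t$ are choices of this sketch, not of the paper.

```python
# Sketch of the renormalization integral N(s): the Donnachie-Landshoff flux
# integrated over 1.5/s < xi < 1 and t < 0 (t truncated at -10 GeV^2, an
# assumption of this sketch). Parameters are those quoted in the text.
import math

beta0, eps, alpha_prime, m = 6.56, 0.104, 0.25, 0.938

def F1(t):
    # Dirac form factor
    return (4 * m**2 - 2.79 * t) / (4 * m**2 - t) / (1 - t / 0.71)**2

def flux(xi, t):
    # standard Donnachie-Landshoff pomeron flux factor
    alpha = 1 + eps + alpha_prime * t
    return beta0**2 / (16 * math.pi) * F1(t)**2 * xi**(1 - 2 * alpha)

def N(s, n_xi=400, n_t=200, t_min=-10.0):
    # midpoint rule, logarithmic grid in xi to resolve the small-xi divergence
    u_lo, u_hi = math.log(1.5 / s), 0.0
    du = (u_hi - u_lo) / n_xi
    dt = -t_min / n_t
    total = 0.0
    for i in range(n_xi):
        xi = math.exp(u_lo + (i + 0.5) * du)
        for j in range(n_t):
            t = t_min + (j + 0.5) * dt
            total += flux(xi, t) * xi * du * dt
    return total

# N(s) grows with s, so the renormalized flux f/N(s) is progressively damped
print(N(10**4) < N(10**6))
```

The growth of $N(s)$ with energy is precisely what tames the unitarity violation of the standard flux discussed above.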
However, when one wants to consider larger $p_T$ and $x_F < 0.9$, at least a third contribution is required. That is the reason why we introduce the reggeon contribution. The reggeon contribution ($\rm I\!R \rm I\!R \rm I\!P$) is determined by \begin{equation} \left (\frac{d^2\sigma}{dtd\xi}\right )_{\rm I\!R \rm I\!R \rm I\!P} = f_{\rm I\!R}(\xi,t) \times \sigma_{\rm I\!R p}(s\xi) \end{equation} with \begin{equation} f_{\rm I\!R}(\xi,t) = \frac{{\beta}_{0{\rm I\!R}}^{2} } {16\pi} e^{2b_{\rm I\!R} t}\ \xi^{1-2{\alpha}_{{\rm I\!R}}(t)}, \label{mult8} \end{equation} and \begin{equation} \sigma_{\rm I\!R p}(s\xi) = {\beta}_{0{\rm I\!R}}\ g_{{\rm I\!R} }(s\xi )^{\epsilon}. \label{sigreg} \end{equation} In this case, the trajectory is assumed to be $\alpha_{{\rm I\!R}}(t)=0.5+t$ while the constants $\beta_{{\rm I\!R}}\equiv ({\beta}_{0{\rm I\!R} }^{3}\ g_{\rm I\!R})$ and $b_{{\rm I\!R}}$ remain to be determined from data. Thus, with the expressions and parameters given above, the $\rm I\!P \rm I\!P \rm I\!P$ and $\pi \pi \rm I\!P$ contributions are completely specified; only the $\rm I\!R \rm I\!R \rm I\!P$ contribution remains to have the final parameters determined. \centerline{\bf B. Central Region} In order to describe the leading particle spectrum in the central region, we use the Double Reggeon Model \cite{collins} that gives the invariant cross section as \begin{equation} E\frac{d^3\sigma}{d{\bf p}^3}=\sum_{i,j}\ \gamma_{ij}(m_{T}^{2}) \ \left|\frac{t}{s_0}\right |^{\alpha_i(0)-1}\ \left | \frac{u}{s_0}\right |^{\alpha_j(0)-1} \label{mult9} \end{equation} where $m_{T}=({p_{T}^{2}+m^2_p})^{1/2}$ is the transversal mass, and $u=-m_T\sqrt{s}\ e^{-y}$ and $t=-m_T\sqrt{s}\ e^y$ are the Mandelstam variables given in terms of the rapidity $y=ln \frac{(E+p_L)}{m_T}$. Function $\gamma_{ij}(m_{T}^{2})$ corresponds to the product of the three vertices of the diagrams depicted in Fig.3. 
These diagrams represent the contributions taken into account in the present analysis: $\rm I\!P \rm I\!P$, $\rm I\!P \rm I\!R + \rm I\!R \rm I\!P$, and $\rm I\!R \rm I\!R$ (pion contributions are not considered in this case because they are totally covered by the others). We assume for the coupling function $\gamma_{ij} (m_{T}^{2})$ a simple Gaussian form, \begin{equation} \gamma_{i j}(m_{T}^{2})=\Gamma_{i j} \ e^{-a_{i j}m_{T}^{2}} \end{equation} where $\Gamma_{i j}$ is a constant that already embodies the product of the couplings belonging to the triple and quartic vertices. With these definitions, the invariant cross sections for the three contributions become \begin{equation} \left (E\frac{d^3\sigma}{d{\bf p}^3} \right )_ {\rm I\!P \rm I\!P} = \Gamma_{\rm I\!P \rm I\!P} \ e^{-a_{\rm I\!P \rm I\!P}m_{T}^{2}}\ (m_T\ \sqrt{s})^{2\epsilon}, \label{mult10} \end{equation} \begin{eqnarray} \left (E\frac{d^3\sigma}{d{\bf p}^3} \right )_ {\rm I\!P {\rm I\!R} +{\rm I\!R} \rm I\!P}=2\ \Gamma_{\rm I\!P {\rm I\!R}}\ e^{-a_{\rm I\!P {\rm I\!R} }m_{T}^{2}} \ (m_T\ \sqrt{s})^ {\epsilon+\alpha_{{\rm I\!R} }(0)-1} \ \cosh[(1+\epsilon-\alpha_{\rm I\!R} (0))y], \label{mult11} \end{eqnarray} and \begin{equation} \left (E\frac{d^3\sigma}{d{\bf p}^3} \right )_ {{\rm I\!R} {\rm I\!R} }= \Gamma_{{\rm I\!R} {\rm I\!R}}\ e^{-a_{{\rm I\!R} {\rm I\!R} } m_{T}^{2}}\ (m_T\ \sqrt{s})^{2(\alpha_ {\rm I\!R} (0)-1)}. \label{mult12} \end{equation} In the above expressions again $\alpha_{\rm I\!R}(0)=0.5$ and $\epsilon=0.104$ \cite{covolan2}. Unlike the fragmentation region, where almost all parameters are already established, in this region almost all of them (except for the intercepts just mentioned) must be fixed from data.
The expressions given above could be enriched by detailing the reggeon exchange in terms of $f$, $\rho$, $\omega$, $a_2$, and taking into account all crossed terms, but in fact we are pursuing here a minimal description in which only the dominant and effective contributions are considered. We shall see below that these contributions are enough to provide a good description of the available data. \section{\bf Description of experimental data} Experimental data on the leading particle spectrum are very scarce. A compilation for $p p \rightarrow p X$ is shown in Fig.1 where data from three experiments \cite{basile,aguilar,brenner} are put together (the curve and the insert in this figure should be ignored for the moment). As can be seen, a pretty flat spectrum is exhibited, except for $x_F \approx 1$ where the typical diffractive peak appears.\footnote{This peak is absent from the Aguilar-Benitez {\it et al.} data due to trigger inefficiency for $x_F > 0.75$ in this particular experiment \cite{aguilar}.} The problem that arises when one tries to describe the $p p \rightarrow p X$ reaction in the whole phase space is that the available data are not enough to determine unambiguously each one of the contributions outlined above. One may have noted in the previous section that we have summarized all secondary reggeon exchanges (except for the pion) in a single contribution denoted by $\rm I\!R$, and the reason is the following. When one analyzes, for instance, total cross section data (like in \cite{covolan2}), it is possible to establish (to a certain extent) the relative amount of the different contributions. Actually, this is reinforced by the changing shape exhibited by the data in different regions. That is not the case here because out of the diffractive region the spectrum is pretty flat, and that makes it difficult to discriminate the regions where the different exchange processes contribute the most.
Thus, in order to establish how the expressions outlined above are summed up to compose the observed spectrum, we have to follow a particular strategy. Since our intention was obtaining an acceptable description of $p p \rightarrow p X$ data in the whole phase space, we did not use in our fitting procedure the data shown in Fig.1, which represent only the $x_F$-dependence. Instead, we have set those data apart to be used only at the end to check our final results which, in fact, were obtained with distributions given in terms of both $x_F$ and $p_T$ dependences. Our procedures to determine the contributions at the central and at the fragmentation regions are quite different. The main problem is that these regions overlap each other and thus it is practically impossible to separate them (or establish clear limits). To overcome this difficulty we assumed that, except for normalization effects, the $x_F$ and $p_T$ dependences of the proton produced in the central region through the reaction $p p \rightarrow p X$ are the same as for the antiproton produced in $p p \rightarrow {\bar p} X$. This assumption was implemented by fitting simultaneously the data shown in Figs. 4 and 5 \cite{dados1,dados2} through the expressions \begin{eqnarray} \left (E\frac{d^3\sigma}{d{\bf p}^3} \right )_{pp\rightarrow \bar{p}X}^{central}= \left (E\frac{d^3\sigma}{d{\bf p}^3} \right )_ {\rm I\!P \rm I\!P} +\left (E\frac{d^3\sigma}{d{\bf p}^3} \right )_ {\rm I\!P {\rm I\!R} +{\rm I\!R} \rm I\!P} +\left (E\frac{d^3\sigma}{d{\bf p}^3} \right )_ {{\rm I\!R} {\rm I\!R} } \label{mult12b} \end{eqnarray} and \begin{equation} \left (E\frac{d^3\sigma}{d{\bf p}^3} \right )_ {pp\rightarrow pX}^{central}=\lambda(s)\ \left (E\frac{d^3\sigma}{d{\bf p}^3} \right )_ {pp\rightarrow \bar{p}X}^{central}. \label{mult13} \end{equation} The idea is that the data of Fig.4 provide the information on the $x_F$ and $p_T$ dependences through Eqs.
(\ref{mult10})-(\ref{mult12b}) and the relation $x_F=2 m_T \sinh(y)/{\sqrt s}$, while the connection between $p p \rightarrow {\bar p} X$ and $p p \rightarrow p X$ is established by fitting the data of Fig.5 through the function $\lambda(s)$ of Eq.(\ref{mult13}). The parameters $\Gamma_{ij}$ and $a_{ij}$ of this fit are given in Table \ref{tabmul1} while $\lambda(s)$ is parametrized as \ $\lambda(s)=1.0+11.0\ s^{-0.3}$. The agreement with data of Figs.4 and 5 is not perfect, but that is because we are simplifying the description by considering only a few contributions, the dominant ones. As stated before, this is enough for the purposes of the present analysis. Now we are able to obtain the total description by adding up central and fragmentation region contributions. As explained before, the contributions dominant at the fragmentation region, Eqs. (\ref{mult6})-(\ref{sigreg}), are almost completely determined. The parameters $\beta_{{\rm I\!R}}$ and $b_{{\rm I\!R}}$ referring to the $\rm I\!R \rm I\!R \rm I\!P$ contribution are established by fitting the data of Fig.6 (from \cite{brenner}). This is done by using the expression \begin{eqnarray} \left (E\frac{d^3\sigma}{d{\bf p}^3}\right )_ {pp\rightarrow pX}^{total}= \left (E\frac{d^3\sigma}{d{\bf p}^3}\right )_ {\rm I\!P \rm I\!P \rm I\!P} +\left (E\frac{d^3\sigma}{d{\bf p}^3}\right )_ {\pi \pi \rm I\!P} \ +\left (E\frac{d^3\sigma}{d{\bf p}^3}\right )_ {\rm I\!R \rm I\!R \rm I\!P} +\left (E\frac{d^3\sigma}{d{\bf p}^3} \right )_ {pp\rightarrow pX}^{central} \label{mult14} \end{eqnarray} where the last term refers to Eq.(\ref{mult13}) with the parameters given in Table \ref{tabmul1}. From this final fit, the remaining parameters turn out to be $\beta_{{\rm I\!R} }=2465.7\ mb\ GeV^{-2}$ and $b_{\rm I\!R} =0.1\ GeV^{-2}$. Fig.7 offers a view of how the different contributions are composed to form the final result and how this picture evolves with $p_T$.
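As a small sanity check (ours, not part of the original fit), the normalization $\lambda(s)$ parametrized above can be evaluated at the energy of the data in Fig.1, $p_{lab}=400\ GeV/c$, i.e. $s \approx 2 m_p p_{lab} \approx 750\ GeV^2$:

```python
def lam(s):
    """Fitted normalization lambda(s) = 1.0 + 11.0 * s**(-0.3), with s in GeV^2."""
    return 1.0 + 11.0 * s**(-0.3)

s_fig1 = 2.0 * 0.938 * 400.0   # s for p_lab = 400 GeV/c on a proton target at rest
ratio = lam(s_fig1)            # ~2.5: leading p yield well above the pbar yield here
lam_high = lam(1.0e6)          # tends to 1 as s grows: the two spectra converge
```

So at this energy the leading proton spectrum in the central region sits about a factor 2.5 above the antiproton one, with the factor slowly approaching unity at asymptotic energies.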
The different contributions of the invariant cross section in both regions integrated over $p_T$ produce the results of ${d\sigma}/{dx_F}$ for both reactions exhibited in Fig.1 (solid curves) for $p_{lab}= 400\ GeV/c$. We remind the reader that these data were not used in the fit, but are used now to check the reliability of the whole procedure. From this figure it is possible to see that the final description obtained for the leading proton spectrum is quite reasonable. \section{Connection between ${\langle n_{ch} \rangle}_{e^+e^-}$ and ${\langle n_{ch} \rangle}_{pp}$} The results obtained above specify completely the behavior of the leading particle spectrum and allow us to calculate $\langle n_0 \rangle$ and $\langle k_p\rangle$ as given by (\ref{n0}) and (\ref{kp}). In Fig.8, we show the energy dependence of these quantities as obtained in the present analysis (solid curves). In the same figure, the average inelasticity predicted by the Interacting Gluon Model (IGM) \cite{igm} is also shown (dot-dashed curve) for comparison. The average inelasticity obtained from the present analysis is very slowly increasing with energy, close to the behavior predicted by the Minijet Model \cite{gaisser}. With these results we can come back to our original intent, which is to check the hypothesis of universal behavior of the multiplicity that is specified by Eqs.(\ref{mult1}) and (\ref{mult2}). In order to do that, we first establish a parametrization for $N(\sqrt{s})$ through \begin{equation} N(\sqrt{s})=a_1+a_2\ \ln(\frac{s}{s_{0}})+a_3\ \ln^{2}(\frac{s}{s_{0}}) \label{mult15} \end{equation} with $s_{0}=1\ GeV^2$. However, before performing the fit to experimental data, an additional effect has to be considered. This is because, besides the charged particles produced at the primary vertex, ${\langle n_{ch} \rangle}_{e^+e^-}$ data also include decay products of $K^0_s \rightarrow \pi^+ \pi^-$, $\Lambda \rightarrow p \pi^-$, and ${\bar\Lambda} \rightarrow {\bar p} \pi^+$.
Following \cite{ref1}, we take this contamination into account by computing the ratio $R={\langle n_{ch} \rangle}_{K^0_s,\Lambda, {\bar\Lambda}}/{{\langle n_{ch} \rangle}_{e^+e^-}}$ and by redefining (\ref{mult1}) as \begin{equation} {\langle n_{ch} \rangle}_{e^+e^-}=\frac{N(\sqrt{s})}{1-R}, \label{multc} \end{equation} with $R=0.097\pm 0.003$. This value was taken from \cite{ref1} and, besides the references quoted therein, it is in good agreement with experimental data from ref.\cite{check}. No energy dependence for $R$ can be inferred from these data. The fit using (\ref{mult15}) and (\ref{multc}) gives $a_1 = 2.392$, $a_2=0.024$, and $a_3=0.193$. In Fig.9, we show the above parametrization describing $\langle n_{ch}\rangle _{e^+e^-}$ data from references quoted in \cite{ee} and the calculated curve for $\langle n_{ch} \rangle _{pp}$ in comparison with data from \cite{pp}. The agreement with these data allows us to conclude that our premise of a universal behavior of ${\langle n_{ch} \rangle}_{e^+e^-}$ and ${\langle n_{ch} \rangle}_{pp}$ is confirmed. Of course, this conclusion is restricted to the energy dependence of $\langle n_0 \rangle$ and $\langle k_p\rangle$ shown in Fig.8. The solid curve of the insert in Fig.9 shows what happens when the IGM average inelasticity is used for the same purpose. One could argue that this last result is conditioned by the use of $\langle n_0 \rangle$ obtained in the present analysis, which increases with energy. However, we note that an increase in $\langle n_0 \rangle$ works against an increase in $\langle k_p \rangle$, since these are competing effects. Now a comment on the He analysis \cite{he}, where the relation \begin{equation} n_{ch}^{e^+ e^-} ({\sqrt {s_{e^+ e^-}}}) = n_{ch}^{pp} (k({\sqrt {s_{pp}}}){\sqrt {s_{pp}}}) \label{eq_he} \end{equation} is employed.
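Before continuing, the parametrization fitted above, Eq.(\ref{mult15}) together with the contamination correction of Eq.(\ref{multc}), can be evaluated numerically; the Z-pole energy below is our illustrative choice:

```python
import math

a1, a2, a3 = 2.392, 0.024, 0.193   # fitted coefficients of N(sqrt(s)) quoted in the text
R = 0.097                          # decay-product contamination ratio

def n_ch_ee(sqrt_s, s0=1.0):
    """Charged multiplicity in e+e-: N(sqrt(s)) / (1 - R), with sqrt(s) in GeV, s0 in GeV^2."""
    L = math.log(sqrt_s**2 / s0)
    return (a1 + a2 * L + a3 * L**2) / (1.0 - R)

n_z = n_ch_ee(91.2)   # at the Z pole; gives roughly 20 charged particles
```

The value of roughly 20 at $\sqrt{s}=91.2\ GeV$ is close to the measured Z-pole charged multiplicity of about 21, as expected for a fit to the $e^+e^-$ data.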
After fitting $n_{ch}^{e^+ e^-}$ and $n_{ch}^{pp}$ independently, He imposes that relation (\ref{eq_he}) holds and extracts the inelasticity $k$ from this assumption. This is similar to what we have done, but we think that the result of decreasing inelasticity and the agreement with IGM obtained in such an analysis comes from the fact that neither the leading particle multiplicity ($n_0$) nor the effect of decay products ($R$) is considered, and we see no reason to ignore such effects. A surprising outcome of the present analysis is shown in Fig.10 (a) where the normalized cross section $1/{\sigma_{incl}}\ d\sigma/dx_F$ is calculated up to the LHC energy. It is shown that, if the present description holds up to such high energies, Feynman scaling is approximately observed in the intermediate fragmentation region, $0.2 < x_F < 0.8$, but is violated in opposite ways at the central and diffractive regions. Fig. 10 (b) shows the same results but on a scale that makes the scaling violation in the central region more evident. This result seems to indicate that the increase of production activity in the central region occurs at the expense of a suppression of the diffractive processes. However, this is just a speculative observation that should be investigated more thoroughly. \section{Conclusions} We have presented in this paper a description of the inclusive reaction $p p \rightarrow p X$ in the whole phase space within the Regge-Mueller formalism, modified by the renormalization of the diffractive cross section. The average multiplicity and the average inelasticity were obtained from the leading proton spectrum and both of them turned out to be increasing functions of energy, in agreement with \cite{nos,inc} and particularly with \cite{gaisser}.
The energy dependence of these quantities is such that the charged particle multiplicities ${\langle n_{ch} \rangle}_{e^+e^-}$ and ${\langle n_{ch} \rangle}_{pp}$ can be accommodated very well by a universal function once an appropriate relation is used. An additional result is that the normalized leading proton spectrum approximately observes Feynman scaling for intermediate $x_F$, whereas such scaling is violated at the central and diffractive regions. \section*{Acknowledgements} We are grateful to J. Montanha for valuable discussions and suggestions. We would also like to thank the Brazilian governmental agencies CNPq and FAPESP for financial support.
\section{INTRODUCTION} It has long been recognized that highly luminous ($L_X\sim10^{36-37}$~erg~s$^{-1}$) X-ray sources in the cores of globular clusters are grossly overrepresented with respect to the general galactic population (Katz 1975; Clark 1975): clusters contain $10^{-4}$ of the Galaxy's mass, but $10^{-1}$ of the low-mass X-ray binary (LMXB) sources. A variety of lines of evidence tell us that these objects are neutron stars in very close binary systems, but the precise mechanisms which enhance their formation in clusters, and protect them from disruption thereafter, are obscure. Essentially all of the bright cluster sources are also X-ray bursters, frequently emitting $L_X\sim10^4~L_\odot$ in just a few seconds. Close binaries also have a profound effect on cluster dynamics: just a few such objects in a cluster have a store of orbital kinetic energy which can equal or exceed the orbital energy of all $10^5$ single cluster stars. Thus the study of these rare and odd objects also has important macroscopic implications for the dynamical evolution of clusters (Elson et al. 1987, Hut et al.~1992). Considerable recent progress in the understanding of the intense bursting X-ray sources in globular cluster cores is in large part due to {\it Hubble Space Telescope} ({\it HST}) identifications and follow-on studies of optical/UV counterparts, and to the realization that at least two of the cluster sources are exotic, ultra-short period double-degenerate binary systems: $P=11$ min for X1820--303 in NGC\,6624 (Stella et al. 1987; Anderson et al. 1997) and $P=21$ min for X1850--087 in NGC\,6712 (Homer et al.~1996). The optical/UV studies with {\it HST} have in one sense proven highly successful: a plausible optical counterpart has been identified and/or studied in detail in each of the clusters carefully scrutinized thus far. 
However, the diversity in properties of the six counterparts now identified is enormous, with optical luminosities ranging from $M_B$=6 to $M_B$=1, and confirmed binary orbital periods ranging from 11~min to 17~hr. The only optical counterpart candidate thus far for X1832--330 in NGC\,6652 was advanced by Deutsch et al. (1998b; hereafter Paper I). The object, denoted Star 49, exhibits a UV excess in the {\it HST} data similar to other known LMXB optical counterparts, and similar absolute magnitude to the optical counterpart of the LMXB in NGC\,1851. However, the region surveyed in Paper I does not completely cover the {\it ROSAT} X-ray error circle derived in that work, and the images are not very deep. Furthermore, the position of Star 49 is discrepant at the $2.3\sigma$ level with the X-ray coordinates. For these reasons, Paper I suggests that while Star~49 is the best candidate for the optical counterpart, its identification remains tentative. \section{ANALYSIS} Since the initial search for the optical counterpart and discovery of Star 49 in Paper I, additional WFPC2 observations have become available in the {\it Hubble Data Archive}. Here we discuss three orbits of F555W (V) and F814W (I) imaging data obtained on 1997 September 5, as well as one orbit of F555W, F439W (B), and F218W imaging data obtained on 1995 September 13. In the former observation, seventeen $\sim$20~s F555W and F814W exposures were taken on the first orbit, twelve 160~s F555W exposures on the second orbit, and twelve 160~s F814W exposures on the third. For the latter observation, fine lock was not achieved and the stellar images are elongated, the cluster is miscentered on the CCDs, and the F218W exposures failed completely. The retake data for this failed observation were successful, and are discussed in Paper I, but did not include the proposed optical counterpart. 
The early, poor-quality data, however, do include Star 49, are usable, and will be briefly discussed here as they were overlooked in Paper~I. All these data were acquired during unrelated programs to study the cluster NGC\,6652 itself. Despite the suboptimal sampling of the WFPC2 {\it Wide Field} CCDs, on which the optical counterpart falls in these observations, the cluster is sufficiently sparse and Star 49 sufficiently unblended that aperture photometry is entirely adequate to measure magnitudes for this object and a set of nearby comparison stars. Aperture corrections are taken from Table 2(a) in Holtzman et al. (1995b). The photometric measurements have not been corrected for geometric distortions, nor is any correction for charge transfer efficiency losses (Holtzman et al. 1995b) applied; for most of the images, these effects should contribute errors of only a few percent. We use the photometric zero points for the STMAG system from Table 9 ($Z_{STMAG}$) in Holtzman et al. (1995a). Systematic errors for all magnitudes due to uncertainties in detector performance and absolute calibration are $\sim$ 5\%. Five nearby reference stars that were also photometered show no variability to the limit of the derived uncertainties; three of the five are of comparable brightness to Star 49. For Star 49 itself, variability is suggested in the 1995 epoch observations, and large-amplitude variability is clearly evident in the 1997 epoch observations. This large-amplitude variability, when coupled with the UV excess demonstrated in Paper I, lends considerable confidence that this object is the correct identification of the optical counterpart to X1832--330. From the three orbits of F555W and F814W observations, we find $<m_{555}>=19.48$ and $<m_{814}>=19.90$. To create a single light curve of all the data, we subtract 0.4 mag from $m_{814}$ to create approximately filter-independent magnitudes.
This combined light curve is searched for periodic components using algorithms described in Horne \& Baliunas (1986). In Fig.~1a we show the Fourier transform with the CLEAN algorithm (Roberts et al. 1987) applied to remove the effects of the window function. We find the strong peak seen at $43.7\pm0.7$~min to have 99.5\% significance, based on the original (i.e. prior to CLEANing) periodogram, using methods in Horne \& Baliunas (1986). In Fig.~1b we show the entire light curve ($m_{555}$, $m_{814}-0.4$) for the three-orbit observation. Uncertainty bars are provided for each datapoint, although they are sometimes smaller than the symbols themselves. A non-linear least squares fitting algorithm is used to determine the best-fit sinusoid, which is overplotted on the light curve. The result is a best-fit period $43.6 \pm 0.6$~min, semi-amplitude $0.30 \pm 0.05$~mag, and mean magnitude $19.49 \pm 0.03$. The sinusoid does describe the broad trends in the data quite well, but clearly a large amount of aperiodic flickering is also evident. In Fig.~1c we show the light curve averaged into 10 phase bins. Photometric uncertainties are smaller than the symbols. A few points deviate significantly from the sinusoid fit, most likely due to the strong flickering and small amount of data. During 1998 November 28--30, we obtained $\sim35$~ks of X-ray observations of X1832--330 with the {\it Rossi X-ray Timing Explorer}. The data are processed through the standard {\it Ftools} package to obtain a calibrated light curve. Two Type I X-ray burst events are evident, confirming the bursting nature of this source first reported by in 't Zand et al. (1998) with {\it BeppoSAX} observations. After background subtraction, we measure a persistent countrate $\sim100$ s$^{-1}$, which is $\sim6$~mCrab, similar to fluxes reported previously for this object.
A search of the background-subtracted light curve reveals no significant periodicities, except for some power at half the {\it RXTE} orbital period, apparently induced by the calculated background model. In particular, there does not appear to be any significant power at the 43.6~min optical period. The rms scatter in the X-ray light curve is consistent with Poisson noise; we find no evidence for flickering, which might be expected based on our optical observations. However, as the X-ray and optical observations were made over a year apart, no firm conclusions can be drawn from the lack of X-ray flickering. A further analysis of the light curve, spectra, and bursts in these X-ray data will be discussed elsewhere. Mukai \& Smale (2000) present an X-ray analysis of X1832--330 from a 1996 {\it ASCA} observation. They also find no periodic X-ray modulation, but they do provide evidence for X-ray variability on a timescale similar to that of the flickering seen in our optical light curve. \section{DISCUSSION} The peak in the periodogram of the optical data has 99.5\% significance, indicating that it is quite improbable that uncorrelated, Gaussian noise would randomly generate such a strong periodic signal. However, the evident scatter in the photometry is not due to measurement uncertainties, but rather to an inherent flickering in the source, and this behavior is likely to produce significantly correlated ``noise''. Thus the formal significance calculation for the periodicity may overestimate the actual confidence. Our derived period is close to, but statistically different from, half the {\it HST} orbital period. Two further tests increase our confidence that this behavior is not an artifact of the {\it HST} orbit. As noted in \S 2.1, several stars near to and of similar magnitude to Star~49 have been measured from the same data, and show no variability at this or other periods.
We have also randomized the association between observation times and magnitudes for Star~49 and rereduced the data. Although periods due to incomplete removal of the window function should then remain, as the observing times are identical in these randomized data and in the original observations, the resulting periodograms show no significant power at 43~min. We find that only a fraction $\sim5\times10^{-4}$ of the 10000 trials shows a peak at any frequency as high as the 43~min one. Although the marked variability we report here adds confidence to the identification of Star~49 with the X-ray source, the mediocre agreement of the X-ray position with the object still leaves some uncertainty. This issue will almost surely be settled by a scheduled observation by the {\it Chandra X-ray Observatory}, which should yield a highly accurate position. However, if we accept that our observed optical variations in Star~49 are indeed periodic, there are few alternatives to identifying this object as an LMXB, given its measured characteristics, irrespective of the issues of the X-ray position. The most extreme SX~Phe stars, for example, have periods less than 1~hr, but do not display the marked flickering we observe, so stellar pulsation seems implausible. If the period is instead orbital, the flickering implies a mass-transfer system. However, no classical cataclysmic variables (CVs) are known with periods less than 1~hr, and although CVs share the colors and flickering of Star~49, they are typically 3--4~mag less luminous than our object in any case. The He-rich AM~CVn stars have the appropriate colors, flickering and period range, but are thought to have $M_V\sim10$ (Warner 1995), so would be $\sim10^2$ times fainter than Star~49. Thus, given the measured period, luminosity, colors, and flickering of this object, one would likely conclude it is an LMXB even without knowledge of the fact that there is indeed a bright X-ray source observed in the region.
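The randomization test described above can be sketched with simulated data; everything below (cadence, noise level, and the use of a classical periodogram rather than the CLEANed transform) is our own illustrative choice, not the actual photometry:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulated photometry: three HST orbits (~45 min visibility per 96 min orbit),
# a P = 43.6 min sinusoid of semi-amplitude 0.30 mag plus Gaussian noise
t = np.concatenate([np.arange(0.0, 45.0, 3.0) + 96.0 * k for k in range(3)])
mag = 19.49 + 0.30 * np.sin(2.0 * np.pi * t / 43.6) + rng.normal(0.0, 0.05, t.size)

def power(times, y, periods):
    """Classical periodogram |sum y_k exp(-2 pi i t_k / P)|^2 on a period grid."""
    y = y - y.mean()
    return np.array([abs(np.sum(y * np.exp(-2j * np.pi * times / P)))**2
                     for P in periods])

periods = np.linspace(35.0, 55.0, 400)   # search a window around the candidate period
p_obs = power(t, mag, periods)
best_period = periods[np.argmax(p_obs)]

# shuffle test: randomize the time-magnitude association and recompute the peak
n_hit = sum(power(t, rng.permutation(mag), periods).max() >= p_obs.max()
            for _ in range(200))
```

Shuffling destroys the phase coherence of the sinusoid while preserving the window function, so the shuffled peak essentially never reaches the observed one, which is the logic behind the quoted $\sim5\times10^{-4}$ fraction.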
\bigskip We summarize a variety of parameters for all of the globular cluster LMXB sources and their host clusters in Table 1. Cluster data are primarily compiled from Djorgovski (1993) and other references in the same volume. Column 8 lists ${\rm F_X}$ values, which we derive by taking a mean of the {\it RXTE} ASM flux measurements since 1996, and applying a simple correction to convert approximately to $\mu$Jy. In column 9, we apply the distance correction and give an approximate X-ray luminosity in $10^{36}$ erg s$^{-1}$ for the 2--10 keV ASM band. The absolute calibration is only an estimate and should be treated with caution, but the relative values are likely reliable. Finally, in column 10 we list $\xi=B_0+2.5{\rm\,log\,F_X(\mu Jy})$, the parameter used by van Paradijs \& McClintock (1995) to characterize the ratio of X-ray to optical flux. That the optical luminosity should depend upon the X-ray luminosity and the size of the accretion disk has been quantified by van Paradijs \& McClintock (1994; hereafter vPM94). They define the parameter $\Sigma=(L_X/L_{\rm Edd})^{1/2} (P/1\,{\rm hr})^{2/3}$ and find a strong correlation, such that M$_V=1.57(\pm0.24)-2.27(\pm0.32)\log \Sigma$. In Fig.~\ref{periodrel} we show a figure similar to that in vPM94, but we use M$_B$ instead of M$_V$, which is likely to be reasonable as vPM94 find an average $(B-V)_0=-0.09\pm0.14$ for field LMXBs. The data for globular cluster LMXBs are derived here and from Deutsch et al. (1998a), and are plotted with large diamonds, approximately indicating the entire known range of optical and X-ray luminosity. The solid line indicates the best fit to all LMXBs by vPM94. The dotted lines denote the apparent full range of possible values (using vPM94's best-fit slope). For NGC\,1851, no orbital period is known, but the optical and X-ray luminosities are measured.
We therefore draw a dashed line which is likely to encompass the probable range of orbital periods, $0.2-0.85$ hr, where the lower bound is taken to be the shortest orbital period known and the upper value is the maximum period implied by our dotted line bounds. We thus predict that the orbital period of X0512--401 will prove to be less than 1 hr. Based on model accretion disks, Deutsch et al. (2000) also infer that X0512--401 must have an orbital period less than one hour. As listed in Table 1, an eclipse period of 12.4~hr was recently reported by in 't Zand et al. (2000) for the LMXB in the globular cluster Terzan 6. Using the behavior exhibited by the GC LMXB sources in Fig.~2, we can now infer that the optical counterpart of that source (for which there has not yet been a search) will have $M_B\sim2\pm1$. The high inclination ($i>74^\circ$) inferred from the eclipse behavior by in 't Zand et al. (2000) suggests that the luminosity may well be at the fainter end of the above range, and thus similar to the optical counterpart in NGC\,6441. However, the high reddening to Terzan~6 will make discovery of the optical counterpart by conventional means ({\it HST} observations of UV-excess) extremely difficult (Deutsch et al. 1998b). A search for eclipses with infrared imaging of the X-ray error circle may be the easiest method of isolating the optical/IR counterpart of this source.
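For illustration, the vPM94 relation can be evaluated directly; the luminosity ratio below is a hypothetical value of our choosing, used only to exercise the formula:

```python
from math import log10

def sigma_vpm(lx_over_ledd, period_hr):
    """vPM94 parameter Sigma = (L_X/L_Edd)^(1/2) * (P/1 hr)^(2/3)."""
    return lx_over_ledd**0.5 * period_hr**(2.0 / 3.0)

def mv_vpm(lx_over_ledd, period_hr):
    """vPM94 best-fit relation M_V = 1.57 - 2.27 * log10(Sigma)."""
    return 1.57 - 2.27 * log10(sigma_vpm(lx_over_ledd, period_hr))

# hypothetical L_X/L_Edd = 0.01, with the 43.6 min period and the 12.4 hr Terzan 6 period
mv_short = mv_vpm(0.01, 43.6 / 60.0)   # about M_V = 4: a faint, compact disk
mv_terzan = mv_vpm(0.01, 12.4)         # about M_V = 2: a larger, brighter disk
```

The trend is the point here: at fixed luminosity ratio, a longer period (larger disk) gives a brighter counterpart, which is the behavior used to predict $M_B\sim2\pm1$ for Terzan~6.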
Regardless of whether or not the variability is periodic, the marked amplitude of the variations significantly strengthens the case for the identification of the object with the bursting X-ray source. Somewhat tempering this conclusion is the poor positional agreement of the object with the only extant X-ray data, although {\it Chandra X-ray Observatory} observations will almost surely settle this issue. The lack of similar X-ray variability is inconclusive. Star 49 is clearly unusual regardless of its association with the X-ray source, but the probability of two such unrelated objects falling within a few arcseconds of each other is presumably modest, so we favor the identification of the star and the X-ray burster, pending the {\it Chandra} data. We examine a homogeneous set of {\it HST} data on globular cluster X-ray source counterparts, including Star 49 in NGC\,6652, and find that they fit well with the correlation of optical luminosity, X-ray luminosity, and accretion disk size previously discussed by vPM94. Even if the 43~min period is not confirmed, the orbital period of X1832--330 must still be less than $\sim2$~hr if it is to follow the relation of this diagram. Using a somewhat different argument, Mukai \& Smale (2000) also infer that X1832--330 in NGC\,6652 is a short period system. The correlation in this diagram also strongly implies that the X-ray source in NGC\,1851 must have orbital period $P<1$~hr as well. A similar conclusion is reached by Deutsch et al. (2000) via an independent argument, through examination of the spectral energy distribution of that object. Thus four of the seven central globular cluster X-ray sources where orbital periods are constrained or known are inferred to be ultracompact, a fraction considerably in excess of that for field low mass X-ray binaries; only $\sim7$\% of field LMXBs with known periods have $P<1$~hr in the compilation of van Paradijs (1995). 
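The binomial probability of finding at least four ultracompact systems out of seven, if the field fraction of $\sim7$\% applied to clusters, can be computed directly (our check):

```python
from math import comb

p_field = 0.07    # fraction of field LMXBs with P < 1 hr (van Paradijs 1995 compilation)
n, k_min = 7, 4   # seven cluster systems, at least four ultracompact

prob = sum(comb(n, k) * p_field**k * (1.0 - p_field)**(n - k)
           for k in range(k_min, n + 1))
# prob is about 7e-4, dominated by the k = 4 term
```

The result reproduces the $7\times10^{-4}$ value quoted below.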
In fact, if ultracompact systems in GC LMXBs were as rare as in the field, then the binomial probability that at least four out of seven systems are by chance found to be ultracompact is only $7\times10^{-4}$. One can readily imagine multiple explanations for this significant overabundance of compact systems, although the unique dynamic environment of cluster cores is certainly a most tempting factor to invoke. It is not clear whether the same explanation applies even to all members of this small sample (the double-degenerate systems in NGC\,6624 and NGC\,6712 may be unique), or that observational selection may be at work. \acknowledgments {\it RXTE} ASM data products were provided by the ASM/RXTE teams at MIT and at the {\it RXTE} SOF and GOF at NASA's GSFC. Support for this work was provided by NASA through grants NAG5-7330 and NAG5-7932, as well as grant AR-07990.01 from the ST\,ScI, which is operated by AURA, Inc.
\section{Introduction} Dynamic computed tomography is a major part of CT imaging that studies the structure of dynamic objects in CT scans. State-of-the-art motion-based dynamic CT reconstruction techniques mostly consider the motion in the entire object volume (e.g., \cite{Zehni20, Nguyen22_SPIE}), while in real applications (e.g., lung tissue \cite{Soliman16}) only local regions are deformed. This concern was recently mentioned in \cite{Ruymbeek20}. Although several former reconstruction methods were designed to estimate the deformed regions (e.g., \cite{VanEyndhoven14, VanEyndhoven15, Kazantsev15}), those methods do not consider affine motion models. In this paper, we consider a dynamic CT model for which, in the object volume, there are local regions deformed by affine motion models, while the complementary regions that remain static during the entire acquisition scan. We then propose an iterative method that aims not only to accurately reconstruct the scanned object that contains these locally deformed regions, but also to identify them. Furthermore, the motion parameters corresponding to the deformation are estimated simultaneously with the reconstruction and region estimation. The contributions are summarized as follows: \begin{itemize} \item Mathematical formulation of the class of dynamic CT problems that consider affine motions, which model the deformation in local areas characterized by corresponding binary masks. \item Gradient method that aims to minimize an objective function that depends on the reconstruction, the motion parameters and the deformed regions, whose partial derivatives towards all of them are formulated analytically. \item The biconvexity of the objective function towards the reconstruction and the locally deformed regions that supports the convergence of the iterative schemes in the proposed gradient method. 
\end{itemize} \section{Proposed method} A dynamic CT image can be represented as a sequence of $n$ images $\bm{x}_1$, $\bm{x}_2$, ..., $\bm{x}_n$, each representing the object at a given point in time. The acquisition can be viewed as a finite collection of subscans, where the object is assumed to be static during each subscan. Here, a subscan refers to one or more consecutively acquired projections. This procedure can be mathematically modelled as $n$ systems of linear equations: \begin{equation}\label{eq:forward_model_subscan} \bm{W}_i \bm{x}_i = \bm{b}_i, \hbox{ for } i = 1,..., n, \end{equation} where $\bm{W}_i$ and $\bm{b}_i$ are the projection operator and the projection data corresponding to the $i^{th}$ subscan, respectively. These can be combined into a single forward model: \begin{equation}\label{eq:forward_model} \begin{bmatrix} \bm{W}_1 & 0 & 0 & 0 \\ 0 & \bm{W}_2 & 0 & 0 \\ 0 & 0 & \ddots & 0\\ 0 & 0 & 0 & \bm{W}_n \end{bmatrix} \begin{bmatrix} \bm{x}_1 \\ \bm{x}_2 \\ \vdots \\ \bm{x}_n \end{bmatrix} = \begin{bmatrix} \bm{b}_1 \\ \bm{b}_2 \\ \vdots \\ \bm{b}_n \end{bmatrix}. \end{equation} Let $\bm{\alpha}_i \in \left \{0, 1 \right \}^N$ be a binary mask, which encodes the local region of the unknown original image $\bm{x} \in \left[0, 1 \right]^N$ that appears deformed in the image $\bm{x}_i$. Assuming the deformation can be modelled by an affine motion model $M$ that depends on the motion parameters $\bm{p}_i \in \mathbb{R}^M$, the deformed object in the $i^{th}$ subscan is modelled as \begin{equation}\label{eq:deformed_image} \bm{x}_{i} = {{\widebar{\bm{\alpha}_i}} \circ \bm{x}} + M\left(\bm{p}_i \right) {\left(\bm{\alpha}_i \circ \bm{x} \right)}, \end{equation} where $\widebar{\bm{\alpha}_i} := \bm{1} - \bm{\alpha}_i$ and $\circ$ is the commutative Hadamard product. 
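As an aside, the composition in \eqref{eq:deformed_image} can be sketched in a few lines of NumPy. Here a cyclic translation (\texttt{np.roll}) stands in for the affine warp $M(\bm{p})$, which is itself a special case of an affine motion; all names are illustrative and not taken from our implementation:

```python
import numpy as np

def deform_image(x, alpha, shift):
    """Sketch of eq:deformed_image: x_i = (1 - alpha)*x + M(p)(alpha*x).

    A cyclic translation (np.roll) stands in for the affine warp M(p);
    the actual method uses a differentiable cubic image warp instead.
    """
    static = (1.0 - alpha) * x                        # conserved static part
    moving = np.roll(alpha * x, shift, axis=(0, 1))   # warped dynamic part
    return static + moving

rng = np.random.default_rng(0)
x = rng.random((8, 8))                 # toy 2D phantom
alpha = np.zeros((8, 8))
alpha[:4, :] = 1.0                     # binary region encoder: upper half
x_identity = deform_image(x, alpha, (0, 0))   # identity motion recovers x
```

With the identity motion, the static and dynamic parts recombine exactly into the original image, which is the sanity check implied by the model.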
In this model, the static part ${\widebar{\bm{\alpha}_i}} \circ \bm{x}$ of $\bm{x}$ remains conserved in the deformed object $\bm{x}_i$, while the dynamic part ${\bm{\alpha}_i} \circ \bm{x}$ appears distorted under the motion model $M$.\\ \\ Substituting \eqref{eq:deformed_image} into \eqref{eq:forward_model} for $i = 1, \ldots, n$, the forward model of the entire projection data becomes: \begin{equation} \begin{bmatrix} \bm{W}_1 & 0 & 0 & 0 \\ 0 & \bm{W}_2 & 0 & 0 \\ 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & \bm{W}_n \end{bmatrix} \begin{bmatrix} {\widebar{\bm{\alpha}_1}} \circ \bm{x} + M\left(\bm{p_1} \right) \left(\bm{\alpha _1} \circ \bm{x} \right)\\ {\widebar{\bm{\alpha}_2}} \circ \bm{x} + M\left(\bm{p_2} \right) \left(\bm{\alpha _2} \circ \bm{x} \right)\\ \vdots \\ {\widebar{\bm{\alpha}_n}} \circ \bm{x} + M\left(\bm{p_n} \right)\left(\bm{\alpha _n} \circ \bm{x} \right) \end{bmatrix} = \begin{bmatrix} \bm{b}_1 \\ \bm{b}_2 \\ \vdots \\ \bm{b}_n \end{bmatrix}. \end{equation} This can be concisely rewritten as a single system: \begin{equation}\label{eq:rMIRT_equation} \bm{W}\left\{ {\widebar{\bm{\alpha}}} [\circ] \bm{x} + \bm{M}\left(\bm{p}\right) \left(\bm{\alpha} [\circ] \bm{x}\right) \right\} = \bm{b}, \end{equation} where \begin{equation} \bm{W} = \begin{bmatrix} \bm{W}_1 & 0 & 0 & 0 \\ 0 & \bm{W}_2 & 0 & 0 \\ 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & \bm{W}_n \end{bmatrix}, \end{equation} \begin{equation} \bm{\alpha} = \begin{bmatrix} \bm{\alpha}_1 \\ \bm{\alpha}_2 \\ \vdots \\ \bm{\alpha}_n \end{bmatrix}, \bm{p} = \begin{bmatrix} \bm{p}_1 \\ \bm{p}_2 \\ \vdots \\ \bm{p}_n \end{bmatrix}, \bm{b} = \begin{bmatrix} \bm{b}_1 \\ \bm{b}_2 \\ \vdots \\ \bm{b}_n \end{bmatrix}, \end{equation} $[\circ]$ is a modified version of the penetrating face product \cite{Slyusar99} between the two column vectors $\bm{\alpha} \in \left\{0, 1\right \}^{nN}$ and $\bm{x} \in \left[0, 1 \right]^N$, defined by \begin{equation} \bm{\alpha} [\circ] \bm{x} = \begin{bmatrix} 
\left[\bm{\alpha}_1 \circ \bm{x} \right]^T, \left[\bm{\alpha}_2 \circ \bm{x} \right]^T, \hdots, \left[\bm{\alpha}_n \circ \bm{x} \right]^T \end{bmatrix}^T, \end{equation} and \begin{equation} \bm{M}(\bm{p}) = \begin{bmatrix} M(\bm{p}_1) & 0 & 0 & 0 \\ 0 & M(\bm{p}_2) & 0 & 0 \\ 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & M(\bm{p}_n) \end{bmatrix}. \end{equation} To solve equation \eqref{eq:rMIRT_equation}, we consider the following constrained optimization problem, a modified and extended version of \cite{Zehni20, Nguyen22_SPIE, VanEyndhoven12}: \begin{equation}\label{rMIRT_problem} \left[\bm{x}^*, \bm{\alpha}^*, \bm{p}^*\right] = \argmin_{\bm{x} \in \left [0, 1 \right]^N, \bm{\alpha} \in \left \{0, 1 \right \}^{nN}, \bm{p} \in \mathbb{R}^{nM}} f \left(\bm{x}, \bm{\alpha}, \bm{p} \right), \end{equation} where \begin{equation}\label{eq:rMIRT_objective_function} f \left(\bm{x}, \bm{\alpha}, \bm{p}\right) = \frac{1}{2}\left\|\bm{W}\left\{{\widebar{\bm{\alpha}}} [\circ] \bm{x} + \bm{M}\left(\bm{p}\right) \left(\bm{\alpha} [\circ] \bm{x}\right) \right\} - \bm{b}\right \|_2^2. \end{equation} Problem \eqref{rMIRT_problem} can be solved by the iterative schemes presented in Algorithm 1, in which the intermediate approximation of $\bm{\alpha}$ is projected onto the set $\mathcal{S} \equiv \left \{0, 1 \right \}^{nN}$ to obtain the intermediate deformed regions, after which the center of motion is updated. 
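Since $\mathcal{S} = \left\{0, 1\right\}^{nN}$ is separable across entries, the Euclidean projection $\text{Proj}_{\mathcal{S}}$ used in Algorithm 1 admits a closed form: each entry is rounded to the nearest of $0$ and $1$. A minimal sketch (the function name is illustrative, not from our implementation):

```python
import numpy as np

def proj_binary(alpha):
    """Euclidean projection onto S = {0, 1}^{nN}.

    The set is separable, so the projection rounds each entry to the
    nearest of 0 and 1 (ties broken towards 1, an arbitrary choice).
    """
    return (alpha >= 0.5).astype(alpha.dtype)

a = np.array([0.2, 0.5, 0.7, 1.3, -0.1])
p = proj_binary(a)   # -> [0., 1., 1., 1., 0.]
```

Entries outside $[0, 1]$, which can occur after an unconstrained gradient step, are also mapped to the nearest binary value.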
\begin{algorithm}[h!]\label{alg:algorithm_rMIRT} \DontPrintSemicolon \KwIn{Projection $\bm{b}$, projector $\bm{W}$, motion model $\bm{M}$, $\bm{p}^0 \equiv \text{motion parameters in the static case}$, $\bm{x}^0 \equiv \text{motion-uncompensated reconstruction}$, $\bm{\alpha}^0 \equiv \text{observed dynamic region encoder}$, number of iterations $n_{iter}$.} \KwOut{Reconstruction with region-based motion compensation, locally deformed regions, motion parameters.} \SetKwBlock{kwFor}{for}{end for} \kwFor($i = 0:n_{iter}-1$) { $\bm{x}^{i+1} = \bm{x}^{i} - \gamma_{\bm{x}}^i \nabla_{\bm{x}} f \left(\bm{x}^i, \bm{\alpha}^{i}, \bm{p}^i \right)$\\ $\bm{p}^{i+1} = \bm{p}^{i} - \gamma_{\bm{p}}^i \nabla_{\bm{p}} f \left(\bm{x}^i, \bm{\alpha}^{i}, \bm{p}^i \right)$\\ $\bm{\alpha}^{i+1} = \bm{\alpha}^{i} - \gamma_{\bm{\alpha}}^i \nabla_{\bm{\alpha}} f \left(\bm{x}^i, \bm{\alpha}^{i}, \bm{p}^i \right)$\\ Update the center of motion from $\text{Proj}_{\mathcal{S}}\left(\bm{\alpha}^{i+1}\right)$\\ } \caption{rMIRT} \end{algorithm} \noindent The gradient of the objective function is analytically given by $\nabla f = \left[ \left [ \nabla_{\bm{x}} f \right ]^T, \left [ \nabla_{\bm{\alpha}} f \right ]^T, \left [ \nabla_{\bm{p}} f \right ]^T \right ]^T$, with \begin{align}\nonumber \nabla_{\bm{x}} f =& \left\{ \left[ \left( \bm{M}(\bm{p}) - \bm{I} \right) \diag \left\{ \bm{\alpha} \right \} + \bm{I} \right] \underbrace{ \begin{bmatrix} I \\ I \\ \vdots \\ I \end{bmatrix}}_\text{$n$ blocks $I$} \right \}^T \\ & \times \bm{W}^T \bm{r}, \\ \nonumber \nabla_{\bm{\alpha}} f =& \left \{ \left[\bm{M}(\bm{p}) - \bm{I} \right] \underbrace{ \begin{bmatrix} \diag \left\{\bm{x} \right \} & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \diag \left\{\bm{x} \right \} \end{bmatrix}}_\text{$n$ blocks $\diag \left\{\bm{x} \right \}$} \right \}^T \\ & \times \bm{W}^T \bm{r}, \\ \nabla_{\bm{p}} f =& \left[\nabla \bm{M}(\bm{p})\left(\bm{\alpha} [\circ] \bm{x} \right)\right]^T\bm{W}^T\bm{r}, \mbox{ where} \end{align} 
\begin{equation} \diag \left \{ \bm{\alpha} \right \} := \begin{bmatrix} \diag \left \{ \bm{\alpha}_1 \right \} & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \diag \left \{ \bm{\alpha}_n \right \} \end{bmatrix}, \end{equation} \begin{equation} \bm{I} = \underbrace{ \begin{bmatrix} I & 0 & 0 & 0 \\ 0 & I & 0 & 0 \\ 0 & 0 & \ddots & 0 \\ 0 & 0 & 0 & I \end{bmatrix}}_\text{$n$ blocks $I$}, \end{equation} and $\bm{r}$ is the residual of the system \eqref{eq:rMIRT_equation}: \begin{equation}\label{rMIRT_residual} \bm{r} = \bm{W}\left\{ {\widebar{\bm{\alpha}}} [\circ] \bm{x} + \bm{M}\left(\bm{p}\right) \left(\bm{\alpha} [\circ] \bm{x}\right) \right\} - \bm{b}. \end{equation} The operators $\bm{M} \left( \bm{p} \right)$, $\bm{M} \left(\bm{p}\right)^T$ and $\nabla \bm{M}\left(\bm{p} \right)$ are all provided by a matrix-free, GPU-accelerated implementation of cubic image warping, its adjoint and its derivatives \cite{Renders21}, designed for continuous and differentiable affine motions. The operators $\bm{W}$ and $\bm{W}^T$ of the CT system are provided by the ASTRA Toolbox \cite{VanAarle15}. The objective function of the proposed method is non-convex with respect to the motion parameters $\bm{p}$. Nonetheless, the convergence of the iterative parameter estimation scheme was validated in \cite{Nguyen22_SPIE}. The convergence of the reconstruction and region encoder estimation schemes is supported by the following property, which establishes the biconvexity of the objective function with respect to the reconstruction $\bm{x}$ and the region encoder $\bm{\alpha}$. \begin{theorem} Assume the domain of $\bm{\alpha}$ in the objective function $f$ is extended to $\left[0, 1 \right]^{nN}$. Then $f$ is biconvex with respect to the reconstruction $\bm{x}$ and the region encoder $\bm{\alpha}$. 
\end{theorem} \begin{proof} The objective function \eqref{eq:rMIRT_objective_function} can be written as a convex quadratic form in the reconstruction variable $\bm{x}$ on the convex domain $\left[0, 1 \right]^N$ when $\bm{\alpha}$ and $\bm{p}$ are fixed: \begin{equation} \restr{f}{\bm{\alpha}, \bm{p}}(\bm{x}) = \frac{1}{2} \left \| \bm{W} \bm{P} \left(\bm{\alpha}, \bm{p}\right) \bm{x} - \bm{b} \right \|_2^2, \end{equation} with \begin{equation} \bm{P} \left(\bm{\alpha}, \bm{p}\right) = \left[\left( \bm{M}(\bm{p}) - \bm{I} \right) \diag \left\{ \bm{\alpha} \right \} + \bm{I} \right] \underbrace{ \begin{bmatrix} I \\ I \\ \vdots \\ I \end{bmatrix}}_\text{$n$ blocks $I$}. \end{equation} Similarly, on the extended convex domain $\left[0, 1 \right]^{nN}$ of $\bm{\alpha}$, when $\bm{x}$ and $\bm{p}$ are fixed, \begin{equation} \restr{f}{\bm{x}, \bm{p}}(\bm{\alpha}) = \frac{1}{2} \left \| \bm{W} \bm{Q} \left(\bm{x}, \bm{p} \right) \bm{\alpha} - \tilde{\bm{b}} \right \|_2^2, \end{equation} with \begin{equation} \bm{Q} \left(\bm{x}, \bm{p} \right) = \left[\bm{M}(\bm{p}) - \bm{I} \right] \underbrace{ \begin{bmatrix} \diag \left\{\bm{x} \right \} & 0 & 0 \\ 0 & \ddots & 0 \\ 0 & 0 & \diag \left\{\bm{x} \right \} \end{bmatrix}}_\text{$n$ blocks $\diag \left\{\bm{x} \right \}$} \end{equation} and $\tilde{\bm{b}} := \bm{b} - \bm{W} \left[\bm{x}^T, \hdots, \bm{x}^T \right]^T$ a term that is constant with respect to $\bm{\alpha}$. Both restrictions are linear least-squares objectives on convex domains, hence convex. Consequently, $f$ is biconvex with respect to $\bm{x}$ and $\bm{\alpha}$. 
\end{proof} \section{Experiment and results} \begin{figure*}[!ht] \begin{subfigure}{.24\linewidth} \includegraphics[width=\linewidth]{reconstruction_result/x_gt_1.png} \caption{\centering Ground truth \newline} \end{subfigure} \begin{subfigure}{.24\linewidth} \includegraphics[width=\linewidth]{reconstruction_result/x_bad_1.png} \caption{\centering without motion compensation \newline} \end{subfigure} \begin{subfigure}{.24\linewidth} \includegraphics[width=\linewidth]{reconstruction_result/x_MIRT_1.png} \caption{\centering without region-based motion compensation} \end{subfigure} \begin{subfigure}{.24\linewidth} \includegraphics[width=\linewidth]{reconstruction_result/x_corrected_1.png} \caption{\centering with region-based motion compensation} \end{subfigure} \caption{\centering x-z cross-section of the reconstructions of the bone scaffold.} \label{fig:recs} \end{figure*} We use a cylindrical bone scaffold of volume size $235 \times 280 \times 280$ voxels, reconstructed from a real scan, as the reference object. Projection data are simulated by generating $720$ uniformly sampled cone-beam projections spread over a full-rotation angular range. Gaussian noise with a standard deviation of 1\% of the peak gray value of the projection data is added to the sinogram. We assume the object region from the top down to the 50$^{th}$ horizontal cross-section slice to be static in all angular projections. The projections are captured at discrete angular time points, and the motion is simulated as continuous scaling of the deformed area in the y-z, x-z and x-y dimensions, with scaling factors ranging from 1 to 0.99, 0.99 and 1.25, respectively. The experiment uses 5 subscans. We use the mean squared error to evaluate the reconstruction quality. The initial guess of the deformed region is the upper part of the object, with bottom z-coordinate 55. 
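To make the simulated motion concrete, per-subscan scale factors can be generated as below. The linear interpolation over normalized subscan times is an assumption of this sketch, not necessarily the exact sampling used in the simulation:

```python
import numpy as np

# Per-subscan scale factors for the simulated continuous scaling
# (from 1 down/up to 0.99, 0.99 and 1.25 along the three dimensions
# over the scan). Linear interpolation is an illustrative assumption.
n_sub = 5
final = np.array([0.99, 0.99, 1.25])       # end-of-scan scale factors
t = np.linspace(0.0, 1.0, n_sub)           # normalized subscan times
scales = 1.0 + np.outer(t, final - 1.0)    # shape (n_sub, 3)
```

Each row of `scales` would parameterize the affine scaling applied to the deformed region during the corresponding subscan.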
At the $i^{th}$ iteration, the stepsizes $\gamma_{\bm{x}}^i$ and $\gamma_{\bm{p}}^i$ are chosen following the Barzilai-Borwein formula \cite{BB88}, and the stepsize $\gamma_{\bm{\alpha}}^i$ is chosen proportional to $1/i$. The z-coordinate of the center of motion is updated to the z-coordinate of the bottom non-zero voxel of the intermediate estimated region encoder, while the x- and y-coordinates are fixed at the center of the volume geometry. Convergence is achieved after around 15 iterations, with a computation time of approximately 10 seconds per iteration. The reconstruction results are shown in \reffig{fig:recs}, and the behavior of the mean squared error is given in \reffig{fig:MSEs}. The reconstruction produced by our method shows a clear improvement over both the reconstruction without motion compensation and the reconstruction without region-based motion compensation \cite{Nguyen22_SPIE}, in which the deformation is assumed to span the entire object volume. \begin{figure}[!ht] \includegraphics[width=\linewidth]{reconstruction_result/MSEs.png} \caption{\centering Mean squared error of the reconstructions as a function of the number of iterations.} \label{fig:MSEs} \end{figure} \section{Conclusion and future work} We have presented a reconstruction algorithm that combines accurate reconstruction, identification of locally affine-deformed regions and motion parameter estimation. Our method yields reconstructions that improve on both the reconstruction without motion compensation and the reconstruction that compensates for motion over the entire object volume \cite{Nguyen22_SPIE}. In future research, we aim to validate our method on real datasets and to quantitatively compare the reconstruction and motion estimation results with those of state-of-the-art methods. \section{Acknowledgement} This study is partially supported by the Research Foundation-Flanders (FWO) (SBO grant no. 
S007219N and PhD grant no. 1SA2920N). The authors would like to thank Prof. Martine Wevers and Dr. Jeroen Soete for sharing the dataset. \section{Compliance with ethical standards} This is a numerical simulation study for which no ethical approval was required. \bibliographystyle{IEEEbibetal}